A friend of mine once said, ‘The good thing about sculpting in clay is that you can stop at almost any point and the model can still be interesting’. A great observation, I thought. But I wondered whether it was true, or rather whether its implication, that the same can’t be said for models made on the computer, was true. Because if it is, the consequences are far reaching and explain something that has bothered me for a while: why is it that 3D modeling is not used by artists outside of the entertainment industry, as part of their work or for itself, in the way that painters use paint or sculptors use whatever materials they favour at the time? Pretty much every other medium, activity or object can be found in galleries around the world. Is there something inherent in 3D on the computer that prevents it from being adopted as a useful expressive medium in itself? Perhaps it’s the association with glossy Hollywood films that turns people off, or maybe it’s something a little deeper.
At the risk of ruining the elegant precision of my mate’s point, it’s worth trying to unwrap it a little before moving on. What I think he was saying was that clay modeling isn’t a process entirely defined by your goals; clay has real physical properties that can’t be ignored, and a model made with it benefits from those properties even when incomplete. Indeed, it’s often more interesting to see a clay model half finished than totally nailed. (For the purposes of clarity I’m talking here about clay modeling, but these observations are applicable to all natural media, from stone, to paint, to pencils.)
The reasons are interesting and perhaps have a lot to do with the fact that clay modeling not only proceeds through a dialogue with the material (an idea commonly used to describe the process of making) but, importantly, makes that dialogue visible. It can be seen in the form of tool marks, fingerprints, areas of incompleteness, or areas of redundancy where the clay sits in its untouched, lumpen rawness.
There is no digital equivalent to this, for while you can argue that effective digital modeling relies on a similar dialogue with edge loops, polygonal flows, surface continuities and discontinuities, the digital stuff does not explain itself or record the artist’s interaction in the way that clay does. Points, polygons, edges and curves carry with them no visible history, and in its absence the viewer is left with nothing more than a shape.
Why is this significant? One reason might be that modern taste enjoys the play between the raw material and the shape into which it has been or is being formed. It creates a satisfying, irresolvable tension (a bit like the famous duck/rabbit drawing that is both things at the same time): clay not clay, or stone not stone. We clearly enjoy it, and throughout the 20th century in particular artists have made much of the tease. From the Impressionists onwards, serious artists have been careful to declare their methods and materials, whatever other intentions they may have had, and it has struck a chord. So much so that we now search out the same qualities in work from earlier times, qualities which were almost certainly overlooked when the work was made. We love Michelangelo’s slaves because we see the chiseled stone and sense the block from which they were hewn, or Constable’s cloud studies and their quick thick paint, or Turner’s seascapes (is it a cloud or a dab of paint?). We love them for the same reason we enjoy Rodin, Philip Guston, Frank Auerbach, Henry Moore or Richard Deacon (the list is long): because we can see the material and the process at the same time as what is being represented, and that’s exciting. If at the same time we catch a glimpse of the artist at work, so much the better. It’s about transformation and personality.
Now, the idea of transformation is a massive topic which I don’t intend to deal with here, even if I could; I hope it is enough to say that it has been central to the work of many artists, as well as to our appreciation of it. And, in relation to my mate’s observation, it is helpful because it points to change as an important ingredient of work we find interesting. It’s worth emphasizing this: whether you’re modeling a figure, an abstract form, or just messing about, you are changing one thing into another, and, importantly, when working with natural media you start with something that already has an identity, in this case clay. Through the process of modeling it may take on other characteristics (skin, hair, smoothness, lightness, etc.), but they never completely overwhelm the material you start with. Or at least we prefer it if they don’t. I think this is part of the interestingness that Jon (my mate) was talking about.
Returning to the 3D computer model in the light of this discussion, it’s interesting to consider what’s being transformed. Frankly, very little. Points, polygons, edges and curves have no especially strong natural characteristics that provide for the sort of tease outlined above. They are brilliantly neutral, like water molecules, and give no hint of any qualities that might emerge from their combination. But even if you concede that they do, or that the process of working them can be, from the point of view of the artist, similar to working with clay, the absence of any visible record of interaction can’t be denied, and it affects what we think about them.
Perhaps this is the problem, then: digital models hide too much of the process, and of the artist behind it, to be very engaging, and furthermore don’t have enough in the way of raw material presence to make any transformation especially satisfying.
Maybe, but I’m not sure. Apart from anything else this all sounds very old-fashioned, referencing a bunch of old ideas that suppose that meanings are constructed from the inside out. It incorporates concerns for ‘truth to material’ and by extension authenticity and honesty. Didn’t these go out with the ark? I think so, and whatever their continued resonance in some quarters, and the abiding popularity of those artists already mentioned, we live in an exciting new age of immersive/alternate realities and virtual communities. In this arena, embodied by the Internet on one hand and films on the other, we can forget worn notions of reality and build new ones, forget about our old selves and try on different ones. New media is not so much concerned with the journey but with destinations, or more accurately, arrivals. It’s about what things look like and not what they are.
And in these times nobody is going to allow themselves to be tripped up or held back by a set of musty old concerns that passed their sell-by date at about the same time that Gollum crept around the corner of that rock. Take photography: it is a powerful medium that is widely used and appreciated in many different contexts. It declares no history of construction, its methods are similarly invisible to the viewer, and it is no less effective for that. But this makes it all the more surprising that the same can’t be said (yet?) for 3D.
You might argue that it’s just a question of maturity: CG in all its forms is very new, and it will take time to find its place in the scheme of things. After all, it’s only in the last few years that the tools and hardware have been available at prices individuals can afford. This is true in one way, but if, rather than thinking in terms of the number of years it’s been around, you think in terms of the man-hours that have been devoted to it, CG, and 3D particularly, suddenly looks ancient! It may be that my perspective on these things is skewed by too many years scheduling in game production, but it seems to me that there’s been acres of time. Add to this the fact that the art world thrives on novelty, and it’s still interesting to ask why, outside of the entertainment business or scientific imaging communities, 3D has not had much of a run.
So, if the problem (if it is a problem) is not to do with the material, and you allow that the point about maturity is not watertight, and further, accept that conditions are not unfavourable to new approaches, perhaps the explanation can be found in a more difficult area, to do with expectations and context.
In the second part of this article I’ll be looking at some of the other issues that impact on the ‘interestingness’ of 3D models. Stay tuned!
Taken from an essay written in 2005
One of the better things I heard last year came from a TV program that in every other respect was a bit rubbish. It was a series called ‘Walking with Humans’ and dealt with the evolution of man. My son loves monkeys, he especially loves people dressed up as monkeys, and since this program was filled with actors in monkey suits jumping about, hollering and picking nits off each other, it was a non-negotiable TV commitment for us all!
Anyway, one night, amidst all the lolloping and screeching of the monkey men, the narrator outlined a fascinating evolutionary insight that I thought was seamlessly applicable to the creative industries of game and film, where tight deadlines and long hours are commonplace. It was salutary, and went roughly like this.
Around two million years ago our immediate ancestors had reached an evolutionary plateau. That is, in all significant biological/anatomical respects they were us; they walked on two legs, had opposable thumbs, had big brains and were only slightly hairier than we’d find acceptable in polite society these days. Their technology and tools consisted of chipped flints: sharp, hand-held bits of rock that they made by smashing stones against each other until they split, giving them a cutting edge (the monkey men showed us how).
Now, the interesting thing, it turns out, is that having reached this point of biological and technological sophistication there was no further development of tools for another million years! Chipped flints ruled. And then, all of a sudden, about a million years ago there occurred a technological and creative explosion. Suddenly we have spears and knives with handles, jewelry, carved wood and stone figures.
The question is why? There was no biological/anatomical change over that time; our brains were no bigger nor our hands more dexterous. So evolutionary biologists pondered this question for a while until one bright spark made the inspired suggestion that perhaps it was all to do with the discovery of fire! The implications of this are huge although not immediately obvious.
It suggests that for a million years we spent all of our time busily keeping alive, keeping the wolf, quite literally, from the door. It’s easy to imagine how our lives might have played out, huddled up in some tree or behind a bush, gripping chipped flints in sweaty hands and looking out for whichever hungry predator was eyeing us up. But when some genius discovered fire, everything changed! Other animals are frightened of fire, and our evenings could be spent warm and relaxed, doing nothing but gazing vacantly into the flames! So the invention of fire brought with it the invention of laziness, and for the first time in our history we could do nothing, and, crucially, all that spare mental capacity and manual dexterity that had sat dormant for a million years had space and time to work!
As an explanation for the creative explosion in our distant past it is satisfyingly elegant. But more than that (and whether or not it’s true), it encapsulates brilliantly a strong argument against one of the defining characteristics of the entertainment industry: tight deadlines and long working hours. These are so much a part of the industry that they are barely noticed; indeed, those keen to show their commitment, ambition and seriousness often enthusiastically embrace them. But what this evolutionary tale suggests is what we all really know, namely that if we are serious, and have real ambition, and want to be really good, then we’d better be lazy, sometimes at least. More than that, it warns us that if we’re not we’ll remain stupid. And there’s more than enough in the collective output of the game and film industry (some of which I’ve been responsible for) to suggest that this is not a lesson well learnt.
One of the many differences between digital drawing and traditional drawing is that you have to name the work you make on the computer. You have to name all of it! And I’m not sure that this is the trivial difference that it might at first seem. Whether it’s “untitled 56” or “my dog (red final)” or “doodle6_largecoloured” or “explosion with rabbit_(rubbish4a)”, it all has to have a name and this simple fact probably does more to change the way we think about the work than any of the other more obvious differences that exist between the media.
In fact, it might be the thing that most elegantly differentiates digital art from traditional. Because, if you don’t name it you can’t save it, and if you can’t save it, it will cease to exist, and, if it doesn’t exist you can’t show it to anybody, and if no one else sees it, one of the most important ingredients in the work, its relationship with the viewer, ends before it has begun! And, for all practical purposes, you will have done nothing and may as well not have bothered… (regardless of whatever else you may have learnt from the process of making).
So, on the computer, until you ‘name and save’, the thing you’re doing has only a very fragile existence. But, strangely enough, once named, the digital artwork passes quickly from the non-existent to the (arguably) too-existent, extinguishing in the process some of the possibilities and opportunities that existed in its nameless state. (It’s primitive, I know, but the idea that naming something kills it has always made sense to me.)
Compare this to traditional ways of making work. The process here is the exact opposite. Having once started on some work you have to choose to name it, and you may well choose not to. More than that, you have to make an effort if you want to destroy it; scrunch it up, take a hammer to it, burn it or blame it on someone else. However you do it, the work is there until you make it not there.
The reversal in the process is significant, not only because it highlights the obvious fact that digital artwork has no physical autonomy, but because, when working digitally, we have to make explicit, at a very early stage, something about our intention for the work and our relationship to it. It’s like getting married during your first date, and the consequences are significant! Because the act of naming work is a declaration of intent that confers status and meaning, and, however subtly, it probably changes the way we think about things. More importantly, it probably affects the way we can interact with the work from then on, and it will do so regardless of our motives at the moment of naming.
I mean, most names, whether for traditional artworks or digital, are conceived as convenient identifiers that have no special significance and might easily change if a better one comes to mind. The difference is that with digital art we have no choice; the name has to happen quickly and should be frequently confirmed. In no other part of life is it sensible to commit so soon and so often!
The analogy is frivolous but the point is that the requirement to name things is no small matter and probably impacts very heavily on the nature of the things we produce, as well as the way we produce them.
On a more practical note, it’s never a good idea to include the word ‘final’ in any name, for the obvious reason that it never is. Revisions are always probable, and in order to maintain clear heredity as versions accumulate, you will be obliged to enter the uncharted and thickly fogged land of ‘names that lie beyond final’. It is full of strange and fantastical constructions, each of which stridently proclaims its own finality. If you get there you’ll get completely lost and may languish there with your colleagues for days. Don’t do it!
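For what it’s worth, the land beyond final can be avoided entirely with a plain numeric suffix that just counts upwards. Here is a minimal sketch of the idea in Python; the function name and the `_v2`-style naming scheme are my own inventions for illustration, not anything the essay prescribes:

```python
import re

def next_version(name: str) -> str:
    """Return the next numbered version of a filename stem.

    A stem with no version suffix gets '_v2' (the unnumbered
    original counts as version 1); a stem ending in '_vN' is
    bumped to '_vN+1'. No 'final', so no 'final_final(2)_REAL'.
    """
    match = re.search(r"_v(\d+)$", name)
    if match:
        bumped = int(match.group(1)) + 1
        return f"{name[:match.start()]}_v{bumped}"
    return f"{name}_v2"

# next_version("my_dog")        -> "my_dog_v2"
# next_version("my_dog_v2")     -> "my_dog_v3"
# next_version("doodle6_v9")    -> "doodle6_v10"
```

The point of the scheme is simply that a number, unlike the word ‘final’, never proclaims a finality it can’t keep.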