Artistry and Artifice


Published May 22, 2005

In Character, Spring 2005

In a 1973 short story called “Light Verse,” science fiction author Isaac Asimov describes a dinner party at a lavishly decorated home. The charming and wealthy hostess is renowned for her unparalleled expertise at the art of “light-sculpture”: “three-dimensional curves and solids in melting color, some pure and some fusing in startling, crystalline effects that bathed every guest in wonder.” At the story’s climax, it is revealed that the multicolored marvels were actually produced by a defective household robot. When a party guest repairs the robot’s broken brain, the hostess becomes furious. “It was he who created my light-sculptures,” she shrieks. “It was the maladjustment, the maladjustment, which you can never restore, that – that – .” She murders the guest.

The notion of a robot creating works of art that exceed in greatness and sublimity the best efforts of humans isn’t surprising; writers throughout the twentieth century imagined machines that were superior to people in every conceivable aspect of brains and brawn, thought and thew. Asimov’s interesting twist is that his artistic robot is a broken one. In this story, normal robots are obedient servants, but the special robot is creative only because it is out of kilter. The robot is creative by accident, not by design.

In real life, creativity is a trait that roboticists and computer scientists dearly desire to design into their machines. Yet when it comes to creativity, the field of artificial intelligence (AI) has had precious few results. The goal of making creative machines hangs forever on the horizon, never getting any closer. It remains perhaps the greatest challenge facing artificial intelligence researchers. As one of the field’s pioneers put it two decades ago, “The ultimate criterion for expertise in any area, whether chess or football or dance, is the ability to create something new…. Ultimately, creativity is the issue in AI.” The story of the many failures and partial successes of AI researchers seeking to develop creative machines is an instructive one – in no small measure because of what it teaches us about human thinking, desires, and creativity.

* * * * *

The fundamental question about any creative machine is whether its abilities derive only from its maker, or whether it is capable of independent and surprising originality. This key question dates back to the earliest efforts in robotics. Starting in the seventeenth century, European tinkerers and engineers began to adapt clockworks to build automata, novel mechanisms that moved on their own. As the technology advanced, some automata were designed to mimic human artistic output. In the 1730s, for instance, the Frenchman Jacques de Vaucanson exhibited mechanical men that played songs on a flute and a tambourine. Other inventors built automata that could play different instruments, draw elaborate pictures, and write out poetry (sometimes in more than one language). But these cleverly designed machines only simulated creativity – ultimately, they were no more creative than a player piano, their every move programmed in advance by human hands.

The earliest days of the computer era raised the same question: Can these new machines, seemingly capable of emulating rudimentary thought, actually think creatively? The first attempt at an answer came from Augusta Ada King, the Countess of Lovelace, daughter of the poet Byron. Lady Lovelace studied mathematics and befriended Charles Babbage, an eccentric English inventor and mathematician whose life’s obsession was the design of complex calculating machines. Babbage’s machines are considered mechanical forebears of today’s electronic computers, and the countess’s collaboration with Babbage has led some historians to call her one of the first computer programmers. She warned in 1843 against getting carried away with “exaggerated ideas” about the creative powers of one of Babbage’s proto-computers:

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with (emphases in original).

This critical passage was cited in 1950 by the brilliant mathematician Alan Turing, the father of modern computer science, in his famous essay that asked, “Can machines think?” Turing swept aside Lady Lovelace’s objection and concluded that machines would someday be able to think creatively. “The view that machines cannot give rise to surprises,” Turing wrote, is “a fallacy to which philosophers and mathematicians are particularly subject.”

The field of artificial intelligence has its provenance in that essay by Turing, and much of the last half-century of intense theorizing, research, and development can be traced back to questions he raised. All this work in AI has brought some impressive successes, like the chess-playing computer Deep Blue that beat world chess champion Garry Kasparov in New York in 1997 – the first match Kasparov had ever lost. He apparently got spooked by the machine; according to press accounts, Kasparov said there were moments when it seemed to evince an unexpected creativity.

Yet the fact remains that we can open up Deep Blue and explore its innards and learn exactly how and why it decided to capture a pawn, move a rook, sacrifice a knight. We can compare it to previous chess-playing machines, and see how each generation brought more computational power to bear on “solving” the game of chess. And because we can understand how it works, it becomes easy to say that chess-playing programs aren’t truly creative or intelligent. This is the slippery problem of defining artificial intelligence: In the 1950s, a chess machine that could defeat a human grandmaster would indubitably have been considered intelligent. Today, even though advanced chess programs can trounce the world’s best players, even their makers deny that what the machines do can be considered thinking. As artificial intelligence advances, the bar for judging authentic machine intelligence continues to rise.

Still, some AI researchers believe that true machine intelligence is possible – even inevitable. The argument goes like this: Computers will inexorably surpass human brains in complexity and processing power, machine consciousness will emerge, and the world will be transformed in wholly unpredictable ways. Some futurists speculate about a moment called the “singularity,” when breakthroughs in artificial intelligence and several other disciplines radically change the world – what one writer has described as an “eschatological cataclysm … when computers become the ultra-intelligent masters of physical matter and life.”

From the perspective of today’s AI, such talk seems totally outrageous. The history of artificial intelligence is largely a tale of gross overconfidence, appalling misconceptions, shaky assumptions, and discouraging dead ends. Giving machines basic vision, motion, and navigational abilities has turned out to be incredibly difficult. Making computers that can respond to natural language, or recognize patterns, or learn new facts remains very tricky. The “expert systems” that were all the rage in the 1970s and ’80s (programs loaded with facts and basic decision-making algorithms) did prove useful in making medical diagnoses and industrial recommendations – but they are little more than glorified flowcharts, capable only of solving “any problem that can be and frequently is solved by your in-house expert in a ten- to thirty-minute phone call,” according to one description. When compared to the optimistic dreams of the AI pioneers in the 1950s – “machines will be capable of doing any work that a man can do,” they wrote – today’s AI is an intellectual wasteland.

* * * * *

Two of those AI pioneers, Herbert Simon and Allen Newell, claimed in a 1958 paper that “there are now in the world machines that think, that learn, and that create.” The claim of creativity was based on experiments with computer programs – primitive by today’s standards – that were capable of “discovering” simple mathematical proofs. Later programs were able to devise more complicated proofs, and by the 1970s, AI researcher Douglas Lenat had developed for his Ph.D. thesis a program that sought to generate new mathematical concepts instead of just proving existing theorems. The program did some interesting work – it seemed to formulate several well-known mathematical theorems – but ultimately was “not able to discover any ‘new-to-mankind’ mathematics purely on its own,” as Lenat put it. He moved on to making computer programs that were better at creatively solving problems, including one that repeatedly won a naval war-gaming contest. But Lenat became dissatisfied with these programs, describing their creative capabilities as “extraordinarily meager” when “compared with human capabilities.”

In fact, no computer program has ever managed to come up with an important new mathematical theorem. “In the early 1980s the computer scientist and entrepreneur Edward Fredkin sought to revive the flagging interest in artificial mathematics by creating what came to be known as the Leibniz Prize … for the first computer program to devise a theorem that has a ‘profound effect’ on mathematics,” according to science writer John Horgan. When Horgan asked one of the judges, mathematician David Mumford, when the $100,000 prize was likely to be claimed, Mumford replied, “Not now, not a hundred years from now.”

Many AI researchers who don’t want to wait that long have become intrigued in the last decade or so by the prospect of making machines creative by imitating the creative force of biological evolution. Some AI programs use so-called genetic algorithms – programs that include rules analogous to resources, reproduction, competition, and mutation. In their most basic form, these programs can produce interesting patterns and show the “evolution” of simple systems, with each iteration of the program showing another “generation.” For example, Oxford zoologist Richard Dawkins, in his 1986 book The Blind Watchmaker, describes his fascination with “biomorphs” – simple computerized two-dimensional branching patterns that seem to evolve right in front of his eyes. “When I wrote the program, I never thought that it would evolve anything more than a variety of tree-like shapes,” Dawkins writes. “I had hoped for weeping willows, cedars of Lebanon, Lombardy poplars, seaweeds, perhaps deer antlers. Nothing in my biologist’s intuition, nothing in my 20 years’ experience of programming computers, and nothing in my wildest dreams, prepared me for what actually emerged on the screen”: shapes that look like “fairy shrimps, Aztec temples, Gothic church windows, aboriginal drawings of kangaroos,” and more.
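For readers who want the mechanism laid bare, a few lines of code can suggest it. The sketch below is a toy genetic algorithm built around Dawkins’s famous “METHINKS IT IS LIKE A WEASEL” demonstration from the same book: fitness counts matching characters, the fittest strings reproduce, and mutation supplies variation. The population size and mutation rate are invented for illustration; this is not Dawkins’s biomorph program.

```python
import random

# A minimal genetic-algorithm sketch (illustrative assumptions throughout):
# individuals are strings, fitness counts characters matching a target
# phrase, and each generation applies selection, reproduction, and mutation.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 100
MUTATION_RATE = 0.02

def fitness(individual):
    # Count positions where the individual matches the target.
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual):
    # Each character has a small chance of flipping to a random letter.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in individual
    )

# Start from a wholly random population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]

generation = 0
while max(fitness(p) for p in population) < len(TARGET):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Keep the best individual unchanged (elitism) so progress is never lost.
    population = [population[0]] + [mutate(random.choice(parents))
                                    for _ in range(POP_SIZE - 1)]
    generation += 1

print(f"Matched the target after {generation} generations")
```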

More advanced evolutionary programs can be used for problem solving by proposing different solutions that must compete with one another – a sort of electronic “survival of the fittest.” Thus, in the 1990s, Karl Sims, who had studied both life sciences and computer graphics at MIT, developed a program for “virtual creatures” that evolved in accordance with simple rules. Looking somewhat like bizarre snakes and amphibians, these computerized “creatures” were made of basic building-block shapes. They existed in a virtual world with a few basic rules, and they competed to crawl the fastest on virtual land and swim the fastest in virtual water. Over several generations, the virtual creatures – nicknamed “Blockies” – evolved in ways better adapted for their environment, growing different joints, limbs, tails or flippers. By the late 1990s, such evolutionary design programs moved from the virtual world to the real world: Researchers at Brandeis University’s Pentagon-funded “Golem Project” used rapid-prototyping technology to fabricate real “Blockies” out of plastic.

Today, high-tech companies are eagerly finding business applications for different kinds of genetic algorithms and evolutionary programs, even though such programs have a number of shortcomings: They can be painfully slow, their results can be ungainly, and the rules that control the evolution must be devised with the utmost care, since so much depends upon them. Programs that incorporate evolution-like elements routinely produce results that their human designers could not anticipate or intuit; they can even seem to be, as a 2003 Discover magazine article explained, “genuinely creative, capable of imaginative leaps and subtle connections” that elude human minds. True, the results they produce may be surprising, but they exhibit at most a partial creativity – a creativity constrained by the rules that govern the program’s evolution, and ultimately incapable of the full and rich complexity of true biological evolution.

* * * * *

Problem solving isn’t the only kind of creativity AI researchers have pursued. A great deal of work has gone into making machines capable of basic creative expression through natural language – that is, language as spoken and written by real people. Nowadays, digital lowlifes routinely employ software that uses basic grammatical rules to generate tricky text for spam or for “trap” Web sites – obscenity-laced verbigeration that doesn’t qualify as true creativity no matter how much it may remind the reader of the poetry of Allen Ginsberg. It isn’t difficult to program computers to communicate in sentences that obey the basic rules of grammar. It is hard, though, to make machines that appear to actually understand language. In the 1970s, AI researchers designed programs that seemed to understand news stories fed into them. They could analyze and summarize the stories, and even respond to basic questions about them. Other programs could even create new stories. But the researchers soon found that the programs just didn’t know enough to convincingly portray the real world; there was, according to leading AI researcher Roger Schank, “a huge gap between what the programs could understand and what they needed to understand.”

One example of what you might call this “common-sense gap” was a program called TALE-SPIN, which was designed in the 1970s to concoct children’s stories using a few programmed names, settings, characters, and scripted goals. Here is the sort of story the program churned out: “Henry Ant was thirsty. He walked over to the riverbank where his good friend Bill Bird was sitting. Henry slipped and fell into the river. He wasn’t able to call for help. He drowned.”

As Schank describes it, this grim little story was the result of the program’s not having been built with such concepts as “noticing” (hence Bill Bird’s obliviousness to Henry’s predicament) and “drowning” (here not allowing Henry Ant to call for help). After making a few tweaks to the program, the AI researchers ran the same story again. This time, the ending changed: “Henry Ant was thirsty. He walked over to the riverbank where his good friend Bill Bird was sitting. Henry slipped and fell into the river. Gravity drowned.”

You get the picture: The program lacks even the most basic common-sense understanding of the real world, because it has never lived in the real world. Storytelling programs have improved immensely since the days of TALE-SPIN – one recent storytelling program specializes in narratives involving betrayal – but the common-sense gap is still quite wide.
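To make the gap concrete, here is a toy sketch in the spirit of TALE-SPIN – hypothetical code, not the original program. The “story” is merely a trace of whichever scripted rules fire, and because no rule encodes “noticing” or “rescue,” the bystander never helps.

```python
# A toy scripted story generator (hypothetical, not TALE-SPIN itself).
# The world is a handful of facts; the story is a trace of rule firings.
world = {
    "Henry Ant": {"state": "thirsty", "location": "anthill", "can_swim": False},
    "Bill Bird": {"state": "idle", "location": "riverbank"},
}

story = []

# Rule 1: a thirsty character moves toward water.
if world["Henry Ant"]["state"] == "thirsty":
    world["Henry Ant"]["location"] = "riverbank"
    story.append("Henry Ant was thirsty. He walked over to the riverbank "
                 "where his good friend Bill Bird was sitting.")

# Rule 2: a character at the riverbank may slip into the river.
if world["Henry Ant"]["location"] == "riverbank":
    world["Henry Ant"]["location"] = "river"
    story.append("Henry slipped and fell into the river.")

# Rule 3: a character in the river who cannot swim drowns. There is no
# Rule 4 ("a nearby friend notices and helps"), so Bill Bird sits through
# the whole story doing nothing -- the common-sense gap in miniature.
if world["Henry Ant"]["location"] == "river" and not world["Henry Ant"]["can_swim"]:
    story.append("He wasn't able to call for help. He drowned.")

print(" ".join(story))
```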

At least one AI researcher has sought to give worldly knowledge to computers by force-feeding them. Douglas Lenat, mentioned earlier, is now more than two decades into a multimillion-dollar project to cram all the conceptual and factual material in an entire encyclopedia into a computer program. Lenat and his team have had to “teach” the program concepts and relationships that are obvious to anyone actually in the world – such concepts as people, darkness, part, intangible, death. It remains to be seen whether Lenat’s program will be useful in closing the common-sense gap.

Until that gap is closed, it will remain difficult for machines to exhibit even the sort of everyday creativity required for the simplest tasks – like stringing words together with sufficient cogency to carry on a conversation. Convincing conversation has been the notorious measuring stick of AI since Alan Turing first proposed a test – now known as the Turing Test – that would pit a computer against a person in a conversational contest. A long line of ever-more-sophisticated chatterbots – from the famous ELIZA psychoanalysis program in the 1960s to present-day online “personas” with monikers like Ramona and Jabberwock – have failed to pass the test. In fact, even though for the last fifteen years there has been a $100,000 prize awaiting the maker of the first computer to convincingly converse like a human, no computer has come anywhere close to claiming the bounty.
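ELIZA’s method, as its creator Joseph Weizenbaum described it, was little more than keyword matching and pronoun reflection. A minimal sketch of that technique – with patterns invented for illustration, nothing like Weizenbaum’s actual code – shows how far mere word-shuffling is from understanding:

```python
import re

# A minimal ELIZA-style chatterbot sketch (patterns invented for
# illustration). It matches a keyword, reflects pronouns, and echoes the
# user's own words back as a question: conversation without comprehension.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I feel that my work is pointless"))
# -> "Why do you feel that your work is pointless?"
```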

* * * * *

The common-sense gap doesn’t prevent computers from demonstrating other limited forms of creativity. Conversational ineptitude aside, computers have steadily improved at chopping up, deconstructing, and making predictions about human creativity. There is, for instance, a computer algorithm that can analyze a given text and predict the sex of the author with startling accuracy. The algorithm looks at the patterns and pieces of the text to make its determination – and it’s right 80 percent of the time.
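The real algorithm learned its word weights from a large corpus of texts; the sketch below, with invented words and invented weights, shows only the shape of the mechanism: tally a weighted score over telltale function words.

```python
# A crude sketch of function-word-based authorship classification. The
# words and weights below are invented placeholders, not the researchers'
# learned model; a real system would fit these weights from a corpus.
WEIGHTS = {
    "with": 1.0, "and": 0.5, "she": 1.5, "her": 1.5,  # hypothetical cues one way
    "the": -1.0, "of": -0.5, "a": -0.5, "it": -1.0,   # hypothetical cues the other
}

def classify(text):
    words = text.lower().split()
    # Average the weights of recognized words; unknown words score zero.
    score = sum(WEIGHTS.get(w, 0.0) for w in words) / max(len(words), 1)
    return "female" if score > 0 else "male"

print(classify("She packed her bag and left with a smile"))
```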

In a similar vein, composer David Cope, a UC Santa Cruz professor, has invented software that analyzes music, seeking the patterns and styles, the rhythms and harmonies, that might be said to make up a composer’s unique fingerprint. Cope’s “Experiments in Musical Intelligence” (EMI, pronounced “Emmy”) disassembles the music, makes some alterations, and then recombines the parts to make a new whole. The result is music that sounds eerily like that of the original composer – a new invention in the style of Bach, a new rag in the style of Joplin, a new mazurka in the style of Chopin, a new opera (!) in the style of Mahler. Cope has even used EMI to make new compositions in the style of Cope – using the machine to deconstruct and reconstruct his own music.
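EMI itself is far more elaborate than any short sketch, but the recombination idea can be suggested with a first-order Markov chain: learn which note tends to follow which in a source melody, then walk those transitions to produce a “new” melody with the same local flavor. The melody and the method here are toy assumptions, not Cope’s actual technique.

```python
import random
from collections import defaultdict

# A toy recombination sketch (far simpler than EMI): learn note-to-note
# transitions from a source melody, then generate a new melody by walking
# the learned transitions. The source melody is invented.
source_melody = ["C", "E", "G", "E", "C", "F", "A", "F", "C", "E", "G", "C"]

transitions = defaultdict(list)
for current, following in zip(source_melody, source_melody[1:]):
    transitions[current].append(following)

def generate(length=12, start="C"):
    note, melody = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note])  # pick a learned continuation
        melody.append(note)
    return melody

print(" ".join(generate()))
```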

There have been many other efforts at computerized music creation, dating back to the 1950s, but none produced anything as listenable as the music made by Cope’s EMI. Admittedly, some of EMI’s songs have melodies that go nowhere, or quickly get tedious, or sound soulless. But others sound humanly expressive. Some of the songs seem jarringly similar to the original music that served as the source material; one bizarro-Beethoven sonata is plainly a rip-off of the first movement of the “Moonlight” Sonata. Others are much more original. And several of the songs sound authentic enough to pass what might be considered a musical version of the Turing Test: On many occasions, audiences have been unable to distinguish between the man-made and machine-made music.

Experts in AI are divided over whether EMI is an example of machine creativity. Computer scientist Douglas Hofstadter, who won the Pulitzer Prize in 1980 for his book Gödel, Escher, Bach, has described EMI as a “coup in the modeling of artistic creativity” but still missing the “motivating core” of human creativity. AI theorist Terry Dartnall speaks not of creativity but of musical “reverse engineering.” And for his part, Cope doesn’t really care: “Creativity isn’t such a big deal to me.” Still, he warns, “We should not define creativity so narrowly that the definition itself precludes the possibility that computer programs can be creative or create elegant works.”

In the end, it makes little sense to say that these in-the-style-of-X compositions show that EMI is a creative machine, since all the creative work really comes from the original composer and from Cope’s programming. Giving EMI credit for creativity for these songs makes about as much sense as locking David Cope and Johann Sebastian Bach together in an apartment, commanding them to compose a new invention, and crediting the apartment with having done the creative work.

A more challenging case is the life’s work of Harold Cohen, a UC San Diego professor who has spent more than three decades on AARON, a computer program capable of drawing and painting three-dimensional representations of people and plants. No pictures are programmed in advance; AARON isn’t just a modern-day version of an eighteenth-century drawing automaton, repeatedly churning out the same picture. Instead, AARON makes an endless supply of wholly original artwork. Cohen – who already had an international reputation as a painter when he started this work – had to program AARON with rules for artistic concepts related to perspective, occlusion, the use of color, and much more. And like the storytelling software, AARON had to be programmed with common-sense guidelines – including basic facts about the world (like the fact that objects behind other objects can’t be seen) and the fundamentals of human bodies (like the fact that people have two arms).

Several major museums have exhibited the paintings produced by AARON, no doubt more because of the novelty of their means of production than because of the quality of the artwork. That isn’t to criticize AARON’s paintings – in a day when museums exhibit canine scribblings, people pay thousands of dollars to purchase gorillas’ finger paintings, and Web surfers can visit a virtual Museum of Non-Primate Art, the paintings made by AARON are surprisingly human and attractive. AARON’s art can be considered imaginative insofar as it isn’t copied from anything in the real world: Old versions of AARON drew imaginary acrobats, and more recent versions paint imaginary people milling about in vivid-hued rooms or jungles. AARON’s imagination cannot exceed its programming – it will never be able to paint a one-armed person, for example, unless Cohen fiddles with the code. But within the confines of its programming, it can produce unique and aesthetically pleasing pictures.
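The point about programming as a boundary can be made concrete with a cartoonishly simplified, hypothetical sketch – nothing like Cohen’s actual code. Randomness supplies endless variation, but the hard-coded rules bound what can ever appear:

```python
import random

# A hypothetical, drastically simplified sketch of rule-based figure
# generation (not Cohen's AARON). Every run yields a different picture,
# yet every figure has exactly two arms unless a programmer edits
# ARMS_PER_FIGURE by hand -- variation inside fixed rules.
ARMS_PER_FIGURE = 2  # a "common-sense" rule baked into the program
POSES = ["standing", "reaching", "bending"]
PALETTE = ["vermilion", "cobalt", "ochre", "viridian"]

def draw_figure():
    return {
        "pose": random.choice(POSES),
        "color": random.choice(PALETTE),
        "arms": ARMS_PER_FIGURE,  # never varies, however many pictures we make
        "arm_angles": [random.uniform(0, 180) for _ in range(ARMS_PER_FIGURE)],
    }

# An endless supply of unique pictures, all inside the same fixed rules.
for _ in range(3):
    print(draw_figure())
```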

We must concede that there is a modicum of artistic creativity at work here, and that not all of that creativity can be credited to Cohen. He is, in essence, the primum mobile: he wrote AARON’s program, but AARON now produces its original paintings without Cohen’s intervention.

Oscar Wilde once wrote that it is the artist, not the subject, who is truly portrayed in a painting; the artist “reveals himself” on the canvas. Studying AARON’s ever-expanding portfolio does not reveal much about the machine nature of the artist – until you notice that no matter how many different paintings AARON produces, its distinctly recognizable style does not change. It cannot change. Unlike the human artist, AARON has no ability or desire to experiment with other artistic styles or techniques; it is locked into its programming and is incapable of being bored by aesthetic sameness. But it is conceivable, according to philosophy professor and AI theorist Margaret A. Boden, that a computerized artist could be programmed to modify itself – to reflect upon and criticize its own work, and then to modify its technique. “In principle,” she writes in her 1990 book The Creative Mind: Myths and Mechanisms, “some future version of AARON might autonomously do a pleasing drawing which it could not have done before.”

But by what criteria would a computer artist ever criticize its own work? How could a program like AARON ever choose to modify itself, without some external standard to which it could appeal? Therein lies the problem of machine creativity. Human creativity is rooted in our interaction with the world – what we see and experience, including all the ambiguities and contradictions that imbue everyday life. Human creativity is inseparable from our physical embodiment, our genes and brains and bruises and flaws that give rise in mysterious ways to our predispositions and preferences. Human creativity is bound to our emotions, our hopes and hates, desires and fears, grief and love and faith.

The age of creative machines is not near at hand, for actually being in the world is the price of true creativity.

Adam Keiper is the managing editor of The New Atlantis and co-director of the Ethics and Public Policy Center’s program on Science, Technology, and Society.

