Ninety years ago André Breton asked: “Can’t the dream be used in solving the fundamental problems of life?”
The answer, of course, is no. The unconscious does not offer some insight into the mystery of the human will, the relationship between subjective and objective reality, or the possibility of world peace.
But does it make for good art? On that score, the results are decidedly mixed.
Let me start by admitting that it is odd and usually unhelpful to think of artistic practices as either successes or mistakes. Practices are developed, rise or fall in popularity, are used to good effect by good artists and to poor effect by bad ones. But I would like to make the distinction here between the use of the fantastic in late 19th and early 20th century art and literature, which more or less developed organically, and Breton’s attempt to codify and regulate particular practices under the term surrealism.
The fantastic is not the same as surrealism. It is the use of images, often borrowed from or indebted in some way to Greek and pagan mythology or to Christianity, within a larger work in a way that surprises or is particularly evocative. Examples include the work of François de Nomé, Gustave Doré, William Blake, and others. Breton’s surrealism, on the other hand, is both idealistic and ideological. It prescribes certain artistic practices—automatism—for certain aesthetic and social ends.
The word “surrealism” was first used in 1917 by Guillaume Apollinaire in reference to Cocteau’s Parade. Apollinaire was rather liberal in his use and definition of the term, as Ruth Brandon notes in Surreal Lives. (“When man wanted to imitate walking,” Apollinaire once wrote, “he invented the wheel, which does not look like a leg. Without knowing it, he was a Surrealist.”) Breton was not.
Breton argued that the use of automatism might provide a more all-encompassing, “synthetic” expression of the world—one in which all differences, including those between social classes, were obliterated. In his first Manifesto of Surrealism, Breton states that “I believe in the future resolution of these two states, dream and reality, which are seemingly so contradictory, into a kind of absolute reality, a surreality.” And in his Second Manifesto of Surrealism, he writes:
Everything tends to make us believe that there exists a certain point of the mind at which life and death, the real and the imagined, past and future, the communicable and the incommunicable, high and low, cease to be perceived as contradictions. Now, search as one may one will never find any other motivating force in the activities of the Surrealists than the hope of finding and fixing this point.
American attitudes toward and uses of surrealism have been more pragmatic. Painters such as Gerome Kamrowski and William Baziotes rejected surrealism’s radical politics but played with images associated with dreams in their work. And in a talk at “First Papers of Surrealism” in 1942, Robert Motherwell argued that while automatism was, technically speaking, impossible, a version of it—what he called “plastic automatism”—could be a useful tool in picture-making.
In the end, though, plastic automatism has little in common with Breton’s surrealism. It is simply the painter’s free use of paint and other materials as they come to mind or to hand. Painting takes time to execute, and because of this, it is impossible to reduce the interval between a brush stroke and some thought supposedly at the edge of consciousness enough to maintain the fantasy that the unconscious is in any way being explored.
Poetry has not been so lucky. Paul Éluard, Blaise Cendrars, and others attempted to put Breton’s ideas (which were, after all, principally addressed to writers) into practice, and the results were incredibly boring poems, despite the occasional violent or sexual image. Benjamin Péret’s metaphors are striking enough (see “Hello”), but all in all, it has been a miserable failure.
Despite this, soft surrealism—that is, a little incoherence here, an out-of-place violent or sexual image there (no one tries to actually use automatism)—is still relatively popular today. It makes a poem look edgy and in-the-know, and it has a nice leftist pedigree. The problem is that this soft surrealism can hide incompetence and often adds nothing to a poem other than the above stylish marking. (Examples—almost all published this month—can be found here, here, here, here, and here.)
Stephen Burt has written against this soft surrealism, which he calls “elliptical poetry,” and has suggested that a renewed focus on objects in poetry—on “well-made, attentive, unornamented things”—might (and should) replace the “slippery, digressive, polyvocalic … overlapping, colorful fragments” of a still fashionable soft surrealism.
I would propose a different route. Getting rid of incoherence, meaningless images, fragmented syntax, and so forth, could open a much needed opportunity for a fantastic in poetry that makes sense. Too long has the fantastic been wedded to Breton’s watered-down automatism, and breaking definitively free from it might open the field for more poems like Marly Youmans’s Thaliad or Joe Fletcher’s Sleigh Ride. And that would be a very good thing.
At BetaNews, Robert X. Cringely writes that fifty years ago attempts to create artificial intelligence (AI) failed because there was not enough “processing power” to do it: “But thanks to Map Reduce and the cloud we have more than enough computing power to do AI today.”
At The Washington Post, Dominic Basulto tells us that AI is “the next big thing” for Silicon Valley start-ups. Not only will it be created, it will be done so easily and cheaply:
AI will move from something that took tens of millions of dollars and thousands of people to create, to something that takes tens of thousands of dollars and can be created by a group of kids after an all-night Red Bull session. When they do, then we’ll know that visionaries like Erik Brynjolfsson and Andrew McAfee were right – we are entering the dawn of the age of artificial intelligence.
The problem with AI is that we don’t really know what the problem is, nor do we agree on what success would look like. With cellphones (or any number of similar rapidly improving technologies), we are perfectly aware of what constitutes success, and we know pretty well how to improve them. With AI, defining the questions remains a major task, and what counts as success a major disagreement. That is fundamentally different from issues like increasing processor power, squeezing more pixels onto a screen, or speeding up wireless internet. Failing to see that difference is massively unhelpful.
If people want to reflect meaningfully on this issue, they should start with the central controversy in artificial intelligence: probabilistic vs. cognitive models of intelligence.
John Ashbery once wrote that “All poetry is against war and in favor of life, or else it isn’t poetry, and it stops being poetry when it is forced into the mold of a particular program. Poetry is poetry. Protest is protest.”
Over at Capital Commentary, I take a look at the context of Ashbery’s remark and suggest that while he was wrong that poetry is always against war, he’s right that one of the key characteristics of protest and propaganda is a forcing of language.
Ashbery’s point was not that poetry is apolitical, but that poetry devoted to a single cause or program lacks the independence to deal with human experience in all its ambiguities and paradoxes…a poem is much less a poem to the extent that it forces its words to carry an idea further than the words themselves will allow.
Brett Beasley, in turn, offers a nice defense of war poetry via Tennyson’s “The Charge of the Light Brigade” over at The Curator, and shows that one can remember the fallen dead–and even inflate their heroism and sacrifice–without naively praising war itself:
Tennyson’s poem might misrepresent the facts, but, despite initial appearances, it by no means presents a simple or one-sided view of war or heroism. The soldiers of the Light Brigade face an absurd situation filled with mismanagement (“someone had blundered”) and hopelessness (“Theirs not to reason why, / Theirs but to do and die”) as well as defenselessness (they charge with minimal armor, wielding swords against cannons). In preserving the memory of the Light Brigade, Tennyson preserved the particular set of complexities and contradictions that characterized what many have called “the first modern war.”
Read the rest of Beasley’s essay here.
This Atlantic article on Vladimir Nabokov’s wife, Vera, who devoted her life to his writing, is making the rounds. Executive summary: Most writers “pine” for a “do-it-all spouse” like Vera, but it’s harder for women writers to find one.
It is a beautiful thing when couples sacrifice themselves for each other–that is what marriage is all about–but why single out the sacrifices that non-writerly spouses make for writerly ones as somehow special, as if writing–more so than other vocations–requires and deserves such sacrifice? It doesn’t.
Were he alive today, Robert Frost–that great megalomanic misogynist–would have scoffed at such horseradish. When Amy Lowell made Frost’s wife, Elinor, out to be “the conventional helpmeet of genius” in an essay on the poet, Frost wrote Louis Untermeyer to complain:
Catch her getting any satisfaction out of what her housekeeping may have done to feed a poet! Rats! She hates housekeeping. She has worked because the work has piled on top of her. But she hasn’t pretended to like housework even for my sake. If she has liked anything it has been what I may call living it on the high. She’s especially wary of honors that derogate from the poetic life she fancies us living. What a cheap common unindividualized picture Amy makes of her.
Update: Marly Youmans–who is a novelist and a poet–put it this way earlier this week: “What is a spouse for? Not to be your personal servant, certainly! I’m glad to have married a man who likes to cook and does so. But I didn’t and don’t expect my husband to read or critique manuscripts, act as my secretary, clean the bathrooms, do the laundry for five people (or however many are in residence at the moment), vacuum, etc. Do I wish he would do all those things? It’s a bit tempting . . . but no, not really, thanks.”
Bryan Appleyard speaks truth to techno-futurist craziness over at The New Statesman:
Futurologists are almost always wrong. Indeed, Clive James invented a word – “Hermie” – to denote an inaccurate prediction by a futurologist. This was an ironic tribute to the cold war strategist and, in later life, pop futurologist Herman Kahn. It was slightly unfair, because Kahn made so many fairly obvious predictions – mobile phones and the like – that it was inevitable quite a few would be right.
Even poppier was Alvin Toffler, with his 1970 book Future Shock, which suggested that the pace of technological change would cause psychological breakdown and social paralysis, not an obvious feature of the Facebook generation. Most inaccurate of all was Paul R Ehrlich who, in The Population Bomb, predicted that hundreds of millions would die of starvation in the 1970s. Hunger, in fact, has since declined quite rapidly.
Perhaps the most significant inaccuracy concerned artificial intelligence (AI). In 1965 the polymath Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work a man can do” and in 1967 the cognitive scientist Marvin Minsky announced that “within a generation . . . the problem of creating ‘artificial intelligence’ will substantially be solved”. Yet, in spite of all the hype and the dizzying increases in the power and speed of computers, we are nowhere near creating a thinking machine.
Such a machine is the basis of Kurzweil’s singularity, but futurologists seldom let the facts get in the way of a good prophecy. Or, if they must, they simply move on. The nightmarishly intractable problem of space travel has more or less killed that futurological category and the unexpected complexities of genetics have put that on the back burner for the moment, leaving neuroscientists to take on the prediction game. But futurology as a whole is in rude health despite all the setbacks.
Why? Because there’s money in it; money and faith. I don’t just mean the few millions to be made from book sales; nor do I mean the simple geek belief in gadgetry. And I certainly don’t mean the pallid, undefined, pop-song promises of politicians trying to turn our eyes from the present – Bill Clinton’s “Don’t stop thinking about tomorrow” and Tony Blair’s “Things can only get better”. No, I mean the billions involved in corporate destinies and the yearning for salvation from our human condition.
Appleyard goes on to discuss the hope that futurists place in neuroscience:
We are, it is said, on the verge of mapping, modelling and even replicating the human brain and, once we have done that, the mechanistic foundations of the mind will be exposed. Then we will be able to enhance, empower or (more likely) control the human world in its entirety. This way, I need hardly point out, madness lies.
There is, of course, a lot of good, practical work done in neuroscience in the area of memory loss, attention disorders, brain development, and so forth. But in other areas it is often infused with a fair bit of futurist mumbo-jumbo.
Walter Sinnott-Armstrong is no futurist, but his remarks on the possibility of using transcranial magnetic stimulation (TMS)–which can disrupt the neurons in the area of the brain associated with moral reasoning–to change how psychopaths think are an example of the sort of technological breakthrough that futurists hope will transform human existence:
“It’s possible that if we understand the neural circuits that underlie psychopaths and their behavior, we can use medications and magnetic stimulation to change their behavior,” he said.
* * *
Existing studies tend to look at only one kind of moral question: circumstances in which a hypothetical person in some way causes harm, Sinnott-Armstrong said.
But there are many other areas to explore, such as disloyalty to friends, “impure” sexual acts, and procedural injustice. How does the brain respond to a good outcome achieved by questionable means, such as a good leader coming to power in an unjust process? These topics are all ripe for future study.
Believe that homosexuality is a sin? Zap. Unkind and unjust to friends and family? Buzz.
It won’t happen, of course, because moral reasoning cannot be reduced to the brain, even if it is not entirely separate from it either. (More on that, perhaps, in a later post.) But I think that Appleyard is right that the problem with the futurist vision is not that it might come true but that it is a waste of time and money.
In the Times Literary Supplement, Jonathan Benthall reviews Jeremy Seabrook’s Pauperland in which Seabrook claims that the poor in Britain are no longer treated with dignity but are viewed with disdain. The poor, in turn, pursue “a degraded version of aristocratic grandeur.” Benthall writes that according to Seabrook:
In earlier times, attempts were made by the comfortably off to curb the desires of poor people, and their profligacy was frowned on. But high consumption by the poor is now encouraged as a source of commercial profit. Bombarded with incitements to spend, the poor no longer aim at securing a modest sufficiency, but tend to become caricatures of the rich.
Seabrook may be right that the poor today, more so than fifty years ago, attempt to imitate the “ostentatious kitsch and bling” of the wealthy. Here in Appalachia, I have driven past trailers with plastic toys, bikes, trampolines, plastic swimming pools, play sets, and so forth, strewn across the yard in the rain. Many of the poor are indeed consumed by consumerism.
At the same time, there are also those trailers that are free of clutter, with a small vegetable garden to the side and flowers in front. My wife, who works at the local library, encounters both the demanding, chain-smoking couple on food stamps who check out movie after movie and the high school girl who works to support her parents while attending school and who still finds time to check out books for her younger siblings.
So, yes, the poor are more ostentatious, but not without qualification. There are still those who live a life of what Seabrook calls “modest sufficiency.”
Nor is the solution to our disdain for the poor the hatred of the wealthy as Seabrook proposes. Being rich is not evil, and an inordinate love of temporal things–greed–is as easily exercised in a free market economy as it is in a command one.
Update: Let me add that personal charity towards the poor offers the opportunity to identify with another person’s poverty and is a tacit acknowledgement that wealth and human dignity are two separate things. Hate simply breeds more hate.
Verse from the Author, in his Ninety-Seventh Year, on His Stomach
Bernard Le Bovier de Fontenelle
Although they argue ad hoc and ad hac*
On my veritable life,
I am now no more than a stomach;
It’s not much, but it’ll do.
Qu’on raisonne ad hoc et ad hac
Sur mon existence présente,
Je ne suis plus qu’un estomac;
C’est bien peu, mais je m’en contente.
*Carries the sense here of “left and right.”
Bernard Le Bovier de Fontenelle (1657-1757) lived a long life, and his age was much discussed in his time–hence the above poem. He studied at Rouen and practiced briefly as a lawyer before devoting himself to writing. While he wrote poems, plays, and opera, he is mostly remembered for his essays on nature, Greek thought, and metaphysics. His works include Dialogues des morts [Dialogues of the Dead], Entretiens sur la pluralité des mondes [Conversations on the Plurality of Worlds], and Discours sur l’églogue [Discourse on the Pastoral]. He was elected to the French Academy in 1691.
My colleagues here at TAC are currently running a nice banner ad encouraging you to subscribe to Prufrock–a daily newsletter on books, arts, and ideas. Every weekday, you receive 10 to 14 items of interest, an excerpted essay, a poem or translation, an image, and a link to a forthcoming book. Here’s a sample. I’m not always successful, but I try to pick items that are provocative, substantive, and stylish. The newsletter is free. Why not give it a try?
Buzzfeed started a new vertical two days ago called “Ideas.” The introduction is not very promising. It reads like someone who does not have any ideas trying to have one. At first I thought it was an April Fools joke, but no. First sentence:
There’s no other time in history I envy compared with the present.
I am tired of reading pieces that begin with personal preference or feeling (and I’m not contradicting myself when I say that), but it is particularly out of place here. Who cares what you envy, like, loathe, distrust. Let’s focus on ideas, shall we?
“Mass communication” has finally become more practice than product, and the confrontation between those who once directed the mainstream and those disserved by it has never been more possible.
Not mass communication but “mass communication,” and what does it mean that the “confrontation”—is that the right word?—between mainstream media and those “disserved by it” (conservatives?) has “never been more possible”? I understand what I’m supposed to feel here—Buzzfeed is doing something new and it’s going to be interesting (process over product!)—I’m just not sure what this sentence means.
The public sphere now means a larger, more empowered public, and no new venture will survive without appreciating that fact. I’m proud to helm a project that isn’t suspicious of this shift.
Since this is going to be an ideas section, let’s investigate this one: Is the “public sphere” now “larger” and “more empowered”? What is “the public sphere”? Are people on Twitter and with individual blogs part of “the public sphere”? And are people—all people or just Americans?—generally more “empowered”? Or does the ability to readily share our opinions on Twitter and Facebook simply make us feel more empowered?
The category of writing broadly referred to as criticism was once just how a select few determined canon. It’s now a genre revitalized by a cultural shift toward conversation. And the angle with the most longevity on any topic is always the one that aims beyond the last word. That’s the difference between Ideas and the next cycle of think pieces: a valuation of inquiry that builds — not just reacts.
Let’s start with a false distinction borrowed from a local undergraduate literary theory class (once “a select few determined canon” but now the canon is determined by a wide variety of identity theorists), followed by another vaguely hopeful assertion (criticism is “a genre revitalized”!) and a muddled take-home: Buzzfeed “Ideas” is going to publish angles that aim “beyond the last word” (and in language!) because these pieces last the longest. So, ideas and traffic, but mostly traffic.
This is a rather harsh reading of the introduction, and I considered toning it down, but I decided not to because good ideas that are clearly expressed are important, though they are rarely popular, at least initially. Perhaps it will surprise me, but based on its introduction, Buzzfeed’s new section looks to be more interested in the simulacrum of thought and in the polishing of dull clichés than in actual thinking.
In The Atlantic, Rebecca J. Rosen directs our attention to Ford’s response to Cadillac’s much-discussed “Why do we work so hard?” commercial. Rosen writes that while the two ads share a high view of work, they present two radically different reasons for working hard.
Here are the two ads:
Rosen: “For Cadillac, the answer to that question [‘Why do we work so hard?’] is ‘all that stuff.’ For Ford, it’s ‘to make the world better.’ The justifications for a hard-working life are night and day, but neither ad disparages the slog itself.”
Well, no. The reason why Americans work so hard, according to the Cadillac ad, is not “all that stuff,” it is to do something great—fly, box, go to the moon. The “stuff” is just the cherry on top–though, granted, it’s a pretty visible cherry.
As brazen as the Cadillac ad is, the two commercials are not all that different. They do differ in how they define excellence. In the Cadillac spot, it’s innovation in something—anything. In the Ford, it’s doing something our culture considers morally laudable—saving the planet. (Though it’s worth noting that both are advertisements for electric cars—the innovation subtly pushed in the Cadillac commercial is their use of “green” technology.)
But both share the view that what makes a person better than another is his or her work in the service of doing something great in this life. Americans in the Cadillac commercial are better than the French because they’ve gone to the moon. The woman in the Ford commercial is better than the Cadillac guy because she buys local food and recycles.
Both are good ads, but I think the Cadillac one is better. Its blunt honesty about our unabashed materialism and arrogance may be many things, but it’s not boring, as Rod noted a few weeks ago. By contrast, the Ford ad tells us that we are all good people (or should be! tsk, tsk) because we make the world a better place–unlike those crass materialists. All this while trying to sell us a new car. Other than the fact that it’s a response to Caddy, it’s like most commercials these days.