Raise your hand if you’re a conservative who has cited Edmund Burke without actually having read him closely.
Really—you’re all scholars of the Irish-born MP and oft-celebrated “father of modern conservatism”?
Okay, what did Burke mean by the phrase “the little platoon”?
Yuval Levin explains in his wonderful new book The Great Debate: Edmund Burke, Thomas Paine, and the Birth of Right and Left:
The division of citizens into distinct groups and classes, Burke writes, “composes a strong barrier against the excesses of despotism,” by establishing habits and obligations of restraint in ruler and ruled alike grounded in the relations of groups or classes in society. To remove these traditional restraints, which hold in check both the individual and the state, would mean empowering only the state to restrain the individual, and in turn restraining the state with only principles and rules, or parchment barriers. Neither, Burke thought, could be stronger or more effective than the restraints of habit and custom that grow out of group identity and loyalty. Burke’s famous reference to the little platoon—“To be attached to the subdivision, to love the little platoon we belong to in society, is the first principle (the germ as it were) of public affections”—is often cited as an example of a case for local government or allegiance to place, but in its context in the Reflections, the passage is very clearly a reference to social class.
Still feeling Burkean? Ready to go the pipe-and-slippers, Brideshead cultist route and declare yourself a loyal subject of the queen?
Levin reminds us that the context in which Burke wrote those words was a long-running intellectual dispute with an English-born radical, a man who was cheering on the secular revolution in France—and, oh, by the way, also one of the forefathers of our own revolution, favored by none other than Ronald Reagan himself—the Common Sense and The Crisis pamphleteer Thomas Paine.
That the rivalry between Burke and Paine cuts both ways through our hearts—this is precisely the kind of dialectic, if you will, that Levin hopes to provoke in the reader.
Make no mistake, though: Levin is a Burkean. In fact, he is the most eloquent exponent of Burkean conservatism, properly understood, since the George Will of 1983’s Statecraft as Soulcraft.
While scholarly and measured in tone, The Great Debate is a readable intellectual history that fairly crackles with contemporary relevance.
Indeed, The Great Debate is the must-read book of the year for conservatives—especially those conservatives who are profoundly and genuinely baffled by the declining popularity of the GOP as a national party. How can America, these conservatives ask, the land of the rugged individual, the conquerors of the frontier, choose statism and collectivism over freedom and liberty?!
Levin’s book provides the answer: You’re looking at the Democratic Party all wrong. It’s just as individualist as you are—maybe more so.
And that is the problem!
While many Americans give during the holiday season, the religious are most likely to feel charitable: according to American Grace, the book David E. Campbell co-authored with Robert D. Putnam, U.S. giving has always been heavily tied to religion. Those affiliated with a religion are the most likely to contribute time and money to philanthropic causes. But Campbell proposes that the real motivator behind charitable giving is not God or a specific doctrine of charity; it is the religious community, as he explained in a Thursday TIME article:
Rather than religious beliefs, we found that the “secret ingredient” for charitable giving among religious Americans is the social networks formed within religious congregations. The more friends someone has within a religious congregation, the more likely that person is to give time, money, or both, to charitable causes. In fact, even non-religious people who have friends within a religious congregation (typically, because their spouse is a believer) are highly charitable—more so than strong believers who have few social ties within a congregation.
Campbell goes on to propose that secular, tight-knit organizations (such as atheist churches) could help encourage charity among non-religious people.
But what is it about community, specifically, that encourages giving? Campbell doesn’t elaborate. Perhaps it is the love fostered through relationships. It could also be a sense of accountability derived from close community: if your best friend sponsors a child overseas, you may be prompted to do so as well. Community may also lend a feeling of immediacy to various issues: we may not live next door to those suffering from poverty, but we do live next door to those fighting it.
This sense of immediacy may be one of the most important factors in charitable giving: The Atlantic shared thoughts Monday from bioethics professor Peter Singer’s “practical ethics” class at Princeton. He believes a feeling of remoteness can significantly affect giving:
It’s human nature to feel compelled to save those who are close to us—our immediate kin, our friends, the little boy we stumble upon whose desperate movements in the water tug at our hearts. It’s much harder to feel that sympathy for faceless children somewhere else (whether in another neighborhood in our town, or halfway across the world). Studies have shown that people tend to give more generously when they are shown a photo and told a story about one, identifiable, specific child.
How do we combat this geographical apathy? Interestingly, Singer points to research as an antidote: he instructs students to research four organizations, and determine which is the most meritorious. Through this exercise, the students “learn that their money will always go much further overseas: that a very small amount of money for an American can be life-saving to someone who is desperately poor. In other words, they learn about the tenets of effective altruism: how to evaluate organizations for transparency and benefits, and figure out which forms of aid are the most cost-effective. This is information that tends to inspire more giving.”
But Singer’s research-based tactic takes the human face away from charity—and according to Campbell’s research, that human face is an essential facet of long-term giving. Additionally, while it makes the most logical sense to put your dollar where it will have the greatest practical benefit, Singer’s “effective altruism” distances the giver from the need. If community and immediacy are key ingredients in philanthropic giving, then this method—while useful in a utilitarian sense—may falter faster than community-fostered giving.
Haven’t you heard? Amazon is debuting its very own delivery-by-drone service. So Jeff Bezos revealed to Charlie Rose in last night’s Cyber Monday infomercial of a “60 Minutes” report. In the ensuing commentary on Twitter, though, McKay Coppins of BuzzFeed noted:
Lots of these Amazon drone tweets remind me of this 1995 Newsweek essay on how the whole internet thing is hype: http://t.co/IpelTyr2r8
— McKay Coppins (@mckaycoppins) December 2, 2013
Clifford Stoll’s 1995 essay quickly circulated as the epitome of myopic grouching, with Scott Winship of the Manhattan Institute musing:
Seriously, this ’95 Newsweek column on internet-boosting hucksters may be least prescient thing ever written http://t.co/QJw94jIgau
— Scott Winship (@swinshi) December 2, 2013
Yet not all of Stoll’s criticisms are wholly wrongheaded, and pulling apart what he got (very) wrong from what still stands can teach us about technology’s ability to live up to the hype. Back in February 1995, Michael Jordan was a baseball player, the Dow Jones was hitting 4,000 for the very first time, and Pamela Anderson was joined in holy matrimony with Motley Crue drummer Tommy Lee. Stoll looked around and wrote:
Visionaries see a future of telecommuting workers, interactive libraries and multimedia classrooms. They speak of electronic town meetings and virtual communities. Commerce and business will shift from offices and malls to networks and modems. And the freedom of digital networks will make government more democratic.
The funny thing is, that’s exactly what many futurists of today are still heralding, nearly 20 years later. Given the pace of digital advancements and the rapid development of internet technology, you would hope they’d have a new future to sell us on, once the old one was thoroughly attained. Then Stoll got himself into his first sticky situation, saying “The truth in [sic] no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.” As Jeff Bezos himself can attest, the daily newspaper is not faring well compared to its online competition. Yet Stoll’s latter two points are sound. For all the MOOC hype, teaching really is a fundamentally human activity, born of interaction and guidance, response and customization. Information can be transmitted and tested online, but an education can only be obtained from a teacher. And as much as the digital age has changed parts of our politics, the tasks of governance and compromise have remained stubbornly resistant to solution by algorithm. If anything, the internet has given the government new things to fail at.
Driving from Washington, D.C., to Atlanta this Thanksgiving weekend, I had the opportunity to read Burkhard Bilger’s great New Yorker article on the development of self-driving cars. It’s a long, involved story melding technical accomplishments with personal storytelling, and it throws in a healthy dash of historical context. I was able to take the time to work through the full thing because I was in the back seat, freed from driving responsibilities by my absence from the rental car agreement my parents had signed in York, PA. From time to time I booted up my laptop and surfed the web over a Verizon wireless hotspot at full 4G LTE speeds. My sister used the same arrangement to watch movies streaming from Netflix, one more way to pass the tedium. We are just old enough (mid-twenties) to still be able to occasionally gasp at the seeming absurdity of streaming high-quality video and maintaining instantaneous communication with the wider world while hurtling down the highway at 70 miles an hour. The road trip entertainment of our childhood was limited strictly to the print and personal variety.
We now have ever more activities to occupy our time, and a worldwide connection that can follow us nearly anywhere we go. We don’t need to lose connection when we take off or land in a plane. Why shouldn’t the driver be able to get in on the fun?
From the consumer’s point of view, this is the great appeal of self-driving vehicles: liberation from the monotony of hurtling down empty expanses of highway, or inching along in the gridlock of the commute. Bilger cites an early advertisement for the long-prophesied self-driving car depicting a family turned toward each other, playing checkers as they ride. But as Bilger describes Google’s motivations in pouring its resources into developing this technology, the men of Mountain View have more on their minds than consumer convenience. Relief from tedium through automation was the promise of the last century, the pitch that sold a thousand washing machines.
Instead, Sergey Brin, one of Google’s co-founders, wants nothing more than to (wait for it) “fundamentally change the world with this.” He looks out on the expanse of America’s urban landscape and sees wide swaths of wasted land: cars are used for a couple of hours a day at most, then occupy prime real estate unproductively the rest of the day. His self-driving cars can become a fleet, providing personal car service to commuters far more efficiently than today’s taxis, yet more flexibly than metro, bus, or light-rail systems. As Brin said, “We’re not trying to fit into an existing business model … We are just on such a different planet.” At least so far, though, that different planet doesn’t free the driver from his responsibility behind the wheel. Attentive human beings are required to be at the ready in case the car becomes confused and needs to hand off responsibility. Even assuming, as we surely should, that Google makes enormous strides in ironing out what few errors remain, it already takes measurable seconds for a human in the driver’s seat to reorient to the situation after being distracted. Imagine if that person first had to be spun around from their checkers match with the kids.
Where I grew up, autumn is a season of first fruits. Work-hardened hands are connected to soft, generous hearts. Heritage is plowed into your heart-soil, tradition resonates in everyday rhythms, and praise is the crop that bursts forth from rich hard earth.
My farmer great-grandpa (we called him “Grandpa Dad”) would hold me on his knee, calloused hands cradling my four-year-old frame, and tell me stories. He painted pictures with soft, deep words: of silent movies and driving a four-horse team at age eight. Of his father, who traveled west in a covered wagon and homesteaded in wild, bare Idaho land. I can still see his handsome, wrinkled face; still feel him pull me into his strong, cologne-spiced hug; still hear the rich velvety tones of his voice—a voice that would always melt into chuckles of peace and praise.
Every fall, we sat around the rough wooden picnic table, shucking golden sweet corn: Grandpa Dad, Grandpa Wally, Daddy, my brothers. Grandpa Wally was a pepper-haired man with an infectious belly laugh, who waltzed with me as a baby and always told me, “Grace, you should go to a school out east. You should see the world.” He put on his overalls and work boots, and worked while the sun slumbered. To bed at 8 p.m., awake at 4 a.m. His sweet corn, fresh beef, and brown-speckled eggs filled our stomachs year round. Face brown and wrinkled from the sun, teeth glinting with gold, eyes glinting with humor, his bass voice made the floor tremble. He raised five children to the gospel truth, to hard work, to the golden laughter of peace and praise.
Our Thanksgiving table was always heavy-laden with turkey, potatoes, stuffing, biscuits, all the food our stomachs could hold (and more). We weren’t all farmers, but we shared our labors, prepared with soft and calloused hands alike. We found rest for our souls at that table, though sometimes that meant words were left unspoken—stuffed under the rug or left outside the door in chilly November air.
I never appreciated that time when living it. There was a casual, steady reliability in it. There was no reason to expect anything else. Grandma’s candles and china, her careful place settings—they never changed. Neither, I thought, would we. But people change and move with the seasons. When I look out on sunsets and leaves painted cinnamon, I think of home. When I see a field of tall, golden-crowned corn and count its glorious rows, I remember the harvest—always given to family.
I remember the warmth of Grandpa Dad’s red flannel shirt, his straw hat perched on snowy white hair, and his straight white teeth smiling joyously back at me. Though he passed on to glory at 96, I still see him in the harvest. His life trained ours—to work for God and for family, to give back the first fruits with praise.
I remember my Grandpa Wally’s words when I was about eleven years old: “Grace, when you grow up, you should write a story about me.” It was said jokingly. But the tan farmer with a twinkle in his eye, who waltzed and laughed and cried with me, taught me something invaluable about life: if you don’t share it with your family, there is no joy. He gave, and gave. He and his father put their shoulders to the plow, bore the fruit, and poured it forth with thanksgiving. Then they started over.
This age of impatience chokes the remembrance out of thanksgiving. And without remembrance, we grow ungrateful. We no longer have the strength, patience, or time to dig the hard furrows or tend the slow-growing fruit. We demand, and forget to serve. Thanksgiving becomes a time of “dealing with” relatives, a time of bearing the silent torment of kinship. Family alienation threads its way through the holidays, bearing thorns instead of fruit. How do we redeem the crop?
Sometimes it starts with one seed, or two. Sometimes it starts with one farmer, willing to fight the horrors of Depression and drought to bring forth a harvest. I peel back the memories like a cornhusk, and stare at the golden treasure beneath. Memories of the flat farmland, the vibrant saffron sunsets, making applesauce with my sister, mother, and grandmother: they draw tears and smiles of peace and praise.
This will be my first Thanksgiving away from home. But the thanksgiving will not change. Family, wherever it lies, brims over with offering, tears, and laughter. The praise comes as we give our first fruits, wherever we might be.
The Internet has changed the way we communicate. Most visibly, we see its effects in our vocabulary: the pronouncement of “selfie” as word of the year was perhaps the best indication of this. But our conversations have also changed in a deeper sense. According to some observations by Atlantic contributor Andrew Simmons, social media may help some teenagers release emotion:
On Facebook, even popular students post statuses in which they express insecurities. I see a dozen every time I log on. A kid frets that his longtime girlfriend is straying and wishes he hadn’t upset her. Another admits to being lonely (with weepy emoticons added for effect). Another asks friends to pray for his sick little sister. Another worries the girl he gave his number to isn’t interested because she hasn’t called in the 17 minutes that have passed since the fateful transaction. Another disparages his own intellect. “I’m so stupid, dad told me to drop out,” he writes. Another wonders why his parents are always angry, and why their anger is so often directed at him. “Brother coming home today,” another posts. “Gonna see how it goes.”
It seems social media may encourage less benign emotional expressions as well. Relevant Magazine posted an article last week lamenting the angry discussions that often boil over on Facebook. “You log into Facebook and it has happened once again,” author Brandon W. Peach writes. “Some broad political sentiment sparks a flame-war and everyone seems to want to weigh in with a jab, meme, ad hominem attack or (arguably worst of all) a wall of text that begs for you to ‘see more.’” Sometimes, truly insightful discussions can take place on Facebook. But too often, a posted article or controversial status opens up Pandora’s box, unleashing a wave of ridicule, offense, and disdain.
Does all this online emotion carry forward into real-time conversations? Forbes contributor Donna Sapolin doesn’t think so. In a Monday post, she shares a conversation she had with her son after a power blackout. He told her he was happy for the outage, because it led him and his friends to have a deep, meaningful conversation. She ponders:
It seems the younger generations are deeply hungry for meaningful face-to-face interactions but feel they have to devise a new approach in order to get beyond shallow chit-chat. This isn’t exactly surprising considering that the bulk of Gen X and Y communication takes place via texts, social media posts and email, and camaraderie takes the form of things watched or played together on screens. We’ve deemed these generations to be the most connected, but they may, in fact, be the most disconnected.
Facebook friends do not constitute true “community.” They are virtual presences, people we cannot see, hear, or touch. In discussing (or arguing about) sensitive and personal topics with other users, it is impossible to know the immediate impact of our words. We cannot see furrowed brows, bit lips, or clenched fists. Thus, online discussions become immensely dramatic, sarcastic, and inflammatory—far more so than typical face-to-face conversations.
If Sapolin is right, true face-to-face discourse is becoming more rare, even as our online presences devolve into emotion-spewing excess. How many high school students who pour out their souls online will have meaningful conversations with their grandmothers on Thanksgiving? How many people will Instagram pictures of turkey or post a “Happy Thanksgiving” status on Facebook, but never deeply converse with those they are breaking bread with?
To occupy our present space, with grace and candor, is perhaps one of the most difficult challenges of our technological age. Virtual reality’s isolated safety beckons appealingly to us. But if there’s one thing social media is teaching us, it is that there are better forms of communication—deeper, truer, sweeter—than it can offer.
Science fiction has always been one of the leading ways for our culture to process and project the changes that science and technology will introduce into our lives, going back to Francis Bacon’s New Atlantis, perhaps the first work of sci-fi from one of modern science’s founding fathers. Fox took up the Baconian baton this week when it launched its new show “Almost Human,” about a broken-down cop returning to the field alongside his administratively imposed android partner, likewise retrieved from the scrap heap.
In her initial recap, Christine Rosen, a senior editor at The New Atlantis (the magazine) and fellow at the New America Foundation, notes that “Fear and loathing of robots is a trope of long standing in science fiction,” but that “fear and loathing might be giving way to grudging acceptance.” Robots, after all, are creeping ever further into every facet of our everyday life, whether deploying in the place of citizen soldiers in the distribution of lethal force, or offering comfort to the lonely and aged in Japan’s great greying. Machines have replaced men for heavy labor in factories, and are being adapted to supplement the remaining human workforce elsewhere. Even cars, the symbol of American liberation from constricting locality, are seen to have an automated future in which it might very well be unethical or illegal for an unreliable and precious member of our species to wield the wheel unaided.
Rosen notes previous cop-machine pairings in popular culture, like KITT and Michael in Knight Rider:
KITT the computerized talking car was a reliable and sympathetic friend to Michael (played by David Hasselhoff). But although the Hoff was always thankful for KITT, he rarely ended an episode of Knight Rider without securing a new girlfriend. The ineffable pleasure that relationships with other human beings bring was something earlier shows took for granted.
Rosen sees “Almost Human” as suggesting, in line with techno-utopian tendencies currently lurking in some prominent corners of Silicon Valley, that robots can replace or substitute for human intimacy, emotional or otherwise. After all, Dorian, the outdated android partner, is programmed with empathetic circuitry that seems to exceed the wetware of his grizzled human companion, John Kennex. And the second episode centered on breakthroughs in “sex-bot” technology that replace significant chunks of the demand for human prostitutes by offering attractive and attentive androids for rent. To Rosen, “Almost Human” “is normalizing the notion that we can create technologies that can teach us how to be better human beings.”
From my first viewing of the two episodes that have aired so far, however, “Almost Human” may be susceptible to a more generous reading.
Ariel Levy’s story of miscarriage ran last week in the New Yorker and exploded across the country, receiving a resoundingly positive reaction from empathetic readers. When the Dish picked up her story, its readers also responded with an outpouring of comments describing the grief and pain of miscarriage. This bursting forth has opened a door, shedding new light on a previously unseen grief. Melissa Lafsky Wall explained the reaction Monday in her piece “Giving Voice to the Silent Sorrow”:
I never heard of the “silent sorrow” until a few months later. Learning that a phrase existed for women who’ve miscarried made me even sadder. Its presence means that there are untold armies of women marching grimly through life, carrying their silent sorrow like a wound patched up with duct tape, and no one even knows what they’re suffering. Pain will always accompany losing a pregnancy. But silence — that part is optional.
Earlier this year, I had the opportunity to interview a mother who lost three children to miscarriage. This is her story. My hope (and hers) is that it will keep the conversation going, and help other grieving mothers know that they are not alone, and that every lost child counts.
Mike and Katie sped along a country road in Fruitland, Idaho. The local hospital was on the other side of the Snake River in Ontario, Oregon—thankfully, only minutes away. It was only a few days after Katie’s 20th birthday, four months after their marriage in an old Idaho schoolhouse.
Katie prayed frantically. “I don’t care if this baby is handicapped, I don’t care if this baby doesn’t have an arm or leg,” she thought. “I just want this baby. I want this baby alive.”
She remembered how happy they had been when they found out she was pregnant, only five weeks after their August wedding. The following months were gloriously normal. Katie had terrible morning sickness, and puked on the side of the road during a Thanksgiving vacation.
But in December, she began bleeding. She thought to herself, “Maybe the doctors can stop it… maybe the baby can still live…”
But the baby was gone. Friends told them helpfully, “It’s just your first baby, you’ll have another one.” Others said, “You weren’t very far along, it’ll be better the next time.” Katie smarted under their words. Her arms hung limp and empty, aching for her child.
Katie had three normal, smooth pregnancies between 2002 and 2006: the three boys, Braden, Nathan, and Ian, were healthy and boisterous.
When Ian was eight months old, Katie found out she was pregnant with her fifth baby. At 19 weeks, the family went in for an ultrasound. Everything was fine. Katie had begun to feel her baby’s little kicks. She hoped it was a girl.
On September 9, 2013, a conference entitled “The Changing Role of Education in America: Consequences of the Common Core” was held at the University of Notre Dame. I was invited to deliver an introductory set of remarks on the first panel of that conference. I post those comments here in full.
(Following the conference, a first-rate letter opposing the adoption of the Common Core in Catholic schools, composed by ND Law Professor Gerard Bradley, was circulated widely to Catholic faculty. I was proud to sign this letter, which especially stresses the profound insufficiency of the narrowly utilitarian aims of the Common Core curriculum.)
The Purpose of Education in American Society
Remarks Delivered at the Common Core Conference
September 9, 2013
University of Notre Dame
For about eight years I have been teaching a freshman seminar entitled “The End of Education.” In the seminar we study about ten different authors, ranging from Plato and Aristotle to John Dewey and Allan Bloom, all with an eye to exploring the questions “What is education for?” and “What end does it seek to achieve?” The aim of the course is not necessarily to give my students the answer to those questions but to make them aware of the intense debate that has taken place over the history of Western Civilization over the ends and purpose of education. As I explain at the start of my first class, if you want to know the commitments of a civilization, look at what it aims to teach its young. If one of the main marks of a civilization is its effort to perpetuate itself over successive generations, then its deepest and ultimate cares will be reflected in its educational commitments.
So I must acknowledge that at first glance the question I’ve been asked to address for this session—“The Purpose of Education in America”—is exceedingly difficult, since there has been no national educational system in America, current efforts notwithstanding. This might be a sign that America, as a civilization, has no civilizational commitments to its young, that it is a peculiar nation for not long having had a strong national curriculum like that of England or Germany or Japan. Many look at the patchwork, state- and local-controlled variety of education in America and conclude that it is time to standardize and modernize, time to adopt an American set of educational commitments.
George Vanderbilt, a late 19th century heir to the family fortune and builder of the Biltmore Estate, reportedly read 3,159 books during his lifetime (approximately 80 books per year). He kept a list of the books he had read in a diary; his last book was Henry Adams’ third U.S. history volume.
Most of us wish we could amass the knowledge that represents. Books give us insights into the perceptions and perspectives of foreign minds. They widen our horizons, and foster our understanding of beauty. But few of us will surpass Vanderbilt’s reading achievements (unless we inherit large fortunes and thus become able to amass and devour the contents of a 10,000-book library). We lack the time available to Vanderbilt; he had neither work nor Twitter to distract him from his reading. Reading takes time—and in our technological, time-driven age, we’ve become ever more aware of how time-consuming reading can be. New Yorker contributor Rachel Arons wrote Monday of a recent proliferation of speed reading apps on the market:
As we’ve transitioned from print to screens, we’ve started clocking how long reading takes: Kindles track the “time left” in the books we’re reading; Web sites like Longreads and Medium include similar estimates with their articles (total reading time for “Anna Karenina”: eighteen hours and twenty-two minutes); in June, Alexis Ohanian, a co-founder of Reddit, published a book with a stamp on the cover advertising it as a “5 hour read.” … The fact is that little of what we read on the Web today is formatted in discrete pages, so it seems logical that, as reading online continues to supplant reading in print, hours and minutes will become increasingly useful units for measuring our progress.
I used to think that, if I tried really hard, perhaps I could read as many books as Vanderbilt. When I realized that this was probably an impossible goal, it felt something like a punch in the stomach: a moment of confronting my own finitude.
Because of that moment, I could empathize with a girl I recently overheard talking with friends at Capitol Hill Books. Browsing the overstuffed shelves, she mentioned that bookstores often scared her, because she realized she “would never be able to read them all.” It’s only a matter of time before we realize that our to-read booklists can easily surpass the bounds of reason. There are so many tantalizing stories lying outside our grasp, and never enough time to read them all.
Some respond to the infinitude stretching before them by deciding not to care: there is too much to ever possibly absorb, and they grow frightened and disheartened by the realization that they cannot have it all. Others become reading automatons, determined to absorb as much information as possible before they die. Speed reading apps, despite their usefulness, can turn reading into a personal competition, a race to be won. This often takes the joy out of reading and makes it a chore (though for some, competition may enhance the experience).
When we can, we should read for quality’s sake: savoring every book, re-reading the ones that enchant us most. Yet at the same time, not every essential read is worth savoring. Speed reading is useful for the accumulation of necessary knowledge. Slow reading is essential for the appreciation of written beauty. Perhaps our best reading choices lie at the junction of quality and quantity: we can speed read tedious or secondary works, then slowly absorb the masterpieces worth relishing.
Few of us will meet or surpass Vanderbilt’s incredible standard—even if the speed reading apps may help. But do we really want to read 3,159 books? The Preacher observes in Ecclesiastes, “Of making many books there is no end; and much study is a weariness of the flesh.” At some point, even the greatest bookworms must set down their books and live the life that enriches our readings with understanding. After all, that’s the lesson of the bookstore’s infinitude: our lives aren’t long enough to chase after the endless.