Wednesday, June 28, 2017

Getting Enough Sleep Isn't Enough

Dr. Mark Burhenne, DDS, The 8-Hour Sleep Paradox: How We Are Sleeping Our Way to Fatigue, Disease, & Unhappiness

Snoring isn’t cute. Despite adorable viral videos of snoring babies, puppies, and grannies, snoring is a serious health issue. By now, with the prevalence of CPAP machines and mandibular advancement devices, we’re probably all somewhat familiar with sleep apnea. But Mark Burhenne insists this is only one form of “sleep-disordered breathing,” a category of breathing illnesses that can have cascading effects on your health, happiness, and quality of life.

Burhenne is part of a developing class of dentists who specialize in sleep disorders. Many warning signs of sleep-disordered breathing, he writes, manifest in the mouth. These can include crowded teeth, a recessed jaw, damage from chronic teeth grinding, and more. However, the signs he identifies still need quantified by MD sleep specialists before treatment can begin or insurance will pay for it. That, he says, is where things become tricky.

The 8-Hour Sleep Paradox, Burhenne writes, is counting the hours you spend asleep and assuming, as Mother probably taught you, that eight hours equals a good night. But simply being unconscious isn’t the same as getting a good night’s sleep. Citing multiple sources, Burhenne suggests that anywhere from half to four-fifths of Americans aren’t getting enough deep, restorative sleep, and lack of air is the most common reason.

Worse, we have a tendency to minimize or dismiss real problems. I say “we,” meaning the general population, but Burhenne writes that this includes medical professionals too. Patients diagnosed with “mild” sleep apnea often get sent home with best wishes and little more, but even mild apnea means a person’s airways close. And that means the person comes partway awake to breathe, and therefore isn’t getting necessary Stage-4 and REM sleep.

Mark Burhenne, DDS
This book’s first few chapters explain the warning signs themselves. These include now-familiar signs of sleep problems, like obesity and chronic fatigue. They also include, but aren’t limited to, signs of poor sleep, like needing caffeine and frequent naps (guilty), or waking up with dry mouth or headaches, signs you’ve spent the night gasping for air. But if you’re browsing this book, you already know you need to change.

Burhenne avoids the most common shortcoming of self-help books, though: encouraging readers to diagnose themselves. Rather, after just two chapters on recognizing signs of sleep-disordered breathing, the largest portion of the book focuses on working with a sleep specialist to get an actual diagnosis that leads to treatment. This is difficult, Burhenne admits, because sleep studies are expensive, and insurance companies incentivize doctors to avoid costly tests.

Getting an appointment with a sleep specialist usually requires a recommendation from your GP. And if you aren’t middle-aged and overweight, Burhenne writes, many GPs won’t make that recommendation. So he provides tools to increase your doctor’s cooperation in the fifteen minutes you usually have, including the Epworth Sleepiness Scale and the STOP-BANG questionnaire. He also includes important questions to ask, and important information to provide.

After going through all that, patients have no guarantee their insurance will actually cover the procedures. Burhenne copiously expounds how to navigate the paperwork necessary to get treatments covered. This includes how to convince your insurance provider that you need some treatment other than CPAP, which is the most commonly used anti-apnea technology, but doesn’t work for everybody. Getting the right treatment takes effort, apparently, but it’s worth it.

Burhenne does provide life hacks that patients can apply individually. To give just two examples, this includes mouth taping, which is exactly what the name implies. If you have only a mildly obstructed airway, closing your mouth overnight with surgical tape can ensure you breathe through your nose for maximum efficiency. It also includes certain non-steroidal nasal sprays to keep airways open, again, for nose breathing, like your body prefers.

Nevertheless, Burhenne doesn’t mainly emphasize these internet-friendly hacks. He primarily keeps focus on medical science, including the most recent discoveries (as of when this book was written), and the information you need to get best results from your physician. A certain distrust for the medical establishment lingers beneath Burhenne’s prose. Though admittedly, this makes sense, considering the ideas he describes are still controversial in certain circles.

Medical pundits tell us to watch our weight, manage our stress, and get our eight hours nightly. But we can’t do any of that if we’re tired from lack of deep, restorative sleep. It’s surprising to read this advice from a dentist, and I admit, I needed some convincing, but Burhenne certainly provides that. If, like me, you need not just more but better sleep, start here.

Monday, June 26, 2017

Who Am I When I Am You?

Makoto Shinkai, your name.

Mitsuha Miyamizu attends high-school in Japan’s rural uplands. After school, she’s an acolyte in her grandmother’s Shinto shrine, but she dreams of sleek, modern Tokyo lifestyles. Taki Tachibana attends high school in Tokyo, has a part-time restaurant job, and, though a talented artist, has no leisure time to practice. One day, they spontaneously wake up in one another’s bodies. This begins an adventure veering from madcap to remarkably poignant.

Japanese film critics have hailed Makoto Shinkai as the true successor to legendary anime director Hayao Miyazaki. Audience reactions to his latest movie, your name., certainly validate this claim. It’s broken Japanese box-office records and built momentum that has carried over to American markets. According to his afterword, this novel isn’t an adaptation; rather, he created book and movie together, and they represent two halves of the same whole.

After overcoming denial, Mitsuha and Taki begin settling into one another’s lives. Taki’s adolescent male anger helps turn Mitsuha assertive, bold, and popular. Mitsuha unlocks Taki’s buried feminine side, helping him snag a date with a pretty older co-worker. Without meaning to, the mismatched pair gradually coaxes one another to become the people they’re meant to be. Despite their inability to communicate directly, they sense a growing spark.

If I hadn’t read highbrow reviews of Shinkai’s anime, I would’ve expected this premise to yield low comedy. Japanese gender-bending entertainments, like Rumiko Takahashi’s Ranma ½, generally use gender flips as opportunities for mild bawdry. And in fairness, Shinkai does have Taki occasionally feel up his newly acquired female form. But these interludes stay brief; Shinkai is more interested in themes of identity, social role, and how others shape us.

Makoto Shinkai
Taki has no particular spiritual inclinations, until Mitsuha’s reverent grandmother begins teaching him how Shinto philosophy ties reality together. He shows developing interest in transcendent topics, and begins slowing his lifestyle to commune with the universe. Meanwhile, Mitsuha, accustomed to relaxing into society’s tides and simply going along, learns to assert herself, beginning to consider her own dreams worthwhile. They both grow.

Then, suddenly as they began, the body-flips stop.

The book’s second half veers into more esoteric territory. Desperate to reconnect with Mitsuha, Taki begins researching everything he remembers about her, but facts slip away like half-remembered dreams. He uncovers secrets he never expected, most of them quite dark. Chief among them is Mitsuha’s connection to a disaster so abrupt and unexpected that it nearly brought Japan to a screeching halt several years ago, and that could happen again.

Here’s where I, the reviewer, risk becoming excessively pointy-headed. I cannot say what Shinkai intended in creating this story, but his narrative reflects themes found in the work of several important critics: Mircea Eliade’s theory of the Eternal Return; Joseph Campbell’s myth of the circular journey; Umberto Eco’s hypothesis that priesthoods of story keep reality alive after exact facts fade from memory. Shinkai’s themes suddenly become rich with deeper possibility.

Audiences unaccustomed to anime conventions may find Shinkai’s sudden thematic shifts confusing. Japanese pop storytelling doesn’t demand narrative through-lines like Western literature does; English-speaking audiences may feel like this story’s two halves belong in completely different books. Even I, a sometime anime connoisseur, find the transition jarring. Reading along, I felt Shinkai hadn’t finished the first half’s themes before the second half went a completely different direction.

On consideration, this raises a second critique: readers expecting something deep and literary, like Haruki Murakami or Kenzaburō Ōe, may consider this book under-written. Shinkai introduces momentous themes, but doesn’t investigate them. Remember, this is the companion volume to an animated feature film, not a novel in its own right. You must read this book within its own genre, even when it ventures into deeper territory.

So, readers must understand this novel within these stipulations. Considered that way, I find it remarkably sophisticated, a pop entertainment that exceeds its genre stereotypes. Shinkai introduces his characters, not by describing them, but by dropping them into deep water and showing us how they survive. And if he doesn’t resolve every theme he raises, he at least does them justice, keeping us thinking after we close the book.

Dedicated audiences and veteran anime fans can finish this compact book, under 175 pages, in one caffeine-fueled Saturday. But, as with many other similarly brief novels, you’ll wish it were longer. The final page is as unexpected as it is poignant. Like waking from an intense dream, one of Shinkai’s themes, you’ll remember a strikingly realized world that’s now gone, and you’ll wish you could go back.


Friday, June 23, 2017

More Human Than You

1001 Books To Read Before Your Kindle Battery Dies, Part 83
David Livingstone Smith, Less Than Human: Why We Demean, Enslave, and Exterminate Others


Nazis characterized Jews as rats, while Rwandan propagandists called Tutsis cockroaches. Turning humans into household vermin justified killing them; colonists, by contrast, characterized Africans and Native Americans as cattle, which justified enslaving them. Both were instruments of control. To take away power from other people, we must first take away their human essence. But how do we do that, and why, and how do we live with ourselves afterward?

Philosopher David Livingstone Smith, in seeking sources to answer these questions, found that little has been written about dehumanization. The term gets discussed widely, especially in contexts of racism, sexism, and wartime propaganda. But little scholarly research has really addressed the social and psychological processes that let us perceive humans as “lower” life forms. That seemed an oversight in today’s brutally sectarian times, one Smith decided to rectify.

We must begin any consideration of dehumanization with the question: what makes us human? This seems an obvious question, one easily answerable by science, but this is an illusion. Excessively specific definitions of humanity risk excluding groups, from racial categories to the disabled. Broader definitions risk including chimpanzees. Were australopithecines and Neanderthals human? Contemplate the question, and humanity becomes a philosophical rather than a scientific category.

It helps to understand the concept of essentialism here. Smith provides lucid explanations, which he refines throughout this volume, but the concept looms so large, it deserves some definition up front. Humans differ: in skin color, height, language, disability. Yet across these superficial divides, we generally agree, some fundamental essence preserves our core humanity. The argument then becomes: what essence truly defines humanity? And does everyone classed as “human” actually have human essence?

David Livingstone Smith
Humans, it appears, are master creators of categories and groups. While chimpanzees comprehend “Us” and “Them,” and sometimes brutally slaughter Them, only humans create narrow, intricate in-groups. Only humans create shifting alliances between such groups. Only humans institute rituals designed to reinforce such groups… and only humans show conscience enough to recognize when our group-creating inclinations harm the insiders we intended to help.

This capacity for unbounded cruelty, coupled with our unique ability to reflect on our own thinking—what Smith calls second-order thought—puts humans in the singular position of being both nature’s most destructive species and its most creative. The two tendencies often travel together. The tendency to redefine humans into livestock, vermin, or monsters which need defeated has often produced humanity’s most creative thinking, to our eternal discredit.

Understanding our capability for dehumanization requires delving into humanity’s most shameful history. Smith unpacks various genocides, like Rwanda, the Holocaust, the Turkish slaughter of Armenians, and Darfur. He also looks into European colonial history, where peoples once regarded as equals and allies, like Africans and Native Americans, became subhuman enemies almost overnight. The patterns Smith uncovers are chilling and informative. But as you’d expect, they make for difficult reading.

This dovetails into humanity’s tendency to create races. Social scientists and philosophers have written on how races, far from being consistent or biologically mandated, are created and constantly reinvented by human societies. I was particularly struck, in Smith’s analysis, by how early children divide humans into groups, and how little those groups resemble the racial categories adults encourage others to fear. Racism both does and doesn’t need to be taught.

As the argument progresses, solutions become murkier. You cannot preach transcendent human essentialism to people who believe designated groups lack human essence. And even when stereotypes of designated groups prove unreliable, bigotry remains remarkably intractable, immune to evidence. Smith doesn’t insult readers’ intelligence with false hopes or pat solutions. He makes us live with our indictment, because nobody is immune from the capacity to push others outside humanity.

This isn’t a scientific text. Smith doesn’t rely on sciences like brain imaging and behavioral economics, currently voguish in mass-market nonfiction. Not only are such sciences less reliable than often peddled, but science lacks the vocabulary to describe the complex, amorphous interactions involved in this process. Humanity, and dehumanization, aren’t scientific facts to be analyzed, like amoebae. They’re philosophical concepts, changed by the fact of being examined.

Smith doesn’t pretend he has the last word. In his final chapter, he lays out questions that still need examined. This book represents an intermediate step in comprehending the ways human beings steal other humans’ essence. But as an intermediate step, if considered with the sobriety the topic demands, it offers us an opportunity to move forward. We could reclaim our humanity by restoring it to others.

Wednesday, June 21, 2017

Time For a New Economic Yardstick

Lorenzo Fioramonti, The World After GDP: Economics, Politics and International Relations in the Post-Growth Era

The Gross Domestic Product has proven a mediocre economic measurement at best. It totals the market value of the goods and services an economy produces for sale, but doesn’t weigh costs and benefits separately; car wrecks and traffic congestion have cash value, but lovingly restoring Grandma’s classic Fairlane doesn’t, unless we sell it. I grew disgusted with GDP fifteen years ago, when national leaders presented shopping as the solution to the 9/11 attacks.
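For reference (my gloss of the standard textbook accounting, not anything specific to Fioramonti), the expenditure approach behind the headline number just adds up spending:

\[ \text{GDP} = C + I + G + (X - M) \]

where C is household consumption, I is investment, G is government spending, and X - M is net exports. Every term counts money changing hands, which is exactly the point: the unsold Fairlane restoration, or anything else produced and enjoyed without a sale, never enters the ledger.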

Lorenzo Fioramonti, a political economist based in South Africa, begins this dissertation with a brief history of Nauru, an island nation once touted as miraculous for its powerful economy. After independence, it parlayed massive mineral reserves into Earth’s largest per-capita GDP. But that wealth wasn’t distributed equally, and GDP didn’t include non-priced factors, like environmental decay. When the minerals were exhausted, the miracle proved illusory. Nauru is now poor, physically blighted, and without hope.

Fioramonti sees a parable of modernity here. Economic measurements aren’t value-neutral; what economists count inevitably becomes what leaders and entrepreneurs pursue. Market price and social value often bear little relation to each other, meaning things we struggle to measure in dollars, like the environment or human communities, get forgotten… until catastrophe strikes. But it doesn’t have to be this way. Fioramonti progresses from grim history to optimistic forecasting.

GDP arose during World War II, for specifically wartime purposes: to quantify America’s ability to manufacture military supplies. Quoting several other economists, Fioramonti compares GDP to the Manhattan Project, a wartime planning tool that somehow persisted into peacetime and remains impervious to changing conditions. (It even triumphed in the Soviet Union, eventually, because the preferred Leninist measurement failed to account for the service industry.)

Lorenzo Fioramonti
But even GDP’s chief inventor turned against his creation. Contemporary critics deride GDP for its inability to incorporate environmental costs: dirty air and flammable rivers have no price, and therefore no economic weight. But GDP pioneer Simon Kuznets realized his invention didn’t encompass human costs. Worn-out workers, sundered families, and communities severed from their roots have consequences, but no price, so they don’t get figured into the GDP.

And this only includes what happens visibly. Early in this book, Fioramonti uses a familiar, but still impactful analogy. He writes that “food cooked at a restaurant and purchased by consumers is registered as part of a nation’s economy, but the same food cooked at home and shared with family and guests is not.” We could continue: grocery shopping counts, gardening doesn’t; replacing old socks counts, darning them doesn’t.

This leads directly into Fioramonti’s most important precept for creating an alternate economic measure: “one important step in shifting attention is to make the invisible visible. This is what ‘post-GDP’ scholars and activists are trying to achieve.” This proves more an ideal than a system. Though Fioramonti lists several alternate economic yardsticks devised since around 1975, none encapsulates every possible contingency. We need complementary measures, Fioramonti writes, not a one-size-fits-all replacement.

Among other topics, Fioramonti spends considerable time on what officials euphemistically call the “informal economy.” This sometimes means off-the-books accounting, like the Mafia, but it also includes everything productive we do that doesn’t generate money. Volunteer work, time spent with family, and home-cooked dinners all create value, but in ways that carry no price, so the GDP cannot track them. Does mom’s home cooking have no economic value?

“The GDP-induced categorization of work,” Fioramonti writes, “also hides the fact that only a fraction of people’s time is spent on formal jobs.” But other systems of measurement can include these pastimes. If the economic devaluation of environmental destruction doesn’t convince you the GDP measures the economy badly, then maybe you’ll be convinced when other measurements place value on your hobbies, community, or family. The GDP considers these wasted time.

I repeat, because Fioramonti does, that economic yardsticks are never value-neutral, despite what ardent capitalists claim. The GDP rewards whatever costs money, hides whatever “externalities” get buried off the books, and encourages reckless, interest-bearing debt. Fioramonti does a remarkable job detailing this history. Committed followers of events, like me, may have some prior familiarity with Fioramonti’s descriptions of what already exists, though he collates diverse sources in new, enlightening ways.

Then, when we’re convinced the status quo cannot continue, Fioramonti provides the alternatives. These aren’t just alternate accounting systems. They’re innovative value measurements, means of rewarding productive behaviors beyond slapping price tags on everything. Our world is changing, bringing the marketplace with it. If we don’t change our economic paradigms appropriately, history will surely leave us holding the bag for costs we’re not yet prepared to pay.

Monday, June 19, 2017

The Economics of Addiction


My father has emphysema. Who wouldn’t, after fifty-six years of smoking? Though he hasn’t been formally diagnosed, his pained breathing and persistent fatigue have finally forced him to use the word “emphysema” for the first time, at age 72. He once told me he began smoking at age fourteen, though he never mentioned why; after several false starts, he finally kicked the habit about eighteen months ago.

When my family relocated from California to Nebraska in 1992, so my parents could retire near where they grew up, I immediately noticed how many people smoked. My first job, behind the convenience store counter, involved accessing the tobacco rack for customers, a position for which I now suspect I was technically underage. While it wouldn’t be accurate to say most purchases included packs of smokes, enough did to worry me.

At that time, still under the sway of neoliberal political thinking, I would never have attributed economic reasoning to personal habits. I just wondered why smoking seemed so pervasive in Nebraska culture compared to California. Though I knew smokers Out West, they mostly stayed out of sight, pursuing their habits less blatantly. My workplace never completely stopped at smoke-break time, as it did in Nebraska, where smokers herded, lemming-like, toward the doors.

But thinking about my father’s struggling health, I realized I did see something similar in California. Though people smoking weed and consuming other drugs needed to maintain more cover than smokers do, the same basic behavior obtained. People embraced work, school, and other mandatory responsibilities as clear-headed as circumstances allowed, then when duty ended, they raced headlong to whatever substance made them feel human again. Legal or otherwise.

Addiction specialist Gabor Maté writes that understanding substance addicts in terms of recreational users is mistaken. Some people smoke weed, inject heroin, snort cocaine, and consume other drugs because their substances make them feel good. Addicts don’t want to feel good, however; they want to feel normal. They want whatever suffering infects their sober lives to vanish under the comforting glow of their favored substance, even for an hour.

Cocaine and heroin are painkillers. Before they became illegal, snake-oil salesmen included these drugs in their patent medicines because, no matter what else their concoctions included, Peruvian marching powder made the pain go away. So when considering what turns people into coke or smack addicts, or what hooks people on other painkillers like alcohol or Vicodin, we must look not at the drugs, but at whatever pain needs killed.


Nicotine and cannabis, however, aren’t painkillers. Like Valium, another widely abused substance, they’re anti-anxiety drugs. When jitters paralyze you, having a smoke, a toke, or a tab of V really drains the tension. So if painkillers require us to find the user’s unexamined pain, logic dictates, anti-anxiety drugs require us to find the unexamined anxiety. Why would a 14-year-old from bucolic western Nebraska have anxieties that need smoked out?

Rural life is frequently precarious. The principal economic driver, farming, is constantly subject to weather, market fluctuations, and other forces individuals cannot control. Dips in commodity prices take money from farmers, but also from businesses dependent on farmers, like equipment dealers, small-town banks, and entire rural downtowns. Despite tough-talking individualist myths, rural and small-town people, the western Nebraska population among them, live constantly at the edge of a sheer cliff.

Compare big-city life. Even after the collapse of 2008, the financial services sector remains America’s largest industry, in dollar terms. People wager massive fortunes on a 24-hour cycle. As we learned during the last economic contraction, financial services operates like a casino, plying big winners with rewards to keep them at the table. In Vegas, the rewards include comped drinks. One icon of bankers’ lifestyles is the three-martini lunch.

So while small-town people live constantly with the anxiety of hoping they’ll make next month’s payments, big-city moguls swallow the risks of gambling away Grandmother’s retirement savings. People raised in rural life, like my dad, or in California’s suburban uncertainty, smoke the anxiety away. Bernie Madoff-type gamblers, meanwhile, kill the pain of knowing they’re rewarded while they’re winning but could lose everything at any moment.

Cocaine and heroin have little presence in western Nebraska, where I live, but at my construction job, I’m among the few men who don’t use tobacco. This isn’t coincidental. People’s favored drugs reflect their circumstances, and their circumstances have dollar measurements. Though hard drugs remain the province of cities, where difficulty and pain define daily life, rural workers will always prefer to smoke their fears away.

Wednesday, June 14, 2017

Tommy Gunn's School Daze

Laurie R. King, Lockdown: a Novel of Suspense

Career day at Guadalupe Middle School will make or break Principal Linda McDonald. After turning a failing school around, Linda has managed to corral enough community members to remind her students they have a future. For one day, they’ll forget their personal dramas, the pains festering at home, and the two criminal investigations lingering at the margins, and celebrate their potentials. Too bad somebody’s coming to school with a gun.

The dust-flap synopsis on Laurie R. King’s newest standalone novel is slightly misleading. Though the story promises a violent schoolyard confrontation, the anticipated explosion doesn’t actually arrive for over 300 pages. Rather, King places the emphasis on the buildup, the suspense as a school of over seven hundred students simmers. We know something’s coming. We’re left to wonder not what, but who, and why. Because King offers multiple suspects.

Principal McDonald has shepherded her school through several powerful conflicts in her first year. A well-liked, but possibly abused, student has disappeared, leaving behind a best friend pitching conspiracy theories pinched from Doctor Who. A high-school gangland murder drags the middle school in because the only witness was one of McDonald’s students. And those are just the problems McDonald can see. She has multiple cauldrons waiting to boil over.

There’s the kid harboring a nasty grudge and carrying something in his backpack so powerful, he can’t bring himself to think about it directly. The beautiful but damaged teen desperate to escape the stultifying strictures her political refugee parents place upon her. The principal’s husband, always fleeing his personal demons. The wannabe gang-banger desperate to prove his chops. And the janitor, known only as Tío, carrying a bloody secret.

Laurie R. King
Guadalupe MS, in the (fictional) agricultural community of San Felipe, California, has dozens of conflicting forces pushing on its students. They come from a mix of economic, racial, and cultural backgrounds: poor Hispanic migrant workers and software developers send their children to the same school, along with fugitives fleeing poorly defined threats and working families hoping to shield their children from gangs. Add heat, and watch the chaotic combination boil.

Our story unfolds, minute by minute. King offers us glimpses into different viewpoint characters’ heads, so we see the same events from multiple contexts. What one character considers a flippant comment, another perceives as an insufferable slight. Principal McDonald has at least two opportunities to deflect the looming violence, but misses them because she can’t read students’ minds. Characters live in their own brains, never knowing how narrowly they miss.

This, King implies, is the theme of life in public society: everyone thinks their personal dramas are unique. Especially in middle school, with the simmering pressures of looming adulthood, every character sees their own conflicts, and doesn’t realize others have the same. King only addresses this indirectly, as when her wannabe gangster thinks the beautiful girls have life easy. But it’s constantly present: everyone has problems nobody else sees.

The front cover calls this “A Novel of Suspense,” which isn’t inaccurate: we know something catastrophic will happen, changing characters’ lives forever. But this isn’t like bog-standard procedurals or action potboilers. King offers an overlapping matrix of character dramas, inviting us to tease out secrets and layers. The suspense comes from wondering which of the many private controversies will eventually spill over into public violence.

King is most famous for writing novels starring Mary Russell and an obscure supporting character named Sherlock Holmes. The publisher bills this as King’s first standalone novel in over a decade. But even that isn’t entirely accurate, since there’s a brief chapter linking this novel to King’s lesser-known series protagonist, SFPD detective Kate Martinelli. It’s a fun teaser, but one needn’t know King’s prior works to appreciate this story.

If this novel suffers one shortcoming, it’s that King introduces so many characters, with their own private plotlines, that she can’t possibly defuse them all. We know, with the bloody climax coming, that King can’t resolve both the miscommunication among the clique of insecure pretty girls and Tío’s attempt to save the gangbanger from his myths. King starts multiple interesting stories which remain unfinished. Maybe she’s saving something for the sequel.

Nevertheless, she does a remarkable job displaying not only the causes of life-changing violence, but the lives that will be changed. Middle school is a crucible, even when literal blood doesn’t spill, a dark and brooding place where everyone thinks they suffer alone. And, as we read, we realize: maybe we aren’t so different from these kids ourselves. Which is actually a liberating thought.

Monday, June 12, 2017

Götterdämmerung, the Reader's Digest Version

Daniel Kehlmann, You Should Have Left: a Novel

A successful screenwriter rents an AirBnB in the mountains to write. His studio wants a sequel, and his family needs the money. Secluded in a vacation cabin with his glamorous, bored actress wife and their daughter, he finds the words beginning to flow. Until the bad dreams begin. And right angles don’t add up to ninety degrees. And his little girl wakes up speaking prophecies of doom.

Veteran novelist Daniel Kehlmann is a household name in Germany, but remains largely unknown to English-speaking readers. This novella might change that. Mixing elements of Shirley Jackson, Stephen King, and Elizabeth Hand, Kehlmann creates the kind of creeping dread that American paperback readers love, channeled through the kind of linguistic mindset that gave us Thomas Mann and Günter Grass.

The story unfolds as our nameless narrator’s journal. In it, he jots notes for his screenplay, a John Hughes-like coming-of-age comedy in which two “Besties” adjust to an adult friendship. That screenplay-in-progress provides ironic commentary on the events building around him, as his vacation home apparently grows new bedrooms overnight, reveals a massive mountain nobody else can see, and insinuates itself into his marriage. Symbolism abounds.

The story immediately invites comparison to King’s The Shining and Jackson's The Haunting of Hill House. Kehlmann doesn't even pretend to deny such allusions, though he doesn't acknowledge them either. But such comparisons, while accurate, are nevertheless incomplete. Kehlmann doesn't so much present a horror novella as a novella of what Freud called “the Uncanny,” the subconscious made manifest in the protagonist's senses.

The house seemingly accentuates its inhabitants’ identities. When the narrator and his wife fight, they fight like feral cats cornered in the same Dumpster. When they agree, they mesh like two hemispheres of the same brain. And when their daughter begins speaking grim, powerful words beyond her ken, they realize, almost without words, that the problem isn't them, it’s the house.

Daniel Kehlmann
Much more happens, of course. But not, I fear, enough. I like this book, and don’t want anybody to ever say I said otherwise, but man, this book is short. The story itself runs under 110 pages, which makes its $18 list price awfully steep. I read the whole thing inside three hours. And though Kehlmann is easily King’s or Jackson’s equal in scene-setting, his conclusion is abrupt, without resolution, leaving only questions.

Horror, after all, is a genre of balance. Writers must reveal enough to keep readers engaged, but withhold enough to maintain suspense. No wonder I mainly write poetry.

So I vacillate on how to review this book. For most of the reading experience, I wanted to lavish praise upon it: though he reuses tropes horror readers have seen before, Kehlmann employs them in ways that create tension and make us care about his characters. Then he just stops. His characters, complex and deeply humane, deserve more explanation at the denouement than he offers. It’s like he just got bored.

Perhaps I’m being overly critical. Kehlmann clearly comes from a literary, rather than genre, background. His emphasis rests on creating nuanced characters and telling details, rather than dawning fear. In that case, rather than Stephen King, his work more closely resembles Brian Evenson or Thomas Ligotti. As with those writers, the why of the situation matters less than its imminence for the characters.

Nevertheless, Kehlmann’s audience reads works like this every day. Veteran readers recognize both the similarities to, and the differences from, King’s Jack Torrance or Liz Hand’s Julian Blake. We deserve some explanation of how our nameless narrator finds himself in this situation. Without that, the final four pages appear weirdly disconnected from what came before, as if Kehlmann were writing toward a predetermined end rather than one arising from the situation.

Translator Ross Benjamin has a long history of translating German-language writers into American English. His CV reads like a Who's Who of contemporary German writers, though most will remain unfamiliar to English-speaking audiences. The one readily familiar name also clarifies his qualifications to translate this novella: Benjamin has won awards for translating Franz Kafka.

On balance, I suppose I should recommend this book. Some of my favorite authors, like Joseph Conrad and Salman Rushdie, have difficulty writing resolutions worthy of the novels they’ve crafted. And King himself extols the short form because it exonerates authors from the imperative to explain. My disappointment arises not because this book is weak, but because it's so strong that it sets itself a high bar. This is a good novella. I just wish it was great.

Thursday, June 8, 2017

Welcome to Fatland

You’ve probably seen something like this image recently; several versions circulate. Don’t write another article on obesity in America until you explain why fatty, unhealthful foods cost less than their healthy, nutritionally complete equivalents. And I’ve seen several answers back, like: well, if you only eat McDonald’s, then yeah; or, anything is cheaper deep-fat-fried than prepared in a healthy way. I’d like to offer just one possible explanation.

You’ve probably heard lots about America’s notoriously subsidized agriculture. Because of massive monetary transfusions used to keep farmers working and food affordable, American crops are often cheaper than the dirt they grow in. That’s especially true with today’s inflated land values. When NAFTA lowered trade barriers, subsidized American-grown food hit Mexican markets below the cost of growing it, causing rural poverty in Mexico to hit seventy percent.

But those subsidies don’t go just anywhere. Since the Great Depression, America has subsidized just five staple crops: corn, wheat, rice, sorghum, and cotton. These staples all have long shelf lives, which makes their market value very volatile: oversupply can last a long, long time. If farmers overplant lettuce, it’ll rot within a matter of weeks. If farmers overplant corn—and who knows what’s too much at planting time?—markets could be destabilized for a year.

So, America has decided we owe our planters of cereal grains and natural fibers the dignity of a stable income. After all, an unstable grain market owing to oversupply jeopardizes farmers, but we still need to eat. Grains provide dietary fiber we all need, and unlike fruit or salad greens, we can ship corn to wherever it’s needed. Why not, therefore, dedicate public money to ensuring the people who grow our corn aren’t rolling the dice on uncertain markets?

Except that hasn’t been the effect. By subsidizing only a few crops, we’ve created cash incentives for farmers to overproduce these grains in massive quantities. Cotton is so cheap now that we use it to make disposable shop rags. According to agricultural journalist George Pyle, American farmers currently produce twenty times as much corn as American consumers can possibly eat. All that oversupply has to go somewhere.

And that “somewhere,” overwhelmingly, is animal feed. American agricultural policy doesn’t directly subsidize livestock agriculture. However, we have Earth’s cheapest meat, because by encouraging oversupply, we indirectly subsidize cattle farming. Cattle raised on grass, like God intended, reach market weight in about two years. Cattle raised on corn, fed to them in confined feedlots, reach market weight in about fourteen months. It’s a cash boon for livestock farmers.

A typical confined animal feeding operation—in this case, a hog pen

Stay with me here. The wheat used in making buns is directly subsidized. The beef slapped between those buns is indirectly subsidized. Even the cheese used to make the burger taste less like dead flesh is subsidized, because dairy oversupply keeps threatening to crash market values; the government buys excess dairy and pours it on the ground to stabilize prices. Does the government want us to eat more burgers?

Of course not. They just don’t want farmers subject to the instabilities of market fluctuations. Readers old enough to remember the “tractorcades” of the 1980s know that farmers are more beholden to market forces than most other producers. As we learned in 2008, housing oversupply is bad for home builders; but builders can store their tools, pull in their claws, and wait. Farmers, to keep their families together, often have to sell their land.

This says nothing about the side effects of agricultural policy. Subsidizing only five crops has led to massive monocropping, which drains the soil of certain nutrients. To keep the land producing crops, farmers saturate it with fertilizers derived from hydrocarbons. American farms today produce more greenhouse gases than cars do, not from inefficiency, but because farmers need the five magic crops to show a profit. And nutrient-depleted topsoil washes away whenever it rains.

That seems simple enough. The makings of a burger are directly or indirectly subsidized, while the makings of a salad are not. If the ways we spend our money reflect our cultural values, then apparently we place higher value on maintaining certain food crops than on encouraging Americans to eat well. That sounds moralistic, but the approach isn’t irrational: maintaining the status quo is cost-efficient, while changing the system, even a system that causes bad health, is scary.

Designing an agricultural policy that would result in more diverse crops, better land management, and healthier foods at more modest prices will challenge even seasoned legislators. Even in today’s environment of armchair quarterbacking, I don’t dare attempt that myself. But somebody must. Because the meme isn’t wrong: we won’t tackle American obesity until ordinary Americans can afford better-quality food.

Wednesday, June 7, 2017

Did Chain Restaurants Just Declare Capitalism Dead?


The language couldn’t be more moralistic: “Millennials are killing chains like Buffalo Wild Wings and Applebee’s,” screamed the headline. As if the murder image weren’t clear enough, the tagline continues: “Casual dining is in danger — and millennials are to blame.” I wish I was kidding; had my students written this as fiction, I’d have returned it marked too high-handed to be plausible. But this screen-capture proves my point:



This unsubtle attempt to shift blame for flagging sales off chains that haven’t handled changing times, onto the young customer base they’ve long taken for granted, is stacked with assumptions. By saying customers are “killing chains,” the article makes customers into murderers, and chains into victims. By unambiguously assigning “blame,” it makes the demand side of economics responsible for chains’ fortunes, rather than chains themselves.

But most important, it implies that Millennials, possibly the most poorly defined generational cohort since popular media began naming generations, have the same choices their parents had about spending money. This is, of course, ridiculous to anybody who follows economics. Starting wages are down, housing costs—especially in major cities—are way up, and entry-level jobs frequently require graduate degrees just for consideration. Youth have no remaining money for hot wings out.

Though writer Kate Taylor acknowledges, in her article, that blaming Millennials has become “a trend to the point of cliché,” she ultimately maintains the pattern, squarely hanging responsibility for chains’ fortunes on young, supposedly childless customers. Throughout her diatribe, Taylor implies customers have a moral obligation to create demand for poor, beleaguered chains. Which spits in the eye of that beloved libertarian fetish, the supply-demand curve.

Think back to college economics. You took college economics, right? If statistics hold, you probably didn’t; your last mandatory exposure to economic theory probably happened in high school civics class, sandwiched between a unit on the Constitution and one on the War Powers Resolution of 1973. Therefore, you probably have a glimmering of the supply-demand curve, but nothing concrete. You vaguely remember that when supply equals demand, we know what something is worth.
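For a quick refresher (my own toy numbers, nothing from Taylor’s article), take a pair of hypothetical linear curves and solve for the price at which quantity demanded equals quantity supplied:

\[ Q_d = 100 - 2P, \qquad Q_s = 4P - 20, \qquad Q_d = Q_s \;\Rightarrow\; 100 - 2P = 4P - 20 \;\Rightarrow\; P^* = 20, \; Q^* = 60. \]

The point of the exercise: price adjusts until buyers and sellers agree, and neither side gets to decree the other side’s curve. Keep that in mind for what follows.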

But Taylor’s article says that demand should rise to meet supply. So-called casual dining chains have grown since around 2000 by oversaturating markets, mostly suburban malls, and maintaining the just-in-time resupply model pioneered by big-box retailers like Walmart. This flood-marketing model has, for years, served to create demand among young families with reliable income and limited expenses. McDonald's and Pizza Hut used the same model in prior generations.

That model demands steady, basically white-collar economic growth. The failure of that model dominated the 2016 presidential campaign: Hillary Clinton stumped on its old suppositions, while Donald Trump channeled outrage at the disappearance of jobs in manufacturing and extraction. The jobs that haven’t been automated have largely been shipped overseas. Domestic employment has bifurcated into executive leadership and the service industry. Working Americans now sell each other, well, Buffalo Wild Wings.

Chains, facing slumps because youth would rather eat at home, or cannot afford to eat out, cry foul. Because they think demand should follow supply, not vice versa. Economists call this “induced demand.” I first encountered the concept in models of urban parking: more parking lots cause more driving, largely because putting stores and homes further apart makes walking an unsustainable cost. Gasoline is cheap; walking becomes tedious.


But food doesn’t follow that build-it-and-they-will-come logic. As better grocery stores increasingly offer inexpensive delivery service, and meal-kit marketers like Blue Apron make gourmet home cooking available to pure amateurs, simply building new stores behind massive parking lots doesn’t induce demand anymore. In short, market forces no longer make dining out a desirable choice; it’s no longer cheap or convenient, especially relative to young people’s wages.

Libertarian capitalism declares that market forces are sacrosanct. Whatever people willingly pay for must perforce be good. Don’t interfere with markets. Unless, apparently, markets shift, and a previously successful business model becomes untenable. The underlying core of this article declares that demand exists to serve producers, not vice versa, and that customers who don’t want the product are just wrong. Sounds like chain restaurants just basically declared libertarian capitalism dead.

Productive American industries have cut costs, including labor, for three generations. And service providers now whine that customers won’t, or can’t, buy their product. I think they believe they’re crying foul, as though an entire generation isn’t playing by the supply-side rules. Underneath that, though, they secretly concede the truth: they don’t trust customers. Market providers apparently just can’t handle capitalism if it doesn’t serve the capitalists.

Monday, June 5, 2017

Wonder Woman and the True Meaning of No-Man's Land

Wonder Woman (Gal Gadot) preparing to go "over the top" into No-Man's Land

You’ve seen the trailer footage: Diana Prince, Wonder Woman, clad in Greco-Roman armor, pausing to stand tall amid a fire-blackened landscape before charging into overlit tracer fire. The footage doesn’t make it entirely clear that she’s just risen from a British trench in the Great War, crossed into No-Man’s Land, and begun to charge the German line. And when those numerous, fast-moving bullets inevitably pin her down, men crest the trench and follow her lead.

This doesn’t just create a good visual. After a three-movie streak of stinkers from DC studios, this moment demonstrates what makes superheroes work, something Zack Snyder apparently doesn’t appreciate. Heroes represent not the recourses we’re willing to settle for, as with Snyder's Superman, but the aspirations we pursue, the better angels we hope to become. We all hope that, faced with the nihilism of the Great War, we’d overcome bureaucratic inertia and face our enemies head-on.

In some ways, this Wonder Woman, directed by relative novice Patty Jenkins, accords with DC’s recent cinematic outings. Diana’s heroism doesn’t stoop to fighting crime, a reflection of cultural changes since the character debuted in 1941. Ordinary criminals, even organized crime, seem remarkably small beer in today’s world. Crime today is often either penny-ante, like common burglars, or too diffuse to punch, like drug cartels. Like the heroes of the Snyder-helmed movies, this superhero confronts more systemic problems.

But Snyder misses the point, which Jenkins hits. Where Snyder’s superheroes battle alien invaders, like Superman, or pummel the living daylights out of each other, Wonder Woman faces humanity’s greatest weaknesses. The Great War, one of humanity’s lowest moments, represents a break from war’s previous myths of honor. Rather than marching into battle gloriously, Great War soldiers hunkered in trenches for months, soaked and gangrenous, seldom bathing, eating tinned rations out of their own helmets.

Steve Trevor (Chris Pine) and Wonder Woman (Gal Gadot) strategize their next attack

This shift manifests in two ways. First, though Diana speaks eloquently about her desire to stop Ares, the war-god she believes is masquerading as a German general, this story is driven by something more down-to-earth. General Ludendorff’s research battalion has created an unusually powerful form of mustard gas. The very real-world Ludendorff, who popularized the expression “Total War,” here successfully crafts a means to destroy soldiers and civilians alike. He represents humanity’s worst warlike sentiments.

Second, this Wonder Woman doesn’t wear a stars-and-stripes uniform. Comic book writer William Moulton Marston created Wonder Woman as an essentially female version of Superman’s American values, an expression externalized in her clothing. This theme carried over into Lynda Carter’s TV performance. But this Wonder Woman stays strictly in Europe, fights for high-minded Allied values rather than for one country, and apparently retires to a curatorship at the Louvre. Her values are unyoked from any specific nation.

Recall, Zack Snyder’s Superman learned from his human father to distrust humankind, and became superheroic only when threatened by Kryptonian war criminals. Diana, conversely, fights for high-minded principles absorbed through myths which, she eventually discovers, are true without being factual. Snyder’s Superman, in fighting General Zod, showed remarkable disregard for bystanders, his film’s most-repeated criticism. But Diana charges into battle specifically to liberate occupied civilians. The pointed contrast probably isn’t accidental.

Unfortunately, Diana learns, war isn’t about individual battles. She liberates a shell-pocked Belgian village, and celebrates by dancing with Steve Trevor in the streets. But General Ludendorff retaliates by testing his extra-powerful chemical weapons on that village. No matter what piteous stories she hears about displaced, starving individuals, ultimately, her enemy isn’t any particular soldier. It’s a system that rewards anyone willing to stoop lower than everyone else, kill more noncombatants, win at any cost.

This picture doesn't serve my theme; I just really like that it exists

In a tradition somewhat established by the superhero genre, Diana culminates the movie with a half-fight, half-conversation with her antagonist. Ares offers Diana the opportunity to restore Earth’s pre-lapsarian paradise by simply scourging the planet of humanity. (Though Greek in language, this movie’s mythology reflects its audience’s Judeo-Christian moral expectations.) Diana responds by… well, spoilers. Let’s just say she resolves that fighting the corrupt system is worthwhile, even knowing she cannot win.

Wonder Woman’s moral mythology resonates with audiences, as Superman’s doesn’t, at least in the Snyderverse, because she expresses hope. Watching Diana, we realize it’s easy to become Ludendorff, wanting to not just beat but obliterate our opponents. Yet we desire to emulate Diana, standing fast against human entropy and embodying our best virtues. Diana is a demigod, we eventually learn, and like all good messiahs, she doesn’t just rule humanity, she models humanity’s truest potential.