Guilty…

One evening in the spring of 2015, I filmed a 15-second video out the window of an Amtrak train as it rattled across the barren flatlands of southern New Jersey. There’s nothing artful or interesting about the clip. All you see is a slanted rush of white and yellow lights. I can’t remember why I made it. Until a few days ago, I had never even watched it. And yet for the past nine years, that video has been sitting on a server in a data center somewhere, silently and invisibly taking a very small toll on our planet…

Data centers and data-transmission networks now account for as much as 1.5 percent of global electricity consumption…

With other forms of consumption that are bad for the planet, we all understand that the main burden of responsibility falls on the big players—industry, government, the rich and powerful. But we also acknowledge that everyone else has a part to play too. I stop running the water while I’m brushing my teeth. I carry groceries in a burlap tote. I turn off the lights whenever I step out of my apartment, regardless of whether I’m leaving for five minutes or a week…

Every time we make a new video, send an email, or post a photo of our latest meal, it's like turning on a small light bulb that'll never be turned off… "We've got to think about whether it's really bad to carry on with our current digital practices." In other words: To help save the planet, should we be using less data? Given how much of modern life depends on megabytes and teraflops, the answer could be a key facet of living nobly in the AI age…

By my estimate, following a formula included in a recent research paper, storing my train video has created about 100 grams of CO2 over the past decade. At first blush, this is effectively nothing: less than three-hundredths of a percent of the yearly CO2 emissions from a pet cat. But data slough off us like skin cells. Last year, I sent 960 videos to the cloud. Because phones record videos in much higher quality these days, most of these clips are larger than that 15-second video from 2015. And like many other people, I have a sprawling digital footprint; many of my stored videos have been either sent to or received from at least one other person who is also storing them on one or two cloud platforms…
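The arithmetic behind such an estimate is easy to sketch. Everything numeric in the rough Python below is an assumption (the article does not reproduce the paper's constants): the video size, the storage duration, and especially the per-gigabyte-year emissions factor, which is simply tuned so the result lands near the 100-gram figure.

```python
# Back-of-the-envelope estimate of the CO2 cost of keeping one file in
# cloud storage. Every constant here is an illustrative assumption, not
# a value from the paper the article cites.

def storage_co2_grams(size_gb: float, years: float,
                      kg_co2e_per_gb_year: float = 0.37) -> float:
    """Grams of CO2e from storing `size_gb` of data for `years` years.

    The emissions factor bundles drive power, cooling, networking, and
    redundant copies; published figures vary by more than an order of
    magnitude, so treat the default as a guess chosen to reproduce the
    article's ~100-gram result for this example.
    """
    return size_gb * years * kg_co2e_per_gb_year * 1000

# A 15-second phone video from 2015, assumed to be about 30 MB,
# stored for roughly 9 years:
print(f"{storage_co2_grams(0.030, 9):.0f} g CO2e")  # ~100 g
```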

"We just need to start to think around the impact of every button we press 'Send' or 'Upload' on," Jackson told me. As a first step, he suggests going back through your phone and computer and getting rid of all the data that you'll never use again. (The industry term for such detritus is dark data; much of Jackson's research focuses on teaching companies to reuse old information instead of making new bytes.) That's easier said than done. When I was looking through old videos for this story, I found many clips that sparked cherished memories. None of these videos was particularly fascinating. But a data center had preserved the data for so long that watching them now transported me, joyfully, to a simpler time. Deciding whether to scrap any of these is not the same as deciding whether to turn a light bulb off when you step out of a room. "The light bulb, you can just come back and switch it back on," Jackson admitted. "Once you've gotten rid of data, it's gone." Even my feelings about the train video—which did not spark any fond memories—remain unresolved. For now, it's still up there…

In a report published in 2021, Berners-Lee and a team of researchers found that if the information-and-communications sector is going to match the reductions necessary to keep global warming under the 1.5 degrees Celsius threshold, it will have to cut its carbon emissions by 42 percent by the end of this decade, and 72 percent by the end of the next…

More fundamentally, maybe we don’t need to turn everything into data. If I put down my phone the next time I’m on a train, it won’t save the planet. But I’ll be looking out the window with my own eyes, creating a memory that emits no carbon at all.

— Arthur Holland Michel, from "Every Time You Post to Instagram, You're Turning on a Light Bulb Forever" (The Atlantic, July 5, 2024)

What we need most at the end of the day…

In a recent Times article, the reporter Emma Goldberg wrote about how the rise of social media and influencer power has made it such that young people, in particular, find their livelihood, success and sense of self inextricably entwined with an online presentation. She wrote, "With personal branding, the line between who people are and what they do disappears. Everything is content." A strange, exhausting new twist in being human is that each day, each of us must decide how much of ourselves, our family life, thoughts, work, photos and feelings we will share with strangers online. Goldberg quoted Tom Peters, a marketing writer, who explained that we are each "head marketer for the brand called You."

To reduce ourselves to brands, however, is to do violence to our personhood. We turn ourselves into products, content to be evaluated instead of people to be truly known and loved. We convert the stuff of our lives into currency.

This new way of interacting with the world is driving institutional dysfunction, personal anxiety and the hollowing out of ourselves… Ezra Klein confessed that social media had made him hungry for validation. It offers us, he said, a steady drumbeat of "You exist. You are seen." This longing to be seen and validated is universal, but this desire has been co-opted by technologists to capture more and more of our time and attention…

I have gotten letters from time to time from readers declaring me their pastor, and of course, I’m flattered and grateful. I hope to be of help to them, yet I cannot be their pastor. I cannot hold their hands and pray over them in the hospital. I cannot grieve with them after the loss of a loved one or rejoice when they land a job. A pastor and the work of local churches more broadly are tethered to a place, an institution and a particular people, with all the complexity, hilarity, struggle and mystery of their lives.

What we need most at the end of the day has nothing to do with influence or brands. We need quiet beauty and enduring truth that we share with those who walk this journey with us…

— Tish Harrison Warren, from "The Temptations of the 'Personal Brand'" (New York Times, January 29, 2023). Tish Harrison Warren (@Tish_H_Warren) is a priest in the Anglican Church in North America and the author of "Prayer in the Night: For Those Who Work or Watch or Weep."

Monday Morning Wake-Up Call

So when I came across Carr's book in 2020 (The Shallows: What the Internet Is Doing to Our Brains), I was ready to read it. And what I found in it was a key — not just to a theory but to a whole map of 20th-century media theorists… who saw what was coming and tried to warn us. Carr's argument began with an observation, one that felt familiar:

The very way my brain worked seemed to be changing. It was then that I began worrying about my inability to pay attention to one thing for more than a couple of minutes. At first I’d figured that the problem was a symptom of middle-age mind rot. But my brain, I realized, wasn’t just drifting. It was hungry. It was demanding to be fed the way the Net fed it — and the more it was fed, the hungrier it became. Even when I was away from my computer, I yearned to check email, click links, do some Googling. I wanted to be connected.

Hungry. That was the word that hooked me. That’s how my brain felt to me, too. Hungry. Needy. Itchy. Once it wanted information. But then it was distraction. And then, with social media, validation. A drumbeat of: You exist. You are seen…

These are industries I know well, and I do not think it has changed them, or the people in them (myself included), for the better. But what would? I've found myself going back to a wise, indescribable book that Jenny Odell, a visual artist, published in 2019. In "How to Do Nothing: Resisting the Attention Economy," Odell suggests that any theory of media must first start with a theory of attention. "One thing I have learned about attention is that certain forms of it are contagious," she writes.

When you spend enough time with someone who pays close attention to something (if you were hanging out with me, it would be birds), you inevitably start to pay attention to some of the same things. I’ve also learned that patterns of attention — what we choose to notice and what we do not — are how we render reality for ourselves, and thus have a direct bearing on what we feel is possible at any given time. These aspects, taken together, suggest to me the revolutionary potential of taking back our attention.

I think Odell frames both the question and the stakes correctly. Attention is contagious. What forms of it, as individuals and as a society, do we want to cultivate? What kinds of mediums would that cultivation require?

This is anything but an argument against technology, were such a thing even coherent. It’s an argument for taking technology as seriously as it deserves to be taken, for recognizing, as McLuhan’s friend and colleague John M. Culkin put it, “we shape our tools, and thereafter, they shape us.”

There is an optimism in that, a reminder of our own agency. And there are questions posed, ones we should spend much more time and energy trying to answer: How do we want to be shaped? Who do we want to become?

— Ezra Klein, from "I Didn't Want It to Be True, but the Medium Really Is the Message" (New York Times, August 7, 2022)

I’ve taken a million pictures – 50 were good.

"Do not call me master, for heaven's sake," says Ferdinando Scianna, welcoming me inside his studio, a cosy ground-floor space in the centre of Milan. "I do not teach anything to anyone. Come in, take a seat."

Scianna has just turned 79. Photography, for him, was an obsession that lasted 60 years. “And it is over today,” he declares. He has not taken pictures for years and says that when young photographers approach him for advice, he wants to ask them for theirs instead. “I tell them the most obvious thing: photograph what you love and what you hate. But they should tell me how to sneak around in this weird era that I do not really know.”

Scianna has taken more than a million photographs and, in his words, the good shots number about 50…

He loves to work on books though. He has published over 70; more, he says, than prudence would have advised him. The first was published in 1965 and is about religious rituals in Sicily (Feste Religiose in Sicilia). “I was just a 21-year-old Sicilian kid, and that book built my career. Today, when I leaf through the pages, I feel confused. I look at my photos and I ask myself, who took those images? I was too young and ignorant. You know, I learned to take pictures over the years – basically, just by taking them.” …

"I do not think I can change the world with my photographs, but I do believe that a bad picture can make it worse," he says. "And the point is that we have too many images. If you eat caviar every day, eventually you will want pasta e fagioli." He thinks that photography went into an irreparable crisis a couple of decades ago, when we stopped building family photo albums. "Today we all take photos with our phones, but they are background images. Even a selfie is not a self-portrait but a kind of neurosis about a moment of existence that must immediately supplant another, and so on. And we all know what happens when something loses the identity that has determined its success and cultural function. It dies." …

He also disdains the pace of change driven by the internet. “On the web, everything is consumed quickly. Culture, on the other hand, is slowness and choice. I made my theory; it is the theory of the three risottos. Do you want to hear it?” He clears his throat. “If someone has never eaten a risotto in his life – and if they have never been to Sicily, they certainly never have eaten a good one – the first time they taste it, they can only say if they liked it or not. The second time, however, they can argue that it was better or worse than the first one. Only from the third time on can they have their own theory of risotto and, if they want, give advice on how it should be cooked. Culture, to me, is knowing things and having a choice.” …

His last solo exhibition was at the prestigious Palazzo Reale in Milan. More than 200 photos were on show and, on some days, there were long queues waiting to get in. "Graham Greene once wrote that, while travelling from Marseille to Paris, at some point he deeply believed in the existence of God. With photographs it is a bit the same. And the world, you know, practises forgetfulness. Millions of men lived before us, men who had dreams, who have done things. We do not know anything about them."

But then, I ask, what remains in history? "Things that have found their shape," he replies instinctively, adding: "I have walked my entire life only to take photos. I am like those little dogs who, while walking, have left their poop around the streets. But if you really want to know the truth, then yes, taking pictures has given me a lot of happiness." He takes another puff on his pipe and watches the smoke slowly rise towards the ceiling until it becomes a giant white cloud that evaporates in a second.

— Maurizio Fiorino, excerpts from "'I've taken a million pictures – 50 were good': photographer Ferdinando Scianna" (The Guardian, July 26, 2022)



Hard Truth

When Facebook (and all the others) decide what you see in your news feed, there are many thousands of things they could show you. So they have written a piece of code to automatically decide what you will see. There are all sorts of algorithms they could use—ways they could decide what you should see, and the order in which you should see them. They could have an algorithm designed to show you things that make you feel happy. They could have an algorithm designed to show you things that make you feel sad. They could have an algorithm to show you things that your friends are talking about most. The list of potential algorithms is long.

The algorithm they actually use varies all the time, but it has one key driving principle that is consistent. It shows you things that will keep you looking at your screen. That’s it. Remember: the more time you look, the more money they make. So the algorithm is always weighted toward figuring out what will keep you looking, and pumping more and more of that onto your screen to keep you from putting down your phone. It is designed to distract. But, Tristan was learning, that leads—quite unexpectedly, and without anyone intending it—to some other changes, which have turned out to be incredibly consequential.

Imagine two Facebook feeds. One is full of updates, news, and videos that make you feel calm and happy. The other is full of updates, news, and videos that make you feel angry and outraged. Which one does the algorithm select? The algorithm is neutral about the question of whether it wants you to be calm or angry. That’s not its concern. It only cares about one thing: Will you keep scrolling? Unfortunately, there’s a quirk of human behavior. On average, we will stare at something negative and outrageous for a lot longer than we will stare at something positive and calm. You will stare at a car crash longer than you will stare at a person handing out flowers by the side of the road, even though the flowers will give you a lot more pleasure than the mangled bodies in a crash. Scientists have been proving this effect in different contexts for a long time—if they showed you a photo of a crowd, and some of the people in it were happy, and some angry, you would instinctively pick out the angry faces first. Even ten-week-old babies respond differently to angry faces. This has been known about in psychology for years and is based on a broad body of evidence. It’s called “negativity bias.”
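For readers who want the mechanism spelled out, here is a minimal Python sketch of an engagement-maximizing ranker. The class, field names, and scoring formula are invented for illustration; no platform's actual model or weights are public. The point is structural: the objective is predicted time-on-screen, and because outrage lengthens dwell time, outrage rises in the ranking without anyone choosing anger as a goal.

```python
from dataclasses import dataclass

# A toy engagement-maximizing feed ranker. The Post fields and scoring
# formula are illustrative assumptions, not any platform's real code.

@dataclass
class Post:
    post_id: str
    predicted_watch_seconds: float  # model's guess at how long you'll look
    outrage_score: float            # 0..1: how negative/outrageous it reads

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts purely by expected time-on-screen.

    The ranker is neutral about calm versus angry content, but because
    people dwell longer on outrageous material (negativity bias), a high
    outrage_score inflates expected dwell time and lifts the post's rank.
    """
    def expected_dwell(post: Post) -> float:
        return post.predicted_watch_seconds * (1.0 + post.outrage_score)

    return sorted(posts, key=expected_dwell, reverse=True)

feed = rank_feed([
    Post("flowers", predicted_watch_seconds=8.0, outrage_score=0.1),
    Post("car-crash", predicted_watch_seconds=8.0, outrage_score=0.9),
])
print([p.post_id for p in feed])  # ['car-crash', 'flowers']
```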

There is growing evidence that this natural human quirk has a huge effect online. On YouTube, what are the words that you should put into the title of your video, if you want to get picked up by the algorithm? They are—according to the best site monitoring YouTube trends—words such as “hates,” “obliterates,” “slams,” “destroys.” A major study at New York University found that for every word of moral outrage you add to a tweet, your retweet rate will go up by 20 percent on average, and the words that will increase your retweet rate most are “attack,” “bad,” and “blame.” A study by the Pew Research Center found that if you fill your Facebook posts with “indignant disagreement,” you’ll double your likes and shares. So an algorithm that prioritizes keeping you glued to the screen will—unintentionally but inevitably—prioritize outraging and angering you. If it’s more enraging, it’s more engaging.
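Taken at face value, that 20 percent per word compounds quickly. (Treating it as multiplicative is an assumption here; the study reports a per-word average, not an exact stacking rule.)

```python
# If each word of moral outrage multiplies the retweet rate by 1.2,
# then n such words multiply it by 1.2**n.
for n in range(5):
    print(f"{n} outrage words -> {1.2 ** n:.2f}x the retweets")
# 0 -> 1.00x, 1 -> 1.20x, 2 -> 1.44x, 3 -> 1.73x, 4 -> 2.07x
```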

If enough people are spending enough of their time being angered, that starts to change the culture. As Tristan told me, it “turns hate into a habit.” You can see this seeping into the bones of our society. When I was a teenager, there was a horrific crime in Britain, where two ten-year-old children murdered a toddler named Jamie Bulger. The Conservative prime minister at the time, John Major, responded by publicly saying that he believed we need “to condemn a little more, and understand a little less.” I remembered thinking then, at the age of fourteen, that this was surely wrong—that it’s always better to understand why people do things, even (perhaps especially) the most heinous acts. But today, this attitude—condemn more, understand less—has become the default response of almost everyone, from the right to the left, as we spend our lives dancing to the tune of algorithms that reward fury and penalize mercy.

In 2015 a researcher named Motahhare Eslami, as part of a team at the University of Illinois, took a group of ordinary Facebook users and explained to them how the Facebook algorithm works. She talked them through how it selects what they see. She discovered that 62 percent of them didn’t know their feeds were filtered at all, and they were astonished to learn about the algorithm’s existence. One person in the study compared it to the moment in the film The Matrix when the central character, Neo, discovers he is living in a computer simulation.

I called several of my relatives and asked them if they knew what an algorithm was. None of them—including the teenagers—did. I asked my neighbors. They looked at me blankly. It’s easy to assume most people know about this, but I don’t think it’s true.

— Johann Hari, from "Stolen Focus: Why You Can't Pay Attention–and How to Think Deeply Again" (Crown, January 25, 2022)

