A friend and colleague did an interesting thing recently. He took his iPhone back to an AT&T store and departed with an ordinary cell phone. One that doesn’t have an Internet browser or email or GPS or Angry Birds or a library of his favorite 2,000 songs or pictures of his two adorable kids. He did it, he said, to get his life back from a smartphone’s siren pull on his attention. The manager of the AT&T store said that a striking number of his customers were doing the same thing—and so had he. He was fed up with the distraction.
You may have noticed, on a digital screen of your own or the television or in that relic called a newspaper, that distraction has come in for much attention. Attention from parents, teachers, social critics, neuroscientists, productivity gurus. The advent and swift saturation of digital technology have prompted renewed concern regarding the importance, and difficulty, of concentration. They have also prompted a proliferation of books, articles, and websites dedicated to the problem of attention—now more commonly called “focus”—in a time of what may be the apotheosis of distraction, the Internet.
There is something simultaneously contemporary and very old about this concern. Plato warned that the revolutionary intellectual technology of his day—writing—would be the ruin of the intellect, or at least the ruin of memory. By William Shakespeare’s time, the proliferation of printed texts, courtesy of Gutenberg, and Elizabethan anxiety about the mind’s capacity prompted the invention of portable, erasable tablets then known as “tables,” which Hamlet mentions in the eponymous play. (Author William Powers playfully refers to them as “Hamlet’s BlackBerry” in his book of the same title.) In 2011, we complain about the constant distraction of email, pausing in the middle of the complaint to check our inboxes. We applaud and admonish ourselves simultaneously for being multitaskers, we write laws to force drivers to pay attention to the highway and not text messages, we follow website links to 17 journal articles and read not one all the way through, and we buy (and perhaps even read) books like Rapt: Attention and the Focused Life, The Shallows: What the Internet Is Doing to Our Brains, and The Thinking Life: How to Thrive in the Age of Distraction.
It’s not just social commentators and productivity consultants and authors who are attentive to distraction. So are neuroscientists, as they try to figure out how the brain focuses on one thing and not another, how it shifts that focus, how it remembers enough to create what we call concentration. The command that so many of us silently issue in our minds—All right, all right, all right, just pay attention—embodies a world of complications.
THE THINKING LIFE, published by St. Martin’s Press in September, was written by P.M. Forni, professor of Romance languages in the Krieger School of Arts and Sciences. Forni worries that reliance on the Internet, plus the distractions inherent in constantly being connected to digital information streams—Facebook, texting—has begun to undermine deep reading, serious thought, and the ethical engagement of one person with another. “I am convinced that in order to become a deep thinker, you must become a deep reader,” he says. “I’m not necessarily claiming that we read or think less than we used to. Due to the Internet, we may be writing and reading more than we used to. The question is, what are we writing? What are we reading? And the ultimate question, what are we thinking? As we leap from one website to the next, we remain on each of them for something like 40 or 50 seconds, then there is a new slew of information, and the next, and the next again. Yes, we are thinking, but we are thinking in a very shallow way.”
Text on the Internet, such as a newspaper report, magazine piece, or journal article, is embedded in what writer Cory Doctorow has aptly described as an “ecosystem of interruption technologies.” On the same page as the text, there will be links to other websites; advertisements, many of them selected by Google’s advertising algorithms as most likely to catch your eye and divert your attention; and the ever-present lure of the email inbox that pings every time a new missive arrives. Says Forni, “We have this relentless avalanche of information that is coming our way, and we cannot begin to sift through it. The sheer size of it makes it difficult to distinguish what’s important from what’s unimportant. There is a sort of varnish of equivalence in all things coming from the Net.”
THE ANSWER SEEMS simple enough. Need to read deeply for a sustained period of hours? Close the computer and open a book. Need to screen out distractions? Turn off your cell phone, your email, and your browser. Better yet, turn off everything electronic. Reach for a legal pad and a pen. What’s so hard?
What’s so hard, according to recent research by Krieger School professor of psychological and brain sciences Steven Yantis and two Johns Hopkins colleagues, Brian Anderson and Patryk Laurent, is “value-driven attentional capture”: the pull exerted by anything that has signaled a potential reward in the past. “Anything that you do, from reaching for a drink of water up to accepting a job offer or proposing marriage, is embedded within a hierarchy of goals driven ultimately by some kind of reward,” says Yantis. The potential for reward draws our attention for obvious reasons. A drink slakes our thirst, a pork chop satisfies our hunger, an attractive person may hold the promise of sex. So we are distracted by a clinking glass, the smell of onions browning, or the cute person crossing our line of sight. Even the tiniest of possible rewards lures us. Says Yantis, “When my email chime goes off, because that’s associated with tiny amounts of reward in the past—one out of 100 emails is actually interesting—my brain has learned to pick up on that cue and say, ‘Ah, something interesting might be associated with that.’”
For their experiment, Anderson, Laurent, and Yantis first had volunteers scan an array of colored circles on a computer display and find a red or green one. During this training phase, the participants gradually accrued monetary rewards, such as a nickel for every red circle they found and a penny for every green one. During the second phase of the experiment, they were instructed to look for diamond shapes among the circles. All the diamonds and circles were in various colors, but the volunteers were told to ignore that; only shape mattered now. Despite those instructions, response times slowed whenever a red or green circle appeared: colors that had been attached to small rewards in the recent past still captured the volunteers’ attention and pulled it away from the search for diamonds.
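For readers who find a sketch in code clearer than prose, here is a rough simulation of that two-phase design. It is not the researchers’ code or data; the nickel-and-penny rewards come from the description above, while the response times and the size of the slowdown are invented purely to illustrate the logic.

```python
# A toy simulation of the two-phase design described above, just to make
# its logic concrete. The nickel/penny rewards follow the article; the
# response times and the 40 ms slowdown are invented for illustration.
import random

random.seed(0)

REWARD = {"red": 0.05, "green": 0.01}  # training phase: nickel vs. penny

def training_trial(target_color):
    """Phase 1: find the red or green circle and earn a small reward."""
    return REWARD[target_color]

def test_trial(distractor_color):
    """Phase 2: find the diamond; color is now supposed to be irrelevant.
    Assume a formerly rewarded color slows the search a little."""
    base_rt = random.gauss(650, 30)                     # ms, invented
    slowdown = 40 if distractor_color in REWARD else 0  # ms, invented
    return base_rt + slowdown

earnings = sum(training_trial(random.choice(["red", "green"])) for _ in range(240))
rt_reward = [test_trial("red") for _ in range(120)]
rt_neutral = [test_trial("blue") for _ in range(120)]

print(f"training earnings: ${earnings:.2f}")
print(f"mean response time, reward-colored distractor: {sum(rt_reward)/len(rt_reward):.0f} ms")
print(f"mean response time, neutral distractor:        {sum(rt_neutral)/len(rt_neutral):.0f} ms")
```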
The research subjects all had been tested the day before the experiment to gauge the capacities of their working memories, the short-term bits of recall that do not linger long in the brain but are vital for decision making and other executive functions. The study found a correlation between lower capacity and being prone to distraction. Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, calls working memory “the mind’s scratch pad,” and it is essential to the formation of long-term memory and to higher intellectual abilities such as reasoning and the creation of new knowledge. The amount of information flowing into working memory, known as the cognitive load, can exceed working memory’s capacity for storing and processing it. Whenever that happens, learning, retention, and deeper thinking all suffer.
This is what scares critics of the Internet like Carr. If reading a book can be likened to dripping information into working memory at the speed of reading, the Internet is more like a gallon of water dumped on us from a bucket. Scientists have determined that working memory can deal with at most a half-dozen elements at one time, possibly fewer. Carr notes that studies of people reading on the Net have found their brains are engaged in constant problem solving—evaluating links embedded in the text and deciding whether or not to click on them, for example—and exhibit divided attention. Both activities tax working memory and thus make concentrating on the text more difficult.
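A loose analogy, for those who like one: working memory behaves a bit like a small fixed-capacity buffer that silently drops older items as new ones pour in. The sketch below is only that, an analogy; the capacity of four items is an assumption in the spirit of “a half-dozen elements, possibly fewer,” and nothing here models an actual brain.

```python
# Working memory as a tiny fixed-capacity buffer: once the load exceeds
# the limit, the oldest items are silently pushed out. The capacity of 4
# is an assumption, not a measured value; this is an analogy, not a model.
from collections import deque

working_memory = deque(maxlen=4)

# Reading a book: ideas arrive at the speed of reading.
for idea in ["premise", "evidence", "counterpoint", "conclusion"]:
    working_memory.append(idea)
print("after slow reading:   ", list(working_memory))

# Reading online: links, ads, and email pings arrive on top of the text.
for item in ["paragraph", "hyperlink?", "ad", "email ping", "new tab", "reply"]:
    working_memory.append(item)
print("after a minute online:", list(working_memory))  # the paragraph is gone
```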
Johns Hopkins faculty have watched laptop computers and smartphones invade the lecture hall and seminar room, bringing with them their potential to divert attention from the discussion or the lecture. And students now read online much of what they used to read on a printed page. Some faculty say that while students don’t seem any more distracted today than they were 10 years ago, coincident with the use of digital technology have come changes in reading and research habits. When someone finds a needed book in the Eisenhower Library, the other related volumes shelved with it make apparent the deep intellectual resources available for any research topic. The Internet does not make the same impression. Erica Schoenberger, a professor of geography in the Whiting School of Engineering, says of undergraduates, “They don’t know that beyond Wikipedia there’s an immense world of resources.” Even when they do know of those greater resources, they’ve become used to skipping around and reading fragments of texts instead of long articles or books. Krieger School history professor Gabrielle Spiegel has noticed the effect of this on doctoral students. “They are used to really small chunks of things,” she says. “A noticeable consequence is they now have greater difficulty in building a long logical argument, [understanding] what should be subordinate, etc. It’s fairly subtle, but noticeable. I spend more time now reviewing [dissertation] outlines than I ever have in the past.”
What Forni has observed of changes in undergraduates’ ability to think concerns him. He says, “The students at the top of their game are articulate and able to use critical thinking in ways as good as ever. On the other hand, students who are not at the top of the class display less ability to articulate their thinking than their peers of 20 years ago. My impression is that the gap has grown. Hopkins is Hopkins and will always have bright youngsters. But when I travel, I continue to hear provosts and deans and professors at different schools articulating the thought that the decline in the ability to engage in critical thinking is across the board and has increased.”
THE STUDY OF distraction is really the study of attention, because during our waking hours we are never not paying attention. When we’re distracted, we are still paying attention—just not to the task that was the previous still point of our intentional neural processing. A cell phone conversation distracts us from safe driving because we are paying attention—to the call, not to traffic and speed and staying in our lane.
There is no discrete portion of the mind that could be called the “attention center.” But researchers like Yantis and Johns Hopkins computational neuroscientist Ernst Niebur are steadily learning more and more about the granular details of attention: what the neurons are up to; the interaction of components of the brain like the prefrontal cortex and the hippocampus; the interplay of narrow and broad attention as you, say, concentrate on the words of this story while remaining aware that the cat just entered your peripheral vision as a breeze stirs the air in the room. Scientists are figuring out, sometimes one neuron at a time, what William James described in 1890 in The Principles of Psychology as “the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought.”
Yantis defines attention as the “selection of one source of sensory input for increased cognitive processing.” Selection is the key term; attention is, in the words of Winifred Gallagher, author of Rapt, a “neurological sorting operation.” In a forthcoming textbook he is just completing, Yantis distinguishes various forms of this sorting. Divided attention lets us steer the car into the driveway while simultaneously noticing that the grass needs cutting and the kids are home. Focused attention is the sort of acute attentiveness of the batter at the plate in a baseball game. Every form of attention is selective all the time, however, because even divided attention is divided among only a few of the myriad things that our senses pick up every waking second. Attention is the filter by which we choose what to be actively aware of, what to remember, and what will guide our actions.
Human powers of concentration are strong enough for us not to notice something right in front of us if we’re paying close attention to something else. This is called inattentional blindness and is central to the famous “invisible gorilla” experiment by Daniel Simons and Christopher Chabris in 1999. Students were shown a short video of two teams passing basketballs and told to count how many times either the team in white or the team in black passed the ball. About half of them failed to notice when someone dressed in a gorilla suit strolled through the middle of the action. Focused on the basketball and the players, they were oblivious to anything else. Yet, in other circumstances, people are easy to distract. At a crowded, noisy party you could be deep in conversation with the soul mate of your dreams, but if someone else across the room in a separate conversation utters your name, that will cut through the din and instantly divert you; in 1953 researcher Colin Cherry named this “the cocktail party problem.”
Says Yantis, “There’s too much information in any scene, visual stuff and auditory stuff and tactile stuff. Our brains cannot process it all at the same time. So there’s a need to select. That raises the question, Why can’t the brain process it all at once?” A simplified answer to that question is that the brain may contain billions of neurons, but that isn’t enough to account for every bit of sensory information continually streaming in from the eyes, ears, nose, palate, and skin. So the brain has to marshal phenomenal yet finite resources to sift through the sensory overload and take note of what we most need to know. It does this in part by imposing constraints to organize all these stimuli. For example, if your eyes gaze at a garden that contains a red rose and a yellow daisy, neurons receptive to red and neurons receptive to yellow are firing in the same visual area of your brain. Neurons tuned to respond to red have to “decide” whether to fire, if you are attending to the rose, or not, if you are attending to the daisy. That’s one example of a constraint. Another arises because sensory stimuli compete to drive the same neurons, and the only way the brain can resolve the competition and so distinguish the rose from the daisy is to attend, however fleetingly, to the rose as if nothing else were there.
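One way to picture that competition is the “biased competition” account attention researchers often use: attention raises the effective gain of the attended stimulus, while the competing stimulus suppresses the response through normalization. The toy calculation below sketches that general idea with made-up numbers; it is not a model drawn from the Hopkins work.

```python
# A toy "biased competition" calculation: a red-tuned neuron receives
# input from both the rose and the daisy, and top-down attention raises
# the gain on whichever stimulus is being attended. All numbers invented.

def red_neuron_response(red_drive, yellow_drive, attend_red):
    """Normalized response of a red-tuned neuron: the attended stimulus
    gets a higher gain, so it wins the competition for the neuron."""
    gain_red = 4.0 if attend_red else 1.0
    gain_yellow = 1.0 if attend_red else 4.0
    excitation = gain_red * red_drive
    suppression = gain_red * red_drive + gain_yellow * yellow_drive + 1.0
    return excitation / suppression

rose, daisy = 0.8, 0.8  # equally strong red and yellow inputs

print("attending to the rose :", round(red_neuron_response(rose, daisy, True), 2))   # ~0.64
print("attending to the daisy:", round(red_neuron_response(rose, daisy, False), 2))  # ~0.16
```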
Says Yantis, “The brain is essentially using attention as a way to have the flexibility to represent anything that could possibly occur in the perceptual world.” So attention is not just vital, it is unavoidable. But there is sound evolutionary reason for being distractible. Imagine one of our ancient primate forebears concentrating so hard on catching a fish for dinner that he fails to notice the saber-toothed cat that considers him dinner. That’s how you exit the gene pool. To function, we have to pay attention. But we also need to be distractible, for awareness of potential hazards. The same applies to awareness of potential rewards.
Given that set of imperatives, look at what we face in the 21st-century developed world. Technology has created a modern environment in which we can be bombarded every minute of every day with stimuli that all promise rewards. None of which might matter were our brains capable of switching attention at the speed of light. But they are not. The brain’s chemical and electrical processes may be very fast, but they are not instantaneous. Processing visual stimuli, then initiating a response—look, there’s your best friend coming toward you, wave!—requires a lot of neurons to fire, and the milliseconds add up. If Niebur is right, a further source of lag may be the brain’s division of labor. He and his colleagues in the computational neuroscience lab at the Krieger Mind/Brain Institute have created a computer model that seems to explain a neurological puzzle.
Scientists know that different parts of the brain play various roles in attention. The prefrontal cortex, which controls executive function such as the decision to concentrate on object A and ignore object B, does not bother itself, so to speak, with the billions of details that pour in through the senses. That’s the job of the various sensory cortices, which react to whatever the senses pick up. But if I am focused on an ant that is moving horizontally across my desk, and the ant turns right and starts moving vertically, my prefrontal cortex somehow maintains my attention on the ant even though all the neurons that registered changes in the ant’s position and direction are elsewhere in my brain. How does the prefrontal cortex keep my attention on the ant even though none of its neurons fire when the ant changes direction? In a 2001 paper in Proceedings of the National Academy of Sciences, Niebur proposed an answer: a feedback process by which the prefrontal cortex issues commands but lets the sensory cortices fill in the details required for execution of those commands. Experiments on human subjects have produced results consistent with the model. It’s a remarkable bit of brain engineering, but this feedback loop adds still more time to the brain’s processing. What this can mean in our distracted lives was demonstrated by Yantis in 2005.
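To make that division of labor concrete, here is a cartoon version of such a feedback loop: a “prefrontal” stage holds the goal (keep tracking the ant) while a “sensory” stage supplies the moment-to-moment detail, and every round trip adds a little time. This is an illustrative sketch with assumed delays, not Niebur’s published model.

```python
# Cartoon feedback loop: the "prefrontal" stage maintains the goal but
# never computes the detail; the "sensory" stage reports the detail but
# never holds the goal. The ~30 ms figures are assumptions for illustration.

ANT_PATH = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # ant turns and heads "up"

def sensory_stage(position):
    """Bottom-up: register where the ant is right now."""
    return {"ant_position": position}

def prefrontal_stage(goal, report):
    """Top-down: restate the command, letting the sensory report fill in detail."""
    return f"keep attending to the {goal} at {report['ant_position']}"

total_delay_ms = 0
for position in ANT_PATH:
    report = sensory_stage(position)            # detail flows up
    command = prefrontal_stage("ant", report)   # goal flows back down
    total_delay_ms += 30 + 30                   # assumed cost of one round trip
    print(command)

print(f"assumed feedback overhead over {len(ANT_PATH)} updates: ~{total_delay_ms} ms")
```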
He asked research volunteers to view a stream of letters and numbers on a computer screen while multiple voices recited letters and numbers through headphones placed over the volunteers’ ears. When the volunteers were instructed to switch their concentration from the visual stream to the auditory stream, functional magnetic resonance imaging (fMRI) revealed that activity in the visual part of the brain significantly decreased, even though their eyes continued to take in the same visual stimuli as before. The brain was incapable of maintaining full attention on both the visual and auditory at the same time. Furthermore, when the volunteers were asked to shift their attention, their parietal and prefrontal cortices lit up, indicating that several parts of the brain were involved in the decision to change from eyes to ears or vice versa. That added more time to the process.
Now imagine, says Yantis, that you are a man driving your car. Your cell phone buzzes, you answer, and your wife, who sounds upset, informs you that she just got home from her office and found water spreading across the basement floor. Your brain still needs to pay attention to the complex visual stimuli and motor control involved in safe driving, but it has shifted its attention to auditory stimuli—your upset wife. You are rolling down a busy highway at 60 miles per hour but concentrating on what’s coming through your phone, and probably also imagining the scene in your basement and mentally tabulating how much this is going to cost you and responding to the rising agitation in your wife’s voice, all in reaction to auditory stimuli. If a truck suddenly swerves into your path, your brain has to switch attention back to the visual and perform the executive function that controls your response—slamming on the brakes. All that neuroprocessing, fast though it is, might take too long for you to avoid a wreck.
FORTY YEARS AGO, decades before there was an Internet, economist Herbert A. Simon made an astute observation that now feels prescient. In “Designing Organizations for an Information-Rich World,” Simon said, “What information consumes is rather obvious: It consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”
Forni would concur. As he wrote in The Thinking Life, “A dubious accomplishment of the often misguided age in which we live is its unparalleled perfecting of the art of distraction.” To him, distraction is more than a highway safety and pay-attention-to-the-lecture problem. He believes it is also an ethical problem.
He says, “Attention is essentially a cognitive faculty with a very well-marked ethical component, because to be ethical to you, I need to be attentive to your needs and desires. I need to be aware. We cannot be kind and considerate without paying attention to others. If I am distracted, you are an abstraction, you are not a real person. Attention is necessary for civility.”
And a good deal else.
Dale Keiger, A&S ’11 (MLA), is associate editor of Johns Hopkins Magazine.