“Our brains are evolving to multitask,” not! The illusion of multitasking

By Allan Goldstein
Originally published July 2011; revised April 2015

Human multitasking is the apparent ability to perform more than one task or activity over a short period of time. An example of multitasking is taking phone calls while typing an email and reading a book. Multitasking can result in wasted time due to context switching and can cause more errors due to insufficient attention. However, studies have shown that some people can be trained to multitask, with measured changes in brain activity accompanying improved performance on multiple tasks. Multitasking can also be assisted by coordination techniques, such as taking notes periodically or logging current status during an interruption to help resume a prior task midway.

Since the 1960s, psychologists have conducted experiments on the nature and limits of human multitasking. The simplest experimental design used to investigate human multitasking is the so-called psychological refractory period effect. Here, people are asked to make separate responses to each of two stimuli presented close together in time. An extremely general finding is a slowing in responses to the second-appearing stimulus.

Researchers have long suggested that there appears to be a processing bottleneck preventing the brain from working on certain key aspects of both tasks at the same time (e.g., Gladstones, Regan & Lee 1989; Pashler 1994). Many researchers believe that the cognitive function subject to the most severe form of bottlenecking is the planning of actions and retrieval of information from memory.[3] Psychiatrist Edward M. Hallowell[4] has gone so far as to describe multitasking as a “mythical activity in which people believe they can perform two or more tasks simultaneously as effectively as one.” On the other hand, there is good evidence that people can monitor many perceptual streams at the same time, and carry out perceptual and motor functions at the same time.

Although the idea that women are better multitaskers than men has been popular in the media as well in conventional thought, there is very little data available to support claims of a real sex difference. Most studies that do show any sex differences tend to find that the differences are small and inconsistent.[14]

A study by psychologist Keith Laws was widely reported in the press to have provided the first evidence of female multitasking superiority.

Rapidly advancing technology fosters multitasking because it promotes multiple sources of input at a given time. Instead of exchanging old media like TV, print, and music for new media such as computers, the Internet, and video games, children and teens combine forms of media and continually add sources of input.[23] According to studies by the Kaiser Family Foundation, in 1999 only 16 percent of time spent using media such as the internet, television, video games, telephones, text messaging, or e-mail was combined; by 2005, that figure had risen to 26 percent.[10] This increase in simultaneous media usage decreases the amount of attention paid to each device. A 2005 study found that 82 percent of American youth use the Internet by the seventh grade.[24] A 2005 survey by the Kaiser Family Foundation found that, while their media usage held constant at 6.5 hours per day, Americans ages 8 to 18 were crowding roughly 8.5 hours’ worth of media into their days through multitasking. The survey showed that one quarter to one third of participants had more than one input “most of the time” while watching television, listening to music, or reading.[8] A 2007 Harvard Business Review article featured Linda Stone’s idea of “continuous partial attention,” or, “constantly scanning for opportunities and staying on top of contacts, events, and activities in an effort to miss nothing”.[10] As technology provides more distractions, attention is spread more thinly among tasks.

A prevalent example of this inattention to detail due to multitasking is talking on a cellphone while driving. One study found that having an accident is four times more likely when using a cell phone while driving.[25] Another study compared experienced drivers’ reaction times during a number of tasks and found that subjects reacted more slowly to brake lights and stop signs during phone conversations than during other simultaneous tasks.[25] A 2006 study showed that drivers talking on cell phones were involved in more rear-end collisions and were slower to regain speed than intoxicated drivers.[26] When talking, people must withdraw their attention from the road in order to formulate responses. Because the brain cannot focus on two sources of input at one time (driving, and listening or talking), the constantly changing input provided by cell phones distracts the brain and increases the likelihood of accidents.

The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information[1] is one of the most highly cited papers in psychology.[2][3][4] It was published in 1956 by the cognitive psychologist George A. Miller of Princeton University’s Department of Psychology in Psychological Review. It is often interpreted to argue that the number of objects an average human can hold in working memory is 7 ± 2. This is frequently referred to as Miller’s Law.

In his article, Miller discussed a coincidence between the limits of one-dimensional absolute judgment and the limits of short-term memory. In a one-dimensional absolute-judgment task, a person is presented with a number of stimuli that vary on one dimension (e.g., 10 different tones varying only in pitch) and responds to each stimulus with a corresponding response (learned before). Performance is nearly perfect up to five or six different stimuli but declines as the number of different stimuli is increased. The task can be described as one of information transmission: The input consists of one out of n possible stimuli, and the output consists of one out of n responses. The information contained in the input can be determined by the number of binary decisions that need to be made to arrive at the selected stimulus, and the same holds for the response. Therefore, people’s maximum performance on one-dimensional absolute judgment can be characterized as an information channel capacity with approximately 2 to 3 bits of information, which corresponds to the ability to distinguish between four and eight alternatives.
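The correspondence between 2 to 3 bits of channel capacity and four to eight alternatives follows directly from the definition of a bit; a minimal sketch:

```python
import math

def bits_for_alternatives(n: int) -> float:
    """Bits needed to identify one stimulus out of n equally likely
    alternatives: the number of binary decisions, log2(n)."""
    return math.log2(n)

# Miller's channel capacity of ~2 to 3 bits corresponds to distinguishing
# between four and eight alternatives:
print(bits_for_alternatives(4))  # 2.0
print(bits_for_alternatives(8))  # 3.0
```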

The second cognitive limitation Miller discusses is memory span. Memory span refers to the longest list of items (e.g., digits, letters, words) that a person can repeat back immediately after presentation in correct order on 50% of trials. Miller observed that the memory span of young adults is approximately seven items. He noticed that memory span is approximately the same for stimuli with vastly different amounts of information—for instance, binary digits have 1 bit each; decimal digits have 3.32 bits each; words have about 10 bits each. Miller concluded that memory span is not limited in terms of bits but rather in terms of chunks. A chunk is the largest meaningful unit in the presented material that the person recognizes—thus, what counts as a chunk depends on the knowledge of the person being tested. For instance, a word is a single chunk for a speaker of the language but is many chunks for someone who is totally unfamiliar with the language and sees the word as a collection of phonetic segments.
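The bits-per-item figures Miller cites are base-2 logarithms of the number of possible items of each type; a short sketch (the ~1024-word vocabulary implied by the 10-bits-per-word figure is an illustrative assumption, not Miller’s exact model):

```python
import math

def bits_per_item(n_alternatives: int) -> float:
    """Bits per item = log2(number of possible items of that type)."""
    return math.log2(n_alternatives)

print(bits_per_item(2))               # binary digits: 1.0 bit each
print(round(bits_per_item(10), 2))    # decimal digits: 3.32 bits each
# ~10 bits per word implies an effective vocabulary of roughly
# 2**10 = 1024 words (an assumption for illustration).
print(bits_per_item(1024))            # 10.0
```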

Miller recognized that the correspondence between the limits of one-dimensional absolute judgment and of short-term memory span was only a coincidence, because only the first limit, not the second, can be characterized in information-theoretic terms (i.e., as a roughly constant number of bits). Therefore, there is nothing “magical” about the number seven, and Miller used the expression only rhetorically. Nevertheless, the idea of a “magical number 7” inspired much theorizing, rigorous and less rigorous, about the capacity limits of human cognition.

Later research on short-term memory and working memory revealed that memory span is not a constant even when measured in number of chunks. The number of chunks a person can recall immediately after presentation depends on the category of chunks used (e.g., span is around seven for digits, around six for letters, and around five for words), and even on features of the chunks within a category. Chunking is used by the brain’s short-term memory as a method for keeping groups of information accessible for easy recall. It works best with labels one is already familiar with: new information is incorporated into a label that is already well rehearsed in long-term memory. These chunks must store the information in such a way that they can be disassembled into the necessary data.[5]

Storage capacity also depends on the information being stored. For instance, span is lower for long words than for short words. In general, memory span for verbal contents (digits, letters, words, etc.) strongly depends on the time it takes to speak the contents aloud. Some researchers have therefore proposed that the limited capacity of short-term memory for verbal material is not a “magic number” but rather a “magic spell”.[6] Baddeley used this finding to postulate that one component of his model of working memory, the phonological loop, is capable of holding around 2 seconds of sound.[7][8] However, the limit of short-term memory cannot easily be characterized as a constant “magic spell” either, because memory span also depends on factors besides speaking duration, such as the lexical status of the contents (i.e., whether the contents are words known to the person or not).[9] Several other factors also affect a person’s measured span, making it difficult to pin down the capacity of short-term or working memory to a number of chunks.
Nonetheless, Cowan has proposed that working memory has a capacity of about four chunks in young adults (and less in children and older adults).[10]

Tarnow finds that in a classic experiment by Murdock, typically argued to support a four-item buffer, there is in fact no evidence for such a buffer, and thus the “magical number”, at least in the Murdock experiment, is 1.[11][12] Other prominent theories of short-term memory capacity argue against measuring capacity in terms of a fixed number of elements.[13][14]

Chunking in psychology is a process by which individual pieces of information are bound together into a meaningful whole (Neath & Surprenant, 2003). A chunk is defined as a familiar collection of more elementary units that have been inter-associated and stored in memory repeatedly and act as a coherent, integrated group when retrieved (Tulving & Craik, 2000). For example, instead of remembering strings of letters such as “Y-M-C-A-I-B-M-D-H-L”, it is easier to remember the chunks “YMCA-IBM-DHL” consisting of the same letters. Chunking uses one’s knowledge to reduce the number of items that need to be encoded. Thus, chunks are often meaningful to the participant.
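The letter-string example can be made concrete; a minimal sketch showing how chunking cuts the number of items to remember from ten to three:

```python
# Raw material: ten individual letters.
letters = list("YMCAIBMDHL")

# The same content grouped into three familiar chunks.
chunks = ["YMCA", "IBM", "DHL"]

# The grouping changes the item count, not the content.
assert "".join(chunks) == "".join(letters)
print(len(letters))  # 10 items to hold in memory unchunked
print(len(chunks))   # 3 items when chunked
```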

It is believed that individuals create higher order cognitive representations of the items on the list that are more easily remembered as a group than as individual items themselves. Representations of these groupings are highly subjective, as they depend critically on the individual’s perception of the features of the items and the individual’s semantic network. The size of the chunks generally ranges anywhere from two to six items, but differs based on language and culture (Vecchi, Monticelli, & Cornoldi, 1995).

Published on Oct 3, 2013

Technology continues to evolve and play a larger role in all of our daily lives. This huge growth in media (television, computers and smart phones) has changed the way our culture uses media. More devices have created a world of multitaskers, and in this talk, Professor Cliff Nass explores what this means for our society.

Clifford Nass is the Thomas M. Storke Professor at Stanford University with appointments in communication; computer science; education; law; science, technology and society; and symbolic systems. He directs the Communication between Humans and Interactive Media (CHIMe) Lab, focusing on the psychology and design of how people interact with technology, and the Revs Program at Stanford, a transdisciplinary approach to the past, present and future of the automobile. Professor Nass has written three books: The Media Equation, Wired for Speech and The Man Who Lied to His Laptop. He has consulted on the design of over 250 media products and services.

Much recent neuroscience research tells us that the brain doesn’t really do tasks simultaneously, as we thought (hoped) it might.

Here’s the test:

Draw two horizontal lines on a piece of paper
Now, have someone time you as you carry out the two tasks that follow:
On the first line, write:
I am a great multitasker
On the second line: write out the numbers 1-20 sequentially, like those below:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
How much time did it take to do the two tasks? Usually it’s about 20 seconds.

Now, let’s multitask.

Draw two more horizontal lines. This time, again with someone timing you, write a letter on one line and then a number on the line below, then the next letter in the sentence on the upper line, and then the next number in the sequence, switching from line to line. In other words, you write the letter “I”, then the number “1”, then the letter “a”, then the number “2”, and so on, until you complete both lines.

I a…..

1 2…..
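The interleaved sequence that the multitasking version of the test asks you to produce can be generated mechanically; a small sketch (note that the sentence conveniently has exactly 20 letters, one per number):

```python
# The sentence "I am a great multitasker" has exactly 20 letters
# (ignoring spaces), one for each number from 1 to 20.
sentence = "I am a great multitasker"
letters = [c for c in sentence if c != " "]
numbers = [str(n) for n in range(1, 21)]

# Alternate letter, number, letter, number... as the exercise instructs.
interleaved = []
for letter, number in zip(letters, numbers):
    interleaved.extend([letter, number])

print(" ".join(interleaved[:8]))  # I 1 a 2 m 3 a 4
```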

Role Theory

Role Theory proposes that human behavior is guided by expectations held both by the individual and by other people. The expectations correspond to different roles individuals perform or enact in their daily lives, such as secretary, father, or friend. For instance, most people hold preconceived notions of the role expectations of a secretary, which might include answering phones, making and managing appointments, filing paperwork, and typing memos. These role expectations would not be expected of a professional soccer player.

Individuals generally have and manage many roles. Roles consist of a set of rules or norms that function as plans or blueprints to guide behavior. Roles specify what goals should be pursued, what tasks must be accomplished, and what performances are required in a given scenario or situation. Role theory holds that a substantial proportion of observable, day-to-day social behavior is simply persons carrying out their roles, much as actors carry out their roles on the stage or ballplayers theirs on the field. Role theory is, in fact, predictive. It implies that if we have information about the role expectations for a specified position (e.g., sister, fireman, prostitute), a significant portion of the behavior of the persons occupying that position can be predicted.

What’s more, role theory also argues that in order to change behavior it is necessary to change roles; roles correspond to behaviors and vice versa. In addition to heavily influencing behavior, roles influence beliefs and attitudes; individuals will change their beliefs and attitudes to correspond with their roles. For instance, someone overlooked for a promotion to a managerial position in a company may change their beliefs about the benefits of management by convincing themselves that they didn’t want the additional responsibility that would have accompanied the position.

Many role theorists see Role Theory as one of the most compelling theories bridging individual behavior and social structure. Roles, which are dictated in part by social structure and in part by social interactions (see the two approaches outlined below), guide the behavior of the individual. The individual, in turn, influences the norms, expectations, and behaviors associated with roles. The relationship is reciprocal.

Hierarchy of needs

Beginnings: Psychology without a soul

Throughout the first half of the 20th century, the dominant theories in psychology had been the psychoanalysis of Sigmund Freud and the behaviourism of J. B. Watson and B. F. Skinner.

Both had tended to portray human beings as faulty machines.

  • In Freud’s view, human beings were almost entirely driven by primitive urges like sex and aggression. These ever-present impulses must be managed if we are to live together in civilized society. This leaves many people hopelessly conflicted at an unconscious level. A miserable, unfulfilled existence is unavoidable.
  • In the behaviourists’ view, human beings are like oversized lab rats —  programmed or conditioned to behave the way they do by factors outside of their control. They have no mind, no will of their own. Their feelings are not real and therefore do not matter. People are simply programmable machines who can be manipulated into doing anything.

In their different ways, psychoanalysis and behaviourism had dehumanized our understanding of ourselves and what it means to be human. In the middle of the century which had brought us Nazism, Communism, mechanized warfare, systematic genocide and Mutually Assured Destruction, psychology was unintentionally providing a scientific “justification” for such horrors.

These rather bleak, soul-less visions of human nature constituted the first two “waves” of psychology as a science.

Abraham Harold Maslow (April 1, 1908 – June 8, 1970) was an American psychologist best known for creating Maslow’s hierarchy of needs, a theory of psychological health predicated on fulfilling innate human needs in priority, culminating in self-actualization.[2] Maslow was a psychology professor at Brandeis University, Brooklyn College, the New School for Social Research, and Columbia University. He stressed the importance of focusing on the positive qualities in people, as opposed to treating them as a “bag of symptoms.”[3]


Maslow described human needs as ordered in a prepotent hierarchy—a pressing need would need to be mostly satisfied before someone would give their attention to the next highest need. None of his published works included a visual representation of the hierarchy. The pyramidal diagram illustrating the Maslow needs hierarchy may have been created by a psychology textbook publisher as an illustrative device. This now iconic pyramid frequently depicts the spectrum of human needs, both physical and psychological, as accompaniment to articles describing Maslow’s needs theory and may give the impression that the Hierarchy of Needs is a fixed and rigid sequence of progression. Yet, starting with the first publication of his theory in 1943, Maslow described human needs as being relatively fluid—with many needs being present in a person simultaneously.[38]

The hierarchy of human needs model suggests that human needs will only be fulfilled one level at a time.[39]

According to Maslow’s theory, when a human being ascends the levels of the hierarchy, having fulfilled the needs at each level, one may eventually achieve self-actualization. Late in life, Maslow concluded that self-actualization was not an automatic outcome of satisfying the other human needs.[40][41]

Human needs as identified by Maslow:

  • At the bottom of the hierarchy are the “Basic needs or Physiological needs” of a human being: food, water, sleep and sex.
  • The next level is “Safety Needs: Security, Order, and Stability”. These two steps are important to the physical survival of the person. Once individuals have basic nutrition, shelter and safety, they attempt to accomplish more.
  • The third level of need is “Love and Belonging”, which are psychological needs; when individuals have taken care of themselves physically, they are ready to share themselves with others, such as with family and friends.
  • The fourth level is achieved when individuals feel comfortable with what they have accomplished. This is the “Esteem” level, the need to be competent and recognized, such as through status and level of success.
  • Then there is the “Cognitive” level, where individuals intellectually stimulate themselves and explore.
  • After that is the “Aesthetic” level, which is the need for harmony, order and beauty.[42]
  • At the top of the pyramid, “Need for Self-actualization” occurs when individuals reach a state of harmony and understanding because they are engaged in achieving their full potential.[43] Once a person has reached the self-actualization state they focus on themselves and try to build their own image. They may look at this in terms of feelings such as self-confidence or by accomplishing a set goal.[4]

The first four levels are known as deficit needs, or D-needs. If one of these four needs is unmet, a person feels the urge to satisfy it; once it is met, the feeling of need subsides and the person feels content. Once satisfied, these needs alone are no longer motivating.[4]

Maslow wrote that there are certain conditions that must be fulfilled in order for the basic needs to be satisfied. For example, freedom of speech, freedom to express oneself, and freedom to seek new information[44] are a few of the prerequisites. Any blockages of these freedoms could prevent the satisfaction of the basic needs.

In recent years, Maslow’s Hierarchy of Needs has been the subject of internet memes, particularly ones commenting on the integration of technology into our daily lives.

Criticisms of Maslow’s Hierarchy of Needs

While some research showed some support for Maslow’s theories, most research has not been able to substantiate the idea of a needs hierarchy. Wahba and Bridwell reported that there was little evidence for Maslow’s ranking of these needs and even less evidence that these needs are in a hierarchical order.

Other criticisms of Maslow’s theory note that his definition of self-actualization is difficult to test scientifically. His research on self-actualization was also based on a very limited sample of individuals, including people he knew as well as biographies of famous individuals that Maslow believed to be self-actualized, such as Albert Einstein and Eleanor Roosevelt. Regardless of these criticisms, Maslow’s hierarchy of needs represents part of an important shift in psychology. Rather than focusing on abnormal behavior and development, Maslow’s humanistic psychology was focused on the development of healthy individuals.

While there was relatively little research supporting the theory, the hierarchy of needs is well known and popular both in and out of psychology. In a study published in 2011, researchers from the University of Illinois set out to put the hierarchy to the test. They discovered that while fulfillment of the needs was strongly correlated with happiness, people from cultures all over the world reported that self-actualization and social needs were important even when many of the most basic needs were unfulfilled.