Life Cycle Analysis

Life Cycle Analysis (LCA) is hard to do. If you Google it, the most popular links are dated. LCA is a good idea that is difficult to manage because supply chains are complex and conflicting criteria must be applied to select the best option in each link of the chain.

The goal of LCA is to compare the full range of environmental effects assignable to products and services by quantifying all inputs and outputs of material flows and assessing how these material flows affect the environment. This information is used to improve processes, support policy and provide a sound basis for informed decisions.

The term life cycle refers to the notion that a fair, holistic assessment requires assessing raw-material production, manufacture, distribution, use and disposal, including all intervening transportation steps made necessary by or caused by the product’s existence.

The procedures of life cycle assessment (LCA) are part of the ISO 14000 environmental management standards: ISO 14040:2006 and ISO 14044:2006. (ISO 14044 replaced the earlier ISO 14041 through ISO 14043.) GHG product life cycle assessments can also comply with specifications such as PAS 2050 and the GHG Protocol Life Cycle Accounting and Reporting Standard.

To implement a sustainable supply chain, companies must adopt a holistic, systems-based approach in which all the supply chain partners’ activities are integrated throughout the four basic lifecycle stages: pre-manufacturing, manufacturing, use and post-use.
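
As a toy illustration of that stage-by-stage accounting, the sketch below tallies a single impact category across the four stages named above. Everything here is hypothetical: the impact figures and the two design alternatives are made up, and a real LCA inventory would track many material and energy flows per stage.

```python
# Minimal life-cycle inventory sketch: aggregate one impact category
# (kg CO2-eq) over the four basic lifecycle stages named in the text.
STAGES = ["pre-manufacturing", "manufacturing", "use", "post-use"]

def total_impact(inventory):
    """Sum a per-stage impact inventory into a single footprint number."""
    return sum(inventory.get(stage, 0.0) for stage in STAGES)

# Hypothetical per-stage inventories for two alternative product designs.
design_a = {"pre-manufacturing": 4.0, "manufacturing": 6.5, "use": 1.2, "post-use": 0.8}
design_b = {"pre-manufacturing": 2.5, "manufacturing": 7.0, "use": 3.1, "post-use": 0.4}

for name, inventory in [("A", design_a), ("B", design_b)]:
    print(f"design {name}: {total_impact(inventory):.1f} kg CO2-eq")
```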

In 2009 Walmart announced its intention to create a new sustainable product index that would establish a single source of data for evaluating the sustainability of products. The company said it would provide initial funding for a consortium of universities, suppliers, retailers, government organizations and NGOs “to develop a global database of information on the lifecycle of products – from raw materials to disposal.”

LCA attempts to replace monetary cost with an energy currency. Energy efficiency is only one consideration in deciding which alternative process to employ, and it should not be elevated to the sole criterion for determining environmental acceptability; for example, a simple energy analysis does not take into account the renewability of energy flows or the toxicity of waste products. Life cycle assessment does, however, help companies become more familiar with the environmental properties of their processes and improve their environmental footprint.

The literature on life cycle assessment of energy technology has begun to reflect the interactions between the current electrical grid and future energy technology. Some papers have focused on energy life cycle, while others have focused on carbon dioxide (CO2) and other greenhouse gases. The essential critique given by these sources is that when considering energy technology, the growing nature of the power grid must be taken into consideration. If this is not done, a given class of energy technology may emit more CO2 over its lifetime than it mitigates.

A problem the energy analysis method cannot resolve is that different energy forms (heat, electricity, chemical energy, etc.) differ in quality and value even in the natural sciences, as a consequence of the two main laws of thermodynamics. A thermodynamic measure of the quality of energy is exergy. According to the first law of thermodynamics, all energy inputs should be accounted for with equal weight, whereas by the second law diverse energy forms should be accounted for at different values.

A recent article by Noya et al. (2018) illustrates the complexities and methodologies of LCA.

References

Bredenberg, A. (2012, March 20). Lifecycle Assessment in Sustainable Supply Chains. Retrieved from ThomasNet.com: https://news.thomasnet.com/imt/2012/03/20/lifecycle-assessment-in-sustainable-supply-chains

Suter, F., et al. (2016, September 26). Life Cycle Impacts and Benefits of Wood along the Value Chain: The Case of Switzerland. Retrieved from Wiley Online Library: http://onlinelibrary.wiley.com/doi/10.1111/jiec.12486/full

Gestring, I. (2017). Life Cycle and Supply Chain Management for Sustainable Bins. Retrieved from Science Direct: https://doi.org/10.1016/j.proeng.2017.06.041

Noya, I., et al. (2018, January). An environmental evaluation of food supply chain using life cycle assessment: A case study on gluten free biscuit products. Retrieved from ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0959652617319777

United Nations Environment Programme. (2009). Life Cycle Management. Dublin: United Nations Environment Programme. Retrieved from http://www.unep.fr/shared/publications/pdf/DTIx1208xPA-LifeCycleApproach-Howbusinessusesit.pdf

Wharton. (2010, March 3). The Business Case for Lifecycle Analysis and Building a Green Supply Chain. Retrieved from Knowledge@Wharton: http://knowledge.wharton.upenn.edu/article/the-business-case-for-lifecycle-analysis-and-building-a-green-supply-chain/

Wikipedia. (2018, January 22). Life-cycle assessment. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Life-cycle_assessment


The precedence diagram method

The precedence diagram method is a tool for scheduling activities in a project plan. It is a method of constructing a project schedule network diagram that uses boxes, referred to as nodes, to represent activities and connects them with arrows that show the dependencies.

  • Identifies critical tasks, noncritical tasks, and slack time
  • Shows the relationships of the tasks to each other
  • Allows for what-if, worst-case, best-case and most-likely scenarios

Key elements include determining predecessors and defining attributes such as the following (a small scheduling sketch follows the list):

  • early start date
  • late start date
  • early finish date
  • late finish date
  • duration
  • WBS reference
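
The following is a minimal sketch of how those attributes are typically derived, assuming a made-up four-activity network: a forward pass computes early start/finish dates, a backward pass computes late start/finish dates, and activities with zero slack form the critical path discussed next.

```python
# Forward/backward pass over a small activity network (CPM-style).
# Activities: name -> (duration, list of predecessors). Toy data.
acts = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

order = list(acts)  # already in topological order for this toy example

es, ef = {}, {}
for a in order:  # forward pass: early start = max early finish of predecessors
    dur, preds = acts[a]
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

project_end = max(ef.values())

ls, lf = {}, {}
for a in reversed(order):  # backward pass: late finish = min late start of successors
    succs = [s for s in order if a in acts[s][1]]
    lf[a] = min((ls[s] for s in succs), default=project_end)
    ls[a] = lf[a] - acts[a][0]

for a in order:
    slack = ls[a] - es[a]  # zero slack marks a critical activity
    flag = "critical" if slack == 0 else f"slack={slack}"
    print(a, f"ES={es[a]} EF={ef[a]} LS={ls[a]} LF={lf[a]}", flag)
```

Running this prints the critical path A-B-D (zero slack) and a slack of 2 for activity C.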

The critical path method (CPM) is a project modeling technique developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley Jr. of Remington Rand.[2] Kelley and Walker related their memories of the development of CPM in 1989.[3] Kelley attributed the term “critical path” to the developers of the Program Evaluation and Review Technique, which was developed at about the same time by Booz Allen Hamilton and the U.S. Navy.[4] The precursors of what came to be known as Critical Path were developed and put into practice by DuPont between 1940 and 1943 and contributed to the success of the Manhattan Project.[5]

CPM is commonly used with all forms of projects, including construction, aerospace and defense, software development, research projects, product development, engineering, and plant maintenance, among others. Any project with interdependent activities can apply this method of mathematical analysis. Although the original CPM program and approach are no longer used,[6] the term is generally applied to any approach used to analyze a project network logic diagram.

Originally, the critical path method considered only logical dependencies between terminal elements. Since then, it has been expanded to allow for the inclusion of resources related to each activity, through processes called activity-based resource assignments and resource leveling. A resource-leveled schedule may include delays due to resource bottlenecks (i.e., unavailability of a resource at the required time), and may cause a previously shorter path to become the longest or most “resource critical” path. A related concept is called the critical chain, which attempts to protect activity and project durations from unforeseen delays due to resource constraints.

Since project schedules change regularly, CPM permits continuous monitoring of the schedule, lets the project manager track the critical activities, and alerts the project manager to the possibility that non-critical activities may be delayed beyond their total float, thus creating a new critical path and delaying project completion. In addition, the method can easily incorporate the concepts of stochastic predictions, using the program evaluation and review technique (PERT) and event chain methodology.

Currently, several software solutions in industry use the CPM method of scheduling. The method used by most project management software is based on a manual calculation approach developed by Fondahl of Stanford University.

Design of experiments

The design of experiments (DOE, DOX, or experimental design) is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with true experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.

In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is reflected in a variable called the predictor. The change in the predictor is generally hypothesized to result in a change in the second variable, hence called the outcome variable. Experimental design involves not only the selection of suitable predictors and outcomes, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources.

Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the predictor, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.

Correctly designed experiments advance knowledge in the natural and social sciences and engineering. Other applications include marketing and policy making.

Design of Experiments (DOE)

Outline

  1. Introduction
  2. Preparation
  3. Components of Experimental Design
  4. Purpose of Experimentation
  5. Design Guidelines
  6. Design Process
  7. One Factor Experiments
  8. Multi-factor Experiments
  9. Taguchi Methods

In the design of experiments, optimal designs (or optimum designs[2]) are a class of experimental designs that are optimal with respect to some statistical criterion. The creation of this field of statistics has been credited to Danish statistician Kirstine Smith.[3][4]

In the design of experiments for estimating statistical models, optimal designs allow parameters to be estimated without bias and with minimum variance. A non-optimal design requires a greater number of experimental runs to estimate the parameters with the same precision as an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation.

The optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion, which is related to the variance-matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require understanding of statistical theory and practical knowledge of designing experiments.
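
As a concrete illustration of such a criterion: for a linear model fitted by least squares, the variance-covariance matrix of the estimator is proportional to the inverse of X'X, so the D-criterion maximizes det(X'X) over candidate designs. The sketch below is a minimal example, assuming NumPy and made-up design points; it compares a two-factor, two-level full factorial design against an arbitrary clustered design.

```python
import numpy as np

def d_criterion(points):
    """det(X'X) for the model y = b0 + b1*x1 + b2*x2 at the given design points."""
    X = np.array([[1.0, x1, x2] for x1, x2 in points])
    return np.linalg.det(X.T @ X)

# 2^2 full factorial over the coded range [-1, +1] ...
factorial = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
# ... versus an arbitrary cluster of four runs (hypothetical, non-optimal).
cluster = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0), (0.1, 0.1)]

print("factorial det(X'X):", d_criterion(factorial))  # larger = lower variance
print("cluster   det(X'X):", d_criterion(cluster))
```

The factorial design spreads its runs to the corners of the region, yielding a far larger determinant, which is the sense in which it estimates the same parameters with fewer runs and less variance.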

DMAIC

DMAIC (an acronym for Define, Measure, Analyze, Improve and Control) (pronounced də-MAY-ick) refers to a data-driven improvement cycle used for improving, optimizing and stabilizing business processes and designs. The DMAIC improvement cycle is the core tool used to drive Six Sigma projects. However, DMAIC is not exclusive to Six Sigma and can be used as the framework for other improvement applications.

Guerrilla marketing

Guerrilla marketing is an advertising strategy concept designed for businesses to promote their products or services in an unconventional way with little budget to spend. It involves high energy and imagination, focusing on grasping the attention of the public on a more personal and memorable level. Some large companies use unconventional advertising techniques and proclaim them to be guerrilla marketing, but those companies have larger budgets and their brands are already visible.[1] The main point of guerrilla marketing is that the activities are done exclusively on the streets or in other public places, such as shopping centers, parks or beaches with maximum public access, so as to attract a bigger audience.[2]

Guerrilla marketing is a concept that has arisen as we move from traditional media to more online and electronic media. It was created by Jay Conrad Levinson in his 1984 book Guerrilla Marketing. Traditional advertising media are channels such as print, radio, television and direct mail (Belch & Belch, 2012), but as we move away from these channels, marketers and advertisers have to find new strategies to get their commercial messages to the consumer. Guerrilla marketing is an alternative strategy that takes the consumer by surprise to make a big impression about the brand (What is Guerrilla Marketing, 2015), which in turn creates a buzz about the brand or product being marketed. It is a way of advertising that increases engagement with the product or service and is designed to create a memorable experience for the consumer. That memorable experience also increases the likelihood that a consumer, or someone who interacted with the campaign, will tell their friends about it, so via word of mouth the product or service reaches many more people than initially anticipated and gains a mass audience.

This style of marketing is extremely effective for small businesses advertising their product or service, especially if they are competing against bigger companies, as it is inexpensive and focuses on reach rather than frequency. For guerrilla campaigns to be successful, companies don’t need to spend large amounts; they just need imagination, energy and time (Bourn, 2009). Guerrilla marketing is also an effective way for companies that don’t provide a tangible service to advertise their products through non-traditional channels, as long as they have an effective strategy. As opposed to traditional media, guerrilla marketing cannot be measured by statistics, sales and hits; it is measured by profit made. It is designed to cut through the clutter of traditional advertising and leave no mystery about what is being advertised: the message to consumers is clear and concise, the business does not diversify the message, and focus is maintained.

This type of marketing also works on the unconscious mind, as purchases are quite often decided by the unconscious mind. Keeping the product or service in the unconscious mind requires repetition, so if a buzz is created around a product and it is shared among friends, that enables repetition (Bourn, 2009). Two types of marketing encompassed by guerrilla marketing are viral marketing and buzz marketing.

Unlike typical public marketing campaigns that utilize billboards, guerrilla marketing involves applying multiple techniques and practices to establish direct contact with customers.[3] One goal of this interaction is to cause an emotional reaction in clients; the final goal is to get people to remember brands in a different way than they are used to. The techniques range from flyer distribution in public spaces to staging an operation at a major event or festival, mostly without directly connecting to the event but using the opportunity. The challenge with any guerrilla marketing campaign is to find the correct place and time to carry out the operation without getting involved in legal issues.

Three Needs Theory

Need theory, also known as Three Needs Theory,[1] proposed by psychologist David McClelland, is a motivational model that attempts to explain how the needs for achievement, power, and affiliation affect the actions of people in a managerial context. The model was developed in the 1960s, two decades after Maslow’s hierarchy of needs was first proposed in the 1940s. McClelland stated that we all have these three types of motivation regardless of age, sex, race, or culture. The type of motivation by which each individual is driven derives from their life experiences and the opinions of their culture. This need theory is often taught in classes concerning management or organizational behaviour.

Gerard Hendrik (Geert) Hofstede

Hofstede’s cultural dimensions theory is a framework for cross-cultural communication, developed by Geert Hofstede. It describes the effects of a society’s culture on the values of its members, and how these values relate to behavior, using a structure derived from factor analysis.[1]

Hofstede developed his original model as a result of using factor analysis to examine the results of a world-wide survey of employee values by IBM between 1967 and 1973. It has been refined since. The original theory proposed four dimensions along which cultural values could be analyzed: individualism-collectivism; uncertainty avoidance; power distance (strength of social hierarchy) and masculinity-femininity (task orientation versus person-orientation). Independent research in Hong Kong led Hofstede to add a fifth dimension, long-term orientation, to cover aspects of values not discussed in the original paradigm. In 2010 Hofstede added a sixth dimension, indulgence versus self-restraint.

Hofstede’s work established a major research tradition in cross-cultural psychology and has also been drawn upon by researchers and consultants in many fields relating to international business and communication. The theory has been widely used in several fields as a paradigm for research, particularly in cross-cultural psychology, international management, and cross-cultural communication. It continues to be a major resource in cross-cultural fields. It has inspired a number of other major cross-cultural studies of values, as well as research on other aspects of culture, such as social beliefs.

Gerard Hendrik (Geert) Hofstede (born 2 October 1928) is a Dutch social psychologist, former IBM employee, and Professor Emeritus of Organizational Anthropology and International Management at Maastricht University in the Netherlands, well known for his pioneering research on cross-cultural groups and organizations.

His most notable work has been in developing cultural dimensions theory. Here he describes national cultures along six dimensions: Power Distance, Individualism, Uncertainty avoidance, Masculinity, Long Term Orientation, and Indulgence vs. restraint. He is known for his books Culture’s Consequences and Cultures and Organizations: Software of the Mind, co-authored with his son Gert Jan Hofstede.[1][2] The latter book deals with organizational culture, which is a different structure from national culture, but also has measurable dimensions, and the same research methodology is used for both.

Dimensions of national cultures

  • Power distance index (PDI): The power distance index is defined as “the extent to which the less powerful members of organizations and institutions (like the family) accept and expect that power is distributed unequally.” In this dimension, inequality and power are perceived from the followers, or the lower level. A higher degree of the Index indicates that hierarchy is clearly established and executed in society, without doubt or reason. A lower degree of the Index signifies that people question authority and attempt to distribute power.[6]

  • Individualism vs. collectivism (IDV): This index explores the “degree to which people in a society are integrated into groups.” Individualistic societies have loose ties that often relate an individual only to his/her immediate family. They emphasize the “I” versus the “we.” Its counterpart, collectivism, describes a society in which tightly-integrated relationships tie extended families and others into in-groups. These in-groups are laced with undoubted loyalty and support each other when a conflict arises with another in-group.[6][7]

  • Uncertainty avoidance index (UAI): The uncertainty avoidance index is defined as “a society’s tolerance for ambiguity,” in which people embrace or avert an event of something unexpected, unknown, or away from the status quo. Societies that score a high degree in this index opt for stiff codes of behavior, guidelines, and laws, and generally rely on absolute Truth, or the belief that one lone Truth dictates everything and people know what it is. A lower degree in this index shows more acceptance of differing thoughts and ideas: society tends to impose fewer regulations, people are more accustomed to ambiguity, and the environment is more free-flowing.[6][7]

  • Masculinity vs. femininity (MAS): In this dimension, masculinity is defined as “a preference in society for achievement, heroism, assertiveness and material rewards for success.” Its counterpart represents “a preference for cooperation, modesty, caring for the weak and quality of life.” Women in the respective societies tend to display different values. In feminine societies, they share modest and caring views equally with men. In more masculine societies, women are somewhat assertive and competitive, but notably less so than men. In other words, those societies still recognize a gap between male and female values. This dimension is frequently viewed as taboo in highly masculine societies.[6][7]

  • Long-term orientation vs. short-term orientation (LTO): This dimension associates the connection of the past with current and future actions/challenges. A lower degree of this index (short-term) indicates that traditions are honored and kept, while steadfastness is valued. Societies with a high degree in this index (long-term) view adaptation and circumstantial, pragmatic problem-solving as a necessity. A poor country that is short-term oriented usually has little to no economic development, while long-term oriented countries continue to develop to a point.[6][7]

  • Indulgence vs. restraint (IND): This dimension is essentially a measure of happiness: whether or not simple joys are fulfilled. Indulgence is defined as “a society that allows relatively free gratification of basic and natural human desires related to enjoying life and having fun.” Its counterpart is defined as “a society that controls gratification of needs and regulates it by means of strict social norms.” Indulgent societies believe themselves to be in control of their own life and emotions; restrained societies believe other factors dictate their life and emotions.[6][7]

Fiedler contingency model

The contingency model by business and management psychologist Fred Fiedler is a contingency theory concerned with the effectiveness of a leader in an organization.

To Fiedler, stress is a key determinant of leader effectiveness (Fiedler and Garcia 1987; Fiedler et al. 1994), and a distinction is made between stress related to the leader’s superior, and stress related to subordinates or the situation itself. In stressful situations, leaders dwell on the stressful relations with others and cannot focus their intellectual abilities on the job. Thus, intelligence is more effective and used more often in stress-free situations. Fiedler concludes that experience impairs performance in low-stress conditions but contributes to performance under high-stress conditions. As with other situational factors, for stressful situations Fiedler recommends altering or engineering the leadership situation to capitalize on the leader’s strengths.

Fiedler’s situational contingency theory holds that group effectiveness depends on an appropriate match between a leader’s style (essentially a trait measure) and the demands of the situation. Fiedler considers situational control, the extent to which a leader can determine what their group is going to do, to be the primary contingency factor in determining the effectiveness of leader behavior.

Fiedler’s contingency model is a dynamic model where the personal characteristics and motivation of the leader are said to interact with the current situation that the group faces. Thus, the contingency model marks a shift away from the tendency to attribute leadership effectiveness to personality alone (Forsyth, 2006).

According to Fiedler, the ability to control the group situation (the second component of the contingency model) is crucial for a leader. This is because only leaders with situational control can be confident that their orders and suggestions will be carried out by their followers. Leaders who are unable to assume control over the group situation cannot be sure that the members they are leading will execute their commands. Because situational control is critical to leadership efficacy, Fiedler broke this factor down into three major components: leader-member relations, task structure, and position power (Forsyth, 2006). Moreover, there is no ideal leader. Both low-LPC (task-oriented) and high-LPC (relationship-oriented) leaders can be effective if their leadership orientation fits the situation. The contingency theory allows for predicting the characteristics of the appropriate situations for effectiveness. Three situational components determine the favourableness of situational control:

  1. Leader-Member Relations, referring to the degree of mutual trust, respect and confidence between the leader and the subordinates. When leader-member relations in the group are poor, the leader has to shift focus away from the group task in order to regulate behavior and conflict within the group (Forsyth, 2006).
  2. Task Structure, referring to the extent to which group tasks are clear and structured. When task structure is low (unstructured), group tasks are ambiguous, with no clear solution or correct approach to complete the goal. In contrast, when task structure is high (structured), the group goal is clear, unambiguous and straightforward: members have a clear idea about how to approach and reach the goal (Forsyth, 2006).
  3. Leader Position Power, referring to the power inherent in the leader’s position itself.

When there are good leader-member relations, a highly structured task, and high leader position power, the situation is considered a “favorable situation.” Fiedler found that low-LPC leaders are more effective in extremely favourable or unfavourable situations, whereas high-LPC leaders perform best in situations with intermediate favourability. Leaders in high positions of power have the ability to distribute resources among their members, meaning they can reward and punish their followers. Leaders in low position power cannot control resources to the same extent as leaders in high power, and so lack the same degree of situational control. For example, the CEO of a business has high position power, because she is able to increase and reduce the salary that her employees receive. On the other hand, an office worker in this same business has low position power, because although they may be the leader on a new business deal, they cannot control the situation by rewarding or disciplining their colleagues with salary changes (Forsyth, 2006).
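
The eight combinations of these three components, together with the matching rule just stated (task-oriented at the favourable and unfavourable extremes, relationship-oriented in between), can be enumerated. The sketch below is a rough illustration of that rule of thumb, not Fiedler's exact published octant tables; the scoring scheme is an assumption made for the example.

```python
# Rough sketch of Fiedler's situational-favourability logic.
# Each situation is a triple: (leader-member relations good?,
# task structured?, position power strong?).
from itertools import product

def favourability(relations, structure, power):
    """Count favourable components: 3 = most favourable, 0 = least."""
    return sum([relations, structure, power])

for relations, structure, power in product([True, False], repeat=3):
    score = favourability(relations, structure, power)
    # Rule of thumb from the text: low-LPC (task-oriented) at the extremes,
    # high-LPC (relationship-oriented) in the intermediate zone.
    style = "task-oriented (low LPC)" if score in (0, 3) else "relationship-oriented (high LPC)"
    print(f"relations={relations!s:5} structure={structure!s:5} power={power!s:5} -> {style}")
```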

Leader-situation match and mismatch

Since personality is relatively stable, though it can be changed, the contingency model suggests that improving effectiveness requires changing the situation to fit the leader. This is called “job engineering” or “job restructuring.” The organization or the leader may increase or decrease task structure and position power, and training and group development may improve leader-member relations. In his 1976 book Improving Leadership Effectiveness: The Leader Match Concept, Fiedler (with Martin Chemers and Linda Mahar) offers a self-paced leadership training programme designed to help leaders alter the favourableness of the situation, or situational control.

Examples

  • Task-oriented leadership would be advisable in a natural disaster, like a flood or fire. In an uncertain situation the leader-member relations are usually poor, the task is unstructured, and the position power is weak. The one who emerges as a leader to direct the group’s activity usually does not know subordinates personally. The task-oriented leader who gets things accomplished proves to be the most successful. If the leader is considerate (relationship-oriented), they may waste so much time in the disaster that things get out of control and lives are lost.
  • Blue-collar workers generally want to know exactly what they are supposed to do. Therefore, their work environment is usually highly structured. The leader’s position power is strong if management backs their decisions. Finally, even though the leader may not be relationship-oriented, leader-member relations may be extremely strong if they can gain promotions and salary increases for subordinates. In these situations the task-oriented style of leadership is preferred over the (considerate) relationship-oriented style.
  • The considerate (relationship-oriented) style of leadership can be appropriate in an environment where the situation is moderately favorable or certain, for example, when (1) leader-member relations are good, (2) the task is unstructured, and (3) position power is weak. Situations like this exist with research scientists, who do not like superiors to structure the task for them. They prefer to follow their own creative leads in order to solve problems. In a situation like this a considerate style of leadership is preferred over the task-oriented one.

Multitasking

“Our brains are evolving to multitask,” not! The illusion of multitasking

By Allan Goldstein
Originally published July 2011; revised April 2015

Human multitasking is the apparent human ability to perform more than one task, or activity, over a short period of time. An example of multitasking is taking phone calls while typing an email and reading a book. Multitasking can result in time wasted due to human context switching, and it apparently causes more errors due to insufficient attention. However, studies have shown that some people can be trained to multitask, with measured changes in brain activity accompanying improved performance on multiple tasks. Multitasking can also be assisted with coordination techniques, such as taking notes periodically, or logging current status during an interruption to help resume a prior task midway.

Since the 1960s, psychologists have conducted experiments on the nature and limits of human multitasking. The simplest experimental design used to investigate human multitasking is the so-called psychological refractory period effect. Here, people are asked to make separate responses to each of two stimuli presented close together in time. An extremely general finding is a slowing in responses to the second-appearing stimulus.

Researchers have long suggested that there appears to be a processing bottleneck preventing the brain from working on certain key aspects of both tasks at the same time (e.g., Gladstones, Regan & Lee 1989; Pashler 1994). Many researchers believe that the cognitive function subject to the most severe form of bottlenecking is the planning of actions and retrieval of information from memory.[3] Psychiatrist Edward M. Hallowell[4] has gone so far as to describe multitasking as a “mythical activity in which people believe they can perform two or more tasks simultaneously as effectively as one.” On the other hand, there is good evidence that people can monitor many perceptual streams at the same time, and carry out perceptual and motor functions at the same time.

Although the idea that women are better multitaskers than men has been popular in the media as well in conventional thought, there is very little data available to support claims of a real sex difference. Most studies that do show any sex differences tend to find that the differences are small and inconsistent.[14]

A study by psychologist Keith Laws was widely reported in the press to have provided the first evidence of female multitasking superiority.

Rapidly increasing technology fosters multitasking because it promotes multiple sources of input at a given time. Instead of exchanging old equipment like TV, print, and music, for new equipment such as computers, the Internet, and video games, children and teens combine forms of media and continually increase sources of input.[23] According to studies by the Kaiser Family Foundation, in 1999 only 16 percent of time spent using media such as internet, television, video games, telephones, text-messaging, or e-mail was combined. In 2005, 26 percent of the time these media were used together.[10] This increase in simultaneous media usage decreases the amount of attention paid to each device. In 2005 it was found that 82 percent of American youth use the Internet by the seventh grade in school.[24] A 2005 survey by the Kaiser Family Foundation found that, while their usage of media continued at a constant 6.5 hours per day, Americans ages 8 to 18 were crowding roughly 8.5 hours’ worth of media into their days due to multitasking. The survey showed that one quarter to one third of the participants have more than one input “most of the time” while watching television, listening to music, or reading.[8] The 2007 Harvard Business Review featured Linda Stone’s idea of “continuous partial attention,” or, “constantly scanning for opportunities and staying on top of contacts, events, and activities in an effort to miss nothing”.[10] As technology provides more distractions, attention is spread among tasks more thinly.

A prevalent example of this inattention to detail due to multitasking is apparent when people talk on cellphones while driving. One study found that having an accident is four times more likely when using a cell phone while driving.[25] Another study compared reaction times for experienced drivers during a number of tasks and found that the subjects reacted more slowly to brake lights and stop signs during phone conversations than during other simultaneous tasks.[25] A 2006 study showed that drivers talking on cell phones were more involved in rear-end collisions and were slower to speed up again than intoxicated drivers.[26] When talking, people must withdraw their attention from the road in order to formulate responses. Because the brain cannot focus on two sources of input at one time, driving and listening or talking, the constantly changing input provided by cell phones distracts the brain and increases the likelihood of accidents.

The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information[1] is one of the most highly cited papers in psychology.[2][3][4] It was published in 1956 in Psychological Review by the cognitive psychologist George A. Miller, then of Harvard University’s Department of Psychology. It is often interpreted to argue that the number of objects an average human can hold in working memory is 7 ± 2. This is frequently referred to as Miller’s Law.

In his article, Miller discussed a coincidence between the limits of one-dimensional absolute judgment and the limits of short-term memory. In a one-dimensional absolute-judgment task, a person is presented with a number of stimuli that vary on one dimension (e.g., 10 different tones varying only in pitch) and responds to each stimulus with a corresponding response (learned before). Performance is nearly perfect up to five or six different stimuli but declines as the number of different stimuli is increased. The task can be described as one of information transmission: The input consists of one out of n possible stimuli, and the output consists of one out of n responses. The information contained in the input can be determined by the number of binary decisions that need to be made to arrive at the selected stimulus, and the same holds for the response. Therefore, people’s maximum performance on one-dimensional absolute judgement can be characterized as an information channel capacity with approximately 2 to 3 bits of information, which corresponds to the ability to distinguish between four and eight alternatives.
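
The arithmetic behind those figures is just base-2 logarithms: singling out one of n equally likely alternatives carries log2(n) bits of information, so a channel capacity of 2 to 3 bits corresponds to distinguishing 4 to 8 alternatives. A quick sketch (the decimal-digit line anticipates the memory-span figures in the next paragraph):

```python
import math

# log2(n) bits are needed to single out one of n equally likely alternatives.
for n in [2, 4, 8, 10]:
    print(f"{n:>2} alternatives = {math.log2(n):.2f} bits")
#  2 alternatives = 1.00 bits   (a binary digit)
#  4 alternatives = 2.00 bits   (bottom of the 2-3 bit capacity range)
#  8 alternatives = 3.00 bits   (top of the 2-3 bit capacity range)
# 10 alternatives = 3.32 bits   (a decimal digit, as Miller notes)
```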

The second cognitive limitation Miller discusses is memory span. Memory span refers to the longest list of items (e.g., digits, letters, words) that a person can repeat back immediately after presentation in correct order on 50% of trials. Miller observed that the memory span of young adults is approximately seven items. He noticed that memory span is approximately the same for stimuli with vastly different amounts of information: for instance, binary digits have 1 bit each, decimal digits have 3.32 bits each, and words have about 10 bits each. Miller concluded that memory span is not limited in terms of bits but rather in terms of chunks. A chunk is the largest meaningful unit in the presented material that the person recognizes—thus, what counts as a chunk depends on the knowledge of the person being tested. For instance, a word is a single chunk for a speaker of the language but is many chunks for someone who is totally unfamiliar with the language and sees the word as a collection of phonetic segments.

Miller recognized that the correspondence between the limits of one-dimensional absolute judgment and of short-term memory span was only a coincidence, because only the first limit, not the second, can be characterized in information-theoretic terms (i.e., as a roughly constant number of bits). Therefore, there is nothing “magical” about the number seven, and Miller used the expression only rhetorically. Nevertheless, the idea of a “magical number 7” inspired much theorizing, rigorous and less rigorous, about the capacity limits of human cognition.

Later research on short-term memory and working memory revealed that memory span is not a constant even when measured in a number of chunks. The number of chunks a human can recall immediately after presentation depends on the category of chunks used (e.g., span is around seven for digits, around six for letters, and around five for words), and even on features of the chunks within a category. Chunking is used by the brain’s short-term memory as a method for keeping groups of information accessible for easy recall. It functions and works best as labels that one is already familiar with—the incorporation of new information into a label that is already well rehearsed into one’s long-term memory. These chunks must store the information in such a way that they can be disassembled into the necessary data.[5] The storage capacity is dependent on the information being stored. For instance, span is lower for long words than it is for short words. In general, memory span for verbal contents (digits, letters, words, etc.) strongly depends on the time it takes to speak the contents aloud. Some researchers have therefore proposed that the limited capacity of short-term memory for verbal material is not a “magic number” but rather a “magic spell”.[6] Baddeley used this finding to postulate that one component of his model of working memory, the phonological loop, is capable of holding around 2 seconds of sound.[7][8] However, the limit of short-term memory cannot easily be characterized as a constant “magic spell” either, because memory span depends also on other factors besides speaking duration. For instance, span depends on the lexical status of the contents (i.e., whether the contents are words known to the person or not).[9] Several other factors also affect a person’s measured span, and therefore it is difficult to pin down the capacity of short-term or working memory to a number of chunks. Nonetheless, Cowan has proposed that working memory has a capacity of about four chunks in young adults (and less in children and older adults).[10]

Tarnow finds that in a classic experiment by Murdock, typically argued to support a four-item buffer, there is in fact no evidence for one, and thus the “magical number,” at least in the Murdock experiment, is 1.[11][12] Other prominent theories of short-term memory capacity argue against measuring capacity in terms of a fixed number of elements.[13][14]

Chunking in psychology is a process by which individual pieces of information are bound together into a meaningful whole (Neath & Surprenant, 2003). A chunk is defined as a familiar collection of more elementary units that have been inter-associated and stored in memory repeatedly and act as a coherent, integrated group when retrieved (Tulving & Craik, 2000). For example, instead of remembering strings of letters such as “Y-M-C-A-I-B-M-D-H-L”, it is easier to remember the chunks “YMCA-IBM-DHL” consisting of the same letters. Chunking uses one’s knowledge to reduce the number of items that need to be encoded. Thus, chunks are often meaningful to the participant.

It is believed that individuals create higher order cognitive representations of the items on the list that are more easily remembered as a group than as individual items themselves. Representations of these groupings are highly subjective, as they depend critically on the individual’s perception of the features of the items and the individual’s semantic network. The size of the chunks generally ranges anywhere from two to six items, but differs based on language and culture (Vecchi, Monticelli, & Cornoldi, 1995).
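
A minimal sketch of the recoding idea, using the letter-string example above: the stream is greedily matched against a small dictionary of familiar units (hypothetical for this reader), reducing ten letters to three chunks.

```python
# Greedy chunking: recode a letter stream using familiar units, longest first.
KNOWN_CHUNKS = ["YMCA", "IBM", "DHL"]  # "familiar collections" for this reader

def chunk(stream, known=KNOWN_CHUNKS):
    chunks, i = [], 0
    while i < len(stream):
        for unit in sorted(known, key=len, reverse=True):
            if stream.startswith(unit, i):
                chunks.append(unit)
                i += len(unit)
                break
        else:  # no familiar unit matches: fall back to a single letter
            chunks.append(stream[i])
            i += 1
    return chunks

print(chunk("YMCAIBMDHL"))  # ['YMCA', 'IBM', 'DHL'] -> 3 chunks, not 10 letters
print(chunk("XQZ"))         # ['X', 'Q', 'Z'] for an unfamiliar string
```

The second call illustrates the point in the text: for someone without the relevant knowledge, the same number of letters costs far more chunks.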


Stanford Connects talk, published Oct 3, 2013: https://stanfordconnects.stanford.edu/

Technology continues to evolve and to play a larger role in all of our daily lives. This huge growth in media (television, computers and smartphones) has changed the way in which we use media. More devices have created a world of multitaskers, and in this talk Professor Cliff Nass explores what this means for our society.

Clifford Nass is the Thomas M. Storke Professor at Stanford University with appointments in communication; computer science; education; law; science, technology and society; and symbolic systems. He directs the Communication between Humans and Interactive Media (CHIMe) Lab, focusing on the psychology and design of how people interact with technology, and the Revs Program at Stanford, a transdisciplinary approach to the past, present and future of the automobile. Professor Nass has written three books: The Media Equation, Wired for Speech and The Man Who Lied to His Laptop. He has consulted on the design of over 250 media products and services.

Much recent neuroscience research tells us that the brain doesn’t really do tasks simultaneously, as we thought (hoped) it might.

Here’s the test:

  1. Draw two horizontal lines on a piece of paper.
  2. Now, have someone time you as you carry out the two tasks that follow.
  3. On the first line, write: I am a great multitasker
  4. On the second line, write out the numbers 1-20 sequentially, like those below:
     1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
How much time did it take to do the two tasks? Usually it’s about 20 seconds.

Now, let’s multitask.

Draw two more horizontal lines. This time, again with someone timing you, write a letter on one line, then a number on the line below, then the next letter in the sentence on the upper line, and then the next number in the sequence, switching from line to line. In other words, you write the letter “I” and then the number “1”, then the letter “a” and then the number “2”, and so on, until you complete both lines.

I a…..

1 2…..

Successful people’s advice

Survivorship bias, or survival bias, is the logical error of concentrating on the people or things that “survived” some process and inadvertently overlooking those that did not because of their lack of visibility. This can lead to false conclusions in several different ways. The survivors may be actual people, as in a medical study, or could be companies or research subjects or applicants for a job, or anything that must make it past some selection process to be considered further.

Survivorship bias can lead to overly optimistic beliefs because failures are ignored, such as when companies that no longer exist are excluded from analyses of financial performance. It can also lead to the false belief that the successes in a group have some special property, rather than just coincidence. For example, if three of the five students with the best college grades went to the same high school, that can lead one to believe that the high school must offer an excellent education. This could be true, but the question cannot be answered without looking at the grades of all the other students from that high school, not just the ones who “survived” the top-five selection process.

Survivorship bias is a type of selection bias.
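
The bias is easy to reproduce in a toy simulation. The sketch below uses entirely made-up numbers: funds with identically distributed returns, where poor performers are closed and dropped before the average is taken.

```python
import random
from statistics import mean

random.seed(1)

# 1,000 hypothetical funds with identically distributed annual returns:
# mean 5%, standard deviation 10%. All figures are invented for illustration.
returns = [random.gauss(0.05, 0.10) for _ in range(1000)]

# Funds returning below -5% are closed and vanish from later analyses.
survivors = [r for r in returns if r > -0.05]

print(f"all funds: mean return {mean(returns):.1%} (n={len(returns)})")
print(f"survivors: mean return {mean(survivors):.1%} (n={len(survivors)})")
# The survivor average overstates the cohort average purely because of the
# selection step; no fund ever had a better-than-average strategy.
```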

In Search of Excellence is an international bestselling book written by Tom Peters and Robert H. Waterman Jr.

First published in 1982, it is one of the biggest selling business books ever, selling 3 million copies in its first four years, and being the most widely held monograph in the United States from 1989 to 2006 (WorldCat data).

The book purports to explore the art and science of management used by several 1980s companies.