Life Cycle Analysis

Life Cycle Analysis (LCA) is hard to do. If you Google it, the most popular links are dated. LCA is a good idea that is difficult to manage because supply chains are complex and conflicting criteria must be applied to select the best option in each link of the chain.

The goal of LCA is to compare the full range of environmental effects assignable to products and services by quantifying all inputs and outputs of material flows and assessing how these material flows affect the environment. This information is used to improve processes, support policy and provide a sound basis for informed decisions.

The term life cycle refers to the notion that a fair, holistic assessment requires the assessment of raw-material production, manufacture, distribution, use and disposal including all intervening transportation steps necessary or caused by the product’s existence.

The procedures of life cycle assessment (LCA) are part of the ISO 14000 environmental management standards: ISO 14040:2006 and ISO 14044:2006. (ISO 14044 replaced the earlier ISO 14041 through ISO 14043.) GHG product life cycle assessments can also comply with specifications such as PAS 2050 and the GHG Protocol Life Cycle Accounting and Reporting Standard.

To implement a sustainable supply chain, companies must adopt a holistic, systems-based approach in which all the supply chain partners’ activities are integrated throughout the four basic lifecycle stages: pre-manufacturing, manufacturing, use and post-use.
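
To make the accounting concrete, here is a minimal sketch of stage-by-stage aggregation in Python. The `Stage` structure and all of the numbers are invented for illustration; a real LCA would draw on a vetted inventory database, not hard-coded constants.

    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        energy_mj: float  # cumulative energy demand for the stage, in MJ
        co2_kg: float     # greenhouse-gas emissions for the stage, in kg CO2-eq

    # Hypothetical inventory for one unit of product (illustrative values only).
    stages = [
        Stage("pre-manufacturing", energy_mj=120.0, co2_kg=9.5),
        Stage("manufacturing",     energy_mj=80.0,  co2_kg=6.1),
        Stage("use",               energy_mj=300.0, co2_kg=22.0),
        Stage("post-use",          energy_mj=15.0,  co2_kg=1.2),
    ]

    total_energy = sum(s.energy_mj for s in stages)
    total_co2 = sum(s.co2_kg for s in stages)
    print(f"Cradle-to-grave totals: {total_energy:.0f} MJ, {total_co2:.1f} kg CO2-eq")
    for s in stages:
        print(f"  {s.name:<17} {100 * s.co2_kg / total_co2:5.1f}% of CO2")

In this invented example the use stage dominates; the point of the exercise is that every link of the chain must report comparable numbers before such totals mean anything.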

In 2009 Walmart announced its intention to create a new sustainable product index that would establish a single source of data for evaluating the sustainability of products. The company said it would provide initial funding for a consortium of universities, suppliers, retailers, government organizations and NGOs “to develop a global database of information on the lifecycle of products – from raw materials to disposal.”

LCA attempts to replace monetary cost with an energy currency. Energy efficiency is only one consideration in deciding which alternative process to employ, and it should not be elevated to the sole criterion for determining environmental acceptability; for example, a simple energy analysis does not take into account the renewability of energy flows or the toxicity of waste products. Life cycle assessment does, however, help companies become more familiar with environmental properties and improve their environmental footprint.

The literature on life cycle assessment of energy technology has begun to reflect the interactions between the current electrical grid and future energy technology. Some papers have focused on energy life cycle, while others have focused on carbon dioxide (CO2) and other greenhouse gases. The essential critique given by these sources is that when considering energy technology, the growing nature of the power grid must be taken into consideration. If this is not done, a given class of energy technology may emit more CO2 over its lifetime than it mitigates.

A problem the energy analysis method cannot resolve is that different energy forms (heat, electricity, chemical energy, etc.) have different quality and value even in the natural sciences, as a consequence of the two main laws of thermodynamics. A thermodynamic measure of the quality of energy is exergy. According to the first law of thermodynamics, all energy inputs should be counted with equal weight, whereas by the second law diverse energy forms should be valued differently.
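
To make the second-law point concrete with a worked equation: the exergy of a heat flow Q delivered at temperature T, with surroundings at temperature T0, is Ex = Q(1 − T0/T), the Carnot factor. (The temperatures below are illustrative round numbers.) With surroundings at 300 K, one megajoule of heat at 1000 K carries 0.7 MJ of exergy, the same megajoule at 330 K carries only about 0.09 MJ, and one megajoule of electricity, being pure work, counts at its full value. A first-law energy balance would weight all three inputs equally; an exergy analysis would not.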

A recent article by Noya et al. (2018) illustrates the complexities and methodologies of LCA.

References

Bredenberg, A. (2012, March 20). Lifecycle Assessment in Sustainable Supply Chains. Retrieved from ThomasNet.com: https://news.thomasnet.com/imt/2012/03/20/lifecycle-assessment-in-sustainable-supply-chains

Gestring, I. (2017). Life Cycle and Supply Chain Management for Sustainable Bins. Retrieved from ScienceDirect: https://doi.org/10.1016/j.proeng.2017.06.041

Noya, I., et al. (2018, January). An environmental evaluation of food supply chain using life cycle assessment: A case study on gluten free biscuit products. Retrieved from ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0959652617319777

Suter, F., Steubing, B., & Hellweg, S. (2016, September 26). Life Cycle Impacts and Benefits of Wood along the Value Chain: The Case of Switzerland. Retrieved from Wiley Online Library: http://onlinelibrary.wiley.com/doi/10.1111/jiec.12486/full

United Nations Environment Programme. (2009). Life Cycle Management. Dublin: UNEP. Retrieved from http://www.unep.fr/shared/publications/pdf/DTIx1208xPA-LifeCycleApproach-Howbusinessusesit.pdf

Wharton. (2010, March 3). The Business Case for Lifecycle Analysis and Building a Green Supply Chain. Retrieved from Knowledge@Wharton: http://knowledge.wharton.upenn.edu/article/the-business-case-for-lifecycle-analysis-and-building-a-green-supply-chain/

Wikipedia. (2018, January 22). Life-cycle assessment. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Life-cycle_assessment


Retail Case Study

Regression: the Mother of all Models – Retail Case Study Example (Part 9)


Retail case study example for marketing analytics.

Problem definition:  Part 1 & Part 2
Description: Part 3
Association: Part 4
Classification: Part 5, Part 6,  Part 7 & Part 8


In this part, we will learn about estimation through the mother of all models: multiple linear regression. A sound understanding of regression analysis and modeling provides a solid foundation for analysts to gain a deeper understanding of virtually every other modeling technique, such as neural networks and logistic regression.
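
As a minimal sketch of what such a model looks like in practice (the variables `income`, `visits` and `spend` are invented for illustration and are not the case study’s actual dataset):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Simulate a hypothetical retail dataset: monthly spend driven by
    # customer income and number of store visits, plus noise.
    rng = np.random.default_rng(0)
    n = 200
    income = rng.normal(50, 12, n)   # income in thousands (invented)
    visits = rng.poisson(6, n)       # store visits per month (invented)
    spend = 20 + 1.5 * income + 8.0 * visits + rng.normal(0, 10, n)

    # Fit spend ~ income + visits by ordinary least squares.
    X = np.column_stack([income, visits])
    model = LinearRegression().fit(X, spend)

    print("Intercept:", round(model.intercept_, 2))
    print("Coefficients (income, visits):", np.round(model.coef_, 2))
    print("R^2:", round(model.score(X, spend), 3))

Because the data were simulated from known coefficients, the fitted values land close to 1.5 and 8.0, which is exactly the sanity check one would run before trusting a regression pipeline on real data.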

Fiedler contingency model

The contingency model by business and management psychologist Fred Fiedler is a contingency theory concerned with the effectiveness of a leader in an organization.

To Fiedler, stress is a key determinant of leader effectiveness (Fiedler and Garcia 1987; Fiedler et al. 1994), and a distinction is made between stress related to the leader’s superior, and stress related to subordinates or the situation itself. In stressful situations, leaders dwell on the stressful relations with others and cannot focus their intellectual abilities on the job. Thus, intelligence is more effective and used more often in stress-free situations. Fiedler concludes that experience impairs performance in low-stress conditions but contributes to performance under high-stress conditions. As with other situational factors, for stressful situations Fiedler recommends altering or engineering the leadership situation to capitalize on the leader’s strengths.

Fiedler’s situational contingency theory holds that group effectiveness depends on an appropriate match between a leader’s style (essentially a trait measure) and the demands of the situation. Fiedler considers situational control, the extent to which a leader can determine what their group is going to do, to be the primary contingency factor in determining the effectiveness of leader behavior.

Fiedler’s contingency model is a dynamic model where the personal characteristics and motivation of the leader are said to interact with the current situation that the group faces. Thus, the contingency model marks a shift away from the tendency to attribute leadership effectiveness to personality alone (Forsyth, 2006).

According to Fiedler, the ability to control the group situation (the second component of the contingency model) is crucial for a leader. This is because only leaders with situational control can be confident that their orders and suggestions will be carried out by their followers. Leaders who are unable to assume control over the group situation cannot be sure that the members they are leading will execute their commands. Because situational control is critical to leadership efficacy, Fiedler broke this factor down into three major components: leader-member relations, task structure, and position power (Forsyth, 2006). Moreover, there is no ideal leader. Both low-LPC (task-oriented) and high-LPC (relationship-oriented) leaders can be effective if their leadership orientation fits the situation. The contingency theory allows for predicting the characteristics of the appropriate situations for effectiveness. Three situational components determine the favorableness of situational control:

  1. Leader-Member Relations, referring to the degree of mutual trust, respect and confidence between the leader and the subordinates. When leader-member relations in the group are poor, the leader has to shift focus away from the group task in order to regulate behavior and conflict within the group (Forsyth, 2006).
  2. Task Structure, referring to the extent to which group tasks are clear and structured. When task structure is low (unstructured), group tasks are ambiguous, with no clear solution or correct approach to complete the goal. In contrast, when task structure is high (structured), the group goal is clear, unambiguous and straightforward: members have a clear idea about the how to approach and reach the goal (Forsyth, 2006).
  3. Leader Position Power, referring to the power inherent in the leader’s position itself.

When there is a good leader-member relation, a highly structured task, and high leader position power, the situation is considered a “favorable situation.” Fiedler found that low-LPC leaders are more effective in extremely favorable or unfavorable situations, whereas high-LPC leaders perform best in situations with intermediate favorability. Leaders in high positions of power have the ability to distribute resources among their members, meaning they can reward and punish their followers. Leaders in low position power cannot control resources to the same extent as leaders in high power, and so lack the same degree of situational control. For example, the CEO of a business has high position power, because she is able to increase and reduce the salary that her employees receive. On the other hand, an office worker in this same business has low position power, because although they may be the leader on a new business deal, they cannot control the situation by rewarding or disciplining their colleagues with salary changes (Forsyth, 2006).
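
The matching rule can be sketched in a few lines of code. This is a deliberately crude encoding: it collapses Fiedler’s eight situational octants into a simple count of favorable factors, so it mirrors the summary above rather than Fiedler’s actual LPC instrument.

    def recommended_style(good_relations: bool, structured_task: bool,
                          strong_position: bool) -> str:
        """Toy version of Fiedler's matching rule: task-oriented (low-LPC)
        leaders fit extreme situations; relationship-oriented (high-LPC)
        leaders fit situations of intermediate favorability."""
        favorability = sum([good_relations, structured_task, strong_position])
        if favorability in (3, 0):  # extremely favorable or unfavorable
            return "task-oriented (low-LPC)"
        return "relationship-oriented (high-LPC)"

    # The natural-disaster example from the Examples section below:
    # poor relations, unstructured task, weak position power.
    print(recommended_style(False, False, False))  # task-oriented (low-LPC)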

Leader-situation match and mismatch

Since personality is relatively stable, though it can be changed, the contingency model suggests that improving effectiveness requires changing the situation to fit the leader. This is called “job engineering” or “job restructuring”. The organization or the leader may increase or decrease task structure and position power; training and group development may also improve leader-member relations. In his 1976 book Improving Leadership Effectiveness: The Leader Match Concept, Fiedler (with Martin Chemers and Linda Mahar) offers a self-paced leadership training program designed to help leaders alter the favorableness of the situation, or situational control.

Examples

  • Task-oriented leadership would be advisable in a natural disaster, like a flood or fire. In an uncertain situation the leader-member relations are usually poor, the task is unstructured, and the position power is weak. The one who emerges as a leader to direct the group’s activity usually does not know subordinates personally. The task-oriented leader who gets things accomplished proves to be the most successful. If the leader is considerate (relationship-oriented), they may waste so much time in the disaster that things get out of control and lives are lost.
  • Blue-collar workers generally want to know exactly what they are supposed to do. Therefore, their work environment is usually highly structured. The leader’s position power is strong if management backs their decision. Finally, even though the leader may not be relationship-oriented, leader-member relations may be extremely strong if they can gain promotions and salary increases for subordinates. Under these situations the task-oriented style of leadership is preferred over the (considerate) relationship-oriented style.
  • The considerate (relationship-oriented) style of leadership can be appropriate in an environment where the situation is moderately favorable or certain. For example, when (1) leader-member relations are good, (2) the task is unstructured, and (3) position power is weak. Situations like this exist with research scientists, who do not like superiors to structure the task for them. They prefer to follow their own creative leads in order to solve problems. In a situation like this a considerate style of leadership is preferred over the task-oriented.

Multitasking

“Our brains are evolving to multitask,” not! The ill-usion of multitasking

By Allan Goldstein
Originally published July 2011; revised April 2015

Human multitasking is the apparent human ability to perform more than one task, or activity, over a short period of time. An example of multitasking is taking phone calls while typing an email and reading a book. Multitasking can result in time wasted due to human context switching and can apparently cause more errors due to insufficient attention. However, studies have shown that some people can be trained to multitask, with measured changes in brain activity accompanying improved performance on multiple tasks. Multitasking can also be assisted by coordination techniques, such as taking notes periodically, or logging current status during an interruption to help resume a prior task midway.

Since the 1960s, psychologists have conducted experiments on the nature and limits of human multitasking. The simplest experimental design used to investigate human multitasking is the so-called psychological refractory period effect. Here, people are asked to make separate responses to each of two stimuli presented close together in time. An extremely general finding is a slowing in responses to the second-appearing stimulus.

Researchers have long suggested that there appears to be a processing bottleneck preventing the brain from working on certain key aspects of both tasks at the same time (e.g., Gladstones, Regan & Lee 1989; Pashler 1994). Many researchers believe that the cognitive function subject to the most severe form of bottlenecking is the planning of actions and retrieval of information from memory.[3] Psychiatrist Edward M. Hallowell[4] has gone so far as to describe multitasking as a “mythical activity in which people believe they can perform two or more tasks simultaneously as effectively as one.” On the other hand, there is good evidence that people can monitor many perceptual streams at the same time, and carry out perceptual and motor functions at the same time.

Although the idea that women are better multitaskers than men has been popular in the media as well in conventional thought, there is very little data available to support claims of a real sex difference. Most studies that do show any sex differences tend to find that the differences are small and inconsistent.[14]

A study by psychologist Keith Laws was widely reported in the press to have provided the first evidence of female multitasking superiority.

Rapidly advancing technology fosters multitasking because it promotes multiple sources of input at a given time. Instead of exchanging old equipment like TV, print, and music for new equipment such as computers, the Internet, and video games, children and teens combine forms of media and continually increase sources of input.[23] According to studies by the Kaiser Family Foundation, in 1999 only 16 percent of time spent using media such as the internet, television, video games, telephones, text-messaging, or e-mail was combined; by 2005, the figure had risen to 26 percent.[10] This increase in simultaneous media usage decreases the amount of attention paid to each device. In 2005 it was found that 82 percent of American youth use the Internet by the seventh grade.[24] A 2005 survey by the Kaiser Family Foundation found that, while their usage of media continued at a constant 6.5 hours per day, Americans ages 8 to 18 were crowding roughly 8.5 hours’ worth of media into their days due to multitasking. The survey showed that one quarter to one third of the participants had more than one input “most of the time” while watching television, listening to music, or reading.[8] In 2007 the Harvard Business Review featured Linda Stone’s idea of “continuous partial attention”: “constantly scanning for opportunities and staying on top of contacts, events, and activities in an effort to miss nothing”.[10] As technology provides more distractions, attention is spread among tasks more thinly.

A prevalent example of this inattention to detail due to multitasking is apparent when people talk on cellphones while driving. One study found that having an accident is four times more likely when using a cell phone while driving.[25] Another study compared reaction times for experienced drivers during a number of tasks, and found that the subjects reacted more slowly to brake lights and stop signs during phone conversations than during other simultaneous tasks.[25] A 2006 study showed that drivers talking on cell phones were involved in more rear-end collisions and accelerated more slowly than intoxicated drivers.[26] When talking, people must withdraw their attention from the road in order to formulate responses. Because the brain cannot focus on two sources of input at one time (driving, and listening or talking), the constantly changing input provided by cell phones distracts the brain and increases the likelihood of accidents.

The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information[1] is one of the most highly cited papers in psychology.[2][3][4] It was published in 1956 by the cognitive psychologist George A. Miller of Princeton University’s Department of Psychology in Psychological Review. It is often interpreted to argue that the number of objects an average human can hold in working memory is 7 ± 2. This is frequently referred to as Miller’s Law.

In his article, Miller discussed a coincidence between the limits of one-dimensional absolute judgment and the limits of short-term memory. In a one-dimensional absolute-judgment task, a person is presented with a number of stimuli that vary on one dimension (e.g., 10 different tones varying only in pitch) and responds to each stimulus with a corresponding response (learned before). Performance is nearly perfect up to five or six different stimuli but declines as the number of different stimuli is increased. The task can be described as one of information transmission: The input consists of one out of n possible stimuli, and the output consists of one out of n responses. The information contained in the input can be determined by the number of binary decisions that need to be made to arrive at the selected stimulus, and the same holds for the response. Therefore, people’s maximum performance on one-dimensional absolute judgement can be characterized as an information channel capacity with approximately 2 to 3 bits of information, which corresponds to the ability to distinguish between four and eight alternatives.
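
The bits-to-alternatives arithmetic here is just a base-2 logarithm, which is easy to verify (the 2.5-bit figure below is a round illustrative value inside Miller’s 2-to-3-bit range):

    import math

    # Distinguishing among n equally likely alternatives carries log2(n) bits.
    for n in (4, 6, 8):
        print(f"{n} alternatives = {math.log2(n):.2f} bits")

    # Conversely, a channel capacity of about 2.5 bits corresponds to
    # roughly 2**2.5, i.e. between five and six, distinguishable alternatives.
    print(f"2.5 bits = {2 ** 2.5:.1f} alternatives")

    # The same arithmetic underlies the per-item figures quoted below:
    # one decimal digit carries log2(10) bits.
    print(f"one decimal digit = {math.log2(10):.2f} bits")  # 3.32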

The second cognitive limitation Miller discusses is memory span. Memory span refers to the longest list of items (e.g., digits, letters, words) that a person can repeat back immediately after presentation in the correct order on 50% of trials. Miller observed that the memory span of young adults is approximately seven items. He noticed that memory span is approximately the same for stimuli with vastly different amounts of information—for instance, binary digits have 1 bit each; decimal digits have 3.32 bits each; words have about 10 bits each. Miller concluded that memory span is not limited in terms of bits but rather in terms of chunks. A chunk is the largest meaningful unit in the presented material that the person recognizes—thus, what counts as a chunk depends on the knowledge of the person being tested. For instance, a word is a single chunk for a speaker of the language but is many chunks for someone who is totally unfamiliar with the language and sees the word as a collection of phonetic segments.

Miller recognized that the correspondence between the limits of one-dimensional absolute judgment and of short-term memory span was only a coincidence, because only the first limit, not the second, can be characterized in information-theoretic terms (i.e., as a roughly constant number of bits). Therefore, there is nothing “magical” about the number seven, and Miller used the expression only rhetorically. Nevertheless, the idea of a “magical number 7” inspired much theorizing, rigorous and less rigorous, about the capacity limits of human cognition.

Later research on short-term memory and working memory revealed that memory span is not a constant even when measured in a number of chunks. The number of chunks a human can recall immediately after presentation depends on the category of chunks used (e.g., span is around seven for digits, around six for letters, and around five for words), and even on features of the chunks within a category. Chunking is used by the brain’s short-term memory as a method for keeping groups of information accessible for easy recall. It works best with labels one is already familiar with: the incorporation of new information into a label that is already well rehearsed in one’s long-term memory. These chunks must store the information in such a way that they can be disassembled into the necessary data.[5] The storage capacity is dependent on the information being stored. For instance, span is lower for long words than it is for short words. In general, memory span for verbal contents (digits, letters, words, etc.) strongly depends on the time it takes to speak the contents aloud. Some researchers have therefore proposed that the limited capacity of short-term memory for verbal material is not a “magic number” but rather a “magic spell”.[6] Baddeley used this finding to postulate that one component of his model of working memory, the phonological loop, is capable of holding around 2 seconds of sound.[7][8] However, the limit of short-term memory cannot easily be characterized as a constant “magic spell” either, because memory span depends also on other factors besides speaking duration. For instance, span depends on the lexical status of the contents (i.e., whether the contents are words known to the person or not).[9] Several other factors also affect a person’s measured span, and therefore it is difficult to pin down the capacity of short-term or working memory to a number of chunks. Nonetheless, Cowan has proposed that working memory has a capacity of about four chunks in young adults (and less in children and older adults).[10]

Tarnow finds that a classic experiment by Murdock, typically cited as supporting a four-item buffer, in fact provides no evidence for one, and thus the “magical number”, at least in the Murdock experiment, is 1.[11][12] Other prominent theories of short-term memory capacity argue against measuring capacity in terms of a fixed number of elements.[13][14]

Chunking in psychology is a process by which individual pieces of information are bound together into a meaningful whole (Neath & Surprenant, 2003). A chunk is defined as a familiar collection of more elementary units that have been inter-associated and stored in memory repeatedly and act as a coherent, integrated group when retrieved (Tulving & Craik, 2000). For example, instead of remembering strings of letters such as “Y-M-C-A-I-B-M-D-H-L”, it is easier to remember the chunks “YMCA-IBM-DHL” consisting of the same letters. Chunking uses one’s knowledge to reduce the number of items that need to be encoded. Thus, chunks are often meaningful to the participant.

It is believed that individuals create higher order cognitive representations of the items on the list that are more easily remembered as a group than as individual items themselves. Representations of these groupings are highly subjective, as they depend critically on the individual’s perception of the features of the items and the individual’s semantic network. The size of the chunks generally ranges anywhere from two to six items, but differs based on language and culture (Vecchi, Monticelli, & Cornoldi, 1995).
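
The letter-string example above can be sketched directly (the three-way grouping is hard-coded for illustration; real chunking depends on what acronyms the rememberer already knows):

    letters = "Y-M-C-A-I-B-M-D-H-L".split("-")
    print("Unchunked items to hold:", len(letters))   # 10 single letters

    # Someone who knows the acronyms YMCA, IBM and DHL can re-encode
    # the same ten letters as three familiar chunks.
    chunks = ["".join(letters[:4]), "".join(letters[4:7]), "".join(letters[7:])]
    print("Chunks:", chunks)                      # ['YMCA', 'IBM', 'DHL']
    print("Chunked items to hold:", len(chunks))  # 3 chunks

Ten items sit at the edge of Miller’s 7 ± 2 span; three chunks sit comfortably inside it.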


Published on Oct 3, 2013

Technology continues to evolve and play a larger role in all of our daily lives. This huge growth in media (television, computers and smart phones) has changed the way in which we use media. More devices have created a world of multitaskers, and in this talk Professor Cliff Nass explores what this means for our society.

Clifford Nass is the Thomas M. Storke Professor at Stanford University with appointments in communication; computer science; education; law; science, technology and society; and symbolic systems. He directs the Communication between Humans and Interactive Media (CHIMe) Lab, focusing on the psychology and design of how people interact with technology, and the Revs Program at Stanford, a transdisciplinary approach to the past, present and future of the automobile. Professor Nass has written three books: The Media Equation, Wired for Speech and The Man Who Lied to His Laptop. He has consulted on the design of over 250 media products and services.

Much recent neuroscience research tells us that the brain doesn’t really do tasks simultaneously, as we thought (hoped) it might.

Here’s the test:

Draw two horizontal lines on a piece of paper
Now, have someone time you as you carry out the two tasks that follow:
On the first line, write:
I am a great multitasker
On the second line, write out the numbers 1-20 sequentially, like those below:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
How much time did it take to do the two tasks? Usually it’s about 20 seconds.

Now, let’s multitask.

Draw two more horizontal lines. This time, again having someone time you, write a letter on one line and then a number on the line below, then the next letter in the sentence on the upper line and then the next number in the sequence, switching from line to line. In other words, you write the letter “I” and then the number “1”, then the letter “a” and then the number “2”, and so on, until you complete both lines.

I a…..

1 2…..

Social Enterprises

Published on Jul 1, 2013
In their e-book The Social Entrepreneur’s Playbook, Ian MacMillan, a Wharton management professor, and James Thompson, director of the Wharton Social Enterprise Program, offer specific suggestions to strengthen the effectiveness of social enterprises. In the second of a two-part interview, the authors discuss how two African social enterprises — one in the chicken feeds business and the other in the sanitation industry — used a five-step process to maximize the social and business impact of their operations.

A powerful force

The Globalization of Markets

By Theodore Levitt. From the May 1983 issue of Harvard Business Review.
Who can forget the televised scenes during the 1979 Iranian uprisings of young men in fashionable French-cut trousers and silky body shirts thirsting for blood with raised modern weapons in the name of Islamic fundamentalism?

Successful People’s Advice

Survivorship bias, or survival bias, is the logical error of concentrating on the people or things that “survived” some process and inadvertently overlooking those that did not because of their lack of visibility. This can lead to false conclusions in several different ways. The survivors may be actual people, as in a medical study, or could be companies or research subjects or applicants for a job, or anything that must make it past some selection process to be considered further.

Survivorship bias can lead to overly optimistic beliefs because failures are ignored, such as when companies that no longer exist are excluded from analyses of financial performance. It can also lead to the false belief that the successes in a group have some special property, rather than just coincidence. For example, if three of the five students with the best college grades went to the same high school, that can lead one to believe that the high school must offer an excellent education. This could be true, but the question cannot be answered without looking at the grades of all the other students from that high school, not just the ones who “survived” the top-five selection process.

Survivorship bias is a type of selection bias.
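
A short simulation makes the bias visible. Everything here is invented for illustration: funds with identical expected returns, and an arbitrary survival rule that drops any fund with a 20%-loss year.

    import random

    random.seed(42)

    # 1,000 funds with identical skill: annual returns are pure noise.
    funds = [[random.gauss(0.05, 0.15) for _ in range(10)] for _ in range(1000)]

    def mean(xs):
        return sum(xs) / len(xs)

    # Invented survival rule: a fund "survives" if it never loses 20% in a year.
    survivors = [f for f in funds if min(f) > -0.20]

    print(f"{len(survivors)} of {len(funds)} funds survive")
    print(f"Mean annual return, all funds: {mean([mean(f) for f in funds]):.1%}")
    print(f"Mean annual return, survivors: {mean([mean(f) for f in survivors]):.1%}")

Because the losers are filtered out before the average is taken, the surviving funds look better than the full population even though every fund had the same expected return, which is exactly the error described above.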

In Search of Excellence is an international bestselling book written by Tom Peters and Robert H. Waterman, Jr.

First published in 1982, it is one of the biggest selling business books ever, selling 3 million copies in its first four years, and being the most widely held monograph in the United States from 1989 to 2006 (WorldCat data).

The book purports to explore the art and science of management used by several 1980s companies.

Organizational Learning

The Challenge of Organizational Learning

Disseminating insights and know-how across any organization is critical to improving performance, but nonprofits struggle to implement organizational learning and make it a priority. A recent study found three common barriers to knowledge sharing across nonprofits and their networks, as well as ways to overcome them.