The Straits Times
Sep 24th 2011 | NEW YORK | from the print edition
AS WALMART grew into the world’s largest retailer, its staff were subjected to a long list of dos and don’ts covering every aspect of their work. Now the firm has decided that its rules-based culture is too inflexible to cope with the challenges of globalisation and technological change, and is trying to instil a “values-based” culture, in which employees can be trusted to do the right thing because they know what the firm stands for.
“Values” is the latest hot topic in management thinking. PepsiCo has started preaching a creed of “performance with purpose”. Chevron, an oil firm, brands itself as a purveyor of “human energy”, though presumably it does not really want you to travel by rickshaw. Nearly every big firm claims to be building a more caring and ethical culture.
A new study suggests there is less to this than it says on the label. The “National Governance, Culture and Leadership Assessment”, conducted by the Boston Research Group, is based on a survey of thousands of American employees from every rung of the corporate ladder. It was commissioned by Dov Seidman, boss of LRN, a firm that advises on corporate culture, and author of “How”, a book arguing that the way firms do business matters as much as what they do.
It found that 43% of those surveyed described their company’s culture as based on command-and-control, top-down management or leadership by coercion—what Mr Seidman calls “blind obedience”. The largest category, 54%, saw their employer’s culture as top-down, but with skilled leadership, lots of rules and a mix of carrots and sticks, which Mr Seidman calls “informed acquiescence”. Only 3% fell into the category of “self-governance”, in which everyone is guided by a “set of core principles and values that inspire everyone to align around a company’s mission”.
The study found evidence that such differences matter. Nearly half of those in blind-obedience companies said they had observed unethical behaviour in the previous year, compared with around a quarter in the other sorts of firm. Yet only a quarter of those in the blind-obedience firms said they were likely to blow the whistle, compared with over 90% in self-governing firms. Lack of trust may inhibit innovation, too. More than 90% of employees in self-governing firms, and two-thirds in the informed-acquiescence category, agreed that “good ideas are readily adopted by my company”. At blind-obedience firms, fewer than one in five did.
Tragicomically, the study found that bosses often believe their own guff, even if their underlings do not. Bosses are eight times more likely than the average to believe that their organisation is self-governing. (The cheery folk in human resources are also much more optimistic than other employees.) Some 27% of bosses believe their employees are inspired by their firm. Alas, only 4% of employees agree. Likewise, 41% of bosses say their firm rewards performance based on values rather than merely on financial results. Only 14% of employees swallow this.
from the print edition | Business
Sep 24th 2011 | from the print edition
IN THE grand scheme of things Jeremy Bentham and John Stuart Mill are normally thought of as good guys. Between them, they came up with the ethical theory known as utilitarianism. The goal of this theory is encapsulated in Bentham’s aphorism that “the greatest happiness of the greatest number is the foundation of morals and legislation.”
Which all sounds fine and dandy until you start applying it to particular cases. A utilitarian, for example, might approve of the occasional torture of suspected terrorists—for the greater happiness of everyone else, you understand. That type of observation has led Daniel Bartels at Columbia University and David Pizarro at Cornell to ask what sort of people actually do have a utilitarian outlook on life. Their answers, just published in Cognition, are not comfortable.
One of the classic techniques used to measure a person’s willingness to behave in a utilitarian way is known as trolleyology. The subject of the study is challenged with thought experiments involving a runaway railway trolley or train carriage. All involve choices, each of which leads to people’s deaths. For example: there are five railway workmen in the path of a runaway carriage. The men will surely be killed unless the subject of the experiment, a bystander in the story, does something. The subject is told he is on a bridge over the tracks. Next to him is a big, heavy stranger. The subject is informed that his own body would be too light to stop the train, but that if he pushes the stranger onto the tracks, the stranger’s large body will stop the train and save the five lives. That, unfortunately, would kill the stranger.
Dr Bartels and Dr Pizarro knew from previous research that around 90% of people refuse the utilitarian act of killing one individual to save five. What no one had previously inquired about, though, was the nature of the remaining 10%.
To find out, the two researchers gave 208 undergraduates a battery of trolleyological tests and measured, on a four-point scale, how utilitarian their responses were. Participants were also asked to respond to a series of statements intended to get a sense of their individual psychologies. These statements included, “I like to see fist fights”, “The best way to handle people is to tell them what they want to hear”, and “When you really think about it, life is not worth the effort of getting up in the morning”. Each was asked to indicate, for each statement, where his views lay on a continuum that had “strongly agree” at one end and “strongly disagree” at the other. These statements, and others like them, were designed to measure, respectively, psychopathy, Machiavellianism and a person’s sense of how meaningful life is.
Dr Bartels and Dr Pizarro then correlated the results from the trolleyology with those from the personality tests. They found a strong link between utilitarian answers to moral dilemmas (push the fat guy off the bridge) and personalities that were psychopathic, Machiavellian or tended to view life as meaningless. Utilitarians, this suggests, may add to the sum of human happiness, but they are not very happy people themselves.
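The correlation step the researchers used is standard. A minimal sketch is below; the participant numbers are entirely hypothetical, invented for illustration, and are not the study's data. The scales loosely follow the article (utilitarian responses scored 0 to 3; a personality measure scored on its own scale).

```python
# Illustrative sketch only: hypothetical numbers, NOT the study's actual data.
# Pearson correlation between per-participant utilitarian scores and a
# personality-scale score, computed from scratch.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical participants: utilitarian score (0-3) and a psychopathy score.
utilitarian = [0, 1, 0, 2, 3, 1, 0, 2]
psychopathy = [1.0, 2.0, 1.5, 3.0, 3.5, 2.5, 1.0, 2.8]

r = pearson_r(utilitarian, psychopathy)
print(round(r, 2))  # a value near +1 would indicate a strong positive link
```

A strong positive r between the two columns is what "a strong link between utilitarian answers and psychopathic personalities" means operationally; the published paper will have used the real 208-participant sample and significance tests.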
That does not make utilitarianism wrong. Crafting legislation—one of the main things that Bentham and Mill wanted to improve—inevitably involves riding roughshod over someone’s interests. Utilitarianism provides a plausible framework for deciding who should get trampled. The results obtained by Dr Bartels and Dr Pizarro do, though, raise questions about the type of people who you want making the laws. Psychopathic, Machiavellian misanthropes? Apparently, yes.
from the print edition | Science and technology
September 2, 2011 7:16 pm
By Gillian Tett
When Biz Stone, a co-founder of Twitter, relates the story behind his iconic venture, he likes to use a picture of a flock of birds. The reason? As Stone tells the story, he first appreciated the power of social media a few years ago, when he watched a crowd suddenly arrive at a bar after exchanging messages on a phone. That event, Stone says, helped him to understand how social media enabled people to suddenly congregate with unforeseen speed and force. To put it another way, what 21st-century tools do is enable people to “flock” together – around ideas, emotions, places or events. Hence that picture of birds.
It is a thought-provoking image, and it feels particularly pertinent to New York right now. Last weekend I hunkered down, along with millions of other New York residents, as the tropical storm-cum-hurricane known as Irene ripped along America’s East Coast. In some senses the experience turned out to be far less dramatic than many had initially feared: though the East Coast was battered with powerful winds and lashing rain, and there was terrible flooding inland, New York itself suffered far less damage than predicted. Before the storm hit I moved out of my apartment, which is next to the river, to stay with friends elsewhere. But my girls and I slept soundly during the night (much to the fury of my daughters, who were hoping that the winds would wake us so they could hold a midnight feast).
While Hurricane Irene might have spared New York in physical terms, the experience was nonetheless striking, for reasons that Stone observes. In earlier periods of my life, I have experienced moments of adrenaline-fuelled anxiety, holed up in a hotel in the middle of a civil war, or marooned in a remote outpost by snowstorms. Those occasions were marked by long spells of boredom, punctuated by flashes of anxiety, since I was dependent on a crackling radio or creaking telephone for news.
However, living in a hurricane in the age of social media takes adrenaline to a new level. Wherever you sheltered in the city last week, there was almost no escape from the tempest of information, debate and analysis flying around. Television and radio offered non-stop coverage, which became distinctly hysterical. The internet provided multiple tools to track the storm in real time. And as it approached, a gale of social media messages swirled, as New Yorkers “flocked” together, trying to make sense of events. (Apparently, there were 36 times as many tweets per second as during the civil war in Libya.)
Is this a good thing? From a practical viewpoint, it might appear so. Some of the messages posted on the “Irene” Twitter page were distracting or trite (“Hello Apocalypse Irene”; “Loving the new hairstyle – I guess the wet look is in”, and so on). Many others were informative: citizens tracked the path of the rain, and government agencies sent out a blitz of practical advice and updates. The office of Michael Bloomberg, the mayor of New York, was particularly efficient and co-ordinated; one could track almost all the events from that Twitter feed alone. “City bridges may be closed”; “There are 78 hurricane centres and 8 special medical centres across the City”; “We are in the midst of the most dangerous period of the storm … continue to remain indoors.” Or – eventually – “By 3pm we will officially lift the evacuation order.”
. . .
Yet there was also a dark side to this “flocking”. In the week since the event, some political rivals have accused Mayor Bloomberg of over-reaction. The television coverage has been criticised. What also needs to be debated, however, is whether this cyber “flocking” heightened public emotion too. After all, the more that people share their thoughts and fears in cyberspace, the more they create echo chambers. To use another metaphor, Hurricane Irene was producing an emotional “wind tunnel” last weekend, as news and moods were channelled into a small space and funnelled back and forth between media outlets, over and over again.
This was addictive, but the flurry of debate was disturbing, too. And while Bloomberg’s office was clearly determined to corral this information tempest – and did so, in my view, with some success – it faced a tough challenge. After all, fear is contagious and social media is anarchic, even – or especially – in 140 characters.
There is no easy solution to this. Just before the storm began to affect New York, the mayor’s office warned in a tweet that the electricity could fail (“If low-lying areas begin to flood, there is a chance that Con Ed will have to shut down the grid in parts of the City”). If that had happened, it would have been fascinating to see how New York would have behaved if all those modern forms of communication had shut down. Would people have panicked? Would they have been happy to rely on old-fashioned, battery-powered radios for news instead? Might that have been a relief? No one knows. One thing that is clear is that the impact of these emotional “wind tunnels” requires more debate. They will stay with us long after the hurricane season is past, and in far more places than New York.
Copyright The Financial Times Limited 2011
Aug 20th 2011 | from the print edition
IN GENERAL, people are pretty good at differentiating between the quick and the dead. Modern medicine, however, has created a third option, the persistent vegetative state. People in such a state have serious brain damage as a result of an accident or stroke. This often means they have no hope of regaining consciousness. Yet because parts of their brains that run activities such as breathing are intact, their vital functions can be sustained indefinitely.
When, if ever, to withdraw medical support from such people, and thus let them die, is always a traumatic decision. It depends in part, though, on how the fully alive view the mental capacities of the vegetative—an area that has not been investigated much.
To fill that gap Kurt Gray of the University of Maryland, and Annie Knickman and Dan Wegner of Harvard University, conducted an experiment designed to ascertain just how people perceive those in a persistent vegetative state. What they found astonished them.
They first asked 201 people stopped in public in New York and New England to answer questions after reading one of three short stories. In all three, a man called David was involved in a car accident and suffered serious injuries. In one, he recovered fully. In another, he died. In the third, his entire brain was destroyed except for one part that kept him breathing. Although he was technically alive, he would never again wake up.
After reading one of these stories, chosen at random, each participant was asked to rate David’s mental capacities, including whether he could influence the outcome of events, know right from wrong, remember incidents from his life, be aware of his environment, possess a personality and have emotions. Participants used a seven-point scale to make these ratings, where 3 indicated that they strongly agreed that he could do such things, 0 indicated that they neither agreed nor disagreed, and -3 indicated that they strongly disagreed.
The results, reported in Cognition, were that the fully recovered David rated an average of 1.77 and the dead David -0.29. That score for the dead David was surprising enough, suggesting as it did a considerable amount of mental acuity in the dead. What was extraordinary, though, was the result for the vegetative David: -1.73. In the view of the average New Yorker or New Englander, the vegetative David was more dead than the version who was dead.
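The scale arithmetic behind those averages is straightforward. The sketch below uses made-up responses, not the study's raw data, to show how seven-point agreement labels map onto the −3 to 3 scale described above and average into a single condition score.

```python
# Illustrative only: hypothetical responses, NOT the study's raw data.
# Map seven-point agreement labels to the -3..3 scale used in the article,
# then average across participants to get a condition-level score.
SCALE = {
    "strongly disagree": -3, "disagree": -2, "somewhat disagree": -1,
    "neutral": 0,
    "somewhat agree": 1, "agree": 2, "strongly agree": 3,
}

def mean_rating(responses):
    """Average numeric score for a list of labelled responses."""
    scores = [SCALE[r] for r in responses]
    return sum(scores) / len(scores)

# Hypothetical ratings of the vegetative David's mental capacities.
vegetative = ["strongly disagree", "disagree", "strongly disagree", "neutral"]
print(mean_rating(vegetative))  # -> -2.0 on the -3..3 scale
```

On this scale a negative average means participants, on balance, denied the capacity in question, which is why the vegetative David's -1.73 sits below the dead David's -0.29.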
The researchers’ first hypothesis to explain this weird observation was that participants were seeing less mind in the vegetative than in the dead because they were focusing on the inert body of the individual hooked up to a life-support system. To investigate that, they ran a follow-up experiment which had two different descriptions of the dead David. One said he had simply passed away. The other directed the participant’s attention to the corpse. It read, “After being embalmed at the morgue, he was buried in the local cemetery. David now lies in a coffin underground.” No ambiguity there. In this follow-up study participants were also asked to rate how religious they were.
Once again, the vegetative David was seen to have less mind than the David who had “passed away”. This was equally true, regardless of how religious a participant said he was. However, ratings of the dead David’s mind in the story in which his corpse was embalmed and buried varied with the participant’s religiosity.
Irreligious participants gave the buried corpse about the same mental ratings as the vegetative patient (-1.51 and -1.64 respectively). Religious participants, however, continued to ascribe less mind to the irretrievably unconscious David than they did to his buried corpse (-1.57 and 0.59).
That those who believe in an afterlife ascribe mental acuity to the dead is hardly surprising. That those who do not are inclined to do so unless heavily prompted not to is curious indeed.
from the print edition | Science and technology
By Gillian Tett
Back in the days of the “last” market crisis in 2008, a senior official at an interdealer broker – one of the firms that trade securities – observed an interesting pattern. Until then, he, like most traders, had assumed that finance was becoming an increasingly global, computerised game. In a world ruled by the internet, it was easier than ever before to trade with anyone, anywhere. In an era of 21st-century cybermoney – if not Star Trek – finance, bankers had evolved to control space and time.
But when Lehman Brothers collapsed, evolutionary “progress” crumbled. Suddenly, traders started placing orders by telephone, rather than computer, dealing only with people they knew personally. They were also refusing to take long-term decisions. Sometimes there were entirely rational explanations for this shift, but mostly the reaction was instinctive. “It was almost primeval,” my friend quips.
I have been pondering this comment during the last week, amid the latest market shock. Periods of acute stress in the markets are always fascinating to observe, since they can reveal much about how financial and political systems operate. They can also offer intriguing examples of how our brains, or cognitive maps, work, giving a subtle twist to the age-old concepts of human “fear” and “greed” – or rational self-interest, as the economics profession would argue.
Take a look, for example, at some fascinating research by Andrew Lo, a finance professor at MIT. Lo trained initially as an economist, but he has also spent part of his career trying to knit together the work of psychologists, neuroscientists, biologists and economists. In particular, he is fascinated by the idea that the evolution of the human species has left our brains with three parts. He identifies those parts as a central, “reptilian” core, which was the first to evolve, functions most rapidly and controls reflexive behaviour (by shutting down bodily functions that are in shock, say, to improve chances of survival); a “mammalian” layer that controls social desires and emotions (intuition, sexual urges and so on); and then the outer, “hominid” layer, which developed last and controls rational, sophisticated thought.
In normal circumstances, our hominid brain predominates. However, the mammalian (or emotional) brain never disappears, and reptilian instincts come to the fore in a crisis. And this has an important implication for finance: while “rational” economic theories can explain markets when our “hominid” brains are predominant (ie, most of the time), they are inadequate when our emotional, mammalian or instinctive, reptilian brains predominate.
Lo does not consider this a malfunction, but part of the adaptive techniques that have allowed humans to react to our environment and learn from mistakes over millennia. Thus it is no good arguing endlessly (as academics have done in recent years) about whether the efficient market hypothesis really works – it works when we are “hominid”, but not when we all turn “emotional” and fight for survival.
Unsurprisingly, many traditional economists hate Lo’s ideas. The problem with this theory – like most forms of behavioural finance – is that it is hard to turn into a tangible investment strategy. Well, not unless somebody finds a way to post a sign above bankers’ desks that reads: “Watch out, a reptile moment approaches!” But perhaps the real value of Lo’s idea is that it illustrates a point that we all instinctively know, but which economists and bankers sometimes forget: namely, that humans do not behave consistently, all the time.
Even our own perceptions of time can shift. Peter Atwater, a JPMorgan banker-turned-consultant, has recently been advising investment firms on strategy – and this has left him convinced of the importance of looking at “horizon preferences”. In times of calm markets, when people are confident, they plan for the long term, deal with strangers and reflect on the world as a whole.
At times of stress, though, time, social and geographical horizons collapse – and not only in moments of extreme tension. Atwater believes that the present slow-burn sense of insecurity is fostering a wider, longer-term shift towards “narrow” horizons, and this is influencing how finance and politics evolve. Cognitive maps change in ways we do not always notice.
None of this will be of much comfort to those traders who have just endured a brutal, rollercoaster week (even though people such as Atwater insist that analysing horizon preferences can make you much smarter about your portfolio).
But, personally, after several decades in which finance has been dominated by theories influenced by Newtonian physics, I find it very cheering that researchers such as Lo are trying to hop across other academic silos. The longer the crisis lasts, the more likely the field of behavioural finance will be boosted. Calling a banker a “rodent” or “snake”, in other words, may no longer be just a term of abuse. Right now, it may be a form of analysis too, and one we would be foolish to ignore.
Copyright The Financial Times Limited 2011.
Yet the same behavioural psychologists also have a rich literature on “confirmation bias”, meaning the way in which such new events are quickly adapted to fit old preconceptions. Rioting in London, to a Labour voter, may look like what you get when too many youth clubs are closed. Seen through Conservative eyes, a feeble police response may be to blame.
It turns out it is the most educated among us who are most susceptible to this way of thinking. Watts is especially critical of this, seeing it as a flaw that “impedes our ability to resolve disputes, from petty disagreements over domestic duties to long-running political conflicts”.