Shyam's Slide Share Presentations


This article/post is from a third-party website. The views expressed are those of the author. We at Capacity Building & Development may not necessarily subscribe to them completely. The relevance and applicability of the content is limited to certain geographic zones; it is not universal.


Wednesday, October 30, 2013

Why Being a Perfectionist Can Hurt Your Productivity 10-31




Why Being a Perfectionist Can Hurt Your Productivity




Do you ever “back door brag” about being a perfectionist?
Unlike other obsessions and addictions, perfectionism is something a lot of people celebrate, believing it’s an asset. But true perfectionism can actually get in the way of productivity and happiness.
I recently interviewed David Burns, author of “Feeling Good”, who has made this exact connection. In his more than 35,000 therapy sessions he has learned that the pursuit of perfection is arguably the surest way to undermine happiness and productivity. There is a difference between the healthy pursuit of excellence and neurotic perfectionism, but in the name of the first have you ever fallen into elements of the second?
Taken to the extreme, perfectionism becomes a disorder. Burns shares the wild example of an attorney who became obsessed with getting his hair “just right.” He spent hours in front of the mirror with his scissors and comb making adjustments until his hair was just an eighth of an inch long. Then he became obsessed with getting his hairline exactly right and he shaved it a little more every day until his hair receded back so far he was bald. He would then wait for his hair to grow back and the pattern continued again. Eventually his desire to have the perfect hair led him to cut back on his legal practice in order to continue his obsession.
This is an extreme example to be sure, but are there less severe ways in which our own perfectionism leads us to major in minor activities? Have you ever obsessed over a report when your boss said it was already plenty good enough? Have you ever lost an object of little importance but just had to keep looking for it? Do colleagues often tell you, “Just let it go”?
Aiming for “perfect” instead of “good enough” can seriously backfire. This happened to me recently when I was asked to teach a workshop to the leaders of a prominent technology company. I took the time to understand their needs and personalize the materials to their specifications. And I already had materials I had taught scores of times with great results to pull from. But my obsession for making it perfect led me to scrap all of that the night before, and as a result I was unprepared and exhausted. I felt jumbled and my slides distracted from the main message. If I had shot for average instead of perfect, I would have been able to focus more on the client in the moment and things would have turned out very differently.
This left me wondering: what if trying to be average could actually accelerate your success?
Overachievers have such high expectations of themselves that their “average” might be another person’s “really good.” So instead of pushing yourself to give 100% (or 110%, whatever that means) you can go for giving 75% or 50% of what you usually might offer. This idea is captured succinctly by the mantra, “Done is better than perfect” – which Facebook has plastered all over the walls of their Menlo Park headquarters. That’s not to excuse shoddy work. Rather, the idea is to give engineers permission to complete cycles of work and learn quickly instead of being held hostage by an unattainable sense of perfection.
The word “perfect” has a Latin root; literally, it means “made well” or “done thoroughly.” Another translation would be “complete.” And yet today, we use it to mean flawless. If you must pursue perfection, at least use the former definition rather than the (unattainable) latter.
If you are a perfectionist, overachiever or workaholic you are probably used to taking on big challenges. The nature of the obsession makes it easy to do what is hard. Paradoxically, it may be harder at first to try to be average.
To understand why, we need to understand the role of fear in perfectionism: “If I don’t perfectly [fill in the blank] something terrible will happen.” Often perfectionists are so used to this anxiety that they no longer even consciously recognize it; it’s just the fuel that keeps them working, working, working and honing, honing, honing.
While the logic may be totally false, the emotion is absolutely real. As a result, it takes greater courage for a perfectionist to try to be average than to tackle almost any other challenge. Being average scares them, so they haven’t experienced the benefits of being average.
Here’s how Burns put it: “There are two doors to enlightenment. One is marked, ‘Perfection’ and the other is marked, ‘Average.’ The ‘Perfection’ door is ornate, fancy, and seductive… So you try to go through the ‘Perfection’ door and always discover a brick wall on the other side… On the other side of the ‘Average’ door, in contrast, there’s a magic garden. But it may have never occurred to you to open the door to take a look.”
If you think you are the type of person who takes on hard assignments with ease you might try to do something really hard: try being average for one day. What you find might surprise you.

Why I Gossip at Work (And You Should Too) 10-31

Why I Gossip at Work (And You Should Too)

Ask people to generate a list of social sins, and sooner or later, gossip is bound to come up. Sure, it pales in comparison to coveting thy neighbor, but the Bible does warn us that we should “not go about spreading slander.” And if your mother is like mine, she probably told you that if you don’t have anything nice to say, you shouldn’t say it at all.
But what if our moms were wrong?
In a series of new studies, social scientists have introduced a form of gossip that actually makes people better off. Imagine that you’re given $10. You can pass as much of the money as you want to Joe. The amount that you give him will be tripled, and he can then share as much as he wants with you. You decide to pass all $10 to Joe, so he now has $30. Instead of sharing the spoils, Joe keeps the entire $30 for himself, leaving you with nothing.
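(For readers who want the mechanics spelled out, here is a minimal sketch in Python of the payoff arithmetic in that game; the names and the $10/tripling figures are simply the ones used in the study description above.)

# Minimal sketch of the trust game described above: the sender passes some
# portion of an endowment, the transfer is tripled, and the receiver decides
# how much of the tripled pot to return.
def trust_game(endowment, amount_sent, amount_returned):
    """Return (sender_payoff, receiver_payoff) for one round."""
    assert 0 <= amount_sent <= endowment
    pot = 3 * amount_sent                     # the transferred money is tripled
    assert 0 <= amount_returned <= pot
    sender = endowment - amount_sent + amount_returned
    receiver = pot - amount_returned
    return sender, receiver

# The scenario in the article: you send all $10 and Joe returns nothing.
print(trust_game(10, 10, 0))    # -> (0, 30)
# An even split of the tripled pot would have left both players ahead.
print(trust_game(10, 10, 15))   # -> (15, 15)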
Now, Lisa is going to play the same game with Joe, and you have the chance to pass her a note. How would you feel — and what would you write?
In this experiment, led by the psychologist Matthew Feinberg, most people were irritated. Ninety-six percent of people chose to gossip about Joe. They wrote things like “Joe is not reliable; he’s playing for his own selfish interest.”
They were annoyed beforehand, but gossiping made them feel better, and their heart rates dropped as a result. “Witnessing the unfair play,” the researchers write, “led to elevated heart rates for participants who had no opportunity to gossip.”
Typically, gossiping is a way to get a leg up on others. It carries a veiled threat: if you cross me, I’ll spread bad news about you too. And by putting others down, we signal that we’re superior — and that we have access to privileged information.
But this kind of gossip is different. It’s called prosocial gossip, and it involves spreading negative reputational information about someone who harms, deceives, or exploits others.
Prosocial gossip comes from people who value fairness. In one of their studies, Feinberg’s team asked people whether they preferred to share resources evenly or to maximize their own gains. A couple of months later, they got to see Joe acting selfishly. The more they valued fairness, the more they gossiped prosocially. If you care about justice, you feel like it’s your mission in life to punish Joe—and to protect Lisa. You go out on a limb to deliver an important warning to everyone who might be vulnerable: Joe has a history of nefarious behavior, so don’t trust him.
In fact, 76% of people were willing to pay their own money for the opportunity to gossip about Joe. After receiving $5 for participating in the study, people paid an average of $1.19 to get a note to Lisa about Joe’s tendency to take advantage of others.
Prosocial gossip protects Lisa against Joe, but it also discourages Joe from being selfish. In another study, Feinberg’s team mentioned that people would have the chance to write notes to others who would play the game. This had no effect on people who valued fairness and generosity—they shared their resources regardless. But it transformed the behavior of the most selfish people.
On average, knowing that other people could find out about their behavior boosted the contributions of the most selfish players by 17-23%. This was enough to turn them into the most generous players. Under the threat of gossip, the takers actually gave more than the givers.
Prosocial gossip has three major benefits: it allows us to feel that we’re promoting justice, it protects other people against exploitation, and it encourages would-be exploiters to act more cooperatively and generously.
This doesn’t mean gossip is always good. In an emerging body of research, Shimul Melwani finds that if you gossip about members of your team, you’ll be seen as less trustworthy, and your team will become less cooperative and more political. Yet if you gossip about people on someone else’s team, you can actually build trust, promote cooperation, and dismantle politics. Putting down a common enemy is a form of social glue.
I used to see gossip as a vice, and most of the time, it probably is. After reflecting on this research, though, I’ve come to realize that prosocial gossip can be a virtue. Recently, I warned a student to proceed cautiously when dealing with an adviser who has a history of exploiting students. I also told a colleague about the checkered history of a potential business partner and shared some disconcerting feedback about a job candidate with a hiring committee.
I still prefer to say nice things about people behind their backs, but in these situations, I feel that I have a social responsibility to speak candidly. If I don’t warn people about the most manipulative and Machiavellian marauders in their midst, I’m leaving them vulnerable to attack. And if I don’t make it known that I’m willing to spread the word about bad behavior, I’m failing to deter it in the first place.
You asked for an explanation, and now you have it. This is why I’ve been gossiping more in the past few months.
Sorry, Mom.


Sunday, October 20, 2013

Scientists pinpoint brain’s area for numeral recognition 10-20

Scientists pinpoint brain’s area for numeral recognition

BY BRUCE GOLDMAN

Jennifer Shum and Josef Parvizi led a team that identified a tiny area in the brain that processes numerals. (Photo: Steve Fisch)
Scientists at the Stanford University School of Medicine have determined the precise anatomical coordinates of a brain “hot spot,” measuring only about one-fifth of an inch across, that is preferentially activated when people view the ordinary numerals we learn early on in elementary school, like “6” or “38.”


Activity in this spot relative to neighboring sites drops off substantially when people are presented with numbers that are spelled out (“one” instead of “1”), homophones (“won” instead of “1”) or “false fonts,” in which a numeral or letter has been altered.

“This is the first-ever study to show the existence of a cluster of nerve cells in the human brain that specializes in processing numerals,” said Josef Parvizi, MD, PhD, associate professor of neurology and neurological sciences and director of Stanford’s Human Intracranial Cognitive Electrophysiology Program. “In this small nerve-cell population, we saw a much bigger response to numerals than to very similar-looking, similar-sounding and similar-meaning symbols.

“It’s a dramatic demonstration of our brain circuitry’s capacity to change in response to education,” he added. “No one is born with the innate ability to recognize numerals.”


The finding pries open the door to further discoveries delineating the flow of math-focused information processing in the brain. It also could have direct clinical ramifications for patients with dyslexia for numbers and with dyscalculia: the inability to process numerical information.

The hot spot sits within the inferior temporal gyrus, a superficial region of the outer cortex of the brain. The inferior temporal gyrus is already generally known to be involved in the processing of visual information.


The new study, published April 17 in the Journal of Neuroscience, builds on an earlier one in which volunteers had been challenged with math questions. “We had accumulated lots of data from that study about what parts of the brain become active when a person is focusing on arithmetic problems, but we were mostly looking elsewhere and hadn’t paid much attention to this area within the inferior temporal gyrus,” said Parvizi, who is senior author of the study.


Not, that is, until fourth-year medical student Jennifer Shum, who also is doing research in Parvizi’s lab, noticed that, among some subjects in the first study, a spot in the inferior temporal gyrus seemed to be substantially activated by math exercises. Charged with verifying that this observation was consistent from one patient to the next, Shum, the study’s lead author, reported that this was indeed the case. So, Parvizi’s team designed a new study to look into it further.


The new study relied on epileptic volunteers who, as a first step toward possible surgery to relieve unremitting seizures that weren’t responding to therapeutic drugs, had a small section of their skulls removed and electrodes applied directly to the brain’s surface. The procedure, which doesn’t destroy any brain tissue or disrupt the brain’s function, had been undertaken so that the patients could be monitored for several days to help attending neurologists find the exact location of their seizures’ origination points. While these patients are bedridden in the hospital for as much as a week of such monitoring, they are fully conscious, in no pain and, frankly, a bit bored.


Over time, Parvizi identified seven epilepsy patients with electrode coverage in or near the inferior temporal gyrus and got these patients’ consent to undergo about an hour’s worth of tests in which they would be shown images presented for very short intervals on a laptop computer screen, while activity in their brain regions covered by electrodes was recorded. Each electrode picked up activity from an area corresponding to about a half-million nerve cells (a drop in the bucket in comparison to the brain’s roughly 100 billion nerve cells).


To make sure that any numeral-responsive brain areas identified were really responding to numerals — and not just generic lines, angles and curves — these tests were carefully calibrated to distinguish brain responses to visual presentations of the classic numerals taught in Western schools, such as 3 or 50, as opposed to squiggly lines, letters of the alphabet, number-denoting words such as “three” or “fifty,” and symbols that in fact were also numerals but — because they were drawn from the Thai, Tibetan and Devanagari languages — were extremely unlikely to be recognized as such by this particular group of volunteers.


In the first test, subjects were shown series of single numerals and letters — along with false fonts, in which the component parts of numerals or letters had been scrambled but defining curves and angles were retained, and the foreign-number symbols just described. A second test, controlling for meaning and sound, included numerals and their spelled-out versions (for instance, “1” and “one,” or “3” and “three”) and other words with the same sound or a similar one (“won” and “tree,” respectively).


All of our brains are shaped slightly differently. But in almost the identical spot within each study subject’s brain, the investigators observed a significantly larger response to numerals than to similar-shaped stimuli, such as letters or scrambled letters and numerals, or to words that either meant the same as the numerals or sounded like them.


Interestingly, said Parvizi, that numeral-processing nerve-cell cluster is parked within a larger group of neurons that is activated by visual symbols that have lines with angles and curves. “These neuronal populations showed a preference for numerals compared with words that denote or sound like those numerals,” he said. “But in many cases, these sites actually responded strongly to scrambled letters or scrambled numerals. Still, within this larger pool of generic neurons, the ‘visual numeral area’ preferred real numerals to the false fonts and to same-meaning or similar-sounding words.”


It seems, Parvizi said, that “evolution has designed this brain region to detect visual stimuli such as lines intersecting at various angles — the kind of intersections a monkey has to make sense of quickly when swinging from branch to branch in a dense jungle.” The adaptation of one part of this region in service of numeracy is a beautiful intersection of culture and neurobiology, he said.


Having nailed down a specifically numeral-oriented spot in the brain, Parvizi’s lab is looking to use it in tracing the pathways described by the brain’s number-processing circuitry. “Neurons that fire together wire together,” said Shum. “We want to see how this particular area connects with and communicates with other parts of the brain.”

Method of recording brain activity could lead to 'mind-reading' devices, scientists say 10-19

Method of recording brain activity could lead to 'mind-reading' devices, scientists say

BY BRUCE GOLDMAN


Josef Parvizi
A brain region activated when people are asked to perform mathematical calculations in an experimental setting is similarly activated when they use numbers — or even imprecise quantitative terms, such as “more than”— in everyday conversation, according to a study by Stanford University School of Medicine scientists.


Using a novel method, the researchers collected the first solid evidence that the pattern of brain activity seen in someone performing a mathematical exercise under experimentally controlled conditions is very similar to that observed when the person engages in quantitative thought in the course of daily life. 

“We’re now able to eavesdrop on the brain in real life,” said Josef Parvizi, MD, PhD, associate professor of neurology and neurological sciences and director of Stanford’s Human Intracranial Cognitive Electrophysiology Program. Parvizi is the senior author of the study, published Oct. 15 in Nature Communications. The study’s lead authors are postdoctoral scholar Mohammad Dastjerdi, MD, PhD, and graduate student Muge Ozker.

The finding could lead to “mind-reading” applications that, for example, would allow a patient who is rendered mute by a stroke to communicate via passive thinking. Conceivably, it could also lead to more dystopian outcomes: chip implants that spy on or even control people’s thoughts.


“This is exciting, and a little scary,” said Henry Greely, JD, the Deane F. and Kate Edelman Johnson Professor of Law and steering committee chair of the Stanford Center for Biomedical Ethics, who played no role in the study but is familiar with its contents and described himself as “very impressed” by the findings. “It demonstrates, first, that we can see when someone’s dealing with numbers and, second, that we may conceivably someday be able to manipulate the brain to affect how someone deals with numbers.”


The researchers monitored electrical activity in a region of the brain called the intraparietal sulcus, known to be important in attention and eye and hand motion. Previous studies have hinted that some nerve-cell clusters in this area are also involved in numerosity, the mathematical equivalent of literacy. 

However, the techniques that previous studies have used, such as functional magnetic resonance imaging, are limited in their ability to study brain activity in real-life settings and to pinpoint the precise timing of nerve cells’ firing patterns. These studies have focused on testing just one specific function in one specific brain region, and have tried to eliminate or otherwise account for every possible confounding factor. In addition, the experimental subjects would have to lie more or less motionless inside a dark, tubular chamber whose silence would be punctuated by constant, loud, mechanical, banging noises while images flashed on a computer screen.

“This is not real life,” said Parvizi. “You’re not in your room, having a cup of tea and experiencing life’s events spontaneously.” A profoundly important question, he said, is: “How does a population of nerve cells that has been shown experimentally to be important in a particular function work in real life?” 

His team’s method, called intracranial recording, provided exquisite anatomical and temporal precision and allowed the scientists to monitor brain activity when people were immersed in real-life situations. Parvizi and his associates tapped into the brains of three volunteers who were being evaluated for possible surgical treatment of their recurring, drug-resistant epileptic seizures.

The procedure involves temporarily removing a portion of a patient’s skull and positioning packets of electrodes against the exposed brain surface. For up to a week, patients remain hooked up to the monitoring apparatus while the electrodes pick up electrical activity within the brain. This monitoring continues uninterrupted for patients’ entire hospital stay, capturing their inevitable repeated seizures and enabling neurologists to determine the exact spot in each patient’s brain where the seizures are originating.

During this whole time, patients remain tethered to the monitoring apparatus and mostly confined to their beds. But otherwise, except for the typical intrusions of a hospital setting, they are comfortable, free of pain and free to eat, drink, think, talk to friends and family in person or on the phone, or watch videos.

The electrodes implanted in patients’ heads are like wiretaps, each eavesdropping on a population of several hundred thousand nerve cells and reporting back to a computer.

In the study, participants’ actions were also monitored by video cameras throughout their stay. This allowed the researchers later to correlate patients’ voluntary activities in a real-life setting with nerve-cell behavior in the monitored brain region. 

As part of the study, volunteers answered true/false questions that popped up on a laptop screen, one after another. Some questions required calculation — for instance, is it true or false that 2+4=5? — while others demanded what scientists call episodic memory — true or false: I had coffee at breakfast this morning. In other instances, patients were simply asked to stare at the crosshairs at the center of an otherwise blank screen to capture the brain’s so-called “resting state.”

Consistent with other studies, Parvizi’s team found that electrical activity in a particular group of nerve cells in the intraparietal sulcus spiked when, and only when, volunteers were performing calculations.

Afterward, Parvizi and his colleagues analyzed each volunteer’s daily electrode record, identified many spikes in intraparietal-sulcus activity that occurred outside experimental settings, and turned to the recorded video footage to see exactly what the volunteer had been doing when such spikes occurred.

They found that when a patient mentioned a number — or even a quantitative reference, such as “some more,” “many” or “bigger than the other one” — there was a spike of electrical activity in the same nerve-cell population of the intraparietal sulcus that was activated when the patient was doing calculations under experimental conditions. 

That was an unexpected finding. “We found that this region is activated not only when reading numbers or thinking about them, but also when patients were referring more obliquely to quantities,” said Parvizi.

“These nerve cells are not firing chaotically,” he said. “They’re very specialized, active only when the subject starts thinking about numbers. When the subject is reminiscing, laughing or talking, they’re not activated.” Thus, it was possible to know, simply by consulting the electronic record of participants’ brain activity, whether they were engaged in quantitative thought during nonexperimental conditions.

Any fears of impending mind control are, at a minimum, premature, said Greely. “Practically speaking, it’s not the simplest thing in the world to go around implanting electrodes in people’s brains. It will not be done tomorrow, or easily, or surreptitiously.”

Parvizi agreed. “We’re still in early days with this,” he said. “If this is a baseball game, we’re not even in the first inning. We just got a ticket to enter the stadium.”

Saturday, October 19, 2013

India's Finance Minister P. Chidambaram speaking at Carnegie Endowment for International Peace. 10-19


Recapturing India's Growth Momentum 

Speech by Mr. P. Chidambaram, Union Finance Minister, at the Carnegie Endowment for International Peace on Recapturing India’s Growth Momentum



Dr. Perkovich, Vice President for Studies at the Carnegie Endowment for International Peace, Ladies and Gentlemen! 

Thank you for the invitation to speak at one of the oldest and most well-regarded global think tanks. I understand Carnegie is in the midst of establishing a Carnegie South Asia Centre based in New Delhi, and I welcome that initiative. Carnegie currently has two captains of Indian industry on its Board of Directors, Shri Sunil Mittal and Shri Ratan Tata. I am glad to see these growing Carnegie-India links. One of Carnegie’s core priorities today is building a research program on India’s political economy. To this end, I gather you have recently launched your “India Decides 2014” initiative. I wish you all the best in this exercise, but may I tell you in advance that your study will discover that India will vote my government back to power. I thought I may caution you lest you should waste too much time and effort to figure this out.

Let me now turn to the topic of India’s economic growth. India’s growth story attracted the attention of the world when our economy grew at an average of 8.5 per cent per annum during the period 2004-05 to 2010-11. This was achieved despite the strong negative spill-over effects of the global financial crisis in 2008 and subsequently. Growth slowed down in the crisis year, 2008-09, but India took the world by surprise by rebounding quickly from the slower growth of 6.7 per cent in that year to record rates of growth of 8.6 per cent in 2009-10 and 9.3 per cent in 2010-11. However, there was a further downturn in the global economy in 2011 on account of the sovereign debt crisis in Europe and the subsequent slump in the world economy. We also witnessed the emergence of domestic constraints on investment and consumption. As a consequence, India’s growth rate declined again to 6.2 per cent in 2011-12 and further to 5.0 per cent in 2012-13. The increasing trade deficit and fall in net invisible earnings led to a widening of the current account deficit to USD 88 billion, or 4.8 per cent of GDP, in 2012-13. With a sharp slowdown in manufacturing growth and a moderation in the expansion of services, growth in the first quarter of 2013-14 declined further to 4.4 per cent. India’s experience in this period is not unique. Virtually all the major emerging economies around the world have seen a sharp decline in growth -- the so-called Great Descent.

However, we are now seeing that some of the worst-affected countries of the Euro zone are showing signs of recovery, with significant improvements in their current account and fiscal deficits. The expectations of improvement in the economic and financial conditions of the US, coupled with the decision of the Fed to postpone the tapering of the quantitative easing, have shaped expectations of a gradual global revival. But I am aware that there may be possible ‘bumps’ on the road ahead. In line with this emerging global outlook, the Indian economy has also shown early indications of recovery with a pick-up in exports in July, August and September – our second quarter; reversal of the negative growth in manufacturing; and a reasonable rise in freight traffic, indicative of economic activity picking up. With very good rainfall in the current year and a sharp increase in the sown area, we expect robust growth in farm output. We have also taken numerous reform measures over the past year. We expect these measures to show their impact from the second half of the current fiscal and believe that the Indian economy will grow at over 5.0 per cent and perhaps closer to 5.5 per cent in 2013-14. I know that the World Economic Outlook report does not share my optimism, but I may tell you that we do not share their pessimism. Set against the current global economic background, even a growth rate of 5.0 per cent looks good, but it is much lower than the ambitious standards that we set for ourselves in 2004. I would be the first person to say that we need to do better and recapture the growth momentum of the last decade.

Macroeconomists maintain a very clear distinction between trend and fluctuations. The fluctuations are the function of open economy macroeconomics, of fiscal policy and of monetary policy. To understand trend growth, however, we have to look deeper. Trend growth is largely determined by the underlying microeconomic fundamentals. In the next ten minutes I wish to speak to you about the microeconomic fundamentals which have given us one doubling of our GDP every decade. In my reckoning, there are at least six main stories:

(i) Demographics. As is well known, India has young demographics. Alongside, we are doing well on improving the quality of the workforce. Household survey data (the CMIE Consumer Pyramids database) shows that for children of age 12, literacy is now 95%. We have a great surge in college enrolment: a full one-fifth of 21-year-olds now have a college degree. Every year, millions of young people are added to the labour force and their education is qualitatively superior to that of the elderly cohort leaving the labour force. We have also launched an ambitious national mission on Skilling in order to qualify young men and women with only a school education for jobs in the manufacturing and service sectors.

(ii) The second growth fundamental is international economic integration. On the current account and on the financial account, India is now engaging with the world on an unprecedented scale. Gross flows on the current account are now 63.3 per cent of GDP and gross flows on the financial account are now 55.3 per cent of GDP. These add up to gross flows across the border of 118.6 per cent of GDP. This makes India one of the more open economies of the world. Engagement with the world drives a flow of ideas into the economy, which is a growth fundamental.

(iii) The third growth fundamental is an increasingly “capable” financial system. On average, we invest 35 per cent of GDP every year. Finance is what determines the allocative efficiency of how this investment is done. What industries and what firms get financed is controlled by the financial system. We are taking measured steps on strengthening the financial system and taking the best that the global financial system has to offer. Every year, our financial system is getting better and stronger and, through this, we expect to translate our good investment-to-GDP ratio into a higher GDP growth rate. I shall speak a bit more on this in a moment.

(iv) The fourth growth fundamental is sophisticated firms. As all of you are aware, Indian firms are increasingly becoming capable and competitive. We used to think – and fear -- that if India opened up, our so-called large firms (I shall not take names) were third world dinosaurs that would collapse in the face of global competition. Instead, we have a clutch of firms in steel, oil and gas, mining, power, information technology, and hospitality that have become multinationals and are buying out companies in the advanced economies.

(v) The fifth growth fundamental is sophistication of the workforce. A young girl of age 21, who started her labour market career in 1991, now has 21 years of experience in a competitive and globalised market economy. She has dealt with modern technology, foreign companies, and a truly competitive domestic environment. The forty-somethings of India today are qualitatively superior to the older cohorts who grew up in a closed economy and did not face modern technology or foreign companies or competition.

(vi) The sixth growth fundamental - and I know this will be contested by many - is democracy. While it is fashionable to criticise the workings of Indian democracy, when we look deeper, I think it is working reasonably well. Liberal democracy is the ultimate foundation of rule of law and legal certainty, without which nobody can trust a country or invest in it. At its best, democracy is a great conversation, where diverse views and aspirations get heard, and the issues that genuinely concern the majority of the people become the priorities of policy makers. On a bigger scale of history, when we start from 1947, I think India has fared well on the project of constructing a liberal and open democracy.

To summarize, the Indian trend growth of the last 21 years was caused by several microeconomic fundamentals, and I have listed six of them. Nothing has changed on these. In fact our resolve to strengthen these fundamentals has become stronger. I believe India continues to have great prospects based on these fundamentals.

From the viewpoint of public policy, our job is to clear our minds of old cobwebs as well as of day to day problems and stay focused on laying the long-term foundations of a capable State that is able to deliver.

While India has greatly deregulated, there is much more to be done. However, looming large is the issue of State “capacity”. We need a State that has in place institutions to resolve market failures. We need a State that will deliver public goods quietly, efficiently and economically. This is the prime challenge in India today. In a liberal democracy, we need to build the full framework of laws that will clearly articulate specific objectives, empower the arms of government that will enforce these laws, and put in place mechanisms that will ensure performance and accountability.

If you believe what our newspapers and television channels report you may conclude that no Indian politician or civil servant is doing any work. Actually, the pace of work has been quite hectic. Let me illustrate this with examples of what have been done to improve the Indian financial system, only in 2013. So far, we have had four historic events. A commission of eminent people has drafted a new Indian Financial Code: a path breaking piece of law that has been drafted to replace 50 existing laws governing finance with a single, integrated, coherent, modern financial law. This is a law which dwarfs the scope of the Dodd-Frank Act. We have enacted a brand new Companies Act to replace a law that was 57 years old. We have shifted the subject of commodity futures to the Ministry of Finance, something which has not been possible in the US even after the 2008 crisis. We have enacted a law establishing the Defined Contribution Pension system under a statutory regulator. The New Pension System is already one of the world's big individual account DC pension systems with over 6 million participants.

Each of these four was a huge project involving enormous planning and preparation. The genesis of the Indian Financial Code goes back to 2004, when we started deep thinking about the possibilities of Mumbai as an international financial centre. The Companies Bill was pending before Parliament for many years. The work on shifting commodity futures to the Ministry of Finance began in 2003. The NPS was originally designed in 1999. All these projects have been largely bipartisan. We have dug in through these years, chipped away at the objections, cultivated the technical capacity, and built consensus, through which we are now able to reap the fruits of the long years of labour.

To conclude, I would urge everyone not to lose sight of the microeconomic foundations of Indian growth, which are delivering one doubling of GDP every decade. That is not an insignificant achievement. It will find its place in history in due course. The defining challenge in India however is in augmenting State capacity. How do we construct a competent and ethical State, that will minimally interfere with the rights of citizens in property and contracting, that will focus on preventing or resolving market failures, and that will successfully produce and deliver public goods? A wave of new thinking in public administration is now underway in India. We need to build completely new organization charts within government, leading to sharply focused agencies that can be held accountable for delivery on specific objectives. Those are the first few lines of an absorbing new story that I hope will begin in the near future. And that is the story that I am sure will captivate the world in the next ten to twenty years, as India takes its place as the third or fourth largest economy in the world. 
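A quick back-of-the-envelope check on the “one doubling of GDP every decade” arithmetic (an illustrative calculation using the growth figures quoted in the speech, not part of the Minister’s remarks): compound growth of roughly 7.2 per cent a year doubles an economy in ten years, and the 8.5 per cent average of 2004-05 to 2010-11 would do so in about eight and a half.

# Doubling-time arithmetic behind "one doubling of GDP every decade".
import math

def doubling_time(annual_growth_rate):
    """Years for GDP to double at a constant annual growth rate (e.g. 0.085)."""
    return math.log(2) / math.log(1 + annual_growth_rate)

print(round(doubling_time(0.072), 1))   # ~10.0 years (the familiar "rule of 72")
print(round(doubling_time(0.085), 1))   # ~8.5 years at the 2004-05 to 2010-11 average pace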



Cyclone Phailin was an emotional roller-coaster ride: L S Rathore 10-20

Cyclone Phailin was an emotional roller-coaster ride: L S Rathore

Interview with Director-General, India Meteorological Department
L S Rathore


India was rocked by two major disasters this year - Cyclone Phailin and the Uttarakhand flash floods. While the former was managed well and the evacuation process was praised globally, the latter lacked the same coordination. L S Rathore, Director-General of the India Meteorological Department, which was accurate in predicting Phailin's movement, talks to Somesh Jha about how it all panned out. Edited excerpts:

The Phailin disaster was well managed and the Met department predicted it right. Were the state governments supportive?

The state governments were very cooperative. We had a very good dialogue with the National Disaster Management Authority and meticulously planned every action required to combat the disaster. The flow of communication with the media was appropriate since we held six press conferences. So, it was the overall coordination that clicked.

Every agency was terming Phailin a super cyclone. What made the India Meteorological Department (IMD) not term it the same?

That requires guts. I am conscious that, as a responsible operational forecaster, I have to be as accurate as possible. Informing the government and the stakeholders is a big decision. So, you have to weigh the consequences of your information being wrong. Once you are confident of your forecast, you need courage to stick to it.

This helps save expense and the inconvenience to the public at large. Once you deliver the information confidently, people start believing in the institution. So, we did the analysis of the facts at our disposal and were confident of our prediction. A kilometre of evacuation costs crores of rupees and so much inconvenience to people. When so much is at stake, one really requires courage to deliver.

How did we predict more precisely than other agencies?

In every event, we do an analysis and there is always some amount of divergence. At the end of the day, it is the operational forecaster who is responsible. Hence, he/she analyses the situation with greater intensity and monitors every minute detail at his disposal. Apart from that, experience also matters. My people have a better grip over the Indian Ocean basin than the Americans. Moreover, India is responsible for eight neighbouring countries, not just itself. Therefore, we have a mandate. On the other hand, it doesn't mean that they (the Americans) were absolutely wrong.

Are you saying India is better equipped than the other international agencies?

Yes, we are better equipped in our region because of extra surface observations and other functionaries that are not available to them.

Other forecasters were calling it a super cyclone. Were you worried that you were not reading it right?

You cannot sleep. And you cannot sleep after it has happened either, because of the excitement of coming out with an accurate prediction. At the back of my mind, definitely, there was some fear. My whole team was tense. We were monitoring conditions every 10 minutes. But we were confident about our prediction.

What was the biggest challenge you faced at that time?

The first thing is to get the correct assessment of the ground situation, then to predict it precisely and deliver the information to the stakeholders, most importantly to gain the faith of the people in the system.

We forecast it four days in advance, which worked wonders. We were tracking and working on predictions daily. So, working on short-term goals and achieving them made the people believe our forecast. Even the two-hour difference that came at the time of landfall… the cyclonic winds came as close as 30 km to the coast and Phailin was stuck there for two hours. During those stressful moments, there were all sorts of rumours. Some said it had reached Andhra Pradesh and had moved on, while others were anticipating a completely different situation. But we were really confident. Honestly, it was an emotional roller-coaster ride, just like at the time of the satellite launch.

Even the public's faith in the forecast was restored after we came out with accurate predictions daily. It is not an easy task to evacuate people. Nobody wants to go until they are really sure they are going to die. So, I had to go to the media and make them believe what we are saying is what is going to happen. In Odisha, I got immense response. I am a household name in Odisha. Everyone knows about the Met department. Hence, when the administration told them about the forecast, they immediately evacuated.

The Uttarakhand disaster was not managed the way this situation was handled…

I would not call the Uttarakhand floods purely a Met department disaster. Based on our information, a news report in Amar Ujala on June 15 warned residents and tourists. People could have been moved to a safe location. The reasons were quite different in that situation. First, the monsoon had arrived early and the snow-melt rate was very fast; the two compounded, resulting in the creation of artificial lakes and the breaching of existing ones. The water rushed in all of a sudden and there was a breach of the dam.

Is the state government at fault?

They were also not aware that something of this intensity would be triggered.

Does this mean that the disaster was unavoidable?

Yes, it was an unavoidable disaster.

Reportedly, the Uttarakhand government said the IMD issues these warnings every year…

Had that dam not burst due to high-intensity rain, there would not have been so much impact, and probably no casualties either.

Are you saying there was an infrastructure issue involved there?

We informed the state government on time. Handling the situation after that is their job.

How could such a situation be avoided?

In mountains, where there is denser habitation, we need to monitor the upstream river.

Is it possible under the IMD's ambit to do so?

It is not the IMD's job. There is the Central Water Commission and other agencies for this.

Why can't the IMD get it right every time - especially when we know that so much is at stake and the majority of our population still depends on the monsoon for agriculture?

You cannot be completely correct no matter what, even after 100 years of experience.

But the prediction rate of the monsoons is nowhere near 100 per cent…

Cent per cent accuracy can't happen even in the US, although the weather there is so much more predictable because of the extratropical climate.

During the 2009 drought, the department predicted there would be a normal monsoon but it was nowhere close to the reality…

It is true. But the medium range forecast was correct all the time. We do have difficulties in predicting long-term rainfall. The atmosphere does not have a memory and its nature changes in six months. In fact, no other country except India is predicting (the weather) on this scale. Had it been easy, others could have done this.

Friday, October 18, 2013

MALALA SKIPS SCHOOL TO MEET QUEEN ELIZABETH II

MALALA SKIPS SCHOOL TO MEET QUEEN ELIZABETH II



Britain's Queen Elizabeth II meets Malala Yousafzai during a reception for youth, education and the Commonwealth at Buckingham Palace, London, Friday Oct. 18, 2013. The Pakistani teenager, an advocate for education for girls, survived a Taliban assassination attempt last year on her way home from school. (AP Photo/Yui Mok, Pool)

LONDON (AP) — Malala Yousafzai skipped school for the day but she had a pretty good excuse: she was meeting Queen Elizabeth II.
The 16-year-old advocate for girls' education and survivor of a Taliban assassination attempt gave the 87-year-old queen a copy of her book, "I Am Malala," and spoke Friday with her about the importance of education.
Malala said she wouldn't ordinarily miss a school day but had made an exception. The pair also chatted about Malala's homeland, Pakistan's Swat Valley, which the queen visited decades ago.
Malala was one of the guests invited to the reception on youth and education at Buckingham Palace in London. Earlier this month she was ranked by bookmakers as one of the top possible contenders for a Nobel Peace Prize.

Is the ‘Too Big to Fail’ Problem Too Big to Solve? 10-18

Is the ‘Too Big to Fail’ Problem Too Big to Solve?



Wharton emeritus finance professor Jack Guttentag analyzes three different approaches commonly brought up in discussions about taxpayer bailouts of firms considered “too big to fail.” Only one of those approaches, he says, is on the right track.
There seems to be almost universal consensus that using public funds to protect large institutions from failure, commonly called “bailout,” is bad policy. There is nothing like a consensus, however, on what should be done about it, and execution seems to be floundering. Three approaches have emerged, only one of which has much chance of being successful.
Alternative Approaches
Approach 1: Commit That Bailouts Will Never Happen Again: Under this approach, the government adopts a policy that it will never again rescue a major bank faced with impending failure, regardless of what the consequences might be. This seems to be the favored policy position of those who have not thought through all the implications.
The problem is that as long as we have government agencies and public officials with responsibilities for promoting economic growth, price level stability and high employment, this approach cannot be implemented. The public officials who made the bailout decisions during 2007-2009 were forced to choose between using public funds to bail out an imprudent institution, or allowing the failure of that institution to destroy hundreds of innocent firms and the jobs of thousands of innocent workers. They properly chose the bailout as the lesser evil. If public officials ever have to face that horrible choice again, we will want them to make the same decision.
Approach 2: Prevent Systemically Important Firms From Failing by Imposing High Capital Requirements. This is the Dodd-Frank approach that federal agencies and legislators are now attempting to implement. Under this scheme, regulators would tag as “systemically important” (SI) every financial firm that is so large and inter-connected with other firms that its failure would destabilize the world’s financial system. Since failure results from losses that exceed a firm’s capital, SI firms would be subjected to capital requirements high enough to absorb the losses that might occur under the worst circumstances. Just as the ocean liner Titanic was built to be unsinkable, SI firms would be made “unfailurable.” The analogy is apt, even if the word doesn’t yet exist.
The first step in this approach is to identify SI firms, a process that has already started. Under Dodd-Frank, a super-committee of regulatory agencies has been compiling a list of banks and other major firms that are systemically important. While this requires some tough decisions, particularly as it applies to firms other than banks, it is clearly doable.
What is not doable is using capital requirements to reduce the risk exposure of the SI firms to the point where these firms could survive any economic shocks to which they might be subjected. The problem is that capital requirements can be gamed by the SI firms subjected to them, and regulators cannot be depended on to prevent it. This will be discussed below.
Approach 3: Adopt a Better Regulatory System That Shifts the Cost of Bailouts to Systemically Important Firms: The major objection to bailouts is not so much that the firms affected don’t deserve it, but that public funds are used for the purpose. If we had a way to require SI firms as a group to pay the cost of bailouts, the too-big-to-fail problem becomes manageable.
A regulatory system that cannot be gamed by SI firms already exists and has been rigorously tested in other markets. This system, discussed below, could easily be modified to shift the cost of any required bailout to SI firms.
The Intuitive Appeal of Capital Requirements
The capital of a firm is the value of its assets less the value of its liabilities. Insolvency occurs when asset values decline to the point where they are smaller than liabilities, meaning that capital is negative.
The larger a firm’s capital is at any time, the larger the shrinkage in asset values it can suffer before becoming insolvent. It seems intuitively obvious, therefore, that the way to make an SI firm completely safe is to raise its capital requirements to the point where the firm can withstand any shock to the value of its assets. But this view fails to account for the reactions of the firm to higher requirements.
Private financial institutions will never voluntarily carry enough capital to cover the losses that would occur under a disaster scenario such as the financial crisis in 2007-2008. For one thing, such disasters occur very infrequently, and as the period since the last occurrence gets longer, the natural tendency is to disregard it. In a study of international banking crises, [Wharton finance professor] Richard Herring and I called this “disaster myopia.”
Disaster myopia is reinforced by “herding.” Any one firm that elects to play it safe will be less profitable than its peers, making its shareholders unhappy and opening itself to a possible takeover.
Even when decision makers are prescient enough to know that a severe shock that will generate large losses is coming, it is not in their interest to hold the capital needed to meet those losses. Because they don’t know when the shock will occur, preparing for it would mean reduced earnings for the firm and reduced personal income for them for what could be a very long period. Better to realize the higher income as long as possible, because if they stay within the law, it won’t be taken away from them if the firm later becomes insolvent.
A capital requirement of, say, 6%, means that a firm will remain solvent in the face of a shock that reduces the value of its assets by 5.99%. How safe that is depends on the size of potential shocks that reduce the value of assets, which in turn depends on the riskiness of the assets the firm holds. SI firms can game the system by shifting into higher-yielding but riskier assets that are subject to larger potential shocks.
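To make that arithmetic concrete, here is a minimal sketch of the solvency check implied by a capital ratio (the numbers are illustrative, not drawn from any particular firm):

# Illustrative solvency arithmetic behind a capital requirement: a firm whose
# capital equals 6% of assets survives any proportional fall in asset values
# smaller than 6%; a larger shock wipes out its capital.
def is_solvent(assets, liabilities, shock):
    """True if assets still exceed liabilities after a proportional decline
    of `shock` (e.g. 0.0599 for a 5.99% fall in asset values)."""
    return assets * (1 - shock) > liabilities

assets = 100.0
capital_ratio = 0.06
liabilities = assets * (1 - capital_ratio)        # 94.0

print(is_solvent(assets, liabilities, 0.0599))    # True  -- survives a 5.99% shock
print(is_solvent(assets, liabilities, 0.0601))    # False -- a 6.01% shock exhausts capital
# Shifting into riskier, higher-yielding assets enlarges the potential shock
# without changing the measured capital ratio -- the gaming problem just described.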
Regulators have tried to shut down this obvious escape valve by adopting risk-adjusted capital ratios, where required capital varies with the type of asset. SI firms must hold more capital against commercial loans, for example, than against home loans that are viewed as less risky. However, this does not prevent the firm from making adjustments within a given asset category. For example, during the years prior to the financial crisis, some mortgage lenders shifted into sub-prime home mortgage loans, which were subject to the same capital requirements as prime loans.
A given set of capital requirements may make SI firms safe in one economic environment, but not in another. In particular, if a bubble emerges in a major segment of the economy, as it did in the home mortgage market during 2003-2007, a massive shock to asset values will occur when the bubble bursts.
In principle, regulators can offset a shift toward riskier assets within given asset categories by breaking the categories down into even smaller sub-categories subject to different capital requirements. And they can adjust to emerging bubbles by raising requirements for the sector being impacted by the bubble. But such actions require a degree of intelligence, foresight and political courage on the part of regulators that history suggests we have no reason to expect.
Banks and other depositories have been subject to capital requirements since the 1980s. During the housing bubble, regulators did not set higher capital requirements for sub-prime mortgages, nor did they increase the ratios overall.
The need is for a regulatory system that cannot be gamed by SI firms; that does not require regulators to be smarter or more strongly motivated than the firms they regulate; and that, in the event an SI firm nonetheless fails and needs to be bailed out, imposes the cost of the bailout on all SI firms rather than on taxpayers.
An Alternative to Capital Requirements: Transaction-based Reserving
Under transaction-based reserving (TBR), financial firms are regulated as if they were insurance companies that are obliged to contribute to a reserve account in connection with every asset they acquire. The portion of the cash inflows generated by the asset that is allocated to the reserve account depends on the potential future outflows associated with the asset.
If the asset is a loan or security, the required allocation to a contingency reserve would be, say, 50% of the portion of the income generated by the asset that is risk-based. If a prime mortgage was priced at 4% and zero points, for example, the reserve allocation for a 6% 2-point mortgage might be ½% plus 1 point.
Contingency reserves cannot be touched for a long period, perhaps 15 years, except in an emergency. Inflows allocated to reserves would not be taxable until they were withdrawn.
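Here is a minimal sketch of how such an allocation might be computed for a single loan, assuming the 50% fraction mentioned above; the function name and the split of a loan's pricing into a "risk-based" component are illustrative assumptions, not a formula prescribed by the article:

# Sketch of a transaction-based reserving (TBR) allocation for one loan.
# Assumption for illustration: the lender identifies the risk-based portion of
# the loan's rate and points, and a fixed fraction of that portion is credited
# to the locked contingency reserve.
def tbr_allocation(risk_based_rate, risk_based_points, reserve_fraction=0.5):
    """Return (rate_to_reserve, points_to_reserve) for a single transaction."""
    return reserve_fraction * risk_based_rate, reserve_fraction * risk_based_points

# A loan whose pricing contains a 1% risk-based rate component and 2 risk-based
# points would put 0.5% and 1 point into the contingency reserve.
print(tbr_allocation(0.01, 2))    # -> (0.005, 1.0)

This matches the ½% plus 1 point illustration above only if the risk-based portion of the 6%, 2-point mortgage is taken to be 1% and 2 points; that split is an assumption made here for the example.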
The major advantage of TBR is that it applies to every transaction with a risk component, whether it is shown on the firm’s balance sheet or not. It is similar to a capital requirement that is applied to every individual asset and risk-generating activity. The firm cannot game the system by shifting to riskier assets within the asset groups specified by the regulator, or by incurring new types of obligations that are not shown on the balance sheet, as they can with capital requirements.
Another advantage of TBR is that regulators need not make judgments about the riskiness of different assets – judgments they are not well-equipped to make. Such judgments are made by the firm itself in its pricing.
To some degree, TBR automatically dampens the excessive optimism that feeds bubbles. A shift to riskier loans during periods of euphoria automatically generates larger reserve allocations because riskier loans carry higher risk premiums. To the degree that a euphoric SI firm underprices risk during such episodes, however, failure is possible, and with it the possible need for a bailout.
This points up another critical advantage of TBR, which is that it provides a mechanism for shifting the cost of a bailout to the SI firms. Part of the reserve allocation of SI firms (but not other firms) would accrue not to their reserve account, but to that of the FDIC. It would be held by the FDIC to cover any losses associated with the bailout of an SI firm, should that prove necessary.
Private mortgage insurance companies (PMIs) have been subject to TBR since their inception in the 1950s. They must allocate 50% of their premium income to a contingency reserve for 10 years. The system was not rigorously tested until the recent financial crisis, which devastated the industry and battered their shareholders. Yet the PMIs have been able to meet all their obligations in connection with the extraordinary losses suffered by lenders during the crisis. TBR allowed the PMIs to do exactly what they were chartered to do: cover losses out of their reserves. There were no bailouts of PMIs.