Shyam's Slide Share Presentations


This article/post is from a third party website. The views expressed are those of the author. We at Capacity Building & Development may not necessarily subscribe to it completely. The relevance & applicability of the content is limited to certain geographic zones. It is not universal.


Wednesday, January 27, 2016

Breaking the brain’s garbage disposal: Study shows even a small problem causes big effects 01-27

You wouldn’t think that two Turkish children, some yeast and a bunch of Hungarian fruit flies could teach scientists much.

But in fact, that unlikely combination has just helped an international team make a key discovery about how the brain’s “garbage disposal” process works — and how little needs to go wrong in order for it to break down.

The findings show just how important a cell-cleanup process called autophagy is to our brains. It also demonstrates how even the tiniest genetic change can have profound effects on such an essential function.

The new understanding could lead to better treatments for people whose brain and nerve cells have trouble “taking out the trash.” Some such drugs already exist, and more could follow.

Following a mystery to its end

In a new paper in the online journal eLife, the team describes their painstaking effort to figure out what was wrong in the Turkish siblings, and to understand what it meant. The children have a rare condition called ataxia that makes it harder for them to walk. They also have intellectual disability and developmental delays.

Ataxia is rare, affecting about one in every 20,000 people, and can cause movement problems in people who develop it in adulthood, or a range of symptoms when it arises in children.
Because researchers from the University of Michigan Medical School had published studies about families with multiple cases of ataxia before, Turkish researchers got in touch with them when the children’s parents brought them in for treatment.

That started a long chain of scientific sleuthing that led to today’s publication. First, the U-M team studied samples of the children’s DNA, and used advanced methods to pinpoint the exact genetic mutation that caused their symptoms.

It turned out to be on one of the genes that scientists know play a key role in autophagy, called ATG5. Cells throughout the body trigger their internal garbage crews by turning on this gene and its partners, and using them to make proteins that help clean up the cell.

The junk that these garbage crews clean up includes botched proteins–ones that have been used up or weren’t made right in the first place.

In fact, many forms of ataxia (and lots of other diseases) are caused by genetic problems that result in brain and nerve cells making such damaged, misfolded proteins. The proteins build up inside cells, killing them and causing neurological problems.

So, scientists and drug developers have tried to ramp up autophagy activity. They hope that by cleaning that cellular junk up faster, they can keep it from causing symptoms.

Tiny change – big effects

The children’s ataxia gene problem turned out to be not such a big deal genetically–it was such a slight mutation that it barely changed the way the cells made the protein. But that tiny change was enough to alter the autophagy process, and keep the children’s brain and nerve cells from working properly.

And that’s where the yeast and Hungarian flies come in. Using them, the researchers could see what the children’s problem gene did–and what that meant for the autophagy process. That’s because the autophagy process is so important that organisms ranging from yeast to humans make almost exactly the same ATG5 protein–it’s what scientists call “highly conserved” across species.

What they saw amazed them. The genetic mutation led cells to change just one link in the chain of amino acids that make up the ATG5 protein. The new amino acid even had the same electrical charge as the usual one. But that one changed link happened to be at the exact spot where ATG5 and its partner, called ATG12, connect to one another.

Since the two crucial autophagy partners couldn’t link together as usual, the children’s cells–and the yeast and flies’ cells–couldn’t clean up their cellular trash nearly as well. Autophagy didn’t shut down completely, but less of it happened. And the fruit flies, like the children, had problems walking.

“This is a window into the autophagy system, and the first time where having less autophagy causes ataxia, developmental delays and intellectual disability,” says Margit Burmeister, Ph.D., the U-M neurogeneticist who led the research and is co-senior author on the new paper. “It’s a subtle change, but it shows how important autophagy is in neurological disorders.”

Burmeister and colleagues from the University of Michigan, St. Jude Children’s Research Hospital, Howard Hughes Medical Institute, Istanbul University and Bogazici University in Istanbul and Eötvös Loránd University in Budapest hope the findings lead to autophagy-related treatments.

Meanwhile, they’re still working to understand how the change in ATG12-ATG5 binding actually alters autophagy. They’re looking at cells made with mutations from other ataxia patients to see if autophagy is also changed.

They’re also looking for more families with ataxias. Each family could hold clues as important as the Turkish children’s mutation did. In fact, Burmeister was in Turkey late in 2015 to work with colleagues to find more potential cases. Small villages with centuries of marriage among people with some relation to one another, and large families, can prove to be important to science.

The acceleration in genetic sequencing and other testing, made possible in the last decade by advances in technology and scientific methods, means they’ll get closer to answers faster. What once took many years can now be done in one. Having the expertise concentrated at U-M in genetics, autophagy, fruit fly biology, cell biology and more made the work go even faster, says Burmeister. U-M colleagues Daniel Klionsky, Jun Hee Lee and Jun Z. Li were critical to the new research. So were St. Jude colleagues led by Brenda Schulman, who made X-ray images of the mutant ATG5 protein; Zuhal Yapici and Aslihan Tolun in Istanbul; and Gabor Juhasz in Budapest.

Friday, January 22, 2016

My Wife Says I Never Listen To Her, At Least I Think That’s What She Said 01-22

If you’re in sales, I know you have heard the saying, “The reason you have two ears and one mouth is so that you can listen twice as much as you talk.” Listening is one of the most important skills you can ever acquire. How well you listen has a major impact on your job effectiveness and on the quality of your relationships with others. We listen for enjoyment. We listen to understand. We listen to obtain information. We listen to learn.

When I am in an interview with prospective employees, the most important trait I am looking for is their listening skills. If my interviewee can’t wait to talk until I am finished speaking, I know that is exactly the way they will act with a prospect. What that tells me, and the potential prospect, is that what we have to say is less important than what they have to say. More likely than not, I will move on to the next candidate. The potential client will, consciously or subconsciously, probably do the same thing. Next!

The way to become a better listener is to practice “active listening”. Active listening is the process where you make a conscious effort to hear not only the words that another person is saying but, more importantly, to try and understand the total message being sent.

Given all the listening we do, you would think we would be good at it! The fact is, we’re not. Research has shown that we remember a dismal 25-50% of what we hear. That means when you listen to your boss, peers, potential clients, children or spouse for 10 minutes, you have only heard 2½-5 minutes of the conversation.

"Wisdom is the reward you get for a lifetime of listening when you'd have preferred to talk." ~Doug Larson

Turn it around and it reveals that when you are giving directions or presenting information, your audience isn’t hearing the entire message either. You can only hope the important parts are captured in your 25-50%, but what if they’re not?

Selling is an extremely advanced form of communication. It requires the utilization of all our senses. Although you may feel that the greatest barriers to your selling performance may be attributed to having the wrong product, closing techniques, presentation tools, or even prospects, I want you to consider the possibility that the foundation of successful selling is based on how well you listen.

The ability to actively listen has been proven to significantly improve the productivity of a professional salesperson. Knowing that, isn’t it ironic that listening is most likely the least developed skill amongst salespeople?

Just think back to your childhood, your time in school, even your career: were you formally trained to listen? My money is on your answer being no. Very few of us were formally taught effective listening skills. Most of the time, listening is simply the practice of hearing words coming out of our potential prospect’s mouth. So tell me, please: if we know that effective listening makes a dramatic difference, why don't we listen better?

To listen actively and comprehensively takes concentration, hard work, patience, the ability to interpret other people's ideas and summarize them, as well as the ability to identify nonverbal communication such as body language. Listening is both a complex process and a learned skill; it requires a conscious intellectual and emotional effort.

Listening with intention improves the quality of the relationships you have with prospects, friends, co-workers, and your family members. Ineffective listening can damage relationships and weaken the trust that you have with those very same people. The price of poor listening is many lost opportunities, professionally and personally. Not taking advantage of a selling opportunity is tragic and can easily be avoided.

It has been noted that more than 60 percent of all problems existing between people and within businesses are a result of faulty communication. A failure to actively listen can result in costly mistakes and misunderstandings. Clearly, listening is a skill we can all benefit from improving. By becoming a better listener, you will increase your paycheck by improving your productivity, as well as your ability to influence, persuade and negotiate. What’s more, you’ll avoid conflict and misunderstandings – all essential to sales success.

Listening is a learned and practiced skill that will open up new selling opportunities that you may have never noticed. It allows you to receive and process valuable information that might have been missed or neglected otherwise. So, invest the time needed to sharpen your listening skills.

Remember, when speaking with a prospect, you will not learn anything from listening to yourself talk. When selling ideas to a battered nation as Prime Minister of the United Kingdom, Winston Churchill understood the importance of listening. “Courage is what it takes to stand up and speak; courage is also what it takes to sit down and listen.” The point is that all anyone wants in a conversation is to be heard and acknowledged. Take notice of what happens when you give someone your attention by actively listening. They will want to reciprocate. To be successful in the game of sales, your potential clients have to hear what you are saying. Listen to them and they will listen to you.

I have found over the years that the training room is simply a microcosm of the sales situation. Salespeople will act exactly the same way in the conference room as they do in an office or home of their potential clients. Armed with that knowledge, I use the conference room as a place both to observe and to modify certain behaviors in the group as well as in individuals.

Here are a few tips that will help you help your salespeople improve their active listening skills.

  1. I find that role playing is often a great help to salespeople. In your next meeting encourage silence to practice active listening. Many salespeople can only wait a split second before they respond to a potential prospect’s comments or questions. Instead, in your meeting, get them in the habit of waiting a minimum of three to four seconds before responding to your questions or comments. Silently count to ensure that enough time has elapsed. This conscious pause will make your salespeople more comfortable with that moment of silence they are used to filling with their own voice. Although many salespeople find the conscious effort to stay quiet challenging, silence creates the space that will motivate their prospect to share additional information. It also gives them enough time to respond thoughtfully and intelligently to their prospect’s specific needs.

  2. Never interrupt while the prospect is speaking. This is my biggest pet peeve. Not only is it unproductive, it is rude, rude, rude. Did I mention it is rude? Make a game of catching salespeople interrupting each other as they vie for your attention or acknowledgment. Most people really don’t know they do this. It is such a part of their everyday personality that it goes totally unnoticed. Pointing it out in a good-natured way at least makes them aware of the interruptions. From there they can become more conscious of it and start to change the behavior. Obviously, what we were taught as children still applies. Enough said.

  3. Teach them to be present, to listen with an open mind (without filters or judgment), and to focus on what the potential prospect is saying (or trying to say) instead of being concerned with closing a sale. In the middle of a sales meeting I will stop, tell everyone to get out a piece of paper, and have them write down exactly what I just said. The first time I attempted this, not a single person in the room could accurately reproduce what I had just said to them. What a learning experience that was for me. I began to do this on a regular basis and, lo and behold, they began to get better at paying attention. As the reps learned the importance of listening over time, they realized that doing so showed the potential prospect a genuine interest in helping them. Without actively listening to their prospects, they run the risk of missing subtle nuances or inferences that could make or stall the sale.

  4. Resist the temptation to rebut your prospects. As human beings we have a natural tendency to resist new information that conflicts with what we believe. Often, when we hear someone saying something with which we disagree, we immediately begin formulating the rebuttal in our mind and obscure the message they are giving. If we are focused on creating a rebuttal, we are not listening. Remember that you can always rebut later, after you have heard the whole message and had time to think about it. Just remember that it is essential to NEVER make the potential client feel stupid. When presenting information that is opposed to what they believe, do so in a series of questions that allow them to move down the path themselves. In the end you are much better served if they believe they came to the change of mind on their own.

  5. Make the prospect feel heard. This goes beyond simply becoming a better listener. It involves making certain that the person to whom you are listening actually feels like you’ve been listening. To make someone feel heard, clarify what your potential client has said during the conversation. Rephrase their questions or comments in your own words to ensure that you not only heard but understood them as well. If you need more information for a clearer understanding, use clarifiers like:

"To further clarify this ..."

"What I am hearing is ..."

"For my own understanding what you are saying is ..."

"Help me understand ..."

"Tell me more ..."

Asking questions and using clarifiers demonstrates your concern and interest in finding a solution for the prospect’s specific situation.

I use the same technique in my sales meetings as I mentioned before. I go over something in the meeting and then ask individual reps to rephrase, paraphrase and parrot what I have just said. The more they practiced, the better they got.

  6. Listen for what is not said. What is implied is often more important than what is articulated. If you sense that the prospect is sending conflicting messages, ask a question to explore the meaning behind the words and the message you think the prospect is trying to communicate. Listen FOR information. Consider that during most conversations, we listen TO information. In other words, all we hear is what they are saying. However, when you listen FOR information, you are looking through the words to discover the implied meaning behind them. This prevents you from incorrectly prejudging or misinterpreting the message the prospect is communicating to you. Here are the main things we listen for when speaking with a prospect:

  1. Listen for what is missing.
  2. Listen for concerns the prospect may have or what is important to them.
  3. Listen for what they value.

Thursday, January 21, 2016

Managing Your Mission-Critical Knowledge 01-21

When executives talk about “knowledge management” today, the conversation usually turns very quickly to the challenge of big data and analytics. That’s hardly surprising: Extraordinary amounts of rich, complicated data about customers, operations, and employees are now available to most managers, but that data is proving difficult to translate into useful knowledge. Surely, the thinking goes, if the right experts and the right tools are set loose on those megabytes, brilliant strategic insights will emerge.

Tantalizing as the promise of big data is, an undue focus on it may cause companies to neglect something even more important—the proper management of all their strategic knowledge assets: core competencies, areas of expertise, intellectual property, and deep pools of talent. We contend that in the absence of a clear understanding of the knowledge drivers of an organization’s success, the real value of big data will never materialize.

Yet few companies think explicitly about what knowledge they possess, which parts of it are key to future success, how critical knowledge assets should be managed, and which spheres of knowledge can usefully be combined. In this article we’ll describe in detail how to manage this process.

Map Your Knowledge Assets

The first step is to put boundaries around what you’re trying to do. Even if you tried to collect and inventory all the knowledge floating around your company—the classic knowledge-management approach—you wouldn’t get anything useful from the exercise (and you’d suffer badly from cognitive overload). Our goal is to help you understand which knowledge assets—alone or in new combinations—are key to your future growth. We would bet heavily that if your company has a knowledge-management system, it doesn’t adequately parse out your mission-critical knowledge.

This step alone can be quite challenging the first time around. When we worked with a group of decision makers at ATLAS, the major particle physics experiment at the European Organization for Nuclear Research (CERN), we interviewed many stakeholders to get a holistic view of the knowledge underpinning its success and then surveyed nearly 200 other members of the organization. Ultimately we mapped only a portion of the ATLAS knowledge base, but in the process we whittled down a list of 26 knowledge domains to the eight that were deemed most important to organizational outcomes.

Your list of key assets should ultimately include some that are “hard,” such as technical proficiency, and some that are “soft,” such as a culture that supports intelligent risk taking. You may also have identified knowledge that you should possess but don’t or that you suspect needs shoring up. This, too, should be captured.

The next step is to map your assets on a simple grid along two dimensions: tacit versus explicit (unstructured versus structured) and proprietary versus widespread (undiffused versus diffused). The exhibit “What Kind of Knowledge Is This?” which includes a mapping grid, will help you figure out where to place your knowledge assets on your own map. (We owe a debt to Sidney G. Winter, Ikujiro Nonaka, and the late Max Boisot for their work on these dimensions. Had he lived, Boisot would have been a coauthor on this article.)

Use these categories to help place your assets along the y axis from bottom to top:
  • An expert can use the knowledge to perform tasks but cannot articulate it in a way that allows others to perform them.
  • Experts can perform tasks and discuss the knowledge involved with one another.
  • People can perform tasks by trial and error.
  • People can perform tasks using rules of thumb, but causal relationships aren’t clear.
  • It’s possible to identify and describe the relationship between variables involved in doing a task so that general principles become clear.
  • The relations among variables are so well known that the outcome of actions can be calculated and reliably delivered with precision. (Knowledge assets covered under patents or other forms of copyright protection generally fit here.)
Use these categories to help place your assets along the x axis from left to right:
  • Only one person in the organization has this knowledge.
  • A few people in the organization have this knowledge.
  • Many people in one part of the organization have this knowledge.
  • People throughout the organization have this knowledge.
  • Many people in the industry have this knowledge.
  • Many people both inside and outside the industry have this knowledge.
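The two ordinal scales above lend themselves to a simple scoring exercise. As a minimal sketch (not from the article; the asset names, scores, and midpoint threshold are illustrative assumptions), each asset can be scored from 1 to 6 on both axes and assigned to a quadrant of the map:

```python
def quadrant(structure, diffusion, max_score=6):
    """Place a knowledge asset on the 2x2 map.

    structure: 1 (deeply tacit) to 6 (fully codified)  -> y axis
    diffusion: 1 (one person)   to 6 (industry-wide)   -> x axis
    """
    mid = max_score / 2
    tacit = structure <= mid
    proprietary = diffusion <= mid
    if tacit and proprietary:
        return "tacit & proprietary"     # lower left: potential future advantage
    if tacit:
        return "tacit & diffused"        # shared know-how that resists codification
    if proprietary:
        return "codified & proprietary"  # e.g. patents, internal methodologies
    return "codified & diffused"         # e.g. industry-standard practices

# Hypothetical assets scored by a team as (structure, diffusion),
# per the category lists above.
assets = {
    "rainmaker deal-closing skills": (1, 2),
    "discovery-driven planning method": (5, 3),
    "patented file format": (6, 5),
}

for name, (s, d) in assets.items():
    print(f"{name}: {quadrant(s, d)}")
```

Discussing why each asset sits where it does, and where it ought to move, is the point of the exercise; the scores themselves are only a prompt for that conversation.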

Unstructured versus structured.

Unstructured (tacit) knowledge involves deep, almost intuitive understanding that is hard to articulate; it’s generally rooted in great expertise. World-class, highly experienced engineers may intuit how to solve technical problems that nobody else can (and may be unable to explain their intuition). Rainmakers in a strategy consulting firm know in their bones how to steer a conversation or a discussion, develop a relationship, and close a deal, but they would have trouble telling colleagues why they made a particular move at a particular moment. 

Structured (explicit or codified) knowledge is easier to communicate: A company that’s expert in the use of discovery-driven planning, for example, can bring people up to speed on that methodology quickly because it has given them recourse to a common language, rules of thumb, and conceptual frameworks. Some knowledge is so fully structured that it can be captured in patents, software, or other intellectual property.

Undiffused versus diffused.

To what extent is the knowledge spread through—or outside—the company? One division may have expertise in negotiating with officials of the Chinese government, for example, which another division totally lacks. That knowledge is obviously undiffused. But most companies have certain broadly shared competencies: Those in the consumer packaged goods industry tend to have companywide strength in developing and marketing new brands; and many employees in the defense industry know a lot about bidding on government contracts. Some knowledge, of course, is diffused far beyond the boundaries of the organization.

Interpret the Map

Simply mapping your knowledge assets and then discussing the map with your senior team can uncover important insights and ideas for value creation, as our experience with decision makers at Boeing and ATLAS demonstrates.

Global sourcing at Boeing.

Sourcing managers at Boeing were aware that their relationships with internationally dispersed customers, suppliers, and partners were changing. The whole ecosystem was sharing in the creation of new aircraft technologies and services and in the associated risks. Future success would depend on learning to manage this interdependence.

With that insight in mind, the managers mapped the critical knowledge assets in their global sourcing activities, which ultimately resulted in a research paper that one of us (Martin Ihrig) coauthored with Sherry Kennedy-Reid of Boeing. They saw that cost-related knowledge—performance metrics, IP strategy, and supply-base management—was well structured and widely diffused. However, knowledge about supplier capabilities, although codified, had not spread throughout the Boeing sourcing community. And other knowledge that was important to future value creation—how to leverage Boeing’s potent and technically sophisticated culture for effective communication and negotiation, determine Boeing’s business needs and global sourcing strategy, and, most important, assess the geopolitical influences on global sourcing decisions—was neither codified nor widely shared.

Taken together, these observations suggested that Boeing was placing greater emphasis on technical efficiencies, such as improving processes and productivity, than on strategic growth, such as creating research initiatives with suppliers or building a shared innovation platform. As Boeing’s business became progressively more intertwined with that of its ecosystem partners, the development of knowledge assets would need to change.

Insights from this mapping exercise enabled the team to recommend several initiatives aimed at developing and disseminating tacit knowledge, such as a program to help employees who had a deeper understanding of geopolitical influences to put some structure around their knowledge and pass it on to others in the company, and a program to identify the capabilities of key suppliers and determine how Boeing could work more strategically with them.

Advanced physics at CERN.

The experimental work done at ATLAS is carried out by thousands of visiting scientists from 177 organizations in 38 countries, working without a traditional top-down hierarchy. This extraordinary operation has had spectacular results, including the discovery of the Higgs boson, for which Peter Higgs and François Englert were awarded a Nobel Prize in 2013. Our mapping of ATLAS’s knowledge base was done in a research partnership with Agustí Canals, Markus Nordberg, and Max Boisot.

Our team had a surprising insight when a study of that map revealed that “overview of the ATLAS experiment” was one of the top eight knowledge domains. We hadn’t given much thought to that domain, but we quickly realized how central it was to a knowledge-development program like ATLAS. Changes in the overall direction of a project can’t easily be codified when the project is so complex. The direction is continually evolving, and not necessarily in a linear fashion, as the technical and scientific work advances; but individual researchers can’t adapt their work accordingly when they don’t know what that direction is. ATLAS requires that huge numbers of people, from many countries and cultures, understand what others are learning and how it affects the overall technical direction.

Without the knowledge map, the leadership team at ATLAS would have predicted that scientific and technical knowledge were regarded as mission critical—indeed, most existing resources went to helping those domains make progress. But we found it extraordinary that the soft domains of project management and communication skills also emerged as central to ATLAS’s performance. Retrospectively, that made sense: A consensus on overall direction depends on the successful sharing of knowledge among specializations and between scientists as they cycle back to their home organizations and new people take their place. These important soft domains were much less developed and not well diffused; clearly, they needed more resources and attention.

Identify New Opportunities

Mapping knowledge assets and discussing their implications often leads directly to strategic insights, as it did at Boeing and ATLAS. But we also find it helpful to systematically explore what would happen if knowledge were moved around on the map or different spheres of it were combined. Here are some examples:

Selectively structure tacit knowledge (move it up on your map’s Y axis).

The proprietary knowledge assets in the lower left corner of your map are often the most important knowledge your company has—the deep-seated source of future strategic advantage. You need to think about which of them can and should become more structured so that (for example) your basic research will lead to the creation of bona fide intellectual property that can be developed into new products, licensed, or otherwise monetized. Structuring tacit knowledge often involves capturing expert employees’ insights with the ultimate goal of disseminating them to many more people in the company. In general, speeding up codification will increase the value of knowledge. But making the tacit explicit can also be dangerous. The more codified the knowledge is, the more easily it may be diffused and copied externally.

When you’re trying to decide what to structure further and what to keep tacit, it can be useful to distinguish between product and process. Suppose you’ve decided that your expertise in some technical domain can be codified into intellectual property. You may want to capture some of your process knowledge—whether it’s an engineer’s know-how or the conversational routines your marketing people use to tease out emerging customer needs—only informally. That way, even if a patent expires or codified knowledge is leaked, essential experience stays within the company.

Disseminate knowledge within the company (move it to the right on your map’s X axis).

Purposefully deciding which knowledge to diffuse internally can pay huge dividends. Very often one division is wrestling with a problem that another division has solved, and close study of the map will reveal the potential for productive sharing—as it would with the exemplary business unit’s expertise in negotiating with the Chinese. Productive sharing can also be done between functions: Korean chaebols (conglomerates) expend considerable money and effort to ensure that knowledge is transferred from company to company as well as from headquarters to subsidiaries.

The ease of knowledge sharing is directly proportional to the degree of knowledge codification, of course: A written document or spreadsheet is easier to share than tacit experience accumulated over many years. Some tacit knowledge can’t be codified but can be shared. One powerful way to do so internally is to run workshops that bring together people who have subject matter expertise with people facing a particular problem for which that expertise is relevant. Apprenticeship programs, too, have long been an effective way to transfer difficult-to-codify tacit knowledge.

Diffuse knowledge outside the company (move it farther right on your map’s X axis).

The most straightforward way to create value through knowledge dissemination is to sell or license your intellectual property. DuPont, for example, commercializes only a small fraction of the hundreds of patents it owns; the rest can be licensed, sold, or shared with other companies. Even companies without patents can often identify new markets for existing IP. This magazine is an example: Reprints of HBR articles have been sold to MBA and corporate learning programs for decades. A few years ago someone had the idea of collecting the best of those articles in “Must Read” collections for individual buyers, and a profitable business was born.

Some companies give away knowledge and still make a big profit.

Many companies are experimenting with less familiar ways of sharing knowledge across organizational boundaries. If suppliers, customers, and even competitors that work together on projects are creating value within your ecosystem, as at Boeing, this is worth considering. But you should keep in mind what knowledge must be protected; your map of assets will help you make those judgment calls.

Some companies even give away knowledge, ultimately making more money than they would if they kept it proprietary. In the early 1990s Adobe Systems saw an opportunity to develop a file-sharing format that would retain the text, fonts, images, and other graphics in a document no matter what operating system, hardware, or software was used to send and view it. Adobe was among the first to develop the idea behind the PDF. It then structured that knowledge in the form of the Adobe Acrobat PDF Writer and Adobe Reader. It shared the Reader on the internet, thereby creating demand for the Writer (at $300 and up), which was free from competition for years and remains one of Adobe’s leading products. Similarly, McKinsey shares selected insights through McKinsey Quarterly, generating demand for its proprietary problem-solving skills.

The recent decision of the business magnate and inventor Elon Musk to share Tesla Motors patents with anyone who wants to use them was also very astute. Clearly, Musk believes that Tesla (like Adobe) will make more money if more people build on the platform he has provided. His decision also recognizes that in order to thrive, Tesla (like Boeing) needs to create a strong ecosystem. It’s a vote of confidence in the company’s capacity to protect enough tacit knowledge to stay ahead of the competition. (Musk told a reporter for Bloomberg Businessweek, “You want to be innovating so fast that you invalidate your prior patents, in terms of what really matters. It’s the velocity of innovation that matters.”) This is one of the most interesting examples of open innovation that we’ve seen: Musk is betting not just that he can pull more partners into the world of electric cars but that he can pull the mainstream automobile industry into a more responsible position with respect to climate change.

Contextualize knowledge (move it down on your map’s Y axis).

Codified knowledge can be applied in less structured spaces in a variety of ways. Sometimes it’s a matter of taking well-established routines and applying them to new businesses. This approach is central to the growth strategies of many companies. Procter & Gamble, for instance, uses world-class brand-building competencies when it moves into new markets and develops new products. Similarly, Goldman Sachs rapidly generates new investment banking offerings by applying its analytics capabilities to changes in financial market conditions.

Contextualization can also come from combining structured and unstructured knowledge. The people who originally tried to build knowledge-management systems for consulting firms quickly discovered that most consultants used codified information as a networking tool: They would notice who wrote an article on sourcing from Indonesia (for example) and then talk with that person directly, picking her brain for more-tacit insights. Indeed, many companies build competitive advantage on just such combinations.

To be applied in a new setting, codified knowledge must generally be contextualized. If Boeing USA comes up with a new production process and then ships the related knowledge to China in the form of supporting documents, Chinese engineers have to assimilate the knowledge and adapt it to their context.

Discover new knowledge (move it to the left on your map’s X axis).

The most challenging—and highest-potential—opportunities often come from spotting connections between disparate areas of expertise (sometimes inside the company, sometimes outside it). The analytic techniques that can turn big data into big knowledge are used partly in hopes of finding such unexpected connections.

In the pursuit of innovation, flashes of insight can come from many sources. Sometimes a new technology embedded in an existing product makes it possible to change your value proposition. That happened when Rolls-Royce’s jet engine sensors provided the company with new performance data, which in turn made it more profitable to sell power by the hour than to sell engines outright. Thinking about someone else’s business model can lead to strategic insights as well. After managers at CEMEX studied how FedEx, Domino’s, and ambulance squads operate, they decided to charge for delivering truckloads of ready-mix concrete within a specified time window rather than for cubic meters of the product. Changes in the external environment can create new opportunities. Subway went from an also-ran to a high-growth fast-food business when it capitalized on consumers’ growing interest in tasty, more-nutritious, low-calorie food. Your company may have developed valuable process expertise that you could sell through consulting to other companies even outside your industry. IBM has done that many times over. 

It’s not easy to systematize this part of the knowledge-development process, which arises to some extent from intuition, tacit knowledge, and time spent studying the map. The ATLAS team’s insight about the importance of soft skills is an example. So is Boeing’s insight that becoming part of an interdependent ecosystem had major implications for what kinds of knowledge would have to be developed. A small publishing industry is devoted to helping companies make innovative connections of this kind; it includes the book MarketBusters: 40 Strategic Moves That Drive Exceptional Business Growth, which one of us (Ian MacMillan) wrote with Rita McGrath; William Duggan’s Strategic Intuition: The Creative Spark in Human Achievement; and Frans Johansson’s The Medici Effect: What Elephants and Epidemics Can Teach Us About Innovation.

One thing we can assure you: Your competitors will have access to the same kinds of data and general industry knowledge that you do. So your future success depends on developing a new kind of expertise: the ability to leverage your proprietary knowledge strategically and to make useful connections between seemingly unrelated knowledge assets or tap fallow, undeveloped knowledge.

Companies invest tens of millions of dollars to develop knowledge but pay scant attention to whether it contributes to future competitive advantage. The process we’ve outlined here is meant to prevent that lapse. Once you’ve mapped your mission-critical knowledge assets, the challenge is to be disciplined about which of them to develop and exploit, keeping future growth front and center. (Remember, strategy always includes deciding what not to do.) If your company thoughtfully manages its knowledge portfolio, it will achieve a distinct competitive advantage.

View at the original source

Wednesday, January 20, 2016

Customer Data: Designing for Transparency and Trust 01-20

Customer Data: Designing for Transparency and Trust

With the explosion of digital technologies, companies are sweeping up vast quantities of data about consumers’ activities, both online and off. Feeding this trend are new smart, connected products—from fitness trackers to home systems—that gather and transmit detailed information.

Though some companies are open about their data practices, most prefer to keep consumers in the dark, choose control over sharing, and ask for forgiveness rather than permission. It’s also not unusual for companies to quietly collect personal data they have no immediate use for, reasoning that it might be valuable someday.

As current and former executives at frog, a firm that helps clients create products and services that leverage users’ personal data, we believe this shrouded approach to data gathering is shortsighted. Having free use of customer data may confer near-term advantages. But our research shows that consumers are aware that they’re under surveillance—even though they may be poorly informed about the specific types of data collected about them—and are deeply anxious about how their personal information may be used.

In a future in which customer data will be a growing source of competitive advantage, gaining consumers’ confidence will be key. Companies that are transparent about the information they gather, give customers control of their personal data, and offer fair value in return for it will be trusted and will earn ongoing and even expanded access. Those that conceal how they use personal data and fail to provide value for it stand to lose customers’ goodwill—and their business.

The Expanding Scope of Data

The internet’s first personal data collectors were websites and applications. By tracking users’ activities online, marketers could deliver targeted advertising and content. More recently, intelligent technology in physical products has allowed companies in many industries to collect new types of information, including users’ locations and behavior. The personalization this data allows, such as constant adaptation to users’ preferences, has become central to the product experience. (Google’s Nest thermostat, for example, autonomously adjusts heating and cooling as it learns home owners’ habits.)

The rich new streams of data have also made it possible to tackle complex challenges in fields such as health care, environmental protection, and urban planning. Take Medtronic’s digital blood-glucose meter. It wirelessly connects an implanted sensor to a device that alerts patients and health care providers that blood-glucose levels are nearing troubling thresholds, allowing preemptive treatments. And the car service Uber has recently agreed to share ride-pattern data with Boston officials so that the city can improve transportation planning and prioritize road maintenance. These and countless other applications are increasing the power—and value—of personal data.

Of course, this flood of data presents enormous opportunities for abuse. Large-scale security breaches, such as the recent theft of the credit card information of 56 million Home Depot customers, expose consumers’ vulnerability to malicious agents. But revelations about companies’ covert activities also make consumers nervous. Target famously aroused alarm when it was revealed that the retailer used data mining to identify shoppers who were likely to be pregnant—in some cases before they’d told anyone.

At the same time, consumers appreciate that data sharing can lead to products and services that make their lives easier and more entertaining, educate them, and save them money. Neither companies nor their customers want to turn back the clock on these technologies—and indeed the development and adoption of products that leverage personal data continue to soar. The consultancy Gartner estimates that nearly 5 billion connected “things” will be in use in 2015—up 30% from 2014—and that the number will quintuple by 2020.

Resolving this tension will require companies and policy makers to move the data privacy discussion beyond advertising use and the simplistic notion that aggressive data collection is bad. We believe the answer is more nuanced guidance—specifically, guidelines that align the interests of companies and their customers, and ensure that both parties benefit from personal data collection.

Consumer Awareness and Expectations

To help companies understand consumers’ attitudes about data, in 2014 we surveyed 900 people in five countries—the United States, the United Kingdom, Germany, China, and India—whose demographic mix represented the general online population. We looked at their awareness of how their data was collected and used, how they valued different types of data, their feelings about privacy, and what they expected in return for their data.

To find out whether consumers grasped what data they shared, we asked, “To the best of your knowledge, what personal information have you put online yourself, either directly or indirectly, by your use of online services?” While awareness varied by country—Indians are the most cognizant of their data trail and Germans the least—overall the survey revealed an astonishingly low recognition of the specific types of information tracked online. On average, only 25% of people knew that their data footprints included information on their location, and just 14% understood that they were sharing their web-surfing history too.

It’s not as if consumers don’t realize that data about them is being captured, however; 97% of the people surveyed expressed concern that businesses and the government might misuse their data. Identity theft was a top concern (cited by 84% of Chinese respondents at one end of the spectrum and 49% of Indians at the other). Privacy issues also ranked high; 80% of Germans and 72% of Americans are reluctant to share information with businesses because they “just want to maintain [their] privacy.” So consumers clearly worry about their personal data—even if they don’t know exactly what they’re revealing.

To see how much consumers valued their data, we did conjoint analysis to determine what amount survey participants would be willing to pay to protect different types of information. (We used purchasing power parity rather than exchange rates to convert all amounts to U.S. dollars.) Though the value assigned varied widely among individuals, we were able to determine, in effect, a median, by country, for each data type.

The responses revealed significant differences from country to country and from one type of data to another. Germans, for instance, place the most value on their personal data, and Chinese and Indians the least, with British and American respondents falling in the middle. Government identification, health, and credit card information tended to be the most highly valued across countries, and location and demographic information among the least.
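The computation the authors describe—converting each respondent’s stated amount to U.S. dollars via purchasing power parity, then taking a median per country for each data type—can be sketched in a few lines. All figures below (the PPP factors and the responses) are invented placeholders for illustration, not the study’s actual data.

```python
from statistics import median

# Hypothetical purchasing-power-parity factors to U.S. dollars.
# These are illustrative values, NOT the survey's real conversion rates.
PPP_TO_USD = {"Germany": 1.25, "India": 0.055}

# Hypothetical willingness-to-pay responses in local currency,
# keyed by country and then by data type.
responses = {
    "Germany": {"credit card": [20, 35, 15, 50], "location": [2, 5, 1, 3]},
    "India":   {"credit card": [300, 150, 400, 250], "location": [20, 10, 40, 30]},
}

for country, by_type in responses.items():
    factor = PPP_TO_USD[country]
    for data_type, amounts in by_type.items():
        # Convert via PPP (not exchange rates), then take the median.
        usd = [a * factor for a in amounts]
        print(f"{country:8s} {data_type:12s} median ${median(usd):.2f}")
```

The median, rather than the mean, keeps a few extreme answers from dominating the per-country figure, which matters when individual valuations vary as widely as the survey found.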

We don’t believe this spectrum represents a “maturity model,” in which attitudes in a country predictably shift in a given direction over time (say, from less privacy conscious to more). Rather, our findings reflect fundamental dissimilarities among cultures. The cultures of India and China, for example, are considered more hierarchical and collectivist, while Germany, the United States, and the United Kingdom are more individualistic, which may account for their citizens’ stronger feelings about personal information.

The Need to Deliver Value

If companies understand how much data is worth to consumers, they can offer commensurate value in return for it. Making the exchange transparent will be increasingly important in building trust.

A lot depends on the type of data and how the firm is going to use it. Our analysis looked at three categories: (1) self-reported data, or information people volunteer about themselves, such as their e-mail addresses, work and educational history, and age and gender; (2) digital exhaust, such as location data and browsing history, which is created when using mobile devices, web services, or other connected technologies; and (3) profiling data, or personal profiles used to make predictions about individuals’ interests and behaviors, which are derived by combining self-reported, digital exhaust, and other data. Our research shows that people value self-reported data the least, digital exhaust more, and profiling data the most.

We also examined three categories of data use: (1) making a product or service better, for example, by allowing a map application to recommend a route based on a user’s location; (2) facilitating targeted marketing or advertising, such as ads based on a user’s browsing history; and (3) generating revenues through resale, by, say, selling credit card purchase data to third parties.

Our surveys reveal that when data is used to improve a product or service, consumers generally feel the enhancement itself is a fair trade for their data. But consumers expect more value in return for data used to target marketing, and the most value for data that will be sold to third parties. In other words, the value consumers place on their data rises as its sensitivity and breadth increase from basic information that is voluntarily shared to detailed information about the consumer that the firm derives through analytics, and as its uses go from principally benefiting the consumer (in the form of product improvements) to principally benefiting the firm (in the form of revenues from selling data).

Let’s look now at how some companies manage this trade-off.

Samsung’s Galaxy V smartphone uses digital exhaust to automatically add the contacts users call most to a favorites list. Most customers value the convenience enough to opt in to the feature—effectively agreeing to swap data for enhanced performance.

Google’s predictive application Google Now harnesses profiling data to create an automated virtual assistant for consumers. By sifting through users’ e-mail, location, calendar, and other data, Google Now can, say, notify users when they need to leave the office to get across town for a meeting and provide a map for their commute. The app depends on more-valuable types of personal data but improves performance enough that many users willingly share it. Our global survey of consumers’ attitudes toward predictive applications finds that about two-thirds of people are willing (and in some cases eager) to share data in exchange for their benefits.

Disney likewise uses profiling data gathered by its MagicBand bracelet to enhance customers’ theme park and hotel experiences and create targeted marketing. By holding the MagicBand up to sensors around Disney facilities, wearers can access parks, check in at reserved attractions, unlock their hotel doors, and charge food and merchandise. Users hand over a lot of data, but they get convenience and a sense of privileged access in return, making the trade-off worthwhile. Consumers know exactly what they’re signing on for, because Disney clearly spells out its data collection policies in its online MagicBand registration process, highlighting links to FAQs and other information about privacy and security.

Firms that sell personal information to third parties, however, have a particularly high bar to clear, because consumers expect the most value for such use of their data. The personal finance website Mint makes this elegant exchange: If a customer uses a credit card abroad and incurs foreign transaction fees, Mint flags the fees and refers the customer to a card that doesn’t charge them. Mint receives a commission for the referral from the new-card issuer, and the customer avoids future fees. Mint and its customers both collect value from the deal.

Trust and Transparency

Firms may earn access to consumers’ data by offering value in return, but trust is an essential facilitator, our research shows. The more trusted a brand is, the more willing consumers are to share their data.

Numerous studies have found that transparency about the use and protection of consumers’ data reinforces trust. To assess this effect ourselves, we surveyed consumers about 46 companies representing seven categories of business around the world. We asked them to rate the firms on the following scale: completely trustworthy (respondents would freely share sensitive personal data with a firm because they trust the firm not to misuse it); trustworthy (they would “not mind” exchanging sensitive data for a desired service); untrustworthy (they would provide sensitive data only if required to do so in exchange for an essential service); and completely untrustworthy (they would never share sensitive data with the firm).

After primary care doctors, new finance firms such as PayPal and China’s Alipay received the highest ratings on this scale, followed by e-commerce companies, consumer electronics makers, banks and insurance companies, and telecommunications carriers. Next came internet leaders (such as Google and Yahoo) and the government. Ranked below these organizations were retailers and entertainment companies, with social networks like Facebook coming in last.

A firm that is considered untrustworthy will find it difficult or impossible to collect certain types of data, regardless of the value offered in exchange. Highly trusted firms, on the other hand, may be able to collect it simply by asking, because customers are satisfied with past benefits received and confident the company will guard their data. In practical terms, this means that if two firms offer the same value in exchange for certain data, the firm with the higher trust will find customers more willing to share. For example, if Amazon and Facebook both wanted to launch a mobile wallet service, Amazon, which received good ratings in our survey, would meet with more customer acceptance than Facebook, which had low ratings. In this equation, trust could be an important competitive differentiator for Amazon.

Approaches That Build Trust

Many have argued that the extensive data collection today’s business models rely on is fraught with security, financial, and brand risks. MIT’s Sandy Pentland and others have proposed principles and practices that would give consumers a clear view of their data and control over its use, reducing firms’ risks in the process. (See “With Big Data Comes Big Responsibility,” HBR, November 2014.)

We agree that these business models are perilous and that risk reduction is essential. And we believe reasoned policies governing data use are important. But firms must also take the lead in educating consumers about their personal data. Any firm that thinks it’s sufficient to simply provide disclosures in an end-user licensing agreement or present the terms and conditions of data use at sign-up is missing the point. Such moves may address regulatory requirements, but they do little if anything to help consumers.

Consider the belated trust-building efforts under way at Facebook. The firm has been accused of riding roughshod over user privacy in the past, launching services that pushed the boundaries on personal data use and retreating only in the face of public backlash or the threat of litigation. Facebook Beacon, which exposed users’ web activities without their permission or knowledge, for example, was pulled only after a barrage of public criticism.

More recently, however, Facebook has increased its focus on safeguarding privacy, educating users, and giving them control. It grasps that trust is no longer just “nice to have.” Commenting in a Wired interview on plans to improve Facebook Login, which allows users to log into third-party apps with their Facebook credentials, CEO Mark Zuckerberg explained that “to get to the next level and become more ubiquitous, [Facebook Login] needs to be trusted even more. We’re a bigger company now and people have more questions. We need to give people more control over their information so that everyone feels comfortable using these products.” In January 2015 Facebook launched Privacy Basics, an easy-to-understand site that explains what others see about a user and how people can customize and manage others’ activities on their pages.

Like Facebook, Apple has had its share of data privacy and security challenges—most recently when celebrity iPhoto accounts were hacked—and is taking those concerns ever more seriously. Particularly as Apple forays into mobile payments and watch-based fitness monitoring, consumer trust in its data handling will be paramount. CEO Tim Cook clearly understands this. Launching a “bid to be conspicuously transparent,” as the Telegraph put it, Apple recently introduced a new section on its website devoted to data security and privacy. At the top is a message from Cook. “At Apple, your trust means everything to us,” he writes. “That’s why we respect your privacy and protect it with strong encryption, plus strict policies that govern how all data is handled….We believe in telling you up front exactly what’s going to happen to your personal information and asking for your permission before you share it with us.”

On the site, Apple describes the steps taken to keep people’s location, communication, browsing, health tracking, and transactions private. Cook explains, “Our business model is very straightforward: We sell great products. We don’t build a profile based on your email content or web browsing habits to sell to advertisers. We don’t ‘monetize’ the information you store on your iPhone or in iCloud. And we don’t read your email or your messages to get information to market to you. Our software and services are designed to make our devices better. Plain and simple.” Its new stance earned Apple the highest possible score—six stars—from the nonprofit digital rights organization Electronic Frontier Foundation, a major improvement over its 2013 score of one star.

Enlightened Data Principles

Facebook and Apple are taking steps in the right direction but are fixing issues that shouldn’t have arisen in the first place. Firms in that situation start the trust-building process with a handicap. Forward-looking companies, in contrast, are incorporating data privacy and security considerations into product development from the start, following three principles. The examples below each highlight one principle, but ideally companies should practice all three.

Teach your customers.

Users can’t trust you if they don’t understand what you’re up to. Consider how one of our clients educates consumers about its use of highly sensitive personal data.

This client, an information exchange for biomedical researchers, compiles genomic data on anonymous participants from the general public. Like all health information, such data is highly sensitive and closely guarded. Building trust with participants at the outset is essential. So the project has made education and informed consent central to their experience. Before receiving a kit for collecting a saliva sample for analysis, volunteers must watch a video about the potential consequences of having their genome sequenced—including the possibility of discrimination in employment and insurance—and after viewing it, must give a preliminary online consent to the process. The kit contains a more detailed hard-copy agreement that, once signed and returned with the sample, allows the exchange to include the participant’s anonymized genomic information in the database. If a participant returns the sample without the signed consent, her data is withheld from the exchange. Participants can change their minds at any time, revoking or granting access to their data.

Give them control.

The principle of building control into data exchange is even more fully developed in another project, the Metadistretti e-monitor, a collaboration between frog, Flextronics, the University Politecnico di Milano, and other partners. Participating cardiac patients wear an e-monitor, which collects ECG data and transmits it via smartphone to medical professionals and other caregivers. The patients see all their own data and control how much data goes to whom, using a browser and an app. They can set up networks of health care providers, of family and friends, or of fellow users and patients, and send each different information. This patient-directed approach is a radical departure from the tradition of paternalistic medicine that carries over to many medical devices even today, with which the patient doesn’t own his data or even have access to it.

Deliver in-kind value.

Businesses needn’t pay users for data (in fact, our research suggests that offers to do so actually reduce consumers’ trust). But as we’ve discussed, firms do have to give users value in return.

The music service Pandora was built on this principle. Pandora transparently gathers self-reported data; customers volunteer their age, gender, and zip code when they sign up, and as they use the service they tag the songs they like or don’t like. Pandora takes that information and develops a profile of each person’s musical tastes so that it can tailor the selection of songs streamed to him or her; the more data users provide, the better the tailoring becomes. In the free version of its service, Pandora uses that data to target advertising. Customers get music they enjoy at no charge and ads that are more relevant to them. Consumers clearly find the trade satisfactory; the free service has 80 million active subscribers.

In designing its service, Pandora understood that customers are most willing to share data when they know what value they’ll receive in return. It’s hard to set up this exchange gracefully, but one effective approach is to start slowly, asking for a few pieces of low-value data that can be used to improve a service. Provided that there’s a clear link between the data collected and the enhancements delivered, customers will become more comfortable sharing additional data as they grow more familiar with the service.

If your company still needs another reason to pursue the data principles we’ve described, consider this: Countries around the world are clamping down on businesses’ freewheeling approach to personal data. (See the sidebar “Data Laws Are Growing Fiercer.”)

There is an opportunity for companies in this defining moment. They can abide by local rules only as required, or they can help lead the change under way. Grudging and minimal compliance may keep a firm out of trouble but will do little to gain consumers’ trust—and may even undermine it. Voluntarily identifying and adopting the most stringent data privacy policies will inoculate a firm against legal challenges and send consumers an important message that helps confer competitive advantage. After all, in an information economy, access to data is critical, and consumer trust is the key that will unlock it.

View at the original source



Although corporate governance is a hot topic in boardrooms today, it is a relatively new field of study. Its roots can be traced back to the seminal work of Adolf Berle and Gardiner Means in the 1930s, but the field as we now know it emerged only in the 1970s. Achieving best practices has been hindered by a patchwork system of regulation, a mix of public and private policy makers, and the lack of an accepted metric for determining what constitutes successful corporate governance. The nature of the debate does not help either: shrill voices, a seemingly unbridgeable divide between shareholder activists and managers, rampant conflicts of interest, and previously staked-out positions that crowd out thoughtful discussion. The result is a system that no one would have designed from scratch, with unintended consequences that occasionally subvert both common sense and public policy.

Consider the following:
  • In 2010 the hedge fund titans Steve Roth and Bill Ackman bought 27% of J.C. Penney before having to disclose their position; Penney’s CEO, Mike Ullman, discovered the raid only when Roth telephoned him about it.

  • The proxy advisory firm Glass Lewis has announced that it will recommend a vote against the chairperson of the nominating and governance committee at any company that imposes procedural limits on litigation against the company, notwithstanding the consensus view among academics and practitioners that shareholder litigation has gotten out of control in the United States.

  • In 2012 JPMorgan Chase had no directors with risk expertise on the board’s risk committee—a deficiency that was corrected only after Bruno Iksil, the “London Whale,” caused $6 billion in trading losses through what JPM’s CEO, Jamie Dimon, called a “Risk 101 mistake.”

  • Allergan, a health care company, recently sought to impose onerous information requirements on efforts to call a special meeting of shareholders, and then promptly waived those requirements just before they would have been invalidated by the Delaware Chancery Court.

  • The corporate governance watchdog Institutional Shareholder Services (ISS) issued a report claiming that shareholders do better, on average, by voting for the insurgent slate in proxy contests; within hours, the law firm Wachtell, Lipton, Rosen & Katz issued a memorandum to clients claiming that the study was flawed.

  • The same ISS issues a “QuickScore” for every major U.S. public company, yet it won’t tell you how it calculates your company’s score or how you can improve it—unless you pay for this “advice.”

We can do better. And with trillions of dollars of wealth governed by these rules of the game, we must do better. In this article I propose Corporate Governance 2.0: not quite a clean-sheet redesign of the current system, but a back-to-basics reconceptualization of what sound corporate governance means. It is based on three core principles—principles that reasonable people on all sides of the debate should be able to agree on once they have untethered from vested interests and staked-out positions. I apply these principles to develop a package solution to some of the current hot-button issues in corporate governance.

The overall approach draws from basic negotiation theory: Rather than fighting issue by issue, as boards and shareholder activist groups currently do, they should take a bundled approach that allows for give-and-take across issues, thereby increasing the likelihood of meaningful progress. The result would be a step change in the quality of corporate governance, rather than incremental meandering toward what may (or may not) be a better corporate governance regime for U.S. public companies.
Principle #1: Boards Should Have the Right to Manage the Company for the Long Term

Perhaps the biggest failure of corporate governance today is its emphasis on short-term performance. Managers are consumed by unrelenting pressure to meet quarterly earnings, knowing that even a penny miss on earnings per share could mean a sharp hit to the stock price. If the downturn is severe enough, activist hedge funds will take an interest, building a position and then clamoring for change. And, of course, there are the lawyers, ever ready to file litigation after a big drop in the company’s stock.

It is ironic that companies today have to go private in order to focus on the long term. Michael Dell, for example, took Dell private in 2013 because, he claimed, the fundamental changes the company needed could not be achieved in the glare of the public markets. A year later he wrote in the Wall Street Journal, “Privatization has unleashed the passion of our team members who have the freedom to focus first on innovating for customers in a way that was not always possible when striving to meet the quarterly demands of Wall Street.” The idea that “innovating for customers” can be done more effectively in a private company is deeply troubling; public companies, after all, are still the largest driver of wealth creation in our economy.
To allow managers at public companies to focus on the long term, Corporate Governance 2.0 includes the following tenets:

End earnings guidance.

With holding periods in today’s stock markets averaging less than six months, short-termism cannot be avoided completely. Nevertheless, dispensing with earnings guidance—the practice of giving analysts a preview of the financial results the company expects—would mitigate the obsession with short-term profitability. Earnings guidance has been in decline over the past 10 years, but many companies are nervous about eliminating it because analysts have come to rely on it. Research shows that the dispersion in analysts’ forecasts increases after companies stop giving guidance—presumably because analysts are no longer being fed the answers. With less consensus among them, the stock market reacts less negatively when earnings fall short of the average view, thereby mitigating the pressure for quarterly results. Instead of providing earnings guidance, companies should give analysts long-term goals, such as market share targets, the number of new products, or the percentage of revenue from new markets.

Bring back a variation on the staggered board.

When a board is staggered, one-third of the directors are elected each year to three-year terms. This structure promotes continuity and stability in the boardroom, but shareholder activists dislike it, because a hostile bidder must win two director elections, which may be as far apart as 14 months, in order to gain the two-thirds board control necessary to facilitate a takeover. In my research with Lucian Bebchuk and John Coates, of Harvard Law School, I find that no hostile bidder has ever accomplished this.

As shareholder activists gained more power in the 2000s, the number of staggered boards in the S&P 500 fell from 60% in 2002 to 18% in 2012. The trend is continuing: In 2014, 31 S&P 500 companies received de-staggering proposals for their annual meetings, and seven of those companies preemptively agreed to de-stagger their boards before the issue came to a vote. The result of this trend is that most corporate directors today are elected every year to one-year terms (creating so-called unitary boards).

It is virtually tautological that directors elected to one-year terms will have a shorter-term perspective than those elected to three-year terms. This is particularly true because ISS and other proxy advisory firms have not been shy about using withhold-vote campaigns to punish directors who make decisions they don’t like. One director attending a program at HBS told me that his board had decided against hiring a talented external candidate for CEO who would have required an above-market compensation package. Even though he was the best candidate, and even though this director thought that he’d be worth the money, the board did not move forward in part because of concern that ISS would recommend against the compensation committee at the next annual meeting. With a staggered board, ISS would have recourse against only one-third of the compensation committee each year, because only one-third of the committee members would be up for re-election.

Of course, shareholder activists make a strong case that a staggered board may discourage an unsolicited offer that a majority of shareholders would like to accept. But this drawback would be avoided if the stagger could be “dismantled,” either by removing all the directors or by adding new ones. A staggered board that could be dismantled in this way would combine the longer-term perspective of three-year terms with the responsiveness to the takeover marketplace that shareholders want. It would give ISS recourse against individual directors, but only every three years rather than every year. A triennial check would allow longer-term investments (such as the superstar CEO mentioned above) to play out, and would be better aligned with long-term wealth creation than an annual check on all directors.

Install exclusive forum provisions.

In our litigation-prone system of corporate governance, plaintiffs’ attorneys (representing shareholders who typically hold only a few shares) look for any hiccup in stock price or earnings to file litigation against the company and its board. Plaintiffs’ attorneys are especially attracted to major transactions, such as mergers and acquisitions, because of corporate law that is friendly to litigation in this arena. Any public-company board announcing a major transaction is highly likely to be sued—sometimes within hours—regardless of how much care and effort its members put into their decision. It is anyone’s guess how many value-creating deals are deterred by this “tax” that the plaintiffs’ bar imposes on the system. In fact, a board that goes forward with a transaction will often deliberately keep something in its pocket—such as a disclosure item or even a bump in the offer price—to be given up as part of a quick settlement so that the plaintiffs’ attorneys can collect their fees and the deal can proceed.

It is not only the frequency of claims that causes concern, but also where they are brought. A U.S. corporation is subject to jurisdiction wherever it has contacts—its headquarters state, its state of incorporation, and states where it does business. Plaintiffs’ attorneys take advantage of this fact to bring suit in multiple states—particularly those that permit a jury trial for corporate law cases. The prospect of inexperienced jurors deciding a complex corporate case leads many companies to settle in a hurry. This kind of blackmail is bad for corporate governance and society overall. Exclusive forum provisions permit litigation against a company only in its state of incorporation. For companies incorporated in Delaware, which are the majority of large U.S. public companies, this means the case would be heard before an experienced and sophisticated judge on the Delaware Chancery Court rather than an inexperienced jury.

Yet despite these clear benefits, shareholder activists have expressed knee-jerk opposition to exclusive forum provisions. Glass Lewis has threatened a withhold vote against the chair of the nominating and governance committee of any board that installs one without shareholder approval. The argument is that the prospect of multistate litigation will make directors pay more attention. But most directors do not need the sharp prod of a jury trial for them to want to do a good job. Exclusive forum provisions give plaintiffs’ attorneys a fair fight in a state where the rules of the game are well established. In exchange for such a provision, boards might consider renouncing more-draconian measures, such as a fee-shifting bylaw that forces plaintiffs to pay the company’s expenses if their litigation is unsuccessful.

Corporate Governance 2.0 asks the functional question: What goals are the activists, governance rating agencies, boards, and everyday shareholders all trying to achieve? The answer is clear: insulation from frivolous litigation, but meaningful exposure to liability in the event of a dereliction of duty in the boardroom. In the old days, activists and their allies agreed on this shared goal. In the late 1980s, when most U.S. states enabled boards to waive liability for certain breaches of fiduciary duty, ISS encouraged directors to take up the invitation, on the understanding that they should be focused on shaping strategy and monitoring performance rather than worrying about shareholder litigation. Corporate Governance 2.0 would return to this old wisdom through exclusive forum provisions. Directors would be accountable for their actions, but only as judged by a corporate law expert. The result would be greater willingness among directors to make longer-term decisions, without fear of a jury’s 20/20 hindsight.

Principle #2: Boards Should Install Mechanisms to Ensure the Best Possible People in the Boardroom

In exchange for the right to run the company for the long term, boards have an obligation to ensure the proper mix of skills and perspectives in the boardroom. Shareholder activists have proposed several measures in recent years to push toward this goal—principally age limits and term limits, but also gender and other diversity requirements. According to the most recent NACD Public Company Governance Survey, approximately 50% of U.S. public companies have age limits, and approximately 8% have term limits. ISS is urging more companies to adopt such limits, and if history is any guide, boards will give the idea serious consideration.

Activists and corporate governance rating agencies are motivated by a sense that boards don’t take a hard look at their composition and whether the skill set on the board reflects the needs of the company. Too often directors are allowed to continue because it’s difficult to ask them to step down.

But age and term limits are a blunt instrument for achieving optimal board composition. Anyone who has served on a corporate board knows that an individual director’s contribution has little to do with either age or tenure. If anything, the correlation is likely to be positive. As for age limits, directors who have retired from full-time employment can devote themselves to their work on the board. And as for term limits, directors will often need a decade to shape strategy and evaluate the success of its execution; moreover, directors who have been in office longer than the current CEO are more likely to be able to challenge him or her when necessary. Yet these are precisely the directors who would be forced out by age limits or term limits.

Corporate Governance 2.0 would approach the issue of board composition in a tailored manner, focusing more on making sure that boards really engage in meaningful selection and evaluation processes rather than ticking boxes. In particular it would:

Require meaningful director evaluations.

Many boards today have internal evaluations conducted by the chairman or lead director. Although these evaluations are well-intentioned, directors may be unwilling to disclose perceived weaknesses to the person most responsible for the effective functioning of the board. A Corporate Governance 2.0 approach would engage an independent third party to design a process and then conduct the reviews. The process would include grading directors on various company-specific attributes so that they and their contributions were evaluated in a relevant way.

In Corporate Governance 2.0, director evaluations wouldn’t just get filed away. They would be shared with the individual director, with comments reported verbatim when necessary to make clear any opportunities for improvement. They would also go to the chairman or lead director, to provide objective evidence with which to have difficult conversations with underperforming directors.

Meaningful board evaluations would also have more-subtle effects on board composition and boardroom dynamics. Foreseeing a rigorous review process, underperforming directors would voluntarily not stand for reelection. Even more important, directors would work hard to make sure they weren’t perceived as underperforming in the first place.

Consider shareholder proxy access.

Under such a rule, shareholders with a significant ownership stake in the company would have the right to put director candidates on the company’s ballot. For the first time in corporate governance, a company proxy statement could have, say, 10 candidates for eight seats on the board. Hewlett-Packard and Western Union, among other companies, have implemented shareholder proxy access over the past two years.

The Securities and Exchange Commission tried to impose proxy access on all companies in 2010, but the D.C. Circuit Court of Appeals invalidated the move. The SEC has since allowed companies to implement it on a voluntary basis. My research with Bo Becker, then at HBS, and Daniel Bergstresser, of Brandeis, shows that a comprehensive proxy access rule would have added value, on average, for U.S. public companies. The company-by-company approach is not as good as a comprehensive rule, because qualified directors may gravitate to boards that don’t offer proxy access; nevertheless, it should be considered a backstop to rigorous director evaluations.

Implementing a proxy access rule would help ensure the right mix of skills in the boardroom. For example, had JPMorgan adopted a proxy access rule, it seems likely that its risk committee would not have lacked directors with risk expertise at the time of the London Whale incident. More than a year before that event, CtW Investment Group, an adviser to union pension funds, highlighted the point: “The current three-person risk policy committee, without a single expert in banking or financial regulation, is simply not up to the task of overseeing risk management at one of the world’s largest and most complex financial institutions.” With a proxy access regime, either the board would have put someone with risk expertise on the risk committee, or a significant shareholder could have nominated such a person, and the shareholders collectively would have decided whether the gap was worth filling.

This is not to say that if JPM’s risk committee had included directors with risk expertise, the London Whale incident would have been prevented. As is well known, primary frontline responsibility for managing risk exposure at JPM belongs to the operating committee on risk management, whose members are high-ranking JPM employees. But the odds of identifying the problem would certainly have been higher in a proxy access regime.

Only in the aftermath of the debacle did the board add a director with risk expertise to the risk committee. Of course, it should not take a multibillion-dollar trading loss to put people with the right skill set on the JPM risk committee. A shareholder proxy access regime should be considered as a supplement to meaningful board evaluations, to ensure the right composition of directors in the boardroom.

Principle #3: Boards Should Give Shareholders an Orderly Voice

Today, when an activist investor threatens a proxy contest or a strategic buyer makes a hostile tender offer, boards tend to see their role as “defender of the corporate bastion,” which often leads to a no-holds-barred, scorched-earth, throw-all-the-furniture-against-the-door campaign against the raiders. As George “Skip” Battle, then the lead director at PeopleSoft, put it to me in the context of Oracle’s 2003 hostile takeover bid for his company, “This is the closest thing you get in American business to war.”

Consider the more recent case of CommonWealth REIT, one of the largest real estate investment trusts in the United States. As of December 2012, CommonWealth’s properties were worth $7.8 billion against $4.3 billion in debt, but its market capitalization stood at only $1.3 billion. Corvex Management, a hedge fund run by Keith Meister (a Carl Icahn protégé), and the Related Companies, a privately held real estate firm specializing in luxury buildings, saw an investment opportunity in CommonWealth’s poor performance. In February 2013 they announced a 9.8% stake in CommonWealth and proposed acquiring the rest of the company for $25 a share. This offer represented a 58% premium over CommonWealth’s unaffected market price.
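The size of the opportunity can be checked with back-of-the-envelope arithmetic using only the figures quoted above (a rough sketch, not an analysis of CommonWealth's filings; the unaffected share price is inferred from the stated 58% premium rather than taken from market data):

```python
# Back-of-the-envelope check of the CommonWealth REIT figures cited above.
# All inputs come from the text; nothing here is from company filings.

properties = 7.8e9   # value of properties, December 2012
debt = 4.3e9         # outstanding debt
market_cap = 1.3e9   # market capitalization at the time

net_asset_value = properties - debt          # equity value implied by the assets
discount = 1 - market_cap / net_asset_value  # how far the market price lagged NAV

offer_price = 25.0                           # Corvex-Related offer per share
premium = 0.58                               # 58% premium stated in the article
unaffected_price = offer_price / (1 + premium)

print(f"Net asset value: ${net_asset_value / 1e9:.1f}B")     # $3.5B
print(f"Market discount to NAV: {discount:.0%}")             # ~63%
print(f"Implied unaffected price: ${unaffected_price:.2f}")  # ~$15.82
```

The implied unaffected price of roughly $16 matches the pre-offer trading level mentioned later in the article, and the roughly 63% discount to net asset value is the gap Corvex and Related set out to close.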

The Corvex-Related strategy for unlocking value at CommonWealth was relatively simple. CommonWealth had no employees; it paid an external management company to manage the real estate assets. This company, Reit Management & Research, was run by Barry and Adam Portnoy, a father-and-son team who also constituted two-fifths of the CommonWealth board. Corvex and Related believed that internalizing management would eliminate conflicts of interest within the board, align shareholder interests, and unlock substantial value. Their investment thesis boiled down to three words: Fire the Portnoys.

Would the plan unlock value at CommonWealth? The board was determined not to find out. Despite having given shareholders the right to act by written consent, it imposed onerous information requirements that made it impossible, as a practical matter, for them to do so. The board also lobbied the Maryland legislature (unsuccessfully) to amend its takeover laws to protect the company. Perhaps most egregious, the board added a provision to its bylaws declaring that any dispute regarding the company would be heard by an arbitration panel, not a Maryland court. After 18 months of arbitration hearings and sharply worded press releases, Corvex and Related finally replaced the CommonWealth board with their own nominees in June 2014. Today CommonWealth (renamed Equity Commonwealth) trades at about $25 a share, compared with about $16 before the offer.

CommonWealth’s board took the typical scorched-earth approach, but it shouldn’t be like this. The principle of “orderly shareholder voice” involves a different conceptualization of the board’s role—to guarantee a reasonable process whereby shareholders get to decide, rather than to defend the corporate bastion at all costs. Even when a board genuinely believes that the competing vision is mistaken (as boards do in the vast majority of cases), its fiduciary duty—contrary to popular belief—does not require preventing shareholders from deciding. In a Corporate Governance 2.0 world, directors would campaign hard for their point of view but leave the decision to the shareholders.

“Orderly” is a critical qualifier, because some shareholders are undeniably disorderly. With the steep decline of poison pills, which block unwanted shareholders from acquiring more than 10% to 15% of a company’s shares, hedge funds and other activist investors can buy substantial stakes in a target company before they have to disclose their positions. Recall the case of J.C. Penney: Because it did not have a poison pill in 2010, Roth and Ackman could secretly buy a 27% stake. The company put them on the board, and Mike Ullman was replaced as CEO by the Apple executive Ron Johnson, who planned to give Penney a younger, hipper look. The strategy proved disastrous, and the stock price dropped from about $30 to as low as $7.50 over the next two years. Johnson was forced out in 2013—and replaced by none other than Mike Ullman.

In theory, companies are protected against such lightning-strike raids by the SEC rule that shareholders must disclose their ownership position after crossing the 5% threshold. But they have 10 days in which to do so, and nothing stops them from buying more shares in the meantime. This is exactly what happened in the Penney case. By the time Roth and Ackman had to make the disclosure, they had bought more than a quarter of the company’s shares.

The relevant rule dates back to the 1960s, when 10 days was a reasonable amount of time. Today, of course, 10 days in the securities markets is an eternity, and no one designing a disclosure regime from scratch would dream of giving shareholders such a long window. (European countries have substantially shorter windows.) Nonetheless, shareholder groups have resisted change, on the rather questionable grounds that the Roths and Ackmans of the world need sufficient incentive to keep looking for underperforming targets.

Under a Corporate Governance 2.0 system, boards would get early warning of lightning-strike attacks. One way to do this would be with what I call an “advance notice” poison pill—a pill with a 5% threshold but also an exemption: Any shareholders that disclosed their positions within two days of crossing the threshold would avoid triggering the pill and could continue buying shares without being diluted. John Coffee, of Columbia Law School, and Darius Palia, of Rutgers Business School, have proposed a similar version of self-help, which they call a “window-closing” poison pill. Either kind of pill would give directors fair warning that their company was “in play” before the bidder could build up an unassailable position.
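To make the mechanism concrete, the decision rule an advance-notice pill embodies can be sketched as follows. This is a hypothetical illustration, not drafting language; the function name and structure are invented, while the 5% threshold and two-day disclosure window come from the description above:

```python
# Hypothetical sketch of an "advance notice" poison pill, as described above.
# A shareholder crossing the 5% threshold avoids triggering the pill (and the
# resulting dilution) only by disclosing within two days of the crossing.

THRESHOLD = 0.05   # ownership level that puts the pill in play
NOTICE_DAYS = 2    # disclosure window after crossing the threshold

def pill_triggered(stake: float, days_since_crossing: int, disclosed: bool) -> bool:
    """Return True if continued buying would trigger dilution under the pill."""
    if stake <= THRESHOLD:
        return False   # below the threshold: the pill is irrelevant
    if disclosed and days_since_crossing <= NOTICE_DAYS:
        return False   # timely disclosure: exempt, free to keep buying
    return True        # silent accumulation past 5%: the pill bites

# A lightning-strike raid in the style of the J.C. Penney case — a 27% stake
# built quietly inside the old 10-day SEC window — would trip the pill:
print(pill_triggered(stake=0.27, days_since_crossing=9, disclosed=False))  # True
# A bidder that discloses promptly after crossing 5% stays exempt:
print(pill_triggered(stake=0.06, days_since_crossing=1, disclosed=True))   # False
```

The design point is that the pill punishes only silence, not the stake itself: a bidder willing to show its hand early can keep accumulating, which preserves the incentive to hunt for underperforming targets while giving the board its early warning.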

Today a change in corporate governance usually occurs when ISS threatens a withhold vote against the board unless certain reforms are implemented. Corporate Governance 2.0 takes a proactive approach that achieves the same (desirable) goals in a holistic and better way. Managers actively engage with shareholders from a functional perspective (“What are we all trying to achieve?”) rather than an issue-by-issue reactionary perspective (“Should we surrender, or do we fight?”).

In this article I have applied the three fundamental principles of Corporate Governance 2.0 to provide a package solution to certain hot-button issues in corporate governance today. A board that wants to adopt this solution could do so unilaterally in many jurisdictions (including, for the most part, Delaware), though in general it would be better advised to adopt Corporate Governance 2.0 through a shareholder vote.

Other hot-button issues will emerge in the future. The most recent version of ISS’s QuickScore, for example, includes 92 factors, any of which could become the next pressure point against corporate boards. Rather than evaluating each of these innovations incrementally, boards should hold up future proposals to the same three principles of Corporate Governance 2.0.

This shift is vital in the United States, where the power of shareholders has increased over the past 10 years and the natural instinct of boards is to simply cave to activist demands. A Corporate Governance 2.0 perspective is critical outside the U.S. as well, particularly in emerging economies where companies are trying to achieve the right balance of authority between boards and shareholders in order to gain access to global capital markets. Over the long term, a Corporate Governance 2.0 perspective would transform corporate governance from a never-ending conflict between boards and shareholders to a source of competitive advantage in the marketplace.
