Shyam's Slide Share Presentations

VIRTUAL LIBRARY "KNOWLEDGE - KORRIDOR"

This article/post is from a third-party website. The views expressed are those of the author. We at Capacity Building & Development may not necessarily subscribe to it completely. The relevance & applicability of the content is limited to certain geographic zones; it is not universal.

TO VIEW MORE CONTENT ON THIS SUBJECT AND OTHER TOPICS, Please visit KNOWLEDGE-KORRIDOR our Virtual Library

Wednesday, March 29, 2017

New Research uncovers DNA changes that affect Human Height 03-30


International study of more than 700,000 people probes deeper into height than ever before.

Image credit : Shyam's Imagination Library



In the largest, deepest search to date, the international Genetic Investigation of Anthropometric Traits (GIANT) Consortium has uncovered 83 new DNA changes that affect human height. These changes are uncommon or rare, but they have potent effects, with some of them adjusting height by more than 2 cm (almost 8/10 of an inch). The 700,000-plus-person study also found several genes pointing to previously unknown biological pathways involved in skeletal growth. Findings were published online by Nature on February 1.

“While our last study identified common height-related changes in the genome, this time we went for low-frequency and rare changes that directly alter proteins and tend to have stronger effects,” says Joel Hirschhorn, MD, PhD, of Boston Children’s Hospital and the Broad Institute of MIT and Harvard, chair of the GIANT Consortium and co-senior investigator on the study together with researchers at the Montreal Heart Institute, Queen Mary University, the University of Exeter, UK, and nearly 280 other research groups. “To identify these protein-altering changes, some of them very uncommon, required tremendous statistical power, which we achieved thanks to a strong international collaboration.”

Applying a new technology

In 2014, GIANT, studying roughly 250,000 people, brought the total number of known height-associated genetic variants to nearly 700, in more than 400 spots in the genome. This effort relied on a powerful method called the genome-wide association study (GWAS), which rapidly scans the genomes of large populations for markers that track with a particular trait. GWAS is good at finding common genetic variants, but nearly all of the variants identified this way alter height by less than 1 mm (less than 1/20 of an inch). It is less effective at capturing uncommon genetic variants, which can have larger effects. Finally, the common variants that track with traits tend to lie mostly outside the protein-coding parts of genes, making it harder to figure out which genes they affect.
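To make the method concrete, here is a minimal sketch of the kind of per-variant test a GWAS runs at millions of positions, using simulated genotypes and a simple additive least-squares model. Real studies adjust for ancestry and other covariates; all numbers below are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of a single-variant association test of the kind a GWAS
# performs. Illustrative only: the data are simulated and the simple model
# (height ~ intercept + genotype dosage) stands in for the covariate-adjusted
# models real studies use.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                  # individuals
freq = 0.30                                 # common-variant allele frequency (assumed)
genotype = rng.binomial(2, freq, size=n)    # 0, 1, or 2 copies of the allele
true_effect_cm = 0.1                        # tiny effect, typical of common variants
height = 170 + true_effect_cm * genotype + rng.normal(0, 7, size=n)

# Ordinary least squares: the slope is the estimated cm of height per allele copy.
X = np.column_stack([np.ones(n), genotype])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
print(f"estimated effect: {beta[1]:.3f} cm per allele")
```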

So in the new study, the GIANT investigators used a different technology: the ExomeChip, which tested for a catalogue of nearly 200,000 known variants that are less common and that alter the function of protein-coding genes. These variants point more directly to genes and can be used as a shortcut to figuring out which genes are important for a specific disease or trait. Most had not been assessed in prior genetic studies of height.

Using ExomeChip data from a total of 711,428 adults (an initial 460,000 people and about 250,000 more to validate the findings), the investigators identified 83 uncommon variants associated with adult height: 51 “low-frequency” variants (found in less than 5 percent of people) and 32 rare variants (found in less than 0.5 percent).
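The frequency thresholds above are easy to express directly; the sketch below applies them to a few made-up minor allele frequencies (the study itself reported 51 low-frequency and 32 rare height-associated variants).

```python
# Classify variants by minor allele frequency (MAF) using the thresholds in
# the article: "low-frequency" = under 5 percent, "rare" = under 0.5 percent.
# The example frequencies are hypothetical.
def classify(maf: float) -> str:
    if maf < 0.005:
        return "rare"
    if maf < 0.05:
        return "low-frequency"
    return "common"

for maf in (0.001, 0.004, 0.02, 0.12):
    print(f"MAF {maf:.3f}: {classify(maf)}")
```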

With these new findings, 27.4 percent of the heritability of height is now accounted for (up from 20 percent in earlier studies), with most heritability still explained by common variants.

Twenty-four of the newly discovered variants affect height by more than 1 cm (4/10 of an inch), larger effects than typically seen with common variants. “This finding matches a pattern seen in other genetic studies, where the more potent variants are rarer in the population,” says Hirschhorn, who is also an endocrinologist at Boston Children’s and a professor of pediatrics and genetics at Harvard Medical School.

Rare but potent clues to new biology

These rare variants not only had large effects but also pointed to dozens of genes as important for skeletal growth. Some of these genes were already known, but many (including SUSD5, GLT8D2, LOXL4, FIBIN, and SFRP4) have not previously been connected with skeletal growth.

One gene of particular interest, STC2, had two different DNA changes that both had larger effects on height. Though the variants are quite rare (frequency of 0.1 percent), people with either of these changes were 1-2 cm taller than non-carriers. Further investigations by co-authors Troels R. Kjaer and Claus Oxvig of Aarhus University (Denmark) suggested that the variants influence height by affecting the availability of growth factors in the blood. “The STC2 protein serves as a brake on human height, validating it as a potential drug target for short stature,” says Hirschhorn.

Height: A window into complex genetics

Why study height? Height is the “poster child” of complex genetic traits, meaning it is influenced by multiple genetic variants working together. It’s easy to measure, so makes a relatively simple model for understanding traits produced by not one gene, but many.

“Mastering the complex genetics of height may give us a blueprint for studying multifactorial disorders that have eluded our complete understanding, such as diabetes and heart disease,” says Hirschhorn.  “This study has shown that rare protein-altering variants can be helpful at finding some of the important genes, but also that even larger sample sizes will be needed to completely understand the genetic and biologic basis of human growth and other multifactorial diseases.”

Indeed, the GIANT consortium is already embarking on a GWAS of height with more than 2 million people, and other studies involving sequencing data are underway. “We predict that these more comprehensive studies will continue to enhance our understanding of human growth and how best to attain the biological insights that will inform treatments for common diseases,” says Hirschhorn.

View at the original source

ADATS Could Assist X-planes With Large, Super-Fast Data Transmission 03-29

A network and communication architecture that can more efficiently move data from research aircraft, while using half the bandwidth of traditional methods, could eventually also enable data collection of precise measurements needed for testing the next generation of X-planes.


Called the Advanced Data Acquisition and Telemetry System, or ADATS, the new system was recently integrated into a NASA King Air by researchers at NASA Armstrong Flight Research Center in California for a series of three flights following extensive ground testing. The system can move 40 megabits per second, the equivalent of streaming eight high-definition movies from an online service each second, said Otto Schnarr, principal investigator.
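As a quick back-of-the-envelope check of that comparison, assuming roughly 5 megabits per second for a single high-definition stream (an assumed figure, not one given in the article):

```python
# Back-of-the-envelope check of the "eight HD movies per second" comparison.
# The 5 Mbps-per-HD-stream figure is an assumption (a common streaming rate),
# not something stated in the article.
adats_rate_mbps = 40
hd_stream_mbps = 5                          # assumed bitrate of one HD stream
print(adats_rate_mbps / hd_stream_mbps)     # -> 8.0 concurrent HD streams
```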


Researchers aren’t looking to make binge watching easier – they are interested in the system’s speed in moving large amounts of data up to four times faster than previous network-based telemetry efforts and up to 10 times faster than current systems Armstrong researchers are using, Schnarr explained.

All of this capability is gained without new architecture, using an advanced modulation technique to save spectral bandwidth, time, and research dollars, said electrical engineer Matthew Waldersen. In addition, the system allows people to participate in the flight test from wherever a secure network is available. As many as 3.3 million sensor measurements per second can be acquired, or a focused data set can be targeted to free up bandwidth for other tasks, like streaming high-definition video simultaneously, he added.


ADATS aims to advance flight test data acquisition and telemetry systems using an Ethernet-via-telemetry subsystem that wirelessly transmits test data and an advanced data acquisition system that allows remote researchers to command experiments and receive collected data during flight.

“The main components are a ground station, a transceiver on the airplane and the instrumentation systems that tie everything together,” said Tom Horn, ADATS project manager. “The tests explored what this system does and how it behaves. We wanted to make sure we understood the nuances and determine if additional testing is required for researchers to feel comfortable using it.”

The flights capped a three-year effort to fill in existing gaps in the technology, such as range, instrumentation and system design challenges. ADATS team members have made well-received presentations at the center that led to additional brainstorming sessions on potential uses for the technology.


“People were not having trouble coming up with how they could put it to use,” Waldersen explained. “Having more data allows researchers to do what they do better. Everyone at the sessions agreed the technology is worth pursuing. You know a project is a success when you take questions from engineers like, ‘have you considered using it for this case, or could we do this with it?’”

Building up the capability is the next step.


“In any electronics project there is a hardware and a software component,” Waldersen said. “We have completed a lot of work with the hardware component to see what it can do and now it’s a matter of the software aspect and how it integrates with ground operations, which projects will put it to use immediately and what other systems can we build around it to fully utilize the capability.”


Maturing the technology could be useful for upcoming X-plane testing. For example, airflow measurements along the entire face of a fan engine could be collected efficiently, Horn explained. Another advantage is that, unlike traditional data collection, which can experience loss of data, or “dropouts,” ADATS can eliminate that loss with its data collection method. However, delays can still occur, and researchers are looking into the ramifications of that for safety.

In addition, the system could have implications for uninhabited air vehicles and systems for uplinks and bandwidth management. For example, aircraft like the Ikhana or Global Hawk could gain efficiencies. ADATS also could work in combination with an Ethernet-based fiber optic sensing system to streamline data collection.


The ADATS effort can be traced back to the Hi-Rate Wireless Airborne Network Demonstration (HIWAND) in 2005, which also flew on the King Air. It demonstrated in flight a network-enhanced telemetry system that enabled connectivity between air and ground, including airborne Internet access. The capability was focused on allowing scientists and others to downlink scientific data and uplink critical information to airborne sensors more efficiently.


NASA’s Flight Demonstrations and Capabilities project, which is part of the Integrated Aviation Systems program, is funding the current effort.

View at the original source

Tuesday, March 28, 2017

Researchers find new gene interaction associated with increased MS risk 03-28






A person carrying variants of two particular genes could be almost three times more likely to develop multiple sclerosis, according to the latest findings from scientists at The University of Texas Medical Branch at Galveston and Duke University Medical Center.

One of these variants is in IL7R, a gene previously associated with MS, and the other in DDX39B, a gene not previously connected to the disease.

The discovery could open the way to the development of more accurate tests to identify those at greatest risk of MS, and possibly other autoimmune disorders, the researchers said.
The findings are published in the latest issue of Cell.

A disease in which the body’s own immune system attacks nerve cells in the spinal cord and brain, MS is a major cause of neurological disease in younger adults, from 20 to 50 years of age, and disproportionately affects women. While MS is treatable, there is no cure; the disease can cause problems with vision, muscle control, balance, and basic body functions, among other symptoms, and can lead to disability.

Available treatments have adverse side effects as they focus on slowing the progression of the disease through suppression of the immune system.

Thanks to the collaboration between scientists at UTMB, Duke, the University of California, Berkeley, and Case Western Reserve University, researchers found that when two particular DNA variants in the DDX39B and IL7R genes are present in a person’s genetic code, their interaction can lead to an overproduction of a protein, sIL7R. That protein’s interactions with the body’s immune system play an important, but not completely understood, role in MS.

“Our study identifies an interaction with a known MS risk gene to unlock a new MS candidate gene, and in doing so, opens up a novel mechanism that is associated with the risk of multiple sclerosis and other autoimmune diseases,” said Simon Gregory, director of Genomics and Epigenetics at the Duke Molecular Physiology Institute at Duke University Medical Center and co-lead author of the paper in Cell.

This new information has potentially important applications.

“We can use this information at hand to craft tests that could allow earlier and more accurate diagnoses of multiple sclerosis, and uncover new avenues to expand the therapeutic toolkit to fight MS, and perhaps other autoimmune disorders,” said Gaddiel Galarza-Muñoz, first author on the study and postdoctoral fellow at UTMB.

It can sometimes take years before an MS patient is properly diagnosed, allowing the disease to progress and cause further damage to the nervous system before treatment begins.
With more accurate measures of risk, health care providers would be able to screen individuals with family histories of MS or with other suspicious symptoms. It could lead those with certain genotypes to be more vigilant.

“One could envision how this type of knowledge will someday make it possible to diagnose multiple sclerosis sooner and, now that we have promising therapies, a doctor could start the appropriate treatment more quickly. It is not out of the realm of possibility to imagine a path for screening for other autoimmune diseases such as Type 1 Diabetes,” said Dr. Mariano Garcia-Blanco, professor and chair of the Department of Biochemistry and Molecular Biology at UTMB, and co-lead author of the paper.
For Garcia-Blanco the fight against MS is personal. He was already working on research related to MS when in 2012 he found out his daughter, then in her late 20s, had been diagnosed with the disease. Garcia-Blanco said this refocused his efforts on his MS related work.

“I’m much more aware now of how the work we do in the lab could someday lead to something that can be used to help those who have to live with MS,” Garcia-Blanco said.

Other study authors include Farren B.S. Briggs, Irina Evsyukova, Geraldine Schott-Lerner, Edward M. Kennedy, Tinashe Nyanhete, Liuyang Wang, Laura Bergamaschi, Steven G. Widen, Georgia D. Tomaras, Dennis C. Ko, Shelton S. Bradrick and Lisa F. Barcellos.

The research was supported by the National Institutes of Health, National MS Society Pilot Award, Duke University Whitehead Scholarship, Ruth and A. Morris Williams Faculty Research Prize funds from Duke University School of Medicine, start-up funds from UTMB and funds from Mr. Herman Stone and family for MS research.

View at the original source

Saturday, March 25, 2017

Harnessing the Secret Structure of Innovation 03-26


Sustained innovation success is not the result of artful intuition or heroic vision but of a deliberate search using key information signals.

In an era of low growth, companies need innovation more than ever. Leaders can draw on a large body of theory and precedent in pursuit of innovation, ranging from advice on choosing the right spaces to optimizing the product development process to establishing a culture of creativity. In practice, though, innovation remains more of an art than a science.

But it doesn’t need to be.

In our research with the London Institute, we made an exciting discovery. Innovation, much like marketing and human resources, can be made less reliant on artful intuition by using information in new ways. But this requires a change in perspective: We need to view innovation not as the product of luck or extraordinary vision but as the result of a deliberate search process. This process exploits the underlying structure of successful innovation to identify key information signals, which in turn can be harnessed to construct an advantaged innovation strategy.

Innovation in Legoland

Let’s illustrate the idea using Lego bricks. Think back to your childhood days. You’re in a room with two of your friends, playing with a big box of Legos (say, the beloved “fire station” set). All three of you have the same goal in mind: building as many new toys as possible. As you play, each of you searches through the box and chooses the bricks you believe will help you reach this goal.

Let’s now suppose each of you approaches this differently. Your friend Joey uses what we call an impatient strategy, carefully picking Lego men and their firefighting hats to immediately produce viable toys. You follow your intuition, picking random bricks that look intriguing. Meanwhile, your friend Jill chooses pieces such as axles, wheels, and small base plates that she noticed are common in more complex toys, even though she is not able to use them immediately to produce simpler toys. We call Jill’s approach a patient strategy.

At the end of the afternoon, who will have innovated the most? That is, who will have built the most new toys? Our simulations show that this depends on several factors. In the beginning, Joey will lead the way, surging ahead with his impatient strategy. But as the game progresses, fate will appear to shift. Jill’s early moves will begin to seem serendipitous when she’s able to assemble complex fire trucks from her choice of initially useless axles and wheels. It will appear that she was lucky, but we will soon see that she effectively harnessed serendipity.

What about you? Picking components without using any information, you will have built the fewest toys. Your friends had an information-enabled strategy, while you relied only on intuition and chance. 

What can we learn from this? If innovation is a search process, then your component choices today matter greatly in terms of the options they will open up to you tomorrow. Do you pick components that quickly form simple products and give you a return now, or do you choose the components that give you a higher future option value?

We analyzed the mathematics of innovation as a search process for viable product designs (toys) across a universe of components (bricks). We then tested our insights using historical data on innovations in four real environments and made a surprising discovery. You can have an advantaged innovation strategy by using information about the unfolding process of innovation. But there isn’t one superior strategy. The optimal strategy is both time-dependent (as in the Lego game) and space/sector-dependent — Lego is just one of many innovation spaces, each of which has its own characteristics. In innovation, as in business strategy, winning strategies depend on context.

The exhibit below, "Information-Enabled Innovation Strategies Outperform," demonstrates three crucial insights. First, information-enabled strategies outperform strategies that do not use the information generated by the search process. Second, in an earlier phase of the game, an impatient strategy outperforms; in later stages, a patient strategy does. Critically, third, it is possible to have an adaptive strategy, one that changes as the game unfolds and that outperforms in all phases of the game. Developing an adaptive strategy requires you to know when to switch from Joey’s approach to Jill’s. The switching point is knowable and occurs when the complexity of products (the number of unique Lego bricks in each toy) starts to level off. 
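For readers who want to experiment with the idea, the sketch below is a toy simulation of innovation as a search over components, loosely in the spirit of the Lego example. The recipes, strategy rules, and parameters are invented for illustration; this is not the authors' model.

```python
# Toy sketch: innovation as a search over components. Each strategy picks one
# component per turn; a product is "built" once all of its components are owned.
import random

random.seed(1)
COMPONENTS = list(range(40))
# Product "recipes": simple toys need 2-3 component types, complex toys need 5-7.
RECIPES = (
    [set(random.sample(COMPONENTS, random.choice([2, 3]))) for _ in range(20)]
    + [set(random.sample(COMPONENTS, random.choice([5, 6, 7]))) for _ in range(20)]
)

def play(strategy, turns=15):
    """Pick one component per turn; return how many products can be built."""
    owned = set()
    for _ in range(turns):
        missing = [r - owned for r in RECIPES if r - owned]   # parts still needed per unfinished recipe
        if not missing:
            break
        candidates = set().union(*missing)
        if strategy == "impatient":
            # Joey: finish the nearest-to-complete product first.
            choice = next(iter(min(missing, key=len)))
        elif strategy == "patient":
            # Jill: grab the component that appears in the most recipes overall.
            choice = max(candidates, key=lambda c: sum(c in r for r in RECIPES))
        else:
            # Intuition: a random pick among still-needed components.
            choice = random.choice(sorted(candidates))
        owned.add(choice)
    return sum(1 for r in RECIPES if r <= owned)

for s in ("impatient", "patient", "random"):
    print(s, play(s))
```

How the three strategies compare depends on the number of turns and the mix of simple and complex recipes, which mirrors the time-dependence point above.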



Applying the Insight

How can companies harness these insights in practice? To answer this question, we ran simulations based on detailed historical data for a range of datasets, from culinary arts and music to language and software technologies such as those used by Uber, Instagram, and Dropbox. From our findings, we distilled a five-step process for constructing an information-advantaged innovation strategy.

Step 1. Choose your space: Where to play?

The features of your innovation space matter, so it’s important to make a deliberate choice about where you want to compete. Interestingly, it’s not enough to analyze markets or anticipate customers’ needs. To innovate successfully, you also need to understand the structure of your innovation space.
Start by taking a snapshot of key competing products and their components. How complex are the products, and do you have access to the components? As a rule of thumb, choose spaces where product complexity is still low and where you have access to the most prevalent components.


By focusing on immature spaces, you can get ahead of competitors by first employing a rapid-yield, impatient strategy and then later switching to a more patient strategy with delayed rewards. Uber International CV provides a good example. The company entered the embryonic peer-to-peer ride-sharing space three years after it was founded in 2009 as a limousine commissioning company. Uber chose its space wisely: The ride-sharing industry was immature, product complexity was low, and the necessary components were easily accessible. The impatient strategy was to get to market quickly with a ride-sharing app. As we are learning, there is also now what appears to be a patient strategy at work at Uber — self-driving technology with a much higher level of complexity and a much longer period of gestation.

Reproduced from MITSLOAN Management Review

Pennsylvania restaurant offers discount for families who have phone-free meals 03-25






What would it take to get you to put down your phone during a meal?

Sarah’s Corner Café in Stroudsburg, Pennsylvania, is offering a deal for people who want to enjoy a meal, and each other, unplugged.

They’ve set up so-called “family recharging stations” at tables where you drop your phone into a basket.

“They let the server know and the server will bring over a basket with old fashioned Hangman and Tic Tac Toe and pencils because those games are interactive instead of coloring, which is solitary,” owner Barry Lynch told ABC News of how the restaurant's phone-free meals discount works.
If families make it through the meal without looking at their phones, they’re rewarded 10 percent off their bill.

“A lot of people are starting to do it and it’s taken on a life of its own,” said Lynch. “I get huge feedback. Massive feedback.”

The idea for the “family recharging time” came to Lynch after observing many of his customers.

“There’s one particular family I knew used to come in on Sunday for breakfast after church. I knew the dad and the mom and two kids and we’d always say ‘hi,’” he recalled. “Every time I went over, one or two of the kids and sometimes the parents would be on the phone. I also knew the dad would commute to New York for work every day, which takes a lot of time. I asked him about that and he said, ‘Yeah, I still do it. It’s so nice to be together and these breakfasts are rare.’ And when he said that, I thought, ‘Oh wow. Something is going on here. I need to do something.’”

Lynch is thrilled by the positive response his phone-free meals have gotten and hopes they continue to enrich his customers’ family time.

“I just thought it was such a shame not to have more time together just to talk,” he said. “Look at my eyes. I’m here with you. How was your day?”

View at the original source

Friday, March 24, 2017

Creating a Pension to Fit the Needs of the Rural Poor 03-25






Pensions are, in a sense, a necessary by-product of a rich economy. But what will it take to sell the idea to the rural poor? Especially when their income (never particularly substantial) is seasonal, increasing at harvest time and with demand in the cities for construction-related labor. What are the inducements that can convince them to invest for a future forced upon them by the changing social structure?

Olivia S. Mitchell, a Wharton professor of business economics and public policy and executive director of the Pension Research Council, and Anita Mukherjee, a professor at the Wisconsin School of Business at the University of Wisconsin-Madison, set out to answer these questions in a research paper titled, “Assessing the Demand for Micropensions among India’s Poor.” They chose India as the subject country because it is an ideal setting to study the market for micropensions, or pension plans designed for low-income individuals; the country’s new pension system is designed to reach informal sector workers. A micropension product — Swavalamban — has been applicable to all citizens in the unorganized sector who have joined the National Pension Scheme since 2011.

This scheme was funded by grants from the government. It has been replaced with the Atal Pension Yojana, in which all subscribing workers below the age of 40 are eligible for a pension of up to Rs. 5,000 ($74) per month after turning 60. In the Atal Pension Yojana, for every contribution made to the pension fund, the central government also co-contributes 50% of the total contribution or Rs. 1,000 per annum, whichever is lower, to each eligible subscriber account, for a period of five years. The minimum age for joining the Atal Pension Yojana is 18 and the maximum is 40. The age of exit and the start of the pension is 60.
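The co-contribution rule is simple to work through; the sketch below applies it to a few hypothetical annual contribution amounts.

```python
# Illustration of the co-contribution rule described above: the government
# adds 50% of the subscriber's annual contribution, capped at Rs. 1,000.
# The example contribution amounts are hypothetical.
def government_match(annual_contribution_rs: float) -> float:
    return min(0.5 * annual_contribution_rs, 1000.0)

for contribution in (1200, 2000, 3600):
    print(contribution, "->", government_match(contribution))
# 1200 -> 600.0, 2000 -> 1000.0, 3600 -> 1000.0
```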

There was a significant need for a pension system in India. “According to the government of India’s Planning Commission (2014), nearly 30% of the country’s 1.2 billion population lives below the poverty line (BPL),” the researchers write. “At the same time, according to the Population Research Bureau, the share of India’s BPL population age 60 or older is expected to increase from 8% in 2010 to 19% in 2050. Many of these older persons work in the unorganized sector and, as such, lack the identification and proof of employment documents required for accessing basic financial services. Nevertheless, current research estimates that about 80 million of these workers are capable of saving for retirement and the untapped savings are in the order of $2 billion.” 

There are other systemic pressures at work. “In India, as in many developing countries, younger adults are moving from rural to urban areas for economic opportunity,” Mitchell and Mukherjee tell Knowledge@Wharton. “Often this means that parents are left behind in the rural areas and, though they may receive financial support from their children, this revolution in traditional family structure can make older people more vulnerable. As a result, older cohorts today may be more interested in a micropension product than they were in the past.”
Micropension Options

The experiment was conducted in two of the 71 districts in the central Indian state of Uttar Pradesh — Fatehpur and Siddharthnagar. Overall, the statistics are comparable to those of BPL populations. The average survey respondent was 43 years old, owned land, was illiterate and had minimal schooling. The two most common livelihood activities that the respondents engaged in were farming via cultivation of one’s own land (37%), and agricultural labor supplied to non-owned farms (34%). With respect to educational attainment, more than 60% had never attended school, while 21% had five to 10 years of formal schooling. Insurance access among the respondents was low, at 20% of the total sample population. But 66% held a life insurance policy. Saving penetration was relatively high, with 55% having access to a formal saving account. Respondents who had saved had an average balance of Rs. 3,000 in their accounts. 

The study placed the existing pension plan as the baseline. An appropriate information and educational scheme was unfolded for the respondents. They were then asked two sets of questions.

Group 1 was asked about variants 1B, 1C and 1D, and Group 2 was asked about variants 2B, 2C and 2D. The first variant (A) is the basic micropension product that was then being offered by the Indian government. The other variants included early withdrawal (1B), where the eligibility age was 55 instead of 60; a lower matching rate of 50% instead of 100% (1C); no early withdrawal (1D); delayed withdrawal, where the eligibility age was 65 instead of 60 (2B); a higher matching rate of 150% instead of 100% (2C); and an option for full withdrawal at age 60 (2D).
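Laid out as data, the experimental design looks roughly like the sketch below. The baseline terms (eligibility at 60, a 100% match, partial early withdrawal) are inferred from the description above, and the labels are just convenient shorthand.

```python
# The experimental design above, laid out as data. Baseline parameters are
# inferred from the text; the dict keys are simply labels for the variants.
variants = {
    "A  (baseline)":         {"eligibility_age": 60, "match_pct": 100, "withdrawal": "partial early"},
    "1B (early withdrawal)": {"eligibility_age": 55, "match_pct": 100, "withdrawal": "partial early"},
    "1C (lower match)":      {"eligibility_age": 60, "match_pct": 50,  "withdrawal": "partial early"},
    "1D (no early access)":  {"eligibility_age": 60, "match_pct": 100, "withdrawal": "none before 60"},
    "2B (late eligibility)": {"eligibility_age": 65, "match_pct": 100, "withdrawal": "partial early"},
    "2C (higher match)":     {"eligibility_age": 60, "match_pct": 150, "withdrawal": "partial early"},
    "2D (full at 60)":       {"eligibility_age": 60, "match_pct": 100, "withdrawal": "full at 60"},
}
for name, terms in variants.items():
    print(name, terms)
```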

“Our research shows that individuals broadly preferred a micropension plan that offers withdrawals starting at age 60, as well as partial withdrawals beforehand, to other variants that had different access features,” say the authors. “This is similar to the micropension currently on offer in India. One exception to this, not surprisingly, is that our respondents preferred an option that boosted government matches to their plan contributions.”
The study results included a few small surprises. Respondents were asked to rank their levels of trust in six institutions on a scale of one to five, with a level of one indicating a complete lack of trust and a level of five representing a very high level of trust. Banks topped the list with a score of 4.49. The government clocked in with 4.22, while non-governmental organizations (NGOs), at 2.55, and village councils, at 3.34, were regarded as relatively less trustworthy. “We do not know for certain why NGOs were less trusted relative to government entities, but it could be because they had a smaller presence in the areas we studied,” say the authors.

These results are informative about whether microfinance institutions or local governments are likely to be successful intermediaries in the micropension product. Since the government was viewed as a trusted entity, having government support for micropensions may have helped boost adoption and contributions, the researchers note.

The faith put in banks is understandable. “For some time, there has been a growing awareness of the benefits of secure banking, even in remote areas of India,” say the authors. “Moreover, technological improvements using audio cues and fingerprinting have helped expand banking to those who cannot read or write. The Jan Dhan Yojana plan was an important vehicle used to include many rural Indian families in the formal banking system. The Indian government’s demonetization policy has also spurred an interest in enhancing poor peoples’ access to banking, as it created cash constraints throughout the economy.”

Pointing out that India’s recent effort to eliminate larger banknotes was intended to crack down on the “shadow” economy, the authors add: “It has prompted even poor and rural communities to take up mobile payment services. One example is Paytm, a phone-based system for transferring payments from a bank account to cover people’s everyday liquidity needs.”

Growing the Appeal

The paper has some advice for governments or other entities that are developing micropension products. The researchers write that an effective retirement savings device for the poor must take into account cash-flow needs, income seasonality, competing spending priorities and alternative investment options. They note that respondents to the study were among the poorest in their communities and relied heavily on income from agriculture.

“Previous studies on the financial lives of the poor have documented that their incomes are irregular and highly seasonal. As a result, requiring them to pay significant sums in just a few payments could significantly reduce demand for the pension product,” the researchers write. “For this reason, offering frequent opportunities for such individuals to contribute can be critical to the scheme’s success.”
The ability to contribute frequently to an agent who makes door-to-door visits could also help explain why people were interested in micropensions even when making fixed deposits at an Indian bank would offer them high annual returns. “Our initial hypothesis in designing this survey experiment was that some respondents would exhibit a preference between early or late eligibility for withdrawal, and that we would be able to identify the heterogeneity driving these decisions,” the researchers write. “Instead, we found that with the exception of the high match variant, respondents were less willing to adopt or contribute to the alternatives to the baseline micropension product.”

Regarding the faith in the government, the authors elaborate: “In our study setting of rural Uttar Pradesh, one of India’s poorest states, individuals receive many benefits from the government such as ration cards for discounted groceries and free health care. We believe that this repeated and positive interaction with the government has engendered the high level of trust we found.” In addition, the country’s largest life insurance company, the Life Insurance Corporation of India, is also state-owned and enjoys a high level of trust. The authors note that the low levels of education and financial literacy found in the communities they studied highlight the need to provide a financial literacy program in conjunction with the micropension. 

“We believe that the move toward digitized finance can facilitate automatic contributions to enhance the appeal of the micropension product in India,” the authors say. “Yet for a micropension plan to work for India’s poor, it must allow policyholders to contribute according to the seasonal incomes they earn while encouraging savings sufficient to provide meaningful support in old age.”

The paper finds that the Indian government’s current micropension product is appealing to the audience it is meant to reach, Mitchell and Mukherjee say. “To grow that appeal, the focus should be on proper investments (inflation is currently around 10%) and policyholder retention,” they add. “The Gates Foundation is also pushing innovations in digital finance [in India] and elsewhere in the developing world, as a means to help the poor do more to save, invest, borrow and mitigate financial risks.”


View at the original source



Monday, March 20, 2017

The case for digital reinvention 4



As executives assess the scope of their investments, they should ask themselves if they have taken only a few steps forward in a given dimension—by digitizing their existing customer touchpoints, say. Others might find that they have acted more significantly by digitizing nearly all of their business processes and introducing new ones, where needed, to connect suppliers and users.
To that end, it may be useful to take a closer look at Exhibit 6, which comprises six smaller charts. The last of them totals up actions companies take in each dimension of digitization. Here we can see that the most assertive players will be able to restore more than 11 percent of the 12 percent loss in projected revenue growth, as well as 7.3 percent of the 10.4 percent reduction in profit growth. Such results will require action across all dimensions, not just one or two—a tall order for any management team, even those at today’s digital leaders.

Looking at the digital winners

To understand what today’s leaders are doing, we identified the companies in our survey that achieved top-quartile rankings in each of three measures: revenue growth, EBIT growth, and return on digital investment.

We found that more than twice as many leading companies closely tie their digital and corporate strategies than don’t. What’s more, winners tend to respond to digitization by changing their corporate strategies significantly. This makes intuitive sense: many digital disruptions require fundamental changes to business models. Further, 49 percent of leading companies are investing in digital more than their counterparts do, compared with only 5 percent of the laggards, 90 percent of which invest less than their counterparts. It’s unclear which way the causation runs, of course, but it does appear that heavy digital investment is a differentiator.

Leading companies not only invested more but also did so across all of the dimensions we studied. In other words, winners exceed laggards in both the magnitude and the scope of their digital investments (Exhibit 7). This is a critical element of success, given the different rates at which these dimensions are digitizing and their varying effect on economic performance. 





Strengths in organizational culture underpin these bolder actions. Winners were less likely to be hindered by siloed mind-sets and behavior or by a fragmented view of their customers. A strong organizational culture is important for several reasons: it enhances the ability to perceive digital threats and opportunities, bolsters the scope of actions companies can take in response to digitization, and supports the coordinated execution of those actions across functions, departments, and business units.

Bold strategies win

So we found a mismatch between today’s digital investments and the dimensions in which digitization is most significantly affecting revenue and profit growth. We also confirmed that winners invest more, and more broadly and boldly, than other companies do. Then we tested two paths to growth as industries reach full digitization.

The first path emphasizes strategies that change a business’s scope, including the kind of pure-play disruptions the hyperscale businesses discussed earlier generate. As Exhibit 8 shows, a great strategy can by itself retrieve all of the revenue growth lost, on average, to full digitization—at least in the aggregate industry view. Combining this kind of superior strategy with median performance in the nonstrategy dimensions of McKinsey’s digital-quotient framework—including agile operations, organization, culture, and talent—yields total projected growth of 4.3 percent in annual revenues. (For more about how we arrived at these conclusions, see sidebar “About the research.”).







Most executives would fancy the kind of ecosystem play that Alibaba, Amazon, Google, and Tencent have made on their respective platforms. Yet many recognize that few companies can mount disruptive strategies, at least at the ecosystem level. With that in mind, we tested a second path to revenue growth (Exhibit 9).






In the quest for coherent responses to a digitizing world, companies must assess how far digitization has progressed along multiple dimensions in their industries and the impact that this evolution is having—and will have—on economic performance. And they must act on each of these dimensions with bold, tightly integrated strategies. Only then will their investments match the context in which they compete.

Contd 5.........



The case for digital reinvention 5



The case for digital reinvention 3









Instead, the survey indicates that distribution channels and marketing are the primary focus of digital strategies (and thus investments) at 49 percent of companies. That focus is sensible, given the extraordinary impact digitization has already had on customer interactions and the power of digital tools to target marketing investments precisely. By now, in fact, this critical dimension has become “table stakes” for staying in the game. Standing pat is not an option.

The question, it seems, looking at exhibits 4 and 5 in combination, is whether companies are overlooking emerging opportunities, such as those in supply chains, that are likely to have a major influence on future revenues and profits. That may call for resource reallocation. In general, companies that strategically shift resources create more value and deliver higher returns to shareholders. This general finding could become even more pronounced as digitization progresses.

Our survey results also suggest companies are not sufficiently bold in the magnitude and scope of their investments (see sidebar “Structuring your digital reinvention”). Our research (Exhibit 6) suggests that the more aggressively they respond to the digitization of their industries—up to and including initiating digital disruption—the better the effect on their projected revenue and profit growth. The one exception is the ecosystem dimension: an overactive response to new hyperscale competitors actually lowers projected growth, perhaps because many incumbents lack the assets and capabilities necessary for platform strategies.



Contd 4.........






The case for digital reinvention 2



This finding confirms what many executives may already suspect: by reducing economic friction, digitization enables competition that pressures revenue and profit growth. Current levels of digitization have already taken out, on average, up to six points of annual revenue and 4.5 points of growth in earnings before interest and taxes (EBIT). And there’s more pressure ahead, our research suggests, as digital penetration deepens (Exhibit 2).







While the prospect of declining growth rates is hardly encouraging, executives should bear in mind that these are average declines across all industries. Beyond the averages, we find that performance is distributed unequally, as digital further separates the high performers from the also-rans. This finding is consistent with a separate McKinsey research stream, which also shows that economic performance is extremely unequal. Strongly performing industries, according to that research, are three times more likely than others to generate market-beating economic profit. Poorly performing companies probably won’t thrive no matter which industry they compete in.

At the current level of digitization, median companies, which secure three additional points of revenue and EBIT growth, do better than average ones, presumably because the long tail of companies hit hard by digitization pulls down the mean. But our survey results suggest that as digital increases economic pressure, all companies, no matter what their position on the performance curve may be, will be affected.

Uneven returns on investment

That economic pressure will make it increasingly critical for executives to pay careful heed to where—and not just how—they compete and to monitor closely the return on their digital investments. So far, the results are uneven. Exhibit 3 shows returns distributed unequally: some players in every industry are earning outsized returns, while many others in the same industries are experiencing returns below the cost of capital. 





These findings suggest that some companies are investing in the wrong places or investing too much (or too little) in the right ones—or simply that their returns on digital investments are being competed away or transferred to consumers. On the other hand, the fact that high performers exist in every industry (as we’ll discuss further in a moment) indicates that some companies are getting it right—benefiting, for example, from cross-industry transfers, as when technology companies capture value in the media sector.

Where to make your digital investments

Improving the ROI of digital investments requires precise targeting along the dimensions where digitization is proceeding. Digital has widely expanded the number of available investment options, and simply spreading the same amount of resources across them is a losing proposition. In our research, we measured five separate dimensions of digitization’s advance into industries: products and services, marketing and distribution channels, business processes, supply chains, and new entrants acting in ecosystems.

How fully each of these dimensions has advanced, and the actions companies are taking in response, differ according to the dimension in question. And there appear to be mismatches between opportunities and investments. Those mismatches reflect advancing digitization’s uneven effect on revenue and profit growth, because of differences among dimensions as well as among industries. Exhibit 4 describes the rate of change in revenue and EBIT growth that appears to be occurring as industries progress toward full digitization. This picture, combining the data for all of the industries we studied, reveals that today’s average level of digitization, shown by the dotted vertical line, differs for each dimension. Products and services are more digitized, supply chains less so. 




To model the potential effects of full digitization on economic performance, we linked the revenue and EBIT growth of companies to a given dimension’s digitization rate, leaving everything else equal. The results confirm that digitization’s effects depend on where you look. Some dimensions take a bigger bite out of revenue and profit growth, while others are digitizing faster. This makes intuitive sense. As platforms transform industry ecosystems, for example, revenues grow—even as platform-based competitors put pressure on profits. As companies digitize business processes, profits increase, even though little momentum in top-line growth accompanies them.

The biggest future impact on revenue and EBIT growth, as Exhibit 4 shows, is set to occur through the digitization of supply chains. In this dimension, full digitization contributes two-thirds (6.8 percentage points of 10.2 percent) of the total projected hit to annual revenue growth and more than 75 percent (9.4 out of 12 percent) to annual EBIT growth.
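Those fractions can be checked directly from the figures quoted above:

```python
# Quick check of the fractions quoted for the supply-chain dimension.
revenue_share = 6.8 / 10.2   # share of the projected hit to annual revenue growth
ebit_share = 9.4 / 12.0      # share of the projected hit to annual EBIT growth
print(f"revenue: {revenue_share:.0%}, EBIT: {ebit_share:.0%}")  # ~67% and ~78%
```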

Despite the supply chain’s potential impact on the growth of revenues and profits, survey respondents say that their companies aren’t yet investing heavily in this dimension. Only 2 percent, in fact, report that supply chains are the focus of their forward-looking digital strategies (Exhibit 5), though headlining examples such as Airbnb and Uber demonstrate the power of tapping previously inaccessible sources of supply (sharing rides or rooms, respectively) and bringing them to market. Similarly, there is little investment in the ecosystems dimension, where hyperscale businesses such as Alibaba, Amazon, Google, and Tencent are pushing digitization most radically, often entering one industry and leveraging platforms to create collateral damage in others. 

Contd 3...............




The case for digital reinvention 03-21


Digital technology, despite its seeming ubiquity, has only begun to penetrate industries. As it continues its advance, the implications for revenues, profits, and opportunities will be dramatic.







Image credit : Shyam's Imagination Library



As new markets emerge, profit pools shift, and digital technologies pervade more of everyday life, it’s easy to assume that the economy’s digitization is already far advanced. According to our latest research, however, the forces of digital have yet to become fully mainstream. On average, industries are less than 40 percent digitized, despite the relatively deep penetration of these technologies in media, retail, and high tech.

As digitization penetrates more fully, it will dampen revenue and profit growth for some, particularly the bottom quartile of companies, according to our research, while the top quartile captures disproportionate gains. Bold, tightly integrated digital strategies will be the biggest differentiator between companies that win and companies that don’t, and the biggest payouts will go to those that initiate digital disruptions. Fast-followers with operational excellence and superior organizational health won’t be far behind.


These findings emerged from a research effort to understand the nature, extent, and top-management implications of the progress of digitization. We tailored our efforts to examine its effects along multiple dimensions: products and services, marketing and distribution channels, business processes, supply chains, and new entrants at the ecosystem level (for details, see sidebar “About the research”). We sought to understand how economic performance will change as digitization continues its advance along these different dimensions. What are the best-performing companies doing in the face of rising pressure? Which approach is more important as digitization progresses: a great strategy with average execution or an average strategy with great execution?

The research-survey findings, taken together, amount to a clear mandate to act decisively, whether through the creation of new digital businesses or by reinventing the core of today’s strategic, operational, and organizational approaches.

More digitization—and performance pressure—ahead

According to our research, digitization has only begun to transform many industries (Exhibit 1). Its impact on the economic performance of companies, while already significant, is far from complete.

Contd 2.........


What’s Your Data Worth? 03-20


Many businesses don’t yet know the answer to that question. But going forward, companies will need to develop greater expertise at valuing their data assets.


Image credit : Shyam's Imagination Library


In 2016, Microsoft Corp. acquired the online professional network LinkedIn Corp. for $26.2 billion. Why did Microsoft consider LinkedIn to be so valuable? And how much of the price paid was for LinkedIn’s user data — as opposed to its other assets? Globally, LinkedIn had 433 million registered users and approximately 100 million active users per month prior to the acquisition. Simple arithmetic tells us that Microsoft paid about $260 per monthly active user.
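The per-user arithmetic behind that figure is straightforward:

```python
# The per-user arithmetic behind the roughly $260 figure above.
deal_value_usd = 26.2e9
monthly_active_users = 100e6
print(deal_value_usd / monthly_active_users)   # -> 262.0 dollars per monthly active user
```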

Did Microsoft pay a reasonable price for the LinkedIn user data? Microsoft must have thought so — and LinkedIn agreed. But the deal generated scrutiny from the rating agency Moody’s Investors Service Inc., which conducted a review of Microsoft’s credit rating after the deal was announced. What can be learned from the Microsoft–LinkedIn transaction about the valuation of user data? How can we determine if Microsoft — or any acquirer — paid a reasonable price?

The answers to these questions are not clear. But the subject is growing increasingly relevant as companies collect and analyze ever more data. Indeed, the multibillion-dollar deal between Microsoft and LinkedIn is just one recent example of data valuation coming to the fore. Another example occurred during the Chapter 11 bankruptcy proceedings of Caesars Entertainment Operating Corp. Inc., a subsidiary of the casino gaming company Caesars Entertainment Corp. One area of conflict was the data in Caesars’ Total Rewards customer loyalty program; some creditors argued that the Total Rewards program data was worth $1 billion, making it, according to a Wall Street Journal article, “the most valuable asset in the bitter bankruptcy feud at Caesars Entertainment Corp.” A 2016 report by a bankruptcy court examiner on the case noted instances where sold-off Caesars properties — having lost access to the customer analytics in the Total Rewards database — suffered a decline in earnings. But the report also observed that it might be difficult to sell the Total Rewards system to incorporate it into another company’s loyalty program. Although the Total Rewards system was Caesars’ most valuable asset, its value to an outside party was an open question.

As these examples illustrate, there is no formula for placing a precise price tag on data. But in both of these cases, there were parties who believed the data to be worth hundreds of millions of dollars.

Exploring Data Valuation

To research data valuation, we conducted interviews and collected secondary data on information activities in 36 companies and nonprofit organizations in North America and Europe. Most had annual revenues greater than $1 billion. They represented a wide range of industry sectors, including retail, health care, entertainment, manufacturing, transportation, and government.

Although our focus was on data value, we found that most of the organizations in our study were focused instead on the challenges of storing, protecting, accessing, and analyzing massive amounts of data — efforts for which the information technology (IT) function is primarily responsible.

While the IT functions were highly effective in storing and protecting data, they alone cannot make the key decisions that transform data into business value. Our study lens, therefore, quickly expanded to include chief financial and marketing officers and, in the case of regulatory compliance, legal officers. Because the majority of the companies in our study did not have formal data valuation practices, we adjusted our methodology to focus on significant business events triggering the need for data valuation, such as mergers and acquisitions, bankruptcy filings, or acquisitions and sales of data assets. Rather than studying data value in the abstract, we looked at events that triggered the need for such valuation and that could be compared across organizations.
All the companies we studied were awash in data, and the volume of their stored data was growing on average by 40% per year. We expected this explosion of data would place pressure on management to know which data was most valuable. However, the majority of companies reported they had no formal data valuation policies in place. A few identified classification efforts that included value assessments. These efforts were time-consuming and complex. For example, one large financial group had a team working on a significant data classification effort that included the categories “critical,” “important,” and “other.” Data was categorized as “other” when the value was judged to be context-specific. The team’s goal was to classify hundreds of terabytes of data; after nine months, they had worked through less than 20.

The difficulty that this particular financial group encountered is typical. Valuing data can be complex and highly context-dependent. Value may be based on multiple attributes, including usage type and frequency, content, age, author, history, reputation, creation cost, revenue potential, security requirements, and legal importance. Data value may change over time in response to new priorities, litigation, or regulations. These factors are all relevant and difficult to quantify.

A Framework for Valuing Data

How, then, should companies formalize data valuation practices? Based on our research, we define data value as the composite of three sources of value: (1) the asset, or stock, value; (2) the activity value; and (3) the expected, or future, value. Here’s a breakdown of each value source:

1. Data as Strategic Asset

For most companies, monetizing data assets means looking at the value of customer data. This is not a new concept; the idea of monetizing customer data is as old as grocery store loyalty cards. Customer data can generate monetary value directly (when the data is sold, traded, or acquired) or indirectly (when a new product or service leveraging customer data is created, but the data itself is not sold). Companies can also combine publicly available and proprietary data to create unique data sets for sale or use.

How big is the market opportunity for data monetization? In a word: big. The Strategy& unit of PwC has estimated that, in the financial sector alone, the revenue from commercializing data will grow to $300 billion per year by 2018.

2. The Value of Data in Use

Data use is typically defined by the application — such as a customer relationship management system or general ledger — and by the frequency of use, which in turn reflects the application workload, the transaction rate, and how often the data is accessed.

The frequency of data usage brings up an interesting aspect of data value. Conventional, tangible assets generally exhibit decreasing returns to use. That is, they decrease in value the more they are used. But data has the potential — not always, but often — to increase in value the more it is used. That is, data viewed as an asset can exhibit increasing returns to use. For example, Google Inc.’s Waze navigation and traffic application integrates real-time crowdsourced data from drivers, so the Waze mapping data becomes more valuable as more people use it.

The major costs of data are in its capture, storage, and maintenance. The marginal costs of using it can be almost negligible. An additional factor is time of use: The right data at the right time — for example, transaction data collected during the Christmas retail sales season — may be of very high value.

Of course, usage-based definitions of value are two-sided; the value attached to each side of the activity is unlikely to be the same. For example, for a traveler lost in an unfamiliar city, mapping data sent to the traveler’s cellphone may be of very high value for one use, but the traveler may never need that exact data again. On the other hand, the data provider may keep the data for other purposes — and use it over and over again — for a very long time.

3. The Expected Future Value of Data

Although the phrases “digital assets” or “data assets” are commonly used, there is no generally accepted definition of how these assets should be counted on balance sheets. In fact, if data assets are tracked and accounted for at all — a big “if” — they are typically commingled with other intangible assets, such as trademarks, patents, copyrights, and goodwill. There are a number of approaches to valuing intangible assets. For example, intangible assets can be valued on the basis of observable market-based transactions involving similar assets; on the income they produce or cash flow they generate through savings; or on the cost incurred to develop or replace them.
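To make the composite concrete, here is a minimal, hypothetical sketch in Python of how the three sources of value might be rolled up into a single figure. The class, field names, and dollar amounts are our own illustrative assumptions, not part of the study or of any accounting standard.

from dataclasses import dataclass

@dataclass
class DataValuation:
    """Illustrative composite of the three sources of data value."""
    asset_value: float      # stock value: what the data could fetch if sold, traded, or licensed today
    activity_value: float   # value derived from current use, net of capture, storage, and maintenance costs
    expected_value: float   # estimate of future income, savings, or replacement cost

    def composite(self) -> float:
        # Simplest possible aggregation: a straight sum.
        # A real valuation would weight and discount each component.
        return self.asset_value + self.activity_value + self.expected_value

# Example: a customer data set valued under all three lenses (figures invented for illustration).
customer_data = DataValuation(asset_value=1_200_000,
                              activity_value=450_000,
                              expected_value=800_000)
print(f"Composite value: ${customer_data.composite():,.0f}")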

What Can Companies Do?

No matter which path a company chooses to embed data valuation into company-wide strategies, our research uncovered three practical steps that all companies can take.

1. Make valuation policies explicit and sharable across the company. It is critical to develop company-wide policies in this area. For example, is your company creating a data catalog so that all data assets are known? Are you tracking the usage of data assets, much like a company tracks the mileage on the cars or trucks it owns? Making implicit data policies explicit, codified, and sharable across the company is a first step in prioritizing data value.
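As a rough illustration only (the structure below is our own assumption, not a practice reported by the companies in the study), a data catalog entry that tracks usage much like an odometer tracks mileage might look like this:

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class CatalogEntry:
    """One entry in a hypothetical company-wide data catalog."""
    name: str
    owner: str
    classification: str                      # e.g., "critical", "important", "other"
    access_count: int = 0                    # the "mileage" on this data asset
    last_accessed: Optional[datetime] = None

    def record_access(self) -> None:
        # Each read or write bumps the usage counter, like logging miles on a truck.
        self.access_count += 1
        self.last_accessed = datetime.now()

catalog = [CatalogEntry("customer_transactions", owner="CFO office", classification="critical")]
catalog[0].record_access()
print(catalog[0])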

A few companies in our sample were beginning to manually classify selected data sets by value. In one case, the triggering event was an internal security audit to assess data risk. In another, the triggering event was a desire to assess where in the organization the volume of data was growing rapidly and to examine closely the costs and value of that growth.

The strongest business case we found for data valuation was in the acquisition, sale, or divestiture of business units with significant data assets. We anticipate that in the future, some of the evolving responsibilities of chief data officers may include valuing company data for these purposes. But that role is too new for us to discern any aggregate trends at this time.

2. Build in-house data valuation expertise. Our study found that several companies were exploring ways to monetize data assets for sale or licensing to third parties. However, having data to sell is not the same thing as knowing how to sell it. Several of the companies relied on outside experts, rather than in-house expertise, to value their data. We anticipate this will change. Companies seeking to monetize their data assets will first need to address how to acquire and develop valuation expertise in their own organizations.

3. Decide whether top-down or bottom-up valuation processes are the most effective within the company. In the top-down approach to valuing data, companies identify their critical applications and assign a value to the data used in those applications, whether the application is a mainframe transaction system, a customer relationship management system, or a product development system. Key steps include defining the main system linkages — that is, the systems that feed other systems — associating the data accessed by all linked systems, and measuring the data activity within the linked systems. This approach has the benefit of prioritizing where internal partnerships between IT and business units need to be built, if they are not already in place.
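A minimal sketch of the top-down approach, using invented application values, feed relationships, and data sets purely for illustration, might look like the following:

# Hypothetical inputs: business-assigned value of critical applications,
# the system linkages (which systems feed which), and the data each system touches.
app_value = {"crm": 500_000, "general_ledger": 900_000, "product_dev": 300_000}
feeds = {"crm": ["marketing_datamart"], "general_ledger": ["finance_warehouse"],
         "marketing_datamart": [], "finance_warehouse": [], "product_dev": []}
data_used_by = {"crm": {"customer_master"},
                "marketing_datamart": {"customer_master", "campaign_history"},
                "general_ledger": {"gl_entries"},
                "finance_warehouse": {"gl_entries"},
                "product_dev": {"design_docs"}}

def linked_systems(app):
    """Walk the feed map to collect an application and every system downstream of it."""
    seen, stack = set(), [app]
    while stack:
        system = stack.pop()
        if system not in seen:
            seen.add(system)
            stack.extend(feeds.get(system, []))
    return seen

# Attribute each application's value evenly across the data sets touched by its linked systems.
data_value = {}
for app, value in app_value.items():
    touched = set().union(*(data_used_by.get(s, set()) for s in linked_systems(app)))
    for data_set in touched:
        data_value[data_set] = data_value.get(data_set, 0) + value / len(touched)

print(sorted(data_value.items(), key=lambda kv: -kv[1]))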

A second approach is to define data value heuristically — in effect, working up from a map of data usage across the core data sets in the company. Key steps in this approach include assessing data flows and linkages across data and applications, and producing a detailed analysis of data usage patterns. Companies may already have much of the required information in data storage devices and distributed systems.
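A correspondingly minimal bottom-up sketch, again with invented access-log data, simply aggregates observed usage per data set and ranks the results:

from collections import Counter

# Hypothetical access log harvested from storage devices and distributed systems:
# one (data_set, application) pair per observed access.
access_log = [
    ("customer_master", "crm"), ("customer_master", "marketing_datamart"),
    ("customer_master", "crm"), ("gl_entries", "general_ledger"),
    ("design_docs", "product_dev"), ("gl_entries", "finance_warehouse"),
]

usage = Counter(data_set for data_set, _ in access_log)
consumers = {}
for data_set, app in access_log:
    consumers.setdefault(data_set, set()).add(app)

# Rank data sets by how often and how widely they are used, as a crude proxy for activity value.
for data_set, count in usage.most_common():
    print(f"{data_set}: {count} accesses across {len(consumers[data_set])} applications")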

Whichever approach is taken, the first step is to identify the business and technology events that trigger the business’s need for valuation. A needs-based approach will help senior management prioritize and drive valuation strategies, moving the company forward in monetizing the current and future value of its digital assets.

Reproduced from MIT Sloan Management Review

Saturday, March 18, 2017

When To NOT Use Isotype Controls 03-19

Antibodies can bind to cells in a specific manner – where the Fab portion of the antibody binds to a high-affinity specific target, or the Fc portion of the antibody binds to the Fc receptor (FcR) on the surface of some cells.

They can also bind to cells in a nonspecific manner, where the Fab portion binds to a low-affinity, non-specific target. Further, as cells die and membrane integrity is compromised, antibodies can bind non-specifically to intracellular targets.

So, the question is, how can you identify and control for this observed nonspecific antibody binding? 

To answer this question, many research groups started using a control known as the isotype control.
The concept is to use an antibody that targets a protein not present on the surface of the target cells but that shares the same isotype (both heavy and light chain) with the antibody of interest. When cells are labeled, those that show binding to the isotype control are excluded, on the grounds that this binding represents the non-specific binding of the cells.

Why Isotype Controls Often Fall Short 

Isotype controls were once the most popular negative control for flow cytometry experiments.


They are still routinely included by some labs, almost abandoned by others, and a source of confusion for many beginners. What are they, and why and when do I need them? Are they of any use at all, or just a waste of money?

Most importantly, why do reviewers keep asking for them when they review papers containing flow data?

Isotype controls were classically meant to show what level of nonspecific binding you might have in your experiment. The idea is that there are several ways that an antibody might react in undesirable ways with the surface of the cell.

Not all of these can be directly addressed by this control (such as cross-reactivity to a similar epitope on a different antigen, or even to a different epitope on the same antigen). What it does do is give you an estimate of non-specific (non-epitope-driven) binding. This can be Fc-mediated binding or completely nonspecific “sticky” cell adhesion.

To be useful, the isotype control should ideally match the antibody of interest in species, heavy chain class (IgA, IgG, IgD, IgE, or IgM), light chain class (kappa or lambda), and fluorochrome (PE, APC, etc.), and should have the same F:P ratio. F:P is a measure of how many fluorescent molecules are present on each antibody.

This, unfortunately, makes the manufacture of ideal isotype controls highly impractical. 

There is even a case to be made that differences in the amino acid sequence of the variable regions of both the light and heavy chains might result in variable levels of undesirable adherence in isotypes versus your antibody of interest.

Moving Beyond Isotype Controls

Many flow cytometry researchers are no longer using isotype controls, with some suggesting they be left out of almost all experiments.


If you spend any time browsing the Purdue Cytometry list, you’ll see these same arguments presented in threads about isotype controls. 

Reports in Cytometry Part A and Cytometry Part B lay out the available control options in several categories, along with the pros and cons of each. The sections of these reports that focus on isotype controls summarize the problems with their use very clearly.


They also illustrate that even the same isotype control clone, obtained from different manufacturers, can show highly variable levels of undesirable binding and staining.

If you do use isotype controls in your experiment, they should match your specific antibody in as many of the following characteristics as possible: species, isotype, fluorochrome, F:P ratio, and concentration.


Here are 5 cases against using isotype controls alone...

1. Isotype controls are not needed for bimodal experiments.

You don’t need isotype controls for experiments that are clearly bimodal. For example, if you are looking for T cells and B cells in peripheral blood, the negative cells that are also present in the circulation will provide gating confidence.

In a sample of lysed mouse blood, for example, it is extremely easy to pick out the CD4- and CD8-positive cells.
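As a purely numerical illustration (the values below are simulated, not taken from any real experiment), a clearly bimodal stain lets you set the gate from the negative population in the same tube, with no separate isotype tube:

import numpy as np

rng = np.random.default_rng(0)

# Simulated log-scale CD4 fluorescence: a dim negative population and a bright positive one.
negatives = rng.normal(loc=2.0, scale=0.3, size=7000)
positives = rng.normal(loc=4.5, scale=0.3, size=3000)
cd4_signal = np.concatenate([negatives, positives])

# Because the distribution is bimodal, the gate can be placed a few standard deviations
# above the lower (negative) peak; in practice the spread would be estimated from that peak.
lower_half = cd4_signal[cd4_signal < np.median(cd4_signal)]
gate = np.median(lower_half) + 4 * lower_half.std()
percent_positive = 100 * np.mean(cd4_signal > gate)
print(f"Gate at {gate:.2f}; {percent_positive:.1f}% CD4-positive")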



2. Isotype controls are not sufficient for post-cultured cells.

If you are using post-cultured cells, the isotype control might give you some information about the inherent “stickiness” of your cells.

However, this measurement is not a value you can subtract from your specific antibody sample to determine fluorescence intensity or percent positive.

Instead, the measurement is simply a qualitative measure of “stickiness” and the effectiveness of Fc-blocking in your protocol.

3. Isotype controls should not be used as gating controls.

If you are using multiple dyes in your panel, and your concern is apparent positivity caused by spectral overlap, you will be better served by a fluorescence-minus-one (FMO) control, in which all antibodies are included except the one you suspect is most prone to error from spectral overlap.
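As a small sketch of the bookkeeping involved (the panel contents here are hypothetical), FMO controls can be enumerated by dropping one reagent at a time from the full stain:

# Hypothetical full staining panel: marker -> fluorochrome.
panel = {"CD3": "FITC", "CD4": "PE", "CD8": "APC", "CD25": "PE-Cy7"}

def fmo_controls(full_panel):
    """For each marker, build a control tube containing every antibody except that one."""
    controls = {}
    for omitted in full_panel:
        controls[f"FMO-{omitted}"] = {m: f for m, f in full_panel.items() if m != omitted}
    return controls

for name, contents in fmo_controls(panel).items():
    print(name, "->", ", ".join(f"{m} {f}" for m, f in contents.items()))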

4. Isotype controls should not be used to determine positivity.

You should absolutely not be using isotype controls to determine positive versus negative cells — or, as mentioned in #3 above, as a gating control.

5. Isotype controls are not always sufficient for determining non-specific antibody adherence.

Isotype controls cannot always distinguish non-specific antibody adherence from, for example, free fluorochrome adherence. For this, you need to use isoclonic controls: add a large excess of the same monoclonal antibody, without a fluorochrome conjugate, to your staining reaction, and your fluorescence should drop. If it does not, your issue is not nonspecific antibody binding but free fluorochrome binding.