

This article/post is from a third-party website. The views expressed are those of the author. We at Capacity Building & Development may not necessarily subscribe to them completely. The relevance and applicability of the content is limited to certain geographic zones; it is not universal.


Friday, February 16, 2018

Vishal Sikka: Why AI Needs a Broader, More Realistic Approach 02-16

The concept of artificial intelligence (AI), or the ability of machines to perform tasks that typically require human-like understanding, has been around for more than 60 years. But the buzz around AI now is louder and shriller than ever. With the computing power of machines increasing exponentially and staggering amounts of data available, AI seems to be on the brink of revolutionizing various industries and, indeed, the way we lead our lives.

Until last summer, Vishal Sikka was the CEO of Infosys, an Indian information technology services firm. Before that, he was a member of the executive board at SAP, a German software firm, where he led all products and drove innovation. India Today magazine named him among the top 50 most powerful Indians in 2017. Sikka is now working on his next venture, exploring the breakthroughs that AI can bring and ways in which AI can help elevate humanity.

Sikka says he is passionate about building technology that amplifies human potential. He expects that the current wave of AI will “produce a tremendous number of applications and have a huge impact.” He also believes that this “hype cycle will die” and “make way for a more thoughtful, broader approach.”

In a conversation with Knowledge@Wharton, Sikka, who describes himself as a “lifelong student of AI,” discusses the current hype around AI, the bottlenecks it faces, and other nuances.

Knowledge@Wharton: Artificial intelligence (AI) has been around for more than 60 years. Why has interest in the field picked up in the last few years?

Vishal Sikka: I have been a lifelong student of AI. I met [AI pioneer and cognitive scientist] Marvin Minsky when I was about 20 years old. I’ve been studying this field ever since. I did my Ph.D. in AI. John McCarthy, the father of AI, was the head of my qualifying exam committee.

The field of AI goes back to 1956 when John, Marvin, Allen Newell, Herbert Simon and a few others organized a summer workshop at Dartmouth. John came up with the name “AI” and Marvin gave its first definition. Over the first 50 years, there were hills and valleys in the AI journey. The progress was multifaceted. It was multidimensional. Marvin wrote a wonderful book in 1986 called The Society of Mind. What has happened in the last 10 years, especially since 2012, is that there has been a tremendous interest in one particular set of techniques. These are based on what are called “deep neural networks.”

Neural networks themselves have been around for a long time. In fact, Marvin’s doctoral thesis in the early 1950s dealt with neural networks. But in the last 20 years or so, these neural network-based techniques have become extraordinarily popular and powerful for a couple of reasons.
First, if I can step back for a second, the idea of neural networks is that you create a network that resembles biological neural networks, like those in the human brain.

This idea has been around for more than 70 years. However, in 1986 a breakthrough happened thanks to a professor in Canada, Geoff Hinton. His technique of backpropagation (a supervised learning method used to train neural networks by adjusting the weights and the biases of each neuron) created a lot of excitement, and a great book, Parallel Distributed Processing, by David Rumelhart and James McClelland, together with Hinton, moved the field of neural net-related “connectionist” AI forward. But still, back then, AI was quite multifaceted.
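The supervised-learning loop that backpropagation enables can be sketched in a few lines of Python. This is a minimal illustrative toy, not any system discussed in the interview: a tiny two-layer network learns XOR by running a forward pass, propagating the prediction error backwards, and nudging every weight and bias a little on each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Randomly initialized weights and biases for the hidden and output layers.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the error back through the layers to find
    # how much each weight and bias contributed to it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent step: adjust every parameter to reduce the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach the XOR targets 0, 1, 1, 0
```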

Second, in the last five years, one of Hinton’s groups invented a technique called “deep learning” or “deep neural networks.” There isn’t anything particularly deep about it other than the fact that the networks have many layers, and they are massive. This has happened because of two things. One, computers have become extraordinarily powerful. With Moore’s law, every two years, more or less, we have seen a doubling of price-performance in computing. Those effects are becoming dramatic and much more visible now. Computers today are tens of thousands of times more powerful than they were when I first worked on neural networks in the early 1990s.
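As a back-of-the-envelope check on that figure (the 1992 start year and the two doubling periods below are my own illustrative assumptions), compounding the doublings gives:

```python
# Rough arithmetic behind the claim above: compounding Moore's-law
# doublings from the early 1990s (taken here as 1992) up to this
# 2018 interview.
results = {}
for period_years in (2.0, 1.5):           # "every two years, more or less"
    doublings = (2018 - 1992) / period_years
    results[period_years] = 2 ** doublings
    print(f"doubling every {period_years} yrs -> ~{results[period_years]:,.0f}x")
# A doubling period between 1.5 and 2 years brackets the "tens of
# thousands of times more powerful" figure quoted above.
```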

“The hype we see around AI today will pass and make way for a more thoughtful and realistic approach.”

The second thing is that big cloud companies like Google, Facebook, Alibaba, Baidu and others have massive amounts of data, absolutely staggering amounts of data, that they can use to train neural networks. The combination of deep learning, together with these two phenomena, has created this new hype cycle, this new interest in AI.

But AI has seen many hype cycles over the last six decades. This time around, there is a lot of excitement, but the progress is still very narrow and asymmetric. It’s not multifaceted. My feeling is that this hype cycle will produce great applications and have a big impact and wonderful things will be done. But this hype cycle will die and a few years later another hype cycle will come along, and then we’ll have more breakthroughs around broader kinds of AI and more general approaches. The hype we see around AI today will pass and make way for a more thoughtful and realistic approach.

Knowledge@Wharton: What do you see as the most significant breakthroughs in AI? How far along are we in AI development?

Sikka: If you look at the success of deep neural networks or of reinforcement learning, we have produced some amazing applications. My friend [and computer science professor] Stuart Russell characterizes these as “one-second tasks.” These are tasks that people can perform in one second. For instance, identifying a cat in an image, checking if there’s an obstacle on the road, confirming if the information in a credit or loan application is correct, and so on.

With the advances in techniques — the neural network-based techniques, the reinforcement learning techniques — as well as the advances in computing and the availability of large amounts of data, computers can already do many one-second tasks better than people. We get alarmed by this because AI systems are surpassing human performance even in sophisticated fields like radiology or law — jobs that we typically associate with large amounts of human training. But I don’t see it as alarming at all. It will have an impact in different ways on the workforce, but I see that as a kind of great awakening.

But, to answer your question, we already have the ability to apply these techniques and build applications where a system can learn to conduct tasks in a well-defined domain. When you think about the enterprise in the business world, these applications will have tremendous impact and value.

Knowledge@Wharton: In one of your talks, you referred to new ways that fraud could be detected by using AI. Could you explain that?

Sikka: You find fraud by connecting the dots across many dimensions. Already we can build systems that can identify fraud far better than people by themselves can. Depending on the risk tolerance of the enterprise, these systems can either assist senior people, whose judgment ultimately prevails, or take over the task entirely. Either way, fraud detection is a great example of the kinds of things that we can do with reinforcement learning, with deep neural networks, and so on.
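The “connecting the dots across many dimensions” idea can be sketched with a deliberately simple stand-in for the deep-learning and reinforcement-learning systems Sikka describes: a toy z-score rule that flags a transaction only when several independent dimensions look extreme at once. The transaction data below is entirely made up.

```python
import statistics

# Each transaction: (amount in dollars, hour of day, distance from home in km).
# A small, invented history of a customer's normal behavior:
history = [(45, 13, 4), (60, 15, 6), (52, 14, 5), (48, 12, 3), (55, 16, 7)]
candidate = (950, 3, 4000)   # large amount, odd hour, far from home

def zscores(txn, hist):
    """How many standard deviations each dimension sits from its history."""
    scores = []
    for i, value in enumerate(txn):
        col = [h[i] for h in hist]
        mu, sd = statistics.mean(col), statistics.pstdev(col)
        scores.append(abs(value - mu) / sd)
    return scores

# Flag only when several independent dimensions are simultaneously extreme,
# i.e. when the dots connect across dimensions, not in any single one.
flagged = sum(s > 3 for s in zscores(candidate, history)) >= 2
print("suspicious:", flagged)
```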

Another example is anything that requires visual identification. For instance, looking at pictures and identifying damages, or identifying intrusions. In the medical domain, it could be looking at radiology, looking at skin cancer identifications, things like that. There are some amazing examples of systems that have done way better than people at many of these tasks. Other examples include security surveillance, or analyzing damage for insurance companies, or conducting specific tasks like processing loans, job applications or account openings. All these are areas where we can apply these techniques. Of course, these applications still have to be built. We are in the early stages of building these kinds of applications, but the technology is already there, in these narrow domains, to have a great impact.

Knowledge@Wharton: What do you expect will be the most significant trends in AI technology and fundamental research in the next 10 years? What will drive these developments?

Sikka: It is human nature to continue what has worked, so lots of money is flowing into ongoing aspects of AI. On the chip side, in addition to Nvidia, Intel, Qualcomm, etc., Google, Huawei and others are building their own AI processors, and many startups are as well, and all of this is becoming available in cloud platforms. There is a ton of work happening on incrementally advancing the core software technologies that sit on top of this infrastructure, like TensorFlow, Caffe, etc., which are still in the early stages of maturity. And this will of course continue.

But beyond this, my sense is that there are going to be three different fronts of development. One will be in building applications of these technologies. There is going to be a massive set of opportunities around bringing different applications in different domains to the businesses and to consumers, to help improve things. We are still woefully early on this front. That is going to be one big thing that will happen in the next five to 10 years. We will see applications in all kinds of areas, and there will be application-oriented breakthroughs.

“The development of AI is asymmetric.”

Two, from a technology perspective, there will be a realization that while the technology we currently have is exciting, there is still a long way to go in building more sophisticated, more general behavior. We are nowhere close to building what Marvin [Minsky] called the “society of mind.” In 1991, he said in a paper that the symbolic techniques would come together with the connectionist techniques, and that we would see the benefits of both. That has not happened yet.
John [McCarthy] used to say that machine learning systems should understand the reality behind the appearance, not just the appearance.

I expect that more general kinds of techniques will be developed and we will see progress towards more ensemble approaches: broader, more resilient, more general-purpose approaches. My own Ph.D. thesis was along these lines, on integrating many specialist, narrow experts into a symbolic general-purpose reasoning system. I am thinking about and working on these ideas and am very excited about them.

The third area — and I wish that there is more progress on this front — is a broader awareness, broader education around AI. I see that as a tremendous challenge facing us. The development of AI is asymmetric. A few companies have disproportionate access to data and to the AI experts. There is just a massive amount of hype, myth and noise around AI. We need to broaden the base, to bring the awareness of AI and the awareness of technology to large numbers of people. This is a problem of scaling the educational infrastructure.

Knowledge@Wharton: Picking up on what you said about AI development being asymmetric, which industries do you think are best positioned for AI adoption over the next decade?

Sikka: Manufacturing is an obvious example because of the great advances in robotics, in advancing how robots perceive their environments, reason about them, and exert increasingly fine control over them. There is going to be a great amount of progress in anything that involves transportation, though I don’t think we are close to full autonomy in driving yet, because there are some structural problems that have to be solved.

Health care is going to be transformed by AI, both the practice of health care and its quality: the way we develop medicines (protein binding is a great use case for deep learning), the personalization of medicines and of care, and so on. There will be tremendous improvement in financial services, where, in addition to AI, decentralized/P2P technologies like blockchain will have a huge impact. Education, as an industry, will go through another round of significant change.

There are many industries that will go through a massive transformation because of AI. In any business there will be areas where AI will help to renew the existing business, improve efficiency, improve productivity, dramatically improve agility and the speed at which we can conduct our business, connect the dots, and so forth. But there will also be opportunities around completely new breakthrough technologies that are possible because of these applications — things that we currently can’t foresee.

The point about asymmetry is a broader issue: a relatively small number of companies have access to the relatively small pool of talent and to massive amounts of data and computing, and therefore the development of AI is very disproportionate. I think that is something that needs to be addressed seriously.

Knowledge@Wharton: How do you address that? Education is one way, of course. Beyond that, is there anything else that can be done?

Sikka: I find it extraordinary that in the traditional industries, for example in construction, you can walk into any building and see the plans of that building, see how the building is constructed and what the structure is like. If there is a problem, if something goes wrong in a building, we know exactly how to diagnose it, how to identify what went wrong. It’s the same with airplanes, with cars, with most complex systems.

“The compartmentalization of data and broader access to it has to be fixed.”

But when it comes to AI, when it comes to software systems, we are woefully behind. I find it astounding that we have extremely critical and extremely important services in our lives where we seem to be okay with not being able to tell what happened when the service fails or betrays our trust in some way. This is something that has to be fixed. The compartmentalization of data and broader access to it has to be fixed. This is something that the government will have to step in and address. The European governments are further ahead on this than other countries. I was surprised to see that the EU’s decision on demanding explainability of AI systems has seen some resistance, including here in the valley.

I think it behooves us to improve the state of the art, develop better technologies, more articulate technologies, and even look back on history to see work that has already been done, to see how we can build explainable and articulate AI, make technology work together with people, to share contexts and information between machines and people, to enable a great synthesis, and not impenetrable black boxes.

But the point on accessibility goes beyond this. There simply aren’t enough people who know these techniques. China’s Tencent recently sponsored research which showed that there are only some 300,000 machine learning engineers worldwide, whereas millions are needed. And how are we addressing this? Of course there is good work going on in online education, with classes on Udacity, Coursera and others. My friend [Udacity co-founder] Sebastian Thrun started a wonderful class on autonomous driving that has thousands of students. But it is not nearly enough.

And so the big tech companies are building “AutoML” tools, or machine learning for machine learning, to make the underlying techniques more accessible. But we have to see that in doing so, we don’t make them even more opaque to people. Simplifying the use of systems should lead to more tinkering, more making and experimentation. Marvin [Minsky] used to say that we don’t really learn something until we’ve learned it in more than one way. I think we need to do much more both in making the technology easier to access, so that more people can use it and we demystify it, and in making the systems built with these technologies more articulate and more transparent.

Knowledge@Wharton: What do you believe are some of the biggest bottlenecks hampering the growth of AI, and in what fields do you expect there will be breakthroughs?

Sikka: As I mentioned earlier, research and the availability of talent are still quite lopsided. But there is another way in which the current state of AI is lopsided or bottlenecked. If you look at the way our brains are constructed, they are highly resilient. We are not only fraud-identification machines. We are not only obstacle-detection-and-avoidance machines. We are much broader machines. I can have this conversation with you while also driving a car and thinking about what I have to do next and whether I’m feeling thirsty or not, and so forth.

This requires certain fundamental breakthroughs that still have not happened. The state of AI today is such that there is a gold rush around a particular set of techniques. We need to develop some of the more broad-based, more general techniques as well, more ensemble techniques, which bring in reasoning, articulation, etc.

For example, if you go to Google or [Amazon’s virtual assistant] Alexa or any one of these services out there and ask, “How tall was the President of the United States when Barack Obama was born?” none of these services can answer it, even though they all know the answers to the three underlying questions. But a 5-year-old can. The basic ability to explicitly reason about things is an area where tremendous work has been done over many decades, but it seems largely lost on AI research today. There are some signs that this area is developing, but it is still very early. There is a lot more work that needs to be done. I, myself, am working on some of these fundamental problems.
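The point can be made concrete as three chained one-fact lookups, which is exactly what the assistants fail to compose. The miniature knowledge base below is a hand-built illustration (Obama was born in 1961, when John F. Kennedy, roughly 183 cm tall, was president), standing in for facts these services already know individually:

```python
# A toy knowledge base with the three underlying facts.
kb = {
    "born_in": {"Barack Obama": 1961},
    "president_during": {1961: "John F. Kennedy"},
    "height_cm": {"John F. Kennedy": 183},
}

def answer(person: str) -> str:
    year = kb["born_in"][person]              # sub-question 1: when was X born?
    president = kb["president_during"][year]  # sub-question 2: who was president then?
    height = kb["height_cm"][president]       # sub-question 3: how tall is that person?
    return f"{president}, president when {person} was born, was about {height} cm tall."

print(answer("Barack Obama"))
```

Each lookup on its own is a “one-second task”; the missing piece Sikka describes is the explicit chaining.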

Knowledge@Wharton: You talked about the disproportionate and lopsided nature of resource allocation. Which sectors of AI are getting the most investment today? How do you expect that to evolve over the next decade? What do traditional industries need to do to exploit these trends and adapt to transformation?

Sikka: There’s a lot of interest in autonomous driving. There is also a lot of interest in health care. Enterprise AI should start to pick up. So there are several areas of interest but they are quite lumpy and clustered in a few areas. It reminds me of the parable of the guy who lost his keys in the dark and looks for them underneath a lamp because that’s where the light was.

But I don’t want to make light of what is happening. There are a large number of very serious people also working in these areas, but generally it is quite lopsided. From an investment point of view, it is all around automating, simplifying and improving existing processes. There are a few developments around bringing AI to completely new things, or doing things in new, breakthrough ways, but there is a disproportionate use of AI for efficiency improvements and automation of existing businesses. We need to do more on the human-AI experience, on AI amplifying people’s work.

“There simply aren’t enough people who know these techniques.”

If you look at companies like Uber or Didi [China’s ride-sharing service] or Apple and Google, they are aware of what is going on with their consumers more or less in real time. For instance, Didi knows every meter of every car ride taken by every consumer, in real time. It’s the same with Uber. And in China, even in physical retail, Alibaba is showing that a real-time connection to customers and the integration of physical and digital experiences can be done very well.
But in the traditional world, in the consumer packaged goods (CPG) industry or in banking, telecom or retail, where customer contact is necessary, businesses are quite disconnected from what the true end-user is doing. It is not real time. It is not large-scale. Typically, CPG companies still analyze data that is several months old. Some CPG companies still get DVDs from behavioral aggregators three months later.

I think an awareness of that [lag] is building in businesses. Many of my friends who are CEOs of large companies in the CPG world, in banking, pharmaceuticals and telecom, are now trying to embrace new technology platforms that bring these next-generation technologies to life. But beyond embracing technology and deploying a few next-generation applications, my sense is that traditional companies really need to think of themselves as technology companies.

My wife Vandana started and built up the Infosys Foundation in the U.S., and her main passion is computer science education. [She left the foundation in 2017.] She found this amazing statistic that in the dark ages some 6% of the world’s population could read and write, but if you think about computing as the new literacy, today some half a percent of the world’s population can program a computer.

We are finally approaching 90% literacy in the world, and of course we are not all writers or poets or journalists, but we all know how to write and to read, and it has to be the same way with computing and digital technologies, and especially now with AI, which is as big a shift for us as computing itself.

So businesses need to reorient themselves from “I am an X company” to “I am a technology company that happens to be in X.” Because if we don’t, we may be vulnerable to a tech company that better sees, executes and scales on that X, as we have already seen in many industries. The iPhone isn’t so much a phone as it is a computer in the shape of a phone. The Apple Watch isn’t a watch but a computer, a smart computing service, in the shape of a watch. The Tesla is not so much an electric car as a computer, an intelligent, connected computing service, in the shape of a car. So simply making your car electric is not enough.

“The iPhone isn’t so much a phone as it is a computer in the shape of a phone.”

Too often companies don’t transform, and they become irrelevant. They may not die immediately. Indeed large, successful, complex structures often outlive us humans, and die long slow deaths, but they lose their relevance to the new very quickly. Transformations are difficult. One has to let go of the past, of what we have known, and embrace something completely new, alien to us. As my friend and teacher [renowned computer scientist] Alan Kay said, “We only make progress by going differently than we believe.” And of course we have to do this as individuals as well. We have to continually learn and renew our skills, our perspectives on the world.

Knowledge@Wharton: How should companies measure the return on investment (ROI) in AI? Should they think about these investments in the same way as other IT investments or is there a difference?

Sikka: First of all, it is good that we are applying AI to things where we already know the ROI. I was talking to a friend recently, and he said, “In this particular part of my business, I have 50,000 people. I could do this work with one-fourth the people, at even better efficiency.” In such a situation, the ROI is clear. In financial services, one area that has become exciting is active trading in asset management. People have started applying AI here. One hedge fund wrote about the remarkable results it got by applying AI.

A start-up in China does the entire management of investments through AI. There are no people involved and the company delivers breakthrough results.

So, that’s one way. Applying AI to areas where the ROI is clear, where we know how much better the process can become, how much cheaper, how much faster, how much better throughput, how much more accurate, and so on. But again this is all based on the known, the past. We have to think beyond that, more broadly than that. We have to think about AI as becoming an augmentation for every one of our decisions, every one of the questions that we ask, and have that fed by data and analyzed in real time. Instead of doing generalizations or approximations, we must insist on AI amplifying all of our decisions. We must bring AI to areas where we don’t yet have ROIs clearly identified or clearly understood. We must build ROIs on the fly.

Knowledge@Wharton: How does investment in AI in the U.S. compare with China and other parts of the world? What are the relative strengths and weaknesses of the U.S. and Chinese approaches to AI development?

Sikka: I’m very impressed by how China is approaching this. It is a national priority for the country. The government is very serious about broad-based AI development, skill development and building AI applications. They have defined clear goals in terms of the size of the economy, the number of people, and the leadership position. They actively recruit [AI experts]. The big Chinese technology companies are [attracting] U.S.-based, Chinese-origin scientists, researchers and experts who are moving back there.

In many ways, they are the leaders already in building applications of AI technology, and are doing leading work in technology as well. When you think about AI technology or research, the U.S. and many European universities and countries are still ahead. But in terms of large-scale applications of AI, I would argue that China is already ahead of everybody else in the world. The sophistication of their applications, the scale, the complex conditions in which they apply these, is simply extraordinary. Another dimension of that is the adoption. The adoption of AI technology and modern technology in China, especially in rural areas, is staggering.

Knowledge@Wharton: Could you give a couple of examples of what impressed you most?

Sikka: Look at the payments space — at Alipay, WeChat Pay or other forms of payments from companies like Ping An Insurance, as well as Alibaba and Tencent. It’s amazing. Shops in rural China don’t take cash. They don’t take credit cards. They only do payments on WeChat Pay or on Alipay or others like that. You don’t see this anywhere else in the world at nearly the same scale.
Bike rentals are another example. In the past year, there has been an extraordinary development in China around bicycles.

When you walk into a Chinese city, you see tens of thousands of bicycles across the landscape — yellow ones, orange ones, blue ones. When you look at these bicycles, you think, “This is a smart bicycle.” It is another example of an intelligent, connected computing service in the shape of a bicycle. You just have to wave your phone at it with your Baidu account or your Alibaba account or something like that and you can ride the bike. It has GPS. It is fully connected. It has all kinds of sensors inside it. When you get to your destination, you can leave the bike there and carry on with whatever you need to do. Already in the last nine months, this has had a huge impact on traffic.

“The adoption of AI technology and modern technology in China, especially in rural areas, is staggering.”

If you walk into any of Alibaba’s Hema supermarkets in Beijing and Shanghai (I think they have around 20 of these already, teeming with people), you will find they are far ahead of any retail experience we see today in the U.S., including at Whole Foods. The entire store is integrated into mobile experiences, so you can wave your phone at any product on the shelf and get a complete online experience. There is no checkout; the whole experience is mobile and automated, although there are lots of staff there to help customers. The store is also a warehouse: in fact, it serves some 70% of demand from local online customers and fulfills that demand in less than an hour.

My friend ordered a live fish from the store for dinner, and that particular fish, the one he had picked on his phone, was delivered 39 minutes later. Tencent has now invested in a supermarket company. And JD has its own stores. So this is rapidly evolving. It would be wonderful to see convenience like this in every supermarket around the world in the next few years.

A more recent example is battery chargers. All across China, there are little kiosks with chargers inside. You can open the kiosk by waving your phone at it, pick up a charger, charge your phone for a couple of hours, and then drop it off at another kiosk wherever you are. What I find impressive is not that somebody came up with the idea of sharing based on connected phone chargers, but how rapidly the idea has been adopted in the country and how quickly the landscape has adapted itself to assimilate this new idea. The rate at which the generation [of ideas] happens, gets diffused into the society, matures and becomes a part of the fabric is astounding. I don’t think people outside of China appreciate the magnitude of what is going on.

When you walk around Shenzhen, you can see the incredible advances in manufacturing, electronic device manufacturing, drones and things like that. I was there a few weeks ago. I saw a drone that is smaller than the tip of your finger. At the same time, I saw a demo of a swarm of a thousand or so drones which can carry massive loads collectively. So it is quite impressive how broadly the advance of AI is being embraced in China.

“The act of innovating is the act of seeing something that is not there.”

At the other end of the spectrum, I would say that in Europe, especially in Germany, the government is much more rigorous and thoughtful about the implications of these technologies. From a broader, regulatory and governmental perspective, they seem to be doing a wonderful job. Henning Kagermann, who was my boss at SAP for many years, recently shared with me a report from the ethics commission on automated and connected driving. The thoughtfulness and the rigor with which they are thinking about this is worth emulating. Many countries, especially the U.S., would be well served by embracing those ideas.

Knowledge@Wharton: How does the approach of companies like Apple, Facebook, Google, Microsoft and Amazon towards AI differ from that of Chinese companies like Alibaba, Baidu, or Tencent?

Sikka: I think there is a lot of similarity, and the similarities outweigh the differences. And of course, they’re all connected with each other. Tencent and Baidu both have advanced labs in Silicon Valley. And so does Alibaba. JD, which is a large e-commerce company in China, recently announced a partnership around AI with Stanford. There’s a lot of sharing and also competitive aspects within these companies.

There are some differences. The U.S. companies are interested in certain U.S.-specific or more international aspects of things. The Chinese companies focus a lot on the domestic market within China. In many ways, the Chinese market offers challenges and circumstances that are even more sophisticated than the ones in the U.S. But I wouldn’t say that there is anything particularly different between these companies.

If you look at Amazon and Microsoft and Google, their advances, when it comes to bringing their platforms to the enterprise, are further ahead than the Chinese companies. Alibaba and Tencent have both announced ambitions to bring their platform to the enterprise. I would say that in this regard, the U.S. companies are further ahead. But otherwise, they are all doing extraordinary work. The bigger issue in my mind is the gap between all of them and the rest of the companies.

Knowledge@Wharton: Where does India stand in all of this? India has quite a lot of strengths in the IT area, and because of demonetization there has been a strong push towards digitization. Do you see India playing any significant role here?

Sikka: India is at a critical juncture, a unique juncture. If you look at it from the perspective of the big U.S. companies or the big Chinese companies, India is by far their largest market. We have a massive population and a relatively large amount of wealth. So there is a lot of interest from all these companies, and consequently their countries, in India and in developing the market there. If that happens, then of course the companies will benefit. But it is also a lost opportunity for India to do its own development by educating its workforce in these areas.

One of the largest populations that could be affected by the impact of AI in the near-term is going to be in India. The impact of automation in the IT services world, or broadly in the services world, will be huge from an employment perspective. If you look at the growth that is happening everywhere, especially in India, some people call it “jobless growth.” It’s not jobless. It’s that companies grow their revenues disproportionately compared to the growth in the number of employees.

“Finding the problem, identifying the innovation — that will be the human frontier.”

There is a gap that is emerging in the employment world. Unless we fix the education problem it’s going to have a huge impact on the workforce. Some of this is already happening. One of the things I used to find astounding in Bangalore was that a lot of people with engineering degrees do freelance jobs like driving Uber and Ola cabs. And yet we have tremendous potential.

The value of education is central to us in India, and we have a large, young generation of highly inspired youngsters ready to embrace and shape the future, who are increasingly entrepreneurial in their outlook. So we have to build on foundations like the “India stack,” and we have to build our own technological strengths, from research and core technology to applications and services. And a redoubling of the focus on education, on training massive numbers of people in the technologies of the future, is absolutely critical.

So, in India, we are at this critical juncture, where on one hand there is a massive opportunity to show a great way forward, and help AI be a great amplifier for our creativity, imagination, productivity, indeed for our humanity. On the other hand, if we don’t do these things, we could be victims of these disruptions.

Knowledge@Wharton: How should countries reform their education programs to prepare young people for a future shaped by AI?

Sikka: India’s Prime Minister Narendra Modi has talked about this a lot. He is passionate about this idea of job creators, not just job seekers, and about a broad culture of entrepreneurship.

I’m an optimist. I’m an entrepreneur. I like to see the opportunity in what we have, even though there are some serious issues when it comes to the future of the workforce. My own sense is that in the time of AI, the right way forward for us is to become more evolved, more enlightened, more aware, more educated, and to unleash our imagination, to unleash our creativity.

John McCarthy was a great teacher in my life. He used to say that articulating a problem is half its solution. I believe that in our lifetime, certainly in our children’s lifetime, we will see AI technology advance to the point where any task, any activity, any job, any work that can be precisely formulated and precisely articulated, will be done automatically, far better than we can do with our senses and our muscles. However, articulating the problem, finding the problem, identifying the innovation — that will be the human frontier. It is the act of seeing something that is not there. The act of exercising our creativity. And then, using AI to become a great amplifier, to help us achieve our imagination, our vision. I think that is the great calling of our time. That is my great calling.

Five or six hundred million years ago, there was this unusual event that happened geologically. It was called the Cambrian explosion. It was the greatest creation of life in the history of our planet. Before that, the Earth was basically covered by water. Land had started to emerge, and oxygen had started to emerge. Life, as it existed at that point, was very primitive. People wondered, “How did the Cambrian explosion happen? How did all these different life forms show up in a relatively small period of time?”

What happened was that the availability of oxygen, the availability of land, and the availability of light as a provider of life, as a provider of living, created a situation which formed all these species that had the ability to see. They all came out of the dark, out of the water, onto the land, into the air, where opportunities were much more plentiful, where they could all grow, they could all thrive. People wonder, “What were they looking for?” It turns out they were looking for light. The Cambrian explosion was about all these species looking for light.

When I think about the future, about the time in front of us, I see another Cambrian explosion. The act of innovating is the act of seeing something that is not there. Our eyes are programmed by nature to see what is there. We are not programmed to see what is not there. But when you think about innovation, when you think about making something new, everything that has ever been innovated was somebody seeing something that was not there.

I think the act of seeing something that is not there is in all of us. We can all be trained to see what is not there. It is not only a Steve Jobs or a Mark Zuckerberg or a Thomas Edison or an Albert Einstein who can see something that is not there. I think we can all see some things that are not there. To Vandana’s statistic, we should strive to see a billion entrepreneurs out there. A billion-plus computer literate people who can work with, even build, systems that use AI techniques, and who can switch their perspective from making a living to making a life.

When I was at Infosys, we trained 150,000 people on design thinking for this reason: To get people to become innovators. In our lifetime, all the mechanical, mechanizable, repeatable things are going to be done way better by machines. Therefore, the great frontier for us will be to innovate, to find things that are not there. I think that will be a new kind of Cambrian explosion. If we don’t do that, humanity will probably end.

Paul MacCready, one of my heroes and a pioneer in aerospace engineering, once said that if we don’t become creative, a silicon life form will likely succeed us. I believe that it is in us to refer back to our spirituality, to refer back to our creativity, our imagination, and to have AI amplify that. I think this is what Marvin [Minsky] and John [McCarthy] were after and it behooves us to transcend the technology. And we can do that. It is going to be tough. It is going to require a lot of work. But it can be done. As I look at the future, I am personally extremely excited about doing something in that area, something that fundamentally improves the world.

View at the original source

Thursday, February 15, 2018

Developing Novel Drugs 2 02-16

To isolate the causal impact of cash flows on development decisions, we exploit a second source of variation: remaining drug exclusivity (patent life plus additional exclusivity granted by the FDA). Even among firms with the same focus on the elderly, those with more time to enjoy monopoly rights on their products are likely to generate greater profits.

With these two dimensions of variation—elderly share and remaining exclusivity—we can better control for confounders arising from both individual dimensions. For example, firms with more existing drugs for the elderly may differentially see a greater increase in investment opportunities as a result of Part D, even absent any changes to cash flow.

Meanwhile, firms with longer remaining exclusivity periods on their products may have different development strategies than firms whose drugs face imminent competition, again, even absent changes to cash flows. Our strategy thus compares firms with the same share of drugs sold to the elderly and the same remaining exclusivity periods across their overall drug portfolio, but that differ in how their remaining patent exclusivity is distributed across drugs of varying elder shares. This strategy allows us to identify differences in expected cash flow among firms with similar investment opportunities, and at similar points in their overall product lifecycle.
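One way to picture this identification strategy is as an exposure measure that interacts each drug's elderly share with its remaining exclusivity and aggregates across the firm's portfolio. The sketch below is an illustrative functional form with made-up variable names and numbers, not the paper's exact specification:

```python
def part_d_exposure(drugs):
    """Firm-level exposure to Part D: exclusivity-weighted average elderly
    share across the firm's drug portfolio (illustrative functional form)."""
    total_excl = sum(d["exclusivity_years"] for d in drugs)
    if total_excl == 0:
        return 0.0
    return sum(d["elderly_share"] * d["exclusivity_years"] for d in drugs) / total_excl

# Two hypothetical firms with the same average elderly share (0.5) and the
# same total remaining exclusivity (10 years), but with exclusivity
# distributed differently across elderly- and non-elderly-focused drugs:
firm_a = [{"elderly_share": 0.9, "exclusivity_years": 8},
          {"elderly_share": 0.1, "exclusivity_years": 2}]
firm_b = [{"elderly_share": 0.9, "exclusivity_years": 2},
          {"elderly_share": 0.1, "exclusivity_years": 8}]

print(part_d_exposure(firm_a))  # ~0.74: long exclusivity on elderly-focused drugs
print(part_d_exposure(firm_b))  # ~0.26: long exclusivity on non-elderly drugs
```

Firm A, whose monopoly rights are concentrated in elderly-focused drugs, expects a larger cash flow boost from Part D than firm B, even though the two portfolios look identical on each dimension separately.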

We find that treated firms develop more new drug candidates. Importantly, this effect is driven by an increase in the number of chemically novel candidates, as opposed to “me-too” candidates. Further, these new candidates are aimed at a variety of conditions, not simply ones with a high share of elderly patients, implying that our identification strategy is at least partially successful in isolating a shock to cash flows, and not simply picking up an increase in investment opportunities for high elderly share drugs.

In addition, we find some evidence that firm managers have a preference for diversification. The marginal drug candidates that treated firms pursue often include drugs that focus on different diseases, or operate using a different mechanism (target), relative to the drugs that the firm has previously developed. These findings suggest that firms use marginal increases in cash to diversify their portfolios and undertake more exploratory development strategies, a fact consistent with models of investment with financial frictions (Froot et al., 1993), or poorly diversified managers (Smith and Stulz, 1985).

Finally, our point estimates imply sensible returns to R&D. A one standard deviation increase in Part D exposure leads to an 11 percent increase in subsequent drug development, relative to less exposed firms. For the subset of firms for which we are able to identify cash flow, this translates into an elasticity of the number of drug candidates with respect to R&D expenditure of about 0.75.

We obtain a higher elasticity for the most novel drugs (1.01 to 1.59) and a lower elasticity for the most similar drugs (0.02 to 0.31). For comparison, estimates of the elasticity of output with respect to demand (or cash flow) shocks in the innovation literature range from 0.3 to 4 (Henderson and Cockburn, 1996; Acemoglu and Linn, 2004; Azoulay, Graff-Zivin, Li, and Sampat, 2016; Blume-Kohout and Sood, 2013; Dranove, Garthwaite, and Hermosilla, 2014).
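These elasticities are simply ratios of percentage changes. As a rough illustration (the R&D change below is a hypothetical number chosen so the ratio matches the reported 0.75; it is not a figure from the paper):

```python
def elasticity(pct_change_output, pct_change_input):
    """Elasticity: percentage change in output per percentage
    change in input."""
    return pct_change_output / pct_change_input

# An 11% rise in drug candidates alongside a hypothetical ~14.7% rise in
# R&D expenditure implies an elasticity of roughly 0.75.
print(round(elasticity(11.0, 14.7), 2))  # 0.75
```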

Our results suggest that financial frictions likely play a role in limiting the development of novel drug candidates. The ability to observe the returns associated with individual projects is an important advantage of our setting that allows us to make a distinct contribution to the literature studying the impact of financial frictions on firm investment decisions. Existing studies typically observe the response of investment (or hiring) aggregated at the level of individual firms or geographic locations.

By contrast, our setting allows us to observe the risk and return of the marginal project being undertaken as a result of relaxing financial constraints, and hence allows us to infer the type of investments that may be more susceptible to financing frictions. We find that relaxing financing constraints leads to more innovation, both at the extensive margin (i.e., more drug candidates) and at the intensive margin (i.e., more novel drugs). Given that novel drugs are less likely to be approved by the FDA, the findings in our paper echo those in Metrick and Nicholson (2009), who document that firms that score higher on a Kaplan-Zingales index of financial constraints are more likely to develop drugs that pass FDA approval.

By providing a new measure of novelty, our work contributes to the literature focusing on the measurement and determinants of innovation. Our novelty measure is based on the notion of chemical similarity (Johnson and Maggiora, 1990), which is widely used in the process of pharmaceutical discovery.

Chemists use molecular similarity calculations to help them search chemical space, build libraries for drug screening (Wawer, Li, Gustafsdottir, Ljosa, Bodycombe, Marton, Sokolnicki, Bray, Kemp, Winchester, Taylor, Grant, Hon, Duvall, Wilson, Bittker, Dančík, Narayan, Subramanian, Winckler, Golub, Carpenter, Shamji, Schreiber, and Clemons, 2014), quantify the “drug-like” properties of a compound (Bickerton, Paolini, Besnard, Muresan, and Hopkins, 2012), and expand medicinal chemistry techniques (Maggiora, Vogt, Stumpfe, and Bajorath, 2014). In parallel work, Pye, Bertin, Lokey, Gerwick, and Linington (2017) use chemical similarity measures to measure novelty and productivity in the discovery of natural products.

Our measure of innovation is based on ex-ante information—the similarity of a drug’s molecular structure to prior drugs—and therefore avoids some of the truncation issues associated with patent citations (Hall et al., 2005). Further, since our measure is based only on ex-ante data, it does not conflate the ex-ante novelty of an idea with measures of ex-post success or of market size. By contrast, existing work typically measures “major” innovations using metrics based on ex-post successful outcomes, which may also be related to market size.

Examples include whether a drug candidate gets FDA Priority Review status (Dranove et al., 2014), or whether a drug has highly-cited patents (Henderson and Cockburn, 1996). A potential concern with these types of measures is that a firm will be credited with pursuing novel drug candidates only if these candidates succeed and not when—as is true in the vast majority of cases—they fail. Similarly, outcomes such as whether a drug is first in class or is an FDA orphan drug (Dranove et al., 2014; DiMasi and Faden, 2011; Lanthier, Miller, Nardinelli, and Woodcock, 2013; DiMasi and Paquette, 2004) may conflate market size with novelty and may fail to measure novelty of candidates within a particular class.

For example, it is easier to be the first candidate to treat a rare condition than a common condition because fewer firms have incentives to develop treatments for the former. Further, measuring novelty as first in class will label all subsequent treatments in an area as incremental, even if they are indeed novel.

Our paper also relates to work that examines how regulatory policies and market conditions distort the direction of drug development efforts (Budish, Roin, and Williams, 2015); and how changes in market demand affect innovation in the pharmaceutical sector (Acemoglu and Linn, 2004; Blume-Kohout and Sood, 2013; Dranove et al., 2014). Similar to us, Blume-Kohout and Sood (2013) and Dranove et al. (2014) exploit the passage of Medicare Part D, and find more innovation in markets that receive a greater demand shock (drugs targeted to the elderly).

Even though we use the same policy shock, our work additionally exploits differences in drug exclusivity for specific drugs to identify the effect of cash flow shocks separately from changes in product demand that may increase firm investment opportunities. Indeed, we find that treated firms invest in new drugs across different categories—as opposed to those that only target the elderly—strongly suggesting that our identification strategy effectively isolates cash flow shocks from improvements in investment opportunities.

Last, our measure of novelty can help shed light on several debates in the innovation literature. For instance, Jones (2010) and Bloom, Jones, Van Reenen, and Webb (2017) argue for the presence of decreasing returns to innovation. Consistent with this view, we find that drug novelty has decreased over time. An important caveat is that our novelty measure cannot be computed for biologics, which represent a vibrant research area.

Back to page 1... 

View the complete research paper at the original source

Developing Novel Drugs 02-16

We analyze firms’ decisions to invest in incremental and radical innovation, focusing specifically on pharmaceutical research. We develop a new measure of drug novelty that is based on the chemical similarity between new drug candidates and existing drugs. We show that drug candidates that we identify as ex-ante novel are riskier investments, in the sense that they are subsequently less likely to be approved by the FDA.

However, conditional on approval, novel candidates are, on average, more valuable—they are more clinically effective, have higher patent citations, and lead to more revenue and higher stock market value. Using variation in the expansion of Medicare prescription drug coverage, we show that firms respond to a plausibly exogenous cash flow shock by developing more molecularly novel drug compounds, as opposed to more so-called “me-too” drugs. This pattern suggests that, on the margin, firms perceive novel drugs to be more valuable ex-ante investments, but that financial frictions may hinder their willingness to invest in these riskier candidates.

Over the past 40 years, the greatest gains in life expectancy in developed countries have come from the development of new therapies to treat conditions such as heart disease, cancer, and vascular disease.

At the same time, the development of new–and often incremental–drug therapies has played a large role in driving up health care costs, with critics frequently questioning the true innovativeness of expensive new treatments (Naci, Carter, and Mossialos, 2015). This paper contributes to our understanding of drug investment decisions by developing a measure of drug novelty and subsequently exploring the economic tradeoffs involved in the decision to develop novel drugs.

Measuring the amount of innovation in the pharmaceutical industry is challenging. Indeed, critics argue that “pharmaceutical research and development turns out mostly minor variations on existing drugs, and most new drugs are not superior on clinical measures,” making it difficult to use simple drug counts as a measure of innovation (Light and Lexchin, 2012). To overcome this challenge, we construct a new measure of drug novelty for small molecule drugs, which is based on the molecular similarity of the drug with prior drug candidates. Thus, our first contribution is to develop a new measure of pharmaceutical innovation.

We define a novel drug candidate as one that is molecularly distinct from previously tested candidates. Specifically, we build upon research in modern pharmaceutical chemistry to compute a pairwise chemical distance (similarity) between a given drug candidate and any prior candidates in our data. This similarity metric is known as a “Tanimoto score” or “Jaccard coefficient,” and captures the extent to which two molecules share common chemical substructures. We aggregate these pairwise distance scores to identify the maximum similarity of a new drug candidate to all prior candidates. Drugs that are sufficiently different from their closest counterparts are novel according to our measure. Since our metric is based on molecular properties observed at the time of a drug candidate’s initial development, it improves upon existing novelty measures by not conflating ex-ante measures of novelty with ex-post measures of success such as receiving priority FDA review.
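The Tanimoto score described above can be sketched concretely. In the minimal version below, each molecule's fingerprint is represented as a set of substructure keys; this representation and the function names are illustrative assumptions, not the authors' implementation (production pipelines typically use binary fingerprints from a cheminformatics library such as RDKit, but the arithmetic is the same):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto score (Jaccard coefficient): the fraction of chemical
    substructures that two molecules share."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def max_similarity(candidate, prior_fingerprints):
    """Maximum similarity of a new candidate to all prior candidates;
    a candidate is novel when this value is low."""
    return max((tanimoto(candidate, fp) for fp in prior_fingerprints), default=0.0)

# Toy fingerprints: integers stand in for substructure keys.
new_drug = {1, 2, 3, 4, 5}
prior = [{1, 2, 3, 4}, {7, 8, 9}]
print(max_similarity(new_drug, prior))  # 0.8: shares 4 of 5 distinct substructures
```

A score near 1 marks a "me-too" candidate; a low maximum similarity marks a novel one.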

In the United States, the sharpest decline in death rates over the period 1981 to 2001 came from the reduction in the incidence of heart disease. See Life Tables for the United States Social Security Area 1900-2100. See also Lichtenberg (2013), which estimates explicit mortality improvements associated with pharmaceuticals. One of the more vocal critics is Marcia Angell, a former editor of the New England Journal of Medicine. She argues that pharmaceutical firms increasingly concentrate their research on variations of top-selling drugs already on the market, sometimes called “me-too” drugs.

She concludes: “There is very little innovative research in the modern pharmaceutical industry, despite its claims to the contrary.” http://bostonreview.net/angell-big-pharma-bad-medicine. Indeed, empirical evidence appears to be consistent with this view; Naci et al. (2015) survey a variety of studies that show a declining clinical benefit of new drugs. Small molecule drugs, synthesized using chemical methods, constitute over 80% of modern drug candidates (Otto, Santagostino, and Schrader, 2014). We will discuss larger drugs based on biological products in Section 3.6.

Our novelty measure based on molecular similarity has sensible properties. Pairs of drug candidates classified as more similar are more likely to perform the same function—that is, they share the same indication (disease) or target-action (mechanism). Further, drugs we classify as more novel are more likely to be the first therapies of their kind. In terms of secular trends, our novelty measure indicates a decline in the innovativeness of small molecule drugs: both the number and the proportion of novel drug candidates have declined over the 1999 to 2014 period. Across our sample of drug candidates, over 15% of newly developed candidates have a similarity score above 0.8, meaning that they share more than 80% of their chemical substructures with a previously developed drug.

We next examine the economic characteristics of novel drugs, in order to better understand the tradeoffs that firms face when deciding how to allocate their R&D resources. We begin by exploring how the novelty of a drug candidate relates to its (private and social) return from an investment standpoint. Since measuring a drug’s value is challenging, we rely on several metrics. First, we examine drug effectiveness as measured by the French healthcare system’s assessments of clinical value-added, following Kyle and Williams (2017).

Since this measure is only available for a subset of approved drugs, we also examine the relationship between molecular novelty and the number of citations to a drug’s underlying patents, which the innovation literature has long argued is related to estimates of economic and scientific value (see, e.g. Hall, Jaffe, and Trajtenberg, 2005). We also use drug revenues as a more direct proxy for economic value. However, since mark-ups may vary systematically between novel and “me-too” drugs—that is, drugs that are extremely similar to existing drugs—we also rely on estimates of their contribution to firm stock market values. Specifically, we follow Kogan, Papanikolaou, Seru, and Stoffman (2017) and examine the relationship between a drug’s molecular novelty and the change in its firm’s market valuation following either FDA approval or the granting of its key underlying patents.

Conditional on being approved by the FDA, novel drugs are on average more valuable. Specifically, relative to drugs entering development in the same quarter that treat the same disease (indication), a one-standard deviation increase in our measure of novelty is associated with a 33 percent increase in the likelihood that a drug is classified as “highly important” by the French healthcare system; a 10 to 33 percent increase in the number of citations for associated patents; a 15 to 35 percent increase in drug revenues; and a 2 to 8 percent increase in firm valuations. To benchmark what this means, we note that the chemical structures for Mevacor and Zocor, depicted in Figure 1, share an 82% overlap.

However, novel drugs are also riskier investments, in that they are less likely to receive regulatory approval. Relative to comparable drugs, a one-standard deviation increase in novelty is associated with a 29 percent decrease in the likelihood that it is approved by the FDA. Thus, novel drugs are less likely to be approved by the FDA, but conditional on approval, they are on average more valuable.

To assess how firms view this tradeoff between risk and reward at the margin, we next examine how they respond to a positive shock to their (current or expected future) cashflows. Specifically, if firms that experience a cashflow shock develop more novel—rather than molecularly derivative—drugs, then this pattern would suggest that firms value novelty more on the margin.

Here, we note that we are implicitly assuming that treated firms have a similar set of drug development opportunities as control firms, and, moreover, that financial frictions limit firms’ ability to develop new drug candidates. Indeed, if firms face no financing frictions, then, holding investment opportunities constant, cashflow shocks should not impact their development decisions. However, both theory and existing empirical evidence suggest that a firm’s cost of internal capital can be lower than its cost of external funds. In this case, an increase in cashflows may lead firms to develop more or different drugs by increasing the amount of internal funds that can be used towards drug development decisions. Even if this increase in cashflows occurs with some delay, firms might choose to respond today, either because it increases the firm’s net worth, and hence lowers its effective risk aversion (see, e.g. Froot, Scharfstein, and Stein, 1993), or because this anticipated increase in profitability relaxes constraints today.

We construct shocks to expected firm cashflows using the introduction of Medicare Part D, which expanded US prescription drug coverage for the elderly. This policy change differentially increased profits for firms with more drugs that target conditions common among the elderly (Friedman, 2009). However, variation in the share of elderly customers alone does not necessarily enable us to identify the impact of increased cashflows, because the expansion of Medicare impacts not only the profitability of the firm’s existing assets but also the value of its new investment opportunities.

For a theoretical argument, see Myers and Majluf (1984). Consistent with theory, several studies have documented that financing frictions play a role in firm investment and hiring decisions. Recent work on this topic examines the response of physical investment (for instance, Lin and Paravisini, 2013; Almeida, Campello, Laranjeira, and Weisbenner, 2011; Frydman, Hilt, and Zhou, 2015); employment decisions (Benmelech, Bergman, and Seru, 2011; Chodorow-Reich, 2014; Duygan-Bump, Levkov, and Montoriol-Garriga, 2015; Benmelech, Frydman, and Papanikolaou, 2017); and investments in R&D (see e.g. Bond, Harhoff, and van Reenen, 2005; Brown, Fazzari, and Petersen, 2009; Hall and Lerner, 2010; Nanda and Nicholas, 2014; Kerr and Nanda, 2015). These frictions may be particularly severe in the case of R&D: Howell (2017) shows that even relatively modest subsidies to R&D can have a dramatic impact on ex-post outcomes.

Contd on page 2....

How to Get People Addicted to a Good Habit 02-16

Reshmaan Hussam and colleagues used experimental interventions to determine if people could be persuaded to develop a healthy habit. Potentially at stake: the lives of more than a million children.

A few years ago, Reshmaan Hussam and colleagues decided to find out why many people in the developing world fail to wash their hands with soap, despite lifesaving benefits.

Every year more than a million children under the age of five die from diarrheal diseases and pneumonia. Washing hands with soap before meals can dramatically reduce rates of both diarrhea and acute respiratory infections.

To that end, major health organizations have poured a lot of money into handwashing education campaigns in the developing world, but to little avail. Even when made aware of the importance of a simple activity, and even when provided with free supplies, people continue to wash their hands without soap—if they wash their hands at all.

“If you look at these public health initiatives, you see that they are often a complicated combination of interventions: songs and dances and plays and free soap and water dispensers,” says Hussam, an assistant professor at Harvard Business School whose research lies at the intersection of development, behavioral, and health economics. “Which means that when these initiatives don’t work, nobody can say why.”

When Hussam and her fellow researchers conducted their initial survey of several thousand rural households in West Bengal, India, they discovered that people don’t wash their hands with soap for the same reason most of us don’t run three miles every morning or drink eight glasses of water every day, despite our doctors lecturing us on the benefits of cardiovascular exercise and hydration. It’s not that we are uninformed, unable, or lazy. It’s that we’re just not in the habit.

“The idea is that habits are equivalent to addictions”

With that in mind, the researchers designed a field study to understand whether handwashing with soap was indeed a habit-forming behavior, whether people recognized it as such, whether it was possible to induce the habit with experimental interventions, and whether the habit would continue after the interventions ceased.

The field experiment was based on the theory of “rational addiction.” Developed by economists Gary Becker and Kevin Murphy, the theory posits that addictions are not necessarily irrational. Rather, people often willingly engage in a particular behavior, despite knowing that it will increase their desire to engage in that behavior in the future (i.e. become “addicted”). As “rational addicts,” people can weigh the costs and benefits of their current behavior taking into consideration its implications for the future, and still choose to engage.

One way to test whether people are in fact “rational” about their addictions, Hussam says, is by looking at how changes in the future cost of the behavior affect them today. For example, if a rational addict learns that taxes on cigarettes are going to double in six months, she may be less likely to take up smoking today.

Hussam remains agnostic on whether the behavior of addicts (to cigarettes, drugs, or alcohol, for example) can be fully understood by the theory of rational addiction—“a theory that fails to explain why addicts often regret their behavior or regard it as a mistake,” she says. But she found the framework, which has historically been applied only to harmful behaviors, was useful to shift into the language of positive habits.

“Habits, after all, are like a lesser form of addiction: The more you engage in the past, the more likely you are to engage today,” she says. “And if that’s the case, do people recognize—are they ‘rational’ about—the habitual nature of good behaviors? If they aren’t, it could explain the underinvestment in behaviors like handwashing with soap that we see. If they are rational, it can affect the design of interventions and incentives that policymakers can offer to encourage positive habit formation.”

The team’s experiment and findings are detailed in the paper Habit Formation and Rational Addiction: A Field Experiment in Handwashing (pdf), authored by Hussam; Atonu Rabbani, an associate professor at the University of Dhaka; Giovanni Reggiani, then a doctoral student at MIT and now a consultant at The Boston Consulting Group; and Natalia Rigol, a postdoctoral fellow at Harvard’s T.H. Chan School of Public Health.

The hand washing experiment

In partnership with engineers at the MIT Media Lab, the researchers designed a simple wall-mounted soap dispenser with a time-stamped sensor hidden inside. The sensor allowed the team to determine not only how often people were washing their hands, but also whether they were doing so before dinnertime, critical to an effective intervention. (The idea for the hidden sensors came from a scene in Jurassic World in which one of the characters smuggles dinosaur embryos in a jury-rigged can of Barbasol shaving cream.) The data gave the researchers the ability to tease apart behavioral mechanisms in a way that earlier work (which often used self-reports or surveyor observations of hand hygiene) could not do.

The researchers were also mindful about which type of soap to use in the dispensers. Through pilot tests, they found that people preferred foam, for example. “They didn’t feel as clean when the soap wasn’t foamy,” Hussam says.

And because all people in the experiment ate meals with their hands, they were turned off by heavily perfumed soap, which interfered with the taste of their food. So the experiment avoided strongly scented soap. That said, “we preserved some scent, as the olfactory system is a powerful sensory source of both memory and pleasure and thus easily embedded into the habit loop,” the researchers explain in the paper.

The experiment included 3,763 young children and their parents in 2,943 households across 105 villages in the Birbhum District of West Bengal, where women traditionally manage both cooking and childcare. A survey showed that 79 percent of mothers in the sample could articulate, without being prompted, that the purpose of soap is to kill germs.

But while more than 96 percent reported rinsing their hands with water before cooking and eating, only 8 percent said they used soap before cooking and only 14 percent before eating. (Hussam contends that these low numbers are almost certainly overestimates, as they were self-reported.) Some 57 percent of the respondents reported that they didn’t wash their hands with soap simply because “Obhyash nai,” which means “I do not have the habit,” Hussam says.

Monitoring vs. offering incentives

The researchers randomly divided the villages into “monitoring” and “incentive” villages, taking two approaches to inducing the hand washing habit. In each experiment, there was a randomly selected control group of households that did not receive a soap dispenser; altogether, 1,400 of the 2,943 households received dispensers.

“The monitoring experiment tried to understand the beginnings of social norm formation: whether third-party observation through active tracking by surveyors of hand washing behavior could increase hand washing rates, and whether the behavior could become a habit even after the monitoring stopped,” Hussam explains.

Among the 1,400 households that received a soap dispenser, one group was told their hand washing would be tracked from the get-go, and that they would receive feedback reports on their soap usage patterns. Another group was told their behavior would be tracked in a few months, enabling a precise test of rational habit formation—whether people would start washing their hands now if they knew that the “value” of hand washing would increase in the future. And another group was not told that soap use would be tracked.

The incentive experiment “tried to price a household’s value of hand washing and forward-looking behavior,” Hussam says—in other words, whether financial incentives could increase hand washing rates, and whether those households would keep using soap even after the incentives stopped. In one incentive group, people learned that they would receive one ticket for each day they washed their hands; the tickets could be accumulated and cashed in for various goods and gifts in a prize catalog.

In another group, people learned that they initially would receive one ticket each day for washing their hands with soap, but that in two months they would begin receiving triple the number of tickets for every day they used the dispenser. The final group received the same incentive boost two months into the experiment, but it was a happy surprise: The group had no prior knowledge of the triple-ticket future.

“The difference … is a measure of rational habit formation,” Hussam explains. “While one household is anticipating a change in future value of the behavior, the other household is not; if the first household behaves differently than their counterpart in the present, they must recognize that handwashing today increases their own likelihood of handwashing in the future.”
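As a toy illustration of this identification strategy (the numbers below are hypothetical, not the study's data), the comparison reduces to a simple difference in present-day handwashing rates between the group that anticipates the future incentive increase and the group that does not:

```python
# Toy sketch of the rational-habit-formation comparison described above.
# The rates are hypothetical placeholders, not results from the study.

# Daily handwashing rates BEFORE the ticket value triples:
anticipating_rate = 0.62  # group told tickets will triple in two months
surprise_rate = 0.55      # group with identical present incentives, no advance notice

# Both groups face the same incentive today; the only difference is knowledge
# of the future. If anticipation raises behavior now, households must believe
# that washing today makes washing tomorrow more likely -- the signature of
# "rational" habit formation.
rational_habit_effect = anticipating_rate - surprise_rate
print(f"Rational habit formation effect: {rational_habit_effect:.2f}")
```

A positive difference here is what the rational addiction model predicts; a zero difference would suggest households do not account for the habit-forming value of today's behavior.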

A clean victory

The results showed that both monitoring and monetary incentives led to substantial increases in hand washing with soap.

Households were 23 percent more likely to use soap if they knew they were being monitored. And some 70 percent of ticket-receiving households used their soap dispensers regularly throughout the experiment, compared with 30 percent of households that received the dispensers without incentives.

Importantly, the effects continued even after the households stopped receiving tickets and monitoring reports, suggesting that handwashing with soap was indeed a developable habit.

More importantly, the experiment resulted in healthier children in households that received a soap dispenser, with a 20 percent decrease in acute respiratory infections and a 30 to 40 percent decrease in loose stools on any given day, compared with children whose households did not have soap dispensers. Moreover, the children with soap dispensers ended up weighing more and even growing taller. “For an intervention of only eight months, that really surprised us,” Hussam says.

But while it appeared that handwashing was indeed a habitual behavior, were people “rational” about it? Indeed they were, based on the results of the monitoring experiment.

“Our results are consistent with the key predictions of the rational addiction model, expanding its relevance to settings beyond what are usually considered ‘addictive’ behaviors,” the researchers write.

In the incentives group, the promise of triple tickets didn’t affect behavior much, but, as Hussam notes, that may have been because the single tickets were already enough to get the children their most coveted prize: a school backpack. “Basically, we found that getting one ticket versus getting no tickets had huge effects, while going from one to three did little,” she says.

“Wherever we go, habits define much of what we do”

But in the monitoring group, handwashing rates increased significantly and immediately, not only for those who were monitored but also for those who were simply told to anticipate that their behavior would be tracked at a later date. “Simply knowing that handwashing will be more valuable in the future (because your behavior will be tracked so there’s a higher cost to shirking) makes people wash more today,” Hussam says.

This, Hussam hopes, is the primary takeaway of the study. While the experiment focused on a specific behavior in a specific area of India, the findings may prove valuable to anyone who is trying to develop a healthy addiction—whether it be an addiction to treating contaminated drinking water and using mosquito nets in the developing world, or an addiction to exercising every day and flossing every night in the developed world.

“Wherever we go, habits define much of what we do,” Hussam says. “This work can help us understand how to design interventions that help us cultivate the good ones.”