Accenture’s Tech Vision 2018 Trend: Citizen AI

By Amy Fuller, Accenture

As artificial intelligence grows in capabilities—and impact on people’s lives—businesses must work to “raise” their AIs to act as responsible, productive members of society.

Imagine artificial intelligence (AI) systems that help make medical diagnoses, support insurance payout decisions or use data-driven insights to enhance software development. All these things and more are happening now.

As covered in Accenture Technology Vision 2018, our annual tech trend report, AI systems, and the AI-based decisions they make, are increasingly affecting people's lives. Our chapter on Citizen AI explains the expanding impact of artificial intelligence as it evolves from a technological tool into a partner that coordinates and collaborates with humans in the workforce and in society.

Case in point: Researchers at New York's Icahn School of Medicine at Mount Sinai are using an in-house AI system called Deep Patient. Armed with an analysis of electronic health records from 700,000 patients, Deep Patient taught itself to predict risk factors for 78 different diseases—and doctors now turn to the system to aid in diagnoses.

According to our Tech Vision 2018 survey, 81 percent of executives believe that within the next two years, AI will work alongside humans in their organizations as a co-worker, collaborator and trusted advisor.

Teaching, not programming

AI is already the public face of many businesses, handling everything from initial customer interactions via chat, voice and email to vital customer service roles. With increasing autonomy and sophisticated machine learning capabilities, AI systems will often have as much influence as the people putting them to use. For businesses, this means changing the way they view AI: from systems that are programmed to systems that learn.

Companies need to teach their AI systems to "act" responsibly, explain their decisions and work well with others. This will require applying some of the same principles used in human education: teaching machines how to learn, how to explain their thoughts and actions, and how to accept responsibility for their decisions.

To take these steps effectively, companies will need scads of data to create taxonomies from which AI systems can learn. The companies with the best data available to train an AI how to do its job will create the most capable AI systems.

Just think how quickly natural language processing, an AI technology, has progressed in understanding how a wide variety of people speak. This isn't by accident. Take Google, for example. To create a data set that would adequately prepare an AI to understand just 30 words in a single language, Google recorded 65,000 clips of those words being spoken by thousands of different people. This is the scale of training data that has enabled Google's voice recognition to reach 95 percent accuracy.

However, successfully training AI is not just about accessing a variety and depth of data sources; it’s also about actively minimizing bias in the data. This includes tracking data provenance to make sure the original data sources are trustworthy and accurate. (For more information, see our related trend on Data Veracity.)

Guiding membership in society

In addition, companies need to raise their AI systems to reflect business and societal norms of responsibility, fairness and transparency. This starts with tossing out "black box" decision-making. Given that an AI system is fundamentally designed to collaborate with people, companies must build and train their AIs to provide clear explanations for the actions they take, in a format that people understand.

Regardless of the exact role an AI ends up playing in society, it represents a company in every action that it takes. What happens if an AI-powered mortgage lender denies a loan to a qualified prospective homebuyer, or if an AI-guided shelf-stocking robot runs into a worker in a warehouse? The companies using the technology must think carefully about apportioning responsibility and liability for an AI system’s actions.

Leaders will take on the challenge of raising AI in a way that acknowledges its new roles and impact in society. In doing so, they’ll set the standards for what it means to create a responsible AI system—and significantly deepen trust with customers and employees.

Is your company ready to integrate its AI systems into society? How will you “raise” your AI to be a good citizen?

To learn more about this IT trend, I encourage you to:

  • Read the Accenture Technology Vision 2018 overview and trend highlights
  • View the essential slide shares, videos and infographics
  • Share your thoughts at #techvision2018
  • Reach out to us to put these innovation-led ideas to work in your enterprise