There Are Many Ways to Define Artificial Intelligence, Says Intel
Artificial Intelligence
There are a lot of ways to define artificial intelligence – partly because “intelligence” itself is hard to pin down, but also because people ascribe AI to everything from the grandiose to the matter-of-fact.
Intel Fellow Pradeep Dubey calls artificial intelligence “a simple vision where computers become indistinguishable from humans.” It has also been defined as simply as “making sense of data,” which very much reflects how companies are using AI today.
In general, AI is an umbrella term for a range of computer algorithms and approaches that allow machines to sense, reason, act and adapt like humans do – or in ways beyond our abilities.
The human-like capabilities include things like apps that recognize your face in photos, robots that can navigate hotels and factory floors, and devices capable of having (somewhat) natural conversations with you.
The beyond-human functions could include identifying potentially dangerous storms before they form, predicting equipment failures before they happen, or detecting malware – tasks that are difficult, or impossible, for people to perform.
A team at Intel, the Artificial Intelligence Products Group, is working to deliver the hardware, software, data science and research to bring these new capabilities to life.
“We want to create a new kind of AI that can understand data, in all sorts of areas.”
– Amir Khosrowshahi, chief technology officer of Intel’s Artificial Intelligence Products Group
People “think we are recreating a brain,” Amir Khosrowshahi, chief technology officer of the Artificial Intelligence Products Group, said in an interview. But “we want to go beyond that, we want to create a new kind of AI that can understand the statistics of data used in business, in medicine, in all sorts of areas, and that data is very different in nature than the actual world.”
Work in AI dates back to at least the 1950s, followed since by several boom-and-bust cycles of research and investment as hopes grew for new approaches and applications (like Arthur Samuel’s 1950s checkers program and Stanford’s 1960s Shakey robot) and then fell as these methods failed to pan out (leading to “AI winters,” when investment and public interest went cold).
There are four big reasons that we’re in a new AI spring today: more compute (the cloud puts high-capacity computers in reach for all), more data (particularly as cameras and sensors proliferate), better algorithms (approaches have moved from academic curiosities to beating human performance in tasks like reading comprehension), and broad investment.
Machine Learning
AI encompasses a whole set of different computing methods, a major subset of which is called “machine learning.”
As Intel’s Dubey explains it, machine learning “is a program where performance improves over time” – and one that also gets better the more data it is given. In other words, the machine gets smarter, and the more it “studies,” the smarter it gets.
A more formal definition of machine learning used at Intel is: “the construction and study of algorithms that can learn from data to make predictions or decisions.”
Wired magazine declared “the end of code” in describing how machine learning is changing programming: “In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them.”
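To make that contrast concrete, here is a minimal sketch of “training instead of programming,” written in Python with the scikit-learn library. The tiny dataset and the 0/1 labels are invented for illustration – this is a hypothetical example, not Intel’s code.

```python
# A minimal sketch of training instead of explicit programming (illustrative
# only, not Intel's code). No rules are written for separating the two classes;
# the model works them out from labeled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 200 examples with 2 measurements each, plus a 0/1 label for each.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression()
model.fit(X, y)                      # "training": learn the pattern from labeled data

print(model.predict([[0.8, 0.4]]))   # predict the label of an example it has never seen
```

Feed it more (and more varied) labeled examples and, in general, its predictions improve – which is the sense in which the program “gets better with more data.”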
Using machine learning, a major eye hospital in China improved detection of potential causes of blindness from the 70 to 80 percent accuracy typical for clinicians to 93 percent.
For instance: an AI-powered ophthalmoscope (a digital version of the device a clinician would use to see the inside of your eyes) built by Aier Eye Hospital Group and MedImaging Integrated Solutions learned how to identify diabetic retinopathy and age-related macular degeneration (both of which can lead to blindness) by “looking” at thousands of labeled images of healthy and unhealthy eyes.
An early analysis based on data from 5,000 Aier patients showed that detection accuracy, which had averaged 70 to 80 percent for screening conducted by humans, jumped to 93 percent with the AI solution. With more time and more data, its accuracy should continue to rise.
Neural Networks and Deep Learning
Neural networks and deep learning are very closely related and often used interchangeably, but there is a distinction. Most simply, deep learning is a specific method of machine learning, and it’s based primarily on the use of neural networks.
“In traditional supervised machine learning, systems require an expert to use his or her domain knowledge to specify the information (called features) in the input data that will best lead to a well-trained system,” wrote a team of Intel AI engineers and data scientists in a recent blog. In the blindness-prevention example, that’d mean specifying the colors, shapes and patterns that separate a healthy eye from a troublesome one.
Deep learning is different. “Rather than specifying the features in our data that we think will lead to the best classification accuracy,” they continued, “we let the machine find this information on its own. Often, it is able to look at the problem in a way that even an expert wouldn’t have been able to imagine.”
In other words, Aier’s eye-health screener might not “see” conditions at all like a human clinician does, though it’s still more accurate. That’s what makes deep learning so powerful – given enough good data, it can be used to solve problems with unprecedented dexterity and precision.
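The difference is easier to see in code. The toy sketch below (Python with scikit-learn; the “images” and labels are invented, and this is not how Aier’s screener is actually built) shows the classical route, where a person chooses the features, next to the deep-learning route, where the raw pixels go straight into a multi-layer network.

```python
# Hand-crafted features vs. learned features (an illustrative toy, not Aier's system).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
images = rng.random((500, 8, 8))                         # toy 8x8 grayscale "eye" images
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)    # invented healthy/unhealthy labels

# Classical supervised learning: an expert decides which features matter
# (say, overall brightness and contrast) and feeds only those to the model.
handmade_features = np.stack([images.mean(axis=(1, 2)),
                              images.std(axis=(1, 2))], axis=1)
LogisticRegression().fit(handmade_features, labels)

# Deep-learning style: hand over the raw pixels and let a multi-layer
# network discover its own features.
raw_pixels = images.reshape(len(images), -1)             # flatten each image to 64 numbers
MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(raw_pixels, labels)
```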
The neural network – technically an “artificial neural network” since it’s based on how we think the brain works – provides the math that makes deep learning work. Google offers a tool where you can actually play with a neural network in your browser, and also offers a simplified definition: “First, a collection of software ‘neurons’ are created and connected together, allowing them to send messages to each other. Next, the network is asked to solve a problem, which it attempts to do over and over, each time strengthening the connections that lead to success and diminishing those that lead to failure.”
[Diagram: a basic representation of a neural network, where the circles are neurons and the arrows are connections.]
The important part is this: The neural network allows the program to break a problem down into smaller and smaller – and therefore simpler and simpler – chunks. The “deep” in deep learning refers to the use of a many-layered neural network. With more layers, the program gets more refined in what it can categorize and more accurate in doing so – it just requires more and more data and more and more compute power.
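Here is roughly what that layered structure looks like when written down, using PyTorch as one common deep learning framework. The layer sizes are arbitrary and the code is a sketch, not Intel’s.

```python
# A small "deep" network: several stacked layers, each feeding the next
# (layer sizes are made up for illustration).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),   # layer 1: 64 inputs -> 32 neurons
    nn.Linear(32, 16), nn.ReLU(),   # layer 2: 32 -> 16 neurons
    nn.Linear(16, 2),               # layer 3: 16 -> 2 output scores
)

pixels = torch.rand(1, 64)          # e.g. one tiny 8x8 image flattened to 64 values
print(model(pixels))                # two scores, e.g. "healthy" vs. "unhealthy"
```

Making the network “deeper” is just a matter of stacking more of those layers – at the cost of more data and more compute to train them.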
“Deep learning is not magic – it’s math.”
– Pradeep Dubey, Intel Fellow and director, Parallel Computing Lab, Intel Labs
The concepts sound complex, but the code that actually runs is pretty simple. “It’s not magic – it’s math,” said Dubey. Matrix multiplication, to be exact – “as simple as it gets,” added the Intel Fellow.
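That claim is easy to check: strip away the framework and a single layer of a network like the one above reduces to a few lines of matrix arithmetic. The plain-numpy sketch below uses made-up shapes and random weights purely for illustration.

```python
# What one layer of a neural network actually computes: a matrix
# multiplication, a bias, and a simple non-linearity (shapes are illustrative).
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(64)                  # the layer's input, e.g. 64 pixel values
W = rng.normal(size=(32, 64))       # the layer's learned weights
b = np.zeros(32)                    # the layer's learned biases

h = np.maximum(W @ x + b, 0.0)      # multiply, add, clip negatives to zero (ReLU)
print(h.shape)                      # (32,) -- this layer's 32 outputs, fed to the next layer
```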
Training and Inference
OK, there are two more quick concepts worth noting: training and inference. Training is the part of machine learning in which you’re building your algorithm, shaping it with data to do what you want it to do. This is the hard part.
“Training is the process by which our system finds patterns in data,” wrote the Intel AI team. “During training, we pass data through the neural network, error-correct after each sample and iterate until the best network parametrization is achieved. After the network has been trained, the resulting architecture can be used for inference.”
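In code, that loop of “pass data through, error-correct, iterate” looks something like the PyTorch sketch below. The data and labels are synthetic stand-ins, and the specific choices (loss function, learning rate, layer sizes) are illustrative, not Intel’s or Aier’s actual pipeline.

```python
# An illustrative training loop: pass each sample through the network,
# measure the error, and nudge the weights to reduce it.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

images = torch.rand(200, 64)                 # toy data standing in for labeled eye images
labels = (images.mean(dim=1) > 0.5).long()   # invented "healthy"/"unhealthy" labels

for epoch in range(20):                      # iterate over the data many times
    for x, y in zip(images, labels):         # error-correct after each sample
        scores = model(x.unsqueeze(0))       # pass the sample through the network
        loss = loss_fn(scores, y.unsqueeze(0))   # how wrong was it?
        optimizer.zero_grad()
        loss.backward()                      # trace the error back through the layers
        optimizer.step()                     # adjust the weights to do better next time
```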
In the case of Aier’s eye screener, for example, training involved feeding in those pictures of eyes labeled as healthy or not.
And then there’s inference, which fits its dictionary definition to the letter: “The act or process of deriving logical conclusions from premises known or assumed to be true.” In the software analogy, training is writing the program, while inference is using it.
“Inference is the process of using the trained model to make predictions about data we have not previously seen,” wrote those savvy Intel folks. This is where the function that a consumer might see – Aier’s camera assessing the health of your eyes, Bing answering your questions or a drone that auto-magically steers around an obstacle – actually occurs.
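Continuing the training sketch above, inference is just a matter of running new, unlabeled data through the now-frozen network, with no further error correction – again, an illustration rather than any shipping product.

```python
# Inference: apply the already-trained model to data it has never seen,
# with the weights frozen (continues the training sketch above).
import torch

new_image = torch.rand(1, 64)      # a brand-new, unlabeled example
model.eval()                       # switch the network to inference mode
with torch.no_grad():              # no error correction, no weight updates
    scores = model(new_image)
print(scores.argmax(dim=1))        # the model's verdict: class 0 or class 1
```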