For many months, artificial intelligence has been in my peripheral vision, just sitting there, ignored by me because it seemed too far in the future to be interesting now.

And then, there were all these terms — Big Data, machine learning, data science — which circled the subject and, frankly, gave me a bit of a headache.

Artificial intelligence is upon us, unleashed and unbridled in its ability to transform the world. If in the previous technological revolution, machines were invented to do the physical work, then in this revolution, machines are being invented to do the thinking work. And no field involves more thinking than medicine. I took 650 mg of acetaminophen and started reading because the truth about artificial intelligence is that it’s already here.

The term artificial intelligence is credited to John McCarthy, who used it in 1956 to describe a summer workshop he hosted, “The Dartmouth Summer Research Project on Artificial Intelligence,” to discuss “thinking machines.” He thought the term drew a neutral and straightforward distinction between artificial machine or computer intelligence and natural human intelligence. Encyclopedia definitions of AI include “the theory and development of computer systems able to perform tasks normally requiring human intelligence” and “a branch of computer science dealing with the simulation of intelligent behavior in computers.” Perhaps the most succinct is “the capability of a machine to imitate intelligent human behavior.”

Artificial intelligence is also described as strong AI and weak AI. Weak AI systems have specific intelligence, whereas strong AI has general intelligence and is also called artificial general intelligence, or AGI. Weak AI is the ability to do one specific task really well, such as IBM’s Deep Blue, which defeated Garry Kasparov at chess in 1997. Weak AI helps turn big data into usable information by detecting patterns and making predictions. Facebook’s news feed, Amazon’s suggested purchases and Apple’s Siri are all examples of weak AI. Current systems that claim to use “artificial intelligence” are most likely weak AI focused on a narrowly defined problem.

Strong AI is a hypothetical computer system that thinks exactly the way people do — a very difficult problem that has not yet been solved. It is a form of machine intelligence equal to human intelligence, with the ability to reason, solve puzzles, make judgments, plan, learn and communicate. Ultimately, artificial general intelligence is the end goal.

There is also a third category somewhere in between, where the majority of AI development occurs today: the field of machine learning, in which computers use human-like reasoning to guide the performance of tasks without perfectly replicating human cognition. IBM’s computer Watson, for example, looks at thousands of pieces of text, recognizes patterns, weighs the evidence and then adds it all up to arrive at an answer. This is artificial intelligence that isn’t exactly human cognition but is inspired by it, built on the three steps of pattern, prediction, and learning.
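The pattern–prediction–learning loop can be made concrete in a few lines. This is only a toy sketch with made-up numbers (nothing like Watson’s actual text-analysis pipeline): it learns a decision threshold from labeled examples, then uses that learned pattern to predict labels for new observations.

```python
# Toy illustration of the three steps: learning a pattern from labeled
# data, then predicting labels for new data. Hypothetical one-feature
# classifier; not any real system's algorithm.

def learn_threshold(examples):
    """Learning: set the threshold midway between the two class means."""
    pos = [x for x, label in examples if label == 1]
    neg = [x for x, label in examples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, x):
    """Prediction: apply the learned pattern to a new observation."""
    return 1 if x >= threshold else 0

# Hypothetical training data: (measurement, label) pairs
training = [(2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1)]
t = learn_threshold(training)   # learned threshold: 5.0
print(predict(t, 6.5))          # → 1
print(predict(t, 4.0))          # → 0
```

Real machine learning systems use far richer features and models, but the shape is the same: fit a pattern to past examples, then use it to predict the future.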

Practical applications of machine learning include improved user content on Pinterest, broad image curation on Yelp, chatbots on Facebook, curated timelines on Twitter and customer relationship management programs such as Salesforce’s Einstein for building better customer profiles. In the health care field, machine learning powers IBM Watson’s cancer treatment recommendations, stroke detection tools that hospitals are using today, and an AI algorithm that preliminary results show predicts heart attacks significantly better than the ACC/AHA guidelines.

A final term, data science, encompasses artificial intelligence and machine learning. It is the science of getting computers to act without being explicitly programmed by humans. This is the realm of deep learning, convolutional neural networks, and cognitive computing. And it is, perhaps, these concepts that give rise to the unease and fear that machines will somehow simulate human cognition on their own and will no longer be controlled by us. It is this general lack of trust that leaves AI in our peripheral vision.

Let’s bring AI and all its associated terminology into focus so we can understand how it’s revolutionizing our world — because the truth is, we’re already using it.
