Don’t believe the (AI) hype
TODAY’s buzzphrase is artificial intelligence (AI). It’s everywhere. Vendors in every category of business software invoke the phrase, to the extent that failing to incorporate AI into one’s product can sometimes seem hopelessly outmoded; the mark of a mere peddler of last year’s ‘legacy systems’.
Choosing three categories of business software at random from review/compare site Capterra, here are some easily found examples:
[Product] is the only web-based medical billing & practice management […] software, using Artificial Intelligence.
The [Learning Management System] [product] Suite includes a number of exciting new features, such as bolstered Artificial Intelligence (AI) capability
With AI embedded into the CRM [customer management system] where you work, [you] can now have a data scientist working for [you]
In practical, business terms, what does artificial intelligence mean?
We all have ideas as to what it is, or might be: visions ranging from armies of killer robots combing through rubble for human survivors at one end of the scale, to computer code doing every job more efficiently than we humans can, letting us relax and let the tech take the strain, at the other.
Whatever your image of AI might be, it’s important not to take the marketing hype at face value.
AI, in the contexts in which you’ll read it every day, is not intelligence at all and certainly isn’t intelligent enough to reach self-awareness and decide, whimsically perhaps, to dispose of the human race.
The first point we need to understand is that John McCarthy, who coined the phrase “artificial intelligence” in 1956, came to dislike its widescale uptake. He invented the term before the rise of the modern computer brought concepts of AI to a wider audience.
There is a great deal of debate about the definition of intelligence: philosophers, psychologists, biologists, cognitive scientists and computing experts all still hotly debate the subject, both inside and outside the technological arena of artificial intelligence.
While this discourse is profoundly fascinating, for our purposes here, it is probably more useful to be able to present a key fact:
Machine learning means something quite different from artificial intelligence.
The two terms, neatly acronymized to ML and AI, are unfortunately used interchangeably. That is an oversimplification, and an incorrect one.
Machine learning systems, as 99 percent of ‘AI’ systems should properly be called, present a simulation of intelligence. They cannot, as yet, instantiate (make real) true intelligence.
For example, a computer can learn to process vast amounts of recorded speech, learning as it goes to judge tone, vocabulary use, subtleties of inflection and combine this with, for instance, purchase history and records of previous interactions.
With this information, organizations can handle communications with customers in a more appropriate manner. Should the client be called, or sent an SMS? Will the customer respond well to a recorded message, or would a personal call be most effective?
This type of technology is impressive and highly useful, and it is a prime example of a machine literally learning. By examining existing records, and adapting its methods and models over time with continuing input, the system learns as it goes.
But this is not intelligence – ask the same system to raise the volume of the music on the organization’s hold system, and the ‘intelligence’ suddenly appears wrongfooted.
ML systems simulate intelligence; they cannot instantiate it. For philosophers, the divide between instantiation and simulation is an important one, with grey areas between the two terms that will keep academics busy for generations to come.
When we read of ‘AI’ being employed in an area of business service (hardware, software, cloud, widget, voice assistant etc.) we should remember that all such systems have merely been programmed to alter their own algorithms, by a host of means: for instance, by changing variables which define levels of certainty/uncertainty.
A basic and often-used example of machine learning illustrates the point well:
Give a computer four pictures of dogs of different breeds and ask it to learn what defines a dog. Then, when the system is presented with a picture of a cat, it will probably conclude that the cat is another dog breed. After all, each animal has two eyes, two ears, fur, and four legs. But when the machine is taught the differences between dogs’ and cats’ faces, it will have ‘learned’ by first making an assumption and then, over time, being corrected.
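The dog-and-cat example can be sketched in a few lines of code. The feature numbers below are invented for illustration, and the nearest-neighbour rule stands in for whatever model a real system would use; the point is that “teaching” is nothing more than supplying corrected, labelled examples.

```python
# Toy sketch of learning by correction (not any vendor's algorithm).
# Hypothetical feature vectors: [eyes, ears, legs, snout_length_cm].

def nearest_label(example, labelled):
    """Classify by the label of the closest labelled example (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda pair: dist(pair[0], example))[1]

# Four dogs are all the machine has seen; cats share most of these features.
dogs_only = [([2, 2, 4, 9], "dog"), ([2, 2, 4, 7], "dog"),
             ([2, 2, 4, 10], "dog"), ([2, 2, 4, 8], "dog")]

cat = [2, 2, 4, 2]   # two eyes, two ears, four legs, short snout
print(nearest_label(cat, dogs_only))   # -> "dog": the only label it knows

# "Teaching" is just adding corrected examples; the code understands nothing.
corrected = dogs_only + [([2, 2, 4, 2], "cat"), ([2, 2, 4, 3], "cat")]
print(nearest_label(cat, corrected))   # -> "cat"
```

Before correction the system can only ever answer “dog”; after two labelled cat examples it answers “cat”, having made an assumption and then been corrected, exactly as described above.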
Thus, a machine is learning and may even appear to be intelligent in a limited sphere.
In some definitions, it is the unpredictability of thought that constitutes intelligence: the spark that takes a theoretical physicist from one proposition to another by a leap of faith. Our present, primitive computers are not capable of useful unpredictability, and it is that capability which lets us think a problem through from beginning to end and then, immediately, do something quite different, such as make coffee in an unfamiliar kitchen.
Today’s computers can think problems through, after a fashion. They can also make coffee, and sometimes, given enough computing power, they can learn enough about coffee-making to be able to make a decent cup somewhere unfamiliar, sometimes. But not all three – our systems are still too primitive.
In short, the artificial intelligence you read about everywhere is actually machine learning. Machine learning allows a degree of software adaptation, but it’s still programming based firmly in ‘legacy systems’ and while useful, isn’t the cure-all the marketing departments would have you believe it is.