Is "AI" the New "Cloud"?

Yes... the end! Ok, I should put a little more into this!
If you've not heard of "AI" by now, where have you been? It's EVERYWHERE, but it's not new: it has existed in popular culture and in the minds of scientists, researchers and academics for centuries! What's catapulted it forward is Generative AI, popularised by the launch of ChatGPT in 2022. Since then its growth has been parabolic and currently shows little sign of slowing, but "AI" is becoming a misleading, generic label. While I have embraced the technology and intend to keep doing so, using the term "AI" has started making me wince in the same way "Cloud" did.
Prior to 2022, the preceding 10-15 years belonged to "CLOUD". Having spent most of that time in the industry, I had the same reaction: it became a catch-all and was used everywhere.
- Hosted Infrastructure via AWS / Azure... Cloud
- Web-based applications... Cloud
- Platform-as-a-Service... Cloud
- Weather formations... Also clouds, although only some of them!
This hit the height of ridiculousness when I was in a meeting with a prospective client and a colleague used the term "nebulous" to refer to hosted backup services! I wasn't sure whether to applaud or leave.
So, what is “AI” anyway?
ChatGPT is not "AI", in the same way AWS is not "Cloud". ChatGPT, or more specifically the Large Language Models (LLMs) behind it, is a productised offering built on a range of Artificial Intelligence techniques and technologies broadly referred to as Generative Artificial Intelligence (GenAI). As a starting point, the "AI" space includes the following and more:
- Natural Language Processing (NLP): Enables computers to understand, interpret, and generate human language
  e.g. classification, entity recognition, translation, transcription, summarisation
- Computer Vision: Allows computers to "see" and interpret images and videos
  e.g. object detection, OCR, facial recognition, image classification
- Machine Learning: Algorithms that allow systems to learn from data without explicit programming (a tiny sketch of this idea follows the list)
  e.g. recommendation systems, forecasting, fraud detection
- Autonomous Systems: Machines and technologies that can operate without continuous human supervision
  e.g. robotics, drones, self-driving vehicles
- Cognitive AI: A type of artificial intelligence that simulates human thought processes
  e.g. emotion recognition, conversational agents, personal assistants
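To ground the "learning from data without explicit programming" point, here is a minimal Python sketch using scikit-learn. The toy transaction data, the choice of features (amount and hour of day) and the fraud-detection framing are made up purely for illustration; this is a sketch of the idea, not a production system.

```python
# A minimal illustration of "learning from data without explicit programming":
# instead of hand-writing rules, we show a model labelled examples and let it
# infer the pattern. All data below is invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Toy "transactions": [amount, hour_of_day] with a made-up fraud label.
X = [[20, 14], [35, 10], [900, 3], [15, 16], [1200, 2], [40, 11]]
y = [0, 0, 1, 0, 1, 0]  # 1 = fraud, 0 = legitimate

model = LogisticRegression()
model.fit(X, y)                     # the "learning" step
print(model.predict([[1000, 4]]))   # a large, late-night amount is likely flagged: [1]
```

The point is simply that nobody wrote a rule saying "large amounts in the early hours are suspicious"; the model inferred it from examples, which is what separates machine learning from conventional, hand-coded automation.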
Believe it or not, "AI" has been out in the wild and we've all used or benefited from it for years. A contact of mine worked on an image recognition system to detect flaws in contact lens manufacturing in the early 2000s. Another founded a company which provided sentiment analysis so effective that it predicted the results of Brexit and Trump's first-term win ahead of their respective elections in 2015/16.
These examples (and thousands like them) show that narrowly applied "AI" technologies have been deployed in specific use cases for decades. The public release of Generative AI, the democratisation of access at little to no cost, and rapid improvements in competition and capability have been the tipping point.
Does it matter if we call everything "AI"?
This is trickier, and depends on what your interest or involvement is. The further you are from the technology, the less this murkiness is likely to have a direct impact, at least visibly and initially.
The development of the technology, and the benefits and riches it promises to deliver, is possibly the most significant shift since the introduction of the internet, and we are only at the start of this journey. The journey itself, though, does matter, and we're now seeing a huge rise in "AI Washing":
- Are you buying a genuine AI-native product?
- Or a wrapper around an LLM?
- Or some pre-existing automation re-branded as "AI"?
I am not knocking any of these approaches, or the companies that develop them, but understanding the difference is important! If there is not more clarity from vendors and understanding from users, then we're setting the scene for a range of outcomes that are, at best, undesirable. Take the hallucination problem in GenAI: it's certainly improving and can be mitigated to a substantial degree with better prompting, but if you deploy a service that is an LLM wrapper and its prompting does not mitigate this, you could be introducing significant inaccuracies that go unnoticed, affecting your business, your clients or your employees. A rough sketch of the difference follows.
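To make the "wrapper" point concrete, here is a minimal Python sketch. The `call_llm` function is a hypothetical stand-in for whatever model API a vendor actually uses, and the prompt wording and "answer only from the supplied material" approach are illustrative assumptions, not a recipe from any specific product.

```python
# Hypothetical stand-in for whatever model API a vendor actually calls.
# Swap this for a real SDK call; here it just echoes so the sketch runs.
def call_llm(prompt: str) -> str:
    return f"<model response to {len(prompt)} chars of prompt>"


def thin_wrapper(question: str) -> str:
    """A bare 'LLM wrapper': forwards the user's text with no safeguards,
    so whatever the model invents comes straight back to the user."""
    return call_llm(question)


def grounded_wrapper(question: str, source_documents: list[str]) -> str:
    """The same wrapper with basic hallucination mitigation via prompting:
    the model is told to answer only from supplied material and to say
    when it can't, rather than guessing."""
    context = "\n\n".join(source_documents)
    prompt = (
        "Answer the question using ONLY the material below. "
        "If the answer is not in the material, reply 'I don't know'.\n\n"
        f"Material:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    docs = ["Policy doc: refunds are processed within 14 days."]
    print(thin_wrapper("How long do refunds take?"))
    print(grounded_wrapper("How long do refunds take?", docs))
```

Both functions would be marketed as "AI", but only the second makes any attempt to keep the model honest, and neither is the same thing as a genuinely AI-native product.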
There are any number of other possible challenges, whether around ROI, cyber risk or simply unmet expectations, that could come from buying, using or deploying the wrong technology just because it says "AI".
What can we do?
It's very easy to say in a world moving as fast as this one, but take a second to be clearer, whether it's in discussions, marketing materials, procurement exercises or a chat down the pub.
Let's make sure our ability to communicate about and understand "AI" at least attempts to keep up with the technology.
What are your thoughts? Get in touch and let me know - [email protected]