The farm of algorithms: all models are equal, but some are more equal than others.


Artificial intelligence (AI) is both fascinating and frightening. The question many ask is: will AI replace us? Will it lead to the disappearance of jobs, skills and knowledge?
We marvel at advances such as the Humane AI Pin, the Rabbit R1, Google Gemini and ChatGPT. But what do they really have in common? It's not what we think!
Our obsession with AI continues to grow. It has not only changed our world; it has drastically altered our expectations and perceptions.
When ChatGPT was launched in 2022, it seemed like pure magic. You asked a question and got a detailed answer in seconds - too good to be true. As Arthur C. Clarke famously said, "Any sufficiently advanced technology is indistinguishable from magic".
But behind the magic of AI lies a magician's trick. Large Language Models (LLMs), such as those that power ChatGPT, are not designed to give the "right" answer; they are designed to sound intelligent, producing text that is plausible rather than verified. They often hallucinate, and their responses reflect the limited or poor-quality data they were trained on.
Put simply, these models are trained on vast amounts of web content, some of it garbage. When they cannot make reliable sense of a question, they still generate an answer that sounds impressive, even if it is not accurate.
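To make the trick concrete, here is a minimal Python sketch of the principle behind text generation. The vocabulary and probabilities are invented purely for illustration, not taken from any real model; what matters is that each step samples a plausible next word from learned statistics, and nothing in the loop checks whether the output is true.

```python
import random

# Toy "language model": for each word, a handful of plausible next words
# with probabilities. (Hypothetical data, invented for illustration; a real
# LLM learns such statistics over billions of parameters from web text.)
next_word_probs = {
    "The": [("capital", 0.6), ("answer", 0.4)],
    "capital": [("of", 1.0)],
    "of": [("Australia", 0.6), ("France", 0.4)],
    "Australia": [("is", 1.0)],
    "France": [("is", 1.0)],
    "is": [("Sydney", 0.5), ("Canberra", 0.3), ("Paris", 0.2)],
}

def generate(word, steps=5):
    """Build a sentence by repeatedly sampling a *plausible* next word.
    Nothing in this loop checks whether the result is true."""
    output = [word]
    for _ in range(steps):
        options = next_word_probs.get(word)
        if not options:
            break
        words, weights = zip(*options)
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("The"))
# A possible run: "The capital of Australia is Sydney".
# Fluent, confident and wrong: fluency is not the same as accuracy.
```

Run it a few times and you may get "The capital of Australia is Canberra" or the equally fluent "The capital of Australia is Sydney"; the model has no notion of which one is a fact.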
So it's not just about what AI can do, but what it makes us think it can do.
This intersects with another myth we need to debunk: the notion of "digital natives". The idea that young people are innately tech-savvy is misleading. Yet as this generation is the first to coexist with generative AI, it is crucial that they learn to explore these tools safely and critically.
In adult education, we need to move away from banning AI tools and towards an approach that enables learners to test, analyse and critique these technologies. In doing so, we reposition AI not as an incomprehensible magical system but as a set of tools to be understood and mastered.
Generative AI is not magic; it is neither a saviour nor a destroyer, nor is it truly intelligent. Companies selling AI have a vested interest in promoting fantastical ideas, because hype attracts customers, investment and future prospects.
When hype dominates, reactions can be exaggerated. By taking a more grounded approach, we can ensure that today's adult learners critically understand the AI tools they will inevitably encounter.
As educators and learners, it's time to reflect on the best uses of these technologies, the ethics that should guide them, and the need for transparency in AI mechanisms before they become so entrenched in the digital landscape that they are no longer visible.
Finally, if AI models are transformed from tools into entities that influence and manipulate our perceptions, we risk losing control and finding ourselves in a "farm of algorithms": all models are equal, but some are more equal than others.
