
EPALE - Electronic Platform for Adult Learning in Europe

Blog

Generative Artificial Intelligence: beyond the panic, the consequences for information and work

The current panic about generative AI stems from fantasy and does not focus on the right issues: the future of information and work.

Artificial intelligence.

[Translation: EPALE France]

Artificial Intelligence (AI) is no stranger to the media. The most recent panic concerned the role of recommendation and prediction algorithms in spreading misinformation. This year, it is so-called generative AI that is making headlines in newspapers and on social media, notably through fake news that has caused alarm: Pope Francis in a puffer jacket, Emmanuel Macron as a dustman, Volodymyr Zelensky announcing Ukraine's surrender...

This buzz has drawn our attention to the potential risks of AI and diverted it from the real questions: the challenges of the 21st century and, in particular, the consequences for work and information. One real risk stems from the fact that these are now complex algorithms, as the shift from AI to generative AI suggests. There has been both a qualitative and a quantitative leap, as AI has progressed from simple tasks to complex tasks and now to general actions. To carry out these actions, it can redesign itself without human intervention, but also without understanding or sensitivity, because it relies on large-scale statistical and probabilistic calculations performed on very large databases. Generative AI is based on "generative pre-trained transformers" (hence the acronym GPT), which operate as decision-making systems even though they present themselves as friendly conversational robots like ChatGPT.

Information 

The risks for information relate to the manipulation of media content and scientific documents, which can distort decision-making by the humans who use these systems. Generative AI can be trained on databases containing errors and even misinformation. It is trained to deliver credible content, particularly images, but without any correspondence to the real world (as in the case of Midjourney or DALL-E).

These large, non-transparent models are therefore extremely fragile: they can be misled and trained to produce misinformation. In June 2023, NewsGuard's tracking centre counted 277 news sites written solely by AI. The consequences for democratic societies lie in the manipulation of public opinion and the weakening of confidence in democratic processes such as elections. This heralds a future in which tailor-made fake news is available to everyone, with messages adapted to maximise their effect. Yet this may prove a transitional stage, with an opportunity to curb these effects through ethical reflection, appropriate regulation and the creation of specialised AIs trained by reference media on their own quality information bases.

Work 

Generative AI continues the long human effort to automate tedious, repetitive, non-creative tasks. Yet behind generative AI stand "click workers" hired under poor working conditions: many of those who annotate, train and correct AI systems are paid derisory wages in developing countries, as in the case of ChatGPT, for which underpaid Kenyan workers labelled thousands of texts containing violent content.

The argument that AI frees up much of the brain and body for more stimulating intellectual tasks or for more leisure time is undoubtedly valid, but for the time being what we are seeing is a shift in the tasks of many professions and the dismissal of employees. These issues receive little or no attention in political discussions on the future of employment, the workforce or pensions. Beyond repetitive, blue-collar jobs, white-collar jobs are also in the spotlight. Even the creative professions are at risk, with no compensation for the creators whose work generative AI draws on, and without users' consent for the massive use of their data.

Human intervention now comes late in the work and calculations carried out by generative AI, and the experts involved are themselves seeing their numbers dwindle. Meanwhile, school and university systems do not seem to be moving fast enough, or to be willing enough, to produce graduates whose employability matches the challenges and, above all, the needs of this fourth industrial revolution.

Pending regulation... 

It is easy to understand the broad mobilisation within the generative AI sector around an open letter calling for a moratorium on AI development. Launched in March 2023 by the Future of Life Institute with more than 1,000 initial signatures (including that of Elon Musk, a co-founder of OpenAI, which develops ChatGPT), the petition had gathered more than 33,000 signatures by the end of June. Others have called for "AI governance", highlighting the need for regulation, that is, for human intelligence to regain control over artificial intelligence.

The European Union was the first to tackle the issue with its proposed regulation on AI (the AI Act), first put forward in 2021 and voted on by the European Parliament in 2023. The regulation does not propose to regulate AI as such but rather its uses (in particular unacceptable or high-risk uses). It does not sufficiently take account of the effects on work and information, leaving that burden to the trainers and educators who equip human intelligence with digital literacy and media and information literacy, the better to understand and master artificial intelligence!
