Generative AI Defies Regulatory Attempts
Tribune
16 May 2023
Generative AI, epitomised by ChatGPT, is breaking down the barriers between specialised applications and moving towards ever more general capability. The revolution brought about by large language models requires policymakers to rethink their segmented approach. As the EU attempts to adjust its AI Act to this shifting reality, future-proof, data-centric principles must guide efforts to make AI more reliable and beneficial to society. An interview with Rémi Bourgeot, economist and data scientist, IRIS Associate Fellow.
Does generative AI amount to an unchecked social disruption that escapes not only the control of governments but also that of its own designers?
Artificial intelligence has undergone a revolution since 2017, with the spectacular development of large language models (LLMs), which have opened up new possibilities for content generation. LLMs handle incredibly diverse types of content as linguistic systems: the same approach can generate text, sound, images, or code, and even supports genomic analysis. The result is an impressive unification of the techniques underpinning formerly distinct sub-fields of AI.
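To make that unification concrete, here is a minimal sketch in Python. It assumes nothing about any real model's tokenizer; byte-level encoding is just one simple scheme. The point it illustrates is that text, code, and a genomic sequence all reduce to the same kind of object, a sequence of integer tokens, which is all a language model ever sees.

```python
# A minimal sketch (not any production tokenizer): byte-level encoding turns
# very different kinds of content into the same kind of object, a sequence of
# integer tokens, which is all a large language model processes.

def byte_tokenize(content: str) -> list[int]:
    """Encode any string as a sequence of byte-valued tokens (0-255)."""
    return list(content.encode("utf-8"))

# Three formerly distinct "sub-fields" collapse into one representation.
samples = {
    "text":    "The EU is drafting the AI Act.",
    "code":    "def f(x): return x * 2",
    "genomic": "ATGCGTACGTTAGC",
}

for kind, content in samples.items():
    tokens = byte_tokenize(content)
    print(f"{kind:>7}: {tokens[:8]}... ({len(tokens)} tokens)")
```

Because every modality ends up as one token stream, a single architecture can be trained on all of them, which is why the old application-by-application boundaries are dissolving.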
The great leap of generative AI, based on LLMs, has captured the imagination of the general public since the release of ChatGPT by OpenAI, with Microsoft’s backing. Beyond the prospect of massive disinformation, these advances make tangible the replacement of a significant share of human activity, especially in its repetitive and predictable dimension. Along the way, generative AI challenges the managerial model that emerged from the shift to services, calling into question the skills and relationships it has shaped. At the same time, it raises the prospect of massive productivity gains and emancipation from repetitive tasks. This could benefit society as a whole, under the right circumstances and policies, or cause major civilisational disruptions, from unprecedented levels of unemployment to the proliferation of killer robots.
AI has become more human in appearance by aspiring to a general capability based on the logic of language. However, its limits in terms of reliability are obvious, and addressing them requires an overhaul, both in data quality and in new legal safeguards founded on political discernment. The neural networks on which AI is based have a black-box aspect. With the spectacular advances in generative AI, the designers of the models are themselves often caught off guard by their logical feats after release. Moreover, the nature of the huge databases used to train these models is often kept secret. This revolution is therefore marked by a multitude of unknowns and a major problem of reliability.
Technical innovations used to be deployed in the military and secured to some degree before reaching the general public. Today, against the background of competition among tech giants, incredibly powerful generative AI tools are made available to the general public, which discovers their potential and flaws at the same time as most experts, even before any substantial effort is made to rethink them in terms of reliability and acceptability for society.
Does this phase of AI development reshuffle the global technological race? How is Europe positioned?
This race is primarily led by the US, where capital and talent in this field are converging. At the same time, China is making a monumental effort to develop AI, according to its own criteria and political objectives, notably social control. The authorities banned ChatGPT very quickly, and the country’s digital giants are managing to develop their own generative AI tools effectively. China’s main vulnerability lies not so much in the design of AI models as in hardware: the country still lags behind the United States when it comes to processors.
Besides conceptual developments, the current AI revolution is also the result of spectacular progress in graphics processing units (GPUs), driven mainly by Nvidia. US policies restricting the sale of American processors and equipment to China, and preventing the transfer of skills, have direct consequences for China’s ability to meet the challenge of generative AI and large language models.
Europe too is home to top-level talent. However, it sorely lacks a funding system commensurate with the challenge, especially to grow start-ups to a critical size, as well as the infrastructure that digital giants command in terms of data access and computing capacity. Nevertheless, the EU is being closely watched around the world for its implementation of legal safeguards.
The EU is trying to set up a new regulatory framework, with the AI Act. Is this attempt threatened by the advances of generative AI?
The EU’s approach to AI regulation focuses on the level of risk associated with different types of existing or emerging tools, with varying legal requirements in terms of transparency, testing, data quality and human control. Risk is ranked from minimal, for example for spam filters, to unacceptable, as for the public use of facial recognition. This differentiated approach is disrupted by the advent of general-purpose models, since it is becoming particularly difficult to delineate the capabilities of AI tools.
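To illustrate why a use-based taxonomy struggles with general-purpose models, here is a toy Python sketch. The tier names follow the AI Act's four risk levels, but the mapping of systems to tiers and all function names are illustrative assumptions, not the Act's actual text.

```python
# An illustrative sketch of the AI Act's use-based logic. Tier names follow
# the Act's four risk levels; the system-to-tier mapping is a toy example.

from enum import Enum

class Risk(Enum):
    MINIMAL = 1       # e.g. spam filters
    LIMITED = 2       # e.g. chatbots, with transparency duties
    HIGH = 3          # e.g. CV-screening software
    UNACCEPTABLE = 4  # e.g. public facial recognition

# The regulation's premise: each system has one identifiable use.
RISK_BY_USE = {
    "spam_filter": Risk.MINIMAL,
    "customer_chatbot": Risk.LIMITED,
    "cv_screening": Risk.HIGH,
    "public_facial_recognition": Risk.UNACCEPTABLE,
}

def classify(use_case: str) -> Risk:
    """Use-based classification breaks down when one model serves every use."""
    try:
        return RISK_BY_USE[use_case]
    except KeyError:
        # A general-purpose model fits no single entry: the same weights can
        # power a spam filter or a surveillance tool, depending on deployment.
        raise ValueError(f"no single risk tier for: {use_case!r}")

print(classify("spam_filter"))   # Risk.MINIMAL
# classify("foundation_model")   # raises ValueError: the taxonomy fails
```

The failure mode in the last line is the regulatory problem in miniature: the risk attaches to the deployment, not the model, so a model with no fixed deployment escapes the classification.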
The European Parliament has tried to keep pace with generative AI by introducing specific rules on foundation models with respect to data governance and copyright, as well as safety checks before any public release. However, we are not just seeing the emergence of one more tool requiring a specific set of additional rules, but a massive unification of AI techniques that will keep making them vastly more powerful and will defy any classification.
For its part, China has designed specific legislation for generative AI. Its regulatory landscape clearly differs from that of the EU, as it pursues fearsome goals that notably include the use of facial recognition for social control. Its approach to the regulation of generative AI focuses on controlling the underlying data and the content created, which must conform to the official line. Beyond these particular political goals, the data used by the models and safety are issues that all policymakers must address. The task is greater for liberal democracies, which must tackle the new risks brought by AI while preserving their own political values. Geoffrey Hinton, the neural-network pioneer often referred to as the godfather of AI, says that his main concern about the dangers of artificial intelligence lies in the weaknesses of the political system.
Regulation that focuses primarily on the various uses, while AI grows ever more general, is doomed to lag behind technological developments. Language models remain tools designed primarily to bring together human-derived data. It is therefore crucial for regulation to focus on the question of sources, and their treatment across all types of models and uses. At this point, we particularly need future-proof principles, aimed at preserving human freedom, to guide regulation that will have to adapt to radical and unexpected technical developments in AI.
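As a thought experiment on what focusing on sources could mean in practice, here is a hypothetical Python sketch of a per-source disclosure record. Every field name and rule below is an assumption made for illustration, not drawn from the AI Act or any existing regulation.

```python
# A hypothetical sketch of a "data-centric" disclosure record for each
# training source. Field names and rules are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class TrainingSource:
    name: str                     # e.g. "web crawl snapshot"
    origin: str                   # URL or archive identifier
    licence: str                  # copyright / licensing status
    contains_personal_data: bool  # personal data present in the source?
    human_reviewed: bool          # was the source curated for quality?

def disclosure_report(sources: list[TrainingSource]) -> list[str]:
    """Flag sources needing attention under a source-focused regime."""
    flags = []
    for s in sources:
        if s.contains_personal_data:
            flags.append(f"{s.name}: personal data, needs a legal basis")
        if s.licence == "unknown":
            flags.append(f"{s.name}: unresolved copyright status")
        if not s.human_reviewed:
            flags.append(f"{s.name}: no quality curation on record")
    return flags

corpus = [
    TrainingSource("web crawl snapshot", "https://example.org/crawl",
                   "unknown", True, False),
    TrainingSource("public-domain books", "archive:pd-books",
                   "public domain", False, True),
]
print("\n".join(disclosure_report(corpus)))
```

The appeal of such a source-level record is that it survives changes in model architecture or use: whatever the model becomes, the questions of where its data came from and how it was treated remain answerable.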