Can AI research be stopped? No, but it should become more open, democratic and scientific
Should AI research stop even temporarily?
In my view, no, as AI is humanity's response to a global society and physical world of ever-increasing complexity. These processes of growing physical and social complexity run very deep and seem relentless; AI and citizen morphosis are our only hope for a smooth transition from the current Information Society to a Knowledge Society. Otherwise, we may face a catastrophic social implosion.
Maybe we have reached the limits of AI research being engineered primarily by Big Tech companies, which treat powerful AI systems (such as LLMs) almost as marvelous black boxes whose functionality (the 'why') is very poorly understood, both because of lack of access to technical details and because of the sheer complexity of these AI systems. Naturally, this lack of knowledge, and the related confusion about the nature of human and machine intelligence, entails very serious social risks.
It seems that the Open Letter reflects both welcome genuine concerns about social risks and financial concerns about risk management related, e.g., to future AI investments or to the possibility of massive, expensive lawsuits (in an unregulated and un-legislated environment) in case things go wrong.
However, I doubt that the proposed six-month ban on large-scale experiments is the solution. It is impractical for geopolitical reasons and would bring too few benefits, particularly since it targets LLM training rather than LLM deployment.
Furthermore, the melodramatic tone of this Open Letter can only enhance technophobia in the wider population.
On the other hand, scientific views discounting the value of LLMs (e.g., the ones expressed by Chomsky) are old-fashioned (reminiscent of the rejection of the perceptron by Minsky and Papert) and not productive either.
Should We Pause or Stop Artificial Intelligence?
Of course, AI research can and should become different: more open, democratic and scientific.
Here is a proposed list of points to this end:
- On important AI research issues with far-reaching social impact, the first word should belong to elected Parliaments and Governments, rather than to corporations or individual scientists.
- Every effort should be made to facilitate the exploration of the positive aspects of AI in social and financial progress and to minimize its negative aspects.
- The positive impact of AI systems can greatly outweigh their negative aspects if proper regulatory measures are taken. Technophobia is neither justified nor a solution.
- In my view, the biggest current threat comes from the fact that such AI systems can remotely deceive too many citizens who have little (or average) education and/or little investigative capacity. This can be extremely dangerous to democracy and to any form of socio-economic progress.
- In the near future, we should counter the big threat posed by the use of LLMs in illegal activities (cheating in University exams is a rather benign case in the space of the related criminal possibilities).
- Their impact on labor and markets will be very positive in the medium to long run.
- In view of the above, AI systems should: a) be required by international law to be registered in an 'AI system register', and b) notify their users that they are conversing with, or using the results of, an AI system.
- As AI systems have a huge societal impact, and towards maximizing benefit and socio-economic progress, advanced key AI system technologies should become open.
- AI-related data should be (at least partially) democratized, again towards maximizing benefit and socio-economic progress.
- Proper strong financial compensation schemes must be foreseen for AI technology champions, to compensate for any profit loss due to the aforesaid openness and to ensure strong future investments in AI R&D (e.g., through technology patenting and obligatory licensing schemes).
- The AI research balance between Academia and Industry should be rethought to maximize research output while maintaining competitiveness and granting rewards for undertaken R&D risks.
- Education practices should be revisited at all education levels to maximize the benefit of AI technologies while creating a new breed of creative and adaptable citizens and (AI) scientists.
- Proper AI regulatory/supervision/funding mechanisms should be created and beefed up to ensure the above.
Conclusions on how to deal with AI research
Several of these points were already discussed in the 2021 AI Mellontology workshop and are also included in Prof. Pitas' recent book 'AI Science and Society' [PIT2023].
[FUT2023] ‘Pause Giant AI Experiments: An Open Letter’, https://futureoflife.
[PIT2023] Ioannis Pitas, "Artificial Intelligence Science and Society Part C: AI Science and Society" (335 pages), Amazon/CreateSpace, https://
About the author
This article was written by Prof. Ioannis Pitas (IEEE Fellow, IEEE Distinguished Lecturer, EURASIP Fellow), who received the Diploma and PhD degrees in Electrical Engineering, both from the Aristotle University of Thessaloniki (AUTH), Greece. Since 1994, he has been a Professor at the Department of Informatics of AUTH and Director of the Artificial Intelligence and Information Analysis (AIIA) lab. He has served as a Visiting Professor at several Universities.