Worryingly, AI content creation is only a click away
It is somewhat of an illusion that "before" there was a sharp distinction between truth and fiction, but today more than ever we need critical thinking
Prof. Dr. Federica Russo, Professor of Philosophy and Ethics of Techno-Science, Project Manager of SOLARIS:
Federica Russo is full professor of philosophy and ethics of techno-science. She holds the Westerdijk Chair at the Freudenthal Institute, Utrecht University, and she is Honorary Professor at University College London (Department of Science and Technology Studies). She has held research, teaching, and visiting positions at several institutions, including the Universities of Kent, Pittsburgh, and Louvain. Her research concerns epistemological, methodological, and normative aspects that arise in the health and social sciences, with special attention to policy contexts and to the highly technologized character of these fields. Federica has published extensively on themes such as causation and causal modelling, evidence, and technology, and her latest monograph is titled Techno-Scientific Practices. An Informational Approach (RLI, 2022). Federica is currently editor-in-chief of Digital Society; in the past, she has been co-editor-in-chief (with Phyllis Illari) of the European Journal for Philosophy of Science, and she is an executive editor of Philosophy and Technology. She is a member of the Steering Committee of the European Philosophy of Science Association and External Faculty Member at the Institute for Advanced Study at the University of Amsterdam.
On 26 October, Federica Russo will participate in the AI N' CYBER 2023 conference, organised by the Digital National Alliance in partnership with economic.bg and the Konrad Adenauer Foundation. More information about the event can be found on the website and on the Facebook and LinkedIn pages.
Prof. Russo, would you tell us how the tools for creating disinformation have evolved in recent years? How big a technological and scientific leap are we talking about, and could it turn out to be massively beyond the limits of human morality, as happened with the atomic bomb, for example?
Thanks for the question. It is important to bear in mind that disinformation is not a new phenomenon. It is clear that digital technologies – and in very recent times the development of AI systems able to alter pictures, videos, and audio – are rapidly changing the way fakes spread. What has changed is the way in which visual and audio content is manipulated, which is not only a matter of technical features, but also of the meaning that such changes carry with them. Unfortunately, we have passed the limit of human morality several times in our history, well before the atomic bomb – to reuse your example. But for me this is not the constructive way to pose the question of digital technologies in general, and of disinformation in particular.
Would you tell us what the GANs represent – why is the technology so good in creating fake content and what makes it so dangerous? What is the difference with the GPT models?
GANs are a class of AI systems able to alter or generate visual or audio content using a technique called 'adversarial' training that, to put it very simply, pits two networks against each other, and one 'wins' by improving on the quality of the content it produces (n.b. – Generative Adversarial Networks (GANs) are a popular class of generative deep learning models that are often used for image generation. They consist of a pair of neural networks called a discriminator and a generator. The task of the discriminator is to distinguish real images from generated (fake) images, while the generator network tries to fool the discriminator by generating increasingly realistic images).
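The adversarial idea described in the note above can be sketched numerically. The following toy example is an assumption-laden illustration, not how real image GANs are built: the "data" are just numbers drawn from a distribution centred on 3.0, the generator is a single affine map, the discriminator a logistic score, and the learning rates are picked arbitrarily. It shows only the alternating two-player loop: the discriminator learns to tell real from fake, the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: numbers drawn from a normal distribution centred on 3.0.
def real_batch(n):
    return rng.normal(3.0, 0.5, size=n)

def generate(z, g):
    # Generator: turns standard-normal noise z into a fake sample g[0]*z + g[1].
    return g[0] * z + g[1]

def discriminate(x, d):
    # Discriminator: P(sample is real) = sigmoid(d[0]*x + d[1]).
    return 1.0 / (1.0 + np.exp(-(d[0] * x + d[1])))

g = np.array([1.0, 0.0])   # generator parameters (slope, shift)
d = np.array([0.1, 0.0])   # discriminator parameters (weight, bias)
lr, n = 0.05, 64

for step in range(2000):
    # Discriminator step: push P(real) -> 1 on real data, -> 0 on fakes.
    z = rng.normal(size=n)
    fake, real = generate(z, g), real_batch(n)
    p_real, p_fake = discriminate(real, d), discriminate(fake, d)
    d -= lr * np.array([
        ((p_real - 1) * real).mean() + (p_fake * fake).mean(),  # dLoss/d d[0]
        (p_real - 1).mean() + p_fake.mean(),                    # dLoss/d d[1]
    ])

    # Generator step: fool the updated discriminator (push P(fake) -> 1).
    z = rng.normal(size=n)
    p_fake = discriminate(generate(z, g), d)
    g -= lr * np.array([
        ((p_fake - 1) * d[0] * z).mean(),  # dLoss/d g[0]
        ((p_fake - 1) * d[0]).mean(),      # dLoss/d g[1]
    ])

# How far the fakes have moved toward the real data. Even this toy loop can
# oscillate rather than settle: GAN training is notoriously unstable.
fake_mean = generate(rng.normal(size=10_000), g).mean()
```

The generator never sees the real data directly: it improves only through the discriminator's feedback, which is the property that lets GANs produce increasingly convincing fakes.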
GPT models are a class of 'large language models' that work on text. Again, very simply put, they are able to 'make up' new text on the basis of some input given by the user, using the large database on which they are trained. They are really different types of models, but one thing they have in common is that they both belong to 'generative AI', which means that the AI system comes up with 'new' content – new with respect to the database used for its training.
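The "make up new text from training data" idea can be illustrated with a deliberately crude stand-in. This is not how GPT models work internally (they are transformer neural networks trained on vast corpora); the tiny bigram chain below, with its made-up ten-word corpus, only shows the autoregressive loop both share: repeatedly predict a plausible next word given what has been generated so far.

```python
import random
from collections import defaultdict

# A made-up, toy "training corpus".
corpus = "the cat sat on the mat the cat saw the dog".split()

# Count which word follows which: a crude stand-in for a trained model.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

def generate_text(prompt, n_words, seed=0):
    # Autoregressive generation: each new word is sampled given the last one.
    random.seed(seed)
    out = [prompt]
    for _ in range(n_words):
        candidates = model.get(out[-1])
        if not candidates:          # dead end: no word ever followed this one
            break
        out.append(random.choice(candidates))
    return " ".join(out)

sample = generate_text("the", 8)
```

The output is 'new' in the sense above: the exact word sequence need not appear in the corpus, yet every step is drawn from patterns the model saw during training.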
The uncontrolled spread of fake news is already a serious problem. How does the deepening commercialization and popularization of generative artificial intelligence enhance it?
It certainly does, but I do not have numbers to quantify 'how much' this happens. It is certainly alarming that technologies for creating AI-generated content are just one click away from any average user, of any age and with any level of digital literacy.
We have witnessed many examples of the undermining of democracy through disinformation. What are the invisible processes at the societal and individual level that contribute to this erosion?
You ask a question that social scientists would be able to answer better than me. I am a philosopher of science and technology, and the invisible processes that I see concern the 'system' or 'network' to which humans, institutions, and social media platforms all belong. In other words, there are multiple paths, at both the societal and the individual level, that may explain the rise of such episodes. But we cannot just point the finger at GANs or other generative AI systems. We must understand the broader network in which these tools are used.
Deep fakes generated with artificial intelligence blur our notions of what is truth and what is fiction. How does this affect natural intelligence, and what data (similarly to machine learning) should it be trained on to be able to build robust boundaries?
My view is that it is somewhat of an illusion that 'before' there was a sharp distinction between truth and fiction. This is in fact the whole point of understanding the role of pictorial arts, photography, etc. But it is true that we got acquainted with the idea that pictures portray something true – and perhaps videos even more so, since they add a dynamic element. My impression is that we need more critical thinking and critical attitudes in understanding the nuances – an x-ray of my broken arm is also not a 'true' picture of my fracture. The robust boundaries between the realms of truth and fiction – if they can ever be restored – are given by our capacity to exercise judgment.
The genie is already out of the bottle and the technology will only get better - what counter tools are being developed and are they effective enough?
The question is posed in generic terms, and rightly so. We are only starting to develop tools that counteract the rapid spread of disinformation. Some tools are in the realm of governance and regulation, others in the realm of education (both at school and in the family). One big challenge I see is that the generation that has to design such tools is the one least acquainted with their use. It is as if we have to regulate and educate on the use of something we ourselves do not master. But we have to do it, because we can already see that we are not going in the right direction, and more is needed.
What is the flip side to the coin – how can these types of technologies (GANs and GPTs) be used for the better?
I’m glad you ask this question, because my general attitude towards (digital) technologies is that they are not per se bad or good. I wish all innovation were designed with good intentions and goals in mind. Uses of AI-generated content for good are being explored, for instance to produce educational content or in the film industry. GPTs are more controversial, but there are 'good uses' reported too, for instance assisting users in writing – but all this presupposes fundamental questions about purpose, about guidelines for use, and about literacy, which I think are not explicitly posed.
You are leading the international research project SOLARIS, which Brand Media Bulgaria is also a partner in – would you tell us about its objectives? How is SOLARIS expected to contribute to solving the worsening problems with deepfakes?
Thanks for your interest in the project. I wish one project could provide the solutions! But, more modestly, we propose to understand generative AI (of which GANs and GPTs are part) as part of a network or system, as I mentioned a moment ago. From this network perspective, we want to understand which elements in the network make us believe in audio-visual content that is AI-generated. Our hypothesis is that the quality of videos or audio only partially explains this. We will test this hypothesis empirically. We will also simulate the spread of deepfakes and observe the institutional response in such cases. Finally, we will experiment with the co-creation of AI-generated content for good. On the basis of our findings, we will draft policy recommendations.
What does your research so far show with regard to the digital literacy of users and the penetration of GANs in society?
We are not there yet. In our first use case, we will indeed interview users, and hopefully gain more information about their digital literacy, and so potentially establish a link between this and believing in AI-generated content.
On October 26, you will participate in the AI N' CYBER 2023 conference. What is the value of such international forums, and why is it important that the topic of disinformation and technology be discussed by as wide a range of international experts as possible?
I am thrilled to be able to be part of this event. It will be an excellent opportunity to present our approach and findings, but also to learn in detail about the many initiatives at EU level regarding disinformation. There is an urgent need to join forces and to share knowledge and best practices. Thank you for this opportunity.