Interview | The battle with AI-generated content is about to be lost
Europe must achieve digital sovereignty, but so far, it's not taking sufficient action in that direction
Professor Stephan Lewandowsky of the University of Bristol
Professor Stephan Lewandowsky is a cognitive scientist at the University of Bristol whose research focuses on the intersection between human cognition and the architecture of online information technologies, as well as the consequences for democracy. He examines how social media platforms and their algorithms influence the spread of misinformation, conspiracy theories, and public acceptance of scientific evidence, including on vaccination and climate change.
He is the recipient of numerous prestigious awards – among them a Discovery Outstanding Researcher Award from the Australian Research Council, a Wolfson Research Merit Fellowship from the Royal Society, and a Humboldt Research Award. Lewandowsky has been elected a Fellow of the UK’s Academy of Social Sciences, the Association for Psychological Science, and the German National Academy of Sciences Leopoldina (2022). He has been recognised as one of the world’s most highly cited researchers in 2022, 2023 and 2024 by Clarivate.
In this conversation, Professor Lewandowsky discusses artificial intelligence, its political and societal consequences, but also the opportunities it offers, its impact on democracy and human cognition, the dynamics of misinformation, and the significance of the SOLARIS project, in which "Brand Media Bulgaria" is a partner.
Professor Lewandowsky will be among the participants at the final SOLARIS conference in Utrecht in January 2026, where experts from across Europe will examine the real risks and policy implications of generative AI.
Prof. Lewandowsky, online users today are faced with a growing mass of AI-generated content, on top of unverified citizen journalism posts. What is it in our cognitive make-up that makes us susceptible to disinformation?
Human beings have evolved to believe others by default. And usually that makes a lot of sense; our neighbours won’t lie to us when we ask on what day the rubbish will be collected. In “real life”, lies and deception in person-to-person interaction are very rare. But this default makes us vulnerable to manipulators who want us to believe things that are false.
Even before deepfakes and AI-generated content, some people were drawn to conspiracy theories, unproven claims, and the like. How big a risk do they pose for elections and public trust?
There is plenty of evidence that misinformation and conspiracy theories can undermine public trust, especially trust in elections. For example, in the immediate aftermath of the 2020 US presidential election, strong majorities of both Republicans and Democrats believed that Biden had won fair and square. It was only after Trump and his supporters engaged in a massive disinformation campaign that Republicans lost faith in the outcome, to the point where a quarter of Republicans no longer supported the peaceful transition of power in late 2020.
Is it possible that some users unknowingly fuel the spread of false content?
Yes, most people will not share things they know to be false – although a significant share (about 25%) will willingly contribute to spreading misinformation, a process known as “participatory propaganda”.
How do you see large language models like ChatGPT changing the landscape of online disinformation – do they amplify the problem, or can they also be part of the solution?
Both. They can be part of the solution because they are very successful at changing people’s minds about conspiracy theories and misinformation when they are engaged in a conversation. But ChatGPT and so on can also be used to generate misinformation, including personally tailored messages that exploit people’s personal vulnerabilities.
Given how convincingly AI can generate text, what cognitive vulnerabilities do you worry about most when people struggle to distinguish between human and machine-produced information?
I suspect that this battle has been lost, or is about to be lost. AI-generated text is already nearly impossible to distinguish from human text, and within a short timespan there will be nothing left by which to tell the two apart.
Is Europe doing enough to limit disinformation, or are we still reacting too slowly?
Unlike the US, Europe is at least doing something. But it is not enough. We need to achieve true digital sovereignty so that the information diet of 450 million Europeans is no longer shaped by a dozen plutocrats in Silicon Valley.
What simple steps could platforms take right now to reduce the visibility of false information?
They know exactly how to do this, as Facebook demonstrated in the lead-up to the 2020 election. But at the moment, there is no political will to do so – on the contrary, many platforms have explicitly discontinued fact checking, which only underscores the need for Europe to achieve digital sovereignty.
Critical thinking is often cited as a key defence against misinformation, but it develops slowly and unevenly. What practical steps should institutions – schools, media, governments – take to strengthen it at scale? What responsibility should governments have in countering online manipulation?
Critical thinking is a nice concept, and sure, we should teach it in school from an early age. We should teach people all sorts of things, but that will not solve the problem. The problem is systemic: it is deeply rooted in the platforms’ business model (the “attention economy”) and in the use of algorithms to keep people engaged, so the solution must also be systemic. We must bring the algorithms under public control and accountability, which can only be achieved through digital sovereignty.
The SOLARIS project, funded under Horizon Europe, aims to strengthen societal resilience against online manipulation. From your perspective, what makes this project strategically important right now?
The most important aspect of SOLARIS, to my mind, is the “co-creation” angle: that is, the involvement of the public in the actual research. In all of this, the public plays a crucial role, and we must make sure that they are involved and represented.
What is the single most urgent policy change needed to make Europe less vulnerable to digital manipulation over the next decade?
Lots of things – I am the lead author of an EU report, due out by the end of 2025 (or in January 2026), which proposes a number of steps.
About SOLARIS
The SOLARIS research project, funded under the Horizon Europe programme, aims to address the challenges posed by generative AI technologies by fostering international collaboration and developing frameworks for AI governance, democratic resilience, and security.
For more information about the SOLARIS project and how to participate, visit the project website.
This project has received funding from the European Union. However, the views and opinions expressed are solely those of the author(s) and do not necessarily reflect those of the European Union or the European Research Executive Agency (REA). Neither the European Union nor the granting authority can be held responsible for them.