Early versions of AI are here and are becoming very widely available. Familiar examples include ChatGPT and Copilot. Ah, but wait, those are just one specific category of AI. As I’m sure you appreciate, the scope for using it is vast and includes a great deal more…
- Virtual Assistants – Think Siri, Alexa, and similar, but sexed up and far smarter.
- Self-driving cars – Sounds easy, but it is actually very challenging when faced with edge cases. For example, if the car’s AI detects that continuing on its current path will kill people who have stepped into the road, it needs to make a very rapid swerve decision to avoid impact. What happens if swerving will kill the driver? How will an AI handle such decisions?
- Game players – AI can now play the best human Go or chess champions, and win.
- Generate images and video
- Manipulate images and video
- Human health – rapidly process CT scan images and pick out issues far better than humans can
- etc…
Literally anything involving vast amounts of data can be rapidly crunched to yield insights, diagnostics, fraud patterns, and much more. And yet, even now, it is all still very embryonic – the best (or worst) is yet to come.
So here is the question – overall, will it benefit us and so be worth pursuing, or is it all incredibly risky?
You have an opinion, I have an opinion, but what do those closest to it all think?
Let’s get into that now.
Views from Subject Matter Experts
On 9th April, Nature published the results of a poll of 4,260 scientists who are currently active in the field.
Titled “Will AI improve your life? Here’s what 4,000 researchers think”, it lays out what was discovered by asking these subject matter experts.
The actual research was published 8 days earlier on the preprint server Zenodo; the Nature article is just a summary of it all. I’ll give you a link to the actual research paper, the full 43-page document, within the “Further Reading” appendix below.
Key Point: AI will profoundly impact your life. The insights within the paper address a rather important question – beyond what we have now, what is coming, how will it change your life, and will it be beneficial or deeply risky?
First, however, it is worth appreciating a small plot twist – the views expressed varied by nationality.
Really?
Yep, like this …
Researchers in China appear to be far more optimistic.
What is also interesting here is that overall, most of the researchers view AI as having more benefits than risks, or as having equal benefits and risks – only a small number flag up more risks than benefits.
That’s a hell of a lot of optimism for AI, far more than exists amongst the wider public.
What are the Benefits they flagged up?
Basically these …
There is no denying that it is a powerful tool.
Those in technical fields can use it as a quick start kit …
“Hey AI, show me the Python code for the best way to interface to database type X and run some SQL?“
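Just to make that concrete, here is a minimal sketch of the kind of answer you might get back. I’ve used Python’s built-in sqlite3 module purely as a stand-in for the generic “database type X” in the prompt, and the table and query are invented for illustration.

```python
import sqlite3

def run_query(conn: sqlite3.Connection, sql: str, params: tuple = ()) -> list:
    """Run one parameterised SQL statement and return all rows."""
    return conn.execute(sql, params).fetchall()

# Self-contained demo: an in-memory database standing in for "database type X"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, email TEXT, active INTEGER)")
conn.execute("INSERT INTO customers VALUES ('Ada', 'ada@example.com', 1)")

# Parameterised query (the ? placeholder) avoids SQL injection
print(run_query(conn, "SELECT name, email FROM customers WHERE active = ?", (1,)))
conn.close()
```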
Those dealing with vast amounts of text can greatly accelerate what they do by using it to do the drudgery …
Hey AI, here are the basic facts for this house I am about to market. Turn these into a page of sales material that I can use to sell it with, and include a selection of the associated images
Both of the above are real examples of what you can do right now.
It will get better, far, far better …
Hey AI, run an efficient analysis of the following customer interaction events in these 20 billion row tables and identify patterns of potential fraud attempts
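You obviously wouldn’t crunch 20 billion rows on a laptop, but a toy sketch of the kind of pattern-flagging being asked for might look something like this. The column names, values, and the crude 10x-median rule are all invented placeholders, purely to illustrate the idea.

```python
import pandas as pd

# Toy stand-in for "customer interaction events" - everything here is invented
events = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 4],
    "action":      ["transfer"] * 5,
    "amount":      [50.0, 40.0, 60.0, 9500.0, 30.0],
})

# Total transfers per customer, then flag anyone wildly above the typical customer
per_customer = events.groupby("customer_id")["amount"].sum()
typical = per_customer.median()
suspicious = per_customer[per_customer > 10 * typical]   # crude rule-of-thumb cut-off

print(suspicious)   # customer 4 stands out as a potential fraud pattern
```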
What are the Risks they flagged up?
Basically these …
Yes, that top one is what greatly worries me as well, so let’s dive right into that.
The big one is Disinformation
The ability to rapidly and very easily churn out totally fake text, videos, and images.
Those who dismiss people as being negative about new technology often suggest that such fears exist because they don’t really understand it. That’s not what is going on here; the disinformation-on-steroids concern is being expressed by the subject matter experts themselves, so the fear of a rising tide of even more manufactured disinformation flooding us is wholly warranted.
Yes, the potential for profound benefit is real, but so too is the scope for social disruption.
OK, let’s take a brief detour into the sci-fi end of all this
What is also fascinating is the complete lack of any articulation of the more fringe claims regarding the rise of a self-aware machine consciousness that will either serve us or decide that the world would be far better off without us. There is a reason that sci-fi movie plot does not get a mention – it is total fiction and completely unreachable.
Reality Check: Right now we do not even know if a machine can have a mind or consciousness.
This is where we can briefly delve into a bit of philosophy and use the Chinese Room argument proposed by John Searle.
If we build an AI that can completely simulate human input and output, is it actually capable of becoming conscious?
Would its claiming to be conscious just be an illusion?
Imagine you are in a room and you have a book that contains detailed instructions for the manipulation of Chinese symbols. Whenever somebody outside passes you some Chinese symbols via a screen, you just look them up in the book to find the appropriate response and play it back. Those outside the room are amazed that the room understands Chinese, but in reality it does not. It is just a rules engine that mimics understanding, and so the illusion is created.
If you start interacting with, for example, ChatGPT, then you have something very much akin to this. It is neither conscious nor self-aware; instead it is parsing what you feed in and using an LLM (Large Language Model) to generate a response.
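To make Searle’s point concrete, here is a toy sketch of the room as a rules engine: a lookup book that maps incoming symbols to canned replies, producing plausible output with zero understanding behind it. The symbol/response pairs are invented purely for illustration.

```python
# Toy "Chinese Room": a lookup book mapping incoming symbols to canned replies.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",         # "How are you?" -> "I am fine, thanks."
    "今天天气如何？": "今天天气很好。",    # "How is the weather?" -> "The weather is fine."
}

def room_reply(symbols: str) -> str:
    """Look the symbols up in the book; no meaning is ever involved."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(room_reply("你好吗？"))        # looks like understanding...
print(room_reply("今天天气如何？"))   # ...but it is just pattern lookup
```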
Today we don’t even understand how consciousness emerges from our brains. There are of course plenty of ideas; for example, physicist Roger Penrose has suggested that quantum processes in microtubules within neurons might explain consciousness. Fascinating as that is, it is a hypothesis and has not been verified.
So my point is this – manufacturing a self-aware machine intelligence is not something we need worry about. Instead, the wholly valid worry is a potential tsunami of manufactured disinformation that takes us even further into the post-truth landscape.
Wind the clock back a couple of decades and think about the rise of social media – did anybody truly grasp the potential flood of disinformation and the disruption that it could bring when it first started to emerge?
Here we are now with millions of MAGA cult devotees totally decoupled from facts and reality.
There are now humans alive who have only ever known a world with social media, so how astute are those who have grown up with it all their lives?
You might worry about grandpa and grandma being suckered by disinformation, but it turns out that it is Gen Z we really need to be concerned about.
A recent study used a validated online test, the MIST (Misinformation Susceptibility Test). The researchers looked at data from 66,242 individuals across 24 countries, and this is what they found …
“Multilevel modelling showed that Generation Z, non-male, less educated, and more conservative individuals were more vulnerable to misinformation.
Furthermore, while individuals’ confidence in detecting misinformation was generally associated with better actual discernment, the degree to which perceived ability matched actual ability varied across subgroups. That is, whereas women were especially accurate in assessing their ability, extreme conservatives’ perceived ability showed little relation to their actual misinformation discernment. Meanwhile, across all generations, Gen Z perceived their misinformation discernment ability most accurately, despite performing worst on the test. “
Gen Z, birthed into social media, are the most gullible – yikes.
So where am I going with this?
We let the billionaire technologists inflict social media on society without any oversight or constraint, and look where we are now.
We are now doing the same with AI. Literally billions are being poured into it with no oversight at all. It has the potential to completely destroy any notion of fact or truth. This is going to be upon us really quickly, and it is going to obliterate democracy.
We can’t stop it.
We could, however, tap the brakes a bit and think very carefully about how we want to proceed. Unfortunately, what we also need are educated regulators, and sadly that is something we just don’t have.
Bottom Line: Be afraid of AI for the right reasons and not for sci-fi movie reasons – the benefits are abundant, but the immediate risks are also very real.
Further Reading
- Link to the AI study on the preprint server Zenodo – ai_researcher_survey_ucl_2025.pdf (43 pages)
- The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment
- If you would like to have a go at the MIST test yourself then you can do so here. (Totally anonymous, free, only takes a few minutes, and you can include or exclude your results from the study)