When fake looks real and real looks fake
A YouTube viewer commented recently that she wished our videos were filmed face-to-camera because she likes to know that the people she’s watching are real people.
As the lyrics of one of my favorite childhood songs state* – “The idea’s sound, but how will you lift it off the ground?” I couldn’t agree more that we should care vehemently whether we’re watching real people or simulations, but the truth is that the powers of perception we’ve relied on all our lives were outsmarted some time ago and no one is in the least bit motivated to help us out.
This is an extremely complex topic and I’m going to ignore huge aspects of it – like the use of AI companions, and AI being used to control our global infrastructure – and focus simply on AI generation of online content: video, images, and text. Partly because it affects our YouTube channel, which is our livelihood, and partly because it affects every single one of us even if we don’t use AI tools and would prefer to avoid AI completely.
To my mind there are two important sides to this story, one of which is being largely ignored but both of which are merging to create a dangerous online world.
Firstly, artificially generated video is now extraordinarily good. So good, in fact, that it's not perfect.
Not only is AI content now capable of fooling all of us, even experts, but it is also easy enough and cheap enough to make it worth doing on a vast scale. It’s already endemic, from entire networks of ‘content farms’ and artificial ‘creators’ being invented and managed entirely by AI technology which is capable of generating and uploading thousands of videos a day; to real people being ‘managed’ by AI agencies which ‘clone’ their bodies and voices and then generate vast quantities of artificial content which looks and sounds like them; to individual creators hiring video editors who use AI tools to ‘touch up’ their videos so that it appears that they have skills they haven’t taken the time to learn. This last one is probably the most directly worrying from the point of view of the future of our YouTube channel, but basically if you can imagine it, it’s being done.
Crucially, AI tools designed to deceive do not aim to produce perfection. AI is being trained on the collective output of humanity, and in many cases it is being used to simulate humanity. That doesn’t mean being perfect. It means being a perfect copy. The better it gets at fooling us the more it will start to include mistakes, hesitations, and all of the things we think distinguish real from fake. As I write this, I am finding it difficult to think of anything except face-to-face contact which couldn’t be very effectively faked if desired. Including a pre-AI online history to ‘demonstrate’ genuineness.
And then there’s the fact that the creators of AI tools are almost certain to be running tests on us, releasing multiple versions of their content simultaneously and monitoring our responses so that they can find out what we can detect and what we can’t.
In other words, just because we recognise some content as AI generated that doesn’t mean we are detecting it all.
Then there’s the other side to this, which I think is often overlooked.
Most of us have been using AI every day for years now, since well before you could ask ChatGPT a question, and well before every online tool you use started pushing AI “enhancements” on you. One example is familiar to us all, and very relevant to YouTube.
For several years now most phone cameras have had AI tools built deep into them which ‘enhance’ photos and video footage – whether we like it or not. This means that genuine people taking genuine footage can create content that looks more fake than the actual fake content.
Have you noticed how photos on your phone change after a second or two into something a bit different? And have you ever looked closely at your own family photos and videos and noticed how people’s faces and bodies look a bit odd, as if they’ve been stuck on in front of a background? You aren’t imagining it. The AI tools in the camera software analyse the images, detect the foreground and background, and ‘improve’ it. And the results can look decidedly faker than high quality fakes.
We have also been systematically exposed to highly edited video, TV and movies for many years now, with ever shortening sequences – these days often just a few seconds each. Narrators and actors have increasingly been reading scripts that are delivered in short phrases so that it’s easy to reorder them in post-production, cutting out all of their pauses and breaths at the same time. And AI tools are now automating these processes, taking real content and chopping it up, reordering it and spewing it out with a synchronised soundtrack.
Sliding uncontrollably into ‘normal’
Whether it’s intentional or not (and I suspect the former) it’s a fact that these two aspects of AI use are combining in our day to day lives. Fake content is looking more real and real content is looking more fake. Fake is being normalised, and we are being harmed by this process.
When you try a tutorial you thought looked easy, the disappointment when it turns out to be impossible is real. When you realise you’ve just been cold-called by a bot which fooled you for a few seconds, the sense of betrayal is real. When you can’t reach a human at your bank because you can’t figure out what to say to the gatekeeper bot, the anger is real. When you listen to a politician say something he never said, the opinion that stays with you is real. When you watch an event that never happened, the memory of it is real. And when you have to constantly stop yourself from believing anything you see or hear, the isolation and mistrust are real.
To me the real danger isn’t that AI will perfectly and indistinguishably mimic us as we currently are. It’s that it will change us into something we currently aren’t, something infinitely less. I worry that we will unconsciously allow our humanity to be subsumed into the machine, first by innocently letting it into every aspect of our lives, and then by failing to notice that it’s changing our behaviour, thoughts and desires.
Psychology aside, there is also of course the threat of very tangible damage to ordinary people’s lives on a massive scale. To focus again on the one tiny aspect of this immense picture which is online content like ours on YouTube: if the algorithms that analyse our reactions learn that fake content pays and that they don’t need to pay real people… well, then the real people will quickly disappear from the internet, just like the bricks-and-mortar stores have already disappeared from the real world.
The sin of omission
If you’re anything like me, even allowing yourself to think about these things can send you down a rabbit hole of worry and fear for the future. It feels overwhelming and hopeless. But surely we have to remind ourselves that that is no excuse. It wasn’t an excuse for our forebears who fought in wars, and this is a war of a new kind. The sin of omission is very real. We who are NOT digital natives, who DO remember a life before computers, and who INTUITIVELY understand that something is very wrong: it’s up to us to think of ways to stand our ground, to make our voices heard, to insist on knowing what’s real and what’s just an illusion which would disappear if someone pulled out the plug.
But how do we do that?
There may just be an answer which can be found if we take a step back and look at the bigger picture.
Why do people create fake content?
The abuse of AI tools and AI-generated content comes down to people seeking three things: attention, money, and power. Attention comes first, and is then converted into the other two.
And who ultimately gives them those things?
We do.
We indirectly pay them and empower them by giving our attention to the content they create. That’s why it’s called the attention economy. It all starts with getting our eyes and ears to attend to their content. Every time you watch or listen to something online, you are giving the creator of that content something powerful: your mind.
So what would happen if each of us quietly and humbly, with faith and courage (and no small amount of willpower) simply refused to give our attention to anything we didn’t know for a fact was real? If no one was watching, the content wouldn’t be there for long. There wouldn’t be any point in spending money to create it. Unrealistic? Yes, probably, but that’s no reason not to do it. We can only be held accountable for our own actions in this life after all. And maybe, just maybe, we can hold back the flood waters a bit longer in the world of watercolor on YouTube at least!
And we can go further than just denying our attention. When we encounter suspicious content we can look to see whether they admit to using AI – and on YouTube at least they should admit it. We can comment, calling people out if we suspect content is fake, asking creators to prove they’re real, and reporting the content if they can’t. We can comment with no expectation of a response, just so that the algorithm (which reads and listens to everything) knows how we feel. We can look to see whether the account is verified (on YouTube that’s a black check mark in a circle next to the channel name). We can check when their first content was published. We can dig deeper and look for other social media accounts or a website or a blog, with evidence of life pre-2023. We can send an email or a message telling them we can’t tell whether they’re real, and if they ARE real we’ll be helping them to learn that they need to try harder to distinguish themselves from fakes.
But we need to remember that any responses we receive could be fake too. Did you know that YouTube now offers us two choices of AI-generated replies to YOUR comments on our channel?
We don’t need to be victims
But if we remember that all of this is being done to attract our attention, we suddenly become empowered.
Would you entrust your child to a school you’ve never visited? Would you entrust your company to an employee you’ve never met? Then why entrust your attention to someone you don’t even know is real? From the point of view of a creator of online content, fake or genuine, our attention is our most valuable possession. Once we’ve recognised that, we can begin to give it with discernment. Then we can start to understand how to protect ourselves, as we have always tried to protect ourselves from real-life threats.
But it’s not going to be easy, and we’ll have to keep vigilant. Perhaps it’s time to put copies of 1984, Brave New World, 2001: A Space Odyssey and The Machine Stops on our bedside tables.
~ Tamsin
* from “Captain Beaky and his Band” https://youtu.be/nmxvoAiSi2E