In the Age of AI, Seeing Is No Longer Believing
Images, videos and audio recordings once stood as some of the strongest forms of evidence, but the rise of deepfake technology is eroding public trust in what people see and hear, raising concerns for journalism, law, politics and everyday life.
Deepfakes are digitally manipulated media created using artificial intelligence to make a person appear to say or do things they never actually said or did.
The technology, which once required advanced technical skill, is now widely accessible through apps and online tools, making it easier for ordinary internet users to fabricate convincing media.
Even videos that appear genuine can now be dismissed as AI-generated. One example is a recent viral clip, which sparked widespread reactions online, showing Senator Adams Oshiomhole in a private jet massaging the feet of a self-described “professional sugar baby”.
While the video looked real and contained no detectable AI artefacts, the senator’s media office publicly denied its authenticity, saying it was AI-generated and intended to blackmail him or damage his reputation.
The incident reflects a troubling reality: in an age of deepfakes, visual evidence can be questioned, and claims of AI manipulation can be used to cast doubt on what people see.
Journalists have traditionally relied on photographs and videos as proof in reporting. But the ability to convincingly fabricate such material threatens visual verification, one of the foundations of modern journalism.
A manipulated video shared on social media can reach millions of viewers before it is properly examined, and by the time corrections appear, the impression created earlier may remain.
Meanwhile, the impact of AI is not limited to video. Artificial intelligence can now create music, replicating the voices of well-known artists with striking realism.
One of the most notable examples is “Heart on My Sleeve,” a track created by an AI music creator known as Ghostwriter977, which cloned the voices of Drake and The Weeknd.
The song used voice-cloning technology to generate vocals that sounded extremely realistic, leading many listeners to believe it was an unreleased collaboration.
The vocals were not recorded by the real artists, and the release quickly sparked copyright disputes and legal questions.
Platforms including Spotify, Apple Music and YouTube later removed the track after complaints over the unauthorised use of the artists’ voices. Despite the removals, the song continued to circulate widely online, especially on TikTok through reposted clips.
Checks by OSUN DEFENDER revealed that before its removal, the track had recorded about 600,000 streams on Spotify and around 275,000 views on official YouTube uploads.
Unofficial reposts on Twitter drew about 6.9 million views, while the sound recorded roughly 15 million uses on TikTok and the hashtag #heartonmysleeve exceeded 50 million views across multiple posts.
The scale of the reaction showed how quickly AI-generated content can spread and how easily audiences can mistake it for authentic material.
During election periods, fabricated recordings or videos have the potential to influence public opinion within hours, especially when shared widely on social media. Even when falsehoods are later exposed, the initial reactions often linger.
Courts and investigators have also relied heavily on audio and video recordings to support testimonies and establish timelines. If such materials are increasingly questioned, verifying authenticity becomes more difficult and time-consuming.
Some organisations are already working on systems to verify original recordings and detect manipulated media, but the tools used to create deepfakes are also improving rapidly.
In Nigeria, where social media plays a major role in shaping public discussion, the risks are significant.
Edited videos and misleading audio clips have previously caused confusion and tension, even without highly advanced deepfake technology. More realistic fabrications could make such situations harder to manage.
At the same time, artificial intelligence is not used only for deception. Filmmakers, designers and content creators use similar technology for visual effects, documentaries and creative storytelling.
The challenge lies in how the technology is used and how audiences learn to question what they see and hear. As artificial intelligence continues to evolve, society may be entering an era where seeing is no longer believing.
Trust may depend less on the image itself and more on how information is verified and traced.
The age when a photograph or recording could serve as unquestioned proof is fading, and the meaning of evidence itself is beginning to change.

Titilope Adako is a talented and intrepid journalist, dedicated to shedding light on the untold stories of Osun State and Nigeria. Through incisive reporting, she tackles a broad spectrum of topics, from politics and social justice to culture and entertainment, with a commitment to accuracy, empathy, and inspiring positive change.