AI-generated images and videos have evolved from quirky oddities into near-perfect illusions that can bamboozle anyone with a smartphone. The recent RNZ exposé highlights how this technology, fascinating as it is, has become a double-edged sword: it opens doors for creativity, but also for deception and misinformation.
What's striking is how subtle the errors have become. Gone are the seven-fingered celebrities and spaghetti-chomping Will Smiths; today the giveaways are tiny inconsistencies in shadows, textures, or garbled text on signs. For the everyday user, spotting these nuances is like playing detective in a high-stakes game, one where misinformation can inflame political tensions or destroy reputations.
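The article's checks are all visual, but readers comfortable with a little code can add a crude programmatic first pass. The sketch below is not from the RNZ piece: it assumes Pillow is installed, the filename is hypothetical, and the raw byte search for a C2PA marker is a rough heuristic rather than a proper manifest parser. It surfaces two weak signals: whether a file carries the EXIF metadata a real camera usually writes, and whether it embeds the kind of content-credential label discussed further down.

```python
# A rough first-pass check, assuming Pillow is installed (pip install Pillow).
# Neither signal is conclusive: AI output sometimes carries metadata, and
# platforms routinely strip it from genuine photos on upload.
from PIL import Image

def quick_provenance_hints(path: str) -> None:
    # 1. EXIF metadata: real camera photos usually record make, model,
    #    and capture settings; many generators and re-uploads do not.
    exif = Image.open(path).getexif()
    print(f"EXIF tags found: {len(exif)}" if exif else "No EXIF metadata found")

    # 2. C2PA content credentials: a provenance standard some tools embed.
    #    Scanning raw bytes for the 'c2pa' label is a crude heuristic,
    #    not a substitute for a real manifest parser.
    with open(path, "rb") as f:
        has_marker = b"c2pa" in f.read()
    print("Possible C2PA credentials embedded" if has_marker else "No C2PA marker")

quick_provenance_hints("suspect_image.jpg")  # hypothetical filename
```

Treat both results as hints, nothing more; a missing tag proves nothing, and the visual detective work above remains the main line of defense.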
The experts' advice is refreshingly straightforward: cultivate trusted sources, question who's making claims, seek evidence, and apply common sense. Media literacy is no longer optional; it's our best defense. Yet as AI models get smarter, the battle between content generators and detectors only intensifies: an endless cat-and-mouse chase.
Policymakers are cautiously stepping in, but enforcing laws against malicious AI use is a herculean task, especially when cross-border complexities and fast-moving technology outpace legislation. Labeling AI-generated content is commendable in principle, but in practice it's an uphill climb.
So, where does this leave us? Well, AI is here to stay, and social media will remain a wild jungle of real and AI-crafted content. The key lies in exercising our critical-thinking muscles, verifying before sharing, and demanding responsibility from media outlets and platforms. Perhaps embracing a bit of skepticism isn't so much cynicism as pragmatic survival in the digital age. After all, if Abe Lincoln can suddenly wield an iPhone, who knows what else AI might conjure next?

Source: How to tell if an image or video has been created by AI - and if we still can