
Advances in AI technology and inadequate verification systems contribute to the spread of fake news
In a stark reminder of the growing threat of misinformation in the digital age, a viral fake photo depicting an explosion near the Pentagon has ignited concerns about the unchecked spread of manipulated content. The incident serves as a wake-up call, urging tech companies to bolster their efforts in preventing the dissemination of fake news. However, it also underscores the need for users to approach online content with increased skepticism, as even legitimate images may soon be questioned.
The Pentagon “photo” in question gained widespread attention on Twitter, leading to a brief dip in stock prices. This event has affirmed what many experts have warned for months—misinformation is rapidly evolving, empowered by new AI tools that make the creation of convincing fake photos increasingly accessible.
Addressing this issue solely with technology appears to be an uphill battle. Attempts to track image provenance, such as Adobe Inc’s Content Authenticity Initiative, are steps in the right direction. Nevertheless, the old adage rings true: a lie can circumnavigate the globe while the truth is still getting its shoes on. In an era where artificial content is prevalent, a collective sense of skepticism is required, especially in the lead-up to the next US presidential election.
The Pentagon “photo” incident was further compounded by Twitter’s flawed verification system. Elon Musk’s revamp of the platform’s blue ticks, intended to democratize verification, inadvertently created an opportunity for imitators. Accounts like BloombergFeed, posing as legitimate news sources, exploited the system before being suspended. It is worth noting that Bloomberg Feed and the account Walter Bloomberg are not affiliated with Bloomberg News, as confirmed by a spokesperson for the organization.
While Twitter has become a fertile ground for the proliferation of fake AI photos, the problem extends beyond the platform itself. The fake Pentagon photo originated on Facebook, suggesting that similar images could easily circulate on other social networks like WhatsApp. WhatsApp’s forwarding feature played a significant role in the dissemination of fake information during Brazil’s elections last year.
TikTok, despite its current glitchy AI-generated video examples, is poised to face a similar challenge. With substantial venture capital investments pouring into deepfake technology startups, videos created using AI tools are expected to become increasingly realistic. Startups like Runway and Gan.ai are already developing software that enables the transformation of videos based on textual and visual prompts, with applications ranging from personalization to branding.
While realistic fake videos may still be a year or two away, generating manipulated images has never been easier. Adobe’s recent update to Photoshop incorporates generative AI tools that let users alter photos far more drastically. Image generators such as Midjourney and OpenAI’s DALL-E 2 put similar capabilities within easy reach, and open-source alternatives like Stable Diffusion can produce deceptive images of celebrities, politicians, violence, and war.
As the famous internet adage “pics or it didn’t happen” loses its potency, the trustworthiness of images is increasingly called into question. Twitter users got an earlier glimpse of AI’s potential to accelerate misinformation when a fake photo of Pope Francis in a puffer jacket went viral in March. As predicted, the consequences of such fakery have since taken a darker turn.
The combination of generative AI technology and inadequate verification systems, epitomized by dubious blue checkmarks, creates a fertile breeding ground for misinformation on Twitter. Furthermore, concerns are mounting that content moderation teams may face downsizing as Meta Platforms Inc prepares to cut more jobs, leaving fewer individuals to tackle this pressing issue.
Just a year ago, social media platforms