Sora, OpenAI's New Text-to-Video Software, Threatens To Blur The Line Between Fiction And Reality
While the program is still in its infancy, its repercussions are far-reaching, ushering in a new era of disinformation that could erode the last vestiges of faith in democracy.
This week, OpenAI, the leading company in the tech world’s burgeoning artificial-intelligence industry, presented its latest breakthrough: Sora, a new text-to-video programme. By simply typing a prompt, users can have Sora create eerily realistic animations of whatever they describe. One clip, included among several initial examples from the company, showed a photorealistic woman walking down a rainy Tokyo street. According to a company blog post, the model is also able to create a video based on a still image or extend existing footage with new material. Although there were flaws in the presentation, with obvious defects in the environments and subjects, which at times appeared airbrushed, the clips on offer were frighteningly realistic, especially this early in the programme's development cycle.
While some may herald Sora as a new leap in technology, a transformative tool that could release workers from tedious tasks, it’s hard not to be weighed down by a foreboding sense of despair at its implications. To start, there are the aesthetic concerns. AI-generated art tends to devalue existing art, replacing it with soulless, sanitized media that is neither thought-provoking nor stimulating in any conceivable way. Creative artists, some of whom sink hundreds of hours into their work, now risk being left unemployed as companies replace them with AI, a cheaper alternative that demands neither wages nor benefits.
AI-generated content’s biggest threat, however, lies less in its believability than in its accessibility. With an intuitive user interface, almost anyone can pick up the software, type a prompt and produce whatever comes to mind, good or bad.
What happens, for instance, when teenage boys gain access to the technology and begin circulating, whether out of spite or amusement, doctored footage of a classmate performing sexual acts? With technology that looks this authentic, it will be difficult, if not impossible, for those unskilled in detecting AI-generated content to discern fiction from reality. Lives could be ruined, relationships strained and careers threatened.
But the biggest threat arguably lies in its implications for democratic engagement in the political process. The explosive rise of generative AI is already transforming journalism, finance and medicine, but it could also disrupt politics, threatening to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. Because it can be produced in enormous volumes, AI content hides in plain sight, flooding the media landscape, the internet and political communication with meaningless drivel at best and misinformation at worst. For government officials, this undermines efforts to understand constituent sentiment, threatening the quality of democratic representation. For voters, it hampers efforts to monitor what elected officials do and the results of their actions, eroding democratic accountability.
Technological innovation now allows malicious actors to generate false “constituent sentiment” at scale by effortlessly creating unique messages taking positions on any side of a myriad of issues. The dissemination of fake news and conspiracy theories has already become a hot-button issue, with videos of politicians, most notably Biden and Trump, making false or misleading statements going viral on TikTok, Twitter and Facebook. If AI-generated video were to converge with this trend, and it is only a matter of time before it does, how does society expect to combat the resulting avalanche of fake content? Some may find it easy to distinguish real content from fabricated content, but the millions of tech-illiterate consumers who scroll through social media daily are vulnerable to propaganda and misinformation, exacerbating the already severe polarization of society.
A healthy democracy also requires that citizens be able to hold government officials accountable for their actions. For ballot-box accountability to be effective, however, voters must have access to information about the actions taken in their name by their representatives. Concerns that partisan bias in the mass media, upon which voters have long relied for political information, could affect election outcomes are longstanding, but generative AI poses a far greater threat to electoral integrity.
With people more engaged with social media than ever, AI needs to be treated as the threat that it is, and legislators on both the left and right of the political spectrum need to earnestly reach a consensus on how the new software is to be governed. Currently, the legislative framework around AI content remains underdeveloped, as politicians have struggled to keep pace with the technology. Failing to treat it now with the urgency it deserves, however, risks allowing faith in Western democracy to be whittled away further as the distinction between reality and fiction grows more nebulous, ushering in a new era of disinformation.
Fundamentally, the companies behind these generative models must understand that they bear responsibility for what content their systems produce, how that content is framed, and even what type of content is proscribed. As generative AI becomes more ubiquitous, these platforms have a duty not just to create the technology but to do so within parameters that are ethically and politically responsible. The question of who gets to decide what is ethical, especially in polarized, heavily partisan societies, is not new. Social-media platforms have been at the centre of this debate for years, and the generative-AI platforms now find themselves in an analogous position. At the very least, elected public officials should continue to work closely with these private firms to make their algorithms accountable and transparent.
To reinforce this, digital-literacy campaigns need to play a greater role in guarding against the adverse effects of generative AI by informing the people who consume its products. Just as neural networks “learn” how generative AI talks and writes, so too can individual readers. Large language models such as ChatGPT have a certain formulaic way of writing, perhaps having learned a little too well the art of the five-paragraph essay. Once users understand this formulaic vernacular, they too can identify content that has been AI-generated.
New technologies such as generative AI are poised to provide enormous benefits to society: economically, medically, and possibly even politically. But artificial intelligence also poses political perils. With proper awareness of the potential risks, and guardrails to mitigate their adverse effects, we can preserve and perhaps even strengthen democratic societies.