As the 2016 U.S. elections demonstrated not long ago, manipulation and propaganda continue to fuel disinformation during political contests.
Less than ten years later, with current President Joe Biden and former President Donald Trump both eligible to run, new language models have arrived to complicate the landscape, disrupting and expanding the reach of the average internet user, who now has an AI tool (and all the data available behind it) at their fingertips.
Internet search engines and media outlets reacted belatedly, at first placing no restrictions on individuals or advertisers publishing this strikingly realistic AI-generated content. The rocketing spread of the technology has become an immediate threat; the proof is already here, with potential candidates actively misusing and sharing videos created solely to stir controversy. Its impact will depend, first of all, on the restrictions proposed by legislation.
In recent weeks, various members of the U.S. Congress and the most influential players in the digital industry have taken on the titanic task of setting proper limits on these publications.
Aside from the prominent role that journalism and advertising will naturally play, AI has leveled the fact-checking and ethics game for publishers, search engines, and social media giants: now, not only can advertisers create the most sophisticated political propaganda, but anyone can.
Various news agencies already provide fact-checking services, but, as for any human, it takes more than a few seconds to verify sources, compare materials, and explain why some of them are unreliable. This puts journalism at an enormous disadvantage when AI-generated content is produced, published, and spread within seconds or minutes of any official announcement by a presidential candidate. The disadvantage is even greater for the general electorate.
This is becoming increasingly concerning as AI tools grow more accessible. Deepfakes (AI-generated images and videos) are increasingly common, as are tools used to craft highly persuasive speeches and create misleading polls and models almost instantaneously. What compounds this concern is that these tools tap into vast databases that the software can sift through quickly. The challenge lies in the inability to promptly verify the accuracy of such content, which has the potential to fuel biased opinions, distort perceptions of reality, and foster unrealistic expectations about campaign actions and promises.
The U.S. could start regulating the uploading and replication of these opinion-provoking materials and minimize the persuasive effect of what can only be classified as lies.
This raises a number of important questions. How could Alexa inform a voter while guaranteeing 100% impartial and neutral information? How can Facebook avoid another scandal like the one in the previous election cycle? How will X control the already well-known bot armies and the videos in which candidates’ voices and gestures are cloned? These questions highlight the need for effective solutions to combat the dissemination of misinformation.
As technology continues to advance at an unprecedented rate, the challenge of maintaining trust and accuracy in political discourse becomes increasingly urgent. Legislative efforts are currently underway, especially in the realm of political advertising, requiring policymakers to strike a balance between freedom of expression and the protection of democratic processes.
AI-generated materials are not yet banned from political advertising, and they have already been used by candidates and parties. The initial debates therefore focus on regulating how those materials are presented: for instance, with a watermark, an obligatory disclaimer identifying the source of the material, or a prior review.
A separate case is material created and uploaded by an individual user who publishes it outside of any advertising campaign. For example, TikTok does not allow political ads, but its creators can still publish AI-generated content without any disclaimer, and nothing prevents that material from going viral.
This year, the U.S. Federal Election Commission, prompted by petitions from Democratic lawmakers and other public associations, has held various meetings to discuss options such as a disclosure in which the advertiser acknowledges that the material is AI-generated and that it could misrepresent an opponent’s persona and campaign.
Aligned with the FEC’s objectives, and in addition to its existing pre-review of political materials published on Google and YouTube, Google has announced that, starting next November, AI-generated videos and images that alter people or events must carry a noticeable disclaimer. This rule, even if not a ban on such materials, begins to regulate their reach.
On September 13, 2023, Bill Gates, Mark Zuckerberg, Elon Musk, and other leading figures in the tech sector attended an “AI Safety Forum,” with the regulation of AI-generated content as the main topic of discussion. The conference also addressed questions such as how the U.S. will handle the process and set the safety bar, given that economies such as India, Mexico, and Taiwan will also elect national leaders in 2024.
Despite the rapid advancement of this technology, the international influence of this process, and the transparency it should carry, will set a precedent for the entire world; it is more than just a matter of foreign policy and ethics.
From a civil law perspective, the electorate must be able to demand a strategy competent enough to safeguard their ability to vote without fear of manipulation or violence. Simultaneously, it’s crucial to protect the privacy and handling of personal data, especially in the context of advertising.
In intellectual property matters, political parties must have well-informed advisors who can rapidly defuse a crisis, or detect materials whose apparent authenticity could harm their campaigns, before the threat actually materializes.
Prevention is critical: the lawyer must take on the user’s role and learn how these technologies are deployed in order to understand their scope in each format. It is not enough to be a passive user; one must understand the advertising machinery behind it. Searching on Google U.S. differs from searching on Google U.K., because the results are filtered differently.
You must know how organic and paid positioning works, along with the data-collection methodologies used for each geographic location or channel. An aspiring lawyer must understand the structure of each medium to have at least minimal context for its impact.
At the corporate level, the U.S. continues to be a land of opportunity. It offers the potential to build businesses entirely focused on understanding emerging language models and elucidating their tangible effects on both individuals and companies.
AI is an exciting legal field in which counselors can immediately find opportunities to help balance private and public interests. As discussed, the technology giants met with members of Congress to lay the foundations for regulating AI-generated ads in the 2024 election. Still, the event was held behind closed doors.
This breeds disenchantment and distrust in both the press and the public at large, fostering the idea of possible censorship while questions of transparency and user protection remain under discussion.
Ethically, AI presents important challenges for individuals working at the intersection of law and politics.
Aside from staying up to date with your international news sources, the challenge is understanding the direct impact of AI on politics.