There’s Another Important Message in Taylor Swift’s Harris Endorsement
Taylor Swift's endorsement of Kamala Harris came with a second message: a pointed warning about the dangers of AI. It is a stark reminder of a new reality, in which deepfakes and AI-generated misinformation are becoming part of how elections are fought.
Minutes after Tuesday's presidential debate ended, Taylor Swift endorsed Kamala Harris on Instagram in a post that has amassed more than 8 million likes. The move wasn't entirely unexpected: Swift backed Joe Biden in the 2020 election and had hinted, in classic Taylor fashion, at where she was leaning this time.
In her post, Swift highlighted both what she admires about Harris and the risks posed by artificial intelligence.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift wrote. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”
Swift was referring to a post Trump shared in August on his social media platform Truth Social, which included images that appeared to show her and her fans supporting him. Trump captioned the post "I accept"; the images, however, had the telltale unnatural look of AI-generated content.
Many people quickly recognized the pictures as fake, and Swift's denial appears to have carried more weight than the AI-generated images themselves. Still, the episode may foreshadow fights over AI-manipulated content in elections for years to come.
Craig Holman, a government affairs lobbyist at the nonprofit Public Citizen, says the country is already in a moment when many American voters are losing trust in elections. If misinformation and manipulation keep seeping into the information environment and shaping votes, he warns, the credibility of elections themselves is seriously threatened.
Deepfakes proliferate around celebrities, elections
During the 2020 presidential election, AI technologies were relatively basic. Over time, however, these tools have dramatically advanced in power. Today, people worldwide utilize AI to generate lifelike images, videos, and audio. With minimal resources, disinformation campaigns can create fake social media accounts that disseminate propaganda; political parties can leverage AI to swiftly deliver personalized messages to thousands of potential voters; and convincing forgeries such as manipulated event photos and even celebrity-sounding voicemails can be produced with ease.
AI-generated content and deepfakes have already been deployed in political campaigns. Last year, the Republican National Committee (RNC) released an AI-made video depicting a dystopian future should Joe Biden be re-elected. Elon Musk shared an AI-altered image of Kamala Harris in Soviet-style attire, writing on X that she wanted to be a "communist dictator" from the start. Before Chicago's mayoral election in February 2023, a fake video of a candidate making inflammatory comments about police shootings was viewed thousands of times on X before being taken down. And in this year's Indian elections, deepfakes were used extensively to create deceptive videos of Bollywood celebrities and ads laced with Hindu-supremacist language.
Taylor Swift, given her enormous fame, has frequently been a target of AI-generated content. Early this year, explicit and sometimes violent AI-created images of her circulated widely on social platforms. The images helped spur legislation in the U.S., including the DEFIANCE Act, which allows deepfake victims to sue people who produce, distribute, or receive such content; the bill passed the Senate in July. AI companies have also moved to respond: Microsoft said it was investigating the images and had strengthened its existing safety systems to prevent its services from being used to generate similar content.
Swift's statement is part of a growing backlash against AI among some of the world's most influential cultural figures. Beyoncé recently voiced concerns about AI misinformation in an interview with GQ: "We have access to vast amounts of information – some accurate, and some false information masquerading as truth… Just recently, I heard a song by an AI that sounded so much like me it gave me pause. It's challenging to distinguish what's genuine and what isn't." Earlier this year, Scarlett Johansson criticized OpenAI for releasing a chatbot voice that seemed to mimic her own.
How Trump’s deepfake move ultimately backfired
Trump has long professed admiration for Taylor Swift, calling her "fantastic" in 2012 and "unusually beautiful" in 2023. In February, he boasted on Truth Social about having contributed to her success, suggesting that an endorsement of Joe Biden would amount to disloyalty toward the man who had helped her earn so much money.
But when Trump posted the deepfakes on Truth Social in August, his attempt to court Swifties appears to have backfired. The post gave Swift an opening to frame her endorsement of Harris as a moral obligation, as if she had no choice but to set the record straight. It also siphoned off attention Trump hoped to command on debate night: by Wednesday morning, "Taylor Swift endorsement" was the second-most-searched topic on Google, trailing only "who won the debate."
Early in her career, Swift steered clear of politics, telling TIME in 2012 that she didn't yet feel knowledgeable enough to tell other people whom to vote for. For years she faced criticism for that silence, with many urging her to use her enormous platform. She has since waded into politics cautiously, laying out her reasoning each time: in 2020, she denounced Trump for stoking white supremacy and racism throughout his presidency. This cycle, though, she had stayed quiet until Tuesday night's endorsement, which has itself drawn criticism from some quarters.
It's unclear how much AI-driven disinformation efforts have actually swayed voters: some scholars argue that voters are more discerning than commonly assumed, and that the impact of AI-generated misinformation on election outcomes may be overstated.
But Holman, of Public Citizen, argues that those studies relied on earlier, less capable AI tools. He points to a database of political deepfakes compiled this year by researchers at Northwestern University, which has cataloged hundreds of incidents; many of them, the researchers found, had tangible real-world consequences.
"We're currently living in an entirely different time period," Holman says. The technology has become so persuasive and so difficult to distinguish from reality, he argues, that he expects it to weigh heavily on upcoming election cycles.
2024-09-12 01:06