Can sophisticated chatbots wreak havoc on social media?

ChatGPT has given many people their first chance to experiment with artificial intelligence (AI), whether it be for speech assistance or advice on cooking.

The foundation of ChatGPT is a cutting-edge language processing system created by OpenAI.

The AI was trained on text databases from the internet, including books, magazines, and Wikipedia entries; in total, some 300 billion words were fed into the system.

The end result is a chatbot with extensive knowledge that can sometimes seem uncannily human.

ChatGPT will provide you with a recipe if you tell it what's in your kitchen cabinet. Need a brief introduction to a lengthy presentation? No problem.

But is it too good? Its convincing simulation of human responses could be a powerful tool for those up to no good.

Researchers in academia, cybersecurity, and artificial intelligence (AI) warn that ChatGPT may be used by dishonest actors to sow discord and disseminate propaganda on social media.

Until now, spreading false information has required a great deal of human labor. However, a study from Georgetown University, the Stanford Internet Observatory, and OpenAI, published in January, found that an AI like ChatGPT could make it much easier for so-called troll armies to scale up their operations.

Sophisticated language processing systems such as ChatGPT could be put to work in so-called influence operations on social media.

Such campaigns use fake accounts to spread false information on social media, and can promote or attack policies, deflect criticism, and present a ruling party or politician in a favorable light.

Then-Republican presidential nominee Donald Trump and Democratic presidential nominee Hillary Clinton on stage at the Hofstra University debate in Hempstead, New York, on Sept. 26, 2016.
An official investigation discovered that thousands of Russian social media posts sought to sabotage Hillary Clinton's 2016 presidential campaign.

One such campaign was launched ahead of the 2016 US presidential election.

The Senate Intelligence Committee reported in 2019 that the St. Petersburg-based Internet Research Agency operated thousands of Twitter, Facebook, Instagram, and YouTube accounts that were dedicated to undermining Hillary Clinton's campaign and promoting Donald Trump.

However, future elections might face an even greater flood of false information.

The January report states that language models "have the potential to rival human-written content at low cost, suggesting that these models, like any powerful technology, may provide distinct advantages to propagandists who choose to use them."

The report warns that these benefits "could expand access to a greater number of actors, enable new tactics of influence, and make a campaign's messaging much more tailored and potentially effective."

Not just the quantity, but also the quality of false information could increase.

Josh Goldstein, a co-author of the paper and a research fellow at Georgetown's Center for Security and Emerging Technology, where he works on the CyberAI Project, says AI systems could improve the persuasive quality of content and make those messages difficult for ordinary internet users to recognize as part of coordinated disinformation campaigns.

"Generative language models could generate a lot of unique content on a consistent basis. and disallow each propagandist from repeatedly using the same text on news websites or social media accounts, says the author.

Mr. Goldstein adds that if a platform is overrun with false information or propaganda, it becomes harder for the general public to determine what is true. Often, that is exactly the intention of the bad actors behind influence operations.

His report also mentions that not all organizations will have access to these systems.

"At the moment, only a few businesses or governments have top-tier language models, which have limitations on the tasks they can reliably complete and the languages they can produce.

According to his report, "if more actors invest in cutting-edge generative models, then this could increase the likelihood that propagandists gain access to them."

Gary Marcus, an artificial intelligence expert and the founder of Geometric Intelligence, an AI firm acquired by Uber in 2016, says malicious groups may see AI-written content as a new form of spam.

"Those who spread spam rely on the most credulous individuals to click on their links in order to reach as many people as they can. But with AI, that squirt gun could grow to be the greatest Super Soaker ever. ".

Mr. Marcus also notes that even if social media sites like Twitter and Facebook removed 75 percent of the content such offenders post on their platforms, "there is still at least 10 times as much content as before that can still aim to mislead people online."

Twitter and Facebook have long been plagued by phony accounts, and as language model systems rapidly improve, the number of convincing fake profiles on those platforms is only likely to grow.

According to Vincent Conitzer, a professor of computer science at Carnegie Mellon University, "Something like ChatGPT can scale that spread of fake accounts on a level we haven't seen before, and it can become harder to distinguish each of those accounts from human beings."

According to Vincent Conitzer, fake accounts using ChatGPT technology will be difficult to distinguish from real users.

Both the January 2023 paper, which Mr. Goldstein co-authored, and a related report from the security company WithSecure Intelligence warn of the dangers of using generative language models to quickly and effectively produce fake news articles that could spread on social media and influence voters before a crucial election.

But if misinformation and fake news become an even bigger threat because of AI systems like ChatGPT, should social media platforms be as proactive as possible in tackling them? Some experts think the platforms will be lax about enforcing rules against such posts.

According to Luís A Nunes Amaral, co-director of the Northwestern Institute on Complex Systems, "Facebook and other platforms should be flagging phony content, but Facebook has been failing that test spectacularly."

"The costs associated with monitoring each and every post, as well as the realization that these fake posts are intended to enrage and divide people, which drives engagement, are some of the reasons for that inaction. That's advantageous for Facebook.

. "
