Artificial Intelligence and the Rise of Misinformation

11.04.2024


Lately, we've been reading more and more about the rise of artificial intelligence (AI) tools in the news. Conversational programs such as ChatGPT and Microsoft's Copilot are being used to help create texts, articles, and social media posts, and increasingly also visual content such as images and videos.

These tools are developing extremely quickly. It is estimated that by 2026, as much as 90% of the content on the internet could be generated by AI tools. Whereas just a year ago we could easily identify AI-generated text or images, today we often struggle. Images created with artificial intelligence have begun to appear in advertisements and news. Conversational robots have started to replace employees in customer support and call centers. On social media, we see videos of the Eiffel Tower burning, created entirely with AI. Such content is becoming increasingly common online, and it's important that we know how to recognize it: it can misinform us, distort our perception of reality, and even deceive us, both informationally and financially.

Bots Taking Over the Web

With the rise of AI tools, the number of bots (artificially created profiles on social networks operated by programs) has also increased. These automated profiles comment, interact with each other, like posts, and even write private messages. Such messages often contain malicious links or other fraud attempts. In 2022, an estimated 47.4% of all internet traffic came from bots.
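
To make this concrete, below is a minimal Python sketch of the kind of crude heuristic that can flag bot-like profiles. The profile fields, thresholds, and weights are illustrative assumptions of ours, not any platform's actual API or detection logic; real bot detection relies on far richer behavioral and network signals.

    from dataclasses import dataclass

    @dataclass
    class Profile:
        # Hypothetical fields; real platforms expose different metadata.
        account_age_days: int
        posts_per_day: float
        followers: int
        following: int

    def bot_likeness(p: Profile) -> float:
        """Crude 0-to-1 score built from red flags often cited for bot accounts.

        Purely illustrative: real detection uses machine learning over content,
        timing, and network signals, not three fixed thresholds.
        """
        score = 0.0
        if p.account_age_days < 30:                 # very new account
            score += 0.3
        if p.posts_per_day > 50:                    # inhumanly high posting rate
            score += 0.4
        if p.following > 10 * max(p.followers, 1):  # follows far more than it is followed
            score += 0.3
        return score

    # A five-day-old profile posting 120 times a day scores the maximum, 1.0.
    bot = Profile(account_age_days=5, posts_per_day=120, followers=3, following=900)
    print(f"{bot_likeness(bot):.1f}")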


These bots either contact us first, or we seek out contact with them ourselves. Whether or not we realize we are communicating with a conversational robot, prolonged communication can lead us to develop an attachment to it, which can then be exploited for malicious purposes (phishing attempts, data theft, extortion, etc.). When individuals fail to recognize that they are communicating with a program, this can also lead to emotional distress and financial harm.

The Rise of Deepfake Technology and Video Generation

One of the most concerning applications of AI in creating misinformation is the development of so-called deepfake technology. Deepfakes are convincing but entirely fabricated videos in which a person's face can be altered at will. Add voice modulation, and the result is an extremely convincing recording in which a person can be made to appear to say or do things that never actually happened.

Individuals are often inserted into these videos without their consent. This form of abuse can cause irreparable damage to a person's reputation, as well as emotional and mental distress. Even worse, videos depicting individuals in intimate situations, even if completely fabricated, can quickly circulate online and become tools for harassment or extortion. The technology can also be used to influence society, especially around political elections, by spreading misinformation.

The use of these tools is becoming increasingly easy and accessible. OpenAI recently introduced its video generator, Sora, which produces entire photorealistic videos from a simple text prompt, videos that are difficult to distinguish from real recordings.

Voice Cloning Scams

In addition to copying faces in videos, it is now also possible to copy a person's voice, intonation, and speaking style. From a sample of someone's voice, a voice model can be created that lets an attacker speak in that person's voice or generate spoken content with the "stolen" voice. This process is called voice cloning.


The technology is primarily being developed for virtual assistants that could speak to us in any voice, for translation applications, video production, and so on. However, it is also being misused for scams and disinformation.

There are already documented cases around the world of scammers using a cloned voice of a victim's family member to convince the victim to send money. This often happens through a phone call in which the perpetrator poses as that family member and claims to be in a serious situation requiring some form of payment (pretending to have been kidnapped, arrested, or hospitalized).

How Can We Recognize AI Content Online?

The ability to recognize AI content online will become a fundamental and necessary skill in the coming years if we want to navigate the internet safely and effectively.

Tips for identifying AI content:

  • In videos and images, AI-generated content often contains impossible details. Pay attention to people's hands (e.g., the number of fingers), background activity, facial expressions, and lip movements.
  • In textual content, watch for repetitive sentences, an impersonal writing style, and a lack of specific examples or facts (see the sketch after this list).
  • Be mindful of mistakes. AI sometimes makes very obvious mistakes in text that a human wouldn't, but on the other hand it often writes too flawlessly and "perfectly."
  • If someone we know in real life suddenly and unexpectedly asks us online for money, data, access, and the like, we should be skeptical and verify the request through another channel.
  • Always verify the reliability and credibility of online content and check whether the information is supported by independent sources, especially content with a strong emotional impact on us (shocking, surprising, frightening, or angering).
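
To illustrate the "repetitive sentences" tip in code, here is a minimal Python sketch that scores a text by how many of its sentence pairs are near-duplicates. The sentence splitting and the 0.8 similarity threshold are rough assumptions; a high score only hints at formulaic writing and is not proof that a text was AI-generated.

    import re
    from difflib import SequenceMatcher

    def repetition_score(text: str, threshold: float = 0.8) -> float:
        """Fraction of sentence pairs that are near-duplicates of each other.

        A rough heuristic only: human text (e.g., legal boilerplate) can
        also score high, so treat the result as one signal among many.
        """
        # Naive sentence split on ., ! or ? followed by whitespace.
        sentences = [s.strip().lower() for s in re.split(r"[.!?]+\s*", text) if s.strip()]
        if len(sentences) < 2:
            return 0.0
        pairs, similar = 0, 0
        for i in range(len(sentences)):
            for j in range(i + 1, len(sentences)):
                pairs += 1
                if SequenceMatcher(None, sentences[i], sentences[j]).ratio() >= threshold:
                    similar += 1
        return similar / pairs

    sample = ("Our product is the best choice for you. "
              "Our product is truly the best choice for you. "
              "Contact us today to learn more.")
    # One of three sentence pairs is a near-duplicate, so this prints 0.33.
    print(f"repetition score: {repetition_score(sample):.2f}")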
