
4 out of 10 Poles have encountered deepfakes. “People can't keep up”

Deepfakes, i.e. photos, audio or video content generated or manipulated by artificial intelligence, are one of the most controversial technologies of our time. They are created with advanced AI algorithms, which is why users find it increasingly difficult to tell real content from fake. The report “Disinformation through the eyes of Poles 2024” shows that 4 out of 10 surveyed people have encountered such content. This matters because deepfakes are used for manipulation, blackmail, reputation destruction, fraud and financial scams.

Photo: SB Arts Media / Shutterstock

The risk associated with deepfakes grows as access to artificial intelligence tools increases. Today, almost anyone with internet access can create videos or images depicting situations that never happened. The findings of the Digital Poland Foundation's report, prepared as part of the “Together against misinformation” initiative, indicate that more than three-quarters of respondents (77 percent) expect the scale of this phenomenon to grow over the next 10 years.

– The risk of generating deepfakes, and of falling for them, is genuinely growing every day. Interestingly, these deepfakes are also getting better from day to day, because once the technology was created, artificial intelligence now improves it on its own. This means that the human eye and ear can no longer keep up with it – Tomasz Turba, a security specialist at Sekurak.pl, tells the Newseria agency.

– What's worse, the whole world, including the criminal world, has shifted towards micropayments. So today you no longer need a supercomputer or a premium service costing a thousand dollars a day to generate such material; a subscription model and 5 dollars a month are enough, and criminals are generating these deepfakes.

Social networks are flooded with AI-generated videos and recordings of politicians, public figures and celebrities voicing controversial views or advertising fake products and financial services. Deepfakes are also used to create compromising content that can cause reputational or financial damage. Before the platforms react and label them accordingly, however, the deepfakes go viral and reach millions of recipients.

– For now, the fight against deepfakes is very uneven, because in the world of cybersecurity criminals have always been ahead of all defense mechanisms. Here, criminals no longer even have to be first, they just have to be more precise in what they create – says Tomasz Turba. – Our own vigilance should play a key role, because when we watch a video or read a piece of news, we are often unable to tell whether it was created by a human, for example a photo taken by a photographer, or by artificial intelligence.

Interestingly, tools based on artificial intelligence can also support us here.

– On the one hand, the criminal branch of AI keeps developing with deepfakes, but on the other hand we have more and more tools for detecting them. Where human perception can no longer cope, a computer AI will catch that a picture is too perfect or that something is missing. And then it will tell us: hold on, don't go there, because that is a deepfake – says the security specialist. – We still need to keep educating ourselves about online threats. We can use these tools, but they have finite computing power, while a criminal can keep getting more precise at creating material, lighting it better, and so on.

As he emphasizes, such awareness will be more effective than bans or content control, which are very difficult to enforce in the world of social media.

– We would have to introduce full control of the internet and of what people watch; only then could we control it. But that is the North Korean model of the internet, and, if you remember, the fight over ACTA was precisely about there being no censorship on the internet, so this is certainly not the right way – says Tomasz Turba.

In his opinion, however, there is a way to fight deepfakes effectively. According to the report “Disinformation through the eyes of Poles 2024”, 86 percent of respondents agree that all AI-generated information should be clearly labeled.

– It would be enough for every piece of material the user sees while scrolling to be immediately run through an engine that detects whether it is a deepfake. If within half a second the engine finds that the material is fake, it should be labeled accordingly. So what TikTok actually proposes as a solution for its own materials could work. The only question is whether all social networks would go this way, because the costs are huge – adds the Sekurak expert.
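The labeling flow the expert describes can be illustrated with a short sketch. The snippet below is only a minimal, hypothetical illustration in Python, not any platform's real pipeline: MediaItem, score_deepfake and label_if_synthetic are invented names, and the detector is a stub where a real system would run a trained classifier.

```python
# Minimal sketch of the labeling flow described above.
# All names here are hypothetical and for illustration only.
from dataclasses import dataclass


@dataclass
class MediaItem:
    media_id: str
    payload: bytes      # raw image / video / audio bytes
    label: str = ""     # empty until the detector flags the item


def score_deepfake(item: MediaItem) -> float:
    """Stand-in for an AI detector returning the probability that the item is synthetic."""
    # A real platform would run a trained classifier on item.payload here.
    return 0.0


def label_if_synthetic(item: MediaItem, threshold: float = 0.8) -> MediaItem:
    """Score the item and attach a warning label before it reaches the user's feed."""
    if score_deepfake(item) >= threshold:
        item.label = "AI-generated or manipulated content"
    return item


if __name__ == "__main__":
    clip = MediaItem(media_id="clip-123", payload=b"...")
    print(label_if_synthetic(clip).label or "no label attached")
```

In a real feed, the scoring step would have to finish within the half-second budget mentioned above for every scrolled item, which is where the huge computing costs the expert refers to come from.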

