New AI Voice Technology Makes It Easy To Spread Misinformation
It has just become even easier to create misleading content, including deepfake videos.
You may have seen it: a doctored video of US President Joe Biden went viral on social media. In the video, he appeared to be attacking transgender people, but the audio was generated by AI tools that can simulate a person’s voice in just a few clicks.
These new AI tools can generate realistic audio in any person’s voice: users upload a few minutes of sample audio and type in whatever text they want it to say.
This technology was developed by ElevenLabs, a startup founded by a former Google engineer.
However, the dark side of this technology was quickly exposed when social media users began sharing AI-generated audio clips of prominent people making hateful statements. These clips could do real-world harm and add more fuel to the fire of misinformation that is already rampant on social media.
Experts warn that bad actors could use this technology to manipulate the stock market or even instigate conflicts. And it’s not just commercial tools that are the problem; free and open-source software with the same capabilities has also emerged online.
As we become more reliant on technology, we need to be more vigilant about the potential for fake content to mislead and harm us. It’s up to all of us to be critical consumers of information and to ensure that what we share is accurate and truthful.
So the next time you see a video or audio clip that seems too good (or too bad) to be true, will you be able to tell what’s real from what’s fake?