Generative AI and Deepfakes
What is generative AI? What is a deepfake?
Generative Artificial Intelligence (genAI) is a technology that can produce novel synthetic text, images, video, and audio. Large language models (LLMs), such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama, are currently the most commonly used type of genAI. Deepfakes are AI-generated fake multimedia content, mostly videos, images, or audio, that look and sound entirely authentic. A deepfake may show a famous person saying something they never said, or place someone’s face on somebody else’s body in a video.
How can generative AI affect the information ecosystem?
The rise of genAI and deepfakes has severe implications for integrity and trust in the Canadian information ecosystem. It is now easier than ever to quickly generate thousands of posts, images, and even websites containing disinformation, bot accounts to spread them, and deepfake videos and photos to seemingly confirm them. As genAI tools become more sophisticated, it is getting harder to tell whether an article or video was created by a real person or by AI, which makes it easier to spread false information and harder to debunk it quickly.
With AI-generated content and deepfakes becoming easy to produce, everyone is vulnerable to trusting fake content and dismissing real content. For example, a deepfake video may easily convince people that a politician said something they never said, affecting their perception of that politician and potentially influencing how they vote. On the other hand, growing concern about genAI can tip over into paranoia, as people may be quick to dismiss real information simply because it could have been generated by AI.
How do you identify a deepfake? What do you do about it?
Since deepfakes and AI-generated content are designed to look real, they can be difficult to identify. However, there are a few things you can do to protect yourself from being fooled and to prevent false information from spreading.
Double-check: Always try to find additional sources before trusting information you see online. Deepfakes have made it harder to believe your eyes. If something looks too scandalous or sensational to be true, it probably is not real.
Learn the signs: Deepfake videos often contain small mistakes, such as a background that does not look quite right, anomalies in people’s faces and bodies, changes partway through the video, or a camera angle that moves in an odd way. Pay attention to these details. Here is a video guide from CBC about how to detect deepfakes.
Report it: If something fake is being passed off as real, report it to the social media platform. You should also send it to our tipline so we can investigate it further.