How can adults spot AI-generated content?

In an age where deepfakes and synthetic content are spreading rapidly, it is becoming increasingly difficult to distinguish between reality and fabrication. To navigate safely in today's media landscape, adults must develop both critical media literacy and the ability to use digital tools effectively. But how can you actually recognize AI-generated images, videos, and texts?
Characteristics of Deepfakes
MIT Media Lab's project "Detect Deepfakes" has developed public resources to help people train their intuition to detect manipulated videos. They recommend looking for eight signs: facial distortions, uneven skin on cheeks and forehead, unnatural eyes and eyebrows, incorrect reflections in glasses, odd beard or hairlines, moles or scars that suddenly disappear, abnormal blinking, and mismatches between lip movements and sound.
An article from the National Council on Aging (NCOA) points out that deepfake images often have an unnatural, glossy appearance, distorted backgrounds, and uneven skin tone. Deepfake videos can be revealed by strange blinking, missing reflections, or audio that does not match mouth movements.
See the Difference Yourself
Below is an example of a real image of actor Robert De Niro and a deepfake image. Pay attention to the details: skin texture, eyes, background, and symmetry.
Common Scams and Risks
The National Council on Aging (NCOA) warns that older adults are especially vulnerable to deepfake-based scams. These may include investment fraud, romance scams, political misinformation, extortion, or fake endorsements from celebrities—often using synthetic audio and video. Such scams can result in significant financial losses, making it essential to learn how to identify manipulations early.
Strategies for Distinguishing Facts from AI-Generated Content
Look for visual and audio cues: Use a checklist and watch for unnatural skin texture, unrealistic eyes, missing shadows, or details that change between frames.
Check sources and context: Investigate who is sharing the content and where it originates. Seek out primary sources and verify whether reputable media report the same information.
Use fact-checking tools: Online tools like TinEye and Google Reverse Image Search can help trace the original source of an image. Apps such as Deepware Scanner can help flag likely deepfake videos, though no detector is foolproof. For text, try searching exact phrases to see if the content is copied.
Be aware of emotional triggers: Scammers often exploit strong emotions like fear, anger, or excitement to manipulate audiences. Take a moment to reflect before reacting or sharing.
Limit sharing of personal information: NCOA recommends minimizing the amount of personal information shared online, as this makes it harder for scammers to create convincing AI-generated duplicates.
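For readers curious how reverse-image-search tools like TinEye match a photo against billions of others, the core idea is perceptual hashing: reducing an image to a short fingerprint that stays stable under small edits. The sketch below is a deliberately minimal illustration, assuming images are already given as 2-D lists of grayscale values; real tools decode actual image files and use far more robust algorithms.

```python
# Minimal sketch of "average hashing", the principle behind reverse image
# search. Assumption: an image is a 2-D list of grayscale values (0-255);
# production tools resize, decode files, and use sturdier hash functions.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(hash1, hash2):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(a != b for a, b in zip(hash1, hash2))

original  = [[10, 200], [30, 220]]
retouched = [[12, 198], [30, 225]]   # lightly edited copy
unrelated = [[200, 10], [220, 15]]   # different picture

h = average_hash(original)
print(hamming_distance(h, average_hash(retouched)))  # 0: near-duplicate
print(hamming_distance(h, average_hash(unrelated)))  # 4: clearly different
```

Because light retouching barely shifts which pixels sit above the mean, the fingerprint survives edits; this is why a manipulated copy of a photo can often still be traced back to its original with a reverse image search.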
Training for Adults
Critical media literacy training should include hands-on exercises where participants analyze images, videos, and texts to identify signs of manipulation. By working with real examples of deepfakes and practicing with available detection tools, learners build both awareness and skills. Training should also include discussion about how algorithms can be misused in political campaigns and how to recognize credible sources. Topics like ethics, privacy, and responsible sharing should be integrated throughout the learning process.
Conclusion
In a digital age where AI can produce highly convincing forgeries, critical media literacy is a vital skill in adult education. By learning to detect visual and audio manipulation, verify sources, use fact-checking tools, and remain mindful of emotional influence, adults can protect themselves and their communities from fraud and misinformation.
References
MIT Media Lab. Detect Deepfakes Project: Eight indicators for spotting manipulated videos. [media.mit.edu]
National Council on Aging (NCOA). Article explaining what deepfakes are, how they’re used in scams, how to identify them, and how to stay protected. [ncoa.org]