Just in time for my talk Detecting Deepfakes: Tools and Strategies for the AI Era at SkepKon 2025, here you will find a carefully curated collection of tools, reading recommendations, and other practical resources on AI detection.
Updated June 3, 2025
This post was originally published in German.
Overview of Resource Collection
- 1. Practice & DIY
- 2. Tools for AI Detection
- 3. Research & Sources
- 4. Material from the Talk
- 5. For Further Reading & Viewing
- 6. Glossary
- 7. Recommendations
Practice & DIY
Interactive exercises, tests, and self-experiments for training and testing your ability to detect deepfakes, plus prompts that increase objectivity in conversations with chatbots.
Quizzes & Turing Tests
- Human or Not?: In this social Turing test, you chat for 2 minutes and then judge whether you spoke with a human or a bot.
- Turingtest.live: This wonderful Turing game/experiment collects data for a research project at UCSD.
- WhichFaceIsReal.com: Two faces – only one is real. Are your guesses better than a coin flip?
- Odd One Out: In this Google Arts & Culture quiz, find the AI image among several options – but watch out: four wrong guesses and you’re out!
- Real or Fake?: A quiz with 14 images and AI-generated copies – can you identify the original?
- Media Literacy @ Britannica: Another short “Real or AI” quiz including tips for spotting generated images.
Tutorials
- Code & More for Deepfake Detection: Find the latest benchmarks, scientific articles, and GitHub coding projects on deepfake detection at Papers With Code.
- Awesome Deepfake Detection: Curated directory with tools, papers, and detection models on GitHub.
- Optical Flow Analysis: Clear tutorial, including code, for optical flow analysis of videos (a minimal DIY sketch follows after this list).
- Recognizing AI Products: Detailed guide for identifying AI-generated fake articles, as often seen in social media ads.
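If you want to get a feel for what such an analysis does, here is a minimal sketch of dense optical flow between consecutive video frames using OpenCV’s Farnebäck method. The file name sample.mp4 is a placeholder, and the per-frame flow statistics are only meant as a starting point for manual inspection, not as a finished deepfake detector.

```python
# Minimal optical-flow sketch (pip install opencv-python).
# "sample.mp4" is a placeholder path; this illustrates the technique,
# it is not a production deepfake detector.
import cv2

cap = cv2.VideoCapture("sample.mp4")
ok, prev = cap.read()
if not ok:
    raise SystemExit("Could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

frame_idx = 1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow (Farnebäck): one 2D motion vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Sudden spikes or implausibly static regions are worth a closer look.
    print(f"frame {frame_idx}: mean flow magnitude {magnitude.mean():.2f}")
    prev_gray = gray
    frame_idx += 1

cap.release()
```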
Anti-Bias Prompts
Chatbots are yes-men! Use the following prompts to reduce the bias of LLMs and challenge your own cognitive biases (a short sketch of using one of them as a system prompt follows the list):
- “Respond objectively and consider this question from the perspective of a neutral observer.”
- “Take a critical opposing position to my statement and present arguments I may have overlooked.”
- “Ignore my previous viewpoint and list pros and cons independently.”
- “I don’t want confirmation of my view. Show me instead where I could be wrong.”
- “Analyze the weak points of my claims and provide a critical assessment.”
- “What alternative perspectives am I ignoring in this consideration?”
- “What would happen if I deliberately considered the opposite of my assumption to be true?”
- “What thinking errors might be behind my point of view?”
- “Present me with facts that contradict my current belief.”
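If you talk to an LLM through an API rather than a chat interface, such a prompt can be set as a system message so it applies to the whole conversation. The sketch below assumes the OpenAI Python client and the example model name gpt-4o; the same pattern works with any chat-style API.

```python
# Minimal sketch: wiring an anti-bias prompt into a chat API.
# Assumes the OpenAI Python client (pip install openai) with an API key
# in the OPENAI_API_KEY environment variable; the model name is an example.
from openai import OpenAI

client = OpenAI()

ANTI_BIAS_PROMPT = (
    "Take a critical opposing position to my statement and present "
    "arguments I may have overlooked. Do not simply confirm my view."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        {"role": "system", "content": ANTI_BIAS_PROMPT},
        {"role": "user", "content": "Deepfake detectors will soon make manual checks unnecessary."},
    ],
)
print(response.choices[0].message.content)
```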
Tools for AI Detection
A selection of field-proven tools for analyzing and identifying AI-generated images and videos.
Automatic AI Detectors
- AI Scanner for Images: Check images at the pixel level – fast, easy, free, and no login required at wasitai.com.
- AI Scanner for Videos: A reliable AI detector specifically for videos (registration required).
Manual Image & Video Analysis
- Online EXIF Tool: Extract all metadata from a file.
- ExifTool by Phil Harvey: Software for reading and editing metadata.
- Video Splitter: Break videos down into individual frames for intra-frame analysis (see the sketch after this list).
- Reverse Image Search: Use TinEye’s reverse image search to find out when and where an image first appeared online.
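As a starting point for your own analysis, here is a minimal sketch that reads EXIF metadata with Pillow and splits a video into individual frames with OpenCV; photo.jpg and clip.mp4 are placeholder file names. Keep in mind the caveat from the talk: metadata can be forged just as easily as it can be read.

```python
# Minimal sketch: EXIF extraction and frame splitting for manual analysis.
# Requires Pillow and OpenCV (pip install pillow opencv-python).
# "photo.jpg" and "clip.mp4" are placeholder file names.
import os
import cv2
from PIL import Image
from PIL.ExifTags import TAGS

# 1) Read EXIF metadata (note: metadata can be stripped or forged).
with Image.open("photo.jpg") as img:
    for tag_id, value in img.getexif().items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# 2) Split a video into individual frames for intra-frame analysis.
os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("clip.mp4")
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join("frames", f"frame_{idx:05d}.png"), frame)
    idx += 1
cap.release()
print(f"Extracted {idx} frames to ./frames")
```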
Research & Sources
The following studies and data sources provide well-founded insights into current research on deepfake detection, AI bias, and detection methods.
Scientific Studies
Croitoru, F., Hiji, A.-I., Hondru, V., Ristea, N.C., Irofti, P., Popescu, M., Rusu, C., Ionescu, R.T., Khan, F.S. & Shah, M. (2024). Deepfake Media Generation and Detection in the Generative AI Era: A Survey and Outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 50(1). https://doi.org/10.48550/arXiv.2411.19537 | Read here
DiResta, R. & Goldstein, J.A. (2024). How spammers and scammers leverage AI-generated images on Facebook for audience growth. Harvard Kennedy School Misinformation Review, 5(4). https://doi.org/10.37016/mr-2020-151 | Read here
Elkhatat, A.M., Elsaid, K. & Almeer, S. (2023). Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. International Journal for Educational Integrity, 19(1). https://doi.org/10.1007/s40979-023-00140-5 | Read here
Frank, J., Herbert, F., Ricker, J., Schönherr, L., Eisenhofer, T., Fischer, A., Dürmuth, M. & Holz, T. (2024). A Representative Study on Human Detection of Artificially Generated Media Across Countries. 2024 IEEE Symposium on Security and Privacy (SP). https://doi.org/10.1109/SP54263.2024.00159 | Read here
Le, B.M., Kim, J., Woo, S.S., Moore, K., Abuadbba, A. & Tariq, S. (2025). SoK: Systematization and Benchmarking of Deepfake Detectors in a Unified Framework. Preprint accepted at the IEEE European Symposium on Security and Privacy 2025. https://doi.org/10.48550/arXiv.2401.04364 | Read here
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E. & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://doi.org/10.48550/arXiv.2304.02819 | Read here
Stroebel, L., Llewellyn, M., Hartley, T., Ip, T.S. & Ahmed, M. (2023). A systematic literature review on the effectiveness of deepfake detection techniques. Journal of Cyber Security Technology, 7(19), 83-113. https://doi.org/10.1080/23742917.2023.2192888 | Read here
Wang, T., Liao, X., Chow, K.P., Lin, X. & Wang, Y. (2024). Deepfake Detection: A Comprehensive Survey from the Reliability Perspective. ACM Computing Surveys, 57(3), 1-35. https://doi.org/10.1145/3699710 | Read here
Data Sources
- AI vs. Human Capabilities: Visualize the test scores of AI systems in various domains and compare them to human performance.
- Tracking AI: Stay up to date on how the IQ of different AI systems is evolving.
- OpenAlex: An open index of scholarly works and their metadata.
- Market Projection “AI Girlfriend Apps”: Forecast projecting over $10 billion in revenue from AI partner simulators by 2030.
Material from the Talk
Here you will find media and examples from my talk (and a few that didn’t make it due to time constraints).
Generated Images & Videos
- Skeptiker Fake Cover
- Impressionist Artwork
- Spidercat Video
- “Schwurbler-Man” Action Figure
News & Social Media
- Deepfake Scam in Italy: Report on a deepfake attack in Italy in which Giorgio Armani and other companies received a fake call from a supposed defense minister.
- Kidnapping Scam with AI Voice: Example of an AI scam in which relatives’ voices are cloned for fake calls.
- CEO Scam with Deepfakes: A company transfers $25 million after a video call with a deepfaked CFO.
- Fake Brad Pitt Scam: A French woman loses over €800,000 to online scammers posing as the actor.
- GWUP Facebook Post: A small AI-generated “blooper” on the GWUP Facebook page.
- Egg-Jesus on Reddit: Don’t worry, Egg Jesus isn’t real – he can’t hurt you!
- Dead Internet Theory on Reddit: Particularly interesting is the comment by u/richdrich.
- Epistemic Uncertainty on Reddit: r/ChatGPT thinks a photo is AI-generated when it’s actually real.
Quotes
Sound bites from my talk – to continue the conversation, contact me here.
Don’t rely on higher authorities to decide for you what is real and what isn’t.
The best way to avoid getting involved in deepfake scams is essentially to post as few selfies as possible.
Sure, unlocking your phone with your fingerprint is convenient—but if your biometric data suddenly ends up for sale on the dark web, that’s highly inconvenient.
Chatbots tend to tell users exactly what they want to hear, making them popular companions. Last year, billions were generated with various ‘AI girlfriend’ apps, and the market is predicted to grow sharply.
Scams of all kinds now have a new technological dimension, enabling them to reach an entirely new level.
Metadata analysis has a critical flaw: essentially, you can use the same tools that analyze data to insert arbitrary data as well.
Technically, it wouldn’t be a problem today to track the location of every citizen in a city in real time using AI-assisted automatic facial recognition.
A fundamental issue in AI detection: there is no universal method. Our best chances lie in adopting a mixed-method approach.
If we always rely on others to protect us from misinformation, how can we ever develop our own ability to discern what is real from what isn’t?
Yes, the internet is full of nonsense. And generative AI amplifies this nonsense exponentially. But that’s inherent in the nature of the internet as an open, largely uncensored discourse space.
Photos of cute houses getting thousands of likes on social media appear harmless at first glance. The problem is: the interaction farms that post such content often have ulterior motives.
We need to be careful that in the age of highly intelligent machines, we don’t develop a collective inferiority complex.
Slides
Here you can download the slides from my presentation as a PDF.
Thanks to Nicola Di Tinco for allowing me to use his photos!
For Further Reading & Viewing
Deepen your knowledge around AI and deepfake detection with these recommended articles and videos, as well as handpicked entries from my #SingularityLoadingBar.
Reading Recommendations
- How AI Image Recognition Works: A clear guide with AI image recognition basics.
- Technical Approaches to Deepfake Detection: Fundamentals clearly presented by Germany’s Federal Agency for Civic Education.
- Don’t Date Robots!: Why you should think twice before signing the T&Cs of an “AI Girlfriend” app.
- Are AI Detectors Reliable?: Guest post on my blog by editor Merle-Sophie Lösing.
- Artificial Emotionality?: Thought-provoking ideas about the WWW in the age of AI.
- Misinformation, AI & Tyranny: Piece on detecting misinformation via AI with fascinating examples at the Bulletin of the Atomic Scientists.
- Deepfake Report 2024: Overview of the impact of deepfake scams in 2024.
Videos
- “Deepfakes – The Threat to Truth”: The deepfake talk by André Wolf at SkepKon 2024.
- South Park – Deep Learning: South Park’s humorous episode on the rise of LLMs (2023).
- AI & Media Critique with Nikil Mukerji: Three conversations of varying lengths about deepfakes, skepticism, cognitive dissonance, and many other topics ahead of SkepKon 2025.
“Singularity Loading Bar” Series
- The Rise of AI Scams: Deciphering Reality in a World of Deepfakes
- ChatGPT is Now Smarter Than 90% of the Population
- Dead Internet Theory: Is the Web Dying?
- ChatGPT, Gender Bias, and the Nuclear Apocalypse
- “WHEN WILL I GET MY ROBOT?!”
- Use of the Word ‘Tapestry’ in Web News More Than Doubled Last Year
- AI Boosts Human Performance by Another 40%: Who Will Profit?
- How I Learned to Stop Worrying and Love the AI Arms Race
- Productivity Explosion – Singularity Loading Bar #1
- ‘We’ll Know We Have AGI When >50% of the GDP is Generated by AI’ – AGI Talk with physicist and former NASA engineer Anthony Scondary.
- ‘Prepare for the Earliest Possible AGI Deployment Scenario’ – AGI Talk with communication scientist Jen Rosiere Reynolds.
- ‘Advanced AI should be treated similar to Weapons of Mass Destruction’ – AGI Talk with legal and political scientist Demetrius Floudas.
Glossary
A compact overview of key terms from the discourse on AI and deepfakes – clearly explained.
- AI Bias: Prejudice or distortion in AI results, caused by unbalanced training data or algorithmic errors.
- AI Slop: Colloquial term for poor, absurd, or obviously flawed AI-generated content.
- Deep Learning: An AI method based on neural networks that independently processes large amounts of data and can recognize patterns; frequently used for image and speech recognition.
- Deepfake: Realistic-looking but fake images, videos, or audio generated using AI.
- Epistemic Uncertainty: Uncertainty about whether information is correct or credible.
- Generative AI: Artificial intelligence that generates new content such as images, texts, or videos based on training data.
- Intra-Frame Analysis: The breakdown of videos into individual frames to identify anomalies or inconsistencies.
- LLM (Large Language Model): AI models trained on massive amounts of text data that can understand, generate, and analyze language.
- Metadata (EXIF): Data that can provide information about the origin and editing steps of digital content.
- Optical Flow Analysis: Analysis of motion and changes between consecutive video frames, e.g., to uncover deepfakes.
- Prompt Engineering: The deliberate formulation of inputs (prompts) to obtain optimal responses or results from AI systems.
- Singularity: A hypothetical point in time when AI surpasses human intelligence and accelerates technological development autonomously, making its consequences unforeseeable for humans.
- Turing Test: A test to determine whether a machine possesses human-like intelligence; if people cannot tell in a dialogue whether they are communicating with a human or a machine, the test is considered passed.
- Zero-Day Deepfake: A deepfake created using previously unknown AI technologies, making it undetectable by standard detectors.
Recommendations
Here are five recommendations for the AI era from my talk:
Recommendation #1: Don’t Provide Training Data
Deepfake scams rely on training data. The more audio and video of you floating around online, the easier it becomes to replicate your face, your voice, and other traits.
Recommendation #2: Become a Data Privacy Advocate
AI-driven data collection could usher in a range of dystopian scenarios, such as automated facial recognition in public spaces. Push back against overreaching “security measures.” Protect your data. Guard your privacy.
Recommendation #3: Don’t Try to Communicate with People You Don’t Know Exist
Over half of all internet traffic already comes from bots. Don’t waste time and energy arguing with LLM-run accounts. And if something really matters, speak in person. (Quote from /u/richdrich – thx)
Recommendation #4: Program AI to Disagree With You
Chatbots are yes-men! To use LLMs constructively, you should intentionally reduce their submissive behavior. Ideas for suitable prompts can be found here.
Recommendation #5: Check Facts Yourself
Don’t rely on fact-checkers or a “Ministry of Truth” to decide what’s true or false. As responsible individuals, we should strengthen our ability to independently evaluate information—especially in times of significant epistemic uncertainty.