Almost Timely News: ๐️ Why Deep Fakes Are So Dangerous (2025-10-26)
Almost Timely News: 🗞️ Why Deep Fakes Are So Dangerous (2025-10-26) :: View in Browser

The Big Plug

👉 Watch my MAICON 2025 session, From Text to Video in Seconds, on AI video generation!

Content Authenticity Statement

95% of this week’s newsletter was generated by me, the human. You will see bountiful AI outputs in the video, especially in the analysis. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.

Watch This Newsletter On YouTube 📺

Click here for the video 📺 version of this newsletter on YouTube »
Click here for an MP3 audio 🎧 only version »

What’s On My Mind: Why Deep Fakes Are So Dangerous

In this week’s newsletter, let’s do a deep dive on deep fakes - and in particular, why they’re so bad. Ask anyone in the digital space who knows deep fakes exist why they’re bad, and you’ll get a high-level answer that they distort reality, but people may struggle to explain why in any level of detail. I wanted to know why, and that’s where this week’s newsletter came from.

DISCLAIMER AND WARNING: Parts of this week’s newsletter contain intentionally false information. I’ve labeled it clearly so you know what’s fake, but as you’ll read, that alone might not be sufficient. Also, I am neither a psychologist nor a neuroscientist. While I gathered and read through many papers as part of assembling this issue of the newsletter, I am by no means claiming any expertise in either domain. I can virtually guarantee mistakes in the finer details of both fields. This is a layperson’s perspective on the topic.

Part 1: How We Remember

Before we can talk about deep fakes, we have to talk about reality, and how we remember reality. To do this, we’ll need to dig into how our brains work. First and foremost, we don’t recall things the way we think we do. We THINK we remember things by simply recalling the information, the same way you open up a video on YouTube, hit play, and watch the show again. But that’s not how we remember things. Instead, our brains behave more like our AI systems do, fetching bits and pieces and reconstructing memories, similar to how generative AI digests a massive amount of past data into probabilities and reconstructs text when prompted. Our brains go through three basic stages of memory: encoding, when new information first enters the brain; consolidation, when short-term memories are converted into long-term ones; and retrieval, when we reconstruct the memory on demand.
This is why we have expressions like “rose-colored glasses” about the past, about nostalgia, about the “good old days.” We often believe the good old days were better than today - because we don’t remember what actually happened. We reconstruct our perception of what happened. What makes us different from our AI children is that AI functions solely on probabilities, whereas our memory recall is based both on probability (repetition and high frequency) and on emotion. Emotionally intense events encode memories far more durably than high-frequency events alone. That’s why some people are really good at last-minute crunches: the sense of panic creates a whole stew of neurotransmitters in their heads that sharpens their capabilities and strengthens their memory beyond what they’d manage without the pressure. For example, you might not remember your school locker combination, even though you dialed it every day, multiple times per day, for many days in a row. Yet you might remember your first crush’s phone number, even though you probably didn’t use it nearly as much, because of the emotional attachment to it. This knowledge is critical for understanding the power of deep fakes. Because we don’t recall things accurately or retrieve the actual information, and instead reconstruct it, we can reconstruct things incorrectly. In generative AI terms, we hallucinate just like our AI systems do. Worse, just like our machines, our brains are garbage in, garbage out. If we remember bad information, we will recall bad information. Finally, that last key is vital when it comes to understanding the importance of deep fakes. Repetition is important, but emotion is even more important.

Part 2: Five Key Biases

Our brains are deeply imperfect (which is part of what makes us human). As such, when it comes to memory, there are five key biases we suffer from: the Continued Influence Effect, Belief Perseverance Bias, Confirmation Bias, the Sleeper Effect, and the Illusory Truth Effect.
Each of these plays a key role in why deep fakes are so dangerous.

Continued Influence Effect (CIE)

Continued Influence Effect, or CIE, is when our memory - especially our retrieval - is still influenced by false information. Even when we know something is fake, even when we know something has been disproven, we still remember it. It’s like saying “don’t think of the color red” - even though mechanically I’m telling you NOT to do something, your brain has to do it to retrieve the context needed to not do it. Ever had a dream or a memory influenced by a movie or TV show you’ve watched, even though you know beyond a shadow of a doubt that the content was completely fictional, like watching the aliens in Independence Day blow up the White House, yet you still inadvertently recall it in other contexts? That’s CIE in a nutshell. This is why retractions are so ineffective, and why we have the expression “a lie races across the world while the truth is still putting its shoes on.”

Belief Perseverance Bias (BPB)

Belief Perseverance Bias, or BPB, is when we keep believing in something even after we have evidence that it’s been discredited. Typically, this is because beliefs are rooted in our identity - if we’ve managed to bond our identity to a belief, then even after we have rational evidence the belief is invalid, we still stand by it because it’s part of our identity. Suppose you’re a fan of Buffalo sports teams, like the Bills and the Sabres. You believe in your teams, you’re a fan of your teams, you think they’re terrific. The Buffalo sports fan is embedded in your identity. Objectively, no Buffalo team has ever won a championship in 59 years and 121 combined seasons of play. Even though evidence clearly refutes a belief in Buffalo’s sports teams, the belief perseveres.

Confirmation Bias

Confirmation bias is perhaps one of the best-known cognitive biases. In a nutshell, confirmation bias is the tendency to seek out, interpret, and recall information that supports what you already believe. If you believe in any given position, then when you evaluate information about that position, you evaluate it through the lens of what you want to see more of, and what you agree with already. If you believe that vaccines are not safe, as an example, then when you encounter news about extremely rare side effects, you reframe your memory about that news to align with your beliefs. Today’s social networks are masterful engineers of confirmation bias. Social media algorithms are amoral. That is to say, they do not have any kind of morality baked in. What they do instead is attempt to give you more of what you engage with. So if you engage with content that reinforces your beliefs, and the algorithm is trying to maximize engagement, maximize the amount of content you consume, and maximize the number of ads you see, it’s going to serve up more and more content that reinforces your beliefs, even if that content isn’t factually true. The toy sketch below shows the mechanic.
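To make that concrete, here’s a minimal, purely illustrative sketch of an engagement-maximizing feed ranker. This is my own toy example, not any real platform’s code; the post topics, affinity numbers, and function names are all invented. Notice that truthfulness never appears anywhere in the score:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topic: str
    predicted_engagement: float  # the platform's guess at clicks/shares/comments

def rank_feed(posts: list[Post], topic_affinity: dict[str, float]) -> list[Post]:
    """Order posts by affinity-weighted predicted engagement.

    Nothing in this score measures accuracy, so belief-confirming
    content outranks corrections whether or not it's true.
    """
    def score(post: Post) -> float:
        return topic_affinity.get(post.topic, 0.1) * post.predicted_engagement
    return sorted(posts, key=score, reverse=True)

# A hypothetical user who engages heavily with outrage politics
feed = rank_feed(
    [
        Post("Calm, well-sourced correction", "fact-check", 0.2),
        Post("Outrage clip confirming your view", "politics", 0.9),
    ],
    topic_affinity={"politics": 0.8, "fact-check": 0.2},
)
print([p.text for p in feed])  # the outrage clip ranks first
```

Every post you engage with nudges your affinity upward, which is the feedback loop described above: more engagement, more confirming content, stronger belief.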
Sleeper Effect

The sleeper effect is a phenomenon in which a persuasive message from a low-credibility source becomes more persuasive over time. The reason is that our brains do not recall factual information in context. They recall things that have been repeated, and things that have high emotional valence. For example, if you watch a video or read a piece of news that provokes a strong emotional reaction, and later on you read a piece of news that debunks the original highly provocative piece of information, your brain tends to remember the more emotional piece of information, even though it’s false. In a 2004 study, scientists discovered that memory for highly emotional message content decays more slowly than memory for a discounting cue, such as a disclaimer, a warning label, or outright labeling something as false. Think of it this way. Imagine you had a page out of a magazine that was filled with highly charged emotional content, and on the corner of the page was a sticky note saying that the content of the article was fake. The sleeper effect says that you’ll remember the article, particularly if you see it over and over again, and you’ll forget that the sticky note was there at all. What happens over time is that if you’re exposed to enough false information that is memorable, your brain will forget the fact that it came from an untrustworthy source. Your brain will launder it of its untrustworthy origins, leaving you with a belief that the fake information is actually true. This is why politicians lie so much. They know that even if they themselves are not trustworthy, if they expose you to a message enough, and it is emotional enough, even if it pisses you off, your brain will launder it and start to believe it’s true. Think about how dangerous that kind of politician is. The fact that they’re pissing you off is helping them rewrite your brain so that you eventually perceive the false information they’re feeding you as true.

Illusory Truth Effect

The fifth and final cognitive bias is called the illusory truth effect, which is all about frequency. The more something is repeated, the more we remember it. If you’ve ever crammed for an exam, you know the illusory truth effect and how it works. You keep repeating the same thing over and over again until you memorize it, until it gets into your brain. The underlying mechanism for this is something neuroscientists call processing fluency. When your brain receives a new piece of information for the first time, it has to figure out “where do I put this thing? How does it fit into what I already know?” - kind of like a librarian trying to figure out where to shelve a new book. After the librarian sees that book a couple more times, they know where to put it. They know what section it belongs in, and they can put it away much more easily. Your brain does exactly the same thing in the illusory truth effect. The more a piece of information is repeated, the easier it is for your brain to attach it and reinforce it to your existing memories. The first time you commute to a new job, it’s confusing: Google Maps is telling you to go one way, and you get lost. The second day, it’s a lot easier to get to work. The third day, it’s even easier. And by week two, you have been exposed to that information so frequently that it’s burned into your brain. This is where fact-checking, ironically, works against us. In order to fact-check a claim, you have to repeat the claim, which means that for every fact-checked claim, you’ve given the claim additional airtime. And that means you’re reinforcing its strength in the neural pathways of the audience. Again, this is very much like telling someone, “don’t think of an elephant.” In order for them to obey that command, they have to think of an elephant. In doing so, the strength of the memory of an elephant gets reinforced over and over again.
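Here’s a back-of-the-napkin way to see how the sleeper effect and the illusory truth effect combine. This is a toy model with entirely made-up numbers, not the 2004 study’s actual math: repetitions build strength, time erodes it, and the emotional claim is assumed to decay more slowly than the “this is false” tag attached to it.

```python
import math

def memory_strength(exposures: int, hours_elapsed: float, half_life_hours: float) -> float:
    """Toy exponential-forgetting model: strength grows with repetitions
    and halves every half_life_hours. All constants are illustrative."""
    return exposures * math.exp(-math.log(2) * hours_elapsed / half_life_hours)

# Six exposures to an emotional claim (fact-checks count as exposures too!)
claim = memory_strength(exposures=6, hours_elapsed=72, half_life_hours=48)

# One exposure to the discounting cue, assumed to decay faster
false_tag = memory_strength(exposures=1, hours_elapsed=72, half_life_hours=12)

print(f"claim: {claim:.2f} vs. 'this is false' tag: {false_tag:.2f}")
# After three days, the claim dwarfs the memory that it was ever labeled false.
```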
Part 3: The True Danger of Deep Fakes

Now that we’ve reviewed how memory works and the five major cognitive biases, it should be obvious how dangerous deep fake videos and other deep fake content are. We know that emotionally charged content helps form memories better by engaging a part of the brain called the amygdala. The amygdala talks to a part of the brain called the hippocampus, which is where short-term memories get stored before they are converted into long-term memories in that consolidation phase, phase two of memory. The more emotionally charged a piece of information is, the easier and more vividly the amygdala tags a memory in the hippocampus. Imagine you were writing a post on social media and hashtags corresponded to emotional content. If you had a piece of boring information that had no emotional content, you wouldn’t attach any hashtags to it, and you might forget about it. On the other hand, if you had a piece of highly charged content, maybe some political content, you might attach five or six or seven or eight hashtags to it. And then when you go to remember things later and you start searching for hashtags, it’s easier to find the piece of content that has all those hashtags than it is to find the piece of content that has none. (A toy sketch of this tagging idea follows at the end of this section.) And then after consolidation, every time somebody quotes that piece of information back to you, your brain has to retrieve it and reopen the memory pathways, which reinforces the information. So let’s put these different effects in order to see why deep fakes are so problematic. Let’s start with confirmation bias. If you have a confirmation bias towards a specific point of view, you’re automatically going to seek out content that reinforces your beliefs. (Social networks and social media are going to be happy to do that for you.) And the more strongly you hold those beliefs, the more likely it is that you will encounter fake information and believe it to be true. Once you are exposed to a deep fake, the continued influence effect takes hold. Your brain takes that information and encodes it, perhaps very strongly if it’s something you have an emotional response to, puts it in for consolidation, and then later retrieval. The sleeper effect takes hold, scrubbing it of context and information about whether the source is high quality or not. Then every time you’re exposed to that information again, the illusory truth effect takes hold, reinforcing it. And if someone tries to fact-check you or disclaim the video, belief perseverance bias, confirmation bias, and the illusory truth effect all make those beliefs even stronger. This means that instead of countering a fake piece of information with the truth, we’re actually reinforcing it in the minds of people who have consumed it. Once you watch a deep fake and the information enters your head, if you remember it, if it is strong enough to provoke the memory process and get consolidated into your long-term memories, the chances of you forgetting it, or remembering that it’s been discredited, are relatively low.
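As promised, here’s a minimal sketch of the hashtag analogy. It illustrates the metaphor, not actual neuroscience; the memories and tags are invented. The point is simply that a memory carrying many emotional tags has many more routes back to it:

```python
# Each "memory" carries zero or more emotional tags, like hashtags on a post.
memories: dict[str, list[str]] = {
    "Tuesday budget review meeting": [],  # boring, no emotional tags
    "deepfake clip that made you furious": ["#anger", "#fear", "#politics"],
}

def recall(cue_tags: list[str]) -> list[str]:
    """Return every memory sharing at least one tag with the cue.
    More tags on a memory means more cues can surface it."""
    return [
        memory for memory, tags in memories.items()
        if any(tag in tags for tag in cue_tags)
    ]

print(recall(["#anger"]))     # ['deepfake clip that made you furious']
print(recall(["#deadline"]))  # [] - the boring memory has no tags to find it by
```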
When OpenAI’s Sora 2 came out, I made this short video. Even with my disclaimers, the AI-generated labeling, and amusing interstitials from an episode of Star Trek: Deep Space Nine in which a character shouts that a piece of information is fake, the fact that you’re watching a fake version of me in different high-authority contexts is going to stick with you. The sleeper effect and the illusory truth effect mean that you may not remember that I’ve never actually spoken at TED or the United Nations. You may instead remember just those pieces of information, fake though they are. And even if you recall that the video itself is fake, because of the way we encode memories, because of the way we associate things, just those fake associations with things like The View or Davos reinforce mental pathways in your head about what kind of authority I may have. By hijacking those symbols of authority, the video serves to reinforce your existing beliefs about my authority.

Part 4: So What? Now What?

We’ve established how deep fakes work, how human memory works, and why they’re bad: they can create mental associations in our heads and rewrite our memories even when we know they’re fake. So what do we do about this? On a personal level, this means practicing good information hygiene by consuming sources that are known to be good and true. Spend your time with high-quality data sources and read, watch, or listen as frequently as you can. The goal is to put the illusory truth effect to work for you instead of against you by reinforcing the neural pathways in your head for true information. This means, among other things, reducing the amount of time you spend in places that are known to circulate bad information, like social media. Put yourself on a social media diet: restrict your time, and be absolutely vigilant about curating who you follow and what you see. When you see information in your feed that is less truthful, flag it. Mark it as unreliable. Mark that you don’t want to see it anymore. Report it as spam or misinformation. All of those are signals that may not necessarily do anything for the network as a whole, but will change what you see in those places. Make the algorithms work for you instead of against you. Be actively skeptical of anything that provokes a strong emotional response. If you see a video of some politician and it provokes a strong response, be skeptical of it as your first instinct. This requires a lot of self-awareness: knowing that a piece of information, like a video, is provoking an emotional response, and training yourself so that your first reaction is skepticism. Be very cautious in how you use generative AI tools yourself, particularly if you are messing around with information from your past. Today’s tools can take photos and turn them into incredibly compelling, high-quality videos of events that never happened. For example, I have an image I made of my two former cats sitting together on the couch, and the photo looks incredibly realistic. But it’s a lie, and I remind myself it’s a lie. They never got along. They didn’t like each other, they didn’t want to be around each other, and they certainly never sat together under a blanket on a couch. They were ten years apart in age - one was a senior, the other a kitten, and their lives only overlapped by a year or so. I keep the image around to remind me of them, but also as a potent reminder of how generative AI can rewrite your memories if you don’t constantly reinforce the truth to yourself.
(I also have plenty of actual photos of them.) There may be therapeutic uses for generative AI in rewriting past traumatic memories, but that is outside the scope of this newsletter and certainly not something I’m qualified to speak about. The takeaway here is to be very cautious about which of your own memories you subject to generative AI and potentially rewrite, inadvertently or purposefully. At an organizational level, be very clear about your own ethics: when you will or won’t use generated content, under what circumstances, and to what standard of truthfulness. For example, as each new video model has come out, I have experimented with taking photos and video snippets of me speaking on stage and seeing if I can extrapolate the audience from them. Today’s video models can take an existing photo or video of the audience and make them do a standing ovation. My personal ethics around AI and the Trust Insights values absolutely prohibit me from making a speaker reel with an audience giving a standing ovation if it did not actually happen. In fact, that’s our first core value: we reject deception and secrecy. We are transparent and honest. That core value means I cannot use generative AI to misrepresent the truth. That’s where I think deep fakes have the potential to cause the most damage: not in outlandish videos of a politician or celebrity saying or doing something ridiculous, but in small erosions of the truth all over the place that add up. You have to decide how much of the truth you are willing to erode for yourself, for your own benefit, for your company’s benefit, and use AI accordingly. The temptation to make things seem better than they were or are is incredibly powerful - so if you are not clear on your own values, get clear on them first. Finally, at a societal level, I tend towards a libertarian point of view. In general, I prefer to let people behave like adults rather than impose regulation on them where possible. Regulation should be reserved for the things people will do even though they know they shouldn’t. But deep fakes are one area where I believe regulation and criminal consequences are necessary, particularly anything involving politics. If someone uses the technology to create demonstrably false information with the intent of persuading citizens or changing government policy, that should come with significant criminal liability. The reason I hold this belief is because of the five biases that deep fakes hijack in our heads so well. A deep fake video of me speaking at the UN seems harmless and cute and fun on the surface, but now you know that it changes your beliefs about me, even when you know it is fake. Letting advertisers and marketers and politicians and companies run amok creating and implanting false memories and beliefs in us without regulation is incredibly dangerous, and so deep fakes are one of the things in AI we should outright ban and criminalize.

Wrapping Up

To quote Jeff Goldblum’s Doctor Ian Malcolm from Jurassic Park, “Your scientists were so preoccupied with whether or not they could that they never stopped to think about whether they should.” That perfectly describes the state of generative AI today, particularly with video. Video models have gone from a freak show two years ago (the Will Smith eating spaghetti videos) to uncanny valley territory today.
Any user with a Sora 2 account or a Veo 3 account can generate nearly any reality they like, and the tech companies are slapping watermarks on these videos that look more like official certifications and endorsements than warnings that they’re generated. As I’ve said many times, including in the faked video above, generative AI takes the good and makes it better, and takes the bad and makes it worse. People with good intentions are using these tools to create amazing works of art and compelling videos. People with bad intentions are using them to deceive populations in greater ways and at greater scale than ever before. My hope is that after you’re done reading this newsletter, you can take what I’ve explained and use it to articulate to other people why deep fakes are a problem. On the surface, they may discount deep fakes as just silly fun. But once you understand the cognitive and neuroscience effects of deep fakes, you understand just how dangerous they can be at rewriting and hijacking our brains.

How Was This Issue?

Rate this week’s newsletter issue with a single click/tap. Your feedback over time helps me figure out what content to create for you.

Here’s The Unsubscribe

It took me a while to find a convenient way to link it up, but here’s how to get to the unsubscribe. If you don’t see anything, here’s the text link to copy and paste: https://almosttimely.substack.com/action/disable_email

Share With a Friend or Colleague

If you enjoy this newsletter and want to share it with a friend/colleague, please do. Send this URL to your friend/colleague: https://www.christopherspenn.com/newsletter For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.

Advertisement: The Unofficial LinkedIn Algorithm Guide

If you’re wondering whether the LinkedIn ‘algorithm’ has changed, the entire system has changed. I refreshed the Trust Insights Unofficial LinkedIn Algorithm Guide with the latest technical papers, blog posts, and data from LinkedIn Engineering. The big news is that not only has the system changed since our last version of the paper (back in May), it’s changed MASSIVELY. It behaves very differently now because there’s all-new technology under the hood - a custom-tuned LLM - that’s very clever but focuses much more heavily on relevance than recency. In the updated guide, you’ll learn what the system is, how it works, and most important, what you should do with your profile, content, and engagement to align with the technical aspects of the system, derived from LinkedIn’s own engineering content. 👉 Here’s where to get it, free of financial cost (but with a form fill)

ICYMI: In Case You Missed It

Here’s content from the last week in case things fell through the cracks:
On The Tubes

Here’s what debuted on my YouTube channel this week:

Skill Up With Classes

These are just a few of the classes I have available over at the Trust Insights website that you can take.

Premium

Free
Advertisement: New AI Book!

In Almost Timeless, generative AI expert Christopher Penn provides the definitive playbook. Drawing on 18 months of in-the-trenches work and insights from thousands of real-world questions, Penn distills the noise into 48 foundational principles: durable mental models that give you a more permanent, strategic understanding of this transformative technology. In this book, you will learn to:
Stop feeling overwhelmed. Start leading with confidence. By the time you finish Almost Timeless, you won’t just know what to do; you will understand why you are doing it. And in an age of constant change, that understanding is the only real competitive advantage. 👉 Order your copy of Almost Timeless: 48 Foundation Principles of Generative AI today!

Get Back to Work

Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you’re looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.
Advertisement: New AI Strategy Course

Almost every AI course is the same, conceptually. They show you how to prompt, how to set things up - the cooking equivalents of how to use a blender or how to cook a dish. These are foundational skills, and while they’re good and important, you know what’s missing from all of them? How to run a restaurant successfully. That’s the big miss. We’re so focused on the how that we completely lose sight of the why and the what. This is why our new course, the AI-Ready Strategist, is different. It’s not a collection of prompting techniques or a set of recipes; it’s about why we do things with AI. AI strategy has nothing to do with prompting or the shiny object of the day; it has everything to do with extracting value from AI and avoiding preventable disasters. This course is for everyone in a decision-making capacity because it answers the questions almost every AI hype artist ignores: Why are you even considering AI in the first place? What will you do with it? If your AI strategy is the equivalent of obsessing over blenders while your steakhouse goes out of business, this is the course to get you back on course.

How to Stay in Touch

Let’s make sure we’re connected in the places it suits you best. Here’s where you can find different content:
Listen to my theme song as a new single:

Advertisement: Ukraine 🇺🇦 Humanitarian Fund

The war to free Ukraine continues. If you’d like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia’s illegal invasion needs your ongoing support. 👉 Donate today to the Ukraine Humanitarian Relief Fund »

Events I’ll Be At

Here are the public events where I’m speaking or attending. Say hi if you’re at an event also:
There are also private events that aren’t open to the public. If you’re an event organizer, let me help your event shine. Visit my speaking page for more details. Can’t be at an event? Stop by my private Slack group instead, Analytics for Marketers.

Required Disclosures

Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them. Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them. My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.

Thank You

Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness. See you next week,

Christopher S. Penn

Invite your friends and earn rewards

If you enjoy Almost Timely Newsletter, share it with your friends and earn rewards when they subscribe.
