{"id":1279,"date":"2025-05-22T11:00:00","date_gmt":"2025-05-22T11:00:00","guid":{"rendered":"https:\/\/internship.infoskaters.com\/blog\/2025\/05\/22\/debunked-by-ai-the-future-of-misinformation-on-social\/"},"modified":"2025-05-22T11:00:00","modified_gmt":"2025-05-22T11:00:00","slug":"debunked-by-ai-the-future-of-misinformation-on-social","status":"publish","type":"post","link":"https:\/\/internship.infoskaters.com\/blog\/2025\/05\/22\/debunked-by-ai-the-future-of-misinformation-on-social\/","title":{"rendered":"Debunked by AI: The future of misinformation on social"},"content":{"rendered":"<p>Ethan Mollick, professor of management at Wharton Business School, has a simple benchmark for tracking the progress of AI\u2019s image generation capabilities: \u201c<a href=\"https:\/\/www.oneusefulthing.org\/p\/change-blindness\">Otter on a plane using wifi<\/a>.\u201d<\/p>\n<p>Mollick uses that prompt to create images of \u2026 an otter using Wi-Fi on an airplane. Here are his results from a generative AI image tool around November 2022.<\/p>\n\n<p><a href=\"https:\/\/www.oneusefulthing.org\/p\/change-blindness\"><em>Source<\/em><\/a><\/p>\n<p>And here is his result in August 2024.<\/p>\n\n<p><a href=\"https:\/\/www.oneusefulthing.org\/p\/change-blindness\"><em>Source<\/em><\/a><\/p>\n<p>AI image and video creation have come a <em>long<\/em> way in a <em>short<\/em> time. With access to the right tools and resources, you can manufacture a video in hours (or even minutes) that would\u2019ve otherwise taken days with a creative team. AI can help almost anybody create polished visual content that feels real \u2014 even if it isn\u2019t.<\/p>\n<p>Of course, AI is only a tool. 
And like any tool, it reflects the intent of the person wielding it.<\/p>\n<p>For every aerial otter enthusiast, there\u2019s someone else creating deepfakes of presidential candidates. And it\u2019s not only visuals: Models can generate persuasive articles in bulk, clone human voices, and create entire fake social media accounts. Misinformation at scale used to take serious operations, time, and money. Now, anyone with a decent internet connection can manufacture the truth.<\/p>\n<p>In a world where AI can quickly generate polished content at scale, social media becomes the perfect delivery system. And <a href=\"https:\/\/blog.hubspot.com\/marketing\/ai-social-media-strategy\">AI&#8217;s impact on social media<\/a> can&#8217;t be ignored.<\/p>\n<p>Misinformation is no longer just about low-effort memes lost in the dark corners of the web. Slick, personalized, emotionally charged AI content is misinformation\u2019s future. To understand the implications, let\u2019s dive deeper into social media misinformation and AI\u2019s role on both sides of the misinformation fence.<\/p>\n<h2>Social Media Misinformation Today<\/h2>\n<h3>What is misinformation?<\/h3>\n<p>Before I begin, I should note how I\u2019ll discuss the term \u201cmisinformation.\u201d Technically speaking, this issue has a few different flavors:<\/p>\n<p><strong>Misinformation <\/strong>is false information shared without the intent to deceive. It\u2019s usually spread accidentally because people believe it\u2019s true. When your uncle shares a fake news story on Facebook, that\u2019s misinformation. <\/p>\n<p><strong>Disinformation <\/strong>is false information shared deliberately to mislead, manipulate, or harm a person or persons. Its purpose is often political, social, or financial gain. Think hostile state actors or troll farms. 
<\/p>\n<p><strong>Malinformation <\/strong>is when someone shares true information intending to cause harm, often by taking it out of context. It\u2019s a real story used maliciously. For example, someone leaking private emails to smear a public figure is malinformation. <\/p>\n<p>For our purposes, I\u2019ll focus on misinformation as much as possible and note the distinction where it matters.<\/p>\n<h3>Social Media Misinformation: A Brief History<\/h3>\n<p>The fact that we need these distinctions hints at the scope and scale of social media misinformation today. False or inaccurate printed content has existed since the Gutenberg printing press.<\/p>\n<p>The advent of newspapers also brought \u201cfake news\u201d and hoaxes \u2014 one of my favorites being <a href=\"https:\/\/www.history.com\/this-day-in-history\/august-25\/the-great-moon-hoax\">The Great Moon Hoax of 1835<\/a>, a series of fake articles in the New York Sun covering the \u201cdiscovery\u201d of life on the Moon.<\/p>\n<p>Misinformation has followed every medium \u2014 newsprint, radio, television. But the internet? Two-way communication on the World Wide Web has helped misinformation like \u201c<a href=\"https:\/\/www.bbc.com\/news\/blogs-trending-42724320\">fake news<\/a>\u201d proliferate.<\/p>\n<p>Once users could <em>create<\/em> content online \u2014 not just consume it \u2014 the door opened to an almost limitless supply of misinformation. And as social media platforms became dominant, that supply didn\u2019t just grow; it became <em>incentivized<\/em>.<\/p>\n<h3>News on Social Media<\/h3>\n<p>Today, <a href=\"https:\/\/www.pewresearch.org\/short-reads\/2021\/01\/12\/more-than-eight-in-ten-americans-get-news-from-digital-devices\/\">86% of Americans<\/a> get their news from digital devices; information sits in their palms, awaiting engagement. 
Ironically, the more accessible information becomes, the less we seem to trust it \u2014 <a href=\"https:\/\/news.gallup.com\/poll\/512861\/media-confidence-matches-2016-record-low.aspx\">especially our news<\/a>.<\/p>\n<p>Social media has only exacerbated these challenges. For one, social media platforms have become primary news sources. The <a href=\"https:\/\/reutersinstitute.politics.ox.ac.uk\/digital-news-report\/2024\/dnr-executive-summary\">2024 Digital News Report<\/a> from Reuters &amp; Oxford found:<\/p>\n<ul>\n<li>News use has fragmented, with six networks reaching significant global populations.<\/li>\n<li>YouTube is still the most popular, followed by WhatsApp, TikTok, and X\/Twitter.<\/li>\n<li>Short news videos are increasingly popular, with 66% of respondents watching them each week \u2014 and 72% of that consumption happening on-platform.<\/li>\n<li>More people worry about what is real or fake online: 59% of global respondents are worried, including 72% of Americans.<\/li>\n<li>TikTok and X\/Twitter are cited for the highest levels of distrust, with misinformation and conspiracy theories proliferating more often on these platforms.<\/li>\n<\/ul>\n<p>The more we rely on social media platforms for news, the more their algorithms prioritize engagement over accuracy in the race to keep us scrolling. Creators on these platforms are then incentivized to produce content that captures attention, engagement \u2014 <a href=\"https:\/\/democratic-erosion.org\/2024\/12\/03\/fake-news-for-profit-the-disinformation-goldrush\/\">and dollars<\/a>.<\/p>\n<p>And if the goal is engagement, not accuracy, why limit yourself to real news? When \u201c<a href=\"https:\/\/www.theatlantic.com\/magazine\/archive\/2022\/05\/social-media-democracy-trust-babel\/629369\/\">outrage is the key to virality<\/a>,\u201d as social psychologist Jonathan Haidt says, and virality leads to rewards, you do whatever it takes to go viral.<\/p>\n<p>And it works, as the data shows. 
MIT research shows <a href=\"https:\/\/www.science.org\/content\/article\/fake-news-spreads-faster-true-news-twitter-thanks-people-not-bots\">fake news can spread up to ten times faster<\/a> than true news on platforms like X\/Twitter. A story need not be <em>true<\/em> to be <em>interesting<\/em>, and in an attention economy, interesting wins.<\/p>\n<p>Mind you, misinformation is often unintentional. And the reward systems these platforms offer to <em>users<\/em> <a href=\"https:\/\/insights.som.yale.edu\/insights\/how-social-media-rewards-misinformation\">encourage sharing interesting content<\/a> regardless of veracity. Your uncle may not know if an article is true, but if sharing it gets him twice as much engagement on Facebook, there\u2019s a good chance he pushes that button.<\/p>\n<p>But now, it\u2019s not just humans spreading falsehoods. Generative AI\u2019s ascendance is fueling the fire \u2014 revving up a powerful misinformation engine and making it harder than ever to tell what\u2019s real and what isn\u2019t.<\/p>\n<h2>AI Can Create Misinformation, Too<\/h2>\n<p>Generative AI tools, with broad access and easily manipulated prompts, <a href=\"https:\/\/www.calpoly.edu\/news\/ask-expert-how-has-ai-changed-misinformation-and-what-does-mean-consumers\">expand creative powers<\/a> to nearly <em>anybody<\/em> with a fast enough internet connection.<\/p>\n<p>So far, the ability to manufacture fake images and videos is AI\u2019s greatest contribution to misinformation proliferation. Common offenders include \u201c<a href=\"https:\/\/blog.hubspot.com\/marketing\/everything-to-know-about-deepfakes\">deepfakes<\/a>,\u201d AI-generated multimedia used to impersonate someone or represent a fictitious event. 
Some can be funny; others, damaging.<\/p>\n<p>For example:<\/p>\n<ul>\n<li>The \u201c<a href=\"https:\/\/www.theverge.com\/2023\/3\/27\/23657927\/ai-pope-image-fake-midjourney-computer-generated-aesthetic\">swagged-out Pope<\/a>,\u201d with images of Pope Francis in a puffy jacket.<\/li>\n<li>Russian state-sponsored <a href=\"https:\/\/time.com\/7095506\/russia-disinformation-us-election-essay\/\">fake news sites<\/a> mimicking The Washington Post and Fox News to disseminate AI-generated misinformation.<\/li>\n<li>Drake\u2019s \u201c<a href=\"https:\/\/www.nbcnews.com\/pop-culture\/pop-culture-news\/drake-pulls-taylor-made-freestyle-tupac-estate-threatens-action-appare-rcna149592\">Taylor Made Freestyle<\/a>,\u201d which used deepfakes of Tupac Shakur and Snoop Dogg. Drake removed the song from his social media after the Shakur estate sent a cease-and-desist letter.<\/li>\n<li>A <a href=\"https:\/\/www.npr.org\/2024\/05\/23\/nx-s1-4977582\/fcc-ai-deepfake-robocall-biden-new-hampshire-political-operative\">campaign robocall<\/a> to New Hampshire residents using a deepfake of President Biden. The consultant behind the robocall was assessed a $6 million fine by the FCC and was indicted on criminal charges.<\/li>\n<\/ul>\n<p>Organizations can also use AI copywriters to mass-produce thousands of fake articles. <a href=\"https:\/\/theconversation.com\/how-ai-bots-spread-misinformation-online-and-undermine-democratic-politics-234915\">AI bots<\/a> can share those articles and simulate engagement at scale. This includes auto-liking posts, generating fake comments, and amplifying the content to trick algorithms into prioritizing it.<\/p>\n<p>One often-cited prediction suggests that by 2026, up to <a href=\"https:\/\/thelivinglib.org\/experts-90-of-online-content-will-be-ai-generated-by-2026\/\">90% of online content<\/a> could be \u201csynthetically generated\u201d \u2014 meaning created or heavily shaped by AI. 
I feel that the number is inflated, but the trend line is real: content creation is becoming faster, cheaper, and less human-driven.<\/p>\n<p>That said, I\u2019ve also found that some fears over AI misinformation\u2019s effect on real life could be overblown. Ahead of the 2024 U.S. presidential election, four out of five Americans had some level of <a href=\"https:\/\/misinforeview.hks.harvard.edu\/article\/the-origin-of-public-concerns-over-ai-supercharging-misinformation-in-the-2024-u-s-presidential-election\/\">concern with AI<\/a> spreading misinformation.<\/p>\n<p>Yet, despite efforts from foreign actors and deepfakes like the New Hampshire robocall, <a href=\"https:\/\/time.com\/7131271\/ai-2024-elections\/\">AI\u2019s impact ended up muted<\/a>. While technological advances could amplify AI\u2019s influence in future elections, this result shows the limitations of AI-driven misinformation in the current technological climate.<\/p>\n<p>And from a brand safety perspective, marketers aren\u2019t panicking either \u2014 at least not when using established social media platforms. Our own research found that marketers felt most comfortable with Facebook, YouTube, and Instagram as safe environments for their brands. While AI-generated misinformation makes noise in political and academic circles, many marketing teams remain somewhat confident.<\/p>\n<p>So if AI-driven misinformation isn\u2019t swaying elections or bothering marketers (yet), where does that leave us? These AI tools are evolving, as are the tactics. Which raises the question: Can AI fight the fire it helped light?<\/p>\n<h2>But \u2026 AI Can Also Be the Solution<\/h2>\n<p>For years, <a href=\"https:\/\/blog.hubspot.com\/marketing\/google-approach-to-disinformation\">search engines like Google<\/a> have tried to fend off the spread of misinformation. Many news sources also put misinformation management front and center. 
For example, Google News has a \u201cFact Check\u201d section highlighting debunked claims. And while automation and bots are helping, the effort faces an uphill battle in the Age of AI.<\/p>\n\n<p>What AI unlocks is scale. While generative AI can create misinformation, it can detect, flag, and remove that content just as effectively. AI-generated content is <a href=\"https:\/\/today.umd.edu\/ai-generated-misinformation-is-everywhere-iding-it-may-be-harder-than-you-think\">becoming more realistic<\/a> and harder for humans to spot, which means scalable AI countermeasures become essential. That\u2019s true for <a href=\"https:\/\/www.weforum.org\/stories\/2024\/06\/ai-combat-online-misinformation-disinformation\/\">protecting public trust<\/a> and brand reputation.<\/p>\n<p>Marketers are caught in the middle of an AI arms race. They\u2019re trying to use <a href=\"https:\/\/blog.hubspot.com\/marketing\/using-ai-to-get-your-business-branding-right-my-favorite-tips-and-tools\">AI in their business branding<\/a> to help them do their jobs faster and better. But AI-powered misinformation can negatively affect brand credibility, platform visibility, and consumer loyalty. In short, marketers need help.<\/p>\n<p><strong>Here are some organizations on the front lines of that fight, using AI to rein in misinformation.<\/strong><\/p>\n<h3>Cyabra<\/h3>\n<p><a href=\"https:\/\/cyabra.com\/\">Cyabra<\/a> focuses on detecting fake accounts, deepfakes, and coordinated disinformation campaigns. Cyabra\u2019s AI analyzes signals like content authenticity, network patterns, and account behavior across platforms to flag false narratives early.<\/p>\n\n<p><a href=\"https:\/\/cyabra.com\/\"><em>Source<\/em><\/a><\/p>\n<p>Fake profiles can pop up and push misleading online narratives with breathtaking speed. 
If your brand is monitoring online risk and sentiment, a tool like Cyabra can keep pace with the spread of misinformation.<\/p>\n<h3>Logically<\/h3>\n<p><a href=\"https:\/\/logically.ai\/\">Logically<\/a> pairs AI with human fact-checkers to monitor, analyze, and debunk misinformation. Its Logically Intelligent (LI) platform helps governments, nonprofits, and media outlets track misinformation\u2019s origins and spread across social media.<\/p>\n\n<p><a href=\"https:\/\/logically.ai\/\"><em>Source<\/em><\/a><\/p>\n<p>For marketers and communicators, Logically can offer an early-warning system for false narratives around their brand, industry, or audience.<\/p>\n<h3>Reality Defender<\/h3>\n<p><a href=\"https:\/\/www.realitydefender.com\/\">Reality Defender<\/a> uses machine learning to scan digital media for signs of manipulation, like synthetic voice or video content or AI-generated faces. Few tools I\u2019ve found offer this kind of proactive detection, which lets you catch deepfakes before they go viral.<\/p>\n\n<p><a href=\"https:\/\/www.realitydefender.com\/\"><em>Source<\/em><\/a><\/p>\n<p>This kind of early detection can help brands protect their campaigns, spokespeople, or public-facing content from synthetic manipulation.<\/p>\n<h3>Debunk.org<\/h3>\n<p><a href=\"http:\/\/debunk.org\/\">Debunk.org<\/a> blends AI-driven web monitoring with human analysis to detect disinformation across more than 2,500 online domains in over 25 languages. It tracks trending narratives and misleading headlines, then publishes reports countering emerging falsehoods.<\/p>\n\n<p><a href=\"http:\/\/debunk.org\/\"><em>Source<\/em><\/a><\/p>\n<p>Global brands will find Debunk.org especially helpful, given the tool\u2019s multilingual coverage. You can navigate international markets and regional misinformation spikes more intelligently.<\/p>\n<p>Consumers are also getting AI-powered support. 
For example, TikTok now automatically <a href=\"https:\/\/newsroom.tiktok.com\/en-us\/partnering-with-our-industry-to-advance-ai-transparency-and-literacy\">labels AI-generated content<\/a> thanks to a partnership with <a href=\"https:\/\/c2pa.org\/\">The Coalition for Content Provenance and Authenticity<\/a> (C2PA) and its metadata tools.<\/p>\n<p>And with Google investing heavily in its Search Generative Experience, the company includes an \u201c<a href=\"https:\/\/www.humanlevel.com\/en\/blog\/seo\/impact-of-google-sge-search-generative-experience-on-seo\">About this result<\/a>\u201d panel in Search to help users assess the credibility of its responses.<\/p>\n<p>As AI advances, so too will the tactics used to deceive, and the tools designed to stop them. What\u2019s around the AI river bend? Let\u2019s look at where misinformation could head in the Age of AI \u2014 and what experts are already seeing.<\/p>\n<h2>What We Can Expect: Misinformation in the Age of AI<\/h2>\n<h3>Emotional Manipulation and \u201cFake Influencers\u201d<\/h3>\n<p>According to <a href=\"https:\/\/www.linkedin.com\/in\/paul-demott\/\">Paul DeMott<\/a>, CTO of Helium SEO, the most dangerous misinformation tactics may be the ones that <em>don\u2019t feel<\/em> like misinformation.<\/p>\n<p>\u201cAs AI gets better, some subtle ways misinformation spreads are slipping under the radar. It&#8217;s not always about fake news articles; AI can create believable fake profiles on social media that slowly push biased info,\u201d he said. \u201cResearchers might not be paying enough attention to how these fake accounts work to influence people over time.\u201d<\/p>\n\n<p>DeMott sees the issue extending beyond fake people into the message\u2019s emotional design.<\/p>\n<p>\u201cOne thing that could make it harder to spot misinformation is how AI can target specific emotions. 
AI can create messages that prey on people&#8217;s fears or desires, making them less likely to question what they are seeing,\u201d he said.<\/p>\n<p>He believes the next wave of misinformation solutions must match AI\u2019s budding emotional awareness with detection systems ready for subtext.<\/p>\n<p>\u201cTo counter this, we might need to look at AI solutions that can detect these subtle emotional cues in misinformation. We can use AI to analyze patterns in how misinformation spreads and identify accounts that are likely to be involved,\u201d said DeMott.<\/p>\n<p>\u201cIt&#8217;s a constant cat-and-mouse game, but by staying ahead of these evolving tactics, we have a shot at keeping the information landscape a bit cleaner.\u201d<\/p>\n<h3>Hyper-Personalization and Psychological Biases<\/h3>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/kristietse\/\">Kristie Tse<\/a>, a licensed psychotherapist and founder of Uncover Mental Health Counseling, sees the danger not only in the tech but also in the psychology behind why misinformation works.<\/p>\n<p>\u201cOne emerging misinformation tactic that&#8217;s being underestimated is leveraging highly personalized, AI-generated content to manipulate beliefs or opinions,\u201d she said.<\/p>\n<p>\u201cWith AI becoming increasingly sophisticated, these tailored messages can feel authentic and resonate deeply with individuals, making them more effective at spreading falsehoods.\u201d<\/p>\n<p>Tse explains how misinformation hijacks humans\u2019 emotional wiring, leading to challenges like the speed of spread.<\/p>\n<p>\u201cThe speed at which misinformation spreads is often faster than our ability to fact-check and correct it, partly because it taps into strong emotional responses \u2014 like fear or outrage \u2014 that bypass critical thinking,\u201d she said. \u201cPsychological factors, such as confirmation bias, play a significant role. 
People are more likely to believe and share misinformation that aligns with their existing beliefs, making it harder to counteract.\u201d<\/p>\n\n<p>But AI could help us if we build the right tools.<\/p>\n<p>\u201cOn the solution side, we might be overlooking the potential for AI to create tools that proactively detect and counter misinformation in real-time before it goes viral,\u201d said Tse.<\/p>\n<p>\u201cFor example, AI could flag manipulated content, suggest reliable sources, or even simulate a debate to highlight contradictory evidence. However, these solutions need to be user-friendly and widely accessible to truly make an impact.\u201d<\/p>\n<h3>AI Ecosystems That Reinforce Biases<\/h3>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/jamesefrancis\/\">James Francis<\/a>, CEO of Artificial Integrity, warns we\u2019re focusing too much on content moderation and not enough on <em>context manipulation<\/em>.<\/p>\n<p>\u201cWe\u2019re not just dealing with fake articles or deepfakes anymore. We\u2019re dealing with entire ecosystems of influence built on machine-generated content that feels real, speaks directly to our emotions, and reinforces what we already believe,\u201d he said.<\/p>\n<p>Francis notes that people usually fall for lies because the content feels emotionally right.<\/p>\n<p>\u201cWhat worries me most isn\u2019t the technology \u2014 it\u2019s the psychology behind it. People don\u2019t fall for lies because they\u2019re gullible. They fall for them because the content feels familiar, comfortable, and emotionally satisfying,\u201d he said. \u201cAI can now mimic that familiarity with incredible precision.\u201d<\/p>\n\n<p>With such an ecosystem in play, he believes the real challenge isn\u2019t removing falsehoods but empowering people to stop and think.<\/p>\n<p>\u201cIf we want to push back, we need more than just filters and fact-checkers. We need to build systems that encourage digital self-awareness,\u201d he said. 
\u201cTools that don\u2019t just say \u2018this is false,\u2019 but that nudge users to pause, to question, to think. I believe AI can help there, too \u2014 if we design it with intention. The truth doesn\u2019t need to shout. It just needs a fair shot at being heard.\u201d<\/p>\n<h3>Synthetic Echo Chambers<\/h3>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/rob-gold-0043603\/\">Rob Gold<\/a>, VP of marketing communications at Intermedia, raises the alarm on one of AI\u2019s more insidious abilities: creating networks of fake credibility.<\/p>\n<p>\u201cIt&#8217;s not just a fake or misinformed article, but the potential for AI to manufacture the illusion of academic or expert consensus by building large networks of interconnected fake sources,\u201d he said.<\/p>\n<p>Gold shares that AI could mimic credibility by creating articles, studies, posts \u2014 even Reddit threads \u2014 fooling users and search engines.<\/p>\n\n<p>\u201cIt wouldn&#8217;t be hard at all to build a strong, fake echo chamber supporting a false story. 
It tricks us because we tend to trust information that seems backed up by many sources, and AI makes scaling that creation simple,\u201d he said.<\/p>\n<p>\u201cImagine trying to disprove a fake claim about, say, security flaws in cloud communications when there are half a dozen fake \u2018studies\u2019 that all agree and cite one another.\u201d<\/p>\n<p>To fight this, he says we need smarter tools able to detect citation loops and sudden bursts of new sources.<\/p>\n<p>\u201cThese tools should flag strange patterns, like lots of new sources appearing quickly, sources that heavily cite each other but have no history, or sources that don&#8217;t link back to any established, trusted information,\u201d Gold said.<\/p>\n<p>\u201cIronically, seeing too many of these tightly linked, brand-new sources pointing only to each other might become the warning sign itself.\u201d<\/p>\n<h3>Confusion Attacks Against the Fact-Checkers<\/h3>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/willcyang\/\">Will Yang<\/a>, head of growth and marketing at Instrumentl, sees an even deeper problem simmering: AI content designed not only to trick humans but also to confuse other AIs.<\/p>\n<p>\u201cNeural Network Confusion Attacks are a sneaky new tactic emerging as AI technology advances. These attacks involve creating AI-generated content designed to confuse AI fact-checkers, tricking them into misidentifying genuine news as false,\u201d he said.<\/p>\n<p>These attacks fool AI systems, of course. But they also erode public trust in <em>all<\/em> moderation efforts.<\/p>\n\n<p>\u201cResearchers might underestimate the psychological impact this has, as users begin to question the reliability of trusted sources,\u201d he said. 
\u201cThis erosion of trust can have real-world consequences, influencing public opinion and behavior.\u201d<\/p>\n<p>Yang suggests the solution is for AI systems to get smarter at both detecting manipulated content and identifying manipulative intent.<\/p>\n<p>\u201cTraining these systems not only on typical data patterns but also on detecting subtle manipulation within AI-generated text can help,\u201d he said.<\/p>\n<p>\u201cThis means enhancing AI models to recognize inconsistencies often overlooked by conventional systems and focusing on anomaly detection. Expanding datasets used for AI training to include diverse scenarios could also reduce the success of these confusion attacks.\u201d<\/p>\n<h2>Social Media Misinformation Is Getting Smarter. So Must We.<\/h2>\n<p>Ethan Mollick posted another otter video in January 2025. <a href=\"https:\/\/x.com\/emollick\/status\/1877786752779256194\">Watch it<\/a>, and you might mistake it for cinema.<\/p>\n<p>Otters on planes are all fun and games. But this same technology can whip up fake videos or audio of celebrities and politicians. It can tailor emotionally precise content that slips easily into a family member\u2019s Facebook feed. And it can create an ocean of fake articles or fictional studies to manufacture expertise overnight, leaving users none the wiser.<\/p>\n<p>I work with AI in marketing regularly, but writing this piece reminded me how fast this space is moving. The truth may not need to shout, but amid louder AI-generated noise, it needs help to be heard.<\/p>\n<p>Whether you\u2019re scrolling social media feeds as a marketer or an everyday user:<\/p>\n<ul>\n<li><strong>Stay aware.<\/strong><\/li>\n<li><strong>Ask questions.<\/strong><\/li>\n<li><strong>Understand how AI systems work.<\/strong><\/li>\n<\/ul>\n<p>Thankfully, AI isn\u2019t only amplifying misinformation; it\u2019s also helping us detect and manage it. We can\u2019t outsource the truth to machines. 
But we <em>can<\/em> make them part of our solution.<\/p>","protected":false},"excerpt":{"rendered":"<p>Ethan Mollick, professor of management at Wharton Business School, has a simple benchmark for tracking [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":1280,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1279","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/posts\/1279","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/comments?post=1279"}],"version-history":[{"count":0,"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/posts\/1279\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/media\/1280"}],"wp:attachment":[{"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/media?parent=1279"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/categories?post=1279"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/internship.infoskaters.com\/blog\/wp-json\/wp\/v2\/tags?post=1279"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}