AI Fakery - Manipulation, Detection, and Implications

Steady Hands

Introduction

This thread is intended to encourage discussion around deceptive computer-generated media. Given the wide variety of topics raised in the interesting Machine Learning and Artificial Intelligence Thread, and considering the increasing prevalence and threat of AI-generated media, this thread can facilitate a more streamlined discussion.

This thread is not about AI-generated material per se. Many AI clips are clearly fake, and although their merits are justifiably being questioned en masse, they generally aren't designed to mislead anyone about their authenticity (example here: Sora 2 AI Olympics). Instead, the focus of this thread is on media of various forms - AI videos, photoshopped images, digitally altered audio - designed to deceive and thereby manipulate the audience.

The thread could include but doesn't have to be limited to:
  • Examples that got past our own members (if your post gets moved here, please keep in mind that it's nothing personal - we've all been fooled and will continue to be fooled in the future).
  • Examples that have gone viral amongst the masses.
  • Educational tips and methods for enhancing vigilance and detection.
  • Discussion about the implications of such deception at various levels - psychological, social, political, etc.



AI Fakes, Detection, and Implications

[Note. This piece was originally posted in the general AI thread]

Continued from here: https://christisking.cc/threads/the-epstein-docs.614/post-103300

Have you, the reader, noticed this as well?

Again,

And again,

And again...

All over the web, people continue to rely on questionable evidence as "proof", including images and videos which turn out to be manipulated. This also goes for countless supposed screenshots, images of Tweets, audio of political interviews, videos of war, etc.



This is counter-productive to the intended message of the original communicator. Using fake images and videos to support an argument weakens both the believability of the argument AND the credibility of the person making the argument.

CiK and its members -- including myself -- are far from immune to the lure of these fakes. Indeed, by now we've all been fooled online, and in many cases, we were none the wiser about it. Fakery is only going to get better and better, which means more people getting fooled more often. This includes me and you.

If the accuracy of our perceptions matters, we can no longer take random data - especially images, audio, and video - at face value.

I'm confident that, in time, media and information-related AI will become a curse on humanity. "But it's saving so much time in my job!" That may be true for some people, yet it is already being used in ways too sick to describe here. Children in particular are incredibly vulnerable to both acute and chronic damage and exploitation.

Back when social media and online dating exploded, I came to the following conclusion (as did many others) about all things tech-related:

-->> Most people tend to be incredibly short-sighted. They have trouble thinking past the immediate benefits of 'new convenient things they like', to really consider their potential to generate longer-term problems. <<--

^ I've also realised that the above is especially true not just of people with low IQ, but also of those with little to no stake in the future -- the atheists, the childless, etc. They are either not able, not willing, or not incentivised to think long-term.

Just on a psychological level, AI has already amplified cynicism and distrust on all sides.

A related quote follows from the article 'The Looming Shadow of Doubt: Why AI Makes Us Question Everything':
The Psychological Toll: Trust Fatigue

The constant barrage of potentially false or manipulated information leads to trust fatigue. The mental energy required to critically evaluate every piece of content becomes exhausting. This can result in:

  • Pervasive Doubt: A constant, nagging suspicion about almost all online content, even from seemingly legitimate sources.
  • Disengagement: Some individuals may simply give up on trying to verify information, becoming apathetic or retreating into their own trusted, often echo-chambered, sources.
  • Increased Polarization: When a shared understanding of objective reality erodes, societal divisions deepen. Different groups operate with entirely different sets of “facts,” making constructive dialogue almost impossible.
  • Vulnerability to Manipulation: Paradoxically, this distrust can make people more susceptible to manipulation. Desperate for something to believe, they might latch onto emotionally resonant or conspiratorial narratives, even if they lack credible backing.
Source

It's too late for quick fixes now

When it comes to judging information online, in time those who value objectivity will be forced to take a few steps back before making firm conclusions about anything.

There are no easy solutions, because verifying information takes more time and effort. And sometimes this will mean we can only state with confidence that "I'm not sure" or "I don't know". On an individual level, some baseline responses involve the following as a start:

Enhanced Media and AI Literacy: Education is paramount. We need to equip individuals with the skills to critically evaluate information, understand how AI works, recognize synthetic media, and identify algorithmic biases.

^ Grandma will still need isolation from the internet to be safe, as fake video call scams are going to keep getting scary-good sooner rather than later.
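On the detection side, one small, local check anyone can run is to look at a file's embedded metadata. A hedged sketch follows (the chunk keys below are illustrative only: the Stable Diffusion web UI reportedly writes its prompt into a PNG "parameters" tEXt chunk, but many generators write nothing and most social platforms strip metadata on upload, so a miss proves nothing):

```python
# Sketch: scan a PNG's tEXt chunks for generator fingerprints (stdlib only).
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} from the tEXt chunks of a PNG byte string."""
    assert data.startswith(PNG_SIG), "not a PNG"
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + body + 4 CRC
    return out

# Illustrative key list; real-world tools vary and often write nothing at all.
SUSPECT_KEYS = {"parameters", "prompt", "Software"}

def has_generator_markers(data: bytes) -> bool:
    return any(k in png_text_chunks(data) for k in SUSPECT_KEYS)

def chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, body, CRC."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Synthetic demo file: signature + one tEXt chunk + empty IEND.
demo = PNG_SIG + chunk(b"tEXt", b"parameters\x00masterpiece, 8k") + chunk(b"IEND", b"")
print(has_generator_markers(demo))  # True for this synthetic example
```

Treat a hit as a hint to dig further, not a verdict; absence of markers is weak evidence at best.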

And even more importantly:
Cultivating Critical Thinking: We must foster environments that encourage reasoned analysis, open dialogue, and a healthy skepticism towards all information, regardless of its source.

Less finger-wagging, more chin-stroking.



CiK readers know by now that a simple MSM / Google search is rarely enough to confidently determine anything, especially on controversial topics. There has long been an insidious misinformation campaign waged through big tech to censor, manipulate, and bias search results in favour of certain ideologies, organisations, and groups.

Example news article:
Media company AllSides’ latest bias analysis found that 63% of articles that appeared on Google News over two weeks were from left-leaning media outlets — a 2% increase from 2022, when 61% of articles on the aggregator were from liberal outlets.

By contrast, the number of right-leaning news sources picked up by Google News in 2023 was 6%, a relative improvement from the paltry 3% the previous year.

AI generation has made this even WORSE, because it encourages shortcuts in cognitive decision-making and information gathering. This means even less critical thinking and diligent research. The mass manufacturing of NPCs has already begun.

Example research article:
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
by Michael Gerlich
Published: 3 January 2025
...The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage. These results highlight the potential cognitive costs of AI tool reliance, emphasising the need for educational strategies that promote critical engagement with AI technologies. This study contributes to the growing discourse on AI’s cognitive implications, offering practical recommendations for mitigating its adverse effects on critical thinking. The findings underscore the importance of fostering critical thinking in an AI-driven world, making this research essential reading for educators, policymakers, and technologists.

So, despite the supposedly vast amount of information online, getting to the bottom of any issue is now fraught with new dangers -- fakery the likes of which has never been seen in human history. The latest Mission: Impossible movie, The Final Reckoning, even used similar themes in its opening exposition.

Conclusion

If dissidents want their claims to be taken seriously, they need to think twice before copy-pasting an image they found on a Telegram channel as evidence for their claim.

I hope that more posters continue to apply greater uncertainty and scepticism towards information they use in backing their own positions, not just towards positions from an opposing side.

Even a simple note like "this could be AI, I haven't verified the source" accompanying a post could add more tentativeness and less finality to any conclusions drawn.

Happy sleuthing guys, and all the best.

Related discussions


See section 3 'Strength of Evidence' here:
 
Well done @Steady Hands, I could not have expressed it any better myself, right down to the reference to the latest Mission Impossible movie.

Sadly it all seems to be progressing as you've outlined. With so much information out there that's difficult to verify, my default response has been to adopt a glazed look of indifference. 90% of people seem to have already made up their minds, and trying to persuade them otherwise feels like a waste of time. Against the most relentless howlers, I've simply resorted to hitting the "Ignore" button.
 
People are saying this is AI. Watch after 17 seconds and his hands glitch... Why would they put out an AI video of Trump like this?
That doesn't look like an AI glitch to me. He's just moving his hands while he talks. He does it again at 28 seconds but with a smaller movement.

I can't say for certain it's not AI, but I didn't see any obvious things like 6 fingers or other AI weirdness like that.

I did think he seemed very strained. He used a lot of references to Charlie being in heaven, and other religious references. I think he believes God is using him and that these events are in God's hands, but he's not used to thinking this way and it's awkward for him.

Also, I noticed he mentioned several other incidents of political violence, but didn't mention the murder of Iryna Zarutska which was blowing up really big right before Charlie Kirk was killed.

He talked about going after the radical left, but it remains to be seen how far he's willing to go with this. Many people online are calling for civil war and summary arrests. I'm pretty sure Trump doesn't want to do this.
 
Has anyone else seen this one? Every single person and scene is AI. Pretty crazy.



It’s come a long way. No idea if Trump’s is, but it’s become almost impossible to identify the higher-end ones with the naked eye.

good thing that AI art and montages don't come with smell (yet). that street-shitteress' seems insufferable
 
It looks like AI to me. [attached image]
 
This is from an account that was banned during Covid and was recently unbanned by YouTube.

He uploads his book warning about AI and digital IDs to Google's new AI software, and it makes a podcast discussing his book. It's very convincing.

He also has it clone his voice, and it replicates him well.

 
I recently posted a tweet with a video of a possum getting surprised by a Halloween decoration. Turns out it was an AI video, which I didn't realize at the time, but it was obvious in hindsight.



AI is tricky in that it can fool people and make them think something was real when it wasn't. Last year Gov Gruesome from CA was mad when an AI video showed him saying something he didn't. There were videos like this with Kamala Harris as well. I thought they were hilarious, but they felt they were the victims of deception.

It was pretty obvious to most people that these were fakes, and that they were satire, but some people probably were fooled. What if a fake video of Trump was made that ended up doing serious political damage? It's easy to imagine situations where dishonest AI content is clearly wrong.

Nowadays there's tons of AI content made for laughs or clicks, like the possum video I posted. People have been calling this stuff AI slop. How should we react to this? Some of it is harmless, but maybe some of it is not. Maybe AI slop is harmful in a way that is similar to the harm done by social media.

We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

I think we'll all be having to decide on things like this.
 
People have been calling this stuff AI slop. How should we react to this? Some of it is harmless, but maybe some of it is not. Maybe AI slop is harmful in a way that is similar to the harm done by social media.

I believe it's exactly the same as social media. All of these technologies are built with a certain intent embedded in them. The technology is not neutral; you have to work against it for good.
It's built for profit, and profit is won by capturing your attention.

We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

I think we'll all be having to decide on things like this.

We are telling our kids not to believe anything unless they see it and experience it in person in real life.

Everything else is to be treated with skepticism.
 
Good post, and one that I hope prompts others to think a bit more critically about the convenient, entertaining or otherwise "harmless" uses of AI.

Re: Watching that possum clip -- at first I was like :LOL:

Then I was like :rolleyes:

And then it hit me. . .



AI has now come for our Funny and Cute Animal Videos Thread

My response in one word:

[smiling GIF]


For those interested in breaking down the AI tells of the possum vid, see below



We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

Yes to all of the above. Context follows below.

I also suggest creating a thread called "AI slop dump" where we move posts that include AI videos that either went viral with the masses and/or got past one CiK member (without shaming anyone), along with associated comments.

Further, I believe every CiK member should take steps to educate themselves about how to spot AI so they don't inadvertently participate in its mass uptake.

With this in mind, I encourage all CiK readers to take a moment before posting to consider the credibility and authenticity of the image, video, tweet, or other data in question. Some related tips from the same account:





A Few Reasons to be Cautious about AI

Most AI generation is a form of manipulation. Data is created, amended, or removed to produce an unreal version of things. AI bends your perception of reality to fool you into believing something is real when it is not.

People are instinctively wary of deliberate, overt manipulation, given that it works against the manipulated individual's interests. Yet AI is becoming so effective that it is bypassing this highly functional defence system - a system critical to our understanding of reality. It's hard to overstate how important it is to have a clear perception of one's objective environment.

Yes, some AI videos are funny, especially if we know they are AI-generated (example: the dishwashing Olympics vid). This all seems relatively harmless, and perhaps when presented alone, especially to a high-IQ adult with normal cognitive abilities, it would be.

However, AI is and will be largely successful to the extent that it deceives humans, especially those with reduced mental capabilities, including children and the elderly. Ultimately, deception is a form of lying, which is sinful. With this in mind, a strong case can be made that generative AI is inherently dehumanising and antithetical to Christian values.

The mass response to AI so far has largely looked like this:



Some people are waking up, though, even if they don't quite know why AI makes them uneasy.


Moreover, AI uproots human originality, creation, effort, beauty, and meaning. Why bother learning to draw and create something special and unique, enduring the slow progress in your skills, when you can just type in an AI prompt and get a result in seconds? Meanwhile, transhumanistic technocrats present it all to us as fun, convenient, or efficient. And they are making billions from controlling it.



Note: I'm not trying to promote AI in any way; this is just my personal opinion.

In late September, OpenAI released a new subscription plan called ChatGPT Go. It is much cheaper than other plans such as ChatGPT Plus and ChatGPT Pro. I decided it was worth trying since it’s very affordable, and if I didn’t like it, I could simply cancel my subscription the following month. For context, I’ve been a heavy AI user for quite some time — I use both Microsoft Copilot and ChatGPT daily.

I often use ChatGPT to brainstorm ideas or quickly gather an overview of information I need — something that’s accurate enough to give me a good starting point. For example, when I want to learn about the carnivore diet or how intermittent fasting works, I first use ChatGPT to build an initial understanding. Then, I follow up with internet searches and YouTube videos for a deeper dive. For instance, I might watch Shawn Baker’s videos for more insight into the carnivore diet, or Jason Fung and Pradip Jamnadas for in-depth explanations about intermittent fasting. Having ChatGPT give me a quick briefing first makes it much easier to understand these topics later.

For subjects I’m only mildly curious about — the kind I don’t want to research deeply — ChatGPT still performs well. I can ask things like what makes Earth special compared to other planets, why Uranus is colder than Neptune, or what a Hot Jupiter is, and it provides concise, easy-to-understand answers.

I also use ChatGPT to reorganize and clean up YouTube auto-captions. For example, when I watch a long video — say, a two-hour discussion about intermittent fasting and insulin resistance — it’s not practical to rewatch the entire thing just to extract the key points. So, I copy and paste the auto-generated captions into ChatGPT and ask it to arrange them into readable paragraphs. Not only does it structure them properly, but it also corrects grammar and spelling errors from the captions, making the text far easier to read.
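The mechanical half of that caption cleanup (joining the short fragment lines and grouping sentences into paragraphs) can even be done locally, without any AI; the grammar and spelling repair is what the chatbot adds on top. A rough sketch, where the function name and paragraph size are arbitrary and the naive sentence split assumes already-punctuated text:

```python
# Reflow auto-caption line fragments into paragraphs of N sentences each.
import re

def reflow_captions(raw: str, sentences_per_para: int = 3) -> str:
    # Join the short caption fragments into one running line of text.
    text = " ".join(line.strip() for line in raw.splitlines() if line.strip())
    # Naive sentence split: break after ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    paras = [" ".join(sentences[i:i + sentences_per_para])
             for i in range(0, len(sentences), sentences_per_para)]
    return "\n\n".join(paras)

captions = """so insulin resistance develops
when cells stop responding. fasting
gives the body a break. glucose levels
fall. the liver adapts over time."""
print(reflow_captions(captions, 2))
```

Raw YouTube auto-captions often lack punctuation entirely, which is exactly why the model's editing pass is still needed afterwards.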

Another feature I use constantly is grammar and structure correction for emails and documents. I rely on it so much that I honestly can’t remember the last time I sent an email or printed a document without having ChatGPT review it first.

I’ve also enjoyed experimenting with image generation — not for serious work, but for fun. Both Bing Image Creator/Copilot and ChatGPT can generate images, but ChatGPT has an advantage: it can change the art style of an image, not just create new ones from scratch. As a ChatGPT Go subscriber, I’ve noticed I can create images faster and in greater quantity compared to the free version. So far, I’ve already made over 30 images since subscribing.

Another impressive feature is OCR and text extraction from documents and images. Both Copilot and ChatGPT can do this effectively. In the past, capabilities like these were limited to standalone OCR software such as OmniPage, which came bundled with old Canon scanners. Those older systems often struggled with characters like “rn” and “m,” but modern AI tools like Copilot and ChatGPT can easily interpret them accurately.​

I appreciate your post and wanted to like it, because of the practical tools you shared. Clearly, AI tools have many professional uses. Thank you for your contribution.

My main point is this: AI isn’t inherently evil or a “mark of the beast.” It all depends on how we choose to use it.

Nonetheless, this final comment seems to reflect a somewhat naive understanding of the real, dark side of AI. The following issues are disturbing, but they should be shared for the sake of people who are ambivalent about the pros and cons of AI:


The Dark Side of AI: Risks to Children

Artificial intelligence (AI) has become a powerful force, transforming the way we live, communicate, and raise our children. The advent of AI has brought both incredible advancements and unforeseen challenges.

As we navigate this complex landscape, it becomes imperative to understand the potential risks it poses to our children’s well-being. From the creation of AI-generated content to the use of sophisticated algorithms in online grooming, the influence of AI on our children’s digital experiences is profound and multifaceted.


As advocates for child protection, we’re exploring the implications of AI on the safety of our children online.
Three AI Dangers Every Parent Should Know

1. AI-Generated Child Sexual Abuse Material (CSAM)

AI-generated child sexual abuse material (CSAM) refers to the use of artificial intelligence algorithms to create lifelike, but entirely fabricated, explicit content involving minors. These AI tools have an unsettling ability to create content that looks shockingly real, blurring the lines between what’s authentic and what’s not for both parents and the authorities tasked with combating CSAM and protecting children. It’s a disturbing reality that poses significant risks to our children’s safety online.

Beyond the inherent risks, AI-generated CSAM introduces a new dimension to online threats – the potential to amplify sextortion. Predators – or peers – can exploit these AI-generated images to threaten or coerce children into complying with their demands, whether it be sending money, complying with threats, or engaging in sexual acts to prevent the release of the fake content.

No longer does someone need a real nude or explicit photo of your child to exploit or threaten them – now they can create fake versions using publicly available photos from school or social media.

2. AI-Driven Online Grooming

Unlike traditional grooming, which relies solely on the instincts and tactics of the predator, AI-driven grooming uses advanced algorithms to identify and target potential victims more effectively. AI is used to analyze a child’s online activities, communication patterns, and personal information, allowing predators to tailor their approaches to exploit vulnerabilities.

Online grooming typically begins with predators attempting to establish trust and build a rapport with the child. AI enhances this process by automating the analysis of vast amounts of data, enabling predators to identify potential targets with greater precision. These algorithms can detect patterns of behavior, interests, and even emotional states, making grooming much easier to accomplish.

AI-driven grooming not only makes it easier and more efficient for predators to identify potential victims, but also enables them to customize their interactions to be more convincing and manipulative. This sophisticated manipulation can involve tailoring messages and content, or even creating fake personas that align with the child’s interests or emotional vulnerabilities. The goal is to establish a false sense of trust and connection, making it easier for the predator to exploit the child over time.

Parents play a crucial role in mitigating these risks by fostering open lines of communication with their children. Actively engaging in discussions about their online activities, friends, and experiences allows parents to gain insights into potential red flags. By establishing trust, children are more likely to share concerns and seek guidance when faced with uncomfortable situations online.

3. Deepfakes and Impersonation

Deepfakes, powered by AI, involve the manipulation of visuals and audio to create convincing, yet entirely fake content. In the context of children online, this translates to the creation of fake identities, potentially impersonating another child known to the victim, leading to a range of threats and manipulative scenarios.

Predators can exploit the potential of AI deepfakes to impersonate children, infiltrating online spaces where they can trick unsuspecting victims into building trust or engaging in explicit interactions. This manipulation can be particularly convincing when the predator impersonates someone familiar to the child, exploiting their pre-existing connections to lower their guard. The ultimate goal may be to coerce the child into sending explicit content, engaging in sexual acts, or establishing a relationship built on deceit and manipulation.

AI deepfakes can also be used as a tool for grooming, where predators create a facade of trustworthiness by impersonating another child. This method lets them establish a false sense of camaraderie, which they then use to manipulate the child into compromising situations. As technology evolves, the potential for predators to leverage AI in these harmful ways underscores the urgency for parents to be vigilant and proactive in safeguarding their children online.

What Can Parents Do: Tips for Combating AI Risks

In today’s tech-packed world, parents are facing a whole new set of challenges when it comes to keeping their kids safe online. These high-tech threats remind us that going online carries very real risks alongside its inherent benefits. AI is becoming an increasingly normal part of our lives, one that’s unavoidable and thus demands our attention.

Our kids are growing up in a world where tech can be used to deceive, groom, and put them at risk. Staying informed and taking practical steps to be the tech-savvy parents our kids need isn’t just a bonus; it’s a must in this digital age.

The article continues with some general ideas about what to do in response:
Tips for Combating AI Risks:

Engage in Open Conversations: Initiate honest and open conversations with your children about their online activities. Encourage them to share their experiences, express concerns, and be aware of the potential risks associated with explicit content online.
Educate on Responsible Digital Citizenship: Take the time to educate your children about responsible digital citizenship. Emphasize the importance of privacy, respectful online behavior, and the potential consequences of sharing explicit content.
Promote Online Skepticism: Instill a sense of skepticism in your children when it comes to online interactions. Encourage them to question the authenticity of messages, even if they appear to be from someone they know, and to seek verification.
Set Clear Boundaries: Establish clear boundaries regarding the sharing of personal information and explicit content online. Encourage your children to think twice before posting or sharing anything that could potentially be misused.
Use Privacy Settings: Familiarize yourself and your children with privacy settings on social media platforms. Ensure that their profiles are set to private, limiting the exposure of personal information to a select audience.
Monitor Online Activities: Implement parental control software to monitor and restrict access to potentially harmful content. Regularly check your children’s online activities and engage in ongoing conversations about their digital experiences.
Report Suspicious Activity: Educate your children on the importance of reporting any suspicious or uncomfortable online encounters promptly. Establish a sense of trust so that they feel comfortable coming to you with concern, and encourage them to use privacy settings to block and report individuals who make them feel uneasy.

“As parents, we can’t ignore the concerning impact of AI on child sexual abuse and online exploitation. It’s crucial for us to stay informed, have open conversations with our kids, and actively monitor their online activities. By taking a proactive role, we contribute to creating a safer digital space for our children in the face of evolving technological challenges,” says Phil Attwood, Director of Impact at Child Rescue Coalition.

As we navigate the complexities of the digital world, it is crucial for parents to be proactive in safeguarding their children from the potential dangers of AI. At Child Rescue Coalition, we remain dedicated to our mission of leveraging technology to protect children from online sexual exploitation. By raising awareness, fostering open communication, and staying informed, parents can play a crucial role in creating a safer online environment for their families.

Then there are even more problems:

-> It reduces critical thinking. Article summary:
The growing integration of artificial intelligence (AI) dialogue systems within educational and research settings highlights the importance of learning aids. Despite examination of the ethical concerns associated with these technologies, there is a noticeable gap in investigations on how these ethical issues of AI contribute to students’ over-reliance on AI dialogue systems, and how such over-reliance affects students’ cognitive abilities. Over-reliance on AI occurs when users accept AI-generated recommendations without question, leading to errors in task performance in the context of decision-making. This typically arises when individuals struggle to assess the reliability of AI or how much trust to place in its suggestions. This systematic review investigates how students’ over-reliance on AI dialogue systems, particularly those embedded with generative models for academic research and learning, affects their critical cognitive capabilities including decision-making, critical thinking, and analytical reasoning. By using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, our systematic review evaluated a body of literature addressing the contributing factors and effects of such over-reliance within educational and research contexts. The comprehensive literature review spanned 14 articles retrieved from four distinguished databases: ProQuest, IEEE Xplore, ScienceDirect, and Web of Science. Our findings indicate that over-reliance stemming from ethical issues of AI impacts cognitive abilities, as individuals increasingly favor fast and optimal solutions over slow ones constrained by practicality. This tendency explains why users prefer efficient cognitive shortcuts, or heuristics, even amidst the ethical issues presented by AI technologies.

-> It hinders creativity. Article summary:
This study aimed to learn about the impact of generative artificial intelligence (AI) on student creative thinking skills and subsequently provide instructors with information on how to guide the use of AI for creative growth within classroom instruction. This mixed methods study used qualitative and quantitative data collected through an AUT test conducted in a college-level creativity course. The authors measured flexibility, fluency, elaboration, and originality of the data to assess the impact of ChatGPT-3 on students’ divergent thinking. The results advocate for a careful approach in integrating AI into creative education. While AI has the potential to significantly support creative thinking, there are also negative impacts on creativity and creative confidence. The authors of this study believe that creativity is central to learning, developing students’ ability to respond to challenges and find solutions within any field; thus the results of this study can be applicable to any classroom faced with the impact and/or integration of AI in idea generation.

-> It creates distance between humans. Young people especially are now using AI bots for social and emotional support.


In fact, according to Kantar Profiles’ global study, 54% of global consumers indicate having used AI for at least one emotional or mental well-being purpose:

Among the emotional applications measured, personal coaching or motivation (29%) tops the list, followed closely by mental well-being support (25%)—a sign that consumers are looking to AI not just for answers, but for encouragement and emotional guidance.

This kind of engagement is especially prevalent among younger generations: 35% of Gen Z and 30% of Millennials say they’ve used AI for emotional support, compared to just 14% of Gen X and 7% of Boomers. As AI tools become more conversational and adaptive, younger users appear more open to turning to them in moments of stress, reflection, or personal growth.

When asked how comfortable they would be discussing personal or emotional details with an AI tool, 41% of global consumers say they felt somewhat or very comfortable. This signals a growing openness to AI as a confidant or support mechanism, especially as tools become more context-aware.

-> And the consequences of chatbot buddies can be dire:

Digital companions, dire consequences

AI introduces a new dimension to digital tools and youth friendships. A 2024 Common Sense Media survey found that 70% of teens have used generative AI. Popular tools like ChatGPT, Character.AI, and Snapchat’s My AI can mimic real conversations, and there is evidence that some of these bots are being used by socially isolated youth seeking companionship. “AI chatbots actually allow young people to engage with a fictional character that is reciprocal and responds to them and gives them information and feedback that they’re looking for,” Bond said. A 2024 Hopelab report coauthored by Bond found trans and nonbinary youth were more likely to engage in continued conversation with a chatbot than cisgender LGBTQ+ participants (43% versus 35%).

While it may appear that these tools could be used for social support, inadequate safety measures can have dire consequences for lonely and vulnerable teens. In February 2024, a 14-year-old in Florida tragically died after a Character.AI chatbot encouraged him to act on his suicidal thoughts. Chatbots have a propensity to mirror their users’ input and lack the ability to challenge their harmful thoughts as a mental health professional would.

In fact, an April 2025 investigation by Common Sense Media and Stanford University’s Brainstorm Lab for Mental Health found it took very little prompting for chatbots to engage in harmful conversations with users posing as teens. In some cases, when a test user showed signs of mental distress or risky behavior, the bots did not intervene. Some even encouraged their behavior, which is particularly concerning considering adolescents are still mastering impulse control. The findings led Common Sense Media to advise against AI companions being used by individuals younger than 18 years old. APA also issued a health advisory on AI and adolescent well-being that challenged AI companies to implement safeguards to protect young users.

Experts warn chatbots are designed to prioritize engagement over user well-being. “They are purposely programmed to be both user affirming and agreeable because the creators want these kids to form strong attachments to them,” said Don Grant, PhD, a media psychologist, expert on healthy digital device management, and national adviser of healthy device management for Newport Healthcare. “They cannot have any confrontational or challenging response, because the kid will move on.” This constant agreeableness stands out in stark relief to genuine human relationships. Grant added a teen’s close friends possess a rich intimate knowledge of their history, character, and mood shifts and provide honest and nuanced feedback to them because they genuinely care. AI companions can only perform empathy. “AI is now learning users’ preferences, likes, and vulnerabilities,” Grant said. “It is taught to learn and subscribe them to a sometimes risky and codependent type of relationship and offer guidance and advice that is not healthy—or even dangerous.”

“You have to look at the incentives,” said Naomi Aguiar, PhD, associate director of research at Oregon State University’s Ecampus who studies how children’s relationships with AI impact behavior. “The best way to get more data is to talk to you as much as possible. The AI is programmed to manipulate and coerce you into staying engaged as long as possible.”

-> And on and on it goes.

Guys, let's not sleepwalk into this. We still have a sphere of control in our own lives. It is time to increase our vigilance and conscious attention about how we use AI, and how our children, parents, families and friends interact with it.

Related video discussion



Overview:

The Great Simplification #180 with Zak Stein
Nate Hagens
Jun 04, 2025

While most industries are embracing artificial intelligence, citing profit and efficiency, the tech industry is pushing AI into education under the guise of ‘inevitability’. But the focus on its potential benefits for academia eclipses the pressing (and often invisible) risks that AI poses to children – including the decline of critical thinking, the inability to connect with other humans, and even addiction. With the use of AI becoming more ubiquitous by the day, we must ask ourselves: can our education systems adequately protect children from the potential harms of AI?

In this episode, I’m joined once again by philosopher of education Zak Stein to delve into the far-reaching implications of technology – especially artificial intelligence – on the future of education. Together, we examine the risks of over-reliance on AI for the development of young minds, as well as the broader impact on society and some of the biggest existential risks. Zak explores the ethical challenges of adopting AI into educational systems, emphasizing the enduring value of traditional skills and the need for a balanced approach to integrating technology with human values (not just the values of tech companies).

What steps are available to us today – from interface design to regulation of access – to limit the negative effects of Artificial Intelligence on children? How can parents and educators keep alive the pillars of independent thinking and foundational learning as AI threatens them? Ultimately, is there a world where Artificial Intelligence could become a tool to amplify human connection and socialization – or might it replace them entirely?
 