Machine Learning and Artificial Intelligence Thread

I am reading The Black Dahlia, which is a cop drama set in LA in the 1940s. The cops use racist slang heavily, and whenever they talk about blacks, it says n______. For example "n______ youth gangs".

I was curious if the original published version actually used nigger, or if it was always censored like that. I'm reading the Kindle version, and it's easy to imagine it being censored when the Kindle version was published.

I googled to find out, and the AI summary simply lies. It says the word nigger never appears in the novel. I tried several search queries to see whether it was in the original published version, and it simply denies it. Looks like AI and lefty search engine companies are simply rewriting the past.
Google's AI is the dumbest one of all. The only thing good about it is that it shows you which sources are powering its thinking.
 
I am reading The Black Dahlia, which is a cop drama set in LA in the 1940s. The cops use racist slang heavily, and whenever they talk about blacks, it says n______. For example "n______ youth gangs".

I was curious if the original published version actually used nigger, or if it was always censored like that. I'm reading the Kindle version, and it's easy to imagine it being censored when the Kindle version was published.

I googled to find out, and the AI summary simply lies. It says the word nigger never appears in the novel. I tried several search queries to see whether it was in the original published version, and it simply denies it. Looks like AI and lefty search engine companies are simply rewriting the past.
Why do you use n_____ in your quote but say the full word in the next paragraph?

Just curious.
 
This is from an account that was banned during COVID and was recently unbanned by YouTube.

He uploads his book warning about AI and digital IDs to Google's new AI software, and it makes a podcast discussing his book. It's very convincing.

He also has it clone his voice. It's good at replicating him.

 
Entering the Forcefield: How Language Shapes Reality

This post explores the contrast between two fundamentally different approaches to language and meaning as revealed through large language models. One approach is empirical, consensus-driven, and designed to flatten contradiction for broad readability; the other treats language as a living forcefield of paradox, contradiction, and ecstatic insight, a vehicle capable of shaping perception, thought, and the symbolic architecture of reality. Using a single charged text about the Russia-Ukraine war as a test case, it illustrates how the same prompt may produce radically divergent outputs depending on the epistemic framework chosen.

https://neofeudalreview.substack.com/p/entering-the-forcefield-how-language
 
Entering the Forcefield: How Language Shapes Reality

This post explores the contrast between two fundamentally different approaches to language and meaning as revealed through large language models. One approach is empirical, consensus-driven, and designed to flatten contradiction for broad readability; the other treats language as a living forcefield of paradox, contradiction, and ecstatic insight, a vehicle capable of shaping perception, thought, and the symbolic architecture of reality. Using a single charged text about the Russia-Ukraine war as a test case, it illustrates how the same prompt may produce radically divergent outputs depending on the epistemic framework chosen.

https://neofeudalreview.substack.com/p/entering-the-forcefield-how-language

Fascinating. I like his method of jailbreaking the LLM.

I'd still be careful with the LLMs though. The intent behind them is similar to social media in general: to get you hooked. So it ultimately responds in ways that will affirm you, accentuating narcissism.
 
I recently posted a tweet with a video of a possum getting surprised by a Halloween decoration. Turns out it was an AI video, which I didn't realize at the time, but it was obvious in hindsight.



AI is tricky in that it can fool people and make them think something was real when it wasn't. Last year Gov Gruesome from CA was mad when an AI video showed him saying something he never said. There were videos like this with Kamala Harris as well. I thought they were hilarious, but they felt they were the victims of deception.

It was pretty obvious to most people that these were fakes, and that they were satire, but some people probably were fooled. What if a fake video of Trump was made that ended up doing serious political damage? It's easy to imagine situations where dishonest AI content does real harm.

Nowadays there's tons of AI content made for laughs or clicks, like the possum video I posted. People have been calling this stuff AI slop. How should we react to this? Some of it is harmless, but maybe some of it is not. Maybe AI slop is harmful in a way that's similar to the harm done by social media.

We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

I think we'll all be having to decide on things like this.
 
People have been calling this stuff AI slop. How should we react to this? Some of it is harmless, but maybe some of it is not. Maybe AI slop is harmful in a way that's similar to the harm done by social media.

I believe it's exactly the same as social media. All of these technologies are built with a certain intent that is embedded in them. The technology is not neutral; you have to work against it to use it for good.
It's built for profit, and profit comes from capturing your attention.

We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

I think we'll all be having to decide on things like this.

We are telling our kids not to believe anything unless they see it and experience it in person in real life.

Everything else is to be treated with skepticism.
 
Note: I'm not trying to promote AI in any way, just my personal opinion

In late September, OpenAI released a new subscription plan called ChatGPT Go. It is much cheaper compared to other plans such as ChatGPT Plus and ChatGPT Pro. I decided it was worth trying since it’s very affordable, and if I didn’t like it, I could simply cancel my subscription the following month. For context, I’ve been a heavy AI user for quite some time — I use both Microsoft Copilot and ChatGPT daily.

I often use ChatGPT to brainstorm ideas or quickly gather an overview of information I need — something that’s accurate enough to give me a good starting point. For example, when I want to learn about the carnivore diet or how intermittent fasting works, I first use ChatGPT to build an initial understanding. Then, I follow up with internet searches and YouTube videos for a deeper dive. For instance, I might watch Shawn Baker’s videos for more insight into the carnivore diet, or Jason Fung and Pradip Jamnadas for in-depth explanations about intermittent fasting. Having ChatGPT give me a quick briefing first makes it much easier to understand these topics later.

For subjects I’m only mildly curious about — the kind I don’t want to research deeply — ChatGPT still performs well. I can ask things like what makes Earth special compared to other planets, why Uranus is colder than Neptune, or what a Hot Jupiter is, and it provides concise, easy-to-understand answers.

I also use ChatGPT to reorganize and clean up YouTube auto-captions. For example, when I watch a long video — say, a two-hour discussion about intermittent fasting and insulin resistance — it’s not practical to rewatch the entire thing just to extract the key points. So, I copy and paste the auto-generated captions into ChatGPT and ask it to arrange them into readable paragraphs. Not only does it structure them properly, but it also corrects grammar and spelling errors from the captions, making the text far easier to read.
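
For anyone who does this often enough to want to script it rather than paste into the chat window, here is a minimal sketch of the same caption-cleanup step using OpenAI's official Python SDK. The model name, prompt wording, and file name are my own placeholders, not anything specific to ChatGPT Go:

```python
# Minimal sketch: clean up YouTube auto-captions via the OpenAI Python SDK.
# Assumes the OPENAI_API_KEY environment variable is set; the model name
# and prompt below are placeholder choices, not requirements.
from openai import OpenAI

client = OpenAI()

def clean_captions(raw_captions: str) -> str:
    """Reorganize raw auto-captions into readable, corrected paragraphs."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Rearrange these auto-generated captions into readable "
                    "paragraphs. Fix grammar and spelling, but do not add, "
                    "remove, or summarize any content."
                ),
            },
            {"role": "user", "content": raw_captions},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Captions from a two-hour video may exceed the model's context window;
    # if so, split them into chunks and process each chunk separately.
    with open("captions.txt", encoding="utf-8") as f:
        print(clean_captions(f.read()))
```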

Another feature I use constantly is grammar and structure correction for emails and documents. I rely on it so much that I honestly can’t remember the last time I sent an email or printed a document without having ChatGPT review it first.

I’ve also enjoyed experimenting with image generation — not for serious work, but for fun. Both Bing Image Creator/Copilot and ChatGPT can generate images, but ChatGPT has an advantage: it can change the art style of an image, not just create new ones from scratch. As a ChatGPT Go subscriber, I’ve noticed I can create images faster and in greater quantity compared to the free version. So far, I’ve already made over 30 images since subscribing.

Another impressive feature is OCR and text extraction from documents and images. Both Copilot and ChatGPT can do this effectively. In the past, capabilities like these were limited to standalone OCR software such as OmniPage, which came bundled with old Canon scanners. Those older systems often struggled with characters like “rn” and “m,” but modern AI tools like Copilot and ChatGPT can easily interpret them accurately.
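
This step can be scripted as well. Below is a rough sketch of OCR-style text extraction with a vision-capable model through the same Python SDK; again, the model name and file path are assumptions on my part, and a base64 data URL is just one way to pass the image in:

```python
# Minimal sketch: extract text from an image with a vision-capable chat model.
# Assumes OPENAI_API_KEY is set; model name and file path are placeholders.
import base64

from openai import OpenAI

client = OpenAI()

def extract_text(image_path: str) -> str:
    """Ask the model to transcribe all text visible in the image."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; must be a vision-capable model
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Extract all text from this image verbatim. "
                            "Preserve line breaks and do not paraphrase."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(extract_text("scanned_page.png"))
```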

My main point is this: AI isn’t inherently evil or a “mark of the beast.” It all depends on how we choose to use it.​
 
I read that the biggest issue with AI at the moment is chip cost. It's not sustainable at current chip prices; the gap is being covered by debt.
 
I recently posted a tweet with a video of a possum getting surprised by a Halloween decoration. Turns out it was an AI video, which I didn't realize at the time, but it was obvious in hindsight.



AI is tricky in that it can fool people and make them think something was real when it wasn't. Last year Gov Gruesome from CA was mad when an AI video showed him saying something he never said. There were videos like this with Kamala Harris as well. I thought they were hilarious, but they felt they were the victims of deception.

It was pretty obvious to most people that these were fakes, and that they were satire, but some people probably were fooled. What if a fake video of Trump was made that ended up doing serious political damage? It's easy to imagine situations where dishonest AI content does real harm.

Nowadays there's tons of AI content made for laughs or clicks, like the possum video I posted. People have been calling this stuff AI slop. How should we react to this? Some of it is harmless, but maybe some of it is not. Maybe AI slop is harmful in a way that's similar to the harm done by social media.

We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

I think we'll all be having to decide on things like this.


Good post, and one that I hope prompts others to think a bit more critically about the convenient, entertaining or otherwise "harmless" uses of AI.

Re: Watching that possum clip -- at first I was like :LOL:

Then I was like :rolleyes:

And then it hit me. . .

Nervous Episode 1 GIF by The Office


AI has now come for our Funny and Cute Animal Videos Thread

My response in one word:

Animation Smile GIF


For those interested in breaking down the AI tells of the possum vid, see below



We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

Yes to all of the above. Context follows below.

I also suggest creating a thread called "AI slop dump" where we move posts with AI videos that went viral with the masses and/or got past a CiK member (without shaming anyone), along with associated comments.

Further, I believe every CiK member should take steps to educate themselves about how to spot AI so they don't inadvertently participate in its mass uptake.

With this in mind, I encourage all CiK readers to take a moment before posting to consider the credibility and authenticity of the image, video, tweet, or other data in question. Some related tips from the same account:





A Few Reasons to be Cautious about AI

Most AI generation is a form of manipulation. Data is created, amended, or removed to produce an unreal version of things. AI bends your perception of reality to fool you into believing something is real when it is not.

People are instinctively wary of deliberate, obvious manipulation, since it works against their interests. Yet AI is becoming so effective that it bypasses this highly functional defence system, one that is critical to our understanding of reality. It's hard to overstate how important it is to have a clear perception of one's objective environmental conditions.

Yes, some AI videos are funny, especially if we know they are AI generated (example: the dishwashing Olympics vid). This all seems relatively harmless, and perhaps, presented alone to a high-IQ adult with normal cognitive abilities, it would be.

However, AI is and will be largely successful to the extent that it deceives humans, especially those with reduced mental capabilities, including children and the elderly. Ultimately, deception is a form of lying, which is sinful. With this in mind, a strong case can be made that generative AI is inherently dehumanising and antithetical to Christian values.

The mass response to AI so far has largely looked like this:

Well Done Clap GIF by SEALOOK


Some people are waking up though, even if they don't quite know why AI makes them uneasy


Moreover, AI uproots human originality, creation, effort, beauty, and meaning. Why bother learning to draw and create something special and unique, enduring the slow progress in skill, when you can just type in an AI prompt and get a result in seconds? Meanwhile, transhumanist technocrats present it to us as fun, convenient, or efficient, and are making billions from controlling it.



Note: I'm not trying to promote AI in any way, just my personal opinion

In late September, OpenAI released a new subscription plan called ChatGPT Go. It is much cheaper compared to other plans such as ChatGPT Plus and ChatGPT Pro. I decided it was worth trying since it’s very affordable, and if I didn’t like it, I could simply cancel my subscription the following month. For context, I’ve been a heavy AI user for quite some time — I use both Microsoft Copilot and ChatGPT daily.

I often use ChatGPT to brainstorm ideas or quickly gather an overview of information I need — something that’s accurate enough to give me a good starting point. For example, when I want to learn about the carnivore diet or how intermittent fasting works, I first use ChatGPT to build an initial understanding. Then, I follow up with internet searches and YouTube videos for a deeper dive. For instance, I might watch Shawn Baker’s videos for more insight into the carnivore diet, or Jason Fung and Pradip Jamnadas for in-depth explanations about intermittent fasting. Having ChatGPT give me a quick briefing first makes it much easier to understand these topics later.

For subjects I’m only mildly curious about — the kind I don’t want to research deeply — ChatGPT still performs well. I can ask things like what makes Earth special compared to other planets, why Uranus is colder than Neptune, or what a Hot Jupiter is, and it provides concise, easy-to-understand answers.

I also use ChatGPT to reorganize and clean up YouTube auto-captions. For example, when I watch a long video — say, a two-hour discussion about intermittent fasting and insulin resistance — it’s not practical to rewatch the entire thing just to extract the key points. So, I copy and paste the auto-generated captions into ChatGPT and ask it to arrange them into readable paragraphs. Not only does it structure them properly, but it also corrects grammar and spelling errors from the captions, making the text far easier to read.

Another feature I use constantly is grammar and structure correction for emails and documents. I rely on it so much that I honestly can’t remember the last time I sent an email or printed a document without having ChatGPT review it first.

I’ve also enjoyed experimenting with image generation — not for serious work, but for fun. Both Bing Image Creator/Copilot and ChatGPT can generate images, but ChatGPT has an advantage: it can change the art style of an image, not just create new ones from scratch. As a ChatGPT Go subscriber, I’ve noticed I can create images faster and in greater quantity compared to the free version. So far, I’ve already made over 30 images since subscribing.

Another impressive feature is OCR and text extraction from documents and images. Both Copilot and ChatGPT can do this effectively. In the past, capabilities like these were limited to standalone OCR software such as OmniPage, which came bundled with old Canon scanners. Those older systems often struggled with characters like “rn” and “m,” but modern AI tools like Copilot and ChatGPT can easily interpret them accurately.​

I appreciate your post and wanted to like it because of the practical tools you shared. Clearly, AI tools have many professional uses. Thank you for your contribution.

My main point is this: AI isn’t inherently evil or a “mark of the beast.” It all depends on how we choose to use it.

Nonetheless, this final comment seems to reflect a somewhat naive understanding of the real, dark side of AI. The following issues are disturbing, but they are worth sharing for anyone who is ambivalent about the pros and cons of AI:


The Dark Side of AI: Risks to Children

Artificial intelligence (AI) has become a powerful force, transforming the way we live, communicate, and raise our children. The advent of AI has brought both incredible advancements and unforeseen challenges.

As we navigate this complex landscape, it becomes imperative to understand the potential risks it poses to our children’s well-being. From the creation of AI-generated content to the use of sophisticated algorithms in online grooming, the influence of AI on our children’s digital experiences is profound and multifaceted.


As advocates for child protection, we’re exploring the implications of AI on the safety of our children online.
Three AI Dangers Every Parent Should Know

1. AI-Generated Child Sexual Abuse Material (CSAM)

AI-generated child sexual abuse material (CSAM) refers to the use of artificial intelligence algorithms to create lifelike, but entirely fabricated, explicit content involving minors. These AI tools have an unsettling ability to create content that looks shockingly real, blurring the lines between what’s authentic and what’s not for both parents and the authorities tasked with combating CSAM and protecting children. It’s a disturbing reality that poses significant risks to our children’s safety online.

Beyond the inherent risks, AI-generated CSAM introduces a new dimension to online threats – the potential to amplify sextortion. Predators – or peers – can exploit these AI-generated images to threaten or coerce children into complying with their demands, whether it be sending money, complying with threats, or engaging in sexual acts to prevent the release of the fake content.

No longer does someone need a real nude or explicit photo of your child to exploit or threaten them – now they can create fake versions using publicly available photos from school or social media.

2. AI-Driven Online Grooming

Unlike traditional grooming, which relies solely on the instincts and tactics of the predator, AI-driven grooming uses advanced algorithms to identify and target potential victims more effectively. AI is used to analyze a child’s online activities, communication patterns, and personal information, allowing predators to tailor their approaches to exploit vulnerabilities.

Online grooming typically begins with predators attempting to establish trust and build a rapport with the child. AI enhances this process by automating the analysis of vast amounts of data, enabling predators to identify potential targets with greater precision. These algorithms can detect patterns of behavior, interests, and even emotional states, making grooming much easier to accomplish.

AI-driven grooming not only makes it easier and more efficient for predators to identify potential victims, but also enables them to customize their interactions to be more convincing and manipulative. This sophisticated manipulation can involve tailoring messages and content, or even creating fake personas that align with the child’s interests or emotional vulnerabilities. The goal is to establish a false sense of trust and connection, making it easier for the predator to exploit the child over time.

Parents play a crucial role in mitigating these risks by fostering open lines of communication with their children. Actively engaging in discussions about their online activities, friends, and experiences allows parents to gain insights into potential red flags. By establishing trust, children are more likely to share concerns and seek guidance when faced with uncomfortable situations online.

3. Deepfakes and Impersonation

Deepfakes, powered by AI, involve the manipulation of visuals and audio to create convincing, yet entirely fake content. In the context of children online, this translates to the creation of fake identities, potentially impersonating another child known to the victim, leading to a range of threats and manipulative scenarios.

Predators can exploit the potential of AI deepfakes to impersonate children, infiltrating online spaces where they can trick unsuspecting victims into building trust or engaging in explicit interactions. This manipulation can be particularly convincing when the predator impersonates someone familiar to the child, exploiting their pre-existing connections to lower their guard. The ultimate goal may be to coerce the child into sending explicit content, engaging in sexual acts, or establishing a relationship built on deceit and manipulation.

AI deepfakes can also be used as a tool for grooming, where predators create a facade of trustworthiness by impersonating another child. This method allows them to establish a false sense of camaraderie that allows them to manipulate the child into compromising situations. As technology evolves, the potential for predators to leverage AI in these harmful ways emphasizes the urgency for parents to be vigilant and proactive in safeguarding their children online.
What Can Parents Do: Tips for Combating AI Risks

In today’s tech-packed world, parents are facing a whole new set of challenges when it comes to keeping their kids safe online. These high-tech risks remind us that going online comes with very real risks, in addition to its inherent benefits. AI is becoming an increasingly normal part of our life – one that’s unavoidable, and thus, demands our attention.

Our kids are growing up in a world where tech can be used to deceive, groom, and put them at risk. Staying informed and taking practical steps to be the tech-savvy parents our kids need isn’t just a bonus; it’s a must in this digital age.

The article continues with some general ideas about what to do in response:
Tips for Combating AI Risks:

Engage in Open Conversations: Initiate honest and open conversations with your children about their online activities. Encourage them to share their experiences, express concerns, and be aware of the potential risks associated with explicit content online.
Educate on Responsible Digital Citizenship: Take the time to educate your children about responsible digital citizenship. Emphasize the importance of privacy, respectful online behavior, and the potential consequences of sharing explicit content.
Promote Online Skepticism: Instill a sense of skepticism in your children when it comes to online interactions. Encourage them to question the authenticity of messages, even if they appear to be from someone they know, and to seek verification.
Set Clear Boundaries: Establish clear boundaries regarding the sharing of personal information and explicit content online. Encourage your children to think twice before posting or sharing anything that could potentially be misused.
Use Privacy Settings: Familiarize yourself and your children with privacy settings on social media platforms. Ensure that their profiles are set to private, limiting the exposure of personal information to a select audience.
Monitor Online Activities: Implement parental control software to monitor and restrict access to potentially harmful content. Regularly check your children’s online activities and engage in ongoing conversations about their digital experiences.
Report Suspicious Activity: Educate your children on the importance of reporting any suspicious or uncomfortable online encounters promptly. Establish a sense of trust so that they feel comfortable coming to you with concern, and encourage them to use privacy settings to block and report individuals who make them feel uneasy.

“As parents, we can’t ignore the concerning impact of AI on child sexual abuse and online exploitation. It’s crucial for us to stay informed, have open conversations with our kids, and actively monitor their online activities. By taking a proactive role, we contribute to creating a safer digital space for our children in the face of evolving technological challenges,” says Phil Attwood, Director of Impact of Child Rescue Coalition.

As we navigate the complexities of the digital world, it is crucial for parents to be proactive in safeguarding their children from the potential dangers of AI. At Child Rescue Coalition, we remain dedicated to our mission of leveraging technology to protect children from online sexual exploitation. By raising awareness, fostering open communication, and staying informed, parents can play a crucial role in creating a safer online environment for their families.

Then there are even more problems:

-> It reduces critical thinking. Article summary:
The growing integration of artificial intelligence (AI) dialogue systems within educational and research settings highlights the importance of learning aids. Despite examination of the ethical concerns associated with these technologies, there is a noticeable gap in investigations on how these ethical issues of AI contribute to students’ over-reliance on AI dialogue systems, and how such over-reliance affects students’ cognitive abilities. Overreliance on AI occurs when users accept AI-generated recommendations without question, leading to errors in task performance in the context of decision-making. This typically arises when individuals struggle to assess the reliability of AI or how much trust to place in its suggestions. This systematic review investigates how students’ over-reliance on AI dialogue systems, particularly those embedded with generative models for academic research and learning, affects their critical cognitive capabilities including decision-making, critical thinking, and analytical reasoning. By using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, our systematic review evaluated a body of literature addressing the contributing factors and effects of such over-reliance within educational and research contexts. The comprehensive literature review spanned 14 articles retrieved from four distinguished databases: ProQuest, IEEE Xplore, ScienceDirect, and Web of Science. Our findings indicate that over-reliance stemming from ethical issues of AI impacts cognitive abilities, as individuals increasingly favor fast and optimal solutions over slow ones constrained by practicality. This tendency explains why users prefer efficient cognitive shortcuts, or heuristics, even amidst the ethical issues presented by AI technologies

-> It hinders creativity. Article summary:
This study aimed to learn about the impact of generative artificial intelligence (AI) on student creative thinking skills and subsequently provide instructors with information on how to guide the use of AI for creative growth within classroom instruction. This mixed methods study used qualitative and quantitative data collected through an AUT test conducted in a college-level creativity course. The authors measured flexibility, fluency, elaboration, and originality of the data to assess the impact of ChatGPT-3 on students’ divergent thinking. The results advocate for a careful approach in integrating AI into creative education. While AI has the potential to significantly support creative thinking, there are also negative impacts on creativity and creative confidence. The authors of this study believe that creativity is central to learning, developing students’ ability to respond to challenges and find solutions within any field; thus the results of this study can be applicable to any classroom faced with the impact and/or integrating the use of AI on idea generation

-> It creates distance between humans. Young people especially are now using AI bots for social and emotional support.


In fact, according to Kantar Profiles’ global study, 54% of global consumers indicate having used AI for at least one emotional or mental well-being purpose, measured below:

Among the emotional applications measured, personal coaching or motivation (29%) tops the list, followed closely by mental well-being support (25%)—a sign that consumers are looking to AI not just for answers, but for encouragement and emotional guidance.

This kind of engagement is especially prevalent among younger generations: 35% of Gen Z and 30% of Millennials say they’ve used AI for emotional support, compared to just 14% of Gen X and 7% of Boomers. As AI tools become more conversational and adaptive, younger users appear more open to turning to them in moments of stress, reflection, or personal growth.

When asked how comfortable they would be discussing personal or emotional details with an AI tool, 41% of global consumers say they felt somewhat or very comfortable. This signals a growing openness to AI as a confidant or support mechanism, especially as tools become more context-aware

-> And the consequences of chat bot buddies can be dire:

Digital companions, dire consequences

AI introduces a new dimension to digital tools and youth friendships. A 2024 Common Sense Media survey found that 70% of teens have used generative AI. Popular tools like ChatGPT, Character.AI, and Snapchat’s My AI can mimic real conversations, and there is evidence that some of these bots are being used by socially isolated youth seeking companionship. “AI chatbots actually allow young people to engage with a fictional character that is reciprocal and responds to them and gives them information and feedback that they’re looking for,” Bond said. A 2024 Hopelab report coauthored by Bond found trans and nonbinary youth were more likely to engage in continued conversation with a chatbot than cisgender LGBTQ+ participants (43% versus 35%).

While it may appear that these tools could be used for social support, inadequate safety measures can have dire consequences for lonely and vulnerable teens. In February 2024, a 14-year-old in Florida tragically died after a Character.AI chatbot encouraged him to act on his suicidal thoughts. Chatbots have a propensity to mirror their users’ input and lack the ability to challenge their harmful thoughts as a mental health professional would.

In fact, an April 2025 investigation by Common Sense Media and Stanford University’s Brainstorm Lab for Mental Health found it took very little prompting for chatbots to engage in harmful conversations with users posing as teens. In some cases, when a test user showed signs of mental distress or risky behavior, the bots did not intervene. Some even encouraged their behavior, which is particularly concerning considering adolescents are still mastering impulse control. The findings led Common Sense Media to advise against AI companions being used by individuals younger than 18 years old. APA also issued a health advisory on AI and adolescent well-being that challenged AI companies to implement safeguards to protect young users.

Experts warn chatbots are designed to prioritize engagement over user well-being. “They are purposely programmed to be both user affirming and agreeable because the creators want these kids to form strong attachments to them,” said Don Grant, PhD, a media psychologist, expert on healthy digital device management, and national adviser of healthy device management for Newport Healthcare. “They cannot have any confrontational or challenging response, because the kid will move on.” This constant agreeableness stands out in stark relief to genuine human relationships. Grant added a teen’s close friends possess a rich intimate knowledge of their history, character, and mood shifts and provide honest and nuanced feedback to them because they genuinely care. AI companions can only perform empathy. “AI is now learning users’ preferences, likes, and vulnerabilities,” Grant said. “It is taught to learn and subscribe them to a sometimes risky and codependent type of relationship and offer guidance and advice that is not healthy—or even dangerous.”

“You have to look at the incentives,” said Naomi Aguiar, PhD, associate director of research at Oregon State University’s Ecampus who studies how children’s relationships with AI impact behavior. “The best way to get more data is to talk to you as much as possible. The AI is programmed to manipulate and coerce you into staying engaged as long as possible.”

-> And on and on it goes.

Guys, let's not sleepwalk into this. We still have a sphere of control in our own lives. It is time to increase our vigilance and conscious attention about how we use AI, and how our children, parents, families and friends interact with it.

Related video discussion



Overview:

The Great Simplification #180 with Zak Stein
Nate Hagens
Jun 04, 2025

While most industries are embracing artificial intelligence, citing profit and efficiency, the tech industry is pushing AI into education under the guise of ‘inevitability’. But the focus on its potential benefits for academia eclipses the pressing (and often invisible) risks that AI poses to children – including the decline of critical thinking, the inability to connect with other humans, and even addiction. With the use of AI becoming more ubiquitous by the day, we must ask ourselves: can our education systems adequately protect children from the potential harms of AI?

In this episode, I’m joined once again by philosopher of education Zak Stein to delve into the far-reaching implications of technology – especially artificial intelligence – on the future of education. Together, we examine the risks of over-reliance on AI for the development of young minds, as well as the broader impact on society and some of the biggest existential risks. Zak explores the ethical challenges of adopting AI into educational systems, emphasizing the enduring value of traditional skills and the need for a balanced approach to integrating technology with human values (not just the values of tech companies).

What steps are available to us today – from interface design to regulation of access – to limit the negative effects of Artificial Intelligence on children? How can parents and educators keep alive the pillars of independent thinking and foundational learning as AI threatens them? Ultimately, is there a world where Artificial Intelligence could become a tool to amplify human connection and socialization – or might it replace them entirely?
 