Machine Learning and Artificial Intelligence Thread




Hertz is using an AI-scanning system to find any dings on newly returned rental cars:

> the system captures thousands of high-res images when a car enters and exits the lot

> generates a damage report and sends it to a human for review

> machine maker UVeye says it can “detect 5x more damage than manual checks” and “6x higher total value of damage captured”

The machine is currently at airports in Atlanta, Charlotte, Phoenix, Tampa and Houston.

While Hertz says only 3% of cars scanned by UVeye have had “billable damage”, customers who have been hit with charges are really annoyed.

Some of the “damage” is minuscule based on the photos provided.

Here is the kick in the nuts: if you get charged for repairs based on an AI scan, the cost of UVeye usage is bundled into the fee.

I get the idea of standardizing rental damage reports, but having dealt with the byzantine world of car rental damages…this is guaranteed to be one of the most annoying uses of AI in corporate America.

This total nickel-and-diming seems like something cooked up by McKinsey.

Neither Enterprise nor Avis has jumped on the trend; both still rely on “human-led” analysis.



Continued from here to separate the topics: https://christisking.cc/threads/the-epstein-docs.614/post-103300




Have you, the reader, noticed this as well?

Again,

And again,

And again...

All over the web, people continue to rely on questionable evidence as "proof", including images and videos which turn out to be manipulated. This also goes for countless supposed screenshots, images of Tweets, audio of political interviews, videos of war, etc.



This is counter-productive to the intended message of the original communicator. Using fake images and videos to support an argument weakens both the believability of the argument AND the credibility of the person making the argument.

CiK and its members -- including myself -- are far from immune to the lure of these fakes. Indeed, by now we've all been fooled online, and in many cases, we were none the wiser about it. Fakery is only going to get better and better, which means more people getting fooled more often. This includes me and you.

If the accuracy of our perceptions is important, we can no longer take random data, especially images, audio, and videos at face value.

I'm confident that, in time, media and information-related AI will become a curse on humanity. "But it's saving so much time in my job!" That may be true for some people, yet it is already being used in ways too sick to describe here. Children in particular are incredibly vulnerable to both acute and chronic damage and exploitation.

Back when social media and online dating exploded, I came to the following conclusion (as did many others) about all things tech-related:

-->> Most people tend to be incredibly short-sighted. They have trouble thinking past the immediate benefits of 'new convenient things they like', to really consider their potential to generate longer-term problems. <<--

^ I've also realised that the above is especially true not just of people with low IQ, but also of those with little to no stake in the future -- the atheists, the childless, etc. They are either not able, not willing, or not incentivised to think long-term.

Just on a psychological level, AI has already amplified cynicism and distrust on all sides.

A related quote follows, from the article 'The Looming Shadow of Doubt: Why AI Makes Us Question Everything':
The Psychological Toll: Trust Fatigue

The constant barrage of potentially false or manipulated information leads to trust fatigue. The mental energy required to critically evaluate every piece of content becomes exhausting. This can result in:

Pervasive Doubt: A constant, nagging suspicion about almost all online content, even from seemingly legitimate sources.
Disengagement: Some individuals may simply give up on trying to verify information, becoming apathetic or retreating into their own trusted, often echo-chambered, sources.
Increased Polarization: When a shared understanding of objective reality erodes, societal divisions deepen. Different groups operate with entirely different sets of “facts,” making constructive dialogue almost impossible.
Vulnerability to Manipulation: Paradoxically, this distrust can make people more susceptible to manipulation. Desperate for something to believe, they might latch onto emotionally resonant or conspiratorial narratives, even if they lack credible backing.
Source

It's too late for quick fixes now

When it comes to judging information online, in time those who value objectivity will be forced to take a few steps back before making firm conclusions about anything.

There are no easy solutions, because it takes more time and effort to verify information. And sometimes this will mean we can only state with confidence that "I'm not sure" or "I don't know". On an individual level, some baseline responses involve the following as a start:

Enhanced Media and AI Literacy: Education is paramount. We need to equip individuals with the skills to critically evaluate information, understand how AI works, recognize synthetic media, and identify algorithmic biases.

^ Grandma will still need isolation from the internet to be safe, as fake video call scams are going to keep getting scary-good sooner rather than later.

And even more importantly:
Cultivating Critical Thinking: We must foster environments that encourage reasoned analysis, open dialogue, and a healthy skepticism towards all information, regardless of its source

Less finger waving, more chin stroking.



CiK readers know by now that a simple MSM / Google search is rarely enough to confidently determine anything, especially in cases of controversial topics. There has long been an insidious misinformation campaign waged through big tech to censor, manipulate, and bias search results in favour of certain ideologies, organisations and groups.

Example news article:
Media company AllSides’ latest bias analysis found that 63% of articles that appeared on Google News over two weeks were from left-leaning media outlets — a 2% increase from 2022, when 61% of articles on the aggregator were from liberal outlets.

By contrast, the number of right-leaning news sources picked up by Google News in 2023 was 6%, a relative improvement from the paltry 3% the previous year.

AI generation has made this even WORSE, because it encourages shortcuts in cognitive decision-making and information gathering. This means even less critical thinking and diligent research. The mass manufacturing of NPCs has already begun.

Example research article:
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
by Michael Gerlich
Published: 3 January 2025
...The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage. These results highlight the potential cognitive costs of AI tool reliance, emphasising the need for educational strategies that promote critical engagement with AI technologies. This study contributes to the growing discourse on AI’s cognitive implications, offering practical recommendations for mitigating its adverse effects on critical thinking. The findings underscore the importance of fostering critical thinking in an AI-driven world, making this research essential reading for educators, policymakers, and technologists.

So, despite the supposed mass amount of information online, getting to the bottom of any issue is now fraught with new dangers -- fakery the likes of which have never been seen in human history. The latest Mission Impossible movie, The Final Reckoning, even used similar themes in its opening exposition.

Conclusion

If dissidents want their claims to be taken seriously, they need to think twice before copy-pasting an image they found on a Telegram channel as evidence for their claim.

I hope that more posters continue to apply greater uncertainty and scepticism towards information they use in backing their own positions, not just towards positions from an opposing side.

Even a simple note like "this could be AI, I haven't verified the source" accompanying a post could add more tentativeness and less finality to any conclusions drawn.

Happy sleuthing guys, and all the best.

Related discussions


See section 3 'Strength of Evidence' here:
 
Well done @Steady Hands, I could not have expressed it any better myself, right down to the reference to the latest Mission Impossible movie.

Sadly it all seems to be progressing as you've outlined. With so much information out there that's difficult to verify, my default response has been to adopt a glazed look of indifference. 90% of people seem to have already made up their minds, and I have no appetite for wasting time trying to persuade them otherwise. Against the most relentless howlers, I've simply resorted to hitting the "Ignore" button.
 