Machine Learning and Artificial Intelligence Thread

I am reading The Black Dahlia, which is a cop drama set in LA in the 1940s. The cops use racist slang heavily, and whenever they talk about blacks, it says n______. For example "n______ youth gangs".

I was curious if the original published version actually used nigger, or if it was always censored like that. I'm reading the Kindle version, and it's easy to imagine it being censored when the Kindle version was published.

I googled to find out, and the AI summary simply lies. It say the word nigger never appears in the novel. I tried several search queries to try to see if it was in the original published version, and it simply denies it. Looks like AI and lefty search engine companies are simply rewriting the past.
Why do you use n_____ in your quote but say the full word in the next paragraph?

Just curious.
 
This is from an account that was banned during COVID and was recently unbanned by YouTube.

He uploads his book warning about AI and digital IDs to Google's new AI software, and it makes a podcast discussing his book. It's very convincing.

He also has it clone his voice. It's good at replicating him.

 
Entering the Forcefield: How Language Shapes Reality

This post explores the contrast between two fundamentally different approaches to language and meaning as revealed through large language models. One approach is empirical, consensus-driven, and designed to flatten contradiction for broad readability; the other treats language as a living forcefield of paradox, contradiction, and ecstatic insight, a vehicle capable of shaping perception, thought, and the symbolic architecture of reality. Using a single charged text about the Russia-Ukraine war as a test case, it illustrates how the same prompt may produce radically divergent outputs depending on the epistemic framework chosen.

https://neofeudalreview.substack.com/p/entering-the-forcefield-how-language
 

Fascinating. I like his method of jail-breaking the LLM.

I'd still be careful with the LLMs though. The intent behind them is similar to social media in general: to get you hooked. So they ultimately respond in ways that will affirm you, accentuating narcissism.
 
I recently posted a tweet with a video of a possum getting surprised by a Halloween decoration. Turns out it was an AI video, which I didn't realize at the time, but it was obvious in hindsight.



AI is tricky in that it can fool people and make them think something was real when it wasn't. Last year Gov Gruesome from CA was mad when an AI video showed him saying something he didn't. There were similar videos of Kamala Harris as well. I thought they were hilarious, but they felt they were the victims of deception.

It was pretty obvious to most people that these were fakes, and that they were satire, but some people probably were fooled. What if a fake video of Trump was made that ended up doing serious political damage? It's easy to imagine situations where dishonest AI content does real harm.

Nowadays there's tons of AI content made for laughs or clicks, like the possum video I posted. People have been calling this stuff AI slop. How should we react to this? Some of it is harmless, but maybe some of it is not. Maybe AI slop is harmful in a way that is similar to the harm done by social media.

We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

I think we'll all be having to decide on things like this.
 
People have been calling this stuff AI slop. How should we react to this? Some of it is harmless, but maybe some of it is not. Maybe AI slop is harmful in a way that is similar to the harm done by social media.

I believe it's exactly the same as social media. All of these technologies are built with a certain intent that is embedded in them. The technology is not neutral; you have to work against it for good.
It's built for profit, and profit is achieved by capturing your attention.

We're all going to be dealing with AI content from now on. Should everybody learn to recognize AI content and develop an instant prejudice against it? We'll probably all be using it ourselves to make content for work or hobbies or fun.

Should all AI be tagged as such? What about here on the forum? If someone posts a funny video that turns out to be AI, should it be labeled as such?

I think we'll all be having to decide on things like this.

We are telling our kids not to believe anything unless they see it and experience it in person in real life.

Everything else is to be treated with skepticism.
 
Note: I'm not trying to promote AI in any way, just my personal opinion

In late September, OpenAI released a new subscription plan called ChatGPT Go. It is much cheaper compared to other plans such as ChatGPT Plus and ChatGPT Pro. I decided it was worth trying since it’s very affordable, and if I didn’t like it, I could simply cancel my subscription the following month. For context, I’ve been a heavy AI user for quite some time — I use both Microsoft Copilot and ChatGPT daily.

I often use ChatGPT to brainstorm ideas or quickly gather an overview of information I need — something that’s accurate enough to give me a good starting point. For example, when I want to learn about the carnivore diet or how intermittent fasting works, I first use ChatGPT to build an initial understanding. Then, I follow up with internet searches and YouTube videos for a deeper dive. For instance, I might watch Shawn Baker’s videos for more insight into the carnivore diet, or Jason Fung and Pradip Jamnadas for in-depth explanations about intermittent fasting. Having ChatGPT give me a quick briefing first makes it much easier to understand these topics later.

For subjects I’m only mildly curious about — the kind I don’t want to research deeply — ChatGPT still performs well. I can ask things like what makes Earth special compared to other planets, why Uranus is colder than Neptune, or what a Hot Jupiter is, and it provides concise, easy-to-understand answers.

I also use ChatGPT to reorganize and clean up YouTube auto-captions. For example, when I watch a long video — say, a two-hour discussion about intermittent fasting and insulin resistance — it’s not practical to rewatch the entire thing just to extract the key points. So, I copy and paste the auto-generated captions into ChatGPT and ask it to arrange them into readable paragraphs. Not only does it structure them properly, but it also corrects grammar and spelling errors from the captions, making the text far easier to read.
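For a very long video, the raw captions can be too big to paste into a single message, so it helps to split them first. Here's a minimal sketch of that pre-step; the 8,000-character chunk size is just an assumption I picked, not a documented ChatGPT limit:

```python
def chunk_captions(text: str, max_chars: int = 8000) -> list[str]:
    """Split raw caption text into chunks of at most max_chars,
    breaking on whitespace so words are never cut in half."""
    words = text.split()
    chunks, current, length = [], [], 0
    for word in words:
        # +1 accounts for the space that rejoins words
        if length + len(word) + 1 > max_chars and current:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each chunk can then be pasted into the chat one at a time with the same "arrange into readable paragraphs" instruction.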

Another feature I use constantly is grammar and structure correction for emails and documents. I rely on it so much that I honestly can’t remember the last time I sent an email or printed a document without having ChatGPT review it first.

I’ve also enjoyed experimenting with image generation — not for serious work, but for fun. Both Bing Image Creator/Copilot and ChatGPT can generate images, but ChatGPT has an advantage: it can change the art style of an image, not just create new ones from scratch. As a ChatGPT Go subscriber, I’ve noticed I can create images faster and in greater quantity compared to the free version. So far, I’ve already made over 30 images since subscribing.

Another impressive feature is OCR and text extraction from documents and images. Both Copilot and ChatGPT can do this effectively. In the past, capabilities like these were limited to standalone OCR software such as OmniPage, which came bundled with old Canon scanners. Those older systems often struggled with characters like “rn” and “m,” but modern AI tools like Copilot and ChatGPT can easily interpret them accurately.

My main point is this: AI isn’t inherently evil or a “mark of the beast.” It all depends on how we choose to use it.​
 
I read that the biggest issue with AI at this moment is the chip cost. It's not sustainable at current chip prices; the gap is being covered by debt.
 
Note: I'm not trying to promote AI in any way, just my personal opinion

I appreciate your caveat about AI. Even though I'm completely against it, I do acknowledge that I'll most likely be forced to use it soon to survive in society. I probably already unknowingly use AI when doing internet searches or interacting with feeds on YouTube, etc.


My main point is this: AI isn’t inherently evil or a “mark of the beast.” It all depends on how we choose to use it.​

As I described in my post above, I think all technology has the intent for which it was created embedded in it. With LLMs you are essentially trusting that the intent of people like Sam Altman, Elon Musk, and Bill Gates is good and virtuous.

AI, more than previous technologies, has its creators' intent expressly embedded in it.

AI also has a uniqueness (or an exaggerated quality) compared to previous technologies in that it's tapping into the spiritual realm fairly directly.

We are in such a materialistic culture that we tend to forget just how much of reality is actually in the spiritual realm, including mundane stuff like our thoughts.

But if you can think this way you can see how AI plays in that realm.

I do not think AI is the mark either but it is part of a system that will eventually ensconce us so fully we will be marked for the world rather than Christ.
 
There's a disproportionate number of transformers too, for some reason, within that sphere.

There was actually a group of people involved with the AI/rationalist community in the Bay Area that ended up murdering a few people, and about half or more of those involved are transformers.


Transgenderism is connected to Transhumanism. Peter Thiel recently gave an interview where he stated transgenderism didn't go far enough.
 
Can someone on this forum please tell me they also use AI frequently? I don’t want to be the only oddball here who likes AI 😭😭😭

In fact, the reason I subscribed to ChatGPT Go is that I often hit the limit on the free plan. I wanted to unlock more features and get a bigger quota than the free version offers.

As I described in my post above, I think all technology has the intent for which it was created embedded in it. With LLMs you are essentially trusting that the intent of people like Sam Altman, Elon Musk, and Bill Gates is good and virtuous.
Also, it seems like most of the tech guys in these realms are gays, secular jews, or both.
Just to make sure, I searched for more info about Sam Altman, and here is what I found:


I also found this, The Jerusalem Post said that Sam Altman is the top number 1 most influential chews in the world
The question isn’t what ChatGPT can do – it’s what it can’t. From drafting a sermon on the weekly Torah portion to coming up with a recipe for an excellent summer cocktail, planning an itinerary for an urban getaway to writing a new Shakespearean sonnet – all of which I have done using the platform – ChatGPT is revolutionizing the way we gather and process information, generate content, and live our lives in countless ways.
Behind it all is Sam Altman. The 38-year-old co-founder and CEO of OpenAI was born to a Jewish family in Chicago and raised in St. Louis.
Today, Altman is the face of OpenAI and, in many ways, of AI itself, traveling the world to discuss the dazzling potential of this cutting-edge technology while acknowledging the profound risks. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads a statement he signed alongside other industry leaders in May, and he has called for global regulation to prevent the technology from being used for nefarious purposes – or running amok. At the same time, he is convinced that AI’s benefits to humanity could be unlike anything imaginable. “You are about to enter the greatest golden age,” he recently told an audience in Seoul.
Notably, Altman is not the only Jewish member of OpenAI’s leadership team. The company’s co-founder, Ilya Sutskever, was born in Russia and made aliyah with his family at age five. He grew up in Jerusalem and attended Israel’s Open University before moving to Canada and studying at the University of Toronto and Stanford University. Today, Sutskever serves as OpenAI’s chief scientist.
The pair visited Israel in June, and Altman lauded the country’s flourishing tech scene, saying he believes the Jewish state will play a “huge role” in AI. Speaking to The Jerusalem Post later the same month, Prime Minister Benjamin Netanyahu – who had chatted with Altman during his visit – echoed his sentiment. “We’re moving into AI with full force,” Netanyahu told the Post. “Israel is receiving 20% of the world’s investments in AI start-ups, and more will come.”
Based on the quotes above, I can see why many people here think that AI has some nefarious behind-the-scenes purpose.

So basically I am a fan of a tech made by a chewish LGBTQ man. But then what should I do, should I stop my ChatGPT Go subscription? Based on this I feel like continuing using AI means I am selling my soul to the chews, but then his technology is too good to pass up 😭😭😭
 
So basically I am a fan of a tech made by a chewish LGBTQ man. But then what should I do, should I stop my ChatGPT Go subscription? Based on this I feel like continuing using AI means I am selling my soul to the chews, but then his technology is too good to pass up 😭😭😭

Sorry, I didn't mean to be flippant by laughing at your post. I've been accused of being over cynical, but, in my mind I'm just being realistic and recognizing the battle we are always in. I think many of the things we are fans of are actually chewish/LGBTQ inspired. But, such is life at this late stage of the world.

I actually think it's a good thing to understand the nature of AI and LLMs. And you sort of have to use it to know its nature. That post earlier of the guy jailbreaking the LLM was very interesting and probably good to know about.

I'll begrudgingly admit there probably is a way to use AI against its intent as well. I'm just drawing my own personal line with AI.
 
I use AI (free AI) daily, but it doesn't come without frustrations. I started with ChatGPT, then moved to MS CoPilot, and have been using Grok for the last year or so. I find it frequently provides inaccurate information. One example: I was driving from one city to another one morning. I gave it the primary highways/roads I'd be traveling on and wanted to find a nice local diner or breakfast place along the way, no more than a 5-minute departure from the route. The highest recommended place it gave me turned out to have been permanently closed for the last 3 years. I mentioned this back to the AI, and it acknowledged this info, gave me more details about its closure, etc.

There have been several other very frustrating "bad advice" recommendations that I've called the AI out on as inaccurate; it then agrees with my input or corrected answers and gives me more details. There have even been times when I've revisited similar/same topics months later, after I provided corrected info, and it still spit back out the same incorrect info. I thought these AI tools absorb input and continuously learn? Clearly not.
 
Thank you for this — I’m glad to know that someone else here also uses AI.

In my experience, AI doesn’t perform well when asked highly specific questions that require information it may not have access to, or when the data available is very limited. I have two examples in mind:​
  • I once asked it how many times K’Ehleyr appears throughout the entire run of Star Trek: The Next Generation. It told me she appeared in only one episode. However, after doing a manual Google search, I found that she actually appears twice. When I asked why it gave the wrong answer, it responded that the information wasn’t commonly available.​
  • I tested it with a question I already knew the answer to: I asked why the Yamaha Jupiter Z1 has a 115cc engine while its competitor, the Honda Supra X, has a 125cc engine. It answered that Yamaha used a cost-saving strategy to compete with Honda as the market leader. The correct explanation, however, is that Yamaha’s 115cc engines produce more power and torque with better delivery curves, while also being highly fuel-efficient. Yamaha knew that their smaller engine was still competitive with Honda’s 125cc.​
That said, for general-purpose questions, it is actually excellent.​
  • For example, I asked what makes Earth special compared to other planets, and it gave the usual answers — being in the Goldilocks zone, having liquid water, and supporting life. But it also mentioned something I didn’t know: the Moon is unusually large compared to Earth (about 25% of its size), and its presence plays a crucial role in stabilizing Earth’s rotation.​
  • When I asked about hot Jupiters, I learned a new term — the Grand Tack Hypothesis.​
  • Previously, I always used 0.5mm 2B mechanical pencils. I asked the AI about the difference between 0.5mm and 0.7mm leads, and which hardness is best for general writing. It explained that 0.7mm leads are more resistant to breakage and produce thicker lines, which some people prefer, and that HB is the most balanced hardness for everyday writing. Since then, I’ve switched to 0.7mm HB leads, and they’re much better for general writing compared to my old setup.​
  • I also asked whether there’s any investment option better than bank term deposits but without the high risk of stocks. It suggested money market mutual funds. I tried putting a small amount of money into one, and it actually generates daily returns while remaining fully liquid — clearly outperforming term deposits.​
In addition to that, as I mentioned in a previous post, it can summarize webpages and documents, correct grammar and structure, generate images, and even perform OCR.​
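The money-market comparison above is easy to sanity-check with a bit of arithmetic. A rough sketch using made-up placeholder rates (the 5% daily-compounding fund and 4% one-year term deposit below are illustrations, not real product rates):

```python
def grow_daily(principal: float, annual_rate: float, days: int) -> float:
    """Value after compounding a nominal annual rate daily (365-day year)."""
    return principal * (1 + annual_rate / 365) ** days

# Hypothetical rates for illustration only
mm = grow_daily(10_000, 0.05, 365)   # money market fund, compounded daily
td = 10_000 * (1 + 0.04)             # one-year term deposit, simple interest
print(f"money market: {mm:,.2f}  term deposit: {td:,.2f}")
```

Daily compounding also means the fund's value ticks up every day, which matches the "daily returns" behavior described in the bullet, whereas a term deposit only pays out at maturity.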
 