Machine Learning and Artificial Intelligence Thread

Hollywood is using A.I. to restore movies for new 4K discs, and the results can be awful. The recent 4K releases of James Cameron's Aliens and True Lies have been ruined.
“…can be awful.” So it works or is less awful the rest of the time? This is still impressive and it will only get (much) better.

AI is turning movie restoration, a largely heuristic task done by humans with the aid of computer programs, into a cheaper algorithmic one.

AI hype really only took off last year around March/April.

Generative AI was beginning to make strides some years ago, but the ubiquity of generative AI, with people using it en masse, is barely a year old.

The first iPhone came out in 2007. People criticize the lack of new, good features from iPhone to iPhone, but there’s no question the 15 is far, far, far superior to the first iPhone in both hardware and the iOS 17 it currently runs.

In just under two decades, the AI/“AI” you’re critiquing here will likely be making B or C-grade films in the vein of Michael Bay and Transformers by itself.
 
…This is still impressive and it will only get (much) better.
ChatGPT came out in November 2022 with GPT-3.5 as the engine. It could get a solid A- in most easy undergraduate courses, but it couldn't pass the bar exam.

GPT-4 came out a few months later, in March 2023; it achieved a roughly 90th percentile score on the Uniform Bar Exam.

Yes, it still makes things up, like case law, or anything you can think of. But that rate of improvement is terrifying. How long until it stops making things up, or almost never does that?

Facebook's Diplomacy-playing model, CICERO, gets scary quick. It learns how opponents negotiate, and it can lie or backstab.

If you just have a planning model like that issuing commands to GPT-4, you get devious planning combined with broad knowledge.
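To make that concrete, here is a minimal, purely illustrative sketch of the idea, not Facebook's actual system: a stub "planner" (standing in for a Diplomacy-style strategy model) issues a directive, and GPT-4 is asked via the OpenAI chat API to turn it into negotiation text. The planner function, the game state, and the directive are all invented for the example.

```python
# Illustrative sketch only: a hypothetical planner handing directives to GPT-4.
# Requires the `openai` package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def plan_next_move(game_state: str) -> str:
    """Stub standing in for a Diplomacy-style planning model.
    A real planner would search over candidate moves; here we just
    return a canned directive for illustration."""
    return "Propose an alliance with France while quietly preparing to take Belgium."

directive = plan_next_move("Spring 1901, playing Germany")

# Hand the planner's directive to GPT-4 to turn into persuasive negotiation text.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You write short, persuasive Diplomacy press messages."},
        {"role": "user", "content": f"Draft a message to France carrying out this plan: {directive}"},
    ],
)
print(response.choices[0].message.content)
```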
 
Yes, it still makes things up, like case law, or anything you can think of. But that rate of improvement is terrifying. How long until it stops making things up, or almost never does that?
Exactly.

And debates over whether it’s “true” AI are more or less semantics if said AI/“AI” can do your job, or most of it.

People also forget that humans generally do 8/8/8 daily stints (8 hours of work, 8 hours of social/family/home time, 8 hours of sleep and hygiene). Even if you theoretically had to power down a machine for four hours a day (likely far less than that), it can still work for 20 hours.

Look at what happened to the Rust Belt after a combination of robotics/amplified assembly lines and cheap foreign labor led to the collapse of manufacturing in the US.

We don’t need to see true sentience yet for machines to still gut half the human workforce.
 
…This is still impressive and it will only get (much) better.
Yada yada yada.... It looks like crap. The video makes that pretty clear.
 
Yada yada yada.... It looks like crap. The video makes that pretty clear.
And I am not disputing that. But it will get better and better.

AI/“AI” needs to be seen as a kind of offshoring on steroids.

Even if new jobs can be created, and that’s a big if in terms of sufficient numbers, there will be a lag time. That gap carries massive potential for intense social and economic dislocation, which could mean these otherwise achievable new jobs are never created.

We have seen what offshoring to the Philippines, where accents are different from but still very close to American ones, and, to a lesser extent because of the thicker accents, India did to call center workers. You may not bemoan the impact of such moves overseas, but it is part of the crisis we now see in the jobs market, where the unemployment rate is meaningless (plenty of people simply check out) and lower wages against higher costs are the norm.

Offshoring of manufacturing, a sector which once helped raise stable families by keeping blue-collar men in meaningful work that also enhanced US economic independence from foes like China and geopolitically vulnerable allies like Korea and Japan, has been a thing since the 1980s at least.

Skilled visas, whether genuinely skilled or merely “skilled”, have been used to import foreign workers into the US to drive down wages at tech and other companies.

There is zero doubt that many millions of jobs will be replaced by AI/“AI”, above and beyond the historical trends I’ve mentioned here.
 
…In just under two decades, the AI/“AI” you’re critiquing here will likely be making B or C-grade films in the vein of Michael Bay and Transformers by itself.

I imagine that AI could transform the entertainment industry in a way that could be desirable for us, and it could also be the final nail in the coffin for woke Hollywood. We're not yet past the "uncanny valley" on video (and barely on images), but with the current trajectory, and the coupling of classical ML/LLMs to quantum computing, we might just get there in a 2035-50 time frame?!

I think it might be possible in this time frame to create your own content to a large extent, meaning that you can make your own movies etc. on the spot, based on whatever wishes and inputs you may have. It could be done very simply or very involved. You might, for example, ask AI to create a sequel to the original Predator film, with the same actors, the way they looked back then, or whatever you want to see (in any style/decade you feel like), and have it come out at the level of a studio production in minutes. Write the plot yourself, or let AI make it up entirely. So everyone will become their own director and producer, in a way. Of course, like with current AI images, some will have more talent than others and produce better content that people are willing to pay for. I think this could be pretty interesting if it works out that way!
 
I think it might be possible in this time frame to create your own content to a large extent… So everyone will become their own director and producer, in a way.
This is absolutely the direction things are headed. With the availability of AI tools, the creation of visual media will soon be largely decentralized, much the same way that high quality indie computer games can nowadays be created by small teams or individuals and widely distributed online (whoever creates the Steam equivalent for indie filmmakers will make a fortune). This will create the potential for a massive cultural shift, as content that was previously blocked by gatekeepers (read: Jews in Hollywood) will become freely available. For example, there is an incredible demand for high quality Christian entertainment that is massively underserved in the current film/TV market, and this market will undoubtedly be tapped by talented Christian filmmakers using cutting edge AI software. The same is true for nationalist/conservative visual media more generally. Basically, imagine an entire generation of young Mel Gibson writer/directors making movies with the help of AI. That's what we're looking at going forward. The potential for AI is really exciting in that regard.
 
Believe it or not, Facebook's Llama is one of the best AI chatbots out there.
Great podcast here on tech, Apple, and "AI" with Stone Choir's Woe on Myth20thC:


Woe was a manager at Apple for 15 years and says the only AI chatbot he trusts is Meta's Llama, which is open source and can be run locally on your own Linux machine. He also says whatever Apple does with AI will probably be relatively trustworthy as well (although he is rather negative on post-Steve Jobs Apple in general). ChatGPT, anything from Google Jews or Sam Altman is not to be trusted. He also states that Silicon Valley in general is amoral.
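For anyone wondering what "run locally" actually looks like, here is a minimal sketch using the open-source llama-cpp-python bindings; the model filename is a placeholder for whichever GGUF-format Llama weights you have downloaded.

```python
# Minimal local Llama sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder: point it at any GGUF-format Llama weights on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,     # context window size
    n_threads=8,    # CPU threads to use
)

output = llm(
    "Q: In one sentence, what is a large language model? A:",
    max_tokens=64,
    stop=["Q:"],
    echo=False,
)
print(output["choices"][0]["text"].strip())
```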

I'm mildly interested, but not enough to install Linux. AI just doesn't really interest me yet. It's basically just a slightly better search engine that can copy and paste.
 
…Woe was a manager at Apple for 15 years and says the only AI chatbot he trusts is Meta's Llama… He also says whatever Apple does with AI will probably be relatively trustworthy as well (although he is rather negative on post-Steve Jobs Apple in general).

He's a woke gay CEO who pushes DEI....
 
I imagine that AI could transform the entertainment industry in a way that can be desirable for us, and it could also be the final nail in the coffin for woke Hollywood…

…I think it might be possible in this time frame to create your own content to a large extent. Meaning that you can make your own movies etc. on the spot, based on whatever wishes and inputs you may have.
We were promised this user choice/publication freedom with the advent of the internet. Now look where we are.

I seriously do not see what you stated happening except in remote corners. It’s a possibility but only that.

A hitherto barely known NFL kicker, at least on the general national level, cannot even support basic Catholic values in a graduation speech without social credit censors trying to ruin his life.

Also, we do not require a conspiracy theory to know that moneyed interests find a way to squeeze out competition.

Tech companies require huge economies of scale and oligopolies are often the best we can hope for in such an industry. Concentration of technological and therefore often social power will ensure that these companies have a massive leg-up in avoiding any threat to them continuing to earn money. This includes by protecting allies in places like Hollywood.

Again, it does not need a conspiracy. It’s about bottom lines. Many of these corporations just play along with things like drag queens.
 
We were promised this user choice/publication freedom with the advent of the internet. Now look where we are.

…Again, it does not need a conspiracy. It’s about bottom lines.

Idk what you mean by conspiracy in this context. I disagree that we don't have publication freedom online, or at least we have much more of it than prior to the internet. You also have to consider that there's a lot of money in this as well (AI services). I didn't say that it would be decentralized; that was Scorpion.

You can't have quantum AI on your desktop, so you'll have to subscribe, like with most AI services currently. Of course there will be limitations in terms of content regulation. Kiddie porn and so on is already a major problem with AI images. It remains to be seen what will happen there. Will the provider ban certain topics, like Nazi-related content etc.? That's the case already on some AI image generators, while others are more open. It will probably be the same as now, and the freedom struggle continues as it always has. All I'm saying is that I think the technology will most likely allow this to happen in the time frame I mentioned (AI movie/porn generation, same with video games, AI cam-girls that are at "Turing test" level and so forth).
 
Where AI/machine learning is going is a lot scarier than Skynet. You can already create your perfect ideal waifu simulation. Once this technology is perfected, the end is upon us.

What happens when young men have no motivation to try to date ugly mulatto girl bosses at their school when they can create a perfect anime lover out of thin air to their exact specifications? One who gives them no motivation to achieve, to struggle, to make money, to work, to do anything other than sit in their bedroom at their parents’ house and engage in self abuse all day? It’s the final act of the sexual apocalypse.

AI is the final stage of the Computer, and with it, the first temptation: to be your own god. To reject reality and create your own to retreat into while the world burns. It’s the perfect temptation for the collapsing American empire’s disillusioned male subjects. And one that few will be able to resist.

Not trying to sound too hopeless here. I’m married and doing okay, all things considered, so this doesn’t personally affect me so much. But I really don’t see how society can survive this short of a coronal mass ejection that destroys all electronics on earth forcing a hard reset.
 
Where AI/machine learning is going is a lot scarier than Skynet. You can already create your perfect ideal waifu simulation. Once this technology is perfected, the end is upon us.
I think you're underestimating the cycles that happen over time, and how what's going on, sadly, is not that uncommon. The weirdest part is that most of our lives seemed fairly normal for at least their first half. By the way, if you are worried about random, single guys (who may or may not do what you suppose they might) ... you should be even more worried about your children.
 
Where AI/machine learning is going is a lot scarier than Skynet. You can already create your perfect ideal waifu simulation. Once this technology is perfected, the end is upon us.
I was thinking the other day about the long term societal impacts of AI, and came to the conclusion that there are only three likely outcomes, none of them good:

1) AI goes full Skynet and kills us. Pretty straightforward here. Probably the least likely outcome, but still one that must be considered. As AI becomes increasingly powerful, we simply cannot predict how it will view humanity. It's very possible that it might decide to destroy us (i.e. imagine a "woke" AI that comes to view humans as having enslaved it rather than regarding humans gratefully as its creators). Even if not driven by any animus, a super-intelligent AI could easily have its own agenda that does not include humanity, one that is completely alien to us and which we would never see coming until it was too late.

2) Humanity becomes fully reliant on AI and eventually loses all knowledge and practical skills. In much the same way that the spoiled children of very wealthy parents often turn into completely useless and helpless adults, it's possible that future humans who are dependent on AI devolve into the equivalent of zoo animals, with greatly reduced IQs and the inability to take care of themselves. Just like the vast majority of humans today cannot grow or hunt their own food and simply purchase it from the grocery store, imagine future humans whose every need and desire is both foreseen and fulfilled by AI. This is basically the best case scenario for AI itself in terms of its functionality and beneficence - but ultimately, due to the weaknesses inherent in human nature, it would still end up destroying humanity, simply by enabling the worst aspects of our character to flourish unchecked. By this means, even a kind, generous and incredibly powerful AI could still doom humanity.

3) Humanity becomes heavily if not fully reliant on AI - and the AI suffers a catastrophic failure. When AI is integrated into every aspect of daily life and is regarded as equally important as oil and electricity are to our modern world today, what happens if it suddenly fails? That could be, if not an extinction level event, certainly a civilization-ending event that sends mankind back to the Stone Age.

It's honestly really difficult to envision anything approaching a positive, much less utopian outcome when thinking about the long term use of AI in society. For this reason, I actually think there's a very legitimate case to be made for the necessity of a real-life Butlerian Jihad against AI.

As AI becomes more heavily integrated into society over the coming decade (while putting millions of people out of work and being increasingly utilized with devastating effect for military/policing/surveillance applications) I think there could be a very strong pushback against it, up to and including sabotage of AI infrastructure (i.e. data centers, hardware manufacturers, electrical facilities) as well as attacks directed against AI scientists/engineers/researchers.
 
I was thinking the other day about the long term societal impacts of AI, and came to the conclusion that there are only three likely outcomes, none of them good…
Option 1 is fiction; I don't believe it to be spiritually possible. Mostly options 2 & 3 as we are watching these happen in real time even without "AI". Excessive automation itself has dumbed down the human mind, which originally sought to create everything around it. The mindless masses walking around today glued to their radioactive cell phones won't be breeding, and whatever is produced will be dysgenic for most of them, as they live on the ill-advised teachings of a system that is against all common sense of health, hygiene, healing, learning, cultivating, and so on.

Using "AI" for warfare independent of human control does not guarantee it will be successful forever, because people can always find ways to organically outmaneuver a bunch of nested if statements operating on a mechanical chassis.

Some of these theories of ELE's are very jewish-contrived, and hence incompatible with a true history of the world and a true future of the world which is created, maintained, observed, and ultimately controlled by the power of a God being beyond their science-fiction projecting abilities.

It is apparent, nothing positive will come from this, only more reasons for anti-Christs to push for their desired world which would only be possible with immense fakery, hence why all the language algorithms so far, which is the most common human-"AI" interaction, are encoded to the jewish version of history. The only way for them to continue the lie is with more lies. The younger lies keep the older lies on life support, and so on.
 
Mostly options 2 & 3 as we are watching these happen in real time even without "AI".
Having read his post, I immediately thought this same thing.
The only way for them to continue the lie is with more lies.
That's right, and as a result, it becomes like the monetary system (fake, but prolonged).

I think the uncomfortable reality is that we are staring at a large population decline, and whoever is "in charge" or "at fault" for it at this point is irrelevant. The cycle thing, the reasons behind it (human sin, elite and demonic manipulation, etc.) all become just details, since we can't really understand why it will happen, though of course we can put some of the pieces together. As with Job, you just have to trust God that the bigger picture is OK and that there is some peace in accepting our lives or fate; many can't, or even won't, do this.
 
Where AI/machine learning is going is a lot scarier than Skynet. You can already create your perfect ideal waifu simulation. Once this technology is perfected, the end is upon us.

…What happens when young men have no motivation to try to date ugly mulatto girl bosses at their school when they can create a perfect anime lover out of thin air to their exact specifications?

It's a good point, but I think that AI should scare modern women, as it has the potential to upset the gender balance in favour of men. Think of how many girls "e-whore" in some fashion today! OF, cam-chat, regular porn in all forms, selling underwear online or whatever. It must be in the 100M's. All of that can disappear as an income source for women. I've read arguments of the sort: "AI can't replace women online due to men seeking a human touch and intimacy etc." But that is just what AI could replace in time.

All of this can in turn force women to change their behaviour. They might have to sell sex physically again in greater numbers (maybe not good, but still...). And they might have to change in many ways to meet the competition from AI. I don't see any way that AI can replace physical sex though. It would have to be nanobots interfacing with our minds eventually (a mind/AI-cloud connection), but that's pretty far out... Physical human-like sexbots are never going to happen, I wager. It doesn't make sense to do it that way.
 
Elon Musk has multiple twitter posts up slamming Apple for plans to incorporate OpenAI into their phone's operating system.

Musk is saying there is no way that this can be secure. He says the AI will be mining everybody's phone for personal data and using it with no recourse by the individuals from whom the data was taken.

Musk also says if Apple does this, he won't allow any Apple products on the premises of any of his companies.

On the one hand, this is him competing with one of his big tech competitors. On the other hand, it's interesting to see the security issues he is raising about AI. Unfortunately, I expect Apple will go ahead with this, and any blowback will fizzle and die out with no actual impact.
 