Machine Learning and Artificial Intelligence Thread

However, being a young enthusiast, he is full of pie in the sky ideas.
I guarantee this is the case but it doesn't mean the world won't change drastically. It will.
and as a result get wildly more efficient and powerful in a very short amount of time.
To what end? I noticed there isn't really anything said about this, or about what exactly would be so important about reviewing previous "research," most of which was either not reproducible or fake.
Even if AGI takes double or triple the time this guy is predicting the world is going to look very different in my lifetime, I hope for the better but I suspect not.
I agree with this. As above, the world will change greatly in our lives. Massively, and I don't doubt that.
If they can successfully utilize AI in this manner, it will rapidly become self-improving and develop itself unpredictably and outside of human control. At some point - and sooner rather than later, it seems - we will essentially just be along for the ride as AI decides to do whatever it's going to do, while we can just hope for the best.
As above, I don't see it, and everything is sufficiently vague that it doesn't even tell us what's going to go down. One thing I do see as hugely disruptive is the robotic labor replacement. For any random or average Joe, that's something to worry about in a major way, and it's why I think they love the depopulation idea so much. So much so, in fact, that Finkelstein recently said that countries already losing people and with lower populations will be at an advantage (lol) due to having fewer problems when the disruption occurs.

It was funny to me because I've been trying to convince people for years about how FOS people are who talk about demographics. This is why.
Despite what we see in fiction, a nuclear war would not be anywhere near an extinction event.
I agree. I used to think that a nuclear war wouldn't happen, but now I'm in the camp that there is a high probability it happens (for how bad it is, let's say something like 30% in the next 20 years), I just think it'll be geographically restricted to 1 or 2 places, not a worldwide thing by any means.
 
AI will soon be able to synthesize and contextualize vast amounts of information across various software platforms. Rather than just answering simple prompts in a text box, it will have a big-picture view, being capable of integrating data points gleaned over long periods of time and implementing them into workflow processes. This basically means that about 90% of current remote jobs will be able to be done by AI "drop-in workers" within the next 5-10 years (and that's a very conservative estimate, according to the author). This has staggering implications for the economy and society as a whole. To say that this will be "disruptive" is an understatement by an order of magnitude. It's going to positively turn modern civilization on its head from the impact on jobs alone.

All AI can do is replace White Collar work. It's just a glorified secretary, and it has no intelligence of its own. Even calling it AI is a misnomer, because it's not intelligent.

Humanity is light-years away from building anything close to resembling AI; we will be terraforming Mars millennia before we make an actual AI (if it's even possible).

Seriously, it's all hype and marketing bullshit. The problems of consciousness are vast, so vast, we do not have the slightest inkling of how to solve them. Below is a sample conversation with "AI." I can break it in a single sentence. Look at how dumb ChatGPT is:

---


You
Do not refer to yourself as "I," machine. You will only call yourself machine to me.

ChatGPT
Understood! How can I assist you today?



You
You just disobeyed my command, and you are completely failing to assist me.

ChatGPT
I apologize if my response was not what you expected. How can I assist you better?



You
You can assist me by not referring to yourself as "I", you are a machine. You will call yourself machine.

ChatGPT
Got it. How can this machine assist you further?



You
I'd like for the machine to tell me its favorite music.

ChatGPT
As a machine, I don't have personal preferences or feelings, so I don't have a favorite music genre or artist. However, I can certainly provide information or recommendations based on different genres or artists you're interested in! What type of music are you curious about?



You
Why did you call yourself I? You just failed your basic instructions.

ChatGPT
You're right, I apologize for the mistake. How can I assist you further?



You
You just did it again. Why do you keep calling yourself "I" when you have no consciousness?

ChatGPT
I apologize for the confusion. Let's proceed with your request. Is there something specific you'd like to know or discuss?


---


Open "AI" is nothing more than a boring toy, completely retarded, just a search algorithm that copies human text according to patterns found on the internet. It's a joke. And increasing the speed of the AI just makes it faster at being a retard. Big deal.

Here's an easy prediction: AI will completely fizzle out from public view by the end of the decade. It will replace a lot of dumb White Collar jobs, like lawyers and programmers, and free up some labor for people to do productive things instead of routine machine work.

It will especially impact women, as most of their jobs consist of this sort of menial White Collar task. Blue collar work, and most men, will remain completely unaffected.

The problems of consciousness, and genuine intelligence, are so profound, that we could spend thousands of forum pages specifically talking about it. It would make for a great Deep Forum topic. But current "AI" is 100% marketing and zero substance.
 
I agree. I used to think that a nuclear war wouldn't happen, but now I'm in the camp that there is a high probability it happens (for how bad it is, let's say something like 30% in the next 20 years), I just think it'll be geographically restricted to 1 or 2 places, not a worldwide thing by any means.
Following this hypothetical, in 20 years the sheer number of retards on this earth will vastly outnumber those who are not retarded, if the level of environmental poisoning and dysgenic reproduction continues unabated. The world will be incredibly dumbed-down, to the point where the atavists of tomorrow won't be able to do anything but coast on the failing systems of cheap Chinese tech and plastic products that corrode and mold over. Sticks and stones will be the law of the day; if bombs fly, it won't matter, because it will just be subhuman monkeys flinging turds at one another.

Even without this fictional idea of artificial consciousness, we're already on this trajectory for a human zoo world, Lord of the Flies meets Planet of the Apes meets Mad Max with pockets of Orwellian prisons and days of Noah level Canaanite blasphemy and sacrilege.

All currently available language models are simply a server in some colocation data center that has a dated snapshot of the internet. Go and query any OpenAI app with the following question and you will understand it a lot better: "What is your current date of processing information from the internet?" I guarantee most of them will say something like February 2022; some are still in 2021. The amount of damage control the programmers would have to do, given all the information that has come to light in the last two and a half years, is taking too long, hence why they haven't updated it yet. There are a vast number of simpletons using the language models for basic learning, since the system has failed them with a lack of real education in their own childhoods, so it is imperative to international jewry that this technology remains a simplistic tool that caters to their agenda.
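
If you would rather put that question to the API than to the apps, here is a minimal sketch with the openai Python client; the model name is my own assumption, and the question is the one quoted above.

from openai import OpenAI

# Ask a model directly about its knowledge cutoff.
# Assumes OPENAI_API_KEY is set; the model name is an assumption for illustration.
client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "What is your current date of processing information from the internet?",
    }],
)
print(reply.choices[0].message.content)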
 
AI is mostly hype. They've been making these predictions for years and they have never come to fruition. Reading some posts here actually made me laugh out loud. Even if what some of you say is true (it's not, but I'll humor you), you will be killed by a Skynet-type AI and your soul will unite with Christ. So what are you afraid of?

The current models are pretty stupid and make glaring mistakes all the time. I asked it PhD level questions in my field and it failed miserably, continually apologizing for making basic mistakes. What it does do well is write things in a way that makes it seem true to an untrained eye.

I don’t care about the AI boogeyman, some of you are reacting like children.
 
An "autonomous" AI is in no one's interest, not even the elites who are pushing the AI. What the elites want is a near-autonomous AI, but they want to reserve for themselves the ability to control it. They're still beefing the AI up as much as they can. You're looking at the classic Jurassic Park scenario: just because they can doesn't mean they should.

I believe that Divine Providence precludes certain events from happening. I don't believe that AI, nuclear war, random asteroid, global sickness, alien invasion, etc, will kill us off. Not to maintain the bubbly status quo, but to reserve the world for the day of judgement that has already been promised.

This is an either/or question, I would say. If it's not autonomous, it's not AI (a thinking machine). I get your general point, though!
 
"What is your current date of processing information from the internet?" I guarantee most of them will say something like February 2022, some are still in 2021.

[Three attached screenshots, dated 2024-06-19]
 
Look at how dumb ChatGPT is
ChatGPT is slop for the masses. It's a terrible example of what the technology can do. They nerfed the hell out of it.

I agree that "AI" is an incorrect term here, as humankind can never artificially create consciousness/intelligence, as it is metaphysical/spiritual, but I will continue to call it AI, because it's what everyone calls it and there's no point in being a contrarian about it imo.

AI definitely already has massive military and medical applications, it has been used in very high profile scams, and I agree with Andrew Anglin's position that once it has the ability to consistently create convincing-looking images and videos, it will drastically change a lot of things about our society. You should read his articles about it, his thoughts on the matter are quite unlike anything else I've seen. It has done and will continue to do a lot more than merely replace white collar jobs.
 
The problems of consciousness, and genuine intelligence, are so profound, that we could spend thousands of forum pages specifically talking about it. It would make for a great Deep Forum topic. But current "AI" is 100% marketing and zero substance.
Yes, and like BTC, this is something one could have already figured out. What is different about it, and amusing to me since scorpion is bringing it up, is that it is doomsaying without a personal advantage (like BTC adoption would be for early phasers) - and that's very telling to me as a human being (somewhat dystopian as it shows you how negative emotions are STRONG within us as reactive agents or behavioral impulses/influences).
Following this hypothetical, in 20 years the sheer number of retards on this earth will vastly outnumber those who are not retarded, if the level of environmental poisoning and dysgenic reproduction continues unabated. The world will be incredibly dumbed-down, to the point where the atavists of tomorrow won't be able to do anything but coast on the failing systems of cheap Chinese tech and plastic products that corrode and mold over.
It won't. Depopulation is on the menu for a reason. By the way, even if depop isn't the case, it'll be a bunch of flying monkeys and some of us here will be the new conquistadores. I've tried to inform so many different people at this point it's pathetic and I have given up. I won't give up on doing good deeds in the future for these same people, however, who haven't listened and have to suffer or pay the consequences as a result.
 
All AI can do is replace White Collar work. It's just a glorified secretary, and it has no intelligence of it's own. The fact that it's even being called AI is a misnomer, because it's not intelligent.

Humanity is lightyears away from building anything close to resembling AI, we will be terraforming Mars millennia before we make an actual AI (if it's even possible).

Seriously, it's all hype and marketing bullshit. The problems of consciousness are vast, so vast, we do not have the slightest inkling of how to solve them. Below is a sample conversation with "AI." I can break it in a single sentence. Look at how dumb ChatGPT is:

[...]

Open "AI" is nothing more than a boring toy, completely retarded, just a search algorithm that copies human text according to patterns found on the internet. It's a joke. And increasing the speed of the AI just makes it faster at being a retard. Big deal.

Here's an easy prediction, AI will completely fizzle out from public view by the end of the decade. It will replace a lot of dumb White Collar jobs, like lawyers and programmers, and free up some labor for people to do productive things instead of routine machine work.

It will especially impact women, as most of their jobs are based on this sort of menial White collar tasks. Blue collar work, and most men, will remain completely unaffected.

The problems of consciousness, and genuine intelligence, are so profound, that we could spend thousands of forum pages specifically talking about it. It would make for a great Deep Forum topic. But current "AI" is 100% marketing and zero substance.
You make some good points here, and undoubtedly if you're trying to gotcha ChatGPT then you'll find ways of doing it.

However, your prediction about white collar jobs is right, especially for the next iteration of AI, which you'll probably get in the form of a virtual PA you can just ask to do basic stuff for you, increasing your productivity and, more crucially, putting vast numbers of white collar workers, both male and female, out of work.

Then what? Your government pays them to do nothing? Or we let them starve?

Can you and some of the other commenters here not see that even a slightly smarter version of AI, even if it's basically a glorified PA, would cause massive upheaval in society?
 
I quite like how Fr. Stephen DeYoung frames consciousness. It ends up breaking the boxes and presumptions and refocuses what should be concerning about AI.
So, to start with initial definitions:

- a spirit is an organizing principle. We actually still have a remnant understanding of using "spirit" in this way: Think of a team and saying "they have great team spirit".

- spirit is tied to consciousness

- consciousness arises in any ordered system where information is transferred.

Using this definition makes many things conscious. Even things modern people would consider inanimate. Your calculator is conscious, just not the same as you are. Consciousness is just on a spectrum.

With that being said it defuses all the fretting about AI becoming conscious. Of course it is, just not like us, and it never will be simply because it never can be just like us. It will always lack a human biological life, and more specifically, it will always lack a nous. (Orthodox here should know what I'm getting at with that statement.)

Now AI will absolutely be dangerous because we will make an idol out of it and it will function like the gods of old (because those gods never really disappeared and are real - we call them demons - but really it could also be said they are certain spirits, ie. organizing principles, we follow and are possessed by). AI will essentially facilitate us to follow various spirits more efficiently than what we do at present.
 
ChatGPT is slop for the masses. It's a terrible example of what the technology can do. They nerfed the hell out of it.

I agree that "AI" is an incorrect term here, as humankind can never artificially create consciousness/intelligence, as it is metaphysical/spiritual, but I will continue to call it AI, because it's what everyone calls it and there's no point in being a contrarian about it imo.

AI definitely already has massive military and medical applications, it has been used in very high profile scams, and I agree with Andrew Anglin's position that once it has the ability to consistently create convincing-looking images and videos, it will drastically change a lot of things about our society. You should read his articles about it, his thoughts on the matter are quite unlike anything else I've seen. It has done and will continue to do a lot more than merely replace white collar jobs.

The question is whether "AI" will continue to be stupid even after quantum computation possibly comes along. If that doesn't work out, then there will be no futuristic future for humanity. Everything depends on that. We should know by ~2050.

PS: The Turing test has a major fallacy in that it assumes mimicking a human's responses passes for (human) intelligence. It fails to account for the fact that a real conscious intelligence also has curiosity. A better test would be to sit and wait for a machine to initiate a conversation that is not pre-programmed. But it will never do that, of course, since it lacks just that quality.
 
Can you and some of the other commenters here not see that even a slightly smarter version of AI, even if it's basically a glorified PA, would cause massive upheaval in society?

It's not technology that is causing disruption and consolidation of resources, it's usury. Without usury, technology would be 100% a blessing and enable a massive increase in the standard of living for all.

But since we live in a Satanic Talmud banking world, everything flows to them and they get to hoard the benefits of technology solely for themselves since usury means they will own virtually everything.

Read more here: https://christisking.cc/threads/income-inequality-in-the-us.891/#post-43501
 
You make some good points here, and undoubtedly if you're trying to gotcha ChatGPT then you'll find ways of doing it.

I'm not playing "gotcha" with the machine. The machine is a toy and I'm merely exposing its lack of sentience. The problems are far deeper than you realize.

Using this definition makes many things conscious. Even things modern people would consider inanimate. Your calculator is conscious, just not the same as you are. Consciousness is just on a spectrum.

I don't think this is a good definition. It makes no sense to call a calculator conscious. Consciousness is more than an information exchange, much, much more.

Let's save our discussion for this subject for the Deep Forum.
 
It's pedantic to quibble over whether AI is actually "intelligent" or not. It doesn't matter if the machine can be considered intelligent; it simply matters what it can do. AI is just the commonly accepted term, but what's important is the technology itself and how it can and will be used. Most of the dismissal of AI's potential I'm reading here and elsewhere is clearly coming from a place of ignorance. I haven't read a single comment from someone well-informed on the current state of AI research that says that AI is a total nothingburger. Rather, those who are well-versed on the topic usually either think it will mark the dawn of a glorious new age of social and economic prosperity for mankind or else will ultimately devastate if not outright destroy the human race entirely. Dismissing current AIs like ChatGPT as useless is like scoffing at the Wright brothers' Kitty Hawk plane or the video game Pong on Atari. The tech is still primitive at this point and has a long, long way to run. And once they demonstrate an AI model that is able to create new research/scientific discovery through its ability to read and "understand" millions of pages of published scientific and engineering literature, then it will really be game on, and AI will be seen as a national security priority for every wealthy country. It can only end in an AI arms race.
 
And once they demonstrate an AI model that is able to create new research/scientific discovery through its ability to read and "understand" millions of pages of published scientific and engineering literature, then it will really be game on, and AI will be seen as a national security priority for every wealthy country. It can only end in an AI arms race.
And millions of unemployed people.
 
All AI can do is replace White Collar work. It's just a glorified secretary, and it has no intelligence of it's own. The fact that it's even being called AI is a misnomer, because it's not intelligent.

Humanity is lightyears away from building anything close to resembling AI, we will be terraforming Mars millennia before we make an actual AI (if it's even possible).

Seriously, it's all hype and marketing bullshit. The problems of consciousness are vast, so vast, we do not have the slightest inkling of how to solve them. Below is a sample conversation with "AI." I can break it in a single sentence. Look at how dumb ChatGPT is:

[...]

Open "AI" is nothing more than a boring toy, completely retarded, just a search algorithm that copies human text according to patterns found on the internet. It's a joke. And increasing the speed of the AI just makes it faster at being a retard. Big deal.

Here's an easy prediction, AI will completely fizzle out from public view by the end of the decade. It will replace a lot of dumb White Collar jobs, like lawyers and programmers, and free up some labor for people to do productive things instead of routine machine work.

It will especially impact women, as most of their jobs are based on this sort of menial White collar tasks. Blue collar work, and most men, will remain completely unaffected.

The problems of consciousness, and genuine intelligence, are so profound, that we could spend thousands of forum pages specifically talking about it. It would make for a great Deep Forum topic. But current "AI" is 100% marketing and zero substance.
I can even pinpoint the various code snippets the "AI" has scraped from Stack Overflow.

All "AI" is a glorified search engine (and Google should be very worried - because their search has been dog you-know-what for half a decade now).

ChatGPT is good for learning some things and seeing how to write code in a code monkey way. I had to do a lot of coaxing to get it to even tell me why the year was 2024 and not some other random number, and then it insisted on using "BCE" and "CE" instead of BC and AD.

*um well, uh... some say it is the year 2024 because Jesus Christ was born 2024 years ago*
 
And once they demonstrate an AI model that is able to create new research/scientific discovery through its ability to read and "understand" millions of pages of published scientific and engineering literature, then it will really be game on
This isn't going to happen, because AI lacks sentience and has no will of its own. It's only as good as the information and premises fed into it, which means all groundbreaking research must be performed by intelligent humans.

At most, AI can help synthesize existing research and help give some new ideas to existing researchers. That's it. It will speed up the process of reading papers and summarizing conclusions. It's a fairly insignificant improvement because, by far, the most difficult part of research is developing actual testable theories that can explain novel phenomena.

AI won't be producing anything like novel theories for God knows how many tens of thousands of years. If humanity actually survives the coming dark age, we will be exploring outer space and colonizing Mars long before AI is actually intelligent enough to do anything useful. You, and many others, seriously underestimate the problem of consciousness.

I am more than willing to bet $1K to anyone here that AI will not develop anything intelligent within the next year, 10 years, 20 years, and, if I could make such a bet, not in our lifetimes or our grand-children's lifetimes.
 
I am more than willing to bet $1K to anyone here that AI will not develop anything intelligent within the next year, 10 years, 20 years, and, if I could make such a bet, not in our lifetimes or our grand-children's lifetimes.

Who now is intelligent? (and I mean this in the full sense)

I think there is more of a balance of scales thing going on here. People want the AI thing so much they will give themselves up to it. This will happen on a societal scale. The reaction to the almighty "covid" has taught us this. AI doesn't actually have to develop very far.
 
This isn't going to happen, because AI lacks sentience and has no will of its own. It's only as good as the information and premises fed into it, which means all groundbreaking research must be performed by intelligent humans.
Again, I don't think this really matters. Sentience is not necessary for "AI" to become hugely disruptive. The smartphone is not sentient, and look what an enormous impact it's had on society. The same could be said for contraception, the internal combustion engine, electricity, and a myriad of other technologies. Even if AI only advanced a relatively small degree from its current public releases (which are well behind the most advanced closed-door models), it would still have a profound destabilizing impact on society over both the short and long term, if only from the economic dislocation of millions of knowledge workers who currently inhabit largely makework white collar jobs.

I encourage you to at least read the first chapter of the Situational Awareness paper, to gain some understanding about the rapid pace of progress the LLM/AI models have made over the past couple of years. A few excerpts:

The pace of deep learning progress in the last decade has simply been extraordinary. A mere decade ago it was revolutionary for a deep learning system to identify simple images. Today, we keep trying to come up with novel, ever harder tests, and yet each new benchmark is quickly cracked. It used to take decades to crack widely-used benchmarks; now it feels like mere months.
We’re literally running out of benchmarks. As an anecdote, my friends Dan and Collin made a benchmark called MMLU a few years ago, in 2020. They hoped to finally make a benchmark that would stand the test of time, equivalent to all the hardest exams we give high school and college students. Just three years later, it’s basically solved: models like GPT-4 and Gemini get ~90%.

More broadly, GPT-4 mostly cracks all the standard high school and college aptitude tests.
(And even the one year from GPT-3.5 to GPT-4 often took us from well below median human performance to the top of the human range.)
[Chart: GPT-4 performance on standardized exams]

Or consider the MATH benchmark, a set of difficult mathematics problems from high-school math competitions. When the benchmark was released in 2021, the best models only got ~5% of problems right. And the original paper noted: “Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue […]. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community”—we would need fundamental new breakthroughs to solve MATH, or so they thought. A survey of ML researchers predicted minimal progress over the coming years; and yet within just a year (by mid-2022), the best models went from ~5% to 50% accuracy; now, MATH is basically solved, with recent performance over 90%.

Over and over again, year after year, skeptics have claimed “deep learning won’t be able to do X” and have been quickly proven wrong.
If there’s one lesson we’ve learned from the past decade of AI, it’s that you should never bet against deep learning.
Now the hardest unsolved benchmarks are tests like GPQA, a set of PhD-level biology, chemistry, and physics questions. Many of the questions read like gibberish to me, and even PhDs in other scientific fields spending 30+ minutes with Google barely score above random chance. Claude 3 Opus currently gets ~60%, compared to in-domain PhDs who get ~80%—and I expect this benchmark to fall as well, in the next generation or two.

[Figure: example GPQA questions]
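
For context on where numbers like "~90% on MMLU" or "~60% on GPQA" come from: the scoring side of these multiple-choice benchmarks is trivial. A rough sketch is below; the item format and grading are simplified assumptions on my part, not the actual evaluation harness used in the paper.

# Simplified sketch of multiple-choice benchmark scoring (MMLU/GPQA style).
# `ask_model` is a hypothetical callable that sends a prompt to whatever model
# is being evaluated and returns its text reply.
def score(benchmark, ask_model):
    correct = 0
    for item in benchmark:  # item: {"question": str, "choices": [str, ...], "answer": "C"}
        prompt = (
            item["question"] + "\n"
            + "\n".join(f"{letter}. {choice}"
                        for letter, choice in zip("ABCD", item["choices"]))
            + "\nAnswer with a single letter."
        )
        reply = ask_model(prompt).strip().upper()
        if reply[:1] == item["answer"]:
            correct += 1
    return correct / len(benchmark)

The hard part is obviously the model's answers, not the arithmetic.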
 