
Machine Learning and Artificial Intelligence Thread

Again, I don't think this really matters. Sentience is not necessary for "AI" to become hugely disruptive. The smartphone is not sentient, and look what an enormous impact it's had on society. The same could be said for contraception, the internal combustion engine, electricity and a myriad of other technologies. Even if AI only advances a relatively small degree from its current public releases (which are well behind the most advanced closed-door models), it will still have a profound destabilizing impact on society over both the short and long term, if only from the economic dislocation of the millions of knowledge workers who currently inhabit largely makework white-collar jobs.

I encourage you to at least read the first chapter of the Situational Awareness paper, to gain some understanding about the rapid pace of progress the LLM/AI models have made over the past couple of years. A few excerpts:

The pace of deep learning progress in the last decade has simply been extraordinary. A mere decade ago it was revolutionary for a deep learning system to identify simple images. Today, we keep trying to come up with novel, ever harder tests, and yet each new benchmark is quickly cracked. It used to take decades to crack widely-used benchmarks; now it feels like mere months.
We’re literally running out of benchmarks. As an anecdote, my friends Dan and Collin made a benchmark called MMLU a few years ago, in 2020. They hoped to finally make a benchmark that would stand the test of time, equivalent to all the hardest exams we give high school and college students. Just three years later, it’s basically solved: models like GPT-4 and Gemini get ~90%.

More broadly, GPT-4 mostly cracks all the standard high school and college aptitude tests.
(And even the one year from GPT-3.5 to GPT-4 often took us from well below median human performance to the top of the human range.)
[Image: gpt4_exams_updated.png]

Or consider the MATH benchmark, a set of difficult mathematics problems from high-school math competitions. When the benchmark was released in 2021, the best models only got ~5% of problems right. And the original paper noted: “Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue […]. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community”—we would need fundamental new breakthroughs to solve MATH, or so they thought. A survey of ML researchers predicted minimal progress over the coming years; and yet within just a year (by mid-2022), the best models went from ~5% to 50% accuracy; now, MATH is basically solved, with recent performance over 90%.

Over and over again, year after year, skeptics have claimed “deep learning won’t be able to do X” and have been quickly proven wrong.
If there’s one lesson we’ve learned from the past decade of AI, it’s that you should never bet against deep learning.
Now the hardest unsolved benchmarks are tests like GPQA, a set of PhD-level biology, chemistry, and physics questions. Many of the questions read like gibberish to me, and even PhDs in other scientific fields spending 30+ minutes with Google barely score above random chance. Claude 3 Opus currently gets ~60%, compared to in-domain PhDs who get ~80%—and I expect this benchmark to fall as well, in the next generation or two.

[Image: gpqa_examples.png]
I agree with a lot of what you're saying, but the one thing I would question is how much more advanced an AI the government has than what is publicly available.

I have enough exposure to government procurement bids for AI functionality to think that the government is not privy to a more advanced class of AI.

The analogy I would make is this: When automobile technology broke out, the private sector was ahead of the government. Likewise airplane technology. I think computer technology was like this as well in the era when PCs developed.

Sure, the government got interested early on, and started issuing contracts to apply these technologies for military purposes, but they did not have some kind of secret next generation capability. I think this is the case with AI.

I would say they are spending money to apply recent advances in AI, but it hasn't put them on a whole different level. The whole field is advancing too fast for the government to be able to get out in front of it. The government procurement process simply does not have that kind of foresight and organizational efficiency. They're too busy observing Juneteenth and making sure they meet their DIE quotas.
 
The government absolutely does not have some secret advanced AI at this time. The closed-door AIs I referred to belong to the frontier AI labs (primarily OpenAI/Microsoft and Google). But the entire impetus behind the Situational Awareness paper was to advocate for federal government partnership with those companies in the very near future, because the author believes the technology is quickly becoming so powerful that it will inevitably end in an arms race.

If the government had super-powerful AIs, it would have already taken over OpenAI's and Google's AI programs for national security reasons, because it would understand how powerful and dangerous it is to have that kind of technology being developed in such an unsecured environment, wide open to espionage.
 

What advancements? All your spoilers show is that an AI engine with what, almost all the internet knowledge in the world, can take tests and get the right answers. It’s a gigantic nothingburger.

This is precisely what computer systems do, they are very good at solving problems in closed systems.

It was also the end of human intelligence and the human race when Kasparov lost his chess match to Deep Blue... oh yeah, no it wasn't.
 
Despite our reservations, I've found that AI tools have been very useful for certain tasks.

This month I've seen an uptick in undesirable content on IG, fortunately they have a tool that can help block posts that use certain key words.

So I used Gemini to help me make a list. I feel a lot cleaner already.
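For anyone curious how that kind of keyword blocking works under the hood, here is a minimal sketch in Python. The blocklist terms below are placeholders I made up for illustration, not the actual list Gemini produced:

```python
# Minimal sketch of keyword-based post filtering, the same idea as IG's
# "hidden words" tool: hide any post containing a blocked term.
import re

# Hypothetical example terms; substitute your own generated list.
BLOCKLIST = ["giveaway", "crypto", "onlyfans"]

def should_block(post_text: str, blocklist=BLOCKLIST) -> bool:
    """Return True if the post contains any blocked keyword
    (whole-word match, case-insensitive)."""
    for term in blocklist:
        if re.search(r"\b" + re.escape(term) + r"\b", post_text,
                     flags=re.IGNORECASE):
            return True
    return False

print(should_block("Huge CRYPTO giveaway today!"))   # True
print(should_block("Photos from the parish picnic")) # False
```

The whole-word boundaries (`\b`) keep a term like "crypto" from accidentally matching inside an unrelated longer word, which is one reason a carefully curated list matters.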

[Images: Screenshot from 2024-06-20 11-35-22.png, Screenshot from 2024-06-20 11-36-30.png, Screenshot from 2024-06-20 11-36-51.png]
 
Given how demonstrably and embarrassingly ignorant you are on the topic, I question why you'd even elect to participate in this discussion at all. What you're doing here is akin to going into a thread about soccer and saying, "Well, these guys suck. Why don't they just pick up the ball and run with it? That's what I'd do."

Please take the time to educate yourself on the subject at least to a minimal degree before posting about it. It makes you come across better and saves the rest of us the trouble of reading such a retarded take.
 
To end this so-called debate, current "AI" is a plagiarizing machine that cannot write anything new that hasn't already been written on the internet. To think that this machine threatens humanity is retarded. The real threat comes from the retards that idolize it and treat it as a god.
 

Oh then please educate us oh wise one.

Are you going to enlighten us with a 2000 word salad with a few big words thrown in to make yourself seem intelligent? Cause that’s what your posts have read like to me for the longest time.

What are your qualifications on AI, because what you posted as spoilers and some big take is exactly what I said. Nonsense. It solved tests using the biggest knowledge base in the world. What did you expect that it would do?

What new discoveries has AI made in any field? Can you name one? Or are you just going to quote ‘experts’ like you’ve been doing the whole time in this thread?
 
What I am not looking forward to is the day when the AI will become the "de facto" authority on every issue.

Arguing about global warming? The AI examined all of the data and made it clear that global warming is true and that the planet is hotter than ever before. End of argument.

Arguing about religion? The AI examined all of the data and it shows that the Bible is errant and outdated by science. End of argument.

AI is not so different from humanity. It operates off of preexisting ideas. Some of those ideas are programmed to be more foundational and irrevocable than others. Right now, AI is like a dumb, soft-liberal human telling you its opinion (it reflects and assumes the same ideas as its programmers). In the future, it will be like having a smart but still fallible man think for you as your authority on every issue it is programmed for.
 
You're triggered pretty hard by this.

It's pretty clear from what I've read that @scorpion isn't some sort of AI fanboy. He's just presented a series of essays, which I guess you never bothered to read, and some of us are just publicly putting down general thoughts on the potential implications, both positive and negative. If they are negative, so be it; some of us see it in a different light.

It doesn't matter what you or I think. We are just guys posting on an Internet forum.
 

Maybe it's because I've used sophisticated generative AI tools in my field, vs. you guys posting in this thread who are simply quoting hyped articles you read online? They are not as good as they are made out to be. Frankly, they get tons of things completely wrong, and when they do, it's chalked up to "it's learning and will be updated."

But of course you guys are all wiser, probably never having used them, so yes, carry on.

And @Bird, your credibility is zero; please create an AI model that says the earth is flat, and enjoy yourself.
 
Oh then please educate us oh wise one.
I posted a link to a 165-page PDF that was precisely intended to educate you and others on this topic. It's quite obvious that you didn't read it, however, and instead chose to shit up the thread with your totally uninformed rambling.
Are you going to enlighten us with a 2000 word salad with a few big words thrown in to make yourself seem intelligent? Cause that’s what your posts have read like to me for the longest time.
Given that you appear to be a proud and lazy ignoramus, it's not surprising that my posts appear to you as little more than word salad.
It solved tests using the biggest knowledge base in the world. What did you expect that it would do?
Again, if you had taken the time to familiarize yourself with even the basics of this technology, you would never say something like this. The capabilities being demonstrated by these current AI models are extraordinary and completely unprecedented in the history of computing. In your willful ignorance, you seem to be dismissing this technology as nothing more than a glorified search engine. You are wrong, profoundly and hilariously so, to the extent that you are totally unable to contribute to this discussion except as an object of scorn and mockery.
 
Your arrogance is hilarious. Keep posting... oh wise one. Enlighten the plebs. You seem to be an expert in everything. Is there anything you aren't an expert in? You read a paper from a bullshitting techie, whose only qualification is participating in orgies in an "alpha beta kappa" fraternity and you ate it up whole and you have the nerve to present yourself as an expert on the subject? You are demonstrably not.
 
Never have I claimed to be an AI expert. But I am informed enough on the subject to offer an educated opinion. And it's glaringly obvious that most (though not all) of those in this thread and elsewhere who blithely dismiss the potential of AI are simply ignorant of the progress the models are making. I have no issue with people who disagree with me or want to debate the topic. But if you don't even understand the distinction between what AI/LLMs are currently doing and Google search, then you clearly have nothing to contribute to the discussion.
 

Have you ever used AI to try to create something substantive in your field that could realistically be used as a product or to improve lives?

Playing around with ChatGPT doesn’t count.

Please spare us your rambling sermon on the mount type posts where you present us with your apparent wisdom and authority. It’s a simple yes or no question. If yes what was it?
 
A lot of the confusion in the various posts above ^^^^ could be avoided by realizing that AI does not exist at this very moment, just ever more advanced machine learning. It (ML) might replace this and that, and hence is potentially very society-transforming, but it's still no closer to being a thinking machine. A thinking machine can't be controlled, only turned on and off (hopefully).
 
I'm not personally using AI tools for productive purposes at the moment, but have friends in IT and software dev who are already finding value in them, particularly in regard to coding and scripting applications. And given that these models are still relatively new and primitive, their potential with continued development should not be dismissed out of hand. That's really all I've been saying. I am not remotely an AI fanboy, and if anything I would consider myself broadly anti-AI (in the sense that I think it will ultimately have overall negative rather than positive effects on humanity). I am simply recognizing the reality of the situation we're facing. AI has the potential to fundamentally transform society in a manner similar to - but perhaps even more extensive than - the internet.
 
The PDF by the blondie is not contributing to the discussion. Sorry to burst your enthusiasm about the paper.

LLMs are a more advanced form of retrieving, sorting and presenting information already written by humans. There is nothing arcane about them.
I used them to learn programming and I found them useful during the learning process, though I always had to extensively debug the code snippets they gave me. Stack Overflow is a much more reliable, but slower, way of finding answers for coders. This is my real-world experience, for what it's worth.

LLMs can never create something new; they can only process what they've been fed during their training phase. In other words, they will never surpass the total body of work that humans have produced or will ever produce. And they will never reach a stage where they even have access to the total, only a tiny fraction of it. It's just another dead-end research field, despite the empty promises of the science worshippers. They are trying to hoard research money from gullible investors, that's all.
 
This is my sentiment on the various LLMs - they are a breakthrough way of presenting data, but the ML techniques are much the same as they have been since the '80s: computing power has allowed for the acceleration of the presentation and lookup of data, but not the *creativity* of the algorithms.

We will be entering a new AI winter in the coming year or two, where all of this investment will have gone nowhere but a better mousetrap and (God willing) the fall of Google.

Let us not forget that more data and more 'parameters' do not equate to better models. The more AI-generated content on the web, the worse the web will be. We will need to begin building an alternative to the 'mainnet' if we want non-pozzed, non-censored, and freedom-respecting internet services in the very near future.
 