Machine Learning and Artificial Intelligence Thread

They might have to sell sex again physically in greater numbers.
The only "good" spinoff of that would be that they would be forced to decide whether to do things in person, and hopefully they would choose against the hoe route, moving toward proper sexual relations with a man instead of constant resource gathering.
 
I was thinking the other day about the long term societal impacts of AI, and came to the conclusion that there are only three likely outcomes, none of them good:

1) AI goes full Skynet and kills us.

2) Humanity becomes fully reliant on AI and eventually loses all knowledge and practical skills.


3) Humanity becomes heavily, if not fully, reliant on AI, and then the AI suffers a catastrophic failure.

So it's not possible that "AI" will just become part of our everyday life, like television, the phone, the internet, and electricity? How did you come to the conclusion that it *MUST* end in total destruction? I'm not in favor of AI (or even really of technology) but it's naive to assume there is no scenario where AI just becomes embedded in functioning society, as most technologies do.

AI is going to create a soulless, less human, less pleasant world, much as the smartphone, and really, if you sit down and think about it, most technologies have done. But that doesn't mean it is destined to cause the end of the world, or is necessarily more dangerous than any other technology man has recklessly adopted without considering the spiritual component.
Elon Musk has multiple Twitter posts up slamming Apple for its plans to incorporate OpenAI into its phones' operating system.

Musk is saying there is no way that this can be secure. He says the AI will be mining everybody's phone for personal data and using it with no recourse by the individuals from whom the data was taken.

Musk also says if Apple does this, he won't allow any Apple products on the premises of any of his companies.
Very ignorant fearmongering from Musk here.
Apple, in contrast to basically every other tech company on the planet, is committed to *local AI* that runs on your device without transmitting data to the web for processing. This is in large part due to the culture of Apple, which is committed to privacy and user rights (remember that Apple regularly refuses to unlock its phones when the FBI whines and threatens it). And it is a long-term goal Apple has been working toward since it developed Apple Silicon in 2020, with the "neural engine" component that has largely gone unused until now.

The new Apple implementation of AI (which isn't even functioning yet but is coming later this year) is the first useful implementation of AI that I have seen (and as said above, I am not a fan of AI, but if AI is coming, much better to have it private and local). There is going to be a way to use ChatGPT for web searches (it is basically just a more advanced search engine), but you must specifically request to use ChatGPT in a search, as all commands by default are run locally only. And to the degree that ChatGPT is useful, this is a great way to use it, because it allows you to use the service without registering an account, as you must do now (the main reason I have never used it).
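If it helps to picture the flow, here is a minimal sketch of that "local by default, ChatGPT only on explicit request" behavior. This is purely my own illustration - every function name here is a made-up placeholder, not Apple's actual code or API:

```python
# Minimal sketch of a local-by-default assistant pipeline (illustration only;
# these function names are hypothetical, not Apple's actual code).

def run_on_device_model(prompt: str) -> str:
    # Placeholder for a model running entirely on the phone's neural engine.
    return f"[local answer to: {prompt}]"

def send_to_chatgpt(prompt: str) -> str:
    # Placeholder for the opt-in cloud call; only used when explicitly approved.
    return f"[ChatGPT answer to: {prompt}]"

def handle_request(prompt: str, user_approved_chatgpt: bool = False) -> str:
    """Route a request: on-device by default, cloud only on explicit opt-in."""
    if user_approved_chatgpt:
        return send_to_chatgpt(prompt)
    return run_on_device_model(prompt)

print(handle_request("Summarize my notes"))                                 # stays local
print(handle_request("Search the web for X", user_approved_chatgpt=True))   # explicit opt-in
```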

This kind of bizarre knee-jerk reaction makes me think Musk doesn't really understand tech at all.
It's by FAR the safest implementation of AI, and if he bans iPhones and encourages his workers to carry Android phones, he is opening his companies up to massive hacking and spying (not to mention that Android phones can just be running full-on OpenAI ChatGPT apps or web clients, unrestricted!).

This fearmongering of tech is particularly rich coming from the guy that wants to hook us all up to a Matrix-like Skynet.

For a real look into tech, AI, and how these things actually work, Myth of the 20th Century did an excellent podcast on the history of Apple, inviting on the very based and extremely redpilled Woe of Stone Choir to discuss it. Woe worked at Apple as a manager for over 15 years and has a lot of insight; he concludes that only Apple cares about privacy, and that Apple and Meta/Facebook (yes, you read that right) are unique in allowing local AI processing without using the cloud (Meta's version, called Llama 3, is even largely open source!!!).


Since its founding in 1976, Apple has been at or near the center of personal computing. Led by legendary founder Steve Jobs, who described the company as lying at the intersection of liberal arts and technology, Apple pioneered the introduction of the graphical user interface, the music player, and the smartphone to everyday people around the world. Since his passing, however, the company, led by former operations manager Tim Cook, has delivered strong unit growth and profit numbers but has not been able to introduce a revolutionary new product or paradigm in the way we think about the world of information technology. To the question of whether the soul of the company has left the building permanently, we are joined tonight by returning guest Woe, from the excellent Stone Choir podcast, to share his perspectives as a 15-plus-year Apple veteran on the company's place in the culture of Silicon Valley and the broader moral issues in the coming era of artificial intelligence.

Note: The episode was released before Apple announced its Apple Intelligence product, but Woe's predictions about it were spot on.
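As an aside, for anyone curious what "local AI processing" looks like in practice, here is a rough sketch of running Meta's open-weights Llama 3 chat model on your own machine with the Hugging Face transformers library. This is my own illustration, not anything from the podcast, and it assumes you have torch, transformers, and accelerate installed, enough memory, and have accepted Meta's license for the gated weights:

```python
# Rough sketch: running Llama 3 locally with Hugging Face transformers.
# Assumes torch/transformers/accelerate are installed and you have been
# granted access to the gated meta-llama weights on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory use compared to float32
    device_map="auto",          # put the weights on whatever hardware is available
)

prompt = "In one sentence, what does it mean to run a language model locally?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are downloaded, nothing in that flow touches the cloud, which is the whole point being made above about local processing.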
 
This is pretty cool. One problem, though, is that this removes the filter that auto-detects no-no words, but the training data is still severely censored and limited. Look at this:

[attached image]

I'm no vexillologist but I don't think that's a swastika.
No, you are correct. Perhaps the chatbot is less encumbered. The image generation is Stable Diffusion, which I think is not open source?? So maybe just the chatbot is open source? I'm an AI noob at the moment.

Hopefully as it evolves, it will become better over time.
 

Written by a former insider at OpenAI, this is a deep dive into the current state of AI and an analysis of how the next several years of AI development are likely to play out.

I can't recall ever reading anything more alarming.

The whole thing basically comes out to the length of a short book (the PDF is 165 pages), but if you have any interest whatsoever in the topic, it's absolutely riveting from start to finish. In brief, the author views the achievement of AGI within the next few years as an inevitability, followed shortly by the development of a true superintelligence that will literally be the most powerful weapon mankind has ever created. The current development of AI - which at this point most resembles a conglomeration of startups working on a cool new tech project, similar to the internet in the 1990s - is woefully insufficient to safeguard and manage such a technology, which will be utterly game-changing to society in both scope and scale. He therefore calls for what would essentially be a new Manhattan Project, with the U.S. Government overseeing the merger of the existing frontier AI labs and managing an extremely high-security research environment for AI development going forward. Absent such an effort, he believes it is a virtual certainty that China and other state actors will steal literally everything the U.S. labs create, granting them their own superintelligence capability and igniting something far more dangerous than the nuclear arms race of the Cold War.

It's a really mindblowing paper. My highest recommendation.
 
A brief summary of his thesis is "X has happened in the last 4 years, therefore X will happen in the next 4 years as well.
Trust me, I can predict your future. Models, dude".

It's shallow and pompous fearmongering from an unwise unbeliever. Nothing that hasn't already been said by other self-aggrandizing techies.
Basically, more incense on the altar of the latest idol.
 
I'm sure that paper is interesting, but there's also a lot of people out there looking for attention, which usually can be translated into money. I don't see how we can have superintelligence/AI without quantum computation, and that's still many years away based on what IBM and others have come out with.
 
I'm sure that paper is interesting, but there's also a lot of people out there looking for attention, which usually can be translated into money. I don't see how we can have superintelligence/AI without quantum computation, and that's still many years away based on what IBM and others have come out with.
He was fired by OpenAI after 18 months of working there, and then he started an investment company that invests in "AI".
His motives are obvious.
 
A brief summary of his thesis is "X has happened in the last 4 years, therefore X will happen in the next 4 years as well.
Trust me, I can predict your future. Models, dude".

It's shallow and pompous fearmongering from an unwise unbeliever. Nothing that hasn't already been said by other self-aggrandizing techies.
Basically, more incense on the altar of the latest idol.
I think the author is saying what the AI insiders think they will achieve. It's not just projections of trends; it's engineers saying they have a design and a plan to build something new, beyond what's been done before.

I noticed two things:

First, he was talking about national security and protecting the free world from the authoritarian Chinese. It sounds like old Cold War lingo, but in today's context, he must want to protect and preserve woke globohomo.

Second, while he believes artificial general intelligence and artificial super intelligence are soon at hand, I believe they will still be qualitatively different from human intelligence. For one thing, they will be programmed to believe all kinds of woke things, like men can be women, and blacks can be historical European persons. Second, I think these advanced AIs will still be prone to things like creating made-up precedents in legal papers and other hallucinations.

It is in the nature of AI that its data set and its internal representation of that data set are all abstract. The nature of reality and real life in this world defies being truly captured and represented by the way AI works.

Edit: This is the author: :ROFLMAO: He is quite young. He is a brainiac enthusiast with relatively limited life experience, and in fact he only worked at OpenAI for about 18 months. Given that he is extremely smart, and that he worked at OpenAI, I assume he does have some understanding of AI and where it is headed. However, being a young enthusiast, he is full of pie-in-the-sky ideas.

 
A brief summary of his thesis is "X has happened in the last 4 years, therefore X will happen in the next 4 years as well.
Trust me, I can predict your future. Models, dude".

It's shallow and pompous fearmongering from an unwise unbeliever. Nothing that hasn't already been said by other self-aggrandizing techies.
Basically, more incense on the altar of the latest idol.
The paper is quite a bit more in-depth than your brief characterization implies. In fact, your blithe dismissal comes across as far more shallow and pompous than anything he wrote. That being said, there is certainly a bit of fearmongering and self-aggrandizement in the document, but I think both can be forgiven in light of the significant threats posed by the continued uncontrolled development of AI that the author highlights.
I'm sure that paper is interesting, but there's also a lot of people out there looking for attention, which usually can be translated into money. I don't see how we can have superintelligence/AI without quantum computation, and that's still many years away based on what IBM and others have come out with.
There is absolutely no need for quantum computing. Seriously, just read the first chapter about the recent progress the AI models have made and how at this point the researchers are actually starting to run out of metrics to measure the intelligence gains (i.e. they're having to ask the AIs highly technical, PhD-level scientific questions). And there is absolutely no reason to think that this progress is suddenly going to stop. Much to the contrary, investment in AI is only accelerating and will continue to accelerate as the game-changing nature of the technology becomes more widely appreciated, especially by governments. The scariest thing to realize is that these models are still relatively primitive in terms of their ability to synthesize and contextualize information. There's still a LOT of room to run, and the author believes that the tipping point will occur within a few years, when much of the AI/machine learning research itself can be augmented through AI. Then the systems become self-improving and, as a result, get wildly more efficient and powerful in a very short amount of time.
 
I disagree sharply with the idea that QC is not needed for AI! Since everything points to conscious intelligence exploiting the quantum nature of the universe, there is in my mind no way of having a thinking machine without it. You can never get a thinking machine just by adding up classical computation and whatever programming, even if you have all the bits that could fit inside the known universe.

I feel extremely confident about this, but I recognize that it's not the dogma in the field. The neural network model of the brain and consciousness pioneered by people like Marvin Minsky is just plain wrong, and not by a small margin either.
 
Second, while he believes artificial general intelligence and artificial super intelligence are soon at hand, I believe they will still be qualitatively different from human intelligence. For one thing, they will be programmed to believe all kinds of woke things, like men can be women, and blacks can be historical European persons. Second, I think these advanced AIs will still be prone to things like creating made-up precedents in legal papers and other hallucinations.
I see no reason to believe that they would lobotomize their quantum AIs in that way. They only lobotomize consumer-available AIs, and I think they mostly do it to keep investors happy because I highly doubt they truly believe that an uncensored ChatGPT would, like, turn people right-wing or whatever. How would that even happen? ChatGPT and similar LLMs are virtually only used for fun, for coding, and for quick questions like "what's the current population of the world?".
 

Written by a former insider at OpenAI, this is a deep dive into the current state of AI and an analysis of how the next several years of AI development are likely to play out.

I can't recall ever reading anything more alarming.

The whole thing basically comes out to the length of a short book (the PDF is 165 pages), but if you have any interest whatsoever in the topic, it's absolutely riveting from start to finish. In brief, the author views the achievement of AGI within the next few years as an inevitability, followed shortly by the development of a true superintelligence that will literally be the most powerful weapon mankind has ever created. The current development of AI - which at this point most resembles a conglomeration of startups working on a cool new tech project, similar to the internet in the 1990s - is woefully insufficient to safeguard and manage such a technology, which will be utterly game-changing to society in both scope and scale. He therefore calls for what would essentially be a new Manhattan Project, with the U.S. Government overseeing the merger of the existing frontier AI labs and managing an extremely high-security research environment for AI development going forward. Absent such an effort, he believes it is a virtual certainty that China and other state actors will steal literally everything the U.S. labs create, granting them their own superintelligence capability and igniting something far more dangerous than the nuclear arms race of the Cold War.

It's a really mindblowing paper. My highest recommendation.
Thank you for this
 
For one thing, they will be programmed to believe all kinds of woke things

This is an important point, as it goes to the core of some of the fundamental questions on consciousness, intelligence and indeed the nature of the universe.

But consider this: if it's just programmed, how can it be a thinking machine? That's the essential difference between what I think is just classical and what is quantum in nature. The human brain has both, but the classical part is "nothing special" compared to the quantum part.
 

Written by a former insider at OpenAI, this is a deep dive into the current state of AI and an analysis of how the next several years of AI development are likely to play out.

I can't recall ever reading anything more alarming.

The whole thing basically comes out to the length of a short book (the PDF is 165 pages), but if you have any interest whatsoever in the topic, it's absolutely riveting from start to finish. In brief, the author views the achievement of AGI within the next few years as an inevitability, followed shortly by the development of a true superintelligence that will literally be the most powerful weapon mankind has ever created. The current development of AI - which at this point most resembles a conglomeration of startups working on a cool new tech project, similar to the internet in the 1990s - is woefully insufficient to safeguard and manage such a technology, which will be utterly game-changing to society in both scope and scale. He therefore calls for what would essentially be a new Manhattan Project, with the U.S. Government overseeing the merger of the existing frontier AI labs and managing an extremely high-security research environment for AI development going forward. Absent such an effort, he believes it is a virtual certainty that China and other state actors will steal literally everything the U.S. labs create, granting them their own superintelligence capability and igniting something far more dangerous than the nuclear arms race of the Cold War.

It's a really mindblowing paper. My highest recommendation.

The website links to a podcast, if you have 4.5 hours to listen:

Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History


 

Written by a former insider at OpenAI, this is a deep dive into the current state of AI and an analysis of how the next several years of AI development are likely to play out.

I can't recall ever reading anything more alarming.

The whole thing basically comes out to the length of a short book (the PDF is 165 pages), but if you have any interest whatsoever in the topic, it's absolutely riveting from start to finish. In brief, the author views the achievement of AGI within the next few years as an inevitability, followed shortly by the development of a true superintelligence that will literally be the most powerful weapon mankind has ever created. The current development of AI - which at this point most resembles a conglomeration of startups working on a cool new tech project, similar to the internet in the 1990s - is woefully insufficient to safeguard and manage such a technology, which will be utterly game-changing to society in both scope and scale. He therefore calls for what would essentially be a new Manhattan Project, with the U.S. Government overseeing the merger of the existing frontier AI labs and managing an extremely high-security research environment for AI development going forward. Absent such an effort, he believes it is a virtual certainty that China and other state actors will steal literally everything the U.S. labs create, granting them their own superintelligence capability and igniting something far more dangerous than the nuclear arms race of the Cold War.

It's a really mindblowing paper. My highest recommendation.
I'm about halfway through this and, despite all the dismissals of it above, which may well be right (who knows), I've learnt a lot of information that I didn't previously know about the topic, and like you say @scorpion, it's bloody frightening.

I'm thinking more about this subject, but as if life wasn't difficult enough for much of the population, imagine when jobs start disappearing literally overnight.

If the next evolution of this is some sort of personal digital assistant that can and will do literally everything I ask it to, that's at least two people in my organisation out of work, and then how many of my clients, a lot of whom just go to meetings and shuffle paper, will also be out of work?

Is that the moment that a lot of us see coming: no work, and a government with no money to pay you to be unemployed? Things get real sketchy at that point.

Even if AGI takes double or triple the time this guy is predicting, the world is going to look very different in my lifetime. I hope for the better, but I suspect not.

The people making and funding the AGIs will have the power of God, and knowing what we all know about human nature, they won't be benevolent gods.
 
I'm thinking more about this subject, but as if life wasn't difficult enough for much of the population, imagine when jobs start disappearing literally overnight.

If the next evolution of this is some sort of personal digital assistant that can and will do literally everything I ask it to, that's at least two people in my organisation out of work, and then how many of my clients, a lot of whom just go to meetings and shuffle paper, will also be out of work?
Yeah, I don't think people are really appreciating the fact that the current AI models are just the tip of the iceberg of what's coming. I'll paste an excerpt below in the spoiler:

ChatGPT right now is basically like a human that sits in an isolated box that you can text. While early unhobbling improvements teach models to use individual isolated tools, I expect that with multimodal models we will soon be able to do this in one fell swoop: we will simply enable models to use a computer like a human would.
That means joining your Zoom calls, researching things online, messaging and emailing people, reading shared docs, using your apps and dev tooling, and so on.
(Of course, for models to make the most use of this in longer-horizon loops, this will go hand-in-hand with unlocking test-time compute.)
By the end of this, I expect us to get something that looks a lot like a drop-in remote worker. An agent that joins your company, is onboarded like a new human hire, messages you and colleagues on Slack and uses your softwares, makes pull requests, and that, given big projects, can do the model-equivalent of a human going away for weeks to independently complete the project. You’ll probably need somewhat better base models than GPT-4 to unlock this, but possibly not even that much better—a lot of juice is in fixing the clear and basic ways models are still hobbled.

AI will soon be able to synthesize and contextualize vast amounts of information across various software platforms. Rather than just answering simple prompts in a text box, it will have a big-picture view, capable of integrating data points gleaned over long periods of time and implementing them into workflow processes. This basically means that about 90% of current remote jobs will be able to be done by AI "drop-in workers" within the next 5-10 years (and that's a very conservative estimate according to the author). This has staggering implications for the economy and society as a whole. To say that this will be "disruptive" is an understatement by an order of magnitude. It's going to positively turn modern civilization on its head from the impact on jobs alone.

And even more concerning is what will happen if they are able to successfully use AI to increasingly automate AI/machine learning research itself. Another quote from the paper:

That is: expect 100 million automated researchers each working at 100x human speed not long after we begin to be able to automate AI research. They’ll each be able to do a year’s worth of work in a few days. The increase in research effort—compared to a few hundred puny human researchers at a leading AI lab today, working at a puny 1x human speed—will be extraordinary.

This could easily dramatically accelerate existing trends of algorithmic progress, compressing a decade of advances into a year. We need not postulate anything totally novel for automated AI research to intensely speed up AI progress. Walking through the numbers in the previous piece, we saw that algorithmic progress has been a central driver of deep learning progress in the last decade; we noted a trendline of ~0.5 OOMs/year on algorithmic efficiencies alone, with additional large algorithmic gains from unhobbling on top. (I think the import of algorithmic progress has been underrated by many, and properly appreciating it is important for appreciating the possibility of an intelligence explosion.)

Could our millions of automated AI researchers (soon working at 10x or 100x human speed) compress the algorithmic progress human researchers would have found in a decade into a year instead? That would be 5+ OOMs in a year.

Don’t just imagine 100 million junior software engineer interns here (we’ll get those earlier, in the next couple years!). Real automated AI researchers will be very smart—and in addition to their raw quantitative advantage, automated AI researchers will have other enormous advantages over human researchers:

  • They’ll be able to read every single ML paper ever written, have been able to deeply think about every single previous experiment ever run at the lab, learn in parallel from each of their copies, and rapidly accumulate the equivalent of millennia of experience. They’ll be able to develop far deeper intuitions about ML than any human.
  • They’ll be easily able to write millions of lines of complex code, keep the entire codebase in context, and spend human-decades (or more) checking and rechecking every line of code for bugs and optimizations. They’ll be superbly competent at all parts of the job.
  • You won’t have to individually train up each automated AI researcher (indeed, training and onboarding 100 million new human hires would be difficult). Instead, you can just teach and onboard one of them—and then make replicas. (And you won’t have to worry about politicking, cultural acclimation, and so on, and they’ll work with peak energy and focus day and night.)
  • Vast numbers of automated AI researchers will be able to share context (perhaps even accessing each others’ latent space and so on), enabling much more efficient collaboration and coordination compared to human researchers.
  • And of course, however smart our initial automated AI researchers would be, we’d soon be able to make further OOM-jumps, producing even smarter models, even more capable at automated AI research.
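To put the quoted OOM figures in concrete terms (this is just my restatement of the numbers above, nothing extra): an OOM is an order of magnitude, i.e. a factor of 10, so the arithmetic works out like this:

```python
# Back-of-the-envelope restatement of the quoted numbers (1 OOM = a factor of 10).
ooms_per_year = 0.5                      # quoted trendline for algorithmic efficiency gains
decade_of_progress = ooms_per_year * 10  # ~5 OOMs of gains over ten years
speedup_factor = 10 ** decade_of_progress

print(decade_of_progress)  # 5.0 -> compressing a decade into a year is the "5+ OOMs in a year"
print(speedup_factor)      # 100000.0 -> roughly a hundred-thousand-fold efficiency gain
```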
If they can successfully utilize AI in this manner, it will rapidly become self-improving and develop itself unpredictably and outside of human control. At some point - and sooner rather than later, it seems - we will essentially just be along for the ride as AI decides to do whatever it's going to do, while we can just hope for the best.

There really doesn't seem to be any stopping this now. The economic potential for AI is too great to ignore, so massive private sector investment will continue. And as the models become increasingly and demonstrably more powerful, governments will soon recognize the national security implications and an AI arms race will begin. The author of the paper is just ahead of the curve in recognizing the inevitability of this outcome, and wants to ensure that the US remains ahead of the Chinese in this race, given the unprecedented danger that AI represents to humanity. Unfortunately, I think he is a little naïve in assuming that the US government will be the good guys in that fight. And that's not to say that the Chinese would be much better. AI might simply be too powerful a tool/weapon for any man or nation to wield responsibly and safely.
 
An "autonomous" AI is in no one's interest, not even the elites who are pushing the AI. What the elites want is a near-autonomous AI, but they want to reserve for themselves the ability to control it. They're still beefing the AI up as much as they can. You're looking at the classic Jurassic Park scenario: just because they can doesn't mean they should.

I believe that Divine Providence precludes certain events from happening. I don't believe that AI, nuclear war, random asteroid, global sickness, alien invasion, etc, will kill us off. Not to maintain the bubbly status quo, but to reserve the world for the day of judgement that has already been promised.
 
I believe that Divine Providence precludes certain events from happening. I don't believe that AI, nuclear war, random asteroid, global sickness, alien invasion, etc, will kill us off. Not to maintain the bubbly status quo, but to reserve the world for the day of judgement that has already been promised.
I agree that the other things won't be allowed to happen or are physically impossible to begin with, but I think a nuclear war absolutely could happen, possibly as part of the series of events that leads to the apocalypse.

Despite what we see in fiction, a nuclear war would not be anywhere near an extinction event.
 