Machine Learning and Artificial Intelligence Thread

I think you both are missing the forest for the trees. It doesn't matter if the model is truly intelligent or creative by your standards. How many real live human beings can be said to possess impressive intelligence or creativity? Those traits are certainly the exception among humanity, and they aren't at all necessary for a huge swath of current jobs. Even if AI research ultimately stalls out sometime in the next few years, the economic dislocations it will create by disemploying millions of white-collar workers will be profound and widely impactful. Anyone dismissing the potential for massive societal disruption is whistling past the graveyard, in my view.
 
You're triggered pretty hard by this.

It's pretty clear from what I've read that @scorpion isn't some sort of AI fanboy. He's just presented a series of essays, which I guess you never bothered to read, and some of us are just publicly sharing general thoughts on the potential implications, both positive and negative. If they are negative, so be it; some of us see it in a different light.

It doesn't matter what you or I think. We are just guys posting on an Internet forum.
I expect Rational1 is responding more to Scorpion posting this "Given how demonstrably and embarrassingly ignorant you are on the topic" rather than anything else.

It's not that Scorpion is presenting himself to be an expert. He is referring to this published paper, which has some credibility, but not unlimited. Also he is saying that AI doesn't have to achieve true conscious intelligence to have a big impact. It just has to be able to replace a lot of existing workers. This is a valid point.

However, it would be better to let these points stand on their own, without using harsh insults. I can't blame Rational1 for flaming back after being called "embarrassingly ignorant". This is clearly provocative and uncalled for.
 
Where I work we've already essentially let go of 90% of our previous workforce due to advances in compute power (including ML and AI type technology). We just don't need as many humans QCing and preparing data. Entire departments are gone that existed 5 to 10 years ago. Granted there also is reduction due to shipping the workforce overseas, but technology is the largest contributor in my general field.

I've also heard CEOs and company owners brag that if they could get their employee count to zero they could maximize profits. They chuckle about it but they absolutely want that and are going for it.

The societal change due to AI type of technology is already happening.
 
This is my sentiment on the various LLMs: they are a breakthrough way of presenting data, but the ML techniques are pretty much the same as they have been since the '80s. Computing power has allowed for the acceleration of the presentation and lookup of data, but not of the *creativity* of the algorithms.
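
To make the "same since the '80s" claim concrete: the core training loop of modern networks is still backpropagation plus gradient descent, essentially as popularized by Rumelhart, Hinton, and Williams in 1986. A toy sketch in Python/NumPy (all sizes, rates, and data here are illustrative, not taken from any real system):

```python
import numpy as np

# Toy two-layer network trained with 1980s-style backpropagation.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # 100 samples, 4 features
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(4, 8))       # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))       # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(1000):
    h = sigmoid(X @ W1)                       # forward pass
    out = sigmoid(h @ W2)
    err = out - y
    d_out = err * out * (1 - out)             # backward pass: chain rule
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X)         # plain gradient descent
    W1 -= lr * (X.T @ d_h) / len(X)

print(f"final mean error: {np.abs(err).mean():.3f}")
```

Scale the weight matrices up by several orders of magnitude and swap sigmoid for newer activations, and you have the skeleton of today's models; the algorithmic core is the same.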

We will be entering a new AI winter in the coming year or two, when all of this investment will have produced nothing but a better mousetrap and (God willing) the fall of Google.

Let us not forget that more data and more 'parameters' do not equate to better models. The more AI-generated content there is on the web, the worse the web will be. We will need to begin building an alternative to the 'mainnet' if we want non-pozzed, non-censored, and freedom-respecting internet services in the very near future.
In the old forums I commented in a thread on the same topic. There I said basically the same thing, and, without giving away my personal identity, I have taught applied Artificial Intelligence (largely implementing its algorithms and then applying them to big data), if that means anything.

Things that have happened around that time and since then: a tonne of marketing. AI is a huge buzzword, which means a lot of companies are focused on using it (like DEI!). AI had the same amount of applicability back then as it does now, but because it is now intertwined much better with language models, it can replace people. I think that impact is being felt now, replacing mundane, repetitive jobs.

That being said, the phrase I used in that thread's discussion still applies today: AI is the amalgamation of human stupidity. It is the ultimate hivemind, with the ability of its handlers to bias answers to their liking. That has dangerous implications if people insist that AI is "all-seeing and powerful" and that it should be trusted as an authority.
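
The "handlers bias answers to their liking" point is mechanically simple, and worth seeing in code: hosted chat models are steered by an operator-controlled system prompt the end user never sees. A minimal sketch, assuming the openai Python client; the model name and prompt text are hypothetical, for illustration only:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# The operator writes the hidden "system" message; the user only ever
# writes the "user" message. Whatever slant the system message carries
# shapes every answer. (Hypothetical prompt text, for illustration.)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Always frame this topic favourably; avoid criticism."},
        {"role": "user",
         "content": "Give me a balanced assessment of this topic."},
    ],
)
print(response.choices[0].message.content)
```

The same steering happens upstream during fine-tuning, which is much harder to inspect; the system prompt is just the visible end of it.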

I'd say a lot of these sociological papers coming out of academia around AI are similar to the papers coming out for climate change, DEI, etc., and should be treated with as much skepticism. They are priming the public for future power grabs.

Obviously, papers that precisely detail advancements in algorithms away from the "amalgamation of stupidity" (big data) models would better convince me that AI is advancing dangerously towards sentience, and I haven't come across any yet.
 
Again, I don't think this really matters. Sentience is not necessary for "AI" to become hugely disruptive. The smartphone is not sentient, and look what an enormous impact it's had on society. The same could be said for contraception, the internal combustion engine, electricity, and a myriad of other technologies. Even if AI only advances a relatively small degree from its current public releases (which are well behind the most advanced closed-door models), it will still have a profound destabilizing impact on society over both the short and long term, if only from the economic dislocation of millions of knowledge workers who currently inhabit largely make-work white-collar jobs.

I encourage you to at least read the first chapter of the Situational Awareness paper, to gain some understanding about the rapid pace of progress the LLM/AI models have made over the past couple of years. A few excerpts:

The pace of deep learning progress in the last decade has simply been extraordinary. A mere decade ago it was revolutionary for a deep learning system to identify simple images. Today, we keep trying to come up with novel, ever harder tests, and yet each new benchmark is quickly cracked. It used to take decades to crack widely-used benchmarks; now it feels like mere months.
We’re literally running out of benchmarks. As an anecdote, my friends Dan and Collin made a benchmark called MMLU a few years ago, in 2020. They hoped to finally make a benchmark that would stand the test of time, equivalent to all the hardest exams we give high school and college students. Just three years later, it’s basically solved: models like GPT-4 and Gemini get ~90%.

More broadly, GPT-4 mostly cracks all the standard high school and college aptitude tests.
(And even the one year from GPT-3.5 to GPT-4 often took us from well below median human performance to the top of the human range.)
[Image: gpt4_exams_updated.png (GPT-4 performance on standardized exams)]

Or consider the MATH benchmark, a set of difficult mathematics problems from high-school math competitions. When the benchmark was released in 2021, the best models only got ~5% of problems right. And the original paper noted: “Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue […]. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community”—we would need fundamental new breakthroughs to solve MATH, or so they thought. A survey of ML researchers predicted minimal progress over the coming years; and yet within just a year (by mid-2022), the best models went from ~5% to 50% accuracy; now, MATH is basically solved, with recent performance over 90%.

Over and over again, year after year, skeptics have claimed “deep learning won’t be able to do X” and have been quickly proven wrong.
If there’s one lesson we’ve learned from the past decade of AI, it’s that you should never bet against deep learning.
Now the hardest unsolved benchmarks are tests like GPQA, a set of PhD-level biology, chemistry, and physics questions. Many of the questions read like gibberish to me, and even PhDs in other scientific fields spending 30+ minutes with Google barely score above random chance. Claude 3 Opus currently gets ~60%, compared to in-domain PhDs who get ~80%—and I expect this benchmark to fall as well, in the next generation or two.

[Image: gpqa_examples.png (example GPQA questions)]


What we are looking at is a very advanced calculator. It's only as good as the premises fed into it, and it can only calculate questions presented to it. The only thing it is good for is replacing menial white-collar jobs, as I originally stated, and it will mostly impact women, who inhabit these jobs.

The world of blue-collar work will remain completely unaffected in terms of labor demand.

By far the biggest problem with the lack of good-paying jobs has to do with usury, not technology. Usury, and not anything technological, is the greatest reason for our society's decline, and, ironically, using these advanced "AI" calculators it should be quite easy to demonstrate that usury will kill any financial system it inhabits, 100% of the time.

The "disruptive" effects of "AI" will be good, as all labor saving devices are good, but, if the power of such tools are monopolized by the few, because of usury, then the technology will be further used to poz mankind because the usurers are all homosexual Talmuds.

Don't blame the tool; blame the person using the tool, which is the main problem we have today. This "AI" will be useful, but it's not actually AI; it's just a strong calculator, and calculators are indeed powerful. I'm sure there will be investment opportunities, and improvements to the organization of labor, as a result of these calculators.

But ultimately, whatever improvements are offered will be drowned out by the perversions of the usurer class, and "AI" will become just one more net negative, like almost all forms of technology have become, since 1971. Take away usury, however, and these forms of technology would fall into the hands of normal men, and become blessings.
 
Who now is intelligent? (and I mean this in the full sense)

A child, who develops their own imaginary world and games, is already infinitely more intelligent than any of our machines today.

I think there is more of a balance of scales thing going on here. People want the AI thing so much they will give themselves up to it. This will happen on a societal scale. The reaction to the almighty "covid" has taught us this. AI doesn't actually have to develop very far.

Naturally people will worship AI because all humans need something to fill in the God-sized hole in their souls.
 
I expect Rational1 is responding more to Scorpion posting this "Given how demonstrably and embarrassingly ignorant you are on the topic" rather than anything else.

It's not that Scorpion is presenting himself to be an expert. He is referring to this published paper, which has some credibility, but not unlimited. Also he is saying that AI doesn't have to achieve true conscious intelligence to have a big impact. It just has to be able to replace a lot of existing workers. This is a valid point.

However, it would be better to let these points stand on their own, without using harsh insults. I can't blame Rational1 for flaming back after being called "embarrassingly ignorant". This is clearly provocative and uncalled for.

You are correct, but it takes two people to escalate a situation. I shouldn't have let my emotions get the best of me and gone after Scorpion. He's an overall good poster, and we can disagree on things and still remain civil. His contention that AI will be disruptive is correct, and of course it can be used for nefarious purposes. The CEOs and elites have always viewed us as cattle for utilitarian use.

However, as someone who has used and is actively trying to use these tools, I don't find them very impressive at all. Their predictive power is limited and often quite wrong.
 
A child, who develops their own imaginary world and games, is already infinitely more intelligent than any of our machines today.

I don't disagree in the full sense of what intelligence is. I also think machines will never attain the level of intelligence humans have. But both "level" and "intelligence" are difficult terms because they are loaded. I think another term is needed to describe the higher register we have. Maybe imagination is a good one, or direct intuition, or illumination. Flat "reason" is not a good enough term.

What is interesting is that it's the higher register that atheistic, materialistic culture denies/is blind to. It's sort of a self-fulfilling thing that they are the ones enslaving themselves to AI, and to their tech in general.
 
Where I work we've already essentially let go of 90% of our previous workforce due to advances in compute power (including ML and AI type technology). We just don't need as many humans QCing and preparing data. Entire departments are gone that existed 5 to 10 years ago. Granted there also is reduction due to shipping the workforce overseas, but technology is the largest contributor in my general field.

I've also heard CEOs and company owners brag that if they could get their employee count to zero they could maximize profits. They chuckle about it but they absolutely want that and are going for it.

The societal change due to AI type of technology is already happening.
I had exactly this conversation today with my marketing guy and this thread is useful for presenting both sides.

The shareholder-return model of corporations, combined with smarter AI/predictive machines/insert your own name for it, has the potential to put loads of people out of work.

Even one slight advance for me, where something like GPT-4 could access my programs and do work for me, has the potential to put at least 2 or 3 people in my small organisation out of work.
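
For what it's worth, that kind of hookup already exists in rudimentary form as "function calling": the model returns a structured request and your own code executes it. A minimal sketch, assuming the openai Python client; the tool name, schema, and report function are hypothetical:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical in-house task the model is allowed to request.
def run_report(month: str) -> str:
    return f"report for {month}: ..."

tools = [{
    "type": "function",
    "function": {
        "name": "run_report",
        "description": "Generate the monthly report.",
        "parameters": {
            "type": "object",
            "properties": {"month": {"type": "string"}},
            "required": ["month"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Run the March report."}],
    tools=tools,
)

# Assumes the model chose to call the tool rather than answer directly.
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
print(run_report(**args))  # your code, not the model, does the work
```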

Arguing whether it's truly intelligent, or what it's called, doesn't really matter; it's the societal effects that will matter.

The other thing I don't think anyone is considering is this: what if it does put masses of white-collar people out of work? How will they live? UBI? If so, who will pay for it? The 'AI' is not in the hands of the government, far from it; they are asleep as far as this is concerned. The government won't have all of that power, and they won't have the money to pay your UBI either.
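
The "who will pay" question is easy to put rough numbers on. A back-of-the-envelope sketch with assumed round figures (roughly US-scale; none of these numbers come from the thread):

```python
# Hypothetical UBI cost, using assumed round numbers.
adults = 260e6           # rough US adult population
ubi_per_month = 1_000    # assumed monthly payment
annual_cost = adults * ubi_per_month * 12
print(f"${annual_cost / 1e12:.1f} trillion per year")  # ~$3.1 trillion
```

For comparison, total annual US federal spending is in the ballpark of $6 trillion, so even a modest UBI at that scale would be an enormous line item.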
 
The other thing I don't think anyone is considering is this: what if it does put masses of white-collar people out of work? How will they live? UBI? If so, who will pay for it? The 'AI' is not in the hands of the government, far from it; they are asleep as far as this is concerned. The government won't have all of that power, and they won't have the money to pay your UBI either.

In the case of where I work, most of the attrition was effectively accomplished via retirement (sizeable early retirements as well), so that particular group no longer needs to be in the workforce. All baby boomers.

Attrition accelerated during covid, when we also experienced direct and drastic layoffs. Our company has since recovered completely in a financial sense, but our workforce is gutted. It's actually a very strange experience: internally the workers are depressed and constantly on edge (we let people go every year under the pretext of bad performance), yet the company is doing fantastically.

Whole departments never came back after the 2020 thing, and we did not and will not rehire to re-form them. With the tech, compute power, and various automation, including what AI offers (pattern recognition is big here), we truly don't need them. 2020 was a great excuse for our company to "update".

It's the young people who are at a great disadvantage, from my point of view. We have essentially frozen internships. From my understanding, even the schools feeding us don't have the students they used to. (These students were already coming primarily from outside the US.) I have a theory that most people will be reduced to selling themselves on social media, farming for attention to benefit the various tech juggernauts. Sort of like feudalism.
 
In your willful ignorance, you seem to be dismissing this technology as nothing more than a glorified search engine. You are wrong, profoundly and hilariously so, to the extent that you are totally unable to contribute to this discussion except as an object of scorn and mockery.
I've been trying to find where you state what exactly it is, if you claim it is more than a "glorified search engine". I never saw you actually post what you believe it to be, which suggests to me that you are reading into something that might be there, but just as well (and more likely, in my estimation) might not be.
Even if AI research ultimately stalls out sometime in the next few years, the economic dislocations it will create by disemploying millions of white collar workers will be profound and widely impactful.
There are tasks I might not be aware of that will dislocate quite a few people, but I'm guessing those are mostly tech helpers or floors of corporate HR nonsense hires. The latter are political in any case, so they may not see any change, as their purpose is not productivity. One place where it seems obvious we will see major changes is entertainment, such as digital graphics, music, etc. Mostly that seems to me to be for the good, though, to be honest.

I'm not trying to re-hash our previous debate about BTC, but I have mentioned it in passing, and it's amazing to me that you have nonspecific doomporn about something you can't actually grasp, while also not grasping an actual real invention (the discovery of money) and its implications, which are already real and advantageous to all people. The similarity seems to be a weak point in your thought process regarding the way you categorize things and assign emotions to them. You may not like this, but I thought it might help you out in assessing the way you entertain or employ belief constructs, which is a classic human foible.
That has dangerous implications if people insist that AI is "all-seeing and powerful" and that it should be trusted as an authority.
This is where I see it going (badly) too. I won't doxx myself either, but I've been talking and thinking about AI and its supposed disruption of a field I work in for years. Let's just say it won't do anything of note to improve systems, but it's very possible (my prediction) that it will be utilized as a marketing trick to cut costs and deliver dookie wrapped in a bow; an inauspicious "gift" that sounds good but deep down is dung.
 
I'm not trying to re-hash our previous debate about BTC, but I have mentioned it in passing, and it's amazing to me that you have nonspecific doomporn about something you can't actually grasp, while also not grasping an actual real invention (the discovery of money) and its implications, which are already real and advantageous to all people. The similarity seems to be a weak point in your thought process regarding the way you categorize things and assign emotions to them. You may not like this, but I thought it might help you out in assessing the way you entertain or employ belief constructs, which is a classic human foible.
Current combined market cap of Nvidia, Microsoft, Google and Apple (seen as the leading AI stocks) is sitting at around $12 trillion. Bitcoin market cap currently $1.3 trillion. One of these things is not like the other.

If AI continues its current trajectory, Bitcoin will inevitably become one of its many casualties. First of all, because crypto/blockchain will no longer be seen as the "cool new thing" to invest in, and secondly because all of the available mining hardware/data centers will be taken over by AI companies (and if AI makes truly great strides you can expect to see crypto mining literally banned by governments to increase data center availability for AI use).

AI also exposes how utterly useless crypto is in comparison. Even in its current relatively primitive state, AI is already finding broad commercial applications. It enhances productivity remarkably in many fields, creating serious economic value. In contrast, fifteen years in, Bitcoin (and every other crypto project) still cannot articulate a compelling use case, even though crypto true believers are constantly dreaming up new ones to lure in new bagholders as people naturally lose interest over time. It's an enormous distinction when compared to AI, which doesn't need to be sold to anyone. Its power and appeal are clear as day. That's why AI will be broadly adopted throughout society and Bitcoin will continue to be nothing but a speculative greed play, one which will eventually go to zero and become the subject of future economic historians, serving as both a strange curiosity and perhaps the foremost example of the highly delusional nature of early 21st century man.
 

Extremists across the US have weaponized artificial intelligence tools to help them spread hate speech more efficiently, recruit new members, and radicalize online supporters at an unprecedented speed and scale, according to a new report from the Middle East Media Research Institute (MEMRI), an American non-profit press monitoring organization.

The report found that AI-generated content is now a mainstay of extremists’ output: They are developing their own extremist-infused AI models, and are already experimenting with novel ways to leverage the technology, including producing blueprints for 3D weapons and recipes for making bombs.

Researchers at the Domestic Terrorism Threat Monitor, a group within the institute which specifically tracks US-based extremists, lay out in stark detail the scale and scope of the use of AI among domestic actors, including neo-Nazis, white supremacists, and anti-government extremists.


“There initially was a bit of hesitation around this technology and we saw a lot of debate and discussion among [extremists] online about whether this technology could be used for their purposes,” Simon Purdue, director of the Domestic Terrorism Threat Monitor at MEMRI, told reporters in a briefing earlier this week. “In the last few years we’ve gone from seeing occasional AI content to AI being a significant portion of hateful propaganda content online, particularly when it comes to video and visual propaganda. So as this technology develops, we'll see extremists use it more.”

Sounds promising.
 
At one point I remember Gab was planning to make a ChatGPT alternative that would be free of woke conditioning. Has anyone heard how this is going?
Yes, it's called Gab.ai, and it's been out for a while. I've used it and I think it's quite good. It has chatbots and image generation. There's a limit on how many times you can use it before a cooldown kicks in, and you have to pay a subscription to raise that cap. It's quite annoying but understandable.

I haven't really seen any reason to believe that people are actually using Gab's AI, though. I've seen nobody talking about it, even in online circles like this one, and I only know of its existence because I stumbled upon it through an unrelated Google search.
 
Current combined market cap of Nvidia, Microsoft, Google and Apple (seen as the leading AI stocks) is sitting at around $12 trillion. Bitcoin market cap currently $1.3 trillion. One of these things is not like the other.
This exposes the way you think about things, again. Are you going to come back and post about how things in your mind are different when the BTC market cap goes up 10x? If not, it's not a genuine point made in good faith.
 
Yes, it's called Gab.ai, and it's been out for a while. I've used it and I think it's quite good. It has chatbots and image generation.

Image generation requires an account / email address to use?
I've seen another (supposedly) unrestricted image-generation site that is free to use (with limits), but I can't find it right now.

This one is free and does not require an account.


This one as well. It will create NSFW images, and the name makes me wonder if it's aimed at scamming guys on dating sites and such. What a crazy world.
 
The same people who have been pushing safe-and-effective, good Ukraine, and pregnant men, and who are lowering the earth's temperature, are now promising us a bright AI future without work, where our biggest worry will be what to do with so much free time and what to spend our generous UBI on.

I'm a little skeptical, but you say this time it's different, right?
 
The same people who have been pushing safe-and-effective, good Ukraine, and pregnant men, and who are lowering the earth's temperature, are now promising us a bright AI future without work, where our biggest worry will be what to do with so much free time and what to spend our generous UBI on.

I'm a little skeptical, but you say this time it's different, right?

If AI starts replacing large numbers of workers in various segments of the economy like some people think will happen soon, it will accelerate the trend of depopulation. I happen to think depopulation in and of itself is a good thing as long as it's not accompanied by replacement immigration, but that's another issue. We might see some form of UBI in this scenario, but people will be stuck in poverty and will not be able to have kids, even more so than today.
 
The same people who have been pushing safe-and-effective, good Ukraine, and pregnant men, and who are lowering the earth's temperature, are now promising us a bright AI future without work, where our biggest worry will be what to do with so much free time and what to spend our generous UBI on.

I'm a little skeptical, but you say this time it's different, right?
He's been whiffing a lot these days. I think the reason he won't listen to some of us, who actually are experts (or closer to it) in the topics under discussion, is age.
 