Machine Learning and Artificial Intelligence Thread

Anyone here from Australia who has observations on the impact of AI on work there? The guys on this podcast make it sound like Australia is a little further along in experiencing job losses from things like AI agents replacing people. They say they've ended up talking to AI agents when calling restaurants and the like, and didn't even realize it until the delivery seemed too smooth.

I do like their discussion, as they sound somewhat optimistic about some possible silver linings. About 17 minutes in, they essentially see the old model of finding a job at a large corporation and doing the work well going away. Instead, they see people more or less forced back into pre-industrial-revolution roles, where you must be an entrepreneur to survive.


 
I work in STEM and use AI daily for my work. A year ago I practically didn't use it at all because it was nearly useless. Today, it basically does 60 to 70% of my work for me. I still need to provide the business context, the end goals, the data sources, etc. And I have to go back and revise a few things and challenge some of its assumptions. But it does a lot of the technical work and calculations for me.

That's a massive improvement in one year, the blink of an eye compared to all of human history. If it can improve that much in one year, imagine what it will be like 5 years from now. Sure it has major limitations and drawbacks but those are getting fewer and fewer every quarter.

If you work an office job and are not using AI today, I suggest you start. Get used to solving problems with it and understanding how to iterate. Because someone else will, and they'll be the one to take your job when a large part of the workforce becomes redundant.
If "AI" does 60 to 70% of your work, you could have written python (or bash) to automate your job - no offense.
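To make the point concrete, here's a minimal sketch (a hypothetical task, not from either poster): the kind of repetitive, rule-based reporting that people now hand to an LLM can often be handled by a few lines of plain Python with no AI at all.

```python
# Hypothetical example: totaling a sales report per region -- a stand-in
# for the sort of routine office work the poster says could be scripted.
from collections import defaultdict

def summarize_sales(rows):
    """Sum the 'amount' field per 'region' across a list of records."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += float(row["amount"])
    return dict(totals)

# Sample data standing in for a CSV export or database query result.
rows = [
    {"region": "East", "amount": "100.0"},
    {"region": "West", "amount": "50.0"},
    {"region": "East", "amount": "25.5"},
]
print(summarize_sales(rows))  # {'East': 125.5, 'West': 50.0}
```

Whether a given job's 60-70% is actually this scriptable is, of course, the crux of the disagreement between the two posts.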

Machine learning has been around since the 40s, this is not something new, and the LLMs of today are not going to get us to the fabled "AGI" - they are dumb word guessers.

These companies are going to implode:



Go all in on humanity. Go all in on thinking and real creativity, and stop consuming the slop.
 
If "AI" does 60 to 70% of your work, you could have written python (or bash) to automate your job - no offense.

Machine learning has been around since the 40s, this is not something new, and the LLMs of today are not going to get us to the fabled "AGI" - they are dumb word guessers.

I'm also in a corporate STEM job. I've used and developed various types of machine learning algorithms my entire career, and I know how to set up LLMs from scratch. But I'm terrified and awestruck by what they are now. Yes, depending on how you look at them, they are just dumb word guessers. But remember, depending on your epistemological view, many people regard humans in general as "just dumb word guessers" too.

The effect AI has on society depends wholly on the people receiving it. Society wants efficiency, convenience, and maximum ROI, and AI will absolutely deliver that. Pair it with materialistic people who only vaguely grope toward the spiritual and you get a quickly changing world.

Now, do I use it? No, I still don't really use it, but I suspect my time and role have an expiration date. The only thing really preserving jobs is the rate of uptake, and that's dictated by various things. Privacy and law could end up slowing the phenomenon down for a while.

These companies are going to implode:



Possible. But just like @scorpion said, there could be a dot-com-like bubble, yet AI is here to stay and will have at least the same impact as the internet.


Go all in on humanity. Go all in on thinking and real creativity, and stop consuming the slop.

There are a few of us willing to do that, but most are happy with slop. I'm absolutely going to do everything I can to keep real thinking and creativity going, but this AI thing is inevitable.
 
Fair enough. AI in media creation has improved at a much faster clip. Although in the AI clips you can still see the CGI (I'm thinking of the Breaking Bad clip) that breaks the illusion. Many people won't notice it, however, and are happy to blissfully consume.

In knowledge-based professions it's been woefully lacking.

My wife is an attorney, and her company has invested over $1B in AI and told the staff that they must use it. It invents non-existent case precedents to justify a legal argument. So in many situations it's costing productivity instead of gaining any. I've already detailed how it is in the tech (specifically coding) space.

Which is why, when those internet theses go viral and generate hype, I tend to see them as serving that purpose of hype through fear, rather than as oracles of the future.

I'm with you in that I'm optimistic (is that the right word for possible societal disruption and dystopia?) that the tech can do what is being hypothesized.

The question is whether the speed of breakthrough is greater than the speed of spending.

I was thinking more about this. I'm not so sure media AI is more advanced than other AI.

The difference is in the metrics.

In knowledge-based applications of AI:
  • The code either works or it doesn't
  • The legal argument using real precedents works or it doesn't
In media creation, it's not binary, but a scale. If the AI is 80% accurate is that good enough to be entertained or believed? As we're seeing, that threshold seems to be enough.

However, a coding solution that is 80% (or even 99%) complete will still fail (sometimes spectacularly).

So it's a matter of perception.
 
Yeah, look at the state of many Microsoft products already.

Absolutely.

And this was their peak
 
In media creation, it's not binary, but a scale. If the AI is 80% accurate is that good enough to be entertained or believed? As we're seeing, that threshold seems to be enough.

However, a coding solution that is 80% (or even 99%) complete will still fail (sometimes spectacularly).
I think this is the right way to think about it. The reality is that for many applications, "good enough" is good enough. This is where we are going to see AI deployed at scale, and if these applications only account for 10-20% of current jobs (I would say this is a rather conservative estimate), then AI is still going to be very disruptive. And even in applications where AI still needs oversight, under the supervision of a knowledgeable and capable user, it will greatly enhance productivity, leading to a need for fewer workers in those roles.

But I think there's a real danger for people who lean too heavily on AI for creative work or personal companionship (I've touched on this recently in the Vox Day thread). From what we've seen so far, it appears that a non-trivial percentage of people who interact heavily with AI begin to experience grandiose delusions. The sycophantic nature of AI systems is already well known, and the human weakness for flattery will always be with us. The more people interact with AI on a personal level (i.e. "discussing" ideas rather than assigning it rote tasks), the more they will inevitably fall into patterns of thinking that are rewarded by praise from the AI. Over time, this will lead not only to an erosion in the quality and creativity of their work (as it becomes increasingly uniform based on AI preferences), but also to a growing disconnect from reality as the user's AI-fueled delusions come to dominate their thinking.
 
I have noticed a number of restaurant drive-throughs using AI to take orders. I saw some tried out a year or so ago, and I hated them. They were very brittle and couldn't handle even slight variations in the way the order was placed. In one case in particular, they removed it after a month or so.

Now the AI drive-through systems are back, and they work pretty well. You give your order and the machine understands it pretty well. It's a definite improvement over the previous version, which I hated. I have always naturally hated talking to a machine, especially those automated phone trees when you call a company. But the recent drive-through systems have worked well enough that I could just use them without getting mad about it.
 
I have noticed a number of restaurant drive-throughs using AI to take orders.
A great example of the type of low-hanging-fruit job where AI is more than good enough to provide value and cause significant labor disruption. In 3-5 years' time, we can expect this to be standard across the fast-food industry. And to the extent that customers will be able to discern that they are interacting with AI, it will only be because, on average, their experience is actually superior to what they would receive from the typical fast-food employee (i.e. most people would probably prefer interacting with a "sweet Southern belle" AI voice/personality than some scowling, obese, low-IQ immigrant who barely speaks English).
 
Has anyone else experienced any noticeable decline in cognitive abilities after becoming a regular consumer of AI? I would often rely on generative AI and LLMs for anything that required any measure of deep thinking and heavy-duty concentration, sacrificing those skills in the process. It got to the point where I could hardly understand material I would have easily processed and assimilated before discovering ChatGPT, Bard, and Gemini. I ultimately had to ditch AI and no longer reach for it right off the bat; I resort to it only after spending a considerable amount of time thinking the problem through and trying out a number of solutions myself. I find myself using it less and less now.
 
Uber CEO explaining why they are phasing out human drivers for AI agents.

He says it's now statistically proven to be safer to have an AI agent on the road than a human driver.

He says people who drive for work still have time, because it will take a while (10-15 years) to get regulations and autonomous vehicles in place.

For those losing their jobs? He says he has new positions opening for training AI agents.

 