Machine Learning and Artificial Intelligence Thread

Is it a tool or an agent?

Or maybe I should ask: can we interact with it like it's a tool, or is it always going to slip into "I am interacting with it like it's an agent"?
It's a sophisticated plagiarism machine. How can people here fall for the hype coming from the deluded godless nerds?

Try vibe coding anything more complicated than a pong game and you will soon find out how stupid "AI" really is. No serious software team relies on "ai" for production. It's a tool among many in the toolset. It replaced googling and stack overflow in a junior engineer's workflow. Instead of you searching for an answer or a code snippet on stack overflow, you ask "ai" and it scans github and stack overflow for code already existing in the public sphere.

It's really not all that complicated, once you step outside the narrative of the rich nerds.
 
This is simply not true. Go read that article I posted on the last page. The people most intimately familiar with AI, and the ones who know how best to utilize it - that would be its developers - are getting extensive productivity benefits from it. The fact that the average person cannot yet get a lot of value out of AI doesn't mean the technology is useless; it just means that we're still very early on the adoption curve. Your grandmother farting around on AOL in 1996 and complaining about how difficult it was to send an email was not an indictment of the internet; it was an example of a technology that was only just beginning to break through to the mainstream.

The publicly available chatbots/LLMs are really just scratching the surface of what AI is going to do. Right now these LLMs are trained for general-purpose use, and that is what most people use them for (analogy: you can ask pretty much anyone off the street to google something for you and give you the answer, because it doesn't take any special skills, e.g. "who won the gold medal in the men's 100m dash in 1976?"). What's going to be hugely disruptive is when large companies start building highly specialized enterprise AI models, trained on all their internal data and processes, and designed specifically for their workflow requirements. These models will be very accurate, fast, and streamlined, and they will allow people who know how to use them well to do the work that previously required 5-10 people (analogy: you can't ask just anyone off the street to drive a race car, dance ballet, or perform brain surgery; you need a specialist). Real economic value comes from specialization, and we are only just beginning to see that from AI.
 
What I said is a fact. No serious software team relies on "ai" for production. I'm not talking about the freely available chatbots; I'm talking about the $200/month agentic coding tools used in the industry. No one relies on their output without human supervision, otherwise they'd end up with huge technical debt that would ruin the company. It's one thing to vibe code a weekend side project, another to write sensitive code in medical, security, accounting, logistics, you name it, in a production setting. It's simply not good enough on its own. It will create a security nightmare in no time.

Any company that claims x% of their code is written by ai is either lying or will soon go bust.
 
Which is why MSFT is potentially going to zero.
 
MSFT going to zero is a pretty bold prediction, no?

Even if the managerial class are losing their minds I have a feeling the people doing the actual work are still going to grind away and get things done.
 
AI will supposedly transform society because it will create unprecedented labor productivity gains.

I see AI going in one of three ways:

1. AI turns out to be a dud. The productivity gains are nowhere near large enough to justify the enormous investments in it. It's a bubble. The stock market crashes. We have a recession. After the dust settles, we reset our expectations for AI to something more realistic.

2. AI does result in unprecedented and fast productivity gains. Millions of jobs are wiped out, and far fewer new jobs are created. We have Great Depression-level or higher unemployment, while the rich get REALLY rich. This leads to political instability.

3. AI does result in unprecedented and fast productivity gains, triggering an economic boom and abundance. Jobs are wiped out, but many more new jobs are created. We have robots building millions of new houses (adios Juan!), we cure cancer, etc.
 
I use Cursor a little at work, but I fail to see how people insist they can write functional applications just by prompting.

I tend to reach for it only if the task I have in mind is tedious enough, and I know with a reasonable amount of certainty that it will be able to do it. This rules it out for most things.

Whenever I have dipped my toes into trying to get it to do things beyond that, it doesn't appear to save time and the result is almost always a mess.

The problem with it is that it's like an extremely autistic person: it will hyperfocus on the outcome of the prompt, throwing everything else out to achieve it. You ask it to build some functionality and it will break everything else along the way, as long as it gets the prompt done.

Also it doesn't have any concept of reusability or refactorability. It just tries to get the outcome. And yes, you can exhaustively try to prompt against every possible way in which it might break things or do things in a dumbass way, but it has an extraordinary ability to surprise you, and in any case that becomes more laborious than just writing out the code.

Also it's just a tedious workflow. Writing code is fun; prompting an AI over and over again and waiting for the result is mind-numbing. Having tried it a lot, I think there is a ton of hype around it. Yes, it can be useful in a lot of ways, but people who think it can replace a competent programmer are delusional.
 
The reports that companies are using "only AI" to write code and gaining 100x productivity are, well, false.

Want some inside baseball?

Companies such as MSFT or AMZN promote the use of AI tooling by tracking employees' use of AI and punishing employees who don't use it, thus making the "adoption" of AI an essentially forced workflow, even though anyone who knows what they are doing programming-wise does exactly as you say: they may outsource tedium or boilerplate, but anything involved is still entirely manual.

All of these companies are in on the grift, because they *need* it to be successful.
 
Something that can serve as an example of the power but also limits of (current) AI. A researcher used a team of Claude AI agents to build a new C compiler:


Digging into the article you can see some of the limitations (the AI created a buggy assembler and linker, so it had to call in the human-made GCC), and even though the code mostly works, there's some sub-optimal stuff and some parts that required human intervention.
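That fallback is easy to picture. Here is a minimal, hypothetical sketch of what "call in GCC" amounts to - the function and file names are mine, not from the article - for a front-end that emits assembly itself and then delegates assembling and linking:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch of the fallback described above: the AI-built
 * compiler emits assembly on its own, then hands off to the human-made
 * GCC toolchain for the assembling and linking it could not do
 * reliably itself. Names here are illustrative, not from the article. */
int build_link_command(char *buf, size_t n,
                       const char *asm_file, const char *out_file) {
    /* gcc accepts a .s file directly and runs its own assembler and linker */
    return snprintf(buf, n, "gcc %s -o %s", asm_file, out_file);
}

int assemble_and_link(const char *asm_file, const char *out_file) {
    char cmd[256];
    build_link_command(cmd, sizeof cmd, asm_file, out_file);
    return system(cmd); /* 0 on success */
}
```

In other words, the "compiler built from scratch" still leans on an existing toolchain for the last mile - which is exactly the limitation the article concedes.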
 
The compiler could not compile "Hello, world".
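For reference, this is the canonical smoke test for any C compiler - about as small as a complete C program gets (wrapped in a function here so it can be exercised directly):

```c
#include <stdio.h>

/* The body of the canonical "Hello, world" program: the smallest
 * complete test a C compiler is expected to pass. puts returns a
 * nonnegative value on success, EOF on error. */
int hello(void) {
    return puts("Hello, world");
}
```

Failing on this is failing at the starting line.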

This is impressive only to someone who does not understand that GCC is a C compiler: they essentially built a wrapper around an established compiler and called it a "C compiler built from scratch". C is also a 60-year-old language with unlimited examples (which would be in the training set), yet they proceed to say "we did it all offline".

"Hey guys, I cooked a Michelin Star meal" - as I order it to my house with Doordash.
 