Machine Learning and Artificial Intelligence Thread

Completely free energy source? I don't think that's possible; there is a cost to energy extraction and storage. But the sun is a good, cheap energy source, and solar panels have been getting cheaper. I paid $40 per 100-watt panel, and I'm sure there are cheaper options. Even my portable folding panels only cost me $70 per 100 watts brand new. This is pretty cheap energy.
The problem is the sun doesn't shine well everywhere. Energy being constrained and limited is what holds the natural world together; otherwise it would be a runaway destructive process. In the human context, it keeps societies from overextending themselves.
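For what it's worth, the per-watt math on those panel prices is trivial to check (numbers from my own purchases above; shipping, mounting, and battery storage not included):

```python
# Quick cost-per-watt arithmetic from the figures above.
panel_price = 40.0    # dollars per rigid 100 W panel
folding_price = 70.0  # dollars per portable folding 100 W panel
watts = 100.0

print(panel_price / watts)    # dollars per watt for the rigid panel
print(folding_price / watts)  # dollars per watt for the folding panel
```

Even the pricier folding panel comes out to well under a dollar per watt of capacity.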
 

You mean like this?

 
Saw this article the other day about how OpenAI is screwed. Oh, and don’t expect AI magic to circumvent hard material limits of dwindling non-renewable natural resources.

At this point, the only real question is whether the tech-bro elites know this and are pitching everything into AI in a Hail Mary attempt to build a superintelligence that can come up with some brilliant tech breakthrough to solve our energy and material needs, or whether they’re just trying to hoard the deck chairs of the Titanic before it sinks, so they can live out their satanic elite Epstein Island fantasy while 99% of humanity is kicked back to the Stone Age, à la Peter Thiel/Curtis Yarvin. There’s also the possibility that they’re actually short-sighted idiots with delusions of grandeur, too dumb to realize they’ve dug themselves into a hole. Lots of possibilities are on the table.

Personally, I think the Eschaton happens before we run out of oil and/or a super intelligent AI takes over the world for five minutes, but this is mostly my writer’s instinct gut feeling talking.
 
If you don't get at least SMR (small modular nuclear reactors), the jig is up.

Gates knows it, which is why he admitted the climate grift and that they need energy up the wazoo to do this nonsense.
 
I somehow don't think this will pan out for people. At least not the way they want.


He compared the future of work to gardening. "It's much harder to grow vegetables in your backyard, but some people still do it because they like growing vegetables," he said. "That will be what work is like: optional."
 

So if we grow tomatoes in our backyard, we’ll get paid $200k a year and no worries about inflation eroding our standard of living? Ok sign me up Elon, I’m on board.
 
If AI takes away most people’s jobs and people are subsidized with UBI, I think most people’s standard of living will be much lower on UBI than it was when they were working. Only the part of the current population bringing in <$50k per year might get a bump up in standard of living on UBI, and that’s only if inflation doesn’t run wild. Just how I envision things going: 90% of the population will be UBI serfs (say $50k income based on current US economy figures), maybe 7% will be the “human servants” catering to the elites (say $1-2m annual income), and 2% will be the ruling-class trillionaires (CEOs, politicians, etc.). IMO this is looking more and more likely.
 
Musk’s problem is that he operates under an infinite growth mindset, where the numbers just keep going up forever. He’s going to find out the hard way that you can’t have infinite growth without infinite energy and infinite natural resources, both of which are about to get a lot more expensive and scarce as the inexpensive and accessible oil wells and mines dry up.

It’s looking less and less likely that modern machine civilization will pull a rabbit out of the hat by inventing some new magical energy source (which doesn’t just need to replace electricity generation, but diesel used for mining and transport, high-quality coal for steel production and heavy industries, and so on) to circumvent these inescapable material limits. The technological economy of Musk where nobody needs to work anymore is predicated on fantasy.
 
He thinks that because the sun can theoretically be harnessed, and they technically have the technology to do much of it, they will. But practicality is another thing entirely, as you have stated.

The other part is that the world absolutely runs on the fact that the powers that be want to remain powers, which means they have zero desire to raise the standard of living for the middle 50% of the world, whatever that means. They let advanced economies run, in my estimation, so they could get enough economic power and energy to give rise to computers, computing power, and robots/tech. Now that that fun period is over, they're doing all they can to destroy population and make it harder for the lower 85% to live as non-slaves. And that will become 90%, then 95%, etc.
 
The people working with this stuff admit there is literally no identifiable theory at its base. They smash together a bunch of data that they've embedded with meaning, and out comes something that seems intelligible. But no one really knows why it seems intelligible. It sounds dumb that we are entertaining such a thing, and scary at the same time.

Just for your information, my work is on the left side of that picture, and I routinely dabble in the middle part with Physics-Informed Neural Nets (PINNs), which is a sort of compromise position. But even on that I have been souring. I'm not the only one.
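For anyone curious what "physics-informed" means in practice: the training loss mixes a data-fit term with a penalty on the residual of the governing equation. Here is a minimal sketch using a made-up toy ODE (du/dt = -u) and finite differences in place of a real network and autodiff; actual PINN code looks nothing this simple:

```python
import numpy as np

# Toy sketch of the physics-informed idea for du/dt = -u, u(0) = 1.
# (Invented example; real PINNs train a neural net with autodiff.)

def physics_residual(u_fn, t, h=1e-4):
    """Central finite-difference residual of du/dt + u = 0."""
    du = (u_fn(t + h) - u_fn(t - h)) / (2 * h)
    return du + u_fn(t)

def pinn_loss(u_fn, t_data, u_data, t_colloc, lam=1.0):
    """Data mismatch plus weighted physics-residual penalty."""
    data_term = np.mean((u_fn(t_data) - u_data) ** 2)
    phys_term = np.mean(physics_residual(u_fn, t_colloc) ** 2)
    return data_term + lam * phys_term

t_data = np.linspace(0.0, 1.0, 5)
u_data = np.exp(-t_data)            # "measurements" from the exact solution
t_colloc = np.linspace(0.0, 1.0, 50)

good = pinn_loss(lambda t: np.exp(-t), t_data, u_data, t_colloc)
bad = pinn_loss(lambda t: 1.0 - t, t_data, u_data, t_colloc)
print(good < 1e-8, good < bad)  # the exact solution wins on both terms
```

The collocation points let the equation itself constrain the fit even where there is no data, which is the whole appeal of the approach.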
Agreed. I have worked with AI and coded it too. What happens inside the model's matrices is like some kind of weird dream. A few years ago I saw an article about an image-recognition neural net (NN). This net has many layers, where each layer is a matrix and the output from each layer feeds into the next one.
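The "each layer is a matrix feeding the next" picture can be sketched in a few lines (shapes and weights below are made up for illustration; a real net learns its weights from data):

```python
import numpy as np

# A stack of weight matrices: the output of each layer, passed through
# a nonlinearity, becomes the input of the next layer.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 8)),  # layer 1: 16 inputs -> 8 units
          rng.standard_normal((8, 4)),   # layer 2: 8 -> 4
          rng.standard_normal((4, 2))]   # layer 3: 4 -> 2 outputs

def forward(x, layers):
    for W in layers:
        x = np.maximum(x @ W, 0.0)  # matrix multiply, then ReLU
    return x

x = rng.standard_normal(16)  # one 16-dimensional input
out = forward(x, layers)
print(out.shape)  # (2,)
```

All the "knowledge" lives in those matrices, which is exactly why cracking them open tells you so little.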
What you say about the layers is fascinating. I've cracked them open as well. It gets a little more difficult to understand what you are looking at when you are trying to get it to learn various physics and statistics rather than images.

Conceptually, I do think it's pretty fascinating how that vector space can be used in terms of language. Like how a vector for "tower" might be close to a vector for "Paris", so the model would likely pick a certain tower when prompted for a story about Paris. But you're right about how it just happens to be human-readable. That's an interesting way to think about it!
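That nearness intuition is usually measured as cosine similarity in the embedding space. A sketch with hand-made 3-D vectors (real embeddings are learned and have hundreds of dimensions; these numbers are invented):

```python
import numpy as np

# Invented toy embeddings: "tower" and "paris" point in similar
# directions; "tomato" points elsewhere.
vecs = {
    "tower":  np.array([0.9, 0.8, 0.1]),
    "paris":  np.array([0.8, 0.9, 0.2]),
    "tomato": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vecs["tower"], vecs["paris"]))   # high similarity
print(cosine(vecs["tower"], vecs["tomato"]))  # low similarity
```

So "pick the Eiffel Tower for a Paris story" falls out of geometry, not any explicit rule, which is part of why nobody can fully explain why it works.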
So what does all of this actually mean? Does it mean that AI is just some insane mumbo-jumbo that only looks meaningful on the surface?

Or does it mean that AI looks impressive on paper and appears beneficial at first glance, but once you look under the hood it’s a spaghetti system that doesn’t really make sense?
 
AI is impressive, but it's unreliable. I've had lots of occasions where I search for something and the Google AI summary says one thing, then I change the wording of my question to get closer to what I'm trying to find, and the AI gives a new answer that directly contradicts the previous one. It's like some kind of bullshit artist that can often make up stuff that sounds pretty good, but it doesn't really know what it's talking about, and you can't trust it.
 
I see what you mean. I’ve mentioned before how AI has given me made-up explanations. For example, when I asked why Yamaha chose to use a 115cc engine to compete with Honda’s 125cc in the same market segment, it claimed Yamaha did it for cost-cutting reasons. But the real reason is that Yamaha’s 115cc engine already performs on par with (and in some areas better than) Honda’s slightly larger engine.

Now I’m seeing it make up facts about a quest in Fallout: New Vegas. I asked whether it’s true that Caesar can become hostile if you pick the wrong dialogue option or refuse to work with him. It said yes, gave me some explanation, and even provided specific dialogue lines that supposedly trigger hostility.

Now this is the interesting part: during a specific quest, the AI tells me that choosing the wrong dialogue option can cause Caesar to become hostile toward us, while my experience playing the game and a simple wiki search tell me this is impossible. Yes, Caesar can become hostile in the end, but only if we fail his quests—not by choosing the wrong dialogue option.

The dumbest part is that I tested this using ChatGPT 5.1, which is supposed to be the newest and most accurate model, yet it still invents information instead of admitting it has no data or simply doesn’t know.

Here is the screenshot of what ChatGPT 5.1 said:

[screenshot of the ChatGPT 5.1 response]

A simple search of the wiki proves that this is not true: https://fallout.fandom.com/wiki/Et_Tumor,_Brute?

If we care about accuracy, then using AI will only lead to us being misdirected.
 

@Thomas More said it most succinctly:

you can't trust it.

That said, AI can be correct in its answers, and even when it's not correct it can be close enough. It may even be close enough for most things. But at base we are giving up human judgment and agency to something artificial. To something fake. To something "not human" (and not angelic). We are made in the image of God and we want an artificial version of that? We want a "something else" to make judgments/decisions. Do we trust that?

I actually think the AI question is driving at some very deep things. It is not just insane mumbo jumbo. It is regurgitating something meaningful. All this "training" of AI is what embeds the meaning into it. We (humans) have labeled all these bits of language and images for it and given it meaning.

But it operates in principle very similar to a "digital ouija board".

You have billions of people who have now focused their energy and attention on this thing. They all put their hands on the keys, and it takes all of that and channels it... to answer... to predict.

The technology is flipping over into some ancient concepts of techne, and technique. And it's stuff that is not great to play around with. Technology has been treading into this territory for some time but it's just getting more blatant as time goes on.
 