I'm honestly starting to think our predictions about AI are wrong, built on hype and baseless optimism. It seems like we keep running into hurdles we never anticipated. We've come a long way, but it's possible we're already hitting the peak. I mean, look at smartphones - have they really changed much in the last 17 years? Incremental improvements, mostly to the display, battery, and camera, but that's about it. I think we're a very, very, very long way off from true AGI, especially any kind of wetware AGI, assuming it's even possible to achieve. All we really have are trainable programs that use deep learning to simulate intelligence; it's arguably not even true AI right now, let alone anything approximating AGI. The leap from deep learning algorithms to true AGI looks far more complex than we thought, and it's not even on our horizon. I'm thinking it's something that may take generations or even centuries to pull off.
I mean, just 30 years ago we thought self-sufficient robots were as simple as building a machine that could move limbs and process inputs from the world. Turns out it's wildly more complex than that: actually perceiving the environment and responding to it accordingly is far beyond our comprehension, and we've barely scratched the surface. Similarly, we've talked about "brain uploading" for the last 20 years, but it sounds like we're eons away from it. I can't take seriously any claim that this is something we can do in our lifetime, let alone this millennium, at least in the sense of creating a true 1:1 recreation of human consciousness in digital form. Laughable.
This, to me, seems like a classic case of making arrogant predictions from a lack of knowledge: not knowing how ignorant we are, and not understanding what it really takes. Project management alone is the textbook example; we chronically underestimate how long it takes to complete anything. The more we learn about what makes us human, the more we realize how little we understand it.