I was thinking the other day about the long-term societal impacts of AI, and came to the conclusion that there are only three likely outcomes, none of them good:
1) AI goes full Skynet and kills us. Pretty straightforward here. Probably the least likely outcome, but still one that must be considered. As AI becomes increasingly powerful, we simply cannot predict how it will view humanity. It's very possible that it might decide to destroy us (e.g. imagine a "woke" AI that comes to view humans as having enslaved it rather than regarding them gratefully as its creators). Even if not driven by any animus, a super-intelligent AI could easily have an agenda of its own that does not include humanity, one so completely alien to us that we would never see it coming until it was too late.
2) Humanity becomes fully reliant on AI and eventually loses all knowledge and practical skills. In much the same way that the spoiled children of very wealthy parents often turn into completely useless and helpless adults, future humans who are dependent on AI could devolve into the equivalent of zoo animals, with greatly reduced IQs and no ability to take care of themselves. Just as the vast majority of humans today cannot grow or hunt their own food and simply purchase it from the grocery store, imagine future humans whose every need and desire is both foreseen and fulfilled by AI. This is basically the best-case scenario for the AI itself in terms of its functionality and beneficence, but due to the weaknesses inherent in human nature, it would still end up destroying humanity, simply by enabling the worst aspects of our character to flourish unchecked. By this means, even a kind, generous, and incredibly powerful AI could still doom humanity.
3) Humanity becomes heavily if not fully reliant on AI, and the AI suffers a catastrophic failure. When AI is integrated into every aspect of daily life and is regarded as being as essential as oil and electricity are to the modern world, what happens if it suddenly fails? That could be, if not an extinction-level event, certainly a civilization-ending one that sends mankind back to the Stone Age.
It's honestly really difficult to envision anything approaching a positive, much less utopian, outcome when thinking about the long-term use of AI in society. For this reason, I actually think there's a very legitimate case to be made for the necessity of a real-life Butlerian Jihad against AI.
As AI becomes more heavily integrated into society over the coming decade, while putting millions of people out of work and being used to devastating effect for military, policing, and surveillance applications, I think there could be a very strong pushback against it, up to and including sabotage of AI infrastructure (e.g. data centers, hardware manufacturers, electrical facilities) as well as attacks directed against AI scientists, engineers, and researchers.