Those are great stories, but that's not how programming works. Programs only do what we tell them to do, and they are actually pretty limited.
I got into a discussion with people about ChatGPT where they said things like "ChatGPT is smarter than you because I can ask it to explain the Theory of Relativity and it will give me an answer that's more than what is in Wikipedia!" This shows a fundamental misunderstanding of what chatbots are.
First of all, ChatGPT does not understand anything. It regurgitates based on the prompts (what you ask it) and the text it was trained on, predicting the words most likely to come next. So it is not "smart" and does not understand what it is saying.
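To make "regurgitates based on patterns" concrete, here is a toy sketch in Python (my own illustration, not anything OpenAI has published): a tiny bigram model that picks each next word purely from which words followed it in its training text. ChatGPT uses an enormous neural network (a transformer) rather than a lookup table, but the generation loop is the same basic idea: predict the next word, append it, repeat. There is no step where it checks whether the result is true.

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in the training text.
# (ChatGPT uses a huge neural network, not a table, but the generation loop is
# the same basic idea: predict the next word, append it, repeat.)
training_text = (
    "the theory of relativity says that space and time are linked "
    "the theory of gravity says that mass attracts mass"
)

next_words = defaultdict(list)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def generate(prompt_word, length=10):
    """Keep picking a next word based only on what followed it in training."""
    output = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("theory"))
# One possible run: "theory of gravity says that space and time are linked"
# Fluent-sounding, pattern-based, and produced with no notion of whether
# it is true or false.
```

Scale that loop up by many orders of magnitude and train it on a large chunk of the internet, and the output sounds far more impressive, but the basic move is still pattern completion.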
Secondly, those people had never actually asked ChatGPT to explain the theory of relativity, or they would have realized that it explains it at about the Wikipedia level.
Third, I could ask ChatGPT to explain some scientific-sounding concept that does not exist, and it will! It just makes shit up. It even makes up citations when you ask it to explain real things. This is because it doesn't understand what a citation is. It sees citations in the text it was trained on, figures the answer needs some, and assembles ones that look like what it has previously come across, whether or not they actually exist.
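The same toy sketch, fed citation-shaped text, shows why made-up references come out looking so plausible: recombining fragments it has seen produces output with the right form and no connection to any real paper. (The authors, years, and journals below are all invented for illustration.)

```python
import random
from collections import defaultdict

# Same bigram idea as above, applied to citation-shaped text. Every author,
# year, and journal here is invented purely for illustration.
citation_text = (
    "Smith ( 2019 ) . A Study of Gravity . Journal of Physics . "
    "Jones ( 2021 ) . A Study of Time . Journal of Cosmology ."
)

citation_model = defaultdict(list)
tokens = citation_text.split()
for current, following in zip(tokens, tokens[1:]):
    citation_model[current].append(following)

def fabricate(start="Smith", length=12):
    """Assemble something citation-shaped from fragments seen in training."""
    out = [start]
    for _ in range(length):
        candidates = citation_model.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(fabricate())
# One possible run: "Smith ( 2021 ) . A Study of Gravity . Journal of Time"
# Correctly formatted, confidently stated, and not a real paper.
```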
Fourth, the people who make ChatGPT are monitoring it all the time and putting in safeguards every time it goes off the rails. (I wouldn't be surprised if it stops making up citations now that it's been pointed out that it shouldn't do that, for example.) Prior chatbots were able to return Nazi propaganda and other bad crap because there weren't safeguards in place, but ChatGPT had safeguards against that right from the start. This is also why every article on the web about how to "turn off" its filters has a comment under it to the effect of "it doesn't work." It doesn't work because OpenAI closes that loophole as soon as it finds out about it.
Fifth, as pointed out in the article, ChatGPT doesn't actually have the capacity to do anything in the physical world. It isn't hooked up to a nuclear reactor, the FAA system, our power grid, or even just a factory making widgets.
No one is going to hook a chatbot up to a nuclear power plant. The software that automates that plant will be written to do only things that make sense at a plant. Will that software maybe make a mistake one day? Sure. Hopefully not one that leads to the plant blowing up, but even if it did, it wouldn't be because the software "decided" that humans suck, and it wouldn't talk all the other nuclear plants into blowing up too. For one thing, nuclear power plants don't all sit on the same network, so they can't talk to each other. For another, they aren't programmed to consult with other plants about what to do.
It seems to me that people are anthropomorphizing chatbots and then imagining them doing horrible things because they harbor dark secret feelings the way humans do. A bug in the software that runs a nuclear power plant causing it to blow up is something to really fear. An AI taking over a spaceship, or the world, is not.
If people are interested in the technical details of how ChatGPT works, here is a very lengthy (and scientifically dense) article about it:
Stephen Wolfram explores the broader picture of what's going on inside ChatGPT and why it produces meaningful text, covering models, training neural nets, embeddings, tokens, and transformers: writings.stephenwolfram.com