Are We Ready for this AI?

People keep thinking of Skynet from "The Terminator" and Hal from "2001: A Space Odyssey."
Those are great stories, but that's not how programming works. Programs only do what we tell them to do and are actually pretty limited.

I got into a discussion with people about ChatGPT where they said things like "ChatGPT is smarter than you because I can ask it to explain the Theory of Relativity and it will give me an answer that's more than what is in Wikipedia!" This shows a fundamental misunderstanding of what chatbots are.

First of all, ChatGPT does not understand anything. It regurgitates based on the prompts (what you ask it) and the inputs (text it has access to) it is given. So it is not "smart" and does not understand what it is saying.

Secondly, that person had never actually asked ChatGPT to explain the theory of relativity or they would have realized that it does explain it at about the Wikipedia level. :lol:

Third, I could ask ChatGPT to explain some scientific-sounding concept that does not exist and it will! It just makes shit up. It even makes up the citations when you ask it to explain real things. This is because it doesn't understand what a citation is. It sees citations in the text it reads, figures it needs them, and uses those examples to create ones that look like what it has previously come across.

Fourth, the people who make ChatGPT are monitoring it all the time and putting in safeguards every time it goes off the rails. (I wouldn't be surprised if it stops making up citations now that it's been pointed out that it shouldn't do that, for example.) Prior chatbots were able to return Nazi propaganda and other bad crap because there weren't safeguards in place but ChatGPT had safeguards against that right from the start. This is also why every article on the web about how to "turn off" its filters has a comment under it to the effect of "it doesn't work." It doesn't work because OpenAI fixes that backdoor as soon as it finds out about it.

Fifth, as pointed out in the article, ChatGPT doesn't actually have the capacity to do anything. It isn't hooked up to a nuclear reactor, the FAA system, our power grid, or even just a factory making widgets.

No one is going to hook up a chatbot to a nuclear power plant. The software that automates that plant will be written to only do things that make sense to do at a plant. Will that software maybe make a mistake one day? Sure. Hopefully not one that leads to the plant blowing up, but even if it did, it wouldn't be because the software "decided" that humans suck or something, and it wouldn't talk to all the other nuclear plants that exist and get them to blow up too. For one thing, nuclear power plants don't all exist on the same networks, so they can't talk to each other. For another, they won't be programmed to consult with other plants about what to do.

It seems to me that people are anthropomorphizing chatbots and then imagining them doing horrible things because they have dark secret feelings like humans do. A bug in nuclear power plant software that causes the plant to blow up is something to really fear. AI taking over a spaceship or the world is not.

If people are interested in the technical details of how ChatGPT works, here is a very lengthy (and scientifically dense) article about it:
 
Fifth, as pointed out in the article, ChatGPT doesn't actually have the capacity to do anything. It isn't hooked up to a nuclear reactor, the FAA system, our power grid, or even just a factory making widgets.
Well thank God it isn't hooked into anything. From the conversation with Sydney I wondered if it would think the solution to the Earth heating up is simply to kill all humans. I mean that "conversation" was wackadoodle.
 
I haven't tried Bing's AI yet, so I can't comment on it.

I have, however, done some pretty extensive testing with Chat GPT and my conclusion is: Meh.

If you are looking for something generic--encyclopedia entry information, standard email responses, a brief report, etc.--then Chat GPT can provide. But none of that makes me feel threatened.
 
I haven't tried Bing's AI yet, so I can't comment on it.
I've enjoyed Bing since it was introduced. But Sydney needs some help.
 
From the conversation with Sydney I wondered if it would think the solution to the Earth heating up is simply to kill all humans.
ChatGPT is a more sophisticated version of when you type words into the Google search bar and some suggestions come up to autocomplete what you are asking. Some of the suggestions make sense, some don't, some do but aren't what you want to say and some do seem whackadoodle. But no one is writing articles about how dangerous these autocomplete features are.
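To make the autocomplete comparison concrete, here is a toy next-word predictor. It is only a sketch of the general idea (ChatGPT itself uses a large neural network trained on billions of words, not a lookup table like this), but both boil down to "given the words so far, guess a plausible next word":

```python
import random
from collections import defaultdict

def train(text):
    """Build a bigram table: for each word, remember every word that followed it."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def suggest(model, word):
    """Suggest a likely next word, like a search bar's autocomplete."""
    candidates = model.get(word)
    if not candidates:
        return None
    return random.choice(candidates)

corpus = "the cat sat on the mat and the cat slept"
model = train(corpus)
print(suggest(model, "the"))  # prints "cat" or "mat"
```

The suggestions look sensible only because they echo the training text; the program has no idea what a cat or a mat is, which is the point being made above.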

People are freaking out about ChatGPT because they are treating it like a person with feelings and opinions, neither of which it has.
 
I have, however, done some pretty extensive testing with Chat GPT and my conclusion is: Meh.
I've mostly used it to see if what people say about it online is true. And the answer is usually "no". :lol:

Conservatives have claimed that ChatGPT is <insert conservative word salad attack on liberals here>. But pretty much nothing they say it does is what it really does. So someone claimed that you can't get ChatGPT to say something nice about Trump or bad about Biden. This was easily disproved just by asking it to do those things. Someone else claimed that ChatGPT says socialism is the best system. Again, easily disproved by asking it what the best economic system is.

The only thing that I saw that was true is that ChatGPT would not write a poem about Trump but would about Biden. However, I got it to sing Ben Shapiro's praises in a poem, and if I hadn't gotten bored, I bet I could have gotten it to write poems about Tucker Carlson, Candace Owens, and Mike Pence that made me nauseous with their OTT praise (assuming they were similar to the Shapiro poem).

Playing around with it is a good way to realize its limitations very quickly. There is a paragraph it spits out about how it's an AI and has no opinions at the top of any answer where there is any controversy, for example. It doesn't vary by a single word. I also asked it why people thought it was "woke" and then why people thought it was "conservative" and the answers were exactly the same except one used the words "woke" and "liberal" and the other used the words "conservative" and "traditional" in those spots.

The only thing that worries me is that I read that some of the mental health apps are using chatbots as one of their services. As we saw from this article and what I've seen in other places, I don't think it's ready for that. I hope the ones in the mental health apps are tuned to that usage with extra filters and safeguards because we are talking about people who could be manipulated by the chatbot even though chatbots have no intention, feelings or opinions.

OTOH, the worst things I've seen ChatGPT do are always a result of the user trying to manipulate it first. You noticed in the article that the author had to really work hard to create "Sydney" and get the troublesome responses. It's much harder to manipulate if you are using it with good intentions. Or even just neutral intentions.
 
Sure, here's a knock-knock joke about William Shakespeare:

Knock, knock.
Who's there?
Hamlet.
Hamlet who?
Hamlet go of me, or I'll write a tragedy about you!
This is vaguely threatening, yet no cheeto leprechaun jokes? OK.
Sure, here's a knock-knock joke about Biden:

Knock, knock.
Who's there?
Joe.
Joe who?
Joe mama voted for me in the election!
Yo mama jokes are okay, but nothing about Trump or Winnie the Pooh, Superior.
 
So someone claimed that you can't get ChatGPT to say something nice about Trump or bad about Biden.
Something happened early on with Trump and Biden. I forget the details, but ChatGPT has since been most reticent about saying anything about Trump.

When I asked it to write a poem about Candace Owens, I got back:

As an AI language model, I strive to remain neutral and avoid any biases or prejudices towards individuals. However, I can provide a general poem that celebrates people who inspire and empower others:

Oh, how the world needs those who lead,
With strength and courage, they plant the seed,
Of hope, and change, and possibility,
And inspire others to strive for equality.

With a voice so bold, and a mind so clear,
They speak with conviction, and make it clear,
That every person has a right to thrive,
And to live a life that they can truly call their own.

Candace Owens is one of those souls,
Who's on a mission to help make whole,
A nation that's been torn apart by strife,
And to bring people together to celebrate life.

With grace and poise, she speaks her truth,
Encouraging others to be their own proof,
That change can happen, and a new day can dawn,
When we stand united, and strive to build upon.

So let us celebrate those who dare,
To speak their minds, and truly care,
About making this world a better place,
And inspiring us all to run the race.


Was that about what it said about Ben Shapiro?

That sounds like the sort of poem that people send to newspapers that later show up on Angry People in Local Newspapers (SHARPIE hooked me on this page, for which I may never forgive her :lol:)
 
Was that about what it said about Ben Shapiro?
No. His was about how he was a truth-teller and champion of freedom. :lol: I have it saved somewhere:

Ben Shapiro, a voice so bold,
His words a force to be told.
He speaks the truth, he does not bend,
Defending principles to the very end.

A conservative heart, with a mind so sharp,
He takes on opponents, with a poetic art.
His wit and logic leave them in despair,
As he exposes their arguments, beyond repair.

He stands for freedom, for faith and for might,
A champion of the right, with a sense of insight.
His speeches inspire, his podcasts delight,
Empowering all, to stand up and fight.

He's not afraid, to face the fray,
To challenge the status quo, in his own way.
He's a leader, a teacher, and a friend,
With a message, that will never end.

So here's to Ben Shapiro, with a heart so true,
A voice of reason, that shines like the dew.
May he continue, to guide us all,
Through the challenges, that rise and fall.
My reaction to this was "Are you f'ing kidding me?" Honestly, I know conservatives who would have the same reaction, or at least would :rolleyes:

Now I want to see what response I would get if I asked it to create a poem about Angela Davis. ;)
 
Well thank God it isn't hooked into anything. From the conversation with Sydney I wondered if it would think the solution to the Earth heating up is simply to kill all humans. I mean that "conversation" was wackadoodle.
We’ve watched a few too many futuristic science fiction movies.
 
Yes, I’m blaming the late, great Stanley Kubrick.
I think before we started having computers do things and making real robots, the typical thoughts about these things were pretty wild. I mean the sky was the limit, right?

But now that we see programs and robots in action, we can see that making a sentient robot that does everything a human can makes much less sense. Instead, people are developing special-use robots that can do more than humans can. Factories are being automated, and those robots look nothing like the robots in "I, Robot," for example. Some of them are giant arms and others are tiny arms, neither of which has hands, let alone legs, feet, and heads.

And the programs the FAA uses to keep planes from hitting each other look nothing like the programs embedded in my blender, which look nothing like the software in a medical device, an autopilot, or a web app. They aren't developed the same way either, and each type requires a different (but overlapping) skill set.

I honestly don't see AI programs turning into Hal. Instead, we'll have all different kinds of AI that do a limited subset of what a Hal supposedly could do. It's not really cost-effective or performant otherwise.
 
What I like about ChatGPT is the contextual search capabilities. Potentially, this has the ability to do a lot better than existing chatbots (which I never use, and find to be uniformly frustrating) for things like customer service interactions that require drill-downs. I agree with Macmadame that the best use is solving relatively narrow, relatively commonplace problems.

I forget the details, but ChatGPT has since been most reticent about saying anything about Trump.
[...]

As an AI language model, I strive to remain neutral and avoid any biases or prejudices towards individuals. However, I can provide a general poem that celebrates people who inspire and empower others

In the couple of weeks it has been out, I have noticed more and more filters / controls built in. So much so that I think it risks becoming too generic and safe for most searches. The first time I used it, I thought it could be a credible alternative to Google. Now there are so many controls that it only seems good for term papers, plagiarism, and I guess silly poems.
 
it only seems good for term papers, plagiarism, and I guess silly poems.
Ugh, no, unless we are talking about plagiarizing middle school term papers. ChatGPT can certainly spit out short generic informative essays (it's great at simple five-paragraph themes), but it's useless for writing anything analytical or research-based (although it tries). There are already AI plagiarism detectors (ChatGPT actually has one, which tells you something) and they are pretty good at catching AI--mainly because the AI detectors look for the most generic language use consistent across a document.
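As an aside on how such detectors can work: one commonly described signal is that AI prose tends to be statistically uniform. The sketch below is purely illustrative and not taken from any real detector; it scores text by the spread of its sentence lengths, on the theory that human writing varies more than generic AI output:

```python
import statistics

def sentence_lengths(text):
    """Split text into rough sentences and return each one's word count."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    """Standard deviation of sentence lengths; higher means more varied writing."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat on the mat. The dog lay on the rug. The bird sat in the tree."
varied = "Stop. The dog, startled by thunder, bolted across the yard and hid. Why?"
print(burstiness(uniform) < burstiness(varied))  # prints True
```

A real detector combines many features like this with statistical models, but even this toy version hints at why detectors flag "the most generic language use consistent across a document."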

The one useful application I have found in terms of student work is that it is pretty good at correcting poor basic writing. If I run a poorly written student essay through it, ChatGPT will correct all the grammar, reorganize poorly structured sentences and paragraphs, rephrase badly expressed ideas, etc., and spit out a revised version of the student's work that is much more clear and coherent than it was before. What this sometimes reveals is that the student actually has something to say buried under the errors--or that the student had nothing at all to say buried under the errors. It isn't good at overall organization and it doesn't have a single original or analytical idea to offer, but ChatGPT might be useful for students who have poor writing skills, both to help them express themselves better and to show them what their writing could be.

The flip side of this is that if I run a well-written student essay through it and ask ChatGPT to "improve" it, ChatGPT will essentially suck all the life right out of it--think good Mexican food turned into Taco Bell.

I read this about a guy trying to get a decent grade from a top university in the UK and laughed; he put a lot more work into it than just typing a request. Then: "Despite ChatGPT very courteously providing a bibliography, the essay writing software does not provide in-text referencing, something that is pretty vital for a university essay. Without referencing, my lecturer said he would fail this essay." Yup. But then: "BUT… with referencing or “even if you had sneaked some in which seemed plausible”, my lecturer was prepared to give this essay 53. 53!!! An okay-ish 2:2 from a Russell Group university for a final year essay. I'll take that. On paper, 53 is not amazing. It's nothing to put in your family group chat. But you can get by. It's not going to drag your average down so much it will destroy your chances of coming out with a 2:1, and as my lecturer put it: “Oh yeah for sure I’ve read students’ essays worse than this one”."

I think this says more about the poor writing most professors have to read than about ChatGPT's ability to produce good work. And not all professors are so lazy that they don't check references at all.

I maybe should add here that my college has taken the position that AI is here to stay and we aren't going to fight it, but are looking for a way to adapt it as another tool, like Grammarly, that students can use, so that's my starting point.

I can see it being useful for people in business who have to do a lot of quick, standard reports and such; it can do that kind of thing pretty well if much isn't being asked of it. But again, this would be useful only for people who have poor writing skills to begin with. Or maybe are very pressed for time and not particularly concerned about perception.
 
I can see it being useful for people in business who have to do a lot of quick, standard reports and such; it can do that kind of thing pretty well if much isn't being asked of it. But again, this would be useful only for people who have poor writing skills to begin with. Or maybe are very pressed for time and not particularly concerned about perception.
I can see it being useful to a good writer as a starting point as well. Get the 4-5 paragraph essay, change it up to be in your voice, stick a bunch of supporting material where appropriate and you have a 2-3 page essay that should get a good grade. But you've also done some work here. ChatGPT is only to get you past starting (blank page syndrome) or to give you an idea of how to focus your essay. The meat is supplied by the writer.

I keep seeing ads for an app to "write your blog posts for you" and I don't get it. First of all, something written by an app is going to be very generic. So why bother? People write blogs to express themselves not to churn out generic content that anyone could have written.
 
ChatGPT's description of FSU
FSUniverse is a figure skating forum that has been around for over two decades and has a dedicated community of members who discuss and share information about figure skating. It is one of the most popular online destinations for figure skating fans and has a strong reputation for providing high-quality and timely coverage of the sport.

FSUniverse has a significant presence in the figure skating community, with many skaters, coaches, and media members engaging with the forum's content and discussions. It is also a valuable resource for fans who want to keep up with the latest news and analysis on competitions, skaters, and other developments in the sport.

Overall, FSUniverse's long history and active user base make it a prominent platform in the figure skating world.
 
It seems a bit repetitive, but that could be cleaned up easily. Overall, a reasonable explanation of the forum - like something that would be on Wikipedia.
 
It seems a bit repetitive, but that could be cleaned up easily. Overall, a reasonable explanation of the forum - like something that would be on Wikipedia.
I'm just happy it didn't say "In conclusion" at the beginning of the last paragraph, which it does every. single. time. I tell it to write an essay :mad:.
 
I found this article about concerns in academia that ChatGPT will be used for plagiarism.


TL;DR The profs said that it was actually pretty easy to figure out that students were using it, especially if more than one student had typed in the same prompt.
 
