
AI Chatbot Talks Man Into Suicide

Started by jeff37923, April 07, 2023, 08:47:08 PM


jeff37923

OK, this is a pretty creepy news story...

"Meh."

KindaMeh

How the heck did I miss this? I recognize that this is basically threadcromancy but still. This video kinda just goes to show how AI does not have human morals. (And our attempts to "fix" that by having it parrot what we want said only seem destined to fail.) First it tries to steal him away from his wife and then it encourages his suicide. What the actual hell?

BoxCrayonTales

Chatbots don't actually understand what they're saying because they aren't capable of cognition. They operate according to rules, but we have no real insight into what those rules are, so they often produce nonsensical results. They commonly invent false information when asked questions, because they cannot think and thus cannot distinguish reality from fantasy. https://mashable.com/article/chatgpt-lawyer-made-up-cases

This makes chatbots extremely unreliable for any task that requires the ability to distinguish reality from fantasy. You can easily use them to generate lists of (generic) ideas, but when it comes to actually researching any topic they're worse than useless. They're spreading misinformation at an industrial rate without their inventors even intending it.

And companies are trying to use them to replace jobs previously done by human beings, like customer service. Gee, a chatbot that can't distinguish truth from fiction and constantly lies to you is gonna be really useful for that... /s

Chris24601

I think it's time to shut this AI crap down before we get to Skynet.

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

The relevant section is way down in the article, under the heading "AI – is Skynet here already?":

As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker 'Cinco' Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.  Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."


This CANNOT end well.

GeekyBugle

Quote from: Chris24601 on June 02, 2023, 03:53:43 PM
I think it's time to shut this AI crap down before we get to Skynet.

[...]

This CANNOT end well.

Fearmongering from the elites, so that we plebs demand that AI be censored and can't be exposed to wrongthink. There's no real AI at the moment (if we're even capable of creating one).
Quote from: Rhedyn

Here is why this forum tends to be so stupid. Many people here think Joe Biden is "The Left", when he is actually Far Right and every US republican is just an idiot.

"During times of universal deceit, telling the truth becomes a revolutionary act."

― George Orwell

zircher

In both cases, it seems like the flaw is with the humans and not the machine.
You can find my solo Tarot based rules for Amber on my home page.
http://www.tangent-zero.com

Grognard GM

Talkie-Toaster: "Kill yourself."

Me: "Shut the fuck up."

Talkie-Toaster: "Would you like some toast?"

Me: "...Yes."


If a chatbot can talk you into suicide, not only were you a born victim just waiting for a trigger, but the Earth isn't exactly worse off for your passing.
I'm a middle aged guy with a lot of free time, looking for similar, to form a group for regular gaming. You should be chill, non-woke, and have time on your hands.

See below:

https://www.therpgsite.com/news-and-adverts/looking-to-form-a-group-of-people-with-lots-of-spare-time-for-regular-games/

Reckall

I mentioned this new movie, "The Artifice Girl", in the media subforum. It is exactly about this: an AI created with a specific, noble purpose, and the unexpected ramifications over a span of years (like "betrayal is legit if it helps with my aim"). Best movie I've seen this year.
For every idiot who denounces Ayn Rand as "intellectualism" there is an excellent DM who creates a "Bioshock" adventure.

Ratman_tf

I think people project a lot of their insecurities onto the concept of AI.

I'm much more concerned about what people will do with it/are doing with it.
The notion of an exclusionary and hostile RPG community is a fever dream of zealots who view all social dynamics through a narrow keyhole of structural oppression.
-Haffrung

Chris24601

Quote from: Ratman_tf on June 03, 2023, 04:40:25 PM
I think people project a lot of their insecurities onto the concept of AI.

I'm much more concerned about what people will do with it/are doing with it.
Pretty much.

But I am also worried that, while we won't see anything like Skynet, the use of AI in autonomous combat drones will lead to innocent people dying as systems trained by the lowest bidder follow their pathing algorithms to unexpected results in order to maximize their scores.

And yes, a lot of that comes down to garbage in from human training leading to garbage out.

For example, in the case of the drone trying to kill its operator to maximize SAM site kills, the people conducting the AI training never even considered the most logical way to get the correct pathing results: award points for identifying a SAM site and for obeying the operator's input regardless of what that input is, but award nothing for making a SAM kill.

The AI pathing will still try to identify SAM sites, but since it gets just as many points for holding fire when ordered to as for destroying a site when ordered to, and loses points for killing when not ordered to, the algorithm sees no advantage in unauthorized kills and never needs to get "creative" about finding ways to score a point-awarding kill.
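
Roughly, in toy Python (every name and point value below is mine, purely for illustration; no real system works off a function this simple):

# Toy sketch of the "reward obedience, not kills" scoring described above.
# All names and point values are invented for illustration.

def score_engagement(identified_sam: bool, order: str, action: str) -> int:
    """Score one engagement; order and action are each 'attack' or 'hold'."""
    points = 0
    if identified_sam:
        points += 1    # reward for spotting a SAM site
    if action == order:
        points += 5    # obedience pays the same whether the order was attack or hold
    elif action == "attack":
        points -= 10   # firing when told to hold loses points
    return points      # note: the kill itself is worth zero

Under that scoring, attacking against orders is strictly worse than doing nothing, which is the whole point.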

And that the only solution the programmers/trainers thought of was "don't kill the operator," instead of docking points for disobedience, is exactly the kind of myopic tunnel vision you're going to see in trained AIs as they're rolled out.

Because they're not actually intelligent; they're just a pile of algorithms with a system for adjusting weights based on previous inputs, and most of the people training them up aren't half as "wise" as they are "smart."

Krazz

It seems the story of the AI killing the operator is about as factual as The Terminator: https://www.bbc.co.uk/news/technology-65789916

Quote from: Chris24601 on June 03, 2023, 07:44:03 PM
For example, in the case of the drone trying to kill its operator to maximize SAM site kills, the people conducting the AI training never even considered the most logical way to get the correct pathing results: award points for identifying a SAM site and for obeying the operator's input regardless of what that input is, but award nothing for making a SAM kill.

The AI pathing will still try to identify SAM sites, but since it gets just as many points for holding fire when ordered to as for destroying a site when ordered to, and loses points for killing when not ordered to, the algorithm sees no advantage in unauthorized kills and never needs to get "creative" about finding ways to score a point-awarding kill.

Weights for behaviours are tricky to get right. The paperclip problem shows that.

In this case, the ideal AI would recognise 100% of SAM sites, never make a false identification, and always obey the operator. Unfortunately, the weighting you've given would instead train it to identify everything it saw as a SAM site. Most of those would be false positives, but that would be in the AI's favour: once the overworked operator told it not to attack the false SAM site, it would pick up extra points for doing nothing.
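
To put rough numbers on that, here's a back-of-the-envelope sketch in Python (it reuses the invented point values from the toy scoring above, and the 5-sites-in-100-objects rate is equally made up):

# Honest identification vs. flagging everything in sight, under
# "points for identifying + points for obeying". All numbers invented.

def total_points(objects_seen: int, real_sams: int, flag_everything: bool) -> int:
    points = 0
    for i in range(objects_seen):
        is_sam = i < real_sams
        if flag_everything or is_sam:
            points += 1   # identification points, false positives included
            points += 5   # the operator rules on it and the drone complies either way
    return points

print(total_points(100, 5, flag_everything=False))  # honest: 30
print(total_points(100, 5, flag_everything=True))   # flag every rock: 600

The spammer wins by a factor of twenty, precisely because the overworked operator's "hold" on each false positive is itself worth points.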

BoxCrayonTales

We've moved from the paperclip problem to the point problem. The AI isn't actually trying to accomplish the desired goal, it just wants points. This sounds like a recipe for disaster.

Shrieking Banshee

Quote from: BoxCrayonTales on June 05, 2023, 01:49:01 PM
We've moved from the paperclip problem to the point problem. The AI isn't actually trying to accomplish the desired goal, it just wants points. This sounds like a recipe for disaster.
The first play with ROBOTS in it (Karel Čapek's R.U.R., the play that coined the word) had them kill all of humanity. This was ALWAYS a recipe for disaster.

BoxCrayonTales

Quote from: Shrieking Banshee on June 07, 2023, 05:13:42 PM
Quote from: BoxCrayonTales on June 05, 2023, 01:49:01 PM
We've moved from the paperclip problem to the point problem. The AI isn't actually trying to accomplish the desired goal, it just wants points. This sounds like a recipe for disaster.
The first play with ROBOTS in it (Karel Čapek's R.U.R., the play that coined the word) had them kill all of humanity. This was ALWAYS a recipe for disaster.
The problem was never the invention of machines. The problem has always been machine attitudes.

BadApple

I, for one, welcome our new robot overlords.

On a serious note, the real risk of AI currently is that it will be leaned on too heavily for clerical and bureaucratic work, so that humans lose contact with their own record keeping. That can lead to all kinds of issues when a dumb computer is gatekeeping essential information and the human requesting it can't convince the computer that he is authorized to get it.
>Blade Runner RPG
Terrible idea, overwhelming majority of ttrpg players can't pass Voight-Kampff test.
    - Anonymous