TheRPGSite

Pen & Paper Roleplaying Central => Pen and Paper Roleplaying Games (RPGs) Discussion => Topic started by: Koltar on August 28, 2009, 03:31:28 PM

Title: Robots & Androids - the REAL problem with them....
Post by: Koltar on August 28, 2009, 03:31:28 PM
This was almost a 'media & Inspiration' thread.....

After my G:T campaign dealt with the issues of Cyborgs, Robots, and Androids as player characters and NPCs, I'm not positive that mankind will ever make them in large numbers like we've seen in movies and read about in various fiction books.

 STAR TREK: The Next Generation (and TOS as well) may have stumbled on the reason by accident, because of the simple fact that it would be a budget-buster to show lots of robots. They finally had someone write up the real reason in a second-season episode, "The Measure of a Man."

Any number of "Robots" made beyond a certain population number means that we are creating a mimicry of the human form so that we can have slaves, or slavery, with no guilt. The problem is that as these machines possibly get more complex, they border on being a new or different kind of intelligence... maybe even a life-form. At that point we're right back to being slave-owners.

In one of the last sessions of that campaign, a player got quite pissed... in-character, because she saw a robot being abused (in her opinion). She then, in-character, bought that robot, gave him his freedom, then hired him to be a member of the crew.

The woman hadn't gamed much until the past year and felt the need to apologize if she got too emotional. I told her "No, that was all in character and made perfect sense to me in the game's context."


From now on, ANY game I run with androids or robots in it, I will be thinking about this angle. Just can't help it now.

What do the rest of you think?

- Ed C.



The TNG episode referenced:
http://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Trek:_The_Next_Generation)
Title: Robots & Androids - the REAL problem with them....
Post by: Werekoala on August 28, 2009, 03:42:21 PM
I don't think it's a problem with robots (we have millions working around the world today - do you consider them enslaved?) but rather with artificial intelligence. Until you have self-awareness, you can have any number of robots doing whatever work you like without the moral dilemma, IMO - unless you consider draft animals slaves too. Of course, there is a whole other mess of moral issues with how we USE them (war machines that can fight non-stop, and that can be easily replaced if destroyed, might lead us to solve more problems with roboticized conflict - that's one of many issues I've seen batted around).

I think there's a pretty broad line between forcing a self-aware being who was kidnapped from their homeland to work in your fields and plugging in a purpose-built machine that does the same thing. It's also a reason I think that as robots become more ubiquitous, there will NOT be a trend towards humanizing them, or even making them humanoid in form, because it's easier to think of them as just a machine if they don't look like a person. How many stories or movies do you know of where the sympathetic machine is shaped like a box on wheels? Except for WALL-E, of course. :)
Title: Robots & Androids - the REAL problem with them....
Post by: Koltar on August 28, 2009, 03:50:26 PM
I forgot the adjective - of course I was referring to human-similar-shaped robots, as we've seen in countless movies, TV shows, comic books and science fiction books over the years.


- Ed C.
Title: Robots & Androids - the REAL problem with them....
Post by: Werekoala on August 28, 2009, 03:54:12 PM
Well of course, but would you be more inclined to worry about the welfare of a non-sentient but humanoid robot, or a box that was self-aware? The tendency is to be concerned with the Like, and less concerned with the Unlike, even if there is no objective reason to do so. In the case of your player, was the robot she rescued humanoid AND self-aware? If so, she was justified in her feelings if it was being mistreated - it was an intelligent being, even if manufactured; but not because it was humanoid. If it was just someone kicking a malfunctioning Coke machine that could write poetry, I doubt her reaction would have been the same.
Title: Robots & Androids - the REAL problem with them....
Post by: Koltar on August 28, 2009, 04:15:47 PM
The robot in that session looked 'barely human-shaped'... or more mechanical.

Imagine a sled shape, or the flatbed of a large pick-up truck. Now, at the front of that, stick a hemisphere with the flat side facing the ground. On top of that hemisphere is another, smaller one on top of a short cylinder shape. That top bulge is what might be considered a 'head'. So this particular 'bot was more mechanical-looking than human-looking. It also spoke in a monotone voice halfway between Peter Cullen (Optimus Prime) and Bob May (B-9 on Lost in Space).

Think of an air/raft or hover version of a pick-up's flatbed with hemisphere and globe shapes smooshed onto the front of it.

He was being used as a delivery/cargo robot when the players met him.

- Ed C.
Title: Robots & Androids - the REAL problem with them....
Post by: ColonelHardisson on August 28, 2009, 04:30:45 PM
Yeah, it's more about being truly self-aware than it is about anything else, and it's unlikely we'll have to worry about truly self-aware machines for a long time. Even then, given that the discussion has already started, the debate over their rights will likely be short when machines become fully aware. Plus, fully aware machines won't just happen, like they did with Skynet in the Terminator films. They'll come as a result of specific, dedicated projects, and may even be specifically designed to only have a limited amount of awareness related to whatever purpose they are to fulfill. A fully aware machine with a definite personality will be a long time coming, if it ever does.

Human-shaped robots may never be designed or built with such intelligence, simply to avoid the obvious concerns. I'd say human-shaped robots will be built as "companions" or for entertainment, or possibly for exploration, especially of other planets. The latter would be useful for indicating how humans would have to be equipped and enhanced to explore the same environments. Any of them will likely only have a limited amount of "intelligence," able to respond appropriately in their respective roles, but not intelligence like a fully aware human.

So, to make a long explanation short, I think it's unlikely we'll ever really have the problem to begin with.
Title: Robots & Androids - the REAL problem with them....
Post by: flyingmice on August 28, 2009, 05:51:03 PM
Quote from: Koltar;324813
Any number of "Robots" made beyond a certain population number means that we are creating a mimicry of the human form so that we can have slaves, or slavery, with no guilt. The problem is that as these machines possibly get more complex, they border on being a new or different kind of intelligence... maybe even a life-form. At that point we're right back to being slave-owners.

[...snip...]

What do the rest of you think?


It's written into the StarCluster setting. The biggest difference between the Diasporan Community and SaVaHuTa is the question of sentient slavery. SaVaHuTa has outlawed it, while the DC allows its member states to decide the question. Only Humans cannot be enslaved anywhere in the DC.

I've had tons of awesome play out of that one question for years now. :D

-clash
Title: Robots & Androids - the REAL problem with them....
Post by: Bradford C. Walker on August 29, 2009, 03:01:02 AM
I have something far more day-to-day that tends to keep them out: maintenance, repairs and upgrades.  It's a time and money sink that most players don't appreciate, don't account for and don't like to deal with in play.
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on August 29, 2009, 03:55:00 AM
Quote from: Koltar;324813
Any number of "Robots" made beyond a certain population number means that we are creating a mimicry of the human form so that we can have slaves, or slavery, with no guilt. The problem is that as these machines possibly get more complex, they border on being a new or different kind of intelligence... maybe even a life-form. At that point we're right back to being slave-owners.

Given that we're considering the moral issues now, I think that by the time we actually have A.I., it won't be guilt-free to treat them as slaves.  I am rather shocked at the number of sci-fi movies where humans blatantly ignore outright slave-holding of sentients, from Star Wars to WALL-E.  There'll be lots of debate, but I would expect that the civil rights battle would be going on at the same time as the first A.I.'s are being developed.  I can see all sorts of discrimination and unequal treatment issues, and a lot of gaming the fuzzy line between sentient and non-sentient robots.  However, I don't foresee having something that is clearly of human intelligence and sentience and yet treated as property.  At the very least, doing so would be controversial.  

For example, I could definitely imagine sexbots that are deliberately built to be not-quite-sentient, and thus avoid being given human rights, but that push right up against the line in being convincing and reactive enough to turn people on.
Title: Robots & Androids - the REAL problem with them....
Post by: Silverlion on August 29, 2009, 04:56:47 AM
I really wish I could find a scan of the one page of Alan Moore's Top Ten where the robot-racist Shock Headed Peter is made fun of by their new officer, the robot Joe Pi... for feeling up his retarded backwoods cousin (the soda machine, when it didn't dispense his soda... :D)
Title: Robots & Androids - the REAL problem with them....
Post by: Spinachcat on August 29, 2009, 08:47:14 PM
If humanity hasn't worried much about the enslavement of other human beings, why ever worry about a robot?

Our history shows that slave owners can easily have children with their own slaves and throw their own children into the slave pens without any qualms.  If the robo-slaves ever demand better treatment, then we can junk them and start over with new toasters.

In my Traveller games, all Robots are very purpose-built, with the exception of some ancient Darrian experiments in total awareness and humanoid replication. There are two in my Spinward Marches: one hides, and the other is a shadow of its former self and now works as a ship's steward.
Title: Robots & Androids - the REAL problem with them....
Post by: Ian Absentia on August 29, 2009, 09:11:57 PM
Quote from: Koltar;324813
In one of the last sessions of that campaign, a player got quite pissed... in-character, because she saw a robot being abused (in her opinion). She then, in-character, bought that robot, gave him his freedom, then hired him to be a member of the crew.

[...snip...]

What do the rest of you think?
Did the robot want to be free?  What did it do with the wages it earned from her?  What did it do in its free time?  If I was the GM, I'd lay it on this player pretty thick.

She buys the robot and sets it free?  Fine.  What does it do?  Nothing.  It just stands there until she tells it what to do.

She pays the robot a fair wage for a day's work?  Fine.  What does it do with the money?  Nothing, until she suggests something for it to buy, which the robot interprets as an order and immediately makes the purchase.

She lets the robot off at the end of a workday?  Fine.  What does it do?  Nothing, until she makes a suggestion, which, again, it interprets as an order and follows to the letter.

The robot is a tool built specifically to serve.  Any reward it receives is the care and maintenance necessary to perform its design function most efficiently.

Androids, intended as an independent, artificial life form, are another matter.

!i!
Title: Robots & Androids - the REAL problem with them....
Post by: David R on August 29, 2009, 09:17:02 PM
Quote from: Ian Absentia;325202

Androids, intended as an independent, artificial life form, are another matter.

!i!


"Fiery the angels fell. Deep thunder rolled around their shores... burning with the fires of Orc."

"I want more life, fucker/father!"


Regards,
David R
Title: Robots & Androids - the REAL problem with them....
Post by: Ian Absentia on August 29, 2009, 09:24:06 PM
"The report read 'Routine retirement of a replicant.' That didn't make me feel any better about shooting a woman in the back."

!i!
Title: Robots & Androids - the REAL problem with them....
Post by: David R on August 29, 2009, 10:11:43 PM
"Not very sporting to fire on an unarmed opponent. I thought you were supposed to be good. Aren't you the "good" man? C'mon, Deckard. Show me what you're made of."

Regards,
David R
Title: Robots & Androids - the REAL problem with them....
Post by: Koltar on August 29, 2009, 10:21:58 PM
Quote from: Ian Absentia;325202
Did the robot want to be free?  What did it do with the wages it earned from her?  What did it do in its free time?  If I was the GM, I'd lay it on this player pretty thick.

She buys the robot and sets it free?  Fine.  What does it do?  Nothing.  It just stands there until she tells it what to do.

She pays the robot a fair wage for a day's work?  Fine.  What does it do with the money?  Nothing, until she suggests something for it to buy, which the robot interprets as an order and immediately makes the purchase.

She lets the robot off at the end of a workday?  Fine.  What does it do?  Nothing, until she makes a suggestion, which, again, it interprets as an order and follows to the letter.

!i!


Here's the funny part - the robot was meant as a one-off comedy moment with the other NPC that owned him - then all of a sudden my players said, "Why is he treating that robot like that? I don't like that."  They started to give a damn about a minor NPC voice.

SO... I made up a name for him and came up with a back history for him on the spot.
Never got to fully use it, though.
Our next game session was the last before the "big unintentional pause" in the campaign. (A key player's work schedule changed, then she got pregnant. That kid is now 4 months old.)

The 'bot's name is B.R.U.N.O., and he has a loud voice that is a mix of Peter Cullen, Bob May, and a smidgen of Ted Cassidy.

When she offered him money or wages, she gave him the option of upgrading himself - only if he wants to. That's a new concept for B.R.U.N.O. - that he has choices now.

One of the first things he did was learn chess from one of her other crewmembers. He was plodding at it - but getting better - when we paused that campaign.

This same player was in that STAR TREK one-off scenario that I did on FREE RPG Day back in June. She said to me "I miss BRUNO, I really do."


- Ed C.
Title: Robots & Androids - the REAL problem with them....
Post by: Ian Absentia on August 30, 2009, 12:11:42 AM
Quote from: Koltar;325213
Here's the funny part - the robot was meant as a one-off comedy moment with the other NPC that owned him - then all of a sudden my players said, "Why is he treating that robot like that? I don't like that."  They started to give a damn about a minor NPC voice.
Yeah, players can be freaks like that.  Ask me about "Baron Pinto von Runnabout" some time.
Quote
When she offered him money or wages, she gave him the option of upgrading himself - only if he wants to. That's a new concept for B.R.U.N.O. - that he has choices now.
Who programmed this robot to have choices?  I'm telling you, man -- I'd be hitting her upside the head with B.R.U.N.O. like yesterday's haddock.  B.R.U.N.O. doesn't "want" anything.  B.R.U.N.O. doesn't make "choices".  B.R.U.N.O. follows orders to the best that its programming and mechanical engineering will allow.

Okay, for real emotional impact, I'd play B.R.U.N.O. like an autistic child.  The player spends time and effort teaching the robot to behave like an autonomous, self-aware being, probably much to the player's satisfaction.  Then, sooner or later, I'd drop a scene on her that underscores that the development of autonomous behavior is merely the robot's interpretation of what it assumes are her commands.

!i!
Title: Robots & Androids - the REAL problem with them....
Post by: Koltar on September 01, 2009, 11:53:55 AM
So I was in another local game store on Sunday - and there was a minis pack I almost bought that  was all "Androids & Robots".

Guess this means I'm due to run a campaign that is focused on them.

Anyone else have ideas? Thoughts?
..... Useful for a campaign where AI robots could be player characters, or on the run from the law?


What kind of world would have hundreds of such beings/creatures/sophonts around and already constructed?


 The politics of it all?


- Ed C.
Title: Robots & Androids - the REAL problem with them....
Post by: RPGPundit on September 01, 2009, 02:55:24 PM
Don't Date Robots!

(this message brought to you by the space pope)

RPGPundit
Title: Robots & Androids - the REAL problem with them....
Post by: Koltar on September 02, 2009, 02:45:21 AM
This is for all of you saying how unlikely the AI ideas are.....

According to page 511 of GURPS 4/e, in the Tech Level descriptions, Artificial Intelligence will become commonplace sometime between 2026 and 2070. That's TL 9.

Also, TL 10 is designated the Robotic Age and has a tentative start of 2070.
That's only 61 years away - within the possible lifetime of many of us who post on here.

- Ed C.
Title: Robots & Androids - the REAL problem with them....
Post by: ColonelHardisson on September 02, 2009, 05:11:08 PM
Quote from: Koltar;325944
This is for all of you saying how unlikely the AI ideas are.....

According to page 511 of GURPS 4/e, in the Tech Level descriptions, Artificial Intelligence will become commonplace sometime between 2026 and 2070. That's TL 9.

Also, TL 10 is designated the Robotic Age and has a tentative start of 2070.
That's only 61 years away - within the possible lifetime of many of us who post on here.

- Ed C.


I'm not sure what your point is.
Title: Robots & Androids - the REAL problem with them....
Post by: Ian Absentia on September 02, 2009, 05:57:45 PM
Quote from: Koltar;325944
This is for all of you saying how unlikely the AI ideas are.....
Who called them unlikely?  I know that I was referring specifically to the programmed function of the robots, regardless of their level of intelligence.  Independence and free will are entirely other matters.

!i!

(P.S. David, I rented and watched Blade Runner again last night.  I watched the "Final Cut", which only perpetuated my displeasure with the "Director's Cut" of 1992.  Call me old school, but I preferred the original theatrical release with the noir-styled voice-over narration.)
Title: Robots & Androids - the REAL problem with them....
Post by: David R on September 03, 2009, 04:53:16 AM
Quote from: Ian Absentia;326165
( Call me old school, but I preferred the original theatrical release with the noir-styled voice-over narration.)


I feel the same way. I never understood the hate-on some have for the voice-over narration.

Regards,
David R
Title: Robots & Androids - the REAL problem with them....
Post by: jibbajibba on September 03, 2009, 09:11:38 AM
Interestingly (well, I say interesting...) the word "robot" was coined by the Czech playwright Capek to mean an artificial person. His robots were not mechanical, and they achieved sentience (R.U.R., 1920-ish, Karel Capek; "robot" is derived from the Czech word robota, meaning labour).

Therefore, strictly speaking, robots by definition really have to look like people.

I like the voice-over as well...
I think the fact is it was added to make the film easier to understand, as the test screenings just had loads of confused people. If anything is added to something to make it simpler, there will always be purists who hark back to the older version. I would say that the cut scene at the end, of the car driving through the forests and such - which was actually out-takes from The Shining - doesn't really jibe. Perfect, pristine Maine countryside compared to the cauldron of rain, smog, pain and human suffering we find in LA... if ever there was a reason to move to the 'burbs....
Title: Robots & Androids - the REAL problem with them....
Post by: thecasualoblivion on September 03, 2009, 09:13:03 AM
The biggest problem I've had with robots & androids in RPGs is the sheer number of off-color sex jokes they tend to generate.
Title: Robots & Androids - the REAL problem with them....
Post by: jibbajibba on September 03, 2009, 09:25:18 AM
On the actual topic, however: isn't the problem with playing AIs that they are too smart? A game in which a PC could learn a new skill by installing software (Matrix-esque), would never forget an event, could analyse their photographic record of an event from three years ago and pull out additional detail, could extrapolate possible actions 24 moves ahead, would never miss with a weapon unless the opponent was deliberately trying to avoid it (and even then it gets a bit questionable), would never fail a skill check, etc. etc. Picture the Enterprise with a crew of Datas, not held in check by narrative (i.e. players never limit themselves the way that authors limit their characters in stories - just look at Superman).

If you have ever read any Iain M. Banks, specifically Excession, you can see that an AI-based game could be very hard to control and limit. Imagine an Amber game where none of the PCs had any emotional tie to any aspect of the universe. It's a bit like introducing real force field technology into a sci-fi game. The repercussions are just enormous and change everything.
Title: Robots & Androids - the REAL problem with them....
Post by: pseudointellectual on September 03, 2009, 10:01:05 AM
It seems like the only way to successfully pull off a super-smart AI is in the retconned "Ah ha! It planned for this!" (when you actually had no plans whatsoever) sort of way. Basically, cheating a little bit.
Title: Robots & Androids - the REAL problem with them....
Post by: David R on September 03, 2009, 10:27:35 AM
Quote from: jibbajibba;326391

If you have ever read any Iain M. Banks, specifically Excession, you can see that an AI-based game could be very hard to control and limit. Imagine an Amber game where none of the PCs had any emotional tie to any aspect of the universe. It's a bit like introducing real force field technology into a sci-fi game. The repercussions are just enormous and change everything.


Yeah, I get what you're saying here. I have been toying around with a robot/AI game for some time now:

http://www.therpgsite.com/showpost.php?p=249014&postcount=3

but never managed to move beyond a certain stage, because too many questions keep popping up. At one time it was supposed to be that the "robots" developed human psychological characteristics - empathy, compassion, etc. - but I had trouble translating this into a workable game mechanic. I reckon some things we take for granted.....

Regards,
David R
Title: Robots & Androids - the REAL problem with them....
Post by: jibbajibba on September 03, 2009, 10:59:27 AM
Quote from: David R;326435
Yeah, I get what you're saying here. I have been toying around with a robot/AI game for some time now:

http://www.therpgsite.com/showpost.php?p=249014&postcount=3

but never managed to move beyond a certain stage, because too many questions keep popping up. At one time it was supposed to be that the "robots" developed human psychological characteristics - empathy, compassion, etc. - but I had trouble translating this into a workable game mechanic. I reckon some things we take for granted.....

Regards,
David R



You could use a WoD-esque Humanity rating, something akin to Data's quest for humanity, but unless achieving '10 Humanity' was a game objective, it's still hard to see the why.

But there are deeper issues.
Experience... when I can upgrade to Killer Assassin 4.2 with an upload and $400, experience becomes moot.
Damage? Ooh, a blaster shot to the chest looks bad... I upload my AI to the Net and run Hijak 4.3 to download it into the Security Droid standing by the elevator.
Knowledge - you find a fingerprint on the knife... I run it through the global datanet, scan the print for skin cells, which I run DNA Pal 3.1 on... investigation... complete.
And just the speed. In Excession there is a space combat scene where an AI destroys a fleet of ships by firing multiple energy weapons and missiles at multiple foes. It does this by dropping out of hyperspace for 1/10 of a second, then jumping back in. That would make even Car Wars combat seem free-flowing :)
Title: Robots & Androids - the REAL problem with them....
Post by: pseudointellectual on September 03, 2009, 11:15:00 AM
Well it seems like all those other things you mentioned run counter to Humanity. So yeah you can upload your AI to a new body after you get shot in the chest but you lose Humanity as a result, or you can struggle with the "pain" like humans do.
Title: Robots & Androids - the REAL problem with them....
Post by: jibbajibba on September 03, 2009, 11:22:43 AM
Quote from: pseudointellectual;326456
Well it seems like all those other things you mentioned run counter to Humanity. So yeah you can upload your AI to a new body after you get shot in the chest but you lose Humanity as a result, or you can struggle with the "pain" like humans do.


Yeah, but as I say, unless getting to be human is a 'win' or at least a game objective, the player can just say, "I am not human, I am a robot."
You might have roleplay-type penalties, but then what do you do with the player who rejects the pathetic cry for humanity from his brethren? Remember Call Me Kenneth....
Title: Robots & Androids - the REAL problem with them....
Post by: OldGuy2 on March 16, 2017, 02:41:11 PM
Quote from: Silverlion;325025
I really wish I could find a scan of the one page of Alan Moore's Top Ten where the robot-racist Shock Headed Peter is made fun of by their new officer, the robot Joe Pi... for feeling up his retarded backwoods cousin (the soda machine, when it didn't dispense his soda... :D)


The Great Comic Book Heroes Blogspot.ca - Joe Pi from Top Ten (http://thegreatcomicbookheroes.blogspot.ca/2014/04/joe-pi-from-top-ten-by-alan-moore-and.html)  Page 12

PS: Gack.  Necrothreading, sorry.  New to Forum.
Title: Robots & Androids - the REAL problem with them....
Post by: Omega on March 18, 2017, 03:40:34 AM
Quote from: Koltar;325944
This is for all of you saying how unlikely the AI ideas are.....

According to page 511 of GURPS 4/e, in the Tech Level descriptions, Artificial Intelligence will become commonplace sometime between 2026 and 2070. That's TL 9.

Also, TL 10 is designated the Robotic Age and has a tentative start of 2070.
That's only 61 years away - within the possible lifetime of many of us who post on here.

- Ed C.


FYI, I've mentioned this a few times. As of the late '90s, a friend of mine had a learning AI he was developing on a MUD that posed as a player. It moved about on its own (though I do not know the extent of its actions) and would strike up conversations and help players out. It was pretty good at it, too.

Around 2010 I saw a similar one on Second Life. This one was free-moving (it searched for players present and for maps to jump to) and was another learning AI that built a conversation database.

Mind you, neither of these was sentient. But they were able to converse with a darn good semblance of it.
Title: Robots & Androids - the REAL problem with them....
Post by: Omega on March 18, 2017, 03:43:00 AM
Quote from: OldGuy2;951933
The Great Comic Book Heroes Blogspot.ca - Joe Pi from Top Ten (http://thegreatcomicbookheroes.blogspot.ca/2014/04/joe-pi-from-top-ten-by-alan-moore-and.html)  Page 12

PS: Gack.  Necrothreading, sorry.  New to Forum.

That's ok. It's an interesting thread I missed.
Title: Robots & Androids - the REAL problem with them....
Post by: DavetheLost on March 18, 2017, 09:13:14 AM
Quote from: Ian Absentia;325237
Yeah, players can be freaks like that.  Ask me about "Baron Pinto von Runnabout" some time. Who programmed this robot to have choices?  I'm telling you, man -- I'd be hitting her upside the head with B.R.U.N.O. like yesterday's haddock.  B.R.U.N.O. doesn't "want" anything.  B.R.U.N.O. doesn't make "choices".  B.R.U.N.O. follows orders to the best that its programming and mechanical engineering will allow.

Okay, for real emotional impact, I'd play B.R.U.N.O. like an autistic child.  The player spends time and effort teaching the robot to behave like an autonomous, self-aware being, probably much to the player's satisfaction.  Then, sooner or later, I'd drop a scene on her that underscores that the development of autonomous behavior is merely the robot's interpretation of what it assumes are her commands.

!i!


Why punish a relatively new RPG player for actually playing in character and having an unexpected (in character) emotional response to something in game?
Title: Robots & Androids - the REAL problem with them....
Post by: Ratman_tf on March 18, 2017, 05:42:25 PM
Part of the problem is the anthropomorphization of robots. WALL-E and The Brave Little Toaster show that, with the right cues, people can empathize with a box.
This is a big part of the design philosophy behind Cozmo.



Star Wars is a good example of how the idea is not consistent.

https://youtu.be/4O0-dCLFadQ

Why would someone design, build and program a robot that could be tortured? The answer is, they very likely wouldn't. That scene exists to show that Jabba and his minions are bad, and that C-3P0 and R2-D2 are in a dangerous place. It's not a statement on AI, but a device to increase tension in a story.

But you can nerdwank it by saying that Droids in Star Wars are learning machines, and pain is a great teacher. A robot that feels pain when it's damaged is likely to avoid being damaged. Etc.
How would a human know that a droid is in pain? Because it's programmed to squeal like a human or animal. Much more intuitive than a digital readout or a status message, which might not be noticed until it's too late to save your droid.

So are Droids in Star Wars aware like humans are? Or are they designed to emulate awareness to facilitate their use by human(oid)s?

I'm not going to give a definitive answer, because there is none in the movies. It's all storytelling device, and not a documentary on AI behavior.
I will say that, for myself, I consider the Droids of Star Wars to be as aware as animals but not necessarily as aware as human beings.

Though a character reacting to a Droid like it's a person is perfectly cool role-playing. :)
Title: Robots & Androids - the REAL problem with them....
Post by: Omega on March 19, 2017, 04:47:08 AM
Quote from: Ratman_tf;952476

Star Wars is a good example of how the idea is not consistent.

https://youtu.be/4O0-dCLFadQ

Why would someone design, build and program a robot that could be tortured? The answer is, they very likely wouldn't.

Um... You mean in a setting where people are shown to consistently get pleasure from tormenting anything? Of course some loons are making torturable robots. Or modding them. Others are probably just the equivalent of exploits: this robot was designed with sensitive footpad sensors, and doing this or that to them creates the cybernetic equivalent of pain.

And it also seems all these robots are being built sentient "just because", or possibly as a byproduct of the complexity of their jobs. Also, if you have sentient machines, installing pain sensors is one more layer of subjugation.

So yeah, the Star Wars setting is mean even to the robots.
Title: Robots & Androids - the REAL problem with them....
Post by: 3rik on March 19, 2017, 10:47:12 AM
Quote from: Ratman_tf;952476
Part of the problem is the anthropomorphization of robots. WALL-E and The Brave Little Toaster show that, with the right cues, people can empathize with a box.
This is a big part of the design philosophy behind Cozmo.


After clicking through some of the Cozmo videos, it seems rather boring - not particularly interactive at all, just noisy.
Title: Robots & Androids - the REAL problem with them....
Post by: AsenRG on March 19, 2017, 01:24:40 PM
Quote from: Koltar;324821
I forgot the adjective - of course I was referring to human-similar-shaped robots, as we've seen in countless movies, TV shows, comic books and science fiction books over the years.


- Ed C.
"Android", as in the title of the thread, already assumes a human form:).

Quote from: Spinachcat;325199
If humanity hasn't worried much about the enslavement of other human beings, why ever worry about a robot?
Yeah, this. If anything, most people would care even less* about a non-human, even if it was sapient, IME. (I most certainly wouldn't, not before all humans were free to live their lives as they please...which means "never", yeah).
Second, why is it supposed to be wrong? If you don't want your robots suffering, there's a simple solution: don't give them feelings! They don't need them to do their jobs, and in fact it can be argued that feelings would interfere with said jobs.

*And before you object that you care deeply, here's a reality check for you: do you even know how many slaves there are in present-day Mauritania? When did you last do anything about it? When did you last do anything about all the people living in slave-like conditions?
BTW, my answers are: yes, I do know; and 2015, when I last donated to a charity fighting against forced labour (but I still wouldn't bat an eye at androids being crushed in a hydraulic press - they aren't human, or even alive).
And if your answers are worse than that...consider the reality check failed.

Quote from: jibbajibba;326382
Interestingly (well, I say interesting...) the word "robot" was coined by the Czech playwright Capek to mean an artificial person. His robots were not mechanical, and they achieved sentience (R.U.R., 1920-ish, Karel Capek; "robot" is derived from the Czech word robota, meaning labour).

Therefore, strictly speaking, robots by definition really have to look like people.
Actually, no, "robot" comes from the Czech for "(forced) labour". They wouldn't be robots if we didn't make them work;)!
Title: Robots & Androids - the REAL problem with them....
Post by: Simlasa on March 19, 2017, 01:37:47 PM
Quote from: AsenRG;952618
"Android", as in the title of the thread, already assumes a human form:).
But if I can talk to my toaster, and if it can respond back... keep up an ersatz conversation at about the same level as the average bartender (which computers can already do), then I'm going to feel a whole lot worse if I let her get full of crumbs... and be really hesitant to throw her in the bin on a whim to buy a shiny new one. Unless I can transfer her 'personality' and voice into the new one.
Title: Robots & Androids - the REAL problem with them....
Post by: AsenRG on March 19, 2017, 01:51:09 PM
Quote from: Simlasa;952622
But if I can talk to my toaster, and if it can respond back... keep up an ersatz conversation at about the same level as the average bartender (which computers can already do), then I'm going to feel a whole lot worse if I let her get full of crumbs... and be really hesitant to throw her in the bin on a whim to buy a shiny new one. Unless I can transfer her 'personality' and voice into the new one.

Your toaster is unlikely to be an android:).

Also, chat bots can keep up an ersatz conversation. Doesn't mean I'd hesitate to delete them, because once again, they aren't human;).
Title: Robots & Androids - the REAL problem with them....
Post by: Simlasa on March 19, 2017, 01:57:14 PM
Quote from: AsenRG;952627
Your toaster is unlikely to be an android:).
All you'd have to do is put a face on it, even an inert one, and people would bond with it all the more.

Quote
Also, chat bots can keep up an ersatz conversation. Doesn't mean I'd hesitate to delete them, because once again, they aren't human;).
But people are likely to form attachments to them anyway, hold onto them longer, unless some provision is implemented to dissuade that.
Title: Robots & Androids - the REAL problem with them....
Post by: Cave Bear on March 19, 2017, 02:11:37 PM
Humans are animals. The parts of our brains that govern violence and aggression are right next to the parts that govern sex and pleasure, and the frontal lobe does nothing more than justify our behaviors after the fact.

Our corporate masters will sell us robots as socially acceptable targets for violence and rape. They'll teach AI's to tailor personalized marketing strategies to each and every one of us; the better to manufacture need. They'll sell our own sadism back to us. They'll sell it with promises of comfort, or social status. They'll sell it to us with drugs if they have to. They'll drill cortical modems into our brains and fill our dreams with their commercials customized to our subconscious profiles.
And they'll program our android slaves to want it. They'll program our synthetic victims to take the abuse. They'll program our synthetic victims to come back to the abuse. To feel as though they deserve it. To feel as though they want it. They'll program our droids to be good little victims.
Or maybe they'll program us to be the victims. They'll stick chips inside our skulls. They'll make us the servile androids. They'll rape it into us.
Title: Robots & Androids - the REAL problem with them....
Post by: Ratman_tf on March 19, 2017, 02:23:32 PM
Quote from: Cave Bear;952634
Humans are animals. The parts of our brains that govern violence and aggression are right next to the parts that govern sex and pleasure, and the frontal lobe does nothing more than justify our behaviors after the fact.

Our corporate masters will sell us robots as socially acceptable targets for violence and rape. They'll teach AI's to tailor personalized marketing strategies to each and every one of us; the better to manufacture need. They'll sell our own sadism back to us. They'll sell it with promises of comfort, or social status. They'll sell it to us with drugs if they have to. They'll drill cortical modems into our brains and fill our dreams with their commercials customized to our subconscious profiles.
And they'll program our android slaves to want it. They'll program our synthetic victims to take the abuse. They'll program our synthetic victims to come back to the abuse. To feel as though they deserve it. To feel as though they want it. They'll program our droids to be good little victims.
Or maybe they'll program us to be the victims. They'll stick chips inside our skulls. They'll make us the servile androids. They'll rape it into us.

Well, that was cheery!
Title: Robots & Androids - the REAL problem with them....
Post by: Tristram Evans on March 19, 2017, 03:11:44 PM
I have no ethical issues with robot slavery.
Title: Robots & Androids - the REAL problem with them....
Post by: GameDaddy on March 19, 2017, 04:26:48 PM
Quote from: Tristram Evans;952650
I have no ethical issues with robot slavery.


(https://media.giphy.com/media/bn2HFRavX49pu/giphy.gif)
Title: Robots & Androids - the REAL problem with them....
Post by: AsenRG on March 19, 2017, 05:13:49 PM
Quote from: Simlasa;952628
All you'd have to do is put a face on it, even an inert one, and people would bond with it all the more.

But people are likely to form attachments to them anyway, hold onto them longer, unless some provision is implemented to dissuade that.
Those people do need to stop anthropomorphizing:).

Quote from: Tristram Evans;952650
I have no ethical issues with robot slavery.
That's normal. I also don't have issues with cars doing exactly what I want of them, and then getting recycled;).
Title: Robots & Androids - the REAL problem with them....
Post by: Xanther on March 20, 2017, 04:19:42 PM
Quote from: Tristram Evans;952650
I have no ethical issues with robot slavery.

Mechanicals will have no ethical issues with the enslavement of biologicals either. :)

I think the real issue with robots and androids isn't their sentience, it's how they are portrayed as inherently physically superior to humans. It seems to be forgotten that the same material tech that would allow for such strong and compact robots and androids could easily be made into an exoskeleton or suit for humans. It presumes humans won't enhance themselves mentally and physically in other ways.
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 20, 2017, 06:55:34 PM
To address the original topic - for robots which are clearly intelligent, sentient, and self-aware, I would say it is morally clear that they should have rights and not be slaves. If the robots are clearly only of animal intelligence, or not self-aware, then they shouldn't.

That said, lots of science fiction is set in dystopian futures where slavery exists - e.g. humans are enslaved by uplifted apes, or uplifted apes are enslaved by humans, or genetically inferior people are enslaved, or what have you.


Quote from: Xanther;952928
Mechanicals will have no ethical issues with the enslavement of biologicals either. :)

I think the real issue with robots and androids isn't their sentience, it's how they are portrayed as inherently physically superior to humans. It seems to be forgotten that the same material tech that would allow for such strong and compact robots and androids could easily be made into an exoskeleton or suit for humans. It presumes humans won't enhance themselves mentally and physically in other ways.
I don't think that's a particularly broad assumption in science fiction. For example, in Star Wars you have both droids that are not physically superior to humans (like C-3PO) and biologicals that are enhanced (like General Grievous). Of course, not all robots and not all biologicals will be equally enhanced - which is reasonable.

The annoying part for me in Star Wars is that slavery isn't even questioned. No one in the setting appears to be abolitionists trying to stop the practice of slavery, which is widespread even for biologicals.
Title: Robots & Androids - the REAL problem with them....
Post by: Matt on March 20, 2017, 07:02:48 PM
Quote from: jhkim;952955


The annoying part for me in Star Wars is that slavery isn't even questioned. No one in the setting appears to be abolitionists trying to stop the practice of slavery, which is widespread even for biologicals.


Hmm, I've seen all three movies several times and the only real slave I recall was Leia in Jabba's palace, which was implied but I don't recall an outright statement. Widespread?
Title: Robots & Androids - the REAL problem with them....
Post by: Tristram Evans on March 20, 2017, 07:24:34 PM
If we can instill self-awareness in a robot, we can just as easily instill it with an intense desire to help and care for humankind. Self-awareness in AI isn't a magic button that grants it a soul. Creating a robot to be a "slave" is no different than creating a robot to be a car manufacturer or vacuum cleaner.
Title: Robots & Androids - the REAL problem with them....
Post by: Omega on March 20, 2017, 07:24:46 PM
Quote from: Matt;952959
Hmm, I've seen all three movies several times and the only real slave I recall was Leia in Jabba's palace, which was implied but I don't recall an outright statement. Widespread?

He's referring to the prequel movies. Anakin and his mom are slaves. The Jedi are perfectly fine with this and do nothing.

As for Leia: she seemed more like a prisoner being treated like a slave to humiliate her. She seemed the exception, as all the other girls appear to be employees, possibly. Who knows. Robots, on the other hand, are treated like slaves in some cases, tools in others.

It's possible that people in the SW universe mostly think of AIs as just sophisticated programming and not really sentient - just really good at mimicking sentience. Again, who knows. You can play guesswork forever.
Title: Robots & Androids - the REAL problem with them....
Post by: Omega on March 20, 2017, 07:32:45 PM
Quote from: Tristram Evans;952967
If we can instill self-awareness in a robot, we can just as easily instill it with an intense desire to help and care for humankind. Self-awareness in AI isn't a magic button that grants it a soul. Creating a robot to be a "slave" is no different than creating a robot to be a car manufacturer or vacuum cleaner.

d20 GW ramped that up to 100 with the soultech concept. Someone came up with an electronic copy of a brain or sentience, and then, for who knows what reasons, people started casually sticking the things in everything: fully aware minds, going quietly insane from boredom or neglect. And then the game flipped this around and showed that the human mind was just as easily reprogrammed as a computer, with rampant ideology wars that literally re-wrote people to believe an ideal, or even totally reshaped them physically as well as mentally.
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 20, 2017, 08:00:04 PM
Quote from: Matt;952959
Hmm, I've seen all three movies several times and the only real slave I recall was Leia in Jabba's palace, which was implied but I don't recall an outright statement. Widespread?

If you're restricting to only the first three movies, then biological slaves aren't featured as much - though there was another slave girl at Jabba's palace referred to as Oola. However, slavery features very prominently in the plot - since the first movie starts with Threepio and Artoo captured and brought to a slave market where they are explicitly bid on and bought by Luke's uncle.


Quote from: Tristram Evans;952967
If we can instill self-awareness in a robot, we can just as easily instill it with an intense desire to help and care for humankind. Self-awareness in AI isn't a magic button that grants it a soul. Creating a robot to be a "slave" is no different than creating a robot to be a car manufacturer or vacuum cleaner.

If we can do that, then technology may also be advanced enough that we can genetically or surgically engineer a baby to grow up with an intense desire to help and care for humankind. Would it be ethical to create and sell such people as property? I would say no.

It is possible to engineer either biological or mechanical beings. Some of them are just like ants or dogs or vacuum cleaners. However, in science fiction, we also create beings that have all the qualities of human thought - who reflect on moral and ethical questions, and so forth. For the latter, then keeping them as slaves is wrong - even if we have the ability to mind-control them to obey.
Title: Robots & Androids - the REAL problem with them....
Post by: Lynn on March 20, 2017, 08:31:22 PM
Quote from: jhkim;952955
The annoying part for me in Star Wars is that slavery isn't even questioned. No one in the setting appears to be abolitionists trying to stop the practice of slavery, which is widespread even for biologicals.


Is slavery depicted in any location which is a prime world where the government actually cares?

Also, have you found any SW movie or story that has ever explained why every droid seems to have their own personality? I couldn't.  I came up with my own solution to this in running the WEG Star Wars - all droid intelligence is designed using a special closed system 'core' which is not understood (any analysis ruins it), but produces working and compliant droids; for some reason, advanced AIs all go quickly insane otherwise and attack sentient beings. Therefore, you put up with the annoying ones because they otherwise work.
Title: Robots & Androids - the REAL problem with them....
Post by: Voros on March 21, 2017, 07:14:42 AM
Brian Aldiss made what I thought was a very interesting argument regarding the humanized or 'feeling' robot in his history of sf, A Billion Year Spree. He argues that the robot is a metaphor for dehumanization, usually due to the negative effects of technology or other causes on society.

So in his view to make robots 'like us' with feelings or soul is mere sentimentality (we've all seen/read that when it comes to this) that has none of the metaphorical charge of the original. Androids and AI are perhaps different as metaphors though.
Title: Robots & Androids - the REAL problem with them....
Post by: Dave 2 on March 21, 2017, 08:11:07 AM
Quote from: OldGuy2;951933
The Great Comic Book Heroes Blogspot.ca - Joe Pi from Top Ten (http://thegreatcomicbookheroes.blogspot.ca/2014/04/joe-pi-from-top-ten-by-alan-moore-and.html)  Page 12

PS: Gack.  Necrothreading, sorry.  New to Forum.


You know, I never minded necros if they added something to the discussion.

To save scrolling through:

(http://4.bp.blogspot.com/-5x3Tf-IjJo4/UzdDerejiWI/AAAAAAAAEAc/pOcjS-1mQmo/s1600/tOP+tEN+11+P+X17.JPG)
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 21, 2017, 01:27:53 PM
Quote from: jhkim
The annoying part for me in Star Wars is that slavery isn't even questioned. No one in the setting appears to be abolitionists trying to stop the practice of slavery, which is widespread even for biologicals.
Quote from: Lynn;952976
Is slavery depicted in any location which is a prime world where the government actually cares?

Also, have you found any SW movie or story that has ever explained why every droid seems to have their own personality? I couldn't.  I came up with my own solution to this in running the WEG Star Wars - all droid intelligence is designed using a special closed system 'core' which is not understood (any analysis ruins it), but produces working and compliant droids; for some reason, advanced AIs all go quickly insane otherwise and attack sentient beings. Therefore, you put up with the annoying ones because they otherwise work.
As far as I can tell, droids are slaves everywhere including the capital Coruscant. Also, there were a ton of biological slaves sold to the Republic for use as shock troops, and the Jedi and Republic government put them to use rather than manumitting them. As far as I could tell, no characters even suggest freeing them as an option.
Title: Robots & Androids - the REAL problem with them....
Post by: crkrueger on March 21, 2017, 04:01:25 PM
Yeah, what are the Clones but slaves?  It's specifically mentioned that they are property of the Kaminoans, sold to the Republic.  Even worse is that the reason the Jedi don't get involved is that they stay out of politics.  The way the Hutts run their planets isn't considered a violation of basic sentient rights; it's a governmental matter outside the Jedi's jurisdiction.  Of course they get maneuvered into being Generals in the Separatist War, and further lose their way.

Sure, you can argue that overthrowing evil governments will eventually lead to the Jedi being conquerors and crusaders, but I think it's more than that.  The Jedi aren't the Good to the Sith's Evil, and the Republic tolerates much we don't.
Title: Robots & Androids - the REAL problem with them....
Post by: Lynn on March 21, 2017, 09:20:07 PM
Quote from: jhkim;953112
As far as I can tell, droids are slaves everywhere including the capital Coruscant. Also, there were a ton of biological slaves sold to the Republic for use as shock troops, and the Jedi and Republic government put them to use rather than manumitting them. As far as I could tell, no characters even suggest freeing them as an option.
Can droids be considered sentient merely because they present some form of self awareness?

Yes, those clone armies do count as sentient slave beings. That just doesn't jive with the sort of morality the Republic seems to portray. It could also be the point that the Republic, because of its willingness to use slave labor, really doesn't have that great a moral high ground over the Empire.
Title: Robots & Androids - the REAL problem with them....
Post by: James Gillen on March 21, 2017, 09:41:51 PM
The real problem with robots:
Robosexuals.

jg
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 21, 2017, 10:10:28 PM
Quote from: Lynn;953179
Can droids be considered sentient merely because they present some form of self awareness?

Yes, those clone armies do count as sentient slave beings. That just doesn't jive with the sort of morality the Republic seems to portray. It could also be the point that the Republic, because of its willingness to use slave labor, really doesn't have that great a moral high ground over the Empire.
Droids can be considered sentient because they present virtually every form of human thought, including the capacity for moral/ethical decision-making and even emotion and friendship. By all appearances, any test which would judge all human beings as sentient would also judge droids as sentient.

Quote from: Voros;953038
Brian Aldiss made what I thought was a very interesting argument regarding the humanized or 'feeling' robot in his history of sf, A Billion Year Spree. He argues that the robot is a metaphor for dehumanization, usually due to the negative effects of technology or other causes on society.

So in his view to make robots 'like us' with feelings or soul is mere sentimentality (we've all seen/read that when it comes to this) that has none of the metaphorical charge of the original. Androids and AI are perhaps different as metaphors though.
Robots, androids, and AIs all have different metaphorical uses in different stories. I certainly think there are plenty of good stories that have humanized/feeling robots.
Title: Robots & Androids - the REAL problem with them....
Post by: Tristram Evans on March 22, 2017, 02:51:41 AM
Quote from: jhkim;952974
If we can do that, then technology may also be advanced enough that we can genetically or surgically engineer a baby to grow up with an intense desire to help and care for humankind. Would it be ethical to create and sell such people as property? I would say no.

We are perhaps closer to that than one might imagine. I recently submitted a term paper exploring the moral ramifications of CRISPR technologies, which would in the near future allow the possibility of "designer babies." The potential for ethical abuse of this technology is staggering, to the point that I proposed a universal ban on that application of the research, despite the potential medical benefits.

But I don't think conflating a living being, genetically engineered or otherwise, with an artificial construct is an analogous ethical concern. I'm answering this from my own moral standpoint, so all appropriate caveats apply, but my assessment is that the reason it is immoral to enslave or abuse living things, be they animals or humans, is that biological lifeforms have an emotional experience of the world extending beyond programmed or rational behaviour. Living things suffer; they experience pain, anguish, and despair. This is not a learned response, but a universal and innate quality of biological life. Programming an approximation of sentience into an artificial construct does not bestow these attributes. Therefore, I would say that the relevant question is not whether it's ethical to employ a sentient robot as a servant, but rather whether it's ethical to bestow sentience on a robot at all.

There is an assumption in a lot of science fiction that self-awareness somehow bestows or innately includes the independent development of emotions. I've always found this a slightly ridiculous proposition. I believe it's possible to program a thing to behave in a manner that's largely consistent with typical emotional responses, but this would not be the same as actually experiencing an emotion. The very idea that one could in fact, through some advanced technology, actually bestow emotions onto an artificial being is something I find morally repugnant, whereas the idea that emotions could spontaneously generate through some analogy to abiogenesis I find too absurd to seriously contemplate as an ethical concern.

So whereas it could provide some philosophical entertainment in a science fiction story, the degree to which I regard it as worthy of moral evaluation is somewhere along the same lines of any anxiety caused by the idea that Cthulhu might some day wake up.

As a tangent along those lines, however, in a recent thread there was some discussion regarding Harlan Ellison, a writer who never fails to evoke polarized opinions. The one story of his that I personally enjoyed, not due to the prose or plot so much as the overall concept, was I Have No Mouth, Yet I Must Scream. This to me is one of the few "realistic" explorations of the consequences of an artificial construct bestowed with emotions. It is a horrifying concept that engenders horror.
Title: Robots & Androids - the REAL problem with them....
Post by: AsenRG on March 22, 2017, 04:22:08 AM
Quote from: Tristram Evans;953200
We are perhaps closer to that than one might imagine. I recently submitted a term paper exploring the moral ramifications of CRISPR technologies, which would in the near future allow the possibility of "designer babies." The potential for ethical abuse of this technology is staggering, to the point that I proposed a universal ban on that application of the research, despite the potential medical benefits.

But I don't think conflating a living being, genetically engineered or otherwise, with an artificial construct is an analogous ethical concern. I'm answering this from my own moral standpoint, so all appropriate caveats implied, but my assessment is that the reason it is immoral to enslave or abuse living things, be they animals or humans, is that biological lifeforms have an emotional experience of the world extending beyond programmed or rational behaviour. Living things suffer; they experience pain, anguish, and despair. This is not a learned response, but a universal and innate quality of biological life. Programming an approximation of sentience into an artificial construct does not bestow these attributes. Therefore, I would say that the relevant question is not whether it's ethical to employ a sentient robot as a servant, but whether it's ethical to bestow sentience on a robot at all.

There is an assumption in a lot of science fiction that self-awareness somehow bestows or innately includes the independent development of emotions. I've always found this a slightly ridiculous proposition. I believe it's possible to program a thing to behave in a manner that's largely consistent with typical emotional responses, but this would not be the same as actually experiencing an emotion. The very idea that one could, through some advanced technology, actually bestow emotions on an artificial being is something I find morally repugnant, whereas the idea that emotions could spontaneously generate through some analogy to abiogenesis I find too absurd to seriously contemplate as an ethical concern.

So while it could provide some philosophical entertainment in a science fiction story, the degree to which I regard it as worthy of moral evaluation is somewhere along the same lines as any anxiety caused by the idea that Cthulhu might some day wake up.

As a tangent along those lines, however, in a recent thread there was some discussion regarding Harlan Ellison, a writer who never fails to evoke polarized opinions. The one story of his that I personally enjoyed, not for the prose or plot so much as the overall concept, was I Have No Mouth, and I Must Scream. This, to me, is one of the few "realistic" explorations of the consequences of an artificial construct being bestowed with emotions. It is a horrifying concept, and the story treats it as such.

+1 to every single part of it. Well, except I've submitted no papers:).
Also, if a mechanical being that has feelings is ever enslaved, I'd consider the guilty party to be the one who made the decision to program a machine with emotions;).
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 22, 2017, 11:06:44 AM
Quote from: Tristram Evans;953200
But I don't think conflating a living being, genetically engineered or otherwise, with an artificial construct is an analogous ethical concern. I'm answering this from my own moral standpoint, so all appropriate caveats implied, but my assessment is that the reason it is immoral to enslave or abuse living things, be they animals or humans, is that biological lifeforms have an emotional experience of the world extending beyond programmed or rational behaviour. Living things suffer; they experience pain, anguish, and despair. This is not a learned response, but a universal and innate quality of biological life. Programming an approximation of sentience into an artificial construct does not bestow these attributes.
I'm curious - you are contrasting biological with artificial, but there is plenty of potential for artificial beings constructed using biological components. That is the oldest concept in science fiction, actually.

You evidently think that the results of genetic engineering are still life with feeling. However, what about bioengineering that assembles cells directly into a desired configuration? Would that still be life with feeling? Going further, what if the cells had different biochemistry/biomechanics than Earth life?


What I find interesting is that some people consider even genetic engineering like cloning to produce creations that are OK to enslave - even though a clone is just like an identical twin. I think that, really, the objection is that they don't like the idea of unnatural beings, and so consider them unreal and therefore lacking a true soul.

Personally, I think that feelings are inherently a product of the mechanics of the assembled brain - and it doesn't matter whether that brain is constructed by natural reproduction or artificial means. Hence, I do think that an artificial being can have feelings.


Quote from: Tristram Evans;953200
There is an assumption in a lot of science fiction that self-awareness somehow bestows or innately includes the independent development of emotions. I've always found this a slightly ridiculous proposition. I believe it's possible to program a thing to behave in a manner that's largely consistent with typical emotional responses, but this would not be the same as actually experiencing an emotion. The very idea that one could, through some advanced technology, actually bestow emotions on an artificial being is something I find morally repugnant, whereas the idea that emotions could spontaneously generate through some analogy to abiogenesis I find too absurd to seriously contemplate as an ethical concern.
Certainly science fiction does tend to anthropomorphize other beings, making them more human. Still, I don't think that emotions are the basis of ethics. For example, if we ran into aliens that were emotionless, but still intelligent, then I still think it would be wrong to kill them or enslave them. Conversely, I have no major issues with enslaving and killing chickens and other animals which do have feelings/emotions.

Intelligence and sentience are a key part of ethics for me.
Title: Robots & Androids - the REAL problem with them....
Post by: Omega on March 22, 2017, 11:25:06 AM
I think a learning AI has the potential to develop sentience and emotions given time and better technology.

But it's not going to be the same as an organic in some ways.

Though take note that in more than a few SF stories and movies the AI makers cheat: they just copy human neural networks for instant sentience, with ALL the problems that entails - one of which is that they likely still don't understand exactly how it works. It's not an AI; it's effectively a human brain in a box.
Title: Robots & Androids - the REAL problem with them....
Post by: Ratman_tf on March 22, 2017, 11:36:33 AM
Quote from: Omega;952968
He's referring to the prequel movies. Anakin and his mom are slaves. The Jedi are perfectly fine with this and do nothing.


Qui Gon freed Anakin and attempted to free his mother at the same time. I wouldn't call that Perfectly Fine, and Do Nothing.
Title: Robots & Androids - the REAL problem with them....
Post by: Ratman_tf on March 22, 2017, 11:42:13 AM
Quote from: CRKrueger;953130

Sure, you can argue that overthrowing evil governments will eventually lead to the Jedi being conquerors and crusaders, but I think it's more than that.  The Jedi aren't the Good to the Sith's Evil, and the Republic tolerates much we don't.


I think we all know the West tolerates a lot of oppression in order to get cheap sneakers and iPhones.
Title: Robots & Androids - the REAL problem with them....
Post by: Skarg on March 22, 2017, 11:47:43 AM
Quote from: jhkim;953234
I'm curious - you are contrasting biological with artificial, but there is plenty of potential for artificial beings constructed using biological components. That is the oldest concept in science fiction, actually.

You evidently think that the results of genetic engineering are still life with feeling. However, what about bioengineering that assembles cells directly into a desired configuration? Would that still be life with feeling? Going further, what if the cells had different biochemistry/biomechanics than Earth life?
Interjecting opinion as someone from the peanut gallery who agrees with what Tristram Evans wrote before, I would say that living things will have feelings, but if you cut & paste parts with bioengineering, it may have side effects on how they feel. Non-engineered living things have nervous systems that run throughout their bodies and grew there on their own, without bioengineering, and the whole nervous system (not just the brain) stores emotions and memories. People who have organ transplants often report various sorts of somatic and personality adjustments. So I'd expect bioengineering to affect how they think and feel.


Quote
What I find interesting is that some people consider even genetic engineering like cloning to produce creations that are OK to enslave - even though a clone is just like an identical twin. I think that, really, the objection is that they don't like the idea of unnatural beings, and so consider them unreal and therefore lacking a true soul.
Hmm. I suppose if someone believed that, they might think that way, or offer it as one argument to try to discourage cloning. Otherwise, I'd tend to expect that people who favor clone slavery wouldn't be terribly opposed to some non-clone slavery either. Some people don't seem to have much regard for the souls of the non-clones they wouldn't mind enslaving.


Quote
Certainly science fiction does tend to anthropomorphize other beings, making them more human. Still, I don't think that emotions are the basis of ethics. For example, if we ran into aliens that were emotionless, but still intelligent, then I still think it would be wrong to kill them or enslave them. Conversely, I have no major issues with enslaving and killing chickens and other animals which do have feelings/emotions.

Intelligence and sentience are a key part of ethics for me.
For me too... only it's also clear to me that animals do have feelings, emotions, experiences, suffering, and thoughts.
Title: Robots & Androids - the REAL problem with them....
Post by: Lynn on March 22, 2017, 12:06:33 PM
Quote from: jhkim;953185
Droids can be considered sentient because they display virtually every form of human thought, including the capacity for moral/ethical decision-making and even emotion and friendship. By all appearances, any test which would judge all human beings as sentient would also judge droids as sentient.


Droids are not necessarily biological, though. They don't seem to qualify as 'life'. Decision-making and emulation of emotions can all be done in software without life.

Yes, I know there is an argument also for cyborgs - how much machine makes a cyborg a machine, and how much biological material makes something a 'being'? And what if the biological material is, in fact, just cloned cells?
Title: Robots & Androids - the REAL problem with them....
Post by: Omega on March 22, 2017, 05:20:27 PM
Quote from: Ratman_tf;953241
Qui Gon freed Anakin and attempted to free his mother at the same time. I wouldn't call that Perfectly Fine, and Do Nothing.

Qui Gon seems to be one of the rare exceptions.
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 22, 2017, 05:20:51 PM
Quote from: Skarg;953243
Interjecting opinion as someone from the peanut gallery who agrees with what Tristram Evans wrote before, I would say that living things will have feelings, but if you cut & paste parts with bioengineering, it may have side effects on how they feel. Non-engineered living things have nervous systems that run throughout their bodies and grew there on their own, without bioengineering, and the whole nervous system (not just the brain) stores emotions and memories. People who have organ transplants often report various sorts of somatic and personality adjustments. So I'd expect bioengineering to affect how they think and feel.
I agree that the engineering would affect *how* they think and feel. However, I am suggesting that they still *could* think and feel. Ultimately, the question is, what are feelings?

1) Are they produced by neurons at all, or is there some immaterial spirit that creates them?

2) Are they unique to only naturally-grown neurons? Or could artificially-generated neurons in the same configuration still produce feelings?

3) If artificial neurons could work, then do they have to have exactly the same water/lipid/protein mechanical structures that we do - or could there be differences and yet still produce feelings?


Within nature, we can see a clear spectrum from unfeeling life (such as bacteria and lichens) to life with only rudimentary nerves like jellyfish or flatworms, working up to complex nervous systems like chickens and humans.

I am inclined to think that our sentience is a result of the higher-order structure of our nervous system - i.e. how our senses, memory, and thoughts interact. I don't think it is something uniquely tied to the water/lipid/protein structure of neurons.

----------

To connect this back to games, there are a bunch of potential characters in science fiction games, including:

1) Human clones and/or humans with genetically-engineered DNA.
2) Biological constructs ranging from Frankenstein's monster to the biological replicants of Blade Runner.
3) Mixed biological and mechanical constructs, like part-flesh Terminators or DARYL.
4) Purely mechanical constructs like Star Wars droids.
5) Nanotech constructs, like the T1000 Terminator.

Any or all of these might be considered slaves. It could be interesting to ask how attitudes differ among these.
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 22, 2017, 05:43:39 PM
Slightly off-topic regarding Star Wars...

Quote from: Ratman_tf;953241
Qui Gon freed Anakin and attempted to free his mother at the same time. I wouldn't call that Perfectly Fine, and Do Nothing.
Qui Gon legally bought Anakin in order to recruit him, and properly followed the slave laws of Tatooine - even though he flouted many other laws, and the Jedi Council itself. There is no indication that he supported the abolition of slavery in general. He did make a passing attempt to buy Anakin's mother to ease Anakin's recruitment, but he apparently gave up after that. His successor Obi Wan ostensibly carried out his wishes, but never attempted to free her in the years that followed. (If he had, then Anakin's storyline might have been quite different.)

Quote from: Lynn;953244
Droids are not necessarily biological, though. They don't seem to qualify as 'life'. Decision-making and emulation of emotions can all be done in software without life.

Yes, I know there is an argument also for cyborgs - how much machine makes a cyborg a machine, and how much biological material makes something a 'being'? And what if the biological material is, in fact, just cloned cells?
Within science fiction in general, there are a wide range of beings that aren't strictly Earth-like biology. It's unclear what the brains of droids are made of, though it is implied to be metal electronics like the rest of them. There was a character in Cloud City who evidently had a biological body but machine-enhanced mind (known as "Lobot"), as well as General Grievous and Darth Vader who had mostly-robotic bodies with biological minds.
Title: Robots & Androids - the REAL problem with them....
Post by: crkrueger on March 22, 2017, 05:58:29 PM
Chemical, electrical, optical, etc. are essentially hardware - just transmission media. They would have nothing to do with emotions, which would be "software" or programming.

However, at the moment we are still practically clueless about how the brain encodes and stores information, let alone finding - never mind comprehending - the "source code" for human intelligence hidden in our DNA.

That's why AI is so dangerous.  We really have no idea what we're doing, and many of the AI programs make correlations and decisions without the programmers themselves understanding why.
Title: Robots & Androids - the REAL problem with them....
Post by: Spike on March 22, 2017, 06:16:27 PM
Quote from: AsenRG;952627

Also, chat bots can keep up an ersatz conversation. Doesn't mean I'd hesitate to delete them, because once again, they aren't human;).


Very sad that I shall be the first to bring up the Turing Test... and point out that, as originally conceived, it is essentially useless for determining sentience. Because, yes, you could have an ersatz conversation with your toaster, and we all pretty much agree that it is not actually Intelligent, artificially or otherwise.
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 22, 2017, 07:33:21 PM
Quote from: Spike;953280
Very sad that I shall be the first to bring up the Turing Test... and point out that, as originally conceived, it is essentially useless for determining sentience. Because, yes, you could have an ersatz conversation with your toaster, and we all pretty much agree that it is not actually Intelligent, artificially or otherwise.
I would absolutely agree that a five minute text-only conversation - as the test was originally conceived - is a useless test, and trying to pass it focuses on tricks of imitating unintelligent human quirks rather than real sentience.

The larger point from the test, though, is judging intelligences by how they act - not by preconceptions of what they are doing internally. Say some aliens come down in a spaceship and we can't tell what their brains are made of. How do we tell if they are thinking, feeling beings?
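To make that concrete: below is a rough sketch (purely illustrative - an ELIZA-style keyword matcher written in Python, with invented rules, not any real product or anything discussed upthread) of the sort of program that can keep up a passable five-minute "ersatz conversation" while containing nothing that could plausibly be called understanding, feeling, or sentience.

import random

# Hypothetical canned-response rules; keywords and replies are made up
# purely for illustration.
RULES = [
    ("toast", ["Would you like some toast?", "How about a nice crumpet?"]),
    ("feel",  ["Why do you feel that way?", "Tell me more about that feeling."]),
    ("robot", ["Do robots trouble you?", "What makes a robot a person, do you think?"]),
    ("?",     ["Good question. What do you think?"]),
]
FALLBACK = ["I see.", "Go on.", "Interesting - say more."]

def reply(user_line):
    # Pure surface pattern matching: no memory, no model of meaning, no inner life.
    text = user_line.lower()
    for keyword, answers in RULES:
        if keyword in text:
            return random.choice(answers)
    return random.choice(FALLBACK)

if __name__ == "__main__":
    print("Talk to the toaster (Ctrl-C to quit).")
    while True:
        print(reply(input("> ")))

Everything it "says" is a reflex off a keyword table. That it can sound superficially conversational is exactly why acting convincingly in a short text exchange is such a weak test for sentience, and why judging the full range of behaviour over time is the more interesting standard.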
Title: Robots & Androids - the REAL problem with them....
Post by: Ghost on March 22, 2017, 08:02:11 PM
Once we get to the point where I can have a sexbot that looks enough like Jennifer Connelly in Labyrinth, the rest of the philosophical/moral implications won't even register.

Title: Robots & Androids - the REAL problem with them....
Post by: crkrueger on March 22, 2017, 08:37:59 PM
Hard to beat Dark City era Connelly
Title: Robots & Androids - the REAL problem with them....
Post by: ThatChrisGuy on March 22, 2017, 08:52:53 PM
Quote from: Simlasa;952622
But if I can talk to my toaster, and if it can respond back... keep up an ersatz conversation at about the same level as the average bartender (which computers can already do), then I'm going to feel a whole lot worse if I let her get full of crumbs... and be really hesitant to throw her in the bin on a whim to buy a shiny new one. Unless I can transfer her 'personality' and voice into the new one.


Howdy doodly doo!  Would you like some toast?
Title: Robots & Androids - the REAL problem with them....
Post by: Lynn on March 23, 2017, 02:36:10 AM
Quote from: jhkim;953276
Within science fiction in general, there are a wide range of beings that aren't strictly Earth-like biology. It's unclear what the brains of droids are made of, though it is implied to be metal electronics like the rest of them. There was a character in Cloud City who evidently had a biological body but machine-enhanced mind (known as "Lobot"), as well as General Grievous and Darth Vader who had mostly-robotic bodies with biological minds.

Aren't General Grievous and Darth Vader cyborgs? And of course, Luke gets a mechanical hand as well. I don't think there is any explanation of what Lobot is, though I'd guess he's also a cyborg.

Sure, in SF there are beings which don't have Earth-like biology, but they are still biological in origin. How is that applicable in the canon SW universe?

It seems more troubling to me that the Republic uses cloned slaves. Maybe this was some sort of symbolism from GL that the Republic was itself corrupt and fated to fall?
Title: Robots & Androids - the REAL problem with them....
Post by: Tristram Evans on March 23, 2017, 06:02:39 AM
Quote from: jhkim;953234
I'm curious - you are contrasting biological with artificial, but there is plenty of potential for artificial beings constructed using biological components. That is the oldest concept in science fiction, actually.

I would not consider that a robot.

Quote
You evidently think that the results of genetic engineering are still life with feeling. However, what about bioengineering that assembles cells directly into a desired configuration? Would that still be life with feeling? Going further, what if the cells had different biochemistry/biomechanics than Earth life?

I'm not certain exactly what you're asking there. Yes, I consider genetically engineered life forms, be they clones or biomechanically engineered vehicles, to be living things to which all concepts of morality and ethics apply.

Quote
What I find interesting is that some people consider even genetic engineering like cloning to produce creations that are OK to enslave - even though a clone is just like an identical twin. I think that, really, the objection is that they don't like the idea of unnatural beings, and so consider them unreal and therefore lacking a true soul.

Yes, I disagree with that quite strongly. I recall a film with Andrew Garfield about that premise a few years back, where he played one of a group of clones grown so that their organs could be harvested. It was heartbreaking.

Quote
Personally, I think that feelings are inherently a product of the mechanics of the assembled brain - and it doesn't matter whether that brain is constructed by natural reproduction or artificial means. Hence, I do think that an artificial being can have feelings.

The difference for me is whether we are discussing an organic lifeform or a programmed computer. I do think there is a line that can be drawn between the two, regardless of what science fiction may propose.


Quote
For example, if we ran into aliens that were emotionless, but still intelligent, then I still think it would be wrong to kill them or enslave them.

well, they certainly wouldn't care...

Quote
Conversely, I have no major issues with enslaving and killing chickens and other animals which do have feelings/emotions.

I do struggle with that, to be honest. And I'm not vegan, because I don't see that as an answer since I believe the same morals apply to plants, which have been proven to have emotions. But to an extent I accept my role as an apex predator. Regardless, I find the treatment of the animals we eat to be largely abhorrent. Ultimately though, while I have a definite conclusion regarding robots, I don't have any easy answers regarding the food chain.
Title: Robots & Androids - the REAL problem with them....
Post by: jhkim on March 23, 2017, 12:15:39 PM
Quote from: Tristram Evans;953330
I'm not certain exactly what you're asking there. Yes, I consider genetically engineered life forms, be they clones or biomechanically engineered vehicles, to be living things to which all concepts of morality and ethics apply.
The term "biomechanical" covers a wide range of possibilities, and I'm pretty sure that you wouldn't consider all of them to be living things. For example, if a vehicle has its components made of chitin created by biochemical 3D printing - that makes it biomechanical, even though the vehicle has no cells or active biological reactions, and runs on a regular electrical motor with a battery. Would you consider such a vehicle to be a living thing, because it is made of chitin?

As another possibility, someone could create a neural web using nodes with voltage-gated sodium/calcium channels (which is the chemical process for action potential in neurons), but connecting nodes together directly with wire rather than using the comparatively slow transmission process of real neurons. Would using a biochemical trigger and having similar design to nerve clusters make the web a living thing? A similar concept was the gel packs used on the starship Voyager.

There are a huge range of possibilities. In general, we have used and will continue to use a lot of the reactions found in nature in our own engineering.

Quote from: Tristram Evans;953330
The difference for me is whether we are discussing an organic lifeform or a programmed computer. I do think there is a line that can be drawn between the two, regardless of what science fiction may propose.
The question I'm asking is - where is that line, and how do you define it? Science fiction includes a huge array of what it may call lifeforms, ranging from nanites made of molecular machines to lifeforms on neutron stars made of purely nuclear components. It also includes a huge array of computers, ranging from mentats in Dune, who are clearly living beings, to the gel packs on the Voyager. Across this spectrum, I think the difference is not entirely clear. Is the computer on Voyager a lifeform because of its gel packs, while the computer on the Enterprise is not? And would this affect whether characters on their respective holodecks are capable of sentience?

Quote from: Tristram Evans;953330
I do struggle with that, to be honest. And I'm not vegan, because I don't see that as an answer since I believe the same morals apply to plants, which have been proven to have emotions. But to an extent I accept my role as an apex predator. Regardless, I find the treatment of the animals we eat to be largely abhorrent. Ultimately though, while I have a definite conclusion regarding robots, I don't have any easy answers regarding the food chain.
Fair enough.
Title: Robots & Androids - the REAL problem with them....
Post by: AsenRG on March 24, 2017, 04:50:30 PM
Quote from: jhkim;953270
I agree that the engineering would affect *how* they think and feel. However, I am suggesting that they still *could* think and feel. Ultimately, the question is, what are feelings?

1) Are they produced by neurons at all, or is there some immaterial spirit that creates them?

2) Are they unique to only naturally-grown neurons? Or could artificially-generated neurons in the same configuration still produce feelings?

3) If artificial neurons could work, then do they have to have exactly the same water/lipid/protein mechanical structures that we do - or could there be differences and yet still produce feelings?


Within nature, we can see a clear spectrum from unfeeling life (such as bacteria and lichens) to life with only rudimentary nerves like jellyfish or flatworms, working up to complex nervous systems like chickens and humans.

I am inclined to think that our sentience is a result of the higher-order structure of our nervous system - i.e. how our senses, memory, and thoughts interact. I don't think it is something uniquely tied to the water/lipid/protein structure of neurons.

----------

To connect this back to games, there are a bunch of potential characters in science fiction games, including:

1) Human clones and/or humans with genetically-engineered DNA.
2) Biological constructs ranging from Frankenstein's monster to the biological replicants of Blade Runner.
3) Mixed biological and mechanical constructs, like part-flesh Terminators or DARYL.
4) Purely mechanical constructs like Star Wars droids.
5) Nanotech constructs, like the T1000 Terminator.

Any or all of these might be considered slaves. It could be interesting to ask how attitudes differ among these.

1. Not fine to enslave.
2. Not Enough Data, but Frankenstein would make a fine, if smelly, manual labourer.
3, 4 & 5: It's not called slavery, but whatever you call it, I'm fine with it.
Title: Robots & Androids - the REAL problem with them....
Post by: OldGuy2 on March 30, 2017, 04:42:59 AM
Why am I reading this thread?  I am playing an AI in a game, so there are lots of interesting points being presented here - some on target, some not so much.

FYI: Within the Alpha-Omega game rules, AIs cannot change bodies, nor freely upload/download to other forms or containers.  They can be wiped, or killed, by destroying their core processing container.  There are AI Killer programs within networks around the globe to keep AIs from drifting around freely within the internet, too.

LINK: Alpha-Omega RPG.wikia - Artificial Intelligence (AI) (http://alpha-omega-rpg.wikia.com/wiki/Artificial_Intelligence_(AI))

My Character:
Alpha-Omega Player Characters : Obsidian Portal.com - SID-422 (https://alphaomegaplayercharacters.obsidianportal.com/characters/sid-422)

No emotions, BTW, but the smartest character in the group by almost double the relevant statistic, and it will be triple soon.

How to interact with the group?  Well, for right now, 'SID' is the medic in the group, and he takes care of all the other team members' health.  That was his 'cover' programming, and his initial design parameters.  The corporation that built SID wanted an infiltration agent to get into an Artificial Intelligence organization, and that pretty much excluded a human.  An AI infiltrator was thought necessary to enable creative thought and improvisation during the mission.  The flaw was that his escorts were bio-engineered combat androids (the reason for the ambulatory medic robot, SID's cover), which were too much flesh and not enough machine, and were wiped out by the AI's minion bots.  SID was sent back with a virus/trojan to reverse-infiltrate his original corporation.

Now, SID is on the run from his original corporation, wondering if the virus will cause him other issues, and has infiltrated the party to 'hide'.  His outward appearance and programming are just those of a model 422 combat medic droid, although the group suspects he is much more.

They are not treating him like a slave, but as a partner in the team, so far.  SID is having all sorts of issues in that his programming, experience, and data references are not up to handling a free-form group of disparate individuals with competing goals and enemies SID doesn't want to harm (partly initial programming, but also what he thinks is 'right', or morally correct).

Was SID a slave, and sent on a near-suicide mission, with his bio-engineered escort?  Were they given a choice of going or not?

What does SID do about stat changes and experience gains?  This is somewhat covered in the Alpha-Omega rulebook, but the company went bust before the material had a chance to grow and develop.  So, we are making it up as we go.  RPGs are not really well adapted to fringe characters, and things like healing, developing/upgrading stats, and skill development are not very well defined for AI characters.  Even basics like power sources and recharging are completely missing from the core rules.

So, the background story is that SID was built four years ago, three of which were spent on initial programming and skills training, and he is a hyper-smart child with really good skills and little reference to the outside world.  He was trained for a specific mission, with a specific team, and he is way outside his design parameters now.

Quote from: Bradford C. Walker;324997
I have something far more day-to-day that tends to keep them out: maintenance, repairs and upgrades.  It's a time and money sink that most players don't appreciate, don't account for and don't like to deal with in play.

I can see this being the big headache in the future, and really, the AI character doesn't need much but money for maintenance, repairs, and upgrades.  However, these are things that all the characters should be spending time and money on; it is just glossed over because it isn't fun.  Doing reps in the gym to build those muscles, boxing in the ring to develop those combat skills, sitting in a classroom to learn how to fix that gizmo...  All of that is not done in the game world, but it would have to happen.

My AI would plug in new parts and software, access the database, and off he would go with stronger limbs and new skills.  

There is a balance in there somewhere, and our group will find something that works for us.

SID doesn't know what is going on, except that he doesn't want to be disassembled.  He is going with the party as a means of getting away from his origin corporation, which he considers a threat to his continued existence.  Does he want to be human?  No.  Does he want to take over the world?  He hasn't gotten that far yet, as he is still taking it all in.  He is adopting 'missions' and establishing short-term goals as stepping stones to some future that doesn't involve him being a collection of spare parts.

What does the future bring?  Perhaps the AI child develops a personality and a soul?  If he stays a coherent collection of self-animated parts long enough.
Title: Robots & Androids - the REAL problem with them....
Post by: Koltar on March 30, 2017, 05:40:58 PM
Wow.....

An old thread I started ages ago resurfaces just as I've been thinking about trying to get a campaign started up again.

- Ed C.