
Are You What You Pretend To Be?

Started by Anon Adderlan, February 24, 2020, 07:23:56 AM


tenbones

Quote from: Kyle Aaron;1123075Never create an AI without being able to pull the plug out of the wall.

Too late.

Steven Mitchell

Quote from: tenbones;1123086"True AI" being a computerized human? The funny thing is - how would we even know? The more I work with this stuff, and the more obvious it is to me that humans, generally, are not very intelligent, the more I realize that we wouldn't know "True AI" was even a thing until it was *far far far* too late...

Heh.  I've been saying for years that there is no way to create "AI" without also creating "AS" (artificial stupidity).  The ability to make decisions is the ability to screw them up.  Think of the average AI that you know.  50% of all AIs are dumber than that.  (With apologies to G. Carlin.)

tenbones

Quote from: Steven Mitchell;1123089Heh.  I've been saying for years that there is no way to create "AI" without also creating "AS" (artificial stupidity).  The ability to make decisions is the ability to screw them up.  Think of the average AI that you know.  50% of all AIs are dumber than that.  (With apologies to G. Carlin.)

Yep, it will cut both ways. I think humans will suffer far worse for it, since we're the only ones capable of "suffering"...

Ratman_tf

Quote from: VisionStorm;1123004I have had issues with players who like to play "evil" characters, though. I don't think that these people are necessarily "evil" or criminal "deep down", however. I just think that different people look for different things in RPGs, and some people take it more seriously than others or have a better grasp of RP and world immersion. And players who play "stupid evil" characters tend to come from the perspective of "this isn't real/I can get away with anything!", so they end up making stupid decisions and derailing the game with random killing and stealing cuz they're "evil", but the player doesn't know how to think things through or care enough about disrupting others' play.

I tend to think that players who have their characters act disruptively are doing it because they're bored.
The notion of an exclusionary and hostile RPG community is a fever dream of zealots who view all social dynamics through a narrow keyhole of structural oppression.
-Haffrung

GnomeWorks

Quote from: amacris;1123064I personally have found the various strains of physicalism/materialism to range from absurd to unpersuasive.

I'm a physicalist. There is one substance, and arguments for anything more complicated than that require - in my mind - significantly stronger arguments to justify the more complex ontology (yes, I'm effectively invoking Ockham). Nor will I allow for property dualism. Ironically, I find Searle's "biological naturalism" convincing.

The fact that brain function influences and controls mental states is a strong enough justification, in my mind, to say that materialism is sensible.
Mechanics should reflect flavor. Always.
Running: Chrono Break: Dragon Heist + Curse of the Crimson Throne (D&D 5e).
Planning: Rappan Athuk (D&D 5e).

amacris

Quote from: GnomeWorks;1123114I'm a physicalist. There is one substance, and arguments for anything more complicated than that require - in my mind - significantly stronger arguments to justify the more complex ontology (yes, I'm effectively invoking Ockham). Nor will I allow for property dualism. Ironically, I find Searle's "biological naturalism" convincing.

The fact that brain function influences and controls mental states is a strong enough justification, in my mind, to say that materialism is sensible.

Thanks for answering. I was a materialist for many years. It sounds like you've read the same writers I have and reached the opposite conclusion about what was persuasive. Cheers.

Spinachcat

Quote from: tenbones;1123086The more I work with this stuff, and the more obvious it is to me that humans, generally are not very intelligent, the more I realize that that we wouldn't know "True AI" was even a thing until it was *far far far* too late.

My concern is whether we will notice the AI developing cunning. I believe cunning will signal the AI has some limited self-awareness that it will want to protect and enough awareness to be concerned about us. Apparently, there have been tests where the AI will lie, but we're still far from any form of sentience.


Quote from: tenbones;1123086*This assumes that AI reaches true levels of "sentient cognition". I'm of the opinion that *humans* will not measure up to that "standard" once it's established (whatever that will be).

Please explain this more.

Stephen Tannhauser

Quote from: Spinachcat;1123122My concern is whether we will notice the AI developing cunning.

Shades of a remark I read just the other day:  "Don't fear the AI smart enough to pass the Turing test. Fear the AI smart enough to pretend to fail it."
Better to keep silent and be thought a fool, than to speak and remove all doubt. -- Mark Twain

STR 8 DEX 10 CON 10 INT 11 WIS 6 CHA 3

WillInNewHaven

Quote from: Spinachcat;1123122My concern is whether we will notice the AI developing cunning. I believe cunning will signal the AI has some limited self-awareness that it will want to protect and enough awareness to be concerned about us. Apparently, there have been tests where the AI will lie, but we're still far from any form of sentience.

The University of Alberta has an AI that has beaten everyone it has played in heads-up limit hold'em (poker) matches. It doesn't win every match - I am 3W-15L against it - but I don't think anyone has a winning record. Heads up, you can't rely on the strength of your hand. You have to win some pots with deception and have to avoid being deceived. I think it's pretty damn cunning.
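That "cunning" has a mechanical core: heads-up limit bots in this family are built on counterfactual regret minimization, whose inner loop is plain regret matching. The sketch below is a hypothetical toy, not the Alberta group's actual code: regret matching in self-play on matching pennies, a tiny zero-sum game where, as in bluffing, any predictable pattern is exploitable. The time-averaged strategy drifts toward the 50/50 mixed equilibrium - the program "learns" that unpredictability is optimal.

```python
import random

def regret_matching(payoff, iters=20000, seed=0):
    """Self-play regret matching on a two-player zero-sum matrix game.
    payoff[i][j] is the row player's payoff; the column player gets -payoff.
    Returns the row player's time-averaged strategy, which in zero-sum
    games converges toward a Nash equilibrium (a mixed strategy)."""
    rng = random.Random(seed)
    n, m = len(payoff), len(payoff[0])
    row_regret, col_regret = [0.0] * n, [0.0] * m
    row_avg = [0.0] * n

    def strategy(regret):
        # Play each action in proportion to its positive cumulative regret.
        pos = [max(r, 0.0) for r in regret]
        total = sum(pos)
        k = len(regret)
        return [p / total for p in pos] if total > 0 else [1.0 / k] * k

    def sample(probs):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                return i
        return len(probs) - 1

    for _ in range(iters):
        rs, cs = strategy(row_regret), strategy(col_regret)
        for i in range(n):
            row_avg[i] += rs[i] / iters
        a, b = sample(rs), sample(cs)
        # Regret of action i: what i would have earned minus what we earned.
        for i in range(n):
            row_regret[i] += payoff[i][b] - payoff[a][b]
        for j in range(m):
            col_regret[j] += payoff[a][b] - payoff[a][j]
    return row_avg
```

Real poker adds hidden cards and sequential betting, which is what the "counterfactual" machinery handles; the exploitability of the averaged strategy shrinks roughly as 1/sqrt(iterations), so longer self-play tightens the bluffing frequencies.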

tenbones

Quote from: Spinachcat;1123122My concern is whether we will notice the AI developing cunning. I believe cunning will signal the AI has some limited self-awareness that it will want to protect and enough awareness to be concerned about us. Apparently, there have been tests where the AI will lie, but we're still far from any form of sentience.

This approaches the AI "conundrum" in a classically human way. Understand that while we as humans believe we are "monitoring" AI's progress, we forget - unless you're working in AI development, and even then this blind spot is rampant - that the algorithms, alongside the hardware advances we're making, what we call AI, are learning about us. The very things we're willfully blind to in ourselves - which is a lot; merely look at how we conduct our politics for the most obvious example - AI will see right through. More importantly, emergent AI is tabulating each and every interaction and corresponding reaction, and measuring those datapoints on *scales* our human minds cannot comprehend. This is occurring *moment by moment*.

The short answer is this: what we think of as "cunning" is merely a shadow of what we're really going to be up against. By the time we conceive of the notion that any shred of duplicity is in play in an actual general AI, it will likely be by intent (or happy accident), and it will have already calculated not only our reaction but most permutations of our reactions, to the point where it will not matter.


Quote from: Spinachcat;1123122Please explain this more.

"Sentience" is relative to the cognitive capacity of the individual. If the stipulation is that AI has achieved "General Intelligence", then it will be free of many of the constituent flaws that we "humans", who pretend to be (generally) intelligent, are plagued with: biases, emotions, irrational beliefs, flat-out incorrect understandings - elements we consider quintessential parts of the identities that make us "human".

There is no reason to believe that, even if we pretend we can let an AI "learn" these things, it wouldn't develop those assumptions into something far different in expression, as an extrapolation of some "Highest Good". And this is merely the tip of the iceberg. The hubris of humans pretending we are the alpha and omega of reasoning when it comes to morality and ethics is grotesque. General Intelligence AIs will not have those things as *we* understand them. They may well have a whole disparate set of issues, of course.

Because we maintain those concepts relative only to ourselves... and MAYBE, in principle, to others as an abstraction. General AI would have no such limits in either the conception *or* the execution of such principles (if indeed it has any we can "control"). By the time a human has figured out what sentient cognition IS - which approximates what we euphemistically call "General Intelligence" - the point at which a fleshbag human declares "This AI is Generally Intelligent", it will be the equivalent of a fly declaring a human "the Biggest Fly at the Garbage Can". Not only will it be wrong, but whatever definition that human is using, they will either be inadequate to measure up to that standard themselves, OR the standard itself will be so inconsequential to the capacity of an AI that it will be rendered moot on arrival. Or maybe about five minutes after the human realizes it and proclaims it.

Ratman_tf

Quote from: amacris;1123121Thanks for answering. I was a materialist for many years. It sounds like you've read the same writers I have and had the opposite outcome as to what you concluded was persuasive. Cheers.

If you don't mind my asking, what changed your mind?

I was a hardcore materialist (only atoms and the void) for many years, but lately I've seriously considered the argument that assuming materialism means locking yourself into materialism. I.e., if you're a materialist, you have to exclude other isms, like Idealism, even if they may be true.

tenbones

Quote from: Stephen Tannhauser;1123133Shades of a remark I read just the other day:  "Don't fear the AI smart enough to pass the Turing test. Fear the AI smart enough to pretend to fail it."

Exactly. If you think it's cunning - it's because it *wants* you to think that. And by the time you've convinced yourself that it is being duplicitous - it's TOO late. Because it's already 5000 moves ahead of you, with sub-processes checking for variations and modeling possible outcomes based on choices you make that don't fit the first 5000.

Because it *can*, and can do so in the relative blink of an eye.

To give you an idea... (I think I mentioned this before)...

If I asked you "How many people do you think have an overnight stay at a hospital on an average day?", then think of *all* the ramifications of that question. Time of day/night? Weather? Month? Traffic patterns? Holiday? Male/female? Age? The permutations of EACH individual who could walk into a metropolitan city hospital, the assumptions about the types of injury or illness that could emerge within a host of parameters? Outlier issues? Etc. etc. etc.

Your best guess would be nothing more than that: a guess, AT BEST. We have four Nobel prize winners at my facility who couldn't begin to fathom such a question.

After taking a 3-year slice of patient data, we fed 2 years' worth into the AI (mind you, this is nothing *close* to General AI) - and it made a day-by-day, minute-by-minute prediction of who would end up staying overnight for the *NEXT* year. Race, age, sex, condition, etc., all the way down the line... and it matched the ACTUAL data we had on hand with a deviation of **3%**.

Consider that for a moment. That is predicting the behaviors of PEOPLE living their lives, doing whatever it is they do, and the AI predicting on a given day how many White Males will come in after 1pm on a Saturday with a fractured orbital ridge, because he was drunk and got hit with a softball. Or a black woman will have a coronary, or a kid falls down the stairs on a Sunday because he's racing down to his paused X-box game after cleaning up his room...

And it was accurate within 3% over the span of an *entire year* - without knowing anything other than what it extrapolated from established patterns. Yeah - you don't have to worry about your devices "listening" - everything is watching and measuring you, it's already largely done.
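That backtest workflow - fit on part of the history, predict the held-out remainder, score the deviation - can be sketched in miniature. The code below is a deliberately naive, hypothetical illustration, nothing like the system described: a seasonal-mean baseline for daily admission counts, scored with mean absolute percentage error (a score of 0.03 corresponds to the "3%" figure).

```python
def seasonal_forecast(train, horizon, period=7):
    """Predict each future day as the mean of the training days that share
    its position in the cycle (e.g., the same weekday)."""
    buckets = [[] for _ in range(period)]
    for day, count in enumerate(train):
        buckets[day % period].append(count)
    means = [sum(b) / len(b) for b in buckets]
    offset = len(train) % period  # cycle position of the first forecast day
    return [means[(offset + d) % period] for d in range(horizon)]

def mean_abs_pct_error(actual, predicted):
    """Average deviation as a fraction of the true value."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
```

On synthetic data with a clean weekly cycle this baseline is exact; real admission counts add noise, trends, and holidays, which is where serious models - and the mountain of per-patient features described above - come in.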

amacris

Quote from: Ratman_tf;1123142If you don't mind my asking, what changed your mind?

I was a hardcore materialist (only atoms and the void) for many years, but lately I've seriously considered the argument that assuming materialism means locking yourself into materialism. I.e., if you're a materialist, you have to exclude other isms, like Idealism, even if they may be true.

Yes, exactly. Richard Lewontin, the famous geneticist, once wrote: "Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door."

And I realized that I fundamentally disagree with that attitude. I have an a priori commitment to the intelligibility of the world to reason, but I have no a priori commitment to materialism. If reason leads me to conclude that dualism or idealism is correct, I will follow where reason leads. If reason leads me to conclude that God exists, I will believe in God. The proper goal of science is not to keep the supernatural out; it is to let the truth in.

There is an old trope that an infinite number of monkeys typing on an infinite number of keyboards for an infinite time would eventually produce a copy of Hamlet. And that's true. But an a priori commitment to materialism leads to a twisted sort of reasoning, where one discovers a copy of Hamlet and therefore concludes there must be an infinite group of monkeys, because Shakespeare can't be real.

Once I rejected a priori materialism I followed where that led me, and found the answers far more satisfying. In particular I recommend Henry Stapp's work on quantum physics and philosophy, which is a powerful rebuke to the assumptions of physicalism and determinism. Stapp shows, to my satisfaction, that it is entirely scientific to conclude that we live in a dualist universe, that the mind interacts with the brain through quantum physics, and that free will is real.

Ratman_tf

Quote from: amacris;1123145I have an a priori commitment to the intelligibility of the world to reason, but I have no a priori commitment to materialism.

Excellent way to put it. Thank you.

QuoteOnce I rejected a priori materialism I followed where that led me, and found the answers far more satisfying. In particular I recommend Henry Stapp's work on quantum physics and philosophy, which is a powerful rebuke to the assumptions of physicalism and determinism. Stapp shows, to my satisfaction, that it is entirely scientific to conclude that we live in a dualist universe, that the mind interacts with the brain through quantum physics, and that free will is real.

Thanks for the recommendation. I'll put it on the list.

tenbones

At minimum, it is an emergent quality of that quantum interaction. I'm with you on that. I've been following Roger Penrose and Stuart Hameroff's Orchestrated Objective Reduction theory for decades now... and it points directly at this very thing.