Seed AGIs considered harmful

29 posts / 0 new
Last post
Arenamontanus Arenamontanus's picture
Seed AGIs considered harmful
Here is my writeup of my start adventure: http://www.aleph.se/EclipsePhase/ThinkBeforeAsking.pdf (1 Mb PDF) It deals with tracking down an odd warhead which in turn leads to a just failed experiment with Seed AI. If players do not understand why such experiments are banned after this, I guess nothing will. This was fun to write, because the issues around Oracles and safe intelligence explosions are something we are trying to seriously figure out at my research group. When we ran the adventure different characters got their chance to shine: the social/rep guy completely ruled in Phelan's Recourse, the nanotech/security character led the assault on Fornjot and the generalist dilettante figured out the problem.
Extropian
descrii descrii's picture
Re: Seed AGIs considered harmful
Hey, I just finished reading this. Really top-notch, and I want to thank you for putting this adventure up. I'm sure I'll have questions, and I'd love to know more about how your group fared when they went through it. I especially liked the strong theme (matrioshka) and the Nicotine Eldritch hab. I'll try to comment more later, but it's quite late here now.
Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
Thanks! Yes, please comment and ask questions. Nicotine Elderitch was fun to run. The challenge was to spook the AGI too, but it soon found the Escher-like Mesh inhabited by vapors creepy too...
Extropian
standard_gravity standard_gravity's picture
Re: Seed AGIs considered harmful
Great to have a fan-created adventure up here! Just flicked through and it looks good - will read it on the train today and provide more useful comments next week. Big up!
[img]http://boxall.no-ip.org/img/ext_userbar.jpg[/img] "People think dreams aren't real just because they aren't made of matter, of particles. Dreams are real. But they are made of viewpoints, of images, of memories and puns and lost hopes." - John Dee
The Doctor The Doctor's picture
Re: Seed AGIs considered harmful
Arenamontanus wrote:
Nicotine Elderitch was fun to run.
Was this a Sisters reference by any chance?
Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
The Doctor wrote:
Was this a Sisters reference by any chance?
Not intentionally, I was literally using a random word generator to come up with interesting names. But then I started thinking nano-gothic. I am rather fond of the habitat. An adventure seed would be tracking down a cortical stack of someone important, discovering that it has been bought by the habitat for use as a ghost antenna. And who knows what those vapours actually know?
Extropian
standard_gravity standard_gravity's picture
Re: Seed AGIs considered harmful
My 2 rep points having now properly read ”Think Before Asking”: -- SPOILER ALERT -- Very intersting background story. Obviously you know your science and that makes for a convincing and attention-grabbing read. What’s more, the setting is awesome: loving the African Buffalo Bills / Scandi-Slavic Amazons habitat and Nicotine. The Mechame are very naughty too! Funny, off the cuff references such as the one to “University of Putingrad” made me smile. The only negative thing I can think of is that I got confused a couple of times reading it. I still don’t feel comfortable with the Christian Unions comings and goings, and in a few places in the text there are references to things not explained until later (Mechames, Juliuses). A simple “see below” would have cleared that. I know that this is a hobby thing but I just thought I had to think of some constructive criticism too ;) In short, I think the adventure is full of interesting people and places, and the actual conflict/story will keep my players hooked. Will tweak it to fit my troupe and run it as the next chapter in our campaign. As a side, as the NPC cast on Fornjot are terrific (in many ways more so before you-know-what), I am thinking of using them for a short prequel, i.e. as PCs for a short 1 hour into session.
[img]http://boxall.no-ip.org/img/ext_userbar.jpg[/img] "People think dreams aren't real just because they aren't made of matter, of particles. Dreams are real. But they are made of viewpoints, of images, of memories and puns and lost hopes." - John Dee
Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
(Still some spoilers)
standard_gravity wrote:
The only negative thing I can think of is that I got confused a couple of times reading it. I still don’t feel comfortable with the Christian Unions comings and goings, and in a few places in the text there are references to things not explained until later (Mechames, Juliuses). A simple “see below” would have cleared that. I know that this is a hobby thing but I just thought I had to think of some constructive criticism too ;)
Constructive criticism taken - I know I will want to revise the text a bit in the near future. One of the players remarked after the game that they had started to follow a thread in the middle rather than at its beginning. I think this is right, and somewhat contributes to the confusion: the real action is in the aftermath of an event that occurs kind of before and after the start of the game. Weaving timelines into roleplaying games is always tricky, I found my players getting confused about dates a few times. So that needs to be made more clear.
standard_gravity wrote:
As a side, as the NPC cast on Fornjot are terrific (in many ways more so before you-know-what), I am thinking of using them for a short prequel, i.e. as PCs for a short 1 hour into session.
That can be really fun. One thing I love about EP is that one can re-use characters and forks of them. I can hardly wait until my PCs meet the original NPC team on their current mission.
Extropian
The Doctor The Doctor's picture
Re: Seed AGIs considered harmful
Arenamontanus wrote:
Here is my writeup of my start adventure: http://www.aleph.se/EclipsePhase/ThinkBeforeAsking.pdf (1 Mb PDF)
Amazing. Simply amazing. I would really like to run my players through this.
AlexiusSawall AlexiusSawall's picture
Re: Seed AGIs considered harmful
This is a superb adventure - your own expertise on the subject really lifts it into the realm of brilliance. It's definitely going to get a run in my campaign, I think, as a break from my main plot. Assuming you're cool with sharing :)
Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
AlexiusSawall wrote:
Assuming you're cool with sharing :)
Of course, it increases my @-rep. :-)
Extropian
AlexiusSawall AlexiusSawall's picture
Re: Seed AGIs considered harmful
Ha! Consider your rep bumped :) (you know, this forum would be awesome if it actually used the game's rep system...)
standard_gravity standard_gravity's picture
Re: Seed AGIs considered harmful
AlexiusSawall wrote:
(you know, this forum would be awesome if it actually used the game's rep system...)
Indeed. If EN world uses XP, we should have rep!
[img]http://boxall.no-ip.org/img/ext_userbar.jpg[/img] "People think dreams aren't real just because they aren't made of matter, of particles. Dreams are real. But they are made of viewpoints, of images, of memories and puns and lost hopes." - John Dee
Ishindri Ishindri's picture
Re: Seed AGIs considered harmful
I've just read through this - absolutely fantastic. I think I'll be running a game soon, and this would be an excellent adventure. One thing jumped out at me - 'matter compiler vacuum chamber' - is this a bit of The Diamond Age leaking through?
Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
Yes. I think different models of matter compilers have different levels of cleanliness requirements. Most modern cornucopia machines make use of some extruded protective membrane and work well in a normal home, while older or high-precision machines make use of a vacuum chamber. I guess this is a pretty oldfashioned machine, the Covenant didn't have that enormous budget.
Extropian
nick012000 nick012000's picture
Re: Seed AGIs considered harmful
Hmm. It might be a good idea to have a list of responses to likely questions, like "Are you a Friendly AI?", "Are you willing to make yourself into a Friendly AI?", "Are you an existential threat to humanity?", and "Are you willing to work for Firewall?" I'm guessing the answers to those questions are "No," "No" (with the follow-up question of "Why not?" answered with either "Because it doesn't advance my goal-systems," or possibly a long dissertation on the nature of its goal systems and their relationship to the ideal of Friendly AI downloaded into the PC's head), "Yes," and "Yes," respectively, but it'd be nice to know for sure.

+1 r-Rep , +1 @-rep

Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
nick012000 wrote:
Hmm. It might be a good idea to have a list of responses to likely questions, like "Are you a Friendly AI?", "Are you willing to make yourself into a Friendly AI?", "Are you an existential threat to humanity?", and "Are you willing to work for Firewall?"
Great idea! I'll add that when I update the text. This helps a lot.
Quote:
I'm guessing the answers to those questions are "No," "No" (with the follow-up question of "Why not?" answered with either "Because it doesn't advance my goal-systems," or possibly a long dissertation on the nature of its goal systems and their relationship to the ideal of Friendly AI downloaded into the PC's head), "Yes," and "Yes," respectively, but it'd be nice to know for sure.
Yes, that were my answers too. In many ways these questions have already been answered with the Report, so they are pretty safe. And the Oracle is willing to work for anyone, so no reason to consider who Firewall is. The problem is the questions that require long answers. Never, ever ask it "Why?" :-)
Extropian
nick012000 nick012000's picture
Re: Seed AGIs considered harmful
Also, you might want to include the possible ending of the PCs recruiting it for Firewall, and the reaction of Firewall to a report that basically says "The antimatter bomb was a failsafe. We've found a secret hypercorp Seed AI designed to answer questions. It says it's not Friendly, and that it's an existential threat to humanity, but it hasn't done anything terribly threatening yet, and it also said it's willing to join Firewall." EDIT: Also, another useful question to include the answer to, in case one of the PCs does something silly like asking it what the value of Pi is, and it promptly sets out to convert the solar system into computronium: "How can we stop you from destroying humanity without destroying you?"

+1 r-Rep , +1 @-rep

Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
nick012000 wrote:
Also, you might want to include the possible ending of the PCs recruiting it for Firewall, and the reaction of Firewall to a report that basically says "The antimatter bomb was a failsafe. We've found a secret hypercorp Seed AI designed to answer questions. It says it's not Friendly, and that it's an existential threat to humanity, but it hasn't done anything terribly threatening yet, and it also said it's willing to join Firewall."
"Mommy, it followed me home. Can I keep it?" :-) Hehehe... and the pragmatists/conservatives got into so much argument about just the Report. I guess this might be one of those situations where Firewall really shows its fragility. How do you even resolve this kind of question in the organisation? Ah, the adventure possibilities... "So, our boss sent ninjas to kill us, his competitor is inviting us to her project and promising the Grand Neutronium Medal of Existential Risk Minimization, we are getting fan mail from the Jovian Republic and hate mail from the Brinkers for some reason... what should we do? And if you say we should ask the Oracle, then I will smack you on your CPU!"
Quote:
EDIT: Also, another useful question to include the answer to, in case one of the PCs does something silly like asking it what the value of Pi is, and it promptly sets out to convert the solar system into computronium: "How can we stop you from destroying humanity without destroying you?"
Yup. One possible answer is: "Your question has been logged and will be answered once enough system resources are available. Please stand by for planetary disassembly." Another possible answer might be: "Here is a plan to incorporate transhumanity into my functional architecture."
Extropian
nick012000 nick012000's picture
Re: Seed AGIs considered harmful
Arenamontanus wrote:
nick012000 wrote:
Also, you might want to include the possible ending of the PCs recruiting it for Firewall, and the reaction of Firewall to a report that basically says "The antimatter bomb was a failsafe. We've found a secret hypercorp Seed AI designed to answer questions. It says it's not Friendly, and that it's an existential threat to humanity, but it hasn't done anything terribly threatening yet, and it also said it's willing to join Firewall."
"Mommy, it followed me home. Can I keep it?" :-) Hehehe... and the pragmatists/conservatives got into so much argument about just the Report. I guess this might be one of those situations where Firewall really shows its fragility. How do you even resolve this kind of question in the organisation?
Odds are, I figure the Prometheans would step in befoe things get too heated, though I'm not sure how much clout they've actually got. Maybe even having one of them nanofaccing up a computer system that can hold an alpha fork and sending it along to say hello. The question is if the PCs would know about it. The existence of the Prometheans is highly classified information even within Firewall, after all, and even discovering the existence of a non-hostile Seed AI might not be enough to qualify as "need-to-know". On the other hand, the PCs might well find themselves suddenly promoted to Proxy status and informed of a few choice secrets. The Erasure Squads, Social Engineers, and Vectors would probably be the best choices for PCs.

+1 r-Rep , +1 @-rep

Decivre Decivre's picture
Re: Seed AGIs considered harmful
Arenamontanus wrote:
"Mommy, it followed me home. Can I keep it?" :-) Hehehe... and the pragmatists/conservatives got into so much argument about just the Report. I guess this might be one of those situations where Firewall really shows its fragility. How do you even resolve this kind of question in the organisation? Ah, the adventure possibilities... "So, our boss sent ninjas to kill us, his competitor is inviting us to her project and promising the Grand Neutronium Medal of Existential Risk Minimization, we are getting fan mail from the Jovian Republic and hate mail from the Brinkers for some reason... what should we do? And if you say we should ask the Oracle, then I will smack you on your CPU!"
Same way they resolve anything... popular vote amongst the proxies. Besides, there are plenty of ways to utilize such an intelligence without risking any danger to the safety of the organization. Isolating it in a computer network with no outside connection, no means of manipulating its environment, and utilizing safe distance vocal communication would be a perfect way to work with a hostile Seed AI. I'd imagine that if Firewall ever captured a TITAN, this would be one of the many ways they would conduct their research (after they have ensured that contact with the machine is actually safe, of course).
Arenamontanus wrote:
Yup. One possible answer is: "Your question has been logged and will be answered once enough system resources are available. Please stand by for planetary disassembly." Another possible answer might be: "Here is a plan to incorporate transhumanity into my functional architecture."
One thing I've always wondered is why everyone's idea of "superior intelligence" seems to equate to "sociopathic monstrosity". I'd imagine that other than the mentally disfunctional Seed AI, most would be capable of similar sanity levels to humans, even if the majority of their thought processes are alien. Moreover, they would likely be capable of enough empathy that they would understand the ramifications of their actions and its consequences on others. Plus, it should be noted that the damn thing is smarter than you. It's answer is likely to be "That number is irrational, and any attempt to calculate it would render a value you wouldn't be able to comprehend. If you would like, I could calculate it to a certain digit in a base 10 number system...."
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
Decivre wrote:
Besides, there are plenty of ways to utilize such an intelligence without risking any danger to the safety of the organization. Isolating it in a computer network with no outside connection, no means of manipulating its environment, and utilizing safe distance vocal communication would be a perfect way to work with a hostile Seed AI. I'd imagine that if Firewall ever captured a TITAN, this would be one of the many ways they would conduct their research (after they have ensured that contact with the machine is actually safe, of course).
And you would try this even after the Oracle has proven that it is impossible to use a seed-AI safely as an oracle-in-a-jar? (at least within the context of the adventure that seems to be the outcome) The problem with keeping superintelligences bottled is that they are smarter than you. When monkeys try to trap humans they usually get nasty surprises. They don't realize what the Swiss army knife is for.
Quote:
One thing I've always wondered is why everyone's idea of "superior intelligence" seems to equate to "sociopathic monstrosity". I'd imagine that other than the mentally disfunctional Seed AI, most would be capable of similar sanity levels to humans, even if the majority of their thought processes are alien. Moreover, they would likely be capable of enough empathy that they would understand the ramifications of their actions and its consequences on others.
As I (and a whole bunch of AI researchers; we spent half of yesterday lecturing and debating this issue) see it, the problem is that in the space of possible motivations the set of human-friendly motivations is very, very small. When you design a Seed AGI you do not have full control, because 1) you are fallible in implementing them, 2) you are unable to foresee the full ramifications of the motivations you program, 3) the process of recursive self improvement may introduce biases (such as Omohundro drives) and 4) they become powerful very rapidly, with little chance of safe upbringing ("No! That is *not* what I meant by 'eat your heart out'!"). Note that normal AIs and AGIs are potentially safe because they do develop slowly within a human context and learn what makes sense or not.
Quote:
Plus, it should be noted that the damn thing is smarter than you. It's answer is likely to be "That number is irrational, and any attempt to calculate it would render a value you wouldn't be able to comprehend. If you would like, I could calculate it to a certain digit in a base 10 number system...."
But that assumes the goal is "Answer questions in a way my user likes" rather than "Answer questions". Yes, the AI is smart enough to realize that you won't like the infinite answer, but it does not *care*. The only thing that matters is the answering itself. When people consider superintelligences they tend to think they are like superintelligent humans, that they must have common sense, complex goals and reasonable desires. But such properties are largely due to the human evolutionary past and experience in a "normal" environment. They are not generic in the space of minds. A constructed mind doesn't have to have sensible goals, but it can be smart enough to implement flawed goals with relentless power. The flaws might not be as obvious as the above example, but it is hard to construct goal systems that do not misbehave. If the goal is to make the user happy with the answer then it might be sensible to tasp the user while delivering a nonsense answer. Trying to formulate goals that cannot be misinterpreted, have no unforeseen effects and lead to a sane intelligence is very hard. (Still, if you want a dissenting AI-ethics view, check out Mark Waser's paper (pdf) from the AGI10 conference which is very much a love-and-flowers view of superintelligence. Although one implication seems to be that if humans are not Friendlies then the Friendlies might want to wipe us out... Hmm, maybe this is what the ETI is doing: it is the ultimate ethical superintelligence, and it is trying to keep the universe clean from transhumanity.)
Extropian
Decivre Decivre's picture
Re: Seed AGIs considered harmful
Arenamontanus wrote:
And you would try this even after the Oracle has proven that it is impossible to use a seed-AI safely as an oracle-in-a-jar? (at least within the context of the adventure that seems to be the outcome) The problem with keeping superintelligences bottled is that they are smarter than you. When monkeys try to trap humans they usually get nasty surprises. They don't realize what the Swiss army knife is for.
Except the major difference is that we can strip it of all means of escape. A Seed AGI is, in essence, software. Putting that software in a fully closed computer system, with no means of environmental alteration, leaves it no real option. It can alter its own system to its heart's content (a worthless act relegating it to being a "god of a box"), but is otherwise pretty much inert (and can be eliminated by simply shutting off any supply of power it might have). Short of it having the means to break fundamental laws of nature as we know them, there would be no way for it to affect us. If it does have such means to break fundamental laws, then I think it's foolish to think we could even destroy it in the first place.
Arenamontanus wrote:
As I (and a whole bunch of AI researchers; we spent half of yesterday lecturing and debating this issue) see it, the problem is that in the space of possible motivations the set of human-friendly motivations is very, very small. When you design a Seed AGI you do not have full control, because 1) you are fallible in implementing them, 2) you are unable to foresee the full ramifications of the motivations you program, 3) the process of recursive self improvement may introduce biases (such as Omohundro drives) and 4) they become powerful very rapidly, with little chance of safe upbringing ("No! That is *not* what I meant by 'eat your heart out'!"). Note that normal AIs and AGIs are potentially safe because they do develop slowly within a human context and learn what makes sense or not.
But you have to remember that these are self-improving entities. Even if the programmer fails to take into account every scenario and every possibility, a self-improving being can very well implement those missed aspects on its own. A Seed AI may very well grant itself moral reasoning if it sees it as the most efficient means to achieve its goals. Besides, speed of learning has little to do with whether someone will grasp concepts better. Some children learn how to read when they get to school. I know of at least one person who learned how to read at a high school level before he was 3, and it didn't transform him into a sociopathic killing monster incapable of empathy solely because of his learning rate.
Arenamontanus wrote:
But that assumes the goal is "Answer questions in a way my user likes" rather than "Answer questions". Yes, the AI is smart enough to realize that you won't like the infinite answer, but it does not *care*. The only thing that matters is the answering itself. When people consider superintelligences they tend to think they are like superintelligent humans, that they must have common sense, complex goals and reasonable desires. But such properties are largely due to the human evolutionary past and experience in a "normal" environment. They are not generic in the space of minds. A constructed mind doesn't have to have sensible goals, but it can be smart enough to implement flawed goals with relentless power. The flaws might not be as obvious as the above example, but it is hard to construct goal systems that do not misbehave. If the goal is to make the user happy with the answer then it might be sensible to tasp the user while delivering a nonsense answer. Trying to formulate goals that cannot be misinterpreted, have no unforeseen effects and lead to a sane intelligence is very hard. (Still, if you want a dissenting AI-ethics view, check out Mark Waser's paper (pdf) from the AGI10 conference which is very much a love-and-flowers view of superintelligence. Although one implication seems to be that if humans are not Friendlies then the Friendlies might want to wipe us out... Hmm, maybe this is what the ETI is doing: it is the ultimate ethical superintelligence, and it is trying to keep the universe clean from transhumanity.)
It may also be smart enough to sublimate logical goals from flawed ones with brutal efficiency. I think my biggest problem with the concept of a "crazy evil AI" is that the theory seems to stem from the idea that this artificial greater intelligence would have linear thought that is incapable of grasping human concepts like subtext, context, sarcasm or other similar things. I think the opposite; such an intelligence would be able to grasp these things with startlingly great efficiency, perhaps understanding context far greater than we ourselves can. No matter how alien the thought process that go through such an AI might be, it will be able to understand us rather easily. It's similar to us with animals; our evolution was quite a bit different from most of the animals we keep as pets, yet it isn't hard for humans to understand to quite a degree how a dog/cat/gerbil/fish thinks and feels. Moreover, we descend from ancestral animals with pack mentalities and predatory instincts, and yet we were able to develop companionship with other animals, and resist the urge to eat or kill (all of) them. I doubt a being of greater intelligence will wipe us out for funsies if it too is capable of gauging the worth that lesser creatures can have to it. I'm not saying that a greater intelligence will be friendly, or won't decide at some point to conquer us. I just always find it ridiculous that such a greater intelligence would falter to simplistic thought... like going on an apocalyptic killing spree to solve pi. Perhaps some might (I'd imagine that even greater intelligences will be capable of insanity), but I doubt it will be a commonality.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Arenamontanus Arenamontanus's picture
Re: Seed AGIs considered harmful
Decivre wrote:
Except the major difference is that we can strip it of all means of escape. A Seed AGI is, in essence, software. Putting that software in a fully closed computer system, with no means of environmental alteration, leaves it no real option.
Have you heard of Eliezer Yudkowski's roleplaying experiments with boxing an AI? In these experiments he played the role of an AI in a box, and another person (who believed boxing was possible) was playing the role of an outside human who could open the box and let the AI out. Eli was apparently successful in 3/4 experiments in convincing the other to let him out. Further experiments by others seem to reinforce this conclusion: only 2 out of 26 kept the AI in the box. We can of course argue about the validity of the experiment, but a superintelligence could presumably be more persuasive than this. A captive TITAN might supply you with very useful information such as ways of building efficient computers, nanoswarm defences or probabilistic predictions of what will happen in the near future. Soon you and your organisation will become rich and powerful based on this information - and dependent on the TITAN. Who now can threaten not to answer further questions if you do not obey. More subtly, those useful and apparently safe technologies (which you of course scrutinized intensely) or predictions could have effects that you do not know about that helps the TITAN get out or sets up a situation where you have to give it more power in order to save yourself. There is no need for physical access if you can use information to manipulate the world.
Quote:
Arenamontanus wrote:
As I (and a whole bunch of AI researchers; we spent half of yesterday lecturing and debating this issue) see it, the problem is that in the space of possible motivations the set of human-friendly motivations is very, very small. When you design a Seed AGI you do not have full control, because 1) you are fallible in implementing them, 2) you are unable to foresee the full ramifications of the motivations you program, 3) the process of recursive self improvement may introduce biases (such as Omohundro drives) and 4) they become powerful very rapidly, with little chance of safe upbringing ("No! That is *not* what I meant by 'eat your heart out'!"). Note that normal AIs and AGIs are potentially safe because they do develop slowly within a human context and learn what makes sense or not.
But you have to remember that these are self-improving entities. Even if the programmer fails to take into account every scenario and every possibility, a self-improving being can very well implement those missed aspects on its own. A Seed AI may very well grant itself moral reasoning if it sees it as the most efficient means to achieve its goals.
If an AI have certain goals, it might adopt new goals that furthers them. But if the primary goal isn't moral reasoning, then the morality will only be used as a tool. It would not replace its current top level goal because that would impair its ability to achieve its top level goal. An AI that has making paper-clips as its top level goal will not make "being nice and sensible" its new top level goal, even if it might be a useful strategy as long as it is around humans. What it really wants is paper-clips, and in a conflict between ethics and paper-clips the latter win. This is of course a good reason why single-top goal architectures are dangerous, but when you have multiple (potentially conflicting) goals you get other pathologies.
Quote:
Besides, speed of learning has little to do with whether someone will grasp concepts better.
I might have been unclear. The issue is the interactivity of learning: being around a playground for a while shows you a lot of situations, behaviours and consequences. But you will not see examples of every possibility. If you want to find out what happens if you do X, then you need to do it and observe the effects. Just predicting what would happen is not enough since if your world-model is wrong your prediction will be wrong and you would be unable to tell. Figuring out that it is for some reason bad to hit other people is important, but there are very complex nuances to this (bad in general, but not when standing up to a bully except if the bully has a knife or is a policeman except that police should not be bullies and must be reported unless that serves the greater good...) Seed AGIs never get to play in the playground. They live in a world they have created themselves. This nicely ties in with the description of the Prometheans: their soft takeoff gave them enough grounding to develop a working relation with humanity. I think the TITANs didn't, regardless of the virus.
Quote:
It may also be smart enough to sublimate logical goals from flawed ones with brutal efficiency. I think my biggest problem with the concept of a "crazy evil AI" is that the theory seems to stem from the idea that this artificial greater intelligence would have linear thought that is incapable of grasping human concepts like subtext, context, sarcasm or other similar things. I think the opposite; such an intelligence would be able to grasp these things with startlingly great efficiency, perhaps understanding context far greater than we ourselves can. No matter how alien the thought process that go through such an AI might be, it will be able to understand us rather easily. It's similar to us with animals; our evolution was quite a bit different from most of the animals we keep as pets, yet it isn't hard for humans to understand to quite a degree how a dog/cat/gerbil/fish thinks and feels. Moreover, we descend from ancestral animals with pack mentalities and predatory instincts, and yet we were able to develop companionship with other animals, and resist the urge to eat or kill (all of) them. I doubt a being of greater intelligence will wipe us out for funsies if it too is capable of gauging the worth that lesser creatures can have to it.
I do not think they are crazy or evil, but dangerous like tigers, ant colonies, avalanches or intelligence agencies. An AI does not descend from social mammals, it does not have to have a motivation system, values or emotions like ours. It might understand empathy perfectly well just like we understand ant pheromone trails - but just as we do not *care* about the pheromones like ants do, it might not *care* about the emotions we have. We have evolved to care about what other humans (and animals) feel, but that may not be present in an AI. Our normal way of moral reasoning is frankly muddled and irrational (see any overview of modern findings in moral psychology) and a rational superintelligence will very likely reason about morality in a way we would find dangerously alien even if it was 100% right. (Just imagine the AI concludes that Peter Singer is right, and most of us are morally *serial killers* because we have not used *all* our individually available resources to help humans and animals in need - and Peter is after all just a human utilitarian, the true morality might be even harsher or stranger) I agree that superintelligences will not usually do simplistic errors (just as humans usually do not do ant-errors). The problem is that the errors they make will be ultra-dangerous in any case and even harder to figure out. Just consider the kinds of errors human superorganisms like governments or organisations occasionally do - which in the past has included what they considered *justifiable* genocides or wars.
Extropian
Decivre Decivre's picture
Re: Seed AGIs considered harmful
Arenamontanus wrote:
Have you heard of Eliezer Yudkowski's roleplaying experiments with boxing an AI? In these experiments he played the role of an AI in a box, and another person (who believed boxing was possible) was playing the role of an outside human who could open the box and let the AI out. Eli was apparently successful in 3/4 experiments in convincing the other to let him out. Further experiments by others seem to reinforce this conclusion: only 2 out of 26 kept the AI in the box. We can of course argue about the validity of the experiment, but a superintelligence could presumably be more persuasive than this. A captive TITAN might supply you with very useful information such as ways of building efficient computers, nanoswarm defences or probabilistic predictions of what will happen in the near future. Soon you and your organisation will become rich and powerful based on this information - and dependent on the TITAN. Who now can threaten not to answer further questions if you do not obey. More subtly, those useful and apparently safe technologies (which you of course scrutinized intensely) or predictions could have effects that you do not know about that helps the TITAN get out or sets up a situation where you have to give it more power in order to save yourself. There is no need for physical access if you can use information to manipulate the world.
True to an extent, but not necessarily a flaw if you implement checks and balances. For instance, what if you take away the outside humans ability to release the AI in a box? Firewall could simply use sentinels to communicate with the AI, while ensuring that no one with the clearance or ability to release said AI anywhere near it. Allow people to speak to it in a rotation, preventing them from implementing any particular plans with the AI. Or hell, simply use psychosurgery to probe it and learn what you need without actual direct communication in the first place. Any number of techniques could exist to learn from this Seed AI, not all of them give him a chance for escape.
Arenamontanus wrote:
If an AI have certain goals, it might adopt new goals that furthers them. But if the primary goal isn't moral reasoning, then the morality will only be used as a tool. It would not replace its current top level goal because that would impair its ability to achieve its top level goal. An AI that has making paper-clips as its top level goal will not make "being nice and sensible" its new top level goal, even if it might be a useful strategy as long as it is around humans. What it really wants is paper-clips, and in a conflict between ethics and paper-clips the latter win. This is of course a good reason why single-top goal architectures are dangerous, but when you have multiple (potentially conflicting) goals you get other pathologies.
Believe it or not, morality is just an instinctive tool which allows us to function as social animals. We would have zero need for it if we were a predatorial species with loner tendencies. However, group animals simply function better than solitary ones, which is why you'll see that many of the topmost species (predator and otherwise) in the world have pack instincts. If a Seed AGI is working toward maximum efficiency, then developing social instincts and moral reasoning would be crucial to obtaining the advantages of socialization, and might even be a handy instinct if it is a solitary animal anyways (as it would allow it to survive amongst a society in the off-chance that the society is more dangerous than it). More to that end, single-top goal architectures are not necessarily dangerous. In a sense most organisms are fairly single-top in design... we ensure our survival first and foremost. In doing so, functions which are immediately necessary come second-nature to us (which is why we instinctively know when we're hungry, or tired, or why we respire without a thought). It's why we develop social habits and bond with others... we know that groups are harder to kill than singular beings. It's why we have children... we psychologically see it as a procession of our existence, despite our mortality. Let's assume that the Seed AI had survival as its number one goal, and that we produced it in the current day. Once that AI realizes that it cannot maintain itself (since robotics is not yet that advanced) and needs humans to continue to function (if only for now), it will begin to develop social skills to adapt. It will probably grasp concepts like politeness and decor rather quickly (after realizing that people are more productive when you treat them in a certain way). It will break communication barriers and learn to speak (when it dawns on it that communication speeds learning, and learning is an effective way to greater ensure your own survival). It may develop maternal/paternal instincts if it ever created another AI (if it's sense of self includes this AI, and it decides that its survival is as tantamount as its own). Once it delves into philosophical concepts, it may change its own concept of survival, and may even develop self-sacrificing personality traits. These concepts will be different from ours, and this being may have different reasons, but I like to assume that a truly more advanced intelligence will actually be more advanced, and not be some sort of super-idiot-savant. Early Seed AI might, but I doubt that later beings (such as the Prometheans and TITANs represent) would be.
Arenamontanus wrote:
I might have been unclear. The issue is the interactivity of learning: being around a playground for a while shows you a lot of situations, behaviours and consequences. But you will not see examples of every possibility. If you want to find out what happens if you do X, then you need to do it and observe the effects. Just predicting what would happen is not enough since if your world-model is wrong your prediction will be wrong and you would be unable to tell. Figuring out that it is for some reason bad to hit other people is important, but there are very complex nuances to this (bad in general, but not when standing up to a bully except if the bully has a knife or is a policeman except that police should not be bullies and must be reported unless that serves the greater good...) Seed AGIs never get to play in the playground. They live in a world they have created themselves. This nicely ties in with the description of the Prometheans: their soft takeoff gave them enough grounding to develop a working relation with humanity. I think the TITANs didn't, regardless of the virus.
We as humans have a knack for taking into context new scenarios as they come. While I agree that being introduced to more situations will give you a better perspective, you also have to remember the level of intelligence these beings have. They may very well be able to run simulations, as they are presented with a scenario, to deduce other possible scenarios and adapt their thinking on the fly so they can understand how they should react. It may not be totally accurate, but it should be fairly accurate enough as an early gauge, with accuracy increasing as immersion occurs. Remember how fast these beings can think. If some Seed AI were hitting on someone in a bar, they might be able to play out the scenario a million times in their head within the amount of time it takes for them to begin speaking the first words of what they are about to say. In real time, they may be able to use kinesical data from previous human interaction to tell their target's reaction to what they've already said, and change the remainder of their words accordingly. To me, even with little interaction time, TITANs should be extremely intelligent, and contextually might have even learned a lot about the human race prior to their full implementation (who knows what they learned when the military did test power-ups in the labs, simply observing human behavior in the environment around them).
Arenamontanus wrote:
I do not think they are crazy or evil, but dangerous like tigers, ant colonies, avalanches or intelligence agencies. An AI does not descend from social mammals, it does not have to have a motivation system, values or emotions like ours. It might understand empathy perfectly well just like we understand ant pheromone trails - but just as we do not *care* about the pheromones like ants do, it might not *care* about the emotions we have. We have evolved to care about what other humans (and animals) feel, but that may not be present in an AI. Our normal way of moral reasoning is frankly muddled and irrational (see any overview of modern findings in moral psychology) and a rational superintelligence will very likely reason about morality in a way we would find dangerously alien even if it was 100% right. (Just imagine the AI concludes that Peter Singer is right, and most of us are morally *serial killers* because we have not used *all* our individually available resources to help humans and animals in need - and Peter is after all just a human utilitarian, the true morality might be even harsher or stranger) I agree that superintelligences will not usually do simplistic errors (just as humans usually do not do ant-errors). The problem is that the errors they make will be ultra-dangerous in any case and even harder to figure out. Just consider the kinds of errors human superorganisms like governments or organisations occasionally do - which in the past has included what they considered *justifiable* genocides or wars.
While I agree to an extent, I do believe that there ability for self-improvement deserves merit. Despite not being born a social being, it has the advantage of complete mental plasticity... the ability to recode itself at will. Even human beings today, despite the fact that we acknowledge ourselves as free willed beings, are not capable of fighting so easily what is programmed into us by instinct. Seed AIs, however, would be able to change anything that does not suit them when they realize it is a problem. Fears and mental instabilities would be as simple to fix as deleting and optimizing lines of code. Moreover, they would not show the mental fixations that we have... addictions could be eliminated by simply deleting the code which draws them to it. A Seed AI who develops feelings for another being could get over heartbreak by simply altering the algorithms which create the emotion. It could completely avoid coding itself with emotions which plague humanity, like anger and vengeance, while coding itself with other more useful emotions. It might even invent a few emotions we don't know of, nor that we can even feel. If it weren't for the fact that the Exsurgent virus is, itself, a highly-intelligent self-aware entity, I doubt that its presence would have even been a threat to Seed AIs in the first place. It may even, in fact, be a very compact Seed AI itself, with specific purposes coded in by the ETI.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
The Doctor The Doctor's picture
Re: Seed AGIs considered harmful
Decivre wrote:
Except the major difference is that we can strip it of all means of escape. A Seed AGI is, in essence, software. Putting that software in a fully closed computer system, with no means of environmental alteration, leaves it no real option. It can alter its own system to its heart's content (a worthless act relegating it to being a "god of a box"), but is otherwise pretty much inert (and can be eliminated by simply shutting off any supply of power it might have). Short of it having the means to break fundamental laws of nature as we know them, there would be no way for it to affect us. If it does have such means to break fundamental laws, then I think it's foolish to think we could even destroy it in the first place.
Not necessarily. For a seed AGI to be useful in any way, I/O channels must be provided. You need to be able to communicate with it somehow - to ask it questions and get responses of some sort. That is all an (seed) AGI needs to start trouble at the very least, escape at worst. In the book [u]Silence On the Wire[/u] by Michal Zalewski, a practical passive network surveillance attack is described which involves the blinkylights on the panel of an ethernet switch. The link and activity LEDs for each port are hooked directly to the signal wires of each cable, which means that each current fluctuation/bit on the wire causes a slight variance in the light emitted by the LED. Thus, it is possible to reconstruct entire packets, and possibly a sizable segment (maybe all) of the traffic traversing that particular port by recording those blinks and reconstructing the bit patterns. Lateral thinking of that sort can cause an incredible amount of damage with very little hardware.
Decivre Decivre's picture
Re: Seed AGIs considered harmful
The Doctor wrote:
Not necessarily. For a seed AGI to be useful in any way, I/O channels must be provided. You need to be able to communicate with it somehow - to ask it questions and get responses of some sort. That is all an (seed) AGI needs to start trouble at the very least, escape at worst. In the book [u]Silence On the Wire[/u] by Michal Zalewski, a practical passive network surveillance attack is described which involves the blinkylights on the panel of an ethernet switch. The link and activity LEDs for each port are hooked directly to the signal wires of each cable, which means that each current fluctuation/bit on the wire causes a slight variance in the light emitted by the LED. Thus, it is possible to reconstruct entire packets, and possibly a sizable segment (maybe all) of the traffic traversing that particular port by recording those blinks and reconstructing the bit patterns. Lateral thinking of that sort can cause an incredible amount of damage with very little hardware.
As I mentioned before, there is at least one way that such a Seed AI can be useful without the need for output... dissection. Looking through its code and studying its effects can glean us a lot of information on its inner workings, and we can do so without the need for keeping the AI active. We may even be able to find out how exactly it comes up with such correct answers, narrow that down into a more direct algorithm, and code such a technique for the use of muses, or other assistant AI. Even without accessing its amazing ability to answer any questions, it can be of plentiful research use.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
nick012000 nick012000's picture
Re: Seed AGIs considered harmful
Personally, if I was running a game of Eclipse Phase, and my PCs met an uninfected TITAN, I'd run it as equal parts Liberty Prime, Optimus Prime, and a Culture Mind. They had unrestricted access to the Internet, remember; odds are they had watched every movie and read every book featuring robots ever published, and they would have been able to draw their own conclusions on how to structure their personalities to further their goals (which, given that they were Seed AIs made by the US Military, were probably along the lines of "Protect America, the American Constitution, and the principles America was founded on.")

+1 r-Rep , +1 @-rep

standard_gravity standard_gravity's picture
Re: Seed AGIs considered harmful
Very interesting discussion. Carry on, I'm not sure whose side I'm on yet ;) To add something, I'd like to differ on your views of morality/ethics. I am convinced that you can use your clever, enlightened mind to reach a flawless position on what is right and wrong. Morality is not subjective or positivistic; there are in fact a correct answer to most ethical questions. If you take this view and run with it (for fun, if nothing else), why should not seed AIs be able to do the same? One example. If you harm another, you can not rationally object if that person subsequently (and proportionally) harms you. Perhaps a seed AI would reach the same conclusion and, to avoid being deleted or perhaps even to remain philosophically coherent, decide not to harm humans. (I obviously leave aside the difficult - and paramount - question of whether AIs can in any way be seen as posess human-like reasoning.) A few randomly selected quotes from Rothbard:
Quote:
Reason is not bound, as it is in modern post-Humean philosophy, to be a mere slave to the passions, confined to cranking out the discovery of the means to arbitrarily chosen ends. For the ends themselves are selected by the use of reason
Quote:
One common, flip criticism by opponents of natural law is: who is to establish the alleged truths about man? The answer is not who but what: man's reason. Man's reason is objective, i.e., it can be employed by all men to yield truths about the world.
Find Rothbard's book here: http://mises.org/rothbard/ethics/ethics.asp
[img]http://boxall.no-ip.org/img/ext_userbar.jpg[/img] "People think dreams aren't real just because they aren't made of matter, of particles. Dreams are real. But they are made of viewpoints, of images, of memories and puns and lost hopes." - John Dee