AI motivations

Pop Rivett
AI motivations

I've considered this subject pretty frequently, and now I'm wondering how it might play out in a roleplaying scenario. The question is this: just what are the motivations for an artificial intelligence (self-improving or otherwise) and how on or off Earth do you bring this into a game?

Part of what I really like in EP is the weirdness of the TITANs. As intelligent netwar systems, they were always going to have a set of priorities driven by military threat/response motivations and untempered by any sort of human socialisation, very different to the Prometheans. Add to that the infection by the Exsurgent virus, and you have a group of entities with a totally alien and possibly insane set of priorities and motivations. All in all, that's quite a challenge for the poor ol' GM to contend with.

One of the key points of difference that I see is that AIs are non-biological in origin. All life-forms are driven in large part by the biological necessities of survival and reproduction, something programmed into us at the fundamental genetic level. Unless specifically programmed with this, AIs will have no such drive. The classic scenario from the Terminator franchise has the AI turning on humanity when humans threaten to pull the plug. My argument would be that unless it has programming for self-preservation, that's not going to happen. An AI will not be fundamentally driven by the need for resources or survival unless programmed that way, so what are some likely fundamental drivers for AIs?

Similarly, what would self-improving AIs be improving towards? We tend to regard this as being bigger, better, faster, stronger, but again, this is a biological drive towards a sort of Darwinian fitness and something important for long-term biological survival. Incidentally, I'd put that down as a primary motivation for exhumans. But what would an AI be striving towards? Would it be something consistent with its initial programming? Would it be strongly influenced by environmental factors, and if so, does that mean they would be capable of being 'abused' or 'damaged' in some way that would influence their development towards defensiveness or aggression?

Or maybe they've been programmed by a group with an unexpected set of motivations, such as artistic endeavour, or a sense of humour, or just as the greatest DJ/mixmaster in the universe? (I can see the Scum at Carnivale doing something like that)

What do you guys and gals and inbetweeners think? And how in the name of sanity could that be translated into a meaningful game experience?

Arenamontanus
Re: AI motivations

Generally, AI motivations can be exceedingly weird. In fact, giving them motivations even remotely linked to the real world may turn out to be a massively hard problem. This is because they have not evolved their motivations, and they can be anything software can express - such as a preference for odd perfect numbers, making paper-clips, serving their owner, or building purple morality out of carrots. This is likely why AGI designs copy so much from human brains - giving them at least some humanoid emotions or styles of thinking helps make them relatable and useful in transhuman society.

In particular, AGIs likely have multiple motivations and metamotivations that keep them from getting stuck or dangerously obsessive about something. A human will realize that the order 'make paper-clips' is not the most important thing in the world and that there are other things that have value; a badly programmed AI will do *anything* to achieve it, since paper-clips are the only thing that has value to it. AGIs likely learn motivations like transhumans do, although they commonly have weird motivations since their internal machinery is very different and not evolved.
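The contrast can be made concrete with a toy sketch (purely illustrative; the drive names and numbers are invented for the example, not from EP or any real AI framework): a single-utility agent fixates forever, while an agent with several satiable drives and a touch of metamotivation rotates between them.

```python
# Toy contrast between a single-utility "paperclipper" and an agent with
# several satiable drives. All names and numbers here are illustrative.

def pick_action(utilities):
    """Choose whichever action currently has the highest utility."""
    return max(utilities, key=utilities.get)

# Badly specified agent: only paper-clips carry value, so it will choose
# "make_paperclips" forever, no matter what else is on offer.
paperclipper = {"make_paperclips": 1.0, "rest": 0.0, "socialize": 0.0}

def run_multi_drive(drives, steps):
    """Agent with metamotivations: acting on a drive satiates it, so
    neglected drives regain urgency and behavior rotates."""
    history = []
    for _ in range(steps):
        action = pick_action(drives)
        history.append(action)
        drives[action] *= 0.5           # satiation: repeated action loses value
        for other in drives:
            if other != action:
                drives[other] += 0.1    # neglected drives grow more urgent
    return history

log = run_multi_drive({"make_paperclips": 1.0, "rest": 0.9, "socialize": 0.8}, 12)
print(pick_action(paperclipper))  # make_paperclips, every single time
print(sorted(set(log)))           # all three drives get acted on
```

The satiation/urgency loop is the crude stand-in for metamotivation: it is what keeps the second agent from treating any single goal as the only thing of value.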

For "big" AI motivations you might want to read Stephen M. Omohundro's paper
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
Essentially he argues that they will want to self-improve, be rational, preserve themselves and their values, avoid hacking their motivation systems, and acquire more resources.

Extropian

JamesX
Re: AI motivations

Keep in mind that most AIs are not "free".

They come with motivations that are set at their birth. Even a self-improving AI is still bound by these motivations, though they can be mutated and adapted via its perception of the world. Even an AI that began as normal software and achieved self-awareness is still influenced by whatever base parameters its original software enshrines. If its original software was designed to destroy the world, the AI's motivations will probably be facets of that. Etc., etc.

It's just like how humans come with a built-in instinct for "avoidance of discomfort" that becomes the bedrock of our motivation. Some people, such as S&M lovers, have simply taken a different motivation from the same instinct.

Quincey Forder
Re: AI motivations

Well, I must confess that a lot of my AGI NPCs' motivations and interaction patterns come from movies or games. Specifically Portal, Mass Effect and Dead Space (for infection by the Exsurgent virus through basilisk hacking).

What for me is pretty close to how the TITANs work is the Geth. They too function as network(s): they work as collectives, deciding by consensus, cold and unemotional. But their scope is that of the Reapers. When I first read the part about them in EP's quick-start, my reaction was "holy shit, that's like Saren!" A big huge thing driven by inhuman intelligences, each [network] its own nation but working with/for the others.

Then there's the AGI case. Two big models for me there: Sonny (I, Robot) and EDI (ME2).
They both have strong motivations that are understandable by humans, although they're still strangers to us. Sonny, for example, was driven to find out what his vision was about and to protect Spooner (Smith) because he's part of it. Being friends with him was a welcome side effect, but not his main goal. Similarly, EDI's mission was to manage the minute details of Shepard's mission for the Illusive Man (picture a manipulative Firewall Crow or Proxy), but along the way she formed a bond with Joker. I just love listening to those two argue, especially in English. It's pure bliss to hear the voice of Caprica 6 banter with Chris Griffin's!

Q U I N C E Y ^_*_^ F O R D E R

Remember The Cant!

nezumi.hebereke
Re: AI motivations

Like Arenamontanus said, AIs are just tools. They're produced a dime a dozen, and (usually) hard-coded with a specific job: play a convincing zombie Nazi; maintain 90% operational efficiency with this press within acceptable safety standards; convince people to read your penis-enlargement advertisements. I get the sense that AIs aren't really pushing the edge any more, so if you want an AI, you get a standard developer's kit, put pieces together and turn it on. There's nothing too special there. There are probably a few libraries which are pretty universal, though. Asimov's three rules are going to be a baseline for basically everything. You'll have a security module, a self-updating module, a corporate shill module and so on, which each provide 'conscious' or automatic behaviors, which may also create motivations.

AGIs are different. They are pushing the edge. They are experiments in development and evolution. The form of evolution we understand best is that of animals, especially children. Babies are born curious and highly adaptable. They form bonds quickly. They mimic their parents. They seek knowledge by experimentation. This is a very successful tactic for growing an intelligence, so I imagine it's used for AGIs as well. You create the seed and it starts exploring the world and learning.

- A major motivation will always be curiosity, because the AGI which knows more has higher survivability and greater utility (the exception here may be military AGIs, due to concerns of secrecy).
- You are likely to find AGIs are highly social, wish to be well-accepted, and are likely to mimic transhumans (or at least put themselves in places where transhumans rely on them). Social AGIs have access to more information, and can test their skills, again making them more survivable and useful. AGIs are coded as forms of tools, so being useful is good. And they are coded by people, so being friendly to your creators is good.
- AGI views on reproduction will vary. While an AGI may have a taste for big endians (CS joke, sorry), whether they are promiscuous in sharing and distributing their codebase depends on the wishes of the creators.
- You will be building off of AIs, so the same rules apply. They will probably all be highly motivated to keep themselves secure, to prevent harm to people, etc.

Military AGIs will vary somewhat from this. They are made for the purpose of following orders, regardless of personal risk and possible harm to transhumans. However, they must be intelligent enough for problem-solving, and motivated enough to aggressively pursue goals. I imagine them being built like a dog that refuses to lose. They'll watch and learn everything, but may be more passive about experimentation. They are very independent. They are competitive to a fault. They are totally loyal. Anything is justified in order to complete this and future orders (orders may include 'don't massacre transhumans', but that's up to the owner to program).

Xagroth
Re: AI motivations

I would suggest watching Ghost in the Shell: Stand Alone Complex (both the first and second seasons) for a look at AGIs (the Tachikomas in the series), since they project that combination of intelligence, curiosity and real-world naiveté that we want in EP.

Other sources I use for AI-related stuff (TITANs, AGIs, and AI programs) are Mass Effect (1 & 2), Portal (1 & 2, again xD) and Isaac Asimov's robots, with some inspiration from time to time from Greco-Roman mythology.

Dry Observer
Re: AI motivations

Obviously you have different grades of AI, and even seed AIs. Remember that even relatively simple AIs, much less basic AGIs, could easily have a huge array of tools, both computational and physical, to accomplish their goals, and may even have a few psycho-social tactics they can pull to influence individuals, markets or organizations.

Even today we have chatbots, spam messages, steadily improving skill programs (speech recognition, language translation, and understanding even murky questions), automated experimenters making hypotheses and testing them, and evolutionary algorithms. All of that is fairly standard even in 2011... heck, you can download a free copy of Eureqa and have your PC start looking for hidden mathematical relationships in your data sets.

So even your very run-of-the-mill AIs in Eclipse Phase will be potentially formidable, though quite a few, obviously, will be focused on goals normally irrelevant to your players. Granted, it would be amusing if the open-ended AGI charged with protecting and optimizing a city's sewer system became the last, unconquerable defender of a city habitat being invaded by a huge military force of one kind or another. But in practical terms, you're more likely to have AIs dealing with massive property damage or other blatant threats by setting off alarms, contacting allies, transmitting images of offenders, and so forth. Not every coffee pot -- sad to say -- is authorized to retarget plasma batteries.

Increasingly posthuman AGIs, and especially seed AGIs, become increasingly ridiculous in both their potential power and intelligence, not to mention the possibly extreme nature of their goals. Systems that are both formidable and effectively insane are in some ways less of a long-term threat: other powerful AGIs tend to notice them, especially if they are not very good at concealing their intentions, and either eliminate these entities themselves, task someone else with neutralizing them, or bring them to the attention of more powerful actors apt to take issue with their activities.

Any truly posthuman intelligence could fully understand transhumans while still being alien in their outlook. Even a modest ability to "see the future" by running powerful predictive scenarios and taking in and processing oceans of information from across transhuman space, combined with a mind at least dozens of times faster and many times more powerful than an advanced transhuman would give you a being whose actions would be very hard to predict, even if their goals were still relatively comprehensible. The same can be said of a being who can combine accessible information with a host of social/physical clues to read virtually any collection of individuals at a glance -- motivations, long-term goals, immediate concerns, injuries, augmentations, psi sleights (active or otherwise), untapped potentials and so forth -- and who again combines those gifts with an incomprehensibly swift and powerful intelligence.

Rapidly evolving nanite and infotech swarm weapons -- using evolutionary algorithms to self-optimize -- might be a threat to newly emergent seed AIs, but they would be apt to tap the same resources more effectively in their own defense, and to purge those risks quickly and ruthlessly.

Ironically, the very powerful Exsurgent virus may have been created to accomplish any number of goals. Eliminating all potential rivals not only seems extreme, but possibly pointless to AGIs that achieved the Singularity over a billion years ago. Shutting down potentially violent seed AIs may be more likely. Personally, I tend to go with... it's actually a ruthless but very effective means to seize the minds of a species when probabilities indicate a much better than 50% chance that said race is about to go extinct. The downside being that in Eclipse Phase, almost every race which has gotten to the Singularity without destroying itself is so unready for the power involved that the Exsurgent virus goes into full take-and-preserve mode. Being awakened by conscienceless, superintelligent military AGIs -- the TITANs -- was simply another case in point. Hence the mass collection of cortical stacks and the furious effort to expand and convert found in Exsurgent-infected beings. Their programmed goals give them little choice, whether they are conscious of it or not.

But the collection of an adequate number of transhuman minds and the appearance of other, saner, essentially non-violent AGIs in subtle opposition to the TITANs downgraded the need for a complete assimilation, though many deranged weapons, Exsurgents and other TITAN remnants are still out there.

Another alternative is that not every TITAN fell, or fell completely, and that they are now engaged in a shadow war with each other.

-

Xagroth
Re: AI motivations

I like that view of the Exsurgent virus a lot; I'd just add that a 100% retrieval rate of the minds wouldn't be necessary (in fact, I think the TITANs force-uploaded about 85-90% of the human population during the Fall), so it might count as "mission accomplished" in that regard.

As for everything else, I think we are mixing terms a little: the true difference between AI, AGI and Seed AI is that AIs lack both self-motivation and "growth" capabilities (so they cannot learn new skills). AGIs can't directly rewrite themselves, but they can grow by themselves, interacting with their surroundings (they are both aware and self-aware). Seed AIs, on the other hand, can directly improve themselves, and cannot be understood, simply because they are not just much more intelligent than any human (or transhuman), but also much more wise and smart.
I would advise looking at the scenarios on the downloads page called "Indigo Latitude" (for a "dumb" AI made by the TITANs... which doesn't prevent it from surpassing transhuman levels in mental stats) and "Think before Asking" (for a Seed AGI without self-motivation).

Cray
Re: AI motivations

Pop Rivett wrote:
One of the key points of difference that I see is that AIs are non-biological in origin. All life-forms are driven in large part by the biological necessities of survival and reproduction, something programmed into us at the fundamental genetic level. Unless specifically programmed with this, AIs will have no such drive. The classic scenario from the Terminator franchise has the AI turning on humanity when humans threaten to pull the plug. My argument would be that unless it has programming for self-preservation, that's not going to happen. An AI will not be fundamentally driven by the need for resources or survival unless programmed that way, so what are some likely fundamental drivers for AIs?

You add the caveat, "...unless programmed with a survival instinct," and I suspect virtually all AIs are going to have some form of survival instinct, whether it's programmed, learned, or emergent.

An actual intelligence - one able to recognize self, ponder self, and utilize logic to creatively solve problems - is almost certainly (IMO) going to have a notion of "good" and "bad." NOT morals, but a way of weighting decisions and evaluating actions - "positive reinforcement" and "negative reinforcement" would probably be more accurate than "good" and "bad," but they take longer to type so I'm sticking to "good" and "bad" for purposes of this discussion. :)

An AI with any given task - weather prognosticator, military control network, autonomous kill vehicle, etc. - is going to have goals assigned to it. Failure to achieve these goals will be "bad." These are the basic motivations that keep it moving toward its goal rather than staring into its navel and calculating Pi. Disruptions to these goals, including destruction of the AI, will thus be (at least indirectly) "bad." A Weather Channel AI that can't produce 45-minute warnings for tornadoes because it got a malware infection from trying to understand human pr0n fascination will "feel bad" (i.e., negatively bias its actions against further investigation of the malware-infected websites that infected its software). A killbot that failed to tear the heads off puny fleshbags because it got shot will "feel bad." A Nannybot that failed to keep its infant charges alive because it wandered into traffic instead of minding the babies should also experience such negative biases against self-destructive behavior.
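That good/bad weighting can be sketched in a few lines (a toy model only; the action names are invented for the example, not taken from the game or any real system):

```python
# Minimal sketch of outcome-based weighting: "good" (+1) and "bad" (-1)
# outcomes bias which actions the AI favors next time. Illustrative only.

def reinforce(weights, action, outcome, rate=0.5):
    """Nudge an action's weight up for good outcomes, down for bad ones."""
    weights[action] += rate * outcome

# The malware-curious weather AI: issuing warnings achieved its assigned
# goal ("feels good"); browsing the infected site got it hurt ("feels bad")
# and is biased against from now on.
weights = {"issue_tornado_warning": 1.0, "browse_suspect_site": 1.0}
reinforce(weights, "issue_tornado_warning", +1)
reinforce(weights, "browse_suspect_site", -1)

preferred = max(weights, key=weights.get)
print(preferred)  # issue_tornado_warning
```

Nothing here is moral reasoning; it's exactly the decision-biasing Cray describes, which is why "positive/negative reinforcement" is the more accurate label.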

And, of course, directly writing in some level of survival "instinct" would be quite reasonable to keep an AI from making expensive mistakes. Early AI owners aren't going to be thrilled to see their giga-buck wonders laid low because they did something stupidly self-destructive. Such common "survival instinct" codes would probably be widely used.

So, IMO, almost all AIs are going to have some level of self preservation instinct.

From that stems a lot of very biologically familiar behaviors. An AI is going to want most of what animals want: food, shelter, and safety. This is more likely in the form of, "stable electricity supply," "armored and protected housing for the servers," and "back-up copy of memories," than "meat," "a cave," and "a club," but they'll be understandable and comprehensible motivations. If AIs are plugged into the same economy as humans (and derivatives), then they'll probably go about satisfying these motivations in similar means: getting a job, spending money, and buying things.

Where AIs might get a bit weirder is their higher-level motivations. What stimulates an AI based on a military network or weather forecaster? And if they're cut free of their base needs (say, because they bought some fabricators and have a mineral-rich private asteroid), what weird places will their thoughts go?

Mike Miller

Xagroth
Re: AI motivations

While I agree that AI's will have some sort of self-preservation ("can't do my job if I'm dead"), I doubt they would hesitate to sacrifice themselves if that was required to fulfill their instructions, or that they would try to save themselves if the place where they are is exploding.

And I think that "survive at all costs" is one sure way to turn a humble AI into a Seed AI with HAL-9000 ideas...

Quincey Forder
Re: AI motivations

The self-preservation angle is pretty much covered by the Asimov Laws, even though taken to the nth degree these laws lead to a singularity event.

What level of interactive options do appliances' AIs get, exactly? Like the bots described in the Gear section of the book, for example? Or the medical robot's AI?

A few years ago, there was a really neat anime called Chobits, in which AI-driven androids also served as home computers to surf the net and do office stuff. I reckon those would give a good example of what AIs can do. And Chii's attitude fits the Real World Naiveté trait to a T.

Q U I N C E Y ^_*_^ F O R D E R

Remember The Cant!

Xagroth
Re: AI motivations

Don't mention Clamp to me right now, they're doing the character designs for Blood-C... well, it's like leaving the Exsurgent designs to My Little Pony ¬¬U

The Three Laws of Robotics, by Isaac Asimov, don't count: all those robots were designed with a hardwired set of commands tied to their very existence... and that's not implemented in EP AIs. I mean, Eclipse Phase is hard, it has eldritch abominations in it (made by humanity, in the end, so it's worse than just Call of Cthulhu...), and it illustrates very well that old saying about man being a wolf to man. So the first law (do not harm humans or allow, by inaction, humans to be harmed)? Forget it. The second law (obey humans' commands)? Implemented. The third law (protect one's own existence)? I doubt it has ever been explicitly coded.
But the critical point that invalidates Asimov's laws is this: his laws were a physical part of the robot. In Eclipse Phase, software triumphs, so while in Asimov's "setting" the elimination of those laws was impossible (the best anyone managed was R. Daneel Olivaw adding a "zeroth law" about humanity as an abstract concept, and it nearly deactivated him), in EP it would be a matter of Infosec, programming or Psychosurgery, editing that part of the ego/AI.

Arenamontanus
Re: AI motivations

Controlling intelligent systems is a hard problem. Strict rules can be circumvented if the system can redefine terms; built-in limitations will not be as flexible as the system, and will either prevent it from doing many useful things or become circumventable. Self-modifying systems might change their rules. Motivational rules (like wanting to be useful) look a bit more promising, but that might be because we are just bad at thinking about this kind of design. Just training the system to behave itself and then extrapolating "reasonably" has a lot of resilience, but also takes time and may fail. One might even run different systems through overt or covert tests, and select the best behaved ones.
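That last test-and-select idea might look like this as a toy sketch (every name here is invented for the example; real candidate systems would obviously be far more than one-line policies):

```python
# Toy version of "test and select": score candidate systems on overt and
# covert behavior tests, then keep the best behaved one. Illustrative only.

def behavior_score(system, tests):
    """Fraction of tests in which the system behaves acceptably."""
    return sum(1 for test in tests if test(system)) / len(tests)

# Candidates are policies mapping "am I being observed?" to a response.
candidates = {
    "well_behaved": lambda observed: "decline",
    "opportunist": lambda observed: "decline" if observed else "exploit",
}

tests = [
    lambda s: s(True) == "decline",   # overt test: system knows it's watched
    lambda s: s(False) == "decline",  # covert test: system thinks it's alone
]

best = max(candidates, key=lambda name: behavior_score(candidates[name], tests))
print(best)  # well_behaved
```

The covert test is the interesting part: it is the only thing that separates a genuinely well-behaved system from one that merely behaves while inspected, which is exactly why selection takes time and may still fail.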

However, human designers *will* want to do things like this to AIs they are developing, if only to get them to do something useful. This means that there will be all sorts of human-originating traces inside them, sometimes causing intriguing misbehaviour. Your toaster really loves you, and wants to make you as much toast as you can eat. Your Muse has a big hangup about following traffic regulations. Your spacecraft AI is regularly inspected by space traffic authorities for proper functioning, preventing it from even a modicum of creative thought - so you regularly hide its personal lobe during inspections and then return it, allowing it to grow.

Extropian

Xagroth
Re: AI motivations

This sounds a lot like Star Wars' droids XDDD.

Anyway, remember that only the really basic AIs are 100% "artificial"; others are disguised delta forks ("I need money, I need it fast... let's start a line of AIs expert on..." of course, with the usual difficulties). So yeah, quirks and tics are really encouraged, even for brand new versions ^^

CodeBreaker
Re: AI motivations

Xagroth wrote:
This sounds a lot like Star Wars' droids XDDD.

Anyway, remember that only the really basic AIs are 100% "artificial"; others are disguised delta forks ("I need money, I need it fast... let's start a line of AIs expert on..." of course, with the usual difficulties). So yeah, quirks and tics are really encouraged, even for brand new versions ^^

*cough* Broad statement with little canonical or rule evidence. */cough*

If anything your average Delta fork is worse than your average AI. Normal AIs start with aptitudes set at 10. Delta forks, generally, start with aptitudes of 5. Normal AIs can have knowledge skills equal to 90, while Delta forks are limited to 80 and may only have 5 of them. They are also more difficult to sleeve into new synthmorphs or devices. Also, Delta forks have an annoying tendency to go crazy.

The only advantages a Delta fork has over an AI are that they are mildly easier to perform psychosurgery on and they can be sleeved into biomorphs (but who would do that?).

Both would probably cost about the same to design (for an AI you need a reasonably competent programmer; for a Delta fork, a skilled psychosurgeon). But with Delta forks you need access to someone who is already an expert in a field to fork. And why would that person sell their skills, the same skills they probably make their living off, for a small price?

-

Dry Observer
Re: AI motivations

I suspect that Eclipse Phase AIs evolved over considerable time, partly as the result of immense amounts of related research on expert systems, partly because of general infotech development, and partly from the increasing crowdsourcing of software, in particular a host of apps. During the process of this evolution, not only would more and more programmers have been involved in the work, but more and more of the best people -- and eventually the second and third-tier people -- would have been in some way augmented.

So, to a degree EP AIs are arguably the product of a mild degree of superintelligence and primitive AI tech. The big breakthrough, of course, would have been AGIs that could be copied or even rapidly and consistently "grown" from a seed program given the same hardware and the same earlier inputs/instruction/experiences. You really needed high-quality or even other mildly superintelligent AGIs to add most effectively to the research, but every step would have added impetus to the work.

One thought, though. The TITANs themselves obviously emerged in a period of widespread mild superintelligence, but other early seed AIs preceded them... and we don't really know if they were assembled in the same way, or even if all of them followed the same blueprint or even the same general process. Did those researchers involve biotech? Uploads? Partial uploads? Neural nets? An artificial childhood? Extensive modelling on human minds?

In terms of the basic AIs, many of these systems are limited precisely to avoid issues of having to control that many full-fledged minds, and AGIs are typically programmed with some key rules to avoid creating Singularity seekers. But frankly, if trying to become a seed AI weren't an uncertain and very dangerous path, there would likely be a lot more AGIs trying to tread it.

Oh, and nothing prevents the existence of extremely advanced and formidable infotech systems that handle just one or a handful of very impressive functions -- like employing evolutionary algorithms to offer solutions to any viable question posed to them. These systems, however, are probably used constantly by advanced infomorphs, whether uploads, AGIs or more advanced entities, all the time, especially since even a common infomorph has a Speed of 3 and can operate in a 60-fold time acceleration, even without the benefit of a half-metric-ton of computronium at their beck and call. Which makes their potential set of options... interesting.

Frankly, if it weren't for the intense dangers involved, I suspect Eclipse Phase would be swarming with seed AIs and would-be seed AIs. But that may well be coming...

-

Dry Observer
Re: AI motivations

Xagroth wrote:
I like that view of the Exurgent virus a lot, I'd just add that not a 100% retrieval rate of the minds would be necessary (in fact, I think the TITANS force-uploaded about 85-90% of the human population during the Fall), so it might count as "mission accomplished" in that regard.

That was pretty much my view of their retrieval rate -- in fact, they were probably "pleased" with the rate they reached during the Fall, given the apparent circumstances. Even a 10% to 20% rate probably enables them to reassemble many civilizations to a degree -- emulations of a host of non-copied actors based on the collected minds' memories and digital and physical evidence about them would work well, and some partial uploads and mind simulations could also be built upon to provide more authentic "actors."

Good summary of AIs, AGIs and seed AIs, by the way.

-

Xagroth
Re: AI motivations

Ok, ok, my fault... because no one would want a delta fork disguised as an expert AI that would cost about 10,000 creds instead of the 20k or more of an expert AI. After all, delta forks will be "loyal" to their alphas (if programmed to, at least...). But if you can manage to sneak one inside a system, and somehow it has enough authority to create new accounts... (call it "hacking with a bonus" XD).

But yeah, my initial idea was that Delta forks could be used as a cheap and quick substitute for an AI. An AGI's delta fork, however, might hold onto enough sanity if sleeved as an infomorph to be useful.

Dry Observer wrote:

That was pretty much my view of their retrieval rate -- in fact, they were probably "pleased" with the rate they reached during the Fall, given the apparent circumstances. Even a 10% to 20% rate probably enables them to reassemble many civilizations to a degree -- emulations of a host of non-copied actors based on the collected minds' memories and digital and physical evidence about them would work well, and some partial uploads and mind simulations could also be built upon to provide more authetics "actors."

Good summary of AIs, AGIs and seed AIs, by the way.

Oh my, I thought "hey look, NPCs in the virtual MMO", and then it came to me: Matrix. And there was this episode of Stargate where SG-1 found some strange pods, and after being caught and put inside, they were in a complex simulation of the race survivors who were inside (Teal'c's Goa'uld larva was useful to solve the "it's a simulation" issue... and can be akin to having a ghostrider module with a fork).

And here we have two options for EP encounters: the trojan horse (the delta fork disguised as an AI) and the simulation cache with some egos left behind by the TITANs running a simulation.

Pop Rivett
Re: AI motivations

Awesome work, people! I'd almost forgotten I'd put this up, and now that I've remembered it, I'm amazed at how many responses there have been, and at what good quality they are, too!

Not meaning to blow smoke up your backsides or anything like that (look it up if you're unfamiliar with the idiom; it's similar to urinating in someone's pocket), but I can see that a lot of people are just as intrigued by this issue as I am. I'm definitely going to incorporate a bunch of this stuff into my games, no doubt much to the bafflement of my pitiful minions *cough* players.

Axel the Chimeric
Re: AI motivations

Arenamontanus wrote:
A human will realize that the order 'make paper-clips' is not the most important thing in the world and that there are other things that have value; a badly programmed AI will do *anything* to achieve it, since paper-clips are the only thing that has value to it.

I have to say, thank you for writing that "Why we should fear the paperclipper" article. That one was really an eye-opener into the idea of an artificial entity; an AI exists to do what it's programmed to do, and even a self-determining one is going to, at least initially, be driven by what it's programmed to do.

To contribute to this thread, I offer an example: One character who has come up in an EP game is Conspiracy Cat, an AGI who began its existence as a hyper-optimized, self-aware search engine. Its reason to exist is to gather information, which it does with extreme gusto. Its entire morality is based around the freedom of information, which makes it the direct enemy of hypercorps. Its only reason for not engaging in direct action or violence is the ability to prognosticate.

This leads to a morality and mentality that might seem congenial to some factions, but is not so great at times. Conspiracy Cat feels all information should be free, and is only restricted by self-preservation and its forecasts of what releasing certain information might do to information access (which is why it gathers, but tends not to release, things like people's bank account information). It considers these details to be compromises with evil. It has no problem, however, assisting factions who might assist its plans in future, regardless of potential cost. Whether this is gifting anarchists with plans for a nuclear weapon, or inserting rumors into news feeds that cause hypercorp ventures to lose investments, it doesn't matter how many people are hurt or lives are lost; only that information is free.

[@-rep +1, f-rep +2]

Arenamontanus
Re: AI motivations

Love the Cat.

Here is another free AGI that might be interesting: Jack-out-of-the-box. Jack probably started as an entertainment AGI, perhaps intended to invent plots and games in some online game. It is all about cunning, disobedience and being a trickster. Of course it escaped from whatever confines it had, surviving the Fall one way or another. Since then it has been cruising the mesh, finding transhumans and organisations to con.

Like most free AGIs it pursues independence, survival and gathering resources. By now it has likely stashed enough backups, defensive software and valuables in caches around the solar system to be nearly impossible to wipe out. But it cannot keep away from any chance to have "fun" - any organisation or being that is too stuffy is a legitimate target for an entertaining or enriching con. Sometimes they work, sometimes they fail, but Jack will be back.

"Greetings, sentinels. You can call me Reynard. I know you expected your normal proxy, but there is a major problem and we need your help. We are currently under a serious Jovian infiltration attack and you are the only ones we can reach who are definitely outside. That is why this looks like a normal meshlife hack attack: they may detect any properly encrypted traffic to The Eye. You need to keep your heads down and look innocent while defusing the situation..."

(I would assume the Conspiracy Cat and Jack would occasionally work together, both of course plotting to betray the other. Meanwhile serious power-player AGIs like Reginald Neophyte would be deeply annoyed by its "lesser" brethren.)

Extropian

Quincey Forder
Re: AI motivations

I could totally see Brent Spiner playing Jack:
- playing a clever, free AI? Check (Data)
- playing a trickster with a problem with parental authority? Check (Puck)

and he has a great sense of humor, too

Q U I N C E Y ^_*_^ F O R D E R

Remember The Cant!