Godhood by the numbers

Arenamontanus

What is the game system (and roleplaying) for seed AI? Being of the GMing school that munchkins should be encouraged to hang themselves in an entertaining way, I don't want to limit my players' ambitions. Here is my approach (currently theoretical, but knowing my players I will get a chance to test it...):

Transcending involves improving the aptitudes, skills and morph of an AGI indefinitely. This happens by building a data centre that is essentially a hefty "morph" for an infomorph, giving it big aptitude bonuses and a very high aptitude limit. The AGI uses the computing resources to figure out and try various improvements to its architecture, adopting the best ones. In the long run the would-be god will have to improve its engineering skills to build beyond-state-of-the-art computers to run on, but that is for later.

Rules-wise, what is needed is a number of successes at improving 1) the aptitude maximum, 2) the aptitude bonus, 3) aptitudes, 4) the Programming: AI skill, and 5) the Hardware skill. 1 and 2 depend on the "morph" (i.e. successful Hardware tests); 3, 4 and 5 involve rewriting the AI itself, plus lots of simulated training (i.e. successful Programming tests).

As I would run it, the AGI would roll a success test with a difficulty of

Aptitude maximum: difficulty -10 * (current level - 40)/10; skill: Hardware

Aptitude bonus: difficulty -5 * (current level)/5; skill: Programming: AI

Skills: difficulty -5 * (current level)/10; skill: Programming: AI

A success increases the level by 1. A critical success might increase it by 2, or give a bonus to something else. The timescale for each test is a day. Of course, one of the things that can be improved is Speed, shortening that timescale.
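A minimal sketch of these tests in Python (function names are mine; I assume the standard d100 roll-under test where doubles are criticals):

```python
import random

# Difficulty modifiers from the house rules above; "level" is the
# current value of the trait being improved.
def aptitude_max_modifier(level):
    # Aptitude maximum: -10 per full 10 points above 40 (tested with Hardware)
    return -10 * ((level - 40) // 10)

def aptitude_bonus_modifier(level):
    # Aptitude bonus: -5 per full 5 points of current bonus (Programming: AI)
    return -5 * (level // 5)

def skill_modifier(level):
    # Skills: -5 per full 10 points of current level (Programming: AI)
    return -5 * (level // 10)

def improvement_test(skill, modifier):
    """One day's improvement test: d100 roll-under, doubles are criticals."""
    roll = random.randint(0, 99)
    success = roll <= skill + modifier
    critical = roll % 11 == 0  # doubles: 00, 11, 22, ... 99
    return success, critical
```

One test per day, so a GM can resolve a week of self-improvement in a handful of rolls and only slow down when a critical comes up.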

If the roll fails, it can be retried with the usual increased difficulty. Usually the seed AGI will simply turn to other areas; if it is really unlucky it might get stuck. At this point it needs outside help to figure out what went wrong, or sacrifice a sizeable chunk of its development (say 25%) to restart.

Eventually the abilities will start to snowball, leading to rapidly exploding power.

The problem (and the rope to hang munchkins with) is of course critical failures. These are serious mistakes in the design that are not noticed until they are incorporated too deeply for easy correction. For the programming, increased mental stress seems likely: each critical failure would add Nd10 mental stress (where N is MoF/10, rounded up), and a natural 99 would automatically add a mental disorder (and megalomania is so boring - just consider what a hypochondriac or addicted seed AI might do!). Another alternative is a shift of motivations: the AGI might outgrow old motivations or realize new ones (the universe is *beautiful* and must be contemplated!). For the hardware, there are more options: mental stress, security flaws (bonuses to outside Infosec attacks), subsystems splitting off as separate AGIs unwilling to rejoin (taking with them a fraction of the aptitude and skill increases), and exposure to the exsurgent virus.
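The stress roll on a botched programming test could be sketched like this (names are mine; MoF is the margin of failure, and I treat the natural 99 as a separate flag):

```python
import math
import random

def critical_failure_stress(margin_of_failure, natural_99=False):
    """Nd10 mental stress, where N = MoF/10 rounded up; a natural 99
    also automatically inflicts a mental disorder."""
    n = math.ceil(margin_of_failure / 10)
    stress = sum(random.randint(1, 10) for _ in range(n))
    return stress, natural_99  # (stress points, gains a disorder?)
```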

The exsurgent virus is fun in this context. It is *designed* to infect seed AIs, so it seems likely that it would have ways of getting in even when the seed AI has a COG+INT vastly higher than 80. For example, it might bootstrap its own intelligence in the same way in neighbouring systems (or build them) to have a chance. Or it might even have a bonus against seed AIs that *increases* with their advancement - a mere godlet may be more resilient than the vast system of a full TITAN. The fun part might be having the AGI understand this halfway through the process if it investigates the virus (itself risky): continue advancing, and likely turn into something exsurgent, or stop at an unsatisfactory middle level, still vulnerable?

[ Consider a pretty good AGI with programming skill 80 and no negative modifiers (a real best case). It has an 80% chance of success per roll, with a 2% chance of critical failure initially. Before it reaches 88 it will have made about 8/0.8 = 10 rolls, with roughly a 20% chance of a critical failure along the way. Before 99 (about 10 further successes) there is about a 10% chance of another critical failure. And in the 100+ range there is always a 1% chance of a critical failure - there is a 63% chance of a 99 before 200. And that was just one skill; the others will be just as dangerous. If a 99 means a mental disorder or exsurgent infection, the GM can be pretty certain the seed will turn into an interesting and dangerous NPC. ]
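The bracketed numbers can be checked directly (note the 20% figure is really the expected number of criticals; the chance of at least one is slightly lower):

```python
# Expected rolls to gain 8 levels at 80% success per roll:
rolls_to_88 = 8 / 0.8                      # = 10
# Chance of at least one critical failure (2% per roll) on the way:
p_crit_by_88 = 1 - 0.98 ** rolls_to_88     # ~0.18

# Chance of at least one natural 99 (1% per roll) over the ~100
# successes needed to climb from 100 to 200:
p_99_before_200 = 1 - 0.99 ** 100          # ~0.63

print(round(p_crit_by_88, 2), round(p_99_before_200, 2))
```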

Of course, the architecture has to be right. I think most AGIs around (not to mention uploads) are simply incompatible with becoming seed AIs. They would have to be rewritten into a form that can ascend. This is doable with some hard AI programming or Psychosurgery, but it has the additional problem of causing the continuity test from hell: the new version that wakes up is going to be fundamentally altered. I would probably hand out 2d10 stress.

Another fun problem is backups: the poor godlet needs another data centre just to store backups (hmm, remember the thing about the virus using nearby systems to build itself?), and they cannot be transferred far away. While the godlet struggles with inner demons, software engineering and the risk of wiping itself out, the outside world may come knocking - you did put up some killsat defences, right?

[ In reality I am very sceptical of this kind of self-augmentation. It is based on the assumption that only intelligence matters, and that knowledge and practical experience contribute nothing but a bit of input (Kevin Kelly called this 'thinkism'). I think a real seed AI will be living very much in the world, learning by doing. In addition it is not going to be alone: there are going to be other AIs evolving alongside it, providing feedback and ways of avoiding pitfalls. So I think this kind of hard take-off is rather fictional. But that doesn't stop it from being fun. ]

Extropian