A mistake in a factory can result in scores of injuries or deaths. A mistake at a chemical plant can kill thousands. But a mistake in a biological laboratory could result in a pandemic. And as more and more people at all levels of competence gain access to the tools of biohacking, the risk of error would seem to rise dramatically.

ObEP: Firewall investigates a dangerous new bio-plague outbreak, only to discover that its instigator is a biohacker working on a beneficial new gene therapy who accidentally set something horrible loose.
From the democratization of violence to the democratization of virulence: how a garage lab mistake could wipe us out.
Humanity Plus magazine asks several sci-fi writers, AI researchers, and other tech-thinkers whether a Terminator-style scenario is possible. The results range from interesting to humorous.
Well, if someone built a global computer security system and intentionally made it highly intelligent, autonomous and creative... so as to allow it to better combat complex security threats (and ever-more-intelligent computer worms and viruses) ... well, perhaps so. It's not beyond the pale. A narrow-AI computer security system wouldn't spontaneously develop general intelligence, initiative and so forth... but an AGI computer security system might... and the boundary between narrow AI and AGI may grow blurry in the next decades...

That's from Ben Goertzel's response, and he essentially nails the idea we had with the TITANs in Eclipse Phase.
In a similar vein, take a look at this site: Preventing Skynet.
How does death by death star sound? No, not the Star Wars kind -- the kind of star that dies, explodes in a supernova, and sends a wave of killer gamma-ray radiation across the nearby stellar neighborhood, wiping out life as it goes.
ObEP: A group of gatecrashers on a crucial mission suddenly find out that their exoplanet is about to be bombarded by a gamma ray burst from a nearby dying star. They have scant time to finish their mission and escape back through the Pandora gate ...
Sorry about the lack of posts this week. We've been waiting on some new artwork to come in so we can show you, and we've also been busy proofreading the EP layout and assigning chapters of future books to freelancers. In the meantime, here is some end-of-the-week apocalypse for you:
Over at Avatar | Anima, John Carter McKnight is liveblogging the Global Catastrophic Risks conference, which is going on today in California. (This is exactly the sort of stuff Firewall would be interested in, in EP.) So far he's covered Jamais Cascio on "Risks and Resilience" and Eliezer Yudkowsky on "Cognitive Biases in the Assessment of Risk."