Humanity Plus magazine asks several sci-fi writers, AI researchers, and other tech thinkers whether a Terminator-style scenario is possible. The responses range from interesting to humorous.
Well, if someone built a global computer security system and
intentionally made it highly intelligent, autonomous and creative... so
as to allow it to better combat complex security threats (and
ever-more-intelligent computer worms and viruses) ... well, perhaps so.
It's not beyond the pale. A narrow-AI computer security system wouldn't
spontaneously develop general intelligence, initiative and so forth....
but an AGI computer security system might... and the boundary between
narrow AI and AGI may grow blurry in the next decades...
That's from Ben Goertzel's response, and it essentially nails the idea we had with the TITANs in Eclipse Phase.
In a similar vein, take a look at this site: Preventing Skynet.