Lossy uploads


I'm reading Anders Sandberg and Nick Bostrom's time-until estimates for mind uploading with respect to Moore's law. Interesting stuff, but it presupposes that any brain emulation will be a direct copy of the human brain.
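The time-until logic is just exponential arithmetic: pick a required compute budget, a current one, and a doubling period, and the crossover date falls out. A minimal sketch, where every number is a placeholder assumption of mine and not a figure from the Sandberg & Bostrom roadmap:

```python
import math

def years_until(required_flops, current_flops, doubling_years=1.5):
    """Years until available compute reaches required_flops,
    assuming it doubles every doubling_years (Moore's-law style)."""
    if current_flops >= required_flops:
        return 0.0
    doublings = math.log2(required_flops / current_flops)
    return doublings * doubling_years

# Illustrative only: assume 1e18 FLOPS available now and 1e25 FLOPS
# needed for some chosen level of emulation detail.
print(round(years_until(1e25, 1e18), 1))  # about 35 years under these assumptions
```

The interesting part is how sensitive the answer is to the required-FLOPS assumption, which is exactly where the level-of-detail (and hence lossiness) question bites.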

I've also read this interesting paper comparing the call graph of the Linux kernel to the transcriptional regulatory network of Escherichia coli. Ignoring all of the problems with that comparison, it does suggest that mind uploading need not be a direct copy of the human brain.
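The comparison in that paper boils down to contrasting the wiring statistics of two directed graphs, e.g. who calls whom vs. who regulates whom. A toy sketch of that kind of analysis, using made-up five-edge graphs purely for illustration:

```python
from collections import Counter

def degree_profile(edges):
    """Return (in-degree counts, out-degree counts) for a directed edge list."""
    indeg, outdeg = Counter(), Counter()
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
    return indeg, outdeg

# "Call-graph-like": generic routines reused (called) by many others.
call_edges = [("a", "util"), ("b", "util"), ("c", "util"), ("util", "log"), ("a", "log")]
# "Regulatory-network-like": a few master regulators driving many targets.
reg_edges = [("tf1", "g1"), ("tf1", "g2"), ("tf1", "g3"), ("tf2", "g3"), ("tf2", "g4")]

print(max(degree_profile(call_edges)[0].values()))  # in-degree of the most-reused routine
print(max(degree_profile(reg_edges)[1].values()))   # out-degree of the busiest regulator
```

In the real paper the asymmetry shows up at scale: kernel call graphs concentrate in-degree on reusable components, while regulatory networks concentrate out-degree on master regulators.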

This is leading me to consider just how lossy an upload could be before anyone noticed.

Presume we have some ability to grow neural structures, and that the basics are pretty well mapped out (cortico-basal ganglia-thalamo-cortical loop goes here, ventromedial prefrontal cortex is shaped like this, Wernicke's area right over there; mostly we're trying not to induce OCD or some other mental disorder with a wildly malformed neural structure). You should then be able to put together a rough structure that supports a given personality profile. Add to that the information context the person will find themselves in once uploaded (class/cultural context, social ties, photos, targeted advertising, etc.), plus people's tendency to invent memories they expect to already have, and can we get away with a really sloppy upload and call it good enough?

If we can have really sloppy uploads, aren't trying to emulate proteomes, don't care much whether the person's consciousness runs on the same duty cycle as a bio-human, and don't expect more behavioral consistency than we would from a bio-human, what sort of supplemental inputs to a brain scan would be needed?
