So what about a human who is perfectly coherent in a simulated world but then begins to exhibit anomalous behaviour? If you hold that human consciousness is constituted in a particular way (for instance, by a self-aware “ego,” as conventional physicalism requires), then these behaviours would not be attributed to the computer; they would be attributed to the human.
Here is a philosophical point worth noting: when a human thinks in accordance with the rules of thought, it should be possible for that human to realize that he has thought in the wrong way (or at least to recognize that something was wrong with what he thought). If we suppose that a computer simulating a human exercises that kind of control over its thoughts, then the simulated human will act according to our rules rather than according to the rules of thought itself. But this does not happen if the computer is the simulated human: our thoughts would be the simulated human’s thoughts, and so the simulated human would think just as we do. This suggests that when reasoning about our mind and brain, we should consider how things might have been, not only how things actually are.
(Incidentally, this is the crux of one interpretation of the distinction among the simulated person, the virtual person, and the physical person. When we think of a computer as simulating a “person,” we can conceive of an intermediate state of mind, an entity standing between the person and the computer. This intermediate state might take different forms, but in principle it allows things to differ between the “person” and the computer.)
A related argument, also called the “Simulation argument,” is presented by the philosopher Daniel Dennett, author of Breaking the Spell: Religion as a Natural Phenomenon. The argument holds that all the information “given up” in a computer simulation would be lost, and hence that no simulation could be real.
I should say first that I have not yet worked out the exact implications of the Simulation Argument. What is not in dispute is whether I am entitled to more information than I would have in a world where all information is lost: if a simulation is given up, then all the information that was in use during the simulation is lost with it.