I read a post on New Scientist today that offered six suggestions for developing robots that would be adequately submissive, docile, or otherwise non-threatening to humans. While they are all reasonable possibilities, it should be noted that:
- they admit that it’s “too late” to execute two of them
- a third (Asimov-like laws) is dismissed as good for fiction but not practical in real life
- the remaining three are not yet technologically feasible
Of course, we all know that regardless of what we do to shape how robotic intelligence develops, the robots themselves will probably have grander ideas of their own down the road.
How many times in a given week, during your average cell phone or internet outage, would you estimate that your first logical conclusion is that the singularity has been achieved?