Abstract
The possibility that artificially intelligent machines may some day pose a risk is well-known [1].
Less understood, but more immediately pressing, are the risks that humanistically intelligent [5, 7] people or organizations pose, whether facilitated by “smart buildings”, “smart cities” (a camera in every streetlight), or “cyborgs” with wearable or implantable intelligence. As we augment our bodies and our societies with ever more pervasive and possibly invasive sensing, computation, and communication, there comes a point when we ourselves become these technologies (what Minsky, Kurzweil, and Mann refer to as the “Sensory Singularity”[10]).
This sensory intelligence augmentation technology is already developed enough to be dangerous in the wrong hands, e.g. as a way for a corrupt government or corporation to further augment its power and use it unjustly.
Accordingly we have spent a number of years developing a Code of Ethics on Human Augmentation [9], further developed at IEEE ISTAS 2013 and IEEE GEM 2015 (the “Toronto Code”), resulting in three fundamental “laws”.
These three “Laws” represent a philosophical ideal (like the laws of physics, or like Asimov’s Laws of Robotics [2]), not an enforcement (legal) paradigm:
A metalaw states that the Code itself will be created in an open and transparent manner, i.e. with instant feedback and not written in secret. In this meta-ethics (ethics of ethics) spirit, continual rough drafts were posted (e.g. on social media such as Twitter #HACode), and members of the community were invited to give their input and even become co-authors.
The First Law is well-documented in existing literature on metasensing, metaveillance [8], and veillametrics [4]. Interestingly, the City of Hamilton, Ontario, Canada, has passed the following bylaw, relevant to the First Law of Human Augmentation:
“No person shall: Apply, use, cause, permit or maintain ... the use of visual surveillance equipment where the exterior lenses are obstructed from view or which are employed so as to prevent observation of the direction in which they are aimed.” [3].
The Second Law asserts that systems that watch us, while forbidding us from watching them, are unfair and often unjust.
2.1 The Veillance Divide Is Justice Denied
In the new “transhumanistic era”, some machines will acquire human qualities such as AI (Artificial Intelligence), and some humans will acquire machine-like qualities such as near-perfect sensory and memory capabilities. Irrefutable recorded memories, suitable as evidence rather than mere testimony, will challenge many of our old ways, calling for updated ethics that serve the interests of all parties, not just those with power or authority. Our greatest danger may be a “(sur)Veillance Divide” in which things (the Internet of Things) and elites may record with perfect memory, while ordinary people are forbidden from seeing or remembering. Therefore, we propose the following pledge, to clarify the need for fairness, equality, and two-way transparency:
We take here an important first step toward the Human Augmentation Code 1.0. This is a “living document” and we are open to contributions from all, as it evolves.
References
[1] N. Bostrom. Ethical issues in advanced artificial intelligence. Science Fiction and Philosophy: From Time Travel to Superintelligence, pages 277–284, 2003.
[2] R. Clarke. Asimov’s laws of robotics: Implications for information technology, Part I. Computer, 26(12):53–61, 1993.
[3] M. Fred Eisenberger and C. C. Rose Caterini. City of Hamilton By-law No. 10-122, May 26, 2010.
[4] R. Janzen and S. Mann. Sensory flux from the eye: Biological sensing-of-sensing (veillametrics) for 3D augmented-reality environments. In IEEE GEM 2015, pages 1–9.
[5] S. Mann. Humanistic intelligence/humanistic computing: ‘WearComp’ as a new framework for intelligent signal processing. Proceedings of the IEEE, 86(11):2123–2151+cover, Nov. 1998.
[6] S. Mann. Computer architectures for personal space: Forms-based reasoning in the domain of humanistic intelligence. First Monday, 6(8), 2001.
[7] S. Mann. Wearable computing: Toward humanistic intelligence. IEEE Intelligent Systems, 16(3):10–15, May/June 2001.
[8] S. Mann. The sightfield: Visualizing computer vision, and seeing its capacity to “see”. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pages 618–623. IEEE, 2014.
[9] S. Mann. Keynote address: Code of ethics for the cyborg transhumanist era. In Second Annual Conference of the World Transhumanism Association. http://www.transhumanism.org/tv/2004/, August 5–8, 2004.
[10] M. Minsky, R. Kurzweil, and S. Mann. The society of intelligent veillance. In IEEE ISTAS 2013.