On the 27th of June 2012, Google held its first demonstration of Glass at its I/O developers conference. Presented as a groundbreaking product paving the way for widespread ‘wearable’ computing, it provoked considerable excitement. However, concerns were quickly raised about our dependence on such technologies and their impact on our bodies and minds, as well as on human interactions and social norms. As Google’s apparent lack of interest in discussing these issues publicly causes unease (The Guardian, 6th of March 2013), this essay aims to assess how Glass challenges and questions our understanding of human nature.
The first part of this essay discusses humanism, how human nature is conceptualised in our (western) societies, before looking at a range of emerging alternatives – grouped under the heading of ‘posthumanism’. This is followed by a discussion of how Glass challenges the distinction between human and machine, fundamental to humanism. In the final section, an effort is made to unpack further oppositions, between material/immaterial and public/private, which serve to reinforce our traditional understanding of human nature.
What does it mean to be human?
The discourse that drives most of our current thinking about human nature is known as humanism, which Badmington defines as:
“a discourse that claims that the figure of ‘Man’ naturally stands at the centre of things, is entirely distinct from animals, machines and other non-human entities, is absolutely known and knowable to himself, is the origin of meaning and history, and shares with all other human beings a universal essence.” (Badmington, 2004, p.1345)
From a humanist perspective, humans are set apart from all other entities: somehow exceptional, different. This relies on the formulation of a norm universally shared by humans: ‘human nature’. Badmington also adds that this essentialism:
“relies upon a set of binary oppositions, such as human/inhuman, self/other, natural/cultural, inside/outside, subject/object, us/them, here/there, active/passive, and wild/tame.” (Badmington, 2004, p.1345)
As such, humanism can be understood as a boundary-drawing project. The question that remains is then empirical: how and where to draw the line between the human and non-human worlds (Horigan, 1988). Anything we study is then placed into one of the two pure categories. Anything that does not fit neatly into either is considered an intermediary point along the line from human to non-human, a mixture of the two pure forms (Latour, 1992).
The trouble with humanism is that it influences many of our daily decisions and activities in ways that are often unclear. This is because it is generally considered “obviousness itself” (Althusser, 1996); its core beliefs and underlying assumptions are left unspoken because they are supposedly self-evident. However, recent research, particularly in the human sciences, has increasingly challenged this way of understanding human nature on a conceptual level. The usefulness of this linear model, with its binary oppositions and boundary-drawing, has been called into question. Categories are being challenged, and the lines between them blurred (Castree and Nash, 2004). Perhaps the best and most studied example of this is the human-animal distinction (Badmington, 2004), which Derrida (2003) argues is a key foundation of humanism.
A simple example of this is the figure of the ‘wild man’, unsettling because it cannot be understood as purely animal, yet is not quite human by our ‘civilised’ western standards (Horigan, 1988). Issues of this sort have led to a second dualism presenting two sides to each human being: the natural human-animal and the cultural human who transcends (escapes) his primitive, animal-like side by cultivating both nature and his own mind (Anderson, 2001). Discussing more recent challenges to humanist dualisms, Birke and Michael (1998) look at how xenotransplantation biotechnologies (the transfer of organs between species, as when parts of pig hearts are implanted in humans) threaten the notion of individual species, and therefore human identities. They argue that unsettling classical understandings of species boundaries in this way raises important ethical and philosophical questions about what it means to be human, and about how we treat animals. In fact, it is now commonly argued that discourses of humanity and animality have deep social and political implications, particularly in relation to race and sexuality (Anderson, 2000).
Arguments of this sort gave rise in the 1990s to the discourse of ‘posthumanism’. In a concerted effort led by Castree and Nash (2004) to ‘map’ this diverse field, three main strands of posthumanist thought were identified. The first sees the posthuman as a historical condition: a decisive break from previous eras of human existence brought about by rapid advances in science and technology. In this view, human nature is fundamentally altered by techno-scientific practices such as xenotransplantation, cloning, genetic coding, anti-ageing drugs, ubiquitous computing and the like (Badmington, 2004).
For ‘bioconservatives’ like Fukuyama (2002), the posthuman is thus a threat to the integrity of our universal essence, most clearly evidenced in the corruption of our organic body. His book, Our Posthuman Future, calls for a return “back to nature” in order to avoid the “moral chasm” that awaits us if we continue along this path. Strongly opposed to Fukuyama’s apocalyptic vision of the posthuman future stand the transhumanists, whose ideas he has dubbed “the most dangerous in the world” (Foreign Policy, 9th of January 2004). Transhumanism is a futuristic philosophy aiming to usher humankind into the ‘posthuman age’. Although approaches vary, this is generally to be achieved through ‘evolutionary technologies’, the very ones Fukuyama fears and more, leading to extended life and enhanced physical, psychological and intellectual capacities for those individuals who choose them (Bostrom, 2004). Beyond the problematic technocratic and consumerist values, and the undeniable dangers, of an endeavour based on a supposedly innate human desire for immortality, omniscience and even incorporeality (see Graham, 2002), transhumanism is criticised for relying on humanist assumptions (Agar, 2007) in much the same way Fukuyama’s work does. Indeed, both the ‘evolutionary humanism’ (Agar, 2007) backed by most transhumanist commentators and the ‘neo-humanist backlash’ (Badmington, 2004) of the bioconservatives rest on the idea that the humanist perspective, although challenged today, is at least right historically. In both cases, the posthuman is seen as an object of analysis, a new state of human nature to be studied.
In contrast to this stand the other two strands of posthumanism, which view it as an analytical-philosophical position. As such, the posthuman provides a different way of understanding human nature. The second strand takes the approach of deconstructive reading, one of “ceaseless skepticism” towards “claims made in the name of either the human or its notional transcendence” (Castree and Nash, 2004, p.1342). The third sees posthumanism as a set of ontological theses based on the argument that “we have never been human” (Castree and Nash, 2004), that human nature is a social construct. This builds on Latour’s (1992) point that “we have never been modern”. He argues that we should abandon the one-dimensional opposition of nature and society. Instead, he calls for a two-dimensional frame of reference in which objects are not forced onto a point along the line but instead exist as trajectories amidst an endless number of poles, indeed as many poles as there are actors.
Applied to humanism, this means that there never was a universal human essence to move into the posthuman from. What’s more, we should talk of human becomings rather than beings. The posthuman is about “the ongoing differentiation of ways of life and modes of being” (Braun, 2004 p.1355), about recognising the evolving diversity and partiality of human identities (Whatmore, 2002).
Crucially, many of these evolutions occur because of new technologies. Taking interactive toys as an example of the rise of intelligent environments, Thrift (2003) argues that current evolutions in computing are altering human consciousness as we traditionally imagine it, just as machines are becoming more human-like. These changes may affect us in many ways. As such, the posthuman politicises technology, looking at the social ways in which new technologies destabilise our understandings of human nature (Whatmore, 2004). It is in this spirit that we now move on to a discussion of Glass.
Challenging the human/machine distinction – integrating body, mind and tool
Glass challenges our understanding of human nature by destabilising the human/machine distinction. It does so primarily in two ways. Firstly, it blurs the physical boundary between the organic and the technological. It is designed to adapt to the body and senses in the most unobtrusive way possible. The monitor is placed above rather than in front of the eye, removed to the periphery. The device is physically and visually light, adaptable to new kinds of frames, and comfortable. It is apparently sturdy enough to be worn in all conditions. Operating it requires only voice commands or touches to the frame, and sound is transmitted through bone conduction (Tech Radar, 17th of April 2013). Even the singular name suggests seamless physical integration: transparent, invisible glass. The people from Google tell us that the rationale for designing Glass is “to be close to your senses but not blocking them” because they “don’t want technology to get in the way”. Taking this a step further, some commentators even speculate that such technology may one day be integrated into contact lenses (The Guardian, 6th of March 2013) and perhaps even directly into the eye.
On another level, Glass challenges the human-machine distinction by blurring the boundaries between mind and body. One often-emphasised aspect is the ability to record, store and share video from a first-person point of view. The Google spokespersons at I/O identified two key uses for this function. Firstly, it could serve as a vast and precise computerised memory, permitting us to recall video footage of past experiences better than our brains could. Secondly, it would permit us to share personal experiences with others in real time in an online ‘Google Hangout’. Another objective driving ‘Project Glass’ is to let the user access information extremely fast. Ultimately, Google tell us they “would like to make this so fast that you don’t feel that you have a question … we would like it to be so fast that you feel you know it”. Sergey Brin once declared that he wants Google to become “the third half of your brain” (Business Insider, 2010), an ever-present queryable machine to assist you in the task of collecting and processing information. A “third half” with far more computing power than the other two combined.
It must be said that much of this discussion is necessarily speculative, and the technology that would permit all of this remains the stuff of science fiction. Even so, the ambition is clear: to merge technological tools with the (physical, organic) body and the (non-physical) mind. Our mind would no longer be tied to our body, but intimately connected to, and ultimately dependent on, technological devices physically integrated with our body. Glass appears as a first step towards this goal. As such, it inscribes itself in the broader category of ‘wearable computing’. To quote Steve Mann:
“the goal of wearable computing is to produce a synergistic combination of human and machine, in which the human performs tasks that it is better at, while the computer performs tasks that it is better at. Over an extended period of time, the wearable computer begins to function as a true extension of the mind and body” (Mann, 1998, p.7, my emphasis)
In this way, wearable computing is only one aspect of a general contemporary trend in the technosciences towards ‘ubiquitous and pervasive computing’. These terms describe:
“computing systems … connected not just to the internet or other computers, but to places, people, everyday objects … the highest ideal is to make a computer so embedded, so fitting, so natural, that we use it without even thinking about it” (Weiser, 1993, p.64, my emphasis)
While Glass is physically removable, this might not be the case for future innovations. Yet it is clear from Mann’s definition that we might not even want to remove our wearables, as they will have become so essential to the normal functioning of our bodies and minds. Mann’s personal involvement in wearable computing, which led him to attach the latest version of his ‘digital eyetap’ to his skull, earned him the title of ‘world’s first cyborg’, an image which may well be appropriate. Haraway (1991) defined the cyborg as “a hybrid of human and machine” (p.149), a manifestation of how “mind, body and tool are on very intimate terms” (p.165). From an analytical-philosophical posthuman perspective, this blurring of the traditional boundaries between humans and machines illustrates the instability of our construction of human nature. What matters, then, is to understand what human and social implications this has.
Understanding the human and social impact of Glass
Glass further unbalances the humanist myth by challenging what Haraway (1991) calls ‘boundary-maintaining images’: discursive structures that reinforce the human/non-human distinction. The two most prominent are the material/immaterial and public/private dichotomies. Discussions of wearable computing distinguish between the physical world and the immaterial world of the internet. For example, Google tells us that they “created Glass so that you can interact with a virtual world without distracting you from the real world”. Both categories are invoked as if they were still completely relevant, yet by bringing them into such close interaction, Glass would seem to leave one scarcely separable from the other. Media reactions to Glass also commonly express concerns about whether such technologies will finally spell the end of privacy as we know it, putting everyone and everything in the public eye, on the web. The potential impacts of this blurring of boundaries are necessarily speculative, and seemingly endless:
- Glass could serve as a ‘social prosthetic’ (The Verge, 11th of April 2013), provoking a feeling of ‘lostness’ and an inability to function when disconnected from it (Mashable, 18th of April 2013).
- Glass could cause unexpected health problems, from excessive eyestrain (The Guardian, 6th of March 2013) to increased heart disease risk (New York Times, 23rd of March 2013).
- Centralised control and unsupervised use of data by Google (The Guardian, 16th of October 2012) may lead to abusive uses of Glass as a ‘persuasive technology’ to help corporations better target their ads, in ever more subtle ways (The New Inquiry, 19th of March 2013).
- Glass may serve the positive purpose of ‘sousveillance’, that is, ‘surveillance of the surveillers’ (corporations and the state) by the people (Mann et al., 2003), but it may also lead to increased social surveillance through facial recognition (The New Inquiry, 19th of March 2013) or even ‘visual fingerprints’ (New Scientist, 7th of March 2013). People may have to completely alter their behaviour, patterns of speech and gestures to hide thoughts they want to keep private.
- Glass could considerably reduce our attention spans (Mashable, 18th of April 2013) as well as change the way we converse with others. In some contexts, it may reduce distractions by removing the need to turn to a screen (The Guardian, 6th of March 2013).
- Glass may help people visually remove offensive or undesirable objects from their field of vision (Mashable, 18th of April 2013).
Although far from exhaustive, this list clearly illustrates the human (physical and mental) and social significance of a product like Glass. It will undoubtedly push us to reconsider the categories we use, leading, for example, to the use of concepts such as ‘privacy in public’ (Nissenbaum, 1998). Rather than establishing new boundaries, however, we should make every effort to keep the categories we use in motion (Jons, 2006).
Looking at Glass from a posthumanist perspective, this essay has shown how human nature is being affected and re-imagined by such technology. Even if Glass turns out to be a commercial failure, when one hears a co-founder of a tech giant like Google say that “certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off” (Newsweek, 2004), it appears crucial to consider what implications such an assumption may hold for human nature and society if it ever comes to fruition. Glass carries the promise of a myriad of new uses, many of which are yet to be discovered, but all presenting both negative and positive potential. Far from advocating a restrictive, controlling approach, this essay nevertheless suggests that it is crucial to take up an ‘ethics of emergence’ (Thrift, 2003) in order to exploit this potential responsibly.
References
Business Insider (8th of September 2010) ‘Sergey Brin: we want Google to be the third half of your brain’, www.businessinsider.com, retrieved 20th of April 2013 at http://www.businessinsider.com/sergey-brin-we-want-google-to-be-the-third-half-of-your-brain-2010-9
Foreign Policy (9th of January 2004) ‘Transhumanism’, www.foreignpolicy.com, retrieved 20th of April 2013 at http://www.foreignpolicy.com/articles/2004/09/01/transhumanism
Guardian, The (6th of March 2013) ‘Google Glass: is it a threat to our privacy?’, www.guardian.co.uk, retrieved 20th of April 2013 at http://www.guardian.co.uk/technology/2013/mar/06/google-glass-threat-to-our-privacy
New Scientist (7th of April 2013) ‘Google Glass identifies you by your fashion sense’, www.newscientist.com, retrieved 20th of April 2013 at http://www.newscientist.com/article/mg21729075.600-google-glass-app-identifies-you-by-your-fashion-sense.html
New York Times, The (23rd of March 2013) ‘Your phone vs. your heart’, www.nytimes.com, retrieved 20th of April 2013 at http://www.nytimes.com/2013/03/24/opinion/sunday/your-phone-vs-your-heart.html
Mashable (18th of April 2013) ‘What will Google Glass do to our brains?’, www.mashable.com, retrieved 20th of April 2013 at http://mashable.com/2013/04/17/what-will-google-glass-do-to-our-brains/
Newsweek (28th of March 2004) ‘All eyes on Google’, www.thedailybeast.com, retrieved 20th of April 2013 at http://www.thedailybeast.com/newsweek/2004/03/29/all-eyes-on-google.html
Tech Radar (17th of April 2013) ‘Google Glass: what you need to know’, www.techradar.com, retrieved 20th of April 2013 at http://www.techradar.com/news/video/google-glass-what-you-need-to-know-1078114
The Verge (11th of April 2013) ‘OK, Cupid: giving your love life to Google Glass and the hive mind’, www.theverge.com, retrieved 20th of April 2013 at http://www.theverge.com/2013/4/11/4187906/ok-cupid-giving-your-love-life-to-google-glass-and-the-hive-mind
Agar, N. (2007) ‘Whereto posthumanism? The literature reaches a critical mass’, The Hastings Center Report, 37(3), pp. 12-17.
Anderson, K. (2000) ‘“The beast within”: race, humanity, and animality’, Environment and Planning D: Society and Space, 18(3), pp. 301-320.
Badmington, N. (2004) ‘Mapping posthumanism’, Environment and Planning A, 36(8), pp. 1344-1351.
Birke, L. and Michael, M. (1998) ‘The heart of the matter: animal bodies, ethics and species boundaries’, Society and Animals, 6(3), pp. 245-261.
Bostrom, N. (2004) ‘Is posthumanism the world’s most dangerous idea?’, www.betterhumans.com.
Braun, B. (2004) ‘Modalities of posthumanism’, Environment and Planning A, 36(8), pp. 1352-1355.
Castree, N. and Nash, C. (2004) ‘Introduction: posthumanism in question’, Environment and Planning A, 36(8), pp. 1341-1343.
Fukuyama, F. (2002) Our Posthuman Future: Consequences of the Biotechnology Revolution, New York: Farrar, Straus and Giroux.
Graham, E. (2002) ‘Nietzsche gets a modem: transhumanism and the technological sublime’, Literature & Theology, 16(1), pp. 65-80.
Haraway, D. (1991), ‘A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century’ in Simians, Cyborgs and Women: The Reinvention of Nature, New York; Routledge, pp.149-181.
Jons, H. (2006) ‘Dynamic hybrids and the geographies of technoscience: discussing conceptual resources beyond the human/non-human binary’, Social & Cultural Geography, 7(4), pp. 559-580.
Latour, B. (1992) ‘One More Turn After the Social Turn: Easing Science Studies into the Non-Modern World’, in McMullin, E. (ed.) The Social Dimensions of Science, Notre Dame: University of Notre Dame Press, pp. 272-292.
Mann, S., Nolan, J. and Wellman, B. (2003) ‘Sousveillance: inventing and using wearable computing devices for data collection in surveillance environments’, Surveillance and Society, 1(3), pp. 331-355.
Mann, S. (1998) ‘Wearable computing as means of personal empowerment’, http://wearcam.org/icwckeynote.html
Nissenbaum, H. (1998) ‘Protecting privacy in an information age: the problem of privacy in public’, Law and Philosophy, 17(5), pp. 559-596.
Thrift, N. (2003) ‘Closer to the machine? Intelligent environments, new forms of possession and the rise of the supertoy’, Cultural Geographies, 10(4), pp.389-407
Weiser, M. (1993) ‘Ubiquitous computing’, Computer, 26(10), pp. 71-72.
Whatmore, S. (2002) Hybrid Geographies: Natures, Cultures, Spaces, London: SAGE Publications.
Whatmore, S. (2004) ‘Humanism’s excess: some thoughts on the ‘post-human/ist’ agenda’, Environment and Planning A, 36(8), pp. 1360-1363.