
MINA 2013 Symposium Review

Last week I was in Auckland for a couple of days to attend the MINA (Mobile Innovation Network Aotearoa) 2013 Symposium at the Auckland University of Technology. Having just recently arrived in New Zealand, I saw the symposium as a great opportunity to meet researchers and artists working in and around pervasive/locative media, and to see what kinds of mobile media research and praxis are going on here.

The conference kicked off with a fascinating keynote from Larissa Hjorth from RMIT in Melbourne. Hjorth looked at practices surrounding current cultural usages of mobile imaging technologies from an ethnographic perspective, and characterised this as second-generation research in camera phone studies. Whereas the first wave focussed on mobile imaging through the perspectives of networked visuality, sharing/storing/saving, and vernacular creativity, she characterises second-generation camera phone studies as focussing on the notions of emplacement through movement, the prominence of geo-temporal tagging and spatial connectivity, intimate co-presence, and the re-conceptualisation of casual play as ambient play.

My other highlight of the first day was a fantastic session on activism and mobile video practices, which featured papers from Lorenzo Dalvit and Ben Lenzner. Dalvit explored the uploading of users’ mobile phone videos to the tabloid online newspaper The Daily Sun, which provides a public forum for citizens to publish and attract widespread attention to instances of police brutality within South Africa. In particular, Dalvit focussed on a case where police dragged a Mozambican taxi driver to his death through the streets; mobile footage posted to the Daily Sun was used to contradict the official police account that the taxi driver was armed, and was thus pivotal in bringing the policemen in question to trial for their actions. Dalvit also highlighted the utility of audiovisual media in cultural contexts where literacy cannot be assumed as universal, and the ways that the Daily Sun provided a forum for public discussion surrounding the commonplace acts of police brutality which are primarily aimed at impoverished black youths in South Africa.

This was followed by a look at some of Lenzner’s PhD research, which compares the usage of mobile video streaming techniques by US activists such as Tim Pool and the Indian community-activist group India Unheard. Similarly to Dalvit’s South African case study, Pool’s footage of Occupy Wall Street was used in court to quash bogus charges fabricated by police against an Occupy protester, again highlighting the ways that citizen journalism, and in particular video evidence, can provide a powerful tool for constructing counter-narratives to official accounts which are often pure fabrications. Whereas Pool was able to stream video live onto UStream, community video activists working for India Unheard have to travel somewhere to compress and upload material, due to the difference in bandwidth between New York and Mumbai. This forced pause means that they produce activist video which is closer to traditional forms of video activism, providing edited stories rather than a live stream of events. Both these papers were fantastic examples of how increasing access to media production tools provides ways for previously unheard voices to be heard and, within a legal context, to provide very strong evidence contradicting official statements from powerful institutions linked to the state.

Also on the Thursday were really interesting papers from Craig Hight and Trudy Lane. Hight’s paper focussed on the implications of emerging digital video software, and in particular the various ways that numerous forms of consumer/prosumer software are automating increasing amounts of the editing process. The paper outlined a number of fairly new tools, such as Magisto, which claims that it ‘automatically turns your everyday videos into beautifully edited movies, perfect for sharing. It’s free, quick, and easy as pie!’ Within the software you select which clips you wish to use, a song to act as the soundtrack and a title, and Magisto assembles your video for you. While Hight was quite critical of the extremely formulaic videos this process produces, it’s interesting to think about what this does in terms of algorithmic agency and the unique ability of software to make the types of decisions normatively associated only with humans (what Adrian Mackenzie has described as secondary agency).
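
To get a feel for the kind of ‘secondary agency’ at issue, here is a toy sketch of a formulaic auto-editing heuristic. To be clear, this is not Magisto’s actual algorithm (which is proprietary); the clip attributes, scoring rule and three-second shot limit below are all invented, purely to illustrate software making ostensibly editorial decisions.

```python
# A toy auto-editor in the spirit of tools like Magisto: score clips with a
# crude 'interest' heuristic, then trim the best ones to fill the soundtrack.
# All attributes and weights here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Clip:
    path: str
    duration: float  # seconds
    motion: float    # 0-1, detected movement
    faces: int       # number of faces detected

def auto_edit(clips, song_duration):
    """Return a (path, seconds) timeline filling the song, 'best' clips first."""
    scored = sorted(clips, key=lambda c: c.faces * 2 + c.motion, reverse=True)
    timeline, used = [], 0.0
    for clip in scored:
        if used >= song_duration:
            break
        take = min(clip.duration, 3.0, song_duration - used)  # max 3s per shot
        timeline.append((clip.path, take))
        used += take
    return timeline

clips = [Clip("beach.mp4", 12.0, 0.8, 2), Clip("sky.mp4", 30.0, 0.1, 0)]
print(auto_edit(clips, song_duration=5.0))  # [('beach.mp4', 3.0), ('sky.mp4', 2.0)]
```

The formulaic output follows directly from the fixed heuristic, which is precisely Hight’s criticism: every video gets cut to the same recipe.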

Lane, by contrast, is an artist whose recent project A Walk Through Deep Time was the subject of her paper. While the deep time here is not the same as Siegfried Zielinski’s work into mediation and deep time, it does present an exploration of a non-anthropocentric geological temporality, initially realised through a walk along a 457m fence to represent 4.57 billion years of evolution. The project uses an open-source locative platform called Roundware, which provides locative audio with the ability for users to upload content themselves whilst in situ, allowing the soundscape to become an evolving and dynamic entity. The ecological praxis at the heart of Lane’s work was something that really resonated with my interests, and it was great to see that there are really interesting locative art/ecology projects going on here.
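
For readers curious about the mechanics, the core locative-audio idea can be sketched in a few lines: recordings are tagged with the coordinates where participants uploaded them, and playback draws on those within earshot of the listener. This is a generic illustration rather than Roundware’s actual API or data model, and the coordinates and filenames below are invented.

```python
# Minimal sketch of locative audio: return the uploaded recordings within a
# given radius of the listener's position, nearest first. Generic illustration
# only; this is not Roundware's real interface.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

# (lat, lon, file) triples uploaded in situ by participants
recordings = [
    (-36.8485, 174.7633, "harbour_story.ogg"),
    (-36.8522, 174.7680, "market_sounds.ogg"),
]

def nearby_audio(lat, lon, radius_m=100):
    """Recordings within radius_m of the listener, sorted by distance."""
    hits = [(haversine_m(lat, lon, rlat, rlon), f) for rlat, rlon, f in recordings]
    return [f for d, f in sorted(hits) if d <= radius_m]

print(nearby_audio(-36.8487, 174.7640))  # ['harbour_story.ogg']
```

Because new uploads simply join the pool of recordings, the soundscape heard at any given spot keeps changing, which is the evolving, dynamic quality Lane exploits.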

The second day of the symposium opened with a keynote from Helen Keegan from the University of Salford. Keegan’s presentation centred on a unit she had run as an alternate reality game entitled Who is Rufi Franzen? The project was a way of getting students to engage in a curious and critical way with the course, rather than through the traditional modes of learning we encounter within lectures and seminars. It saw the students working together across numerous social media platforms to piece together the clues as to who Rufi was, how he had been able to contact them, and what he wanted. The project culminated with the students being led to the Triangle in Manchester, where they were astonished to see their works projected on the BBC-controlled big screen there. It looked like a great project, and a fantastic experience.

My highlight of the second day was a paper by Mark McGuire from the University of Otago, who presented on the topic of Twitter, Instagram and Micro-Narratives (Mark’s presentation slides are online via a link on his blog and are well worth a look). Taking cues from Henry Jenkins’s recent work on spreadable media, which emphasises the ways that contemporary networked media foregrounds the flow of ideas in easy-to-share formats, McGuire went on to explore the ways that micro-narratives create a shared collaborative experience whereby, through the frequent sharing of ideas and experiences, content creators become entangled within a web of feedback, or creative ecologies, which productively drives the artistic work. Looking at Brian Eno’s notion of an ecology of talent, and applying interdisciplinary notions of connectionist thinking alongside ecological thought and metaphors, McGuire made a convincing case as to why feedback-rich networks provide a material infrastructure which cultivates communities who learn to act creatively together.

There was also a really interesting paper on the second day from Marsha Berry from RMIT, Melbourne, who built upon Hjorth’s notions of emplaced visuality to explore how creative practices and networked sociality are becoming increasingly entangled. Looking in detail at practices of creating retro-aestheticised images using numerous mobile tools, including Instagram and retro camera filters, Berry explored these images as continuity with analogue imaging, as a form of paradox, as Derridean hauntology (a nostalgia for a lost future), and finally as the impulse to create poetic imagery, highlighting that for teenagers today there is no nostalgia for 1970s imaging technologies and techniques which pre-date their birth.

Max Schleser and Daniel Wagner also presented interesting papers, looking at projects they had respectively been running which used mobile phone filmmaking. Schleser outlined the 24 Frames 24 Hours project for workshop videos, which features a really nice UI designed by Tim Turnidge and looks like a great tool for integrating video, metadata and maps. Schleser explored how mobile filmmaking is important to the emergence of new types of interactive documentary, touching on some of the conceptual material surrounding iDocs. Wagner presented the evolution of ELVSS (Entertainment Lab for the Very Small Screen), a collaborative project which has seen Wagner’s Unitec students working alongside teams from AUT, the University of Salford, Bogota and Strasbourg to collectively craft video-based mobile phone projects. The scale of the project is really quite inspiring in terms of thinking about what it is possible to create through global networked interdisciplinary collaborations within higher education today.

Overall, I really enjoyed attending MINA 2013. The community seems friendly, relaxed and very welcoming, the standard of presentations, artworks and keynotes was really high, and the event has really helped me to feel that there are academic networks within and around New Zealand involved in really interesting work. Roll on MINA 2014.

This is a draft version of a paper which was published in NYX, Vol. 7: Machines, in 2012.


Contemporary analyses of the relationships between humans and machines − ways that machines influence the scale, pace, and patterns of socio-technical assemblages − tend to focus upon the effects, impacts, and results of the finished products: the packaged information processing commodities of digital culture. This work is undeniably important in demarcating the multiple and complex ways that human symbiosis with machinic prostheses alters cognitive capacities and presents novel, distributed, peer-to-peer architectures for economic, political, and socio-technical networks. However, existing discourses surrounding machines and digital culture largely fail to explore the wider material ecologies implicated in contemporary technics.


Ecological analysis of machines seeks to go beyond exploring marketable commodities, instead examining the ecological costs involved in the reconfiguration of ores, metals, and minerals into smartphones and servers. This involves considering the systems implicated in each stage of the life-cycle of contemporary information-processing machines: the extraction of materials from the earth; their refinement and processing into pure elements, compounds, and then components; the product-manufacturing process; and finally what happens to these machines when they break or are discarded due to perceived obsolescence. At each stage of this life-cycle, and in the overall structure of the ecology of machines, there are ethical and political costs and problematics. This paper seeks to outline examples of these impacts and consider several ways in which they can be mitigated.


Hardware is not the only ecological scale associated with machines: flows of information and code, of content and software, also comprise complex, dynamic systems open to flows of matter and energy; however, issues surrounding these two scales are substantially addressed by existing approaches to media and culture. We can understand scale as a way of framing the mode of organisation evident within the specific system being studied. The notion of ecological analysis approaching different scales stems from the scientific discipline of ecology and is transposed into critical theory through the works of Gregory Bateson and Felix Guattari. Within the science of ecology, scale is a paramount concern, with the discipline approaching several distinct scales: the relationships between organism and environment, populations (numerous organisms of the same species), communities (organisms of differing species), and ecosystems (comprising living and nonliving elements within a geographical location).i No particular scale is hierarchically privileged, with each nested scale understood as crucial to the functioning of ecosystem dynamics.


The notion of multiple, entangled scales is similarly advanced by Bateson, who presents three ecologies − mind, society and environment.ii Key to understanding their entangled − and thus inseparable − nature is Bateson’s elaboration of distributed cognition, whereby the pathways of the mind are not reducible to the brain, nervous systems, or confines of the body, but are immanent in broader social and environmental systems. The human is only ever part of a thinking system which includes other humans, technology and an environment. Indeed, Bateson contends that arrogating mental capacity exclusively to individuals or humans constitutes an epistemological error, whose wrongful identification of the individual (life-form or species) as the unit of ecological survival necessarily promotes a perspective whereby the environment is viewed as a resource to be exploited, rather than as the source of all systemic value.iii


Guattari advances Bateson’s concepts in The Three Ecologies,iv expounding a mode of political ecology which has little to do with the notion of preserving ‘nature’, instead constructing an ethical paradigm and political mobilisations predicated upon connecting subjective, societal and environmental scales in order to escape globalised capitalism’s focus upon economic growth as the sole measure of wealth. According to Guattari, only by implementing an ethics which works across these three entangled ecologies can socially beneficial and environmentally sustainable models of growth be founded. Ecology, then, presents a way of approaching machines which decentres the commonly encountered anthropocentrism that depicts machines (objects) assisting humans (subjects), instead encouraging us to consider ourselves and technologies as nodes within complex networks which extend across individual, social, environmental, and technological dimensions. Correspondingly, ecology requires a shift when considering value and growth: moving from the economic-led anthropocentric approach characteristic of neoliberalism, to valuing the health and resilience of ecosystems and their human and nonhuman, living and nonliving components. Consequently, applying an ecological ethics may prove useful in considering ways to mitigate many of the deleterious material impacts of the contemporaneous ecology of machines.


This paper will proceed by exploring the contemporary ecology of hardware, examining ecological costs which are incurred during each phase of the current industrial production cycle. Additionally, the overall structure of this process will be analysed, alongside a conclusion which considers whether current iterations of information processing machines present opportunities for the implementation of a mode of production within which the barriers between producers and consumers are less rigid, allowing alternative ethics and value systems to become viable.


The initial stages in the contemporary industrial production process are resource extraction and processing. A vast array of materials is required for contemporary microelectronics manufacturing, including: iron, copper, tin, tungsten, tantalum, gold, silicon, rare earth elements and various plastics. Considering the ways that these materials are mined connects information processing technologies to the flows of energy and matter that comprise the globalised networks of contemporary markets and trade systems, refuting claims that information processing technologies are part of a virtual, cognitive, or immaterial form of production.


One environmentally damaging practice currently widely employed is open-cast mining, whereby the topmost layers of earth are stripped back to provide access to the ores underneath, whilst whatever ecosystem previously occupied the surface is destroyed. Mining also produces ecological costs including erosion and the contamination of local groundwater: in Picher, Oklahoma, for example, lead and zinc mines left the area so badly polluted, and at such risk of structural subsidence, that the Environmental Protection Agency declared the town uninhabitable and ordered an evacuation.v Another series of ecological costs associated with resource extraction surrounds conflict minerals, an issue increasingly being acknowledged thanks to the activities of NGOs and activists publicising the links between conflict minerals in the Democratic Republic of Congo (particularly coltan, the Congolese tantalum-containing ore) and information technologies (particularly mobile phones). Whilst coltan and other conflict minerals were not a primary factor in the outbreak of civil/regional conflict in the DRC, which has led directly or indirectly to the deaths of over five million people over a dozen years, as the conflict wore on and the various factions required revenue-raising activities to finance their continuing campaigns, conflict minerals ‘became a major reason to continue fighting… the Congo war became a conflict in which economic agendas became just as important as other agendas, and at times more important than other interests.’vi Factions including the Congolese army, various rebel groups and invading armies from numerous neighbouring states fiercely contested mining areas, as controlling mines allowed the various armed groups to procure minerals which were then sold for use in microelectronics in order to finance munitions, enabling the continuation of military activities.

The role of the global microelectronics industry in financing the most brutal conflict of the last twenty years reveals the connections between ‘virtual’ technologies and the geopolitics of globalised capitalism.


Engaging with the ecology of machines requires consideration of the ethical and political implications of the consequences wrought by current patterns of consumption upon people and ecosystems geographically far removed from sites of consumption, onto whom the brunt of negative externalities generated by current practices frequently falls. In this case the costs of acquiring cheap tantalum – a crucial substance in the miniaturisation of contemporary microelectronics – are not borne by consumers or corporations, but by people inside an impoverished and war-ravaged central African state.


Once extracted, materials are refined into pure elements and compounds, transformed into components, and then assembled into products during the manufacturing phase of the production process. Since the late 1980s there has been a shift away from the corporations who brand and sell information technology hardware incorporating manufacturing into their own operations. Instead, a globalised model now dominates the industry, whereby manufacturing is primarily conducted by subcontractors in vast complexes concentrated in a handful of low-cost regions, primarily in south-east Asia.vii This can be understood within the broader context of changes to the global system of industrial production, whereby manufacturing is increasingly handled by subcontractors in areas where labour costs are low and there is no rigorously enforced legislation protecting the rights of workers or local ecosystems. Consequently, this transition has been accompanied by marked decreases in wages and safety conditions, alongside increased environmental damage, as companies externalise costs onto local ecosystems.viii


Information technology sweatshops are receiving increasing attention and have begun to puncture public consciousness, partially as a consequence of campaigning from NGOs, and partially due to a spate of suicides among young migrant workers at Foxconn’s Longhua Science and Technology plant in Shenzhen, China. Fourteen workers aged 18-25 jumped off factory roofs to end their lives between January and May 2010, to escape an existence spent working 60-80 hours a week and earning around US$1.78 per hour manufacturing information processing devices such as the Apple iPad for consumers elsewhere in the world.


Once information processing technologies have been discarded, they become part of the 20-50 million tonnes of e-waste produced annually,ix much of which contains toxic substances such as lead, mercury, hexavalent chromium and cadmium. Whilst it is illegal for most OECD nations to ship hazardous or toxic materials to non-OECD countries, and illegal for non-OECD nations to receive hazardous wastes,x vast quantities of e-waste are shipped illicitly, with e-waste routinely mislabelled as working goods for resale, circumventing laws such as the Basel Convention and the EU’s Waste Electrical and Electronic Equipment (WEEE) Directive.xi Estimates from 2006 suggest that 80% of North America’s and 60% of the EU’s electronic wastes were being exported to regions such as China, India, Pakistan, Nigeria and Ghana.xii Essentially, wealthy nations externalise the ecological costs of their toxic waste onto impoverished peoples in the global south.


Once e-waste arrives in these areas it is ‘recycled’: machines are manually disassembled by workers often earning less than US$1.50 per day,xiii who implement a variety of techniques for recovering materials which can be resold. For example, copper is retrieved from wiring by burning off the plastic casings, a process which releases brominated and chlorinated dioxins and furans: highly toxic materials which persist in organic systems, meaning that workers are poisoning themselves and local ecosystems. An investigation by the Basel Action Network reveals that:


Interviews reveal that the workers and the general public are completely unaware of the hazards of the materials that are being processed and the toxins they contain. There is no proper regulatory authority to oversee or control the pollution nor the occupational exposures to the toxins in the waste. Because of the general poverty people are forced to work in these hazardous conditions.xiv


This activity is often subsumed under the rhetoric of ‘recycling’, with its associated connotations of environmental concern; the reality, however, is that international conventions and regional laws are broken in order to reduce the economic costs of treating the hazardous remains of digital hardware.


The systematic displacement of negative externalities minimises the cost of commodities for consumers and improves profitability for corporations, but in doing so it makes the epistemological error delineated by Bateson and Guattari regarding the wrongful identification of value within systems. Creating systems designed to maximise benefits for the individual consumer − or individual corporation − while externalising costs onto the social and ecological systems which support those individual entities ultimately results in the breakdown of the systems which consumers and corporations rely upon. Although such strategies create short-term profitability, their neglect of longer-term consequences breeds systemic instabilities which will eventually return to haunt these actors:


            If an organism or aggregate of organisms sets to work with a focus on its own survival and thinks that is the way to select its adaptive moves, its ‘progress’ ends up with a destroyed environment. If the organism ends up destroying its environment, it has in fact destroyed itself… The unit of survival is not the breeding organism, or the family line or the society… The unit of survival is a flexible organism-in-its-environment.xv


There have, however, been numerous interventions by NGOs, activists, and concerned citizens who have employed the guilty machines at issue to address and alter these deleterious effects. The deployment of social media, for instance, to raise awareness of these issues and to pressure corporations and governments to alter practices and laws highlights what Bernard Stiegler and Ars Industrialis describe as the pharmacological context of contemporary technics:xvi xvii machines are simultaneously the poison and the remedy to this poison. Thinking in terms of poison and toxicity is particularly cogent with reference to the material impacts of digital technologies, whereby what can otherwise appear to be a metaphorical way of approaching attention and desire amongst consumers presents an insightful analysis of the material impacts which accompany the shifts in subjectivity that, Stiegler argues, arise from changing technological environments.


The actions implied by this approach initially seem entirely inadequate given the scope of the problems: ‘retweeting’ messages and ‘liking’ pages in the face of serious social and ecological problematics that relate to the dynamics of globalised capitalism appears laughable. However, the impact of collective action made possible by networked telecommunications has been felt in numerous cases. Wages at Foxconn’s plant in Shenzhen rose from 900 to over 2,000 yuan in less than a year in response to sustained pressure mobilised by assemblages of humans and machines, many of the latter having been assembled within that very factory. In the face of widespread networked protests, Apple cancelled a contract with another Chinese subcontractor because of its employment of child labour.xviii Lobbying by NGOs such as Raise Hope For Congo,xix supported by a networked activist community, has convinced the US Congress to examine legislating to phase out the use of conflict minerals.


The mobilisation of attention via these socio-technological networks effects change in two primary ways: through raising awareness and altering vectors of subjectivity amongst consumers, and by subsequently mobilising this attention as public opinion to pressurise governmental and corporate actors into altering practices. In the face of this type of networked action, governments are compelled to avoid the appearance of supporting unethical practices. Corporations, as fabrication-free entities which design and market, but do not manufacture, products are faced with the potential toxification of their brand. Corporations such as Apple and Dellxx have demonstrated a willingness to take remedial action, albeit often in a limited way.xxi


There are additional issues raised by the structure of the flows of matter associated with the system in its entirety. The industrial model of production involves a near-linear flow throughout the stages of a machine’s lifespan: resources are extracted, processed, used, and then discarded. Recycling is partial, leading to the steady accumulation of ‘waste’ matter in landfills. By contrast, when examining how ecosystems work, we are confronted with cyclical processes featuring multiple negative feedback loops. These cycles create sustainable processes: there is no end stage where waste accumulates, as the outputs of processes become inputs for other nodes in the network, allowing systems to run continuously for millions of years. Feedback loops within these systems build resilience, so minor perturbations do not create systemic instability or collapse; only when the system faces a major disturbance − a substantial alteration to the speeds or viscosities of ecological flows which exceeds adaptive capacity − does collapse occur. In the past, ecological collapse and planetary mass extinction events have been triggered by phenomena such as an asteroid striking the planet; today, a mass extinction event and a new geological age, the Anthropocene,xxii are under way because of anthropogenic industrial activity.
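
The structural difference between the near-linear pipeline and a closed loop can be made concrete with a toy material-flow model. The numbers below are invented rather than drawn from any real supply chain; the point is simply that without a feedback loop landfilled ‘waste’ grows without bound, while closing the loop caps both extraction and landfill.

```python
# Toy material-flow model: a production system meeting fixed annual demand,
# with a variable fraction of discarded material fed back in as input.
# Invented numbers, for illustration only.
def run(years, recycle_rate, demand=100.0):
    """Return total extracted material and landfilled waste (tonnes)."""
    extracted = landfill = recovered = 0.0
    for _ in range(years):
        virgin = demand - recovered  # recovered material offsets new extraction
        extracted += virgin
        discarded = demand           # every product is eventually discarded
        recovered = discarded * recycle_rate
        landfill += discarded - recovered
    return extracted, landfill

for rate in (0.0, 0.2, 0.95):
    ext, waste = run(years=30, recycle_rate=rate)
    print(f"recycle {rate:.0%}: extracted {ext:6.0f} t, landfilled {waste:6.0f} t")
```

At a 0% recovery rate the model behaves like the linear industrial pipeline, accumulating waste year on year; as the loop closes, the flows approach the cyclical, low-waste pattern of the ecosystems described above.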


Given the state of play with reference to climate change, loss of biodiversity, and the associated impacts upon human civilisations, urgent action is required to reconfigure the industrial production process along alternatives based on biomimicry: cyclical processes resembling closed-loop systems such as the nitrogen cycle. This methodology has been adopted by the cradle-to-cradle movement, which advocates that the waste from one iteration of processes should become the nutrients, or food, for successive iterations. Products are not conceived of as commodities to be sold and discarded, but as valuable assets to be leased for a period before their materials are transformed into other, equally valuable products. A cradle-to-cradle methodology also seeks to remove toxic substances from goods during the design process, entailing that there is no subsequent conflict of interest between cheap-but-damaging and responsible-but-expensive disposal at a later date.


Another movement which points towards alternative methods of producing machines is that of open-source hardware (OSH) communities, which apply an ethic derived from free/open-source software (FOSS) development and implement homologous processes for designing and producing hardware. Whereas FOSS involves the distributed collaboration of self-aggregating peers using the hardware/software/social infrastructures of the Internet to create software − a non-rival good which can be directly created and shared by exchanging digital data − OSH communities cannot collectively create the finished products, but instead share designs for how to make machines and source the requisite parts. Operating in this manner enables a mode of producing rival goods, including information technology hardware, which is led by user innovation and the desires and ethics of the producer/user community, rather than by profit-orientated corporations, who have a vested interest in creating products which rapidly become obsolete and require replacement. OSH presents an example of the democratisation of innovation and production,xxiii and a rebuttal of the contention that peer-to-peer systems are only relevant to non-rival, informational ventures, whilst also presenting one way of approaching Stiegler’s concept of an economy of contribution.


Stiegler contends that the particular affordances of contemporary computing technologies enable the construction of a new economy which elides the distinction between producers and consumers. According to Stiegler, free software exemplifies a historically novel methodology which is predicated on communal labour and characterised by the formation of positive externalities.xxiv Whereas the contemporary ecology of machines is dominated by a model based on an econocentrism which advocates the externalisation of any possible costs onto social and environmental systems that are seen as ‘outside’ of economic concern and therefore valueless, Stiegler contends that there exists the potential to construct an alternative ecology of machines based upon broader conceptions of growth, resembling the ecological value systems advocated by Bateson and Guattari.


While the pharmacological context of technology entails that an economy of contribution is by no means certain, or even probable, a reorientation of the ecology of machines is crucial if we are to escape the spectre of ecological collapse. The current system of producing the material infrastructure of digital cultures is ecologically unsustainable and socially unjust, with problems at the scales of the structure of the production process as a whole, and within the specificities of each constituent stage. Only through a sustained engagement with the material consequences of information technologies, involving an eco-ethically inflected application of these machines themselves, may equitable alternatives based around contribution rather than commodities supersede the destructive tendencies of the contemporary ecology of machines.


i Michael Begon, Colin Townsend and John Harper, Ecology: From Individuals to Ecosystems, 4th Edition, Malden MA and Oxford: Blackwell Publishing, 2006

ii Gregory Bateson, Steps To An Ecology of Mind, Northvale, New Jersey: Jason Aronson Inc, 1972, pp435-445

iii Gregory Bateson, Steps To An Ecology of Mind, Northvale, New Jersey: Jason Aronson Inc, 1972, p468

iv Felix Guattari, The Three Ecologies, trans. Ian Pindar and Paul Sutton, London: Athlone Press, 2000

v John D. Sutter, Last Man Standing at Wake for Toxic Town, CNN, 2009, available at http://articles.cnn.com/2009-06-30/us/oklahoma.toxic.town_1_tar-creek-superfund-site-picher-mines?_s=PM:US#cnnSTCText last visited 22/03/2012

vi Michael Nest, Coltan, Cambridge: Polity Press, 2011 p76

vii Boy Lüthje, The Changing Map of Global Electronics: Networks of Mass Production in the New Economy, in Ted Smith, David Sonnenfeld and David Naguib Pellow (eds), Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Philadelphia: Temple University Press, 2006, p22

viii Rohan Price, Why ‘No Choice is a Choice’ Does Not Absolve the West of Chinese Factory Deaths, Social Science Research Network, 2010, available at SSRN: http://ssrn.com/abstract=1709315 (last visited 15/03/2012)

ix Electronics Takeback Coalition, Facts and Figures on E-Waste and Recycling, 2011, available at http://www.electronicstakeback.com/wpcontent/uploads/Facts_and_Figures_on_EWaste_and_Recycling.pdf last visited 15/03/2012

x Under the Basel Convention, the transfer of toxic substances from OECD nations to non-OECD nations is forbidden. However, the USA, Canada and Australia refused to sign the convention, and so it remains legal for these states to export hazardous wastes, although it is illegal for the non-OECD countries to which they send hazardous wastes to receive them

xi The WEEE Directive, passed into EU law in 2003 and transposed into UK law in 2006, states that all e-waste must be safely disposed of within the EU at an approved facility, and that consumers can return used WEEE products when they purchase new products

xii Jim Puckett, High-Tech’s Dirty Little Secret: Economics and Ethics of the Electronic Waste Trade, in Ted Smith, David Sonnenfeld and David Naguib Pellow (eds), Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Philadelphia: Temple University Press, 2006, p225

xiii Jim Puckett and Lauren Roman, E-Scrap Exportation, Challenges and Considerations, Electronics and the Environment, 2002 Annual IEEE International Symposium, available at http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1003243 last visited 15/03/2012

xiv Basel Action Network and Silicon Valley Toxics Coalition, Exporting Harm: The High-Tech Trashing of Asia, 2002, p26, available at http://www.ban.org/E-waste/technotrashfinalcomp.pdf last visited 15/03/2012

xv Gregory Bateson, Steps To An Ecology of Mind, Northvale, New Jersey: Jason Aronson Inc, 1972, p457

xvi Bernard Stiegler, For a New Critique of Political Economy, Cambridge: Polity, 2010

xvii Ars Industrialis, Manifesto 2010, 2010, available at http://arsindustrialis.org/manifesto-2010 last visited 17/03/2012

xviii Tania Branigan, Apple Report Reveals Child Labour Increase, The Guardian, 15 February 2011, available at http://www.guardian.co.uk/technology/2011/feb/15/apple-report-reveals-child-labour last visited 18/03/2012

xix http://www.raisehopeforcongo.org/ last visited 15/03/12

xx David Wood and Robin Schneider, Toxicdude.com: The Dell Campaign, in Ted Smith, David Sonnenfeld and David Naguib Pellow (eds), Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Philadelphia: Temple University Press, 2006, pp285-297

xxi For example, the similarities between the labour rights violations found in reports on Foxconn in Shenzhen in 2006 and 2012 suggest that Apple’s claims in 2006 that they would take action to redress these violations were public relations rhetoric not substantiated by actions

xxii Jan Zalasiewicz, Mark Williams, Will Steffen and Paul Crutzen, The New World of the Anthropocene, 2010, Environmental Science & Technology 44 (7): 2228-2231, doi:10.1021/es903118j

xxiii Eric von Hippel, Democratizing Innovation, Cambridge MA: MIT Press, 2005

xxiv Bernard Stiegler, For a New Critique of Political Economy, Cambridge: Polity, 2010, p129

The Global Warming Pause

Ahead of the upcoming IPCC report into the global climate and climate change, the news agenda seems to have been largely dominated by stories asking why global warming has paused for the last 15 years (see the BBC, the BBC again, the torygraph and the NZ Herald, among countless other examples).

A substantial part of this seems to be the repetition of familiar claims: that 1998 was the hottest year on global record, and that if global warming scientists were right there is no way that we should not have seen a hotter year during the past 15 years. Hence, the argument goes, climate change has paused, the models and data suggesting that human fossil fuel emissions were to blame for late 20th century warming were wrong, and consequently any argument for restricting emissions in future is null and void.

Which of course ought to lead to the question: who says that 1998 was the hottest year on record? The answer to this is somewhat complicated, but also somewhat revealing. It ain’t NASA, who run GISTEMP (the Goddard Institute for Space Studies Surface Temperature Analysis) and who have 2010 as the hottest year on record followed by 2005, with 9 of the 10 hottest years occurring after the year 2000 (1998 is the only pre-2000 year in that list). It also isn’t NOAA (the US National Oceanic and Atmospheric Administration), who compile a global temperature record at the National Climatic Data Center (NCDC), and whose data again places 2010 as the hottest year on record, followed by 2005, with 1998 in third, and 9 of the hottest 10 years on record occurring after the year 2000 (i.e. after global warming has allegedly paused). Which leaves the UK Met Office’s Hadley Centre and the University of East Anglia’s Climatic Research Unit, who jointly compile the HadCRUT record. The CRU is of course the unit which was the subject of the Climategate faux controversy, in which sceptics hacked emails and published some excerpts from private correspondence out of context, claiming fraud and data manipulation and generating global headlines; numerous independent investigations subsequently found no evidence of wrongdoing. The latest version of this temperature series is HadCRUT4v, which again shows that 2010 was the hottest year on record, followed by 2005, followed by 1998.

So where does the claim that 1998 was the hottest year come from? Well, HadCRUT4v is the latest and most accurate temperature record maintained by the Met Office and CRU (for a detailed explanation of what’s changed, look here). If we ignore that and instead use the previous version, HadCRUT3v, then, and only then, does 1998 appear to be the warmest year on record. So why did this old record suggest a different year to the NASA and NCDC records (and indeed the latest version of the CRU record)? The main reason is the different methods used to generate global temperatures. None of these institutions are able to measure the temperature in every place in the world; they use stations in various locations, and the places with the fewest stations tend to be the polar regions (where there also tend to be the fewest people). And one of the things we know quite well is that the Arctic has been the fastest-warming region on the planet. Whereas GISTEMP interpolates values between measured locations in the Arctic, HadCRUT3v left them blank as unknown, which introduced a cold bias into the dataset compared with the others, and explains why it has been replaced by a dataset which features a greater number of stations and correlates much more strongly with the other datasets.
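
A stylised sketch shows why the blank cells matter. The three latitude bands and anomaly values below are invented for illustration (real records use thousands of gridded cells), but the mechanism is the same: dropping a fast-warming region from an area-weighted average pulls the ‘global’ figure down.

```python
# Stylised illustration of coverage bias: omit the (fast-warming) Arctic from
# an area-weighted global mean. Invented numbers, not real station data.
bands = {                       # band: (anomaly in C, fraction of globe)
    "tropics":       (0.4, 0.40),
    "mid-latitudes": (0.5, 0.53),
    "arctic":        (2.0, 0.07),  # fastest warming, fewest stations
}

def global_mean(include_arctic):
    """Area-weighted mean anomaly over the bands actually covered."""
    items = [(a, w) for name, (a, w) in bands.items()
             if include_arctic or name != "arctic"]
    total = sum(w for _, w in items)
    return sum(a * w for a, w in items) / total

print(f"Arctic included: {global_mean(True):.2f} C")   # 0.57
print(f"Arctic omitted:  {global_mean(False):.2f} C")  # 0.46 -- a cold bias
```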

So the ‘pause’ in climate change is something that only exists if you exclusively look at a now obsolete dataset with a known cold bias, generated by a group whom those using this data have previously claimed to be frauds. It also requires ignoring the fact that 1998 featured a super El Niño which had a dramatic short-term effect on global weather – hence the other 9 of the 10 hottest years on record all occurring since the year 2000. If you used 1997 or 1999 as the start date, there wouldn’t appear to be any pause in any dataset (outdated or otherwise), but cherry-picking the year when specific short-term conditions made things abnormally hot, added to cherry-picking a now obsolete dataset, allows sceptics to make the ‘global warming has paused’ argument (see this excellent Skeptical Science post for details on cherry-picking).
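
The start-year effect is just as easy to demonstrate. The series below is synthetic (a steady 0.2°C-per-decade trend plus an invented one-off spike in 1998 standing in for the super El Niño), so the numbers are illustrative only, but fitting a least-squares trend from different start years shows how beginning at the spike depresses the apparent warming rate.

```python
# Toy demonstration of start-year cherry-picking on synthetic 'temperatures':
# a steady 0.02 C/yr trend plus an invented +0.25 C spike in 1998.
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1990, 2014))
temps = [0.02 * (y - 1990) + (0.25 if y == 1998 else 0.0) for y in years]

# The spike drags down any trend that starts on it; starting from 1999,
# the underlying 0.2 C/decade reappears.
for start in (1997, 1998, 1999):
    xs = [y for y in years if y >= start]
    ys = [t for y, t in zip(years, temps) if y >= start]
    print(f"trend from {start}: {ols_slope(xs, ys) * 10:+.2f} C/decade")
```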

So why are so many mainstream media outlets focussing upon this as the main story in the lead-up to the IPCC report? Probably because it’s a more sensationalist and conflict-driven story than one which reads ‘science has been slowly progressing, turning a 90% confidence in predictions in 2007 into a 95% confidence by 2013’, allied with a big PR drive from a number of the main players in the climate denial industry.


I’ve just had an article published as part of the spring/summer edition of Necsus, the European Journal of Media Studies. Necsus is an open access journal, so you can find the full text HERE. My text is a look at how notions of scale and entanglement can productively add to media ecologies as an emergent way of exploring media systems. It looks at case studies of Phone Story and Open Source Ecology, and examines how in both cases a multiscalar approach which looks across content, software and hardware can be productively applied.

The journal also features an interview with Toby Miller and Richard Maxwell, the authors of Greening the Media, a book released last year which is one of the first full-length works to look at issues pertaining to the ecological costs of media technologies (both old and new), and a series of interesting essays which look at the intersection of media/film studies and ecology from diverse perspectives. Outside of the green material, there are essays by Sean Cubitt (who was my PhD external examiner a few months back) and Jonathan Beller which are well worth a read.


iDocs 2012

Last year’s iDocs conference at the Watershed in Bristol was a lively and engaging event which looked at a range of critical, conceptual and practical issues around the emerging field of interactive documentary. It focused on several key themes surrounding the genre: participation and authorship, activism, pervasive/locative media and HTML 5 authoring tools.

The conference featured a number of practitioners involved in fantastic projects, such as Jigar Mehta of 18 Days in Egypt; Brett Gaylor, who made the excellent RiP!: A Remix Manifesto and is now at Mozilla working on their Popcorn Maker, an HTML 5-based JavaScript tool for making interactive web documentaries; and Kat Cizek (via Skype), whose Highrise project is well worth a look. There were also more theoretically inflected contributions from the likes of Brian Winston, Mandy Rose, Jon Dovey and Sandra Gaudenzi (among many others), which made for a really stimulating couple of days.

The Digital Cultures Research Centre at UWE asked me to document the event and produce a short video summary, and the video above is the outcome of that.

Changes


The last few months have been a whirlwind of exciting changes.

Firstly, I’ve now completed my PhD. I had my viva on March 25th, with Sean Cubitt from Goldsmiths as my external examiner and Mark Jackson from the Geography department as my internal. I’m glad to say that we had a lively and interesting discussion, and at the end of it they let me know that I had passed with the minor end of minor corrections (basically a bunch of typos and a couple of references missed out of the bibliography). Since then I’ve completed those corrections and submitted the final version of the thesis to the library, so I’m now Dr Taffel!

In other exciting news, about a week after the viva I was flown out to New Zealand for an interview at Massey University in Palmerston North. I had a wonderful time out in NZ, catching up with some old friends who left the UK for Nelson a while ago, and then seeing a few bits of the stunning New Zealand scenery (pics on Flickr). I’m very pleased to say that Massey have offered me a job, so I’m currently in the process of sorting out a visa and relocation logistics, and as of July I’ll be a lecturer in Media Studies (Media Practice) at Massey.

It’s hugely exciting to be moving to a new country, a new job and a new lifestyle. I’m also slightly blown away by how quickly this has all happened: I know a lot of hugely talented people who’ve spent a fair amount of time on PhD corrections and then trying to find that elusive first full-time academic post. For it all to have come together in a couple of months is amazing, and so I’d like to say a huge thank you to everyone who’s helped me get to where I am now, especially after being in a really low place a few years back after the bike accident and the prolonged period of brokenness which followed.

This is a brief write-up of a talk I gave at the Cube Microplex last night as part of a night co-organised by Permanent Culture Now, Bristol Indymedia and Bristol Radical Film Festival. The night itself was an interesting mix, with the film This Land is Our Land kicking things off, followed by my inchoate ramblings, and Mike from PCN reading a text about commons, sustainability and land.

Whilst This Land is Our Land presents a really useful introduction to the notion of the commons – demarcating a range of types of commons, from communally managed land, through ‘natural resources’ such as air and water, to public services and the Internet – I think it’s worth taking a step back and considering whether or not classifying these phenomena as the same thing is really all that useful. Whilst none of them are forms of private property, they do exhibit some differing characteristics which are worth further explication.

The first mode of commons I’d like to discuss is the model of common land – what we could think of as a pre-industrial mode of commons, albeit one which still exists today through our shared ownership of and access to things like air: land which was accessible for commoners to graze cattle or sheep, or to collect firewood or cut turf for fuel. Anyone had access to this communal resource and there was no formal hierarchical management of the common land – no manager or boss who ensured that no one took too much wood or grazed too many sheep on the land (although there did exist arable commons where lots were allocated on an annual basis). So access to and ownership of this communal resource was distributed, and management was horizontal rather than hierarchical, but access effectively depended upon geographical proximity to the site in question.

A second mode of commons is that of the public service, which we could conceptualise as an industrial model of commonwealth. Consider the National Health Service in the UK: unlike common land, this was a public service designed to operate on a national scale, for the common good of the approximately 50 million inhabitants of the UK. In order to manage such a large-scale, industrial operation, logic dictated that a strict chain of managerial hierarchy be established to run and maintain the health service – simply leaving the British population to self-organise the health service would undoubtedly have been disastrous.

This appears to be a case which supports the logic later espoused by Garrett Hardin in his famed 1968 essay The Tragedy of the Commons, in which Hardin, an American ecologist, forcefully argued that the model of the commons could only be successful in relatively small-scale endeavours, and that within industrial society it would inevitably lead to ruin, as individuals sought to maximise their own benefit whilst overburdening the communal resource. Interestingly, Hardin’s central concern was actually overpopulation, and he argued in the essay that ‘the only way we can preserve and nurture other, more precious freedoms, is by relinquishing the freedom to breed’. Years later he would suggest that it was morally wrong to give aid to famine victims in Ethiopia, as this simply encouraged overpopulation.

More recent developments, however, have shown quite conclusively that Hardin was wrong: the model of the commons is not doomed to failure in large-scale projects. In part this is because Hardin’s model of the commons was predicated on a complete absence of rules – it was not a communally managed asset but a free-for-all – and in part it can be understood as a result of the evolution of information processing technologies, which have revolutionised the ways in which distributed access, project management and self-organisation can occur. This contemporary mode of the commons, described by Yochai Benkler and others as commons-based peer production, or by other proponents simply as peer-to-peer (P2P), resembles aspects of the distributed and horizontal access characteristic of pre-modern commons, but allows access to these projects on a nonlocal scale.

Emblematic of P2P processes have been the Free and Open Source Software (FOSS) and Creative Commons movements. FOSS projects often include thousands of workers who cooperate on making a piece of software which is then made readily available as a form of digital commons, unlike proprietary software, which seeks to restrict access to a good whose cost of reproduction is effectively zero. In addition to the software itself, the source code of the program is made available, crucially meaning that others can examine, explore, alter and improve upon existing versions of FOSS. Popular examples of FOSS include WordPress – which is now used to create a huge share of new websites, as it allows users with little technical coding ability to create complex and stylish participatory websites – the web browsers Firefox and Chromium (the open-source base of Google’s Chrome), and the combination of Apache (web server software) and Linux (operating system), which together form the back end for most of the servers which host World Wide Web content.

What is really interesting is that in each of these cases a commons-based approach has been able to economically outcompete proprietary alternatives – which in each case have had huge sums of money invested in them. The prevailing economic logic throughout industrial culture – that hierarchically organised private companies were the most effective and efficient producers of reliable and functional goods – was shown to be wrong. A further example which highlights this is Wikipedia, the online open-access encyclopaedia which according to research is not only the largest repository of encyclopaedic knowledge, but for scientific and mathematical subjects is also the most detailed and accurate. Had you said 15 years ago that a disparate group of individuals, freely cooperating in their spare time over the Internet and evolving community guidelines for moderating content which anyone could alter, would be able to create a more accurate and detailed informational resource than a well-funded, established professional company (say, Encyclopaedia Britannica), most economists would have laughed. But again, the ability of people to self-organise over the Internet, based on their own understanding of their interests and competencies, has been shown to be a tremendously powerful way of organising.

Of course there are various attempts to integrate this type of crowd-sourced P2P model into new forms of capitalism – it would be foolish to think that powerful economic actors would simply ignore the hyper-productive aspects of P2P. But for people interested in commons and alternative ways of organising, a lot can be taken from the successes of FOSS and creative commons.

Now where some of this gets really interesting is in the current moves towards Open Source Hardware (OSH), sometimes referred to as maker culture, where we move beyond simply talking about software or digital content which can be entirely shared over telecommunications networks. OSH is where the design information for various kinds of device is shared. Key amongst these are 3D printers, things like RepRap, an OSH project to design a machine allowing individuals to print their own 3D objects. Users simply download 3D Computer-Assisted-Design (CAD) files, which they can then customise if they wish, before hitting a print button – just as one would print a Word document, except that the information is sent to a 3D rather than a 2D printer. Rather than relying on a complex globalised network whereby manufacturing largely occurs in China, this empowers people to start making a great deal of things themselves. It reduces reliance on big companies to provide the products that people require in day-to-day life, and so presents a glimpse of a nascent future in which most things are made locally, using a freely available design commons. Rather than relying on economies of scale, this postulates a system of self-production which could offer a functional alternative with notable positive social and ecological ramifications.

Under the current economic situation, though, people who contribute to these communities alongside other forms of commons are often not rewarded for the work they put in, and so have to sell their labour power elsewhere in order to make ends meet financially. Indeed, this isn’t new; capitalism has always been especially bad at remunerating people who do various kinds of work which are absolutely crucial to the functioning of a society – with domestic work and raising children being the prime examples. So the question is: how could this be changed so as to reward people for contributing to cultural, digital and other forms of commons?

One possible answer which has attracted a lot of commentary is the notion of a universal basic income. Here the idea is that, as all citizens are understood to actively contribute to society via their participation in the commons, everyone should receive sufficient income to subsist – to pay rent and bills, feed themselves and their dependants, and have access to education, health care and some form of information technology. This basic income could be supplemented through additional work – and it is likely that most people would choose to do this (not many people enjoy scraping by with the bare minimum) – however, if individuals wanted to focus on assisting sick relatives, contributing to FOSS projects or helping out at a local food-growing cooperative, they would be empowered to do so without the fear of financial ruin. As an idea it has attracted interest and support from a spectrum including post-Marxists such as Michael Hardt and Antonio Negri through to liberals such as the British Green Party. It certainly seems an idea worth considering, albeit one which is miles away from the Tory rhetoric of Strivers and Skivers.

For more details on P2P check out the Peer to Peer Foundation which hosts a broad array of excellent articles on the subject.