
We’re into the third and final week of The Lives and Afterlives of Plastic. It’s been wonderful so far, and I’m looking forward to working through the last set of presentations.

This week we have a keynote from Professor Ian Shaw that looks at plastics and endocrine-disrupting chemicals, as well as panels on public awareness of marine plastics, plastics and microfibres/fabrics, waste management, and, last of all, materiality.

In that final panel is the paper that Trisia Farelly and I co-authored, which is a fairly accessible and informal discussion of a range of issues around plastic, accumulation, toxicity and regulation. It’s called Technofossils and Toxicity, but the Anthropocene/Technofossils bit didn’t make it to the final cut, as our original discussion ran way over the 20 minutes.


We’re now into the second week of The Lives and Afterlives of Plastic, an online conference that I’m helping run as part of the Political Ecology Research Centre at Massey University.

This week, we have a keynote from Professor Gay Hawkins entitled Governed By Plastic, as well as four panels. These look at 1) Packaging Life Cycle Analysis and Design, 2) Representations and Aesthetics, 3) Materiality and 4) Marine Microplastics.

It’s been wonderful watching the diverse and brilliant ways that people have responded to and challenged the idea of what an online version of a conference paper might look like, and it’s been fascinating to watch and hear about such a broad range of projects relating to plastic.

It’s also been really interesting to see how the diverse forms of scholarship from the sciences, social sciences, arts and humanities speak to one another.


This week sees the launch of an online, interdisciplinary conference that I’ve been involved in organizing as part of the Massey University Political Ecology Research Centre. It’s called The Lives and Afterlives of Plastic, and it focuses on the broad range of issues that pertain to plastics, waste, toxicity and pollution.

We’re pleased to say that we’ve got presenters from fields ranging from marine biology and toxicology through to media studies, fine art and anthropology, so there’s a real mix of fields and areas, and it will be fascinating to see how that mix of voices works together in the discussions.

The first week of the conference has a keynote from Richard Thompson, one of the world’s top experts on marine plastics, titled Marine Debris: Are There Solutions to this Growing Problem?, along with panels that look at the amazing and inspirational Civic Laboratory for Environmental Action Research (a feminist science lab in Newfoundland, Canada), Marine Plastics, and Representation and Aesthetics.

I’m really looking forward to seeing these presentations, and taking part in the online discussions around them. Being in New Zealand can be quite geographically isolating (especially compared to the UK, where so many researchers and institutions are so close), and online conferences might be a really useful way of allowing us to stay connected to our overseas colleagues without incurring the ecological (or for that matter economic) costs associated with getting on a plane and flying halfway across the world. Indeed, when the University of California Santa Barbara Environmental Humanities centre ran a similar online conference last year, they estimated that this involved only around 1% of the carbon footprint associated with a traditional conference.

On Friday journalist Paul Mason published a fairly long article in the Guardian entitled ‘The End of Capitalism Has Begun.’ It features some interesting thoughts, and will hopefully help disseminate to a broader audience some ideas which have been floating about in academia for quite a while. That said, there are a few things in the piece which I think are somewhat naive and require a response.

The main thrust of Mason’s argument is that capitalism is inevitably on the way out because of several social changes being wrought by contemporary networked information processing technologies. Firstly, Mason argues that because of the increased levels of automation brought by digital systems, there will be a dramatic reduction in the volume of work required within a society. Secondly, he argues that the fundamental laws of economics have been broken by an information economy characterised by informational abundance. Finally, he argues that ‘cognitive capitalism’ is predicated on a mode of collaborative and networked social production which is itself contradictory to the type of individualised wealth production associated with capitalism.

The first of these points is hardly new. The displacement of labour from humans into various forms of machinery is, of course, something which has occurred for at least a couple of hundred years, as was presciently observed and described by Karl Marx (in the Fragment on Machines, a text which Mason cites later in his essay). Alongside the ongoing historical transformation of production processes, there has always been the claim that technology will make everyone’s life better by reducing the need for arduous and boring labour tasks, instead freeing humanity to enjoy increased levels of leisure time accompanied by a higher level of material wealth and comfort. And whilst there are certainly some humans who are in that situation today, we could also point to the increasing precariousness of work, particularly within neoliberal economies where full employment has never been an important goal, as a reminder that decreasing the overall level of manual labour does not necessarily entail benefits for all.

Rather than work and wealth being divided equally amongst citizens, today we instead find millions of unemployed or underemployed humans who are effectively used as an industrial reserve force to reduce any demands for increased wages, reduced working hours and other kinds of benefits which were associated with the collective action of the twentieth-century trade union movements. Whilst a relatively small number of humans become more materially wealthy than any of their predecessors, this occurs alongside a growing inequality between the global super rich and everyone else. As research last year found, the richest 85 individuals on the planet now own more wealth than the poorest 50% of the global population, around 3.5 billion people.

Additionally, in a ‘creative’ digital economy where communicative acts are themselves commodified over corporate social networks, what does and does not count as productive work is itself problematised. Theorists ranging from autonomist Marxists such as Franco Berardi through to cyberutopian capitalists such as Clay Shirky have argued that what used to count as leisure time is now a key motor of wealth generation, as your online ‘leisure’ activities are used to tailor personalised, location-aware advertising to your behaviour.

Which brings us to Mason’s second point, that economics is predicated upon scarcity, and that the current abundance of information means that we have entered an era where traditional economic theory cannot adequately function. Again, rhetoric surrounding the end of the economics of scarcity is not new, but such thinking fundamentally fails to grasp the dynamics of scarcity surrounding informational systems, and systems is a key word here, because economics is about circulation and flows, not a single thing (be it information, energy or anything else). Information is certainly a crucial component of digital networked ecologies, and the contemporary volume of information – what Mark Andrejevic and Berardi have both described as information overload – is certainly not characterised by scarcity, but the key is to think in systemic terms about what type of scarcity is generated as a consequence of the abundance of information. The answer is that human attention is what becomes scarce when information is abundant.

Indeed, the notion of the attention economy is not that new, with early versions of the term being deployed by authors such as Michael Goldhaber and Georg Franck around the turn of the century. For an excellent overview of contemporary debates surrounding economies of attention I would suggest reading this article by Patrick Crogan and Sam Kinsley. The key point is that far from rendering the economics of scarcity redundant, what we instead find is that the abundance of online information means that human attention is increasingly scarce and thus becomes a desirable and lucrative commodity, which is why heavily targeted online advertising is a booming multi-billion-dollar business, one which ventures such as Google’s search engine, Facebook, YouTube and other major online players are almost entirely dependent upon for their revenues and astronomical market valuations.

The third point Mason raises, that online networks are predicated upon modes of social cooperation and collectivity which are contradictory to the mode of capitalism they are located within, and thus contain the seeds of a new social system which will eventually replace capitalism itself, is arguably the most complex and interesting point he raises. However, this too is hardly a new statement, as it is one of the central tenets of Michael Hardt and Antonio Negri’s triad of books Empire, Multitude and Commonwealth, as well as being an argument which has been raised in differing forms by theorists such as Bernard Stiegler (via the economy of contribution) and Michel Bauwens (via peer-to-peer production). I won’t go into these positions in much detail here, but what I do think is worth highlighting is that many of these claims about biopolitical production, economies of contribution and peer-to-peer production were originally made quite a while ago (Empire was released in 2000), and that since then we have seen the emergence of the big corporate social media players whose financial model is entirely predicated on the exploitation of the free cooperative labour of their users.

This isn’t to say that people don’t get anything from Facebook (basically some cost-free server storage, a fairly clean user interface, and access to the billion-plus strong Facebook network), but that Facebook’s market valuation of over 250 billion US dollars is entirely built upon its ability to commodify the social relationships of its users. Far from existing outside of, and in opposition to, a capitalism which is wrongly assumed to be monolithic and rigid, we see the way that capitalism (which depends upon finding new areas to provide growth) has found a way of extending what it understood to be a commodity, so that many aspects of our social lives, which were previously thought to be intangible, unquantifiable and thus impossible to monetise, are now major players in global financial markets.

Indeed, whereas during the early days of the internet the underlying technology itself, and the modes of cooperation it made possible such as the distributed mode of production that underpins Free and Open Source software, were seen as radical new technologically-enabled alternatives to neoliberal capitalism, what we have seen more recently is the way that capitalism has been able to find novel ways of reintegrating these innovations into financial markets, such as the way that Google utilises open source software outside of search in areas such as Android and Chrome. One of the most interesting analyses of contemporary capitalism comes from Jodi Dean, who argues that our current era is marked by a stage of communicative capitalism, whereby far from forming alternatives to global capitalism, participation in networked digital telecommunications has become a central driver of the capitalist economy.

Mason summarises his argument by stating that:

The main contradiction today is between the possibility of free, abundant goods and information; and a system of monopolies, banks and governments trying to keep things private, scarce and commercial. Everything comes down to the struggle between the network and the hierarchy: between old forms of society moulded around capitalism and new forms of society that prefigure what comes next

This presents a straightforward binary opposition between network and hierarchy, between the new, good digital ways which point towards a postcapitalism and the bad, old ones which represent our capitalist past and present. However much I might wish this to be the case – and it would be really lovely to think that current technologies will inevitably lead to the replacement of a system of gross global social inequalities and catastrophic climate change with something better – I find the kind of technological determinism present in Mason’s essay to be blinkered at best. As Gilles Deleuze and Felix Guattari remind us in the introduction to A Thousand Plateaus, it is not a case of opposing hierarchical models with networked and decentralised ones, but a case of understanding how these two tendencies occur in different ways in actual systems which are almost always a combination of the two.

Thinking this way means mapping the new hierarchies and modes of exploitation associated with digital technologies whilst also looking for the lines of flight, or positive ways of transforming the situation, that the new technological formations present. That doesn’t mean that there can be no hope for change that involves technology, but that positing this situation as a good/bad binary opposition, or suggesting that technology itself holds essential characteristics which will necessarily transform society in a particular direction, is a misguided approach. Indeed, some of the most interesting materials coming out of the P2P Foundation recently have argued that openness is not enough, that just making things open or collaborative can lead to growing inequalities, as the actors with the most attentional, algorithmic and economic resources are usually those best placed to leverage open data, open culture and open source ventures. Alongside openness, they argue that we need to think about sustainability and solidarity in order to bring about the type of social and ecological transformation that would mark the end of capitalism. That to me sounds like a far more productive call to action than simply gesturing towards the digital technologies whose introduction has not thus far been accompanied by a more egalitarian and sustainable global society.

 

 

A paper I originally wrote for the ANZCA conference which was held at Swinburne University in Melbourne last year has just been published in the Australian edition of the Global Media Journal as part of a special issue comprised of papers from the conference.

ANZCA 2014 had papers from about 250 scholars, and I was thrilled to be approached by the conference organisers who asked to publish mine in this special edition.

You can find the journal here and my paper here. It’s primarily a discussion of how we might understand the notion of sociality being advanced in the phrase ‘social media,’ if we look beyond the hype and marketing slogans which claim that social media makes the world more connected, collaborative and democratic.

Here’s the Abstract:

Loops + Splices

Last Friday I was at the Loops + Splices symposium hosted at Victoria University of Wellington, and co-organised by Victoria and some of my Massey colleagues. Overall it was a fun and entertaining day with some really strong talks.

The keynote speaker was Professor Ian Christie, who was over from Birkbeck College in London. Christie’s presentation was entitled ‘Denying depth: uncovering the hidden history of 3D in photography and film’ and provided a genealogical/media archaeological exploration of stereography, moving from a range of pre-film technologies through to contemporary 3D cinema such as Avatar. Christie’s starting point was the outright dismissal of 3D as a gimmick by film critics such as the late Roger Ebert and respected editor Walter Murch, who argued that ‘It (3D) doesn’t work with our brains and it never will.’ Christie’s argument developed through the interest which pioneering film theorists such as Andre Bazin and Sergei Eisenstein both had in stereoscopy, and passed through a variety of technologies and techniques by which 3D cinema was a reality in the 19th and 20th centuries. It was interesting to learn that until the stabilisation of photographic techniques through the standardisation enacted by cheap consumer cameras such as the Eastman Kodak, 3D images were as popular and common as their non-stereo counterparts. Christie argued that the adoption of 3D imaging and simulation apparatus by professions such as surgeons and pilots demonstrates the range of utility presented by stereo imaging techniques, and that it is wrong to dismiss the technology on the basis of some of the poor narrative qualities of the 3D films which followed Avatar. It also feels worth noting that Christie was a model keynote speaker throughout the day, being engaged with all the panels, asking thoughtful and pertinent questions, and being kind and generous about the various presentations which followed his keynote.

The morning panel following the keynote was composed of Allan Cameron from the University of Auckland, and my Massey colleagues Kevin Glynn and Max Schleser, along with myself. Allan’s paper, “Facing the Glitch: Abstraction, Abjection, and the Digital Face” examined the history of glitch as a form within both music and video, and specifically explored the role of the face within glitch videos. The paper outlined ways in which forms of long group-of-pictures compression, and the generation of intermediate frames which are interpolated from keyframes, serve as a framework for work which uses compression artifacts, pixelation and glitch as an aesthetic strategy. I have to say that, on a personal level, any paper which shows clips of glitched-up sequences from David Cronenberg’s Videodrome is a winner.

Kevin’s paper “Technologies of Indigeneity: Māori Television and Convergence Culture” comes out of his Marsden-funded project working with Julie Cupples on ‘Geographies of Media Convergence: Spaces of Democracy, Connectivity and the Reconfiguration of Cultural Citizenship.’ The paper focused on NZ media representations of the Urewera raids of 2007, and a more recent case where Air NZ, who prominently feature Māori iconography in their branding, terminated an interview with a woman for having a tā moko (traditional body markings), which they claimed would unsettle their customers. The paper explored the impacts associated with the introduction of Māori TV and social networking software such as Facebook and Twitter on the ability of Māori to represent themselves and partake in mediated debates surrounding cultural identity.

Max’s paper “A Decade of Mobile Moving Image Practice” was an overview of some of the changes that have occurred over the last ten years with regards to mobile phone filmmaking. Going from the early days of experimenting with low resolution 3GP files which were not designed to be ingested or edited, through to the contemporary situation whereby a range of mobile phone apps exist to provide varying levels of control for users working in High Definition, Max mapped out some of the ways that the portability and intimacy afforded by mobile phones allow for modes of filmmaking which depart from the intrusive nature of working with digital cinema cameras. It was also highly entertaining to see some decade-old pictures of Max looking very young.

My paper, “ArchEcologies of Ewaste” was a look at how media archaeology and media ecologies can be complementary methods in examining a range of issues pertaining to materiality and the deleterious impacts caused by the toxic digital detritus that we discard, focusing particularly on ewaste in New Zealand, where there currently isn’t a mandatory (or even free) nationwide ewaste collection scheme, unlike in the EU where the WEEE directive mandates that all ewaste must be recycled in high tech local facilities. The prezi for the talk is here if you’re interested.

After lunch there were a couple of panels, with some varied and interesting presentations, from which my two highlights were the papers from Michael Daubs and Allen Meek. Michael’s paper “What’s New is Past: Flash Animation and Cartoon History” conducted a re-evaluation of early rhetorics of the revolutionary newness and democratic and transformative potentials of Flash animation, exploring the way in which a range of cel animation techniques such as layering and keyframing were appropriated into Flash, alongside a detailed history of Flash’s adoption in Web-based animation. The paper concluded by mobilising this archaeological exhumation of past deterministic claims for democratising technologies to interrogate some of the hyperbole surrounding HTML5 and CSS3, the currently-still-being-finalised web standards which incorporate scalable vector graphics into the web itself, thus removing the need for a proprietary layer of Flash on top of web-native code.

Allen’s paper, “Testimony and the chronophotographic gesture” examined the historical relationships between gesture, imaging technology and biopolitics. The paper began by exploring ways that early film was utilised under Taylorism as a means by which to quantify bodily movements and gestures in order to recombine them in the most temporally efficient manner so as to enact a form of disciplinarity upon the workforce. This history of gesture on film as a tool for quantifying gesture was contrasted with material from Claude Lanzmann’s Shoah, where gesture was used to reawaken embodied but subconscious memories of the Holocaust, which are being recorded for the documentary as a means of bearing witness to those memories. The dialectic between the employment of film as an apparatus of disciplinarity and as a means of witnessing was theorised via Agamben and Foucauldian biopolitics, and made for a fascinating paper.

There were also interesting and enjoyable papers from Damion Sturm, who examined T20 cricket in Australia as an exemplar of the increasing mediatisation of sport, Kirsten Moana Thompson, who used A Single Man as a case study to explore a range of phenomena surrounding digital technologies and colour in cinema, and Leon Gurevitch, who examined some of the relationships between industrial design practices and computational 3D design and animation practices.

On the whole, it was a hugely enjoyable day, and it was great to meet a range of researchers doing various forms of work around media, archaeology, history and technology. A big thanks to the Loops + Splices organising committee, Kirsten Thompson, Miriam Ross, Kathleen Kuehn, Alex Bevan, Radha O’Meara, and Michelle Menzies, for putting everything together.

Arts of the Political is the new release penned by cultural geographers Nigel Thrift and Ash Amin, which explores various manifestations of left-wing politics and political movements in order to consider why movements based around equity and community have seemingly achieved so little over the past thirty-odd years in the face of neoliberalism. Indeed, this question is particularly pertinent given the financial crisis of 2008, and the inability of the left in places such as the UK (where Thrift and Amin teach at Warwick and Cambridge respectively) to form a movement seemingly capable of enacting widespread positive changes, or even mounting a serious campaign to challenge the Conservative narrative of enforced austerity as a means for enacting further cuts to public services – a policy David Cameron has recently felt sufficiently emboldened to openly state is a reflection of ideology rather than a situation enforced by economic circumstances.

One might argue that the sweeping cuts made by the Tory/Lib Dem coalition are a prime exemplar of what Naomi Klein has termed the shock doctrine – the neoconservative leveraging of moments of critical instability to enact sweeping changes which increase inequality and benefit elites through privatisations which would likely be too unpopular to pass outside of these specific moments. The question, then, is why the right has been so successful at exploiting these opportunities whilst the left has not.

For Thrift and Amin, the answer is primarily that the left has historically been successful when it has been able to articulate new visions, new desires and new organisations which expand the terrain of what is understood to be politics itself, and by doing so energise mass movements through articulating the possibility of a better collective future.

Movements campaigning for the rights of women, the working class, and other neglected and downtrodden subjects managed to turn engrained orthodoxies on their head in the quarter-century before the First World War by building mass support and accompanying socio-political reform. Although these movements applied particular principles and practices, the record shows that their acts of redefinition went far beyond what was originally intended. These movements freed up new imaginations, invented new political tools, pointed to elements of existence that had been neglected or concealed, and created a constituency that, once constructed, longed for another world. In other words, these movements produced a new sense of the political and of political potential. The emerging Left both opened the doors of perception and provided the tools with which to do something about these new perceptions. This is what was common, in our view, in the disparate examples we consider, from the American Progressive Movement and British feminism to German Marxism and Swedish social democracy. In their own way, each of these movements disclosed new desires.

The thesis that drives this book is that progressive movements should pay more attention to such world-making capacity, understood as the ability not just to produce a program in the future but also to open up new notions of what the future might consist of. The most important political movements, in our estimation, are those that are able to invent a world of possibility and hope that then results in multiple interventions in the economic, social, and cultural, as well as the political sphere. They free thought and practice and make it clear what values are being adhered to, often in quite unexpected ways. p9

Thrift and Amin contend that three areas (or arts) of the political to which it is crucial for the left to pay close attention are invention, the process of bringing forth tangible futures which hold the promise of a better life; organisation, the practices which are used to bind and articulate these movements; and the mobilisation of affect, considering the ways that political decision making goes beyond rational information processing:

In particular, we consider the whole phenomenon of what Walter Lippmann (1961) called the manufacture of consent: how it is being bent to the needs of the Right and how it could be mobilized more effectively by the Left. At the same time, we attend to how the consideration of affect brings space into the frame. A whole array of spatial technologies has become available that operate on, and with, feeling to produce new forms of activism, which literally map out politics and give actors the resources to kick up more and across more places. In other words, the practical mechanics of space must be part of the politics of the Left. p15

Thrift and Amin begin by exploring a range of historical examples whereby left-wing politics was able to achieve the kind of redefinition of the political they seek, considering the German Socialism of the SPD before the First World War, Swedish Social Democracy, the British Suffragette movement, and progressive capitalists in the US circa 1900. Thrift and Amin contend that:

In all cases, progress depended on prizing open new political ground and filling it with real hope and desire. Appeal and effectiveness – at a time heavily laden with the weight of tradition, vested power, restricted social force, and new capitalist imperative – had to come from an ability to imagine and build community around the yet to come or the yet to be revealed. This meant inventing new historical subjects, new technologies of organization and resistance, new visions of the good life and social possibility, new definitions of human subjectivity and fulfillment, and new spaces of the political (such as “direct action,” “voting,” “public involvement,” “class struggle,” “welfare reform,” “government for the people,” “women’s rights”). A possible world had to be fashioned to render the old unacceptable and the new more desirable and possible. The Left today seems to have less desire or ability to stand outside the given to disclose and make way for a new world.

In seeking to formulate areas where there is the potential for opening up analogous new political spaces, Thrift and Amin incorporate theoretical material from Bruno Latour regarding the status of democracy and agency with regards to nonhumans, arguing that the traditional binary between sovereign human subjects and inert and passive nonhuman objects is an area which can be productively challenged by a revitalised left-wing politics.

We want to take up Bruno Latour’s (1999) call for a new parliament and constitution that can accommodate the myriad beings that populate the world, a call that entails acts of definition and redefinition of “actor” so that many humans and nonhumans can jostle for position, gradually expanding the scope and meaning of “collective” politics. p41

This leads T&A to consider the human as a distributed being, whose processes of cognition stretch far beyond the boundaries of the skin, coming close to Deleuze and Guattari’s positions around ecological machinic flows of matter. Arguing from a position which begins to sound fairly close to some of Bernard Stiegler’s work, they contend that:

Human being is fundamentally prosthetic, what is often called “tool-being.” We are surrounded by a cloud of all manner of objects that provide us with the wherewithal to think. Much of what we regard as cognition is actually the result of the tools we have evolved that allow us to describe, record, and store experience. Take just the example of the craft of memory. This has extended its domain mightily since the time paintings were made on the walls of caves, and as a result, a whole new means of thought has come into being…

Memory is a compositional art depending on the cultivation of images for the mind to work with. This state of affairs has continued but has been boosted by modern media technology and its ability to produce communal rhetorics that would have been impossible before and that are inevitably heavily political, especially in their ability to keep inventing new variants of themselves that can be adapted to new situations. p50/51

This sense of distributed being and agency is used to reinforce the Latourian argument surrounding the agency of objects, and thus their importance in a new and enlarged sense of politics and democracy. Using Gilbert Simondon’s notion of transduction, T&A explore:

The way in which tools and other symbiotes can produce environments that are lively in their own right, that prompt new actants to come into existence. To illustrate this point, we need to look no further than the types of digital technology that have become a perpetual overlay to so many practices and the way in which they are changing political practices. Here we find a domain that has gained a grip only over the past ten years but is now being used as part of an attempt to mass-produce “ontological strangeness” (Rodowick 2007) based on semiautomatic responses designed into everyday life through a combination of information technology based tools and the practices associated with them (from implants and molecular interventions to software-based perception and action). In particular, these automatisms are concerned with the design and prototyping of new kinds of space that can produce different affective vibrations. p64

T&A bring this discussion back into the realm of the more conventionally political by using distributed agency and co-evolutionary strategies as a way of opening up thought surrounding ecological crises and how a coherent left political response to climate change requires precisely the type of expanded politics which they characterise as world making:

What is needed instead is a leftist politics that stresses interconnection as opposed to the “local,” however that is understood. What is needed is “not so much a sense of place as a sense of planet” (Heise 2008, ss) that is often (and sometimes rather suspectly) called “eco-cosmopolitanism.” Thus, to begin with, the experience of place needs to be re-engineered so that its interlocking ecological dimensions again become clear. This work of reconnection is already being done on many levels and forms a vital element in the contemporary repertoire of leftist politics: slow food, fair trade, consumer boycotts, and so on. Each of these activities connects different places, and it is this work of connection that is probably their most important outcome. Environmental justice then needs to be brought into the equation. The privileges of encounters with certain ecologies, as well as the risks associated with some branches of industry and agribusiness, are clearly unevenly distributed, and it may well be that certain environmentally unsound practices have been perpetuated because their effects go unnoticed by the middle class. Again, environmental justice movements have to refigure spaces, both practically and symbolically, so that interconnection becomes translucent. Finally, we need new ways to sense and envisage global crowds that are dynamic. The attempts to produce people’s mapping and geographic information systems, to engage in various forms of mash-up, and to initiate new forms of search are all part and parcel of a growing tendency to produce new kinds of concerned and concernful “Where are we?” Politics starts from this question. p75

This is followed by a chapter which claims to look at contemporary leftist politics, surveying the landscape through the apertures of anti-capitalism, reformist capitalism, post-capitalism and human recognition. What is striking about the majority of these contemporary left-wing political movements is that they aren’t actually political movements. Anti-capitalism is not approached through Occupy or Climate Camp; it is Zizek and Badiou alongside Hardt and Negri – which conflates two very different theoretical perspectives on anti-capitalism – and is summarily dismissed as hopelessly over-optimistic and unable to visualise a future. Reformism is not Syriza/The Five Star Movement/Bolivarian Socialism; it is Ulrich Beck and Anthony Giddens’s reflexive modernity and third way. By post-capitalism T&A mean ‘A third leftist stance on the contemporary world can be described as “poststructuralist,” in that it draws on feminist, postcolonial, antiracist, and ecological thinking, much of which is heavily influenced by poststructuralist ideas’ p91. Conceptually that would seem to fit Hardt and Negri quite well, but here T&A refer instead to Gibson-Graham’s work on small-scale, local, co-operative, ethical and sustainable economies, which could have been productively mapped onto the actions of groups and initiatives such as transition towns, permaculture groups, feminist networks, Greenpeace and other NGOs, and the broad range of groups and movements who actually practice some of these ideas, but instead is again explored as a mere theoretical argument rather than political praxis. Human recognition is used to refer to a liberal left based around ethics derived from Wendy Brown’s writings – again, this is explored through theory rather than through the groups who actually employ this mode of left politics, probably best embodied by online liberal campaign groups such as Avaaz or 38 Degrees. Finally, T&A return to Latour and the notion of Dingpolitik and the role of bringing objects into democracy, a position which has been criticised within academia for being politically conservative, as Latour’s works tend to entirely ignore issues surrounding inequalities and exploitation, content instead to simply map actor networks, in contrast with more politically engaged posthuman scholarship from the likes of Felix Guattari or Manuel DeLanda. Perhaps there could have been an interesting dialogue here between T&A’s Latourian positions and the actions and ideologies of animal rights groups or deep ecologists, but again for T&A the left today does not consist of movements of people actually campaigning, occupying, protesting and organising; it simply appears to be a disparate collection of academics.

Put simply, this was what was most frustrating about Arts of the Political: rather than engaging with the broad and varied range of social and ecological activisms which currently exist, the left is reduced to academic thought, whilst the authors proclaim themselves to be engaged in materialist analysis. Perhaps it is simply indicative of the fact that the book’s authors are ageing men living and working in universities who are so totally detached from the actual practices of the left-wing groups they claim to represent that they are barely able to acknowledge their existence. Indeed, Thrift has seen protests and occupations from students at Warwick surrounding his astronomical pay increases as Vice Chancellor of the University of Warwick over the past couple of years (from 2011 to 2013 Thrift’s salary increased from £238,000 to £316,000, at a time when tuition fees tripled for his institution’s students). That background perhaps helps explain why the actually existing left is almost entirely absent from T&A’s exploration of left-wing politics.

In the following chapters where T&A discuss organisation, there is a mixture of some interesting thoughts surrounding ecology, using Stengers, Deleuze and Guattari to consider the notion of ‘addressing the political as an ecology of spatial practices’ p133, alongside a consideration of the organisation of the EU as a potentially fruitful model for the left, as it involves multiple parties across different scales having to cooperate. Such a politics of pragmatic cooperation could of course be understood as a mainstay of anticapitalist politics since the 1990s – the alter-globalisation movement and its manifestations within the World Social Forum, the People’s Global Assembly and Indymedia all sought to embody a politics of the multiple, as theorised by Hardt and Negri, and similar claims could be made regarding the anti-war movement, climate change activism and Occupy. But in keeping with their refusal to actually engage with left-wing movements, we instead get a lionisation of the EU at a time when elements of the actually existing left are campaigning against the EU’s proposed free trade deal with the US, which would effectively allow corporations to sue governments using secret panels to bypass parliaments.

This is a shame, as some of the theoretical material around affect and space, and that relating to the need to build positive visions of a left-wing future, articulated by T&A is in places very strong. The central argument that the left needs to find a way to escape what Mark Fisher has called Capitalist Realism, the notion that neoliberalism is the only possible game in town (with the alternative being an eco-apocalypse), is undoubtedly correct, and the politicisation of affect and the reorientation of politics towards an ecology of ethical practices are both concepts worth pursuing. However, they require consideration in relation to the actual practices of political movements, rather than simply as abstract theoretical constructs.

MINA 2013 Symposium Review

Last week I was in Auckland for a couple of days to go to the MINA (Mobile Innovation Network Aotearoa) 2013 Symposium at the Auckland University of Technology. Having just recently arrived in New Zealand, I saw the symposium as a great opportunity to meet some researchers and artists working in and around pervasive/locative media, and to see what kinds of mobile media research and praxis are going on in New Zealand.

The conference kicked off with a fascinating keynote from Larissa Hjorth from RMIT in Melbourne. Hjorth looked at practices surrounding current cultural usages of mobile imaging technologies from an ethnographic perspective, and characterised this as second generation research in camera phone studies. Whereas the first wave focussed on mobile imaging through the perspectives of networked visuality, sharing/storing/saving, and vernacular creativity, she characterises second generation camera phone studies as focussing on the notions of emplacement through movement, the prominence of geo-temporal tagging and spatial connectivity, intimate co-presence and re-conceptualising casual play as ambient play.

My other highlights on the first day came from a fantastic session on activism and mobile video practices, which featured papers from Lorenzo Dalvit and Ben Lenzner. Dalvit explored the use of user-uploaded mobile phone videos on The Daily Sun, an online tabloid newspaper which provides a public forum for citizens to publish and attract widespread attention to instances of police brutality within South Africa. In particular Dalvit focussed on a case where police dragged a Mozambican taxi driver to his death through the streets, and mobile footage posted to the Daily Sun was used to contradict the official police account that the taxi driver was armed, and was thus pivotal in bringing the police officers in question to face trial for their actions. Dalvit also highlighted the utility of audiovisual media in cultural contexts where literacy cannot be assumed as universal, and the ways that the Daily Sun provided a forum of public discussion surrounding the commonplace acts of police brutality which are primarily aimed at impoverished black youths in SA.

This was followed by a look at some of Lenzner’s PhD research, which compares the usage of mobile video streaming techniques by the US activist Tim Pool and the Indian community-activist group India Unheard. Similarly to Dalvit’s South African case study, Pool’s footage of Occupy Wall Street was used in court to quash bogus charges fabricated by police against an Occupy protester, again highlighting the ways that citizen journalism, and in particular video evidence, can provide a powerful tool for establishing counter-narratives to official accounts which are often pure fabrications. Whereas Pool was able to stream video live to UStream, community video activists working for India Unheard have to go somewhere to compress and upload material due to the difference in bandwidth between New York and Mumbai. This forced pause means that they produce activist video which is closer to traditional forms of video activism, providing edited stories rather than just a live stream of events. Both these papers were fantastic examples of how increasing access to media production tools allows previously unheard voices to be heard and, within a legal context, provides very strong evidence to contradict official statements from powerful institutions linked to the state.

Also on the Thursday were really interesting papers from Craig Hight and Trudy Lane. Hight’s paper focussed on the implications of emerging digital video software, and in particular the various ways that numerous forms of consumer/prosumer software are automating increasing amounts of the editing process. The paper outlined a number of fairly new tools, such as Magisto, which ‘automatically turns your everyday videos into beautifully edited movies, perfect for sharing. It’s free, quick, and easy as pie!’ Within the software you select which clips you wish to use, a song to act as the soundtrack and a title, and Magisto assembles your video for you. While Hight was quite critical of the extremely formulaic videos this process produces, it’s interesting to think about what this does in terms of algorithmic agency and the unique ability of software to make the types of decisions normatively only associated with humans (what Adrian Mackenzie has described as secondary agency).

Lane, by contrast, is an artist whose recent project A Walk Through Deep Time was the subject of her paper. While the deep time here is not the same as Siegfried Zielinski’s work into mediation and deep time, it does present an exploration of a non-anthropocentric geological temporality, initially realised through a walk along a 457m fence to represent 4.57 billion years of evolution. The project uses an open-source locative platform called Roundware which provides locative audio with the ability for users to upload content themselves whilst in situ, allowing the soundscape to become an evolving and dynamic entity. The ecological praxis at the heart of Lane’s work was something that really resonated with my interests, and it was great to see that there are really interesting locative art/ecology projects going on here.

The second day of the symposium opened with a keynote from Helen Keegan from the University of Salford. Keegan’s presentation centred on a unit she had run as an alternate reality game entitled Who is Rufi Franzen. The project was a way of getting students to engage in a curious and critical way with the course, rather than through the traditional ways of learning we encounter within lectures and seminars. The project saw the students working together across numerous social media platforms to try and piece together the clues as to who Rufi was, how he had been able to contact them, and what he wanted. The project climaxed with the students having been led to the Triangle in Manchester, where they were astonished to see their works projected on the BBC-controlled big screen there. It looked like a great project, and a fantastic experience.

My highlight of the second day was a paper by Mark McGuire from the University of Otago, who presented on the topic of Twitter, Instagram and Micro-Narratives (Mark’s presentation slides are online via a link on his blog and well worth a look). Taking cues from Henry Jenkins’s recent work into spreadable media, which emphasises the ways that contemporary networked media foregrounds the flow of ideas in easy-to-share formats, McGuire went on to explore the ways that micro-narratives create a shared collaborative experience whereby, through the frequent sharing of ideas and experiences, content creators become entangled within a web of feedback or creative ecologies which productively drives the artistic work. Looking at Brian Eno’s notion of an ecology of talent and applying interdisciplinary notions of connectionist thinking and ecological thought and metaphors, McGuire made a convincing case as to why feedback-rich networks provide a material infrastructure which cultivates communities who learn to act creatively together.

There was also a really interesting paper on the second day from Marsha Berry from RMIT, Melbourne, who built upon Hjorth’s notions of emplaced visuality to explore how creative practices and networked sociality are becoming increasingly entangled. Looking in detail at practices of creating retro-aestheticised images using numerous mobile tools including Instagram and retro camera filters, Berry explored these images as continuity with analogue imaging, as a form of paradox, as Derridean hauntology – a nostalgia for a lost future – and finally as the impulse to create poetic imagery, highlighting that for teenagers today there is no nostalgia for 1970s imaging technologies and techniques which pre-date their birth.

Max Schleser and Daniel Wagner also presented interesting papers, looking at projects they had respectively been running which used mobile phone filmmaking. Schleser outlined the 24 Frames 24 Hours project for workshop videos, which featured a really nice UI designed by Tim Turnidge, and looked like a really nice tool for integrating video, metadata and maps. Schleser explored how mobile filmmaking is important to the emergence of new types of interactive documentary, touching on some of the conceptual material surrounding iDocs. Wagner presented the evolution of ELVSS (Entertainment Lab for the Very Small Screen), a collaborative project which has seen Wagner’s Unitec students working alongside teams from AUT, University of Salford, Bogota and Strasbourg to collectively craft video based mobile phone projects. The scale of the project is really quite inspiring in terms of thinking what it’s possible to create in terms of global networked interdisciplinary collaborations within higher education today.

Overall, I really enjoyed attending MINA 2013. The community seems friendly, relaxed and very welcoming, the standard of presentations, artworks and keynotes was really high and it’s really helped me in terms of feeling that there are academic networks within and around New Zealand who’re involved in really interesting work. Roll on MINA 2014.

This is a draft version of a paper which was published in NYX: Vol 7 Machines in 2012.

 

Contemporary analyses of the relationships between humans and machines − ways that machines influence the scale, pace, and patterns of socio-technical assemblages − tend to focus upon the effects, impacts, and results of the finished products: the packaged information processing commodities of digital culture. This work is undeniably important in demarcating the multiple and complex ways that human symbiosis with machinic prostheses alters cognitive capacities and presents novel, distributed, peer-to-peer architectures for economic, political, and socio-technical networks. However, existing discourses surrounding machines and digital culture largely fail to explore the wider material ecologies implicated in contemporary technics.

 

Ecological analysis of machines seeks to go beyond exploring marketable commodities, instead examining the ecological costs involved in the reconfiguration of ores, metals, and minerals into smartphones and servers. This involves considering the systems implicated in each stage of the life-cycle of contemporary information-processing machines: the extraction of materials from the earth; their refinement and processing into pure elements, compounds, and then components; the product-manufacturing process; and finally what happens to these machines when they break or are discarded due to perceived obsolescence. At each stage of this life-cycle, and in the overall structure of the ecology of machines, there are ethical and political costs and problematics. This paper seeks to outline examples of these impacts and consider several ways in which they can be mitigated.

 

Hardware is not the only ecological scale associated with machines: flows of information and code, of content and software, also comprise complex, dynamic systems open to flows of matter and energy; however, issues surrounding these two scales are substantially addressed by existing approaches to media and culture. We can understand scale as a way of framing the mode of organisation evident within the specific system being studied. The notion of ecological analysis approaching different scales stems from the scientific discipline of ecology and is transposed into critical theory through the works of Gregory Bateson and Felix Guattari. Within the science of ecology, scale is a paramount concern, with the discipline approaching several distinct scales: the relationships between organism and environment, populations (numerous organisms of the same species), communities (organisms of differing species), and ecosystems (comprising living and nonliving elements within a geographical location).i No particular scale is hierarchically privileged, with each nested scale understood as crucial to the functioning of ecosystem dynamics.

 

The notion of multiple, entangled scales is similarly advanced by Bateson, who presents three ecologies − mind, society and environment.ii Key to understanding their entangled − and thus inseparable − nature is Bateson’s elaboration of distributed cognition, whereby the pathways of the mind are not reducible to the brain, nervous systems, or confines of the body, but are immanent in broader social and environmental systems. The human is only ever part of a thinking system which includes other humans, technology and an environment. Indeed, Bateson contends that arrogating mental capacity exclusively to individuals or humans constitutes an epistemological error, whose wrongful identification of the individual (life-form or species) as the unit of ecological survival necessarily promotes a perspective whereby the environment is viewed as a resource to be exploited, rather than the source of all systemic value.iii

 

Guattari advances Bateson’s concepts in The Three Ecologies,iv expounding a mode of political ecology which has little to do with the notion of preserving ‘nature’, instead constructing an ethical paradigm and political mobilisations predicated upon connecting subjective, societal and environmental scales in order to escape globalised capitalism’s focus upon economic growth as the sole measure of wealth. According to Guattari, only by implementing an ethics which works across these three entangled ecologies can socially beneficial and environmentally sustainable models of growth be founded. Ecology then, presents a way of approaching machines which decentres the commonly encountered anthropocentrism that depicts machines (objects) assisting humans (subjects), instead encouraging us to consider ourselves and technologies as nodes within complex networks which extend across individual, social, environmental, and technological dimensions. Correspondingly, ecology requires a shift when considering value and growth, moving from the economic-led anthropocentric approach characteristic of neoliberalism to valuing the health and resilience of ecosystems and their human and nonhuman, living and nonliving components. Consequently, applying an ecological ethics may prove useful in considering ways to mitigate many of the deleterious material impacts of the contemporaneous ecology of machines.

 

This paper will proceed by exploring the contemporary ecology of hardware, examining ecological costs which are incurred during each phase of the current industrial production cycle. Additionally, the overall structure of this process will be analysed, alongside a conclusion which considers whether current iterations of information processing machines present opportunities for the implementation of a mode of production within which the barriers between producers and consumers are less rigid, allowing alternative ethics and value systems to become viable.

 

The initial stages in the contemporary industrial production process are resource extraction and processing. A vast array of materials is required for contemporary microelectronics manufacturing, including: iron, copper, tin, tungsten, tantalum, gold, silicon, rare earth elements and various plastics. Considering the ways that these materials are mined connects information processing technologies to the flows of energy and matter that comprise the globalised networks of contemporary markets and trade systems, refuting claims that information processing technologies are part of a virtual, cognitive, or immaterial form of production.

 

One environmentally damaging practice currently widely employed is open-cast mining, whereby the topmost layers of earth are stripped back to provide access to ores underneath, whilst whatever ecosystem previously occupied the surface is destroyed. Mining also produces ecological costs including erosion and the contamination of local groundwater; for example, in Picher, Oklahoma, lead and zinc mines left the area so badly polluted, and at such risk of structural subsidence, that the Environmental Protection Agency declared the town uninhabitable and ordered an evacuation.v Another series of ecological costs associated with resource extraction surrounds conflict minerals, an issue which is increasingly being acknowledged thanks to the activities of NGOs and activists publicising the links between conflict minerals in the Democratic Republic of Congo (particularly coltan, the Congolese tantalum-containing ore) and information technologies (particularly mobile phones). Whilst coltan and other conflict minerals were not a primary factor in the outbreak of civil/regional conflict in the DRC, which has led directly or indirectly to the deaths of over five million people over a dozen years, as the conflict wore on and the various factions required revenue-raising activities to finance their continuing campaigns, conflict minerals ‘became a major reason to continue fighting… the Congo war became a conflict in which economic agendas became just as important as other agendas, and at times more important than other interests.’vi Factions including the Congolese army, various rebel groups and invading armies from numerous neighbouring states fiercely contested mining areas, as controlling mines allowed the various armed groups to procure minerals which were then sold for use in microelectronics, in order to finance munitions, enabling the continuation of military activities.

The role of the global microelectronics industry in financing the most brutal conflict of the last twenty years reveals the connections between ‘virtual’ technologies and the geopolitics of globalised capitalism.

 

Engaging with the ecology of machines requires considering the ethical and political implications of the consequences that current patterns of consumption wreak upon people and ecosystems geographically far removed from sites of consumption, onto whom the brunt of the negative externalities generated by current practices frequently falls. In this case the costs of acquiring cheap tantalum – a crucial substance in the miniaturisation of contemporary microelectronics – are borne not by consumers or corporations, but by people inside an impoverished and war-ravaged central African state.

 

Once extracted, materials are refined into pure elements and compounds, transformed into components, and assembled into products during the manufacturing phase of the production process. Since the late 1980s there has been a shift away from a model in which the corporations that brand and sell information technology hardware also manufacture it. Instead, a globalised model now dominates the industry, whereby manufacturing is primarily conducted by subcontractors in vast complexes concentrated in a handful of low-cost regions, primarily south-east Asia.vii This can be understood within the broader context of changes to the global system of industrial production, whereby manufacturing is increasingly handled by subcontractors in areas where labour costs are low and legislation protecting the rights of workers and local ecosystems is absent or weakly enforced. Consequently, this transition has been accompanied by marked decreases in wages and safety conditions, alongside increased environmental damage as companies externalise costs onto local ecosystems.viii

 

Information technology sweatshops are receiving increasing attention and have begun to enter public consciousness, partially as a consequence of campaigning by NGOs, and partially due to a spate of suicides among young migrant workers at Foxconn’s Longhua Science and Technology plant in Shenzhen, China. Between January and May 2010, fourteen workers aged 18-25 jumped from factory roofs to end their lives, escaping an existence spent working 60-80 hours a week and earning around US$1.78 per hour manufacturing information processing devices such as the Apple iPad for consumers elsewhere in the world.

 

Once information processing technologies have been discarded, they become part of the 20-50 million tonnes of e-waste produced annually,ix much of which contains toxic substances such as lead, mercury, hexavalent chromium and cadmium. Whilst it is illegal for most OECD nations to ship hazardous or toxic materials to non-OECD countries, and illegal for non-OECD nations to receive hazardous wastes,x vast quantities of e-waste are shipped illicitly, routinely mislabelled as working goods for resale in order to circumvent laws such as the Basel Convention and the EU’s Waste Electrical and Electronic Equipment (WEEE) Directive.xi Estimates from 2006 suggest that 80% of North America’s and 60% of the EU’s electronic waste was being exported to regions such as China, India, Pakistan, Nigeria and Ghana.xii Essentially, wealthy nations externalise the ecological costs of their toxic waste onto impoverished peoples in the global south.

 

Once e-waste arrives in these areas it is ‘recycled’: machines are manually disassembled by workers often earning less than US$1.50 per day,xiii who use a variety of techniques to recover materials which can be resold. For example, copper is retrieved from wiring by burning off the plastic casings, a process which releases brominated and chlorinated dioxins and furans, highly toxic materials which persist in organic systems, meaning that workers are poisoning both themselves and local ecosystems. Investigations by the Basel Action Network reveal that:

 

Interviews reveal that the workers and the general public are completely unaware of the hazards of the materials that are being processed and the toxins they contain. There is no proper regulatory authority to oversee or control the pollution nor the occupational exposures to the toxins in the waste. Because of the general poverty people are forced to work in these hazardous conditions.xiv

 

This activity is often subsumed under the rhetoric of ‘recycling’, with its associated connotations of environmental concern. The reality, however, is that international conventions and regional laws are broken in order to reduce the economic costs of treating the hazardous remains of digital hardware.

 

The systematic displacement of negative externalities minimises the cost of commodities for consumers and improves profitability for corporations, but in doing so it repeats the epistemological error delineated by Bateson and Guattari regarding the wrongful identification of value within systems. Creating systems designed to maximise benefits for the individual consumer − or individual corporation − while externalising costs onto the social and ecological systems which support those individual entities ultimately results in the breakdown of the systems which consumers and corporations rely upon. Although such strategies create short-term profitability, their neglect of longer-term consequences breeds systemic instabilities which will eventually return to haunt these actors:

 

            If an organism or aggregate of organisms sets to work with a focus on its own survival and thinks that is the way to select its adaptive moves, its ‘progress’ ends up with a destroyed environment. If the organism ends up destroying its environment, it has in fact destroyed itself… The unit of survival is not the breeding organism, or the family line or the society… The unit of survival is a flexible organism-in-its-environment.xv

 

There have, however, been numerous interventions by NGOs, activists and concerned citizens who have employed the very machines at issue to address and alter these deleterious effects. The deployment of social media to raise awareness of these issues and to pressure corporations and governments into altering practices and laws highlights what Bernard Stiegler and Ars Industrialis describe as the pharmacological context of contemporary technics:xvi xvii machines are simultaneously the poison and the remedy to that poison. Thinking in terms of poison and toxicity is particularly cogent with reference to the material impacts of digital technologies: what can otherwise appear to be a merely metaphorical way of approaching attention and desire amongst consumers becomes an insightful analysis of the material impacts which accompany the shifts in subjectivity that Stiegler argues arise from changing technological environments.

 

The actions implied by this approach initially seem entirely inadequate given the scope of the problems: ‘retweeting’ messages and ‘liking’ pages in the face of serious social and ecological problems bound up with the dynamics of globalised capitalism appears laughable. However, collective action made possible by networked telecommunications has effected change in numerous cases. Wages at Foxconn’s plant in Shenzhen rose from 900 to over 2,000 yuan in less than a year in response to sustained pressure mobilised by assemblages of humans and machines, many of the latter having been assembled within that very factory. In the face of widespread networked protests, Apple cancelled a contract with another Chinese subcontractor because of its employment of child labour.xviii Lobbying by NGOs such as Raise Hope For Congo,xix supported by a networked activist community, has convinced the US Congress to examine legislating to phase out the use of conflict minerals.

 

The mobilisation of attention via these socio-technological networks effects change in two primary ways: by raising awareness and altering vectors of subjectivity amongst consumers, and by subsequently mobilising this attention as public opinion to pressure governmental and corporate actors into altering their practices. In the face of this type of networked action, governments are compelled to avoid the appearance of supporting unethical practices, while corporations, as fabrication-free entities which design and market, but do not manufacture, products, face the potential toxification of their brands. Corporations such as Apple and Dellxx have demonstrated a willingness to take remedial action, albeit often in a limited way.xxi

 

There are additional issues raised by the structure of the flows of matter associated with the system in its entirety. The industrial model of production involves a near-linear flow throughout the stages of a machine’s lifespan: resources are extracted, processed, used and then discarded. Recycling is partial, leading to the steady accumulation of ‘waste’ matter in landfills. By contrast, when examining how ecosystems work we are confronted with cyclical processes containing multiple negative feedback loops. These cycles create sustainable processes: there is no end stage where waste accumulates, as the outputs of one process become inputs for other nodes in the network, allowing systems to run continuously for millions of years. Feedback loops within these systems build resilience, so minor perturbations do not create systemic instability or collapse; collapse occurs only when the system faces a major disturbance, a substantial alteration to the speeds or viscosities of ecological flows which exceeds adaptive capacity. In the past, ecological collapse and planetary mass extinction events have been triggered by phenomena such as an asteroid striking the planet; today a mass extinction event and a new geological age, the Anthropocene,xxii are under way because of anthropogenic industrial activity.
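As a rough illustration of the difference between these two structures, consider the following toy simulation, a sketch with invented parameters rather than empirical figures, which compares a linear ‘extract, use, discard’ flow with a partially closed loop in which a fraction of each year’s discarded material is recovered and fed back as input.

```python
# Toy model contrasting a near-linear production flow with a partially closed
# loop in which recovered material offsets virgin extraction.
# All quantities and rates are invented for illustration, not empirical figures.

def simulate(years, demand=100.0, recycle_fraction=0.0):
    """Return (total virgin material extracted, total waste landfilled)
    after `years`, assuming a fixed annual demand for new machines and that
    a fraction of each year's discarded machines is recovered as feedstock."""
    extracted = 0.0
    landfill = 0.0
    recovered = 0.0
    for _ in range(years):
        virgin_needed = demand - recovered   # feedback: recovered matter offsets extraction
        extracted += virgin_needed
        discarded = demand                   # everything made is eventually discarded
        recovered = discarded * recycle_fraction
        landfill += discarded - recovered
    return extracted, landfill

linear = simulate(30, recycle_fraction=0.0)   # current near-linear model
looped = simulate(30, recycle_fraction=0.8)   # hypothetical closed-loop model

print(f"Linear flow: extracted={linear[0]:.0f}, landfilled={linear[1]:.0f}")
print(f"Closed loop: extracted={looped[0]:.0f}, landfilled={looped[1]:.0f}")
```

Even with these crude assumptions, adding the feedback loop cuts cumulative extraction and landfill accumulation by a large factor, which is precisely the structural point the cyclical model makes.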

 

Given the state of play with reference to climate change, loss of biodiversity and the associated impacts upon human civilisations, urgent action is required to reconfigure the industrial production process along alternatives based on biomimicry: cyclical processes resembling closed-loop systems such as the nitrogen cycle. This methodology has been adopted by the cradle-to-cradle movement, which advocates that the waste from one iteration of a process should become the nutrients, or food, for successive iterations. Products are not conceived of as commodities to be sold and discarded, but as valuable assets to be leased for a period before their materials are transformed into other, equally valuable products. A cradle-to-cradle methodology also seeks to remove toxic substances from goods during the design process, entailing that there is no subsequent conflict between cheap but damaging and responsible but expensive disposal at a later date.

 

Another movement which points towards alternative methods of producing machines is open-source hardware (OSH), whose communities apply an ethic derived from free/open-source software (FOSS) development and implement homologous processes for designing and producing hardware. Whereas FOSS involves the distributed collaboration of self-aggregating peers using the hardware/software/social infrastructures of the Internet to create software – a non-rival good which can be directly created and shared by exchanging digital data – OSH communities cannot collectively create the finished products, but share designs for how to make machines and source the requisite parts. Operating in this manner enables a mode of producing rival goods, including information technology hardware, which is led by user innovation and the desires and ethics of the producer/user community, rather than by profit-orientated corporations, who have a vested interest in creating products which rapidly become obsolete and require replacement. OSH presents an example of the democratisation of innovation and production,xxiii and a rebuttal of the contention that peer-to-peer systems are only relevant to non-rival, informational ventures, whilst also presenting one way of approaching Stiegler’s concept of an economy of contribution.

 

Stiegler contends that the particular affordances of contemporary computing technologies enable the construction of a new economy which elides the distinction between producers and consumers. According to Stiegler, free software exemplifies a historically novel methodology predicated on communal labour and characterised by the formation of positive externalities.xxiv Whereas the contemporary ecology of machines is dominated by a model based on an econocentrism which advocates the externalisation of any possible costs onto social and environmental systems which are seen as ‘outside’ of economic concern and therefore valueless, Stiegler contends that there exists the potential to construct an alternative ecology of machines based upon broader conceptions of growth, resembling the ecological value systems advocated by Bateson and Guattari.

 

While the pharmacological context of technology entails that an economy of contribution is by no means certain, or even probable, a reorientation of the ecology of machines is crucial if we are to escape the spectre of ecological collapse. The current system of producing the material infrastructure of digital cultures is ecologically unsustainable and socially unjust, with problems both in the structure of the production process as a whole and within each of its constituent stages. Only through a sustained engagement with the material consequences of information technologies, involving an eco-ethically inflected application of these machines themselves, may equitable alternatives based around contribution rather than commodities supersede the destructive tendencies of the contemporary ecology of machines.

 

i Michael Begon, Colin Townsend and John Harper, Ecology: From Individuals to Ecosystems, 4th Edition, Malden MA and Oxford: Blackwell Publishing, 2006

ii Gregory Bateson, Steps To An Ecology of Mind, Northvale, New Jersey: Jason Aronson Inc., 1972, pp. 435-445

iii Gregory Bateson, Steps To An Ecology of Mind, Northvale, New Jersey: Jason Aronson Inc., 1972, p. 468

iv Felix Guattari, The Three Ecologies, trans. Ian Pindar and Paul Sutton, London: Athlone Press, 2000

v John D. Sutter, Last Man Standing at Wake for Toxic Town, CNN, 2009, available at http://articles.cnn.com/2009-06-30/us/oklahoma.toxic.town_1_tar-creek-superfund-site-picher-mines?_s=PM:US#cnnSTCText last visited 22/03/2012

vi Michael Nest, Coltan, Cambridge: Polity Press, 2011 p76

vii Boy Lüthje, The Changing Map of Global Electronics: Networks of Mass Production in the New Economy, in Ted Smith, David Sonnenfeld and David Naguib Pellow (eds), Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Philadelphia: Temple University Press, 2006, p. 22

viii Rohan Price, Why ‘No Choice is a Choice’ Does Not Absolve the West of Chinese Factory Deaths, Social Science Research Network, 2010, available at SSRN: http://ssrn.com/abstract=1709315 (last visited 15/03/2012)

ix Electronics Takeback Coalition, Facts and Figures on E-Waste and Recycling, 2011, available at http://www.electronicstakeback.com/wpcontent/uploads/Facts_and_Figures_on_EWaste_and_Recycling.pdf last visited 15/03/2012

x This is prohibited under the Basel Convention, which forbids the transfer of toxic substances from OECD nations to non-OECD nations. However, the USA, Canada and Australia refused to sign the convention, and so it remains legal for these states to export hazardous wastes, although it is illegal for the non-OECD countries to which they send those wastes to receive them

xi The WEEE directive, passed into EU law in 2003 and transposed into UK law in 2006, states that all e-waste must be safely disposed of within the EU at an approved facility, and that consumers can return used WEEE products when they purchase new products

xii Jim Puckett, High-Tech’s Dirty Little Secret: Economics and Ethics of the Electronic Waste Trade, in Ted Smith, David Sonnenfeld and David Naguib Pellow (eds), Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Philadelphia: Temple University Press, 2006, p. 225

xiii Jim Puckett and Lauren Roman, E-Scrap Exportation, Challenges and Considerations, Electronics and the Environment, 2002 Annual IEEE International Symposium, available at http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1003243 last visited 15/03/2012

xiv Basel Action Network and Silicon Valley Toxics Coalition, Exporting Harm: The High-Tech Trashing of Asia, 2002, p. 26, available at http://www.ban.org/E-waste/technotrashfinalcomp.pdf last visited 15/03/2012

xv Gregory Bateson, Steps To An Ecology of Mind, Northvale, New Jersey: Jason Aronson Inc., 1972, p. 457

xvi Bernard Stiegler, For a New Critique of Political Economy, Cambridge: Polity, 2010

xvii Ars Industrialis, Manifesto 2010, 2010, available at http://arsindustrialis.org/manifesto-2010 last visited 17/03/2012

xviii Tania Branigan, Apple Report Reveals Child Labour Increase, The Guardian, 15 February 2011, available at http://www.guardian.co.uk/technology/2011/feb/15/apple-report-reveals-child-labour last visited 18/03/2012

xix http://www.raisehopeforcongo.org/ last visited 15/03/12

xx David Wood and Robin Schneider, Toxicdude.com: The Dell Campaign, in Ted Smith, David Sonnenfeld and David Naguib Pellow (eds), Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Philadelphia: Temple University Press, 2006, pp. 285-297

xxi For example, the similarities between the labour rights violations reported at Foxconn in Shenzhen in 2006 and in 2012 suggest that Apple’s claims in 2006 that they would take action to redress these violations were public relations rhetoric not substantiated by action

xxii Jan Zalasiewicz, Mark Williams, Will Steffen and Paul Crutzen, The New World of the Anthropocene, Environmental Science & Technology 44 (7), 2010: 2228–2231, doi:10.1021/es903118j

xxiii Eric von Hippel, Democratizing Innovation, Cambridge MA: MIT Press, 2005

xxiv Bernard Stiegler, For a New Critique of Political Economy, Cambridge: Polity, 2010, p. 129

The Global Warming Pause

Ahead of the upcoming IPCC report into the global climate and climate change, the news agenda seems to have been largely dominated by stories asking why global warming has paused for the last 15 years (see the BBC, the BBC again, the torygraph and the NZ Herald, among countless other examples).

A substantial part of this seems to be the repetition of familiar claims that 1998 was the hottest year on global record, and that if global warming scientists were right we should have seen a hotter year at some point during the past 15 years. Hence, the argument goes, climate change has paused, the models and data suggesting that human fossil fuel emissions were to blame for late 20th century warming were wrong, and consequently any argument for restricting emissions in future is null and void.

Which of course ought to lead to the question: who says that 1998 was the hottest year on record? The answer to this is somewhat complicated, but also somewhat revealing. It ain’t NASA, who run GISTEMP (the Goddard Institute for Space Studies Surface Temperature Analysis) and who have 2010 as the hottest year on record followed by 2005, with 9 of the 10 hottest years occurring after the year 2000 (1998 being the only pre-2000 year in that list). It also isn’t NOAA (the US National Oceanic and Atmospheric Administration), who compile a global temperature record at the National Climatic Data Center (NCDC), whose data again places 2010 as the hottest year on record, followed by 2005, with 1998 in third, and 9 of the hottest 10 years on record occurring after the year 2000 (i.e. after global warming has allegedly paused). Which leaves the record compiled by the UK Met Office’s Hadley Centre and the University of East Anglia’s Climatic Research Unit (CRU). The CRU is of course the unit at the centre of the Climategate faux controversy, in which sceptics hacked emails and published excerpts from private correspondence out of context, claiming fraud and data manipulation and generating global headlines; numerous independent investigations subsequently found no evidence of wrongdoing. The latest version of this temperature series is HadCRUT4, which again shows that 2010 was the hottest year on record, followed by 2005, followed by 1998.
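For anyone who wants to check these rankings themselves, each of these institutions publishes its annual global mean temperature anomalies, and sorting them takes only a few lines. The sketch below assumes you’ve saved one of those series as a local two-column CSV; the filename and column names are my own placeholders, not anything the agencies provide.

```python
import csv

# Rank the warmest years in an annual global mean temperature anomaly series.
# Assumes a local file "annual_anomalies.csv" (a placeholder name) with two
# columns, "year" and "anomaly", where anomaly is in degrees C relative to
# whichever baseline the chosen dataset uses.

with open("annual_anomalies.csv", newline="") as f:
    rows = [(int(r["year"]), float(r["anomaly"])) for r in csv.DictReader(f)]

hottest = sorted(rows, key=lambda r: r[1], reverse=True)[:10]

for rank, (year, anomaly) in enumerate(hottest, start=1):
    print(f"{rank:2d}. {year}  {anomaly:+.2f} C")

post_2000 = sum(1 for year, _ in hottest if year >= 2000)
print(f"{post_2000} of the 10 warmest years fall after 2000")
```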

So where does the claim that 1998 was the hottest year come from? Well, HadCRUT4 is the latest, and most accurate, temperature record maintained by the Met Office and CRU (for a detailed explanation of what’s changed look here). If we ignore that and instead use their previous version, HadCRUT3v, then, and only then, does 1998 appear to be the warmest year on record. So why did this old record suggest a different year to the NASA and NCDC records (and indeed the latest version of the CRU record)? The main reason is the different methods used to generate global temperatures. None of these institutions is able to measure the temperature everywhere in the world; they use stations in various locations, and the places with the fewest stations tend to be the polar regions (where there also tend to be the fewest people). And one of the things we know quite well is that the Arctic has been the fastest-warming region on the planet. Whereas GISTEMP interpolates values between measured locations in the Arctic, HadCRUT3v left them blank as unknown, which introduced a cold bias into that dataset compared with the others, and which explains why it has been replaced by a dataset featuring a greater number of stations and correlating much more strongly with the other datasets.
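The effect of that coverage gap is easy to illustrate with a toy example: imagine the planet as just five equal-area zones, with the ‘Arctic’ zone warming fastest, and compare the global average you get when you estimate the missing zone against the one you get when you simply leave it out. The anomaly values below are invented purely to show the direction of the bias.

```python
# Toy illustration of coverage bias: invented anomalies (degrees C) for five
# equal-area zones, with the fastest-warming zone also the least observed.
zone_anomalies = {
    "tropics": 0.3,
    "n_midlat": 0.5,
    "s_midlat": 0.4,
    "antarctic": 0.2,
    "arctic": 1.2,   # fastest-warming zone, and the one with fewest stations
}

full_coverage = sum(zone_anomalies.values()) / len(zone_anomalies)

# Leaving the Arctic cell blank (as HadCRUT3v effectively did) means averaging
# only the observed zones, which drags the 'global' figure down.
observed = {k: v for k, v in zone_anomalies.items() if k != "arctic"}
partial_coverage = sum(observed.values()) / len(observed)

print(f"With the Arctic estimated:  {full_coverage:.2f} C")
print(f"With the Arctic left blank: {partial_coverage:.2f} C  (cold bias)")
```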

So the ‘pause’ in climate change is something that only exists if you exclusively look at a now obsolete and known-to-be-biased dataset, generated by a group that those using the data have previously accused of fraud, and if you ignore the fact that 1998 was in any case a super El Niño year which had a dramatic short-term effect on global weather – hence the other 9 of the 10 hottest years on record all occurring since the year 2000. If you used 1997 or 1999 as the start date there wouldn’t appear to be any pause in any dataset (outdated or otherwise), but cherry-picking the year when specific short-term conditions made things abnormally hot, added to cherry-picking a now obsolete dataset, allows sceptics to make the ‘global warming has paused’ argument (see this excellent Skeptical Science post for details on cherry-picking).
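To see how much difference the choice of start year can make, here’s a sketch using made-up numbers: a steady 0.015°C per year warming trend with a one-off +0.3°C spike in 1998 standing in for the El Niño. Fitting ordinary least-squares trends from different start years shows the apparent trend dipping when the window begins at the abnormally hot year, and recovering as soon as the start moves past it.

```python
# Synthetic, noise-free series: 0.015 C/yr warming plus a one-off +0.3 C spike
# in 1998 standing in for the super El Nino. All numbers are invented to show
# the mechanism of start-year cherry-picking, not to mimic real measurements.
years = list(range(1990, 2014))
temps = [0.015 * (y - 1990) + (0.3 if y == 1998 else 0.0) for y in years]

def trend(start_year):
    """Ordinary least-squares slope (C per year) from start_year to the end."""
    pts = [(y, t) for y, t in zip(years, temps) if y >= start_year]
    n = len(pts)
    mean_y = sum(y for y, _ in pts) / n
    mean_t = sum(t for _, t in pts) / n
    return sum((y - mean_y) * (t - mean_t) for y, t in pts) / sum(
        (y - mean_y) ** 2 for y, _ in pts
    )

# The fitted trend is lowest when the window starts at the 1998 spike and
# recovers fully once the start year moves past it.
for start in range(1995, 2001):
    print(f"Trend from {start}: {trend(start) * 100:.2f} C per century")
```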

So why are so many mainstream media outlets focussing upon this as the main story in the lead-up to the IPCC report? Probably because it’s a more sensationalist and conflict-driven story than one which reads ‘science has been slowly progressing, turning a 90% confidence in predictions in 2007 into a 95% confidence by 2013’, allied with a big PR drive from a number of the main players in the climate denial industry.