Excessive research

Liverpool, UK, November 3-5, 2015

Pablo Velasco – Logics of excess: waste and surplus in Bitcoin mining

Superabundant design

Cryptocurrencies are surreptitiously inhabited by a logic of excessiveness. This overabundance is expressed at different levels: first and most important, in the core technique used for the double purpose of securely validating transactions and producing tokens, also known as mining; second, as a consequence of the former, in the energy consumption and waste production of specialized mining hardware. Particular logics of excess are expressed in the large and futile surplus of algorithmic power, energy, and waste embedded in cryptocurrencies’ operation of a successfully secure transaction system.
In Bitcoin, the prototypical cryptocurrency, the intrinsic value of the tokens settles under an algorithmic regime closely related to excessiveness. An introductory video explains that “the bitcoin network is secured by individuals called miners. Miners are rewarded newly generated bitcoins for verifying transactions” (WeUseCoins). Miners are machines that verify the signed public keys of each transaction and validate these into blocks in a public registry (i.e. the Blockchain). The job of successfully validating and packing transactions produces new tokens for the miner and generates a Proof-of-Work. The latter is the solution to a ‘puzzle’, which can then be easily checked by any other machine in the network. Since the design of the system seeks a controlled pace, if coins are generated too fast (because there are more and/or stronger miners) the ‘puzzle’ becomes harder (Nakamoto).
The analogy of a puzzle is only appropriate in its algorithmic dimension; that is, it must be understood not as a toy or a game, but as a problem that must be solved by following a set of rules. More accurately, the puzzle consists in generating random hashes (strings of numbers and letters of a defined length) until one of them fulfils the requirements set by the difficulty (in the case of Bitcoin, a number of zeroes at the beginning of the resulting hash). Due to the non-repeatable number involved in the operation, the ‘nonce’, it is especially difficult to create a ‘desirable’ hash. Every attempt to come up with a successful hash uses a new nonce, thus randomizing the result. Difficulty is hence, in this context, a matter of probability rather than of hardship. In Bitcoin, difficulty is an algorithmic adversity.
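The loop can be sketched in a few lines of Python. This is a toy illustration of the principle, not Bitcoin’s actual protocol (which double-hashes an 80-byte block header with SHA-256 against a compactly encoded target):

```python
import hashlib

def mine(block_data: str, zeros: int, max_tries: int = 10_000_000):
    """Try successive nonces until the hash starts with `zeros` hex zeroes."""
    target = "0" * zeros
    for nonce in range(max_tries):
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            # Finding this nonce took many attempts; checking it takes one hash.
            return nonce, digest
    return None

nonce, digest = mine("block payload", zeros=4)
print(nonce, digest)
```

Raising `zeros` by one multiplies the expected number of attempts by sixteen, which is the sense in which difficulty is probabilistic: the work cannot be shortcut, only retried.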

Difficulty (D) is now (19th September 2015) set at 59,335,351,233.87, which translates into an average of 2^25 × D hashes to find a block. This means one opportunity to build a block for every 19,909,640,081,173,010,000 (A) attempted hashes. The only way to deal with the odds involved in this operation is to have a machine capable of generating as many attempts per second as possible. A state-of-the-art dedicated unit available today can manage 5,500,000,000,000 hashes per second [SP20 Jackson by Spondoolies-Tech]. To calibrate the surplus involved, it is better to think of it in negative terms: unlike the lottery (at which a lone miner would have better odds), where every non-winner plays a passive role, the miner is a machine that actively uses computational power to generate 19,909,640,081,173,009,999 (A – 1) useless hashes. It is hard to think of a greater surplus for a system.
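Taken at face value, the figures above imply the following back-of-the-envelope arithmetic (a sketch; the exact constant relating difficulty to expected hashes depends on the protocol’s target encoding):

```python
expected_hashes = 19_909_640_081_173_010_000  # average attempts per block (A)
unit_rate = 5_500_000_000_000                 # hashes/second, SP20 Jackson

# Average time for a single state-of-the-art unit to find one block:
seconds = expected_hashes / unit_rate
print(f"{seconds:,.0f} s, about {seconds / 86_400:.0f} days of non-stop hashing")

# All but one of those attempts are discarded:
useless = expected_hashes - 1
print(f"{useless:,} useless hashes per block")
```

Roughly six weeks of uninterrupted hashing, on average, for a single winning hash: the surplus is built into the odds.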

Waste and energy

The excessive nature of the puzzles that mechanically produce hashes takes place on a logical level, but it also transforms itself into a material overflow. The controlled production of tokens translates directly into a significant consumption of energy and production of waste. From the deployment of the device until the middle of 2010, mining was a task that any modern CPU could handle, even though the process would push it to its limits and heavily reduce its lifetime. Until mid-2011 the workload moved to GPUs, but these were rapidly surpassed by FPGAs (Field Programmable Gate Arrays), which reduced energy consumption while achieving more hashes per second. The next natural step was ASIC miners (Application-Specific Integrated Circuits) at the beginning of 2013. [For a history of Bitcoin mining hardware up until the end of 2013, see (Taylor).]
Even though the network was maintained at the beginning by every enthusiast with a computer and some energy to spare, today the mining industry is populated by pools and dedicated farms. This evolution was foreseen in Bitcoin’s design (Nakamoto, ‘CML’). In pools, different miners contribute their processing power to calculate a block together. The reward is then distributed among them, usually according to the computational power contributed, although each pool has its own share protocol. Each one of these clustered miners can run one or multiple ASICs. Mining farms, on the other hand, are dedicated places that behave in an undistributed Fordist fashion, and are even located in old factories or abandoned stores, which house swarms of ASICs (‘Bitcoin Mining in an Abandoned Iowa Grocery Store’). The energy consumed in farms is noteworthy. A year-old paper (Malone and O’Dwyer) estimated that the mining network at the time was on par with the electricity consumption of Ireland. Mining units, and their energy efficiency, have improved in the last year, but the difficulty has grown too, resulting in a considerable energy footprint problem. One specific, still-operating farm is reported to run 10,000 S3 mining units (‘My Life Inside a Remote Chinese Bitcoin Mine’). The Antminer S3 produces 441 GH/s and consumes 800 W/TH: roughly 4,761 watt-hours a day for just one unit. A farm with 10,000 of these units would consume 47,616 kWh a day. Comparing these figures with home energy consumption estimates in the U.S. (‘How Much Electricity Does an American Home Use? – FAQ – U.S. Energy Information Administration’) shows that this single farm consumes 1,571 times more energy than an average household.
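Spelled out, the arithmetic above runs as follows (a sketch using the source’s per-unit daily figure; the household baseline it implies, about 30 kWh per day, matches the EIA average cited):

```python
wh_per_unit_per_day = 4_761.6   # Antminer S3 daily consumption, as quoted
units = 10_000                  # reported size of the farm

farm_kwh_per_day = wh_per_unit_per_day * units / 1_000
household_kwh_per_day = farm_kwh_per_day / 1_571  # the 1,571x comparison
print(f"farm: {farm_kwh_per_day:,.0f} kWh/day; "
      f"implied household use: {household_kwh_per_day:.1f} kWh/day")
```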
Mining, at this point in the evolution of the device, is a race, and reducing the energy footprint is grounded not in pollution awareness but in cost cutting. And while mining units become progressively more energy efficient, they simultaneously become obsolete faster. A constant refill of state-of-the-art equipment is necessary to stay in the race. Obsolescence of hardware is not exclusive to the Bitcoin phenomenon: smartphones and all sorts of gadgets are ‘recycled’ every year as a complex economic and cultural outcome of, among other things, planned obsolescence, an appealing subject for marketing and industrial economics some decades ago, recently reborn within the scope of ecological awareness (Guiltinan). But unlike in the smartphone market, mining units do not suffer a short life because of hardware fragility, cheap materials or fashionable ideologies of consumption; ‘planned obsolescence’ for ASICs resides in the scarcity model of Bitcoin’s design. Tokens have a fixed limit (21 million) and get harder to obtain, so the fast production and consumption cycles of the hardware are intrinsic to the system, at least until mining becomes unprofitable, a scenario in which the number of miners diminishes and with it the difficulty (which, recursively, renews the interest in mining). Difficulty, however, rarely drops, and in the long run describes a rising, step-like curve (‘Bitcoin Difficulty Chart – Chart of Mining Difficulty History’), which makes mining hardware age fast.
Being circuits optimized specifically for hashing, ASICs do not have a second life. Unlike GPUs, they are useless for any other task, which makes them completely worthless after their useful, yet short, life. Since there is no second-hand market for mining units, they rapidly add to the problem of high-tech trashing. Electronic waste today arguably accounts for about the same amount as plastic packaging waste (in municipal figures) (Puckett and Smith). Most e-waste is recycled in foreign countries because of low labor costs and loose environmental regulations, both external (at least in the U.S., for the export of hazardous materials) and internal (waste handling in the host countries). Arguably, around 80% of e-waste is exported to Asia, and 90% of those exports go to China. The hashing power that runs throughout the Bitcoin network, i.e. the most numerous and most powerful mining machines, clusters in China too. On a rough estimate (‘Bitcoin Hashrate Distribution’), more than 50% of the hashing power is concentrated in Chinese mining pools and a significant part of the rest is in the U.S., meaning that most of Bitcoin’s hazardous e-waste recycling labor will eventually end up in poor communities in Asia.

Surplus logics

The number of mines, and of ASICs in them, is obscure. Nonetheless, the quantity of e-waste coming directly from mining does not compare to the waste produced by other gadgets, such as those of the smartphone industry. The discussion around excess, however, is framed not so much in terms of quantity as of lifespan and purpose: mining hardware units are limited to the one and only task of producing hashes. The substantial empty computational work, energy usage, and e-waste produced in the mining operation have no other goal, and so far no other purpose, than to keep the machine running. Cryptocurrencies’ particular system of consumption is a means to an end, and whether this surplus is void or not depends on the latter. To the question of whether Bitcoin mining is a waste of energy, the Bitcoin Foundation (‘FAQ – Bitcoin’) answers that “spending energy to secure and operate a payment system is hardly a waste”. This phrase can be reformulated as “it is not a waste, as long as the system works”. The idea of waste is superseded by efficiency, and annulled in a scenario where the system is fully operative.
The capability and superior security of the system, underpinned by the former logics of wastage, is what gives Bitcoin and other cryptocurrencies a compelling symbolic value. An economic value is added to this initial computational worth once media attention and market performance effectively treat the tokens of this system as assets or financial objects. A rush to adopt and exploit these avenues followed as the system became more and more public, in great part due to its speculative disposition, and ended in more traditional representations of excess in the form of financial bubbles.
Due to the layered and fuzzy nature of cryptocurrencies, it is difficult to avoid the accumulation of different expressions of symbolic, economic and informational value. Cryptocurrencies and their ecosystems are expressed across diverse fields: financial markets, social platforms, project platforms (i.e. GitHub), mainstream and dedicated news (i.e. CoinDesk), scholarly research, and their own material network and Blockchain. Open research questions arise from the multiple informational sources of the object: How to frame research within this information overload? What is research surplus here? How much of the object’s nature resides in its very excessive performances, and how much is made up through mere contemporary compulsive research? Are the former logics of wasted surplus to keep systems running exclusive to cryptocurrencies, or are they the subtle milieu of a networked/algorithmic society? It has been argued that information technologies, material production and disposal included, operate as technologies of excess and, recursively, that the devices involved in these cycles are “the very devices through which we can trace emerging forms of proliferation” (Gabrys 33). As research gets involved with the digital, both as an object of study and as a methodological device, the surplus that comes from within it is inherited in different forms. In the case of cryptocurrencies, a network communicates uninterruptedly to share an undetermined number of copies of the registry, in order to create what are probably the first truly unique digital tokens. Peculiarly, the token is a non-duplicable unit enabled by the performance of a multiplicity of machines, the entropy of a large number, and the logics of (an illusory) overabundant machine labor.


‘Bitcoin Difficulty Chart – Chart of Mining Difficulty History’. CoinDesk. N.p., n.d. Web. 28 Sept. 2015.
‘Bitcoin Hashrate Distribution’. N.p., n.d. Web. 28 Sept. 2015.
‘Bitcoin Mining in an Abandoned Iowa Grocery Store’. Motherboard. N.p., n.d. Web. 28 Sept. 2015.
‘FAQ – Bitcoin’. N.p., n.d. Web. 28 Sept. 2015.
Gabrys, Jennifer. Digital Rubbish: A Natural History of Electronics. Reprint edition. University of Michigan Press, 2013. Print.
Guiltinan, Joseph. ‘Creative Destruction and Destructive Creations: Environmental Ethics and Planned Obsolescence’. Journal of Business Ethics 89 (2009): 19–28. Print.
‘How Much Electricity Does an American Home Use? – FAQ – U.S. Energy Information Administration (EIA)’. N.p., n.d. Web. 28 Sept. 2015.
Malone, D., and K.J. O’Dwyer. ‘Bitcoin Mining and Its Energy Footprint’. Institution of Engineering and Technology, 2014. 280–285. CrossRef. Web. 23 July 2015.
‘My Life Inside a Remote Chinese Bitcoin Mine’. CoinDesk. N.p., n.d. Web. 28 Sept. 2015.
Nakamoto, Satoshi. ‘Bitcoin: A Peer-to-Peer Electronic Cash System’. Consulted 1 (2008): 2012. Print.
—. ‘CML: Bitcoin P2P E-Cash Paper’. Archive. Cryptography Mailing List. N.p., 2008. Web. 12 Apr. 2014.
Puckett, Jim, and Ted Smith, eds. Exporting Harm: The High-Tech Trashing of Asia. Seattle, Wash.: Diane Pub Co, 2003. Print.
Taylor, Michael Bedford. ‘Bitcoin and the Age of Bespoke Silicon’. Proceedings of the 2013 International Conference on Compilers, Architectures and Synthesis for Embedded Systems. Piscataway, NJ, USA: IEEE Press, 2013. 16:1–16:10. ACM Digital Library. Web. 23 July 2015. CASES ’13.
WeUseCoins. What Is Bitcoin? (v2). N.p., 2014. Film.

Graziele Lautenschlaeger – Sensing phenomena and the translation of materialities in Media Art


In order to overcome dichotomies that usually impoverish debates and proposals in the Media Art field, this research is based on an object of analysis related to its very materiality: sensing phenomena. It studies sensitive materials and devices, specifically photosensitive ones. Organic and machinic sensors are in the spotlight; their ambiguous nature of being a concept and a device at the same time is a key element to feed a transdisciplinary discussion. Sensors enable us to bridge the physical and conceptual worlds. While creating a genealogy of sensing phenomena in relation to the Art field, the research analyses them in relation to two main operations: the translation of materialities, and the role they play in the automatization and regulation of systems. The methodology has a historical and analytical approach, through Media Archaeology, Cultural Techniques and Second-order Cybernetics. It reviews and questions the traditional and established paradigms of Media Theory, structuring a thought that integrates the thinking and doing aspects of Media Art production, towards a “material philosophy” or a “philosophical engineering”. For that, a practical project is developed as part of the investigation’s method: an aesthetic experiment called “Self-portrait of an absence”.

Media Art, materiality of communication, sensing phenomena



The main purpose of the research is to develop a critical approach to contemporary Media Art production, in which artists constantly offer us conceptually and/or technically hermetic proposals, reflecting a historically and culturally constructed gap between theory and practice in creative processes. An expression of this distance is a statement by Edmond Couchot, a renowned author and critic of the field. In the book Media Art Histories, he states:

With digital images, a radically different automatization mode appears. Let’s not forget that digital images have two fundamental characteristics that distinguish them from the images mentioned earlier [from photography to television]: they are the result of an automatic calculation made by a computer. There is no longer any relation or direct contact with reality. Thus the image-making processes are no longer physical (material or energy related), but ‘virtual’ (Couchot, 2006, pp. 182-3).

When Couchot says that “the image-making processes are no longer physical (material or energy related)” he ignores all the existing materialities that his limited human senses cannot perceive. This is perhaps a result of the separation between the world of thinkers and the world of makers. Facing this situation, a question emerges: what would be an interesting and effective entry point for inquiring into this kind of misinformation, which only reinforces the gap between conceptualization and hands-on work, and for producing significant material for the media art community, helping makers and thinkers to visit each other’s worlds? Materials and devices related to sensing phenomena turned out to be a promising vector for this investigation.

Sensing phenomena: Some definitions

Before articulating sensitive elements and the Media Art field, it is important to have some definitions as a starting point. Let us consider that the sensing world is divided into natural and man-made sensors, as classified by Jacob Fraden:

On the one hand “The natural sensors, like those found in living organisms, usually respond with signals, having an electro-chemical character; that is, their physical nature is based on ion transport, like in the nerve fibers” (Fraden 01).

On the other hand, “in man-made devices, information is also transmitted and processed in electrical form – however, through the transport of electrons. Sensors that are used in artificial systems must speak the same language as the devices with which they are interfaced” (Fraden 01-02).

Moreover “The purpose of a sensor is to respond to some kind of an input physical property (stimulus) and convert it into an electrical signal which is compatible with electronic circuits. We may say that a sensor is a translator of a generally nonelectrical value into an electrical value” (Fraden 02).

This technical definition, built on the idea of translation, is in consonance with the idea that sensors are elements that enable the translation of materialities, a topic discussed further below. The argument is that they play an essential role in Media Art and in its simultaneous effects of presence and meaning production, in the direction of Hans Ulrich Gumbrecht’s concept of the materiality of communication.

Given the huge variety of existing sensitive materials and sensors, within the scope of this research it was decided to focus on the photosensitive ones.

Photosensitive elements and media

Starting with natural sensors, we can mention the sight sense of plants, a phenomenon that has already been used in some artworks. Plants cannot properly ‘see’ as a human does, but the sensitivity they present to light is essential to their lives. Besides their photosynthetic ability, sensors located at the tip of a plant’s stem, for instance, allow it to notice the direction of light, triggering growth towards the light source (phototropism). Another sight sense is located in the plant’s leaves and manages the flowering process, which is influenced by the amount of red light or by the length of the night (photoperiodism). The phytochrome in the leaves measures red light and takes on the role of a light-activated switch. Depending on the kind of red light, the flowering process is turned on or off.

Another interesting example of a natural sensing phenomenon is called quorum sensing. In most cases it consists of a system of stimuli and responses correlated to population density. Quorum sensing is used by several species of bacteria to coordinate gene expression according to the density of their local population. Similarly, some social insects use quorum sensing to determine where to nest. It can be understood as a sensor at a social scale and can function as a decision-making process in any decentralized system.

Bacteria that use quorum sensing produce and secrete certain signaling molecules (called autoinducers or pheromones). They also have a receptor that can specifically detect the signaling molecule (inducer). When the inducer binds the receptor, it activates transcription of certain genes, including those for inducer synthesis.

Quorum sensing was first observed in a bioluminescent bacterium that lives symbiotically in the photophore (or light-producing organ) of a Hawaiian bobtail squid. When the bacteria’s cells are free-living, the autoinducer is at low concentration, and, thus, cells do not luminesce. However, when they are highly concentrated in the photophore, transcription of luciferase is induced, leading to bioluminescence. In addition to its function in biological systems, quorum sensing has several useful applications for computing and robotics.
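As a decentralized decision mechanism, quorum sensing reduces to a simple threshold rule, which can be sketched as a toy model (the numbers are arbitrary and purely illustrative):

```python
def quorum_reached(cells: int, signal_per_cell: float, threshold: float) -> bool:
    """Each cell secretes a fixed amount of autoinducer; the collective
    behaviour switches on only when the pooled concentration crosses
    a threshold, i.e. only when the population is dense enough."""
    return cells * signal_per_cell >= threshold

# Free-living, sparse population: no luminescence.
print(quorum_reached(cells=50, signal_per_cell=0.1, threshold=100))
# Dense population inside the photophore: luciferase is transcribed.
print(quorum_reached(cells=2000, signal_per_cell=0.1, threshold=100))
```

No individual cell counts the population; the decision emerges from the accumulated signal, which is what makes the mechanism attractive for distributed computing and robotics.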

An example closer to our physical reality is the human eye, usually understood and modeled in anatomy and physiology books through the metaphor of a camera. Playing the role of film or of a CCD, the photosensitive cells of our eyes are located in the retina: rod and cone cells. Located on the outer edges of the retina, rods are responsible for the reception of low-intensity light and for peripheral vision. Cones are further classified into three kinds of cells, each type responding to visible light of different wavelengths on the electromagnetic spectrum. Long cones respond to light of long wavelengths, peaking at the color red; medium cones peak at the color green; and short cones are most sensitive to the wavelengths of the color blue. According to Kittler, it is very possible that the development of color images in media technology (the RGB system) became possible only after the understanding of such cells in our eyes. He states:
“In a similar way, the construction of images on television corresponds to the structure of the retina itself, which is like a mosaic of rods and cones; rods enable the perception of movement, while cones enable the perception of color, and together they demonstrate what is called luminance and chrominance on color television” (Kittler 36).

“First, technology and the body: the naked thesis, to place it immediately up front, would read as follows: we knew nothing about our senses until media provided models and metaphors” (Kittler 34).

These are only some examples that show how the understanding of the natural world and the human ability to build machines are mutually influential. As sensing phenomena cannot be observed in isolation, it is part of the research process to identify and analyze the operations related to them, especially regarding their expression in the fields of media and art. For the occasion of the workshop, the operation I would like to focus on is the sensor’s role in the idea of the translation of materialities.

Translation of materialities

The photophone is an example that illustrates sensors in their interface functionality: the translation of materialities.

Coincidentally, the photophone is an invention whose origin is based on the discovery of new chemical elements in nature, specifically selenium, a photosensitive element. The photophone was a telecommunications device that allowed the transmission of speech on a beam of light. It was invented jointly by Alexander Graham Bell and his assistant Charles Sumner Tainter in 1880, at Bell’s laboratory. It worked through the exchange between two parts: transmitter and receiver. The receiver was a parabolic mirror with selenium cells at its focal point. One can say this device is a precursor of optical fiber technology.

When sensitive materials are associated with electronics and digital processes, the creative possibilities of human beings are refreshed. When Vilém Flusser discusses the zero-dimensionality of digital media, he means that those media offer us the possibility of gathering all materialities in a lowest common denominator and, in a second step, transforming them into other possible materialities, playing with the flux between the abstract and the concrete worlds. In other words, this aspect of digital media leads us to translation issues, since it theoretically allows us to translate anything into anything.

The media art scene is also translating data and materialities all the time. And quite often we see artworks whose translations are meaningless, or not powerful enough to trigger conversations in the audience and contribute to the emergence of new knowledge. What kind of translation has been done? Why are we so obsessed with translating?

The Italian humanist Leonardo Bruni was probably one of the first modern thinkers to write a scientific treatise on the issue of ‘translation’, in the fifteenth century. Later, in the twentieth century, many other theoreticians discussed the topic, such as Croce and Rosenzweig, Benjamin (“The Task of the Translator”) and Steiner (“After Babel”). The interest of these thinkers in the topic is a sign that the importance of translation reaches beyond the language domain to encompass ontological and philosophical territories. Moreover, it is not by chance that the concept is also used in molecular biology and genetics, where translation names the process in which cellular ribosomes create proteins. Such a broad spectrum of uses leads us to understand translation as playing out in the middle space between one reality and another.

A significant artwork in relation to this definition is “Genesis” (1999) by the Brazilian, US-based artist Eduardo Kac. The key element of the work is a synthetic gene that Kac created by translating a sentence from the biblical book of Genesis into Morse code, and converting the Morse code into DNA base pairs. The “Genesis gene” was inserted into bacteria, and the audience on the internet could turn on an ultraviolet light in the exhibition space, causing real biological mutations in the living organism, which was in the end translated back into the Genesis sentence.
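The pipeline of the work (English text to Morse code to DNA bases) can be sketched as follows; the base assignments and the tiny alphabet here are hypothetical, chosen only to illustrate the substitution, not Kac’s actual table:

```python
# Tiny demo alphabet; Kac translated a full sentence from Genesis.
MORSE = {"L": ".-..", "E": ".", "T": "-"}
TO_DNA = {".": "C", "-": "T", " ": "G"}   # hypothetical base assignments

def text_to_dna(text: str) -> str:
    """Translate text to Morse, then map each Morse symbol to a DNA base."""
    morse = " ".join(MORSE[ch] for ch in text.upper())
    return "".join(TO_DNA[sym] for sym in morse)

print(text_to_dna("LET"))  # '.-.. . -' becomes 'CTCCGCGT'
```

Each stage is lossy in one direction or another (Morse has no case, the base alphabet has no punctuation), which is precisely where the ambiguity and noise discussed below enter the work.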

On the one hand, this artwork radically demonstrates what translations can be and what their implications are; on the other hand, it is very good at constructing metaphors for the most common ‘problems’ of translation outside the art world as well: ambiguity, noise, and subjectivity. As long as each ‘reality’ or ‘system’ has its own structure, it is impossible to find exact correspondences between the two universes. That also explains the difficulties of translating poetry.

Overcoming the obsession with precise analogy, the French philosopher Paul Ricoeur states that despite our excessive desire for translation, it is impossible to find parameters to identify a successful translation, one able to reveal the same issues in different universes while retaining their specific logics and structures.

Self-portrait of an absence

Using the eye as the closest reference for a photosensitive element, a practical part of the research is planned: a project called, for now, “Self-portrait of an absence”. The project consists of an eye-tracking device attached to the researcher’s blind eye, programmed to generate sound landscapes. It exercises a Flusserian dialogue, sharing an absence and translating, beyond light into sound, an intimate characteristic into a universal experience. It relates to Wolfgang Sützl’s inquiries, especially the observation that “Where everything must be exchangeable, the concept of loss has no meaning”. Beyond the commercial use of biodata, the project grasps the symbolic and aesthetic levels of the relationship between body and technology.

Combining the inputs of the conceptual framework and the practical experiment, the aim is to discover why we translate, and how this inherently human desire extends to materialities and contributes to overcoming the pre-established dichotomies between nature and culture, especially as reflected in the Media Art scene.



Couchot, Edmond. “The automatization of figurative techniques: towards the autonomous image”. In: Grau, Oliver. (Ed.) Media Art Histories. London, Cambridge: The MIT Press, 2007.

Flusser, Vilém. Universo das imagens técnicas: elogio da superficialidade. São Paulo: Annablume, 2008.

Fraden, Jacob. Handbook of Modern Sensors: Physics, Designs and Applications. New York, Berlin, Heidelberg: Springer-Verlag, 2004.

Gumbrecht, Hans Ulrich. Production of presence: What meaning cannot convey. Stanford, CA: Stanford University Press, 2004.

Kittler, Friedrich. Optical media: Berlin Lectures 1999. Translated by Anthony Enns. Cambridge, UK/Malden, USA: Polity Press, 2010. (first published in German as Optische Medien / Berliner Vorlesung 1999. Merve Verlag Berlin, 2002).

Bassler, Bonnie. ‘How Bacteria “Talk”’. TED Talks, Feb. 2009. Web. Accessed 28 Jun. 2015.

Ricoeur, Paul. On Translation. London, New York: Routledge Taylor and Francis Group, 2006. Print.

Weil, Florian.  Artistic Human Plant Interfaces. Masterarbeit zur Erlangung des akademischen Grades Master of Arts. Universität für künstlerische und industrielle Gestaltung – Kunstuniversität Linz. Institut für Medien Interface Cultures. Linz, Österreich, 2014.

Long, Chris, and Mike Groth. ‘Bibliography of Early Optical (Audio) Communications’. Bluehaze. June 2005. Web. Accessed 27 Sept. 2015.

Aideen Doran – Power Users

Cybersyn & Arpanet

Cybersyn, designed for Salvador Allende’s socialist Chilean government by the British cybernetician Stafford Beer in the early 1970s, has been called “a socialist Internet, decades ahead of its time” (Fisher 144). Cybersyn comprised a network of telex machines and communications devices that would allow workers unprecedented control over their own lives and work. The central module collected production data from factories across the nation and responded to it in real time, rather than dictating economic policy from central government. Eventually it was hoped that it could be developed in a way that would allow citizens to communicate their feelings about the workings of the state directly to the government, an enterprise called Project Cyberfolk. It was a wholly unrealised ambition: Allende’s government was overthrown in a military coup led by General Pinochet in 1973, and Cybersyn never had a chance to flourish. The images that remain from the Cybersyn project seem like a dispatch from the future, a glimpse of a world that could have been ours: an ‘alternative Internet’, another kind of networked commons, built to ensure abundance for all.

Cybersyn was being developed contemporaneously with military research in the United States towards building an interconnected system of computers. The mandate was to create a distributed network of computers that could resist any single point of failure, preserving information and remaining operational in the event of nuclear catastrophe or hostile attack. Called Arpanet, it was an initiative of the Defense Advanced Research Projects Agency, or DARPA, an agency of the United States Department of Defense. DARPA was one of the driving forces behind the development of the Internet, and more recently has been at the vanguard of research into virtual reality, battlefield robotics, and interstellar travel.

The End of the Internet

The Internet as we know it today “arrived from two directions: one top-down and the other bottom-up,” (Lanier 27) the cumulative result of military and governmental research alongside the efforts of independent computer scientists, programmers and entrepreneurs. To many, the de-centralised nature and universality of the Internet, both expressions of the “universal and non-discriminatory” (Semeniuk 47) principles of its design, seemed to promise a wider decentralisation of power and the creation of a new global commons, as the collective knowledge of the world became universally available, and creative and intellectual collaboration over the Internet became possible.

The revolutionary potential of the Internet to usher in an era of the democratic and free exchange of all the world’s knowledge has been compromised by a counter-revolution of enclosure, surveillance, and a concentration of corporate and governmental powers. At the present time, the experience of living in a world where the Internet is a ubiquitous phenomenon contrasts starkly with the utopian ambitions of the early network pioneers. Both economic and political power are now greatly augmented by dominance over information. Singular, monolithic corporate entities have come to dominate popular niches of Internet activity such as social networking. The most iconic corporations of the new century, e.g. Google and Facebook, have made access to and control over huge swathes of data enormously lucrative. We live with what the artist and writer Hito Steyerl calls an ‘Internet Condition,’ in which the conditions of surveillance and corporate monopolisation, normalised on the Internet, spill over into the ‘real’ world. The Internet, according to Steyerl, “is undead and it’s everywhere.” (Steyerl, “Too Much World: Is the Internet Dead?”) Every action we make online can be tracked, traced and stored, our locations monitored, our lives surveilled in the interests of both capitalist accumulation (e.g. online tracking) and state security (e.g. mass data collection by GCHQ and the NSA). We live in a kind of digital panopticon, a high-tech version of Jeremy Bentham’s ideal panopticon prison, where prisoners were aware of being watched at all times without being able to see or identify the watcher. The Internet has become a tool of social control, where it once could have been one of emancipation and commonality.

The Disappearing User

In addition to this concentration of powers, there is another troubling aspect to digital network technologies as they exist now, and that is the complete disappearance of the interface as we know it. Internet artist and theoretician Olia Lialina has written about this issue, suggesting that the boundaries between technology and us are becoming increasingly invisible:

Computers are getting invisible. They shrink and hide. They lurk under the skin and dissolve in the cloud…with the disappearance of the computer, something else is silently becoming invisible as well — the User. Users are disappearing as both phenomena and term, and this development is either unnoticed or accepted as progress — an evolutionary step.
(Lialina, “Turing Complete User”)

Computing processes are completely ubiquitous yet increasingly opaque. Computers begin to disappear as discrete objects, distinct from other consumer objects in the world, and are absorbed into all other objects, from watches to toasters, in the Internet of Things. An interface is no longer an interface, but an experience. Many of the leading contemporary technology companies actively pursue the development of software interfaces that are both intuitive and ‘invisible’. When the interface disappears, the user too becomes invisible, yet the term ‘user’ is a useful reminder that the computer is a programmed system designed by another. It is not neutral. To fail to recognise that a person is the user of a system jeopardises that person’s right to question the system, and to critique it.

An interface designed to be invisible renders a device almost unrecognisable as technology: it instead becomes naturalised as a benevolent, non-human factotum, a familiar spirit. We touch immaterial images and symbols on the glass screens of our smartphones, while fitness trackers and smart watches track our heartbeats and metabolic rates and weight loss apps send daily diet reminders and motivating messages. The intimate details of our lives are shared freely on social media, read by scopophilic algorithms and used to more efficiently market products and services. As Donna Haraway writes, “our machines are disturbingly lively, and we ourselves frighteningly inert.” (152) The experience of living with networked information technologies is interpenetrated on multiple levels by elements of embodied and affective experience, yet the ways in which we engage with technology are more often framed as a disembodied experience, one that is structured by language and overwritten by Heidegger’s ‘rule of instrumentalism.’

In The Question Concerning Technology (Heidegger, 1977), Heidegger describes how in a technocratic society all things “live under the rule of instrumentalism” (Bolt 71) in which the earth is a resource to be used to do or to produce, a resource which can be mastered through technological means. Our engagement with technology is limited to what technology can do for us rather than an engagement with the fundamental essence of what it is. However, technology is more than mere means: it is also a “challenging revealing,” (Heidegger 16) a system of thought which orders the world in a constant cycle of unlocking and transforming the energy in nature and then storing and distributing that energy through production. This revealing is a system of thought that delegitimises and drives out other ways of thinking about technology outside of its particular system of enframing. (Bolt 75) The instrumentalising effect of a technological enframing begins to colour all other relationships according to this system of thought, reducing the world and humanity to a “standing-reserve” (Heidegger 17) of energy and all beings to resources awaiting use (an effect that is vividly expressed in the managerial language of ‘human resource management’). In opposition to this, Heidegger sets out poiēsis, (10) a mode of bringing-forth presence that involves “openness before what is” (Bolt 80) rather than ordering and mastery. Heidegger associates art with poiētic revealing, but also with techne, an ambivalent term between poiēsis and technological enframing that is the etymological root of the word ‘technology.’ It is both its likeness to and its difference from the technological that gives art a unique power to unsettle an instrumental view of the world, art’s ‘accursed share’ (Bataille) of non-recuperable excess.

Radical Boredom

A refusal to engage with, to share in, a digital network culture that demands a permanent state of receptivity, can be a powerful statement personally and politically. Boredom, melancholy and negativity can be refigured as productive affective states, akin to art in that they, too, are possessed of an ‘accursed share.’ We are surrounded by anti-boredom devices, and we can be bored as well as overwhelmed by information overload – but it’s a mediated form of boredom that allows no room for thought or reflection. The sociologist and critic Siegfried Kracauer went even further, suggesting that only ‘extraordinary, radical boredom’ (Kracauer, quoted in Morozov, ‘Only Disconnect’), as opposed to the ‘radical distraction’ of a real-time social media news feed, could reunite us with our bodies, our heads and the lived materiality of the world. Only in moments of silence and solitude could one flirt with radical and unscripted ideas. Boredom was rethought as political. In Kracauer’s writing, boredom allows us to experience the world at different temporalities, and to reimagine not only what the present can look like, but what the future could look like too. To Kracauer, boredom is not only our “modest right” (303) to do no more than be with ourselves, but it is also “the necessary precondition for the possibility of generating the authentically new.” (301-2) If an individual is never bored, then they are also never really present. So, if to be bored is to be present, then ‘radical boredom’ brings us back to Heidegger and his concept of Dasein, ‘being in the world,’ wherein human existence is grounded in the body and in the specific place in which we live. Being in the world emphasises that we are more than just an incorporeal self that is distinct from the “confining prison house” of the body, (Cottingham 252) that consciousness is more than a string of information that can flow seamlessly between the synapses of the brain and the silicon chips of a computer.
An explanation of consciousness as an informational pattern, equally replicable in organic or non-organic materials, falls short of accounting for ‘Dasein.’

The culture of distraction demands not only a permanent state of receptiveness, but also a permanent ‘now,’ a temporal state radically different from the ‘being present’ of Dasein. The temporality of the network world is one of urgency, of being ‘just in time’ rather than ‘in the moment’. Zygmunt Bauman describes this as “the insubstantial, instantaneous time of the software world,” (118) an inconsequential time, immediately evanescing from experience into “exhaustion and fading of interest.” (ibid) Exhaustion is the inevitable result of the over-participation and over-sharing demanded by the network world, yet withdrawal and recuperation are not necessarily solitary and isolated acts. As Jan Verwoert writes, “the exhibition of exhaustion produces public bodies.” (Verwoert 107)

Works cited:

Bauman, Zygmunt. Liquid Modernity. Cambridge: Polity Press, 2000. Print.
Bataille, Georges. The Accursed Share: An Essay on General Economy. New York: Zone Books, 1991. Print.
Bolt, Barbara, Dr. Heidegger Reframed: Interpreting Key Thinkers for the Arts. London: I.B. Tauris, 2011. Print.
Cottingham, John. “Cartesian Dualism: Theology, Metaphysics, and Science.” The Cambridge Companion to Descartes. Ed. Cottingham, John. Cambridge: Cambridge University Press, 1992. 236-57. Print.
Fisher, Mark. “Picture Piece: Cybersyn, Chile 1971-73.” Frieze March 2014: 144. Print.
Haraway, Donna. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991. 127-48. Print.
Heidegger, Martin. The Question Concerning Technology, and Other Essays. New York, London: Harper and Row, 1977. Print.
Kracauer, Siegfried. “Boredom.” The Everyday Life Reader. Ed. Highmore, Ben. London: Routledge, 2002. 301-04. Print.
Lanier, Jaron. “The Suburb That Changed the World.” New Statesman. 18 August 2011: 27-30. Print.
Lialina, Olia. “Turing Complete User.” Contemporary Home Computing. 12 December 2012. Web. 26 December 2014.
Morozov, Evgeny. “Only Disconnect.” The New Yorker. The New Yorker Magazine. 28 October 2013. Web. 18 May 2015.
Semeniuk, Ivan. “Six Clicks of Separation.” New Scientist. March 2006: 46-48. Print.
Steyerl, Hito. “Too Much World: Is the Internet Dead?” E-Flux. E-Flux. 1 November 2013. Web. 01 December 2013.
Verwoert, Jan. “Exhaustion & Exuberance: Ways to Defy the Pressure to Perform.” Dot Dot Dot 15 (2008): 89-112. Print.

Elisavet Christou – The New Condition of Art Exhibition


Art exhibition has been through major changes since the emergence of the World Wide Web. Galleries and museums have been active players in forming new modes of cultural consumption and participation online, while technology companies like Google invest in online platforms for digital exhibition (Google Art Project). From virtual exhibitions and online archives to multimedia applications like games and 3D, cultural organizations are learning to utilize the best digital technology has to offer in order to expand their reach and become more competitive. Social media at the beginning of the 2000s allowed for new exhibition practices that were no longer associated with art exhibition as an event but rather as a condition. Free and open posting, sharing, curating and publishing have become a reality through these platforms. Entire social networks like Pinterest, Instagram and Flickr are open exhibition spaces available for mass consumption anywhere and anytime. Physical or digital artworks (photographs, videos etc.) are designed to fit this online exhibition, while internet and post-internet art is designed online and for online exhibition, often for both virtual and physical spaces (Vierkant, Image Objects).

The digitization of art exhibition has resulted in a massive shift in how we evaluate art, since much of today’s art success depends on reach and popularity (Arora and Vermeylen 206). Online, every artwork has an audience to find. Follower culture makes audience engagement easier and interaction between artists and audiences instant. An artist’s follower can have direct access to the artist’s work and progress and either approve or disapprove by altering her social network relationship (e.g. friend/unfriend, follow/unfollow, participate by commenting and sharing, or not participate). Artists can have instant feedback on their work and themselves by using network and website analytics, effectively treating themselves and their work as brands. The artist becomes a public persona (the famous artist Ai Weiwei, for example, has 288K Twitter and 146K Instagram followers); selfies, social updates and commentary are a few of the tools used by artists in order to engage with and expand their audience, popularity and status. Exposure means popularity, bigger audiences and more possibilities of financially surviving the hard reality of the art world. All of the above results in confusion as to what art exhibition entails today, who is involved, and where and when it takes place.

I attempt to examine these phenomena, behaviors and interactions from both a technical and a theoretical understanding. Art research often fails to examine the technical and systemic characteristics of the medium through which art is mediated and the apparatus through which art is systematized. The internet is not simply “open” or “closed” but above all a form that is modulated: information does flow, but it does so in a highly regulated manner (Thacker xix). As Bowker et al. argue, these embedded technological frames are often difficult to perceive in social systems such as the art world; hence genuinely comprehending their impact involves the unfolding of ‘the political, ethical, and social choices that have been made throughout its development’ (Bowker et al. 99; Arora and Vermeylen 6). This falls under a wider discussion of how the social and the political are not external to technology, and of how technological developments (research, design, use, distribution, marketing, naturalization, consumption) affect and/or determine all aspects of social life (Thacker xi).


Traditionally, art exhibition is understood as an event. It takes time and resources; it requires preparation by artists, venue and curators; it has a lifespan of a set time; and it has an afterlife through documentation, evaluation and critique, reflection, impact, publication and archiving. Exhibitions can be reproduced or travel, yet their spatiotemporal lives are limited by physical and logistical laws. Such art exhibitions constitute some of the major events in the art world (the Biennale, Documenta etc.) and produce great revenue for cultural organizations and institutions. The lifespan of art exhibitions, though, has been massively affected by Web 2.0. Escaping the limitations of the physical world, art can be exhibited and accessed anywhere, anytime and by anyone. The whole wide web is available for exhibition; it is an open venue for both art and artists, serving some of the traditional purposes of art exhibition such as communication, audience reach, cultural exchange and discourse, popularity and of course sales. Time, space, accessibility and audience participation are some of the major changes in how art is exhibited online, but in order to understand what makes art exhibition today a constant condition, I will briefly discuss here two specific effects of online interactions: the mixed reality effect and the bandwagon effect.


As technological innovations continue to extend our notion of the visible experience, we now recognize ourselves as both the observer and the observed on a constant basis, and we often understand this as a requirement for belonging. At the same time, our notion of what is a visible experience has massively changed, as visibility now belongs to both physical and virtual realms. These experiences take place in very distinct spaces online that are both controlled public spaces and monitored private spaces – neither public nor private, neither here nor there; they are heterotopic, as Foucault describes them, or interstitial spaces, as Paul Virilio describes them. These spaces are what we call non-spaces. Originally, non-spaces referred to spaces one travels through rather than inhabits: airports, hotel lobbies, shopping malls etc. Today, non-space can describe the public/private, physical/virtual, instant/past-and-future spaces of online interactions. Heterotopic, interstitial and non-space theories fall under the more generic concept of the mixed-reality effect.

Mixed-reality ideas and theories are the result of a greater confusion in postmodern and contemporary years around time, space, the public sphere, individuality and community that emerged through the online technologies of Web 1.0 and Web 2.0. Physical and digital events have merged into a cluster, while artists, curators and organizations have lost control over the lifespan of exhibitions through the uncontrollable reproduction of posts, photos and information that social media sharing allows. Originally, mixed reality was used to describe the merging of real and virtual worlds to produce new environments where physical and digital objects co-exist (Ohta and Tamura 6). Today, mixed reality theory is used to explore various phenomena of co-existing in both physical and virtual spaces. In Mark Hansen’s Bodies in Code (139), all reality is mixed reality: instead of thinking of our digital identity and our real-space/physical identity as two separate things, today we understand reality as a fluid space of both virtual and physical; both states are equally real and exist as one. This mixed reality condition challenges the spatiotemporal constraints of experiencing art exhibition – amongst other things – as an event. We could argue that as reality is a condition, a state of things, today’s mixed reality is also a condition, one which escapes the boundaries of the physical world and allows for new understandings of our experiences. These new conditions are the result of the specificity of the digital computer, the internet and the web as a medium, a medium with its own protocols and networks that needs to be examined as such.


Networked society’s online culture, with its compulsory characteristics of exhibiting one’s work, actions and value by constantly sharing and participating in an effort to stay relevant, turns these activities into formal measurements of effectiveness. If everyone is part of the networked society and you are not, how can you form connections, be visible and get noticed? If you can’t form connections, be visible and get noticed, how can you effect change? It is a matter of scale. The medium’s ability to reach massive audiences, together with the systemic characteristic of network effectiveness based on popularity, creates a network effect and a bandwagon effect. For example, the more people already use a social network, the higher the chance that more people will start using it as well (the bandwagon effect). This results in social networks becoming extremely valuable to individuals and communities, as the more people use them the more valuable they become to each user (the network effect). Thus, chances of being effective are higher within a medium used by everyone.

Even if someone is targeting a niche audience, audience members are more likely to participate and engage with one’s work in a familiar environment (like Facebook), where the platform, its look, feel and functions require no extra effort of environmental adjustment. Consequently, cultural interactions in such environments create a positive first impression, since this is where people would expect to come across something relevant to their cultural preferences (through targeted advertisement and curation algorithms). Finally, this is where people can publicly exhibit their action and participation in someone’s work by joining events or by commenting and liking (actions that will later appear on their walls), fulfilling in this way their part of exhibiting activity and participation. Other social networks like Instagram, Flickr, YouTube etc. allow for similar behaviors within their systems, structures and environments. This network effect makes acting outside these platforms a very hard choice.


Remaining active, sharing and participating become measures of value for sociality, popularity, work and impact, to the level of excess. At the same time, acts of artistic exhibition online add to this constant condition of making and receiving information for cultural consumption within mixed reality conditions. Social and behavioral norms, along with systemic characteristics of the digital medium like protocols, software, applications and its commercial character, become the gatekeepers of how we evaluate and interact with art today.

As more and more artists and cultural organizations become active participants in online environments, and as more and more novel art spaces emerge online, we need to examine the conditions under which the art world is changing, including the ways of exhibiting art. The art world online is being transformed into a network within a network. Further research is necessary in order to examine the impact and significance of new technological developments on the art world and the massive changes in its hierarchies, knowledge production and valuation systems, and of course its exhibition practices and conditions.

Above all, our very short experience of life with the internet reveals issues of digital reproduction, digital mediation, digital surveillance and a generalized application of systemization. Our communication practices, our language, our image, our creativity and our culture, including art, are being mediated and systematized to fit online. This can liberate and/or restrain us at the same time, but it certainly won’t leave us the same.


Arora, Payal, and Filip Vermeylen. “The End of the Art Connoisseur? Experts and Knowledge Production in the Visual Arts in the Digital Age.” Information, Communication & Society (2013): 194-214. Print.

Bowker, Geoffrey C., Karen Baker, Florence Millerand, and David Ribes. “Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment.” International Handbook of Internet Research. Springer Netherlands, 2010. 97-117. Print.

Foucault, Michel. “Of Other Spaces, Heterotopias.” Architecture/Mouvement/Continuité (1984). Print.

Google Art Project. 2011 – ongoing. Web.

Hansen, Mark B. N. Bodies in Code. Routledge, 2006. Print.


Ohta, Yuichi, and Hideyuki Tamura. Mixed Reality: Merging Real and Virtual Worlds. Springer, 1999. Print.

Thacker, Eugene. “Protocol Is as Protocol Does.” Foreword to Galloway, Alexander R. Protocol. Cambridge, MA: MIT Press, 2004. Print.

Vierkant, Artie. Image Objects. 2011 – ongoing.

Virilio, Paul. Negative Horizon: An Essay in Dromoscopy. Continuum, 2005. Print.
