Event/s
Over the past decade, researchers have been investigating new technologies for categorising people based on physical attributes alone. Unlike profiling built on the behavioural data people generate as they interact with informational environments, these technologies record and measure data from the physical world (i.e. a signal) and use it to make a decision about the ‘world state’ – in this case, a judgement about a person.
Automated Personality Analysis and Automated Personality Recognition, for instance, are growing sub-disciplines of computer vision, computer listening, and machine learning. This family of techniques has been used to generate personality profiles and assessments of sexuality, political position and even criminality using facial morphologies and speech expressions. These profiling systems do not attempt to comprehend the content of speech or to understand actions or sentiments, but rather to read personal typologies and build classifiers that can determine personal characteristics.
While the knowledge claims of these profiling techniques are often tentative, they increasingly deploy a variant of ‘big data epistemology’ that suggests there is more information in a human face or in spoken sound than is accessible or comprehensible to humans. This paper explores the bases of those claims and the systems of measurement that are deployed in computer vision and listening. It asks if there is something new in these claims beyond ‘big data epistemology’, and attempts to understand what it means to combine computational empiricism, statistical analyses, and probabilistic representations to produce knowledge about people.
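To make the pipeline concrete, the sketch below is a minimal, hypothetical illustration of the kind of classifier these systems build: it maps synthetic ‘facial morphology’ measurements to an invented binary ‘trait’. Nothing here comes from any actual system; the feature names, labels and data are fabricated, and real systems use far larger feature sets and deep models. What the sketch shows is the epistemic move the paper examines – a statistical mapping from physical measurements to a judgement about a person.

```python
# Hypothetical sketch only: invented features, invented label, synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "facial morphology" measurements (made-up feature names):
# width-to-height ratio, jaw angle, inter-ocular distance.
X = rng.normal(size=(500, 3))

# Synthetic binary "trait" label, arbitrarily correlated with the features.
# The classifier learns whatever correlation the data contains, regardless
# of whether the label tracks anything real about a person.
y = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```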
Tue, 24. Jul – Sun, 28. Oct 2018
EAVESDROPPING used to be a crime. According to William Blackstone, in his Commentaries on the Laws of England (1769): ‘eavesdroppers, or such as listen under walls or windows, or the eaves of a house, to hearken after discourse, and thereupon to frame slanderous and mischievous tales, are a common nuisance and presentable at the court-leet.’ Two hundred and fifty years later, eavesdropping isn’t just legal, it’s ubiquitous. What was once a minor public order offence has become one of the most important politico-legal problems of our time, as the Snowden revelations made abundantly clear. Eavesdropping: the ever-increasing access to, capture and control of our sonic worlds by state and corporate interests.
But eavesdropping isn’t just about big data, surveillance and security. We all overhear. Listening itself is excessive. We cannot help but hear too much, more than we mean to. Eavesdropping, in this sense, is the condition – or the risk – of sociality per se, so that the question is not whether to eavesdrop, but the ethics and politics of doing so. This project therefore pursues an expanded definition of eavesdropping: one that includes contemporary mechanisms for listening-in but also activist practices of listening back, and that is concerned with malicious listenings but also with the responsibilities of the earwitness.
This project directs our attention towards specific technologies (audio-tape, radio-telescope, networked intelligence) and politics (surveillance, settler colonialism, detention). Some contributions address the personal and intimate; others are more distant or forensic. Their scale ranges from the microscopic to the cosmic, from the split-second to the interminable. What all the artists and thinkers involved have in common, however, is a concern not just for sound or listening, but for what it might mean for someone or something to be listened to.
Movement 1: Overhear (July 24–August 5)
wiretapping, the sonic episteme, sonic agency, excessive listening, forensic listening
Movement 2: Silicon ear (August 9–11)
big data, automation, algorithmic listening, panacousticism
Movement 3: Earwitness (August 20–31)
the sonic colour line, sonic warfare, listening to history, the hearing, justice as improvisation
Movement 4: Listen Back (October 19–28)