About 'Automating Vision'
What does it mean for a machine to 'see'? How does a machine learn to see? How does machine vision affect us individually, and how does it reshape society? These are not easy questions to answer. In Automating Vision we explore some of the key sites of innovation in machine vision - facial recognition, mobile augmented reality and mapping, drone vision, driverless cars, deepfakes. We approach these domains of innovation through the very old concept of 'camera consciousness' - an idea we use to consider both the emerging power of machine vision cameras to see, and the effect they have on the people and environments they become an often hidden part of. Breaking from the tradition of much current critical surveillance and technology studies, the book aims both to explain the technologies and contexts that have given rise to 'smart cameras', and to explore how we might live a safe and equitable future as we learn to see with, and negotiate the interventions of, seeing machines. To this end, camera consciousness is both a descriptive concept and a technique for improving digital data and visual literacies in the age of automation.
"The authors provide an invaluable guidebook to an emerging and at times uncanny technological landscape whose unblinking, opaque, and distributed gaze stares back at us from a growing array of devices that promise to sort, recognize, and evaluate us. Automating Vision is a crucial contribution to the new forms of visual literacy we must cultivate if we are to reap the benefits of the burgeoning field of machine vision while evading its pitfalls. It is an elegantly written, theoretically sophisticated book that is destined to become a touchstone work for our times." - Mark Andrejevic, Monash University. Author of Automated Media.
"Snapshots are automated, vision becomes machinic, cars sense more than the driver, and seeing is more like data analysis; it’s in this field of transformations of media that Automating Vision offers an excellent analysis of the social aspects of artificial intelligence. Warmly recommended across the multiple contemporary disciplines that have to make sense of this situation but also to develop a fresh approach to media literacy." - Jussi Parikka, University of Southampton and FAMU, Prague
"This timely volume offers a rich discussion of the social impact of smart cameras across a range of domains, ranging from surveillance and facial recognition to drones and self-driving cars. The central term "camera consciousness" grounds the productive analysis of the social interactions around and with new visual technologies. This book will be a key reference for scholars interested in the social aspects of algorithmic visual technologies." - Jill Walker Rettberg, Author, Professor and Leader of the Digital Culture Research Group at the University of Bergen, Norway
About camera consciousness
About Automating Vision
Here we explain the idea of camera consciousness: its components, and why it matters.
Video resources for Automating Vision
Here is a whole stack of video resources covering different aspects of automated vision, matched roughly to the book chapters and case studies. These could be useful for putting together a lecture, or as starting points and overviews.
For ease of access, we've put together a curated reading list on the social aspects of machine vision, visuality, and camera consciousness - some core sources and inspirational pieces of work for us. It's just a starting point for a large body of interdisciplinary published work addressing the topics covered in our book. Check the reference list here.
PowerPoint Slides
Please feel free to use and adapt these slides, with attribution! Please cite: Anthony McCosker and Rowan Wilken (2020) Automating Vision: The Social Impact of the New Camera Consciousness.
Activity: Becoming camera conscious
When researching this book, one of the standout features we found across technology developments and applications in machine vision and digital image processing was the uneven distribution of hype and fear. Technologies position and empower people, groups and social strata unequally.
Objective: Use the concept of camera consciousness to evaluate the inclusive and exclusive capacities of camera technologies and automated vision systems, and their impact on social relations and behaviours.
Consider this text from Nancy Fraser (2000, Rethinking Recognition, p. 113) in relation to the proliferation of facial recognition technology:
To view recognition as a matter of status means examining institutionalized patterns of cultural value for their effects on the relative standing of social actors. If and when such patterns constitute actors as peers, capable of participating on a par with one another in social life, then we can speak of reciprocal recognition and status equality. When, in contrast, they constitute some actors as inferior, excluded, wholly other, or simply invisible—in other words, as less than full partners in social interaction—then we can speak of misrecognition and status subordination.
Task: Now, consider all of the cameras in your life - your phone, laptop, those used for video calls or social media sharing, around the home, in the streets, shops, train stations and sports stadiums, on drones, in satellites. Some of those cameras will have automated image processing capabilities - machine vision. Find out more about them.
What are their 'intentions'? e.g., to prevent crime, ensure safety, guide people or direct machines, inspect, create an interface for communication, produce information, monitor... How many actively make decisions of some kind? What kind of information informs that decision making?
What are all the ways smart cameras might 'see' or position people differently depending on who that person is? In other words, how does inequality come to be built into machine vision systems?
How can smart cameras be used to improve our social conditions - for instance, social connection, recognition of social need and inequality, safety in the built, urban and natural environment, and health and medicine?
Are there ways to improve the 'reciprocity' of machine vision - so that people can see and understand the components and processes that go into the visual data capture, analysis and decisions made by seeing machines?
In 2020, during the US and global #BlackLivesMatter protests, IBM, Amazon and Microsoft decided to stop selling facial recognition technologies to police departments, stating they would do so until there is federal regulation of their use. What are some of the key safeguards that could be introduced to ensure these technologies are not misused? Or do you think that misuse is inherent, built into their design? How can we design for fair and trustworthy AI?
Please get in touch!!
Please, if you do use any of our resources, or engage with the book for teaching or even just out of personal interest, let us know. It's ALWAYS helpful to know where this work ends up, and how we might evolve and improve it.