Social and Cultural Anthropology
STS, artificial intelligence, epistemology, machine learning, ethnography
Machine intelligence, or the use of complex computational and statistical practices to make predictions and classifications based on data representations of phenomena, has been applied to domains as disparate as criminal justice, commerce, medicine, media and the arts, and mechanical engineering. How has machine intelligence become able to glide so freely across, and to make such waves within, these domains? In this dissertation, I take up that question by ethnographically engaging with how the authority of machine learning has been constructed such that it can influence so many domains, and I investigate the consequences of its being able to do so. By examining the workplace practices of the applied machine learning researchers who produce machine intelligence, those they work with, and the artifacts they produce—algorithmic systems, public demonstrations of machine intelligence, academic research articles, and conference presentations—a wider set of implications about the legacies of positivism and objectivity, the construction of expertise, and the exercise of power takes shape.
The dissertation begins by arguing that machine intelligence proceeds from a “naïve” form of empiricism with ties to positivist intellectual traditions of the 17th and 18th centuries. This naïve empiricism eschews other forms of knowledge and theory formation so that applied machine learning researchers can enact data performances that bring objects of analysis into existence as entities capable of being subjected to machine intelligence. By “data performances,” I mean generative enactments that bring into existence that which machine intelligence purports to analyze or describe. The enactment of data performances is analyzed as an agential cut into a representational field, one that produces both stable claims about the world and the interpretive frame in which those claims can hold true. The dissertation also examines how machine intelligence depends upon a range of accommodations from other institutions and organizations, from data collection and processing to organizational commitments that support the work of applied machine learning researchers. Throughout the dissertation, methods are developed for analyzing the expert practices by which machine learning researchers transform situated, positional knowledge into machine intelligence and re-present it as objective knowledge. These methods trace the chains of dependencies among data collection, processing, and analysis to reveal where and how hidden assumptions about the phenomena being analyzed are advanced.
The second half of the dissertation focuses on how the authority of machine intelligence to control or ensure compliance is developed. This authority rests not only on applications of machine intelligence that constrain the freedom of others to act in accordance with their own desires, but also on the ways in which attempts to critique or curtail the authority of machine intelligence are assimilated into the logics and practices of machine intelligence itself. Attempts to limit the authority of machine intelligence, particularly AI ethics and algorithmic fairness, are explored ethnographically to conclude that even in recognizing and attempting to take responsibility for the harms it risks producing in the world, machine intelligence nevertheless remains resistant to forms of accountability that are external to its own practices. This ensures that machine intelligence remains a deeply conservative project: contrary to its presentation as futuristic or transformative, it conserves the power of those who already wield it.
Moss, Emanuel D., "The Objective Function: Science and Society in the Age of Machine Intelligence" (2021). CUNY Academic Works.
This work is embargoed and will be available for download on Saturday, September 30, 2023