

Sociable Media: Coding CODA, Google Meets, and Representations (Final Project)


The link to the Google Document is here, and it includes the advertisement as well as footnotes for news I reference. LINK: https://docs.google.com/document/d/13REpUzKXYEtbeorUTWry4vmBiiVVR_t77Spy6GqCud0/edit?usp=sharing 

I received this advertisement on a YouTube video a day or two after our class discussion on cochlear implants and Ruha Benjamin’s article “Interrogating Equity: A Disability Justice Approach to Genetic Engineering”. Having it marketed to me showed two things: Google and Amazon are listening to me even while I’m in class, and their advertising raises questions and commercial awareness around disability and technology. The former is a topic I’ve been interested in while at Haverford (the text that introduced me to surveillance studies was Simone Browne’s Dark Matters: On the Surveillance of Blackness), and the latter is something we have discussed thoroughly during our class time together. Google’s “A CODA Story” is the first site of this critical analysis through the lens of disability. (The link is pasted here for those who cannot access the one above: https://youtu.be/pXc_w49fsmI)

Representation does matter, so it is important to analyze and critique representations posed by dominant systems of power, like Google in the information analytics field. We are consumers of these corporations’ products, and analyzing the way they sell those products can help reform the ethical relationship between the average user of Google and Google itself. Google and companies like it have an image to maintain and a product to sell, and that is important to keep in mind when viewing media like “A CODA Story”. One of Google’s underlying motives in releasing this video is to get people to use its platform; there is more to the video, however, than gaining users.

Google released this advertisement to market an ideology most aptly described by what Ruha Benjamin calls the “informed fallacy”, which she outlines as part of a larger tendency to conflate technology with social progress. Benjamin’s context is gene editing and scientific ableism, but the same tendencies are evident in the digital domain of the internet, especially data analytics. Data analytics comprises the systems that companies use to market their products, but it is not restricted to marketing.

Data analytics also refers to the data gathered by companies like Google, whose data set is effectively anything ever searched on Google. The user’s participation in the search engine provides the data the engine needs to present the answer to an inquiry ‘more accurately’. There are many ethical concerns with this, which Benjamin outlines:

The third way we routinely constrict our ethical imagination is an informed fallacy when we presume that standard approaches to informed consent are sufficient in arenas that are characterized by so much scientific and medical uncertainty. The best that researchers can really promise is a partially informed consent—so that we urgently need to re-think and re-invest in technologies of trust and reciprocity that address the many uncertainties involved. (“Interrogating Equity”, 53)

Ruha Benjamin’s informed fallacy appears in Google’s advertising, and even more so in its 2021 Keynote address. The Keynote displayed Google’s new features for Google Drive and Google Meet, which can be summarized as an integration of the two platforms that lets you edit a Google Document within a video meeting. The same capability is already achieved by screen sharing on Zoom and other platforms; the integration, however, may signal a cultural shift toward Google Meet. (I would not be surprised if school systems stratify across these platforms, or switch wholly to Google Meet.) Benjamin’s quote is useful here, then, as it names the uncertainty contained in Google’s products and, ultimately, in the CODA advertisement.

Google’s Keynote does not directly address any ethical concerns with technology other than safety; in fact, it assumes the benevolence of the gathered data as its framework for progress rather than presenting that data. About thirty-nine minutes in, the presentation shifts to differential privacy, the process by which algorithms aggregate data without revealing personal details about the individuals in the data set. The issue with this is one of categorization, or what Benjamin calls the “fixed fallacy”, which “is the tendency to assume that the way in which scientific harms get enacted in the present will look the same way they did in the past, rather than mutating with the times” (54). Benjamin’s example is “state-sponsored eugenics, for instance, overlooking the way that market logic puts the responsibility of ‘racial fitness’ in the hands of the consumer,” which, in terms of the consumer dynamic, can be overlaid onto the situation with Google (54). The consumer is the data that Google uses, first and foremost. We are the individuals who are aggregated and then categorized as data through several algorithms. Disability, then, serves as a critical framework for analyzing how differential privacy invokes both the fixed and informed fallacies.
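To make the idea of differential privacy concrete, here is a minimal sketch of my own (an illustration, not Google’s actual implementation) of the Laplace mechanism, the textbook way of releasing an aggregate count with enough random noise that no single person’s presence in the data can be confidently inferred. The function name, the toy search records, and the epsilon value are all hypothetical.

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.5):
    """Return a noisy count of records matching `predicate`.

    The Laplace mechanism adds noise scaled to the query's sensitivity
    (1 for a counting query) divided by the privacy budget `epsilon`:
    a smaller epsilon means more noise and stronger privacy, at the
    cost of a less precise aggregate.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical search logs: the released statistic stays useful in aggregate,
# while the noise obscures whether any one individual's search is included.
searches = [{"user": i, "query": "child of deaf adults"} for i in range(5000)]
print(dp_count(searches, lambda r: "deaf" in r["query"], epsilon=0.5))
```

Note that the data is still gathered, categorized, and monetized; the mechanism only blurs what can be read back out of it, which is exactly where the fixed and informed fallacies re-enter.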

The Keynote makes sure to represent a diverse cast of voices in its images, but fails to consider its past pitfalls in the present and future conversation. Google’s data analytics draws on nearly two decades of searches (Google’s advertising campaign literally promotes its associative word algorithms as the forefront of a twenty-two-year-long project), searches that are directly informed by market trends and consumer logics; by curating that archive, Google has created an indisputable power dynamic between consumers, advertisers, and product providers. That is why Benjamin’s informed fallacy is crucial in unpacking that power dynamic.

Google consistently offers “partially-informed consent” in its search engines and image repositories; the inquiry-and-answer method necessitates a partially informed consent even for Google searches whose answer you already know. Google is the standard for internet inquiry; the company has become a verb as ordinary as walking, running, breathing. To search something on Google is to seek an answer, and that answer changes depending on the phrasing of the question. That dynamic, embedded in such a ubiquitous phrase, should not go uncriticized, and that is what the works cataloged in “Interrogating Equity” set out to do. Benjamin’s call for a reciprocal relationship between user and technology calls this very practice into question, as the practice with Google ultimately relies on the user not knowing the full situation of engagement. To look at this in action, we will delve into the CODA advertisement itself.

The video shifts between shots of the Google interface (the search engine, the Google Meet interface during a call, the closed-captioning button found in the margins of most videos and video platforms) and pictures of Tony, a CODA. The Google search at the beginning of the video reads “child of deaf adults,” establishing the meaning of the acronym CODA. The narration is presumably from Tony, who recounts his experience growing up with deaf parents and having “one foot in either world” of deafness and hearing (00:00:39). He describes his experience learning sign language and translating for his parents, further emphasizing the two worlds. The narration sets up a dichotomy between deafness and hearing, only to have that binary be bridged by Google’s very own video platform, Google Meet.

In class we circled again and again around societal reliance on technology to solve social issues, so for now it is important to note that the shots of Google’s services are intentional: they serve to promote the accessibility features in Google Meet’s interface. The technology is marketed as a solution to communication issues, both geographic and ability-related. Communication is presented as a problem that can be solved with a few Google searches and a voice-to-text application. Technology aligns with medicine here in that both are portrayed as prescriptive, as simply the state of things.

Disability is portrayed in the video as an issue that technology can easily surmount, when we know that is not wholly the case. Tony’s narrative serves as an anchor to a solution that does not fully ‘resolve’ the dilemma of communication. Tony still uses sign language and is teaching it to his son, showing outright that those real forms of communication are still necessary. Tony is not physically present with his parents, so the technology takes on a simulation of communication while presenting itself as the future of communication. The technology is offered as a new language to be learned; but it is not enough that sign language is present within the video or within the Keynote. The existing hierarchical framework still resides at the heart of the information being gathered, and the realization follows that Tony’s story serves to market a reality rather than represent one. Google’s focus is on security technology, and as it stands the platform serves as a mediating device for existing hierarchies to exert power on the internet.

Speech recognition in particular is an enormous site for critique, as Google is incorporating active translation into its video meetings. In fact, Google has developed speech recognition software to aid in communication services for years, and its services contain robust digital algorithms built on a decade of data. In other words, this technology is not new. Zoom, for its part, restricted its closed-captioning services to paying users until February of 2021, meaning that you had to pay for access to an accessibility feature. The contradiction should not go unemphasized. Companies have an underlying profit motive, and that results in the overaccumulation of information, data sets, and money for the purpose of making more money.

To summarize: there will be many technological improvements to accessibility features in consumer products in the coming years, and it will be important to critique and contextualize these features as they are developed. Google has fired two leaders of its A.I. ethics team, including the team’s founder. The first to be fired, Timnit Gebru, was about to publish a paper on the very models of data analytics described above. The paper, a collaboration among seven researchers, was titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. Google withheld approval for its publication, but it sought to examine the field of language modeling within data analytics.

The MIT Technology Review describes the paper as addressing the environmental costs of running these models; the fact that the models do not understand language but merely manipulate it; data sets too large to check for bias; the overrepresentation of Western countries; and the models’ resistance to intentional changes in language, such as the efforts of BLM and the MeToo movement to center anti-racist language in political and social discourse. The implications of this paper are enormous and require further investigation, which is why I ultimately want to continue Sociable Media in some fashion after the semester. As it stands, it seems the issues of language and accessibility will play out in systems largely indecipherable to the average user of Google, but I want to engage in the work of fighting what looks like the mutation of hierarchy into digital information.
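To make concrete the claim that these models manipulate language without understanding it, here is a toy sketch of my own (not from the paper, and nothing like the scale of Google’s models): a bigram predictor that “generates” the next word purely from co-occurrence counts in whatever text it was fed. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy corpus; real language models are trained on billions of words,
# but the underlying move is the same: statistical association, not meaning.
corpus = ("the child of deaf adults signs and "
          "the child of hearing adults speaks").split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the training text."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("child"))    # "of" -- the parrot repeats its statistics
print(predict_next("justice"))  # None -- no statistics, nothing to say
```

Whatever biases or omissions live in the training text are reproduced faithfully, which is the paper’s point about data sets too large to audit.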

What does this have to do with disability in the end? Tony’s story centers disability in the advertisement, but I am not so sure Google is doing the same with its search engines. Given the company’s past mismanagement of data and its general obfuscation of its processes to anyone outside a computer science perspective, it is important to approach this ethical issue from every intersection in order to fully render the hierarchy of neoliberal capitalism inert. As Timnit Gebru herself said, “It could not have been done by a pair of researchers or even four. It was a collaboration,” and we need to do the same. Critical media analysis seems central to disability studies, but it is not so in other domains. Using interdisciplinary methods of analysis, including textual, image-based, and auditory approaches, will help build an archive of resistance against the seemingly ubiquitous internet and its service providers. Applying the methodology of disability studies will be crucial in unpacking the power dynamics latent in using the internet. All of this is to say that using the internet is not itself immoral or unethical; what is troubling is the way we have allowed our very real oppressive hierarchies to leak into it. Conceived and typed in Google Docs, of course.