[Image: a watercolour illustration of seven people and a dog against a light blue background, all intently looking at their mobile phones, which are connected by a web-like network in the air. Credit: Jamillah Knowles & Reset.Tech Australia / betterimagesofai.org]

Selective Perspectives

A Content Analysis of The New York Times’ Reporting on Artificial Intelligence

Executive Summary

Mainstream media reporting on artificial intelligence (AI) is primarily influenced by the opinions, agendas, and perspectives of the commercial technology industry. This produces reductive narratives about AI, how it operates, and who is best suited to inform the public on its development.

We conducted a mixed-methods content analysis of the New York Times (NYT) to investigate whose perspectives and voices are cited in the NYT’s reporting on AI, and which industries and sectors they come from.

This work contributes to our understanding of how we can support the inclusion of more voices and expertise in media coverage. The final section of the report contains ideas for next steps on what can be done about the lack of nuance, accuracy, and diversity of voices in AI reporting.

The NYT’s reporting is disproportionately influenced by the perspectives of individuals within the commercial technology industry

Specifically, the breakdown of individuals mentioned and quoted skews towards people working at commercial technology organizations. Elon Musk and Sam Altman are the individuals the NYT covers most frequently, and OpenAI and Google are the two organizations mentioned the most.

Those within the industry are often framed as ‘experts’, while those from academia, civil society, and other sectors are framed as ‘outside experts’.

Those from academia, civil society, and other sectors are also featured less frequently, and are typically framed as critics and/or skeptics, regardless of the substance of their claims.

The NYT lacks clear and consistent definitions of specific AI technologies.

It relies instead on loose definitions and hyperlinks to vague or outdated articles.

Reporting consistently overlooks the perspectives of voices from every sector except the commercial technology industry.

Narratives often shift to current news stories concerning commercial technology organizations.

Stories rely on a reductive hero vs. villain literary trope.

This trope, applied to Anthropic vs. OpenAI, the United States vs. China, and Humans vs. Machines, fuels polarizing and reductive narratives about AI.

Case Study

The New York Times

Written by Hanna Barakat, Edited by Georgia Iacovou

We chose to focus on the NYT because it is a dominant publication in setting the agenda for international and United States domestic media coverage. It has also attempted to position itself as “a leader” in assessing how journalists should use AI in their reporting.

While we were interested in whose voices and organizations dominate the coverage of AI, we were also interested in the long-tail distribution: which voices and industries were mentioned only a few times? Thus, we took a mixed-methods approach to balance high-level insights with room to investigate specific quotes and each article’s contextual information.

[Image: a black and white watercolour of the New York Times office building in New York]

Industries of the Individuals Quoted

Our findings show that the NYT frequently quotes, mentions, and cites leaders within commercial technology organizations, which suggests that the NYT’s reporting on the development of AI is influenced by the priorities and motivations of the commercial technology industry.

Specifically, 67% of the people quoted and 61% of the people mentioned work in the commercial tech industry, while only 6% and 3.5%, respectively, were from civil society organizations.

Download the full report to explore our findings and sources in detail.

Conclusion


The commercial technology industry is not only building AI systems, but also influencing public perception of AI. This occurs partly through explicit efforts by companies to shape public perception of their products and their role in society. But it is also reinforced when media companies and journalists make decisions about who gets to tell stories about technology. While we focused on the NYT, our findings are likely indicative of a broader problem. If we want robust public debate about the role of AI in society, the public needs to be presented with a range of perspectives and expertise. Based on our research, this is not happening in NYT coverage of AI.

This research leaves us asking: what is the role of journalism on AI? What responsibility do journalists have to research and interrogate the development of AI as a technology, rather than simply provide periodic business updates on the technology companies themselves? How can journalists interrogate the assumptions and ideological underpinnings that the commercial technology industry and civil society alike perpetuate?

Abeba Birhane, an AI researcher and senior fellow at the Mozilla Foundation, clearly captures this industry alignment, saying, “They [Reporters] just end up becoming a PR machine themselves for those tools.” Our research shows how these dynamics permeate the layers of reporting: whose voices are centered, who is afforded ‘expertise’, which resources are hyperlinked, and who is framed as the villain and who as the protagonist.

Obscure language and vague definitions preserve the power structures that benefit technology developers and their organizations: the language is not only unclear to audiences, it also acts as an effective mechanism for maintaining corporate power.