
Our Method: Educate, Assess, Analyze

Published on August 2, 2023

Research Abstract:
We address the complexities of today's technology in pursuit of better futures, with a focus on marginalized communities and concerns over Google's FLoC and privacy.
“[I]t is through the acknowledgement of the incredible complexity of human thought and ingenuity of our practices that we can rethink our mistakes and create a host of other options for sociotechnical futures worth inhabiting”
Kavita Philip, Your Computer Is On Fire

The last few months in the tech ethics space have felt like an uphill battle to many in the field, the Cyber Collective team included. From Google’s firing of its Ethical AI leads, Dr. Timnit Gebru and Dr. Margaret Mitchell, to Karen Hao’s investigation of the systemic underpinnings of Facebook’s misinformation problem, these recent events are symptoms of the problem we’re helping to solve.

We work to address the unintended harmful consequences of the technologies deployed in today’s landscape by centering Black, Indigenous, people of color (BIPOC), women, and marginalized communities in our education and research. In this report, we’ll take a deeper look into these recent events, the larger context of the problems they speak to, and the method we’ll use to investigate possible solutions this year.

Recent News in Tech

Google fired its two AI ethics research leads, Dr. Timnit Gebru and Dr. Margaret Mitchell.¹ ² Both were co-authors of a paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, which discusses four major risks of large language models.³ These models can write text, provide translations, and generate code, and they form a large part of the foundation of Google’s business. Experts in the field suspected that the firings were retaliation for this paper.

In these firings, there’s a clear tension between business interests and society’s well-being, a theme that recurs in our work and creative research sessions (CRS). Linguist Emily M. Bender, another co-author on the paper, describes how this tension plays out: “[W]e end up with an ecosystem that maybe has incentives that are not the very best ones for the progress of science for the world.” ⁴

The systemic issues underlying Facebook’s polarization and misinformation problems were brought to light in an MIT Technology Review article by Karen Hao.⁵ Facebook’s focus on growth has forced the company to grapple with the unintended consequences of the spread of misinformation and polarization on its platform. Hao describes the mechanics of the problem: “The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff.” In essence, Facebook’s priority to grow its business has had social costs, including worsening political tension and fraying social fabric through polarization. Hao argues that underlying systemic and organizational issues prevent the company from effectively challenging the status quo it created.

Google announced that it would phase out the use of third-party cookies in favor of Federated Learning of Cohorts (FLoC) across its ad network and Chrome.⁶ Per Google, FLoC is a new approach to targeted advertising that groups together users who have similar interests into cohorts. It’s meant to be a privacy-friendly alternative to third-party cookies.

While FLoC will replace third-party cookies, its function is nearly identical: to share insights about users with advertisers. FLoC groups users together based on their weekly browsing history and assigns a behavioral label that advertisers can use to serve targeted ads to these cohorts. The Electronic Frontier Foundation (EFF) warns that FLoC does not change the status quo of the surveillance business model and potentially introduces new privacy risks.⁷ The EFF also argues that “the power to target is the power to discriminate”, and that FLoC’s opacity will make both the management of abuse and regulation difficult. Even with the introduction of FLoC, a major power imbalance remains between users, advertisers, and Google, and marginalized people in particular will continue to be harmed.
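To make the mechanism concrete, here is a minimal, hypothetical Python sketch of the general idea behind cohort assignment: a SimHash-style locality-sensitive hash over a user's browsing domains, so that similar weekly histories tend to map to the same cohort label. The bit width, hash choice, function names, and example domains are illustrative assumptions, not Google's actual FLoC implementation.

```python
import hashlib

# Minimal, hypothetical sketch of SimHash-style cohort assignment.
# COHORT_BITS, the hash choice, and the domains are illustrative assumptions,
# not Google's actual FLoC implementation.
COHORT_BITS = 16

def domain_hash(domain: str) -> int:
    """Hash a domain deterministically to a COHORT_BITS-wide integer."""
    digest = hashlib.sha256(domain.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") & ((1 << COHORT_BITS) - 1)

def simhash_cohort(weekly_history: list[str]) -> int:
    """Derive a cohort ID; similar browsing histories tend to share one."""
    bit_counts = [0] * COHORT_BITS
    for domain in set(weekly_history):
        h = domain_hash(domain)
        for bit in range(COHORT_BITS):
            bit_counts[bit] += 1 if (h >> bit) & 1 else -1
    cohort_id = 0
    for bit in range(COHORT_BITS):
        if bit_counts[bit] > 0:
            cohort_id |= 1 << bit
    return cohort_id

# Users with largely overlapping weekly histories tend to land in the same
# or nearby cohorts: the behavioral label advertisers can target.
print(simhash_cohort(["news.example", "shoes.example", "travel.example"]))
print(simhash_cohort(["news.example", "shoes.example", "recipes.example"]))
```

Because the cohort ID is derived directly from browsing behavior, it is itself a behavioral signal that sites and advertisers can read, which is the kind of status-quo concern the EFF raises.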

As people who believe in a future where technology can be a force for good for all, it’s our responsibility to continue to investigate, question, and peel back the layers of these issues for a wider audience, as well as advocate for a world where technology alone isn’t the solution to the problems of human life. Central to our work is creating space for people’s voices by reinforcing their power to advocate for their agency and autonomy through behavioral change, amplification, and feedback.

That’s what we’re continuing to do this year through what we have previously called our community events. The next section outlines the structured approach we’re pivoting to: our new creative research sessions (CRS).

Our Approach to Creative Research Sessions

During our new CRS, we aim to investigate the impact technology has on marginalized communities, who make up the overwhelming majority of our CRS attendees. We gather our participants’ thoughts and feedback on the technologies they use regularly, then use that feedback to give companies developing technology guidance on social contexts and downstream effects, with the intent of producing equitable outcomes. We also use this feedback to advocate for policy change.

To effectively gather informed participant responses, here’s the approach we’re implementing and testing:

Educate: We explain concepts and technologies and their social implications in accessible language.

Assess: We ask participants poll questions during the education session to assess comprehension and retention.

Analyze: We ask participants open-ended discussion questions and open the floor for questions.

About Our Approach

Step 1: Educate

What we do: We explain concepts and technologies and their social implications in accessible language, breaking these themes down in ways that people who use technology—but aren’t necessarily experts—can understand.

Why we do this: We do this so that participants can better understand the technologies they use, are exposed to, and may be affected by, and then give informed feedback that speaks to the social and cultural contexts and impacts of the technologies we discuss. We also aim to give participants vocabulary for technical concepts that they can use to further investigate the topics we cover and to continue the conversation outside the CRS they attend. Benjamin Peters describes the problem we’re solving as a fire and argues that “few have the collective language to call to put it out”.⁸ Making tech education and vocabulary accessible during the education portion of the CRS is our answer to this gap.

Step 2: Assess

What we do: We ask participants poll questions during the education session to assess comprehension and retention.

Why we do this: To determine whether our explanations are clear and easy to understand, we ask questions like the following:

  • Do you understand what we’re teaching?
  • Are we making this information accessible to you?
  • After attending this session, will you consider changing your habits online?

This ensures that we can better tailor our conversations in real time and get more accurate participant insights. We also use poll responses to, in part, inform the conversation in the “Analyze” portion of the CRS.
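As a concrete illustration of how poll responses might feed back into a session in real time, here is a minimal, hypothetical sketch that tallies yes/no answers and flags questions to revisit before the Analyze portion. The question wording comes from the list above; the answer format, data structure, and threshold are assumptions made for illustration, not a description of our actual tooling.

```python
from collections import Counter

# Hypothetical sketch only: the answer format and threshold are illustrative
# assumptions, not Cyber Collective's actual tooling.
REVISIT_THRESHOLD = 0.75  # assumed share of "yes" answers we treat as "clear"

def flag_for_revisit(poll: dict[str, list[str]]) -> list[str]:
    """Return poll questions whose share of 'yes' answers falls below the threshold."""
    flagged = []
    for question, answers in poll.items():
        counts = Counter(answer.strip().lower() for answer in answers)
        yes_share = counts["yes"] / len(answers)
        if yes_share < REVISIT_THRESHOLD:
            flagged.append(question)
    return flagged

responses = {
    "Do you understand what we're teaching?": ["yes", "yes", "no", "yes"],
    "Are we making this information accessible to you?": ["yes", "no", "no", "yes"],
}
print(flag_for_revisit(responses))  # questions to revisit in the Analyze step
```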

Step 3: Analyze

What we do: We ask participants open-ended discussion questions and open the floor for questions.

Why we do this: We ask open-ended questions to learn, through participants’ respective lenses, which unintended consequences and downstream effects of these technologies matter most to them. By letting participants tell us what’s most important to them through their answers and questions, we can be more inclusive of different perspectives and minimize blind spots in our advocacy and work. With this approach, we aim to investigate and understand the social context of everyday technologies through accurate and meaningful feedback from our participants. We look forward to advocating for mutually beneficial change in industry and policy.

What’s Next?

As a translator between our community and the tech industry, our goal is to humanize technology and create a future where tech works for everyone. To do this, we need to question and investigate the technology and tech industry we often take for granted—it’s become, at least in part, the water in which we swim: increasingly interwoven with our lives and difficult to challenge.

Technology’s direct impact on marginalized people has, for many, raised the question of how to navigate the tension between being beholden to technology and freed by it. Join us this year at our Creative Research Sessions as we explore these themes and learn how to improve the outcomes technology produces for marginalized communities.


Sources

  1. Gebru, T. [@timnitgebru]. (2021, February 19). I expected nothing more obviously. I write an email asking for things, I get fired, and then after a 3… [Tweet]. Twitter. https://twitter.com/timnitgebru/status/1362838828046831619
  2. Mitchell, M. [@mmitchell_ai]. (2021, February 19). I’m fired. [Tweet]. Twitter. https://twitter.com/mmitchell_ai/status/1362885356127801345
  3. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. doi:10.1145/3442188.3445922
  4. Hao, K. (2020, December 07). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. Retrieved from https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
  5. Hao, K. (2021, March 11). He got Facebook hooked on AI. Now he can’t fix its misinformation addiction. Retrieved from https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/
  6. Bindra, C. (2021, January 25). Building a privacy-first future for web advertising. Retrieved from https://blog.google/products/ads-commerce/2021-01-privacy-sandbox/
  7. Cyphers, B. (2021, March 30). Google’s FLoC Is a Terrible Idea. Retrieved from https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea
  8. Mullaney, T. S., Peters, B., Hicks, M., & Philip, K. (2021). Your computer is on fire. Cambridge, MA: The MIT Press.