At the Intersection of Politics, Surveillance, and Race: An Interview with PhD Candidate Karina Rider, Queen’s University Department of Sociology

Tanisha Amarakoon, Chief of External Affairs

May 21, 2020


The rise of digital surveillance technologies in today's economy has undoubtedly sparked conversations about how data, and the technologies that collect it, can negatively affect certain communities. As a result, The Observer's External Affairs Branch brings you an interview with Queen's University PhD Candidate Karina Rider, who specializes in digital technology, economic policy, and political processes.

1.     What is the focus of your research and what interests you about surveillance studies?

When I began studying surveillance, I was really interested in the narrative that emerged in 2008 about phone encryption and the American federal government "going dark". I then did a historical analysis of encryption policy by collecting and analyzing every mention of encryption in the Congressional record after 1990. In analyzing those policies and records, I came to one overarching conclusion: many "anti-surveillance arguments" that relied on privacy as a defence could easily be co-opted by the federal government to facilitate additional surveillance.

That research has led me to my current dissertation, which looks at how political groups, grassroots organizations, and non-profits in the San Francisco Bay Area target digital technologies for social change. As a result, I've had the opportunity to conduct many interviews with organizations that do counter-surveillance advocacy at a local level. For example, I've spoken with labour organizers inside technology companies who hold those companies accountable for the products they produce. I've also spoken to a lot of tech workers who have volunteered to build and design tech for social good. These are usually the types of technologies that local governments are supposed to use to make their work more efficient, effective, and accountable to local residents. Last, and perhaps most interesting, I've spoken with many student groups that are organizing boycotts. I think this will be a really effective tactic in the future, where students at universities like Stanford, who make up a significant portion of the hiring pool in the area, say, "No, if you're building technologies to harm people with drones, I'm not going to work for you."

So, in sum, I'm looking at how the problems of surveillance are being framed by commentators, how they're being addressed in practice, and what some of the potential consequences of that might be.

2.     Many scholars have argued that surveillance is a heavily racialized phenomenon. What are some examples of how surveillance, and the technologies which are used to pursue it, demonstrate racialization and stratification in the digital service economy?

It’s key to understand that when talking about the service economy, it’s almost impossible to separate surveillance from digital services. In order to understand economic production under digital capitalism, you by definition have to talk about surveillance, data collection, and analysis. 

As a result, the basis of providing services is rooted in surveillance. 

In terms of how it's racialized and stratified, there are a couple of different layers to it.

On one hand, there's the economic organization of certain companies. At the local level, these companies have a huge impact on labour markets and who gets to work where. What often happens is that companies stratify their labour force into highly paid engineers, who tend to be disproportionately white and male, and a subset of workers who remain almost hidden from consumers and citizens while doing a lot of other work behind the scenes. Content moderation on Facebook is one example. The people doing the cleaning of the data are predominantly low-income people of colour, and in some cases are located outside of the United States, working for almost nothing.

Then, on the other hand, there's how the services themselves are offered. A good example is how digital scooters were regulated in San Francisco last year. When piloting how scooters were going to be offered, the city put forth a set of criteria for companies to follow. One of these was that companies had to ensure that 20% of the services were offered to low-income, at-risk communities that don't have equitable access to public transit. However, some of these companies did what's called "digital redlining," where they didn't let users drop off the scooters in certain neighbourhoods. One of those neighbourhoods was the Tenderloin district, which has a large homeless population and is also known to be a low-income, over-policed area. Another was Chinatown, a community of colour.

These scooter companies were heavily criticized. Despite presenting their business in egalitarian terms, the platform itself was built on assumptions about which communities were dangerous and which ones were not.

We’ve also seen this happen with Uber. There are reports online where customers claim they were charged extra for being picked up or dropped off in certain neighbourhoods. Obviously it’s kind of ambiguous, because we don’t know exactly how the pricing algorithm works, but some people are anecdotally coming out and saying that they are being charged more when ordering an Uber in a neighbourhood that might be a historically black community. 

Again, at the outset these technologies seem objective, based on data about surge pricing, traffic, time of day, and so on, which seems to have nothing to do with politics. But when we dig into how the data is collected and what actually gets inputted into those systems, we see otherwise. The reality is that if the technology comes from a stratified society, we're going to get a stratified service.

3.     In what ways do such technologies disadvantage people of colour? 

One technology that most certainly affects people of colour is Amazon Ring. It’s one of the best illustrative examples of how tech companies can profit off of pre-existing fears and racialization. 

Amazon's Ring was originally built so that customers could keep watch over Amazon packages on their doorsteps. The device is motion-activated, records any movement, and allows users to upload surveillance footage to social media. Despite the original intent, however, Ring ended up becoming more of a "do it yourself" technology for policing who belongs in your neighbourhood based on who users deem to be "suspicious".

Obviously the second you start talking about neighborhoods, you’re automatically talking about race. In the same light, when you start talking about neighborhood safety, and toss technology in the mix, you’re only going to get heightened conversations about race. The reality is that technology just accelerates the process of discrimination.  

So, with Ring, it's very easy to see how categorizing people who look "suspicious" on surveillance footage might disproportionately affect certain races.

Further, Ring users can also choose to share footage they deem "suspicious" with local police authorities. That's what makes a difference. It's so easy to capture data and share it with local police, and thereafter for the police to keep track of people. Plus, Amazon has a ton of non-disclosure agreements with local police units, so we don't really know what's going on with the data that's being shared.

We also see these disadvantages in more benign technologies, like motion sensing devices. 

Many of these devices are built to function based on the light that reflects off of human skin, which obviously works better for lighter-skinned individuals. That's an active decision made on behalf of the creators. They could build the system a different way, but they don't. It serves as a constant reminder of how the creators of such technologies envision their users.

In sum, these technologies disadvantage people at different levels and can have varying effects. Ring could have some very severe consequences, like criminalization. But these smaller, mundane, everyday interactions with technology also have effects. The fact that such technologies may not function for people with darker skin only reminds those affected that they're not included in the definition of what a "user" is.

Karina Rider’s contact information can be found on the Queen’s University Sociology Department page:

https://www.queensu.ca/sociology/people/graduate-students/karina-rider.
