Gender, Sex and Tech: Continuing the Conversation
Episode 9: Interview with Sahar Raza
Transcription by Ganesh Pillai
Jennifer Jill Fellows: This episode comes with a content warning. A discussion of police racism and brutality begins at the 44 minute mark and ends at the 50 minute mark.
Jill: That's the sound of a police surveillance helicopter flying overhead, a sound that has become commonplace to anyone living in an urban or heavily policed area. I think many of us know that this surveillance is not neutral, and that it disproportionately harms Black and Indigenous people. However, increasingly, surveillance sounds not like helicopters flying overhead, but like this (cue smartphone typing). Social media posts, email accounts, smartphones, smart speakers, and even smart fridges are all methods of surveillance. And while we might think of these methods as innocuous when compared to police helicopters, the truth is that this surveillance is also not neutral.
(Music)
Jill: Hello and welcome to Gender, Sex and Tech: Continuing the Conversation. I'm your host, Jennifer Jill Fellows, and today I've invited Sahar Raza to speak with me about her research on so-called smart AI. Sahar Raza is the project manager of the National Right to Housing Network and a recent graduate of Ryerson and York University's joint Communication and Culture master's program. Her federally funded thesis research critically analyzed so-called smart city projects and discourses in Canada to uncover the policy and social justice implications of corporate-made smart, artificially intelligent, and algorithmic decision-making technologies. She now works as a human rights advocate and public policy professional who continues to research intersectional Canadian issues rooted in colonialism, privatization, systemic discrimination, and an over-reliance on AI technologies. And today she joins me to talk about what we can uncover about so-called "smart AI" when we consider it through a sociotechnical lens.
(Music)
Jill: Hi, Sahar. Welcome to the podcast.
Sahar Raza: Hi, Jill. Thank you for having me.
Jill: Thank you for being here. Well, virtually here.
Sahar: Yeah.
Jill: Which actually brings me into the next thing that I want to highlight, because I think it's very easy to forget that digital space is also a physical space. Well, multiple physical spaces. There are servers and cables connecting us today that lie on physical space. I am occupying a physical space. You are occupying a physical space. So I want to be mindful as I record Gender, Sex and Tech: Continuing the Conversation today that I am on the unceded land of the Coast Salish people of the Qiqéyt Nation, where I live, learn, play, and do my work. Sahar, will you share with us where you are located today?
Sahar: Yeah. I am calling in from the traditional territory of the Haudenosaunee, Huron-Wendat and many Anishinaabe peoples in the city that is known by settlers as Alliston, Ontario.
Jill: So, I definitely want to talk about smart AI and get into all of that. But before we do, let’s, let’s build a little bit of context here. Sahar, can you tell me a bit about your academic journey? Or perhaps more specifically, how it was that you became a communications and culture scholar?
Sahar: Yes. Okay. So, this is a great question because I definitely didn't see myself becoming a communication scholar when I was growing up. I didn't even know this was a field that existed, frankly. But yes, I did grow up in a very radical, social justice oriented, critical-thinking household. I was the daughter of immigrants, so I'm considered a second-generation Canadian. But my parents made a lot of effort to, you know, offset some of the discrimination that I could face in society, and to really ingrain equitable thinking into me and my brother. We would talk about systemic discrimination and racism at the dinner table and so on. But it was through media that I would see my parents' analysis in action, because in my day-to-day life, they really protected me from the impacts of racism, sexism, and so on. But then I lived in this very visual culture as a millennial; I was always exposed to TV and social media. And I noticed that I would rarely see people like myself on those platforms. So, I wouldn't see people with curly hair, or my curvy body, or my skin tone, or my family type, even parents like my own, represented in the media. And particularly after 9/11, I would notice that either we were not represented, or we would be framed in very specific, stereotypical, negative, and not very empowering ways. And so that really culminated in my undergraduate research. I wasn't studying communications at the time, I was actually studying arts, sciences and mathematics, but I ended up doing a thesis on that second-generation Asian Canadian identity because it was really meaningful to me. I interviewed a bunch of other South Asian kids like me. I found out that we all had very similar experiences of being stereotyped, of racism, and of media perpetuating all that through invisibilizing us or misrepresenting us and so on. And so, I think that really led me to engaging in more Communication Studies scholarship in my graduate studies, when I finally realized that this was a real field that you could study. And I find it a very exciting field because it really allows you to make sense of the world, to make sense of the power structures and the logics that inform everything that we see and do. It's a very open-ended field in my mind because anything can be studied as a communications product. And so, yes, now I continue to bring that social justice oriented lens that my parents ingrained in me into my communication scholarship.
Jill: I think that's really, really interesting, this idea that your household provided you with certain tools, and frameworks, and lenses through dinnertime conversation and through the work that your parents were involved in. And then, being a millennial and growing up in a very visual culture, you could take this and apply it to television and then, I assume, later in your life, to the Internet and all that comes with that. So, I think that's a really interesting journey to think about: the tools that we get when we're young, and how we can continue to use them even as the technologies around us change and shift.
Sahar: Yeah, exactly.
Jill: So when it comes to changing and shifting technologies, your chapter in the book “Gender, Sex and Tech” focuses quite a lot on smart AI. So for our listeners, can you give us a quick rundown of what smart AI is and perhaps also what led to your interest in smart AI?
Sahar: Yeah, I think this is a natural progression of where I left off with my undergraduate thesis, because that was very much focused on media representation and the way that it can perpetuate certain forms of discrimination and marginalization that have already existed in society for decades. And then I started to see this transition to algorithmic decision-making and algorithmic content-sorting logics on social media platforms. And it got me to kind of progress my thinking from just media representation to what some scholars refer to as "media recognition," which is looking at the ways in which technologies themselves, and the way that they're produced, can embed these systemic ways of discriminating against folks directly into these technological products and their logics. And so that's kind of what led me to this research. And to your point, or your question, about what smart AI is and some examples, I would say some of the key components of smart technologies are that they are connective, or they have some sort of connectivity. So Wi-Fi, Bluetooth, something of that sort. And then there is an element of using sensors or data collection mechanisms to record, transmit, store, and analyze data. And so, some obvious examples are smart home technologies like smart light bulbs and smart thermostats, smart speakers like your Amazon Alexa, and so on. But what I find to be one of the most powerful, but often overlooked, ones is the smartphone, which we use all the time. And when you think about the fact that these technologies are always listening, always collecting data, and always using that to kind of accrue profit for the developers of these technological products, it is kind of a scary thing to think about, hence why I really focused on that in my research.
Jill: I also think it's really interesting that you highlight the smartphone, because I am at least old enough, aging myself, to remember when there was all the branding about smart phones, right? We're all going to have phones that are smart somehow, as opposed to the flip phones of previous generations or whatever. And now they're ubiquitous, right? People don't necessarily even highlight a phone as a smart phone. It's just kind of assumed. And they're everywhere. Like, I feel really uncomfortable now because mine is sitting right here on this table beside me.
Sahar: I know, I know. Sometimes me and my friends are like should we all turn off our phones? We’re talking about private matters.
Jill: Right, and honestly, that doesn’t occur to me as much as I wish it did, that these have just become not just ubiquitous but completely accepted. We’ve just completely accepted these into our homes, and into our lives, and into our personal spaces. And when you talk about that, they are always listening, it is kinda creepy. And I think a lot of people just maybe don’t want to think about that.
Sahar: Yeah, exactly. It’s easier not to, but then, I don’t know if this has ever happened to you, but sometimes I’ll talk about something once and then the next thing I know, the next ad I see on my phone is for that exact thing. So, there’s clearly some sort of listening processing thing happening very quickly.
Jill: Yeah. So now that we've established that these things are a little unsettling, at least to us and our friends, and probably to listeners out there, what are some of the usual justifications that are given for implementing smart AI, whether it be phones or other smart tools that we have around us? How are we sold these?
Sahar: Yeah. I mean, that’s a good question because a lot of times they’re not things that we actually need. I mean, when I think about smart refrigerators, I was like, did anyone need this in their life? I think it’s all marketed to us as innovation almost for the sake of innovation, as if like being innovative, creating a new technology is just inherently better, and having the newer, better thing is always beneficial to us, right? And so, these data-driven, smart AI technologies are really sold to us as efficient, innovative, beneficial, superior to human decision-making. And then also objective, which I think is something that I really problematize in my research because I don’t think that they are objective, but they are sold to us in that way. And I also think that because we live in this like very tech driven, capitalist society, they’re also sold to us as almost inevitable, like when the iPhone 10 comes out, we’re all eventually going to transfer over to the iPhone 10. So, may as well just get it now, right? Which is also very much that consumer, consumerist type logic, I would say.
Jill: Yeah, no, I think that's really interesting. So, there is this kind of idea that newer is better. There's this idea that innovation is a good thing – innovation for innovation's sake, progress for progress's sake. And then this inevitability: this is going to happen anyway, so you might as well jump on the bandwagon. Otherwise you're going to be using your sad, old phone and you'll stand out or something like that.
Sahar: You’ll actually have to check how many eggs you have, your smart fridge won’t tell you.
Jill: Oh yeah, make your own grocery list. And then, yeah, this idea that it's going to be more objective, and all of this is kinda framed as a good thing. In your work, you take these ideas, and you draw a relationship between these justifications and, as you said, capitalist thinking. But also, relatedly, this idea of positivism or positivist thinking. Can you tell us a little bit about what positivism is?
Sahar: Yes. So, I think in the context of this work, what was relevant about this term is that it, it comes from this belief that researchers, mathematicians, and scientists can somehow study reality from this objective, neutral perspective. And then from that, extract these universal, generalizable facts that just exist. And they can do this with their rationality, and there’s really no acknowledgment of how their own experiences or perspectives could influence the way that they conduct their research, or their findings and their construction of reality. And I think that is very much what we’re seeing in the tech industry because we have these tech giants who claim to be neutral arbiters of reality, right? Like we’re just a platform, we’re just giving you a space to communicate so there is nothing nefarious going on here. We’re just using your data to give you more of what you like. So, it’s your own behavior, your own data that we are just kind of channeling back towards you. And so, there’s this whole narrative that there’s no need for regulation. There’s no need for any social justice oriented thinking because they’re neutral, which is obviously not the case. And we see that in the outcomes of many of these products, right?
Jill: Right. Yeah. So I've sometimes heard positivism, and I feel like maybe this is a grisly metaphor, but I'm going to put it out there anyway: I've sometimes heard positivistic thinking, scientific thinking in particular, described as "the ability to carve nature at its joints." I'm making air quotes here for people who can't see that, which is all of the listeners. Yeah, the idea that nature exists out there and it has existing categories and existing phenomena that we can just identify and pick out, carve nature at its joints, that we will be able to find these natural breaking points in the world and completely understand the world from this neutral view-from-nowhere perspective, where it's not my biases or my background that is influencing what I'm seeing. I'm seeing what's "genuinely there." Again, scare quotes. Yeah. And then if we take that idea, as you've said, and bring it out of science specifically, though relatedly, into this tech sector, we get this idea that once I have this understanding of the world, I can use it to create neutral tools, and that those tools can be used to gather more neutral data about the way the world is, to understand the world even better from this view-from-nowhere perspective.
Sahar: Exactly. It's like this assumption that we can rely on calculation and mathematics to solve all the world's problems, and instead of that neutral researcher or scientist that we had before, it's the neutral AI or algorithm.
Jill: And I've seen this justification before, but now it's like we're kinda cluing in that people are biased, people have perspectives. So, our solution to that is like, well, we'll just remove the people and we'll just use machines, and then it will be fine. But there are some problems with this, right? So, can we remove the bias by removing the people?
Sahar: Oh gosh. No, I mean, how ridiculous. The technologies are still made by people, and I think that's where it all comes to a head: we are still programming these technologies to certain ends, like we are trying to achieve certain outcomes. And so that, in itself, is biased, and that's going to bias all of the functioning of the AI. Plus there's the whole issue of the data that we're using to power the AI. So, I mean, I won't get into too many examples right now.
Jill: Oh, I want examples. What are some examples of biased AI?
Sahar: Oh, yeah. Okay. Well then, yeah, I think I mentioned this in my chapter, but Safiya Noble, a scholar in this sector, talks about an app, or algorithm, called Northpointe that was used by US courts to determine the future criminality of first-time offenders. And these algorithms were consistently over-assuming the future criminality of Black defendants, which means that they were going to go to jail because of that assumption, whereas white defendants were consistently assumed to not re-offend, even though the current data says that's not the case and it's actually the opposite. But it leads you to wonder how that happened, and it is very likely due to historical data. I mean, we know that Black people and people of color have been historically over-policed and over-incarcerated. So, if you use that data and funnel it into an algorithm, it's going to pop out more outcomes that reflect that history, right? So, we need to bring some sort of social justice oriented lens, or sociotechnical lens, to this. But the tech companies don't seem so interested in that, you know.
Jill: If we use data that was collected and interpreted with biased social systems in place, and we train our algorithms on that data, surprise, the algorithms will also learn the biases, is basically what I'm hearing.
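To make the dynamic Jill summarizes here concrete, here is a minimal, purely illustrative Python sketch (not from the chapter, and not based on any real policing dataset): it fabricates a toy population in which two groups reoffend at exactly the same true rate, but one group is over-policed, so its reoffenses show up in the historical records far more often. A "risk model" trained on those records then assigns the over-policed group a much higher predicted risk, even though the underlying behaviour is identical. Every group name, rate, and number below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical toy population: two groups with the SAME true reoffense rate.
group = rng.integers(0, 2, size=n)          # 0 = group A, 1 = group B
true_reoffend = rng.random(n) < 0.20        # identical 20% rate for everyone

# Historical bias in the *recorded* data: group B is over-policed, so its
# reoffenses are far more likely to be recorded as arrests.
detection_rate = np.where(group == 1, 0.90, 0.30)
recorded_arrest = true_reoffend & (rng.random(n) < detection_rate)

# A "risk model" as simple as it gets: predicted risk = observed arrest rate
# per group in the historical records it was trained on.
for g, name in [(0, "group A"), (1, "group B")]:
    true_rate = true_reoffend[group == g].mean()
    predicted = recorded_arrest[group == g].mean()
    print(f"{name}: true reoffense rate = {true_rate:.2f}, "
          f"model's predicted risk = {predicted:.2f}")

# Both groups have a ~0.20 true rate, but the predicted risk is ~0.06 for
# group A versus ~0.18 for group B: the model "learns" the over-policing,
# not the behaviour.
```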
Sahar: Exactly, exactly. And you know, one really obvious one that is almost laughable is when Microsoft created this AI chat-bot that was supposed to interact with people on Twitter. And then literally, within 24 hours, it became racist and neo-Nazi because it was interacting with racist and neo-Nazi people and content. And that's just the most obvious example of systems that often work in much more insidious, backend ways that we don't get to see.
Jill: So in that case, all of us participating on Twitter just trained this AI to be as horribly racist, and sexist, and transphobic as everybody else on Twitter is.
Sahar: I try to not be part of the subculture on Twitter, but it is powerful.
Jill: Yeah, even if you’re not part of that, I feel like everybody has seen a glimpse of it.
Sahar: Yes, definitely.
Jill: If you're on those platforms. I think we've got a good picture now of what smart AI is, how it is that smart AI is sold to us, built for us, and marketed through this kind of objective-savior, positivistic narrative, and why we should be questioning that narrative at the very least, or outright calling that narrative out as being untrue. But in your own research on smart AI, you spent a lot of time developing methods and tools that allow us to more systematically and critically critique the kind of positivistic justifications that are given. In particular, you developed a lens that you call a sociotechnical lens. So, can you tell us a little bit about what a sociotechnical lens is and how we might use it?
Sahar: Yeah, so I think the essence of the sociotechnical lens is about zooming out a bit. It's not just looking at the outcomes of technology, but looking at, again, what are the logics that are going into these technologies? Who is producing these technologies? In what sociopolitical environment are they being produced, and to what end? And also, why are they being produced? Because I think that is often the cause of a lot of the unjust outcomes: these technologies are being produced for corporate profit, or state surveillance, and so yeah, they're going to have inequitable outcomes. So, I draw from four main theories. And the reason that I chose these four is because Critical Theory is really important for bringing that critical lens to the capitalist influences on technology and kind of the income inequality elements that come out of that. And then we have postmodernist theory, which I think is really interesting because it directly refutes that positivistic thinking we were talking about, and it's all about appreciating that there's no such thing as an unbiased perspective on this world, and there's no such thing as unbiased intentions. And so I think that encourages us to think more intentionally about what kind of technology we want to produce, and what kind of outcomes we want it to have.
Jill: And maybe why we want the outcomes we want, what's our own context and situation?
Sahar: Exactly. Yes, that's exactly it. And then I think that's where intersectional feminism comes into play, because we want to think about the ways that these technologies could impact people across all sorts of social and political locations based on gender, race, income, ability, disability, etc. And think about: will this technology actually improve these people's lives, the most marginalized folks? Who will it benefit and harm? That's what intersectional feminism brings to the table. And then anti-colonial theory is just this really key piece to me, because it reminds us that this obsession with producing something that's better and smarter than human decision-making very much reflects the way that settler colonialism went down, right? With the settlers, in the Canadian context, coming to Canada and determining that these Indigenous folks are uncivilized and they don't know how to have a productive society and economy, so let us enforce our beliefs and our systems onto you. And now it's very interesting: just as equity, diversity and inclusion become really big conversations in workforces and workplaces in every sector, suddenly we're moving to tech instead for decision-making. Okay, because humans are not good decision-makers, now let's trust these technologies, which are largely also made by white men when you look at the tech industry. So, yes, there are just so many layers to this. And, that was a very long way of saying, that's what the sociotechnical lens brings to the table: it allows you to look at all of these different power dynamics, and really look critically at how bias fuses itself into technologies in all these ways.
Jill: Awesome. So, if we're using a sociotechnical lens, we want to be paying attention to positionality. Who is making these decisions? Why are they making them? What's the goal? What's the context in which people are making them? We want to be paying attention to who is being affected, and how that might not be, or we know isn't, equally distributed, in that a lot of smart AI unjustly targets certain groups of people more than others. And instead, we want to look at trying to use technology that, and I'm thinking of Kimberlé Crenshaw's phrase here, "where I enter, we all enter," to use things that benefit everybody, as you said, starting with the most marginalized, from the ground up, these kinds of ideas. And then lastly, we want to think about the parallels between the narratives we're given to justify AI tech and the narratives we're given to justify settler colonialism, and really notice that those narratives haven't changed a whole lot. And the thing that I found really stark, and I want to draw out in what you said, stark in a good way, is this idea that because the narratives haven't changed too much, what we have now is people saying, "Oh yeah, maybe we used to think that white people or white men were unbiased and objective, and now we know everybody actually has a perspective. So instead, let's look to technology to be unbiased and objective. And by the way, technology is still created by white men."
Sahar: That’s exactly it. It’s like just this, a lot of scholars say that capitalism and colonialism have this incredible way of reinventing themselves over and over again. And I just feel like AI and technology are just the latest frontier of those systems, right?
Jill: So, we think things have changed, but instead, kind of a fast one has been pulled, and the power is still maintained?
Sahar: Exactly. Yeah. And when you look at the terminology and the narrative that's used to justify technology, it is, again, all very similar to what we were using to justify colonialism, marginalization, racism, sexism, all of the terrible things.
Jill: So, I feel like I'm getting a feel for this. So, can we try using the sociotechnical lens? Can you give us an example of smart tech, and show how using a sociotechnical lens to analyze it works?
Sahar: Yeah. Okay. So, I mean, obviously there are examples like that Northpointe example I gave you, which has very significant impacts on people's lives in terms of incarceration and so on. But I think one that we can all relate to is the content-sorting algorithms on social media. And I think that one's a big one because we know that these algorithms take our data and then they make a lot of assumptions about us. So first of all, one assumption is that we will want to keep seeing the kind of content that we've already looked at, which is already a big assumption because I don't think that's the case. And I can tell you that after I planned my wedding, I have been seeing wedding content for the past two years. Like, I don't care anymore, so it's not actually that smart. But yeah, it's making these assumptions about who you are; it's putting you in boxes based on your gender, your age, your race, and the things that you've clicked. And then, rather than exposing you to new ideas and allowing you to have dialogue with people who think differently, we are just pushed into these echo bubbles, and we're seeing the effects of that on our broader society, I think. I mean, there's a reason that society has become so divisive, and we're completely unable to have productive dialogue amongst people who think differently. It's because we are being trained via these social media platforms to only interact with people who think like us, and to think that anyone else is different or out to lunch, right? And so, I think right there we see the effects of how these technologies can operate and how, through a sociotechnical lens, you can see that they're really putting us into these boxes and reproducing harmful, almost colonial, capitalist effects that may or may not be intentional, but have very serious consequences for society.
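As a purely illustrative aside (not drawn from the chapter), here is a minimal Python sketch of the kind of feedback loop Sahar describes: a toy recommender that fills a feed with whatever topic has the most past clicks. Because each round of recommendations generates the next round of clicks, a small early signal gets locked in and the simulated feed narrows around one topic. Every topic name, weight, and number here is made up.

```python
import random
from collections import Counter

random.seed(1)

topics = ["sports", "weddings", "politics", "home decor", "science"]

# A hypothetical user profile with one small early-click advantage.
clicks = Counter({t: 1 for t in topics})
clicks["weddings"] += 2

def recommend(clicks, k=10, explore=0.1):
    """Engagement-optimized ranking: fill most feed slots with the topic that
    has the highest past-click count, plus a little random exploration."""
    top = max(topics, key=lambda t: clicks[t])
    return [top if random.random() > explore else random.choice(topics)
            for _ in range(k)]

for round_num in range(1, 6):
    feed = recommend(clicks)
    for item in feed:            # the simulated user clicks what is shown,
        clicks[item] += 1        # and those clicks feed back into the profile
    share = clicks["weddings"] / sum(clicks.values())
    print(f"round {round_num}: 'weddings' share of profile = {share:.0%}")

# The share climbs each round: an early signal is reinforced and the feed
# narrows around it, which is the echo-bubble dynamic described above.
```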
Jill: Yeah. So one thing that this kind of reminds me of, particularly when we think about capitalism and settler colonial mindsets: the idea of sorting us into bubbles or into boxes based on data that they have about your race, your gender, your interests, is really kind of chilling when I think about settler colonialism's history of doing that en masse all around the world. And now we're doing it in digital space as well, right? So, the idea of drawing borders and geographically separating people, and now we're kind of seeing that digital separation. I don't know, that's just something that struck me as you were talking.
Sahar: That's completely what's happening. I mean, when I compare my explore page on Instagram to my partner's, I see some clear distinctions being made between our genders and whatever assumptions are being made about us. You know, his is all sports, and memes, and animals, and then mine is home decor, and weddings, and makeup. And I'm like, geez, I would like some of your content, but no, they created this definition of what we are, and now it's going to reinforce itself, right? Because that's all I see.
Jill: Yeah. And then you talked about how, when we think about this in wider society, this is quite harmful because we’re losing abilities to talk to other people. I think there’s also been discussion of how this can radicalize people, that you get pulled into a bubble and you keep getting your views reinforced and amplified by other people in the same bubble. And this leads to radicalization of people who definitely had problematic views before they got sucked into the bubble, but now things are worse.
Sahar: Yes. Yeah, exactly. I’ve heard that about ISIS, and QAnon, and stuff.
Jill: So why do you think this is? Like, if we think about the sociotechnical lens, and we think about who has designed this and for what purpose, and we think about social media and these kinds of bubbles, you say that the bubbles may be kind of an unintentional byproduct, but what was the purpose? Why is this happening?
Sahar: Yeah. So, my chapter touches on social sorting. And I think that a large part of why we are sorted is because these tech companies have way too much data, and they need to make sense of that data in order to market to us better. Ultimately, either they want to market to us, or they want to sell our data to somebody else who can market to us. But regardless, it's all in the service of promoting consumerism, right? So, by putting us into these boxes, there's a greater chance of us purchasing these products and enjoying this content and spending longer on these platforms. And then also, because we keep seeing the same content, you start to internalize it and you think that that is who you are: this is what I care about, this is what I like. And then you're more likely to feed back into the system, right? So, I think it is very consumerism-driven, although social sorting is also used by military and police bodies for other purposes, national security and policing purposes. And they use the same logic, right? It's to categorize, and manage, and influence people, essentially.
Jill: Okay. So, I want to come back to the issue of it being used at a state level. But first I want to dig a little bit more into surveillance capitalism at the consumerist level. So, you have this quotation from a Chief Data Scientist in Silicon Valley that I want to highlight because I found it so chilling. In your chapter in the book, you quote them saying, "the goal of everything we do is to change people's actual behavior at scale. When people use our app, we can capture their behaviors, identify good and bad behaviors, and develop ways to reward the good and punish the bad." So that's the end of the quote. And this really stood out to me, especially the last part, reward the good and punish the bad. This moralizing of behavior based on capitalist goals was just so stark to me. And I was wondering if you would speak to this a little bit, either using your sociotechnical lens, or by saying something about what you think is going on here.
Sahar: Yeah, it is quite chilling. But you know, when you start to think about the way that you interact with some of these platforms, and the way that we actually know that they operate, you see it in action. So, for example, on social media platforms like Instagram, you are rewarded with notifications, likes, and comments the more that you post. And it's actually been stated online that if you post less, your content is going to be seen by fewer people, so you won't get as much of that reward. So, in that way, you're being punished for not spending as much time on the app, right? And likewise, I use Duolingo to learn new languages, and I've seen that they explicitly do this. They make you watch an ad so that you can get more rewards that will allow you to play the game for longer and better. And so yeah, there's this clear element of not only socially sorting your behaviors into good and bad, and I'm sure that there are many layers in that as well, but then also shaping our behavior to make us more profitable data cows, users, and consumers, I would say.
Jill: Yeah, more profitable in terms of holding our attention for longer, and also that we give them more data. And then that we spend the money from the ads that they are returning to us based on the data that we gave to them, and the attention that we gave to them.
Sahar: Exactly. Yeah, it's like a self-fulfilling system that just keeps reinforcing itself.
Jill: And I find the moralizing here really interesting when I think about the positivistic framework that you have identified, that many of these data scientists and developers are working under. There is this idea that we have to keep this going, that there is something morally good, inevitable, and imperative in holding our attention on these apps and in mining more and more and more of our data so that they can drill down and sort me into boxes so that they can sell me more makeup or what have you.
Sahar: Yeah, capitalism is so contingent on this need to constantly grow and produce more profit and accumulate more things. And so, yeah, you see that in the way that they likely categorize our good and bad behaviors. And then you see that in the values that they continue to sell us, that you need more and better things, right?
Jill: So to stick with the consumer capitalist framework, for a moment, you talk about labortainment and aspirational labor in this chapter as well. And I was wondering if you could discuss these concepts a little bit for people who might not be familiar.
Sahar: Yeah, so labortainment is essentially when we do any sort of activity online that we may find entertaining, like writing a product review, making a video, uploading a photo, offering feedback to an online platform of some sort. We may find it helpful or meaningful, but it is also a form of labor that is directly benefiting these tech firms, these brands, these corporations, because they have users and opinion leaders like us freely using their products and vouching for them, without being paid or compensated in any way. So, in that way, we essentially become unpaid co-producers of tech and brands and so on. And then along a similar vein, aspirational labor: I would essentially make it akin to being an aspiring social media influencer. It's when you want to achieve some sort of entrepreneurial or creative success, maybe outside of these platforms, but you think that using these platforms and doing what you love will eventually lead to compensation of some sort. However, a lot of studies have found that the majority of folks who engage in that aspirational labor don't realize the dream of going pro. And so really you're just offering free, or cheap, labor to brands on social media platforms again, but through this narrative or assumption that by using these platforms, you are somehow expressing your individualism and creativity and so on. Just as a little anecdote, when Snapchat was a really big thing, I remember my friends used to joke that "if it didn't Snap, it didn't happ." It's almost like if you don't post it, it doesn't exist, you know? I find labortainment and aspirational labor both really prey on that thinking that the more you produce, the more you do, the more real and meaningful your life is.
Jill: Yeah, I'm thinking about earlier too, when Facebook was much bigger and people used to check in everywhere they were going. So you're like, I'm at the bar, checked in at the bar, or checked in, I don't know, at that vacation that I went on, or wherever. And it was like everyone was doing this. Like, here's my location, here's where I am, and it was this really big thing. And if you didn't check in, your life was sad.
Sahar: And realistically, no one probably cares that much. But the social media platforms are learning a lot about us from that.
Jill: Yeah, they care.
Sahar: Yeah, they care.
Jill: They definitely want us to check in. When I teach tech ethics, or gender and tech, so many of my students, when we talk about aspirational labor, have participated in some form of labortainment, not necessarily because they wanted to go pro. And I would guess the majority of our listeners have done so too. I've done it, right? I have a couple of YouTube videos, I have posted in other social media places, taken pictures, put them online, reviewed products, these kinds of things. And a lot of people, as you said, find this creative, a way to express themselves or find joy, or find that they're able to build a small community around something that they love, maybe. And so, what do we need to remember about this labor, since so many of us are engaged in it?
Sahar: Yeah, that's such a good point because it can be enjoyable. I mean, there's a reason that we're all on these platforms. Yes, they are designed to be addictive, and that's something that I think we need to be mindful of. I think one thing to think critically about is, at what point does it transition from being something truly enjoyable to something that has just become an instinct or an addiction? Right? Like, sometimes I find that I'll just open my phone and start clicking all the social media apps because I'm just used to it, although there's nothing that I'm really looking for on those apps, right? So, I think it's being mindful and critical of the way that we're using these technologies and whether it's actually benefiting us, you know. At what point does it start doing more harm than good? Because, yes, social media can give us a lot of connective feelings, but often those relationships that you create, especially on platforms like Instagram that are very visual and so on, those are not the friendships you're going to call on when you're going through a tough time or something. And so, at what point do you need to detach from that and develop real, in-person relationships and a sense of belonging that is meaningful? You have to figure that out for yourself, right? But even on a more macro scale, I think one thing that the sociotechnical lens demands, in my mind, is that we zoom out and think about what kind of world we want to live in, and then start to think about how technology fits into that. And so, even on a more personal level, when we think about our relationship with technology, I think we need to zoom out and think about what we want to achieve in our life, what we want from it, what our values are. And then start thinking about how our activity on these technological platforms is actually moving us towards those ends, and at what point it is starting to do the opposite. I think that's the only thing I would add, but I completely agree. It can be fun sometimes.
Jill: If I don't particularly want a world where giant tech firms require their delivery drivers to pee in bottles on their way to people's houses, and where CEOs make hundreds of times more than frontline workers, I may want to consider how my unpaid labor, in terms of reviewing products, or other things I might be doing, is contributing to this kind of world that I don't necessarily want to see perpetuated, for example.
Sahar: Yes, exactly. I mean, that's getting real deep to the core of this, yes. You know, the tough part is that you don't want to put the onus on individual people, to be like, you are feeding into capitalism by using this platform, because I think there's more to it than that. Individuals using it or not using it won't really stop the capitalist machine from churning on. But at least if we can foster a culture of mindfulness when it comes to using these technologies, I think that we could see some more systemic change.
Jill: And I think that's a good place to begin. Because I do think, and this is a point you make in your chapter, there are a lot of people who don't think about how their labor is being exploited, because they think of these platforms more the way we might think of national telephone or post office systems, instead of as for-profit, multinational tech companies run by corporate elites.
Sahar: Exactly.
Jill: So, if you think of them as kind of public goods rather than a for-profit enterprise, then you may not really even be asking the kinds of questions that we’ve been prompting in this podcast, or that you phrased in your chapter. So, it’s a good place to begin, I think.
Sahar: Yeah, that's it. You're so right, because I think with social media, people are waking up to the fact that it is corporate-driven. But, for example, Google the search engine is a great example. I think you're right that it is perceived as a public good or just this neutral. . .
Jill: Objective?
Sahar: Yeah, all the things we’ve said when, in fact, you can pay to be at the top of Google, you can use search engine optimization and hire consultants and do this and that. And it’s very much not neutral.
Jill: And it also varies by what Google knows of us and our location, in terms of, if two people put in the same search term in different parts of the world, Google does not return the same results.
Sahar: Yeah, even when you start a sentence and it tries to fill in the rest of the sentence for you, I think there are assumptions being made there about what you may want to ask.
Jill: Right, which depends on what Google already knows about you and what boxes Google has put you in.
Sahar: Exactly, through those social sorting mechanisms that we talked about.
Jill: So, let's talk about social sorting in another context. Because we've talked about social sorting in this capitalist context, but as you said earlier in the podcast, it's not just corporations that are doing this, right? So, you give this quotation in your chapter to show a different way in which social sorting is done. You say, "Canada's ongoing crisis of missing and murdered Indigenous women and girls exemplifies how social sorting can also be used to intentionally and unjustly exclude certain groups from protective surveillance, and thus basic rights to safety, security, dignity, and life." Can you talk about this aspect of social sorting and what a sociotechnical lens might allow us to see here?
Sahar: Yeah, so I think, much like the corporate version of social sorting, the bias comes into play when you start to think about the historical systems and structures that have led us to this point of making these capitalist products. And so, when it comes to policing and surveillance, I think it's important to remember that surveillance has always historically been targeted towards Indigenous and Black folks in North America. Dating back to the transatlantic slave trade and settler colonialism, this policing-related surveillance has always categorized these folks, and many other racialized folks, but I would say specifically Black and Indigenous peoples. They have been categorized as threats to national security that we need to protect ourselves from, whereas the white dominant class is perceived as the "us" that needs to be protected. And I think this is most acute for Indigenous peoples because they have ongoing claims to this land that we have built this colonial and capitalist state on, and so I think that dichotomy very much still exists. And you see it in the statistics. I mean, I think the statistic has become much more jarring, but last I checked, about 4% of the population is Indigenous, but over 40 percent of people in prisons are Indigenous.
Jill: And this is specific to Canada?
Sahar: Yeah, this is specific to Canada. Yeah. But then it makes you wonder: okay, if we are over-incarcerating Indigenous folks, then how are so many Indigenous women and girls being missed when it comes to protective policing and surveillance? And again, it goes back to the fact that they have been sorted into the threat category and not into the protected category, right? I think they're seen as these impediments to Western capitalist modernization. And yeah, they are, because capitalism is leading to environmental degradation and the concentration of wealth and power amongst the elite, and many Indigenous folks are standing against those things.
Jill: Yeah, so we can see that social sorting can be used both to include groups and to target them, in terms of capitalist programs to get more attention, get more data, and sell more stuff, but also, as you say, there's the social sorting into the groups that must be protected and the groups that are threats that the other group needs to be protected from. And when that social sorting happens, what we see is police working against the threat, and anybody who's captured in the group that's deemed a threat then loses police and legal protection because they're in the wrong box.
Sahar: Exactly. And I think that everything that we saw last year, you know, with the whole George Floyd incident and then Black Lives Matter, is very much speaking to the same issue in the United States, right, for Black Americans.
Jill: Yeah. And again, we can see that the same biases that existed in white settler colonialism are being reproduced in this social sorting, both algorithmically and in terms of how the police operate, in terms of which people are put in the boxes for protection – white women, white men – and which people are put in the boxes of biggest threat, which as you said, disproportionately falls to Black and Indigenous people.
Sahar: Yeah, exactly. And I think the trouble is sometimes when you try to make these arguments, people will draw attention to the fact that not all racialized folks are put in that bucket. But I think again, we have to apply that intersectional lens of like yeah, if you’re a property-owning racialized person, then suddenly you’re in the protected class versus the threatening class.
Jill: So, this is where intersectionality can really help us kind of see how these prejudices are playing out when it comes to this algorithmic social sorting.
Sahar: Exactly, yeah. And right now, at least, it's still a bit more easily discernible, because we don't have AI being used quite as significantly in policing just yet, compared to these tech corporations and platforms and so on. But it's just a matter of time.
Jill: Yeah. So, this is a place, it sounds like, where because the AI isn’t being so heavily used yet, we could actually make a difference at the start, rather than having to try to work backwards, as we do when it comes to the capitalist situation. Like, we can already think now about what kind of world we want, and try and direct AI accordingly, and put public pressure before we get to this inevitability that keeps being sold to us?
Sahar: Yeah, exactly. I mean, yeah, there's a bit more of an understanding that police can be oppressive, unlike the tech corporations, which are really riding the wave of "oh, we're neutral arbiters of reality."
Jill: "We're just providing tools and the platforms."
Sahar: Yeah, exactly.
Jill: Yeah. So, there's another narrative that I've heard quite often that I wanted to talk about with you. Quite often, when people start talking about all this surveillance, particularly surveillance capitalism, but also just digital online surveillance in general, and talking about being uncomfortable with data mining and surveillance, the response will be something like, "well, just get offline. If you don't like it, just stop. Don't go to Twitter. Don't go to Instagram. Don't use the Internet. Just get offline. You don't need to." In other words, we often have this kind of dichotomy between digital space, or the internet, and what is sometimes, again, I'm using my air quotes for people who can't see, referred to as the "real world." What would you say to this kind of response?
Sahar: Yeah, I've heard this a lot too, and I think there are two folds to it. So first of all, on a macro level, I think that this argument in general is just reproducing the same logic. It's really telling us that it's an individual responsibility to securitize yourself, and essentially saying that we shouldn't question the larger issue of surveillance capitalism: if you have a problem with it, you remove yourself. But there's no conversation here about the public good, or solidarity with other people who live in this society and are experiencing the same surveillance in the larger public. And so, I think in that sense it's, again, treating privacy like a commodity and an individual liability. So that in itself I would not agree with when you apply the sociotechnical lens and, you know, other important lenses like a human rights lens and so on. But then on top of that, as we've talked about, there is no dichotomy between the real world and the internet world, because what you do on the Internet directly feeds into the power dynamics and income inequality and everything that we see in the real world. So, you just removing yourself from it will not change the impacts that are still occurring, right?
Jill: Right.
Sahar: And so, I do think, yeah, it's like an extension of neoliberalism. Like, this is not our problem to deal with; you just privatize all responses to this issue instead of our state and our tech elite actually taking responsibility for addressing these real human rights concerns that we have about surveillance.
Jill: Yeah, I think that's really important. This kind of narrative takes something that is a broad social issue and social problem, and tries to turn it into an individual problem, like this is your problem. Just get offline or, I don't know, secure your Internet connection somehow.
Sahar: Use VPNs.
Jill: Yeah, the idea that you can secure your connection somehow so your data can't get mined, maybe, or just get offline. And of course, when I cancel my Facebook account, first of all, Facebook tells me how sad they will be that I'm leaving, and how they'll keep everything for when I want to come back to it. Cool, cool. But also, it doesn't matter, right? Me leaving does not change anything. And in fact, in many cases, depending on who you are, and how much you may depend on the digital world for family ties, cultural ties, career ties, what have you, this can harm you and it will not harm them.
Sahar: Yeah, that’s another thing, the rest of the world is still on there. So, if you want to be connected, you have to participate.
Jill: Cool. And I also really liked what you said, that there really is no dichotomy between the Internet and the real world. And I think that's right. If my smart phone is sitting on my desk listening to me, it is in the real world.
Sahar: Yeah. And actually, I don't think I talk about it in my chapter, but people usually refer to smart technologies as part of the "Internet of Things", which is just connecting real objects that live in our real world. And so yeah, it's very much become a two-way street: they feed into our lives, and we feed into a technological world, right?
Jill: Yeah. So we’re stuck with AI, I think, in one form or another, for a while. I wonder what a sociotechnical lens might tell us about how we might build, and/or use or relate to, AI technology in less controlling or destructive ways, perhaps in more equitable ways. Is this possible?
Sahar: Yes, I think absolutely it is. I think that the first step, though, is again to take that step back and ask ourselves, which the sociotechnical lens, I think, encourages us to do, what kind of world we actually want to live in and what our values actually are for society. Because I think one of the biggest issues with this whole AI bias situation is that we're just creating technology for technology's sake, and innovating for innovation's sake and for capitalist production. But if we start from a place of: this is the world that we want, and then bring in technology and think, okay, how can we work with AI to produce this reality and this future that we want, I think that intention and that process is really significant, and it could really change the outcome of what these technologies look like. I think that would demand some interdisciplinary work in the tech sector, which we are not seeing very much of right now. And in particular, I think anti-colonial theory and Indigenous scholars give us some really interesting insight into how we can think about these more social-justice oriented ways of relating with AI. And what they say is that we need to think of AI as one nodal point within our larger network of social relations. And so we can't just treat AI as lower on the hierarchy, with all humans up here, because that in itself is still reproducing the same capitalist and colonial relationships that we have been dealing with over time. And so, Indigenous scholars really talk about how, when we're thinking about building a more equitable society, we also need to be equitable and just to AI, and to think about our relationship with AI as part of that building. I know it sounds a bit ridiculous, but we need to think about all of those things.
Jill: Yeah, and that is very different from the narratives that we are getting from tech companies where they’re saying we’re just creating a platform, we’re just creating a tool. Like if we really thought about AI as the foundation for building relationships, as a medium of reciprocal exchange, and as something worthy of consideration and respect, would we use AI in some of the really destructive ways that we do?
Sahar: Yeah, exactly. And just to build on that a bit, I think that part of achieving this kind of equitable and reciprocal relationship with AI is that we also need to build more transparency into what the tech industry is doing in the first place. And we need more digital literacy, so people actually know what we're talking about when we're talking about these technologies and how they operate.
Jill: So that we know what the goals are right now.
Sahar: Yes. Exactly.
Jill: Like, it's all well and good for me to say I know what my goal for future society is. But I should also be able to identify what the goals are, like what Mark Zuckerberg's goals are. When I participate in these platforms, what goals am I helping to achieve?
Sahar: Yeah, exactly. What goals are we helping to achieve, and what are the logics that are informing the ways that these technologies operate? So many tech companies are able to hide their algorithms behind proprietary laws and policies and NDAs. And so, you really just never know how they are working in the first place. So how are we supposed to fix them, and/or rethink them? And so that's where I think legislation and policy and regulation come into play as well. We need some sort of accountability among the tech industry, state parties, and the public. And I think often we forget that we're in a social contract with our government; they're supposed to maintain and uphold some of our human rights. And that's why we give them authority. And so, it's completely their responsibility to start upholding some of those rights, like privacy.
Jill: Call your local MP.
Sahar: Yeah, call your local MP. Do it.
Jill: If you’re concerned about your smart phone listening to you, call your local MP.
Sahar: Yeah. And actually, the Canadian Human Rights Commission just launched a project, in collaboration with the Ontario Human Rights Commission, I think, looking at the ethical implications of AI and the way that it can reproduce discrimination, exactly in line with this project. And so reach out to the Canadian Human Rights Commission and tell them your concerns.
Jill: Amazing, thanks. So now we have an action. Is there anything else you’d like to leave our listeners with regarding smart AI technologies today, Sahar?
Sahar: I would, I think, want to reiterate that my goal with this chapter is not just to make us fearful of technology itself. It's more to take a step back and think critically about who is developing the technology, how and why it's being developed, and with what intentions, goals, and logics. Who is going to be benefited or harmed by these technologies? And so, to your point about calling your MP, I would say that when we start to see the logics and how they can be inequitable, I think that it's important that we resist. If we don't want to promote the capitalist and colonial project, which we can see now that AI and smart technologies are reproducing, then, you know, whether it's through our academic research, through non-profit work, advocacy, governments, or even through the tech companies themselves, I think we need to get comfortable with being uncomfortable. Be disruptors, have these conversations, and hold people accountable, because technology is developing at such a fast rate right now that if we do not incorporate those checks and balances, we're just going to see our social, economic, and environmental crises of the day worsen exponentially, I think. And that's not to leave us on a depressing note here, because I do think that we can achieve change through movement-building, advocacy, and research, and we can hold our governments accountable. We just have to show them that we care.
Jill: Yeah, I think that people working together can be very powerful. But first we have to stop thinking of these platforms as public goods that are neutral and objective.
Sahar: We need to encourage the critical thinking. And one or two pieces, I guess, that I didn't include in my sociotechnical lens, but that I think will be very relevant moving forward, is to add an environmental lens to this, because there are also huge environmental implications to this constant recycling, manufacturing, consumerist way of cycling through technologies. And then also, on the back end, to storing all of that data. It takes an incredible amount of power to run the data hubs that store it.
Jill: Huge environmental impact.
Sahar: Exactly. So, I think it’ll be also important for folks to start adding that lens to the analysis.
Jill: This episode of Gender, Sex and Tech continued a conversation begun in chapter nine of the book Gender, Sex and Tech: An Intersectional Feminist Guide. The chapter, called "Artificial Unintelligence: How 'Smart' and AI Technologies Perpetuate Bias and Systemic Discrimination", was written by Sahar Raza. I would like to thank Sahar for joining me today for this important and engaging discussion. And thank you, listener, for joining me for another episode of Gender, Sex and Tech: Continuing the Conversation. If you would like to continue the conversation further, please reach out on Twitter @tech_gender, or consider creating your own material to continue the conversation in your own voice. Music provided by Epidemic Sound. This podcast is created by me, Jennifer Jill Fellows, with support from Douglas College in New Westminster, BC, and support from the Marc Sanders Foundation for Public Philosophy. Until next time, bye!