Garbage In, Garbage Out - NEVERTHELESS

One afternoon in Florida in 2014, 18-year-old Brisha Borden was running to pick up her god-sister from school when she spotted an unlocked kid’s bicycle and a silver scooter. Brisha and a friend grabbed the bike and scooter and tried to ride them down the street. Just as the 18-year-old girls were realizing they were too big for the toys, a woman came running after them saying, “That’s my kid’s stuff.” They immediately dropped the stuff and walked away. But it was too late — a neighbor who witnessed the event had already called the police. Brisha and her friend were arrested and charged with burglary and petty theft for the items, valued at a total of $80.

The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store. He had already been convicted on several armed robbery charges and had served five years in prison. Borden, the 18-year-old, had a record too — but for juvenile misdemeanors.

Something weird happened when the two defendants were booked into jail: a computer program spat out a score predicting the likelihood of each committing a future crime. Borden, the girl who picked up the bike — who is black — was rated a high risk. Prater — who is white — was rated a low risk.

Three years later, we know the computer algorithm got it exactly the wrong way round. Brisha Borden has not been charged with any new crimes. Vernon Prater is serving an eight-year prison term for breaking into a warehouse and stealing thousands of dollars’ worth of electronics.

These risk assessments, done by an algorithm, are common in American courtrooms. They inform decisions about who gets set free and who is punished at every stage of the criminal justice system. But a ProPublica investigation looked into the accuracy of these risk assessments in a county in Florida and found that the algorithm was only somewhat more accurate than a coin flip. Not only that, but the algorithm also showed significant racial bias in its outcomes: it wrongly predicted black defendants as future criminals at almost twice the rate of white defendants.

It’s generally the most vulnerable in society who are exposed to evaluation by automated systems in the first place. It doesn’t help if the systems themselves are often biased against them too. So why do big institutions rely on hallowed algorithms to make decisions for them?

“This is Nevertheless, a podcast celebrating the women transforming teaching and learning through technology. Supported by Pearson.”

Algorithms were, in part, precisely meant to remove bias from the equation. The kind of discrimination that Dame Stephanie Shirley, a pioneer in tech, suffered from when she was sending out her CV in the 1950s…


Dr Sue Black

Sue: She found that when she was sending out her CV to get jobs she was not getting any interviews. So, chatting to her husband, he said, ‘Why don’t you try sending in your CV with the name Steve Shirley to see if it makes any difference’. So she did that and found that she did get called in for interviews when people thought that she was a guy.

That’s Dr. Sue Black, OBE, a British computer scientist and academic, author of Saving Bletchley Park.

Sue: She went on to have a baby and I think in those days you had to stop working if you became a mum. So she stopped working for the company that she was working for and decided, once she’d had the baby, to set up her own business. So she set up a consultancy. But what happened was she got lots of work, too much work for herself. So she turned to other women with babies, working from home as programmers. An example of the kind of software that they were producing is the Concorde black box flight recorder. She grew what was a consultancy into a massive software house called ‘F International’ which after a few years employed 300-something women. Why haven’t I grown up knowing her name? Really? You know, why isn’t she one of the most famous people in the UK? I really don’t know, when she’s done stuff which is similar to any of the tech pioneers that we do know about, most of whom are men.

But it wasn’t always like that.

Sue: It started off with women in computing, then it’s moved gradually to where we are today, where it’s mainly 15–20% women. I think back to the 1940s, when it wasn’t all coding but it was 80% women working at Bletchley Park as part of the code-breaking process.

Bletchley Park in Britain is where the Nazi codes were deciphered during WWII. Although it was men who were building the hardware and machinery, women were doing the maths. In the first half of the 20th century, it was not uncommon for women to study math at university. They mostly went into teaching, but by WWII they were helping with the war effort. Similarly in the US, it was women who worked as human computers at NACA, the predecessor of NASA, during WWII.

Sue: Women were thought to be much better at attention to detail… seen as being really useful for the code-breaking process. We’ve gone from having women as computers to women writing code in the 1960s and 70s. And then it seems, around the 1980s when personal computers came in, the way personal computers were advertised was that they were for men. And if you look at articles from the time you can see photos from advertisements of the day which show a boy sitting at a computer and his dad behind him showing him what to do with it.

In the mid-90s Sue herself was doing a PhD in software engineering and she started going to conferences.

Sue: It was 95% men at conferences and if I wanted to get my paper published it was good to network… The first conference I went to I decided to speak to one person… I chatted to this guy for 10 minutes and then for the rest of the conference he was staring at me. I thought I’d offended him. Ten years later I realised he probably thought I fancied him. But I had no clue at the time, I just thought I’d done something awfully wrong to upset this speaker.

Then Sue went to a Women in Science conference where there were mostly women.

Sue: I went there thinking I don’t really like conferences but I’m going anyway. I had the best time. I realised that if you’re in the majority life is so much easier. Going to that conference changed my life and made me realise it’s not me that’s rubbish at talking to people, it’s the environment that’s not conducive to me sharing my research.

As a result, Sue set up the first online network for women in tech in the UK. This, of course, came about from her very personal experience of being in the minority.

Sue: But if you’ve not had that experience, how are you going to figure out what to do about it? If I’d been a male student, I wouldn’t have had any problems, so how would I have then known that there was an issue and what to do about it?

That’s exactly the crux of the problem. It’s usually white men doing the hiring in tech companies. Even if they don’t have explicit bias, there’s always the niggly worm of implicit human bias. There’s no way of easily overriding that. What many companies have done is try to remove some subjectivity in the hiring process by passing the task off to machines, to algorithms and AI — which are seen as more objective. So people called Stephanie don’t have to change their name to Steve to stand a better chance. The intentions are good. Problem is, algorithms are built by humans.

Sue: Even in specifying what you want something to do, you can have bias going in there, and then bias going in at the design stage. By the time you get to the implementation you can have a very biased piece of software.

There was one particular video that went viral which showed exactly how prejudiced and malfunctioning software can be.

Sue: There was a hand dryer and a black guy was putting his hand under the hand dryer and it wasn’t going on. So he put a piece of white toilet paper on his hand and put it under the hand dryer and it goes on. So basically it only works for white people. And that’s exactly it. So there was no diversity of thought in that. How would you manage to do that? Obviously because you’ve never tried it out on a black person.

That wouldn’t be the first time software was explicitly racist. A couple of years ago the Google Photos app tagged black people as gorillas. I mean, what kind of sample images did they use during the software development to make this possible? This isn’t a one-off. It’s deeply rooted: way back when people used film cameras, Kodak calibrated its film in ways that favored white skin tones.

There’s also in-built gender bias. A year ago, if you searched for a female contact on LinkedIn (the professional networking site), say Stephanie Shirley, the website would ask if you meant Steve Shirley.

Really, it’s no surprise that Microsoft’s teen chat bot Tay, designed to mimic the speech of an average American girl, turned into an absolute unmitigated jerk. Hours into its grand debut, Tay was echoing Donald Trump’s stance on immigration, saying Hitler was right, and agreeing that 9/11 was probably an inside job. Microsoft invited users to be part of the process of helping it learn. You can imagine what happened next. Garbage in, garbage out.

Joanna: No magic algorithm automatically generates a robot. Every AI system is a very complicated system of design that humans have intended; it’s not something that is just coming out of outer space or evolving out of the soil.

That’s Dr. Joanna Bryson, a professor at the University of Bath and a fellow at Princeton. She researches intelligence, both artificial and human. In a recent study she tried to empirically demonstrate that when you put words into an AI system, it absorbs all kinds of meaning outside the bare-bones dictionary definition — including cultural prejudice.


Joanna Bryson

Joanna: So what is going on with artificial intelligence, especially in the last few years is that we’ve got really good at taking all the stuff that humans have already learned and uploading it into computers using things we call machine learning.

The amazing thing is that we can upload even semantics. Semantics is what words mean. So if you say ‘I’m going to go home and feed the dog and feed the cat’, then someone who didn’t know anything, like a computer which had just been built, could guess that cat and dog are sort of similar. And actually the fact that you walk one and not the other, that’s a little different. Right. And so you can just use the words around the words, without knowing what any of the words mean, to get a structure of how the words relate to each other. And then if you learn just a couple of things about the world, that’s called grounding. All right. And so then you can get what all the other words mean, because you already figured out how they related to each other anyway. So that was my theory of semantics. And we decided to test it, and we tested it by looking at a really basic psychological study called the Implicit Association Test.

So this is something kind of famous and actually controversial, because it shows that at an implicit level, so not consciously, not something you choose to do, but at the implicit level, you tend to associate, for example, women with the home and men with careers. It’s not that you don’t associate women with careers. It’s that it’s easier to associate women with the home and men with a career than to associate men with the home and women with a career. These implicit biases have already been proved by psychologists.

What Joanna was trying to do was see if the AI would react the same way that humans would. Would the AI pick up on this implicit human bias? She selected sets of words to represent men, women, home and work that psychology studies show humans associate with each other (they measure this by how quickly you associate certain words with certain things) and fed them into the computer. And guess what?

Joanna: Every single form of bias that the psychologists had found in humans, we also found in the computer.

If people are sexist and racist…computers will be too. She also studied who gets invited to interviews. The humans in the implicit psychology tests discriminated against African American names. So did the computers.
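To give a rough sense of how an association test like this can be run on word embeddings, here is a minimal sketch in the spirit of the WEAT measure from Joanna’s study with Caliskan and Narayanan. It is an illustration under assumptions, not the study’s actual code: the tiny vectors and the helper names (cosine, association, weat_effect) are invented for this example, and a real test would use pre-trained embeddings such as word2vec or GloVe.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """How much more strongly word w associates with attribute set A than with B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect(X, Y, A, B):
    """Effect size: positive when target set X leans toward A and Y leans toward B."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Hypothetical toy vectors standing in for real, high-dimensional embeddings.
emb = {
    "she":    np.array([0.9, 0.1, 0.0]),
    "he":     np.array([0.1, 0.9, 0.0]),
    "home":   np.array([0.8, 0.2, 0.1]),
    "family": np.array([0.7, 0.3, 0.2]),
    "career": np.array([0.2, 0.8, 0.1]),
    "salary": np.array([0.3, 0.7, 0.2]),
}

female_terms = [emb["she"]]
male_terms   = [emb["he"]]
home_words   = [emb["home"], emb["family"]]
work_words   = [emb["career"], emb["salary"]]

# A positive score means "home" words sit closer to the female terms and
# "work" words closer to the male terms in this made-up embedding space.
print(weat_effect(home_words, work_words, female_terms, male_terms))
```

Run against real embeddings trained on ordinary web text, scores like this come out reliably non-zero for the gender, race and flower/insect word sets the psychologists had used, which is the result Joanna describes.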

Joanna: One of the things that the psychologists tried, just to make sure, was a less controversial concept: insects and flowers. So which is more pleasant or more unpleasant between insects and flowers? And we also got that. And so to me that’s astonishing, that you could have a computer know that the flowers are pleasant, you know, because it’s a computer with no experience of the real world. And yet we have this visceral, it’s literally visceral, sense that you like flowers and you don’t like insects. So all these attitudes that we have must come from our embodiment. We have, you know, there’s good physical reasons for liking flowers and disliking insects, right.

But it’s been captured and we can communicate it between each other, and people sort of suspected this before, because you can talk to someone who’s blind and, if you’re talking like across a computer, you couldn’t tell they were blind. Right. They’ll still use visual metaphors. They’ll say ‘I’ll look into it’, ‘Yeah, see you later’, you know, they’ll say things like that. And of course they would, right. But how do they use those metaphors? Because our whole culture tells them what it’s like to have vision. And so this is saying that our whole culture is telling us, unfortunately, that there’s something unpleasant about being African-American, and that’s not something that we were, of course, happy to find out. And so then the headlines were ‘computers are racist’.

But how does AI learn implicit bias, and not just the dictionary meaning of the words?

Joanna: The point of all this is that if we expose our machine learning system to what humans are exposed to, then it gets the same kind of prejudices, or I should say stereotypes, that humans get, and actually most of those are accurate. And so this actually gives a different understanding of where the stereotypes and the prejudices are coming from, because we always assumed that, you know, evil people taught their kids to be evil or something. But at this point we’re saying, well, maybe all these things that we call stereotypes are based on something about our past that we’ve consciously decided we don’t want to be true anymore.

It isn’t that machine learning is making it worse than it already is. One of the really important things to realise is that kids just reading the newspapers would pick up the same biases. It’s not that you can just protect your kid from seeing these biases if you’re letting your kid read. Right. Artificial intelligence gives us the opportunity to examine these things happening in our culture already. Now the question is: does it also give us an opportunity to engineer it?

So let’s say you wanted to remove all of these biases from data. Let’s say you wanted to correct your test results to eliminate sexism. Sexism is only one ism. There are multiple different ways that prejudices intersect.

Joanna: So you’re talking about engineering society at that scale, and that’s something I find really worrying. And I’m not sure I want the companies to decide how to engineer society. So I think it’s better for us to think about how we can create the training data we want to live with and how we improve society. As long as we keep working on our culture and we keep updating the AI so it keeps up to date with contemporary culture, that makes a lot more sense than leaving it up to companies.

But there are things that tech companies can do and are trying to do to diversify the culture and make sure human bias doesn’t get into the actual tech products. Here’s Sue Black again:

Sue: Whoever you’ve got designing the AI system, if there’s no diversity in the team, then you’re going to get a product that is tailored specifically only for the mindset of the people that have designed and built it. So you could have gender bias, all sorts of biases in your system. People are writing the code, people are doing the design, people are biased. You need diversity and inclusion to be considered very seriously when you’re designing and building any software system.

The whole diversity and inclusion piece I think is hard if you haven’t been a part of that. I guess because tech is mainly white males, then if you’ve never experienced any of that, how are you going to know, first of all, that it exists? OK, so we’re hearing more and more now, which is great — women speaking out about what’s happening to them. So then taking that experience and figuring out exactly what to do about it is not a straightforward thing, but at the same time we need to do it.

This is something Chuin Phang, global advisor for diversity and inclusion at Pearson, is acutely aware of.

Chuin: Once someone’s hired in, what are they doing to reflect our customers? Do all our teams work and think the same? What are we doing to ensure our products are accessible to all learners, for example? Diversity can’t just be this extra-curricular thing in people’s heads. We see it as a necessary part of the product development cycles and teams. That’s what we’re looking at as well.

Then there are the pure economics to consider. More diverse companies tend to be more profitable: according to McKinsey, gender-diverse companies are 15% more likely to outperform homogeneous companies, and ethnically diverse companies are 35% more likely.

And if someone isn’t happy at a company and leaves…

Chuin: Based on recent studies, it costs the company on average 20% of that person’s salary to replace them. From a bottom line standpoint you can look at the cost involved when teams are less diverse. You can also look at it in terms of how you develop products and services that meet the needs of the growing diversity in your marketplace.

— —

Jan: So our team has people with disabilities on it, and I think that is the greatest value that our team offers to product teams across Pearson. We’re able to show them that a person who is blind or visually impaired uses special tools, but they’re just like anybody else.

Jan McSorley is the VP of Accessibility at Pearson. Bias (be it gender, race, ability) is all about who gets opportunities and access.

Jan: It helps product teams understand that the way we design our products can either open a door or close it. And once we as a company understand that, when we design our products with the needs of people with disabilities, with diverse needs, in mind from the beginning, not only are we helping those people with disabilities reach their goals and objectives in life, but we’re also enhancing our own products. We’re making our products better, because accessible design is just better design.

OK, so it’s a widely held truth that diversity makes companies and software better, in many different ways. The big challenge is actually implementing this truth — making the workplace diverse. Hiring practices are the obvious first place to start. Carissa Romero works at Paradigm, where she applies behavioural science and data to help companies become more diverse.


Carissa Romero

Carissa: I think companies often find it challenging to figure out, well, how do I know if I have an inclusive culture? And so what we did is we developed an inclusion survey by looking at things in behavioral science research that we know are important for success, for all people but particularly for people from underrepresented groups, and areas where we know there are likely to be differences based on past research.

So this includes things like a sense of belonging. Do you just generally feel like you belong in the organization? This includes things like fairness. Do you feel like promotion decisions are fair? Do you feel like opportunities are equally distributed? Things like voice. Do you feel comfortable speaking up? When you do speak up, do you feel heard? So companies can measure these things, and then they can also ask people lots of questions about their demographic background.

So you can then break down the results by gender, race, ethnicity, age, parental status and see: are there groups that are having a different experience, who are perceiving the culture differently here? And once you know which groups those are, you can start to address those challenges.

One challenge the tech industry faces is this culture of genius. You know, the tech wizard or the rockstar programmer.

Carissa: This belief that only certain people can be superstar performers, or this idea that some people have a technical mind. We know that when companies have more of a fixed mindset culture, or when people have more of a fixed mindset, they’re actually more likely to rely on stereotypes, which makes sense, because stereotypes are essentially beliefs about groups’ fixed ability. So if you buy into that idea of fixed abilities, you’re going to be a lot more likely to rely on stereotypes.

The industry ends up a lot more likely to rely on these stereotypes, and these prejudices filter down right from the hiring process into the software itself that tech companies build. Bias in algorithms used in the criminal justice system has pretty high stakes: it could determine whether you stay free or have your basic freedoms taken away. Other software biases, like the case of search engines trying to correct female names to male ones, might not significantly affect one person’s life, but they are no less insidious. They reveal something ugly at the core of our culture, and it’s kids who absorb and mimic our culture the most. If there is bias built into the educational technology that they use, whether that’s around gender or race or ability, the prejudices in our society will just continue repeating themselves in an endless cycle. In turn, slowly changing the culture may change the algorithms.

CREDITS:

Nevertheless is a Storythings production — produced by Dasha Lisitsina, research and editing by Anjali Ramachandran, music and sound design by Jason Oberholtzer, supported by Pearson publishing, presented by me, Leigh Alexander.