Posts in Bias
Studies show facial recognition software almost works perfectly – if you’re a white male - Global News

Recent studies indicate that the face recognition technology used in consumer devices can discriminate based on gender and race.

A new study out of the MIT Media Lab indicates that when certain face recognition products are shown photos of a white man, the software can correctly guess the gender of the person 99 per cent of the time. However, the study found that for subjects with darker skin, the software's error rate climbed to roughly 35 per cent.

As part of the Gender Shades project, 1,270 photos of individuals from three African countries and three European countries were evaluated with artificial intelligence (AI) products from IBM, Microsoft and Face++. The photos were further classified by gender and by skin colour before being tested on these products.

The study notes that while each company appears to have a relatively high overall accuracy rate, between 87 and 94 per cent, there were noticeable differences in which groups of images were misidentified.
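
To make that gap concrete, here is a minimal sketch of the kind of disaggregated evaluation the study performed. This is not the Gender Shades code, and the sample counts below are invented to mirror the reported pattern: overall accuracy can look high while one subgroup absorbs most of the errors.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (predicted_gender, true_gender, group) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for predicted, actual, group in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented counts echoing the reported pattern: near-perfect accuracy
# for lighter-skinned men, far higher error rates for darker-skinned women.
sample = (
    [("male", "male", "lighter-skinned male")] * 99
    + [("female", "male", "lighter-skinned male")] * 1
    + [("female", "female", "darker-skinned female")] * 65
    + [("male", "female", "darker-skinned female")] * 35
)
for group, rate in error_rates_by_group(sample).items():
    print(f"{group}: {rate:.0%} error rate")  # 1% vs. 35%
```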

Full article:

https://globalnews.ca/news/4019123/facial-recognition-software-work-white-male-report/

Read More
Kriti Sharma: making artificial intelligence more ethical - Business au Feminin

Vice-President of Bots and Artificial Intelligence at Sage, Kriti Sharma is a pioneer in developing intelligent machines that can operate and respond like human beings in order to simplify companies' administrative tasks. She is also the creator of Pegg, the world's first accounting chatbot, which launches commercially in France in 2018 and has already been adopted in 135 countries.

Artificial intelligence is one of the greatest revolutions of our time, and one that could threaten human agency and human jobs. What is your view?

Kriti Sharma: Artificial intelligence is like any other major technological revolution; it will have positive as well as negative implications. Now we have to make sure it is used for good purposes. For small businesses that do not have large technology teams, for example, artificial intelligence can help automate a number of processes.

Moreover, technology is attracting an increasingly diverse workforce, which was not the case before. Artificial intelligence can also automate itself. Building software used to take time; now AI is beginning to write its own code. It can, to a certain extent, automate the work of the software engineer. So we now need people with creative skills: no longer just engineers, but a combination of arts and science profiles. In other words, you do not need to be an engineer or a data scientist with a master's degree to work in artificial intelligence.

In "The End of the Professions", David Susskind discusses professions, such as lawyers, that will be affected by automation and artificial intelligence. Don't you think this will increase inequalities on a global scale?

Read More
DeepMind's Mustafa Suleyman: In 2018, AI will gain a moral compass - Wired

Humanity faces a wide range of challenges that are characterised by extreme complexity, from climate change to feeding and providing healthcare for an ever-expanding global population. Left unchecked, these phenomena have the potential to cause devastation on a previously untold scale. Fortunately, developments in AI could play an innovative role in helping us address these problems.

At the same time, the successful integration of AI technologies into our social and economic world creates its own challenges. They could either help overcome economic inequality or they could worsen it if the benefits are not distributed widely. They could shine a light on damaging human biases and help society address them, or entrench patterns of discrimination and perpetuate them. Getting things right requires serious research into the social consequences of AI and the creation of partnerships to ensure it works for the public good.

Read More
An investigation into the heart of artificial intelligence, its promises and its perils - Le Monde

Are human beings threatened by technology? Could machines come to dominate us? Our special report sorts fantasy from reality.

Artificial intelligence (AI) is in fashion. In Le Monde and on Lemonde.fr alone, the subject came up in 200 articles in 2017, almost 15% more than in 2016. It has been discussed in every domain: in economics, in science, and even in politics, since Prime Minister Edouard Philippe entrusted a mission on the question to the mathematician and MP (LRM) Cédric Villani, whose conclusions are expected in January.

It remains to be seen what this term actually covers. There are, of course, the spectacular breakthroughs showing that machines now surpass humans at specific tasks. In health care, they spot melanomas or breast tumours on medical images better than doctors do. In transport, they cause fewer accidents than human drivers. Not to mention the other advances: speech recognition, game playing (poker, Go), writing, painting and music. Behind the scenes of this singular world, the digital giants (Google, Facebook, Amazon, Microsoft, IBM, Baidu...) and start-ups eager to steal their spotlight are hard at work.

Read More
AI reveals, injects gender bias in the workplace - BenefitsPro

While lots of people worry about artificial intelligence becoming aware of itself, then running amok and taking over the world, others are using it to uncover gender bias in the workplace. And that’s more than a little ironic, since AI actually injects not just gender, but racial bias into its data—and that has real-world consequences.

A Fox News report highlights AI research that reveals workplace bias, conducted by Boston-based Palatine Analytics. The firm, which studies workplace issues, “analyzed a trove of data—including employee feedback and surveys, gender and salary information and one-on-one check-ins between managers and employees—using the power of artificial intelligence.”

Read More
The world is relying on a flawed psychological test to fight racism - Quartz Media

In 1998, the incoming freshman class at Yale University was shown a psychological test that claimed to reveal and measure unconscious racism. The implications were intensely personal. Even students who insisted they were egalitarian were found to have unconscious prejudices (or “implicit bias” in psychological lingo) that made them behave in small, but cumulatively significant, discriminatory ways. Mahzarin Banaji, one of the psychologists who designed the test and who led the discussion with Yale’s freshmen, remembers the tumult it caused. “It was mayhem,” she wrote in a recent email to Quartz. “They were confused, they were irritated, they were thoughtful and challenged, and they formed groups to discuss it.”

Finally, psychologists had found a way to crack open people’s unconscious, racist minds. This apparently incredible insight has taken the test in question, the Implicit Association Test (IAT), from Yale’s freshmen to millions of people worldwide. Referencing the role of implicit bias in perpetuating the gender pay gap or racist police shootings is widely considered woke, while IAT-focused diversity training is now a litmus test for whether an organization is progressive.

This acclaimed and hugely influential test, though, has repeatedly fallen short of basic scientific standards.

Full article: https://qz.com/1144504/the-world-is-relying-on-a-flawed-psychological-test-to-fight-racism/

Read More
Unconscious Bias Training Isn't the Silver Bullet For a Biased Hiring Process - Elevate Blog

The latest fashion trend with most of my clients is Unconscious Bias Training. While those trainings are interesting and engaging, and may raise awareness about various biases, there's little evidence of their effectiveness in eliminating those biases. This is well explained in Diversity and Inclusion specialist Lisa Kepinski's article, Unconscious Bias Awareness Training is Hot, But the Outcome is Not: So What to Do About It?

Lisa outlines two problems with these trainings:

  1. The "So What?" effect: having done the training, leaders and HR professionals alike remain at a loss for the next steps that could deliver sustainable cultural change, and
  2. The training may backfire by encouraging more biased thinking and behaviors (by conditioning the stereotypes). Moreover, "by hearing that others are biased and it's ‘natural’ to hold stereotypes, we feel less motivated to change biases and stereotypes are strengthened (‘follow the herd’ bias)."
Read More
Artificial intelligence could hardwire sexism into our future. Unless we stop it - WEF Blog

In five years’ time, we might travel to the office in driverless cars, let our fridges order groceries for us and have robots in the classroom. Yet, according to the World Economic Forum’s Global Gender Gap Report 2017, it will take another 100 years before women and men achieve equality in health, education, economics and politics.

What’s more, it's getting worse for economic parity: it will take a staggering 217 years to close the gender gap in the workplace.

How can it be that the world is making great leaps forward in so many areas, especially technology, yet it's falling backwards when it comes to gender equality?

Read More
Microsoft Researcher Details The Real-World Dangers Of Algorithm Bias

However quickly artificial intelligence evolves, however steadfastly it becomes embedded in our lives -- in health, law enforcement, sex, etc. -- it can't outpace the biases of its creators, humans. Microsoft researcher Kate Crawford delivered an incredible keynote speech, titled "The Trouble with Bias", at the Neural Information Processing Systems (NIPS) conference on Tuesday.

Read More
Working for the algorithm: Machines will help employers overcome bias - The Economist

Who is best placed to judge a firm’s workers? In 2018 employees everywhere will increasingly feel the effects of the rise of “talent analytics”, also known as “people analytics”, as they go about their daily work. Having been relatively slow compared with other corporate departments in making use of big data, in 2018 human-resources (HR) folk will become its most enthusiastic proponents—with significant implications for who gets hired, what they are paid and whether they are promoted. Employees will have to get used to being (often unwitting) guinea pigs in frequent HR experiments. And wise ones will think ever more carefully about how they express themselves in e-mails and on digital collaborative-working platforms such as Slack.

One reason is the pressure HR executives will face to make workplaces better for women and minority groups. The limitations of established approaches, such as training and awareness programmes, had caused “diversity fatigue” to set in. But it has become a corporate priority again after shocking headlines in 2017 about sexual discrimination and harassment in Silicon Valley, Hollywood, professional sports and big media firms, which reminded the world that bad corporate culture is a serious business risk. 

Read More
AI tool quantifies power imbalance between female and male characters in Hollywood movies - Technology Breaking News

At first glance, the movie “Frozen” might seem to have two strong female protagonists — Elsa, the elder princess with unruly powers over snow and ice, and her sister, Anna, who spends much of the film on a quest to save their kingdom.

But the two princesses actually exert very different levels of power and control over their own destinies, according to new research from University of Washington computer scientists.

The team used machine-learning-based tools to analyze the language in nearly 800 movie scripts, quantifying how much power and agency those scripts give to individual characters. In their study, recently presented in Denmark at the 2017 Conference on Empirical Methods in Natural Language Processing, the researchers found subtle but widespread gender bias in the way male and female characters are portrayed.
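
The published approach relies on dependency parsing and a lexicon of "connotation frames" for power and agency; the sketch below is a heavily simplified stand-in, not the team's code. The tiny verb lexicon and the pre-extracted (subject, verb) pairs are invented for illustration, but they show the shape of lexicon-based agency scoring.

```python
# Hypothetical mini-lexicon: +1 for agentive verbs, -1 for passive ones.
# The published lexicon is far larger; these entries are invented.
AGENCY_OF_VERB = {
    "decides": 1, "builds": 1, "rescues": 1,
    "waits": -1, "fails": -1, "needs": -1,
}

def agency_score(subject_verb_pairs, character):
    """Average agency of verbs whose grammatical subject is `character`.

    subject_verb_pairs: (subject, verb) tuples extracted upstream,
    e.g. by a dependency parser; that step is elided here.
    """
    scores = [AGENCY_OF_VERB[verb]
              for subject, verb in subject_verb_pairs
              if subject == character and verb in AGENCY_OF_VERB]
    return sum(scores) / len(scores) if scores else 0.0

pairs = [("Elsa", "decides"), ("Elsa", "builds"),
         ("Anna", "fails"), ("Anna", "needs")]
print(agency_score(pairs, "Elsa"))  # 1.0: high agency
print(agency_score(pairs, "Anna"))  # -1.0: low agency
```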

“‘Frozen’ is an interesting example because Elsa really does make her own decisions and is able to drive her own destiny forward, while Anna consistently fails in trying to rescue her sister and often needs the help of a man,” said lead author and Paul G. Allen School of Computer Science & Engineering doctoral student Maarten Sap, whose team also applied the tool to Wikipedia plot summaries of several classic Disney princess movies.

“Anna is actually portrayed with the same low levels of power and agency as Cinderella, which is a movie that came out more than 60 years ago. That’s a pretty sad finding,” Sap said.

Read More
Something really is wrong on the Internet. We should be more worried. - The Washington Post

“Something is wrong on the internet,” declares an essay trending in tech circles. But the issue isn’t Russian ads or Twitter harassers. It’s children’s videos.

The piece, by tech writer James Bridle, was published on the heels of a report from the New York Times that described disquieting problems with the popular YouTube Kids app. Parents have been handing their children an iPad to watch videos of Peppa Pig or Elsa from “Frozen,” only for the supposedly family-friendly platform to offer up some disturbing versions of the same. In clips camouflaged among more benign videos, Peppa drinks bleach instead of naming vegetables. Elsa might appear as a gore-covered zombie or even in a sexually compromising position with Spider-Man.

The phenomenon is alarming, to say the least, and YouTube has said that it’s in the process of implementing new filtering methods. But the source of the problem will remain. In fact, it’s the site’s most important tool — and increasingly, ours.

YouTube suggests search results and “up next” videos using proprietary algorithms: computer programs that, based on a particular set of guidelines and trained on vast sets of user data, determine what content to recommend or to hide from a particular user. They work well enough — the company claims that in the past 30 days, only 0.005 percent of YouTube Kids videos have been flagged as inappropriate. But as these latest reports show, no piece of code is perfect.

Read More
Garbage In, Garbage Out - NEVERTHELESS

One afternoon in Florida in 2014, 18-year-old Brisha Borden was running to pick up her god-sister from school when she spotted an unlocked kid’s bicycle and a silver scooter. Brisha and a friend grabbed the bike and scooter and tried to ride them down the street. Just as the 18-year-old girls were realizing they were too big for the toys, a woman came running after them saying, “That’s my kid’s stuff.” They immediately dropped the stuff and walked away. But it was too late — a neighbor who witnessed the event had already called the police. Brisha and her friend were arrested and charged with burglary and petty theft for the items, valued at a total of $80.

The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store. He had already been convicted of several armed robbery charges and had served five years in prison. Borden, the 18-year-old, had a record too — but for juvenile misdemeanors.

For the full transcript and podcast:

https://medium.com/nevertheless-podcast/transcript-garbage-in-garbage-out-78b74b08f16e

Read More
Understanding Bias in Algorithmic Design - ASME Demand

In 2016, The Seattle Times uncovered an issue with a popular networking site’s search feature. When the investigative reporters entered female names into LinkedIn’s search bar, the site asked if they meant to search for similar-sounding male names instead — “Stephen Williams” instead of “Stephanie Williams,” for example. According to the paper’s reporting, however, the trend didn’t happen in reverse when a user searched for male names.

Within a week of The Seattle Times article’s release, LinkedIn introduced a fix. Spokeswoman Suzi Owens told the paper that the search algorithm had been guided by “relative frequencies of words” from past searches and member profiles, not by gender. Her explanation suggests that LinkedIn’s algorithm was not intentionally biased. Nevertheless, using word frequency — a seemingly objective variable — as a key parameter still generated skewed results. That could be because American men are more likely to have a common name than American women, according to Social Security data. Thus, building a search function based on frequency criteria alone would more likely increase visibility for Stephens than Stephanies.
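
To illustrate the mechanism, here is a toy "did you mean" suggester driven purely by relative query frequency. The counts and the similar-name mapping are invented, and LinkedIn's real system is surely more elaborate, but the sketch shows how frequency alone, with no notion of gender, still steers users toward the more common name.

```python
# Hypothetical query frequencies; the skew, not the numbers, is the point.
QUERY_COUNTS = {"stephen": 9000, "stephanie": 3000}

# Stand-in for real fuzzy name matching.
SIMILAR_NAME = {"stephanie": "stephen", "stephen": "stephanie"}

def suggest_alternative(query, counts, ratio=2.0):
    """Suggest a similar name only if it is searched far more often."""
    alternative = SIMILAR_NAME.get(query)
    if alternative and counts.get(alternative, 0) > ratio * counts.get(query, 0):
        return alternative
    return None

print(suggest_alternative("stephanie", QUERY_COUNTS))  # "stephen"
print(suggest_alternative("stephen", QUERY_COUNTS))    # None: no reverse prompt
```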

Examples like this demonstrate how algorithms can unintentionally reflect and amplify common social biases. Other recent investigations suggest that such incidents are not uncommon. In a more serious case, the investigative news organization ProPublica uncovered a correlation between race and criminal recidivism predictions in so-called “risk assessments” — predictive algorithms that are used by courtrooms to inform terms for bail, sentencing, or parole. The algorithmic predictions for recidivism generated a higher rate of false negatives for white offenders and a higher rate of false positives for black offenders, even though overall error rates were roughly the same.
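
That finding is easy to restate in code: two groups can share the same overall error rate while one absorbs more false positives (wrongly flagged as likely to reoffend) and the other more false negatives (wrongly flagged as unlikely to). The numbers below are invented for illustration and are not ProPublica's data.

```python
def confusion_rates(outcomes):
    """outcomes: list of (predicted_high_risk, actually_reoffended) booleans."""
    false_pos = sum(1 for pred, actual in outcomes if pred and not actual)
    false_neg = sum(1 for pred, actual in outcomes if not pred and actual)
    negatives = sum(1 for _, actual in outcomes if not actual)
    positives = sum(1 for _, actual in outcomes if actual)
    return false_pos / negatives, false_neg / positives

# Invented groups with identical overall error (30%) but mirrored rates.
group_a = ([(True, False)] * 40 + [(False, False)] * 60
           + [(True, True)] * 80 + [(False, True)] * 20)
group_b = ([(True, False)] * 20 + [(False, False)] * 80
           + [(True, True)] * 60 + [(False, True)] * 40)

for name, group in (("group A", group_a), ("group B", group_b)):
    fpr, fnr = confusion_rates(group)
    overall = sum(1 for pred, actual in group if pred != actual) / len(group)
    print(f"{name}: FPR {fpr:.0%}, FNR {fnr:.0%}, overall error {overall:.0%}")
```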

Read More
Why AI Is Still Waiting for Its Ethics Transplant - WIRED

There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results.

The report, released two weeks ago, is the brainchild of Kate Crawford and Meredith Whittaker, cofounders of AI Now, a new research institute based out of New York University. Crawford, Whittaker, and their collaborators lay out a research agenda and a policy roadmap in a dense but approachable 35 pages. Their conclusion doesn’t waffle: Our efforts to hold AI to ethical standards to date, they say, have been a flop.

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI,” they write. When tech giants build AI products, too often “user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles…” Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life. Is there anything we can do? Crawford sat down with us this week for a discussion of why ethics in AI is still a mess, and what practical steps might change the picture.

Read More
A Study Used Sensors to Show That Men and Women Are Treated Differently at Work - HBR

Gender equality remains frustratingly elusive. Women are underrepresented in the C-suite, receive lower salaries, and are less likely to receive a critical first promotion to manager than men. Numerous causes have been suggested, but one argument that persists points to differences in men's and women's behavior.

Which raises the question: Do women and men act all that differently? We realized that there's little to no concrete data on women's behavior in the office. Previous work has relied on surveys and self-reported assessments — methods of data collection that are prone to bias. Fortunately, the proliferation of digital communication data and the advancement of sensor technology have enabled us to measure workplace behavior more precisely.

We decided to investigate whether gender differences in behavior drive gender differences in outcomes at one of our client organizations, a large multinational firm, where women were underrepresented in upper management. In this company, women made up roughly 35%–40% of the entry-level workforce but a smaller percentage at each subsequent level. Women made up only 20% of people at the two highest seniority levels at this organization.

Read More
Your Data Are Probably Biased and That's Becoming a Massive Problem: Beware of black boxes - INC

Nobody sets out to be biased, but bias is harder to avoid than you might think. Wikipedia lists over 100 documented biases, from authority bias and confirmation bias to the Semmelweis effect; we have an enormous tendency to let things other than the facts affect our judgments. We are all, as much as we hate to admit it, vulnerable.

Machines, even virtual ones, have biases too. They are designed, necessarily, to favor some kinds of data over others. Unfortunately, we rarely question the judgments of mathematical models and, in many cases, their biases can pervade and distort operational reality, creating unintended consequences that are hard to undo.

Yet the biggest problem with data bias is that we are mostly unaware of it, because we assume that data and analytics are objective. That's almost never the case. Our machines are, for better or worse, extensions of ourselves and inherit our subjective judgments. As data and analytics increasingly become a core component of our decision making, we need to be far more careful.

Read More
Taking control of your unconscious bias? - Guardian/HSBC

With attention now a scarce resource, we increasingly rely on algorithms to help us navigate the world. Only now are we beginning to experience the side-effects of these filter bubbles as our ability to see and understand the bigger picture is eroding.

Part 1: Six key unconscious biases when making decisions

Dr Norma Montague cites five key unconscious biases to be aware of when making decisions. We’ve added a sixth for good measure.

Full article: https://www.theguardian.com/hsbc-fuel-the-ambition-series/2017/aug/29/taking-control-of-your-unconscious-bias

Read More
Are algorithms making us W.E.I.R.D.? - alphr

Western, educated, industrialised, rich and democratic (WEIRD) norms are distorting the cultural perspective of new technologies

From shaping our internet search results to deciding how we manage our investments, travel routes and love lives, algorithms have become a ubiquitous part of our society. Algorithms are not just an online phenomenon: they are having an ever-increasing impact on the real world. Children are being born to couples who were matched by dating site algorithms, whilst the navigation systems for driverless cars are poised to transform our roads.

Read More
Biases in Algorithms - Cornell University Blog

http://www.pewinternet.org/2017/02/08/theme-4-biases-exist-in-algorithmically-organized-systems/

In class we have recently discussed how the search algorithm for Google works. From the very basic material that we learned about the algorithm, it seems like the algorithm is resistant to failure due to its very systematic way of organizing websites. However, after considering how it works, is it possible that the algorithm is flawed? More specifically, how so from a social perspective?

Well, as it turns out, many algorithms are indeed flawed, including the search algorithm. The reason is that algorithms are ultimately coded by individuals who inherently have biases. And although there continues to be a push for the promotion of people of color in STEM fields, the reality at the moment is that the majority of people in charge of designing algorithms are white males.

Read More