Delivering through diversity - McKinsey

Our latest research reinforces the link between diversity and company financial performance—and suggests how organizations can craft better inclusion strategies for a competitive edge.

Awareness of the business case for inclusion and diversity is on the rise. While social justice typically is the initial impetus behind these efforts, companies have increasingly begun to regard inclusion and diversity as a source of competitive advantage, and specifically as a key enabler of growth. Yet progress on diversification initiatives has been slow. And companies are still uncertain about how they can most effectively use diversity and inclusion to support their growth and value-creation goals.

Read More
Studies show facial recognition software almost works perfectly – if you’re a white male - Global News

Recent studies indicate that the face recognition technology used in consumer devices can discriminate based on gender and race.

A new study out of the MIT Media Lab indicates that when certain face recognition products are shown photos of a white man, the software can correctly guess the gender of the person 99 per cent of the time. However, the study found that for subjects with darker skin, the software made more than 35 per cent more mistakes.

As part of the Gender Shades project, 1,270 photos of individuals from three African countries and three European countries were selected and evaluated against artificial intelligence (AI) products from IBM, Microsoft and Face++. The photos were further classified by gender and by skin colour before being tested on these products.

The study notes that while each company appears to have a relatively high overall rate of accuracy, between 87 and 94 per cent, there were noticeable differences in the misidentified images across different groups.
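The aggregate-versus-subgroup gap the study describes is easy to reproduce on toy data. The sketch below uses invented counts, not the Gender Shades data, to show how a classifier can look accurate overall while failing one group far more often:

```python
# Illustrative only: toy predictions, not the Gender Shades data.
# Computes the overall error rate and the per-group error rate, showing how a
# high aggregate score can hide large gaps between subgroups.

def error_rates(records):
    """records: list of (group, correct: bool). Returns per-group error rates."""
    totals, errors = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Hypothetical sample: 100 lighter-skinned subjects with 1 error,
# 100 darker-skinned subjects with 20 errors.
sample = [("lighter", True)] * 99 + [("lighter", False)] * 1 \
       + [("darker", True)] * 80 + [("darker", False)] * 20

rates = error_rates(sample)
overall = sum(1 for _, c in sample if not c) / len(sample)
print(f"overall error: {overall:.1%}")   # 10.5% - looks modest in aggregate
for g, r in sorted(rates.items()):
    print(f"{g}: {r:.1%}")               # darker: 20.0%, lighter: 1.0%
```

The single headline accuracy figure is arithmetically correct but hides a twenty-fold difference in error rates between the two groups.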

Full article:

https://globalnews.ca/news/4019123/facial-recognition-software-work-white-male-report/

Read More
Technology will widen pay gap and hit women hardest – Davos report- Guardian

Research into jobs finds men’s dominance in IT and biotech is reversing trend towards equality

The gulf between men and women at work – in both pay and status – is likely to widen unless action is taken to tackle inequality in high-growth sectors such as technology, say researchers at this week’s World Economic Forum summit in Davos.

A new WEF report on the future of jobs finds the dominance of men in industries such as information and biotechnology, coupled with the enduring failure of women to rise to the top even in the health and education sectors, is helping to reverse gender equality after years of improvements.

Read More
Kriti Sharma: making artificial intelligence more ethical - Business au Feminin

Vice-President of Bots and Artificial Intelligence at Sage, Kriti Sharma is a pioneer in the development of intelligent machines capable of operating and reacting like human beings in order to simplify companies' administrative tasks. She is also the creator of Pegg, the world's first accounting chatbot, now adopted in 135 countries and due to launch in France in 2018.

Artificial intelligence is one of the great revolutions of our time, one that could threaten human agency and human work. What is your view?

Kriti Sharma: Artificial intelligence is like any other major technological revolution: it will have positive implications as well as negative ones. What matters now is making sure it is used for good ends. For small businesses that do not have large technology teams, for example, artificial intelligence can help automate a number of processes.

Moreover, technology is attracting an increasingly diverse workforce, which was not the case before. Artificial intelligence can also automate itself: building software used to take time, but AI is now starting to write its own code and can, to some extent, automate the work of the software engineer. So we now need people with creative skills, no longer only engineers but a combination of arts and science profiles. In other words, you do not need to be an engineer or a data scientist with a master's degree to work in artificial intelligence.

In "The End of the Professions", David Susskind discusses professions, such as lawyers, that will be affected by automation and artificial intelligence. Don't you think this will increase inequality on a global scale?

Read More
How Robots Could Make the Gender Pay Gap Even Worse - Fortune

A new report published Thursday suggests that robots could make the gender pay gap even worse, stoking existing fears and uncertainty around the concept of automation.

In a paper titled “Managing automation: Employment, inequality and ethics in the digital age,” the Institute for Public Policy Research argued that a greater share of jobs that women hold—46.8% versus 40.9% for men—have the technical potential to be automated, since female workers are more likely to hold low-skill “automatable” occupations. Paired with women’s underrepresentation in high-skill occupations that may be complemented by technology, that means automation could exacerbate gender inequality.

“Automation,” IPPR says, “is more likely to accelerate inequalities of wealth and income than create a future of mass joblessness.”

Initially, IPPR says, automation could narrow the gender pay gap since it would displace women from jobs that tend to earn below-average pay. (According to the latest OECD data, the gender wage gap in the U.K. is 17.1%; in the U.S., it’s 18.9%.) But that progress would remain only if displaced women re-entered the labor market at around the new average salary for their gender. That’s unlikely, IPPR says. Some industries dominated by women (such as retail or child and elderly care) are seeing less investment in productivity-raising technology, perhaps because the current human labor is so cheap.

Read More
DeepMind's Mustafa Suleyman: In 2018, AI will gain a moral compass - Wired

Humanity faces a wide range of challenges that are characterised by extreme complexity, from climate change to feeding and providing healthcare for an ever-expanding global population. Left unchecked, these phenomena have the potential to cause devastation on a previously untold scale. Fortunately, developments in AI could play an innovative role in helping us address these problems.

At the same time, the successful integration of AI technologies into our social and economic world creates its own challenges. They could either help overcome economic inequality or they could worsen it if the benefits are not distributed widely. They could shine a light on damaging human biases and help society address them, or entrench patterns of discrimination and perpetuate them. Getting things right requires serious research into the social consequences of AI and the creation of partnerships to ensure it works for the public good.

Read More
Inside artificial intelligence: its promises and its perils - Le Monde

Is the human being threatened by technology? Could the machine come to dominate us? Our special report separates fantasy from reality.

Artificial intelligence (AI) is in fashion. In Le Monde and on Lemonde.fr alone, the subject came up in 200 articles in 2017, almost 15% more than in 2016. It was discussed in every domain: in business, in science and even in politics, since Prime Minister Edouard Philippe entrusted a mission on the question to the mathematician and (LRM) member of parliament Cédric Villani, whose conclusions are expected in January.

What remains to be seen is what the term covers. There have, of course, been spectacular breakthroughs showing that machines now surpass humans at specific tasks. In healthcare, they spot melanomas and breast tumours on medical images better than doctors do. In transport, they cause fewer accidents than human drivers. Not to mention the other advances: speech recognition, game-playing (poker, Go), writing, painting and music. Behind the scenes of this singular world are the digital giants (Google, Facebook, Amazon, Microsoft, IBM, Baidu…) and start-ups eager to steal the limelight.

Read More
AI reveals, injects gender bias in the workplace - BenefitsPro

While lots of people worry about artificial intelligence becoming aware of itself, then running amok and taking over the world, others are using it to uncover gender bias in the workplace. And that’s more than a little ironic, since AI actually injects not just gender, but racial bias into its data—and that has real-world consequences.

A Fox News report highlights the research with AI that reveals workplace bias, uncovered by research from Boston-based Palatine Analytics. The firm, which studies workplace issues, “analyzed a trove of data—including employee feedback and surveys, gender and salary information and one-on-one check-ins between managers and employees—using the power of artificial intelligence.”

Read More
The world is relying on a flawed psychological test to fight racism - Quartz Media

In 1998, the incoming freshman class at Yale University was shown a psychological test that claimed to reveal and measure unconscious racism. The implications were intensely personal. Even students who insisted they were egalitarian were found to have unconscious prejudices (or “implicit bias” in psychological lingo) that made them behave in small, but accumulatively significant, discriminatory ways. Mahzarin Banaji, one of the psychologists who designed the test and leader of the discussion with Yale’s freshmen, remembers the tumult it caused. “It was mayhem,” she wrote in a recent email to Quartz. “They were confused, they were irritated, they were thoughtful and challenged, and they formed groups to discuss it.”

Finally, psychologists had found a way to crack open people’s unconscious, racist minds. This apparently incredible insight has taken the test in question, the Implicit Association Test (IAT), from Yale’s freshmen to millions of people worldwide. Referencing the role of implicit bias in perpetuating the gender pay gap or racist police shootings is widely considered woke, while IAT-focused diversity training is now a litmus test for whether an organization is progressive.

This acclaimed and hugely influential test, though, has repeatedly fallen short of basic scientific standards.

Full article: https://qz.com/1144504/the-world-is-relying-on-a-flawed-psychological-test-to-fight-racism/

Read More
Unconscious Bias Training Isn't the Silver Bullet For a Biased Hiring Process - Elevate Blog

The latest fashion trend among most of my clients is Unconscious Bias Training. While these trainings are interesting and engaging, and may raise awareness of various biases, there is little evidence of their effectiveness in eliminating them. This is well explained in Diversity and Inclusion specialist Lisa Kepinski's article, Unconscious Bias Awareness Training is Hot, But the Outcome is Not: So What to Do About It?

Lisa outlines two problems with these trainings:

  1. The "So What?" effect: having done the training, leaders and HR professionals alike remain at a loss for next steps that could deliver sustainable cultural change, and
  2. The training may backfire by encouraging more biased thinking and behaviors (by conditioning the stereotypes). Moreover, "by hearing that others are biased and it's ‘natural’ to hold stereotypes, we feel less motivated to change biases and stereotypes are strengthened (‘follow the herd’ bias)."
Read More
Artificial intelligence could hardwire sexism into our future. Unless we stop it- WEF Blog

In five years’ time, we might travel to the office in driverless cars, let our fridges order groceries for us and have robots in the classroom. Yet, according to the World Economic Forum’s Global Gender Gap Report 2017, it will take another 100 years before women and men achieve equality in health, education, economics and politics.

What’s more, it's getting worse for economic parity: it will take a staggering 217 years to close the gender gap in the workplace.

How can it be that the world is making great leaps forward in so many areas, especially technology, yet it's falling backwards when it comes to gender equality?

Read More
Microsoft Researcher Details The Real-World Dangers Of Algorithm Bias

However quickly artificial intelligence evolves, however steadfastly it becomes embedded in our lives -- in health, law enforcement, sex, etc. -- it can't outpace the biases of its creators: humans. Microsoft researcher Kate Crawford delivered an incredible keynote speech, titled "The Trouble with Bias," at the Neural Information Processing Systems conference on Tuesday.

Read More
Working for the algorithm: Machines will help employers overcome bias - The Economist

Who is best placed to judge a firm’s workers? In 2018 employees everywhere will increasingly feel the effects of the rise of “talent analytics”, also known as “people analytics”, as they go about their daily work. Having been relatively slow compared with other corporate departments in making use of big data, in 2018 human-resources (HR) folk will become its most enthusiastic proponents—with significant implications for who gets hired, what they are paid and whether they are promoted. Employees will have to get used to being (often unwitting) guinea pigs in frequent HR experiments. And wise ones will think ever more carefully about how they express themselves in e-mails and on digital collaborative-working platforms such as Slack.

One reason is the pressure HR executives will face to make workplaces better for women and minority groups. The limitations of established approaches, such as training and awareness programmes, had caused “diversity fatigue” to set in. But it has become a corporate priority again after shocking headlines in 2017 about sexual discrimination and harassment in Silicon Valley, Hollywood, professional sports and big media firms, which reminded the world that bad corporate culture is a serious business risk. 

Read More
AI tool quantifies power imbalance between female and male characters in Hollywood movies - Technology Breaking News

At first glance, the movie “Frozen” might seem to have two strong female protagonists — Elsa, the elder princess with unruly powers over snow and ice, and her sister, Anna, who spends much of the film on a quest to save their kingdom.

But the two princesses actually exert very different levels of power and control over their own destinies, according to new research from University of Washington computer scientists.

The team used machine-learning-based tools to analyze the language in nearly 800 movie scripts, quantifying how much power and agency those scripts give to individual characters. In their study, recently presented in Denmark at the 2017 Conference on Empirical Methods in Natural Language Processing, the researchers found subtle but widespread gender bias in the way male and female characters are portrayed.

“‘Frozen’ is an interesting example because Elsa really does make her own decisions and is able to drive her own destiny forward, while Anna consistently fails in trying to rescue her sister and often needs the help of a man,” said lead author and Paul G. Allen School of Computer Science & Engineering doctoral student Maarten Sap, whose team also applied the tool to Wikipedia plot summaries of several classic Disney princess movies.

“Anna is actually portrayed with the same low levels of power and agency as Cinderella, which is a movie that came out more than 60 years ago. That’s a pretty sad finding,” Sap said.
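Very loosely, the kind of scoring the researchers describe can be sketched as a lexicon lookup: each verb a character performs carries an agency connotation, and per-character totals are compared. The verbs, scores and plot events below are invented for illustration; the actual study used a large annotated connotation lexicon and full script parses, not this toy table.

```python
# A minimal sketch of agency scoring, loosely inspired by the connotation-frame
# approach described in the article. Lexicon and events are invented.

AGENCY = {"decides": 1, "builds": 1, "rescues": 1,   # high-agency verbs
          "waits": -1, "needs": -1, "fails": -1}     # low-agency verbs

def agency_score(events):
    """events: list of (character, verb). Returns net agency per character."""
    scores = {}
    for character, verb in events:
        scores[character] = scores.get(character, 0) + AGENCY.get(verb, 0)
    return scores

# Hypothetical plot events for two characters:
events = [("Elsa", "decides"), ("Elsa", "builds"),
          ("Anna", "waits"), ("Anna", "fails"), ("Anna", "needs")]
print(agency_score(events))  # {'Elsa': 2, 'Anna': -3}
```

Even this crude version captures the paper's core idea: the verbs a script assigns to a character, aggregated, yield a measurable power-and-agency profile.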

Read More
Can A.I. Be Taught to Explain Itself? - New York Times

As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it.

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: “Advances in A.I. Are Used to Spot Signs of Sexuality.” But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and GLAAD, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski’s work “dangerous” and “junk science.” (They claimed it had not been peer reviewed, though it had.) In the next week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: “The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse.”

Read More
UK employers join forces to improve tech diversity - FT

Data on recruitment and pay to be shared in effort to correct gender imbalance.

Technology, media, telecoms and professional services employers in the UK have committed to improving the diversity of their IT staff, following scandals that have focused attention on the overwhelmingly male makeup of the technology profession. Eighty-nine of the country’s largest employers of computer developers, with almost 400,000 employees between them, have signed up to the Tech Talent Charter, which asks businesses to share recruitment and gender pay gap data specifically for their tech staff.

Read More
Something really is wrong on the Internet. We should be more worried. - The Washington Post

“Something is wrong on the internet,” declares an essay now trending in tech circles. But the issue isn’t Russian ads or Twitter harassers. It’s children’s videos.

The piece, by tech writer James Bridle, was published on the heels of a report from the New York Times that described disquieting problems with the popular YouTube Kids app. Parents have been handing their children an iPad to watch videos of Peppa Pig or Elsa from “Frozen,” only for the supposedly family-friendly platform to offer up some disturbing versions of the same. In clips camouflaged among more benign videos, Peppa drinks bleach instead of naming vegetables. Elsa might appear as a gore-covered zombie or even in a sexually compromising position with Spider-Man.

The phenomenon is alarming, to say the least, and YouTube has said that it’s in the process of implementing new filtering methods. But the source of the problem will remain. In fact, it’s the site’s most important tool — and increasingly, ours.

YouTube suggests search results and “up next” videos using proprietary algorithms: computer programs that, based on a particular set of guidelines and trained on vast sets of user data, determine what content to recommend or to hide from a particular user. They work well enough — the company claims that in the past 30 days, only 0.005 percent of YouTube Kids videos have been flagged as inappropriate. But as these latest reports show, no piece of code is perfect.

Read More
Garbage In, Garbage Out - NEVERTHELESS

One afternoon in Florida in 2014, 18-year-old Brisha Borden was running to pick up her god-sister from school when she spotted an unlocked kid’s bicycle and a silver scooter. Brisha and a friend grabbed the bike and scooter and tried to ride them down the street. Just as the 18-year-old girls were realizing they were too big for the toys, a woman came running after them saying, “That’s my kid’s stuff.” They immediately dropped the stuff and walked away. But it was too late: a neighbor who witnessed the event had already called the police. Brisha and her friend were arrested and charged with burglary and petty theft for the items, valued at a total of $80.

The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store. He had already been convicted of several armed robbery charges and had served five years in prison. Borden, the 18-year-old, had a record too, but for juvenile misdemeanors.

For the full transcript and podcast:

https://medium.com/nevertheless-podcast/transcript-garbage-in-garbage-out-78b74b08f16e

Read More
The key to closing the gender gap? Putting more women in charge - WEF

While women worldwide are closing the gap in critical areas such as health and education, significant gender inequality persists in the workforce and in politics. Given current rates of change, this year’s Global Gender Gap Report estimates it will be another 217 years before we achieve gender parity.

As part of its workforce gap analysis, the World Economic Forum turned to LinkedIn to better understand the trends in gender equality across the workforce. Thanks to our unique insight into real-time workforce trends, LinkedIn can provide more depth, nuance, and timeliness than the sort of data historically gathered by governments or NGOs. Our data provides insight into the role women leaders play in driving overall economic equity and participation.

Read More
Understanding Bias in Algorithmic Design - ASME Demand

In 2016, The Seattle Times uncovered an issue with a popular networking site’s search feature. When the investigative reporters entered female names into LinkedIn’s search bar, the site asked if they meant to search for similar-sounding male names instead — “Stephen Williams” instead of “Stephanie Williams,” for example. According to the paper’s reporting, however, the trend wouldn’t happen in reverse when a user searched for male names.

Within a week of The Seattle Times article’s release, LinkedIn introduced a fix. Spokeswoman Suzi Owens told the paper that the search algorithm had been guided by “relative frequencies of words” from past searches and member profiles, not by gender. Her explanation suggests that LinkedIn’s algorithm was not intentionally biased. Nevertheless, using word frequency — a seemingly objective variable — as a key parameter still generated skewed results. That could be because American men are more likely to have a common name than American women, according to Social Security data. Thus, building a search function on frequency criteria alone would more likely increase visibility for Stephens than Stephanies.
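A frequency-only "did you mean" heuristic of the kind the article describes can be sketched in a few lines. The name counts below are hypothetical; the point is that nothing in the code mentions gender, yet ranking near-matches purely by corpus frequency steers suggestions toward the more common name:

```python
# Hedged sketch of a frequency-based suggestion heuristic; not LinkedIn's
# actual algorithm. SEARCH_FREQ counts are invented for illustration.
import difflib

SEARCH_FREQ = {"stephen": 9000, "stephanie": 3000}  # hypothetical corpus counts

def suggest(query, freq=SEARCH_FREQ):
    """Suggest a close, strictly more frequent alternative, else None."""
    q = query.lower()
    candidates = difflib.get_close_matches(q, freq, n=3, cutoff=0.6)
    better = [c for c in candidates if c != q and freq[c] > freq.get(q, 0)]
    return max(better, key=freq.get) if better else None

print(suggest("stephanie"))  # 'stephen' - redirected toward the frequent name
print(suggest("stephen"))    # None - no more-frequent near-match exists
```

The asymmetry the reporters observed falls straight out of the ranking criterion: the suggestion only ever flows from the rarer spelling toward the more frequent one.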

Examples like this demonstrate how algorithms can unintentionally reflect and amplify common social biases. Other recent investigations suggest that such incidents are not uncommon. In a more serious case, the investigative news organization ProPublica uncovered a correlation between race and criminal recidivism predictions in so-called “risk assessments” — predictive algorithms that courtrooms use to inform terms for bail, sentencing or parole. The algorithmic predictions for recidivism generated a higher rate of false negatives for white offenders and a higher rate of false positives for black offenders, even though overall error rates were roughly the same.
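The pattern ProPublica reported, similar overall accuracy but opposite error profiles, can be illustrated with invented numbers. The cohort below is synthetic, not the COMPAS data; it only shows how the two error rates are computed per group:

```python
# Illustrative numbers, not ProPublica's data: two groups with similar overall
# accuracy but opposite error profiles, the pattern reported for COMPAS scores.

def confusion_rates(rows):
    """rows: list of (group, predicted_high_risk, reoffended).
    Returns per-group false-positive and false-negative rates."""
    out = {}
    for group, pred, actual in rows:
        g = out.setdefault(group, {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        if actual:
            g["pos"] += 1
            if not pred:
                g["fn"] += 1   # reoffended but was labeled low risk
        else:
            g["neg"] += 1
            if pred:
                g["fp"] += 1   # labeled high risk but did not reoffend
    return {grp: {"FPR": g["fp"] / g["neg"], "FNR": g["fn"] / g["pos"]}
            for grp, g in out.items()}

# Hypothetical cohort: group A draws the false positives, group B the false negatives.
rows  = [("A", True,  False)] * 45 + [("A", False, False)] * 55   # A, did not reoffend
rows += [("A", True,  True)]  * 72 + [("A", False, True)]  * 28   # A, reoffended
rows += [("B", True,  False)] * 23 + [("B", False, False)] * 77   # B, did not reoffend
rows += [("B", True,  True)]  * 52 + [("B", False, True)]  * 48   # B, reoffended

for grp, r in sorted(confusion_rates(rows).items()):
    print(grp, f"FPR={r['FPR']:.0%} FNR={r['FNR']:.0%}")
```

Here both groups are classified correctly about 64% of the time overall, yet group A's errors are mostly false positives (wrongly flagged high risk) while group B's are mostly false negatives, which is exactly why a single accuracy figure can mask a disparity.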

Read More