Debugging Data: Microsoft Researchers Look at Ways to Train AI Systems to Reflect the Real World

Hanna Wallach at Microsoft Research in New York City.

Artificial intelligence is already helping people enhance their abilities, allowing them to type texts faster and take better pictures. It is also being used to decide bigger matters, such as who gets a new job and who goes to jail. That is prompting researchers across Microsoft and throughout the machine learning community to make sure that the data used to develop AI systems reflects what is happening in the real world, is safeguarded against unintended bias and is handled in ways that are transparent and respectful of privacy and security.

Data is the single most important component of machine learning. It’s the representation of the world that is used to train machine learning models, explained Hanna Wallach, a senior researcher in Microsoft’s New York research lab. Wallach is a program co-chair of the Annual Conference on Neural Information Processing Systems, running Dec. 4 to Dec. 9 in Long Beach, California. Better known as NIPS, the conference is expected to draw thousands of computer scientists from academia and industry to discuss machine learning – the branch of AI that focuses on systems that learn from data.

“We often talk about datasets as if they are these well-defined things with clear boundaries, but the reality is that as machine learning becomes more prevalent in society, datasets are increasingly taken from real-world scenarios, such as social processes, that don’t have clear boundaries,” said Wallach, who together with the other program co-chairs introduced a new subject area at NIPS on fairness, accountability and transparency. “When you are constructing or choosing a dataset, you have to ask, ‘Is the dataset representative of the population that I am trying to model?’”

Kate Crawford, a principal researcher at Microsoft’s New York research lab, calls it “the trouble with bias” and it’s the central focus of the invited talk she will be giving at NIPS.

“The people who are collecting the datasets decide that, ‘Oh this represents what men and women do, or this represents all human actions or human faces.’ These are types of decisions that are made when we create what are called datasets,” she said. “What is interesting about training datasets is that they will always bear the marks of history, that history will be human, and it will always have the same kind of frailties and biases that humans have.”

Researchers are also looking at diversity among AI researchers themselves, and whether there is enough of it. Previous work has shown a direct correlation between the diversity of teams and the problems they choose to work on and produce solutions for. Two events co-located with NIPS will address this issue: the 12th Women in Machine Learning workshop, where Wallach, a co-founder of Women in Machine Learning, will give a talk on combining machine learning with the social sciences, and the Black in AI workshop, co-founded by Timnit Gebru, a postdoctoral researcher at Microsoft’s New York lab.

“In some types of scientific disciplines, it doesn’t matter who finds the truth, there is just a particular truth to be found. AI is not exactly like that,” said Gebru. “We define what kinds of problems we want to solve as researchers. If we don’t have diversity in our set of researchers, we are at risk of solving a narrow set of problems that a few homogeneous groups of people think are important, and we are at risk of not addressing the problems that are faced by many people in the world.”

Timnit Gebru, postdoctoral researcher with the Microsoft Research Fairness, Accountability, Transparency and Ethics (FATE) group, at Stanford University, Tuesday, 28 November 2017, in Stanford, CA.

Machine learning core

NIPS is an academic conference featuring hundreds of papers on the development of machine learning models and the data used to train them.

Microsoft researchers authored or co-authored 43 accepted conference papers, describing everything from the latest advances in retrieving data stored in synthetic DNA to a method for repeatedly collecting telemetry data from user devices without compromising users’ privacy.

Almost all papers presented at NIPS over the past 30 years consider data in some way, said Wallach. “The difference in recent years, though,” she added, “is that machine learning no longer exists in a purely academic context, where people use synthetic or standard datasets. Rather, it’s something that affects all kinds of aspects of our lives.”

The application of machine learning models to real-life problems and challenges is bringing into focus issues of fairness, accountability and transparency.

“People are becoming more aware of the influence that algorithms have on their lives, determining everything from what news they read to what products they buy to whether or not they get a loan. It’s natural that as people become more aware, they grow more concerned about what these algorithms are actually doing and where they get their data,” said Jenn Wortman Vaughan, a senior researcher at Microsoft’s New York lab.

The trouble with bias

Data is not something that exists in real-world settings as a tangible object that everyone can see and recognise, explained Crawford; data is made. When scientists first began cataloguing the history of the natural world, they came to recognise certain types of information as data, she noted. Today, data is recognised as a construct of human history.

Crawford’s invited talk at NIPS will highlight examples of machine learning bias, such as the news organisation ProPublica’s investigation that uncovered bias against African-Americans in an algorithm used by courts and law enforcement to predict the likelihood of re-offending among convicted criminals, and will then discuss how such bias can be addressed.

“We can’t simply boost a signal or tweak a convolutional neural network to resolve this issue,” she said. “We need to have a deeper sense of what is the history of structural inequity and bias in these systems.”

One way to address the bias, says Crawford, is to take what she calls a social-systems analysis approach to the conception, design, deployment and regulation of AI systems, thinking through all the possible effects of those systems. She recently described the approach in a commentary for the international journal Nature.

Crawford also noted that the challenge isn’t one for computer scientists to solve alone. She is a co-founder of the AI Now Institute, a first-of-its-kind interdisciplinary research institute based at New York University that was launched in November to bring together social scientists, computer scientists, lawyers, economists and engineers to study the social implications of AI, machine learning and decision-making algorithms.

Researchers, including Jenn Wortman Vaughan, at Microsoft’s office in New York City.

Interpretable machine learning

One way to address concerns about AI and machine learning is to ensure transparency by making AI systems easier for humans to understand. At NIPS, Vaughan, one of the New York lab’s researchers, will give a talk describing a large-scale experiment that she and her colleagues are running to learn what makes machine learning models interpretable and understandable to people who are not familiar with machine learning.

“The idea here is to add more transparency to algorithmic predictions so that decision makers understand why a particular prediction is made,” said Vaughan.

As an example, does the number of features or inputs to a model affect a person’s ability to spot instances where the model makes a mistake? Do people trust a system more when they can see how it makes its predictions, as opposed to when the model is simply a black box?

The research, said Vaughan, is a first step toward the development of “tools aimed at helping decision makers understand the data used to train their models and the inherent uncertainty in their models’ predictions.”
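To make the idea concrete, here is a minimal sketch of the kind of transparency being discussed: the same prediction shown once as a bare black-box answer and once with each input’s contribution exposed. It is not Vaughan’s experiment; the loan-scoring features, weights and threshold are invented for illustration.

```python
# Illustrative sketch only (not Vaughan's experiment): the same prediction
# presented two ways, so a decision maker can compare a bare "black box"
# answer with one that shows how each input pushed the result.
# The loan-scoring features, weights and bias below are hypothetical.

WEIGHTS = {"income_thousands": 0.02, "debt_ratio": -1.5, "years_employed": 0.1}
BIAS = -0.2

def predict_black_box(applicant: dict) -> str:
    """Return only the decision, with no explanation."""
    score = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return "approve" if score > 0 else "deny"

def predict_transparent(applicant: dict) -> str:
    """Return the decision along with each feature's contribution to it."""
    contributions = {k: round(WEIGHTS[k] * applicant[k], 2) for k in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score > 0 else "deny"
    details = ", ".join(f"{k}: {v:+.2f}" for k, v in contributions.items())
    return f"{decision} (score {score:+.2f}; {details})"

applicant = {"income_thousands": 48, "debt_ratio": 0.4, "years_employed": 3}
print(predict_black_box(applicant))    # just "approve"
print(predict_transparent(applicant))  # "approve" plus the reasons behind it
```

In the transparent version, a decision maker can see which inputs pushed the score up or down, which is the sort of understanding the experiment is probing for.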

Patrice Simard, an engineer at Microsoft’s research lab in Redmond, Washington, and a co-organiser of a NIPS symposium on interpretable machine learning, said the field should take note of computer programming, where problems are broken down into smaller problems that are solved in simple, understandable steps. “But in machine learning, we are completely behind. We don’t have the infrastructure,” he said.

In order to catch up, Simard suggests a shift to what he calls machine teaching – giving machines features to look for when solving a problem, rather than looking for patterns in huge amounts of data. Instead of training a machine learning model for car buying with millions of images of cars labeled as good or bad, teach the model about features such as fuel economy and crash test safety, he explained.

The teaching strategy is deliberate, he said, and results in an interpretable hierarchy of concepts used to train machine learning models.
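As a rough illustration of that idea, the sketch below decomposes the car-buying decision into named sub-concepts rather than learning from millions of labeled images; all of the concept definitions and thresholds are hypothetical, not taken from Simard’s work.

```python
# Illustrative sketch of the machine-teaching idea described above: a teacher
# decomposes "is this a good car to buy?" into named, inspectable sub-concepts
# rather than handing a learner millions of labeled images.
# Every feature, threshold and rule here is hypothetical.

def safe(car: dict) -> bool:
    """Taught sub-concept: safety, defined from crash-test stars and airbags."""
    return car["crash_test_stars"] >= 4 and car["airbags"] >= 6

def economical(car: dict) -> bool:
    """Taught sub-concept: economy, defined from fuel use and price."""
    return car["fuel_economy_mpg"] >= 30 and car["price_thousands"] <= 25

def good_buy(car: dict) -> bool:
    """Top-level concept built from the named sub-concepts, so every step of
    the decision can be read, questioned and corrected by the teacher."""
    return safe(car) and economical(car)

car = {"crash_test_stars": 5, "airbags": 8, "fuel_economy_mpg": 34, "price_thousands": 22}
print(good_buy(car))  # True, and the reasoning is visible in the code itself
```

In practice the sub-concepts would themselves be learned or refined from data, but the hierarchy of named concepts is what keeps the resulting model interpretable.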

Researcher diversity

To safeguard against unintended bias in AI systems, diversity in the field should be encouraged, noted Gebru, a co-organiser of the Black in AI workshop co-located with NIPS. “You want to make sure that the knowledge that people have of AI training is distributed around the world and across genders and ethnicities,” said Gebru.

The importance of researcher diversity struck Wallach, the NIPS program co-chair, at her fourth NIPS conference in 2005, when she shared a hotel room with three roommates, all women, for the first time. One of them was Vaughan, and the two, along with another roommate, co-founded the Women in Machine Learning group, which is now in its 12th year and has held a workshop co-located with NIPS since 2008. Some 650 women are expected to attend this year.

Wallach will give a talk at the Women in Machine Learning workshop about how she applies machine learning in the context of social science to measure unobservable theoretical constructs such as community membership or topics of discussion.

“Whenever you are working with data that is situated within societal contexts,” she said, “it is necessarily important to think about questions of ethics, fairness, accountability, transparency and privacy.”
