Moral Code: Can Computer Science Students Be Taught To Think Ethically?

Before we worry about teaching ethical decision-making to robots, we need to figure out how to teach it to the people who build them. Explore the approach that universities like Stanford, MIT, and Harvard are taking to cultivate an understanding of ethical issues in their students, and how this can benefit the companies that may one day employ them.

By Kathryn Nave, Contributor

From Facebook’s “Move fast and break things” to Google’s “Don’t be evil,” youthful tech startup founders have a history of coining ethical principles that come back to haunt them.

But while a cavalier approach to moral quandaries may have traditionally been accepted as part and parcel of Silicon Valley’s culture of rapid innovation, it’s increasingly difficult to overlook as tech companies move beyond designing systems for faster commerce or communications to building systems that control everything from our news feeds to our cars.

The problem, believes MIT Media Lab director Joi Ito, is not that programmers are particularly unethical, but that they often simply fail to recognize the extent to which the decisions they’re making are ethical at all.

“You can force computer scientists to take ethics courses,” he said. “But there’s a two-sided problem whereby the engineers aren’t thinking enough about how [ethics] affects their work and the people who are teaching [these courses] don’t know enough to connect classical ethics to current concerns.”

Today, educators like Ito are designing interdisciplinary courses to supplement computer science knowledge with information on its legal landscape and the ethics that guide it.


An Interdisciplinary Approach

Last year, Ito helped launch a course called “Internet & Society: The Technologies and Politics of Control,” co-taught with Jonathan Zittrain, a professor of international law and computer science at neighboring Harvard Law School. The class, which is open to students from both MIT and Harvard, aims to tackle thorny questions of ethics and regulation.

By combining Ito’s experience leading the Media Lab (which has spent over thirty years at the vanguard of new technologies, from the touch screen to voice recognition) with Zittrain’s background in internet governance, the course’s subject matter sits at the intersection of emerging technology development and regulation. The pair give equal weight to each discipline while keeping the material tightly connected to the real concerns today’s students may face.

Yet Ito and Zittrain are not alone. At Stanford University, another academic cornerstone of America’s tech industry, a similar effort is underway. Mehran Sahami, Associate Chair for Education in Stanford’s Computer Science department and a former Google research scientist, is set to launch a computer science ethics course with several colleagues from the university’s law school in January 2019.

Ito is particularly pleased that his course’s cross-disciplinary approach is reflected in his students, who come not only from the engineering department, but also from sociology, law, history, and philosophy.

“It’s not just important that the engineers understand these issues, but also that the lawyers and the policymakers have a grasp of this technology,” Ito said. “If they don’t know what they’re talking about, how can they regulate it?”

For a particularly salient example, Ito pointed to April’s Congressional hearing, in which Facebook CEO Mark Zuckerberg was called to address accusations of failing to protect user data from harvesting by political consulting firm Cambridge Analytica. The firm obtained personal information via a personality quiz app, and the data was then used in targeted political campaigns—including that of now-President Donald Trump. Yet many of the questions the panel of lawmakers asked seemed to reflect a lack of even a basic understanding of Facebook’s business model or the technology that supports it.

“It’s really hard to eradicate or alter technologies once they’ve built up a significant install base,” Ito explained. “So my concern is, how do we act before something reaches Google or Facebook size, to make sure that we are not allowing companies to build up structures that we discover later are troublesome?”

Managing Structural Bias

Back at MIT, Ito’s course focuses on the ethics and governance of AI technology, reflecting his growing concern over how to oversee AI systems that increasingly shape key societal functions, from predicting criminality to forecasting voter behavior.

“The problem is that while engineers think about how to eliminate bias in their algorithms, they don’t necessarily consider pre-existing structural biases in society that their algorithm may replicate,” Ito explained. For example, a model that excludes race as a predictor of recidivism could nevertheless end up generating a biased system by using zip code as a proxy in its predictions.

“It doesn’t sound very fair to punish people more harshly for being from a certain zip code, but that’s exactly what these risk scores are doing,” Ito said. “We’re going to say that if you have these factors you’re more likely to commit crime, so we will increase your risk score, making it harder to get probation, more expensive to secure bail, and possibly even a longer sentence.”
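The proxy effect Ito describes can be sketched with entirely synthetic data. In this hypothetical example (the group labels, zip codes, and rates are all invented for illustration), a risk model never sees race as an input, yet because residential segregation ties zip code to group membership, scoring by zip code alone reproduces the disparity baked into the historical records:

```python
# Hypothetical illustration: a "race-blind" risk model can still reproduce
# group disparities when zip code acts as a proxy for group membership.
import random

random.seed(0)

# Synthetic population: residential segregation links zip code to group.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Group A mostly lives in zips 0-4, group B in zips 5-9 (segregation).
    zipcode = random.randint(0, 4) if group == "A" else random.randint(5, 9)
    # The historical record is itself skewed: group B is policed more
    # heavily, so recorded "recidivism" is higher in the training data.
    recidivated = random.random() < (0.2 if group == "A" else 0.4)
    people.append((group, zipcode, recidivated))

# "Train" a race-blind model: risk score = observed rate per zip code.
by_zip = {}
for _, z, r in people:
    by_zip.setdefault(z, []).append(r)
risk = {z: sum(rs) / len(rs) for z, rs in by_zip.items()}

# Score everyone using only their zip code, then compare groups.
scores = {"A": [], "B": []}
for g, z, _ in people:
    scores[g].append(risk[z])
for g in ("A", "B"):
    print(g, round(sum(scores[g]) / len(scores[g]), 2))
```

Even though race never appears as a feature, the average score for group B comes out roughly twice that of group A, because the zip codes carry the same information.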

Recidivism risk prediction algorithms are already in wide use across the U.S., but because they are often owned by commercial companies and protected as trade secrets, the logic behind their decisions cannot easily be examined in court. This is just one example of how AI algorithm design intersects both with fundamental ethical questions and with more practical issues of regulation.

And while the European Union’s new General Data Protection Regulation (GDPR) governs how long and for what purposes companies are allowed to keep and use the data that they collect on private individuals, Ito is not convinced this type of regulation is enough to produce the desired outcome. Just look at elections.

“Cambridge Analytica claims they don’t use the data from one election to another,” he said, “but they do keep the models that they’ve trained in previous elections, and this raises the question: is throwing away the data enough? What about the insights that the machine has already learned about you, which are now preserved in its model?”

Ito doesn’t have a straightforward answer for how to enforce more effective public protection, but by encouraging his students to think more about the broader impact of the tools they design, he hopes he can at least encourage the next generation to slow down and think about what they might be breaking.

“If nothing else, I hope they leave with the awareness that every technical decision they make has both ethical and scientific repercussions,” he said. “And I hope they understand that they are responsible.”