Can We Still Rely On Science Done By Sexual Harassers?

March 20, 2019 | By JohnValbyNation

The pandemic of sexual harassment and abuse—you saw its prevalence in the hashtag #metoo on social media in recent weeks—isn’t confined to Harvey Weinstein’s casting couches. Decades of harassment by a big-shot producer put famous faces on the problem, but people in every field have grappled with it forever, trading warnings through whisper networks. Last summer, the story was women in Silicon Valley. Last week, more men in media.

Earthquakes of this magnitude are never any fun for people atop shifting tectonic plates. But the new world they create can be a better one. No one misses Gondwanaland.

Still, traces of those lost continents remain in the fossil record. The downstream effects of sexual harassment have the potential to color everything from the apps you use to the news you read. From now on, when we watch movies that Weinstein touched, we’ll think about the women actors, wondering what they had to go through to be there—or what happened to the ones who couldn’t bear it, who left, who didn’t get the jobs, who self-deported their talent from Hollywood. We’ll wonder who enabled it, who let it happen and then perhaps surfed to their own success on Weinstein’s waves of destruction. The same goes for movies directed by Woody Allen or Roman Polanski. Or others.

There’s a word for that kind of work: “problematic.” It’s stuff you love tainted by people you hate. It’s Steve Ditko’s weird Randian objectivism metastasizing into Spider-Man, and Dr. Seuss doing anti-Japanese propaganda work during World War II. It’s Roald Dahl, anti-Semite. Can we love Kind of Blue and Sketches of Spain and also condemn Miles Davis for beating his wives? Is Ender’s Game less of a masterpiece for Orson Scott Card’s homophobia? Maybe. Looking hard at the flaws of the artist is an important way to engage with the art.

The scientific community has been contending with its own habitual harassers. (Amid the Weinstein scandal, the news section of the journal Science broke the story of field geologists in Antarctica alleging abuse by their boss. As the planetary scientist Carolyn Porco tweeted: Imagine the implications of an abuser and his target confined on a long-duration space mission.)

This isn’t like art. Science’s results and conclusions are nominally objective; failures in the humanity of the humans who found them aren’t supposed to have any bearing. Yet they do.

Nazi “research” turned out to be barely disguised torture; it was easy to condemn the people who did it and consign to history the crappy outcomes they collected. The racist abuses of the Tuskegee experiments and consent problems with human radiation exposure experiments of the post-World War II era yielded data of questionable use, but led to reform, to rethinking the treatment of human scientific subjects.

But what about, for example, exoplanets? Geoff Marcy, a pre-eminent astronomer at UC Berkeley, pioneered techniques for finding planets outside Earth’s solar system. He also, it seems, sexually harassed students without repercussion for decades. Clearly, that kept good science from happening—it’s reasonable to conclude that Marcy’s alleged abuses prevented his targets from doing their best work, or forced them out of science altogether. He pushed all the science they might have done into some alternate timeline.

He also discovered or helped discover thousands of worlds.

Unlike art, science has little mechanism to engage with that. They’re, you know, planets.

Researchers might get banned from conferences or kicked out of professional societies. Marcy has left his job at UC Berkeley. But no one has suggested that his findings are compromised, and astronomy will continue to build on Marcy’s work—to reference his papers and extend his team’s findings. Citation networks will link to them. The way the system works, that accrues fame and influence back to Marcy, even if subsequent researchers would rather it didn’t.

Gliese 436b, 55 Cancri b, the worlds of Upsilon Andromedae, and the other planets Marcy’s teams found still orbit their suns no matter what Marcy did. But they are … problematic.

Some science has more obvious consequences here on Earth. Last spring a former student of the esteemed philosopher John Searle—also at UC Berkeley—filed a lawsuit alleging frequent sexual harassment and abuse. That lawsuit elicited a history of prior claims.

Searle is famous for a thought experiment called the Chinese Room. It’s a way to try to understand whether a machine could have consciousness. In brief: A guy stands in a sealed room with a slot in the wall. Every so often a slip of paper comes through the slot with some Chinese characters on it. The guy consults a set of instructions in the room that tells him which Chinese characters make up the right response. He writes those on another piece of paper and pushes it back out the slot.
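
To make the mechanics concrete, here’s a minimal sketch in Python of that kind of rule-following. The rulebook entries and the fallback reply are invented stand-ins; Searle’s imagined rulebook would have to be unimaginably larger than a lookup table, but the operator’s relationship to the symbols is the same.

    # A toy rendering of the room: the operator matches incoming symbols to
    # outgoing symbols by rule, without interpreting either. The entries below
    # are invented placeholders, not a real conversation.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def room_operator(slip: str) -> str:
        # Return whatever the rulebook pairs with the incoming slip; an
        # unrecognized slip gets a canned reply, exactly as a program would.
        return RULEBOOK.get(slip, "请再说一遍。")

    print(room_operator("你好吗？"))  # prints 我很好，谢谢。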

So: Does the guy in the room understand Chinese?

Searle pitched the idea in 1980, and over decades it became one of the most argued-over concepts in philosophy and the theory of mind. If your answer to the question is “no,” as was Searle’s, that suggests that you don’t believe in “strong AI,” the idea that a mechanistic system of circuits or some other, unimaginable technology could think or feel. It might pass the Turing test, Alan Turing’s famous assessment that holds that any system able to convince a human it’s thinking might as well be said to think. But it won’t really be a mind.

Don’t panic; I’m not going to walk through all the arguments and counter-arguments. Most students of computer science will tell you that even if Searle’s alleged misdeeds compromised his ideas (Why is it a man in the room? Why is Chinese synonymous with incomprehensibility?), the philosophical problem of consciousness probably doesn’t have a grand impact on the unfolding, gnarly issues of bias in what we’ve all come to think of as artificial intelligence—machine learning.

Software has indeed managed to learn gender stereotypes and racism. Algorithms have sent African Americans to prison more often than white people.

But today’s computer science students were taught by people who learned about the Chinese Room problem (and not taught by people who got marginalized or pushed out of the field). Bias introduced way back up the line is all the more insidious. It kneecaps people’s ability to tell where they might be going wrong.

One way biases work their way into machine learning systems is through the training data; it might be incomplete or corrupted. But the other point of ingress is the human factor. Programmers choose how to tell the machine when it has gotten something right or wrong. It’s “through the ‘definition of success,’ which will be skewed to whatever the designer of the algorithm thinks matters. Also they define the penalty for failure, another way of embedding values,” says Cathy O’Neil, a mathematician and author of Weapons of Math Destruction.
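
To see how that plays out in code, here’s a minimal sketch, with made-up scores and labels, of a designer choosing a decision threshold. The data never changes; only the penalties assigned to the two kinds of error do, and the system’s behavior moves with them.

    # Illustrative only: how the designer's chosen penalties -- the "definition
    # of success" -- shift a model's decision threshold on identical data.
    import numpy as np

    rng = np.random.default_rng(0)
    scores = rng.uniform(0, 1, 10_000)          # hypothetical model scores
    truth = rng.uniform(0, 1, 10_000) < scores  # higher score, more likely truly positive

    def best_threshold(false_positive_cost, false_negative_cost):
        # Pick the cutoff that minimizes total cost under the chosen penalties.
        thresholds = np.linspace(0, 1, 101)
        costs = [false_positive_cost * np.sum((scores >= t) & ~truth)
                 + false_negative_cost * np.sum((scores < t) & truth)
                 for t in thresholds]
        return thresholds[int(np.argmin(costs))]

    print(best_threshold(1, 1))  # balanced penalties: cutoff lands near 0.5
    print(best_threshold(5, 1))  # false alarms cost five times more: cutoff climbs

Neither threshold is more accurate than the other in any absolute sense; the difference lies entirely in whose mistakes the designer decided were expensive.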

Maybe that matters less when the algorithm’s job is to figure out how to park a robot car or find the right widget in a warehouse. But what about when it’s trying to, for example, assess the actuarial fitness of health insurance applicants? “If you’re defaulting your model to think of a household that’s made up of a man, a woman, and kids, that’s a simplification,” says Osonde Osoba, an engineer at RAND and co-author of a study on bias in machine learning. “If you don’t interrogate that anchor, your model won’t be fully representative.”
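
A small, hypothetical sketch of the kind of unexamined anchor Osoba is describing: a data-preparation step that quietly fills in every household the intake form didn’t capture with one “standard” shape. The column names and the default value here are invented for illustration.

    # Hypothetical pipeline step: any household the data didn't describe
    # becomes the designer's default before the model ever sees it.
    import pandas as pd

    applicants = pd.DataFrame({
        "applicant_id": [1, 2, 3, 4],
        "household_type": ["single_adult", None, "multigenerational", None],
    })

    DEFAULT_HOUSEHOLD = "two_adults_with_kids"  # the unexamined anchor
    applicants["household_type"] = applicants["household_type"].fillna(DEFAULT_HOUSEHOLD)

    # Downstream features, and any model trained on them, now treat that
    # default as ground truth for the households the data never described.
    print(applicants["household_type"].value_counts())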

If the lesson of #metoo is that monsters are everywhere, they’re in Silicon Valley, too. The “white guy problem,” as the researcher Kate Crawford called it in the New York Times, goes beyond even the pernicious failures of Hollywood or astronomy.

Those people are writing the code that’ll train machine-learning systems embedded in every part of our world, from autonomous cars to medical diagnosis to the internet of things. Like some dark version of Isaac Asimov’s Three Laws of Robotics, no machine will be free of those biases, no matter how sophisticated. The prevalence of bias will be 100 percent, and the real fault won’t be in our robots. It’ll be in ourselves.
