
For AI, a real-world reality check

An intelligent computer is only as well-rounded as the people who teach it.

Robert Ito

For the past three summers, around two dozen would-be computer scientists have come to Stanford University to learn about artificial intelligence from some of the field’s brightest. The attendees, culled from hundreds of applicants, take day trips to nearby tech companies, interact with social robots and hexacopters, and learn about computational linguistics (what machines do when words have multiple meanings, say) and the importance of time management (very). They play Frisbee. But if your mental picture of AI is a bunch of guys creating wilier enemies for their favorite videogames, well, this isn’t that. All the students here at the Stanford Artificial Intelligence Laboratory’s Outreach Summer (SAILORS) program are girls who have just completed ninth grade, and their studies focus on finding ways to improve lives, not enhance their game play: How do we use AI to keep jumbo jets from careening into one another? To ensure that doctors wash their hands before hitting the OR? “Our goal was to rethink AI education in a way that encourages diversity and students from all walks of life,” says Fei-Fei Li, director of Stanford’s AI lab and a founder of the SAILORS program. “When you have a diverse range of future technologists, they really care that technology is being used for the good of humanity.”

“When you have a diverse range of future technologists, they really care that technology is being used for the good of humanity.”

Fei-Fei Li Google & Stanford


SAILORS was created in 2015 by Li and former student Olga Russakovsky (now an assistant professor at Princeton University) to help bring greater gender equality to the tech industry. The cause is both noble and urgent. According to a recent survey, the number of women seeking computer science degrees is dropping; in the AI sector, women hold less than 20 percent of executive positions. It’s an enormous field to be left out of, considering that, every day, more and more people use AI to make their lives easier and more efficient: AI is how photo apps recognize your face among everyone else's, not to mention the beach where you took the picture. It's how your devices understand you when you ask what the weather will be tomorrow. Then there are the lesser-known applications, like diagnosing diabetic retinopathy (which often leads to blindness) or sending a drone on a search-and-rescue mission to the most remote reaches of the world.

With AI becoming ever more ubiquitous, the need for gender balance in the field grows beyond just the rightness of the cause—diversity is a crucial piece of AI due to the nature of machine learning. A goal of AI is to prod machines to complete tasks that humans do naturally: recognize speech, make decisions, tell the difference between a burrito and an enchilada. To do this, machines are fed vast amounts of information—often millions of words or conversations or images—just as all of us absorb information, every waking moment, from birth (in essence, this is machine learning). The more cars a machine sees, the more adept it is at identifying them. But if those data sets are limited or biased (if researchers don’t include, say, images of Trabants), or if the folks in AI don’t see or account for those limits or biases (maybe they’re not connoisseurs of obscure East German automobiles), the machines and the output will be flawed. It’s already happening. In one case, image recognition software identified photographs of Asian people as blinking.
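To make the stakes concrete, here is a minimal sketch in plain Python, with invented (length, height) "features" standing in for what a real vision system would learn from millions of images. The point is structural: a model can only answer with labels it has seen, so a gap in the data becomes a flaw in the output.

```python
# A toy nearest-centroid classifier. Its training set has sedans and trucks
# but no small East German cars -- so a Trabant is guaranteed to be mislabeled.

def nearest_centroid(train, query):
    """Return the training label whose centroid lies closest to the query."""
    centroids = {
        label: tuple(sum(p[i] for p in pts) / len(pts) for i in range(2))
        for label, pts in train.items()
    }
    return min(
        centroids,
        key=lambda lbl: sum((q - c) ** 2 for q, c in zip(query, centroids[lbl])),
    )

train = {
    "sedan": [(4.5, 1.40), (4.7, 1.50), (4.6, 1.45)],  # (length m, height m)
    "truck": [(6.5, 2.50), (7.0, 2.60), (6.8, 2.55)],
}

trabant = (3.5, 1.4)  # a car type the model has never seen
print(nearest_centroid(train, trabant))  # -> "sedan": the gap is baked in
```

The fix isn't cleverer code; it's broader data, and people in the room who notice what's missing.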

“It’s not just about having transparency in data. We actually need to make the numbers move in the right direction.”

Tracy Chou Project Include


How do humans create more inclusive labs and workspaces? A number of projects and individuals are taking on that challenge. This year, Li—who is also chief scientist of AI and machine learning at Google Cloud—and others helped launch AI4ALL. The national nonprofit is aimed at bringing greater diversity to AI and has engaged experts in genomics, robotics, and sustainability as mentors. It’s building on the work of SAILORS but also targeting people of color and low-income students across the country through partnerships with Princeton, UC Berkeley, and Carnegie Mellon, in addition to Stanford. “We had a lot of colleagues and industry leaders coming up to us and saying, ‘SAILORS is great, but it’s just Stanford serving a few dozen students per year, mostly from the Bay Area,’ ” Li says. “So AI4ALL is about diversity and inclusion. It’s not only gender.”

Other similar initiatives include Code Next, Google’s Oakland-based effort to encourage Latino and African American students to explore careers in tech; DIY Girls, an educational and mentoring STEAM (science, technology, engineering, art, and math) program for under-resourced communities in Los Angeles; and Project Include, which helps new and midstage startups hire more women and people of color. Tracy Chou, formerly of Pinterest, founded Project Include last year with seven other prominent women in the tech industry. In 2013, Chou famously urged tech companies to come clean about how many women they employed. As the numbers trickled in, they substantiated what everyone in Silicon Valley knew: The tech world, from the biggest corporation to the smallest startup, is overwhelmingly white and male. Project Include, says Chou, was the logical next step. “After a couple of years of these data reports coming out and not a lot of change happening, there started to be a shift in the conversation,” she says. “Now it’s not just about having transparency in data. We actually need to make the numbers move in the right direction.”

That direction includes making work in the field of AI more accessible to the masses. Relatively few people are employed in AI today, yet we're already seeing robots that care for people and personal assistants that anticipate our needs. With humans controlling the data and the criteria, and machines doing the work, broader and better human input means broader and better results.


In many ways, the democratization of AI is already under way. Take this example: In Japan, a farmer's son used AI to sort his family's harvest of cucumbers by various characteristics. It's the kind of story that appeals to Li, who came to the US from China at age 16 knowing little about her adopted country and even less about New Jersey, where she ended up. After working a variety of odd jobs, from cleaning houses to walking dogs to cashiering at a Chinese restaurant, Li found herself at Princeton, and later at graduate school at Caltech.

Li comes to her work as a triple outsider: an immigrant, a woman, and a person of color in a world dominated by white men. What might have been obstacles for anyone else have become prods for Li. She spends much of her time studying computer vision, a component of machine learning she calls “the killer app of AI.” Computer vision analyzes and identifies visual data and may someday help create more responsive robotic limbs, say, or work through the knottiest of mathematical proofs. But as with all AI, the key to this technology is teaching machines to unpack a wealth of information from different places and perspectives. To be, in essence, visual citizens of the world—not unlike Li.

Fostering a diverse group of creators to shape that world is essential to navigating the sorts of story and technical issues that content strategist Diana Williams encounters every day at ILMxLAB, the top-secret Lucasfilm dream center where developers craft immersive, interactive entertainment—a VR encounter with Darth Vader, perhaps—inspired by the vast Star Wars universe. Williams is deeply involved in pro-tech organizations like Black Girls Code and remembers the dearth of women of color at her college in the ’80s. “I was always the only one in my math classes, the only one in my business classes,” she says. “That gets tiring, and it gets scary.” Her solution for steering more women toward tech: “Start them young and get them strong in their confidence, so that when they walk into the room and they’re the only ones there, they don’t turn around.”

“Start them young and get them strong in their confidence, so that when they walk into the room and they’re the only ones there, they don’t turn around.”

Diana Williams Lucasfilm


Maya Gupta, a machine-learning researcher at Google, is working to improve AI, albeit from a different angle. At Stanford, she helped a Norwegian company detect cracks in its underwater gas pipelines. “You can’t go in there very well, so we had to use partial information to try to guess,” she says. Teaching machines to make nuanced guesses is familiar terrain to Gupta. If you’re on YouTube listening to tenor saxophonist Kamasi Washington’s “Truth” and the music effortlessly segues into Alice Coltrane’s gorgeous “Turiya and Ramakrishna,” like the work of the smartest DJ you never knew, thank Gupta, whose team helps computers fine-tune their recommendations. “It’s all about predicting, right?” she says. “You’re trying to guess what’s going on with limited data.”
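The flavor of that guessing game fits in a few lines. The listeners, tracks, and ratings below are invented, and production recommenders are vastly more sophisticated, but the core move is the same: lean on the overlap you do have to fill in what you don't.

```python
# Guessing from limited data: recommend a track by finding the listener
# whose few known ratings best agree with yours. All data here is made up.

ratings = {
    "ana":  {"Truth": 5, "Giant Steps": 4, "Turiya and Ramakrishna": 5},
    "ben":  {"Truth": 5, "Giant Steps": 3},              # partial data only
    "cara": {"Giant Steps": 1, "Turiya and Ramakrishna": 2},
}

def similarity(a, b):
    """Share of co-rated tracks on which two listeners roughly agree."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(1 for t in shared if abs(a[t] - b[t]) <= 1) / len(shared)

def recommend(user):
    me = ratings[user]
    # The most similar other listener, judged only on the overlap we have.
    peer = max(
        (u for u in ratings if u != user),
        key=lambda u: similarity(me, ratings[u]),
    )
    unheard = [t for t in ratings[peer] if t not in me]
    return max(unheard, key=lambda t: ratings[peer][t]) if unheard else None

print(recommend("ben"))  # -> "Turiya and Ramakrishna", borrowed from ana
```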

Today she’s leading a research and development team at Google to, among other things, create greater accuracy in machine learning. “Let’s say I want to be equally accurate at identifying a Boston accent and a Texas accent, but I have a speech recognizer that’s a little better at the Texas one,” she says. “Should I penalize the people with a Texas accent by making the recognition just as bad as it is for Boston, to be fair? And what if it’s simply harder to recognize people speaking with a Boston accent?”
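Her question can be made concrete with invented numbers. Suppose the recognizer is right 92 times in 100 for Texas accents but only 84 in 100 for Boston ones; the sketch below shows why "equalizing" by throttling the better group closes the gap without helping anyone.

```python
# Hypothetical per-accent accuracy for a speech recognizer, illustrating
# the fairness trade-off Gupta describes. The numbers are invented.

groups = {"texas": 92 / 100, "boston": 84 / 100}

gap = groups["texas"] - groups["boston"]
print(f"accuracy gap: {gap:.2f}")  # 0.08

# Option A: degrade Texas recognition to Boston's level. The gap vanishes,
# but Texas speakers are served worse and Boston speakers gain nothing.
equalized = {g: min(groups.values()) for g in groups}
print(equalized)  # both at 0.84

# Option B: accept the gap for now and invest in raising the floor --
# harder still if Boston accents are intrinsically tougher to parse.
```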

Gupta and her team are also refining systems that would be infinitely more transparent than their carbon-based designers. With machines, the hope goes, we can eliminate many of the biases or subconscious processes that plague human thought—or at least more easily recognize them when they emerge. Machines don’t lose focus when they’re tired, or irritable, or hungry. A study showed that judges are less apt to grant parole right before lunch, when they’re thinking of sandwiches rather than sidebars. “It’s hard to measure what’s really going on in the minds of humans,” Gupta says. “We want our machine-learning systems to be explainable, and frankly many of them are already more explainable than humans are.”

“We want our machine-learning systems to be explainable, and frankly many of them are already more explainable than humans are.”

Maya Gupta Google


As AI becomes increasingly useful—not to mention easier to use—the push is on to place it into as many hands as possible. Christine Robson, an IBM researcher before coming to Google, is an enthusiastic champion of open source software like TensorFlow, a machine-learning system that can be used for a host of tasks, from translating languages to spotting illnesses to creating original art.

For Robson, inclusivity in AI means making its tools accessible to more than just self-professed math nerds like herself. “I’m excited about the availability of machine learning to the world,” she says. “We talk a lot about democratizing machine learning, but I am a big believer in this. Making these tools really easy to use, and making these techniques possible for everybody to apply, is just so critical.”
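As a sense of how low the barrier has become, a few lines of open source TensorFlow are enough to define and train a small model end to end. The data below is synthetic noise, there only to show the shape of the workflow, not any particular task.

```python
# A minimal TensorFlow/Keras workflow on synthetic data: build a tiny
# network, train it, and predict. Nothing here is domain-specific.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 4).astype("float32")  # 256 examples, 4 features
y = (x.sum(axis=1) > 2.0).astype("float32")   # a made-up binary label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)

print(model.predict(x[:3]))  # probabilities for the first three examples
```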


Sci-fi literature and film have long proffered examples of AI gone awry (Mary Shelley’s Frankenstein turns 200 next year). Today, many in the industry—including Li, Robson, and Chou—are concerned less about what AI might do to us and more about what we humans might do to AI. An example: Programmers give virtual assistants a female voice because, well, men and women alike tend to prefer one. “But it perpetuates this idea that assistants are female, so when we engage with these systems, it reinforces that social bias,” says Chou. Many of the field’s best minds worry about what’s going into real-life AI systems—and thus what’s going to emerge. That’s where the push for greater diversity in AI comes in. Little of this will be easy. But its proponents are smart, resourceful, and committed to the cause.

“Making these tools really easy to use, and making these techniques possible for everybody to apply, is just so critical.”

Christine Robson Google


“We have to make sure that everyone feels welcome,” Gupta says. She recalls the wall of photographs of retired electrical-engineering professors at her alma mater, Rice, that “did not look like me.” “We need to convince girls that AI isn’t magic,” adds Robson. “It’s math.”

At SAILORS, students are learning how to use natural language processing to search social media and aid in disaster relief. “It would help rescuers discover people in need in real time, using their Twitter messages,” Li says. The effects of the classes and projects last well past the unforgettable summers. Some of the students have started their own robotics clubs at school, published pieces in scientific journals, and held workshops at middle schools to spread the gospel of AI to even younger girls. For these students, whose backgrounds and experiences are as diverse as the myriad projects they tackled at camp, AI isn’t the latest cool gadget, but a powerful force for good. In the lead-up to the first SAILORS gathering in 2015, the program shared messages from incoming campers, including this ambitious wish: “I hope to begin my AI journey now so I can make an impact on the world in the future.”
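The kernel of a system like the students' can start remarkably small. Below is a toy sketch with invented messages and a hand-written keyword filter standing in for real natural language processing, which would rely on trained models rather than word lists.

```python
# A toy stand-in for the disaster-relief idea: scan (invented) social media
# posts for words that may signal someone needs rescue.

NEED_WORDS = {"trapped", "stranded", "help", "flooding", "injured"}

def may_need_rescue(message: str) -> bool:
    """Very rough filter: does the message contain any distress keyword?"""
    words = set(message.lower().replace(",", " ").split())
    return bool(words & NEED_WORDS)

posts = [
    "We are trapped on the roof, water rising fast",
    "Beautiful sunset over the bay tonight",
    "Grandma stranded on Oak St, needs insulin",
]

for p in posts:
    if may_need_rescue(p):
        print("POSSIBLE RESCUE NEED:", p)
```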

Learn more about Google's AI initiatives

Related Stories

How AI is helping doctors prevent blindness in diabetics

Meet the team using machine learning to help save the world's bees

How AI is helping provide advance warnings in emergency situations

A young music lover finds his path by following the call of the whales

A tribe uses cell phones and TensorFlow to fight illegal logging in the Amazon