In February, Microsoft Vice President Derrick Connell visited the Bing search team in Hyderabad, India, to oversee a Monday morning hackathon. The goal was to build bots, artificial intelligence programs that chat with users, to automate things like shopping and customer service. Connell’s boss, Chief Executive Officer Satya Nadella, thinks they’re the next big thing, a successor to apps.
The Bing team was so excited that members showed up Sunday night to throw a party, bringing their spouses and kids. There was even the Indian version of a piñata. Some engineers hacked together a Satya-bot that answered questions like “what’s Microsoft’s mission?” and “where did you go to college?” in Nadella’s voice by culling quotes from his speeches and interviews.
Connell thought it was a clever way to show how the technology worked and told Nadella about it, thinking he’d be flattered. But the CEO was weirded out by a computer program spitting back his words.
“I don’t like the idea,” said Nadella, half laughing, half grimacing on a walk to a secret room earlier this month to preview bot and AI capabilities he demonstrated Wednesday at Microsoft’s Build conference. “I shudder to think about it.”
As Microsoft unveils a big bot push at the conference, after a year of increased focus on AI and machine learning, Nadella’s discomfort illustrates a key challenge. Microsoft must balance the cool and creepy aspects of the technology as it releases tools for developers to write their own bots, as well as its own, like Tay, a snarky chat bot that ran amok last week.
“We may want to add emotional intelligence to a lot of what we do, and Tay is an example of that, but I don’t want to overstate it to a point where somehow human contact is something that is being replaced,” he said earlier in March.
Microsoft quickly yanked Tay after naughty netizens taught it to spew racist, sexist and pornographic remarks. The company plans to reintroduce Tay, but the experience left it mulling the ethics of the brave new bot world and how much control it should exert over how people use its tools and products.
Nadella in his keynote Wednesday listed ethical principles he wants the company to stand for, including human-centered design and trust.
“All technology we build has to be more inclusive and respectful,” Nadella said in the keynote. “So it gets the best of humanity, not the worst.”
Microsoft will try to keep humans at the center of AI and avoid unnerving users, even as it uses bots and machine intelligence to help customers comb through corporate data, socialize and make purchases, and its research arm digs deeper into the field.
“I do want the human to be in the loop,” he said earlier this month. “I don’t want AI overlords, nor do I want AI servants. I just want AI to help me as a human. That’s a design principle that I want Microsoft to stand for.”
And regardless of how bullish he is on AI, Nadella is mindful of its limits.
Microsoft technology can scan a face or photo and recognize a person’s emotional state, whether smiling or sad. That’s nice, said Nadella, but it’s not real human understanding. To say otherwise “is a little bit of tech people getting too high on our own supply.”