Inside the Head of AI Scientist Shane Legg

New Zealander Shane Legg, chief scientist at DeepMind Technologies, gives significantly fewer talks and far fewer quotes to journalists than his fellow co-founders, CEO Demis Hassabis and head of applied AI Mustafa Suleyman.

Legg, 43, remains something of an unknown quantity, choosing to talk about his work only at the occasional academic conference or university lecture. With the exception of one rare interview, you’ll be hard pushed to find many stories about the British artificial intelligence company DeepMind that contain quotes from the safety-conscious co-founder, who mathematically defined intelligence as part of his PhD work with researcher Marcus Hutter.

Much of Legg’s time is dedicated to hiring and to deciding where DeepMind, which employs around 400 people across two floors of a Google office in King’s Cross, should focus its efforts next. Arguably more importantly, he also leads DeepMind’s work on AI safety, which recently included developing a “big red button” to turn off machines when they start behaving in ways that humans don’t want them to.

Scientists like Stephen Hawking and billionaires like Elon Musk have warned that we need to start thinking about the dangers of “agents” that think for themselves. DeepMind is currently building some of the smartest “agents” on the planet, so it’s probably worth knowing who all of the company’s founders are, what they believe in, what they’re working on, and how they’re doing it.

Before DeepMind, Legg spent several years in academia, completing a PhD at the Dalle Molle Institute for Artificial Intelligence Research (IDSIA) and a postdoc in finance at the University of Lugano in Switzerland. He also held a research associate position at University College London’s Gatsby Computational Neuroscience Unit and a number of software development positions at private companies, including big data firm Adaptive Intelligence.

Prior to the Google acquisition, the New Zealand-born scientist gave a revealing interview to a publication called Less Wrong in 2011, a year after DeepMind was formed, in which he talked very candidly about the risks associated with AI.

“What probability do you assign to the possibility of negative/extremely negative consequences as a result of badly done AI?” Legg was asked, with “negative” defined as “human extinction” and “extremely negative” defined as “humans suffer.”

Responding to the question, Legg said: “Depends a lot on how you define things. Eventually, I think human extinction will probably occur, and technology will likely play a part in this. But there’s a big difference between this being within a year of something like human-level AI, and within a million years.”

Original article by Sam Shead, Business Insider, January 26, 2017.

Photo by DeepMind.


