We should have a balanced discussion around AI, say leading researchers in computer science.

Artificial intelligence is no longer a futuristic promise; it is here, and it seems to be embedded almost everywhere. Few technologies have spread so quickly, and few have split opinion so sharply. To some, AI is the dawn of a new golden age; to others, a ticking time bomb. This tension between possibility and risk was also visible in a live poll conducted with the audience at this year's 12th Heidelberg Laureate Forum, where "deepfakes and misinformation" was voted the most important AI challenge of the next 10 years, followed by concerns about ethics and privacy.

Beneath all this tension is one key question: How do we make sure AI works for people, not against them?

This week, our blog contributor Andrei Mihai analyzes a talk on that subject by Jeff Dean (Chief Scientist, Google DeepMind and Google Research; ACM Prize in Computing – 2012) and David Patterson (ACM A.M. Turing Award – 2017), who spoke at the 12th HLF.

Check out the full article here: HLFF Blog

Image caption: Jeff Dean and David Patterson during their talk at the 12th Heidelberg Laureate Forum. Image credits: HLFF.