AI Conversation, Robot Trust, AI Music. May 18, 2018, Part 2
Should autonomy be the holy grail of artificial intelligence? Computer scientist Justine Cassell has been working for decades on interdependence instead—AI that can hold conversations with us, teach us, and otherwise develop good rapport with us. She joined Ira live on stage at the Carnegie Library of Homestead Music Hall in Pittsburgh to introduce us to SARA, a virtual assistant that helped world leaders navigate the World Economic Forum last year. Cassell discusses the value of studying relationships in building a new generation of more trustworthy AI.
Robot assistants talk to us from our phones. Home robots have faces and facial expressions. But many of the robots that might enter our lives will have no such analogs to help us trust and understand them. What’s a roboticist to do? Madeline Gannon, a Carnegie Mellon research fellow, artist, and roboticist for NVIDIA, trains industrial robots to use body language to communicate, while Henny Admoni, psychologist and assistant professor of robotics at Carnegie Mellon University, teaches assistive technology to anticipate the needs of its users.
The pop hits of the future might be written not by human musicians, but by machine-learning algorithms that have learned the rules of catchy music and apply them to create never-before-heard melodies. Those tunes may not even require human hands to be heard, because a growing army of musical robots, from bagpipes to xylophones, can already play themselves, and even improvise. We talk with computer scientist Roger Dannenberg and artist-roboticist Eric Singer about the implications of computerized composition, and unveil a song created by AI. (We’ll let you judge whether it’s worthy of the Top 40.)