Invited speakers

Prof. Michael Beetz, University of Bremen

Title: Automated Models of Everyday Activity

Recently we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of these tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from achieving the human ability to autonomously perform a wide range of everyday tasks reliably in a wide range of contexts. In other words, they are far from mastering everyday activities. Mastering everyday activities is an important step for robots to become the competent (co-)workers, assistants, and companions that are widely considered a necessity for dealing with the enormous challenges our aging society is facing.

Modern simulation-based game technologies give us for the first time the opportunity to acquire the commonsense and naive physics knowledge needed for the mastery of everyday activities in a comprehensive way.
In this talk I will describe AMEvA (Automated Models of Everyday Activities), a special-purpose knowledge acquisition, interpretation, and processing system for human everyday manipulation activity that can automatically

(1) create and simulate virtual human living and working environments (such as kitchens and apartments) with a scope, extent, level of detail, physics, and photo-realism that facilitates and promotes the natural and realistic execution of human everyday manipulation activities;
(2) record human manipulation activities performed in the respective virtual reality environment as well as their effects on the environment and detect force-dynamic states and events;
(3) decompose and segment the recorded activity data into meaningful motions and categorize the motions according to action models used in cognitive science; and
(4) represent the interpreted activities symbolically in KnowRob using first-order time interval logic formulas linked to subsymbolic data streams.
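
The final step above, representing detected events as time-interval logic assertions, can be illustrated with a minimal sketch. The class and predicate names here are hypothetical stand-ins, not KnowRob's actual API; only the idea of pairing a symbolic predicate with a time interval and reasoning over Allen-style interval relations is taken from the abstract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float

    def before(self, other):
        # Allen's "before": this interval ends strictly before the other starts
        return self.end < other.start

    def during(self, other):
        # Allen's "during": this interval is strictly contained in the other
        return other.start < self.start and self.end < other.end

@dataclass(frozen=True)
class Event:
    predicate: str   # symbolic description, e.g. "Grasping(hand, cup)"
    interval: Interval

# Two force-dynamic events as they might be detected in a recorded VR episode
grasp = Event("Grasping(hand, cup)", Interval(2.0, 3.5))
pour = Event("Pouring(cup, bowl)", Interval(4.0, 6.0))

# A temporal assertion over the symbolic events
print(grasp.interval.before(pour.interval))  # -> True
```

Such interval assertions are what link the symbolic activity description back to the underlying subsymbolic data streams.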

Prof. Yiannis Demiris, Imperial College

Title: Machine Learning for Personalised Human-Robot Interaction

In my talk I will describe our research on using machine learning methods to personalise the interaction between robots and their human users. I will argue that hierarchical methods, applying machine
learning at different levels of abstraction, are needed to address the diverse needs and opportunities that arise in adaptive human-robot interaction over short and long time scales.
I will give examples from our research in personalised human-robot interaction for children and adults with disabilities, as well as real-time human-robot collaboration in cognitive and musical tasks.

Prof. Wataru Takano, Osaka University

Title: Physically Consistent Motions, Symbolic Representation, and Language

Language is a symbolic system unique to humans. We handle the variety of the real world by breaking it down or putting it together in linguistic form. This symbolic system is the core of our high-level inference, and it is necessary for a humanoid robot that is to be integrated into our daily life. This talk presents two contributions toward constructing such intelligence from human and robot motions and the associated language. The first contribution is to encode human whole-body motions into stochastic models, referred to as motion symbols, and to generate language from the motion symbols. The second contribution is to generate physically consistent whole-body robot motions from the motion symbols. This artificial intelligence makes it possible for the robot to interpret observations of human actions in linguistic form, and to generate its own whole-body motions from language commands.
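
The first direction, recognizing a motion as a symbol and generating language from it, can be sketched at a toy scale. The prototype vectors below stand in for the stochastic motion models of the talk (e.g. trained sequence models), and all names and probabilities are illustrative assumptions, not the author's actual system.

```python
import numpy as np

# Hypothetical "motion symbols": prototype feature vectors standing in
# for trained stochastic motion models.
prototypes = {
    "walk": np.array([1.0, 0.1]),
    "wave": np.array([0.1, 1.0]),
}

# Symbol -> word probabilities (illustrative numbers only)
word_probs = {
    "walk": {"walking": 0.8, "moving": 0.2},
    "wave": {"waving": 0.9, "greeting": 0.1},
}

def recognize(features):
    """Map an observed motion feature vector to the nearest motion symbol."""
    return min(prototypes, key=lambda s: np.linalg.norm(prototypes[s] - features))

def describe(features):
    """Generate the most probable word for the recognized motion symbol."""
    symbol = recognize(features)
    return max(word_probs[symbol], key=word_probs[symbol].get)

print(describe(np.array([0.9, 0.0])))  # -> walking
```

Running the mapping in the other direction, from a word back to a motion symbol and then to a physically consistent motion, is the talk's second contribution.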

Prof. George Konidaris, Brown University

Title: Robots, Skills, and Symbols

I will discuss recent results on the connection between high-level skills and abstract representations. My approach is to formalize the queries that an abstract representation should support in order to reason about plans composed of a set of motor skills. I will show how to construct a symbolic representation that is both necessary and sufficient for planning, and how that representation can be learned autonomously by the robot. I will also show a demonstration of a robot learning, and then planning with, an abstract symbolic representation of a mobile manipulation problem, directly from sensorimotor data.
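
The core idea, a symbolic representation whose queries suffice for planning over motor skills, can be sketched with a STRIPS-style toy domain. The skill names, preconditions, and effects below are hand-written illustrations (the talk's point is that such symbols can be learned autonomously), and the planner is a plain breadth-first search.

```python
from collections import deque

# Skills in a hypothetical mobile-manipulation domain:
# name -> (preconditions, add effects, delete effects)
skills = {
    "goto_table": ({"at_door"}, {"at_table"}, {"at_door"}),
    "pick_cup": ({"at_table", "hand_free"}, {"holding_cup"}, {"hand_free"}),
}

def plan(start, goal):
    """Breadth-first search over skill sequences using only the symbols."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, (pre, add, delete) in skills.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None  # goal unreachable with these skills

print(plan({"at_door", "hand_free"}, {"holding_cup"}))
# -> ['goto_table', 'pick_cup']
```

The representation is "sufficient" in the sense that these set-inclusion queries are all the planner ever needs to evaluate.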

Prof. Tomoaki Nakamura, The University of Electro-Communications

Title: Toward Realization of Intelligent Robots That Can Learn Concepts and Language

We define concepts as the categories into which a robot classifies perceptual information obtained through interaction with others and the environment, and we define understanding as the inference of
unobserved information through these concepts. Furthermore, by connecting concepts and words, a robot can infer unobserved perceptual information from words; this inference constitutes the understanding
of word meanings. We have proposed probabilistic models that enable robots to learn concepts and language, and this talk gives an overview of them.
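
The notion of understanding a word as predicting unobserved perception can be sketched in miniature. The concepts, word counts, and feature values below are invented illustrative numbers, not the proposed models themselves, which are full probabilistic models learned from multimodal data.

```python
# Hypothetical concepts learned from multimodal observations: each concept
# stores word co-occurrence counts and typical perceptual feature values.
concepts = {
    "apple": {"words": {"apple": 8, "fruit": 2}, "features": {"red": 0.9, "soft": 0.2}},
    "towel": {"words": {"towel": 7, "cloth": 3}, "features": {"red": 0.1, "soft": 0.9}},
}

def infer_concept(word):
    """Pick the concept under which the word is most probable."""
    def prob(c):
        counts = concepts[c]["words"]
        return counts.get(word, 0) / sum(counts.values())
    return max(concepts, key=prob)

def predict_feature(word, feature):
    """'Understanding' a word: infer an unobserved percept from it."""
    return concepts[infer_concept(word)]["features"][feature]

print(predict_feature("towel", "soft"))  # -> 0.9
```

Hearing "towel" thus lets the robot predict a percept (softness) it has not yet observed, which is the sense of word understanding defined above.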

Dr. Dustin Tran, Columbia University

Title: Deep Probabilistic Programming with Edward

Probabilistic modeling is a powerful approach for analyzing empirical information, spanning advances in science as well as perception-based and cognitive tasks in artificial intelligence. In this talk, I will
provide both an overview of and the most recent advances in Edward, a probabilistic programming system built on computational graphs. Edward supports composition of both models and inference for flexible
experimentation, ranging from composable modeling blocks such as neural networks, graphical models, and Bayesian nonparametrics, to composable inference methods such as point estimation,
variational inference, and MCMC. In particular, I will show how Edward can be applied to expand the frontier of deep generative models and variational inference.
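
The separation of model from inference can be illustrated on a toy conjugate Gaussian model. This sketch uses plain NumPy rather than Edward's actual API, and it exploits the fact that for this model both the MAP point estimate and the optimal Gaussian variational posterior have closed forms; a system like Edward would recover the same answers by optimizing the ELBO on a computational graph.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model: theta ~ N(0, 1);  x_i | theta ~ N(theta, 1)   (conjugate Gaussian)
x = rng.normal(2.0, 1.0, size=50)
n = len(x)

# Point estimation (MAP): maximize log p(theta) + sum_i log p(x_i | theta)
map_theta = x.sum() / (n + 1)

# Variational inference: fit q(theta) = N(mu, sigma2) by maximizing the
# ELBO; for this conjugate model the optimum is the exact posterior,
# whose mean happens to coincide with the MAP estimate.
mu = x.sum() / (n + 1)
sigma2 = 1.0 / (n + 1)

print(round(mu, 2), round(sigma2, 4))
```

In Edward, the same two inferences would be expressed by swapping the inference object over an unchanged model, which is the composability the abstract refers to.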

Tadahiro Taniguchi,
Oct 15, 2016, 10:28 PM