The results of the exam Visual Computing for Communication (VCC) can now be found on MeinCampus. Examination records can be inspected on April 5th, 2016, between 8:00 and 9:00 a.m. in room N 6.17.
Presenter: Prof. Jan Peters, Technische Universität Darmstadt, Max Planck Institute for Intelligent Systems
Room: H15 (Hans-Wilhelm-Schüßler Lecture Hall)
Chair: Prof. Walter Kellermann
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework for learning motor skills in robotics that is based on the principles behind many analytical robotics approaches. It involves representing motor skills by parameterized motor primitive policies, which act as building blocks of movement generation, together with a learned task execution module that transforms these movements into motor commands. We discuss learning on three levels of abstraction: learning for accurate control is needed for execution, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives allows complex tasks to be learned. We discuss task-appropriate learning approaches for imitation learning, model learning, and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulating various objects.
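The "parameterized motor primitive policies" mentioned in the abstract are, in this line of research, often realized as dynamic movement primitives (DMPs): a stable spring-damper system pulled toward a goal, shaped by a learnable forcing term. A minimal single-degree-of-freedom rollout sketch follows; all parameter names and constants are illustrative choices, not taken from the talk.

```python
import numpy as np

def dmp_rollout(y0, g, weights, dt=0.001, T=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    """Roll out a discrete dynamic movement primitive for one joint.

    y0: start position, g: goal, weights: parameters of the forcing
    term (the part that is learned, e.g., from a demonstration).
    Returns the generated trajectory as a NumPy array.
    """
    n = len(weights)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n))  # basis centres in phase
    h = n ** 1.5 / c                                 # basis widths (heuristic)
    y, z, x = float(y0), 0.0, 1.0                    # position, velocity, phase
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-h * (x - c) ** 2)              # Gaussian basis activations
        # Forcing term: vanishes as the phase x decays, so the goal attractor
        # dominates at the end of the movement and convergence is guaranteed.
        f = x * (g - y0) * (psi @ weights) / (psi.sum() + 1e-12)
        z += dt * (alpha_z * (beta_z * (g - y) - z) + f)  # spring-damper + forcing
        y += dt * z
        x += dt * (-alpha_x * x)                     # phase decays to zero
        traj.append(y)
    return np.array(traj)
```

With all weights zero the forcing term disappears and the critically damped spring-damper simply converges from y0 to the goal g; learning the weights (e.g., by imitation) shapes the transient into the demonstrated movement while preserving that convergence.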
Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universität Darmstadt and, at the same time, a senior research scientist and group leader at the Max Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group. Jan Peters has received the 2007 Dick Volz Best US PhD Thesis Runner-Up Award, the Robotics: Science & Systems Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society's Early Career Award, as well as numerous best paper awards. In 2015, he was awarded an ERC Starting Grant.
Jan Peters studied Computer Science, Electrical, Mechanical, and Control Engineering at TU Munich and FernUni Hagen in Germany, at the National University of Singapore (NUS), and at the University of Southern California (USC). He has received four Master's degrees in these disciplines as well as a Computer Science PhD from USC.
The EARS@HOME team led by the LMS won the Functionality Benchmark FBM3 "Speech Understanding" of the RoCKIn@Home Challenge, which took place in Lisbon from November 19 to 23, 2015. The team members work within the EU-funded project Embodied Audition for RobotS (EARS), which the LMS leads as project coordinator.
This talk will be given as part of the EEI Colloquium (Elektrotechnisches Kolloquium des Departments EEI)
Title: An Overview on Robot Audition
Presenter: Prof. Hiroshi G. Okuno
Affiliation: Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University
Duration: about 30 min (without discussion)
Auditory processing for a robot, that is, robot audition, is essential to human-robot interaction, since the robot is expected to communicate with humans as we do. The main problem in robot audition is that the input signals captured by robot-embedded microphones are contaminated by noise sources such as environmental noise, interruptions by other speakers, ego-motion noise of the robot, and so on. This talk will show, supported by demo videos, how robot audition can solve these problems, e.g., by means of binaural processing and microphone array processing.
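As an illustration of the microphone array processing mentioned in the abstract, the following is a minimal delay-and-sum beamformer sketch in NumPy. All names and conventions here are illustrative assumptions, not taken from any particular robot audition system: each channel is delayed so that a plane wave from a chosen direction adds up coherently, while sound from other directions is attenuated.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, doa, fs, c=343.0):
    """Delay-and-sum beamformer for a microphone array (sketch).

    signals: (num_mics, num_samples) synchronous microphone channels.
    mic_positions: (num_mics, 3) microphone coordinates in metres.
    doa: unit vector pointing from the array toward the source.
    fs: sampling rate in Hz; c: speed of sound in m/s.
    """
    num_mics, num_samples = signals.shape
    # Plane-wave arrival time at each mic, in samples (closer mics hear it earlier).
    arrival = -(mic_positions @ doa) / c * fs
    # Delay each channel so that all arrivals line up with the latest one.
    d = arrival.max() - arrival
    # Apply the (fractional) delays in the frequency domain: a delay of d
    # samples multiplies the spectrum by exp(-2j*pi*f*d).
    freqs = np.fft.rfftfreq(num_samples)                # cycles per sample
    spectra = np.fft.rfft(signals, axis=1)
    phase = np.exp(-2j * np.pi * np.outer(d, freqs))
    aligned = np.fft.irfft(spectra * phase, n=num_samples, axis=1)
    # Coherent average: the target direction adds up, diffuse noise averages out.
    return aligned.mean(axis=0)
```

Note that the frequency-domain delay is circular, which is acceptable for a sketch on short frames; a practical system would process overlapping windowed frames and typically combine beamforming with the ego-noise suppression discussed in the talk.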
The results of the Image and Video Compression (IVC) exam are now available on MeinCampus. A proposed solution is pinned to the bulletin board of the chair (6th floor).
Examination records can be inspected on October 13th, 2015 between 8:00 and 9:00 a.m. in room 6.20.
As part of a research project in cooperation with Siemens, the highly efficient lossless codec "Vanilc" has been developed for image compression using autoregressive least-squares pixel prediction. It is now available for free on the LMS institute's website. Contact: Dipl.-Ing. Andreas Weinlich
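The core idea behind least-squares pixel prediction can be sketched in a few lines; this is only an illustration of the principle, not the actual Vanilc algorithm (which adapts a weighted least-squares predictor locally for every pixel). Here a single linear predictor is fitted that maps the four causal neighbours of each pixel to its value; a lossless coder would then entropy-code the small integer residuals instead of the raw pixels.

```python
import numpy as np

def ls_predict_residuals(img):
    """Fit one least-squares linear predictor over the causal neighbourhood
    (W, NW, N, NE) of every interior pixel and return the prediction
    residuals that an entropy coder would compress losslessly.
    img: 2-D greyscale image array."""
    img = img.astype(np.float64)
    # Causal neighbours of all interior pixels (first row/column skipped,
    # since they have no complete causal context).
    W  = img[1:, :-2]    # left
    NW = img[:-1, :-2]   # top-left
    N  = img[:-1, 1:-1]  # top
    NE = img[:-1, 2:]    # top-right
    X = np.stack([W.ravel(), NW.ravel(), N.ravel(), NE.ravel()], axis=1)
    y = img[1:, 1:-1].ravel()
    # Least-squares fit of the autoregressive predictor coefficients.
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ coeffs
    # Integer residuals: smooth image regions yield values near zero,
    # which compress far better than the original pixel values.
    residuals = np.rint(y - pred).astype(np.int64)
    return coeffs, residuals
```

On smooth image content the residuals concentrate tightly around zero, so their entropy, and hence the coded bitrate, is much lower than that of the raw pixels; the border pixels left out here would be handled by a simpler fallback predictor in a complete codec.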
The results of the exam Visual Computing for Communication (VCC) can now be found on MeinCampus. Examination records can be inspected on March 26th, 2015, between 8:00 and 9:00 a.m. in room N 6.17.
"A great honour for two FAU researchers: Prof. Dr. Elke Lütjen-Drecoll has been awarded the Federal Cross of Merit, 1st Class (Bundesverdienstkreuz 1. Klasse), and Prof. Dr. Dr. Helga Schüßler the Federal Cross of Merit on Ribbon (Bundesverdienstkreuz am Bande), in recognition of their outstanding services to science." - Source: https://www.fau.de/2014/12/19/news/zwei-bundesverdienstkreuze-fuer-fau-wissenschaftlerinnen/
The exercises accompanying "Statistical Signal Processing", given by Dr.-Ing. Roland Maas, were ranked second out of 51 courses in the teaching evaluation, with an overall rating of 1.23 in category ÜP5 (exercises for a compulsory subject).