Software design and architectures for interactive avatars: a hands-on approach (Seminar, WS 2017-2018)


Important News

[2018-05-16] The video showing the final project summary is finally online:

[2018-02-12] Final project presentation on Friday 16th February 2018, E1.1 room 121.

[2017-10-12] Inserted the calendar of the classes. First class: 27th October 2017.

[2017-08-24] The seminar is already filled with 20 participation requests. New applications will be placed on a waiting list.

[2017-07-13] Seminar announced. With respect to the previous year, we will use Unity (instead of the Blender Game Engine) as the animation platform.


“Software design and architectures for interactive avatars: a hands-on approach” is a seminar for the students of the Saarland University, Germany, Winter Semester 2017-2018.

In this seminar, you will discover the hidden mechanics driving current autonomous embodied conversational agents.
The seminar will last about 12 weeks and is composed of four parts.
In the first part, we will introduce the fundamentals of 3D rendering and character animation.
In the second part, we will give an overview of the state-of-the-art implementations.
In the third part, you will get familiar with the authoring pipeline for the production of interactive avatars.
In the last part, you will implement your own interactive avatar in a group project.

This seminar is not of the kind “read a paper, present it, listen to the others”. It will rather be hands-on work: using several 3D applications, studying APIs, developing intermediate demos, and showing a final demo.

Students will receive several mini-assignments throughout the whole seminar.
Regular meetings will allow you to share your experiences and progress.

Participants are expected to attend the seminar with their own laptop and to reserve at least 10 gigabytes of disk space for the installation of the needed software.


For questions and registration, send an e-mail with your matriculation number to: fabrizio_punkt_nunnari_at_dfki_punkt_de.

Later, students have to register in HISPOS to receive their grades. An official e-mail will follow at the beginning of the semester.


  • Fundamentals of 3D Characters’ Rendering and Animation;
  • Architectures for Embodied Conversational Agents;
  • Character Generation Tools & Techniques;
  • Authoring Support (Blender Edit);
  • Real-time control (Unity Game Engine);
  • Project reports.


Classes: every Friday @ 10:00 until 12:00.

First class: 27th October 2017.

27.10.2017 – First class.
03.11.2017 – No Class!
10.11.2017 – Regular class.
17.11.2017 – Regular class.
24.11.2017 – Regular class.
01.12.2017 – Regular class.
08.12.2017 – Regular class.
15.12.2017 – Regular class.
22.12.2017 – TBA
29.12.2017 – No Class!
05.01.2018 – TBA
12.01.2018 – Regular class.
19.01.2018 – Regular class.
26.01.2018 – Regular class.
02.02.2018 – Regular class.

Location: Building E1.1, first floor, left corridor, seminar room 1.06.


We will use Blender [] as the authoring platform. Please download Blender and get comfortable with its basic window-manipulation system; follow this tutorial before the third class. Blender uses Python 3 as its scripting language, so get comfortable with the Python syntax before the third class:
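To give a feel for the Python syntax used in Blender scripts, here is a short, self-contained warm-up in plain Python (no Blender API involved; the `Bone` class is a made-up stand-in for illustration, not part of `bpy`):

```python
# A few Python 3 constructs you will meet constantly in Blender scripting:
# functions, lists, comprehensions, and simple classes.

def scale_keyframes(frames, factor):
    """Return a new list with every frame time multiplied by factor."""
    return [f * factor for f in frames]

class Bone:
    """A minimal stand-in for an armature bone (name + rotation in degrees)."""
    def __init__(self, name, rotation=0.0):
        self.name = name
        self.rotation = rotation

    def rotate(self, degrees):
        # keep the angle in [0, 360)
        self.rotation = (self.rotation + degrees) % 360

frames = [1, 5, 10, 20]
print(scale_keyframes(frames, 2))   # [2, 10, 20, 40]

arm = Bone("upper_arm")
arm.rotate(450)
print(arm.rotation)                 # 90.0
```

If reading this snippet feels effortless, you are well prepared for the scripting parts of the seminar.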

For the real-time animation, we will use Unity []. Please get comfortable with the Unity interface by following the essential tutorials. Unity uses C# as its scripting language, so learn some basic C# syntax before the seminar.

The lecturers,

Fabrizio Nunnari & Alexis Heloir

Posted in Research

Now hiring!

The independent “Sign Language Synthesis and Interaction” research group, hosted at DFKI’s Intelligent User Interfaces Lab and funded by the Multimodal Computing and Interaction Cluster, is seeking a new member in the following areas:

  • chatbots,
  • 3D user interfaces,
  • animated conversational agents,
  • performance capture,
  • computer animation,
  • game engines.

We are looking for candidates to work on a selection of the above themes. The contract may start from April 2017 and run until the end of 2018. The position(s) are open until filled.

Ideal candidates will have a background in Computer Animation and/or HCI. A broader interest in issues related to collaborative science, embodied agents, verbal and non-verbal communication, and machine learning would be a plus. In any case, candidates should have outstanding programming skills and experience with practical system development. Thorough experience in software engineering would be desirable. Candidates are also expected to collaborate in writing scientific papers and technical documents, as well as to supervise student assistants (Bachelor and Master).

Highly valued personal qualities include creativity, open-mindedness, high motivation, initiative, team spirit, willingness to address new challenges in unknown scientific territory, integration with the Deaf community, and an interest in collaborating closely within a distributed, multidisciplinary, and international team. Candidates must be fluent in oral and written English, as it is the project language and the language used in our group. Oral command of German, the language of the surroundings, is optional but definitely an advantage.

DFKI GmbH is located on the campus of the Saarland University in Saarbrücken, Germany. The university’s research groups and curricula in the fields of Visual Computing, Computational Linguistics, and Computer Science are internationally renowned. DFKI offers excellent working conditions in a well-established research lab. The position provides opportunities to collaborate in a variety of international projects.

DFKI is a member of the Multimodal Computing and Interaction Cluster together with the Computer Science and Computational Linguistics and Phonetics departments of Saarland University, the Max Planck Institute for Informatics and the newly established Max Planck Institute for Software Systems. As MMCI fellows, candidates will benefit from the outstanding conditions as well as unique collaboration opportunities offered by the participating institutions.

Please send your electronic application (preferably in PDF format) at your earliest convenience. A complete application should include a cover letter, a brief summary of past achievements, a statement of interest in the position(s) offered, the date of earliest availability, and salary expectations.


Contact for questions: Alexis Heloir (cf. contact section)

Posted in Job, Research

Software design and architectures for interactive avatars: a hands-on approach


Important News

[2016-11-12] The Blender file illustrating the first assignment is available. You can download it here : You can either finish the skinning of the provided sample (add the missing bone weights for each vertex) or create your own model from scratch. Send the resulting Blender file, named following the convention firstname_surname.blend, to Fabrizio Nunnari by Wednesday the 16th.

Happy Blending!

[2016-10-24] The schedule has been finalized. Classes on Fridays @ 10:00. First class on Friday, November 4th, 2016.

[2016-10-24] We currently have more than 40 applications for 14 available places. The chances of being picked from the waiting list are very low.


In this seminar, you will discover the hidden mechanics driving current autonomous embodied conversational agents.
In the first two weeks, we will give an overview of the field and a selection of state-of-the-art implementations (e.g., ICT’s Smartbody).
During the following weeks, you will get familiar with the authoring pipeline for the production of interactive avatars, and you will work on a short project. Regular meetings will allow you to share your experiences and progress.

During the last two weeks, wrap-up sessions will be organized. During these sessions, we will sketch together the blueprints of an ideal embodied agent framework.

There will be an official presentation of the seminar on Monday, October 24th, 2016, in the “Günter-Hotz-Lecture-Hall”, building E2 2, at 4:30 p.m.


For questions and registration, send an e-mail with your matriculation number to: fabrizio_punkt_nunnari_at_dfki_punkt_de

Later, students have to register in HISPOS to receive their grades. An official e-mail will follow at the beginning of the semester.


  • Architectures for Embodied Conversational Agents
  • Character Generation Tools & Techniques
  • Authoring Support (Blender Edit)
  • Real-time control (Blender Game Engine)
  • Project reports


Classes: every Friday @ 10:00 until 12:00.

First class: Friday November 4th, 2016 @ 10:00.

Location: building E1.1, first floor, left corridor, seminar room 121.


We will use Blender [] as the authoring and run-time platform. Please download Blender and get comfortable with its basic window-manipulation system; follow this tutorial before the third class.

Blender uses Python 3 as its scripting language. Hence, get comfortable with the Python syntax before the third class:

The lecturers,

Fabrizio Nunnari & Alexis Heloir

Posted in Research

Looking for an EASY Job?

THIS JOB HAS BEEN FILLED. Thanks to everyone who applied.

The MMCI groups “Sign Language Synthesis and Interaction” and “Embodied Spoken Interaction” are collaborating on a project that enables users to interact with a virtual agent that can follow the user’s gaze. We need your help to create the 3D environment for the agent!

[Screenshot: cubes laying on the agent’s table]

Your task will be to gather license-free / Creative Commons objects from the internet (approx. 60) and process them in Blender (fixing orientation, vertex density, and texture conventions) in order to build a convenient library of 3D objects. The objects are supposed to lie on the table and replace the cubes depicted in the enclosed picture.

We offer a HiWi job for 2 months, 8-16 hours per week, where you can gain experience with Blender. If you’re interested and already have some experience with Blender, here is how to apply:

How to apply?

Download this zip file containing two Blender files, 000Barrel.blend and 001Wheelbarrow.blend, open them in Blender, and inspect them carefully. You should produce a similar file named 002Deudeuche.blend according to the instructions below and send it to us via e-mail. We will hire the first student capable of following the instructions:

Create a new Blender file with a small car model (Deudeuche, or 2CV). The file should be named 002Deudeuche.blend and contain 6 small 2CV cars, correctly scaled and set on the table. Each car should belong to a group, named from Group.000 to Group.005. Textures should be correctly set up to display in GLSL render mode (game engine). The blend file should also contain a text block with the original copyright of the 2CV model. The original blend file containing the 2CV model is in the zip archive (66427_Citroen_2CV folder).

Posted in Research

Featured on local TV

Our research made the headlines of the local TV channel on November 16th. You can watch the report on the SR-Rundfunk website at 22:15.

Posted in Video

German Finger Spelling Recognition System (GFRS)

[Screenshot of the German Finger Spelling Recognition System]

This project consisted of evaluating the performance of the Leap Motion Controller in the context of isolated and continuous handshape recognition. An HMM-based letter-to-letter transition model was used to describe the dynamics of the hand motion.

Experiments were conducted on both isolated and continuous recognition. For isolated recognition, the system achieved an accuracy of 80% with a vocabulary of 100 transitions; the accuracy improved to 89.96% when the vocabulary size was reduced to 30. On continuous recognition, the accuracy fell to 68%.
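The thesis describes the actual model in detail; purely as an illustration of the underlying idea, here is a minimal Viterbi decoder over a toy two-letter transition HMM (all states, observation symbols, and probabilities are invented for the example):

```python
import math

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for the observations (log-space Viterbi)."""
    # first column: start probability times emission probability
    V = [{s: math.log(start_p[s] * emit_p[s][observations[0]]) for s in states}]
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        prev = V[-1]
        col, new_path = {}, {}
        for s in states:
            # best predecessor for state s under the transition model
            best_prev = max(states, key=lambda p: prev[p] + math.log(trans_p[p][s]))
            col[s] = (prev[best_prev] + math.log(trans_p[best_prev][s])
                      + math.log(emit_p[s][obs]))
            new_path[s] = path[best_prev] + [s]
        V.append(col)
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy example: two "letters" A and B, two handshape observations.
states = ["A", "B"]
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"open": 0.8, "closed": 0.2},
          "B": {"open": 0.1, "closed": 0.9}}

print(viterbi(["open", "closed", "closed"], states, start_p, trans_p, emit_p))
# ['A', 'B', 'B']
```

The real system replaces these invented tables with probabilities learned from Leap Motion data, but the decoding step follows the same dynamic-programming scheme.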

Tengfei’s (王腾飞) master thesis is now available for download (link below). The presentation will be held at DFKI in Room Turing II on November the 11th (2015) at 14:00.

Hidden Markov Model Based Recognition of German Finger Spelling Using the Leap Motion

Posted in Research

Character Animation

Character animation is the art of creating moving characters with the use of computers; it is a subfield of computer graphics. Until recently, crafting animation was mostly a manual, time-demanding process. The arrival of new interactive media like video games imposed new requirements and constraints on the animation production process in terms of realism and flexibility. This course first introduces the essentials of computer animation: how traditional animation principles have been applied to computer animation, and how interpolation techniques have increased animation productivity. It then presents the major breakthroughs observed in the field of interactive character animation during the last decades: automatic motion generation using forward kinematics, inverse kinematics, and physical simulation; realistic motion playback through motion capture; and the subsequent data-driven animation trend.

Because know-how is essential in computer animation, course slots will be punctuated by short practical demonstrations. Discussion is encouraged during these interactive slots. You are encouraged to bring your laptop and experiment on the spot to enrich the discussion.
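As a small taste of the interpolation techniques mentioned above, here is a toy sketch of linear keyframe interpolation in Python (an illustrative example, not the implementation of any particular animation package):

```python
def lerp(a, b, t):
    """Linearly interpolate between values a and b with parameter t in [0, 1]."""
    return a + (b - a) * t

def sample_channel(keyframes, frame):
    """Evaluate an animation channel given as a sorted list of (frame, value) pairs."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]          # clamp before the first key
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]         # clamp after the last key
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return lerp(v0, v1, t)

# A rotation channel with keys at frames 0, 10 and 20 (values in degrees).
keys = [(0, 0.0), (10, 90.0), (20, 90.0)]
print(sample_channel(keys, 5))   # 45.0
```

Production systems use splines rather than straight lines between keys, but the principle is the same: the animator sets a few keyframes and the computer fills in the in-betweens.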

Math Prerequisites:
• trigonometry
• matrix algebra
• 3D geometry

Computer Science Prerequisites:
• basic computer graphics
• basic computer programming
• basic C++ programming

Here are the topics you might select, with their reference material (to be discussed):

Introduction to character animation Watt, A., Watt, M. (1992) Advanced Animation and Rendering Techniques, Chapter 16.

Parent, R. (2002) Computer Animation, Section 4.2 + Chapter 6.

Quaternions + SLERP Shoemake (1985) Animating Rotation with Quaternion Curves

Lander (1998) Better 3D.

M. P. Johnson (2003) Exploiting Quaternions to Support Expressive Interactive Character Motion, MIT, PhD thesis.

Subject taken by Mykola Byelytskyy

Slides as pdf
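For intuition, Shoemake's SLERP formula can be sketched in a few lines of plain Python (an illustrative implementation of the standard formula, not production code):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1 = (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                     # take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:                  # nearly parallel: fall back to lerp + renormalize
        r = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in r))
        return tuple(c / n for c in r)
    theta = math.acos(dot)            # angle between the two quaternions
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

identity = (1.0, 0.0, 0.0, 0.0)
rot90_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))  # 90° about Z
half = slerp(identity, rot90_z, 0.5)  # 45° about Z
print(half)
```

Unlike naive per-component interpolation, SLERP moves at constant angular velocity along the great arc between the two rotations, which is why it is the standard tool for blending orientations.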


Inverse kinematics: Inverse Jacobian + CCD Welman (1993) Inverse Kinematics and Geometric Constraints for Articulated Figure Manipulation

Lander (1998) Oh My God, I Inverted Kine

Lander (1998) Making Kine More Flexible
Complementary: Baxter (2000) Fast Numerical Methods for Inverse Kinematics

D. Tolani and N. I. Badler (1996) Real-Time Inverse Kinematics Techniques for Anthropomorphic Limbs

Kulpa, R. and Multon, F. (2005) Fast inverse kinematics and kinetics solver for human-like figures (IEEE paper; ask us if not available).

Complementary: Chris Hecker et al. (2008) Real-Time Motion Retargeting to Highly Varied User-Created Morphologies

Complementary: Buss, Samuel R., and Jin-Su Kim. (2005) Selectively Damped Least Squares for Inverse Kinematics

Topic taken by Rui Xu
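To make the CCD idea concrete, here is a toy cyclic coordinate descent solver for a planar two-link arm in Python (a sketch for intuition only; practical solvers add joint limits and damping):

```python
import math

def fk(angles, lengths):
    """Forward kinematics: positions of every joint of a planar chain."""
    x = y = a = 0.0
    points = [(0.0, 0.0)]
    for ang, ln in zip(angles, lengths):
        a += ang                       # accumulate relative joint angles
        x += ln * math.cos(a)
        y += ln * math.sin(a)
        points.append((x, y))
    return points

def ccd(angles, lengths, target, iterations=100):
    """Cyclic coordinate descent: adjust one joint at a time, from tip to root."""
    angles = list(angles)
    for _ in range(iterations):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            jx, jy = pts[i]            # position of joint i
            ex, ey = pts[-1]           # position of the end effector
            # rotate joint i so the end effector swings toward the target
            to_end = math.atan2(ey - jy, ex - jx)
            to_tgt = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += to_tgt - to_end
    return angles

lengths = [1.0, 1.0]
solved = ccd([0.3, 0.3], lengths, target=(1.0, 1.0))
end = fk(solved, lengths)[-1]
print(end)  # close to (1.0, 1.0)
```

Each inner step is a closed-form one-joint rotation, which is what makes CCD cheap compared with Jacobian-based methods, at the cost of slower convergence near singular configurations.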

Principles of traditional animation Lasseter (1987) Principles of Traditional Animation Applied to 3D Computer Animation

Frank Thomas and Ollie Johnston (1981) The Illusion of Life: Disney Animation, Chapter 3, “The Principles of Animation”.

Complementary: Sonoko Konishi, Michael Venturini (2007) Articulating the Appeal

Complementary: Shawn Kelly (2008) Animation tips and tricks

Topic taken by Bhavesh Bhansali; presentation scheduled on the 30th of November, rehearsal on the 23rd.

EMOTE Chi et al. (2000) The EMOTE Model for Effort and Shape.

B. Hartmann and M. Mancini and C. Pelachaud (2006) Implementing Expressive Gesture Synthesis for Embodied Conversational Agents

Motion Blending L. Kovar and M. Gleicher (2003) Flexible automatic motion blending with registration curves

Complementary : L. Ikemoto and D. Forsyth (2004) Enriching a Motion Collection by Transplanting Limbs

Motion synthesis from annotations O. Arikan and D. A. Forsyth and J. F. O’Brien (2003) Motion synthesis from annotations
Motion Graphs L. Kovar and M. Gleicher and F. Pighin (2002) Motion graphs

Complementary : Jehee Lee et al. (2004) Interactive Control of Avatars Animated with Human Motion Data

Character Animation Authoring (NOT using “mouse+keyboard” approach) Dontcheva, M. et al. (2003) Layered Acting For Character Animation

Kass, M., Anderson, J. (2008) Animating Oscillatory Motion With Overlap: Wiggly Splines

Shiratori, T. et al. (2013) Expressing Animated Performances through Puppeteering

Rhodin, H. (2014) Interactive motion mapping for real-time character control

Jin, M., et al. (2015) AniMesh: interleaved animation, modeling, and editing

Physics J. K. Hodgins and W. L. Wooten and D. C. Brogan and J. F. O’Brien (1995) Animating Human Athletics
Complementing physics with motion capture V. B. Zordan and J. K. Hodgins (2002) Motion capture-driven simulations that hit and react

O. Arikan and D. A. Forsyth and J. F. O’Brien (2005) Pushing people around

The Smartbody character animation engine — feature tour with an emphasis on inverse kinematics — “Building a Character Animation System”, A. Shapiro, 4th Annual Conference on Motion in Games 2011, Edinburgh, UK, November 2011
Smartbody website, hosted at ICT
This hands-on project will consist of presenting a feature overview of the Smartbody character animation system. Students will also be asked to showcase a short interactive gaze controller using Smartbody.
Topic taken by Nirmal Kumar Ramadoss
Animating characters using XML3D XML3D website
Topic taken by Jonas Trottnow

Slides as pdf


Authoring an interactive reactive avatar in Blender Blender website

General Requirements

Good command of English for understanding research papers. Most of the discussed papers are written in English.

Requirements for Certificates

A seminar certificate has the following requirements:

  • Regular attendance.
  • A talk (English, 30-35 minutes, 10 minutes discussion).
  • A report (English) that covers the facts addressed in the talk and the related discussion.
  • Each participant should take the role of discussion manager for one talk.


Language of reports: English

Deadline: TBD

Size: 6-8 pages

Format: the file format is PDF and the page style is LNCS, which can be found on the Springer LNCS web page. Start with the default author instructions file; it is also an example of how the LNCS style looks. LaTeX is recommended. If possible, write your report using the ShareLaTeX online app and add your advisor as a collaborator to the project (using ShareLaTeX will considerably improve the feedback cycle).
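As a starting point, a minimal report skeleton in the LNCS style might look like this (a sketch, assuming the llncs document class downloaded from the Springer page; the title, file names, and section are placeholders):

```latex
\documentclass{llncs}
\usepackage{graphicx}

\begin{document}

\title{My Seminar Report}
\author{Firstname Surname}
\institute{Saarland University}

\maketitle

\begin{abstract}
A short summary of the talk and the related discussion.
\end{abstract}

\section{Introduction}
% 6-8 pages covering the talk topic and the questions
% discussed during the sessions.

\bibliographystyle{splncs03}
\bibliography{references}

\end{document}
```

Compiling this once at the start of the writing process catches style and class-file problems early, well before the deadline.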

General requirements – reports should:

  • be understandable and well formatted!
  • cover the individual topic of the talk, questions that have been discussed during the sessions, and they should address relevant issues of other talks.

First version of reports reviewed until: [TBD] – beginning by this date, each supervisor will contact his or her students for individual feedback.

Final deadline: 10 days after feedback on the first draft of the written report is given. This ensures that everybody has the same amount of time to finalize the report after receiving feedback.


  • Reports can be also submitted before the announced deadline.
  • Please remember that reports have to be in final state when submitted!


Language of talks: English

Date: Monday afternoon, 16:00–18:00

Location: Seminar room (121) in the Computer Science building E 1 1, first floor.

Kick-Off meeting and talk assignment: Monday, November 2nd, 14:00, Seminar room (121) in the Computer Science building E 1 1, first floor.

Topic attributions: TBD

Calendar: TBD

Posted in Teaching