Oct 7, 2019
The research institute Fraunhofer HHI, a member of the Content4All consortium, aims to make more content accessible to the deaf community by developing a photorealistic 3D human avatar.
They use a volumetric studio to capture a performer; this innovative capturing technology reproduces fine details (e.g. the fingers and hands needed for sign-language animation) with limited modelling effort.
Volumetric studio @Fraunhofer HHI
A 3D point cloud of the performer is built from the volumetric studio capture.
From all these points in space, my challenge is to bring out movements and emotions more intensely than if they had been told to us in words.
Tests - 3D character process
1- Capture in studio 2- 3D point-cloud avatar 3- Real-time 3D avatar, 1M particles driven by movement (90 FPS, 3K stereo)
Between vision and sound: vibration
Hear with the eyes, see with the sound.
Tests - 3D character / 1M particles following wave oscillations / Real-time VR, 90 FPS
Part of the work focuses on waves as a transfer of sensation between sight and hearing. The visual creation of the 3D character is linked to sound through the wave phenomenon.
The wave passes from one world to another by changing its frequency and wavelength, like a channel for movement, sound, colour...
The particles composing the 3D character follow the fluctuations of waves, which can resonate. (Just incredible inside the VR headset!)
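As a rough illustration of this idea (my own sketch, not the project's actual code), a particle can be displaced by a travelling sine wave whose frequency comes from the sound. The function name and all parameters here are assumptions for the example:

```python
import math

def wave_offset(base_y, x, t, frequency=2.0, amplitude=0.1, speed=1.0):
    """Vertical position of a particle driven by a travelling sine wave.

    base_y    -- the particle's resting height
    x         -- the particle's position along the wave's travel direction
    t         -- time in seconds
    frequency, amplitude, speed -- illustrative wave parameters (Hz, units, units/s)
    """
    # Phase of a wave travelling in +x: 2*pi*f*t - k*x, with k = 2*pi*f / speed.
    phase = 2.0 * math.pi * frequency * t - (2.0 * math.pi * frequency / speed) * x
    return base_y + amplitude * math.sin(phase)
```

Evaluating this per particle, per frame, makes the whole cloud oscillate in step with a sound frequency; the displacement never exceeds the chosen amplitude.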
One million interactive particles
Emitting and controlling one million particles interactively (in real time) in 3K at 90 frames per second is impossible using the computer's CPU alone.
The technique I use stores the spatial coordinates x, y, z of each emitted particle inside an image. The resulting texture can be read directly by the GPU (graphics card) to place each particle in space.
Storing point coordinates this way is up to 10 times faster. A new image is built for each frame (90 per second).
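A minimal sketch of this packing step (my illustration, not the project's implementation): particle positions are written into a flat RGBA buffer laid out like a 1024 × 1024 texture, one pixel per particle, so the GPU can later sample it.

```python
SIDE = 1024                    # square texture, 1024 pixels per side
MAX_PARTICLES = SIDE * SIDE    # 1,048,576 pixels -> 1,048,576 particles

def positions_to_texture(positions):
    """Pack (x, y, z) tuples into a flat RGBA float buffer.

    x goes to the red channel, y to green, z to blue; the alpha
    channel is left at 0.0, matching the layout described in the text.
    """
    texture = [0.0] * (MAX_PARTICLES * 4)    # 4 channels per pixel
    for i, (x, y, z) in enumerate(positions):
        texture[i * 4 + 0] = x    # red channel
        texture[i * 4 + 1] = y    # green channel
        texture[i * 4 + 2] = z    # blue channel
        # texture[i * 4 + 3] stays 0.0 (alpha unused)
    return texture
```

In the real pipeline this buffer would be uploaded as a floating-point texture and sampled in a vertex shader, so the CPU never touches individual particles again.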
A square image of 1024 pixels per side therefore contains 1,048,576 pixels.
Each pixel has 4 channels: red, green, blue and alpha.
For one particle position:
Px is stored in the red channel
Py is stored in the green channel
Pz is stored in the blue channel
(the alpha channel is left vacant)
One pixel = one point in space
1 million pixels = 1 million points
Space coordinates stored inside 1,048,576 pixels
The same 1,048,576 particles in space
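The one-to-one correspondence between pixels and particles can be made concrete with a small sketch (helper names are mine, for illustration): each particle index maps to exactly one pixel of the 1024 × 1024 texture, and back.

```python
SIDE = 1024    # texture is 1024 x 1024 = 1,048,576 pixels

def pixel_to_index(u, v):
    """Row-major particle index of the pixel at column u, row v."""
    return v * SIDE + u

def index_to_pixel(i):
    """Inverse mapping: particle i lives at pixel (column, row)."""
    return i % SIDE, i // SIDE

# The last particle occupies the last pixel:
# pixel_to_index(1023, 1023) == 1024 * 1024 - 1 == 1_048_575
```

Because the mapping is a bijection, reading the texture row by row recovers every one of the 1,048,576 particles exactly once.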
Created by Christophe Monchalin
with the support of the
VERTIGO / STARTS Residencies project as part of the STARTS program of the European Commission, based on technological elements from Content4All.
Fraunhofer HHI - Vision and Imaging Technologies
Chroniques 2018 – International digital art biennale
Created as part of the production platform, supported by the Région Sud
Coordination: Seconde Nature and ZINC.
With the support of Fédération Wallonie-Bruxelles
and Wallonie-Bruxelles international
With the help of :
La Maison des Cultures et de la Cohésion Sociale de Molenbeek-Saint-Jean
Le BRASS – Centre Culturel de Forest
La Maison du peuple, Commune de Saint-Gilles
Le Centre culturel Bruxelles Nord - Maison de la création
La Raffinerie / Charleroi Danses
Muted v1 Prototype
IT Dev and Graphics: Christophe Monchalin, Gauthier Roumagne - Performance: Léonore Guy, Maud Chapoutier - Illustration: Tomoko Yoshida, Christophe Monchalin - 2D animation: Evelien de Roeck - Music: Yann Deval - Voice: Léonore Guy