The Virtual Human Markup Language (VHML) is designed to accommodate the various aspects of
Human-Computer Interaction with regard to Facial Animation, Body Animation, Dialogue Manager
interaction, Text-to-Speech production, Emotional Representation, and Hypermedia and Multimedia
information. The text that a Virtual Human is to speak is marked up with tags that direct the
character's emotions, gestures, and body motion. For example:
<sad>
  You <emph>said</emph> to me
  once <pause length="short"/>
  that pathos left you unmoved, but that beauty,
  <look-down wait="750ms"/><look-up/>
  <emph affect="b" level="moderate"> mere</emph> beauty,
  could fill your eyes with tears.
</sad>
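As a rough illustration (not part of the VHML specification or its tooling), the snippet above is
well-formed XML, so a client could walk it with any standard parser, sending the plain text to the
Text-to-Speech engine while dispatching elements such as <pause> and <look-down> to the animation
layer. A minimal sketch in Python, using only the standard library:

# Minimal sketch: parsing the VHML example with Python's standard XML parser.
# This is an assumed processing approach, not the official VHML toolchain.
import xml.etree.ElementTree as ET

vhml_snippet = """\
<sad>
  You <emph>said</emph> to me
  once <pause length="short"/>
  that pathos left you unmoved, but that beauty,
  <look-down wait="750ms"/><look-up/>
  <emph affect="b" level="moderate"> mere</emph> beauty,
  could fill your eyes with tears.
</sad>"""

root = ET.fromstring(vhml_snippet)
print("Emotion:", root.tag)  # the enclosing emotion element -> sad

# Walk the mixed content in document order: plain text would go to the
# TTS engine, while elements direct emphasis, pauses, and gaze gestures.
for elem in root.iter():
    if elem is root:
        continue
    print(elem.tag, elem.attrib)
# Prints, in order:
#   emph {}
#   pause {'length': 'short'}
#   look-down {'wait': '750ms'}
#   look-up {}
#   emph {'affect': 'b', 'level': 'moderate'}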
More examples can be found here.
The following workshops concerned with VHML or VHML-like languages may be of interest:
For more information, please contact raytrace@talkingheads.computing.edu.au.