Monday 16 April 2012

This final artefact has been the most useful one created during the research project because of the drastic improvement in performance quality over Artefact 4. During the creation of Artefact 5, the original Bill Hicks scene was analysed to understand the facial expressions used in conjunction with specific words. Whilst studying this footage it became apparent that Hicks has a very distinctive face and way of speaking: he often talks out of one side of his mouth, and his lip movements are highly exaggerated when articulating certain words. These observations improved the performance of the digital character; they added layers of complexity to the character's face and gave it a depth that was not present in Artefact 4.
This artefact is the strongest example created during the research project; it shows how reference material can enhance the authenticity of a digital character's performance, which was the initial aim of the research. Although it is a great improvement on previous artefacts, there is still room to develop further. In certain areas the digital performance leans too heavily on the original; merely mimicking a real-life performance is not enough in the digital realm. A digital character must go further and exaggerate certain actions in order to compensate for the obvious physical limitations of the medium.
Furthermore, a digital performance is always limited in some way by the character's rig and model. There are parts of Hicks's original performance that simply could not be replicated with the Morpheus rig because of how it has been put together. This is not necessarily a criticism of the Morpheus rig; it is simply a reality that must be acknowledged. Again, the intention of this research was never to mimic the actions of the real-life performer directly, but to understand the motions in that performance and translate them in a way that suits the character being animated. To mimic the exact actions of a real person, one would either have to model and rig a character with that goal in mind, or use state-of-the-art motion capture technology capable of recording the minute details of a particular performer. Even motion capture has its limitations, however, and it cannot be relied upon alone; animators are often needed to tweak and refine parts of the captured data.

Artefact 4

For this artefact the Morpheus rig has been used to create a new performance, focusing on facial animation. Although the Max rig has been a useful tool for experimenting with general animation, its facial controllers are somewhat limited compared to those of the Morpheus rig, which offers much finer control over specific areas of the face. This extra control has helped in crafting a new animation set to an audio clip from Bill Hicks's stand-up comedy routine 'Relentless'.

Artefact 4 is a decent attempt at conveying the feelings behind Hicks's words, but even when concentrating solely on the motion of the face, a visual reference would greatly improve the performance. It is difficult to imagine specific facial expressions and combine them in a cohesive way that looks convincing to an audience. The lip-synching in particular is rigid and somewhat robotic, and deserves more attention. This artefact has highlighted the difficulties of animating a person's face with only an audio clip for context.

There will be much to improve in Artefact 5, when the original scene will be analysed to gain a better understanding of the tone and emotions Hicks displays on his face, so that these can be translated onto the digital character. However, the digital character needs to maintain a certain level of individuality and not become merely a clone of the original performance.
