1. Introduction
3D facial animation reconstruction has been an active area of research in 3D Collaborative Virtual Environments (CVEs). In recent years, the need has emerged for approaches that meet CVE runtime extensibility requirements without complex 3D data manipulation and engine restarts, driven by applications such as realistic facial animation for 3D disaster management. This requirement is vital for large, critical virtual environment applications such as military training, emergency preparedness scenarios, and e-shopping. Every year, Saudi Arabia receives about three million people who perform Al-Hajj in Makkah. Two incidents (Hajj Incidents, 2016) occurred in 2004 and 2006 during the stoning ritual in Mina (ramy al-jamarat), in which more than 500 people were killed and about 500 injured. Rescuers went to the scene, and security officials attempted to control the crowds to prevent further crushing. Because no adequate information technologies were in place, no clear picture emerged of what caused the accidents. A believable virtual environment is therefore important for preparing backup rescue scenarios with an acceptable response time whenever the need arises.
Integrating avatar facial animation as an on-the-fly feature remains a challenge and requires a substantial amount of work from qualified artists with strong knowledge of facial anatomy. Each player in the CVE is represented by a 3D body called an avatar (Hasgand, 1996), which allows the players to see, interact with, and hear each other. The ability to change and extend a 3D collaborative virtual environment (CVE) without having to stop it is an important non-functional requirement, especially for critical applications such as military training and disaster management systems, whose services should be available around the clock without interruption. In the approaches most widely used in CVEs, changes in the requirements are followed by changes in the game engine and are time consuming. Broadly, 3D objects and avatars' faces may change in various ways, either to give users a visual display of the actions being applied to the objects or to embed animation modeling into an avatar according to the constantly changing situation in the game portfolio. Traditionally, any modification of the virtual environment system requires a collaborative effort by graphic designers and 3D programmers to generate a new game scenario, stop the engine to inject the new scenario, and finally restart the engine to reflect the modification in the 3D space. Several research studies on 3D disaster management have been proposed to model human behavior and offer a true-to-life VE. However, there is still a lack of studies that adequately present avatar animation in emergent situations (Bourkerche et al., 2005; Information Technology, 2000; VRML Online, 2016) in real time without stopping the engine. Moreover, the underlying scene representation is complex to parse and visualize, so developing a playable game description requires professionals and time, at additional cost.