Identify and Mark Up an Actor's Shapes

The main purpose of Faceware Analyzer is to track facial expressions quickly and accurately so those expressions can be applied to facial animation using Faceware Retargeter. Tracking lets the user mark up the shapes of a particular actor, eventually producing a model that reflects that actor's expressions. Building an actor-specific model increases the accuracy of the track and does a better job of picking up the kinds of small movements that really help to sell a facial performance. This tutorial covers the basics of Analyzer tracking.

 

*Note: This tutorial begins from the point immediately after creating a new job. For details on job creation in Analyzer, see the following article: New Job Creation

Create Training Frames on the Actor's Shapes

The way one gives the software information about the actor’s performance is to “mark up” frames that contain extreme and distinct expressions, creating training frames. The software then processes these frames so that it can hit all of the in-between expressions. The user’s input consists of moving points (called “landmarks” in Analyzer) around the face in a specific way, which is demonstrated below.
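To picture how a handful of marked-up training frames can drive every in-between frame, consider the following sketch. It uses simple linear interpolation between landmark positions on two training frames; this is purely illustrative (Analyzer’s actual tracker is far more sophisticated), and the frame numbers, landmark name, and function are hypothetical.

```python
# Conceptual sketch only: training frames hold hand-placed landmark
# positions, and in-between frames are estimated from them. Faceware
# Analyzer's real tracking is not simple linear interpolation.

def interpolate_landmarks(training_frames, frame):
    """training_frames: dict mapping frame number -> {landmark: (x, y)}.
    Returns estimated landmark positions for `frame`."""
    keys = sorted(training_frames)
    # Outside the marked-up range, clamp to the nearest training frame.
    if frame <= keys[0]:
        return dict(training_frames[keys[0]])
    if frame >= keys[-1]:
        return dict(training_frames[keys[-1]])
    # Otherwise blend between the two surrounding training frames.
    for lo, hi in zip(keys, keys[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return {
                name: (
                    (1 - t) * training_frames[lo][name][0]
                    + t * training_frames[hi][name][0],
                    (1 - t) * training_frames[lo][name][1]
                    + t * training_frames[hi][name][1],
                )
                for name in training_frames[lo]
            }

# Two hypothetical training frames for one eye-corner landmark.
frames = {
    0:  {"eye_corner": (100.0, 200.0)},
    10: {"eye_corner": (120.0, 210.0)},
}
print(interpolate_landmarks(frames, 5))  # halfway between the two poses
```

The takeaway is only that frames between two training frames are derived from them, which is why marking distinct, extreme poses gives the software the most useful information.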

 

*Note: Training and tracking functionality is DISABLED for the Face group. This group is simply present for reviewing a shot after tracking is complete.

 

  1. Select the “Eyes” group from the toolbar dropdown menu. This will display the Eyes face group and the associated landmarks. Note that all of the face groups contain the Nose landmarks.




    Eye landmark placement

  2. On the first frame, move the eye and nose landmarks into the appropriate places, similar to the picture. This is most easily accomplished using “Intelligent Drag”: hold Shift while dragging a landmark, and the other landmarks will move into the approximate areas where they should be. Continue to move the landmarks around until satisfied with their placement. Intelligent Drag only affects landmarks that have not been placed by hand. Landmarks that have been moved by hand will turn green while others will remain blue (colors can be changed in the options menu).
  3. Note that when you place a landmark, a green marker appears on the timeline. This indicates that you have created a training frame, which will be used by Analyzer in its calculations. A training frame is created automatically once a single landmark on a given frame has been placed; the software will, however, use all of the landmarks on that frame to derive data.
  4. Scrub through the timeline and place training frames on the most extreme eye poses, such as blinks, open wide, and the extremes of pupil movement. If the shot is particularly long, you can hold Ctrl and drag on the timeline to select smaller sections to work on. This is also useful if there are some segments of the shot that are valid and some that are not.

*Note: There must be at least three training frames to continue, but there is no maximum. A good rule of thumb is to start with a few and add more as necessary. The number needed will depend on the performance, shot length, video quality, and the user’s skill level and can vary greatly.

 

*Note: If using an Imported Tracking Model, only one training frame is necessary.
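Step 4’s advice to mark the most extreme poses can be pictured with a small sketch: if you imagine a per-frame measure such as eye openness, the frames a user would mark up first are its local minima (blinks) and maxima (eyes wide open). The signal, values, and function here are entirely hypothetical; Analyzer does not expose such a measure, and this only illustrates what “extreme poses” means.

```python
# Illustration only: pick candidate training frames at the local
# extremes of a hypothetical per-frame eye-openness signal.

def extreme_frames(openness):
    """openness: list of per-frame eye-openness values (0.0 - 1.0).
    Returns frame indices that are local minima or maxima."""
    extremes = []
    for i in range(1, len(openness) - 1):
        prev, cur, nxt = openness[i - 1], openness[i], openness[i + 1]
        if (cur < prev and cur < nxt) or (cur > prev and cur > nxt):
            extremes.append(i)
    return extremes

# A short made-up shot: wide open, a blink, then wide open again.
signal = [0.8, 0.9, 0.2, 0.0, 0.3, 0.85, 0.9, 0.4]
print(extreme_frames(signal))  # the blink and the two wide-open peaks
```

In practice you do this by eye while scrubbing the timeline, but the principle is the same: the in-between frames are covered best when the training frames bracket the extremes of the motion.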

Examples of marked-up Brows. Note the locked (grey) nose landmarks.

 

Examples of how to mark up the Mouth.

 

Once you’ve tracked the Eyes, Brows, and Mouth, select the Face group from the menu and review your tracking. The Face group is not used in tracking; it is for review only. Once you are satisfied, it is time to parameterize. You can find details about parameterization here: Parameterization

 

Create your own Knowledge Base