Creating Custom FaceFX using FaceFX Editor


Mass Effect 3's FaceFX are exactly what they sound like: facial animation effects. These control the lip-sync and expression information for the character named in a given Dialogue Node. FaceFX work in conjunction with gestures and other types of animation but, as you might expect, control only the character's face.

This tutorial covers:

  • The basic principles of animation necessary to understand the FaceFX window
  • Adding a blink animation to a line
  • How to use FaceFX to prevent the clipping of dialogue that has been overwritten or redirected
  • Best practices for using FaceFX to create entirely bespoke facial effects
  • Importing FaceFX sets from one node to another, in whole or in part

This tutorial assumes basic familiarity with Dialogue Editor. If it is unfamiliar to you, you may wish to visit the Dialogue Editor overview tutorial first.

1. What am I Looking at Here?

Use the Dialogue Editor to open the package file containing the conversation you are modding. Open the conversation containing the line you wish to edit the FaceFX for. Click the node, and on the right-hand side under its String Reference ID, navigate to the Assets panel.

Figure 1

This particular node is a little bit different. Those familiar with the Dialogue Editor will notice straight away that although this is a Reply Node, and thus spoken by the player character, there is no associated WwiseStream asset for a male character. That is because this node can only ever be accessed by a female Shepard.

Both the Male and Female sections have their own sets of FaceFX.

Mark Meer and Jennifer Hale's line readings can differ in terms of timing, even on the same set of words, and so one set of animations may not work for both. They may need to be edited independently.
For non-player characters, both sets may also need to be edited independently.
In all cases, regardless of an NPC's gender, NPC audio is stored in the Male section of the dialogue node, and the male set of FaceFX is usually the one used. There are exceptions to this.

Be Aware: Even when no conditionals are checked, if the Listener of a conversation node is the player and the player is female, NPCs may use the audio assets from the male section of the dialogue node yet use the female FaceFX when speaking. Be sure to check who the Listener is.

Fortunately, there is a process for exporting and importing FaceFX animation. More on that later.

In the Female section, click the button with the tragedy and comedy masks on it to bring up our friend, the FaceFX Editor.

Figure 2

This is the anatomy of the FaceFX Editor.

Under File, it shows us the conversation currently loaded, and what character it belongs to.
Under Lines, it shows us the FaceFX from within this conversation. Directly beneath that panel is a pane that shows us details about this line's audio. Because we opened FaceFX Editor through a dialogue node, the line we want to edit is already selected for us.
The Animations column lists a lot of different things, but more on that in a moment.
The large grey grid is our Timeline. The X axis shows us the progression of time, and the Y axis shows us the selected animation's weight value. Underneath the Timeline, there is a subtitle to remind us of what's being said, although if you have overwritten the dialogue node's audio, this is probably inaccurate.

2. Phonemes and Controls

The Animations column looks very similar across all FaceFX sets. The first 15 items in the list are what animators call phonemes. These correspond to the shapes human mouths make when they speak. Not every letter appears on the list, because many sounds share the same shape. Most of these phonemes are fairly self-evident as to what they control; note, however, that plosives (p and b sounds) are controlled by m_Flap, and sounds like 'uh' are controlled by m_Open. m_Jaw+ and m_Jaw- are special cases - these control opening and shutting the jaw, respectively, and do nothing else to the mouth. Because of the way animation works, most jaw operations can be done solely with m_Jaw+; its opposite is rarely used. Facial animations are created by combining these effects.
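
As a rough planning reference only (the curve names are the ones described above; the sound groupings are an approximation for your own notes, not the game's official mapping), the relationship can be jotted down like this:

```python
# Rough planning lookup: which curve to key for which kind of sound.
# Curve names are from the Animations column; the groupings are approximate.
PHONEME_HINTS = {
    "p / b (plosives)": "m_Flap",
    "uh (open sounds)": "m_Open",
    "oh":               "m_OH",
    "open the jaw":     "m_Jaw+",  # m_Jaw- (closing) is rarely needed
}
```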

Figure 3

Clicking through the list will show patterns of dots and lines. These dots are called keys, and the lines between them are called tangents. Animations have minimum and maximum weight values. A weight value of 0 means the shape is not present on the face at all, and 1 plays it at its maximum. So, with something seen here like m_Jaw+, 0 means the jaw is completely closed. 1 means the jaw is wrenched all the way open, and is usually not a great value to input if you want naturalistic-looking animation.

Some animations, such as Orientation_Head_Pitch, Orientation_Head_Roll, and Orientation_Head_Yaw, can go through 0 and into negative numbers. This means that the animation is playing in the "opposite direction." So, Orientation_Head_Yaw at 0 will be looking straight on. At 1, it will be to the right. At -1, it will be to the left.

Not all animations can or should be used with negative values, and an animation's first key must always be at 0.

Each key describes how strongly this shape should be present on the face at a given point in time. The tangent shows how the shape will "move" to meet where it should be at the next key. This is known as in-betweening, or tweening, in animation terms. Smoother curves are slower, gentler movements, and sharper ones are quicker and more dramatic. Here we see m_Jaw+'s movements, which are usually fairly sharp, but subtle. In fact, throughout this line, the strongest weight it ever has is 0.40, which corresponds roughly with the start of the word "one" in this line.
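
To make the key-and-tangent idea concrete, here is a minimal sketch in plain Python. It does straight-line tweening only (real FaceFX tangents can curve), and the key times are made up around the 0.40 peak mentioned above:

```python
# Minimal sketch of tweening: each key is a (time_seconds, weight) pair.
# Real FaceFX tangents can curve; this only does straight-line interpolation.
def evaluate(keys, t):
    """Return the curve's weight at time t, holding the first/last key outside the range."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, w0), (t1, w1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return w0 + (w1 - w0) * frac

# Made-up m_Jaw+ keys, peaking at the 0.40 weight mentioned above.
m_jaw_plus = [(0.0, 0.0), (0.8, 0.40), (1.2, 0.10)]
print(evaluate(m_jaw_plus, 1.0))  # -> 0.25, halfway between the 0.40 and 0.10 keys
```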

Figure 4

Continuing to click through this list will show that some things only have one or two movements, and their timeline seems to stop much sooner than the end of the audio, as seen in fig. 4 above with m_OH. Things may be keyed to remain at a value of 0, or something very subtle. This is because the effects of all of these work in conjunction with one another, and they stack. You will note that the timeline resizes as you click through. This is because the timeline shows you only the selected animation's keys up until the last one keyed in the sequence. For something like Blink, the range for this might only be very small.

Figure 5

Right-clicking anywhere in the Timeline gives these options.

  • Add Key will place a key at exactly where your cursor is, both time and weight-wise. This can be adjusted after the fact.
  • Offset All Keys After This Point is used when your line's audio has its float value set to something greater than 0, i.e. a delay before it starts, and you need to move the animation up to match. (If your line has a float value, you can check it in the Dialogue Node's Matinee.) A sketch of what this operation does follows below.
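
As a rough illustration of what Offset All Keys After This Point does, here is a minimal Python sketch. This is not the editor's actual code; the key values are invented, and the exact at-or-after boundary behaviour is an assumption:

```python
# Sketch of "Offset All Keys After This Point": shift every key at or after the
# chosen point later in time by `delta` seconds. Keys are (time_seconds, weight).
def offset_keys_after(keys, point, delta):
    return [(t + delta if t >= point else t, w) for t, w in keys]

# Example: the line's audio has a 0.5-second delay, so push everything back by 0.5.
m_jaw_plus = [(0.0, 0.0), (0.6, 0.4), (1.1, 0.0)]  # made-up keys
print(offset_keys_after(m_jaw_plus, 0.0, 0.5))
# -> [(0.5, 0.0), (1.1, 0.4), (1.6, 0.0)]
```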

3. Putting it Into Practice: Adding in a Very Simple Animation

We are going to make Shepard blink during this line, about 4 seconds in. The version of this particular dialogue node being worked on has had its audio replaced, and the new length of the line is 7 seconds. However, as seen above, Blink has no keys, and the timeline is only visible up to 2 seconds.

Right-click and Add Key as close to 0 as you can, then right-click the key itself.

  • Set Time moves the key to a point in time.
  • Set Value gives it a weight value.
  • Break Tangents is usually used to fix weird curves after deleting keys.
  • Delete Key does what it says on the tin.
  • Flatten Tangents is a contextually available option that creates a straight line from one key across to another. This option usually appears when a tangent does something it shouldn't and takes on a value it should not have. If your tangents look weird, see if this option is available.

Note: Keys depend on what is ahead of them. You cannot move a key ahead in time if it overtakes another in front of it. You must delete the key, and then place another ahead, in the desired position.

Set your key's time to 0. Set its value to 0. The screen will update and the grid will change. At about 0.50 in the timeline, place a second key and set its value to 0. This defines the beginning and end of the animation. Now, add a third key as close to in the middle of them as you can, and set its value to 0.95 - this means that over the course of a half second, Shepard will almost completely close her eyes, then open them again. Your completed keys should look something like this:

Figure 6

But, the timeline shows this animation starts at 0 and completes at 0.50 - blink and you'll miss it. Right-click before the first key, and choose Offset All Keys After This Point. Set the offset to around 4 seconds. Upon confirming, FaceFX Editor will update, and more of the timeline will be visible. It's as simple as that for a blink! Setting keys at different values will make Shepard hold her eyelids lower or higher, useful for different expressions. Change the pacing of the blink to make it something slow and owlish, or something fluttery.
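
Put together, the whole blink boils down to three keys shifted so they start around the 4-second mark. A throwaway sketch of the numbers (Python, purely illustrative; the exact middle-key time is whatever you managed to click):

```python
# The blink described above: open at either end, nearly closed in the middle.
blink = [(0.0, 0.0), (0.25, 0.95), (0.50, 0.0)]

# "Offset All Keys After This Point" set to about 4 seconds shifts the whole thing:
blink_at_4s = [(t + 4.0, w) for t, w in blink]
print(blink_at_4s)  # -> [(4.0, 0.0), (4.25, 0.95), (4.5, 0.0)]

# Lower the middle weight (e.g. 0.6) for heavy lids, or spread the keys out
# over a longer span for a slow, owlish blink.
```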

4. Using FaceFX Editor to Fix Clipping Audio

FaceFX has a close relationship with audio. As something of a failsafe, audio will stop playing if the animation sequence is complete and there are no more facial animations. This is meant to stop the game from giving dead air if there is a second or two of silence accidentally left on the end of an audio asset. However, this can present problems for modders who have overwritten dialogue nodes with files significantly longer than their original settings. Correctly setting a dialogue node's InterpLength and its associated WwiseEvent's DurationMilliseconds values is very important to do first, and is beyond the scope of this guide. However, once those steps are done, if audio still skips, adding FaceFX will solve the issue. Simply add a key to any animation in the FaceFX set and set its time to fit the DurationMilliseconds. Make sure its value is above 0, even if small. This works as a temporary measure to ensure the audio will play, but more work in FaceFX will probably be needed.
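
Conceptually, the fix above just makes sure some curve still has a small non-zero key at (or near) the end of the new audio, so the animation outlasts the sound. A minimal sketch, assuming you already know the new DurationMilliseconds (the 0.05 weight is simply "small but above 0"):

```python
# Sketch of the audio-clipping safeguard: ensure at least one curve keeps a
# small non-zero key at the end of the replacement audio, so the animation
# sequence outlasts the sound. Keys are (time_seconds, weight) pairs.
def pad_to_duration(keys, duration_ms, weight=0.05):
    end_seconds = duration_ms / 1000.0
    if not keys or keys[-1][0] < end_seconds:
        keys = keys + [(end_seconds, weight)]  # small, but above 0
    return keys

print(pad_to_duration([(0.0, 0.0), (1.8, 0.2)], duration_ms=7000))
# -> [(0.0, 0.0), (1.8, 0.2), (7.0, 0.05)]
```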

Deleting keys down to a certain point from all animations in a FaceFX set will cause an audio line to terminate early if needed.

5. Keying a Completely Custom Line of Dialogue

It's important to understand that whilst FaceFX is a powerful tool, it is limited, and its use comes with several drawbacks that are not present in conventional animation environments. Unlike the original animators, you will not be able to see your changes reflected live as you make them. You will not have access to a preview of the animation. You will be animating this line through a combination of guesswork and memory. There are, fortunately, a few tips, tricks, and tools to help make this process a little easier, although it can still be accomplished without them.

Suggested Programs:

  • Audacity - If you are dealing with replacing and editing audio, you probably have this already. But, it's a free program for editing audio.
  • OBS (Open Broadcaster Software) - For recording what things look like in-game. Also free. Pretty easy to set up.
  • GIFCam - A fantastic little app for making GIFs quickly. It was used for making the GIFs in this tutorial, which were used as iterative reference.

Good Practice:

  • Have a save file as close as possible to the conversation you are editing.
  • Have a backup of your mod before you start messing with FaceFX.
  • Look around in dialogue nodes at vanilla FaceFX. Compare what keys look like and what values are usually used.

So, you have a modded Dialogue Node with fresh audio injected into it. It's a combination of lines from all over the place and doesn't have animation data you can steal easily from other places. You have checked that the line plays properly, but the FaceFX look weird, because of course they're set up for a different line. You are looking at the line's FaceFX not knowing where to start.
If I'm working from scratch, I tend to delete almost all the existing FaceFX information. Sometimes, I leave in things like blinks, eyebrow movements, or head movements if I think they work.
Because these need to be done one phoneme at a time, I like to start with m_Jaw+. The reason for this is that it usually has movements that go all the way through the line from beginning to end. It's something that makes an immediate and obvious difference. So I go ahead and delete what's there.

Open your line's audio in Audacity.

"Hehe. I already figured that out. It's not a big deal. Heh. I'll look forward to our next talk."

Note the peaks and valleys in the waveform. Usually, big peaks correspond to larger jaw movements. To get a good sense of what you need to do, try saying your line aloud along with your recording. Note the way your face moves. What's most useful about Audacity here is that it's easy to correlate the shape of the waveform with its timeline, and sync it up with what you're doing in FaceFX. I delete all m_Jaw+'s keys individually, then start fresh.
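
If you would rather have a rough automated starting point than eyeball the waveform, a small script along these lines can list the loud stretches where the bigger m_Jaw+ keys probably belong. This is only a sketch: it assumes a 16-bit mono WAV, uses numpy, the file name is hypothetical, and the window size and threshold are guesses you would tune by ear.

```python
# Rough helper: print the loud windows of a 16-bit mono WAV so you know roughly
# where the stronger m_Jaw+ keys belong. Window size and threshold are guesses.
import wave
import numpy as np

def loud_windows(path, window_s=0.05, threshold=0.15):
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    samples = samples.astype(np.float64) / 32768.0
    step = int(rate * window_s)
    for start in range(0, len(samples), step):
        chunk = samples[start:start + step]
        rms = np.sqrt(np.mean(chunk ** 2)) if len(chunk) else 0.0
        if rms > threshold:
            print(f"{start / rate:5.2f}s  rms={rms:.2f}")

loud_windows("my_replacement_line.wav")  # hypothetical file name
```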

These are all my m_Jaw+ keys, from beginning to end. By comparing the waveform to the keys, you will see a basic correlation. In-game, this won't look like much yet, however:

First Iteration

This has a head gesture left in, and a few other phonemes at the very beginning that I want to work with. The rest is all the jaw. You can see there's a lot to be done.

Second Iteration

This is the same line, with just a few more phonemes added later on in the sequence, including the blink from earlier. It's beginning to look a lot more lively, like she's talking!

Final Iteration

The final line sequence. It can be seen in context with sound here.

Iteration is key when animating blind. When I make a few significant changes, I reapply my mod, load it up, and record what's happening using OBS. Then, I look at my recording and use GIFCam to make a GIF of the line in its current state. I use this for visual reference when adding more changes or focussing on problem areas, because it's nice to have a looping example of what my changes have done right in front of me. ME3's phonemes are quite forgiving: so long as an effort is made to roughly match the sounds, the result will look believable enough, or at least on par with the rest of ME3's lip-sync.

Exporting and Importing FaceFX Animation

You may wish to take entire portions of a different FaceFX set and import them into yours. Perhaps it has a word or expression you like. Or, once your animation is complete, you may need to duplicate it into another set. This is usually a relatively painless process.

Warning: these options deal with all information in a given set, not just the selected phoneme or gesture. Everything.

  • Delete Section of line deletes all information from all phonemes and gestures in a set between defined points of time.
  • Import Section of line takes .json file information and adds all keys from it to everything in the timeline between defined points.
  • Export Section of line exports all information from all phonemes and gestures in a set between points of time. Its output is a .json file.
  • Offset keys after time is like the function inside the editor, except it applies it to everything in the set at once.
  • Delete Line does what it says on the tin; it is the nuclear option. Caution.

Best Practices

If your intent is to take information from one node to another to work on as a template, it is best to delete all unneeded information first, as importing a .json file in this manner is additive. If the timeline from your import overlaps with information you already have, the imported file will add its own keys to the ones you already have. Be aware of what keys you already have at what points in time, or you will end up needing to do extensive repair work.
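
To illustrate why that matters, here is a tiny sketch of additive merging. The .json layout shown is hypothetical; the only point is the behaviour, keys piling up wherever the two sets overlap:

```python
# Why "import is additive" matters: if the incoming section overlaps keys you
# already have, you end up with both sets. Keys are (time_seconds, weight) pairs;
# the structure is hypothetical, not the editor's real .json layout.
existing = {"m_Jaw+": [(0.0, 0.0), (0.6, 0.3)]}
imported = {"m_Jaw+": [(0.5, 0.4), (1.0, 0.0)]}

merged = {}
for curve in set(existing) | set(imported):
    merged[curve] = sorted(existing.get(curve, []) + imported.get(curve, []))

print(merged["m_Jaw+"])
# -> [(0.0, 0.0), (0.5, 0.4), (0.6, 0.3), (1.0, 0.0)]  -- overlapping keys pile up
```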

If you just want one phoneme or gesture, open two instances of FaceFX Editor. Find the set with the desired phoneme or gesture, and drag and drop it into your target set's animations list. It should populate at the bottom.

A Note on Expressions

By clicking around in dialogue nodes, you will notice that different FaceFX sets have different emotional values attached to them.
Things like E_S_Amusement, E_B_Anxiety, E_Y_Concern and so on.
What these do is change the way phonemes interact with each other to produce an overall expression, the general effect of which also has its own weight value that can be turned up or down for emphasis. You can even transition through emotions in a sentence in this way, by blending and fading weights across emotion sets. Try them out!
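
As a sketch of that transition idea, the keys below fade one emotion's weight down while another comes up over the course of a line. The emotion names are the ones mentioned above; the times and weights are invented for illustration:

```python
# Sketch of blending between two emotion sets over a 7-second line:
# E_S_Amusement fades out while E_Y_Concern fades in. Times/weights are made up.
emotion_keys = {
    "E_S_Amusement": [(0.0, 0.8), (3.0, 0.8), (5.0, 0.0)],
    "E_Y_Concern":   [(3.0, 0.0), (5.0, 0.6), (7.0, 0.6)],
}
for name, keys in emotion_keys.items():
    print(name, keys)
```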
