Live2D with Lipsync (using audio file/link) #122
base: master
Conversation
Changes:
1. Updated `model.motion(group, index, priority)` to `model.motion(group, index, priority, sound, volume, expression)`.
2. Added `model.stopSpeaking()`.
3. Updated README.md with demos.
4. The workflow will save build files to `dist` (no longer gitignored).
5. Praying this new change doesn't break anything.

The sound, volume, and expression arguments are now optional (`{name: value, ...}`).
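To illustrate the extended signature from the PR description, here is a minimal sketch of how the optional trailing arguments might behave (an illustrative mock of the argument handling, not the library's actual implementation; the `expression` argument is simplified to a string here):

```typescript
// Mock of model.motion(group, index, priority, sound, volume, expression)
// as described in the PR: sound, volume and expression are optional.
function motion(
    group: string,
    index: number,
    priority: number,
    sound?: string,        // audio file path or URL to lip-sync against
    volume: number = 1,    // playback volume, defaults to full
    expression?: string,   // expression to apply while speaking
): string {
    // A real implementation would start the motion and, if `sound` is
    // given, play it and drive the lip-sync parameter. Here we only
    // echo the resolved call for demonstration.
    const parts = [`${group}[${index}] p${priority}`];
    if (sound) parts.push(`sound=${sound} vol=${volume}`);
    if (expression !== undefined) parts.push(`expr=${expression}`);
    return parts.join(" ");
}
```

Existing calls like `model.motion(group, index, priority)` keep working unchanged, since the new arguments default to "no sound, full volume, no expression".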
Thanks again for the PR! I think we are getting close, but some changes are still needed, as described in the comments.

I noticed that some of the code is not properly linted. After making changes to the code, please run `npm run lint:fix` to automatically fix the linting errors, and address any remaining errors manually (except for the triple-slash reference errors, which I will fix later).

After you finish these changes, I'll be adding some tests to make sure this feature works as expected.
src/cubism4/Cubism4InternalModel.ts
```diff
@@ -248,6 +257,11 @@ export class Cubism4InternalModel extends InternalModel {
         this.coreModel.addParameterValueById(this.idParamBodyAngleX, this.focusController.x * 10); // -10 ~ 10
     }

+    updateFacialEmotion(mouthForm: number) {
```
Could a name like `setMouthForm` be better? It's not changing the entire facial expression, only the mouth form. Also, `update` implies that this function does some computation beyond setting the value, so `set` would be more suitable here.

As a new API, this method should also be added to `Cubism2InternalModel` for consistency.
I'll test and run it on Cubism 2. The issue is that I tried and failed to set up the development environment on my local system; the GitHub Action worked fine even though Codespaces failed (I know, skill issue on my part). So I probably won't be able to run the npm lint, but I'll try.
The development guide in DEVELOPMENT.md was a bit messy, so I've rewritten it. Now I expect there won't be problems if you follow the steps (if there are, please let me know!).

It's not your fault; it's Codespaces being problematic with submodules, browser testing, etc. So it's better to run it locally.
Thanks a lot 😭
I decided to remove this method because setting this param is pretty straightforward and isn't really worth adding a method for it.
Co-authored-by: Guan <[email protected]>
also remove cache buster and autoplay
Hey @RaSan147, is there a reason why these two calculations are different? I wonder if they can be made consistent so I can move them into one place:

- pixi-live2d-display/src/cubism2/Cubism2InternalModel.ts, lines 250 to 266 (at b00b64b)
- pixi-live2d-display/src/cubism4/Cubism4InternalModel.ts, lines 225 to 236 (at b00b64b)
Sorry, I might have forgotten to update the other one. I would recommend adding the bias_power and weighted one (otherwise the lips don't move well).
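The "weighted + bias power" idea mentioned above could look roughly like this (a sketch of the assumed intent, not the PR's exact code): take the RMS of the current audio sample window, then raise it to a power below 1 so quiet passages still open the mouth visibly, and scale by a weight:

```typescript
// Compute a mouth-open value in [0, 1] from a window of audio samples.
// biasPower < 1 boosts small RMS values (x^0.5 makes 0.25 -> 0.5);
// weight scales the overall amplitude; the result is clamped to 1.
function mouthOpenValue(samples: number[], biasPower = 0.5, weight = 1.2): number {
    if (samples.length === 0) return 0;
    const rms = Math.sqrt(samples.reduce((sum, v) => sum + v * v, 0) / samples.length);
    return Math.min(1, Math.pow(rms, biasPower) * weight);
}
```

Without the bias, an RMS of 0.25 would barely move the lips; with `biasPower = 0.5` it maps to 0.5 before weighting, which is likely why the lips "don't move well" without it.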
Finally, it's ready to merge! Before I merge it, are there any changes you would like to make or suggest?

Sorry, didn't notice; give me a bit of time, testing...

By the way, can you please check the PR I've sent you on the cubism folder repo?
Vite requires `terser` in this version (my fresh install was not working without it), so kindly add it to the deps. I fixed it with `npm install terser`.
Add another option, `force` or `priority`: if forced, or at a higher priority, the new audio will stop the current one; otherwise the current one keeps playing.
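The interruption rule suggested above can be sketched as a small decision function (assumed semantics; the actual option names and behavior would be whatever the PR settles on):

```typescript
// Decide whether a new audio request should interrupt the current one:
// it interrupts only when nothing is playing, `force` is set, or its
// priority is strictly higher than the current audio's priority.
function shouldInterrupt(
    currentPriority: number | null, // null = nothing is playing
    newPriority: number,
    force = false,
): boolean {
    if (currentPriority === null) return true; // nothing to interrupt
    return force || newPriority > currentPriority;
}
```

This mirrors how motion priorities already resolve conflicts: an equal-priority request loses, so repeated triggers don't restart the audio mid-playback.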
Will add `onFinish` and `onError` callbacks as options.
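A hypothetical shape for those callback options (the names come from this comment, not released docs; the playback itself is simulated here, since real decoding and playback happen asynchronously in the browser):

```typescript
// Options carrying the completion and error callbacks.
type SpeakOptions = {
    onFinish?: () => void;          // called when playback ends normally
    onError?: (err: Error) => void; // called when loading/playback fails
};

// Simulated playback: validates the source, then reports the outcome
// through the callbacks. A real implementation would call onFinish
// from the audio element's "ended" event and onError from "error".
function playAudio(url: string, { onFinish, onError }: SpeakOptions = {}): void {
    try {
        if (!url) throw new Error("no audio source");
        // ...decode and play would happen here; on natural end:
        onFinish?.();
    } catch (e) {
        onError?.(e as Error);
    }
}
```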
Well, I'm gonna miss `motion(...., {sound})`
Looking forward to the update!
Gotta re-test and look for a compatible way to shift from the patch to the official version.
The demonstration video of the model on the Live2D official website can flexibly display mouth movements, and the lip-syncing looks quite natural (see the demo video). In that example model, not only can the mouth opening be set based on audio information, but vowel mouth shapes can also be set by adjusting `ParamA`, `ParamE`, `ParamI`, `ParamO`, `ParamU`:

```js
model.internalModel.coreModel.setParameterValueById('ParamMouthOpenY', mouthY)
model.internalModel.coreModel.setParameterValueById('ParamA', 0.3)
```

I feel there might be better methods to achieve lip-syncing. Can the model set the mouth shape to correspond to the audio? Also, Alibaba Cloud's TTS can output the time position of each Chinese character/English word in the audio. How can the model play the audio, and can it set the corresponding mouth shape based on that phonetic information? Seeking guidance from the experts! 🙏
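One way to use per-word TTS timings, as raised above, is to look up which word is active at the current playback time and derive a vowel parameter from it. A naive sketch (the last-vowel heuristic and the `WordTiming` shape are assumptions for illustration; `ParamA`/`ParamE`/`ParamI`/`ParamO`/`ParamU` are the standard Cubism vowel parameter IDs):

```typescript
// Per-word timing as a TTS service might report it, in milliseconds.
type WordTiming = { word: string; start: number; end: number };

// Return the Cubism vowel parameter ID to drive at `timeMs`, or null
// if no word is active or the word has no ASCII vowel. The heuristic
// simply takes the last vowel letter of the active word.
function vowelParamAt(timings: WordTiming[], timeMs: number): string | null {
    const current = timings.find((t) => timeMs >= t.start && timeMs < t.end);
    if (!current) return null;
    const m = current.word.toLowerCase().match(/[aeiou](?!.*[aeiou])/);
    return m ? `Param${m[0].toUpperCase()}` : null;
}
```

On each render tick, you could then set that parameter via `setParameterValueById` alongside `ParamMouthOpenY`, easing between values so the mouth doesn't snap between shapes. A real phoneme-level approach would need per-phoneme (not per-word) timings.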
Whenever you're ready. Thanks for all your hard work!
+1
@guansss please merge it soon, looking forward to the release! ❤️
Eagerly awaiting merge! |
Solves the issues mentioned in #117.