
Issue with the SMPL-X face #39

Open
RR-28023 opened this issue Dec 10, 2024 · 8 comments
@RR-28023

Hi,

I'm trying to replicate your results with my own data and I've run into the following issue: the SMPL-X model does not seem to be rendering the face vertices properly.

For example, below is my SMPL-X render:

[image: my SMPL-X render]

And this is the one published in the project page:

[image: render from the project page]

The face is clearly off in the one I'm getting. Any clue as to what I may be doing wrong? I'm using FLAME 2020 as instructed.
Thanks!

@mks0601
Owner

mks0601 commented Dec 10, 2024

There are a couple of possible reasons, which have already been reported here: #13

Check DECA's results, which are available in data/Custom/data/$SUBJECT_ID/flame_init/renders. Some people run the fitting without DECA's results.

Then try the fitting again.
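As a quick sanity check before re-running the fitting, you could verify that the DECA renders actually exist for your subject. This is just a sketch; the directory layout is taken from the comment above, and the helper name is hypothetical:

```python
from pathlib import Path

def has_deca_outputs(root: str, subject_id: str) -> bool:
    """Return True if DECA produced at least one render for the subject.

    Assumes the layout mentioned above:
    data/Custom/data/$SUBJECT_ID/flame_init/renders
    """
    renders = (Path(root) / "data" / "Custom" / "data"
               / subject_id / "flame_init" / "renders")
    # Directory must exist and contain at least one file.
    return renders.is_dir() and any(renders.iterdir())
```

If this returns False, DECA silently produced nothing (e.g. because its checkpoint was missing) and the fitting will proceed with a bad face initialization.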

@RR-28023
Author

Thanks for the prompt response. Indeed it was an issue with the DECA step: I had not downloaded the deca_model checkpoint, and the process had not raised any error.

I've now successfully trained the avatar with your example dataset. May I ask if the results shown on the GitHub project page are from only 5 epochs of training (the default in the code)? I ask because I get pretty decent results, but not as good as yours. Thanks!

@mks0601
Owner

mks0601 commented Dec 15, 2024

Thanks for your confirmation. I added a line to ensure the availability of deca_model.tar: line.

I used only 5 epochs for all videos on the project page. Actually, you can download the avatars from here.

@RR-28023
Author

RR-28023 commented Dec 16, 2024

Understood. Yet for some reason the avatar I get is slightly worse. See the picture below and compare it with the one above; some details (e.g. the black areas of the shoes) are missing. Any suggestions on what I should review?

[image: my trained avatar]

@mks0601
Owner

mks0601 commented Dec 16, 2024

How did you make the avatar? Did you do all of the fitting process yourself? Or did you use the provided pre-processed fitting data and just build the avatar from it?

@RR-28023
Author

For the avatar I did everything by myself except for the segmentation. That is, the only initial data I had was the video recording and the masks.

For the SMPL-X parameters of the target video (tvxq_mirotic) I used the data provided by you.

@mks0601
Owner

mks0601 commented Dec 16, 2024

Could you first check the quality of several dumped videos from the phone scan (not the target video)? You can compare them with mine, for example keypoints_whole_body.mp4 and smplx_optimized_smoothed.mp4.

@RR-28023
Author

Yep, okay, that makes sense. I will see if I can resolve any discrepancies between my intermediate outputs and yours, and may open another issue if I can't solve it. Feel free to close this one, and thank you!
