
The face of "smplx_optimized_smoothed" #13

Closed
zhong2001 opened this issue Sep 26, 2024 · 52 comments

Comments

zhong2001 commented Sep 26, 2024

Hi, sorry to bother you again.
During my fitting process, the fit of the face in "smplx_optimized_smoothed" is not ideal, and the model is distorted when the head turns sideways. Is there any way to solve this problem?
[screenshot: smplx-smooth]

mks0601 (Owner) commented Sep 26, 2024

I haven't seen this kind of failure case. Did you change any of the fitting code?

zhong2001 (Author):

> I haven't seen this kind of failure case. Did you change any of the fitting code?

Hello, I have checked my files, and the fitting code has not been modified. The only changes I made were to the environment files, due to compatibility issues with a newer torch version, such as changing "mask_c2 = (1 - mask_d2) * mask_d0_nd1" to "mask_c2 = (~mask_d2) * mask_d0_nd1".

I don't know where the problem lies.
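For context, the reason for that change can be sketched as follows (a minimal, hypothetical reproduction with made-up tensors, not the repo's actual code): recent PyTorch versions reject `1 - mask` on bool tensors, and logical negation `~mask` is the drop-in replacement.

```python
import torch

# Minimal sketch of the compatibility fix: recent PyTorch versions raise
# an error for `1 - bool_tensor`, so the bitwise-not operator `~` replaces
# the arithmetic form. Tensors below are made up for illustration.
mask_d2 = torch.tensor([True, False, True])
mask_d0_nd1 = torch.tensor([1.0, 1.0, 0.0])

# Old form, fails on newer torch:
#   mask_c2 = (1 - mask_d2) * mask_d0_nd1
mask_c2 = (~mask_d2) * mask_d0_nd1
print(mask_c2.tolist())  # [0.0, 1.0, 0.0]
```

Such a change only swaps the negation operator, so it should not alter the fitting results.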

mks0601 (Owner) commented Sep 26, 2024

During the pre-processing, the code dumps several videos. Could you check them first? For example, keypoints_whole_body.mp4. Also, how do the results look without smoothing? For this, you can check smplx_optimized.mp4.

zhong2001 (Author) commented Sep 26, 2024

Thank you for your suggestion. In fact, smplx_optimized.mp4 has a similar issue. I have looked at keypoints_whole_body.mp4 and compared it to your example video; there seems to be no problem. Is this correct?

mks0601 (Owner) commented Sep 26, 2024

During the fitting, we use the face-only FLAME fit as a supervision target for the face part of SMPL-X after checking the face visibility here.

Could you lower the threshold here, e.g., to math.cos(math.pi/2)?
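For context on what this threshold means (an illustration of the geometry, not code from the repo): the visibility check compares the cosine of the angle between the face direction and the camera direction against math.cos of an angle threshold, so the candidate values relate like this:

```python
import math

# The face-visibility test compares a dot product of unit vectors (a cosine)
# against a threshold of the form math.cos(angle). As the angle grows from
# pi/2 toward pi, the cosine shrinks from 0 toward -1, so a
# "dot_prod < threshold" test becomes stricter.
for label, angle in [("pi/2", math.pi / 2),
                     ("pi/4*3", math.pi / 4 * 3),
                     ("pi/6*5", math.pi / 6 * 5)]:
    print(f"cos({label}) = {math.cos(angle):+.3f}")
```

This is why, as noted later in the thread, math.cos(math.pi/2) actually makes the check more permissive, not less.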

zhong2001 (Author):

Okay, I will give it a try and get back to you. Thank you again for your suggestion.

zhong2001 (Author) commented Sep 27, 2024

Hello, I have adjusted the parameter you mentioned, but the facial deformation issue still persists.

Additionally, I realized that the project version I downloaded was outdated. You committed new files (such as unwrap.py) six days ago, so I reconfigured the file directory and retrained the model. I also adjusted the parameter you mentioned there, but both models still show the facial issue.

Furthermore, I have a question: the file generated by unwrap.py does not seem to be used by the subsequent smooth_smplx_params.py code. What is its purpose?

Do you have any other suggestions? I am also working hard to identify the problem myself. I am sorry that I have so many questions; I hope to get your help. Thank you again.

mks0601 (Owner) commented Sep 27, 2024

unwrap.py unwraps the face texture from the phone scan. Sorry, math.cos(math.pi/2) actually increases the threshold, which could make the results worse. Could you show me the rendering of smplx_init for that frame?

mks0601 (Owner) commented Sep 27, 2024

face_valid is calculated here based on smplx_init. If smplx_init is very wrong, face_valid could be wrong, which could make the face geometry bad.

zhong2001 (Author) commented Sep 27, 2024

Hello, I am happy to provide you with the result of smplx_init.mp4; it seems there is no problem with it.

Regarding the math.cos(math.pi/2) parameter, it has indeed made things worse in the new project environment. Strangely, though, in the original project version (which did not yet have unwrap.py), changing this parameter seemed to magically improve the model's face (the second figure). But when I use that model to create avatars, the facial quality is not good and cannot match the results on the NeuMan dataset, so it is actually not feasible. It's really confusing.

I am running comparative experiments and corresponding training to identify the problem. I also hope to receive more advice from you. Thank you again for your help.
[Screenshot 2024-09-27 233649]

mks0601 (Owner) commented Sep 28, 2024

You should not use math.pi/2, as it makes the results worse. Could you use a lower threshold like math.pi/6*5? Also, the face quality might not depend on this specific frame. Please share your phone scan and avatar animation as well.

zhong2001 (Author) commented Sep 28, 2024

These are some screenshots of my phone scan and avatar animation. If you need, I can also send the complete videos to the email on your homepage; I hope this won't disturb you. I will modify the parameter (math.pi/6*5) and try the effect. Thanks.

mks0601 (Owner) commented Sep 28, 2024

I also recommend changing this function to the one below:

    def check_face_visibility(self, smplx_kpt):
        lear = smplx_kpt[:,smpl_x.kpt['name'].index('L_Ear'),:]
        rear = smplx_kpt[:,smpl_x.kpt['name'].index('R_Ear'),:]
        leye = smplx_kpt[:,smpl_x.kpt['name'].index('L_Eye'),:]
        reye = smplx_kpt[:,smpl_x.kpt['name'].index('R_Eye'),:]

        center = (lear+rear)/2.
        eye = (leye + reye)/2.

        eye_vec = eye - center
        cam_vec = center - 0

        eye_vec = F.normalize(torch.stack((eye_vec[:,0], eye_vec[:,2]),1), p=2, dim=1)
        cam_vec = F.normalize(torch.stack((cam_vec[:,0], cam_vec[:,2]),1), p=2, dim=1)

        dot_prod = torch.sum(eye_vec * cam_vec, 1)
        face_valid = dot_prod < math.cos(math.pi/4*3)
        return face_valid

And change this line to

face_valid = self.check_face_visibility(smplx_kpt_cam_init)

zhong2001 (Author) commented Sep 28, 2024

> I also recommend changing this function to the one below [...]

Hi, I changed the function and the corresponding line as you said, but unfortunately the situation seems to have worsened. Additionally, math.pi/6*5 seems to perform poorly as well.

mks0601 (Owner) commented Sep 28, 2024

Could you share the full video of your phone scan?

zhong2001 (Author):

Sure, how should I send it to you? Is it feasible to send it to the email on your homepage?

mks0601 (Owner) commented Sep 28, 2024

You can share a Google Drive link.

zhong2001 (Author):

I have uploaded my video to the following link:
https://drive.google.com/file/d/16pVCHpJHNUuYIaloZux38Q_KFeNNF-my/view?usp=drive_link

mks0601 (Owner) commented Sep 28, 2024

I don't have access to it. Please make it public.

zhong2001 (Author):

> I don't have access to it. Please make it public.

Sorry, I didn't notice that. I have made the change; can you access it now?

mks0601 (Owner) commented Sep 28, 2024

Let me check it in two days, as I'm heading to ECCV now. Could you also post links to smplx_optimized.mp4 and smplx_optimized_smoothed.mp4?

zhong2001 (Author) commented Sep 28, 2024

Okay, I have uploaded them to the following link. They are the versions trained after modifying "def check_face_visibility".
https://drive.google.com/drive/folders/1BTCh9FZsPkApJ3VxuLU8TzcnSffJ5jGJ?usp=drive_link
Wishing you all the best, and thank you again for your help.

mks0601 (Owner) commented Sep 28, 2024

This is very weird... How do the results look in flame_init? Also, could you strictly follow the up-to-date code without modifying any of it?

zhong2001 (Author) commented Sep 28, 2024

I am uploading the flame_init folder to the previous link; you can check it later. It seems to be normal.
To adapt to my torch version, I had to make several modifications as shown below, but usually this does not affect the fitting process. I will later organize a file that lists my modifications and upload the trained model results, which is the strictest version I can produce. Thank you for your patience.

#fitting/tools/DECA/decalib/datasets/detectors.py

#self.model = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)
self.model = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, flip_input=False)
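A version-agnostic variant of that edit can be sketched as follows (a hedged illustration: the LandmarksType class below is a stand-in enum, not the real face_alignment class): resolve whichever member name the installed release provides instead of hard-coding one.

```python
from enum import Enum

# Stand-in for face_alignment.LandmarksType: the 2D member was named `_2D`
# in older releases and `TWO_D` in newer ones. getattr with a fallback
# picks whichever name exists, so the same code runs on both versions.
class LandmarksType(Enum):  # hypothetical stand-in for illustration
    TWO_D = 1

two_d = getattr(LandmarksType, "TWO_D", None) or getattr(LandmarksType, "_2D", None)
print(two_d)  # LandmarksType.TWO_D
```

With the real library, `two_d` would then be passed to `face_alignment.FaceAlignment(two_d, flip_input=False)` regardless of the installed version.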

mks0601 (Owner) commented Sep 29, 2024

I think flame_init has some problem. The images in the renders folder do not have rendered 3D face geometry. Could you check it?

mks0601 (Owner) commented Sep 29, 2024

The results should look like this:
https://github.com/yfeng95/DECA#usage

zhong2001 (Author):

> The results should look like this: https://github.com/yfeng95/DECA#usage

You are right. I have checked all the training versions in my history and found that they are all missing the 3D facial geometry. Perhaps this is where the problem lies. What should I do to correct this issue?

mks0601 (Owner) commented Sep 29, 2024

Nothing special. Just follow my instructions and DECA's. After following the DECA instructions, did you run python copy_code.py?

zhong2001 (Author):

I followed your steps: I first ran python copy_code.py, then followed the remaining steps, and finally executed python run.py. Perhaps I should run DECA separately according to its project page? I originally thought there was no need to run the projects separately.

mks0601 (Owner) commented Sep 29, 2024

You do not need to run DECA separately. You first need to install DECA, then run copy_code.py, and then run run.py. That's all.

mks0601 (Owner) commented Sep 29, 2024

I recommend checking whether you installed DECA properly by running the example commands from the DECA website.

zhong2001 (Author):

Okay, I will check as you suggested.

zhong2001 (Author):

Your suggestion was very effective! I missed two weight files when installing DECA, which resulted in the missing 3D facial geometry. What a silly mistake... After fixing it, the model's face looks normal. Thank you very much for your patience and help. I will go on to create avatars and check the results. Thank you again!

zhong2001 (Author):

But I have encountered a new problem. I ran into the same situation as issue #17, and a similar phenomenon appears at my 437th frame.

The following link includes the files for frames 436-438. The keypoint score values in the 437 file are relatively low; the attached image shows an analysis of its contents. What should I do?

https://drive.google.com/drive/folders/1hOCazvMM34Uw3ndRmHu8li2bVTiCjM64?usp=drive_link
[attached image]

mks0601 (Owner) commented Sep 29, 2024

Why is the number of data points different?

zhong2001 (Author):

This is also what puzzles me. According to my records, I did not make any changes to the mmpose code.

mks0601 (Owner) commented Sep 29, 2024

Could you rerun mmpose and check keypoints_whole_body.mp4 at that frame?

zhong2001 (Author) commented Sep 29, 2024

I have rerun mmpose, and this is the result for frame 437; it seems no different from the preceding and following frames.

mks0601 (Owner) commented Sep 29, 2024

Could you rerun fit.py and the subsequent smoothing based on the new mmpose results?

mks0601 (Owner) commented Sep 29, 2024

Please use the up-to-date code.

zhong2001 (Author):

I am doing this now; it will take some time before training finishes. The code I am currently using was downloaded three days ago. Have there been any updates since then? If so, I will download it again.

mks0601 (Owner) commented Sep 29, 2024

Just in case, let's download it again.

zhong2001 (Author):

Okay, once I have the results, I will reply to you immediately. Thanks.

mks0601 (Owner) commented Sep 29, 2024

If you can, could you remove all dumped results (e.g., flame_init) and start from scratch?

zhong2001 (Author):

Sure, I always create a new folder every time. I hope there will be a good outcome this time. I'm sorry to bother you so much.

zhong2001 (Author):

I downloaded all the projects and code again and started training from scratch. Unfortunately, the same issue still occurred at frame 437.

mks0601 (Owner) commented Sep 29, 2024

A weird point is that mmpose uses a keypoint threshold of 0.3 when visualizing 2D keypoints: https://github.com/open-mmlab/mmpose/blob/71ec36ebd63c475ab589afc817868e749a61491f/mmpose/apis/visualization.py#L14

As your 437.json shows, many keypoints have scores lower than 0.3, yet the visualized keypoints of the 437th frame look good. How did you install mmpose, and which modifications did you make to it?
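For reference, the kind of score gating such a visualizer applies can be sketched like this (a simplified stand-in, not mmpose's actual implementation; the 0.3 default mirrors the kpt_thr value in the linked file, while the data is made up):

```python
import numpy as np

# Simplified stand-in for a keypoint visualizer's score gating: entries
# whose confidence score falls below kpt_thr are masked out (set to NaN)
# so they are skipped when drawing.
def gate_keypoints(kpts, scores, kpt_thr=0.3):
    kpts = np.asarray(kpts, dtype=float).copy()
    kpts[np.asarray(scores) < kpt_thr] = np.nan  # mask low-confidence rows
    return kpts

kpts = [[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]]
scores = [0.9, 0.1, 0.5]
gated = gate_keypoints(kpts, scores)
# only the second keypoint (score 0.1 < 0.3) is masked out
```

Under this gating, a frame whose JSON scores are mostly below 0.3 should render with most keypoints hidden, which is why the clean-looking visualization was suspicious.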

mks0601 (Owner) commented Sep 29, 2024

Sorry, I think I found the bug. I saved kpt instead of kpt_save in this code, which is wrong. I have changed the code to save kpt_save. Could you run mmpose again with the new run_mmpose.py and run fit.py based on the new mmpose results?
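The shape of that fix can be sketched as follows (variable names follow the discussion, but the processing step and file path are hypothetical, not the actual run_mmpose.py contents):

```python
import json
import os
import tempfile

# Sketch of the bug: a processed copy (kpt_save) is built from the raw
# keypoints (kpt), but the raw variable was serialized by mistake.
kpt = [[10.0, 20.0, 0.9], [30.0, 40.0, 0.1]]  # raw (x, y, score) triples
kpt_save = [k for k in kpt if k[2] >= 0.3]    # hypothetical processing step

out_path = os.path.join(tempfile.gettempdir(), "437.json")
with open(out_path, "w") as f:
    # buggy line: json.dump(kpt, f)  -- dumped the unprocessed variable
    json.dump(kpt_save, f)           # fix: dump the processed keypoints
```

Bugs of this shape leave the visualization (drawn from the processed data) looking correct while the saved JSON disagrees with it, matching the symptom seen at frame 437.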

zhong2001 (Author):

Thank you for your suggestion. After making this change, the model works normally. Thank you again for your patience and assistance.

Jun-Pu commented Sep 30, 2024

To supplement: I solved a similar face fitting problem (as attached) by manually adding "deca_model.tar" to ./tools/DECA/data/. It seems that the DECA code does not raise an error when the pretrained models are absent.

zhong2001 (Author):

> To supplement: I solved a similar face fitting problem by manually adding "deca_model.tar" to ./tools/DECA/data/.

Yes, this is exactly one of the two missing weight files I mentioned earlier. The other one is FLAME_albedo_from_BFM.npz, also placed under ./tools/DECA/data. I have resolved the issue; thank you for your kindness.

mks0601 (Owner) commented Sep 30, 2024

👍
