My own data test is very poor #5

Open
pengweixiang opened this issue May 16, 2019 · 27 comments

Comments

@pengweixiang

I used my own data for testing, and the results show the strange artifacts below. Could you tell me what might be causing this?
[attached result images: 1_frame30290face0_1_frame30268face0, 2_frame163116face0_2_frame166662face0]

@donydchen
Owner

Hi @pengweixiang, did you load the pretrained weights provided by this project? If yes, how did you crop the faces and extract the AU vectors?
As mentioned here, this project uses face_recognition to extract the face bounding boxes and OpenFace to obtain the AU vectors. Kindly follow the same settings if you want to test on your own dataset.

@pengweixiang
Author

> Hi @pengweixiang, did you load the pretrained weights provided by this project? If yes, how did you crop the faces and extract the AU vectors?
> As mentioned here, this project uses face_recognition to extract the face bounding boxes and OpenFace to obtain the AU vectors. Kindly follow the same settings if you want to test on your own dataset.

I followed the same settings, except that I did not use face_recognition to extract the face bounding box. Could you share that part of the code for my reference? Thank you very much!

@donydchen
Owner

@pengweixiang, the main function is as below. Note that you need to install the face_recognition package first.

import face_recognition
from PIL import Image

def crop_face(img_path, size=(128, 128)):
    # face_locations returns bounding boxes as (top, right, bottom, left).
    face_im = face_recognition.load_image_file(img_path)
    bboxs = face_recognition.face_locations(face_im)

    im = None
    if len(bboxs) > 0:
        im = Image.fromarray(face_im)
        # Take the first detected face; PIL's crop expects (left, upper, right, lower).
        bbox = bboxs[0]
        im = im.crop((bbox[3], bbox[0], bbox[1], bbox[2]))
        # Downsample in place so the crop fits within `size`, keeping the aspect ratio.
        # Note: Image.ANTIALIAS is an alias of Image.LANCZOS and was removed in Pillow 10.
        im.thumbnail(size, Image.ANTIALIAS)

    return im
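
For reference, a minimal usage sketch (the file names here are placeholders):

cropped = crop_face("my_photo.jpg")
if cropped is not None:
    cropped.save("my_photo_face.jpg")
else:
    print("no face detected in my_photo.jpg")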

@pengweixiang
Author


Thank you very much for your answer; I will try it first.

@pengweixiang
Author

It did not work; the problem still exists. Very strange.
[attached result images: extract, N_0000000376_N_0000000283]

@donydchen
Owner

Maybe you can try to train the model with your own dataset?

@pengweixiang
Author

> Maybe you can try to train the model with your own dataset?

I suspect that it is an OpenFace issue. I re-generated the AU values for the data you provided with version 2.0.5, compared the two sets of values, and found a certain deviation. After retesting, the results were also worse. May I ask which version of OpenFace you are using?

@donydchen
Owner

The version of OpenFace I used for this project is 2.0.4.

@pengweixiang
Author

> The version of OpenFace I used for this project is 2.0.4.

I can't find this version online. Can you provide the relevant links? I also want to retrain the model here, using the data you provide, to obtain a model with the same quality as the one you released. Are there any requirements for the training parameters?

@donydchen
Owner

The source code of OpenFace v2.0.4 can be downloaded from https://github.com/TadasBaltrusaitis/OpenFace/releases/tag/OpenFace_2.0.4, and you can refer to https://github.com/TadasBaltrusaitis/OpenFace/wiki/Unix-Installation for the installation guide. Good luck.

@pengweixiang
Author

> The source code of OpenFace v2.0.4 can be downloaded from https://github.com/TadasBaltrusaitis/OpenFace/releases/tag/OpenFace_2.0.4, and you can refer to https://github.com/TadasBaltrusaitis/OpenFace/wiki/Unix-Installation for the installation guide. Good luck.

The training results are still not good... It seems that luck is very important.

@pengweixiang
Author

[attached result images: 191942_195985, 197408_200048]
The results look like this; I don't know why. Any advice?

@donydchen
Owner

Sorry, but I don't have much insight into your code, settings, or experiment environment, so I'm afraid I can't provide any effective suggestions for your case.
But if you use the code and dataset provided by this project, it should yield results similar to those shown in the README. I have trained and tested it on several different machines before, and they all worked fine.

@ghost

ghost commented May 20, 2019

I am testing in Google Colab and trying to get this set up there, so the environment will not matter. I am still having trouble with the Action Units. If anyone is interested in setting this project up to train and test in Google Colab, feel free to contact me, and then we should all get the same results.
@donydchen I can share my Google Colab notebook, or do you plan to set this up in Colab yourself to make it easy to reproduce and use in any environment?

@donydchen
Owner

Hi @ak9250, many thanks for your suggestion. However, I'm busy doing research on other topics these days, so I'm afraid I don't have time to update the project for the time being.
For extracting Action Units, you can check out https://github.com/donydchen/ran_replicate/blob/master/tools/extract_au.py for some reference.
For using another dataset, you'll need to create a specific dataset class by inheriting from base_dataset.py. Basically, you can just copy celeba.py and modify a few lines of code to adapt it to your own dataset, then call your dataset class in data_loader.py (a rough sketch follows below).
Hope it helps.
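
A minimal sketch of the idea, assuming the dataset interface follows the pattern of celeba.py (the class, method, and option names below are illustrative assumptions, not the project's exact API):

# Hypothetical custom dataset modeled on celeba.py; check base_dataset.py
# in this repository for the real base-class interface before relying on this.
from .base_dataset import BaseDataset

class MyFacesDataset(BaseDataset):
    """Serves cropped face images together with their AU intensity vectors."""

    def initialize(self, opt):
        super(MyFacesDataset, self).initialize(opt)
        # Point these at your own data; celeba.py reads analogous paths.
        self.imgs_dir = opt.data_root  # directory of 128x128 face crops
        self.aus_pkl = opt.aus_pkl     # pickle: image name -> 17-dim AU vector

    def name(self):
        return "MyFacesDataset"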

@pengweixiang
Author

> Sorry, but I don't have much insight into your code, settings, or experiment environment, so I'm afraid I can't provide any effective suggestions for your case.
> But if you use the code and dataset provided by this project, it should yield results similar to those shown in the README. I have trained and tested it on several different machines before, and they all worked fine.

Found the problem; it works now. Thank you very much!

@xrtbuaa

xrtbuaa commented Jun 3, 2019

@pengweixiang
Hi, I just want to know the details of how you fixed the problem. Mine just didn't work after fine-tuning on the CelebA dataset (the output does not change at all), and I don't know why. Thank you.

@plutoyuxie

plutoyuxie commented Jul 23, 2019

Hi. I used the CelebA data, the face_recognition package, and OpenFace to test the consistency of the AU values. I found that the way the faces are aligned really affects those values.
The call im.thumbnail(size, Image.ANTIALIAS) resizes the image in place (it does not actually return an image) to a height and width smaller than 128, and the result happens to be a patch of the corresponding cropped face image that the author provides.
Maybe there is some padding trick, or does the version of the face_recognition package matter?

@donydchen
Owner

Hi @plutoyuxie, im.thumbnail is self-explanatory: it aims to downsample a given image. That means, for an input image, if its size is larger than 128x128, it will be downsampled to fit within 128x128, while if it is smaller than 128x128, the original size will be retained.

If you'd like to make sure an image is resized to exactly 128x128, kindly check im.resize.

Note that before being fed to the training network, an image will always be resized to a specific shape, e.g. 128x128. So the size of the images produced in pre-processing may not really matter.
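
A quick sketch of the difference, using plain Pillow (the input path and sizes are placeholders):

from PIL import Image

im = Image.open("face.jpg")  # e.g. a 178x218 CelebA image

# thumbnail() works in place, keeps the aspect ratio, and only ever shrinks,
# so the result fits within 128x128 but may be smaller on one side.
small = im.copy()
small.thumbnail((128, 128), Image.ANTIALIAS)  # use Image.LANCZOS on Pillow >= 10
print(small.size)  # e.g. (104, 128) for a 178x218 input

# resize() returns a new image forced to exactly the requested size,
# stretching if the aspect ratio differs.
fixed = im.resize((128, 128), Image.ANTIALIAS)
print(fixed.size)  # always (128, 128)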

@plutoyuxie

plutoyuxie commented Jul 23, 2019

Hi @donydchen, when I use size=(128,128) as the parameter, im.thumbnail makes the image size equal to something like (107, 108). So I'm confused.
I want to test my own data, but the face alignment is different (after using im.thumbnail and im.resize, my hand-made CelebA images are different from yours), so the results turn out not that good.

I fixed my preprocessing problem by simply deleting the line im = im.crop((bbox[3], bbox[0], bbox[1], bbox[2])). After that, I get the same cropped face images as the author provides.

@guozhongluo

> @pengweixiang
> Hi, I just want to know the details of how you fixed the problem. Mine just didn't work after fine-tuning on the CelebA dataset (the output does not change at all), and I don't know why. Thank you.

Me too.

@18810251126

Hi @pengweixiang, I just want to know the details of how you fixed the problem. Mine just didn't work after fine-tuning on the CelebA dataset (the output does not change at all), and I don't know why. Thank you.

@pongkun

pongkun commented Apr 5, 2020

> Hi @pengweixiang, I just want to know the details of how you fixed the problem. Mine just didn't work after fine-tuning on the CelebA dataset (the output does not change at all), and I don't know why. Thank you.

Check whether the default setting of your dataset is 'none'. If it is, set it to resize.

@pengweixiang
Author

pengweixiang commented Apr 10, 2020 via email

@sssssshf

sssssshf commented Apr 1, 2022

> For extracting Action Units, you can check out https://github.com/donydchen/ran_replicate/blob/master/tools/extract_au.py for some reference. […]

Can you provide the OpenFace code for extracting the Action Units?

@sssssshf

sssssshf commented Apr 1, 2022

> The expression AU parameters generated on every system are different, so you need to retrain and adjust; you cannot use the demo directly. Sent from my iPhone


How are these AU values obtained? Could you provide the OpenFace code? Thanks.

@yuangan

yuangan commented Oct 14, 2022

> It did not work; the problem still exists. Very strange. [attached images: extract, N_0000000376_N_0000000283]

I met the same problem in testing: I succeeded with the CelebA data from the Google Drive, but failed on my own dataset. I found a way to fix it. The cause was that I was using the wrong parameters extracted by OpenFace. Here is my procedure:

  1. Download and install OpenFace from https://github.com/TadasBaltrusaitis/OpenFace/releases/tag/OpenFace_2.0.4. Crop images to 128x128.
  2. Extract the AUs with the command: ./build/bin/FaceLandmarkImg -fdir ../val_set/img_128/ -out_dir ../val_set/aus/ -aus
  3. Use the code at https://github.com/albertpumarola/GANimation/blob/master/data/prepare_au_annotations.py to extract columns [2:19], as the README says (see the sketch after this comment). If we don't add '-aus' in step 2, we will get the wrong AU parameters here.
  4. Prepare the dataset like the CelebA dataset in the Google Drive and test.

Here is my result:
[attached result image]

The work is interesting and the pre-trained weights are helpful. Thanks!
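
For anyone following these steps, a minimal sketch of what step 3 boils down to. The column layout (two metadata columns followed by the 17 AU intensity columns AU01_r ... AU45_r) is an assumption based on OpenFace's -aus CSV output, and the paths and pickle name are placeholders; verify both against your own files:

import glob
import pickle

import numpy as np
import pandas as pd

aus_dict = {}
for csv_path in sorted(glob.glob("../val_set/aus/*.csv")):
    df = pd.read_csv(csv_path)
    # Each per-image CSV holds one row: face id, detection confidence,
    # then the 17 AU intensities; [2:19] keeps exactly those 17 values.
    aus = df.iloc[0].to_numpy()[2:19].astype(np.float32)
    img_name = csv_path.split("/")[-1][:-4]  # strip directory and '.csv'
    aus_dict[img_name] = aus

with open("my_aus.pkl", "wb") as f:
    pickle.dump(aus_dict, f)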
