Release of ShapeNet dataset #2

Open
OscarXZQ opened this issue Apr 5, 2024 · 9 comments


@OscarXZQ

OscarXZQ commented Apr 5, 2024

Dear authors,

Thanks for this impressive work!

I am wondering if you can share the ShapeNet cars/chairs dataset with 512x512 resolution / 100 views you used.

Thanks,
Oscar

@GaussianCube
Owner

Thanks for your interest in our work. For ShapeNet data rendering, we adopt the rendering script from GET3D. For OmniObject3D, we adopt the official rendering script.
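For reference, an invocation of the GET3D rendering scripts typically looks like the following; the paths are placeholders and the flag names may differ across versions of the script, so please check the GET3D repository:

python render_all.py --save_folder ./shapenet_rendered --dataset_folder ./ShapeNetCore.v1 --blender_root /path/to/blender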

@OscarXZQ
Author

Thanks!

May I ask if you used any non-default parameters to render your ShapeNet dataset?

Also, it seems that 150 views are rendered; may I ask how these views are arranged/sampled?

Thanks again for your response.
Oscar

@OscarXZQ
Author

Hi,
Another question: the FID-50K in your paper is described as being measured between 50k generated samples and 50k ground-truth renderings. May I ask how the 50k ground truths are selected? And do the 50k generated samples consist of the same objects/views as the 50k ground-truth renderings?

Thanks,
Oscar

@ForeverFancy
Collaborator

The camera distributions are shown in the following table:

Dataset          Rotation Angle   Elevation Angle
ShapeNet Car     (0, 2π)          (π/6, π/2)
ShapeNet Chair   (0, 2π)          (π/6, π/2)

The ground-truth renderings are randomly sampled. Generated samples of both our model and the baselines start from random Gaussian noise, and we then render the generated objects using the randomly sampled camera poses of the GT renderings.
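For concreteness, sampling camera poses from this distribution could look like the following NumPy sketch; the function name and the camera radius are placeholders, not values from our code:

import numpy as np

def sample_camera_poses(n_views, radius=1.5, seed=None):
    # Azimuth (rotation angle) uniform in (0, 2π), elevation uniform in (π/6, π/2),
    # matching the table above for ShapeNet Car/Chair.
    rng = np.random.default_rng(seed)
    azimuth = rng.uniform(0.0, 2.0 * np.pi, size=n_views)
    elevation = rng.uniform(np.pi / 6, np.pi / 2, size=n_views)
    # Convert to Cartesian camera centers on a sphere, looking at the origin.
    x = radius * np.cos(elevation) * np.cos(azimuth)
    y = radius * np.cos(elevation) * np.sin(azimuth)
    z = radius * np.sin(elevation)
    return azimuth, elevation, np.stack([x, y, z], axis=-1)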

@m-violet-s

m-violet-s commented Jun 12, 2024


I still have a question about the camera distributions above. I noticed that the photos in the example_data.zip provided by GaussianCube_Construction appear to be rendered with elevation angles in (0, π/3), because the results of densification-constrained fitting on the rendered images differ substantially between the two ranges.

This is the result with the elevation angle in (0, π/3):
[image]

This is the result with the elevation angle in (π/6, π/2):
[image]
The latter has spurious points under the bottom of the car.
So I think the elevation angle should be within (0, π/3).
Is there something wrong with my understanding? I would be very grateful if you could tell me the answer.

@ForeverFancy
Collaborator

Hi,

Thank you for your query about the elevation angle in the rendering.

The elevation angle is the angle between the horizontal plane and the camera's line of sight towards the object. The rendering script uses Euler angles, where the rotation around the X-axis corresponds to the elevation angle.

We use the rendering script of GET3D. rotation_euler[0] is randomly sampled between 0 and 60 degrees (0 to π/3). Since the elevation angle equals π/2 minus this rotation, this corresponds to elevation angles ranging over (π/6, π/2):

elevation_angle_list = np.random.rand(args.views) * 60  # sample args.views angles uniformly in [0, 60) degrees
cam_empty.rotation_euler[0] = math.radians(elevation_angle_list[i])  # apply the i-th angle as the X-axis Euler rotation
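As a quick sanity check of this correspondence (an illustration only, not part of the rendering script):

import math

for euler_deg in (0.0, 30.0, 60.0):
    elevation = math.pi / 2 - math.radians(euler_deg)
    print(f"euler={euler_deg:.0f} deg -> elevation={elevation:.4f} rad")
# 0 deg maps to π/2 ≈ 1.5708 and 60 deg maps to π/6 ≈ 0.5236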

The angles are saved and applied correctly as per the GET3D paper's specifications.
I hope this clarifies the matter. If you have further concerns or examples of the issue, please share them for more targeted assistance.

@m-violet-s

Thank you for your answer, which is very helpful to me. I would like to ask how to obtain ./shapenet_car/mean.pt and ./shapenet_car/std.pt for training unconditional generation on ShapeNet. I can't seem to find the corresponding method in GaussianCube_Construction.

@ForeverFancy
Collaborator

You could refer to this script for calculating the mean and std over all your fitted assets.
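Conceptually, it just computes per-channel mean and std statistics over all fitted GaussianCubes. A minimal sketch, where the directory layout and tensor shape are placeholders rather than the exact script:

import glob
import torch

paths = sorted(glob.glob("./shapenet_car/volumes/*.pt"))  # hypothetical location of the fitted assets
running_sum, running_sq, count = None, None, 0
for p in paths:
    x = torch.load(p).float()  # assumed shape (N, C): N Gaussians with C feature channels
    if running_sum is None:
        running_sum = x.sum(dim=0)
        running_sq = (x ** 2).sum(dim=0)
    else:
        running_sum += x.sum(dim=0)
        running_sq += (x ** 2).sum(dim=0)
    count += x.shape[0]

mean = running_sum / count
std = (running_sq / count - mean ** 2).clamp_min(1e-12).sqrt()  # Var[x] = E[x^2] - E[x]^2
torch.save(mean, "./shapenet_car/mean.pt")
torch.save(std, "./shapenet_car/std.pt")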

@m-violet-s

Oh, thanks again!
