
running inference on single image overloads gpu memory #5

Open
shahyaan opened this issue Feb 29, 2024 · 3 comments

Comments

@shahyaan

Hi,

When trying to run inference on a test image using your script, I get a "CUDA out of memory" error. My image size is 640x480, and my GPU has 24GB of memory. I'd appreciate any help resolving this. Thanks!

@rohit901
Owner

rohit901 commented Feb 29, 2024

Hello,

Thanks a lot for your interest in our work.

I tested it on a 24GB machine available to me, and unfortunately it does not run; it requires a 40GB GPU.

However, I'd suggest trying to remove one of the models, either GDINO or SAM, and seeing if that helps.

In the main script, SAM is initialized as shown below. (You may also have to remove the usage of SAM in other parts of the script, e.g. in the inference_gdino function in scripts/novel_object_detection/ground_dino_utils.py.)

sam init:

sam = load_sam_model(device, sam_checkpoint)

gdino init:

model = load_model("cfg/GroundingDINO/GDINO.py", gdino_checkpoint)
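One way to apply this workaround is to gate each of the two initializations above behind a flag, so a smaller GPU only loads the model it needs. A minimal sketch; the loader functions here are stubs standing in for the repo's real load_sam_model / load_model:

```python
# Sketch only: gate model initialization behind flags to save GPU memory.
# load_sam_model / load_model below are placeholder stubs, not the repo's code.

USE_SAM = False    # skip SAM to free its share of GPU memory
USE_GDINO = True

def load_sam_model(device, checkpoint):
    """Stub standing in for the repo's SAM loader."""
    return {"model": "sam", "device": device, "checkpoint": checkpoint}

def load_model(config, checkpoint):
    """Stub standing in for the repo's GroundingDINO loader."""
    return {"model": "gdino", "config": config, "checkpoint": checkpoint}

def init_models(device="cuda", sam_ckpt="sam.pth", gdino_ckpt="gdino.pth"):
    # Only initialize the models whose flags are enabled
    sam = load_sam_model(device, sam_ckpt) if USE_SAM else None
    gdino = load_model("cfg/GroundingDINO/GDINO.py", gdino_ckpt) if USE_GDINO else None
    return sam, gdino

sam, gdino = init_models()
```

Any code path that uses the disabled model (e.g. the SAM calls in inference_gdino) then needs a corresponding `if sam is not None:` guard.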

@shahyaan
Author

Hi Rohit,

Thanks for the quick response. I tried what you suggested, but even after disabling SAM, and then the MaskRCNN models as well, the GPU still runs out of memory.

@rohit901
Owner

rohit901 commented Mar 1, 2024

Did you try disabling GDINO itself, and using the SRM scores or the refinement from SAM as-is? Depending on your use case, that may still be helpful.

Otherwise, could you please try running it on a 40GB machine? I've also tested the code on a single A100, which has 40GB of VRAM.
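As a general check before upgrading hardware, it's also worth making sure the forward pass runs inside torch.no_grad(), so activations aren't kept for backpropagation. A generic PyTorch sketch, not repo code (the Linear model is just a stand-in for the real detector):

```python
import torch

# Stand-in for the real detector; the point is the no_grad context, which
# prevents autograd from storing activations during inference.
model = torch.nn.Linear(8, 2)
x = torch.randn(1, 8)

with torch.no_grad():
    out = model(x)

# out carries no autograd history, so no extra activation memory is retained
assert not out.requires_grad
```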
