Bug : [ERROR] Worker (pid:1087) was sent SIGKILL! Perhaps out of memory? #844

Open
chang-yichen opened this issue Nov 4, 2024 · 3 comments

@chang-yichen

I am using:
docker-compose up --build

Set up the .env file in the root folder:
VITE_LLM_MODELS_PROD="diffbot,openai-gpt-3.5,openai-gpt-4o"
OPENAI_API_KEY="your-openai-key"

But I get the following error:
[ERROR] Worker (pid:1087) was sent SIGKILL! Perhaps out of memory?

I am able to upload a PDF file and click Generate Graph, but it gets stuck at 0% processing:
(screenshot: file stuck at 0% processing)

When I check the Neo4j graph, some of the chunks have been created:
(screenshot: some chunk nodes created in the Neo4j graph)

I'm not sure what the issue is here.

@chang-yichen changed the title to "Bug : [ERROR] Worker (pid:1087) was sent SIGKILL! Perhaps out of memory?" on Nov 4, 2024
@kartikpersistent added the bug label on Nov 5, 2024
@kartikpersistent
Collaborator

Hi @chang-yichen, can you share the error trace?

@chang-yichen
Author

Hi @kartikpersistent, I see the following in my terminal:

backend | /usr/local/lib/python3.10/site-packages/pydantic/_internal/_fields.py:161: UserWarning: Field "model_name" has conflict with protected namespace "model_".
backend |
backend | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
backend | warnings.warn(
backend | [2024-11-06 01:00:07 +0000] [1] [ERROR] Worker (pid:283) was sent SIGKILL! Perhaps out of memory?
backend | 2024-11-06 01:00:07,473 - Embedding: Using SentenceTransformer , Dimension:384
backend | 2024-11-06 01:00:07,475 - Embedding: Using SentenceTransformer , Dimension:384
backend | [2024-11-06 01:00:07 +0000] [515] [INFO] Booting worker with pid: 515
backend | 2024-11-06 01:00:07,538 - USER_AGENT environment variable not set, consider setting it to identify your requests.
backend | [2024-11-06 01:00:08 +0000] [380] [INFO] Started server process [380]
backend | [2024-11-06 01:00:08 +0000] [380] [INFO] Waiting for application startup.
backend | [2024-11-06 01:00:08 +0000] [380] [INFO] Application startup complete.
backend | /code/src/shared/common_fn.py:89: LangChainDeprecationWarning: The class `HuggingFaceEmbeddings` was deprecated in LangChain 0.2.2 and will be removed in 1.0. An updated version of the class exists in the :class:`~langchain-huggingface package and should be used instead. To use it run `pip install -U :class:`~langchain-huggingface` and import as `from :class:`~langchain_huggingface import HuggingFaceEmbeddings.
backend | embeddings = SentenceTransformerEmbeddings(
backend | 2024-11-06 01:00:11,697 - PyTorch version 2.4.1 available.
backend | 2024-11-06 01:00:12,312 - Use pytorch device_name: cpu
backend | 2024-11-06 01:00:12,313 - Load pretrained SentenceTransformer: all-MiniLM-L6-v2
backend | /usr/local/lib/python3.10/site-packages/pydantic/_internal/_fields.py:161: UserWarning: Field "model_name" has conflict with protected namespace "model_".
backend |
backend | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
backend | warnings.warn(
backend | 2024-11-06 01:00:13,961 - Use pytorch device_name: cpu
backend | 2024-11-06 01:00:13,961 - Load pretrained SentenceTransformer: all-MiniLM-L6-v2
backend | 2024-11-06 01:00:15,851 - Embedding: Using SentenceTransformer , Dimension:384
backend | 2024-11-06 01:00:15,878 - USER_AGENT environment variable not set, consider setting it to identify your requests.
backend | 2024-11-06 01:00:17,699 - Embedding: Using SentenceTransformer , Dimension:384
backend | 2024-11-06 01:00:17,900 - Use pytorch device_name: cpu
backend | 2024-11-06 01:00:17,910 - Load pretrained SentenceTransformer: all-MiniLM-L6-v2
backend | [2024-11-06 01:00:19 +0000] [1] [ERROR] Worker (pid:380) was sent SIGKILL! Perhaps out of memory?
backend | [2024-11-06 01:00:19 +0000] [550] [INFO] Booting worker with pid: 550
backend | /code/src/shared/common_fn.py:89: LangChainDeprecationWarning: The class `HuggingFaceEmbeddings` was deprecated in LangChain 0.2.2 and will be removed in 1.0. An updated version of the class exists in the :class:`~langchain-huggingface package and should be used instead. To use it run `pip install -U :class:`~langchain-huggingface` and import as `from :class:`~langchain_huggingface import HuggingFaceEmbeddings.
backend | embeddings = SentenceTransformerEmbeddings(
backend | 2024-11-06 01:00:22,495 - Embedding: Using SentenceTransformer , Dimension:384
backend | 2024-11-06 01:00:24,824 - PyTorch version 2.4.1 available.
backend | 2024-11-06 01:00:25,429 - Use pytorch device_name: cpu
backend | 2024-11-06 01:00:25,429 - Load pretrained SentenceTransformer: all-MiniLM-L6-v2
backend | [2024-11-06 01:00:25 +0000] [409] [INFO] Started server process [409]
backend | [2024-11-06 01:00:25 +0000] [409] [INFO] Waiting for application startup.
backend | [2024-11-06 01:00:25 +0000] [409] [INFO] Application startup complete.
backend | /usr/local/lib/python3.10/site-packages/pydantic/_internal/_fields.py:161: UserWarning: Field "model_name" has conflict with protected namespace "model_".
backend |
backend | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
backend | warnings.warn(
backend | 2024-11-06 01:00:26,153 - Use pytorch device_name: cpu
backend | 2024-11-06 01:00:26,155 - Load pretrained SentenceTransformer: all-MiniLM-L6-v2
backend | [2024-11-06 01:00:29 +0000] [1] [ERROR] Worker (pid:409) was sent SIGKILL! Perhaps out of memory?
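
The worker keeps getting killed and rebooted right after the SentenceTransformer embedding model loads, so it looks like the backend container is hitting a memory limit. A quick way to confirm that (assuming the standard docker CLI and the backend service name from this compose file) would be to watch container memory usage while a file is processing:

docker stats                      # live memory/CPU usage per container
docker compose logs -f backend    # watch for further SIGKILL messages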

@kartikpersistent
Collaborator

Hi @chang-yichen, it is related to the number of workers. Please reduce the number of workers according to your CPU and memory capacity.
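
A minimal sketch of that change, assuming the backend is launched with Gunicorn and Uvicorn workers as the worker boot messages in the log suggest; the actual start command lives in the backend Dockerfile/entrypoint, and the app module (written here as score:app) and port may differ in your checkout:

gunicorn score:app \
  --workers 1 \
  --worker-class uvicorn.workers.UvicornWorker \
  --bind 0.0.0.0:8000 \
  --timeout 300

Starting from a single worker and only increasing the count while memory usage stays within the container limit avoids the SIGKILLs; alternatively, raising the memory available to Docker (e.g. in Docker Desktop settings) lets you keep more workers.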
