image_to_animation.py is not working with docker #305

Open

vararth opened this issue Nov 13, 2024 · 4 comments
vararth commented Nov 13, 2024

Terminal log:

(animated_drawings) D:\Github\AnimatedDrawings\torchserve>docker run -d --name docker_torchserve -p 8080:8080 -p 8081:8081 docker_torchserve
7dce048edf8f4a9d892ff22746d299e0999f98d86abb532fbae31fb7cb998dfb

(animated_drawings) D:\Github\AnimatedDrawings\torchserve>curl http://localhost:8080/ping
{
  "status": "Healthy"
}

(animated_drawings) D:\Github\AnimatedDrawings\torchserve>cd ../examples

(animated_drawings) D:\Github\AnimatedDrawings\examples>python image_to_animation.py drawings/garlic.png garlic_out
Traceback (most recent call last):
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
    response = self._make_request(
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\connection.py", line 507, in getresponse
    httplib_response = super().getresponse()
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\http\client.py", line 1348, in getresponse
    response.begin()
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\http\client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\http\client.py", line 285, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\requests\adapters.py", line 486, in send
    resp = conn.urlopen(
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
    retries = retries.increment(
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\util\retry.py", line 474, in increment
    raise reraise(type(error), error, _stacktrace)
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\util\util.py", line 38, in reraise
    raise value.with_traceback(tb)
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
    response = self._make_request(
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\urllib3\connection.py", line 507, in getresponse
    httplib_response = super().getresponse()
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\http\client.py", line 1348, in getresponse
    response.begin()
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\http\client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\http\client.py", line 285, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "image_to_animation.py", line 41, in <module>
    image_to_animation(img_fn, char_anno_dir, motion_cfg_fn, retarget_cfg_fn)
  File "image_to_animation.py", line 19, in image_to_animation
    image_to_annotations(img_fn, char_anno_dir)
  File "D:\Github\AnimatedDrawings\examples\image_to_annotations.py", line 51, in image_to_annotations
    resp = requests.post("http://localhost:8080/predictions/drawn_humanoid_detector", files=request_data, verify=False)
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\requests\api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\ADSMN\miniconda3\envs\animated_drawings\lib\site-packages\requests\adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

(animated_drawings) D:\Github\AnimatedDrawings\examples>

I am unsure what the issue is - can someone please help out?
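
For reference, the failing call is the requests.post at image_to_annotations.py line 51. A minimal standalone reproduction of just that request (a sketch, not code from the repo; it assumes the container above is still running, that drawings/garlic.png exists in the examples directory, and that the multipart field name "data" matches what the script sends) can help separate a TorchServe problem from a script problem:

import requests

# Reproduce only the detector request that the traceback points at.
# Assumptions: the docker_torchserve container is up on localhost:8080;
# the "data" field name is an assumption, not taken from image_to_annotations.py.
with open("drawings/garlic.png", "rb") as f:
    resp = requests.post(
        "http://localhost:8080/predictions/drawn_humanoid_detector",
        files={"data": f},
        verify=False,
        timeout=120,
    )
print(resp.status_code)
print(resp.text[:500])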

@IamGodRod

I'm running into the same problem.


sGxOxDs commented Dec 20, 2024

I think this problem may be related to Docker.
Here is my conda log:

(base) PS C:\Users\RUMU-RTX3070> conda activate animated_drawings
(animated_drawings) PS C:\Users\RUMU-RTX3070> cd "D:\Test\AnimatedDrawings\AnimatedDrawings\torchserve"
(animated_drawings) PS D:\Test\AnimatedDrawings\AnimatedDrawings\torchserve> cd ../examples
(animated_drawings) PS D:\Test\AnimatedDrawings\AnimatedDrawings\examples> python image_to_animation.py drawings/garlic.png garlic_out
Traceback (most recent call last):
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
    response = self._make_request(
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\connection.py", line 507, in getresponse
    httplib_response = super().getresponse()
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\http\client.py", line 1348, in getresponse
    response.begin()
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\http\client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\http\client.py", line 285, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\requests\adapters.py", line 486, in send
    resp = conn.urlopen(
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
    retries = retries.increment(
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\util\retry.py", line 474, in increment
    raise reraise(type(error), error, _stacktrace)
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\util\util.py", line 38, in reraise
    raise value.with_traceback(tb)
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
    response = self._make_request(
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\urllib3\connection.py", line 507, in getresponse
    httplib_response = super().getresponse()
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\http\client.py", line 1348, in getresponse
    response.begin()
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\http\client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\http\client.py", line 285, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "image_to_animation.py", line 41, in <module>
    image_to_animation(img_fn, char_anno_dir, motion_cfg_fn, retarget_cfg_fn)
  File "image_to_animation.py", line 19, in image_to_animation
    image_to_annotations(img_fn, char_anno_dir)
  File "D:\Test\AnimatedDrawings\AnimatedDrawings\examples\image_to_annotations.py", line 51, in image_to_annotations
    resp = requests.post("http://localhost:8080/predictions/drawn_humanoid_detector", files=request_data, verify=False)
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\requests\api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "D:\RUMU-RTX3070\anaconda3\envs\animated_drawings\lib\site-packages\requests\adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
(animated_drawings) PS D:\Test\AnimatedDrawings\AnimatedDrawings\examples>

This is my attempt to ping the Docker container:

PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
curl: (52) Empty reply from server
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
curl: (52) Empty reply from server
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
curl: (52) Empty reply from server
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
{
  "status": "Healthy"
}
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
{
  "status": "Healthy"
}
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
{
  "status": "Healthy"
}
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
{
  "status": "Healthy"
}
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
curl: (52) Empty reply from server
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
curl: (52) Empty reply from server
PS C:\Users\RUMU-RTX3070> curl http://localhost:8080/ping
curl: (52) Empty reply from server
PS C:\Users\RUMU-RTX3070>
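
The alternating "Empty reply from server" and "Healthy" responses above look like the container is still bringing its model workers up when the requests arrive. A small wait-for-readiness sketch (my own workaround idea, not part of the repo; the port, the 3-in-a-row threshold, and the timeouts are assumptions) that polls /ping until it has been healthy several times in a row before running image_to_animation.py:

import time
import requests

def wait_for_torchserve(url="http://localhost:8080/ping", streak_needed=3, timeout_s=300):
    # Poll the TorchServe ping endpoint until it reports "Healthy" several
    # times in a row, or give up after timeout_s seconds.
    streak = 0
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            if requests.get(url, timeout=5).json().get("status") == "Healthy":
                streak += 1
                if streak >= streak_needed:
                    return True
            else:
                streak = 0
        except (requests.RequestException, ValueError):
            streak = 0
        time.sleep(2)
    return False

if wait_for_torchserve():
    print("TorchServe looks ready; running image_to_animation.py should be safe now.")
else:
    print("TorchServe never became consistently healthy; check the docker log below.")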

And the Docker log:

2024-12-20 15:34:16 Removing orphan pid file.
2024-12-20 15:34:17 WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2024-12-20 15:34:17 nvidia-smi not available or failed: Cannot run program "nvidia-smi": error=2, No such file or directory
2024-12-20 15:34:17 2024-12-20T07:34:17,563 [DEBUG] main org.pytorch.serve.util.ConfigManager - xpu-smi not available or failed: Cannot run program "xpu-smi": error=2, No such file or directory
2024-12-20 15:34:17 2024-12-20T07:34:17,567 [WARN ] main org.pytorch.serve.util.ConfigManager - Your torchserve instance can access any URL to load models. When deploying to production, make sure to limit the set of allowed_urls in config.properties
2024-12-20 15:34:17 2024-12-20T07:34:17,580 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2024-12-20 15:34:17 2024-12-20T07:34:17,631 [INFO ] main org.pytorch.serve.metrics.configuration.MetricConfiguration - Successfully loaded metrics configuration from /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml
2024-12-20 15:34:17 2024-12-20T07:34:17,814 [INFO ] main org.pytorch.serve.ModelServer - 
2024-12-20 15:34:17 Torchserve version: 0.12.0
2024-12-20 15:34:17 TS Home: /opt/conda/lib/python3.8/site-packages
2024-12-20 15:34:17 Current directory: /
2024-12-20 15:34:17 Temp directory: /tmp
2024-12-20 15:34:17 Metrics config path: /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml
2024-12-20 15:34:17 Number of GPUs: 0
2024-12-20 15:34:17 Number of CPUs: 20
2024-12-20 15:34:17 Max heap size: 3984 M
2024-12-20 15:34:17 Python executable: /opt/conda/bin/python
2024-12-20 15:34:17 Config file: /home/torchserve/config.properties
2024-12-20 15:34:17 Inference address: http://0.0.0.0:8080
2024-12-20 15:34:17 Management address: http://0.0.0.0:8081
2024-12-20 15:34:17 Metrics address: http://0.0.0.0:8082
2024-12-20 15:34:17 Model Store: /home/torchserve/model-store
2024-12-20 15:34:17 Initial Models: all
2024-12-20 15:34:17 Log dir: /logs
2024-12-20 15:34:17 Metrics dir: /logs
2024-12-20 15:34:17 Netty threads: 0
2024-12-20 15:34:17 Netty client threads: 0
2024-12-20 15:34:17 Default workers per model: 20
2024-12-20 15:34:17 Blacklist Regex: N/A
2024-12-20 15:34:17 Maximum Response Size: 6553500
2024-12-20 15:34:17 Maximum Request Size: 6553500
2024-12-20 15:34:17 Limit Maximum Image Pixels: true
2024-12-20 15:34:17 Prefer direct buffer: false
2024-12-20 15:34:17 Allowed Urls: [file://.*|http(s)?://.*]
2024-12-20 15:34:17 Custom python dependency for model allowed: false
2024-12-20 15:34:17 Enable metrics API: true
2024-12-20 15:34:17 Metrics mode: LOG
2024-12-20 15:34:17 Disable system metrics: false
2024-12-20 15:34:17 Workflow Store: /home/torchserve/model-store
2024-12-20 15:34:17 CPP log config: N/A
2024-12-20 15:34:17 Model config: N/A
2024-12-20 15:34:17 System metrics command: default
2024-12-20 15:34:17 Model API enabled: false
2024-12-20 15:34:17 2024-12-20T07:34:17,829 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager -  Loading snapshot serializer plugin...
2024-12-20 15:34:17 2024-12-20T07:34:17,848 [DEBUG] main org.pytorch.serve.ModelServer - Loading models from model store: drawn_humanoid_pose_estimator.mar
2024-12-20 15:34:22 2024-12-20T07:34:22,217 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model drawn_humanoid_pose_estimator
2024-12-20 15:34:22 2024-12-20T07:34:22,217 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model drawn_humanoid_pose_estimator
2024-12-20 15:34:22 2024-12-20T07:34:22,217 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model drawn_humanoid_pose_estimator loaded.
2024-12-20 15:34:22 2024-12-20T07:34:22,217 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: drawn_humanoid_pose_estimator, count: 20
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] main org.pytorch.serve.ModelServer - Loading models from model store: drawn_humanoid_detector.mar
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9002, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9006, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9001, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9011, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9004, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,232 [DEBUG] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9003, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,233 [DEBUG] W-9019-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9019, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9008, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9009, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,234 [DEBUG] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9014, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,233 [DEBUG] W-9016-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9016, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,234 [DEBUG] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9015, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,234 [DEBUG] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9007, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,234 [DEBUG] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9013, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,234 [DEBUG] W-9017-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9017, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,235 [DEBUG] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9010, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,232 [DEBUG] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9000, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9012, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,234 [DEBUG] W-9018-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9018, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:22 2024-12-20T07:34:22,231 [DEBUG] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9005, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9002, pid=72
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9013, pid=79
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9006, pid=67
2024-12-20 15:34:25 2024-12-20T07:34:25,736 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9011, pid=70
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9016, pid=78
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9017, pid=82
2024-12-20 15:34:25 2024-12-20T07:34:25,738 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9013
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9010, pid=81
2024-12-20 15:34:25 2024-12-20T07:34:25,738 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9016
2024-12-20 15:34:25 2024-12-20T07:34:25,738 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9002
2024-12-20 15:34:25 2024-12-20T07:34:25,738 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9010
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9000, pid=84
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9018, pid=86
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9005, pid=87
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9001, pid=68
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9012, pid=85
2024-12-20 15:34:25 2024-12-20T07:34:25,739 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9000
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9008, pid=75
2024-12-20 15:34:25 2024-12-20T07:34:25,739 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9018
2024-12-20 15:34:25 2024-12-20T07:34:25,738 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9011
2024-12-20 15:34:25 2024-12-20T07:34:25,738 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9006
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9003, pid=71
2024-12-20 15:34:25 2024-12-20T07:34:25,739 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9012
2024-12-20 15:34:25 2024-12-20T07:34:25,739 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9008
2024-12-20 15:34:25 2024-12-20T07:34:25,739 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9003
2024-12-20 15:34:25 2024-12-20T07:34:25,739 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9005
2024-12-20 15:34:25 2024-12-20T07:34:25,740 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9001
2024-12-20 15:34:25 2024-12-20T07:34:25,741 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9009, pid=76
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9019, pid=74
2024-12-20 15:34:25 2024-12-20T07:34:25,742 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9017
2024-12-20 15:34:25 2024-12-20T07:34:25,742 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9019
2024-12-20 15:34:25 2024-12-20T07:34:25,742 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9009
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9014, pid=77
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9004, pid=69
2024-12-20 15:34:25 2024-12-20T07:34:25,735 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9007, pid=80
2024-12-20 15:34:25 2024-12-20T07:34:25,742 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9007
2024-12-20 15:34:25 2024-12-20T07:34:25,742 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9004
2024-12-20 15:34:25 2024-12-20T07:34:25,743 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9014
2024-12-20 15:34:25 2024-12-20T07:34:25,746 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,747 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,749 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]87
2024-12-20 15:34:25 2024-12-20T07:34:25,749 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,749 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,749 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,749 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,749 [DEBUG] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9005-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,749 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,750 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,750 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,750 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,751 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,751 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,751 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,751 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,751 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9015, pid=83
2024-12-20 15:34:25 2024-12-20T07:34:25,751 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,752 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9015
2024-12-20 15:34:25 2024-12-20T07:34:25,752 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,753 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,753 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,747 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]67
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]71
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]78
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]77
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [DEBUG] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9006-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]84
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]70
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]86
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [DEBUG] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9011-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [DEBUG] W-9018-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9018-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]69
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]72
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [DEBUG] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,755 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [DEBUG] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,755 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [DEBUG] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9004-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [DEBUG] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]68
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [DEBUG] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9014-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,755 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]82
2024-12-20 15:34:25 2024-12-20T07:34:25,754 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]75
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]80
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [DEBUG] W-9016-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9016-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [DEBUG] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,756 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]79
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]74
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [DEBUG] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9013-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,757 [DEBUG] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9007-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,758 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,758 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [DEBUG] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9008-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]76
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]81
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]85
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,759 [DEBUG] W-9017-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9017-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,760 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,760 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,760 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,760 [DEBUG] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9009-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,760 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,760 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,760 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,760 [DEBUG] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,761 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,761 [DEBUG] W-9019-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9019-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,761 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,761 [DEBUG] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9012-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,761 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,762 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:25 2024-12-20T07:34:25,763 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - [PID]83
2024-12-20 15:34:25 2024-12-20T07:34:25,763 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:25 2024-12-20T07:34:25,763 [DEBUG] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - W-9015-drawn_humanoid_pose_estimator_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:25 2024-12-20T07:34:25,764 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9019
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9015
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9016
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9003
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9008
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9013
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9004
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9010
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9009
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9002
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9017
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9012
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9018
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9007
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9001
2024-12-20 15:34:25 2024-12-20T07:34:25,768 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9006
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9011
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9014
2024-12-20 15:34:25 2024-12-20T07:34:25,769 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9005
2024-12-20 15:34:25 2024-12-20T07:34:25,820 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9009.
2024-12-20 15:34:25 2024-12-20T07:34:25,822 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9005.
2024-12-20 15:34:25 2024-12-20T07:34:25,824 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9002.
2024-12-20 15:34:25 2024-12-20T07:34:25,824 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9008.
2024-12-20 15:34:25 2024-12-20T07:34:25,823 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9011.
2024-12-20 15:34:25 2024-12-20T07:34:25,825 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9018.
2024-12-20 15:34:25 2024-12-20T07:34:25,825 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9010.
2024-12-20 15:34:25 2024-12-20T07:34:25,825 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9016.
2024-12-20 15:34:25 2024-12-20T07:34:25,825 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9004.
2024-12-20 15:34:25 2024-12-20T07:34:25,825 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9017.
2024-12-20 15:34:25 2024-12-20T07:34:25,820 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9001.
2024-12-20 15:34:25 2024-12-20T07:34:25,826 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9019.
2024-12-20 15:34:25 2024-12-20T07:34:25,826 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9000.
2024-12-20 15:34:25 2024-12-20T07:34:25,826 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9014.
2024-12-20 15:34:25 2024-12-20T07:34:25,821 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9015.
2024-12-20 15:34:25 2024-12-20T07:34:25,826 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9007.
2024-12-20 15:34:25 2024-12-20T07:34:25,826 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9003.
2024-12-20 15:34:25 2024-12-20T07:34:25,823 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9006.
2024-12-20 15:34:25 2024-12-20T07:34:25,826 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9012.
2024-12-20 15:34:25 2024-12-20T07:34:25,827 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9013.
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9018-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9016-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9017-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,830 [DEBUG] W-9019-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680065830
2024-12-20 15:34:25 2024-12-20T07:34:25,832 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065832
2024-12-20 15:34:25 2024-12-20T07:34:25,832 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065832
2024-12-20 15:34:25 2024-12-20T07:34:25,832 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065832
2024-12-20 15:34:25 2024-12-20T07:34:25,832 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065832
2024-12-20 15:34:25 2024-12-20T07:34:25,833 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065833
2024-12-20 15:34:25 2024-12-20T07:34:25,833 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065833
2024-12-20 15:34:25 2024-12-20T07:34:25,833 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065833
2024-12-20 15:34:25 2024-12-20T07:34:25,833 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065833
2024-12-20 15:34:25 2024-12-20T07:34:25,834 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065833
2024-12-20 15:34:25 2024-12-20T07:34:25,834 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065834
2024-12-20 15:34:25 2024-12-20T07:34:25,834 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065834
2024-12-20 15:34:25 2024-12-20T07:34:25,834 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065834
2024-12-20 15:34:25 2024-12-20T07:34:25,834 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065834
2024-12-20 15:34:25 2024-12-20T07:34:25,835 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065835
2024-12-20 15:34:25 2024-12-20T07:34:25,835 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065835
2024-12-20 15:34:25 2024-12-20T07:34:25,835 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065835
2024-12-20 15:34:25 2024-12-20T07:34:25,835 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065835
2024-12-20 15:34:25 2024-12-20T07:34:25,836 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065836
2024-12-20 15:34:25 2024-12-20T07:34:25,836 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065836
2024-12-20 15:34:25 2024-12-20T07:34:25,836 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680065836
2024-12-20 15:34:25 2024-12-20T07:34:25,864 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,870 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,870 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,870 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,870 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,870 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,870 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,870 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,886 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,910 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,911 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,911 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,916 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,929 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,927 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,923 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,916 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,934 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,935 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:25 2024-12-20T07:34:25,944 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_pose_estimator, batchSize: 1
2024-12-20 15:34:27 2024-12-20T07:34:27,325 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model drawn_humanoid_detector
2024-12-20 15:34:27 2024-12-20T07:34:27,325 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model drawn_humanoid_detector
2024-12-20 15:34:27 2024-12-20T07:34:27,325 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model drawn_humanoid_detector loaded.
2024-12-20 15:34:27 2024-12-20T07:34:27,325 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: drawn_humanoid_detector, count: 20
2024-12-20 15:34:27 2024-12-20T07:34:27,326 [DEBUG] W-9020-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9020, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,326 [DEBUG] W-9023-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9023, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,326 [DEBUG] W-9021-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9021, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,326 [DEBUG] W-9024-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9024, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,326 [DEBUG] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9022, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,340 [DEBUG] W-9025-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9025, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,344 [DEBUG] W-9026-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9026, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,344 [DEBUG] W-9027-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9027, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,345 [DEBUG] W-9028-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9028, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,347 [DEBUG] W-9029-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9029, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,347 [DEBUG] W-9030-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9030, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,347 [DEBUG] W-9031-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9031, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,355 [DEBUG] W-9032-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9032, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,357 [DEBUG] W-9033-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9033, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,364 [DEBUG] W-9034-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9034, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,440 [DEBUG] W-9035-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9035, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,491 [DEBUG] W-9036-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9036, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,551 [DEBUG] W-9037-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9037, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,556 [DEBUG] W-9038-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9038, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,596 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2024-12-20 15:34:27 2024-12-20T07:34:27,625 [DEBUG] W-9039-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/opt/conda/bin/python, /opt/conda/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9039, --metrics-config, /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml]
2024-12-20 15:34:27 2024-12-20T07:34:27,649 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
2024-12-20 15:34:27 2024-12-20T07:34:27,649 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2024-12-20 15:34:27 2024-12-20T07:34:27,731 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081
2024-12-20 15:34:27 2024-12-20T07:34:27,731 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.
2024-12-20 15:34:27 2024-12-20T07:34:27,744 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082
2024-12-20 15:34:28 2024-12-20T07:34:28,738 [INFO ] pool-2-thread-41 ACCESS_LOG - /172.17.0.1:57644 "GET /ping HTTP/1.1" 200 15
2024-12-20 15:34:28 2024-12-20T07:34:28,747 [INFO ] pool-2-thread-41 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:abe351cd462b,timestamp:1734680068
2024-12-20 15:34:28 Model server started.
2024-12-20 15:34:28 2024-12-20T07:34:28,949 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet.
2024-12-20 15:34:29 2024-12-20T07:34:29,087 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:100.0|#Level:Host|#hostname:abe351cd462b,timestamp:1734680069
2024-12-20 15:34:29 2024-12-20T07:34:29,098 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:900.5448188781738|#Level:Host|#hostname:abe351cd462b,timestamp:1734680069
2024-12-20 15:34:29 2024-12-20T07:34:29,098 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:55.09349060058594|#Level:Host|#hostname:abe351cd462b,timestamp:1734680069
2024-12-20 15:34:29 2024-12-20T07:34:29,098 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:5.8|#Level:Host|#hostname:abe351cd462b,timestamp:1734680069
2024-12-20 15:34:29 2024-12-20T07:34:29,098 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:7908.28515625|#Level:Host|#hostname:abe351cd462b,timestamp:1734680069
2024-12-20 15:34:29 2024-12-20T07:34:29,098 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:7735.41796875|#Level:Host|#hostname:abe351cd462b,timestamp:1734680069
2024-12-20 15:34:29 2024-12-20T07:34:29,099 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:50.4|#Level:Host|#hostname:abe351cd462b,timestamp:1734680069
2024-12-20 15:34:29 2024-12-20T07:34:29,639 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9025, pid=742
2024-12-20 15:34:29 2024-12-20T07:34:29,640 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9025
2024-12-20 15:34:29 2024-12-20T07:34:29,670 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:29 2024-12-20T07:34:29,684 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]742
2024-12-20 15:34:29 2024-12-20T07:34:29,685 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:29 2024-12-20T07:34:29,685 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:29 2024-12-20T07:34:29,686 [DEBUG] W-9025-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9025-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:29 2024-12-20T07:34:29,686 [INFO ] W-9025-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9025
2024-12-20 15:34:29 2024-12-20T07:34:29,705 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9025.
2024-12-20 15:34:29 2024-12-20T07:34:29,705 [DEBUG] W-9025-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680069705
2024-12-20 15:34:29 2024-12-20T07:34:29,706 [INFO ] W-9025-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680069706
2024-12-20 15:34:29 2024-12-20T07:34:29,753 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:29 2024-12-20T07:34:29,808 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9024, pid=733
2024-12-20 15:34:29 2024-12-20T07:34:29,809 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9024
2024-12-20 15:34:29 2024-12-20T07:34:29,822 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:29 2024-12-20T07:34:29,823 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]733
2024-12-20 15:34:29 2024-12-20T07:34:29,823 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:29 2024-12-20T07:34:29,823 [DEBUG] W-9024-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9024-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:29 2024-12-20T07:34:29,823 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:29 2024-12-20T07:34:29,823 [INFO ] W-9024-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9024
2024-12-20 15:34:29 2024-12-20T07:34:29,828 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9031, pid=769
2024-12-20 15:34:29 2024-12-20T07:34:29,829 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9031
2024-12-20 15:34:29 2024-12-20T07:34:29,845 [DEBUG] W-9024-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680069845
2024-12-20 15:34:29 2024-12-20T07:34:29,845 [INFO ] W-9024-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680069845
2024-12-20 15:34:29 2024-12-20T07:34:29,845 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9024.
2024-12-20 15:34:29 2024-12-20T07:34:29,858 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:29 2024-12-20T07:34:29,892 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]769
2024-12-20 15:34:29 2024-12-20T07:34:29,892 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9028, pid=766
2024-12-20 15:34:29 2024-12-20T07:34:29,892 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:29 2024-12-20T07:34:29,892 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:29 2024-12-20T07:34:29,892 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9028
2024-12-20 15:34:29 2024-12-20T07:34:29,892 [DEBUG] W-9031-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9031-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:29 2024-12-20T07:34:29,892 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:29 2024-12-20T07:34:29,892 [INFO ] W-9031-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9031
2024-12-20 15:34:29 2024-12-20T07:34:29,896 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9022, pid=734
2024-12-20 15:34:29 2024-12-20T07:34:29,897 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:29 2024-12-20T07:34:29,897 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9022
2024-12-20 15:34:29 2024-12-20T07:34:29,897 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]766
2024-12-20 15:34:29 2024-12-20T07:34:29,898 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:29 2024-12-20T07:34:29,898 [DEBUG] W-9028-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9028-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:29 2024-12-20T07:34:29,898 [INFO ] W-9028-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9028
2024-12-20 15:34:29 2024-12-20T07:34:29,898 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:29 2024-12-20T07:34:29,921 [DEBUG] W-9028-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680069921
2024-12-20 15:34:29 2024-12-20T07:34:29,921 [DEBUG] W-9031-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680069921
2024-12-20 15:34:29 2024-12-20T07:34:29,921 [INFO ] W-9031-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680069921
2024-12-20 15:34:29 2024-12-20T07:34:29,927 [INFO ] W-9028-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680069921
2024-12-20 15:34:29 2024-12-20T07:34:29,911 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:29 2024-12-20T07:34:29,933 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]734
2024-12-20 15:34:29 2024-12-20T07:34:29,933 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:29 2024-12-20T07:34:29,934 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:29 2024-12-20T07:34:29,934 [DEBUG] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9022-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:29 2024-12-20T07:34:29,921 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9028.
2024-12-20 15:34:29 2024-12-20T07:34:29,911 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9031.
2024-12-20 15:34:29 2024-12-20T07:34:29,934 [INFO ] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9022
2024-12-20 15:34:29 2024-12-20T07:34:29,934 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:29 2024-12-20T07:34:29,961 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9022.
2024-12-20 15:34:29 2024-12-20T07:34:29,961 [DEBUG] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680069961
2024-12-20 15:34:29 2024-12-20T07:34:29,961 [INFO ] W-9022-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680069961
2024-12-20 15:34:29 2024-12-20T07:34:29,974 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:29 2024-12-20T07:34:29,983 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,015 [INFO ] pool-2-thread-41 ACCESS_LOG - /172.17.0.1:57652 "GET /ping HTTP/1.1" 200 0
2024-12-20 15:34:30 2024-12-20T07:34:30,016 [INFO ] pool-2-thread-41 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:abe351cd462b,timestamp:1734680070
2024-12-20 15:34:30 2024-12-20T07:34:30,030 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9026, pid=758
2024-12-20 15:34:30 2024-12-20T07:34:30,030 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9026
2024-12-20 15:34:30 2024-12-20T07:34:30,043 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,044 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]758
2024-12-20 15:34:30 2024-12-20T07:34:30,044 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,044 [DEBUG] W-9026-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9026-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,044 [INFO ] W-9026-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9026
2024-12-20 15:34:30 2024-12-20T07:34:30,044 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,046 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9026.
2024-12-20 15:34:30 2024-12-20T07:34:30,047 [DEBUG] W-9026-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070047
2024-12-20 15:34:30 2024-12-20T07:34:30,047 [INFO ] W-9026-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070047
2024-12-20 15:34:30 2024-12-20T07:34:30,052 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9029, pid=767
2024-12-20 15:34:30 2024-12-20T07:34:30,055 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9029
2024-12-20 15:34:30 2024-12-20T07:34:30,059 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,068 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,068 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]767
2024-12-20 15:34:30 2024-12-20T07:34:30,069 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,070 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,071 [DEBUG] W-9029-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9029-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,072 [INFO ] W-9029-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9029
2024-12-20 15:34:30 2024-12-20T07:34:30,098 [DEBUG] W-9029-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070098
2024-12-20 15:34:30 2024-12-20T07:34:30,098 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9029.
2024-12-20 15:34:30 2024-12-20T07:34:30,098 [INFO ] W-9029-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070098
2024-12-20 15:34:30 2024-12-20T07:34:30,103 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9027, pid=759
2024-12-20 15:34:30 2024-12-20T07:34:30,104 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9027
2024-12-20 15:34:30 2024-12-20T07:34:30,106 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,118 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,119 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]759
2024-12-20 15:34:30 2024-12-20T07:34:30,119 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,119 [DEBUG] W-9027-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9027-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,120 [INFO ] W-9027-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9027
2024-12-20 15:34:30 2024-12-20T07:34:30,120 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,141 [DEBUG] W-9027-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070141
2024-12-20 15:34:30 2024-12-20T07:34:30,141 [INFO ] W-9027-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070141
2024-12-20 15:34:30 2024-12-20T07:34:30,141 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9027.
2024-12-20 15:34:30 2024-12-20T07:34:30,187 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,244 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9032, pid=771
2024-12-20 15:34:30 2024-12-20T07:34:30,244 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9032
2024-12-20 15:34:30 2024-12-20T07:34:30,277 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,277 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]771
2024-12-20 15:34:30 2024-12-20T07:34:30,277 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,277 [DEBUG] W-9032-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9032-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,277 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,277 [INFO ] W-9032-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9032
2024-12-20 15:34:30 2024-12-20T07:34:30,295 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9032.
2024-12-20 15:34:30 2024-12-20T07:34:30,295 [DEBUG] W-9032-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070295
2024-12-20 15:34:30 2024-12-20T07:34:30,295 [INFO ] W-9032-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070295
2024-12-20 15:34:30 2024-12-20T07:34:30,329 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,531 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9033, pid=773
2024-12-20 15:34:30 2024-12-20T07:34:30,532 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9033
2024-12-20 15:34:30 2024-12-20T07:34:30,548 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,549 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]773
2024-12-20 15:34:30 2024-12-20T07:34:30,549 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,549 [DEBUG] W-9033-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9033-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,550 [INFO ] W-9033-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9033
2024-12-20 15:34:30 2024-12-20T07:34:30,550 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,575 [DEBUG] W-9033-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070575
2024-12-20 15:34:30 2024-12-20T07:34:30,575 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9033.
2024-12-20 15:34:30 2024-12-20T07:34:30,576 [INFO ] W-9033-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070576
2024-12-20 15:34:30 2024-12-20T07:34:30,589 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,673 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9038, pid=828
2024-12-20 15:34:30 2024-12-20T07:34:30,675 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9038
2024-12-20 15:34:30 2024-12-20T07:34:30,690 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,691 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]828
2024-12-20 15:34:30 2024-12-20T07:34:30,691 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,691 [DEBUG] W-9038-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9038-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,691 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,692 [INFO ] W-9038-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9038
2024-12-20 15:34:30 2024-12-20T07:34:30,705 [DEBUG] W-9038-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070705
2024-12-20 15:34:30 2024-12-20T07:34:30,705 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9038.
2024-12-20 15:34:30 2024-12-20T07:34:30,706 [INFO ] W-9038-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070706
2024-12-20 15:34:30 2024-12-20T07:34:30,726 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9023, pid=731
2024-12-20 15:34:30 2024-12-20T07:34:30,727 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9023
2024-12-20 15:34:30 2024-12-20T07:34:30,739 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9036, pid=818
2024-12-20 15:34:30 2024-12-20T07:34:30,740 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9036
2024-12-20 15:34:30 2024-12-20T07:34:30,740 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,741 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9030, pid=768
2024-12-20 15:34:30 2024-12-20T07:34:30,741 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9030
2024-12-20 15:34:30 2024-12-20T07:34:30,753 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,759 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,759 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]818
2024-12-20 15:34:30 2024-12-20T07:34:30,760 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,760 [DEBUG] W-9036-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9036-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,760 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,760 [INFO ] W-9036-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9036
2024-12-20 15:34:30 2024-12-20T07:34:30,760 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]731
2024-12-20 15:34:30 2024-12-20T07:34:30,760 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,761 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,761 [DEBUG] W-9023-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9023-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,762 [INFO ] W-9023-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9023
2024-12-20 15:34:30 2024-12-20T07:34:30,773 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,794 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]768
2024-12-20 15:34:30 2024-12-20T07:34:30,794 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,794 [DEBUG] W-9030-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9030-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,795 [INFO ] W-9030-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9030
2024-12-20 15:34:30 2024-12-20T07:34:30,795 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,810 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9034, pid=786
2024-12-20 15:34:30 2024-12-20T07:34:30,810 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9034
2024-12-20 15:34:30 2024-12-20T07:34:30,822 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,835 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]786
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [DEBUG] W-9034-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9034-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [INFO ] W-9034-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9034
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9036.
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [DEBUG] W-9036-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070836
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [DEBUG] W-9023-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070836
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [INFO ] W-9036-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070836
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [INFO ] W-9023-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070836
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9023.
2024-12-20 15:34:30 2024-12-20T07:34:30,836 [DEBUG] W-9030-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070836
2024-12-20 15:34:30 2024-12-20T07:34:30,855 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9030.
2024-12-20 15:34:30 2024-12-20T07:34:30,854 [INFO ] W-9030-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070854
2024-12-20 15:34:30 2024-12-20T07:34:30,885 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,885 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,885 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,888 [DEBUG] W-9034-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070888
2024-12-20 15:34:30 2024-12-20T07:34:30,888 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9034.
2024-12-20 15:34:30 2024-12-20T07:34:30,888 [INFO ] W-9034-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070888
2024-12-20 15:34:30 2024-12-20T07:34:30,903 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:30 2024-12-20T07:34:30,941 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9039, pid=836
2024-12-20 15:34:30 2024-12-20T07:34:30,942 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9039
2024-12-20 15:34:30 2024-12-20T07:34:30,945 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9037, pid=826
2024-12-20 15:34:30 2024-12-20T07:34:30,955 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,956 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9037
2024-12-20 15:34:30 2024-12-20T07:34:30,970 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:30 2024-12-20T07:34:30,970 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]836
2024-12-20 15:34:30 2024-12-20T07:34:30,970 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,971 [DEBUG] W-9039-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9039-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,971 [INFO ] W-9039-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9039
2024-12-20 15:34:30 2024-12-20T07:34:30,971 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,984 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]826
2024-12-20 15:34:30 2024-12-20T07:34:30,984 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:30 2024-12-20T07:34:30,984 [DEBUG] W-9037-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9037-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:30 2024-12-20T07:34:30,984 [INFO ] W-9037-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9037
2024-12-20 15:34:30 2024-12-20T07:34:30,985 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:30 2024-12-20T07:34:30,985 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9039.
2024-12-20 15:34:30 2024-12-20T07:34:30,986 [DEBUG] W-9039-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680070985
2024-12-20 15:34:30 2024-12-20T07:34:30,986 [INFO ] W-9039-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680070986
2024-12-20 15:34:31 2024-12-20T07:34:31,004 [DEBUG] W-9037-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680071003
2024-12-20 15:34:31 2024-12-20T07:34:31,004 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9037.
2024-12-20 15:34:31 2024-12-20T07:34:31,004 [INFO ] W-9037-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680071004
2024-12-20 15:34:31 2024-12-20T07:34:31,014 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:31 2024-12-20T07:34:31,014 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:31 2024-12-20T07:34:31,027 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,028 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,031 [INFO ] W-9025-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,068 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9020, pid=730
2024-12-20 15:34:31 2024-12-20T07:34:31,069 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9020
2024-12-20 15:34:31 2024-12-20T07:34:31,082 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:31 2024-12-20T07:34:31,085 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]730
2024-12-20 15:34:31 2024-12-20T07:34:31,086 [DEBUG] W-9020-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9020-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:31 2024-12-20T07:34:31,086 [INFO ] W-9020-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9020
2024-12-20 15:34:31 2024-12-20T07:34:31,086 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:31 2024-12-20T07:34:31,088 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:31 2024-12-20T07:34:31,094 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9020.
2024-12-20 15:34:31 2024-12-20T07:34:31,094 [DEBUG] W-9020-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680071094
2024-12-20 15:34:31 2024-12-20T07:34:31,095 [INFO ] W-9020-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680071095
2024-12-20 15:34:31 2024-12-20T07:34:31,144 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:31 2024-12-20T07:34:31,199 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,200 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,201 [INFO ] W-9026-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,230 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9021, pid=732
2024-12-20 15:34:31 2024-12-20T07:34:31,231 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9021
2024-12-20 15:34:31 2024-12-20T07:34:31,243 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,244 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,244 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:31 2024-12-20T07:34:31,244 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]732
2024-12-20 15:34:31 2024-12-20T07:34:31,245 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:31 2024-12-20T07:34:31,245 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:31 2024-12-20T07:34:31,245 [DEBUG] W-9021-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9021-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:31 2024-12-20T07:34:31,245 [INFO ] W-9021-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9021
2024-12-20 15:34:31 2024-12-20T07:34:31,245 [INFO ] W-9032-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,254 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9021.
2024-12-20 15:34:31 2024-12-20T07:34:31,257 [DEBUG] W-9021-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680071257
2024-12-20 15:34:31 2024-12-20T07:34:31,257 [INFO ] W-9021-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680071257
2024-12-20 15:34:31 2024-12-20T07:34:31,268 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:31 2024-12-20T07:34:31,271 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,272 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,272 [INFO ] W-9028-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,289 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,290 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,290 [INFO ] W-9024-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,294 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9035, pid=808
2024-12-20 15:34:31 2024-12-20T07:34:31,294 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9035
2024-12-20 15:34:31 2024-12-20T07:34:31,297 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,298 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,298 [INFO ] W-9031-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,306 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Successfully loaded /opt/conda/lib/python3.8/site-packages/ts/configs/metrics.yaml.
2024-12-20 15:34:31 2024-12-20T07:34:31,307 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - [PID]808
2024-12-20 15:34:31 2024-12-20T07:34:31,307 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch worker started.
2024-12-20 15:34:31 2024-12-20T07:34:31,307 [DEBUG] W-9035-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - W-9035-drawn_humanoid_detector_1.0 State change null -> WORKER_STARTED
2024-12-20 15:34:31 2024-12-20T07:34:31,308 [INFO ] W-9035-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9035
2024-12-20 15:34:31 2024-12-20T07:34:31,309 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Python runtime: 3.8.13
2024-12-20 15:34:31 2024-12-20T07:34:31,310 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9035.
2024-12-20 15:34:31 2024-12-20T07:34:31,310 [DEBUG] W-9035-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1734680071310
2024-12-20 15:34:31 2024-12-20T07:34:31,310 [INFO ] W-9035-drawn_humanoid_detector_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1734680071310
2024-12-20 15:34:31 2024-12-20T07:34:31,323 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - model_name: drawn_humanoid_detector, batchSize: 1
2024-12-20 15:34:31 2024-12-20T07:34:31,324 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,328 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,328 [INFO ] W-9022-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,422 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,422 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,422 [INFO ] W-9029-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,483 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,483 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,483 [INFO ] W-9027-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,519 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,520 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,520 [INFO ] W-9033-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,694 [INFO ] pool-2-thread-41 ACCESS_LOG - /172.17.0.1:57664 "GET /ping HTTP/1.1" 200 0
2024-12-20 15:34:31 2024-12-20T07:34:31,694 [INFO ] pool-2-thread-41 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:abe351cd462b,timestamp:1734680071
2024-12-20 15:34:31 2024-12-20T07:34:31,845 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,855 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,856 [INFO ] W-9038-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,946 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:31 2024-12-20T07:34:31,964 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:31 2024-12-20T07:34:31,965 [INFO ] W-9034-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,023 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,024 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,024 [INFO ] W-9030-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,093 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,102 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,103 [INFO ] W-9036-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,157 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,166 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,167 [INFO ] W-9039-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,171 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,172 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,172 [INFO ] W-9023-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,355 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,355 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,356 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,356 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,356 [INFO ] W-9004-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,356 [INFO ] W-9000-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,359 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,360 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,360 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,360 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,360 [INFO ] W-9008-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,360 [INFO ] W-9015-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,367 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,367 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,368 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,368 [INFO ] W-9013-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,369 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,370 [INFO ] W-9016-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,372 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,372 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,372 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,373 [INFO ] W-9012-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,373 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,373 [INFO ] W-9005-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,374 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,408 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,408 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,408 [INFO ] W-9010-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,411 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,412 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,412 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,412 [INFO ] W-9009-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,413 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,413 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,413 [INFO ] W-9007-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,414 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,414 [INFO ] W-9002-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,417 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,417 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,417 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,417 [INFO ] W-9011-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,417 [INFO ] W-9014-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,420 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,420 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,420 [INFO ] W-9019-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,425 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,426 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,427 [INFO ] W-9003-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,436 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,436 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,437 [INFO ] W-9018-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,440 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,440 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,440 [INFO ] W-9001-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,446 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,446 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,447 [INFO ] W-9017-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,459 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,459 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,459 [INFO ] W-9035-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,486 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,486 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,487 [INFO ] W-9021-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,494 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,495 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,496 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,497 [INFO ] W-9006-drawn_humanoid_pose_estimator_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,497 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,497 [INFO ] W-9020-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,769 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-12-20 15:34:32 2024-12-20T07:34:32,794 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-12-20 15:34:32 2024-12-20T07:34:32,794 [INFO ] W-9037-drawn_humanoid_detector_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-12-20 15:34:33 2024-12-20T07:34:33,256 [INFO ] pool-2-thread-41 ACCESS_LOG - /172.17.0.1:57676 "GET /ping HTTP/1.1" 200 0
2024-12-20 15:34:33 2024-12-20T07:34:33,257 [INFO ] pool-2-thread-41 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:abe351cd462b,timestamp:1734680073
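Note: the 200 responses to /ping above only show that the TorchServe frontend is up; they do not confirm that the model workers finished loading. A quick way to check this (using the standard TorchServe management API on port 8081, not something shown in the original log) is:

```
# List the models registered with TorchServe
curl http://localhost:8081/models

# Show worker status for each model used by image_to_animation.py
curl http://localhost:8081/models/drawn_humanoid_detector
curl http://localhost:8081/models/drawn_humanoid_pose_estimator
```

If the workers never reach a healthy state, the container is likely running out of memory while loading the models, which matches the fix described in the comment below.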

@hjessmith
Contributor

Please check #105 and see if that solves your issue.

Alternatively, the front-page README has instructions for installing everything without Docker; you can also try that.

@sGxOxDs

sGxOxDs commented Dec 23, 2024

It works after increasing the RAM available to the Docker container to 24 GB (16 GB was not enough).
It is also recommended to restart the computer and WSL.
[screenshot attached]
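For reference, a minimal sketch of how to raise that memory limit when Docker Desktop uses the WSL 2 backend (assuming that is the setup here, since the commenter mentions WSL): edit %UserProfile%\.wslconfig on the Windows host.

```
# %UserProfile%\.wslconfig
[wsl2]
# Allow WSL 2 (and therefore Docker Desktop) to use up to 24 GB of RAM
memory=24GB
```

After saving the file, run `wsl --shutdown` and restart Docker Desktop so the new limit takes effect.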
