I’m facing a peculiar issue with uwsgi==2.0.28-1 on newly deployed servers: workers hang when they are supposed to respawn, and each stuck worker spins at 100% CPU.
After attaching GDB to one of the stuck workers, I traced the hang to the following:
(gdb) bt
#0 0x00007f8642c3a069 in SpinLock::lock (this=0x55a9104d9724) at src/spinlock.h:27
#1 std::lock_guard<SpinLock>::lock_guard (__m=..., this=<synthetic pointer>) at /opt/rh/devtoolset-10/root/usr/include/c++/10/bits/std_mutex.h:159
#2 HeapProfiler::HandleFree (ptr=0x7f863571fd30, this=0x55a9104d9720) at src/heap.h:116
#3 (anonymous namespace)::WrappedFree (ctx=0x7f8642d120b0 <(anonymous namespace)::g_base_allocators+80>, ptr=0x7f863571fd30) at src/malloc_patch.cc:95
#4 0x00007f864bd2781b in PyUnicode_FromFormatV () from target:/usr/lib/libpython3.9.so.1.0
#5 0x00007f864bd2ba90 in PyErr_Format () from target:/usr/lib/libpython3.9.so.1.0
#6 0x00007f864bd208db in _PyObject_GenericGetAttrWithDict () from target:/usr/lib/libpython3.9.so.1.0
#7 0x00007f864bd350e2 in PyImport_ImportModuleLevelObject () from target:/usr/lib/libpython3.9.so.1.0
#8 0x00007f864bddfa99 in PyImport_ImportModuleLevel () from target:/usr/lib/libpython3.9.so.1.0
#9 0x00007f864bd580fe in PyImport_Import () from target:/usr/lib/libpython3.9.so.1.0
#10 0x00007f864bddfa3d in PyImport_ImportModule () from target:/usr/lib/libpython3.9.so.1.0
#11 0x00007f864bfc0ada in get_uwsgi_pydict () from target:/usr/lib/uwsgi/python_plugin.so
#12 0x00007f864bfbd460 in uwsgi_python_atexit () from target:/usr/lib/uwsgi/python_plugin.so
#13 0x000055a90e2c3811 in uwsgi_plugins_atexit ()
#14 0x00007f864e3d8697 in __run_exit_handlers () from target:/usr/lib/libc.so.6
#15 0x00007f864e3d883e in exit () from target:/usr/lib/libc.so.6
#16 0x000055a90e276650 in uwsgi_exit ()
#17 0x000055a90e2c3867 in end_me ()
#18 0x000055a90e2c73aa in uwsgi_ignition ()
#19 0x000055a90e2cbdca in uwsgi_worker_run ()
#20 0x000055a90e2cc360 in uwsgi_run ()
There are no threads other than the main thread in the stuck process.
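My reading of the trace: the free() triggered inside PyUnicode_FromFormatV is intercepted by a heap-profiler hook (WrappedFree -> HeapProfiler::HandleFree), which then spins forever on a SpinLock even though the process is single-threaded, so the lock must have been left held (for example, acquired by another thread right before fork(), or re-entered on the same thread). Below is a minimal C++ sketch of the pattern I suspect; the SpinLock, HeapProfiler, and HandleFree names are illustrative stand-ins, not the actual profiler source:

#include <atomic>
#include <mutex>

// Illustrative only: a non-recursive spin lock guarding a free() hook.
// If the lock is already held when the last remaining thread calls free(),
// lock() busy-waits forever, which matches frame #0 and the 100% CPU usage.
class SpinLock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock()   { while (flag_.test_and_set(std::memory_order_acquire)) {} }
    void unlock() { flag_.clear(std::memory_order_release); }
};

class HeapProfiler {
    SpinLock lock_;
public:
    void HandleFree(void* ptr) {
        std::lock_guard<SpinLock> guard(lock_);  // frames #1/#2 in the trace
        // ... bookkeeping for the freed pointer would go here ...
        (void)ptr;
    }
};

If that lock was taken by a thread that no longer exists after fork(), or free() is re-entered from inside the locked section, nothing can ever release it and the worker never finishes exiting.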
For context, here’s my uwsgi configuration:
[uwsgi]
master = true
single-interpreter = true
disable-logging = false
log-4xx = true
log-5xx = true
master-fifo = /var/tmp/service.fifo
heartbeat = 10
uid = tech
gid = tech
thunder-lock = true
enable-threads = true
vacuum = true
workers = 160
plugin = python
socket = /opt/service/service.sock
stats = 127.0.0.1:4402
chmod-socket = 666
chdir = /opt/service
venv = /opt/service/venv
module = application.wsgi:application
logto = /var/log/service/service_uwsgi.log
pidfile2 = /var/tmp/service.pid
procname = Service worker
procname-master = Service master
need-app = true
hook-pre-app = exec:sudo chown tech:tech /var/log/service/*.log
listen = 2048
harakiri = 120
buffer-size = 16384
max-requests = 1000
max-worker-lifetime = 1200
max-worker-lifetime-delta = 60
reload-on-rss = 1024
Interestingly, this problem does not occur on another server with a similar configuration.
To get things working, I had to use the skip-atexit option, which results in the worker process terminating forcefully. I’m unsure if this is the best approach.
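For reference, the workaround is a single extra line in the config above (skip-atexit is a stock uwsgi option, but I’m not convinced it is the right long-term fix):

; added to the [uwsgi] section above; skips the plugin atexit hooks, so the
; worker exits without running uwsgi_python_atexit (frame #12 in the trace)
skip-atexit = true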
Has anyone faced a similar issue? How did you address it? Any insights would be greatly appreciated!