
Example for Django #977

Open
TheRealBecks opened this issue Sep 8, 2023 · 9 comments

@TheRealBecks

I saw that some people have already asked for a Django example, but nobody has provided a full one. I also want to use Loguru for my Django 4.2 project, because I think it's a very good and flexible logging mechanism for logging several packages across a huge project.

I also asked in the Django forum for help, but it looks like I won't get any further with a solution there.

This is my starting point:


I already found out that I can use Loguru as middleware in Django: https://docs.djangoproject.com/en/4.2/topics/http/middleware/

I found this useful tutorial to get an idea how that integration could work: https://betterstack.com/community/guides/logging/loguru/#creating-a-request-logging-middleware

But:

  • I want to use classes instead of functions.
  • The tutorial only uses info logging; all other levels (debug, warning, ...) are missing.
  • Some HTTP 404 (file not found) messages were not covered even when I configured all levels manually in that function. I'm not sure why that happened; maybe underlying packages like the Python web server throw that error code? Some 404s worked, some didn't.

And:

  • I would like to use that middleware as a package that can be added to Django, replaced by another logging package, or left out entirely. That means I want to use the built-in logging functions in my apps (models, ...), so these apps don't depend on this specific package and its functions; but when the middleware is added, logging will be handled by that package.
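The decoupling described above boils down to app code that logs only through the standard library; whichever handler is installed globally (Loguru's InterceptHandler, a plain StreamHandler, ...) then picks the records up. A minimal sketch, with hypothetical module and function names:

```python
import logging

# App code depends only on the standard library, never on Loguru directly.
log = logging.getLogger("myapp.models")  # hypothetical app module name


def create_broker(name: str) -> str:
    """Business logic logs via stdlib; the logging backend stays swappable."""
    log.info("Creating broker %s", name)
    return name
```

Swapping the logging package then only changes which handler sits on the root logger; the app code never needs to be touched.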

Next steps afterwards:

  • I want to provide a Loguru model in Django, so I can add the logging messages into the database and use them in my frontend
  • I already use Celery and want Celery to depend on the Django logging settings and environment

Do you think what I'm writing here makes sense? 😃 If so, is it possible to get help from your side? Then I would like to provide a full example for your documentation as a PR, if you would like that :)

@Delgan (Owner) commented Sep 8, 2023

Hi @TheRealBecks.

I'm not familiar with logging in Django, but I hope we can come up with a recipe that works.

Regarding your concerns:

I want to use classes instead of functions

Is there any difficulty in that? What's stopping you from implementing middleware as a class?

The tutorial only uses the info logging, but all other levels (debug, warning, ...) are missing

Kind of the same question. The guide uses logger.info(), but I'm not sure I understand what prevents anyone from using logger.debug() when needed?

Some HTTP 404 (file not found) messages were not covered even when I configured all levels manually in that function. I'm not sure why that happened; maybe underlying packages like the Python web server throw that error code? Some 404s worked, some didn't.

You mention the use of middleware, but given that Django uses the standard logging library, shouldn't we instead transfer messages to Loguru using an InterceptHandler? My understanding of middleware is that it lets you create additional logs in response to some events, but it will never let you capture the logs issued by default.

I would like to use that middleware as a package that can be added to Django, replaced by another logging package, or left out entirely. That means I want to use the built-in logging functions in my apps (models, ...), so these apps don't depend on this specific package and its functions; but when the middleware is added, logging will be handled by that package.

As opposed to the InterceptHandler mentioned above, I suppose you need to add a PropagateHandler in charge of transferring Loguru logs from your package to the standard logging of the app.

I want to provide a Loguru model in Django, so I can add the logging messages into the database and use them in my frontend

I guess you need a custom sink then?

I already use Celery and want Celery to depend on the Django logging settings and environment

Is this the default behavior (Celery automatically inherits Django logging settings)?
I'm not sure how to help here.

@TheRealBecks (Author)

@Delgan I will be back in a few weeks. Afterwards I will check and test the Django default logging configuration, maybe we can start from that point on.

@TheRealBecks (Author)

@Delgan After hours and hours of work I finally made some progress thanks to your InterceptHandler. I created a Django app with an app.py file (not yet finished):

"""Add Loguru Logging app to Django."""
import inspect
import logging

from django.apps import AppConfig
from loguru import logger


class InterceptHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        # Get corresponding Loguru level if it exists.
        level: str | int
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where originated the logged message.
        frame, depth = inspect.currentframe(), 0
        while frame and (depth == 0 or frame.f_code.co_filename == logging.__file__):
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())


class DjangoLoggingLoguruConfig(AppConfig):
    name = "django_logging_loguru"

    def ready(self):
        logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)

Startup code for an app can be placed in the ready() function. Normally that's not needed, but this way the InterceptHandler gets installed at startup.
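For completeness, the ready() hook only runs if the app is registered in settings.py; assuming the app name "django_logging_loguru" from the AppConfig above, that registration would look roughly like this (the other entries are just Django's usual defaults):

```python
# settings.py
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "django_logging_loguru",  # makes DjangoLoggingLoguruConfig.ready() run at startup
]
```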

But there are still some wrongly formatted logging messages:

[19/Oct/2023 19:00:38] "GET /broker/list/ HTTP/1.1" 200 36077
Not Found: /broker/list/img/flags/es.png
2023-10-19 19:00:38.694 | WARNING  | django.utils.log:log_response:241 - Not Found: /broker/list/img/flags/es.png

It's correct that there are Not Found messages, because I placed some wrong image links into my HTML code.

As we can see the log message after the Not Found message is already using the default Loguru settings - GREAT! 😄

But there are also these ones that are 'Django formatted':

[19/Oct/2023 19:00:38] "GET /broker/list/ HTTP/1.1" 200 36077

I also checked the [Django default logging settings](https://docs.djangoproject.com/en/4.2/ref/logging/#default-logging-configuration) and found the setting for django.server. I am able to change these settings in a variable called LOGGING, which can be configured as shown in the example. Nevertheless, I wasn't able to get the messages to Loguru.

Do you know why the messages for django.server are still handled by the Python logging module?

@Delgan (Owner) commented Oct 20, 2023

Do you know why the messages for django.server are still handled by the Python logging module?

When you call logging.basicConfig(), you effectively install the InterceptHandler on the root logger of the logging hierarchy.

According to the docs you linked:

All loggers except django.server propagate logging to their parents, up to the root django logger.

That means messages logged by the django.server logger, such as "GET /broker/list/ HTTP/1.1" in your case, are not propagated to the root logger. Consequently, they don't pass through the InterceptHandler.

I do not know why Django behaves this way. You might need to remove or replace the handler they're installing on the django.server logger.
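The effect can be reproduced without Django at all: a logger with propagate=False never reaches the root handler, and clearing its handlers while re-enabling propagation routes it through the root, where an InterceptHandler would sit. A sketch simulating django.server:

```python
import logging

root_seen = []


class RootCapture(logging.Handler):
    """Stands in for the InterceptHandler installed on the root logger."""
    def emit(self, record: logging.LogRecord) -> None:
        root_seen.append(record.getMessage())


logging.getLogger().addHandler(RootCapture())
logging.getLogger().setLevel(logging.DEBUG)

# Simulate django.server: Django configures it with propagate=False.
server = logging.getLogger("django.server")
server.propagate = False
server.info("GET / 200")            # never reaches the root handler

# The fix: drop the dedicated handler(s) and let records propagate again.
server.handlers.clear()
server.propagate = True
server.info("GET / 200 (again)")    # now handled at the root
```

In a real Django project the equivalent change would be made in the LOGGING dict (redefining the django.server logger without its dedicated handler) rather than imperatively as above.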

@TheRealBecks (Author)

Thanks, so I'm already on the right track! 😄

I also tried to reset the LOGGING variable:

  1. Delete the default settings:
{
    "version": 1,
    "disable_existing_loggers": True,
}

-> Nothing is logged anymore. Maybe that's the way to go, and I need to configure Loguru from there on?

  2. Take the default settings and edit them: I was able to change the format for django.server, but was not able to delete all the django.server settings (formatter, handler, etc.).

Do you think solution 1 could be the way to go? Starting from scratch sounds good.

@Delgan (Owner) commented Oct 20, 2023

From what I read in the docs, the disable_existing_loggers option does not seem very appropriate:

If the disable_existing_loggers key in the LOGGING dictConfig is set to True (which is the dictConfig default if the key is missing) then all loggers from the default configuration will be disabled. Disabled loggers are not the same as removed; the logger will still exist, but will silently discard anything logged to it, not even propagating entries to a parent logger. Thus you should be very careful using 'disable_existing_loggers': True; it’s probably not what you want. Instead, you can set disable_existing_loggers to False and redefine some or all of the default loggers; or you can set LOGGING_CONFIG to None and handle logging config yourself.

It sounds like it prevents the messages from being propagated to the root logger, which implies the InterceptHandler won't receive them.
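The quoted behaviour is easy to demonstrate with plain stdlib logging: any logger that already exists when dictConfig runs with disable_existing_loggers: True silently discards everything afterwards, while loggers created later still work. A small sketch:

```python
import logging
import logging.config

existing = logging.getLogger("pre.existing")  # exists before dictConfig runs

records = []


class ListHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        records.append(record.getMessage())


logging.config.dictConfig({"version": 1, "disable_existing_loggers": True})
logging.getLogger().addHandler(ListHandler())
logging.getLogger().setLevel(logging.DEBUG)

existing.info("silently discarded")          # disabled: not even propagated
logging.getLogger("created.later").info("delivered")
```

Since Django's own loggers exist before the user's config is applied, disabling them this way would also starve the InterceptHandler.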

@adilnaimi

I came across this thread while searching for a Loguru example in Django to compare with Structlog. I'm sharing my Django and Structlog setup, hopefully to inspire a Loguru + Django setup:

logging.py:

import logging
from typing import Any

import ddtrace
import structlog


def tracer_injection(_: logging.Logger, __: str, event_dict: structlog.types.EventDict) -> structlog.types.EventDict:
    span = ddtrace.tracer.current_span()
    trace_id, span_id = (span.trace_id, span.span_id) if span else (None, None)
    event_dict['dd.trace_id'] = str(trace_id or 0)
    event_dict['dd.span_id'] = str(span_id or 0)
    event_dict['dd.env'] = ddtrace.config.env or ''
    event_dict['dd.service'] = ddtrace.config.service or ''
    event_dict['dd.version'] = ddtrace.config.version or ''
    return event_dict


# To enable standard library logs to be formatted via structlog, we add this
# `foreign_pre_chain` to both formatters.
foreign_pre_chain: list[structlog.types.Processor] = [
    structlog.contextvars.merge_contextvars,
    structlog.processors.TimeStamper(fmt='iso'),
    structlog.stdlib.add_logger_name,
    structlog.stdlib.add_log_level,
    structlog.stdlib.PositionalArgumentsFormatter(),
    structlog.processors.StackInfoRenderer(),
    structlog.processors.format_exc_info,
    structlog.processors.UnicodeDecoder(),
]

structlog.configure(
    processors=[
        structlog.stdlib.filter_by_level,
        *foreign_pre_chain,
        structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
    ],
    logger_factory=structlog.stdlib.LoggerFactory(),
    wrapper_class=structlog.stdlib.BoundLogger,
    cache_logger_on_first_use=True,
)


# Configure all logs to be handled by structlog `ProcessorFormatter` and
# rendered either as pretty colored console lines or as single JSON lines with structured tracebacks
def get_logger_config(json_logs: bool = False, log_level: str = 'INFO') -> dict[str, Any]:
    """Configure python stdlib's logging"""
    base_config: dict[str, Any] = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'json': {
                '()': structlog.stdlib.ProcessorFormatter,
                'processor': structlog.processors.JSONRenderer(sort_keys=True),
                'foreign_pre_chain': [*foreign_pre_chain, structlog.processors.dict_tracebacks, tracer_injection],
            },
            'console': {
                '()': structlog.stdlib.ProcessorFormatter,
                'processor': structlog.dev.ConsoleRenderer(),
                'foreign_pre_chain': foreign_pre_chain,
            },
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'formatter': 'json' if json_logs else 'console',
            },
        },
        'loggers': {},
    }

    logger_config_map = {
        '': {'level': log_level, 'propagate': False},
        'src': {'level': log_level, 'propagate': True},
        'django': {'level': 'WARNING', 'propagate': True},
        'django.request': {'level': 'ERROR', 'propagate': False},
        'django.security': {'level': 'WARNING', 'propagate': False},
        'django_oso.oso': {'level': 'ERROR', 'propagate': False},
        'requests': {'level': 'WARNING', 'propagate': False},
        'celery': {'level': 'WARNING', 'propagate': False},
        'urllib3': {'level': 'WARNING', 'propagate': False},
        'asyncio': {'level': 'WARNING', 'propagate': False},
        'redis': {'level': 'WARNING', 'propagate': False},
        'uvicorn': {'level': 'WARNING', 'propagate': False},
        'gunicorn': {'level': 'INFO', 'propagate': False},
        'uvicorn.access': {'level': 'INFO', 'propagate': False},
        'gunicorn.access': {'level': 'INFO', 'propagate': False},
        'uvicorn.error': {'level': 'ERROR', 'propagate': False},
        'gunicorn.error': {'level': 'ERROR', 'propagate': False},
        'ddtrace': {'level': 'ERROR', 'propagate': False},
        'botocore': {'level': 'ERROR', 'propagate': False},
        'boto3': {'level': 'ERROR', 'propagate': False},
        'opensearch': {'level': 'ERROR', 'propagate': False},
        'numexpr': {'level': 'ERROR', 'propagate': False},
        # Add other loggers as needed
    }

    for logger_name, config in logger_config_map.items():
        base_config['loggers'][logger_name] = {
            'handlers': ['console'],
            'level': config['level'],
            'propagate': config.get('propagate', False),
        }
    return base_config

settings.py:

LOGGING = get_logger_config(json_logs=env.USE_JSON_LOGGING, log_level=env.LOG_LEVEL)

gunicorn_config.py:

logconfig_dict = get_logger_config()

@TheRealBecks (Author)

@adilnaimi Thanks for your input! Unfortunately I'm still very busy, but I'm still interested in getting Django + Loguru running. I've already made some progress, and I would like to get it running as a package that can be installed via PyPI. But there's still a lot of work to be done, so if you or someone else is interested in joining forces, feel free to contact me! 😃

@sumpfgottheit commented Jan 13, 2025

I just ran into this issue and found a reasonable solution for me:

In settings.py, set:

from proj.logging import configure_logging

LOGGING_CONFIG=None

configure_logging()

I import logging.py after setting LOGGING_CONFIG to None; I don't know whether this is necessary.

logging.py:

import inspect
import logging
import sys

from loguru import logger


class InterceptHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        # Get corresponding Loguru level if it exists.
        try:
            level: str | int = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where originated the logged message.
        frame, depth = inspect.currentframe(), 0
        while frame:
            filename = frame.f_code.co_filename
            is_logging = filename == logging.__file__
            is_frozen = "importlib" in filename and "_bootstrap" in filename
            if depth > 0 and not (is_logging or is_frozen):
                break
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())

def log_filter(record):
    if record['name'].startswith("django."):
        return record['level'].no >= logger.level("INFO").no
    return record['level'].no >= logger.level("DEBUG").no

def configure_logging():
    # Remove default logger configuration
    logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)
    logger.remove()
    logger.add(sys.stderr, filter=log_filter)

The Django loggers are very verbose at the DEBUG log level, so I use the log_filter function to raise their effective level to INFO even when I use DEBUG generally.
