Teach zerolog how to return errors, with and without stack traces #11
Conversation
When `zerolog` logs an error message with an `Error()` finalizer, it returns an `error`. If the `StackTrace()` event has been called, the `error` will be a wrapped error using `github.com/pkg/errors` and will include the stack trace.

There are a few subtle differences introduced with this change:

1. It is possible to defeat this behavior using the `Fields()` method.
2. Only one error can be handled by an Event at a time.

Time was spent considering introducing an accumulator for errors that would be evaluated at finalizer-time, but this seemed unnecessary given that errors should be wrapped and multiple calls to `Err()` seemed unlikely in practice.

The behavior of the `Msg()` finalizer was left intact in order to avoid any performance penalty in non-error-handling conditions. The deferral of error handling can best be observed by looking at the performance hit of an `Err()` Event with and without `StackTrace()`:

```
BenchmarkErrorContextFields-8                    20000000    84.8 ns/op
BenchmarkErrorContextFields_WithStackTrace-8       500000    2552 ns/op
BenchmarkLogContextFields-8                      30000000    47.6 ns/op
PASS
ok      github.com/rs/zerolog   4.642s
```

Fixes: rs#9
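(As a rough sketch of the proposed usage: the `StackTrace()` event and the error-returning `Error()` finalizer are the additions this PR describes, not existing zerolog API; `doWork` and `open` are placeholders, not code from the PR.)

```go
package main

import (
	"github.com/pkg/errors"
	"github.com/rs/zerolog/log"
)

// open stands in for any call that can fail.
func open(path string) error {
	return errors.New("cannot open " + path)
}

// doWork logs the failure and, in the same expression, returns the error
// wrapped via github.com/pkg/errors so callers also receive the stack trace.
// StackTrace() and the error-returning Error() finalizer are the proposed
// additions; they do not exist in released zerolog.
func doWork() error {
	if err := open("config.json"); err != nil {
		return log.Error().Err(err).StackTrace().Error("failed to open config")
	}
	return nil
}

func main() {
	if err := doWork(); err != nil {
		log.Info().Msgf("doWork: %v", err)
	}
}
```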
@rs If this looks okay, I'll clean up the test for 1.7. The test is brittle for obvious reasons.
Thank you for this contribution @sean-, the idea is very interesting. I'm not comfortable with adding a dependency on `github.com/pkg/errors`, though. I quickly looked at the exposed API, and we might be able to do that using the `Format` method. This would prevent us from expanding the stack trace as a JSON structure, but I am not sure it is necessarily a good thing to do that anyway; I am under the impression that storing the whole stack trace as a string would be more relevant. At the end of the day, stack traces are consumed by humans, and humans are better at reading plain stack traces rather than JSON structs. I don't see a use-case where the JSON exploded version would be useful. Perhaps you have some in mind? Would you agree to try removing the dependency by checking for that method instead?
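(A rough sketch of that dependency-free route: rely on the error's `fmt.Formatter` support, which `github.com/pkg/errors` values provide, and store the `%+v` rendering as a plain string field. The `logWithStack` helper and the `stack` field name are illustrative only, not anything proposed in the thread.)

```go
package main

import (
	"fmt"

	"github.com/rs/zerolog/log"
)

// logWithStack logs err; if err knows how to format itself (as pkg/errors
// values do), the %+v rendering -- message plus stack trace -- is attached
// as a plain string field. No import of github.com/pkg/errors is required.
func logWithStack(err error, msg string) {
	if _, ok := err.(fmt.Formatter); ok {
		log.Error().Str("stack", fmt.Sprintf("%+v", err)).Msg(msg)
		return
	}
	log.Error().Err(err).Msg(msg)
}
```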
@rs The dependency on `github.com/pkg/errors` can be removed, sure. Regarding humans consuming stack traces, I very much disagree with that. Humans reading stack traces is okay at the small scale, but that's exactly it: it doesn't scale well. Structured logging exists so tools can chew on the output. The same is true of exceptions and error management (e.g. ❤️ backtrace.io). I'm happy to make the change but probably won't use this as a result. I actually want to go the other direction and more tightly couple @davecheney's `github.com/pkg/errors` with `zerolog`. I don't know if it's a fair characterization or not, but my sense is that the two should be able to inter-operate tightly even though their goals in life are slightly different. My goal is to have tight coupling between the two systems because the structured logging of errors is important to us. I see unifying both as a good thing in that it reduces developer repetition and unifies logging and error management into the same post-processing system. This is why I left the dependency on `github.com/pkg/errors` in place. If I can get away with not forking/repackaging, I want to do that where possible. :)
I would argue that any automation on the stack trace could parse the stack trace format, as I guess the outputted JSON structure is no more standard than a string-formatted stack trace. Structured logging is mainly for aggregation; I doubt you could efficiently index stack traces anyway.

Instead of coupling the two libraries, you could have a helper taking a `*zerolog.Event` and an `error`. You would have something like:

```go
if ev := log.Error(); ev.Enabled() {
	errutil.StackTrace(ev, err).Msg("some message")
}
```

Does it make sense?
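(For concreteness, such a helper might look roughly like this; the `errutil` package, the `stackTracer` check against `github.com/pkg/errors`, and the `stack` field name are assumptions, not code from this thread. Note that the dependency on `github.com/pkg/errors` lives in the helper package, not in zerolog itself.)

```go
package errutil

import (
	"fmt"

	"github.com/pkg/errors"
	"github.com/rs/zerolog"
)

// stackTracer is the interface satisfied by errors created or wrapped with
// github.com/pkg/errors.
type stackTracer interface {
	StackTrace() errors.StackTrace
}

// StackTrace attaches err to the event and, when err carries a pkg/errors
// stack trace, adds it as a plain string field. It returns the event so the
// caller can keep chaining (e.g. .Msg("some message")).
func StackTrace(ev *zerolog.Event, err error) *zerolog.Event {
	ev = ev.Err(err)
	if st, ok := err.(stackTracer); ok {
		ev = ev.Str("stack", fmt.Sprintf("%+v", st.StackTrace()))
	}
	return ev
}
```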
I hear what you're saying, yes. To be clear, there isn't really an open standard regarding logging stack traces. Backtrace was just an example highlighting that stack traces aren't exclusively for humans to consume. Emitting something that's human-centric, then re-parsing it into something a machine can process is counter-intuitive given that the point of structured logging is to optimize log ingest for computers (not necessarily aggregation, though that's a very common thing to do with logs). The reason I initially coupled the two was to defer the cost of acquiring the stack trace until the last possible minute (e.g. in the finalizer).

```go
// Is this what you're suggesting?
log.Info().StackTrace(err).Msg("emit a log message with stack trace details, either from err or here at the call site")

// I was hoping to end up with something like:
var annotatedError error = log.Warn().Str("foo", "bar").StackTrace(err).Error("emit a log message with stack trace details, either from err or here at the call site")

// or something like `EStr()` which passes along the annotated value to the error object returned from the Error() finalizer
var annotatedError error = log.Warn().EStr("foo", "bar").StackTrace(err).Error("emit a log message with stack trace details, either from err or here at the call site")
```
I'm not sure I understand. What would you do with the `error` returned by `Error()`?
@rs returning an `error` from the `Error()` finalizer lets me annotate the error and hand it back up the stack in a single expression:

```go
func foo() error {
	var annotatedError error = log.Warn().EStr("foo", "bar").StackTrace(err).Error("emit a log message with stack trace details, either from err or here at the call site")
	return annotatedError
}
```
Check the last few commits. With object/array support you might be able to implement this as an extension pretty easily.
@rs This looks really good. I left a few small nits in a couple of those commits. I need to think about the accumulator and how this would work at finalizer-time, and whether or not the log interface would be able to record the value, capture the desired values, and return them up the stack as an `error`.
Any updates on this pull request? This is the only missing piece for us to move from Zap to ZeroLog. I will be happy to help out with this PR.
I would accept a PR that does add a dependency on `github.com/pkg/errors`.
@rs so I looked through #24 and #25 but don't see how I'd immediately use those. However, could we extend the interface so we could unify logging and error handling? Here are the samples I was chewing on:

Options 3 and 4 are close, and 5 is a convenience wrapper over 4. I like 4 (and 5) because it's more expressive. Pulling in a new package like option 1 is doable, but feels very clunky, especially given how frequently this would get used. Do you have any thoughts on the following? I think that'd be close to the interface (dry-coding in this issue). Thoughts?
Another solution would be to expose a global error-marshaling hook.
Ha! I actually really like that idea. It's clean and puts the control in the developer's hands without changing any of the interface semantics. What about the returned `error`, though?
@sean- I'm not sure I understand what you mean by "the returned `error`".
Many (most?) functions return an `error`. What I'd like to be able to write is something like:

```go
func foo() error {
	if err := doSomething(); err != nil {
		return log.Debug().Err(err).Stack().Error("this emits a log entry with relevant stack details or other contextual information in foo()")
	}
}
```

I've annotated my error w/ relevant details and already level-guarded the log level. But as I increase the debugging verbosity, I get increasing amounts of detail. The rest of my stack, however, continues to get an un-annotated error unless I do the work twice:

```go
func foo() error {
	if err := doSomething(); err != nil {
		log.Debug().Err(err).Stack().Error("this emits a log entry with relevant stack details or other contextual information in foo()")
		return errors.Wrap(err, "this emits a log entry with relevant stack details or other contextual information in foo()")
	}
}
```

which is sub-optimal, because idioms with the lowest friction tend to get used the most in daily practice. We have a bunch of context already in a custom error type, but sometimes we want to extend that error object: https://github.com/joyent/triton-go/blob/master/errors/errors.go#L19-L29

I'm faltering slightly on the exact interface, though.
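(One way to cut that duplication today, without any zerolog API change, is a tiny helper that logs and wraps in one call; `logAndWrap` and the `logutil` package are hypothetical names, not something proposed in this thread.)

```go
package logutil

import (
	"github.com/pkg/errors"
	"github.com/rs/zerolog"
)

// logAndWrap emits the event with err attached and returns err wrapped with
// the same message, so the message is only spelled once at the call site.
// Usage: return logAndWrap(log.Debug(), err, "doSomething failed in foo()")
func logAndWrap(ev *zerolog.Event, err error, msg string) error {
	ev.Err(err).Msg(msg)
	return errors.Wrap(err, msg)
}
```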
@sean- I totally support the overall point. However, this snippet looks strange to me.

What is most strange is a log level attached to an error. I assume the point where the error is logged decides the log message's level.
@soulne4ny Yeah, that example was contrived. After I submitted this PR I hashed out a better approach and where I'd like to ultimately go. What I realized was that the error itself is the right place to carry this context. To do that, additional logging information should accumulate in the returned `error`:

```go
import "errlog"

func emitError() error {
	return errlog.InfoMsg("info foo").DebugMsg("error number #123").DebugStr("key", "val").TraceStack()
}

func catchError() {
	if err := emitError(); err != nil {
		log.Err(err).Msg("emitError returned an error")
	}
}
```

Now when `catchError()` logs the error, the accumulated messages, fields, and stack information come along with it and can be emitted at the levels they were recorded at.
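(A minimal sketch of what such an accumulating error type could look like; the `errlog` package and every name in it are hypothetical, reconstructed only from the call sites above.)

```go
// Package errlog is a hypothetical sketch of the accumulator idea: an error
// that records log levels, messages, and fields as it travels up the stack,
// so the final log call can emit everything at the appropriate levels.
package errlog

import "runtime"

// Field is one level-tagged key/value pair recorded against the error.
type Field struct {
	Level, Key, Value string
}

// Error accumulates context until it is finally logged.
type Error struct {
	msg    string
	fields []Field
}

func (e *Error) Error() string { return e.msg }

// Fields exposes the accumulated context so a logger can emit it.
func (e *Error) Fields() []Field { return e.fields }

// InfoMsg starts a new accumulating error with an info-level message.
func InfoMsg(msg string) *Error { return &Error{msg: msg} }

// DebugMsg appends a debug-level message.
func (e *Error) DebugMsg(msg string) *Error {
	e.fields = append(e.fields, Field{Level: "debug", Key: "message", Value: msg})
	return e
}

// DebugStr appends a debug-level key/value pair.
func (e *Error) DebugStr(key, val string) *Error {
	e.fields = append(e.fields, Field{Level: "debug", Key: key, Value: val})
	return e
}

// TraceStack records the current goroutine's stack at trace level.
func (e *Error) TraceStack() *Error {
	buf := make([]byte, 4096)
	n := runtime.Stack(buf, false)
	e.fields = append(e.fields, Field{Level: "trace", Key: "stack", Value: string(buf[:n])})
	return e
}
```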
Please see the proposal in #35.
@rs @sean- I don't think that having a direct or indirect dependency on any error library is a great fit here. I would go with any solution that doesn't introduce this coupling.

I personally like the first solution. Does this make sense?
As per #9, and to offer a different approach from #11 and #35, this PR introduces custom error serialization with sane defaults without breaking the existing APIs. This is just a first draft and is missing tests. There is also a bit of code duplication which I feel could be reduced, but it serves to get the idea across.

It provides global error marshalling by exposing a `var ErrorMarshalFunc func(error) interface{}` in the zerolog package, which by default is a function that returns the passed argument. It should be overridden if you require custom error marshalling. Then, in every function that accepts an error or an array of errors, `ErrorMarshalFunc` is called on the error and the result is processed like this:

- if it implements `LogObjectMarshaler`, serialize it as an object
- if it is a string, serialize it as a string
- if it is an error, serialize it as a string with the result of `Error()`
- else, serialize it as an interface

The side effect of this change is that the encoders don't need the `AppendError/s` methods anymore, as the errors are serialized directly to other types.
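(A rough usage sketch of the hook described above, assuming it lands as a package-level `zerolog.ErrorMarshalFunc`; the `stackTracer` check and the string layout are illustrative choices, not part of the PR.)

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

// stackTracer is the interface satisfied by pkg/errors-wrapped errors.
type stackTracer interface {
	StackTrace() errors.StackTrace
}

func main() {
	// Render errors that carry a pkg/errors stack trace as "message\nstack";
	// fall back to the error itself otherwise.
	zerolog.ErrorMarshalFunc = func(err error) interface{} {
		if st, ok := err.(stackTracer); ok {
			return fmt.Sprintf("%s\n%+v", err.Error(), st.StackTrace())
		}
		return err
	}

	err := errors.New("something broke")
	log.Error().Err(err).Msg("request failed")
}
```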
Fixed by #35