Thanks especially to @yashugarg, we've now got some more fuzzing options in fuzz/ that are fairly easy to run. They currently don't find much, which is partially because we did a reasonable job using robust parsers (go us!) and partially because we just haven't shoved the fuzzer everywhere we have potential input yet.
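For anyone who hasn't looked in fuzz/ yet, the basic idea of these harnesses boils down to something like the hand-rolled loop below. This is an illustrative sketch only, not our actual harness code (the real harnesses use a coverage-guided engine, and the parser and function names here are just examples): feed arbitrary bytes to a parser and treat only its documented error type as acceptable.

```python
# Illustrative fuzz loop: feed random bytes to a parser and record any
# exception other than the parser's documented "invalid input" error.
# (Sketch only -- real harnesses are coverage-guided; json is a stand-in
# for whichever parser a given harness targets.)
import json
import random


def fuzz_json(iterations=1000, seed=0):
    rng = random.Random(seed)  # seeded so the run is reproducible
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            json.loads(data.decode("utf-8", errors="replace"))
        except json.JSONDecodeError:
            pass  # robust rejection of bad input is the expected behavior
        except Exception as exc:  # anything else would be a fuzzing finding
            crashes.append((data, exc))
    return crashes
```

A robust parser should produce an empty crash list here, which matches what we've been seeing: the interesting gaps are the inputs we haven't pointed a fuzzer at yet, not the parsers we already cover.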
Currently, we've been running things in our local setups as we experiment. That's good for now.
But I wanted to open it up for discussion: Does anyone have any particular vision of how we should integrate fuzzing into our release processes and regular testing?
Some options/thoughts:
- I can run the fuzzers for some length of time directly before a release (e.g. for at least a day after I tag a pre-release).
- We could set up the fuzzers to run on a regular cycle. It doesn't make sense to run on every commit, and probably not even daily, but we could have a once-a-week or once-a-month run, assuming it's not a waste of resources. GitHub Actions supports scheduled (cron) workflows, and I also have access to internal systems where I could set up a regular job.
- We could see if we qualify for OSS-Fuzz (or similar programs, if any exist? That's the only one I know about off the top of my head).
- Do we care about making fuzzing results public, or running fuzzers directly on systems others can examine more thoroughly? I don't know what level of audit people want in their tools, or what a meaningful report would look like for other users. I suspect anything we do in the open is more "icing on the cake" than "base requirement for someone to trust using our tool," but that doesn't mean we shouldn't think about what info to provide and how.
- Is there anything @yashugarg and I should do to help make it easier for people to run fuzzing, or to improve our fuzzing setup? I think we've got a minor docs gap, and we should probably start making a wishlist before @yashugarg is done GSoC for the year.
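For the scheduled-run option, a weekly GitHub Actions cron workflow could look roughly like this. The filename, harness invocation, and time budget are all placeholders to adapt to however the scripts in fuzz/ are actually run:

```yaml
# .github/workflows/fuzz.yml (hypothetical filename)
name: Scheduled fuzzing

on:
  schedule:
    - cron: "0 3 * * 1"   # every Monday at 03:00 UTC
  workflow_dispatch: {}    # also allow manual runs

jobs:
  fuzz:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.x"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run fuzz harnesses for a fixed time budget
        # Placeholder invocation: give each harness 30 minutes and
        # don't fail the job on a timeout.
        run: |
          for f in fuzz/fuzz_*.py; do
            timeout 1800 python "$f" || true
          done
```

A nice property of scheduled workflows is that they're free for public repos and the logs are visible to anyone, which also partly answers the "can others examine our fuzzing?" question.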
Any other thoughts/considerations on fuzzing, and how we can make sure it fits into our regular release/testing procedures going forward?
terriko added the "discussion" label on Aug 23, 2022.