
Discussion: Running fuzzing automatically / in the open? #1908

Closed
terriko opened this issue Aug 23, 2022 · 1 comment
Labels
discussion Discussion thread or meeting minutes that may not have any trivially fixable code issues associated

Comments


terriko commented Aug 23, 2022

Thanks especially to @yashugarg, we've now got some more fuzzing options in fuzz/ that are fairly easy to run. They currently don't find much, which is partially because we did a reasonable job using robust parsers (go us!) and partially because we just haven't pointed the fuzzer at everywhere we have potential input yet.

Currently, we've been running things in our local setups as we experiment. That's good for now.

But I wanted to open it up for discussion: Does anyone have any particular vision of how we should integrate fuzzing into our release processes and regular testing?

Some options/thoughts:

  1. I can run the fuzzers for some length of time directly before release (e.g. for at least a day after I tag a pre-release)
  2. We could set up the fuzzers to run on some regular cycle. It doesn't make sense for it to be on every commit, and probably not even daily, but we could have a once-a-week or once-a-month run assuming it's not a waste of resources. I don't know if GitHub Actions allows this, but I have access to internal systems where I could set up a regular job.
  3. We could see if we qualify for OSSFuzz (or similar programs if any exist? That's the only one I know about off the top of my head.)
  4. Do we care about making fuzzing results public or running fuzzers directly on systems others can examine more thoroughly? I don't know what level of audit people like in their tools and what a meaningful report would look like for other users. I suspect anything we do in the open is more "icing on the cake" than "base requirement for someone to trust using our tool" but that doesn't mean we shouldn't think about what info to provide and how.
  5. Is there anything @yashugarg and I could do to help make it easier for people to run fuzzing, or to improve our fuzzing setup? I think we've got a minor docs gap and should probably start making a wishlist before @yashugarg is done with GSoC for the year.
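
Regarding option 2: GitHub Actions does support scheduled workflows via cron triggers, so a regular fuzzing run wouldn't need internal systems. A minimal sketch of what that could look like (the workflow name, run duration, dependency file, and `fuzz.run_all` entry point are illustrative assumptions, not the actual harness layout in fuzz/):

```yaml
# Hypothetical scheduled fuzzing workflow -- names and paths are
# assumptions, adjust to match the real harnesses in fuzz/.
name: scheduled-fuzz
on:
  schedule:
    - cron: "0 3 * * 1"   # weekly, Mondays at 03:00 UTC
  workflow_dispatch:       # also allow manual runs

jobs:
  fuzz:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Run fuzzers for a bounded time
        run: |
          pip install -r dev-requirements.txt  # assumed dependency file
          # timeout bounds the run; "|| true" keeps a timeout from
          # failing the job, while a crash artifact can still be uploaded
          timeout 6h python -m fuzz.run_all || true
```

The `workflow_dispatch` trigger is worth including so a run can also be kicked off by hand right before tagging a release (option 1) without waiting for the next scheduled slot.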

Any other thoughts/considerations on fuzzing and how we could make sure it fits into our regular release/testing procedures going forwards?

@terriko terriko added the discussion Discussion thread or meeting minutes that may not have any trivially fixable code issues associated label Aug 23, 2022

terriko commented Dec 26, 2024

We're doing this within our GitHub Actions setup, so I think this can be closed.

@terriko terriko closed this as completed Dec 26, 2024