
Use Larger Testing Data Sets in Naga #4326

Open
@kvark

Description

It would be useful to have a place where we can store the bigger sets of shaders (SPIR-V, WGSL, GLSL, whatever). We'd then have a GitHub Action to fetch them and parse/validate them. Since this would be a heavy action, we'd run it either manually or on tag creation (the latter seems most practical).
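
As a rough illustration, the heavy step could look something like the sketch below. This is not a proposed implementation: it assumes the corpus has already been fetched into a local `corpus/` directory (a hypothetical layout) and simply runs naga's SPIR-V frontend and validator over every module it finds.

```rust
// Hedged sketch: walk a local corpus of SPIR-V binaries and run naga's
// SPIR-V frontend plus validator on each one. The "corpus/" directory
// layout is an assumption for illustration only.
use naga::front::spv;
use naga::valid::{Capabilities, ValidationFlags, Validator};
use std::fs;

fn main() -> std::io::Result<()> {
    let options = spv::Options::default();
    let (mut passed, mut failed) = (0u32, 0u32);

    for entry in fs::read_dir("corpus")? {
        let path = entry?.path();
        if path.extension().map_or(false, |e| e == "spv") {
            let bytes = fs::read(&path)?;
            // A fresh validator per module keeps runs independent of each other.
            let mut validator =
                Validator::new(ValidationFlags::all(), Capabilities::all());
            match spv::parse_u8_slice(&bytes, &options) {
                Ok(module) => match validator.validate(&module) {
                    Ok(_info) => passed += 1,
                    Err(e) => {
                        failed += 1;
                        eprintln!("{}: validation error: {:?}", path.display(), e);
                    }
                },
                Err(e) => {
                    failed += 1;
                    eprintln!("{}: parse error: {:?}", path.display(), e);
                }
            }
        }
    }

    println!("{} passed, {} failed", passed, failed);
    Ok(())
}
```

The GitHub Action itself would then just be a matter of fetching the corpus and running something along these lines on a manual or tag-triggered workflow.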

Here is some info about the SPIR-V corpus:

- Vulkan CTS has 750K of them.
- Lots of them are in the SPIRV-Tools test suite, accumulated over the years.
- When you build Vulkan CTS, you can run external/vulkan/modules/vk-build-programs -v to build all the shaders and run validation on them.
- You can hack that flow to dump them; I think there's a flow to dedup them and save them in a binary database of some kind, but I never looked deeply at that (see the sketch after this list).
- But they're not very diverse. About 99% of them are generated from Glslang, so there's a monoculture problem.
- The other 1% are generated from templated SPIR-V assembly text.
- Recently there are also a few hundred harder cases found through spirv-fuzz, using tech that evolved from the GraphicsFuzz folks' work.
- All the .amber scripts in Vulkan CTS are under https://github.com/KhronosGroup/VK-GL-CTS/tree/master/external/vulkancts/data/vulkan/amber, with a subdir for graphicsfuzz.
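
For the dedup step mentioned above, a minimal sketch (assuming the dumped modules land as .spv files in a flat, hypothetical dumped/ directory) could simply dedup by exact byte content:

```rust
// Hypothetical sketch: deduplicate a flat directory of dumped .spv files by
// exact byte content, keeping one path per distinct module. The "dumped/"
// directory is an assumption, not part of the CTS tooling.
use std::collections::HashSet;
use std::fs;
use std::path::PathBuf;

fn main() -> std::io::Result<()> {
    let mut seen: HashSet<Vec<u8>> = HashSet::new();
    let mut unique: Vec<PathBuf> = Vec::new();

    for entry in fs::read_dir("dumped")? {
        let path = entry?.path();
        if path.extension().map_or(false, |e| e == "spv") {
            let bytes = fs::read(&path)?;
            // HashSet::insert returns true only for contents we haven't seen yet.
            if seen.insert(bytes) {
                unique.push(path);
            }
        }
    }

    println!("{} unique modules", unique.len());
    for path in &unique {
        println!("{}", path.display());
    }
    Ok(())
}
```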

Labels

area: infrastructure, help required, lang: SPIR-V, lang: WGSL, naga, type: enhancement
