Program seems to find new compressions to make after multiple runs (?) #649

EnderNon opened this issue Nov 26, 2024 · 4 comments

EnderNon commented Nov 26, 2024

See my commits here https://github.com/EnderNon/Wynntils-jpegnoise-fix

So basically what happened was:

  • Ran these for the initial commit:
    • `oxipng -o max --strip all --alpha **/*.png` in common/src/main/resources/assets/wynntils/textures,
    • then ran `oxipng -o max --strip all --alpha logo.png` in common/src/main/resources/assets/wynntils,
    • creating the commit `refactor: compress every image lossless`.
  • Realised logo.png became slightly dimmer.
  • Then a contributor to upstream Wynntils asked to see what would happen if the first command from the first commit were run again.
  • Running it again found further savings; only the run after that showed no new differences in its output.
    So for some reason, running the command once wasn't enough to achieve max savings - running it twice was necessary. Seems like a bug.
andrews05 (Collaborator) commented:

Hi @EnderNon, thanks for the report.

Analysis of map_icon.png: the palette sorting evaluations are all tied for the best result. When this happens, it chooses the one that was performed first.
In the first run, the luma sort is performed first and this is what is selected for the output. In the second run, the luma sort is skipped because it is the same as the input, so the battiato sort is selected instead. The input "baseline" is always evaluated last, so it is never preferred in the case of a tie, which is why a second run may give a different result. A third run would end up the same as the first and can therefore never produce further improvement.
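As a minimal sketch of that selection rule (the names and byte sizes below are illustrative, not oxipng's actual internals), the key detail is that only a strictly smaller result replaces the current best, so the earliest of any tied candidates wins and the baseline, evaluated last, never can:

```rust
// Illustrative sketch only; SortKind and the byte sizes are made up, not
// oxipng's real types or evaluation pipeline.
#[derive(Debug, Clone, Copy)]
enum SortKind { Luma, Battiato, InputBaseline }

/// Keep the first strictly smaller candidate. On a tie, whichever candidate
/// was evaluated earlier wins; the input baseline is evaluated last, so it
/// can never win a tie.
fn pick_best(candidates: &[(SortKind, usize)]) -> SortKind {
    let mut best = candidates[0];
    for &c in &candidates[1..] {
        if c.1 < best.1 {
            best = c;
        }
    }
    best.0
}

fn main() {
    // First run: luma is evaluated first and wins the three-way tie.
    let run1 = [(SortKind::Luma, 100), (SortKind::Battiato, 100), (SortKind::InputBaseline, 100)];
    // Second run: luma matches the input and is skipped, so battiato wins.
    let run2 = [(SortKind::Battiato, 100), (SortKind::InputBaseline, 100)];
    println!("run 1 picks {:?}, run 2 picks {:?}", pick_best(&run1), pick_best(&run2));
}
```

On a third run the luma sort differs from the (now battiato-sorted) input again, is evaluated first, and wins the tie, which is why the output flips back to the first run's result rather than shrinking further.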

I'll have to consider if there's anything we can improve here but my initial thought is to just keep it as-is. If we were to change it to, e.g., prefer the input baseline in the case of a tie, then you would be stuck with the first result and never achieve the smaller size of the second run.

AlexTMjugador (Collaborator) commented:

A somewhat naive initial idea I'm thinking of: choosing one of the tied sorts at random, or from some tuned probability distribution, could be an improvement overall if the expected final size is smaller than always sticking with the first. But doing that comes with its own set of gotchas related to reproducibility... Perhaps there is something better than that, though.
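Purely as a sketch of one way the reproducibility gotcha could be sidestepped (not something oxipng does, and not a concrete proposal from this thread): the "random" pick could be derived deterministically from the input bytes, so the same input always produces the same choice.

```rust
/// Minimal 64-bit FNV-1a hash, used only to get a stable, dependency-free
/// value from the input bytes.
fn fnv1a(bytes: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in bytes {
        h ^= u64::from(b);
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

/// Pick one of the tied candidate indices pseudo-randomly, but keyed off the
/// input bytes so the choice is reproducible for a given file.
fn break_tie(input: &[u8], tied: &[usize]) -> usize {
    tied[(fnv1a(input) % tied.len() as u64) as usize]
}
```

This keeps a single run reproducible, though it still would not converge across repeated runs, since each run's output becomes the next run's input.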

TPS commented Nov 27, 2024

Maybe, if running multi-threaded anyway, run all of these tied (& close-to-tied) palette sorts in parallel? Kinda like CPUs do predictive branching?

andrews05 (Collaborator) commented:

> A somewhat naive initial idea I'm thinking of: choosing one of the tied sorts at random, or from some tuned probability distribution, could be an improvement overall if the expected final size is smaller than always sticking with the first. But doing that comes with its own set of gotchas related to reproducibility... Perhaps there is something better than that, though.

Yeah, determinism is important. Perhaps we could just adjust the order so that, e.g. mzeng is preferred if we know it's usually the best one.
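A rough sketch of what adjusting the tie-break preference could look like; the sort names and the ranking below are placeholders, not a measured "usually best" ordering:

```rust
// Placeholder types and ranking; not oxipng's real internals.
#[derive(Clone, Copy)]
enum SortKind { Mzeng, Luma, Battiato, InputBaseline }

/// Lower rank is preferred when the output sizes are equal.
fn rank(kind: SortKind) -> u8 {
    match kind {
        SortKind::Mzeng => 0,
        SortKind::Luma => 1,
        SortKind::Battiato => 2,
        SortKind::InputBaseline => 3,
    }
}

/// Smallest size wins; ties are broken by the fixed ranking rather than by
/// evaluation order.
fn pick(candidates: &[(SortKind, usize)]) -> SortKind {
    candidates
        .iter()
        .copied()
        .min_by_key(|&(kind, size)| (size, rank(kind)))
        .map(|(kind, _)| kind)
        .expect("at least one candidate")
}
```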

> Maybe, if running multi-threaded anyway, run all of these tied (& close-to-tied) palette sorts in parallel? Kinda like CPUs do predictive branching?

Indeed, I still plan to get around to #523 someday...
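For illustration only, evaluating the tied (and near-tied) candidates in parallel and keeping the smallest result could look roughly like the sketch below. `encode_with_sort` and the list of sort names are hypothetical stand-ins, and rayon is assumed here simply as the thread pool (added as a dependency), not as a description of what #523 covers.

```rust
use rayon::prelude::*;

/// Hypothetical stand-in for "apply this palette sort, then re-encode".
fn encode_with_sort(png: &[u8], sort: &str) -> Vec<u8> {
    // Real logic would re-sort the palette and run the usual trials.
    let _ = (png, sort);
    Vec::new()
}

/// Try every candidate sort on its own worker and keep the smallest output.
fn best_of(png: &[u8], sorts: &[&str]) -> Vec<u8> {
    sorts
        .par_iter()
        .map(|sort| encode_with_sort(png, sort))
        .min_by_key(|encoded| encoded.len())
        .expect("at least one sort candidate")
}
```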
