Description
I am trying something new here. I have the YOLOv2 model in frozen .pb format from TensorFlow, and I had successfully converted it to .mlmodel and got it working using this repo.
Now I have another model where I have quantized the weights in the frozen .pb model from float32 to 8 bits. The numbers are still stored in float32 format; only the number of distinct values is reduced to 8-bit precision, i.e. 255 unique float values. This compresses the model.
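Concretely, what I did is essentially a quantize/dequantize pass over each weight tensor. A minimal NumPy sketch of the idea (not my exact script; the shapes and level count below are illustrative):

```python
import numpy as np

def fake_quantize(w, levels=255):
    """Map a float32 tensor onto `levels` evenly spaced values,
    but keep the result stored as float32 (quantize/dequantize)."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / (levels - 1)
    q = np.round((w - w_min) / scale)           # integer level index, 0..levels-1
    return (q * scale + w_min).astype(np.float32)

w = np.random.randn(3, 3, 256, 64).astype(np.float32)  # e.g. a conv kernel
wq = fake_quantize(w)
print(len(np.unique(wq)) <= 255)  # True: at most 255 distinct float values
```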
I was able to convert this model successfully using the tf-coreml repo, the same way as the float32 model.
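For reference, the conversion call is the standard tf-coreml one; roughly the following (the paths, tensor names, and input shape here are placeholders, not my exact values):

```python
import tfcoreml

# Same call that worked for the original float32 model;
# only the input .pb differs.
tfcoreml.convert(
    tf_model_path='yolov2_quantized_frozen.pb',
    mlmodel_path='yolov2_quantized.mlmodel',
    output_feature_names=['output:0'],
    input_name_shape_dict={'input:0': [1, 416, 416, 3]},
)
```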
Now, on adding this model to the Xcode project, it gives this error:
There was a problem decoding this CoreML document
validation error: Unable to deserialize object
The model is still float32 (and the same size). Any ideas where it might be going wrong? Does the mlmodel protobuf store weights differently when there are fewer unique values?
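In case it matters, one check I can run is loading the generated spec directly in Python with coremltools (assuming it is installed); if deserialization fails there too, the file itself is malformed rather than it being an Xcode-side issue. The filename below is a placeholder:

```python
import coremltools

# If the protobuf is corrupt, this load should raise the same
# deserialization error that Xcode reports.
spec = coremltools.utils.load_spec('yolov2_quantized.mlmodel')
print(spec.description)
```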