For a newly frozen model (optionally transformed and memmapped), you can always try it with the TensorFlow pod first to see whether you're lucky enough to be able to use it the simple way. In our case, loading the alphazero19.pb model we generated with the TensorFlow pod caused the following error:
Couldn't load model: Invalid argument: No OpKernel was registered to support Op 'Switch' with these attrs. Registered devices: [CPU], Registered kernels: device='GPU'; T in [DT_FLOAT] device='GPU'; T in [DT_INT32] device='GPU'; T in [DT_BOOL] device='GPU'; T in [DT_STRING] device='CPU'; T in [DT_INT32] device='CPU'; T in [DT_FLOAT] [[Node: batch_normalization_13/cond/Switch = Switch[T=DT_BOOL, _output_shapes=[[], ...
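The error means the pod's prebuilt binary does not register a CPU kernel for the Switch op that the graph contains, so it helps to know exactly which ops a frozen graph uses before trying to load it. The usual way is to parse the .pb with TensorFlow's Python API and collect `node.op` values; the sketch below does the same without needing TensorFlow installed, by walking the protobuf wire format directly (in a GraphDef, field 1 holds the repeated NodeDef messages, and field 2 of each NodeDef is its op type). The filename alphazero19.pb comes from the text above; the helper names are illustrative, not part of any TensorFlow API.

```python
def read_varint(buf, i):
    """Decode one protobuf varint starting at offset i; return (value, next offset)."""
    result = shift = 0
    while True:
        b = buf[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i
        shift += 7

def iter_fields(buf):
    """Yield (field_number, raw_value) pairs from one serialized protobuf message."""
    i = 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wire = key >> 3, key & 7
        if wire == 0:                 # varint
            val, i = read_varint(buf, i)
        elif wire == 2:               # length-delimited (strings, sub-messages)
            n, i = read_varint(buf, i)
            val, i = buf[i:i + n], i + n
        elif wire == 5:               # fixed 32-bit
            val, i = buf[i:i + 4], i + 4
        elif wire == 1:               # fixed 64-bit
            val, i = buf[i:i + 8], i + 8
        else:
            raise ValueError('unsupported wire type %d' % wire)
        yield field, val

def graph_ops(pb_bytes):
    """Return the sorted set of op type names used in a serialized GraphDef."""
    ops = set()
    for field, val in iter_fields(pb_bytes):
        if field == 1:                # GraphDef.node: repeated NodeDef
            for nfield, nval in iter_fields(val):
                if nfield == 2:       # NodeDef.op: the op type string
                    ops.add(nval.decode('utf-8'))
    return sorted(ops)

if __name__ == '__main__':
    with open('alphazero19.pb', 'rb') as f:
        print('\n'.join(graph_ops(f.read())))
```

If Switch (or any other op the error names) shows up in the list, the stock pod cannot load the graph as-is, and you either have to transform the graph to remove the op or build TensorFlow from source with the missing kernel registered.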