python -c "import torch; import numpy; import os; print('Torch Version:', torch.__version__); print('CUDA Available:', torch.cuda.is_available()); print('CUDA Version ...
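The one-liner above is cut off, so exactly what it printed after the CUDA version isn't visible. A minimal, self-contained version of the same environment check might look like the sketch below; the cuDNN and LD_LIBRARY_PATH lines are my own additions (assumptions about what is relevant here, not part of the original command), everything else mirrors the visible prints.

```python
import os
import numpy
import torch

# Versions of the Python-side packages involved
print("Torch Version:", torch.__version__)
print("NumPy Version:", numpy.__version__)

# CUDA / cuDNN as seen by torch
print("CUDA Available:", torch.cuda.is_available())
print("CUDA Version:", torch.version.cuda)
print("cuDNN Version (torch):", torch.backends.cudnn.version())

# Library search path the process was started with; relevant when two
# different cuDNN copies exist on disk (assumption: Linux-style env var)
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH", ""))
```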
I'm trying to do something that I'm starting to think is not possible: using a cuDNN version that differs between torch and onnxruntime. In the past I was using torch 2.1.2+cu121 and onnxruntime 1.20.0 ...
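Whether two cuDNN builds can live in one process mostly comes down to which shared object each framework's loader actually resolves, so a first step is to see what really gets mapped. Below is a minimal sketch of my own (not from the thread), assuming Linux, a CUDA-capable GPU, and a placeholder "model.onnx" that can run on the CUDA execution provider; it forces both backends to load and then lists every libcudnn the process has mapped.

```python
import torch
import torch.nn.functional as F
import onnxruntime as ort

# Force torch to load its cuDNN: a GPU convolution is routed through cuDNN.
x = torch.zeros(1, 1, 8, 8, device="cuda")
w = torch.zeros(1, 1, 3, 3, device="cuda")
F.conv2d(x, w)

# Force onnxruntime to initialize its CUDA execution provider, which should
# pull in its own cuDNN dependency. "model.onnx" is a placeholder path.
sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

# On Linux, /proc/self/maps lists every shared object mapped into the process,
# so any libcudnn copies, and the paths they were resolved from, show up here.
with open("/proc/self/maps") as maps:
    cudnn_paths = sorted({line.split()[-1] for line in maps if "libcudnn" in line})

for path in cudnn_paths:
    print(path)
```

If both frameworks end up on the same file, the process is effectively using a single cuDNN no matter what each wheel ships with; two distinct paths mean two copies really are mapped, which is where version mismatches and symbol clashes tend to surface.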