[ONNX] Adding 'numel' and 'to' export for script module #36501
Conversation
💊 Build failures summary and remediations: As of commit 7e3091c (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 (This comment was automatically generated by Dr. CI.)
torch/onnx/symbolic_opset9.py
def numel(g, self):
    shape = g.op("Shape", self)
    dtype = shape.type().scalarType()
    if dtype is None:  # dtype could be None for a script module
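For context on the snippet above: the intended ONNX lowering of `aten::numel` is a `Shape` node followed by a `ReduceProd` node. The NumPy sketch below emulates that graph's semantics; the function name is illustrative, not part of the exporter.

```python
import numpy as np

# Emulation of the ONNX graph built for aten::numel:
# Shape (always an int64 tensor per the ONNX spec) -> ReduceProd.
# Illustrative sketch only, not the exporter's actual code.
def numel_like_onnx(x):
    shape = np.asarray(x.shape, dtype=np.int64)  # ONNX Shape output
    return np.prod(shape, dtype=np.int64)        # ONNX ReduceProd over the shape

print(numel_like_onnx(np.zeros((2, 3, 4))))  # 24
```

Note that for a 0-d (scalar) input the product over an empty shape is 1, which matches `numel` on a scalar tensor.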
`shape` is just created above, so I think `dtype` here would always be None. On the other hand, according to the ONNX spec, the `Shape` operator always outputs a tensor of type Long, so why do we need to insert the Cast node?
I was seeing an ONNXRuntime "kernel not implemented" error, and I thought this missing type was the issue. But that's correct, it should not be needed.
This PR needs to wait until microsoft/onnxruntime#3507 is merged then.
@neginraoof, it seems like ReduceProd is not available in ort for all opsets (it's erroring out with "Failed to find kernel for ReduceProd" for opsets 7-9). Could you disable the tests for the older versions and/or request the kernel implementation?
@lara-hdr Thanks for reviewing this. I updated the onnxruntime ReduceProd kernel and am waiting for the nightly build (it's stuck); I'll need to update the ort-nightly version after that.
cc @houseroad for review. Thanks.
@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
LGTM
Thanks @BowenBao
@houseroad merged this pull request in 5809288.
These two ops are needed for torchvision model export. Since we're scripting part of the code for dynamic export of models (in pytorch/vision#2052), these two changes are required.
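Similarly, the `aten::to` overloads that take a dtype map naturally onto an ONNX Cast node, which is an elementwise dtype conversion. The NumPy sketch below emulates that semantics; the function name is illustrative, not part of the exporter.

```python
import numpy as np

# aten::to(dtype) corresponds to an ONNX Cast node: an elementwise
# dtype conversion. Illustrative emulation, not exporter code.
def to_like_onnx_cast(x, dtype):
    return x.astype(dtype)  # float -> int casts truncate toward zero

y = to_like_onnx_cast(np.array([1.9, 2.1]), np.int64)
print(y.dtype, y.tolist())  # int64 [1, 2]
```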