[ONNX] Support clamp_min and clamp_max #37872
Conversation
@@ -1515,6 +1515,15 @@ def forward(self, x, k):
        k = torch.tensor(3)
        self.run_test(MyModuleDynamic(), [x, k])

    @skipIfUnsupportedOpsetVersion([7, 12])
Why is opset 12 skipped?
I guess we can enable this after ORT version is updated?
Thanks
@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@houseroad merged this pull request in 7be9796.
clamp_min is used in torch.nn.functional.normalize. This updates symbolic_opset11 to support it via the updated Clip operator in ONNX opset 11.
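For context, a minimal sketch of the ops this PR exports, using only public PyTorch APIs: clamp_min bounds a tensor from below, clamp_max bounds it from above, and F.normalize relies on clamp_min internally to keep its denominator away from zero (which is why exporting normalize requires clamp_min support).

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, 0.0, 2.0])

# clamp_min bounds values from below; clamp_max bounds them from above
lo = torch.clamp_min(x, -1.0)   # tensor([-1., 0., 2.])
hi = torch.clamp_max(x, 1.0)    # tensor([-2., 0., 1.])

# F.normalize divides by the p-norm clamped from below with clamp_min,
# so an L2-normalized vector can be reproduced manually:
v = torch.tensor([[3.0, 4.0]])
unit = F.normalize(v, p=2, dim=1)
manual = v / v.norm(2, dim=1, keepdim=True).clamp_min(1e-12)
```

In ONNX opset 11, Clip takes optional min/max inputs, so a one-sided clamp maps to a Clip with only one bound supplied.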