[ONNX] disable size optimizations for onnx #36243
Conversation
💊 CircleCI build failures summary and remediations

As of commit c00c00c (more details on the Dr. CI page): ✅ None of the build failures appear to be your fault 💚

🚧 1 upstream failure: probably caused by an upstream breakage.
This comment was automatically generated by Dr. CI.
PR looks fine, but it looks like a separate PR (#35386) is adding more constant propagation of shape information at export time. I would imagine we don't want both this PR and that one to land.
@eellison the ONNX constant folding is different; its scope is smaller. Since it is an export-time optimization, we can only fold things that we know won't change, so only constant nodes are folded, e.g. `onnx::Constant[value_t=...]`. The JIT optimization here happens for any tensor with shape info.
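To illustrate the distinction, here is a toy sketch (purely hypothetical code, not the actual PyTorch exporter): export-time folding only evaluates nodes whose inputs are all literal constants, and leaves anything that depends on runtime data (such as a tensor shape that may vary between inputs) untouched.

```python
# Hypothetical toy IR: each node is (output_name, op, input_names, value).
# Export-time constant folding evaluates a node only when every input is
# itself a known constant; nodes fed by runtime data are kept as-is.

def constant_fold(nodes):
    consts = {}   # output name -> known constant value
    folded = []
    for name, op, inputs, value in nodes:
        if op == "Constant":
            consts[name] = value
            folded.append((name, op, inputs, value))
        elif op == "Add" and all(i in consts for i in inputs):
            # All inputs are constants: safe to fold at export time.
            result = sum(consts[i] for i in inputs)
            consts[name] = result
            folded.append((name, "Constant", [], result))
        else:
            # At least one input depends on runtime data
            # (e.g. a graph input), so the node must be kept.
            folded.append((name, op, inputs, value))
    return folded

graph = [
    ("a", "Constant", [], 2),
    ("b", "Constant", [], 3),
    ("c", "Add", ["a", "b"], None),   # foldable: both inputs constant
    ("d", "Add", ["c", "x"], None),   # not foldable: "x" is a graph input
]
print(constant_fold(graph))
```

A shape-based propagation pass, by contrast, would also rewrite nodes whose *shapes* are known even when their values are not, which is exactly what can go stale across inputs and why this PR disables it for ONNX export.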
So the traced graph for the example looks like:
How does the `%9 : Tensor = prim::GetAttr(...)` get lowered?
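For context, `prim::GetAttr` nodes in a traced graph typically fetch a module parameter or buffer, and during ONNX export they are generally lowered by binding the attribute's tensor value directly into the graph as a constant/initializer. A hypothetical before/after sketch (not the actual graph from this PR):

```
# Traced TorchScript IR (hypothetical):
%9 : Tensor = prim::GetAttr[name="weight"](%self)
%10 : Tensor = aten::matmul(%input, %9)

# After export-time lowering, the parameter value is baked in:
%9 : Tensor = onnx::Constant[value=<weight tensor>]()
%10 : Tensor = onnx::MatMul(%input, %9)
```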
LGTM
@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Reviving this PR pytorch#35401 eellison. I believe after the profiled graph executor fix the test failures are handled. Pull Request resolved: pytorch#36243 Differential Revision: D20950623 Pulled By: eellison fbshipit-source-id: 5fbee426d1a098d84d5938540d45ce00828299be
Reviving this PR #35401 @eellison. I believe the test failures are handled after the profiled graph executor fix.