
[ONNX] Enable constant folding for Shape #35386

Closed
KsenijaS wants to merge 14 commits

Conversation

KsenijaS (Contributor):

Enabled constant folding for onnx::Shape
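
For context, a minimal sketch of what this enables, written in the style of the checks in test/onnx/test_utility_funs.py (the module, buffer, and shapes here are illustrative, not necessarily the PR's exact test):

```python
import io

import onnx
import torch

# Under tracing, `self.weight.shape[0]` records aten::size, which the
# ONNX exporter lowers to onnx::Shape followed by onnx::Gather.
class ShapeModule(torch.nn.Module):
    def __init__(self):
        super(ShapeModule, self).__init__()
        self.register_buffer("weight", torch.ones(5))

    def forward(self, x):
        return x + self.weight.shape[0]

f = io.BytesIO()
torch.onnx.export(ShapeModule(), torch.randn(2, 3), f,
                  do_constant_folding=True)
model = onnx.load_from_string(f.getvalue())

# The buffer's shape is fixed at export time, so with this change the
# Shape node (and the Gather that consumes it) should fold into a
# constant instead of surviving in the exported graph.
assert all(n.op_type != "Shape" for n in model.graph.node)
```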

KsenijaS requested a review from apaszke as a code owner on March 25, 2020
facebook-github-bot added the oncall: jit label (adds this PR to the JIT oncall triage queue) on Mar 25, 2020
dr-ci bot commented on Mar 25, 2020:

💊 CircleCI build failures summary and remediations

As of commit 77c327d (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no CircleCI failures yet. 💚 💚



(Two review comments on test/onnx/test_utility_funs.py were marked outdated and resolved.)
neginraoof (Contributor):

Also, please check the failures. It looks like test/onnx/test_operators.py TestOperators.test_expand is failing.
Export of the expand op at opset 9 uses the ones_like op, which in turn uses the shape op, so this failure could be expected. Please verify and update the expected graph file for expand (see the sketch below).
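
For anyone verifying that claim, a rough sketch of how to inspect which ops expand lowers to at opset 9 (the module and sizes are illustrative):

```python
import io

import onnx
import torch

class ExpandModule(torch.nn.Module):
    def forward(self, x):
        return x.expand(4, 6, 2)

f = io.BytesIO()
# Folding is disabled here so the raw lowering stays visible.
torch.onnx.export(ExpandModule(), torch.randn(6, 2), f,
                  opset_version=9, do_constant_folding=False)
model = onnx.load_from_string(f.getvalue())

# At opset 9 the lowering goes through ones_like, which queries its
# input's shape, so a Shape op should appear in this list.
print([n.op_type for n in model.graph.node])
```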

ngimel added the triaged label (this issue has been looked at by a team member and triaged into an appropriate module) on Mar 25, 2020
ngimel requested a review from houseroad on March 25, 2020
ngimel added the module: onnx label (related to torch.onnx) on Mar 25, 2020
neginraoof (Contributor) left a comment:

Thanks for the fixes. Please check if there's an onnxruntime test for shape and add that if not.

facebook-github-bot (Contributor) left a comment:

@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

KsenijaS (Contributor, Author):

@houseroad can you merge this PR?

neginraoof (Contributor):

@KsenijaS could you rebase this PR, please?

KsenijaS (Contributor, Author) commented on Apr 7, 2020:

@houseroad can you please review my PR? It seems that none of the failures are caused by my changes.

houseroad (Member) left a comment:

Not sure why we need to change the onnx::Gather part.

@@ -295,7 +299,7 @@ c10::optional<at::Tensor> runTorchBackendForOnnx(
         inputTensorValues[0], p, node->is(attr::axes), node->i(attr::keepdims));
     return c10::optional<at::Tensor>(updated_val);
   } else if (node->kind() == onnx::Gather) {
-    assert(inputTensorValues.size() == 1);
+    assert(inputTensorValues.size() == 2);
houseroad (Member):

Why change this?

KsenijaS (Contributor, Author):

Gather has two inputs (data and indices); that's why my tests were failing.

Contributor:

This was a bug in the Gather implementation for constant folding. This node is always required to have two inputs. (Note that this assertion was not blocking tests on CI; it was caught locally.)
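
(Aside: in the ONNX operator spec, Gather always takes exactly two inputs, data and indices, which is what the corrected assertion reflects. A quick illustration with the onnx Python helpers:)

```python
from onnx import helper

# Gather selects slices of `data` along `axis` at the positions given
# by `indices`; both inputs are mandatory per the operator schema.
node = helper.make_node(
    "Gather", inputs=["data", "indices"], outputs=["out"], axis=0)
print(len(node.input))  # -> 2
```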

KsenijaS (Contributor, Author):

@houseroad if you have no further comments, can this PR be merged?

KsenijaS (Contributor, Author) commented on Apr 8, 2020:

@houseroad can we merge this PR?

BowenBao (Collaborator) left a comment:

LGTM. Please update the assert to TORCH_INTERNAL_ASSERT.

(A review comment on torch/csrc/jit/passes/onnx/constant_fold.cpp was marked outdated and resolved.)
facebook-github-bot (Contributor) left a comment:

@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

houseroad (Member) left a comment:

Thanks

facebook-github-bot (Contributor):

@houseroad merged this pull request in 4f728c9.


ashishfarmer pushed a commit to ashishfarmer/pytorch that referenced this pull request Apr 13, 2020
Summary:
Enabled constant folding for onnx::Shape
Pull Request resolved: pytorch#35386

Reviewed By: hl475

Differential Revision: D20682412

Pulled By: houseroad

fbshipit-source-id: 4559a35f174edfb7e6364c0fbf5bc1d55d0d26dc