[ONNX] Added support for constant folding onnx::Add and onnx::Sub #35869

Closed
tanyokwok wants to merge 1 commit

Conversation

tanyokwok
Contributor

Added support for constant folding onnx::Add and onnx::Sub

@tanyokwok tanyokwok requested a review from apaszke as a code owner April 2, 2020 06:58
@facebook-github-bot facebook-github-bot added the oncall: jit label (Add this issue/PR to JIT oncall triage queue) Apr 2, 2020
@dr-ci

dr-ci bot commented Apr 2, 2020

💊 CircleCI build failures summary and remediations

As of commit 8afbc09 (more details on the Dr. CI page):


None of the build failures appear to be your fault 💚


  • 2/2 broken upstream at merge base e56ba84 since Apr 05

    Please rebase on the viable/strict branch:

    If your commit is newer than viable/strict, you can try basing on an older, stable commit:

    git fetch https://github.com/pytorch/pytorch viable/strict
    git rebase --onto FETCH_HEAD $(git merge-base origin/master HEAD)
    

    If your commit is older than viable/strict:

    git fetch https://github.com/pytorch/pytorch viable/strict
    git rebase FETCH_HEAD
    

    Check out the recency history of this "viable master" tracking branch.


🚧 2 upstream failures:

These were probably caused by upstream breakages:



Contributor

@bddppq bddppq left a comment


LGTM

test/onnx/test_utility_funs.py (resolved)
@bddppq
Contributor

bddppq commented Apr 5, 2020

CI failures are unrelated

@bddppq
Contributor

bddppq commented Apr 5, 2020

@pytorchbot retest this please

@bddppq bddppq added the module: onnx label (Related to torch.onnx) Apr 5, 2020

@yangjunpro yangjunpro left a comment


Just some small comments

operator_export_type=OperatorExportTypes.ONNX)
for node in graph.nodes():
    assert node.kind() != "onnx::Add"
assert len(list(graph.nodes())) == 1


Should we also add a value check to verify the correctness of the constant folding behavior?

Contributor Author


Done
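
For reference, a value check along the lines suggested here could compare the folded initializer in the exported model against the same arithmetic done eagerly in PyTorch. The sketch below is only illustrative and is not the code added in this PR: it sticks to the public torch.onnx.export and onnx APIs rather than the internal helpers used in test_utility_funs.py, and the AddModule name and tensor shapes are made up.

import io

import numpy as np
import onnx
import torch
from onnx import numpy_helper

class AddModule(torch.nn.Module):
    # `a + b` only involves constants, so the exporter should fold the onnx::Add away.
    def __init__(self):
        super().__init__()
        self.register_buffer("a", torch.randn(2, 3))
        self.register_buffer("b", torch.randn(2, 3))

    def forward(self, x):
        return (self.a + self.b) - x

module = AddModule()
buf = io.BytesIO()
torch.onnx.export(module, (torch.randn(2, 3),), buf, do_constant_folding=True)
model = onnx.load_model_from_string(buf.getvalue())

# Structural check: no Add node should survive constant folding.
assert all(node.op_type != "Add" for node in model.graph.node)

# Value check: some initializer should hold the folded result of a + b.
expected = (module.a + module.b).numpy()
folded = [numpy_helper.to_array(t) for t in model.graph.initializer]
assert any(t.shape == expected.shape and np.allclose(t, expected) for t in folded)

Swapping "Add"/+ for "Sub"/- gives the analogous check for the onnx::Sub case.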

operator_export_type=OperatorExportTypes.ONNX)
for node in graph.nodes():
    assert node.kind() != "onnx::Sub"
assert len(list(graph.nodes())) == 1


ditto

Contributor Author


Done

@@ -233,6 +233,12 @@ c10::optional<at::Tensor> runTorchBackendForOnnx(
} else if (node->kind() == onnx::Mul) {
updated_val = at::mul(inputTensorValues[0], inputTensorValues[1]);
return c10::optional<at::Tensor>(updated_val);
} else if (node->kind() == onnx::Sub) {


A potentially more generic approach would be to abstract Mul/Div/Sub/Add as a single binary op; the pseudocode might look like the following:

if (node->kind() == onnx::Div || node->kind() == onnx::Mul /* ... */) {
  kind2func k2f = { {onnx::Mul, at::mul}, /* ... */ };
  function f = k2f(node->kind());
  return c10::optional<at::Tensor>(f(inputTensorValues[0], inputTensorValues[1]));
}


After discussing with Tianyou, I take back my comment: aten::add/sub takes a different number of arguments than aten::div/mul (3 versus 2). Although we could use bind tricks to unify the interfaces, that would be a little overkill. I think Tianyou's current implementation is good enough.

test/onnx/test_utility_funs.py (resolved)
Member

@houseroad houseroad left a comment


Overall looks good. Could you rebase to master and let the ONNX related CI run?

Contributor

@facebook-github-bot facebook-github-bot left a comment


@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@tanyokwok
Contributor Author

Overall looks good. Could you rebase to master and let the ONNX related CI run?

Done

@facebook-github-bot
Contributor

@houseroad merged this pull request in 8dba98d.

ashishfarmer pushed a commit to ashishfarmer/pytorch that referenced this pull request Apr 13, 2020
…torch#35869)

Summary:
Added support for constant folding onnx::Add and onnx::Sub
Pull Request resolved: pytorch#35869

Reviewed By: hl475

Differential Revision: D20865640

Pulled By: houseroad

fbshipit-source-id: 2b8c1cc196959b5b5b9ce018dbdcb74d59a92d9f
Labels
Merged · module: onnx (Related to torch.onnx) · oncall: jit (Add this issue/PR to JIT oncall triage queue) · open source

7 participants