
[ONNX] Adding 'numel' and 'to' export for script module #36501

Closed

Conversation

neginraoof
Contributor

These two ops are needed for torchvision model export. Since we're scripting part of the code for dynamic export of models (in pytorch/vision#2052), these two changes are required.
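For context, exporting `aten::to` amounts to emitting an ONNX `Cast` node with the right target type. A minimal sketch of the dtype mapping involved (the enum values follow `onnx.TensorProto`; the dict and function names here are illustrative, not the exporter's actual code):

```python
# Illustrative mapping from torch scalar-type names to ONNX Cast targets.
# Enum values follow onnx.TensorProto; the dict itself is an assumption
# for illustration, not the real symbolic implementation.
SCALAR_TYPE_TO_ONNX = {
    "Float": 1,   # TensorProto.FLOAT
    "Double": 11, # TensorProto.DOUBLE
    "Long": 7,    # TensorProto.INT64
    "Int": 6,     # TensorProto.INT32
    "Bool": 9,    # TensorProto.BOOL
}

def cast_target_for(scalar_type):
    """Return the ONNX Cast 'to' attribute for a torch scalar-type name."""
    return SCALAR_TYPE_TO_ONNX[scalar_type]

print(cast_target_for("Long"))  # 7
```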

…oof/interpolate

# Conflicts:
#	torch/onnx/symbolic_opset12.py
@dr-ci

dr-ci bot commented Apr 13, 2020

💊 Build failures summary and remediations

As of commit 7e3091c (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

def numel(g, self):
    shape = g.op("Shape", self)
    dtype = shape.type().scalarType()
    if dtype is None:  # dtype could be None for a script module
Collaborator

`shape` is just created above, so I think `dtype` would always be None here.

On the other hand, according to the ONNX spec, the Shape operator always outputs a tensor of type Long, so why do we need to insert the cast node?
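The equivalence under discussion can be sketched in plain NumPy: since ONNX `Shape` always yields an int64 tensor, `numel` is just `ReduceProd` over it, with no cast needed. This mirrors the exported graph for illustration only; it is not the exporter code itself:

```python
import numpy as np

def numel_via_shape_reduceprod(x):
    # Mirrors the exported ONNX graph: Shape(x) -> ReduceProd.
    shape = np.asarray(x.shape, dtype=np.int64)  # ONNX Shape outputs int64
    return np.prod(shape, dtype=np.int64)        # ONNX ReduceProd, keepdims=0

x = np.zeros((2, 3, 4))
print(numel_via_shape_reduceprod(x))  # 24
```

Note that the empty-shape (scalar) case also works: the product of an empty int64 vector is 1, matching `torch.numel` on a 0-d tensor.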

Contributor Author

I was seeing an ONNXRuntime "kernel not implemented" error, and I thought this missing type was the issue. But that's correct, the cast should not be needed.
This PR needs to wait until microsoft/onnxruntime#3507 is merged then.

@lara-hdr
Contributor

@neginraoof, it seems like ReduceProd is not available in ort for all opsets (it's erroring out with "Failed to find kernel for ReduceProd" for opsets 7-9). Could you disable the tests for the older versions and/or request the kernel implementation?
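The suggestion above can be sketched as a simple opset guard for the test suite. This is a hypothetical helper, not PyTorch's actual test infrastructure, and the minimum version is an assumption based on the report that opsets 7-9 lack the ORT kernel:

```python
# Hypothetical guard mirroring the review suggestion: only run the
# numel test on opset versions where ORT implements the ReduceProd
# kernel. MIN_SUPPORTED_OPSET is an assumption for illustration,
# derived from the "fails on opsets 7-9" report above.
MIN_SUPPORTED_OPSET = 10

def should_test_numel(opset_version):
    return opset_version >= MIN_SUPPORTED_OPSET

print([v for v in range(7, 13) if should_test_numel(v)])  # [10, 11, 12]
```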

@neginraoof
Contributor Author

@lara-hdr Thanks for reviewing this. I updated onnxruntime's ReduceProd and am waiting for the nightly build (it's stuck); I'll need to bump the ort-nightly version after that.

@neginraoof
Contributor Author

cc @houseroad for review. Thanks.


@facebook-github-bot
Contributor

@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@BowenBao
Collaborator

LGTM

@neginraoof
Contributor Author

Thanks @BowenBao
@houseroad if there are no internal failures, can we merge this please?
Thanks.

@facebook-github-bot
Contributor

@houseroad merged this pull request in 5809288.

7 participants