Bring back requestViewportScaling? #1091
/agenda
As a note, HoloLens 1 originally had only a latent …

Allowing apps to render to an arbitrary subset of the framebuffer and simply tell the UA what portion they're using that frame seems like a reasonable design to me, if no other vendors will hit gotchas there.
Somewhat relevant: in my implementation of FPO views in Servo, I downscale them by a constant factor for performance. It would be interesting if this could be exposed and overridable through such an API, though I don't think the current proposal is a per-view thing.
Marking as future: while this did see some interest on the call and we should continue looking into it, it's something that can be brought in additively later with no complications. I'll try to capture more of the call discussion in another comment.
In summary, the rough consensus was that people are generally OK with a mechanism for apps to request use of under-sized viewports. There didn't seem to be specific concerns that implementations would be unable to accommodate this, but it seems safer to provide a mechanism for UAs to decline the request or to adjust the resulting value if needed. Providing GPU metrics to applications would make this more useful, but it seems worth experimenting with viewport scaling even in the absence of that.

Meeting minutes: https://www.w3.org/2020/06/30-immersive-web-minutes.html

Background: the current framebufferScaleFactor is set when creating the XRWebGLLayer and cannot be changed for an existing layer. It's possible to change the scale at runtime by creating a new XRWebGLLayer and setting it as the new base layer via XRSession's updateRenderState, but this generally involves reallocations in the graphics pipeline. Applications should expect dropped frames when changing it, so it's not suitable as a per-frame dynamic scaling mechanism.

The framebuffer is carved up into views, for example one view per eye for a typical VR display. Currently, this viewport allocation is fully controlled by the UA. In the previous WebVR API, applications had full control over viewport allocation and could request using only subsets of the framebuffer. Some implementations (HoloLens?) had concerns that full application control could be inefficient or problematic; UA control can enforce constraints such as aligning to a preferred pixel grid.

According to Microsoft, dynamic viewport scaling hadn't seen much uptake by developers, including in corresponding native APIs, where developers often chose a fixed size and kept it for the duration of the experience. One concern was that it is difficult for applications to do smart scaling if they don't have any metrics about GPU utilization or time budgets.
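To make the background concrete, the current workaround amounts to allocating a whole new layer and swapping it in. A minimal sketch of that, using mock classes in place of the real WebXR interfaces (which only exist in a browser):

```js
// FakeXRWebGLLayer/FakeXRSession are mocks standing in for the WebXR
// XRWebGLLayer and XRSession interfaces, for illustration only.
class FakeXRWebGLLayer {
  constructor(session, gl, init = {}) {
    this.framebufferScaleFactor = init.framebufferScaleFactor ?? 1.0;
  }
}

class FakeXRSession {
  constructor() { this.renderState = { baseLayer: null }; }
  updateRenderState(newState) { Object.assign(this.renderState, newState); }
}

function setFramebufferScale(session, gl, scale) {
  // A new layer means new GPU allocations: expect dropped frames here,
  // which is why this isn't suitable as a per-frame mechanism.
  const layer = new FakeXRWebGLLayer(session, gl, { framebufferScaleFactor: scale });
  session.updateRenderState({ baseLayer: layer });
}

const session = new FakeXRSession();
setFramebufferScale(session, null, 0.5);
console.log(session.renderState.baseLayer.framebufferScaleFactor); // 0.5
```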
Currently, developers targeting phone AR have expressed strong interest in dynamic scaling. In phone AR, even a rudimentary approach that accepts dropped frames seemed useful, since dropping frames isn't nearly as impactful as in a VR experience. According to @thetuvix, OpenXR supports full viewport control, so implementations based on it should easily be able to do arbitrary viewports. Developers were inappropriately using framebuffer scaling for this, and viewport scaling would be much more efficient.

It would be nice if engines had a "keep me at framerate" flag; that seems rare today, but it would be useful to have a way to prototype it. The viewport scale mechanism should take effect on the same frame where it was requested. If a UA can't do this directly, it could use workarounds such as adjusting reprojection to compensate, but this must not cause visible dropped frames (especially if it would cause black flashes or similar), to avoid user discomfort if apps change the scale frequently.

The UA could provide a recommended viewport scale for each frame, for example based on a heuristic using UA-internal metrics. Applications could opt in to dynamic scaling by using that as-is, or apply clamping or other adjustments as needed, e.g. a lower bound to ensure small text remains readable. UAs could also potentially do automatic viewport scaling by default with an opt-out mechanism, but this risks surprising applications.
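The "clamping or other adjustments" idea above can be illustrated with a small helper; the function name and the 0.5 floor are made up for this example:

```js
// Hypothetical helper: take the UA's recommended scale (which may be
// undefined if the UA offers no recommendation) and apply an app-chosen
// lower bound, e.g. to keep small text readable.
function chooseViewportScale(recommendedScale, minScale = 0.5) {
  if (recommendedScale === undefined) return 1.0; // no recommendation: full size
  return Math.min(1.0, Math.max(minScale, recommendedScale));
}

console.log(chooseViewportScale(0.3));       // clamped up to the 0.5 floor
console.log(chooseViewportScale(0.8));       // used as-is
console.log(chooseViewportScale(undefined)); // defaults to full size
```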
One more thing: I think there was also interest in applying a custom viewport scale to third-person views. For this use case, the requested and/or recommended viewport scale should be a per-view property, not per-frame, so that the views can be scaled independently.
Just to make the proposal a bit more concrete, how would something like this sound?
Add a new getScaledViewport method taking a scale factor. Alternatively, assuming people are OK with overloaded methods in the API, this could be done by adding a new optional scale parameter to the existing getViewport method.

In code, based on example 5 from https://immersive-web.github.io/webxr/#xrviewport-interface, the current API looks like this:

```js
for (xrView of viewer.views) {
  let xrViewport = xrWebGLLayer.getViewport(xrView);
  gl.viewport(xrViewport.x, xrViewport.y, xrViewport.width, xrViewport.height);
}
```

Scaled viewports:

```js
let xrViewport = xrWebGLLayer.getScaledViewport(xrView, xrView.recommendedViewportScale || 1.0);

// Variant: overload getViewport with an extra parameter
let xrViewport = xrWebGLLayer.getViewport(xrView, xrView.recommendedViewportScale || 1.0);

// Variant: an undefined value is treated as 1.0
let xrViewport = xrWebGLLayer.getViewport(xrView, xrView.recommendedViewportScale);
```

Of course, applications could use their own logic to calculate a scale factor from scratch, or base it on clamping/modifying the UA-provided recommended scale factor.

Allowing the UA flexibility to ignore or modify the requested scale factor is helpful for several reasons: it allows the UA to align the viewports to pixel boundaries where this improves efficiency, and it avoids the need for complicated rectangle area allocation tracking. For example, if an application for some reason calls getViewport repeatedly with different scale values, the UA can keep returning the viewport it already decided on.
This sounds great to me; I like the overload version a little better, but it's not a strong opinion. Out of curiosity, in what circumstances would recommendedViewportScale != 1.0?

In any case, this scaling is critical for smooth frame rates, especially since I've observed frame rates dropping by more than 4x just by walking up to an AR model. The shader only runs on the blocks of pixels the rendering covers, which is usually not all that much, but when you get close, it can quickly become a disaster.
I'm generally comfortable with keeping the UA in the loop to allow runtimes to give back a slightly tweaked viewport (or even ignore the request and just return the original viewport) if needed.

One gotcha with the particular design proposed above is that it gives side effects to the call to getViewport. It may make more sense to have a separate method for requesting the scale, leaving getViewport itself as a pure query.

That approach still has the subtlety that the app must make the request before its first getViewport call for that frame. Perhaps more importantly, if most platforms just blindly accept any viewport you request, some apps may not actually go through the extra bother of calling getViewport afterwards to read back the viewport they were actually given.
@elalish wrote:
The recommended viewport scale is intended as a mechanism for the UA to suggest a scale factor based on internal metrics or heuristics, with the goal that an application using that scale factor should get an acceptable compromise between resolution and framerate. It would stay at 1.0 if the application's rendering is keeping up with the target framerate at full viewport size, and would only drop lower if the rendering time exceeds the frame time budget.

Ideally, the UA's recommended viewport scale would remain at 1.0 if the frame rate is low for reasons other than GPU rendering time, e.g. if the application spends an excessive amount of CPU time in JS processing, where reducing the viewport scale would reduce visual quality without any performance benefit.
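As an illustration only (no UA is known to use exactly this), a heuristic with those properties might look like the following; the function name, the 0.25 floor, and the timing numbers are all invented for the sketch:

```js
// Toy heuristic: recommend a scale from GPU render time vs. the frame
// budget, and stay at 1.0 when rendering keeps up, so CPU-bound frames
// don't trigger a useless resolution drop.
function recommendScale(gpuTimeMs, frameBudgetMs) {
  if (gpuTimeMs <= frameBudgetMs) return 1.0; // rendering keeps up
  // Pixel count scales with area, so scale each edge by the square root
  // of the time ratio to target the frame budget.
  const edgeScale = Math.sqrt(frameBudgetMs / gpuTimeMs);
  return Math.max(0.25, edgeScale); // floor so output stays legible
}

console.log(recommendScale(8, 11.1));               // 1 (GPU keeps up)
console.log(recommendScale(22.2, 11.1).toFixed(2)); // "0.71" (sqrt of 1/2)
```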
@thetuvix wrote:
I'm not sure I understand the failure mode you're worried about here. As far as I can tell, there is no way for applications to "run with the exact viewport they requested": the WebXR API doesn't have any way to request a specific viewport. The existing getViewport call is the only way to find out which viewport to use, and applications are expected to use the value it returns.

If there are separate parts of the application that call getViewport with different scale values, the UA can resolve that consistently. I had mentioned this in a previous comment: #1091 (comment)
Would it address your concern to make this a spec requirement instead of a UA choice? As an example, let's say the application does something like this:

```js
// Main rendering, requesting 50% scaling
let xrViewport = xrWebGLLayer.getViewport(xrView, 0.5);
gl.viewport(xrViewport.x, xrViewport.y, xrViewport.width, xrViewport.height);
// ... draw main content

// Auxiliary rendering, e.g. an added effect, unaware of viewport scaling
let auxViewport = xrWebGLLayer.getViewport(xrView);
gl.viewport(auxViewport.x, auxViewport.y, auxViewport.width, auxViewport.height);
// ... draw aux content
```

In this case, the UA would return the same viewport for both calls. That would be the half-sized viewport if the UA supports viewport scaling, the full-sized viewport if it doesn't support scaling, or potentially even a different size such as 0.75 scale if the UA enforces a lower limit.

On a spec/implementation level, the list of viewports would be treated as modifiable at the start of each frame, with each view's viewport being decided and locked in for that frame on the first call to that view's getViewport. Additional calls to getViewport for the same view within the same frame return the same viewport, no matter whether the later calls use a different scale factor or don't supply a scale at all. This still leaves a few UA implementation choices:
- If the application doesn't call getViewport for any view, the UA should continue using the previous frame's viewport list.

The assumption here is that applications must do a gl.viewport call using the returned viewport. They'd get wrong results if they simply assume what the viewport size should be, but arguably that would already be wrong without viewport scaling: for example, an application must not assume that a stereo headset uses the left half of the framebuffer for the left eye.
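A toy model of the "first getViewport call locks in the viewport for the frame" semantics described above; FakeLayer is a mock for illustration, not the actual XRWebGLLayer interface:

```js
class FakeLayer {
  constructor(fullViewport) {
    this.full = fullViewport;
    this.locked = null;
  }
  onFrameStart() { this.locked = null; } // viewports become modifiable again
  getViewport(scale) {
    if (this.locked === null) {
      // First call this frame: decide and lock in the viewport.
      const s = scale === undefined ? 1.0 : scale;
      this.locked = {
        x: this.full.x,
        y: this.full.y,
        width: Math.round(this.full.width * s),
        height: Math.round(this.full.height * s),
      };
    }
    return this.locked; // later calls this frame ignore their scale argument
  }
}

const layer = new FakeLayer({ x: 0, y: 0, width: 1000, height: 800 });
const a = layer.getViewport(0.5); // first call locks in the 50% viewport
const b = layer.getViewport();    // same frame: returns the same viewport
console.log(a.width, b.width);    // 500 500
layer.onFrameStart();             // next frame
console.log(layer.getViewport().width); // 1000
```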
@thetuvix, does comment #1091 (comment) address your concerns? To move this forward, can we revisit this in one of the next meetings? We had a few API variants under discussion; I'd propose the following to make it more concrete:
I think this should be forwards and backwards compatible. If an application uses the existing single-argument getViewport call, it keeps getting full-sized viewports as before.

As far as the specification is concerned, I think the needed changes would be something like this (handwavingly):
The goal of this is that a viewport's scale is only modifiable once within a given animation frame, and is then locked in for the rest of that frame. Also, a view's viewport will only change as a result of calling getViewport.

I added the constraint that the scaled viewports must each be fully contained within the corresponding original full-sized viewport. This ensures that each view can always be resized individually even if the other views aren't being changed in this frame. Initially I thought it might be useful to let the UA change locations more freely, e.g. to keep two undersized eye views packed together in the top left corner of the framebuffer, but that would require moving other views to avoid overlap even if the application didn't call getViewport for them. UAs could still get contiguous viewports in some cases if desired, e.g. by arranging left/right eye views symmetrically around the middle of the framebuffer.

/agenda to discuss this proposal and potential alternatives
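Sketching the containment constraint with invented helper names: shrinking each view in place, keeping its original origin, trivially keeps every scaled viewport inside its own full-sized viewport, so no other view ever needs to move.

```js
function scaleViewport(full, scale) {
  // Keep the original origin and shrink in place, so the result is always
  // inside the full-sized viewport regardless of what other views do.
  return {
    x: full.x,
    y: full.y,
    width: Math.floor(full.width * scale),
    height: Math.floor(full.height * scale),
  };
}

function contains(outer, inner) {
  return inner.x >= outer.x && inner.y >= outer.y &&
         inner.x + inner.width <= outer.x + outer.width &&
         inner.y + inner.height <= outer.y + outer.height;
}

// Side-by-side eye views in a 1920x1080 framebuffer
const leftEye = { x: 0, y: 0, width: 960, height: 1080 };
const rightEye = { x: 960, y: 0, width: 960, height: 1080 };
console.log(contains(leftEye, scaleViewport(leftEye, 0.6)));   // true
console.log(contains(rightEye, scaleViewport(rightEye, 0.6))); // true
```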
Apologies for the delay! Yes, my primary concern here was too much ambiguity across UAs when two components call getViewport for the same view with different scale values. I was skeptical in my comment when imagining a "last scale wins" approach, because a later component could cause spooky action at a distance on the viewport expected by the rendering already underway in an earlier component. However, "first scale wins" solves that nicely: once any app code observes a viewport, it remains valid for the rest of that frame.

One more detailed spec question for us to answer is what precisely not specifying a scale means. I could see arguments for any of these; we should be specific, though:
Also pinging @rcabanier to reason about how the layers module can support viewport scaling.
@thetuvix wrote:
I think an undefined scale needs to be treated as 1.0 if the goal is to make this an opt-in mechanism.

We could consider using an opt-in mechanism similar to the secondary-views feature, or potentially allow the UA to automatically scale viewports if the application has opted in.
Notes from today's call:

@thetuvix: Instead of adding an extra argument to getViewport, the scale request could be a separate requestViewportScale method on the view, with getViewport itself unchanged. I think this sounds reasonable, and it seems cleaner than the overloaded method. It should be compatible with the proposed spec changes and semantics.

We need to clarify the specific meaning of the scale factor: is it an edge or area multiplier? Consensus was that the scale factor should multiply the width and height individually, consistent with the existing framebufferScaleFactor, but this needs to be clarified in the specification. Related to this, the specification should clarify that if the UA implements viewport scaling, it should consistently interpret the requested scale in this way to avoid inconsistencies. The UA would be free to apply constraints such as a minimum scale, or apply rounding or modifications such as aligning to a preferred pixel grid, but the overall result should be close to what the application requested.

@Manishearth: a downside is that applications don't know the size of the viewport before having to request and lock in a scale factor. I think that isn't a major issue; the intent is for viewport scaling to be dynamic, so applications can modify the scale for future frames.

@toji: if the application knows ahead of time that it wants fewer pixels to render, it should use framebufferScaleFactor instead.

Next steps: I'll work on a PR.
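A sketch of how the method-based design could look from the app's side (requestViewportScale on the view, getViewport unchanged); FakeView and FakeLayer are mocks for illustration, and the 1000-pixel base size is invented:

```js
class FakeView {
  constructor() {
    this.recommendedViewportScale = 0.8; // UA-provided hint
    this.pendingScale = 1.0;
  }
  requestViewportScale(scale) {
    // A null/undefined argument leaves the current request unchanged,
    // so apps can pass the recommendation through unconditionally.
    if (scale !== null && scale !== undefined) this.pendingScale = scale;
  }
}

class FakeLayer {
  getViewport(view) {
    // The UA applies the pending scale when the viewport is queried.
    const size = Math.round(1000 * view.pendingScale);
    return { x: 0, y: 0, width: size, height: size };
  }
}

const view = new FakeView();
view.requestViewportScale(view.recommendedViewportScale); // opt in to dynamic scaling
const vp = new FakeLayer().getViewport(view);
console.log(vp.width, vp.height); // 800 800
```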
/agenda PR #1132 is now merged; adding to the agenda for visibility and in case anyone has additional feedback.
Closing since I think we don't have any open discussion points; thank you to everyone for their feedback. The API is currently being prototyped in Chrome Canary behind the "WebXR Incubations" flag for Android's GVR and ARCore devices; please see https://crbug.com/1133381 in case you want to follow that.
Initial versions of the WebXR spec included a requestViewportScaling API that allowed applications to use a subset of the overall framebuffer for rendering.
Issue #617 had requested deferring it to simplify the initial WebXR API, but it sounded as if people were generally not opposed to the API as such and would be open to bringing it back at a later time.
In the meantime, we've gotten feedback that performance scaling can be tricky, especially for smartphone AR applications, where render costs can rise dramatically when people move close to complex AR objects. For example, one application already uses aggressive autoscaling in non-XR canvas mode, and its developers would be very interested in having similar functionality available in WebXR AR mode.
Can we revisit this to see if it would make sense to reintroduce this API?
For reference, the removal was in #631 .