Hm, now that I think of it, this approach is fundamentally unsuited to an EGLStream-based platform, where we can only provide an mg::Buffer abstraction at a performance cost.
To provide an mg::Buffer from a client we need to read out of the EGLStream with a gl_texture_consumer sink; to forward that on to the host server we'd need to blit it to the egl_surface_producer of a separate stream.
I'm not really sure where I'm going with this. Maybe that we shouldn't try to design for it until the necessary EGL extensions are implemented in a driver, presumably nvidia's?
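For concreteness, the extra hop described above would look roughly like the following. This is only a sketch against the EGL_KHR_stream extension family (EGL_KHR_stream_consumer_gltexture and EGL_KHR_stream_producer_eglsurface); the display, config, context, and the two streams are assumed to exist, and the surface dimensions are placeholders — it's meant to show where the unavoidable GPU copy sits, not to be working Mir code:

```c
/* Sketch: latch a client frame from one EGLStream and re-inject it
 * into a second stream feeding the host server. Assumes `display`,
 * `config`, a current GLES context, and both streams already exist. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

void forward_client_frame(EGLDisplay display, EGLConfig config,
                          EGLStreamKHR client_stream, /* client is producer */
                          EGLStreamKHR host_stream)   /* host server consumes */
{
    /* The stream entry points are extensions; fetch them at runtime. */
    PFNEGLSTREAMCONSUMERGLTEXTUREEXTERNALKHRPROC pConsumerGLTexture =
        (PFNEGLSTREAMCONSUMERGLTEXTUREEXTERNALKHRPROC)
            eglGetProcAddress("eglStreamConsumerGLTextureExternalKHR");
    PFNEGLSTREAMCONSUMERACQUIREKHRPROC pAcquire =
        (PFNEGLSTREAMCONSUMERACQUIREKHRPROC)
            eglGetProcAddress("eglStreamConsumerAcquireKHR");
    PFNEGLSTREAMCONSUMERRELEASEKHRPROC pRelease =
        (PFNEGLSTREAMCONSUMERRELEASEKHRPROC)
            eglGetProcAddress("eglStreamConsumerReleaseKHR");
    PFNEGLCREATESTREAMPRODUCERSURFACEKHRPROC pCreateProducerSurface =
        (PFNEGLCREATESTREAMPRODUCERSURFACEKHRPROC)
            eglGetProcAddress("eglCreateStreamProducerSurfaceKHR");

    /* 1. gl_texture_consumer side: bind an external texture as the
     *    consumer of the client's stream and acquire its latest frame. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    pConsumerGLTexture(display, client_stream);
    pAcquire(display, client_stream);

    /* 2. egl_surface_producer side: a producer surface on the *second*
     *    stream, the one the host server is consuming. */
    EGLint const surf_attribs[] =
        { EGL_WIDTH, 1280, EGL_HEIGHT, 720, EGL_NONE }; /* placeholder size */
    EGLSurface out =
        pCreateProducerSurface(display, config, host_stream, surf_attribs);

    /* 3. The "blit": GL_TEXTURE_EXTERNAL_OES can't be attached to an
     *    FBO, so this has to be a full-screen draw sampling `tex` via
     *    samplerExternalOES into `out` — a real GPU copy per frame,
     *    which is exactly the performance cost at issue. */
    eglMakeCurrent(display, out, out, eglGetCurrentContext());
    /* ... draw textured quad here ... */
    eglSwapBuffers(display, out); /* hands the copy to the host stream */

    pRelease(display, client_stream);
    glDeleteTextures(1, &tex);
}
```

Note the copy in step 3 is structural, not incidental: a frame can only leave the client's stream as an external texture and can only enter the host's stream through a producer surface, so there is no zero-copy path between the two.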