
Not that hard to do if you have actual depth-sensing cameras. Even without those, something like the Oculus Quest 2 does that exact task (generating a rough 3D volume from several 2D video feeds). You can see a neat example when you draw your Guardian space and move objects around (and notice how it updates the 3D volume representation).


The difficulty completely depends on the level of quality you're after. They're certainly working on cutting-edge-level quality, so it is likely no easy task. Someone else pointed out that they released a paper on related tech last year:

https://augmentedperception.github.io/deepviewvideo/


That's not true -- even with depth-sensing cameras, the result will still be full of artifacts, and things like curly hair or fine strands of hair are disastrous because they aren't easily modeled geometrically.

The Oculus Quest 2 doesn't do anything like what you're describing -- it essentially just pipes in stereoscopic video from its stereo cameras and stitches them together in a trivial way. It doesn't attempt to build geometric representations of objects in your environment at all.

(For Guardian functionality it does very simple things, like using the depth cloud to figure out the height of the floor and to check whether there are points inside the Guardian that shouldn't be there, but that doesn't involve inferring object geometries.)
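The two simple checks described above can be sketched in a few lines. This is a hypothetical illustration, not Oculus's actual implementation: `guardian_check`, the percentile-based floor estimate, and the polygon test are all my own assumptions about how such a check might work.

```python
import numpy as np

def guardian_check(points, boundary_polygon, floor_margin=0.05):
    """Hypothetical sketch: estimate the floor height from a depth
    point cloud, then flag points that sit inside the guardian
    boundary above the floor. `points` is an (N, 3) array of
    (x, y, z) positions with y pointing up; `boundary_polygon` is
    a list of (x, z) vertices traced on the floor."""
    # Floor height: a low percentile of the vertical coordinates is
    # robust to a few noisy depth samples below the true floor.
    floor_y = np.percentile(points[:, 1], 2)

    # Keep only points meaningfully above the floor plane.
    above_floor = points[points[:, 1] > floor_y + floor_margin]

    # Ray-casting point-in-polygon test on the horizontal (x, z) plane.
    def inside(px, pz):
        n = len(boundary_polygon)
        hit = False
        for i in range(n):
            x1, z1 = boundary_polygon[i]
            x2, z2 = boundary_polygon[(i + 1) % n]
            if (z1 > pz) != (z2 > pz):
                if px < (x2 - x1) * (pz - z1) / (z2 - z1) + x1:
                    hit = not hit
        return hit

    # Points above the floor and inside the boundary are potential
    # obstacles that "shouldn't be there".
    intruders = [p for p in above_floor if inside(p[0], p[2])]
    return floor_y, intruders
```

Note that nothing here builds a mesh or infers object shape -- it just classifies raw depth points, which is the distinction the parent comment is drawing.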


The Oculus Quest 2 (and the Quest 1) infers the geometry of your environment in the same way a Magic Leap does. The Quest uses the mesh to show perspective-correct stereoscopic pass-through views. https://www.youtube.com/watch?v=3V__SEPobM4


If you look at the video, you can see there are artifacts around the hair. It is likely applying some AI-based matting to make them less obvious, but they are still there.


Good point, I haven't followed the latest VR advancements; that does sound neat. Still, Starline's approach is surely much more sophisticated (the hardware obviously has a lot to do with that -- these are prototypes of a desk-sized machine versus a headset). The 3D model looks reasonably detailed, and the final render has very few artifacts. Making it all work over a WAN link with the low latencies critical for teleconferencing is also impressive.


> not that hard to do

Then it would be done already.


Yep, it is done already. https://kinectron.github.io/#/


Using the Oculus Quest 2 had me walking around my room for a bit in wonder at being able to see straight through the headset.



