For anyone wondering "Why JavaScript? Why React?", let me remind you: the goal of ReactVR is not to create the most performant game, but to allow the people who are creating websites right now to create VR websites in the future.
I would add that there is absolutely no technical reason why an application expressed in one language can't be as fast as an application expressed in another. First, compilers keep improving at static analysis, and second, JIT analysis will also improve. For this sort of thing, running on the GPU matters a lot more than source language, and WebGL has you covered there.
This is wrong. There absolutely are technical reasons why implementations of different languages perform differently; no amount of static or runtime analysis can entirely save you from performance-hostile semantics baked into the language spec.
WebGL is a huge bonus, but WebAssembly will also be important for a lot of web-based VR.
> no amount of static or runtime analysis can entirely save you from performance-hostile semantics baked into the language spec
You can in almost all cases check for these semantics statically, and if not found, run a super fast path. This is the whole premise behind asm.js.
Heck, even C++ has plenty of features that have performance-hostile semantics. But the compiler checks for these and you don't pay the penalty if you don't use them.
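To make the "check statically, then run a fast path" idea concrete, here's a toy sketch of the asm.js-style type coercions (a real asm.js module has a stricter structure, with a `"use asm"` pragma and a typed-array heap; this just illustrates the annotations):

```javascript
// Illustrative sketch, not a real asm.js module: the `| 0` coercions promise
// the engine a value is always an int32, so it can compile a specialized
// fast path with no boxing, no type checks, and no bailouts.
function asmStyleSum(arr, n) {
  n = n | 0;                                // parameter is always int32
  let sum = 0;
  for (let i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
    sum = (sum + (arr[i] | 0)) | 0;         // every intermediate stays int32
  }
  return sum | 0;                           // return value is int32 too
}
```

Because every value is provably an integer, the engine never has to check for (or pay for) the performance-hostile cases, which is exactly the "don't use it, don't pay for it" point above.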
Wasm will help, but I disagree that anything in the JS spec makes the _language itself_ inherently slower than any other language, if we disregard textual compilation overhead.
You're right, which is why one of the fundamental optimizations in a modern Javascript engine is inferring static memory layouts for your objects. Another is getting rid of unnecessary allocations to reduce the need for GC.
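A common hand-written version of the "get rid of unnecessary allocations" optimization is object pooling. A minimal, hypothetical sketch:

```javascript
// Hypothetical sketch: a tiny vector pool so a hot loop reuses objects
// instead of allocating a fresh one per frame -- less garbage, fewer GC pauses.
class Vec3Pool {
  constructor(size) {
    this.items = Array.from({ length: size }, () => ({ x: 0, y: 0, z: 0 }));
    this.top = 0;
  }
  acquire() { return this.items[this.top++]; } // assumes size is sufficient
  reset() { this.top = 0; }                    // recycle everything each frame
}
```

Call `reset()` at the top of each frame and the per-frame allocation rate drops to zero, which is the same effect the engine's escape analysis is trying to achieve automatically.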
> inferring static memory layouts for your objects
I'm deeply suspicious that you'll do better than something purpose-built, like arena allocators for levels. Things like scene-graph and animation traversal are really hard to get right, and without direct control over SoA vs AoS and the like I don't see how you get within a factor of 10 of something written in native code.
asm.js and WebAssembly give you the same control over memory layout as C (in fact the memory layout is absolutely identical to a little-endian 32-bit host), and unless you need to call out into HTML5 APIs a lot, the garbage collector will be idle. These two things together are basically the magic sauce why asm.js/wasm are faster than manually written JS and faster than most 'managed languages', where most objects are created on the heap and passed around by reference.
Totally, since the memory model is identical. However then you're not really talking about something that looks like JS (or a managed language) at all.
I'm actually a huge fan of WASM, I think it gives you a great escape hatch to break out of these problems (much like JNI etc. in other managed languages).
Not at all, both asm.js and wasm give you a flat memory region (in asm.js it's a single, big JS typed array), the stack and all allocations live in this 'heap', managed by emscripten's malloc wrapper (which is jemalloc I think), the resulting memory layout is the same as on a native 32-bit platform, with the same alignment rules. There are no 'managed' Javascript objects in asm.js or wasm (unless you need to call out into web APIs).
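A minimal sketch of that flat-memory model in plain JS (hypothetical helper names; emscripten's real `HEAP*` views work the same way):

```javascript
// One flat ArrayBuffer "heap" with overlapping typed-array views over the
// same bytes -- exactly like a little-endian 32-bit address space.
const buffer = new ArrayBuffer(64 * 1024);
const HEAP8 = new Uint8Array(buffer);     // byte view
const HEAP32 = new Int32Array(buffer);    // int32 view
const HEAPF32 = new Float32Array(buffer); // float view

// A C struct { float x, y, z; int flags; } laid out at byte offset `ptr`:
function writeParticle(ptr, x, y, z, flags) {
  HEAPF32[(ptr >> 2) + 0] = x;
  HEAPF32[(ptr >> 2) + 1] = y;
  HEAPF32[(ptr >> 2) + 2] = z;
  HEAP32[(ptr >> 2) + 3] = flags;
}
```

No JS objects are involved: "pointers" are just integer offsets into the buffer, with C's alignment rules, which is why the GC has nothing to do.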
So does every other language; where it matters is with objects. Until I can mark a whole object as a value type and fix it in memory relative to other objects, you'll pay a 10-50x slowdown for cache misses and the like.
Typed arrays are close, but still not good enough.
On the Java side I've used ProtoBuf for a similar approach (using byte[], which does have memory-placement semantics). While it helps, the bounds checking and accessors that you need still have a non-trivial cost, both in branching and in cache thrashing (len is usually stored for an array at a different location or at the head, so data items near the mid/tail will still take a hit).
Realistically you can get within about 1/5th the speed of native using cache aware data structures and techniques. As always it matters if your use case needs that speed but choosing a JS/managed based tech-stack will always limit you from getting that last 5x perf improvement.
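For readers who haven't hit this before, the SoA-vs-AoS distinction above looks like this in plain JS (a minimal sketch):

```javascript
// AoS (array of structures): each element is a separate heap-allocated JS
// object, scattered in memory and reached through pointers -- cache-hostile.
const aos = [{ x: 1, y: 2 }, { x: 3, y: 4 }];

// SoA (structure of arrays): one contiguous typed array per field, so a loop
// over one field walks linearly through memory -- what caches/prefetchers like.
const xs = new Float32Array([1, 3]);
const ys = new Float32Array([2, 4]);

function sumXsSoA(xs) {
  let s = 0;
  for (let i = 0; i < xs.length; i++) s += xs[i]; // pure sequential access
  return s;
}
```

The catch the parent is pointing at: typed arrays only hold scalars, so you can't express "an object pinned at a fixed offset relative to its neighbors" without giving up objects entirely.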
But then you're no longer using the language. The performance-hostile semantics are still there, you're just not using that part of the language, and thus losing out on all their benefits.
The ReactVR framework can use asm.js under the hood for acceleration, while you as a front-end developer stick to good JS VR practices so you don't get in the way of whatever performance asm.js brings (whether ReactVR uses it now or in the future).
How so? LLVM is used to implement language features, performance-hostile ones included. asm.js is a codified way to avoid language features.
You may be confusing layers here. asm.js as a compilation target is used the same way as LLVM- it is used to implement the source language's features, while simultaneously avoiding some of JS's.
I'm sorry, I didn't think I needed to spell it out: LLVM front-end compiles a high-level language into an intermediate representation ("IR") which is then transformed by the backend into machine code. A compiler targeting wasm occupies the same space as the LLVM front-end. Now, in a browser there is not as obvious a back-end, but it's there: it's the JSVM JIT code.
> I would add that there is absolutely no technical reason why an application expressed in one language need not be as fast as an application expressed in another.
There absolutely is.
If I write a program in a language that's fuzzy enough that a machine (which follows the specifications of the language) cannot definitively tell what I was asking it to do at compile time, there are compile-time optimizations it can never apply.
You make a good chunk of that back with JIT / predictive runtime compiling, but it's never going to be the same. At minimum, you're burning CPU and cache on your miss rate. To say nothing of the additional overhead of running JIT while executing in the first place.
As I understand it, this is the entire purpose of things like Vulkan / Webasm. (I realize they're primitive targets vs high level language, but same principle applies at any level of the stack)
Happy to have someone tell me I'm wrong, but the original quote flies in the face of everything I learned in language / compiler design.
This is a good comment. Ideally, JIT would reach some asymptotic improvement and then stop running. It would only need to start running again if the code changes.
I think however there is a fundamental communication issue in talking about "compilers vs language". Optimizing compilers, by definition, rewrite your code into a "better" form. Even if you are writing C/C++ you probably don't really know what's happening at the register level. Heck, with today's CPUs I wonder if even assembly people know what's actually happening in the registers!
And in x86 at least, even the assembly gets sliced and diced behind the scenes.
There have been comments to the same effect on HN before, but I look at language design as an API between humans and computers. Computers need as strict rules on how to execute a thing as possible: humans need something comprehensible. The ideal language finds ways you can increase the former without decreasing the latter (in ways they notice).
Theoretically that may be true, but in reality there are a lot of little detail problems which cannot simply be hand-waved away. For instance, WebGL behaves very differently compared to a native 3D API, mainly because there's a much longer pipeline for each call from JS to the driver. Manually written JS has garbage collection spikes, input and audio have much higher latency. I think WebVR provides shortcuts for some of those problems, and mobile/web VR content needs to be designed with those limitations in mind, I just want to point out that the browser platform is full of technical limitations and compromises compared to 'bare metal' platforms which at most can be softened a bit, but will never completely disappear, no matter how much code is thrown at the problems.
In the real world case though there are lots of extant technical reasons why this won't be as performant. As the parent says though it's okay for its usecase.
Meanwhile, many people like me who love TS/Flow wait for the day when their annotations can be used to optimize their apps. If types don't come to JS, I'm sure someone will compile TS to WebAssembly or something.
I'd be in even just for the possibility for reflection/dependency injection.
Boost has existed since 1999-2000, which aligns with ES3.
As for solid and well-understood... Douglas Crockford wrote a book on how "solid" Javascript is, called "Javascript: The Good Parts". One of his most common refrains about JS was "Javascript is The World's Most Misunderstood Programming Language". [0][1]
ES5 came out in 2009. ES6 was standardized after React came out.
Jeeeeziz. HTML/JS were designed to display interlinked text documents. Now you can use them for VR. It's like building a spaceship out of bicycle parts. That's crazy. Crazy cool maybe, but definitely crazy.
Love React by the way, will definitely check this out.
I think Scrap Mechanic[1] may basically be this. I haven't played it, but my son had me buy it for him the other day. I'll be trying it out with him in a day or two.
Technically _all_ of our software comes from interlinked text documents. The magic is in the software that eats the interlinked text and makes it real. HTML/JS are no different in this regard, VR or not.
I'm constantly amazed and thankful to be living in a world of spaceships built out of bicycle parts.
I like this sentiment a lot, as it applies to most computer uses. A couple standout exceptions however are images, videos, 3d models and animations. It's certainly possible to represent these in text, but most uses and tooling respect their visual/spatial content. This seems a major criticism of the 'just use existing web tech to make vr'. The former really was about digesting and emitting text documents, but vr is more about models/layout/animation - is text really the right foundation for the job?
Sort of, binary formats are mostly optimisation for scene data specifically and working with human readable formats is quite normal for content creation. If you can put up with the time taken to parse then it's workable.
If you think of declarative vr in those terms it's a lot like a web page where a portion arrives as text that describes the layout of the bulk of binary data for rich applications.
Where it gets tricky is for large scenes and/or streaming scene data where generally you need better performance. But this sort of thing could still be pre-baked and live within a declarative wrapper.
My point isn't so much text vs binary. It's text==1D vs much of VR content which is multi-dimensional. And the more multi-dimensional, the more the most human readable format becomes the non-text version (text -> trees -> images -> models/videos/animations).
Honestly even html suffers from the translation between linear-text and the tree-like DOM. Many of the pain points of webdev are dealing with this boundary. Go even farther to scenarios where you'd want to interact with the 2D rep of a webpage and you're completely off the reservation. Imagine using javascript/dom to ask the question "what letter is below the 17th character of this paragraph" — the question almost comes across as absurd! "What about different fonts? What about phones? What if there's an image below, is that 'null' or should I skip to the text?" The question is meaningless! Then again, text selection is exactly this question... On the web we've found some very reasonable boundaries between webpage responsibilities, browser responsibilities, and accepted a large swath of functionality as impossible.
In VR those boundaries are still in flux. VR scenes are a combination of all sorts of content, of all sorts of dimensionality, often including temporal components as well. Certainly for storage and transmission 'linear' is the only option, but what is the best working representation? Is it a tree with rich nodes? Well maybe, but that's going to make it that much harder for those nodes to interact. On the other extreme is it a 4d array of temporal-voxels? Probably not until we have petabyte DSL. Imo this is why this is a hard problem, why VRML has never really caught on, why after 25 years of making 3d games/movies there's still no standard scene description, why Facebook is taking a stab at this with this project, and why VR is a pretty exciting place to work :)
Scene representations in game engines tend to follow the use case so there tend to be several generated from the same scene description. Generally because one representation that is useful for organising a scene isn't great for spatial queries. Likewise we need a representation that can be fed efficiently to graphics hardware. Or the use of voxels for simulation. My rambling description aside my point is that there isn't likely to be one working representation.
For WebVR hopefully we can steer a good path between that which should be represented in an HTML/DOM context and that which should be represented elsewhere. For example glTF looks quite promising both as a runtime format for models and for scenes.
That is another fascinating point, that all of our software comes from interlinked text documents. The reactions to React VR are really leading to some great quotes ;-)
Natural indeed; VR games made with Unity are also defined as declarative trees of objects in a similar fashion. In fact, if you added a "behaviors" prop to all of the React components in this system you would essentially have a basis for porting Unity-like scenes pretty directly (maybe even automatically).
I tried out ReactVR for a prototype and I liked that I was able to use my familiarity with React to set up a simple VR project in a few hours with no prior VR experience and without having to learn any new syntax, just a few new classes.
I tried out A-Frame for a prototype and I liked that I was able to set up a simple VR project in a few minutes with no prior VR experience and without having to learn React. I'm sticking to A-Frame
Why?/Commercial: Facebook is putting a lot of effort into VR. It owns Oculus. It uses React. And so it needs some React + VR. Why not aframe-react or react-three or react-three-renderer? Control of your own core tech can be important.
Carmel might be another example of this. Chromium/Firefox WebVR support on Linux and Mac has been an unstaffed joke. Even on Windows, it's been moving very slowly. As Facebook pushes to be a dominant player in social vr, it needs platforms it can rapidly advance.
Why?/Technical: Why ReactVR rather than aframe-react? A-Frame makes several architecture bets in a new and unfamiliar domain. It reminds me of early Angular. Angular devs said "revolutionary! it's the future! google!". And what I heard was "This is a new thing, so of course we're making bad architectural choices! This generation is an exploratory throw-away! You should skip it! And we're so inexperienced we don't even realize that's what we're saying!" :) Maybe the bets will all win. Maybe. But ReactVR is much more incremental progress. And if both become popular, and if there's need for A-Frame components in ReactVR, I expect someone will write glue. Or even sooner.
And since it sometimes comes up, why is the comparison ReactVR vs aframe-react (or something like it), and not ReactVR vs A-Frame? While React began life as a dinky virtual-dom library with delusions of grandeur, it has become an ecosystem for managing complex applications. So when A-Frame says "we're HTML and Components, for VR!", it's also saying "we're no more sufficient for creating complex apps than html is!" and "we're yet another attempt to create web components!" (a path littered with corpses, with the survivors tied to particular frameworks). So one could argue for angular-aframe, or ember-aframe, but complex A-Frame apps will need to be aframe plus something.
tl;dr: Perhaps control of development; and it's not clear A-Frame is the right thing.
Same reason Mozilla created A-Frame instead of contributing to my framework Primrose. No corporation is going to build a core project in someone else's backyard.
If we're counting experiments before WebVR was called WebVR, I was making stereo view apps in the browser on smartphones with a cardboard box as a viewer 6 years before it was called Google Cardboard.
Both A-Frame and VRML use an XML-based approach to describe the scene graph, but the similarities stop there. As someone mentioned, A-Frame is an entity-component system that makes extensibility very easy. A-Frame is also not a standard but a JS library, allowing us to iterate on the API much more quickly and based on real-world usage. With the lessons learned we might want to consolidate the API into a standard in the future. jQuery and its crystallization into the querySelector API is a precedent for this approach.
More than one is great—but ideally we'd have at least one which has reached a level of completeness, stability, desirability etc. to be a sort of standard people are happy with (again more than one such standard would be great). So if the tradeoff is completing a small amount or starting a larger amount, I'd rather have the small number of more complete options.
I guess if you're already happy with VRML, you're set though ;)
VRML was largely just 3D model format. WebVR is API-level access to VR hardware. WebVR frameworks are application frameworks that incorporate the ability to load many model formats, as well as provide different levels of configuration and convention.
Seriously, anyone remember VRML? It was so cool... it was technically junk and implemented horribly, but it was awesome still somehow. Like goat simulator.
I've used this for a couple of hobby projects and it's been fun. One thing I think it does need soon is a VR browser for the Oculus Rift and/or Vive HMDs as well. Currently the 'Carmel' VR browser is Gear VR only, and it would be nice to try some React VR on something with greater oomph.
During last year's absence of any WebVR or SteamVR support for linux and Vive, I cobbled together an alternate stack[0], using Valve's low-level OpenVR device driver, node.js electron as runtime and compositor, and a WebVR 1.0-ish api. I've used it with React and three.js.
No ReactVR. No A-Frame. No lens correction. Insecure. The device driver api has moved on, and current device firmware may or may not work. But fyi, fwiw.
Extending VR to the web and to other sets of developers is great!
However, I don't see any useful abstractions here besides a bunch of declarative boiler-plate.
Creating something that allows more people access to a creative medium requires real abstraction, not just wrapping a bunch of Three.js API's in react components.
Garbage collection and compilation operations can cause hard to predict, noticeable slowdowns for large real-time JS apps so I'm skeptical of the use for ReactVR right now. Perhaps I'm wrong though? Or maybe it will be useful for static, UI scenes?
The React team has been working on an improved rendering engine called Fiber. This is an oversimplification, but Fiber essentially breaks down per-frame rendering into several sub-rendering tasks, allowing for smarter multi-threading (in the case of web workers) and even better single-threaded behavior (the renderer now returns control of the main thread faster and more frequently, allowing more opportunities to do GC and other operations without as much of a negative effect).
What this amounts to for React developers is the introduction of explicit priority rendering, which can be very useful in VR environments. So for example, updates to your UI from external processes (such as an XHR) can be set to lower priority than updates to the hand model that's controlled by the controllers.
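This is not Fiber's actual API — just a sketch of the scheduling idea it enables, with hypothetical names:

```javascript
// Sketch only: queued updates carry a priority, and higher-priority work
// (e.g. the controller-tracked hand model) is flushed before lower-priority
// work (e.g. a UI update driven by an XHR response).
const HIGH = 0, LOW = 1;
const queue = [];

function scheduleUpdate(priority, fn) {
  queue.push({ priority, fn });
  queue.sort((a, b) => a.priority - b.priority); // stable sort in modern engines
}

function flush() {
  while (queue.length) queue.shift().fn(); // run in priority order
}
```

In VR the payoff is latency: pose-driven updates hit the next frame even when a pile of low-priority work is pending.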
I think maybe the basic idea is that by utilizing more cores and getting more done concurrently, they leave more idle time per frame for the GC to work. If that's the case, it doesn't necessarily solve the problem, but it may move the bar enough to be currently usable and useful.
I can't speak to ReactVR specifically, but if you've tried and profiled WebVR you realize that it (surprisingly) works in practice.
It's actually not hard to finish executing your Javascript in 11 milliseconds (90 FPS) if you avoid the well-known, profilable tripping hazards such as blocking on the DOM. 11ms is an eternity in CPU time if you're not running some big-N synchronous algorithm (which you should be doing in a worker), so most of this time is not even going to be spent in your Javascript: it's going to be in blocking GPU drawcalls and uploads, which can be executed in parallel with the garbage collector.
So although it intuitively _seems_ Javascript VR would be killed by garbage collection, you can actually spend a large proportion of your time in garbage collection for free.
And it works for much more than static scenes. You can basically do anything you can imagine. It just requires a bit more thought and learning than copying and pasting jQuery.
The reason I asked is because I've worked on several large commercial 3D WebGL games which ran into the issues I described. Yes you can get a complex game running at <=16ms per frame, but GC and compiler operations make it hard to achieve consistently smooth performance. This is even more noticeable in VR.
> You can basically do anything you can imagine. It just requires a bit more thought and learning than copying and pasting jQuery.
No. If you create large scale real-time JS apps, you will soon run into the issues I described.
Scale is a very wishy-washy term. You can build complex applications with JS, you just need to move the scale out of the main loop. Don't run business logic alongside your render, or else you will indeed run into these problems.
This advice isn't unique to Javascript and goes back to Carmack hacking on Doom, Wolfenstein 3D, and beyond. It's just that Javascript makes it easy to shoot yourself in the foot here, because this isn't a role that was forseen when JS was designed.
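A minimal sketch of that separation — the per-frame path only renders, and business logic is drained from a queue under a time budget so it can never blow the frame (names and budget are illustrative):

```javascript
// Business logic never runs unconditionally in the hot path; it's queued
// and drained only while the frame still has time left in its budget.
const deferred = [];
function defer(task) { deferred.push(task); }

// `render` and `now` are stand-ins for your real draw call and clock
// (e.g. performance.now in a browser).
function frame(render, now, budgetMs) {
  render();                                  // the only unconditional work
  const deadline = now() + budgetMs;
  while (deferred.length && now() < deadline) {
    deferred.shift()();                      // run queued logic if time remains
  }
}
```

Anything that can't fit in the leftover budget simply waits for a later frame instead of causing a hitch.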
Do you have any links to share? I'd love to learn more about these performance problems you describe.
Only on platforms that support it, and only if the GC stalls are small enough (extrapolation isn't magic) — and it's not really something you should be triggering regularly by design. Async reprojection was invented to mitigate the fact that a variable frame rate makes you sick in VR, so really it's evidence supporting the parent's case. If you're regularly dipping into async reprojection because of the GC, then you're trading visual accuracy/artifacts against the GC.
What device would you recommend for playing with VR? I've never tried a headset but 3D UI interfaces has been something I've day dreamed about for well over a decade now and it'd be fun to see it happen.
FWIW I'm on Linux & have an iPhone, if that matters.
I'd love to see something like what Expo is doing for regular React Native for VR. Even them doing it would be great, I assume that might complicate their branding, but still.
This is awesome, however I just spent a month making the exact same thing. Oh well, I might give this a go and see if this is more in line with what I want to do.
This is one project that has changed my mind to finally get going with facebook's React framework. I would love to see VR development more simplified and accessible to web developers.
You're more than welcome to also try http://aframe.io/
That's exactly our goal, onboarding Web Developers into the new medium. Let us know what you think.
There are tools for 'drawing pictures' already.
From Google's Tilt Brush, to Oculus' Medium and Quill.
And game engines start supporting scene editing directly in VR.
This is not about pictures. This is about interactive websites in VR.
I get it...javascript is the universal language. It's also universally hard to fully understand, hard to scale to large codebases, has no shared memory parallelism, has a GC-heavy runtime, and is extremely hard to optimize for performance (both by humans and by compiler optimizers).
I appreciate what React has done, it's pretty impressive. But I can't help but think we're getting into parody territory here.
I don't think the other subcomments actually address your concerns, so here's my take: Javascript is the language known by every web developer. Right there, they're opening up their developer audience to a much larger group of people than is possible with C++ or even C#.
"Hard to scale large codebases" isn't really an argument, large code bases are hard in every language, JS isn't really an exception here.
Low level stuff (GC, perf, memory) has been, and will continue to be a problem for Javascript. You can mitigate a lot of the problems with well designed code. React attempts to minimize a lot of those problems with solutions that work for most general use cases (and now with Fiber, we're getting even more granular control.)
Until React moves to Web Assembly you're always going to have overhead when using JS. But let me know when you've got a solution for VR in a web browser that's not JS and I'll give it a try.
All of your comments are fair, although I'd push back on that: some codebases are much harder to scale than others (x64 assembly most definitely begs to differ), and JavaScript is most definitely one of the worst of the general-purpose languages. But even then, as far as realistically cross-platform languages go, JavaScript is actually one of the best (wow).
I happen to think though that the web is an obvious use case for the React model, and phone UIs are only slightly less obvious low hanging fruit that has worked out really well.
The scalability of the model, however, depends on how much workload you can offload into the React system, and with VR, there is only so much you can do. Even if React is handling all of the rendering with extremely efficient native code, you still have to have your own full 3D timespace model with realtime response demands. Maybe it will work for extremely basic use cases, but at some point having a familiar language becomes a relatively tiny benefit when compared to an efficient runtime.
Perhaps because it's a really accessible platform and by baking in support now it will be mature when VR hardware is more generally accessible (2018-2019 for mobile). I'm from a game development background but accessible, interconnected, multi-user VR for all seems like it's something that will come from the web rather than bespoke attempts. Perhaps a better platform will come along with time.
Sounds to me like you saw some jquery spaghetti 10 years ago, threw up your hands, and said "Never Again!"
Seriously, it's a lot better now, and if there's a thing you don't like about it (like the lack of static typing, maybe?) there's something for you out there (TypeScript, ClojureScript, Elm, etc...)
I use React, and I use it with TypeScript. So far it is the only thing I've found to be remotely tolerable for the web. There are better languages out there (I'd love to use scalajs), but even the best of them have noticeable additional runtime overhead over the already ridiculous runtime overhead of javascript, and some of them go far beyond noticeable.
Maybe someday this will be better with WASM, but it's not there today. Javascript runtimes work fine (not perfect, but passable) for page-based and widget-based UIs. There's no way I would touch it for something as computation-intensive as real-time 3D modeling.
I'd imagine that this, like all the other 3D JS libraries, uses WebGL under the hood, which is quite fast, hardware accelerated, and just exposes a JS API. You wouldn't actually write your shaders in JS anyway (those are GLSL), and heavy CPU-side code can be asm.js, which allows the browser to get close to C-level performance.
Why does it matter in the least that it's JS? That's the beauty of React: it abstracts the UI from the renderer. It's the reason cool things like this can be built. I for one enjoy being able to build interfaces on a common platform for a multitude of application types.
Is this really that valuable? Java and C++ had both provided this to some extent before. Rather, I think the majority of the power in Javascript tools comes from distribution. The distribution to both potential users and potential developers is unmatched, and those trump any technical deficiency in the language.
I think if programming language designers should have learned anything in the past 50 years, it should be, "It's the distribution!"
Using the same technologies is natural.