> "it can be compiled to WebAssembly in order to run Python in the browser."
I have seen this approach with CPython and NodeJS already and I think it's simply not viable. What they are suggesting is compiling the runtime (the same one you use in non-wasm projects) to wasm and then running your python code on top of it.
This is a double whammy of performance degradation: you basically have two JIT-compilation steps happening (once by the wasm runtime to compile the rust-python wasm, and again by the rust-python code compiling your python code). And this is on top of the normal performance degradation from using a dynamically typed language compared to a statically typed one.
To make dynamic languages (even JS) viable to run in a wasm runtime, the language must be compiled directly to wasm.
Project still looks pretty cool and useful though; there is plenty of python code that could be useful to run in the browser no matter how badly it runs. Just don't try to build a web framework on top of this kind of approach.
Edit: Let me reframe this a bit. This is just what I think; I haven't really benchmarked anything, and rust-python might be doing some tricks I am not aware of.
The reality is that the "dark" majority of preexisting code has essentially no performance requirements/concerns; they're business scripts that could literally run on a toaster with no problem if you could get the code onto it.
So really most business logic can easily be satisfied by "compile the interpreter to wasm and then run the dynamic language on that", and doing it this way can move existing "learned the hard way special cases" byzantine business code to something that can run on a web server and be accessed by the company's employees rather than passing around scripts for them to run, with a lot of benefits, including instant upgrades for everyone when bugs are fixed.
That said, this specific impl claims to only support half of the standard library, so I kinda doubt it's ready for any 'serious' business use cases yet anyway.
I don't think you quite grasp the implications of what I was saying: this kind of approach could take _seconds_ to even start running your python application. Large python codebases could take like a minute to start if loaded that way.
Once it does start then your arguments can make sense, but even so it would still make it impractical for most things.
Trust me, when the Javascript dev tells you something will be slow, it WILL be very slow
Pyodide (standard cPython in WebAssembly) loads surprisingly quickly.
My https://lite.datasette.io application usually starts up in less than 10s - most of that is downloading about 10MB of WASM blobs, and 10MB isn't actually that big these days (many sites serve more than that in header images).
When I built Datasette Lite I did it as a research project, assuming it would be far too slow loading to be useful. I've since changed my mind on that.
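For anyone who hasn't tried it, the browser side of Pyodide really is just a couple of calls. A minimal sketch (the CDN version number is illustrative, not a recommendation):

```javascript
// In an HTML page you would first pull in the Pyodide loader, e.g.:
//   <script src="https://cdn.jsdelivr.net/pyodide/v0.25.0/full/pyodide.js"></script>
// which defines a global loadPyodide() function.
async function runPythonInBrowser() {
  // This await is where the multi-megabyte WASM download/instantiation happens.
  const pyodide = await loadPyodide();
  // After that, running Python and getting the result back into JS is cheap.
  return pyodide.runPython("sum(range(10))");
}
```

Once the first `loadPyodide()` resolves, subsequent `runPython` calls are fast; the startup cost is almost entirely the download plus instantiation.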
I hate to say this, but have you used any $ModernWebApp with $HotJSFramework recently? I thank the gods when those pages load without 5-10 seconds of fancy spinning animation. Really thought we would be in a better place by 2024, but nope.
Don't work for a company where you need to turn in receipts or fill out info in crappy HR software? I mean, good for you and knowing the command line, but doing the less-fun, less-specialized parts of jobs usually involves regressing to the mean of what interfaces people know how to use.
I started a job in an unusual way (first week, I deployed to the Greenland ice sheet). Workday wouldn't work over the high-latency connection so I couldn't fill out my HR paperwork without VNCing into a computer in CONUS...
> I don't think you quite grasp the implications of what I was saying: this kind of approach could take _seconds_ to even start running your python application.
Indeed, the web demo takes about 5 seconds to cold-start on my beefy PC, between downloading the 22MB WASM blob and compiling it. It also grows the WASM heap to 160MB after running the simple Fibonacci example, and WASM heaps can't (yet) be shrunk, so the only way to reclaim any of that memory is to discard the whole instance and start over.
Depends a little on whether you're going to a website to use an app, or running something on an always-on PC on, say, a production floor where the app never gets exited, I'd think.
As far as I'm aware, even discarding the instance isn't good enough, since v8 doesn't seem to reclaim the Wasm Linear Memory ever. I think the only thing you can do is start it in a worker and then terminate the entire worker.
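The grow-only behavior is baked into the JS API itself, separate from any engine-specific reclamation quirks. A quick sketch you can paste into Node or a browser console:

```javascript
// WebAssembly linear memory is allocated in 64 KiB pages.
const memory = new WebAssembly.Memory({ initial: 1 });
console.log(memory.buffer.byteLength); // 65536 (1 page)

// grow() adds pages (and detaches the previous ArrayBuffer).
memory.grow(9);
console.log(memory.buffer.byteLength); // 655360 (10 pages)

// There is no shrink() counterpart: to release the pages you have to drop
// every reference to the memory/instance (or kill the worker that owns it)
// and hope the engine's GC actually returns them to the OS.
```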
What do you envision these scripts to be doing that it would matter if it takes minutes, hours even, to start? Fire and forget.
Granted, the environmental cost of all that extra energy consumption may not be palatable. Then again, you're not exactly choosing Python in the first place if you care about the environment.
I just opened Facebook.com and it took more than 10 seconds to first contentful paint, and it isn't because of slow internet. I've worked on more than one FAANG external user-facing site fixing stuff to get median initial page latency below 5 seconds, so I think your "this will take _seconds_" emphasis is kind of funny.
Internal business CRUD apps often take multiple seconds on every click. Even a minute for an employee to start an app that is then snappy would be acceptable for practically any internal business use case if they can just leave the tab open, and there's no technical reason why python wasm interpreters would need to be that slow.
>> The reality is that the "dark" majority of preexisting code has essentially no performance requirements/concerns; they're business scripts that could literally run on a toaster with no problem if you could get the code onto it.
Which means this whole thing is pointless from an end user point of view. The technology stack is getting very deep - Python, Rust interpreter, WASM, in a browser. I'd love to get back to running things on a toaster with no dependencies.
Technically this approach puts the burden on the build toolchain; the toaster only needs a WASM runtime, which is actually not that big of an ask (it is far easier to put a standalone WASM runtime in a toaster than a full browser).
> can move existing "learned the hard way special cases" byzantine business code to something that can run on a web server and be accessed by the company's employees rather than passing around scripts for them to run
Or… you could just use Django. The framework built for running python on a web server.
That's totally different, at least out of the box. The use case seems to be running user-generated scripts that aren't known in advance and can be added/edited/run in a self-service manner.
The usual way to do this is to get a Python interpreter, sandbox the hell out of it on your server, and then run the untrusted code. But this approach obviates the need for security paranoia quite a bit, since the code is running in the user's own environment.
Sounds like a use case for simple Python rather than RustPython?
If you go full RustPython, surely looking to eke out performance must be a major reason to even go there, rather than just scripting with CPython like it was 1999?
I can't speak for rustpython, but you can partially evaluate dynamic languages in wasm with something like wizer https://github.com/bytecodealliance/wizer. So you can let the runtime do all the parsing and compiling ahead of time. We do this with javascript (quickjs). It does have a few downsides regarding memory size but it is pretty fast.
Ok wow, wizer is really really cool. I was thinking a few months back it would be great if you could dump out the jitted version of a wasm program so you could load it faster and... here it is, someone has built it.
> once by the wasm runtime to compile the rust-python wasm
I'm not sure what you mean by that. The runtime doesn't compile WASM, it simply executes it.
There are tools for dealing with interpreter runtime overhead by pre-initializing the environment, like Wizer[0]. ComponentizeJS[1] uses it to pre-initialize the SpiderMonkey engine it packages to get fast startup times (and you can then prune the initialization-only code with wasm-opt). As techniques like ComponentizeJS are applied to a specific set of interpreted files, you can even prune parts of the interpreter that would never be used for that specific program. If you want to go even further, you could record specific execution profiles and optimize based on those.
You are right for the most part. I attended a talk about PyScript[1] (which runs Python in the browser using wasm, so it's similar) and there is a 2x performance hit.
PyScript has actually been massively refactored in the last few months and is much faster now. You can check the performance of PyScript running MicroPython here: https://laffra.github.io/ltk/
If you need/choose Pyodide, load time will of course be much longer, but that's mostly because of the size of full core Python and the dependencies you might have.
This way you can run NodeJS code in the browser. Even though both are JS-based, NodeJS has a bunch of APIs that deal with things like file systems that the browser doesn't have.
For example, imagine you have a lib that converts markdown to html, but the lib happens to write the files directly to the disk, hence it can't be used in the browser. If you compile the nodejs runtime to wasm with a WASI implementation that maps the file system to local storage, then you can just read the file from local storage after invoking the lib.
Just saying it is a technical possibility, this kind of approach is really only meant to be used if you _really_ just want to run some lib in the browser, no matter how slow it gets.
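To make that concrete, here is a toy sketch of the idea in plain JS. Everything in it (the store, `fakeFs`, the pretend markdown lib) is hypothetical and just stands in for what a real WASI shim would wire up between the wasm module and local storage:

```javascript
// Toy stand-in for localStorage so this sketch also runs under Node;
// in the browser you would use window.localStorage directly.
const store = new Map();

// Hypothetical host-side file API that a WASI shim would expose to the
// compiled module: "disk" writes land in the key-value store instead.
const fakeFs = {
  writeFile: (path, contents) => store.set(path, contents),
  readFile: (path) => store.get(path),
};

// Pretend this is the compiled library that insists on writing to disk.
function markdownLibWritesToDisk(markdown) {
  const html = markdown.replace(/^# (.*)$/m, "<h1>$1</h1>");
  fakeFs.writeFile("/out/index.html", html);
}

// The page then just reads the "file" back out of the store.
markdownLibWritesToDisk("# Hello");
console.log(fakeFs.readFile("/out/index.html")); // "<h1>Hello</h1>"
```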
Also, technically, if you could compile JS to wasm (without nodejs, so more like how you run C through wasm right now), then you wouldn't need to care about browser versions and JS API polyfills while still using JS.