Allocating each object still has overhead though, even if they all live side-by-side: you pay memory overhead for every value. A Uint8Array, by contrast, is tailor-made for an array of bytes, with a single constant overhead for the whole array. Plus the garbage collector never has to peer inside a Uint8Array instance.
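A quick sketch of the difference: a Uint8Array packs exactly one byte per element, while a plain array holds full JavaScript values the GC must track individually.

```javascript
// A Uint8Array stores exactly one byte per element, with a single
// fixed-size header for the whole array.
const bytes = new Uint8Array([1, 2, 3, 4]);
console.log(bytes.byteLength); // 4 — four bytes of storage for four elements
console.log(Uint8Array.BYTES_PER_ELEMENT); // 1

// A plain array of the same numbers holds four full JavaScript values,
// each of which the garbage collector has to consider individually.
const values = [1, 2, 3, 4];
```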
The engine can optimize all those allocations out of existence so they never happen at all; it's not a problem we'll be stuck with forever, just a temporary inconvenience.
If a generator is yielding values, it doesn't expose step objects to its inner code. And if a `for of` loop is consuming those yielded values, the step objects aren't exposed directly to the looping code either.
So when a `for of` loop consumes a generator, the step objects are visible only to the engine, and the engine is therefore free to optimize the allocations away.
The simplest way the engine could do this is to reuse the same step object over and over, mutating `step.value` between each invocation of `next()`.
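To make the step objects concrete, here's a minimal sketch of what you see when you drive a generator by hand versus through `for of`:

```javascript
function* digits() {
  yield 1;
  yield 2;
}

const gen = digits();

// Each call to next() produces a step object of shape { value, done }.
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: undefined, done: true }

// A for-of loop consumes the same steps without ever exposing them to
// your code, which is what lets the engine reuse or elide them.
for (const d of digits()) {
  console.log(d); // 1, then 2
}
```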
WebAssembly is particularly attractive for agentic coding because prompting an LLM to write Zig or C is no harder than prompting it to write JavaScript. So you can get the authoring speed of a scripting language via LLMs, but performance close to native via wasm.
This is the approach I’m using for my open source project qip that lets you pipeline wasm modules together to process text, images & data: https://github.com/royalicing/qip
qip modules follow a really simple contract: there’s some input provided to the WebAssembly module, and there’s some output it produces. They can’t access fs/net/time. You can pipe in from your other CLIs though, e.g. from curl.
I have example modules for markdown-to-html, bmp-to-ico (great for favicons), ical events, a basic svg rasterizer, and a static site builder. You compose them together and can then run them on the command line, in the browser, or in the provided dev server. Because the module contract is so simple, they'll work natively too.
An advantage of running a coding agent in a VM is that to answer your question, it can install arbitrary software into the VM. (For example, running apt-get or using curl to install a specialized tool.) WebAssembly seems suitable for more specialized agents where you already know what software it will need?
> We said the runtime asks the OS for large chunks of memory. Those chunks are called arenas, and on most 64-bit systems each one is 64MB (4MB on Windows and 32-bit systems, 512KB on WebAssembly).
Incorrect. You ask the OS for pages. (Go does internally appear to manage its heap in "arenas".) On WebAssembly the page size is 64KiB. On 64-bit Windows it's 4KiB, Apple Silicon 16KiB, Linux x86_64 4KiB.
"Page" is OS terminology. "Arena" is Go terminology. An arena is made up of sequential pages. Go asks the OS for 64MB of sequential memory, and calls that 64MB chunk an arena; this is consistent with the text you quoted. It is not incorrect.
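The 64KiB WebAssembly page size mentioned above is easy to observe from JavaScript, since `WebAssembly.Memory` is sized in wasm pages. A sketch, runnable in Node or a browser:

```javascript
// WebAssembly.Memory is allocated in wasm pages of 64KiB each.
const memory = new WebAssembly.Memory({ initial: 1 });
console.log(memory.buffer.byteLength); // 65536 — one 64KiB page

memory.grow(2); // request two more pages
console.log(memory.buffer.byteLength); // 196608 — three pages total
```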
If people who wore Google Glass without respect for others were Glassholes, perhaps people who unleash their OpenClaw instance onto the internet without respect are Clawholes?
We have LLMs that generate code, but that code should be treated as untrusted: perhaps it overflows a buffer or tries to read ssh keys. If we aren't reviewing code closely, a major security hole could be hiding on any line.
And since LLMs can generate code in any language, it makes sense for them to write fast imperative code in C or Zig. We no longer have to pick our favorite scripting language for the ergonomics.
So qip tries to solve both problems by running .wasm modules in a sandbox. You can pipe from other cli tools and you can chain multiple modules together. It has conventions for text, raw bytes, and image shaders, with more to come.
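The sandbox property falls out of how WebAssembly itself works: a module can only touch the outside world through imports you explicitly hand it. A minimal sketch (the eight bytes below are just the wasm magic number and version, i.e. an empty module):

```javascript
// The smallest valid wasm module: the magic bytes "\0asm" plus version 1.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm"
  0x01, 0x00, 0x00, 0x00, // version 1
]);

console.log(WebAssembly.validate(emptyModule)); // true

// Instantiate with an empty import object: the module gets no
// filesystem, network, or clock unless we pass those capabilities in.
WebAssembly.instantiate(emptyModule, {}).then(({ instance }) => {
  console.log(Object.keys(instance.exports)); // []
});
```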
I am excited by the capabilities of probabilistic coding agents, but I want to combine them with deterministic code, and that's what these qip modules are. They are pure functions with imperative guts.
WebAssembly Text Format (wat) is fine to use. You declare functions that run imperative code over primitive i32/i64/f32/f64 values, and write to a block of memory. Many algorithms are easy enough to port, and LLMs are pretty great at generating wat now.
I made Orb as a DSL over raw WebAssembly in Elixir. This gives you extra niceties like |> piping, macros so you can add language features like arenas or tuples, and reusable code in modules (you can even publish to the package manager Hex). Because it manipulates the raw WebAssembly instructions, it can compile to kilobytes instead of megabytes.
I’m tinkering on the project over at: https://github.com/RoyalIcing/Orb
> Which brands do people trust?
> Which people do people of power trust?
These are often at odds with each other. So many times engineers (people) prefer the tool that actually does the job, but the PMs (people of power) prefer shiny tools that are the "best practice" in the industry.
Example: Claude Code is great and I use it with Codex models, but people of power would rather use "Codex with ChatGPT Pro subscription" or "CC with Claude subscription" because those are what their colleagues have chosen.
This is why Steve Jobs demoed software. Watch when he unveils Aqua: there are a couple of slides of the lickable visuals, then he sits down and demos it. He clicks and taps and shows it working, because that's what you, the user, will do.
He'll show boring things like resizing windows, because those things matter to you when you try it, and if he cares about resizing windows to this degree, imagine what else this product gets right.
Apple today hides behind slick motion-graphics introductions that promise ideal software. That sets them up to fail, because no one can live up to a fantasy. Steve showed working software that was good enough to demo, and then got his team to ship it.
If you use something long enough, you'll get used to its idiosyncrasies. Jobs would have instinctively clicked and dragged 10px away from the rounded corner here. This is why the owner of an old car can turn it on and drive away in a blink while his son has trouble: hold the accelerator 10% down, jiggle the key a little while turning, pull the wheel a bit... it all comes naturally to the owner.
Yes, and Mac owners will do the same thing. I don't use macOS, but people will just figure out the new behavior, be briefly annoyed by it, and then get used to it and move on. Apple could have done better here, but users acclimate to much worse UX than this.
I'm working on a compiler for WebAssembly. The idea is you use the raw wasm instructions like you’d use JSX in React, so you can make reusable components and compose them into higher abstractions. Inlining is just a function call.
It's implemented in Elixir and uses its powerful macro system. This is paired with a philosophy of static and bump allocation, so I'm trying to find a happy medium between simplicity and a powerful-enough paradigm, while still generating simple, compact code.