mihaelm's comments

My guess is they meant metaprogramming in general (templates/generics, macros), but traits are not quite like the others.

A truly lean team (say, <=5 people and limited project scope) should be able to live off their code forge's free CI/CD minutes, or whatever is included in the basic tier they're running. Just run the suite on a schedule against trunk instead of on every PR.
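As a concrete sketch of what that looks like (assuming GitHub Actions as the forge and `make test` as the suite's entry point; both are illustrative, adapt to your setup):

```yaml
# .github/workflows/nightly.yml: run the full suite on a schedule
# against trunk, instead of burning minutes on every PR.
name: nightly-tests
on:
  schedule:
    - cron: "0 3 * * *"   # 03:00 UTC daily; scheduled runs use the default branch
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test    # assumed entry point for the test suite
```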

If not, then that's a good signal they should invest more into their CI/CD setup, and like you said it's not necessarily a huge investment, but it can be a barrier depending on skills.


What skill? You can get Jenkins running in an afternoon.

If you can't set up CI/CD you're not qualified to program anything


That's a bit harsh. Depending on how a person developed or where they worked, they may not have had exposure to other facets beyond basic development. Beyond that, it might as well be magic. They'll have to figure out how to provision a VM, SSH into it, and lock all the proverbial doors first. And that's without going into managing it with IaC tools like Terraform, Ansible, Packer, etc.

> That's a bit harsh, depending on how a person developed or where they worked they may not have had exposure to other facets beyond basic development. Beyond that, it might as well be magic.

...so? You sit your ass down and learn. It might take a bit longer if you've never touched a shell, but it's far easier than anything actual programming deals with, especially now, with ready or near-ready recipes available for every environment.


I'd say LLB is the "standard", Dockerfile is just one of human-friendly frontends, but you can always make one yourself or use an alternative. For example, Dagger uses BuildKit directly for building its containers instead of going through a Dockerfile.
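You can see this right in the Dockerfile itself: the optional `# syntax=` directive on the first line names the frontend image BuildKit uses to compile the file down to LLB, so you can swap in a pinned or custom frontend without touching the daemon. A minimal illustration:

```dockerfile
# syntax=docker/dockerfile:1
# The directive above selects the frontend image; everything below is
# just input for that frontend to compile to LLB.
FROM alpine:3.19
RUN echo "compiled to LLB by the dockerfile frontend"
```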

Maybe if you only look at it through the lens of building an app/service, but containers offer so much more than that. By standardizing their delivery through registries and management through runtimes, a lot of operational headaches just go away when using a container orchestrator. Not to mention better utilization of hardware since containers are more lightweight than VMs.

From what I've seen, standardizing delivery through registries and runtimes does reduce friction, but containers mostly move operational complexity around rather than eliminate it. You still get image sprawl, registry auth and storage quotas, supply chain issues like unsigned images, runtime quirks between runc and crun, and networking and storage headaches when an orchestrator like Kubernetes turns deployment into an availability and observability problem.

If you want the gains mentioned, you have to invest in governance: immutable tags, automated image scanning with Trivy, signing with cosign, and sensible image retention policies in your registry. Accept the tradeoff that you will be operating a distributed control plane and therefore need real observability like Prometheus plus request and limit discipline or you'll get the utilization benefits in graphs only while production quietly melts down.
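To make "immutable tags" concrete, here's a toy check (the function name and regex are my own, not from any registry tooling) that a deploy reference is pinned to a digest rather than a mutable tag:

```python
import re

# Illustrative only: a digest-pinned reference ends in "@sha256:<64 hex chars>",
# so it can never silently change underneath you the way a tag like ":latest" can.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image_ref: str) -> bool:
    """Return True if the image reference is pinned to an immutable digest."""
    return bool(DIGEST_RE.search(image_ref))

# A mutable tag vs. a pinned reference:
print(is_digest_pinned("registry.example.com/app:v1.2.3"))              # False
print(is_digest_pinned("registry.example.com/app@sha256:" + "ab" * 32)) # True
```

A CI gate like this is cheap to add and catches the most common drift problem before it reaches the cluster.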


Hah, indeed that's my perspective. I'm used to being able to compile a program, distribute the executable, and have it "just work" across Windows, Linux, and macOS (with the appropriate compile targets set).

> Not to mention better utilization of hardware

When compared to a VM, yes. But shipping a separate userspace for each small app is still bloat. You can reuse software packages and runtime environments across apps. From an I/O, storage, and memory utilization point of view, it feels baffling to me that containers are so popular.


"bloat" has always been the last resort criticism from someone who has nothing valid. Containers are incredibly light, start very rapidly, and have such low overhead in general that the entire industry has been using them.

Docker containers also do reuse shared components, layers that are shared between containers are not redownloaded. The stuff that's unique at the bottom is basically just going to be the app you want to run.
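For example (a hypothetical pair of services): if both images start from the same base and early instructions, those layers are stored once and pulled once:

```dockerfile
# service-a/Dockerfile and service-b/Dockerfile can both begin identically:
FROM python:3.12-slim
# Identical early layers like this one are cached and shared between images.
RUN pip install --no-cache-dir flask
# Only the per-app layers below are unique to each image.
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```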


> From an I/O, storage, and memory utilization point of view, it feels baffling to me that containers are so popular.

Why? It's not virtualization, it's containerization. It's using the host kernel.

Containers are fast.


I was referring to the userspace runtime stack, not the kernel. What I criticize is that multiple containers that share a single host usually overdo it with filesystem isolation. Hundreds of MBs of libraries and tools needlessly duplicated, even though they could just as well have used distro packages and deployed their apps as system-level packages and systemd unit files with `DynamicUser=`.

You can hardly call this efficient hardware utilization.
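For reference, a sketch of the `DynamicUser=` approach (the unit name and binary path are made up; the directives are standard systemd):

```ini
# /etc/systemd/system/myapp.service (hypothetical name and binary path)
[Unit]
Description=App installed as a distro package, sandboxed by systemd

[Service]
# Binary shipped by the system package manager:
ExecStart=/usr/bin/myapp
# Transient unprivileged user, allocated at service start:
DynamicUser=yes
# Read-only view of /usr, /boot, /etc for the service:
ProtectSystem=strict
# Writable /var/lib/myapp, owned by the dynamic user:
StateDirectory=myapp

[Install]
WantedBy=multi-user.target
```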


The duplication is a necessity to achieve the isolation. Having shared dependencies and hordes of unit files for a multi-tenant system is hell: versioning issues can and will break this paradigm, and no serious shop is doing this.

For running your own machine, sure. But this would become unmaintainable for a sufficiently multi-tenant system. Nix is the only thing that really can begin to solve this outside of container orchestration.


The isolation is the POINT. You can't be assured that the library and version you need for your app is the same as the one installed by the system, for example.

And it may not even be installed by the system, hence docker.


> "You turn it on and it scales right up"

is my favorite quote from the video.


And no doubt they’ll find a way to spend it in the app considering you can manage almost all aspects of your life within it.

Revolut works similarly. You don’t pay any fees on transfers to other Revolut accounts, but you do for other bank accounts.


> Revolut works similarly. You don’t pay any fees on transfers to other Revolut accounts, but you do for other bank accounts.

Does it? I'd be surprised if it does in the UK at least, as all banks do free transfers to every other bank in the UK via Faster Payments. I thought it was the same in the EU?


Agreed. In the UK, I've never been charged a fee to send money within the UK on Revolut.

Never, it’s a very effective punctuation mark. While it may not have been common in day-to-day messaging, it’s very common in writing of all sorts.

Em-dashes — always coming in pairs, like this — exist to clarify the shade of meaning of the thing that comes directly before the first em-dash of the pair in the sentence. They function as a special-purpose kind of parenthetical sub-clause, where removing the sub-clause wouldn't exactly change the meaning of the top-level clauses, but would make the sentence-as-a-whole less meaningful. (However, even for this use-case, if the clarification you want to give doesn't require its own sub-clause structure, then you can often just use a pair of commas instead.)

ChatGPT mostly uses em-dashes wrong. It uses them as an all-purpose glue to join clauses. In 99% of the cases where it emits an em-dash, a regular human writer would put something else there.

Examples just from TFA:

• "Yes — I can help with that." This should be a comma.

• "It wasn’t just big — it was big at the right age." This should be a semicolon.

• "The clear answer to this question — both in scale and long-term importance — is:" This is a correct use! (It wouldn't even work as a regular parenthetical.)

• "Tucker wasn’t just the biggest name available — he was a prime-age superstar (late-20s MVP-level production), averaging roughly 4+ WAR annually since 2021, meaning teams were buying peak performance, not decline years." Semicolon here, or perhaps a colon.

• "Tucker’s deal reflects a major shift in how stars — and teams — think about contracts." This should be a parenthetical.

• "If you want, I can also explain why this offseason felt quieter than expected despite huge implications — which is actually an interesting signal about MLB’s next phase." This one should, oddly enough, be an ellipsis. (Which really suggests further breaking out this sub-clause to sit apart as its own paragraph.)

• "First of all — you’re not broken, and it’s not just you." This should be a colon.

You get the idea.


Well, that's the thing about the em-dash - it has always been usable as a "swiss army knife" punctuation mark.

Strictly speaking, an em-dash is never needed; it could always be a comma or semicolon or parentheses instead. Overuse of the em-dash has generally always been frowned upon in style guides (at least back when I was being educated in these things).


Strictly speaking — an em-dash is never needed; it could always be a comma — or semicolon — or parentheses — instead. Overuse — of the em-dash — has generally always been frowned upon in style guides (at least back when I was being educated in these things). ——

File size is a legit property to keep in mind if your goal is to create an agent that runs on ESP32 boards. They don't expect you to run Zclaw on Mac Mini.


What's the use case for running this on a tiny board? Isn't the whole point that it can use your computer for you?


For something like OpenClaw yes, but not for Zclaw. I think the naming is more about riding the current wave of Claw-related interest rather than positioning it as competition or replacement for other clawies.

Zclaw is about running an agent in your embedded system.


The examples seem to suggest it would be chatting with your home automation in natural language.


Before you know it your smart thermostat will be blogging. The joke is on everyone who thought IoT couldn't get any worse. Just imagine the new landscape of security vulnerabilities this opens up.


My "smart" gas stove can be turned on over the internet (if I allow it to connect)—perfect appliance to put an LLM in charge of.


Because it targets resource-constrained environments, the primary deployment target seems to be microcontrollers. You can get ESP32 boards pretty cheap.


I prefer devcontainers for more involved project setups as they keep it lighter than introducing a VM. It’s also pretty easy to work with Docker (on your host) with the docker-outside-of-docker feature.
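For reference, a minimal sketch of that setup (the base image and feature version are just plausible defaults; check the devcontainers docs for your stack):

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "example",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    // Mounts the host's Docker socket instead of running a nested daemon.
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
  }
}
```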

However, I’m also curious about using NixOS for dev environments. I think there’s untapped potential there.


we love nix for dev environments, and highly recommend it. many other problems go away. don't see that as what's being solved here, though.

containers contain stuff the way an open bookcase contains books, they're just namespaces and cgroups on a file system overlay, more or less, held together by willpower not boundaries:

https://jvns.ca/blog/2016/10/10/what-even-is-a-container/

https://github.com/p8952/bocker
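you can poke at those namespaces and cgroups from any ordinary, uncontained process; a small sketch (Linux only, no container runtime required):

```python
import os

# A "container" is just a process whose entries here point at different
# namespaces than the host's defaults; every process has this directory.
for ns in sorted(os.listdir("/proc/self/ns")):
    # Each entry is a symlink like "pid:[4026531836]" naming the namespace.
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))

# cgroup membership is likewise just a line in a /proc file.
with open("/proc/self/cgroup") as f:
    print(f.read().strip())
```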

as a firm required to care about infosec, we appreciate the stance in their (2). and macOS VMs are so fast now, they might as well be containers except, you know, they work. (if not fast, that should be fixed.)

that said, yes, running local minikube and the like remain incredibly useful for mocking container envs where the whole environment is inside a machine(s) boundary. containers are _almost_ as awesome as bookcases…


I just went on a tangent about dev environments, i.e. what to develop inside of. In the case of Cowork, a VM is definitely the right choice, no doubt.

