Hacker News | embedding-shape's comments

Great intro that quickly explains the reasoning for the proposed new measure:

> Virtually everyone would agree that a 20-meter tree is twice as tall as a 10-meter tree. Conversely, everyone would agree that the 10-meter tree is twice as short as the 20-meter tree. There is no threshold or “shortness line” above or under which these relationships cease to hold: a 5-meter tree is twice as short as a 10-meter tree, a 1-meter tree is twice as short as a 2-meter tree, and so on. This reasoning remains valid when considering other multiples: a 1-meter tree is three times shorter than a 3-meter tree. To be sure, when assessing the height of a single tree, different people may disagree whether it is short or tall, as their judgment will depend on the benchmark they use for their assessment. However, when comparing two different trees, virtually everyone would make similar cardinal comparisons. In mathematical terms, shortness is the reciprocal of tallness. [...] In this paper, I apply the same logic to define a new poverty measure


There are a lot of words that have nothing to do with the claimed measure.

I'm still trying to figure out how he reached the conclusion that it takes 63 minutes to earn $1 in the US.


And it's silly. A person earning $100 a year is not "twice as poor" as a person earning $200 in any meaningful sense; both are extremely poor and will require essentially the same amount of public support. But this metric treats the difference as so huge (80 hours to earn $1 vs 40) that it drowns out any differences in the rest of the income distribution.
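To make the parent's point concrete, here is a rough sketch of the reciprocal, "time to earn $1" measure. The incomes and the 2,000-hour work year are assumptions for illustration, not figures from the paper:

```python
# Illustrative sketch of a reciprocal "hours of work to earn $1" metric.
# The 2000-hour work year and the incomes below are assumed for the example.

HOURS_PER_YEAR = 2000  # rough full-time work year (assumption)

def hours_to_earn_dollar(annual_income: float) -> float:
    """Reciprocal measure: hours of work needed to earn one dollar."""
    return HOURS_PER_YEAR / annual_income

# Two extremely poor people: the absolute gap in the metric is enormous...
gap_poor = hours_to_earn_dollar(100) - hours_to_earn_dollar(200)  # 20.0 - 10.0

# ...while a $20,000 difference between two middle incomes barely registers.
gap_middle = hours_to_earn_dollar(40_000) - hours_to_earn_dollar(60_000)

print(gap_poor, gap_middle)  # the $100 gap dominates the $20,000 gap
```

The $100-vs-$200 gap contributes hundreds of times more to the metric than the $40,000-vs-$60,000 gap, which is exactly the "drowns out the rest of the distribution" behavior described above.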

> People know what they want and need

If they truly did, there wouldn't be a huge amount of humans whose role is basically "Take what users/executives say they want, and figure out what they REALLY want, then write that down for others".

Maybe I've worked for too many startups, and only consulted for larger companies, but everywhere I look in businesses I see so many problems that are basically "Others misunderstood what that person meant" and/or "Someone thought they wanted X, when they actually wanted Y".


> Our name for this new CMS is EmDash. We think of it as the spiritual successor to WordPress. It’s written entirely in TypeScript. It is serverless, but you can run it on your own hardware or any platform you choose. Plugins are securely sandboxed and can run in their own isolate, via Dynamic Workers, solving the fundamental security problem with the WordPress plugin architecture. And under the hood, EmDash is powered by Astro, the fastest web framework for content-driven websites.

To me this sounds like the polar opposite of the direction CMSs need to go. Instead, simplify and go back to the "websites" roots, where a website is just static files hosted wherever; it's fast, easy to cache, and just so much easier to deal with than server-side rendered websites.

But of course, then they wouldn't be able to sell their own "workers" product, so suddenly I think I might understand why they built it the way they built it, at the very least to dogfood their own stuff.

I'm not sure it actually solves the "fundamental security problem" though, but I guess that remains to be seen.


I love building static (or statically generated) websites, but all too often, customers want dynamic content. And what's worse, they don't tell you up-front, because they don't really understand the difference.

"I need a website for my bakery". "What's supposed to be on it?" "Our address, opening times, a few pictures". I build them a static website.

"Now I need a contact form". Ok, that doesn't really fit into a static website, but I can hack something together. "Now I need to show inventory, and allow customers to pre-order". A static website won't cut it anymore.

When you develop for clients, especially those that you don't know very well, it's a bad idea to back yourself into a corner that's not very extensible. So from that perspective, I really get why they give plugins such a central spot.


This is the main reason why WordPress is still so popular to this day. You can cache the crap out of the frontend to the point that it's basically a static site, but it's still all running on top of a dynamic platform if you need that flexibility in the future.

I got my start in webdev slinging WordPress sites like a lot of self-taught devs, and I definitely see the pain points now that I've moved on to more "engineering"-focused development paradigms, but the value proposition of WP has always been clear and present.

Given how WP leadership is all over the place at the moment, I can see how Cloudflare sees this as an opportunity to come in and peel away some market share when they can convince these current WP devs to adopt a little AI help and write applications for their platform instead.

Let’s see if it pays off!


I think this is true. However, the non-coding clients I've worked with really do like the ability to make minor edits to a site with a UI rather than having to continually ping a developer.

The problem with WordPress (and it looks like this solution largely just replicated the problem) is that it's way too cumbersome and bloated.

It really is unlike any modern UI for any SaaS, or software in general.

It's filled with meaningless admin notices, the sidebar is 5 miles long and about 98% of what the user sees is meaningless to them.

Creating a very lightweight, minimal UI for the client to edit exactly what they need, or, like you said, just static files, really is the best solution in most cases. The "page builders" always just turn into a nightmare the clients end up handing over for a dev to "fix" anyway.

Not sure why so many people feel the need to continue on the decades of bloat and cruft WordPress has accumulated, even if it's "modernized."


There are two types of WordPress sites from my perspective as someone who got their start in webdev in that ecosystem.

The first and arguably largest is exactly what you describe. Little sites for small businesses who just want an online presence and maybe to facilitate some light-duty business development with a small webshop or forum. These sites are done by fly-by-night marketers who are also hawking SEO and ads on Facebook, and they'll host your site for the low low price of $100/mo while dodging your phone calls when the GoDaddy $5/mo plan they are actually hosting your site on shits the bed.

The second, and more influential, group of WordPress users are very large organizations who publish a lot of content and need something that is flexible, reasonably scalable, and cheap to hire developers for. Universities love WP because they can set up multisite and give every student in every class a website with some basic plugins, and then it's hands-off. Go look at the logo list for WordPress VIP to see which media organizations are powered by WP. Legit newsrooms run on mostly stock WP backends, but with their own designers and some custom publishing workflows.

These two market segments are so far apart, though, that it creates a lot of division and friction from lots of different angles. Do you cater to the small businesses and just accept that they'll outgrow the platform someday? Or do you build stuff that makes the big publishers happy, because they pay for most of the engineering talent working on the open source project more generally? And all that while maintaining backwards compatibility and somewhat trying to keep up with modern-ish practices (they did adopt React, after all).

WordPress is weird and in no way a monoculture is what I guess I’m trying to say.


Are you sure the admin notices and sidebar are not plugin issues?

I use Wordpress for my blog because I stopped caring about maintaining one, and I'm mildly confident wp will be around for 10 more years.

There are basically no notices and the admin sidebar is ~10 obvious entries (home, posts, pages, comments, appearance, settings etc).


If it uses Astro, then it's a literal static website generator. But with modern React components if you need anything on top of this. The same with plugins, I assume people don't have to use those but the important thing is that you can if you want to.

Sure, but if I want to host my static files on a website where they are easily cached... Cloudflare also offers this product?

I am confused - what are the good “websites” roots? Server-side rendered or not?

Websites used to be static html files.

You either write them by hand, or use a tool that generates them locally, upload everything, and you're done. Perfect security. Great performance.

It's in this sense that static generators go back to the source: they simply produce dumb HTML files that you upload/publish to a web server that doesn't need to run any code. Just serve files.
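A generator in that spirit can be tiny. A minimal sketch (page contents, paths, and template are made up for the example): run it locally, then upload the output directory to any dumb file server.

```python
# Minimal illustration of the "dumb HTML files" workflow: generate locally,
# upload the output, no server-side code to secure. All content here is
# invented for the example.
from pathlib import Path

PAGES = {
    "index.html": ("Home", "<p>Welcome to the bakery.</p>"),
    "hours.html": ("Opening times", "<p>Mon-Sat, 7:00-18:00.</p>"),
}

TEMPLATE = ("<!doctype html><html><head><title>{title}</title></head>"
            "<body>{body}</body></html>")

def build(out_dir: str = "dist") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for name, (title, body) in PAGES.items():
        (out / name).write_text(TEMPLATE.format(title=title, body=body))

build()  # afterwards, `dist/` is just files; serve them from anywhere
```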


Imho a CMS is just a tool that generates static html files on the server. The distinction is a bit artificial. CMSes have static html caching, and CDNs will allow you to "one-click" firewall the dynamic administration and cache the static html for you.

Static website generators are a cool way for programmers to do that work on their machine, but in the end the difference in what gets served is very small (if you set up the basics).


But "back to CMS roots" is absolutely not what the WordPress ecosystem is about. It's about the absolute galaxy of plugins that provide you with an entire digital experience "in a box". You can just install whatever plugins for ecommerce, CRM, forms management, payments, event calendars. They will all plug into both the template system and the MySQL database. There are a lot of well-known and reputable plugins with huge installed bases (WooCommerce, Gravity Forms, Yoast SEO), but there's a ton of shady ones that can infect your install. Cloudflare is directly addressing the shortcomings of the existing plugin architecture, indicating they intend for EmDash to fill a similar niche as an all-in-one digital experience and not just a simple CMS.

The question, then, is that they'd be building some brand-new thing not compatible with WordPress. Supposedly the proposition is to steal people away from WordPress, not just to get people who are looking for a new framework to build something from scratch. I'm guessing the recent lawsuits also provide some momentum.

It's not compatible with WordPress, though. It slurps a WordPress export, which is quite literally static data. They expect you to code up anything dynamic using their agent skill.

It looks like they built it so you can plug in local components of your choice, though? The security model does assume you have MAC-containerized environments available at your fingertips, though, so having something like DHH's Once is probably a soft minimal dependency if you want to do it yourself.

Reading this paragraph I was genuinely convinced it was an April 1st thing.

Reminds me of Vercel and NextJS, where a popular framework's design is constrained by, or optimally runs on, their infra, but then comes with pains or unusualness if self-hosted (e.g. middleware). Vendor lock-in plays are a big red flag.

Probably a requirement from Ookla, so again "They refuse to implement anything that isn't strictly required".

> it’s disturbing to see people clamoring to deny others their freedom in a FOSS context

How does "allow building Linux to be IPv6-only" somehow "deny others their freedom" exactly? I'm willing to wager most distributions will still be dual v4+v6, but if they aren't, isn't that something for you to bring up with your distribution rather than that the kernel just allows something?


Coupling this patch with language about “legacy IP”, along with the follow up comments from the person who submitted the patch, it is clear that the submitter is hostile towards IPv4. I also see hostility towards IPv4 in the comments here and other similar discussions.

I have no problem with allowing optional IPv4 or IPv6 only builds as long as both are kept well-maintained.


> it is clear that the submitter is hostile towards IPv4

But so what? It still doesn't remove v4, in any shape or form, and if that was proposed to the kernel, I'm again fairly confident it'd be rejected.

> I also see hostility towards IPv4 in the comments here and other similar discussions

Ah, yeah, that might be. I just saw your comment first, with no context of what you were actually answering, so it kind of looked like you were replying "to the submission", which really isn't denying any freedoms; I guess I was confused about that, my bad. Still, wouldn't it be better to reply directly to those comments, rather than "replying" to an argument/debate that is actually happening elsewhere?


Somehow IPv4 versus IPv6 has become one of those noxious political-technical debates like Android versus Apple or GPL versus BSD/MIT, in which both sides are dug in and think that the other side must be destroyed.

The reason that I don’t like seeing patches like this, even as a “joke”, is that there are real people who would like to see IPv4 removed (possibly by government intervention) in order to achieve their dream of an IPv6 only internet. The whole idea is preposterous, but here we are. It’s about as realistic as banning cars but that doesn’t stop the endless flame wars about it.

Someone has to step in to point out that v4 and v6 were designed to coexist, this is fine, please don’t remove common standards for your personal preferences.


The website mentions "giving you full control over performance"; what are those knobs and levers, exactly? What do those knobs and levers influence, and what sort of tradeoffs can you make with the provided controls?

Unlike other UI libraries, I would say Sycamore has a very clear execution model. If you've used something like React before, there is all this stuff about component lifecycles and hook rules, where the component functions run over and over again when anything changes. This can all end up being fairly confusing and has a lot of performance footguns (looking at you, useRef and useMemo).

In Sycamore, the component function only ever runs a single time. Instead, Sycamore uses a reactive graph to automatically keep track of dependencies. This graph ensures that state is always kept up to date. Many other libraries also have similar systems, but only a few of them ensure that it is _impossible_ to read inconsistent state. Finally, any updates propagate eagerly, so it is very clear at any time when any expensive computation might be happening.

For more details, check out: https://sycamore.dev/book/introduction/adding-state
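For readers unfamiliar with signal-based reactivity, the dependency-tracking idea above can be sketched in a few lines. This is a toy illustration in Python, not Sycamore's actual API: reading a signal inside an effect registers the dependency, and writes eagerly re-run only the effects that read it.

```python
# Toy sketch of fine-grained reactivity (dependency tracking + eager
# propagation). Conceptual illustration only; Sycamore's real API differs.

_current_effect = None  # the effect currently being (re)run, if any

class Signal:
    def __init__(self, value):
        self._value = value
        self._subscribers = set()

    def get(self):
        # Reading inside an effect registers that effect as a dependency.
        if _current_effect is not None:
            self._subscribers.add(_current_effect)
        return self._value

    def set(self, value):
        self._value = value
        # Eager propagation: dependents re-run immediately on write.
        for effect in list(self._subscribers):
            effect()

def create_effect(fn):
    def run():
        global _current_effect
        prev, _current_effect = _current_effect, run
        try:
            fn()
        finally:
            _current_effect = prev
    run()  # run once up front to collect dependencies
    return run

count = Signal(0)
log = []
create_effect(lambda: log.append(count.get() * 2))
count.set(1)
count.set(5)
print(log)  # [0, 2, 10]
```

The component function (here, the lambda) runs once to wire up the graph; afterwards only the affected effects re-execute, which is the "runs a single time" property described above.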


The Dioxus library seems really similar to me. How is Sycamore's model different?

Dioxus originally was more like ReactJS and used hooks. However, they have since migrated to using signals as well which makes Dioxus and Sycamore much more similar.

One remaining major difference is that Dioxus uses a VDOM (Virtual DOM) as an intermediary layer. This has a few advantages such as more flexible rendering backends (they also support native rendering for desktop apps), at the cost of an extra layer of indirection.

Creating native GUI apps should also be possible in Sycamore, and something I'm interested in although there is currently no official support. However, I think one of the big differences with Dioxus would be that Dioxus supports "one codebase, many platforms" whereas I think that is a non-goal with Sycamore. Web apps should have one codebase, native apps should have another. Of course, it would still be possible to share business logic but the actual UI code will be separate.


How does it compare to Leptos? Leptos is roughly based on SolidJS and uses signals to enable fine-grained reactivity and avoid a VDOM. Why Sycamore over Leptos?

With Tauri you also get the freedom of choosing frontend frameworks and can reuse existing frontend code/skills. Yes React has issues, for example Svelte handles reactivity in a much better way. I don't see real benefits of re-implementing the whole thing in Rust.

A word to the wise: similar to how foam is mostly air, Tauri is mostly marketing. Most of those 15MB "lightweight" bundles expand to 2 GB+ RAM in practice. Of course, devs still shamelessly (ignorantly, in all likelihood) call the apps "lightweight", while taking up, say, 6 GB of RAM for a 5 button UI. Tauri have also proven reticent [0] to correct the record. One supposes the sole advantage of sharing memory with other Tauri apps is not a sufficient sell to overcome Electron's single-browser-engine advantage.

A pure Rust app takes up ~60 MB for the same UI, with a large portion of that going towards graphics (wgpu table stakes).

[0] https://github.com/tauri-apps/tauri/issues/5889


You can't fit browser JS ergonomics into Rust and expect zero friction, because once you wire up a stateful UI with the kind of component churn you get in React, you spend more time satisfying the type system, and you also give up hot reload plus a decade of npm junk for odd corner cases.

You need a hard reason for that rewrite.


> I’ve worked on a project for one year now

> What I am most concerned about is the maintainability of the project and how we will get this live.

I'm not sure if it's something that got "lost in translation" or whatever, but are you really saying this project has been under development for more than a year, yet no one has attempted to deploy it to a live environment yet? If so, it's understandable you're concerned about it. A lot of the time when I jump on projects that got stuck in development hell in order to unblock them, this is a huge thing that gets in the way for teams. My approach usually is to focus on getting the whole "Know we want a change -> Implement change -> Deploy to test -> Testing -> Deploy to Production" process down first, before anything else, together with practicing at least one rollback.

It really ties into everything you do when working on a project, as this process itself basically decides how confident you can be about changes, and how confident you can be about that some bad changes can easily be rolled back even in production.

Besides that, having non-technical people trying to contribute to a technical project, is a great way for those people to unintentionally damage how well technical people can actually work on the project. I think, explaining to them exactly what you said here, that it isn't feasible long-term, that it's hard for you to have a clear mental model if they're just chucking 10K PRs at you and that you need to understand the code you deploy, should be enough to convince them. If it doesn't, you might want to ask yourself if that's the kind of environment you want to work in anyways.


The project is deployed to a test and "live" environment, but since it is a rebuild of a very old project that is currently running their business, we don't have to be in production yet. They needed the rebuild because the project that is currently in production is not maintainable anymore because of (ironically) technical debt. I agree it is still a weakness that it is not in production, and it needs a strong vision from their side to invest for one or two years into a project without seeing any revenue. However, the environment does not feel right; I've not very often felt such a misalignment when it comes to a software project.

> If you're not using comments, you're doing agent coding wrong.

Comments are ultimately there so you can understand stuff without having to read all the code. LLMs are great when you force them to read all the code, and comments only serve to confuse. I'd say the opposite has been true in my experience: if you're not forcing LLMs to not have any comments at all (and to actually skip existing ones; looking at you, Gemini), you're doing agent coding wrong.


You're wasting context doing that when a 3 line comment that the agent itself leaves can prevent the agent from searching and reading 30 files.

You're wasting context re-specifying what the code should already say; defining an implementation once should be enough. Otherwise, try another model that can correctly handle programming.

Yeah, that was my reaction too. A shame they try to hide themselves, but even worse, the instructions for this "Fake Human" are wrong too!

I think you're talking past each other.

"DSLs" can mean both "using the language's variant of 'arrays' to build a DSL via specific shapes", like Hiccup does in Clojure, and also "a mini-language inside of a program for a specific use case", like Cucumber, which is its own language for acceptance testing but in reality is built in Ruby.

Clojure favors the "DSLs made out of shapes" rather than "DSLs that don't look/work like lisp inside of our programs".
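The "DSLs made out of shapes" idea translates to any language with literal data structures. A Hiccup-style sketch in Python (the renderer and example markup are invented here for illustration; Hiccup itself is Clojure):

```python
# Hiccup-style "DSL out of data shapes": markup is plain nested lists, so
# ordinary functions can build and transform it. Toy renderer, invented
# for illustration.

def render(node):
    if isinstance(node, str):
        return node
    tag, *rest = node
    attrs = {}
    if rest and isinstance(rest[0], dict):
        attrs, rest = rest[0], rest[1:]
    attr_str = "".join(f' {k}="{v}"' for k, v in attrs.items())
    children = "".join(render(child) for child in rest)
    return f"<{tag}{attr_str}>{children}</{tag}>"

page = ["div", {"class": "post"},
        ["h1", "Hello"],
        ["p", "No macros needed, just ", ["em", "shapes"], "."]]

print(render(page))
# <div class="post"><h1>Hello</h1><p>No macros needed, just <em>shapes</em>.</p></div>
```

No macros or custom syntax involved; the "language" is just lists and dicts, which is the first sense of "DSL" described above.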


no, not really. when people talk about DSLs in context of lisps, they usually still mean staying in the domain of s-expressions.

Yes, maybe that's the sort of DSL you're talking about, but the other person mentioned "Clojure style discourages building DSLs", which I'm fairly sure is about the other kind of DSL and is also true; hence the whole "you're talking/reading past each other".

that doesn't make sense, why would they be talking about departure from traditional lisps if they weren't talking about macro-based DSLs?

> Clojure style discourages building DSLs and the like and prefers to remain close to Clojure types and constructs

This, to me, seems to indicate they're talking about "DSLs not built with Clojure types and constructs". I'm just trying to have the most charitable reading of what people write and help you understand why it seems you're not actually disagreeing, just talking about different things.


> DSLs not built with Clojure types and constructs

and in context of lisps that still most likely means macro-based DSLs using traditional lisp constructs ¯\_(ツ)_/¯

