Hacker News | Hendrikto's comments

> like you need data centers in every continent or to quickly scale to 10+ thousands of cpus

Which for some reason many people think they need, while in reality perhaps 1% actually do.


> I do wish to point out, of course, that the whole reason it was possible to experiment cheaply and come across this serendipity was because 9 months ago, faced with the choice to either do the bad easy thing or the good nothing, I chose to do the bad easy thing.5 The SQLite database worked! I understood how it worked, behind the scenes with its B-trees and its Full Text Search extension.

This is the most important takeaway, imo, and a very valuable technique: Start with the obvious, stupid solution that definitely works. Then do the optimized version, while making sure it matches the naive implementation. In this case, the optimized version could even be generated from the naive one.


A modern spin on this technique is as follows: Write (or use an LLM to write) something so simple that it is both obviously correct and very easy to verify the correctness of. Then, use that same LLM to create a comprehensive suite of tests, which further prove the correctness of the simple implementation. Once the tests are there, let the LLM run wild and ask it to optimize the hell out of the implementation while keeping the tests untouched.
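As a sketch of that loop (the toy task and function names are mine, purely illustrative): keep the naive version around as the oracle, and let the test suite pin any optimized rewrite to it.

```python
import random

def prefix_sums_naive(xs):
    # Obviously correct, quadratic: recompute each prefix from scratch.
    return [sum(xs[:i + 1]) for i in range(len(xs))]

def prefix_sums_fast(xs):
    # The "let the LLM optimize" candidate: linear, single running total.
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

def check_equivalence(trials=200):
    # Randomized equivalence tests: the naive version is the spec.
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 50))]
        assert prefix_sums_fast(xs) == prefix_sums_naive(xs)
    return True
```

As long as the tests stay untouched, any rewrite of `prefix_sums_fast` that passes `check_equivalence` is, with high probability, behaviorally identical to the naive spec.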

The problem with this approach is that sometimes the "obvious easy thing" becomes so entrenched, touching everything, that ripping it out becomes a disproportionately monumental task.

Technical debt, like all forms of debt, can be used for leverage.

I feel it is important to manage the risk and to track this debt explicitly. Personally, I try to steer clear of both financial debt and technical debt until there are sound reasons for either.

It is the KISS stack for me personally (keep it simple, stupid).

I would still consider technical debt to be different from other forms of debt, though. It feels more like a tradeoff to me, but perhaps all debt can be classified that way. Either way, it makes for an interesting decision.


If the product fails, all technical debt goes away. So using technical debt to prove out a product is very different, because often you don't have the money or resources to build it correctly at first, since you don't know whether the expected payoff will be there.

Yeah, how you end up acquiring technical debt can mean a lot. If it's because the system is underspecified, then your technical debt is really a prototype. If it's because your team doesn't know what they are doing, you're kind of screwed either way. But if you have the right requirements and the right team and you know you're trading money today for time tomorrow, it can be the difference between meeting a deadline or getting canned.

It jumped out at me too, but because I wondered what it would look like in the AI version of this story. Having had it build the SQL version, do you: a) miss the leap because you don't understand how it works, don't care to know, and go off to vibe the next thing; b) ask it lots of questions to develop that deep understanding, then make the leap; or c) rely on it (prompt: "this can't be good enough, do better") to make the leap for you?

(Assuming for the sake of argument that you guided it to the SQL version first)


Depends on what your overall goal is with what you're building. Is it to rush out as many features as you possibly can before VC funding falls through the floor, so you too can get a slice of the pie before the party is over? Or are you "retired" in your 30s and now have time to build the perfect software for yourself? Do you need to publish and release an experiment to see how people react to it or use it, before you can know whether it's the right thing or not?

Almost everything needs to be contextualized before you can even begin to answer what the right way forward is, depends so heavily on what situation you're in.


Fwiw, giving opus 4.7 two sentences about building a CLI doing Finnish-to-English translation and looking for a space-efficient solution leads to an answer pointing to FSTs, for the same reasons stated in the blog. This is without a search tool.

The K shaped LLM scenario makes a lot of sense to me. Educated and experienced devs get better output because they know what to ask.


Came here to add this, too. Sometimes the most valuable thing a solution can buy you is time to think of a better solution.

Yes - though I'd refine it to say it's not just more time; it's time plus the forcing function of a working solution, which requires understanding the problem space more deeply.

Terrible advice. In the real world you can't fully rewrite a project when it turns out that your obvious stupid solution isn't good enough. You're stuck with it forever.

> That's why if x is state then f can never be purely idempotent

That is simply not true. f could be, for example, “set x.variable to 7”, which is definitely idempotent.
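In code, using the usual definition f(f(x)) = f(x) (the dict-based state and function names here are illustrative, not from the thread):

```python
def set_to_seven(state: dict) -> dict:
    # A plain setter: applying it twice leaves the same state as applying it once.
    return {**state, "variable": 7}

def increment(state: dict) -> dict:
    # A counter update: NOT idempotent, since re-applying changes the result.
    return {**state, "count": state.get("count", 0) + 1}

s = {"count": 0}
assert set_to_seven(set_to_seven(s)) == set_to_seven(s)  # idempotent
assert increment(increment(s)) != increment(s)           # not idempotent
```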


There are no side effects in f here, so the statement does not apply.

Parent said

> State is in practice always subjected to side effects and concurrency.

There was never any claim or assumption regarding f. Maybe the way you interpreted it is what they meant, but it is not what was stated.


You are oversimplifying with your set-variable example. The context is complex state management, as with online purchases.

It allows verifying that the binaries actually match the source, which is extremely valuable.

Bit for bit matching is not required for that.

It makes it much simpler and more robust though. Also, it allows for content addressing a la Nix, among other benefits.
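The Nix-style point can be made concrete with a sketch (the byte strings standing in for build outputs are hypothetical): if builds are bit-for-bit reproducible, verification collapses to a hash comparison, and the same hash can serve as a content address.

```python
import hashlib

def digest(data: bytes) -> str:
    # SHA-256 hex digest; with reproducible builds this doubles as a
    # content address (cf. Nix store paths keyed by hash).
    return hashlib.sha256(data).hexdigest()

local_build = b"\x7fELF...identical bytes..."  # built from source locally
published   = b"\x7fELF...identical bytes..."  # binary someone shipped
# A bit-for-bit match means one string comparison proves the published
# binary corresponds to the source you built from.
assert digest(local_build) == digest(published)
assert digest(b"tampered") != digest(local_build)
```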

Why the fuck does that site break the back button? DO NOT do that.

Since there is no other way to reach you, please allow me to use this off-topic message to let you know that there is a response to your comments in the GnuPG discussion from two weeks ago.

Same for me. I cannot access it either.

The alternative is FOSS.

Seems like Instructure Canvas is FOSS: https://github.com/instructure/canvas-lms/tree/master

If your line is GPL rather than AGPL, there's Moodle.

But you do then have to have a sysadmin capable of managing an enterprise-grade LAMP stack.


Canvas already is AGPL, though?

So it can be used by multiple universities who share the maintenance. That is my point: Not everybody has to develop their own.

For what they charge for these LMSs, they should definitely be able to send some emails.

No concerns about privacy or regulatory considerations that might vary by jurisdiction? Just yolo it and deal with the breach later?

I used LaTeX for approximately 10 years, for everything from little things to relatively complex documents, including my bachelor’s and master’s theses. It never felt natural, reliable, or consistent. Every customization required weird \makeatletter \makeatother hacks and was very brittle. Everything seemed more complicated than necessary and hard to grok, with weird interdependencies and interactions.

There are probably good reasons for all of that, but it is just both bad DX and bad UX. It feels like you need to be a hardcore LaTeX expert or consult with one, in order to accomplish the most mundane things. Especially in a reliable way, that won’t break upon making seemingly unrelated changes, or won’t break other things itself.

I used Typst for a few weeks. It already feels much more understandable, consistent, hackable, and customizable. I guess that is the difference between an ad hoc macro system and an actually thought through programming language.

The only drawback I can see is the ecosystem being smaller and less mature. That is, however, counteracted by being able to do things on your own, without immersing yourself deeply in LaTeX for years. Also, it will improve with time.

LaTeX is great, don’t get me wrong. But its heritage and historical baggage is really dragging it down.


It's kinda fascinating how dominant LaTeX is, how nice its output is, how respected Knuth is as a computer scientist, and at the same time how totally awful it feels to use it. Hard to figure out how it can be so good and so bad at once.

Posts/discussion I found interesting:

- http://www.goodmath.org/blog/2008/01/10/the-genius-of-donald...

- https://tex.stackexchange.com/q/24671

- https://news.ycombinator.com/item?id=15733381

In particular it's interesting how people seem to think TeX itself is actually quite nice to use but its popularity and LaTeX packages created a huge mess of a system.


Well -- TeX is "80s good". We've gotten better at designing ergonomic software since and it really doesn't meet the modern standard. But it's good enough for most people, and sufficiently hard to replace, that it has stuck around.

Added to that, academics specifically are more willing to suffer old crufty stuff than software engineers tend to be. After all their job is to absorb fields of material whether good or bad, and the technology tends to be lagging behind the bleeding edge in many subfields anyway so TeX doesn't even necessarily stand out.


> TeX is "80s good"

Bingo. Compared to troff and what preceded it, TeX was amazing just in its usage. But its real value was in the quality of its typesetting. Knuth put a lot of effort into the beauty and historical correctness of the output, so much so that it was solving optimization problems to calculate line breaks. MS Word still can't break a line properly in 2026.
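That optimization can be sketched as a tiny dynamic program over break points: minimize the total squared slack of every line except the last. A toy version (character counts instead of glyph widths, no hyphenation or penalties; the real Knuth-Plass algorithm is far more refined):

```python
def break_lines(words, width):
    """Globally optimal breaks: minimize the sum of (slack)^2 over all
    lines but the last. Assumes no single word exceeds `width`."""
    n = len(words)
    INF = float("inf")
    best = [0.0] + [INF] * n   # best[j] = min cost of typesetting words[:j]
    prev = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            # Length of a line holding words[i:j], with single spaces.
            length = sum(len(w) for w in words[i:j]) + (j - i - 1)
            if length > width:
                continue
            cost = 0.0 if j == n else (width - length) ** 2  # last line is free
            if best[i] + cost < best[j]:
                best[j], prev[j] = best[i] + cost, i
    lines, j = [], n
    while j > 0:
        lines.append(" ".join(words[prev[j]:j]))
        j = prev[j]
    return lines[::-1]
```

A greedy first-fit breaker stuffs each line as full as possible; the global optimum will instead accept a slightly shorter early line in exchange for much more even spacing overall, which is exactly the behavior people praise in TeX.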


If TeX is “80s good”, Typst might be “90s” good, being generous.

Celebrating batch-mode typesetting in 2026 feels like some weird cyberpunk fixation.

Programmable like Emacs (but via Scheme), interfaced with major computer algebra systems, with tree-structured documents that are live-queryable and modifiable, and typesetting that rivals TeX without using TeX: TeXmacs provides all that, and much more (https://www.texmacs.org/tmweb/home/videos.en.html)


Erg. You're not wrong, but TeXmacs looks like more 80s software that no one wants to use anymore because the user experience is awful.

There was a point in the 1990s when Microsoft Word wasn't truly WYSIWYG. IIRC it was like an infinite page, and the line breaks and page breaks were "estimates".

Further, many docs from that era are trapped in abandonware.

TeX did one thing well for an era when often the only interface to the machine was over a Xyplex terminal server connecting to a tty at 9600 baud.


You're linking to posts from 15 and 18 years ago. And the post from 2011 is about how Donald Knuth wrote TeX (not LaTeX) in the early 1980s. While TeX and LaTeX have fundamental design flaws, it is much less awful to use them these days, with a rich selection of rather robust packages available that vastly reduce the need to go into hard-core LaTeX programming yourself.

I won't lie: It takes getting used to and you need to learn a lot if you want to achieve fancy complex typesetting effects. But - it's not half as inconvenient as it once was.


Part of the challenge is the inherent, irreducible complexity of the domain. "Make text look good on a page" leaves lots of details unspecified.

Another part is that many people built their own solution to their own corner of this domain, and not all of them had a deep appreciation for how the rest of the TeX system works.

I hear similar complaints about "make a web page look good", which is popular but also a huge mess of a system.


> "Make text look good on page" leaves lots of details unspecified.

Even just a sane layout renderer is incredibly hard. A decade ago I wrote a bespoke DNA-sequence typesetter (outputting SVG). I had Claude build an extension; for whatever reason it chose to build it from scratch instead of using the components I had built, and it did everything wrong.


Because Knuth wrote TeX, not LaTeX. All the parent comment's grievances are about LaTeX features, not TeX.

To be fair to Knuth, he had nothing to do with LaTeX. It's conceivable that one could start over from plain TeX and build up a different high-level system. (Then again, perhaps some of the brittleness of LaTeX comes from unavoidable issues with the TeX layer; Lamport is a very respected computer scientist too!)

It’s my understanding that Knuth had little to nothing to do with LaTeX, and that he himself uses plain TeX for his books.

The dichotomy comes from conflating the TeX syntax with the TeX macro system; both use backslashes.

The backslash-based syntax allows for some really powerful typesetting, far above anything that exists today. At the same time, using the backslash-based language all the way down, as macros, is what causes the frustration.

Typst kind of solves that by having its syntax implemented in Rust rather than in macros.


I’ve been using Typst for years now. Wrote my PhD thesis in it [1] as well as a book. Works great; can’t recommend it enough. I barely use packages, because what I need is either already included or pretty easy to write yourself.

[1]: https://github.com/rikhuijzer/phd-thesis


Didn't see a PDF of your thesis, except on your website [1]. But the version there (at least as it renders on my machine) has numerous formatting issues. For one egregious example, look at the letter spacing in the title and legend of Figure 2.2 (page 27): "civilia ns", "Pe rs ona lity s core". I'm sure the content is great, but using it as an example of Typst prowess seems ill-advised.

[1] https://huijzer.xyz/files/f72fa09561f20162.pdf


I don't see any issues with the title of Figure 2.2, but the legend and the x-axis label have weird letter spacing indeed. It seems like images like this are standalone (https://github.com/rikhuijzer/phd-thesis/blob/main/images/pe...) and probably aren't generated by Typst. So perhaps the weird spacing is not Typst's fault.

Looks like the SVG was converted from an EPS file, and the resulting SVG contains individual glyph positions (advances) for the characters in "Personality score", but it doesn't specify a valid font, probably because the font name was mangled in the original EPS file (which is pretty typical).

So whether the resulting file looks right depends on whether the rendering engine chooses the correct font. Looks like it's supposed to be Nimbus Sans or something metric compatible with that, but the serif font chosen by Typst looks obviously wrong.


I took a look at the repo, and it's probably the fault of the SVGs of the graphs, not of Typst itself. Now, you could have used Typst libraries to generate the graphs, but back then (2 years ago, I think?) it was probably a struggle.

Yeah, I don’t see the point of criticizing minutiae in a thesis that has already been accepted, but I agree: the graphs look out of place and generally not in the same style as the rest of the text. Also, I guess I am just really used to LaTeX’s font; it automatically gives an academic feel that I don’t get from this. Again, pure personal bias.

If anyone else is looking to make graphs with Typst, this can be done with https://cetz-package.github.io/ -- which is inspired by TikZ from the LaTeX world -- or https://lilaq.org/, which seems more appropriate for this type of data plotting.

> look at the letter spacing in the title and legend of Figure 2.2 (page 27): "civilia ns", "Pe rs ona lity s core"

The legend and title were generated by Gadfly.jl.


Anyone willing to use AI to convert the thesis to LaTeX so we can compare?

Wow. I was not expecting that topic when I clicked on the link you provided.

When I went through such a selection process years ago, we took all sorts of tests before and even after the selection process. Towards the end, the head instructor told us they don't really have a good way to measure who will make it through. What he did tell us though is that top physical fitness test scores were not indicative that a candidate will make it to the end.

Is there a PDF version or instructions for building your thesis? I'd like to read it.


How about adding a PDF release ;-)

I like LaTeX for the most part (I have had to use some weird hacks, but usually once they are done they are tucked away in a macro and can be ignored).

But I think the main things it has going for it are that it produces nice output and all the journals accept it. Does there exist a tool that renders Typst to LaTeX? That could play nicely with the existing ecosystem.


Pandoc can convert LaTeX to Typst and back but probably only for simple snippets without any obscure packages. It’s not lossless.

Huh. My special lady friend is in the process of finishing up her thesis using LaTeX, after ditching LibreOffice, which was a nightmare for some of the same reasons: bad UX, bad portability, and crippling bugs. There was a ramping-up period, and she had an out-of-date GitHub repo to help her, but she is incredibly happy that she switched. Collaboration could be smoother, I guess.

LaTeX is used because writing math in LaTeX is very good, despite everything else (like tables and figures) being so bad.

That's why people take the math subset of LaTeX and use it in other contexts - exactly like this product.


I agree that tables are not great, but figures? They seem to work pretty well, better than Word for sure.

I still find it easier to align columns of numbers on the decimal point in TeX than in anything else.

How is this even achieved with HTML or Word?

E.g. variable-width numbers like

           4.53
          13.98765
           7
  -1,000,234.76
Not perfect in TeX either, but at least possible.
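For reference, the standard LaTeX route for this is the siunitx package's S column type, which aligns entries on the decimal marker. A sketch (not compiled here; the table-format spec is sized for the digits above, and the comma group separators are dropped for simplicity):

```latex
\usepackage{siunitx}  % in the preamble

% S[table-format=-7.5]: room for a sign, 7 integer digits, 5 decimals
\begin{tabular}{S[table-format=-7.5]}
        4.53     \\
       13.98765  \\
        7        \\
 -1000234.76     \\
\end{tabular}
```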

In Word you add a decimal tab stop to the ruler. Word's main problem is that people are unaware of its features; they don't spend time learning it like they do with TeX.

https://imgur.com/jipZ90B


The irony of your comment...

If you are not referring to the use of exaggerated examples to make the point clearer, I wonder what you are referring to.

Not a LaTeX post goes by without someone talking about Typst. Come back when the HTML output works. Not having good accessible output was more acceptable back when TeX was invented; it definitely isn't now. And they made a new system and somehow made this worse than modern LaTeX.

Typst does have accessibility features. [1]

I don't worry too much about HTML output still being WIP. Even if TeX had a massive head start, Typst has a good development speed, and a little bit of slope makes up for a lot of y-intercept.

[1]: https://typst.app/docs/guides/accessibility/


This mirrors my experience.

It's worth noting that TeX was developed in the same time period that the details of lexical scope were being nailed down by Guy Steele in the Rabbit compiler for Scheme. It's not that TeX is an ad hoc system; it's more the case that people didn't actually know how to implement a better system at the time.


'People' in this case were Don Knuth (TeX) and Leslie Lamport (LaTeX). Both are Turing Award winners.

That's true. Do you know who else won a Turing Award? Tony Hoare.

What is Tony famous for? Well, lots of things, including his very important comparison-sort algorithm Quicksort, but, in this context, how about the Billion Dollar Mistake? That's a pretty nasty booboo in many programming languages, for which Tony blames himself because it was his idea.

Like your parent said, TeX shipped a long time ago and we have learned a lot since then. It is no surprise that we know how to do better today; in fact, it would be a serious black mark for computer science if we couldn't.


Which means what, exactly?

The details of lexical scope were defined in ALGOL 60 (1960), nearly two decades before Rabbit (1978).

People did know how to implement things back then, and TeX is a great example of that. It is just that our definitions of what counts as better have changed over the years.


Rabbit introduced closure conversion, an efficient implementation of first-class functions with lexical scope. The abstract discusses the difference from ALGOL:

https://dspace.mit.edu/handle/1721.1/6913


Not necessarily my experience. I wrote (and am writing) several academic documents with it. It has its quirks, of course, but with good classes such as memoir, I don't feel the need to do much more than basic customization in the preamble. It is still a good tool for me.

For me the memoir class made LaTeX worse. The titlesec package is more customizable, easier to use, and reduced the size of the preamble significantly.

That's if you need to customize headings. I usually can keep using the standard ones, so that's not my case, and I suspect this also applies to a lot of people.

I have been (and still am, I guess) a big LaTeX fan. I wrote a flora in LaTeX, with the idea that it would be a "living document" that I could update easily. That was a mistake. LaTeX really is very brittle. My flora no longer makes it through typesetting without erroring out, and figuring out what is going wrong takes so much time (and not for the first time). My idealism got ahead of practicality. Looking back, I should have used Word and InDesign; that combo would have prevented many headaches and been more stable.

Agreed. I love LaTeX, and LLMs made it a lot more accessible, but DX-wise it's still a huge hassle, especially when it feels easier to just generate HTML and produce a PDF from there. I mean, the best workflow is probably to write Markdown, take a site you like, pick a color scheme, and have Claude do the transformation, along with a script to make it repeatable.

> The only drawback I can see is the ecosystem being smaller and less mature.

This seems like the _perfect_ use for an LLM: systematically porting as much of the "ecosystem" as possible over to Typst. Is anyone doing that?


Two hours ago a coworker told me that he let an LLM port his LaTeX template to Typst. According to him, it was perfect.

As long as arXiv doesn't accept Typst, it's never going to be a real alternative to LaTeX. And the arXiv maintainers seem either hostile or indifferent to Typst.

https://www.youtube.com/watch?v=zNZlAbCOjd8

doesn't appear indifferent or hostile


TL;DW: there is interest in supporting Typst, but there are roadblocks:

* Higher-priority work currently being done on arXiv (moving from Perl to Python/cloud)

* No "standard" Typst distro

* Support team needs to be re-trained for a new language

* Persistence: TeX has 30+ years of history; will Typst be around in 30 years? Will current code still compile? Will existing documents still be supported?


Haven't watched it, but my only data point for that claim is this post: https://www.reddit.com/r/typst/comments/1dcu3p8/i_contacted_...

Yup, I love the idea of LaTeX; LaTeX itself, not so much.

I hope Typst eventually gets some equivalent to tkz-euclide, as I've never seen anything even remotely comparable.

Meh. For cross-platform, low-hassle use, just use LyX + XeTeX + decent fonts. Once you learn how to do the commonly annoying things (alignment, indices, pages without footers, custom footers, etc.; AI is good for this), you can achieve all of them. You will positively loathe conventional word processing after a short while. https://www.lyx.org/

Oh, wow, there's a new LyX release! It's been almost 30 years since I used it and it's still going.

I agree re. LaTeX. I've tried Typst for some complex projects, and it's just not quite there yet. ConTeXt on the other hand ... chef's kiss.

+1 for Typst being amazing.

I can actually write my own functions when I need to. I don't think I have ever written a LaTeX macro without having to look up a lot of stuff.


Is Typst appropriate for web apps; e.g., the input forms here?

Nope. Typst's primary output is PDF, and it is a stand-alone binary. It's a replacement for most uses of LaTeX to produce documents. It is not a replacement for this project, which focuses only on rendering LaTeX math code and can be embedded in multiple different runtimes.

I know exactly what you mean, and it's paired with a community that is absolutely sure they know exactly how things need to be done, and that everyone who wants it another way is dumb.

You want Typst: https://github.com/typst/typst

It's like the JSX of LaTeX: markup in a programming language, not a programming language pretending to be markup.


> When I am working with a small team, I do not care if my commits are ugly or repetitive

Your team cares though. Probably including yourself later. Maintaining proper commit history is always worth it.

