Hacker News | allixsenos's comments

This is the best piece of speculative fiction I've read in the last year :D :D :D :D

I didn't make it past page three. Enjoy responsibly.


I've been using Deckard since the earliest builds, and can confirm libghostty was the worst part of it, to the point I had to go back to my old terminal.

I came back to Deckard a week ago and I've been enjoying building out my startup rmBug in it, running many sessions in parallel and never missing a beat when one of them is ready for my input. I would go so far as to say it's been instrumental in our launch yesterday.

Thanks so much for building this, Gilles.


"Selling the wall and the ladder."

"Biggest betrayal in tech."

"Protection racket."

These hot takes sound smart but they're not.

The web was built to be open and available to everyone. Back when sites served static HTML from disk, nobody could hurt you because there was nothing to hurt.

We need bot protection now because everything is dynamic, straight from the database with some light caching for hot content. When Facebook decides to recrawl your one million pages in the same instant, you're very much up shit creek without a paddle. A bot that crawls the full site doesn't steal anything, but it does take down the origin server. My clients never call me upset that a bot read their blog posts. They call because the bot knocked the site offline for paying customers.
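The availability argument is mechanical: an origin can absorb a burst, but not a sustained flood aimed at uncached, database-backed pages. A minimal sketch of the kind of per-client token-bucket rate limit that sits in front of an origin for exactly this reason (plain Python with made-up numbers, not any vendor's actual implementation):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests/second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: serve a 429 instead of hitting the database

# A crawler firing 100 requests in the same instant only gets the burst through;
# the other ~90 are shed before they ever reach the origin.
bucket = TokenBucket(rate=5, capacity=10)
allowed = sum(bucket.allow() for _ in range(100))
```

Humans browsing a page every few seconds never notice a limit like this; a crawler requesting a million pages at once hits it immediately. That asymmetry is the whole point.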

Bot protection protects availability, not secrecy.

And the real bot problem isn't even crawling. It's automated signups. Fake accounts messaging your users. Bots buying out limited drops before a human can load the page. Like-farming. Credential stuffing. That's what bot protection is actually for: preventing fraud, not preventing someone from reading your public website.

Cloudflare's `/crawl` respects robots.txt. Don't want your content crawled? Opt out. But if you want it indexed and can't handle the traffic spike, this gets your content out without hammering production.
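The opt-out mechanism is a one-line robots.txt rule, and any well-behaved crawler checks it before fetching. A sketch of that check from the crawler's side using Python's standard library (the bot name `ExampleBot` and the rules are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Rules as they might appear in a site's robots.txt (hypothetical):
rules = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# ExampleBot is opted out of the whole site; everyone else
# is only barred from /private/.
a = rp.can_fetch("ExampleBot", "https://example.com/blog/post")  # False
b = rp.can_fetch("OtherBot", "https://example.com/blog/post")    # True
c = rp.can_fetch("OtherBot", "https://example.com/private/x")    # False
```

None of this is enforcement, of course: robots.txt only works against crawlers that choose to honor it, which is why the availability protections above still matter for the ones that don't.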

As for the folks saying Cloudflare should keep blocking all crawlers forever: AI agents already drive real browsers. They click, scroll, render JavaScript. Go look at what browser automation frameworks can do today and then explain to me how you tell a bot from a person. That distinction is already gone. The hot takes are about a version of the internet that doesn't exist anymore.


Evolving existing solutions that work is always a balancing act -- do you buy into the latest trends wholesale, or do you stop and consider your actual needs before picking the solution that will serve you best? A recap of how we successfully containerized our workloads without Kubernetes.


It's in the pipeline :)


seems DOA


The first version was built with Facebook login on purpose, because we offload the social graph building to FB. Destinations are suggested based on where your friends currently live, and where they've previously been or want to go in the future.

It's not just about not bothering with user registration and passwords, it's about using the data from FB to give users added value.

We are in the early stages of building a completely non-social experience, possibly including a fully logged-out mode, which will use even more signals to suggest destinations. We don't have an ETA to announce for this yet.

-Luka (Hitlist)


Ugh, really? Travel for me is almost exclusively about getting away from my "social graph." If I'm taking a vacation, it is hopefully nowhere near anyone I know.


Well said.


Hitlist is more like "flexible destination" though ;)

-Luka (Hitlist)


hypothetically, this applies to every company trying to give you an edge over something ever :)

-Luka (Hitlist)


w00t

