alwillis's comments | Hacker News

> I think this is the argument for UIs

To quote The Godfather II, "This is the business we have chosen."

The most popular and important command-line tools for developers don't have the consistency that Claude Code's command-line interface does. One reason Claude Code became so popular is that it works in the terminal, where many developers spend most of their time, and using it is a daily occurrence for many of them. Some IDEs can be just as difficult to use.

For people who don’t use the terminal, Claude Code is available in the Claude desktop app, web browsers and mobile phones. There are trade-offs, but to Anthropic’s credit, they provide these options.


> In a recent episode of Dwarkesh the guest who is a semiconductor industry analyst predicted that an iPhone will increase in price by about $250 for the same stuff due to increased ram/chip costs from AI. Apple will not be able to afford to put a bunch more RAM into the phones and still sell them.

Apple recently stated on an earnings call that they signed contracts with RAM vendors before prices got out of control, so they should be good for a while. Nvidia also uses TSMC for its chips, which may affect A-series and M-series chip production.

Yes, TSMC has a plant in Arizona but my understanding is they can't make the cutting edge chips there; at least not yet.


> Realistically you need +300GB/s fast access memory to the accelerator, with enough memory to fully hold at least greater than 4bit quants.

The latest M5 MacBook Pros start at 307 GB/s of memory bandwidth; the 32-core-GPU M5 Max gets 460 GB/s, and the 40-core M5 Max gets 614 GB/s. The CPU, GPU, and Neural Engine all share the memory.

The A19/A19 Pro in the current iPhone 17 line is essentially the same processor (minus the laptop and desktop features that aren’t needed for a phone), so it would seem we're not that far off from being able to run sophisticated AI models on a phone.


> Putting the GPU and CPU together and having them both access the same physical memory is standard for phone design.

> Mobile phones don't have separate GPUs and separate VRAM like some desktops.

That's true. The difference is the iPhone has wider memory buses and uses faster LPDDR5 memory. Apple places the RAM dies directly on the same package as the SoC (PoP — Package on Package), minimizing latency. Some Android phones have started to do this, too.

iOS is tuned to this architecture which wouldn't be the case across many different Android hardware configurations.


> The difference is the iPhone has wider memory buses and uses faster LPDDR5 memory. Apple places the RAM dies directly on the same package as the SoC (PoP — Package on Package), minimizing latency. Some Android phones have started to do this, too.

Package-on-package has been used in mobile SoCs for 10+ years; it wasn't an Apple invention, and it's not new. Even cheap Raspberry Pi models have used package-on-package memory.

The memory bandwidth of flagship iPhone models is similar to the memory bandwidth of flagship Android phones.

There's nothing uniquely Apple in this. This is just how mobile SoCs have been designed for a long time.


> The memory bandwidth of flagship iPhone models is similar to the memory bandwidth of flagship Android phones

It would be more correct to say that the memory bandwidth of ALL iPhone models is similar to that of flagship Android models. The A18 and A18 Pro do not differ in memory bandwidth.


> The A18 and A18 pro do not differ in memory bandwidth.

A18 Pro has a modest memory bandwidth advantage over the standard A18, which is part of why it can support ProRes recording and always-on display while the standard A18 cannot.


> Apple is having their Windows ME moment.

As someone who lived through the early days of Windows, macOS Tahoe and Windows ME aren’t in the same universe.

> It doesn't matter how much cheap hardware you throw at the unwashed masses.

It's meaningful that a product line that's 41 years old had its best launch for customers new to the platform. That's unprecedented in the computer industry.


Let's Encrypt has been checking for DNSSEC since they launched 10+ years ago.

The ACME standard recommends that ACME-based CAs use DNSSEC for validation, section 11.2 [1]:
       An ACME-based CA will often need to make DNS queries, e.g., to
       validate control of DNS names.  Because the security of such
       validations ultimately depends on the authenticity of DNS data, every
       possible precaution should be taken to secure DNS queries done by the
       CA.  Therefore, it is RECOMMENDED that ACME-based CAs make all DNS
       queries via DNSSEC-validating stub or recursive resolvers.  This
       provides additional protection to domains that choose to make use of
       DNSSEC.

       An ACME-based CA must only use a resolver if it trusts the resolver
       and every component of the network route by which it is accessed.
       Therefore, it is RECOMMENDED that ACME-based CAs operate their own
       DNSSEC-validating resolvers within their trusted network and use
       these resolvers both for CAA record lookups and all record lookups in
       furtherance of a challenge scheme (A, AAAA, TXT, etc.).
[1]: https://datatracker.ietf.org/doc/html/rfc8555/#section-11.2

Yes, that's my understanding as well. You'll notice my top-level comment from a few hours ago says that as well.

(You edited your comment to include more detail about when LE started validating DNSSEC; all I know is that it's been many years that they've been doing it.)


Just wanted to add the latest data on DNSSEC [1]. 25 million zones is a drop in the bucket compared to the size of the internet, but it's also not nothing.

    | Last updated                   | 2026-03-16 05:04 -0700 |
    |:-------------------------------|:-----------------------|
    | Total number of DS records     | 25,099,952             |
    | Validatable DNSKEY record sets | 24,559,043             |
    | Total DANE-protected SMTP      | 4,165,253              |

There's a graph of the growth of signed zones the past 7 years [2].

I get it that DNSSEC doesn't make a lot of sense for large organizations with complex networks that have been around for decades.

But if you're self-hosting a website for your personal use or for a small-ish organization and your registrar supports it (most do), there's no reason not to enable DNSSEC. I did it recently using Cloudflare and it was a single checkbox in the settings.

More than 90% of ICANN's ~1,400 top-level domains are estimated to be DNSSEC-enabled, so that shouldn't be a barrier.

Since most of us don't have a personal IT department at our disposal, DNSSEC gives the small guy protection against cache-poisoning attacks, man-in-the-middle attacks, and DNS spoofing. There are other ways to mitigate these attacks, of course, but I've found DNSSEC pretty straightforward.

[1]: https://stats.dnssec-tools.org/#/top=tlds

[2]: https://stats.dnssec-tools.org/#/top=dnssec?top=dane&trend_t...
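If you want to confirm your DNSSEC setup is actually doing something, here's a minimal sketch: validating resolvers set the "ad" (authenticated data) flag in the response header when the chain of signatures checks out. The helper name is mine, and the domain and resolver in the usage example are placeholders; `dig` from the BIND utilities is assumed.

```shell
#!/bin/sh
# has_ad_flag reads `dig` output on stdin and reports whether the
# resolver set the "ad" (authenticated data) flag on the answer,
# i.e. whether the response was DNSSEC-validated.
has_ad_flag() {
  if grep -q ';; flags:.* ad[ ;]'; then
    echo "validated"
  else
    echo "not validated"
  fi
}

# Example (requires network and a validating resolver such as 1.1.1.1):
#   dig +dnssec example.com A @1.1.1.1 | has_ad_flag
```

Note that the "ad" flag only tells you the *resolver* validated the answer; a stub client still has to trust the path to that resolver.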


Wait, that's not true:

* The same reasons not to deploy DNSSEC that face large organizations apply to you: any mistake managing your DNSSEC configuration will take your domain off the Internet (in fact, you'll probably have a harder time recovering than large orgs, who can get Google and Cloudflare on the phone).

* Meanwhile, you get none of the theoretical upside, which in 2026 comes down to making it harder for an on-path attacker to MITM other readers of your site by tricking a CA into misissuing a DCV certificate for you --- an attack that has already gotten significantly harder over the last year due to multiperspective. The reason you don't get this upside is that nobody is going to run this attack on you.

Even if the costs are lower for small orgs (I don't buy it but am willing to stipulate), the upside is practically nonexistent.

"Cache poisoning attacks, man-in-the-middle attacks and DNS spoofing" are all basically the same attack, for what it's worth. DNSSEC attempts to address just a subset of these; most especially MITM attacks, for which there are a huge variety of vectors, only one of which is contemplated by DNSSEC.

Finally, I have to tediously remind you: when you're counting signed domains, it's important to keep in mind that not all zones are equally meaningful. Especially in Europe, plenty of one-off unused domains are signed, because registrars enable it automatically. The figure of merit is how many important zones are signed. Use whichever metric you like, and run it through a bash loop around `dig ds +short`. You'll find it's a low single-digit percentage.
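The loop above can be sketched in a few lines. A non-empty answer to a DS query means the parent zone vouches for the zone's DNSSEC keys, i.e. the zone is signed. The helper name is mine and the domain list in the usage example is illustrative; `dig` from the BIND utilities is assumed.

```shell
#!/bin/sh
# count_signed takes a list of domains and reports how many of them
# publish a DS record in their parent zone (i.e. are DNSSEC-signed).
count_signed() {
  signed=0
  total=0
  for d in "$@"; do
    total=$((total + 1))
    # Non-empty DS answer => the parent vouches for this zone's keys.
    if [ -n "$(dig +short ds "$d")" ]; then
      signed=$((signed + 1))
    fi
  done
  echo "$signed/$total signed"
}

# Example (requires network):
#   count_signed example.com example.org example.net
```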


> The same reasons not to deploy DNSSEC that face large organizations apply to you: any mistake managing your DNSSEC configuration will take your domain off the Internet (in fact, you'll probably have a harder time recovering than large orgs, who can get Google and Cloudflare on the phone).

Set your TTL to five minutes and/or hand over DNS management to a service provider.

> Meanwhile, you get none of the theoretical upside, which in 2026 comes down to making it harder for an on-path attacker to MITM other readers of your site by tricking a CA into misissuing a DCV certificate for you --- an attack that has already gotten significantly harder over the last year due to multiperspective. The reason you don't get this upside is that nobody is going to run this attack on you.

It didn't save Cloudflare from a bad TLS certificate being issued. I still think that reducing the set of parties you have to trust from ~300 CAs down to the root servers and your registrar is a meaningful reduction in attack surface.

> DNSSEC attempts to address just a subset of these; most especially MITM attacks, for which there are a huge variety of vectors, only one of which is contemplated by DNSSEC.

How would authenticating DNS records cryptographically not address cache poisoning, MITM, and DNS spoofing in relation to DNS lookups? Also, DNSSEC doesn't have to solve every problem to be worth doing.

> Finally, I have to tediously remind you: when you're counting signed domains, it's important to keep in mind that not all zones are equally meaningful. Especially in Europe, plenty of one-off unused domains are signed, because registrars enable it automatically. The figure of merit is how many important zones are signed. Use whichever metric you like, and run in through a bash loop around `dig ds +short`. You'll find it's a low single-digit percentage.

Yet you complain about DNSSEC being too hard to deploy and not getting enough deployment. Wouldn't it be nice if they could leverage that automatic signing to also generate TLS, SSH, and other certificates?


> The same reasons not to deploy DNSSEC that face large organizations apply to you: any mistake managing your DNSSEC configuration will take your domain off the Internet (in fact, you'll probably have a harder time recovering than large orgs, who can get Google and Cloudflare on the phone).

There are several mistakes one can make to knock oneself off the Internet that have nothing to do with DNSSEC. These are not the bad old days; compared to 10 years ago, DNSSEC is a lot easier to administer.


If I accidentally yank the power cable out of my load balancer, I can plug it back in and I'm back up and running.

If I cock up my DNSSEC config, nobody can resolve any records under my org's domain (goodbye internal email!) and you've got to twiddle your thumbs for a period of time waiting for various timeouts to pass (go ask Slack how it went for them).

These things are not the same.


Totally agree that the "LLM sucks" posts should be accompanied by the prompt.

I agree, but at the same time it feels like victim blaming.

I don't know. Is pointing out that someone holding a drill by the chuck won't get the results they expect that bad?

But what if the drill is non-deterministic?

Nah, it's a variant of the XY Problem: https://xyproblem.info

> I think this is actually the reason the Neo has 8 GB of RAM (non-upgradable). It’s their anti-cannibalization strategy.

It has 8 GB of RAM because they wouldn't be able to hit the $599 price point with more, and their target audience doesn't need more. It's also why the SSD is slower than a MacBook Pro's or MacBook Air's; why it's the only device in the lineup, other than the entry-level iPad, with an sRGB display while the other devices have P3 Wide Color displays; why there are no Thunderbolt ports; why it supports only one external display, and only at 4K; and why there's no Wi-Fi 7.

These are some of the compromises they made to keep the price down. They're also using a binned A18 Pro with 5 GPU cores instead of the 6 core version in the iPhone 16 Pro and Pro Max.

There are lots of potential customers for whom a Mac laptop was out of reach; it's a lot more affordable at $49.91/month for 12 months for the $599 model.

Its display is better than PC laptops in the same price range, but that display is a non-starter for graphic designers, video editors, etc.

That's why cannibalization is a non-issue.


> It's also why the SSD is slower than a MacBook Pro or MacBook Air;

It's actually not that much slower, at least if you compare machines with the same amount of storage. The M2 and M3 MacBook Air with 256GB comes in at 1700 MB/s[1], while the Neo with 256GB is... drumroll... 1700 MB/s[2].

Yes, Air and Pro machines with more storage are faster. I have not seen any benchmark of the Neo with 512GB, so maybe it lags behind the Air and Pro there. But I've not seen anyone publish a benchmark which actually demonstrates that.

[1] https://www.reddit.com/r/mac/comments/1gvovdt/the_ultimate_g...

[2] https://forums.macrumors.com/threads/macbook-neo-has-up-to-8...


I should clarify that I was referring to the memory bandwidth. Compared to the 100 GB/s of a M3 MacBook Air, the 60 GB/s of the Neo is 40% slower. My M1 Pro MacBook Pro's memory bandwidth is 200 GB/s; that's 3.33x faster than the Neo.
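The arithmetic behind those figures, as a quick awk sketch (the bandwidth numbers are the ones quoted above):

```shell
# Relative memory-bandwidth comparison: Neo (60 GB/s) vs M3 MacBook Air
# (100 GB/s) and M1 Pro MacBook Pro (200 GB/s).
awk 'BEGIN {
  neo = 60; air_m3 = 100; m1_pro = 200               # GB/s
  printf "Neo vs M3 Air: %d%% slower\n", (1 - neo / air_m3) * 100
  printf "M1 Pro vs Neo: %.2fx faster\n", m1_pro / neo
}'
```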

> Apple is going to cannibalize their own laptop market.

As long as you buy a Mac laptop, Apple is fine with that, regardless of which one. That’s because they know who their customers are.

The Neo is in its own category; the $599/$699 Neo doesn't compete with a 14-inch MacBook Pro with an M5 Pro, 24 GB of RAM, and a 1 TB SSD at $1899. If you know you need more RAM and storage than the Neo offers, the M5 MacBook Air is $1099. But if you need to stay under $1000, the decision is clear.

If anything, the Neo is more competitive with the entry-level iPad with 128 GB of storage at $349; with Apple's keyboard at $249, the total is $598, $1 less than the entry-level Neo.

For someone who wants a "real" laptop with more flexibility than an iPad, getting the $599 Neo is a no-brainer.

