
Yes, as computers don't really get faster anymore, and I travel all the time (right now I'm writing this from a hotel restaurant), the biggest change I notice is battery life, which depends on energy-efficiency improvements that usually come with a process-node change. I'm not being sarcastic.


The M-series is way faster than previous generations, even in single-threaded general-purpose code. “Computers don’t get faster anymore” was Intel’s inability to innovate for many years, not an inevitability of physics. We will reach the end of Moore's law someday, but we're not there yet.


Computers get faster in benchmarks. If anything, regular computer tasks like checking your email or editing a file have more latency than they did 5 years ago. Software bloats to fill whatever container you put it in.


If you use the proper software, it doesn’t. Vim is as fast as it was 40 years ago, if not faster. I/O is also getting a lot faster; today's SSDs are what RAM used to be.


Software gets slower faster than computers get faster.


Why then don’t you say

“I generally care about battery life as the main feature of a computer.”

?


For my buying decisions, I trust TSMC process numbers (for large changes, not the 4nm bulls*t) more than battery-life figures published by a PR/marketing team. Process numbers are much harder to fake. And sure, performance per watt is what matters in general, but that is again hard to compare across brands and marketing teams.


Even then, I would say you care about battery life and performance per watt, and think process node is a better proxy for them than what manufacturers and reviewers claim.


>Yes, as computers don't really get faster anymore

This was true from the late 2000s to about 2017, but in the last four years or so computers have gotten much, much faster.

Obviously, it is workload dependent. If all you use is a web browser then you may not notice. If you use a computer for any sort of parallelizable processing work the last four years have been transformative.

For workstations, single-threaded performance has doubled since 2019, and since more cores are now standard, you can see up to a 10x increase in overall performance. For servers the improvement is even greater.
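The rough arithmetic above (2x single-thread gain plus more cores giving up to 10x) can be sketched with Amdahl's law. The 16-core count and 80% parallel fraction below are hypothetical illustration values, not figures from this thread:

```python
def combined_speedup(single_thread_gain, cores, parallel_fraction):
    """Estimate total speedup: a per-core gain applied on top of
    Amdahl's-law scaling for the parallelizable part of the workload."""
    parallel_speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return single_thread_gain * parallel_speedup

# 2x per-core gain, 16 cores, workload that is 80% parallelizable:
print(round(combined_speedup(2.0, 16, 0.8), 1))  # → 8.0
```

The takeaway is that the multiplier depends heavily on how parallelizable your workload is, which is why a browser user may notice little while a batch-processing user sees a transformative jump.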

I do synthetic aperture radar processing and image generation, and I can get through 10x the data in a day compared to 2019. That's just CPU work; on the GPU side it's even more dramatic: I can do things now that were impossible in 2019 due to VRAM limitations.

There have also been specialized accelerators added to CPUs, like the JavaScript optimizations Apple baked into the M1, which make a noticeable difference if all you use a computer for is web browsing.



