Hacker News | mrob's comments

YouTube doesn't implement a back function. A real back function would take you back to the same page you came from. If you click a video from the YouTube home page, then click the back button, YouTube will regenerate a different home page with different recommendations, losing the potentially interesting set of recommendations you saw before. You are forced to open every link in a new tab if you want true back functionality.

Double clicking is not a fix because it doubles latency, and more than doubles latency if you don't want to issue page loads that are immediately aborted. Long clicking is such a bizarre anti-feature that I never considered it might exist until I read about it in this HN discussion. Putting touchscreen-specific workarounds for lack of mouse buttons and modifier keys in a traditional GUI app is insanity.

"Spider and Web" is famous because it's a subversion of genre norms. It does not play fair by traditional text adventure game standards. I don't recommend it for beginners, because other than the central gimmick, the puzzles are not particularly interesting. You won't appreciate it unless you know how unusual it is.

And even if you do know how unusual it is, you won't necessarily like it. I can't go into detail without spoilers, but I can compare it to an analogous situation with the Fighting Fantasy gamebook "Creature of Havoc", which is, depending on your point of view, either a work of genius or a broken mess. Your opinion of "Spider and Web" will likely match that of "Creature of Havoc".


That would happen in a free market, but software is intentionally not a free market thanks to copyright/patent laws. In software, lock-in effects dominate. People will continue using bad software because it's necessary to interoperate with bad software other people are using. There's a coordination problem where everybody would be better off if they collectively switched to better software, but any individual is worse off if they're the first to switch.

>History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives.

Not true. Overwhelming technological advantage also works. As Hilaire Belloc put it:

  Whatever happens, we have got
  The Maxim gun, and they have not.
The AI arms race is a race for that kind of advantage. Whoever wins (assuming they don't overshoot and trigger the "everybody dies" ending) becomes de facto king of the world. Everybody else is livestock.

I used to think this, but the AI labs sure seem neck-and-neck in the model race. Doesn't appear that anyone is developing an enormous lead. So I've become skeptical of the runaway king-of-the-world-maker model scenario.

The open models seeming to be ~6 months behind is very encouraging, too.


AI progress can potentially be extremely non-linear because of feedback effects. The first to build an AI smart enough to accelerate building even smarter AIs wins (or loses along with everybody else if it's more successful than they expected).

People have said this, but so far if anything the opposite has been empirically true. OpenAI had a huge lead and it just didn't matter, Anthropic and Google both caught them and now they're neck and neck. It seems like compute overhang forecloses the possibility of runaway progress which eliminates all your competitors.

Any feedback process has a hard threshold for instability. The PA system doesn't howl until the microphone is close enough to the loudspeaker. The atomic bomb doesn't explode until the fissile material reaches critical mass. If you don't know where the threshold is you can't extrapolate.
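The threshold behavior can be illustrated with a toy loop-gain iteration (purely illustrative, not a model of any real system): a feedback loop whose round-trip gain is below 1 decays toward silence, while the same loop with gain just above 1 grows without bound.

```python
def feedback_response(loop_gain, steps=50, initial=1.0):
    """Iterate a toy feedback loop: each pass through the loop
    multiplies the signal by loop_gain. Below 1 the signal decays
    toward zero; at or above 1 it sustains or grows (the PA
    system 'howls')."""
    level = initial
    for _ in range(steps):
        level *= loop_gain
    return level

# Nearly identical systems, qualitatively different outcomes:
print(feedback_response(0.95))  # decays toward 0
print(feedback_response(1.05))  # grows explosively
```

A 10% change in gain flips the system from stable to divergent, which is why extrapolating from the stable side tells you little about where the threshold sits.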

Compute is a limiting factor now, but there have already been huge improvements in compute efficiency, e.g. mixture of experts. It seems extraordinarily unlikely that there are no more to be found. And compute capacity continues to increase too.


>They may be describing a radical new chemistry that I'm not familiar with.

It's probably pot still vs. reflux still. Chemists use fractionating columns to get better separation. Home distillers won't necessarily do so, so official advice has to assume they will not.


Yeah, column stills exist for home use but they’re not very popular. They’re big and expensive and strip flavor. It’s probably because home distilling, like home brewing, is largely focused on the craft side rather than trying to get drunk cheaply.

If you’re trying to get drunk cheaply, and without tasting liquor, you cannot beat the product and efficiency of a column still.

But I want my whiskey or apple brandy to have the characteristics of the mash I distill it from. A column still would reduce that.

So most home distilling is a pot still for sure.


We could be breaking new ground with spinning band distilled moonshine.

DVDs are extremely robust against scratches, even more so than CDs. Unlike CDs, which have the data protected by only a thin layer of lacquer under the label, DVDs sandwich the data between two layers of polycarbonate. The error correction is improved too.

Unlike hard disks, they're practically immune to shock (e.g. being dropped). Unlike SSDs and unlike hard disks, they're immune to ESD. And even if you somehow manage to damage one, it's just one, not your whole collection.


>jaggies are a visual distraction

So are serifs, and people don't complain about those. Whether any "visual distraction" actually distracts you is a matter of what you're accustomed to. If you read enough cursive or blackletter it will start to look normal to you. I disable anti-aliasing because I'm accustomed to aliasing and it doesn't distract me at all. In exchange, I get sharp text on a 1080p monitor, effectively quadrupling my graphics performance because I no longer need 4K. I'd prefer bitmap fonts, but in practice I find full automatic hinting of vector fonts good enough.

The only cases where I can see anti-aliasing helping are with Chinese and Japanese fonts, which have characters with unusually fine details. But on any GUI using Fontconfig you can enable anti-aliasing for those fonts specifically and leave it disabled for the rest.
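A sketch of that per-family approach in `~/.config/fontconfig/fonts.conf` (the family name is an example; the exact match rules would need adapting to whichever CJK fonts your system uses):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Disable anti-aliasing globally -->
  <match target="font">
    <edit name="antialias" mode="assign"><bool>false</bool></edit>
  </match>
  <!-- Re-enable it for a CJK family (family name here is illustrative) -->
  <match target="font">
    <test name="family"><string>Noto Sans CJK JP</string></test>
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
  </match>
</fontconfig>
```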


Serifs are chosen intentionally to be harmonious with the overall letterforms. They provide a feeling of visual stability and additional cues for recognizing letterforms. They provide a kind of consistency. They're not a distraction.

Jaggies come from a limitation of the pixel grid. They arbitrarily make diagonal strokes and curves bumpy while horizontal and vertical strokes are perfectly smooth, an inconsistency that would otherwise have no rhyme or reason behind it. Before letterforms were constrained to square grids, nobody was making diagonals and curves bumpy because it was a desirable aesthetic effect.

Jaggies are a distraction from the underlying letterform we all recognize. We know they are an undesirable distortion. Serifs are not. They serve an intentional aesthetic purpose, proportioned in a carefully balanced way.


Serifs are a skeuomorphic artifact of stone-carved text. They're no more legible than sans-serif fonts (see https://news.ycombinator.com/item?id=47492894 ). The only reason people like them is because they're used to them. You can get the same feeling from bitmap fonts if you read them enough.

I very intentionally didn't say serifs were more legible than sans-serif.

There are reasons people like them more than just that "they're used to them", however. I named a couple of them. Just because they originated in stone doesn't mean we kept using them for the same reason. A lot of things originate for one reason and then become used for other reasons.

Believe me, I got "used to" bitmap fonts throughout the 80's and 90's. But I still always preferred the 300dpi version of a document from my LaserWriter and then inkjet. Getting used to bitmap fonts never meant preferring them for general computer usage. Jaggies that appear arbitrarily on some strokes but not others are not visually pleasing. Nostalgic, maybe, but virtually never anything you'd choose if you weren't intentionally trying to create a retro vibe.


If you want minimum latency, you want the input side of a traditional vocoder, not an FFT. This is the part that splits the modulator signal into frequency bands and puts each one through an envelope follower. Instead of using the outputs of the envelope followers to modulate the equivalent frequency bands of a carrier signal, you can use them to drive the visualizer circuit.

That can be done with analog electronics, but even half an analog vocoder needs a lot of parts. It's going to be cheaper and more reliable to simulate it in software. This uses entirely IIR filters, which are computationally cheap and calculated one sample at a time, so they have the minimum possible latency. I'd be curious if any LLM actually recognizes that an audio visualizer is half a vocoder instead of jumping straight to the obvious (and higher latency) FFT approach.
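A minimal per-sample sketch of that idea in Python (band frequencies, Q, and smoothing rate are illustrative choices, not taken from any particular vocoder design): each channel is a band-pass biquad IIR, using the standard RBJ cookbook coefficients, whose rectified output feeds a one-pole envelope follower. Every new sample updates the channel levels immediately, with no block buffering.

```python
import math

class BandEnvelope:
    """One visualizer channel: band-pass biquad followed by a
    rectify-and-smooth envelope follower. Both stages are IIR,
    processed one sample at a time."""
    def __init__(self, fs, f0, q=2.0, smooth_hz=30.0):
        # RBJ cookbook band-pass (0 dB peak gain), normalized by a0
        w0 = 2 * math.pi * f0 / fs
        alpha = math.sin(w0) / (2 * q)
        a0 = 1 + alpha
        self.b0, self.b2 = alpha / a0, -alpha / a0
        self.a1, self.a2 = -2 * math.cos(w0) / a0, (1 - alpha) / a0
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0
        # One-pole smoothing coefficient for the envelope follower
        self.c = 1 - math.exp(-2 * math.pi * smooth_hz / fs)
        self.env = 0.0

    def process(self, x):
        # Direct form I biquad (b1 is zero for this band-pass)
        y = self.b0 * x + self.b2 * self.x2 - self.a1 * self.y1 - self.a2 * self.y2
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        self.env += self.c * (abs(y) - self.env)  # rectify + smooth
        return self.env

# A 440 Hz tone should excite the 440 Hz band far more than the 4 kHz band.
fs = 48000
bands = [BandEnvelope(fs, f0) for f0 in (110, 440, 1760, 4000)]
for n in range(fs // 10):  # 100 ms of signal
    sample = math.sin(2 * math.pi * 440 * n / fs)
    levels = [b.process(sample) for b in bands]
```

After the loop, `levels` holds one smoothed magnitude per band, ready to drive a bar display, and each value was current to within one sample.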


Using an LLM to target an FPGA would be interesting: have it generate the Verilog/VHDL.

Much faster, above all, but that would be digital signal processing with quantisation etc. I am targeting a zero-latency, purely analog circuit. I'm not sure an FPGA can do it...

I got it to produce a 6x LM3915 + 10x TL072CP + 18x 1N4148 solution. Need to order the BOM and try it in my free time.

Audio mastering is already automated to the level of a mediocre human:

https://github.com/sergree/matchering

(I haven't actually tried this, I just watched the linked Benn Jordan video.)

IMO, the ideal would be for all music to be supplied unmastered so the listener's playback software can apply this process to their own taste. Mastering is necessary for listening with garbage playback equipment (e.g. phone speakers) or noisy listening environments (e.g. cars, parties), but it makes things sound worse in good conditions. The best sounding music CDs I own are classical CDs on Telarc that have liner notes bragging about the complete lack of mastering.
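Matchering itself does considerably more (matching spectra, peaks, and stereo width against a reference track), but the core idea can be sketched as simple loudness matching. This toy version, which only equalizes RMS level against a reference, is my own illustration and not the library's actual algorithm:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of float samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_loudness(target, reference):
    """Scale `target` so its RMS level matches `reference`.
    Real mastering tools also match spectra and stereo width
    and apply limiting; this is only the loudness step."""
    gain = rms(reference) / rms(target)
    return [s * gain for s in target]

quiet = [0.1, -0.1, 0.1, -0.1]
loud = [0.5, -0.5, 0.5, -0.5]
matched = match_loudness(quiet, loud)  # now at the same RMS as `loud`
```

If this ran on the playback side instead of being baked into the release, the listener could choose how aggressively to apply it.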


> Mastering is necessary for listening with garbage playback equipment (e.g. phone speakers) or noisy listening environments (e.g. cars, parties), but it makes things sound worse in good conditions.

Eh? I listened to it on quite good nearfield gear, in a decent room, and the AI track linked above still sounds like it needs a good bit of help from a responsible adult to bring it up on this rig. :)

Good mastering helps everywhere -- on all systems. For instance: The sound of Steely Dan is pretty good on playback with about anything, I think, and that sound took a ton of work.

And while classical music is not my first preference, I do love me a good Telarc recording. I strongly suspect that the signal path they use isn't necessarily quite as pure as they insist it is. Everything is a tone control, including a microphone -- and money is money. They're not going to reschedule an orchestra to fix an untoward blip at 3 kHz. They'll just fix it in post (hopefully, as minimally as possible) and send it.

But otherwise, I agree. The mastering process can be automated. Ultimately, it will be. And for sure, it will also be a customizable user preference.

Some of that work has already been in the bag for decades. Ford, for instance, has been using DSPs in their factory car audio systems to shape sounds in unconventional ways for over 30 years. This gives them a lot of knobs to turn, and to fix into constraints, to help shape a listener's chosen music to sound as good as it can on less-than-ideal built-down-to-price on-road audio systems.

Or at least: It sounds as good as it can to a consensus of engineers, or of a focus group.

But the knobs exist. And they don't have to be fixed or constrained: They can (and will) be automatically twisted to suit a listener's preferences.

I'll try to make time to check out your link in a day or two.

