
And here's the ruining of Pixelmator Pro everyone was waiting for. I paid a one-time 20 euros for it (discounted), and I would gladly pay full price again for a new major version.

I don't want yet another subscription.

I see that they can still be bought (for now) but I wonder how long that will last.


Pixelmator Pro has been upgraded a couple of times under Apple's wing, and it is not being ruined.

You'll still be able to buy it if you want. All apps can still be bought. It's in the text.

Apple surprised me nicely there.


One-time purchase versions are still available. For Pixelmator Pro it's $49.99 on the App Store.

That's not very reassuring.

For now.

It's a pun, look up Alan Dye :)

I had friends use those exact words, just with "Windows Phone" and "Windows 8 Computer" swapped in.

That's not what they mean. As a developer, the API you used to develop your app was now deprecated with no migration path. That meant your app was deprecated, with no migration path.

For an app platform already a distant third place and struggling to attract developers, pissing off the few devs you do have TWICE was not a smart move.


Even then, that happened at most twice, as you say, not three times as the other poster said.

And I disagree with your implicit claim that the WP7 & WP8 Silverlight -> Win10 UWP transition had no migration path. There was >= 90% source code similarity, bolstered if you had already adopted the Win8.1/WP8.1 "universal" project templates. And Microsoft provided tooling to ease the transition. Sometimes it was literally just s/Microsoft.Phone/Windows.UI/g.

Games were a different matter, I'll admit. XNA as an app platform had no direct replacement that was binary compatible with Win10 desktop, but even then, not only was DirectX already available from WP8.0, but Microsoft invested in MonoGame as an XNA replacement precisely because they knew the end of XNA would hit hard. (In fact it was the Windows Phone division that had essentially kept XNA on life support in its final years so that WP7 games would not break.)


"the API you used to develop your app was now deprecated with no migration path."

Seems that's the standard now for .NET desktop dev. Every 2 or 3 years MS cranks out a new XAML-based framework that's not compatible with the previous one and never gets completed before the next framework comes out.


Nobody in their right mind should be touching any Microsoft provided API that isn't already well established (like Win32 and Direct3D).

I'm happy they're at least maintaining (to a limited extent) Windows Forms and WPF and updating their styles to fit with their fancy Fluent design.

But even that is a pretty sad state of affairs, since Windows Forms should be able to get that info from uxtheme (which Microsoft fumbled) and WPF should be able to get that info from the style distributed with the system-installed .NET framework (which Microsoft fumbled and now only exists for backcompat).

For the company with the best track record for backwards compatibility (with Windows), they sure suck at developing and evolving the same API for long.


So what is the right way that Skia uses? Why is there still discussion on how to do vector graphics on the GPU right if Skia's approach is good enough?

Not being sarcastic, genuinely curious.


The major unsolved problem is real-time high-quality text rendering on the GPU. Skia just renders fonts on the CPU with all kinds of hacks ( https://skia.org/docs/dev/design/raster_tragedy/ ). It then caches and draws them as textures.

Ideally, we want as much as possible rendered on the GPU, including support for glyph layout. This is not at all trivial, especially for complex scripts like Devanagari.

In a perfect world, we want to be able to create a 3D cube and just have the renderer put the text on one of its faces, and have it rendered perfectly as you rotate the cube.
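For context, the "raster on CPU, draw as texture" approach described above looks roughly like this (a minimal sketch using FreeType and OpenGL, assuming a GL 3+ context; it is not Skia's actual code, and shaping/layout is not shown):

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <GL/gl.h>

    // CPU side: FreeType does the hinting, anti-aliasing, and rasterization.
    // GPU side: the finished bitmap is uploaded as a texture and later drawn
    // as a quad somewhere in the frame.
    GLuint upload_glyph(FT_Face face, FT_ULong ch) {
        if (FT_Load_Char(face, ch, FT_LOAD_RENDER) != 0)
            return 0;
        FT_Bitmap &bmp = face->glyph->bitmap;

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // glyph rows are tightly packed
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, bmp.width, bmp.rows, 0,
                     GL_RED, GL_UNSIGNED_BYTE, bmp.buffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

All the hard typography work happens before the GPU is ever involved; the GPU only sees a rectangle of pixels, which is exactly why the glyphs fall apart when you scale them or put them on a rotating cube.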


It's hard to justify Liquid Glass in general. It marries the wastefulness of flat design (in terms of space) with the visual excess of skeuomorphism, without even providing any affordances (does the sidebar being raised give you any new information on how to use a sidebar? No).

If you're a designer at a top 10 S&P 500 company making 6 figures, you owe it to yourself to have some love for your craft. If a PM tells you to shove a UI style meant for an unsuccessful VR device onto desktop and mobile platforms, say no. Get your colleagues to say no. Make that PM read everything the Nielsen Norman Group has ever written. Read it too.


> If a PM tells you to shove a UI style

More than likely designers are making up work to justify their jobs. Not good for your career if you admit the desktop interface was perfected in ~1995.


100% this. I recall watching their launch video about Liquid Glass. It was filled with ego-driven "we're changing the world here" nonsense. They were designing in a bubble and wanted to do something different so they could justify the work. It was never about the user.

My hot take on this is that there is a business goal to Liquid Glass that extends beyond ego - but it's about the restoration of Apple UI as an exclusive status symbol, not as a usable experience.

Apple looked at innovations in hardware form factor and, rather than trying to out-innovate in that sphere, said, instead: how do we make something in software that nobody would ever try to imitate, and thus position ourselves as the innovators once again?

And the monkey's paw curled and said: Liquid Glass is a set of translucency interactions that are technically near-impossible to imitate, sure, but the real reason nobody will try to imitate is because they are user-hostile to a brand-breaking extent.

And Apple had nobody willing to see this trainwreck happening and press the brakes.


The business goal is clear: visionOS. Liquid Glass is designed with AR in mind, that's the only place where it actually makes some sense. Pretty much the same thing as Microsoft did with Windows 8, trying to unify the UX and visual style across PCs and phones. And it's going similarly well.

That's interesting and plausible.

It also contributes to obsolescing older hardware.


We saw this exact playbook with iOS 7. I don't think you need to attribute malice or read into it much.

iOS 7 relied heavily on blurring effects -- a flex at the time, thanks to the efficient graphics pipeline they had compared to Android. This was coming off the heels of Samsung xeroxing their designs, and they wanted a design that would be too expensive for competitors to emulate without an expensive battery hit. Liquid Glass is following in this tradition.

And similarly to iOS 7, the move to flat design was predicated on the introduction of new form factors and screens. Flat design lent itself well to new incoming screen sizes and ratios. Prior to that there were really only one or two sizes, so it was easy to target pixel-perfect designs. As Apple moves to foldables and more, this design flexibility is once again required.

As for no one trying to emulate it, I'm not so sure: OriginOS 6 ripped it off back in October.


I honestly think they could have done that and still had some taste and considered usability much more than they did.

A design system I am required to use made a recent "major" update announcement: "Styles have been converted to variables. Styles are out and Figma variables are in".

Whereas what we really needed was a stable release version (now a year late from the original promised date) so we can build out UI components for the content editors to use that don't require constant design tweaks.

You know the designers are:

a) Just fucking around having fun

b) Making busy work to drag it out as long as possible

As it's now 4 years since they began working on the "design system", there's a good chance it will get canned as there's some more modern design they will want to use.


There is a product I have to use that updated its UI design some years ago, only the functionality is partially implemented and the new design has some functional elements that weren't present in the old configuration.

This has been solved with a button that switches the layout between the two designs; when I'm making changes, it is sometimes necessary to flip back and forth between the two mid-change.


Material Design?

I used to work at one big company. Every newly hired design director desperately wanted to create a new design for the corporate portal because it would add a new line to his resume.

Marketers are not designers and vice versa. Of course the press release is going to be melodramatic, no matter what the designers thought, or were told.

“Marketers are not designers” is fine, except it was the designers themselves pushing the marketing drivel in those videos.

Who was driving that, though? If the project has high-level management buy-in, the people in the scripted videos are going to be on message if they want to stay employed.

You’re not looking far enough ahead. Liquid glass isn’t about Mac. It’s about VisionPro and wearables. This is a strategic play by Apple.

Microsoft totally screwed up the Windows interface with Windows 8 to suit tablets, which they viewed as the future of computing. Not only were they wrong, they also really broke the UX for the users they did have, in service of a new product that hardly sold and still doesn't (Windows tablets). Eventually they had to cave in, but Apple is more stubborn than Microsoft.

Even if Apple is right, why shoehorn the future into the present on devices unsuitable for its new paradigms? The iOSification also only worsened the macOS UX. It's one of the reasons I moved to Linux with KDE which I can configure as I like.

If they want to make the AR OS of the future, then build it on the Vision Pro where it belongs.


Microsoft may have "caved", but we're still stuck with two different settings menus and a start menu that prioritizes ads and search results over your own programs.

Two settings menus? We have settings menus styled after every version of Windows from 3.1 all the way to 11, sometimes in multiple styles depending on which settings you want to look at. It's a total shitshow.

Yeah, Windows makes a Linux desktop using 3 versions of GTK, Qt, Motif, and GNUstep look homogeneous in comparison.

Windows is borderline unusable to me without Open Shell / Win 7 settings. I refuse to learn yet another icon idiom, just to have it change 2 years later. Thinking of trying Bazzite for my upcoming second gaming machine build (for my daughter) because I'm tired of Windows. If it goes well, I may convert multiple Win 10 HTPC/gaming machines.

It’s not that bad if you configure it. Much like much of Linux…

Apple just reduced Vision Pro production, but Liquid Glass was in motion well before that. What leaves me scratching my head is that I never got the impression Apple believed in Vision Pro. It launched because, after years of research, management wanted to see if the effort was worth continuing to invest in, but that wasn't a vote of confidence.

I'll have to second this. It's not even on Apple's homepage! I hadn't heard it mentioned for months before today. It had its niche share of users who actually found it useful, but apart from them it seems that the world is not ready for spatial computing (or maybe current spatial computing isn't ready for people, who knows?).

The hardware seems good, but with it being tied to the Apple ecosystem there's just no way.

I'd buy one if I could use it with my Linux (KDE) workstation, but there's no chance I'm going to be using it via a mac.


I'm hoping the new Valve headset will be, like, 60% of what the Apple Vision is. My boss got the Apple Vision on launch day and it is really premier hardware: visuals that are almost exactly like seeing the thing you're looking at in real life, and the best hand sensing / interactivity I have experienced, even though it still had flaws.

But being tied to Apple's ecosystem, not being really useful for PC connection, and the fact that at least at the time developers were not making any groundbreaking apps for it all makes it a failure in my book.

If Valve can get 60% of that and be wirelessly PC tied for VR gaming then even if they charge $1800 for their headset it will likely be worth it.


I have a vision pro (obtained on day 1 for development purposes), and have given demos of it to a number of non enthusiast/non techie people.

All of them immediately hate that it’s bulky, it’s heavy, it messes with your hair, messes with your makeup, doesn’t play well with your glasses, it feels hot and sweaty. Everyone wants to take it off after 5-10 minutes at most, and never asks to try it again (even tho the more impressive 3D content does get a “that’s kinda cool” acknowledgment).

The headset form factor is just a complete dud, and it’s 100% clear that Apple knew that but pushed it anyway to show that they were doing “something”.


If it weren’t $3500+ I’d love one. The world isn’t ready for that price point.

Exactly. More expensive than a high end desktop or laptop while having less useful software than an iPad. No thanks.

If it were around the $500 point I’d pick one up in a heartbeat. Maybe even $1000. But $3500 is nuts for how little they’re offering. It seems like a toy for the ultra rich.

I assumed the price would eventually come down. But it seems like they’ll just cancel the project entirely. Pity.


I’m assuming Vision Pro is viewed as what the Newton was to the iPhone. It will provide some useful insight way ahead of its time but the mainstream push will only happen after a number of manufacturing breakthroughs happen allowing for a comfortable daily driver UX. Optics and battery tech will need multiple generational leaps to get to a lightweight goggle / sunglasses form factor with Apple-tier visuals, tracking, and battery life…

Magic Leap 2 and HoloLens 2 proved that we still haven't cracked the code on AR/XR. Similar price point, plenty of feasible enterprise use cases for folks willing to pony up money to hire Unity or Unreal devs. And I'm sure there are enough of them tired of being flogged to death by the gaming industry. But they both went splat.

It's going to take a revolution on miniaturization AND component pricing for XR to be feasible even for enterprise use cases, it seems.


Apple can afford to improve and cheapify this thing for a decade.

That’s sort of what they did with the Watch.

It has incrementally improved, and gotten cheaper, to the point that I now see them everywhere. When they first came out, they were pretty expensive. Remember the $17,000 gold Watch (which is now obsolete)? The ceramic ones were over a couple of grand.

But the dream of selling Watch apps seems to have died. I think most folks just use the built-in apps.


The $17,000 Apple Watch was a (rather silly) attempt to compete in the high end watch space. However, they also launched the base "Sport" model at US$349.

Not really anything like the watch, the existence of a stupidly expensive "luxury" version doesn't change the fact that the normal one started at $350.

I think the current rumor is that development of a cheaper XR headset has been shelved in favor of working on something to compete with Meta's AI glasses.


Did they commit to additional production of the Vision Pro? I read their announcement as quiet cancellation of VR products. They announced some kind of vaporware pivot, but I didn't read a single analyst projection that Apple ever intended to bring another wearable to market. Customer usage statistics of the Vision Pro are so low Apple hasn't even hinted about reporting on them.

Wearable products, outside of headphones, have a decade-long dismal sales record and even more abysmal user retention story. No board is going to approve significant investment in the space unless there's a viable story. 4x resolution and battery life alone is not enough to resuscitate VR/AR for mass adoption.


> outside of headphones, have a decade-long dismal sales record

Outside of headphones and watches


Do they sell many Apple Watches? Maybe it is a Euro thing, but I only very rarely see people wearing one.

I would see 9 Garmins for every 1 Apple Watch, for instance, and many more people wearing cheap Casios or no watch at all.


I dunno. I see them all the time here (Seattle). Wikipedia estimates 267 million sold as of 2023.

I see mostly Apple watches, a few Samsungs, a small smattering of Pixel watches, and then rarely other brands like Garmin and what not around me.

That's probably regional then. In my area most people using watches nowadays are usually into sports.

I must admit I don't understand the point of a smartwatch when most people have their smartphone in their hand a significant amount of the day, and said smartphones' screen sizes have been increasing over the years because people want to be able to doomscroll through pictures and videos and interact with WhatsApp all day. I don't know how you can do that from a tiny screen on a watch.

Those like me who don't subscribe to that way of living don't want distractions and notifications, so they use regular watches and would see a device that needs to be charged every few days as a regression.

Some people mention payments, but I see people paying with their smartphone all the time; since they have it at hand or in a nearby pocket at any time, having it in a watch doesn't look like a significant improvement. I'd be curious to see a chart of smartwatch adoption by country.


Apple Watches have the highest market share in a lot of the world's markets. According to this analysis [1], watchOS (Apple Watches) makes up around half of all smartwatches used in Europe. Global sales put Apple around 20-30% market share, with brands like Samsung and Garmin around 8% [2]. I haven't found good US-only statistics to show what the market share of watchOS is, but I'd imagine it's probably close to 50% or more.

I do agree though, anecdotal experiences will vary depending on the kind of people you hang out with. For the people I know heavily into running and cycling, brands like Garmin are over represented. Meanwhile lots of other consumers practically don't even know these are options.

[1] https://www.mordorintelligence.com/industry-reports/europe-s...

[2] https://scoop.market.us/smartwatch-statistics/


I'm in India and the Xiaomi watches are everywhere (they probably don't sell those in the States/EU?) But also Apple and Samsung.

Claiming watches, phones and absolutely everything else they make are everywhere in Poland.

They need to work out how to drop the price. I want one. But I really can't justify that price.

Recent moves have convinced me that Apple is getting ready to push Vision Pro substantially harder.

In recent weeks, I’ve been getting push notifications about VP.

They hired Alex Lindsay for a position in Developer Relations.

And there’s the M5 update.

Just remember, it's a lot cheaper than the original Mac (inflation adjusted). Give it 40 years – hell, given the speed of change in tech these days, it won't even take 10.


I think they bought the metaverse hype and rushed it. If only they had put half that energy into AI, we'd have a Create ML with something other than YOLOv2 in 2026.

. . . so other devices are required to have the same interface? No, they're not. Just because you want to share enough design cues to make people understand they're dealing with the same brand doesn't mean you have to hammer square pegs into round holes.

Not to mention the fact that first, you have to get to a point where AR wearables are commercially viable, and we don't seem to have hit that point yet.


It doesn't really introduce anything that makes Vision Pro in any way better, though.

I think this is the right read in terms of intent but I also feel it offers a lens into the silliness of Apple's current strategy around all this. VisionPro appears to be currently floundering, and no matter how much they try to make it unintrusive and airy and transparent in its interface, it's presently an unwieldy device not designed to leave the home or office. Predicating company-wide design systems on this line being the future feels aspirational at best and delusional at worst. And what good is liquid glass on a Mac? To show me an obscured glimpse of my desktop background and add visual clutter?

(Apologies to @cyberge99 if my tone comes off intense, this is not to come at you but rather is just me venting my frustrations with Apple. I think you are correct in your assessment of the idea here.)


What’s frustrating about the VisionPro is their absolute refusal to address it as a giant screen.

Everyone I know describes this use case first: “Will be awesome when it replaces my 2x34" screens”. I described it to the salesman when he asked me why I wanted to try it. He never showed it. Gave him 0/5; he complained; I explained this is specifically what I asked for. You can emulate one screen in Vision Pro, but it's absolutely obnoxious about making everything about apps and iPhotos 3D whatever. Users want it. Apple is hell-bent on not addressing that use case, and on addressing family use cases first.

Imagine they find a proper UI to visualize an infinite TypeScript file. Something like: flinch and you find yourself in a method; look elsewhere and you immediately see the other method. Make it viral by making it possible to write files in a way that is not practical for normal-screen users, like the old “max 1 screen height” limit. View your team in the corners of your vision. THE tool for remote working.

Workplaces would look futuristic. Your experience at the workplace would be renewed thanks to Apple.

And then, reuse the desktop’s UI on VisionPro instead of the desktop using VP’s concepts.

But no, Apple prefers killing off Vision Pro and imposing Liquid Glass on everyone. (Now waiting for my threat letter from Steve Jobs for suggesting ideas.)


> In waiting for my threat letter from Steve Jobs ...

Ummm, you know he died yeah?


Sounds like Windows 8 designing their touch-first interface for a desktop, with about the same success.

>This is a strategic play by Apple

No, this is the fault of a company and industry with way too much money and not knowing what to do with it.

So they hired a bunch of artists who would otherwise be carving wood in a decrepit loft somewhere after taking half a mushroom cap. These people now decide how computers should operate.

I remember watching a documentary from the 80s where Susan Kare explained how every pixel in the Macintosh UI was deliberately put there to help the user in some way. One lady did the whole thing, the whole OS.

Now we have entire teams of people trying to turn your computer into an avant-garde art piece.


> a bunch of artists who would otherwise be carving wood in a decrepit loft somewhere after taking half a mushroom cap. These people now decide how computers should operate.

…brother, you’ve just described the history of the personal computer and the Internet. It’s not the hippie artists causing this problem, I promise you that.


Bill Atkinson designed HyperCard on LSD.

https://www.mondo2000.com/the-inspiration-for-hypercard/

The last decade or so of Apple designers have been as out of their minds on ego and cocaine as Donald Trump Jr.


Not only that, we have teams of people that very obviously (based on OP) don't talk to each other.

Eh, I would disagree as there's nothing in it where you go "Oh wow, that's why they did it" in the context of Vision Pro or wearables.

It seems much more likely that the driver here was to produce a UI that was resource intensive and hard to replicate unless you control the processors that go into your devices as well as the entire graphics processing stack that sits above them. It seems created to flaunt the concept of "go ahead and try to copy this" to Google and Microsoft.


If it's a strategic play, it's a terrible one that douses usability in gasoline and sacrifices it at the altar of visual novelty for no real gain. Apple has spent literal decades working on and refining their Human Interface Guidelines for different devices. Between Tahoe and Liquid Glass, they seem to have just tossed them on the bonfire for no justifiable reason.

VisionPro was meant to literally overlay its interface over your field of vision. That's a very different context and interaction paradigm. Trying to shoehorn the adaptations they made for it into their other, far more popular interfaces for the sake of consistency? It's absurd.


> Apple has spent literal decades working on and refining their Human Interface Guidelines for different devices

Things like “human interface guidelines” get written by nerds who dive deep into user studies to make graphs about how target size correlates to error rate when clicking an item on screen.

Things like Liquid Glass get designed by people who salivate over a button rendering a shader backed gradient while muttering to themselves “did I cook bro???”

They’re just two very orthogonal cultures. The latter is what passes for interface design in software these days.


It's like the KDE developer who reluctantly gave out the script to set "border offset" from a window back to 0 (i.e. how close you could snap/drag the window to the border of the screen). He had defaulted it to something like -5 (i.e. at minimum, 5 pixels between the edge of the screen and the window, no matter WHERE you tried to place it), because "otherwise, how would you use the negative space, bro?". I.e. left-clicking JUST outside the window brought up a context menu for the window. WTF? I've been doing GUIs since 1987. Don't make "clicking outside the window" a way to interact WITH the window. I very nearly threw KDE out before he gave the fix.

This. The Windows 95 interface was optimal in many regards. Given how much faster computers are now, every UI operation should just be instantaneous. It's ridiculous that desktop interfaces and Web pages became heavier as computers got faster, so that a heavy website today does not load meaningfully faster than a plain text webpage in 1995.

Edit: On Linux, you have desktop environments like LXQt for this. Unfortunately, last time I checked, Wayland was not supported.


Beyond that, any lag from the Win95 era was probably because of spinning hard drives. Running it on an SSD, it would be instantaneous. Also, file search might even work, instead of whatever we have now.

I can attest to this one: 6 months ago I had a vintage PC I needed to rehab due to the curse of the proprietary ISA card. Imaged the failing drive to an SSD with a SATA->IDE adapter. P3 733 MHz, 128 MB RAM, W98SE: it's astonishingly fast and responsive. Boots nearer to my memories of MS-DOS 6.22 firing up than anything else.

Acrobat reader still performs like a lead balloon though, even a miracle can't fix that one.


I don't know how I used Windows before I installed Everything.

Deactivate web search via regedit, then it works.
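If you'd rather script it than click through regedit, here's a rough C++ sketch of the same tweak. The value name is an assumption that varies by Windows build: BingSearchEnabled under HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Search is the commonly cited one for older Win10 builds, while newer builds use the DisableSearchBoxSuggestions policy under HKCU\Software\Policies\Microsoft\Windows\Explorer instead.

    #include <windows.h>

    int main() {
        HKEY key = nullptr;
        // Open (or create) the per-user Search key.
        if (RegCreateKeyExW(HKEY_CURRENT_USER,
                            L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Search",
                            0, nullptr, 0, KEY_SET_VALUE, nullptr, &key, nullptr)
            == ERROR_SUCCESS) {
            DWORD disabled = 0;  // 0 = don't send Start menu searches to Bing (older builds)
            RegSetValueExW(key, L"BingSearchEnabled", 0, REG_DWORD,
                           reinterpret_cast<const BYTE*>(&disabled), sizeof(disabled));
            RegCloseKey(key);
        }
        return 0;
    }

You'll likely need to restart Explorer or sign out before Start menu search picks up the change.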

Good advice, but there's still a lot of utility in Everything that is not built into Windows.

The responsiveness of Windows 2000 in a VM is insane. It feels like every action happens instantaneously.

Contrast this with the "OS" of my LG OLED monitor. It seriously takes 5 seconds to open the settings menu.


> Contrast this with the "OS" of my LG OLED monitor. It seriously takes 5 seconds to open the settings menu.

I'm not sure what they use these days, but 10-15 years ago the MCU in a monitor was likely to be a ~10MHz 8051.


A whole installation of Win95 with Office95 is only a few hundred MB and would fit entirely in RAM on a modern system. You can run a VM of it like that to experience the extreme speed. Even a browser these days uses several times that.

Wayland with Sway feels top-tier already. A tiling WM is just so simple, clean, fast, and perfect in every way.

Personally, I can't wait till we have webpages that load slower than Windows 95 boots.

Loved the Win95 interface. I've always wanted it on a new Windows box, but the existing solutions out there seem lacking.

Wayland is supported with LXQt, with labwc replacing Openbox. If anything, the UX is snappy.

Even Gnome and KDE can feel much snappier if you remove effects and animations.

Wayland being as needful as Liquid Glass itself.

There have been many, many, desktop improvements since 1995, some of which came from the Mac, some came from Windows and some came from UNIX/Linux & friends.

- Arguably the dock, though it's probably contentious
- Ubiquitous instant search (e.g. Spotlight)
- Gesture-based automatic tiling of windows to the left/right side of the screen, tiling presets
- Smooth scrolling, either via scroll wheel or trackpad
- Gesture-based multitasking, etc.
- Virtual desktops/multiple workspaces
- Autosave
- Folder stacks, grouping of items in file lists
- Tabbed windows
- Full-screen mode
- Separate system-wide light and dark modes
- Enhanced IME input for non-Latin languages
- App stores, automatic updating
- Automatic backup, file versioning
- Compositing window managers (Quartz, Compiz, DWM, modern Wayland compositors...)
- The "sources bar" UI pattern
- Centralized notification centers
- Stack view controller style navigation for settings (back/forward buttons)
- Multi-device clipboard synchronization
- Other handoff features
- Many accessibility features
- The many iterations of Widgets
- Installable web apps
- Virtual printers ("print to PDF")
- Autocomplete/autocorrect
- PIP video playback
- Tags/Labels
- File proxies/"representations"
- Built-in clipboard management
- Wiggle the mouse to find the pointer

None of these can be said to be at their final/"perfect" form today, and there are hundreds if not thousands of papercuts and refinements that can be made.

The real issue is probably due to management misunderstanding designer's jobs, and allocating them incorrectly. The focus should be more on the interactions and behaviors than necessarily on the visuals.


> Arguably the dock

The Dock came from NeXTSTEP circa 1989. It had square edges and no Happy Mac. (So did Mail.app, TextEdit, some of the OS X Finder, and a whole bunch of other things.)

To the untrained eye it looks like an Apple innovation because most people couldn't afford NeXT computers unless they worked in a university or research lab.


This is the howling insanity that drove Microsoft to kill the Start button, then reanimate it, then move it to the center of the task bar.

They also forced us to waste relatively valuable vertical screen space on the task bar, taking away our ability to move it to the left or right screen edges.

... thus falling into the pit of failure, where every way out is just a little too far away.

When I started using Linux, I didn't do so because I disliked Windows so much, I just was an insatiably curious nerd.

But since then, each new version of Windows has made me more and more grateful for not having to deal with that dumpster fire on my personal devices.

The saddest part to me is that I have the strong impression it wouldn't take that much work to turn Windows into a much better system. But for whatever reason, Microsoft is not interested in making that happen. Maybe they are incapable of doing so. But the system itself is not the reason.


No kidding. “Reverting” to something pretty similar to Mac OS in the late ‘90s, as far as visuals and basic UI behavior (but not removing modern features) would be a big improvement. And once they did that they could just leave it that way. It’d be fine. UI churn sucks enough that it’s not worth it for users unless it’s a huge improvement, and nothing has been.

Though if we could get the newer settings panel of macOS a few versions back, before they inexplicably ruined the best OS GUI settings interface I’ve ever used, that’d be great.


I would say, "Give me Mac OS 9, now" but smartphone-addicted brain-damaged zoomers don't understand what folders are or how file management works.

I don't need or want art, eye candy, or animations. I need to get work done and the rest of the OS to stay tf out of my way.


It’s not just career. Any artistic endeavor suffers from competition with the past, which eventually becomes a guaranteed loss. Negating the frame becomes the primary way to leave a mark.

But for actual art, such as music, constantly doing novel things is kind of the point. Every generation needs its own music that runs counter to what came before.

User interfaces are not art.


> User interfaces are not art.

Do UI designers think that way?

I imagine some see it as engineering - make things work efficiently for the users. Others see it as art. The outcome will depend on which group gains the upper hand.


There's some linguistic ambiguity here if we just say "art", because it includes things we might divide into "artistic choice" versus "craftsmanship", ex:

1. "Picasso, that's the wrong way to depict a human nose."

2. "Picasso, that's the wrong material, that vibrant paint is poisonous and will turn to black flakes within the year and the frame will become warped."

I interpret parent-poster's "interfaces are not art" as meaning they're mostly craftsmanship.

It may not be quantifiable enough to be labeled "engineering", but it's still much less-subjective and more goal-oriented than the "pure art" portion. All these interfaces need to be useful to tasks in the long term. (Or at least long enough for an unscrupulous vendor to take the money and run.)


I wish I could remember where I picked up this quote

> No project manager ever got promoted for saying "let's keep things the same".


Style matters, maybe unfortunately depending on your point of view. Products like consumer electronics have a large amount of fashion to them. Just as with the t-shirt, which was perfected in the 1950s, people still make new ones with little style changes for no functional reason.

Designers at Balenciaga don't have to justify their jobs when they make oversized t-shirts, neither do the ones at Apple.


Fashion is a very appropriate place for style. Tools, less so.

Corollary: the extent of fashion-driven variability those "tools" support over generations tells us just how little utility those tools provide.

In actual tools, the form and function are strongly connected. Tools of competing brands look pretty much the same, except for color accents, because they can't look any different without sacrificing functionality, performance and safety characteristics.

You don't see power tool vendors trying to differentiate their impact drivers by replacing rubber handles with flat glass because it's more "modern", because it would compromise safety and make the tool unsuitable for most jobs its competitors fulfill. This happens in software mostly because the tools aren't doing much of anything substantial - they're not powerful enough for design to actually matter.


I do see tool vendors often adding their own logos to the tools. They choose non-functional colors for styling. They'll make something more rounded or more squared for aesthetic reasons. For consumer-facing tools there are lots of little non-functional changes they'll choose to do for their own stylistic and branding purposes. They do want to ultimately differentiate their products from competitors, not just be the exact same as all the others on the shelf.

I can't see how half of the icon choices made in the article would pass internal testing, let alone actual user testing.

Maybe stakeholders were calling the shots and everyone was like, "Fine. If you want us to reuse the same icon for different purposes, you're the boss. We are done trying to explain why this is a bad idea."


Yes this is always how it's been, especially if you're a front end developer. Changing designs every few months just for the hell of it is what designers do.

Sadly, the desktop interface was not perfect by 1995. It was visually near-perfect, but the UX, the actual ease of interaction, left much to be desired. Sadly, it's the visuals that make pretty screenshots. The actual UX of OS X is quite jarringly bad in many regards 30 years later :( But developing interactions is much harder.

Uber, Airbnb, Robinhood all took off because they created easy to use beautiful apps (compared to their predecessors).

The anti-design bias in this forum is genuinely unhinged. I see some saying the entire destruction of the natural world stems from design lol.


I worked at Uber. The UX designers were pretty obsessed with the iPhone app, making sure it was pixel perfect and the little cars in the city view moved smoothly and every transition was crisp and so on. The vast majority of new users at the time were on the comparatively ugly Android app.

Things got pretty bad. More than 95% of all employees (and I'm guessing 99% of designers) were using iPhones at the time. There would be rough edges all over the Android app, but as one of our designers said "people with taste don't use Android".

Imagine knowing that most of your new users were getting a subpar experience, and that not being enough motivation to expense a flagship Android and drive it daily.

But the new users kept coming, and despite mostly being Android users, they still used the product. Turns out that legacy taxis are themselves an ugly interface, and ugliness is relative.


> The vast majority of new users at the time were on the comparatively ugly Android app.

Probably the vast majority of profitable Uber users were still on iOS, though, like most apps?

> but as one of our designers said "people with taste don't use Android".

Based lol


> people with taste don't use Android

Probably true at the time.


I remember excitedly switching to a Nexus 5X and then going back to my old iPhone a few months later because every app felt like a bad port of the ”original” iPhone app.

Using the term "Legacy Taxi" to imply that a taxi you don't summon by phone is somehow outdated is wild. I understand the reason you would use it, especially at a company like Uber, but it still seems hilariously delusional.

I've never worked for Uber and I see the old model as a barbaric non-starter. Why on earth would I want to flag a car down instead?

The point is that a taxi is a taxi. It's like calling cash "Legacy Payments"

That statement is just 100% demonstrably false.

I don't think anyone seriously believes Uber, Airbnb and Robinhood won because of "beautiful apps".


"Beautiful" maybe, not "good UX" for sure. Prior to Uber, calling a cab using the phone app was...suboptimal and easier to actually call a taxi company. They also provided much better services, which is what make them stick around.

RH made a lot of investment tools accessible to people whose attitude is "I just want to buy stock in some company". I used tastytrade for a while; their mobile app has all the functionality, but realistically you will just use it to get an overview of your portfolio.


The point still stands. The improvement in "usability" in those three cases did not come from app UI/UX.

It came from adjusting IRL to work with better UI/UX rather than adjust UI/UX to work with existing business processes.

Beauty has real value, but usability is far more important.

Unfortunately, most of the SW industry isn't even aware of the difference:

For beauty you hire a graphic designer

For usability you hire a PhD in cognitive psychology


No, they took off because they used illegal practices and VC money to undercut competitors.

I think we can pretty safely assume that every large and successful company has done horrible and illegal things to get themselves there. That being said I think design and ease of use still play a significant part in Uber’s success.

Well, I didn't have Balzac on today's HN bingo card, for sure!

"Behind every great fortune is an equally great crime."

https://www.britannica.com/biography/Honore-de-Balzac/La-Com...


Bit of column A, bit of column B.

There’s a huge difference between anti-design bias and calling out Liquid Glass for the garbage human interface design that it is. If anything, it demonstrates a substantial *pro-design* bias because it shows that people actually care about design more than any “party line” here.

Design for design's sake is bad, and that's what Liquid Glass is. There was no thought behind it.

It is 2026 and UIs are still abysmally slow in many cases. How is that even remotely possible? Now, with that in mind, consider (just for a moment) why people might think that UX people don't know what they're doing.


> It is 2026 and UIs are still abysmally slow in many cases. How is that even remotely possible?

Because UI/X teams were separated from engineering. (Same thing happened with modern building architecture)

It's fundamentally impossible to optimize if you're unaware of physical constraints.

We need to get rid of the "It's okay to be a UI/UX designer who doesn't code" cult. (Looking at you, Adobe and Figma...)


> Same thing happened with modern building architecture

Yes. Yes, it has. I'm currently in the midst of a building project that's ten months behind schedule (and I do not know how many millions of dollars over budget), and I'd blame every one of the problems on that. I - the IT guy - was involved in the design stage, and now in construction (as in, actually doing physical labor on-site), and I'm the only person who straddles the divide.

It's utterly bizarre, because everyone gets things wrong - architects and engineers don't appreciate physical constraints; construction crews don't understand functional or design considerations - so the only way to get things right is for someone to understand both, but (apart from me, in my area - which is why I make sure to participate at both stages) literally no one on the project does.

Seen from a perspective of incentives I guess I can understand how we got here: the architects and engineers don't have to leave their offices, and are more "productive" in that they can work on more projects per year, and the construction crews can keep on cashing their sweet overtime checks. Holy shit, though, is it dispiriting to watch from a somewhat detached perspective.


Agreed. The further you are away from how a computer works internally, the worse your product for a computer will be.

We have convinced ourselves as an industry that this is not true, but it is true.


> We need to get rid of the "It's okay to be a UI/UX designer who doesn't code" cult.

I don't think designers who don't code are really a problem. They just need to dogfood, and be led by someone who cares (and dogfoods as well).


In the case of Apple, I really doubt its designers don't dogfood. Do you expect them to have Android phones and Linux desktops?

I would think like you, but then some of their design decisions are truly baffling. I like the idea of Liquid Glass, but there are thousands of rough edges that scream lack of care.

I have a strong feeling the people working on and approving Liquid Glass didn't dogfood it in dark mode, because it just looked BAD in the first builds available.

I sometimes wonder if anyone in charge at Apple uses Apple devices the way I do. I expect they have one consistently-Apple, high-end setup and it probably works very well for their style. Some things are great but others are insane, and it seems like that happens most when using things like non-Apple monitors, or not typing a certain way on the phone, or if you don't drive the same car.

Switching windows between two non apple monitors after waking from sleep is wildly unpredictable and has insane ux like resizing itself after a drag.

My CarPlay always starts something playing on my car speakers even when I wasn't listening to anything before connecting. It's so off it's comical.

The iPhone alarm will go off like normal, loudly from the speaker, even if you're currently on the phone and have it up to your ear. This has been a problem since my very first iPhone.

There has been a bug about plugged-in physical headphones sometimes being unrecognized after waking from sleep, even if they worked fine when going into sleep. I checked once, in probably 2014, and Apple's official response was that it literally wasn't physically possible, despite all of us experiencing it. The bug was ancient even at that time and >ten years later my M4 MacBook Pro STILL DOES IT.

Apple and Apple fanboys seem to take the stance that these are all user error on my part (remember the "you just aren't a Mac person" era?). I bet some of these are configurable with settings deep in some menu somewhere, so from a certain perspective that's right, but it also underscores my point about the limitations of myopic dogfooding.

As a fun aside, the UX for turning on the VoiceOver tutorial is the worst thing I've ever experienced on an Apple device. I was laughing out loud trying to figure out how to get out of it instead of finishing the unknown remaining steps. I feel bad for folks who need that accessibility in order to be effective.


This. Every time Apple has made UI changes, I've seen negative reactions. People react negatively to change, not to good or bad necessarily. I've been using the new UI since it was in the developer beta, and can't really tell a practical difference.

Uber very much falls into the category of "useful despite the awful app" for me.

It's slow, bloated, buggy and ugly. Probably one of the worst apps running on my phone.


Nowadays yes, since they need to justify the jobs of hundreds of JavaScript developers.

But there was a time when their app was native and was actually quite good.


It's difficult for me to believe that you might be arguing all of the icons in the drop-down menus are beautiful... I know I have found them distracting.

In my opinion, this article had very clear and direct criticisms; they were hardly "anti-design bias". The increase in visual clutter is, for sure, a net loss for MacOS Tahoe.


Design is not just pretty visuals but also solving problems. The design of Tahoe doesn't solve any problems the previous designs didn't already solve, and it handles many of the old problems worse than the designs before it.

None of those companies had predecessors.

My main gripe with Liquid Glass is how distracting it is.

Many top bars have become a group of bubbles over the content, which we’ve been conditioned to see as floating notifications for years. Things shine and move when they don’t require attention, just because.

The end result is that my OS feels like a browser without ad blocker. As much as people hated flat design, at least it didn’t grab your attention with tacky casino tricks.


My main gripe is that the visual shenanigans alone were enough of a change, why rearrange the buttons?! In the early iOS beta, the new tab button was at the top of Safari, as far away from your thumbs as it could be.

Genuinely believe Apple’s design team are rudderless or have unintentionally been forced to produce something to justify someone’s career, because this whole thing is disastrous.


> to produce something to justify someone’s career,

This is the curse of being a UI designer for a long-lived product. Once a thing has been created and future work consists of 99% code and 1% UI, your UI designer job has evaporated. And so we see that everything changes with every major release of an operating system, so the UI people can justify their paychecks.


I think you have cause and effect the wrong way around.

These changes in design are intended to appeal to our magpie brain of wanting the latest, shiniest, things.

You have to understand the vanity of consumers. If every new product looked the same, then a lot of people wouldn't bother buying the latest gizmo because there's no magpie appeal. So when the market stagnates, you need to redesign the product to convince consumers to throw away a perfectly good, working device.

And it usually works as a sales strategy too.

So designers then get told they have to come up with something that looks newer and more futuristic than the current designs. Regardless of how much those designers might love or hate those current designs.

They come up with this shit not to justify their jobs but because they’re hired exactly to come up with this shit.


If it's coming down from the C suite, that just makes it worse. That's cheap marketing tricks winning priority over lasting intent. It's not just the design folks trying to justify their job at that point, it's the executives surrendering to the "stock must go up during my quarters at all costs" mentality.

Worse in some ways, but understandable in others.

If Company X didn't reinvigorate their product line, then consumers might switch to Company Y's products because they look shiny and new. Which is literally why people switched from BlackBerry et al to iPhones in the previous decade.

Consumers are fickle and want that dopamine hit when they spend money. I know this and even I find myself chasing shiny things. So there’s no way we can change that kind of consumer behaviour.

To be clear, I’m not saying it’s right that companies do this, but I do think they’d go out of business if they didn’t because consumer trends will continue like this regardless of how ethical companies tried to be.

So the problem here isn't that Apple tried to refresh its operating system's look. It's that they completely jumped the shark and created something that was too focused on aesthetics while failing on literally every other metric.


People switched from BlackBerry to iPhone for far more than just iPhones being "shiny and new." Visual voicemail, Safari, touchscreen, etc. The recent UI redesign effort is not remotely comparable to the investment and strategy that went into distinguishing the iPhone from the rest of the cell phone market.

We're discussing this on one of the most bare and plain sites on the popular internet. Folks who are attracted to value don't care if stuff isn't redesigned if it works well. It's a bad sign if executives at Apple feel the need to invest in cheap dopamine hacks for the sake of novelty farming.

A company that stagnates or even shrinks to a healthy size can be more valuable to society, and the stock market in the long term, than one that mutilates itself in chase of unnecessary growth.


In my experience, it's usually just UX hubris and ignorance about a product's expert users.

UX folks usually have no understanding of the impact of moving a common control and/or keyboard shortcut.


You’re talking about very specific rearrangements of controls. Whereas I was talking about why these big redesign initiatives get green lit to begin with.

> In the early iOS beta, the new tab button was at the top of Safari, as far away from your thumbs as it could be.

It’s relatively recent in iOS history that Safari’s address bar is at the bottom. There’s a setting to move it back to the top. This specific example is probably as innocent as a default getting accidentally changed during the development process.


> In the early iOS beta, the new tab button was at the top of Safari, as far away from your thumbs as it could be.

Can't you swipe past the end on the tab bar (along the bottom by default) to create a new tab?


Only when you're currently in the rightmost tab, but yes

Yes, I meant past the end of all tabs

Every distracting visual element of Liquid Glass looks like a tiny ad to me, constantly trying to pull me away from what I am doing and grab my attention. Super annoying.

Someone posted these settings on HN recently and it has made working on my Mac once again usable: https://imgur.com/a/macos-accessibility-settings-simpler-ret...

Maybe that's why disabilities are on the rise. I went through the macOS installer yesterday; it asked 3 times if I wanted to configure any a11y options, with cognitive disability featured.

I'm usually in Linux (dual boot on a Mac) but had to boot into macOS for something. I was utterly confused when I moved my cursor to the top-left and missed clicking the Apple menu.

Alan Dye ruined everything at Apple; no idea how he clung on for so long. You know he designed the horrible iOS 7 as well?

https://www.youtube.com/watch?v=YSzjcVZXolc

https://tjkelly.com/blog/ios-7-sucks/

And he also takes credit for the dynamic island. It is an assault on my senses to see everything constantly moving around on my screen.

I have been working with Macs since 1995, but this year is my first using Pixel with GrapheneOS, that is how done I am with Apple. Unfortunately I know the UI will not change for years and I just could not take it.


>Alan Dye ruined everything at Apple, no idea how he clung on for so long

Cook doesn't seem to have any taste for product design; isn't he a logistics guy?


Yup. He is, by all accounts, a great supply-chain guy. For example, as far as I can tell, there were no significant breaks in Apple's supply chain during COVID.

But he clearly falls afoul of Steve Jobs' warning about leaders with no taste.


This is even an understatement.

It's not a stretch to say that Tim Cook created the whole Shenzhen microelectronics industry. The thousands of specialist component vendors and large integrators that assemble products trace back to his instigation at Compaq and then Apple. The iPod, Macs, and iPhone copied the Swiss watch model of vast redundant networks of component competitors working as an ecosystem to drive down costs.

This created the skill and machinery base that made it possible for other western design companies (such as Android vendors that were not Samsung or Japanese) to make clones of the iPhone quickly and easily. (Let’s be real, every smartphone is an iPhone 1 clone)

China owes a lot to this work.


Tim Cook needs to drop some acid. He has no creativity whatsoever.

You'd think a supply chain guy would be able to get ahold of some psychedelics too...

> You know he designed the horrible iOS 7 as well?

I don't think that's fully accurate, unless you have a link that confirms it? That Dye designed it, I mean, not that it was horrible...

Jony Ive was the head of design at that point (both hardware and UI). Wikipedia says Dye "contributed greatly to the design language of iOS 7" but Ive would have had final say. Certainly at the time, as I recall it, iOS 7 was seen as Ive's baby.

Also, I'm not defending iOS 7, but I reckon its visual design was a lot more influential than it gets credit for. Think of those ubiquitous Prime bottles, with their bright saturated color gradients; the first place I remember seeing that style was iOS 7. I bet they picked that style for Prime because kids liked it, and kids liked it because kids like iPhones.

Edit to add: "bright saturated colors" goes back a long way to Fisher Price toys and the like, of course, but it's the gradients specifically that I think iOS 7 popularized.


I've heard rumors that part of why iOS 7 was so garish is because Dye's background was in product packaging so his team were doing design reviews on paper and didn't realize that the colors would look different on device due to CMYK vs RGB. Not sure if it's ever been confirmed but it would explain a lot.

I was in the room for a few design reviews for my part of iOS 7 (I was an engineer writing the new screens). Everything was done on a 90+ inch HDTV that we AirPlayed to from our Macs or iPhones for the room to view. Not printed, though the design studio walls were covered in printed explorations of variations of concepts, that is true.

Dye was the senior rep of the Design org present and commenting on all our software progress. I never once encountered Ive.


Thanks for the clarification! Out of curiosity, do you have any other insight into how/why iOS 7 turned out the way it did? What was the internal attitude towards it like?

I find it hard to believe that Dye would be so incompetent to not even know about CMYK vs RGB. How did he even get hired by Apple?

Dye was a symptom, not the cause. At a widespread organizational level, Apple just does not give a shit.

I will take the bottom bar of iOS7 Safari any day over the wretched mess that it is now.

Is it just me, or does answering a FaceTime call now require pressing buttons at the opposite top and bottom corners of the screen?

How does that benefit anyone?


Is he the one responsible for the 2016 MBP? It took Apple like 6 years to fix everything about that

Thank god he left for Meta

Really? Same guy? OUFF. What a track record...

I rather like the skeuomorphism ... on the buttons anyway. The distortion effect on the glass is simply annoying, and the overall effect on an already-cramped UI like Safari on a phone is just ... ugh. There's now basically three blobs of mystery meat at the bottom of the screen. So if Liquid Glass was made for mobile first, it's an even bigger failure. It's actually more tolerable on desktop, though the double-border effect on things like the control panel stick out pretty badly.

It’s not how you get promoted though. Plus implementing complex UIs is challenging which engineers like. The incentives are off.

It's not only not how you get promoted, it's a pretty good way to get canned as well. If you don't like the work you're being asked to do, your options are pretty limited to doing it or going elsewhere. There are a million UXers and engineers who'd love to work at Apple and would be happy making whatever their boss suggests.

That's how you know the "culture" and the "vision" really do have to come from the top, and how you know Steve Jobs really was providing value.

Seriously. People got canned for resisting the corporate overlords. That’s capitalism. Corporations run by their employees? Guilds? Cooperatives? Hah! https://www.youtube.com/watch?v=ynbgMKclWWc

Just that usually the forcefed initiatives have to do with corporate profits for shareholders, or trends like shoving AI into everything. Imagine saying no to that!

Even at the supra-corporate and supra-national level, if the organizing principle is competition, no actor not even a CEO or a corporate board or a government can afford to stop racing towards disaster. There is a simple mantra: “If we don’t achieve AGI first, China will and then they’ll dominate.”

Once in a while, the world comes together to successfully ban eg chemical weapons or CFCs, and repair the hole in the ozone layer. Cooperation and restraint takes effort.

Judging by the way we’ve drained all the aquifers, overfished the last fish, destroyed the kelp forests, cut down the rainforests, bleached the corals, and polluted the world with plastic, I don’t think there is much hope of stopping.

Insects and pollinators are way down, and many larger species are practically extinct, 95% of the world’s animal biomass is humans and their food, and people still pretend environmental catastrophe is all about a few degrees of temperature.

PS: Yes, that escalated quickly. In the real world, it has taken only 80 years… :-/


I don't think corporate profits are the reason Apple has shitty UX because it's hard to argue how shitty UX correlates to higher profit, especially when it costs more to create a shitty UX than to keep the good one you already have.

I reckon it's more that some Apple VP has to justify their million dollar equity package by creating work for their org, because otherwise why should you still have a job?


That’s not capitalism. That’s hierarchy. It didn’t start in the 20th century

Capitalism and competition produce the results I describe further down the comment. It escalates to planetary catastrophe

Let’s focus on the specific claims in your comment.

> People got canned for resisting the corporate overlords. That’s capitalism

Being told to do things by your boss is a problem as old as time. Except with capitalism you can change bosses — a luxury which has not existed throughout history.


Okay great. Now keep going with the rest of my comment and address the rest point by point. You’ll find that it expands from that first point, and describes the consequences of capitalism and competition as an organizing principle.

We are discussing UI/Icon design, not the geopolitical implications of AGI or the Holocene extinction event.

Why should someone that disagrees with you on whether capitalism is uniquely responsible for bad icon design now be forced to defend it for every sin / shortcoming ranging from the social inequity to ecological collapse?


Sure, I guess I’m here.

Why is capitalist competition worse than any other form of competition? Wouldn’t wartime competition over land and sovereignty be far worse? Didn’t the Soviet Union have extreme forms of political competition?


Yeah, software orgs ship their promotion structures.

This explains why all commercial software enshittifies.

And why open source UIs are anarchist. :)

Besides the visual design, I've been thinking about the tech part of it. There are so many bits shifting, morphing and holding state that it sounds antithetical to what a UI is supposed to be: a consistent and unnoticeable tool to interact with software. I do like some of the things they do to free up screen space, but making components this programmatically complex is bound to cause issues. Besides having your presentation desync with your data, your UI now has opportunities to desync with itself...

It’s an extremely uncomfortable conflict and contradiction between corporate organization, finance “capitalism”, engineering, and creatives; in addition to individual vs group dynamics.

Corporate structure is driven by exploiting and using value for and by a de facto nobility (the c-suite).

Finance “capitalism” seeks to extract value, be it short or long term.

Engineers are motivated by building and creating value.

Creatives are driven by change for change’s sake, to keep or get a seat at the table.

The uncomfortable reality is that these are inherently conflicting interests that are pulling and pushing each other, but mostly top down.

It’s essentially the “colonialist” exploitative model of existence, leveraging creators rather than extracting natural resources, a system that is increasingly unsuited to the modern, technological, commoditized world. AI is a good example of that; it arguably diminishes the value of both engineers and creatives to the nth degree, while also leaving the “nobility” and their neo-aristocratic corporate system exposed: not just as having no clothes on, but as utterly abusive, useless, and downright evil. And no, that’s across the whole political spectrum, not just the opposite of your silly system-approved political sports team.


The problem with Apple's execution of Liquid Glass is that the intended audience isn't the iOS user, it's the onlooker watching the iOS user (FOMO). That was an effective strategy when iPhone popularity was growing rapidly. Now that we're in a plateau of market saturation (post-peak Apple), anything directed at the onlooker which detracts for the actual user will hurt their bottom line.

How would this work? The only place I've ever heard of liquid glass is from iPhone users complaining about it incessantly and tutorials on how to turn it off or diminish it. What would I fear missing out on?

I'm merely saying that Apple is cargo-culting their own success formula and it's failing.

I agree, Liquid (Gl)ass is hideous. I'll stick to macOS Sequoia for the time being.

MacOS 26 highlights were... and this was _Apple's_ opening modal...

"Icons that look like shit!"

and

"Notification summaries that may not be correct!"

In general I feel as if Apple's software feels buggier and less solid lately across my iPhone and my computers. Won't be upgrading the personal computer for as long as possible


> Apple's software feels buggier and less solid lately across my iPhone and my computers.

Agreed. Rendering is very flaky. Input events are dropped.

Blinky. Laggy. Two of the Seven Dwarves of Liquid Glass.


Buggy, Flaky, Dropsy, Blinky, Laggy... that's five. How about Wobbly and Gloopy to round out the seven?

I finally talked my company into letting me swap out my MacBook Pro for some little Dell. After the last year of updates, my Mac has stopped resuming from sleep reliably, everything is ugly, and they took the already barely usable (imo) Finder and System Settings and found exciting new ways to make them worse. Sadly, corporate security means OS updates were not optional.

I've never really liked macOS but it feels like someone at Apple was hired just to make it even less likable for me personally lol


> already barely usable (imo) finder

I've been using a Mac for years, and to this day I don't know how it's possible to navigate directories using Finder. It only has shortcuts for a few folders by default (photos, documents...) and doesn't have a button to navigate to the parent folder. I have literally no idea how to get to my home directory; I need to use the CLI.


> doesn't have a button to navigate to the parent folder.

Command + Up Arrow, which is also visible if you click on the "Go" menu. There is also a toolbar button that shows the entire set of enclosing directories; offhand I can't remember whether this is visible by default. There is also "View -> Show Path Bar" which shows all this information at the bottom of the window.

> I have literally no idea how to get to my home directory

Go -> Home, which shows a shortcut key for this, Command-Shift-H.


I've grown increasing hatred toward Finder, to the point that I avoid using it at all costs. I've been migrating to the terminal, using fzf to find files and directories and yazi for a more graphical experience.

How can it be called FINDER, if it can't FIND things? cmd+shift+g should be a fuzzy search, but it returns nothing 80% of the time. cmd+f often can't see files that are in first level folders inside my home folder.

Meanwhile, hitting Esc+C in the terminal (via fzf) is totally effective.


Off the top of my head, I want to say you can right-click on the current folder name to see (and navigate to) all its ancestors.

Correct - IIRC it's called the "proxy icon"

> I have literally no idea how to get to my home directory

Just add it to the sidebar. Finder > Settings > Sidebar > Locations. Or drag it into Favorites.

> doesn't have a button to navigate to the parent folder

View > Show Path Bar. You can also right click on the directory name at the top of the window and it’ll give you the same options.


I’ve said many times before that I think Finder is the worst default file manager of any popular desktop environment.

I get it’s supposed to be easy to use but so much functionality is hidden behind non-obvious shortcuts. The end result is you either need to memorise a dozen secret handshakes just to perform basic operations, or you give up and revert to 70s technology in the command line.


> I’ve said many times before that I think Finder is the worst default file manager of any popular desktop environment.

[GNOME enters the chat]: "That's nothing, I'm way worse!"


When on macOS using Finder I often wish I had something as nice and consistent and usable as Nautilus.

Finder is genuinely horrible. It’s obvious no one at Apple cares about files anymore nor anyone working with them.

We’re all supposed to consume cloud these days or so it seems.


My go to example would be long lasting issues with SMB support in Finder. All operations are very slow, the search is almost unusably so. The operations that are instant on every non-Apple device take ages on a Mac. I first ran into these issues 7 years ago when I set up my NAS, and they present to this day. I tried all random suggestions and terminal commands, but eventually gave up on trying to make it perform as it does on Linux.

With Apple's focus on cloud services, fixing the bugs that prevent the user from working with their local network storage runs contrary to their financial incentives.


Is it actually though? It’s cool to criticise Nautilus but, at worst, it’s just equally as bad as Finder. Which shouldn’t be surprising given how much it’s styled to look like Finder.

However in my personal opinion Nautilus’s breadcrumb picker does edge it against Finder.

So I stand by my comment that Finder is the worst.


Nautilus opens a new window for every folder you enter. Finder does not.

That used to be a preference; last I used it, it no longer was. It is forced on because that’s how the GNOME developers thought you should use it… “Our way or the highway!” — GNOME devs.

Finder wins based on that alone. Finder wins so completely because of that one single thing that I’ll never voluntarily use GNOME again.


You can add shortcuts to the sidebar by dragging. You can right click the folder name in the top bar to get a list of parents. You can also View > Show Path Bar and see the full clickable breadcrumbs. Not sure why this is so confusing if you bother to try.

Even after decades of using macOS I still cannot wrap my head around the fact that Finder has no single-key shortcut for opening a file - the most common operation a file manager should do. It’s Cmd+O, and it cannot be changed to anything sane like the Enter key.

You can remap it to any shortcut you want (as long as it has a modifier key in it)

>as long as it has a modifier key in it

Why on Earth is this a requirement? When you're navigating through Finder using the keyboard, it's very inconvenient to need two keypresses for such a basic operation. Using Enter to open a file is how every file manager on every operating system works except Finder. Why would the Enter key be hardcoded to a file rename operation instead?

It is a typical Apple behaviour of doing things differently from the rest of the world just for the sake of it, even when it's detrimental to the user experience.


>Why on Earth is this a requirement?

Actually I just checked and it's not; technically you can create key equivalents without modifiers as well [1]. For Finder this doesn't work though, because Enter seems to be specifically handled before menu-level key equivalent processing. (Note that it's not guaranteed to work in other apps either; based on [2] it seems key equivalents are only dispatched if modifier keys exist. But that might be out of date since it worked for the people in the SE post.)

Option+Enter is the next closest thing.

I agree that their implementation here is not good. In fact there's already a "Rename" menu item, which isn't actually wired to the Enter hotkey (this is very un-Mac-like because it means there is no easy way to discover it). The "Rename" menu item is actually a fairly recent addition to macOS (I think maybe 10.11) while Finder itself is ancient (it was one of the last few apps to be migrated to Cocoa and even today still has lots of legacy warts), and possibly no one bothered cleaning things up.

[1] https://apple.stackexchange.com/questions/132984/keyboard-sh...

[2] https://developer.apple.com/library/archive/documentation/Co...


Space for preview.

Do you like Windows dark patterns more than Mac's shitty designs? Seems like no one wins here. I personally just refrained from upgrading to Tahoe

Edit:typo


I got a new Apple Watch and just getting it set up was a pain. For some reason the passcode input would fail to register key presses and I had to spam the buttons until something clicked. Then I gave my old Apple Watch to my mom and the setup failed like three times before we managed to get it done. Did make me wonder if anyone at Apple actually tests setting up these devices.

Apple has been crashing with increasing frequency for me. Luckily all minor but I’m waiting for the big crapout.

Also what happened to their filters? I get daily spam from Apple email addresses now.


What I love is how their own features don't even play well together anymore...

For instance you can "hide your e-mail" by using Apple's relay, but if you do so... your payments using Apple Pay will fail unless you fill all the information in manually because the e-mail addresses don't match

It's ridiculous how poorly tested everything is, and that combined with their recent foray into the world of politics has nearly destroyed three decades of steady Apple use for me. I'll be actively considering other options, not upgrading, and looking elsewhere for products in spaces they're in.


I too have the techie urge to upgrade my software whenever I can, but after watching the Tahoe demos, I'm staying on Sequoia indefinitely.

I would really appreciate it if the next macOS would be about stability instead of some fancy features barely anyone asked for.


Same. Wish I could go back to Ventura or even Mojave though. There are zero new features I use. Still hate the newish Settings app.

Yeah, me too. The old Settings app was easy and intuitive; I hate the new one.

Same. And finally thinking that our KDE home server doesn’t look too bad, it’s almost comforting.

The major reason to stay on macOS is stability. Hopefully they stop breaking things on the Mac front.


The only "real" justification is that this is a long term play to figure out a UI and interaction shift that will work for general augmented reality devices (aka whatever device Apple releases five years and two iterations from now based on the Vision Pro.)

That’s the same logic that led to the Windows 8 UI being designed tablet first because that’s “where we’re all going”. Fast forward to Windows 10 where MS had to concede that, no, turns out it isn’t where we’re all going and rolled most of it back.

It's somewhat understandable.

A lot of us felt at the time that surely laptops and tablets would converge. Otherwise, what a waste of hardware.

But it hasn't really happened. From a hardware perspective, things have gotten closer with the iPad's magnetic keyboard. But, I still find that the iPad as laptop replacement to be a compromise that I may tolerate for travel but don't love for a lot of laptop work.


Magnetic keyboard on the iPad is such a loss. Some Thai hacker got full macOS running on his iPhone over Christmas. Apple are cowards collecting dividends.

The magnetic keyboard lets you use the iPad without it having to be resting on a solid surface like basically every tablet/keyboard combination out there. Microsoft has tried various hybrid laptop/tablet arrangements that did both things mediocrely. I have zero interest in what someone has hacked together.

iPads are great devices for non-tech savvy people who need a computer for stuff like writing and reading documents and e-mails, planning travel and making reservations, keeping in touch with people on messaging and social media.

That's a gigantic market segment, and Apple has to be very careful to not make those devices complicated or vulnerable.


As tech support for a person with iOS gadgets, these are exactly the kinds of tasks that are way too hard on an iPad. Emails got lost because the default Mail app was flaky, so I had to install the Gmail app. Travel involves tickets, aka files. Now I need to help find the files in the locked-down hell of a UI and figure out how to send them to a different app. Bonus points if they were archived and need unpacking. Messaging - some accounts are tied to an old number, and a new number is needed to, well, make calls, and Apple generously doesn't provide a dual-SIM option outside of China, so now I need to figure out sync between two devices with two SIMs, and then some messengers don't allow that while others do, and then I need to explain all that to an elderly non-IT person... In short - it's a mess every time any task outside of doomscrolling and watching YT arises on iOS.

Exactly. I tend not to use my iPad that much except when traveling--partly because I have an old MacBook that lives on my dining room table. If I didn't have that I would certainly use my iPad more (and would doubtless get more comfortable using it for more purposes).

I struggle to find any use case at all for my iPad. Even when traveling, I use my phone most of the time and when I want something bigger, I have the MacBook Air in my bag, which doesn't feel any more cumbersome to have with me than an iPad.

I won't really argue much. I can get by with just my iPhone. I think a MacBook (don't have an Air) is better for a lot of things even if the iPad is better for media on a plane. The weight difference is minimal if you count the keyboard. I don't draw so don't need an iPad for that. A Kindle weighs nothing so I can always bring that for reading.

Not sure I'll buy another iPad given my current lifestyle.


That will be part of it. The main driver though that they've been working on for years is trying to figure out how to add just enough desktop to UIKit to allow them to kill off AppKit as a separate thing.

That isn’t a justification for fucking up the icons or using single retina pixels to differentiate them.

It’s just stupid people doing stupid things.


Without hyperbole, Liquid Glass on Mac OS is visually the worst commercial desktop UX I have ever had the displeasure of using. It is amateurish and frankly, ill advised to have even tried to unify the aesthetic across devices so universally. I think much of what is on the phone works fine. There are some pain points and some bits that are visually awkward, but generally it works and is new and fresh, but on the Mac it is as if no one really cared. And that reflects really poorly on where Apple is at, because if nothing else, Apple seemed like the company that always really cared about the user experience.

There are some things that are nice. The dock looks nice. The transparent menu bar is nice enough too and there is a toggle off switch if it doesn't work for you. Spotlight looks fine. But the rest is so bad that I just cannot fathom how someone at Apple did not stop it before release. I would be throwing a fit to stop it from being released if I was in Apple and had any sway at all. I assume the executive team was all looking at it and using it before release. So how did this happen? The new side bar and the new tool bars are abominations. I cringe every time I have to use the finder; it is just a blob of various shades of white or, if you prefer, dark mode, grey.

My hope is that if nothing else they roll back the sidebar and the tool bar changes or do a complete rethink on how they are implemented. If they rolled back the extra rounded corners I wouldn't complain either.


Clearly a generation of mobile-first designers not understanding how desktops are used.

It’s dreadful, it still blows my mind that out of Windows, macOS and Linux, my Linux desktop with KDE has the most premium experience now.


Even on mobile it took a few iterations from when the design was first introduced for it to be usable. Not good mind you, just usable.

Even Apple's own marketing material had screenshots where text was near impossible to read, even for someone with good eyesight: grey text on top of highly transparent glass... what were they thinking!?


>Spotlight looks fine.

Keyword “looks”. Because considering behavior, there’s tons of delay introduced and results change under your finger as you’re selecting them, causing you to get the wrong thing.


I agree.

My first rebuttal was going to be Windows 8, but that was actually a lot better.


A lot of Windows 8 I liked, but Windows perpetually suffers from needing to support older versions of Windowing systems, or some corporate usecase from the early 90s that carries too much money to ever say no to implementing.

Windows 11 is, I think, worse than MacOS these days, half for still dragging the past along with it, and half for introducing a second start menu just for ads.


I think Windows' greatest strength is also its greatest weakness, which is backwards compatibility. macOS's greatest weakness is its UX, which has slowly been going downhill for the past few years and took a nosedive with this release. It is a wild reversal from the mid 2000s when Apple's UX was so far superior to anything else that it felt revelatory to switch from Windows XP to OSX.

Oh geez, I forgot about Windows 8. Visually it looked nice enough, though. Once you got out of the insane touch first overlay it was fine, but I reinstalled Windows 7 so fast I never had to spend much time with it. I guess by that measure Windows 8 was worse.

Windows 8's alternative UI was very snappy and fluid. It was not great because it was completely disconnected from the normal UI.

The justification is that they feel it necessary to make obvious, superficial changes in pursuit of differentiation between software versions to help keep the upgrade treadmill running.

The only UI change that I've found useful since Yosemite was Mojave's introduction of a dark mode. They made the fonts look worse on non-Retina displays, threw out the Preference pane in favor of a weird list that can only be resized vertically, added transparent everything, and banned any icon that's not in a squircle. Such UI, many differentiation, much insanely great, wow!

Anyway, I bought a ThinkPad.


Don't worry, there are thousands of aspiring artists that will gladly do it for a 6 figure salary. The person that manages your salary determines what you work on. Don't like the work? Walk away…

I'd say this is a pretty unhelpful, demanding, and borderline damaging take to spread. I've learned from experience that it's unlikely "a designer" should follow this advice or accept the archaic guilt-trip "because you make six figures" whatever that means.

If you run your own design agency, you've got your own company's reputation and yours on the line, so be as opinionated as you find necessary. But if you're just an employee without an inordinate amount of clear authority within the scope of your discipline at a large company (you know if you do or don't have it), then don't try to create a mutiny; it will more than likely be a childish assumption of personal risk on your part, far more than it costs the company, far more than anyone else needs to care, just because someone on a forum told you to be passionate about round rects or small icons or whatever. If you need to tell your boss "google NN group", you probably don't have the trust or experience to be successful with such a play.

It's okay to have a personal hatred of it and do what you can to steer the work appropriately, but when you're tasked with a dumbass plan, let it be the decision-makers' dumbass plan, unless it's your decision to make. Let it be the project you tried that didn't, and couldn't have, worked out, which sometimes happens; you learn, and then leave if it's pervasive and you have other options.

It would be remarkably stupid to single yourself out as the person who thinks of themselves as the reincarnation of Steve Jobs and risk your livelihood to save Apple's reputation. The unlikely upside is that you get your way and that can boost your confidence, but the downside is that you fumble your best shot at financial security for the rest of your life because you thought you'd be received well.

That's not to say you should never say no, or that you shouldn't have love for your craft; just don't pretend it's your job to, unless it is, which it's probably not. Disagree and let it be a failure if it's going to be, feel vindicated if it is, but the money is there for you if not. The people who worked on the Vision Pro aren't responsible for it being a dud product, and they can be proud of what they did design-wise and technologically despite that.


I think your take is unhelpful, demanding, and damaging to engineering ethics. If you want to live in a 90s corporate workplace hierarchy model, that's your value system. But it is untenable and harms people in the long run.

Ethically speaking, the parent seemed to be demanding that some hypothetical designer put their livelihood on the line because good taste in UI design is paramount, we're not talking about building a skyscraper in a swamp out of twigs. Pat yourself on the back I guess if you want to volunteer to be a trillion dollar company's human meat shield, and relish in the virtue of being unemployed in a very bad market due to having a volatile emotional temperament, but I'd just recommend not doing that.

In the long run, no you don't want to set that much of your taste or expertise aside forever, but you shouldn't have to, it comes with all the things I said, trust & agency.


Tour de force article...well done! (except the snow was highly distracting)

Also absurd is that tabs and menus are not attached to their elements.


I agree, the Liquid Glass isn’t working out for Mac OS.

A few days ago I booted a very old device running High Sierra, and the UI and old Dock look so clean.

That desktop was peak for me, and the age starts showing a bit in Finder, but it's still more usable than today's versions.


Just a note on Vision Pro - as it’s been priced at $3,500 - I’m not sure I’d say it’s unsuccessful.

I bet they’ve sold approx as many as they thought they would.

This product is a placeholder for a cheaper and lighter one in the future.


Nielsen Norman used to be quoted constantly during UX discussions at various places I worked. Seems like UX folks and Information Architects have slowly been replaced by a general purpose “designer”

There seem to be two takes on the whole Liquid Glass thing.

- UI/UX pros who understand this stuff: “I hate it”
- Everyone else: “I didn’t notice until you pointed it out”


I also don't really understand where the raised sidebar gets its color tint from. Wouldn't the desktop background be underneath everything else?

I don't mind Liquid Glass. What annoys me is how it's seemingly someone's full-time job at Apple to periodically make me relearn how to find my photos or open Safari tabs.

I generally don't ascribe "conspiratorial" motivations to the decisions of tech companies, but I can't help but think this is a move from Apple to keep their ludicrously powerful M-chips busy with mundane UI, because the average user will never otherwise need to upgrade past an M1.

These directions come from above the PMs, especially at a place as design-focused as Apple.

Unpopular opinion, but I find Liquid Glass incredibly satisfying (I set it to Tinted though). The transitions are just really well done. The glass effect itself is a fun gadget, but unnecessary. I dislike how it must waste my battery. Give me the transitions of Liquid Glass with a basic frosted glass material and I would be perfectly happy, but the current state is fine too.

> If you're a designer at a top 10 S&P 500 company making 6 figures,

... your career requires constantly chasing after what amounts to fashion trends every few years, otherwise it's a solved problem and probably does not provide much of a career


> If you're a designer at a top 10 S&P 500 company making 6 figures, you owe it to yourself to have some love for your craft.

I think you've unintentionally illustrated the root of the problem here.

People motivated by profit are not incentivized to produce high-quality results. Rather, people motivated by profit are only incentivized to do the least effort that they can get away with.

People motivated by pride are those who are incentivized to produce good results, because the result reflects on them personally.

Which is all to say, pride motives produce a race to the top, whereas profit motives produce a race to the bottom. It's no wonder our modern economy can only produce slop.


C3 doesn't have a recompile-everything model; in fact, it's pretty much designed around supporting separate compilation and dynamic linking (unlike Zig and Odin). It even supports Objective-C-style dynamic calls.


NOTE: I'm a fan of value semantics, mostly devil's advocate here.

Those implicit copies have downsides that make them a bad fit for various reasons.

Swift doesn't enforce value semantics, but most types in the standard library do follow them (even dictionaries and such), and those types go out of their way to use copy-on-write to try and avoid unnecessary copying as much as possible. Even with that optimization there are too many implicit copies! (it could be argued the copy-on-write makes it worse since it makes it harder to predict when they happen).

Implicit copies of very large datastructures are almost always unwanted, effectively a bug, and having the compiler check this (as in Rust or a C++ type without a copy constructor) can help detect said bugs. It's not all that dissimilar to NULL checking. NULL checking requires lots of extra annoying machinery but it avoids so many bugs it is worthwhile doing.

So you have to have a plan on how to avoid unnecessary copying. "Move-only" types is one way, but then the question is which types do you make move-only? Copying a small vector is usually fine, but a huge one probably not. You have to make the decision for each heap-allocated type if you want it move-only or implicitly copyable (with the caveats above) which is not trivial. You can also add "view" types like slices, but now you need to worry about tracking lifetimes.

For these new C alternative languages, implicit heap copies are a big no-no. They have very few implicit calls. There are no destructors, and allocators are explicit. Implicit copies could be supported with a default temp allocator that follows a stack discipline, but now you are imposing a specific structure on the temp allocator.
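
To make the "stack discipline" idea concrete, here's a minimal C sketch of what such a temp allocator could look like (all names are made up for illustration; this isn't from any particular language's runtime):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical temp allocator: one bump arena with stack-style marks. */
    typedef struct {
        char  *base;
        size_t cap;
        size_t used;
    } TempArena;

    static void temp_init(TempArena *a, size_t cap) {
        a->base = malloc(cap);
        a->cap  = a->base ? cap : 0;
        a->used = 0;
    }

    /* Bump-allocate; a real implementation would grow or fail loudly. */
    static void *temp_alloc(TempArena *a, size_t n) {
        if (a->used + n > a->cap) return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* An implicit copy would land here... */
    static void *temp_copy(TempArena *a, const void *src, size_t n) {
        void *dst = temp_alloc(a, n);
        if (dst) memcpy(dst, src, n);
        return dst;
    }

    /* ...and the whole batch vanishes when the scope unwinds. */
    static size_t temp_mark(const TempArena *a)      { return a->used; }
    static void   temp_reset(TempArena *a, size_t m) { a->used = m; }

The appeal is that a whole batch of temporary copies disappears with one reset; the catch is that every implicit copy is now welded to that one allocator shape.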

It's not something that can just be added to any language.


And so the size of your data structures matters. I'm processing lots of data frames, but each represents a few dozen kilobytes and, in the worst case, a large composite of data might add up to a couple dozen megabytes. It's running on a server with tons of processing and memory to spare. I could force my worst-case copying scenario in parallel on each core, and our bottleneck would still be the database hits before it all starts.

It's a tradeoff I am more than willing to take, if it means the processing semantics are basically straight out of the textbook with no extra memory-semantic noise. That textbook clarity is very important to my company's business, more than saving the server a couple hundred milliseconds on a 1-second process that does not have the request volume to justify the savings.


It's not just the size of the data but also the amount of copies. Consider a huge tree structure: even if each node is small, doing individual "malloc-style" allocations for millions of nodes would cause a huge performance hit.

Obviously for your use case it's not a problem but other use cases are a different story. Games in particular are very sensitive to performance spikes. Even a naive tracing GC would do better than hitting such an implicit copy every few frames.
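
As a rough illustration (a C sketch with made-up numbers, not tuned for anything real), the usual fix is to hand nodes out of larger slabs so millions of nodes cost a handful of mallocs instead of one each:

    #include <stdlib.h>

    typedef struct Node {
        struct Node *left, *right;
        int          key;
    } Node;

    /* Slabs are chained so the whole tree can be freed in one pass. */
    typedef struct Slab {
        struct Slab *next;
        Node         nodes[1024];
    } Slab;

    typedef struct {
        Slab  *head;  /* most recent slab           */
        size_t used;  /* nodes handed out from head */
    } NodePool;       /* zero-initialize before use */

    static Node *pool_new_node(NodePool *p) {
        if (!p->head || p->used == 1024) {
            Slab *s = malloc(sizeof *s);
            if (!s) return NULL;
            s->next = p->head;
            p->head = s;
            p->used = 0;
        }
        return &p->head->nodes[p->used++];
    }

    static void pool_free_all(NodePool *p) {
        while (p->head) {
            Slab *next = p->head->next;
            free(p->head);
            p->head = next;
        }
        p->used = 0;
    }

With something like this the copy is mostly pointer-chasing and memcpy; with one malloc per node, it's mostly allocator overhead, which is where the performance spike comes from.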


It's not free. There is a license attached, one you are supposed to follow, and not following it is against the law.


There's a deeper discussion here about property rights, about shrinkwrap licensing, about the difference between "learning from" vs "copying", about the realpolitik of software licensing agreements, about how, if you actually wanted to protect your intellectual property (stated preference), you might be expected to make your software proprietary and not deliberately distribute instructions on how to reproduce an exact replica of it in order to benefit from the network effects of open distribution (revealed preference) - about wanting to have your cake and eat it too, but I'd be remiss to not point out that your username is not doing your credibility any favors here.


I'm not whining in this case, just pointing out "they gave it out for free" is completely false, at the very least for the GNU types. It was always meant to come with plenty of strings attached, and when those strings were dodged new strings were added (GPL3, AGPL).

If I had a photographic memory and I used it to replicate parts of GPLed software verbatim while erasing the license, I could not excuse it in court that I simply "learned from" the examples.

Some companies outright bar their employees from reading GPLed code because they see it as too high of a liability. But if a computer does it, then suddenly it is a-ok. Apparently according to the courts too.

If you're going to allow copyright laundering, at least allow it for both humans and computers. It's only fair.


> If I had a photographic memory and I used it to replicate parts of GPLed software verbatim while erasing the license, I could not excuse it in court that I simply "learned from" the examples.

Right, because you would have done more than learning, you would have then gone past learning and used that learning to reproduce the work.

It works exactly the same for an LLM. Training the model on content you have legal access to is fine. Afterwards, someone using that model to produce a replica of that content is engaged in copyright infringement.

You seem set on conflating the act of learning with the act of reproduction. You are allowed to learn from copyrighted works you have legal access to, you just aren't allowed to duplicate those works.


The problem is that it's not the user of the LLM doing the reproduction, the LLM provider is. The tokens the LLM is spitting out are coming from the LLM provider. It is the provider that is reproducing the code.

If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.


> The problem is that it's not the user of the LLM doing the reproduction, the LLM provider is.

I don't think this is legally true. The law isn't fully settled here, but things seem to be moving towards the LLM user being the holder of the copyright of any work produced by that user prompting the LLM. It seems like this would also place the infringement onus on the user, not the provider.

> If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.

If you produce code using an LLM, you (probably) own the copyright. If that code is already GPL'd, you would be the one engaged in infringement.


You seem set on conflating "training" an LLM with "learning" by a human.

LLMs don't "learn" but they _do_ in some cases, faithfully regurgitate what they have been trained on.

Legally, we call that "making a copy."

But don't take my word for it. There are plenty of lawsuits for you to follow on this subject.


> You seem set on conflating "training" an LLM with "learning" by a human.

"Learning" is an established word for this, happy to stick with "training" if that helps your comprehension.

> LLMs don't "learn" but they _do_ in some cases, faithfully regurgitate what they have been trained on.

> Legally, we call that "making a copy."

Yes, when you use a LLM to make a copy .. that is making a copy.

When you train an LLM... That isn't making a copy, that is training. No copy is created until output is generated that contains a copy.


Everything which is able to learn is also alive, and we don't want to start to treat digital device and software as living beings.

If we are saying that the LLM learns things and then made the copy, then the LLM committed the crime and should receive the legal punishment and be sent to jail, banned from society until it is deemed safe to return. It is not like the installed copy is some child spawned from digital DNA, such that the parent continues to roam while the child gets sent to jail. If we are to treat it like a living being that learns things, then every copy and every version is part of the same individual, and thus the whole individual gets sent to jail. No copy is created when it is installed on a new device.


> we don't want to start to treat digital device and software as living beings.

Right, because then we have to decide at what point our use of AI becomes slavery.


[flagged]


[flagged]


> "Learning" was used by the person I responded too.

Not in the same sense.

> If you had read my comment with any care you would have realized I used the words "training" and "learning" specifically and carefully.

This is completely belied by "It works exactly the same for a LLM."

> That doesn't count as a "copy" since it isn't human-discernable.

That's not the reason it _might not_ count as a copy (the law is still not settled on this, and all the court cases have lots of caveats in the rulings), but thanks for playing.

> If you don't like being called out for lack of comprehension, then don't needlessly impose a semantic interjection

If you want to not appear mendacious, then don't claim equivalence between human learning and machine training.

> It is pretty clear this is a transformative use and so far the courts have agreed

In weak cases that didn't show exact outputs from the LLM, yes. In any case, "transformative" does not automagically transform into fair use, although it is one considered factor.

> Very mature.

Hilarious, coming from the one who wrote "if it helps your comprehension."

You must be one of those assholes who think it's OK to say mean things if you use the right words.

Bless your heart.


You both broke the site guidelines badly in this thread. Could you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules? We ban accounts that won't, and I don't want to ban either of you.


> This is completely belied by "It works exactly the same for a LLM."

I specifically used the word "training" in the sentence afterwards. "It" clearly refers to the sentence prior, which explains that infringement happens when the copy is created, not when the original is memorized/learned/trained.

> If you want to not appear mendacious, then don't claim equivalence between human learning and machine training.

I never claimed that. I already clarified that with my previous comment. Instead of bothering to read and understand you have continued to call names.

> Hilarious, coming from the one who wrote "if it helps your comprehension."

You seemed confused, you still seem confused. If you think this genuine (and slightly snarky) offer to use terms that sidestep your pointless semantic nitpick is "being an asshole"... then you need to get some more real world experience.


You both broke the site guidelines badly in this thread. Could you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules? We ban accounts that won't, and I don't want to ban either of you.


I'm polite in response to being repeatedly called names and this is your response?

If you think my behavior here was truly ban-worthy then do it, because I don't see anything in it I would change except for engaging at all.


This is the sort of thing I was referring to:

> Instead of bothering to read and understand you have continued to call names.

> You seemed confused, you still seem confused

> your pointless semantic nitpick

> you need to get some more real world experience

I wouldn't personally call that being polite, but whatever we call it, it's certainly against HN's rules, and that's what matters.

Edit: This may or may not be helpful (probably not!) but I wonder if you might be experiencing the "objects in the mirror are closer than they appear" phenomenon that shows up pretty often on the internet - that is, we tend to underestimate the provocation in our own comments, and overestimate the provocation in others' comments, which in the end produces quite a skew (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...).


Sorry, and thanks.

I know moderation is a tough gig.


We spread free software for multiple purposes, one of them being the free software ethos. People using that for training proprietary models is antithetical to such ideas.

It's also an interesting double standard, wherein if I were to steal OpenAI's models, no AI worshippers would have any issue condemning my action, but when a large company clearly violates the license terms of free software, you give them a pass.


> I were to steal OpenAI's models, no AI worshippers would have any issue condemning my action

If GPT-5 were "open sourced", I don't think the vast majority of AI users would seriously object.


OpenAI got really pissy about DeepSeek using other LLMs to train though.

Which is funny since that's a much clearer case of "learning from" than outright compressing all open source code into a giant pile of weights by learning a low-dimensional probability distribution of token sequences.


I can't speak for anyone else, but if you were to leak weights for OpenAI's frontier models, I'd offer to hug you and donate money to you.

Information wants to be free.


I've been working (very slowly) on a cross-platform UI library written in C. It uses as much as it can from the OS without outright using the native widgets for everything. Rather the focus is on letting the user of the library customize the look of the controls as they see fit.

It's unfortunate but native UI (as in, using the native controls with their native look) has mostly died off in my opinion, at least for complex cross-platform applications.

You can try to do it in a cross-platform manner but it never works well. Want to implement a tab bar like VSCode's? Win32 tab bars do not support close buttons (those need to be custom rendered), and for Cocoa tabs it doesn't even make sense to have a close button. In Cocoa you're supposed to use either the windowing system to do tabs (similar to Safari tabs) or custom render everything (like iWork).

So I say screw it, make it look as you wish.

The design of the API is somewhat DOM-inspired (everything is built up of divs that can be styled). It's pure retained mode for now; I still need to think about how I'll make reactivity work.
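
To give a flavor of the shape I'm going for (purely illustrative; none of these names are final and the library isn't published):

    #include <stdint.h>

    /* API sketch only: declarations plus a usage example, no implementation here. */
    typedef struct UiDiv UiDiv;

    typedef struct {
        float    padding;
        float    corner_radius;
        uint32_t background;   /* 0xAARRGGBB */
    } UiStyle;

    UiDiv *ui_div_create(UiDiv *parent);                       /* NULL parent = root */
    void   ui_div_set_style(UiDiv *div, const UiStyle *style);
    void   ui_div_set_text(UiDiv *div, const char *utf8);
    void   ui_div_destroy(UiDiv *div);

    /* Retained mode: build the tree once, then mutate it as state changes. */
    static UiDiv *make_tab(UiDiv *tab_bar, const char *title) {
        UiDiv  *tab   = ui_div_create(tab_bar);
        UiStyle style = { .padding = 8.0f, .corner_radius = 6.0f, .background = 0xFF2D2D2D };
        ui_div_set_style(tab, &style);
        ui_div_set_text(tab, title);
        return tab;
    }

Underneath that, the per-platform plumbing looks like this: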

On macOS it uses a custom NSView to implement "divs". Drawing is done with CoreAnimation layers. Text editing is handled by a nested NSTextView control with a transparent background. Could also host a web view in a similar manner. Context menus are native.

On Windows it uses a custom C++ class that stores Windows.UI.Composition surfaces for drawing (could also use DirectComposition + Direct2D). Text editing is handled by a windowless RichEdit control (ITextHost/ITextServices). Context menus are native Win32.

On Linux it uses a custom QWidget with a nested QTextEdit control for text editing. I'm thinking of experimenting with Qt Quick for hardware accelerated rendering like the other two.

