The GTK+3 port of GIMP is officially finished (twitter.com/zemarmot)
310 points by marcodiego on April 19, 2023 | 247 comments


Nice to hear, congratulations and thanks for your hard work. Now time for gtk4 :(

There was a time, when I was young, when backwards compatibility was a big part of our job. To me, it seems every other day Qt and GTK make a point of ignoring backwards compatibility with their releases, making application development hard.

I admit, I do *not* understand the GTK/Qt development model at all. But I think having that compatibility is one of the reasons for Linux's success, the rule of "Do not break user space".


> Now time for gtk4 :(

Hopefully easier? Unsure however.

> But I think having that compatibility is one of the reasons of Linux's success, the rule of "Do not break user space".

Quite. Of course, it's a sometimes repeated joke that the Linux GUI toolkit with the most binary compatibility is the Win32 API (via Wine).


Windows backwards compatibility is also pretty phenomenal if you know what you're doing.

There was an article a couple of years ago about taking binaries from either Windows 1 or 2, changing a few bits of the header, and they would run fine.

Edit: actually it's covered in the Wikipedia article https://en.m.wikipedia.org/wiki/Windows_1.0x

It's an old citation but here's the snippet:

"Due to Microsoft's extensive support for backward compatibility, it is not only possible to execute Windows 1.0 binary programs on current versions of Windows to a large extent but also to recompile their source code into an equally functional "modern" application with just limited modifications"

I haven't touched Windows in probably 15 years though, so I can't speak for any of this. Funny, I was a Windows software developer for a decade and now I know nothing about it. I don't even know if the debug tools I used to use made the leap to 64 bit.


IMO it's not phenomenal, it's the job of an operating system! Something seemingly lost on many new devs. I'm pretty convinced by now that the reason the Win32 API is so stable is that no developer is interested in "the old stuff", so nobody tinkers with it, hence stuff doesn't break. I'm fully with Linus Torvalds on this one: DON'T break userspace!!!


It's actual work, IIRC. Again, I've been out of the game for 15 years, but they used to be subsystems - these ABI compatibility layers.

Windows had one for DOS, Win16, "POSIX" and "OS/2", with the last two in air quotes because each was only a subset and they claimed mission accomplished - I think it was specifically to satisfy some government requirement, but don't use me as a reference here. I don't think it could do OS/2 "Presentation Manager", for instance. And there was a reason that Cygwin was still a thing. I used it for an actual production project maybe 22 years ago. Wow, you've never seen slow until you run Cygwin bash scripts on Windows 2000.

Regardless, it didn't come free - there are people up in Redmond who had that as part (or maybe all) of their job. I would have done that - maintain an esoteric part of Windows for a Microsoft salary? That's the kind of job where you have almost no boss. Sign me up.

They had a bunch of weird projects like that. IE for Unix, which ran on Solaris and HP-UX (evolt has them: https://browsers.evolt.org/browsers/archive/ie you can probably QEMU that if you want). They also had Alpha, MIPS and I believe PPC versions of Windows. There were a lot of other things that never got done, like Windows for the Intergraph Clipper, a fact that someone somewhere decided to vouch for me on 12 years ago over yonder: https://www.cpushack.com/2011/01/16/cpu-of-the-week-intergra...

Again, all this shit I know? totally useless.


> I don't think it could do os/2 "Presentation Manager" for instance.

There was a Presentation Manager subsystem for NT that could run 16-bit OS/2 graphical programs, but it was about as hard to come by as buying a single LTSC license is today.


That's absurd. It's almost as silly as Quarterdeck's DESQview/X product that ran Win16 binaries. I ran it for laughs once in 2001; it worked. Motif, X, remote X, and 16-bit Windows --- it's everything you never wanted. http://toastytech.com/guis/dvx.html of course has a gallery. Quarterdeck - thanks for the memories, and also, the memories.

The lesson of DESQ is to design software to run on the computers of tomorrow and not worry so much about the computers of today. That sounds stupid to me as well, but neither of us has yachts or private jets, so what do we know. It's about colonizing the future.

I wouldn't be surprised if FreeDOS could spin this up.


Virtuallyfun has a go at running DesqView/X... and now I'm shocked to see it was all the way back in 2011 !

https://virtuallyfun.com/2011/03/27/desqviewx/

There's also a long running ticket in dosemu2 around some issues with DesqView/X https://github.com/dosemu2/dosemu2/issues/606

In some more years it will probably be working fine there.


2011? Try 1992!

DESQview/X was graphically multitasking DOS programs and running X applications the year after Linux was released. This was the Windows 3.1 era (cooperative multitasking only, nothing preemptive) and before OS/2 2.0 was out. Amazing.

Like everything else at the time, though, it could not run Windows applications (Microsoft Office) and so it was doomed.


Would love to see that demonstrated, maybe it will be on virtuallyfun.com one day.


There's kind of a tension between new Microsoft and old Microsoft.

Old Microsoft put lots of thought into extensibility of APIs and makes it so that the old interfaces still work when they introduce new functionality.

New Microsoft rewrites components from scratch and deprecates the old one every few years, and doesn't bother to shim over the old interface.

The former group has it such that if your app was well written circa 1997-2001, it works well on the current release.

The latter group has it so that you have several different copies of .NET on your hard drive, the "recommended" UI libraries for new applications have changed a lot, and the new hotness from a few years ago is deprecated.


> New Microsoft rewrites components from scratch and deprecates the old one every few years, and doesn't bother to shim over the old interface.

This seems to be a trend across the industry, not just Microsoft. And it's a real shame. It makes it too risky to use a lot of shiny new things in real projects.


I agree it's an industry wide thing.

The Microsoft-specific variant of it is something I've seen up close. One of the issues always struck me as confusing interface and implementation. There isn't a good recognition of the fact that when you write against an API, you code against an abstraction, not a particular implementation. If your interface is good enough, you can do the rewrite-treadmill thing to justify your current bonus or whatever, and still point the old interface at a new implementation without most callers knowing the difference.

So, I think a good example of this working relatively well would be Windows audio. The WinMM API still works. They rewrote the audio stack a couple of times, and introduced new APIs such as WASAPI and I forget the newer one. Winmm isn't broken. It may not have access to all the latest features, but it works.

In contrast, all too often they discard an entire API because they want to rewrite the components underneath. But the I in API stands for interface. A good interface can survive rewrites of what's underneath.


An interface is how you connect your own code to the other code. There is a large interface design space spanning from extremely minimal and simple, to extremely detailed, flexible, giving control over performance etc.

Interfaces that are very simple tend to allow a broad range of implementations but may fail to make accessible all the capabilities of the implementation. On the other hand, interfaces that provide good control over runtime characteristics etc. tend to already give away the implementation.

If "API" were primarily about abstraction, it would be called APA.


I disagree with most of this, but everyone is entitled to their opinion.

One point I'll offer is that the ability to shim an old interface onto a new implementation validates the design of the new thing. If you can't shim them, chances are there's some necessary problem you aren't solving, and possibly aren't aware of.

Concrete details of all these things will alter the discussion. I think my example of audio is a good one for the strength of "the old way". The old interface works, even though it does vastly different things than it did in 1995. Like you say, maybe there are some features you can't get with the old interface -- that can be OK. But they didn't break it.

The Win32 subsystem on NT-based Windows began as such a shim too. The NT native APIs sometimes look very, very different, and most people don't know this. E.g., most people don't know that renaming a file isn't its own syscall operating on a source filename a la Unix rename(2), which is what Win9x did -- there's transparently an NtCreateFile() call that happens inside MoveFile(). Philosophically, the NT API "hates" doing things based on filenames; most operations that Unix or the Win API do on filenames, NT does on handles. But nobody needs to know -- the Win9x API is the public API.
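A tiny sketch of the path-based model for contrast (plain Python, runnable anywhere; the NT-side behavior is only described in the comments, the snippet itself never touches any NT API):

```python
import os
import tempfile

# Unix-style rename is path-based: you hand the kernel two names.
d = tempfile.mkdtemp()
src = os.path.join(d, "old.txt")
dst = os.path.join(d, "new.txt")
with open(src, "w") as f:
    f.write("hello")

os.rename(src, dst)  # on Linux this is a single path-based rename(2) syscall

assert not os.path.exists(src)
assert os.path.exists(dst)

# On NT, per the above, the equivalent Win32 MoveFile() is a shim: it
# transparently opens a handle to the file first, and the actual rename
# is an operation on that handle, not on the source path.
print("renamed via path-based API")
```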


> But the I in API stands for interface. A good interface can survive rewrites of what's underneath.

Absolutely!

But a good interface is also a really difficult thing to engineer, in part because it seems like a relatively easy thing. The gotchas don't become apparent until long after you ship.

Sometimes I wonder if some of this is just cost-cutting. A difficult task is an expensive task. Other times, I wonder if it's because API design is an actual specialty, and there are too many devs doing it who don't have the chops for it.


Also add the WinRT folks to the New Microsoft.

They deprecated C++/CX and .NET Native, while neither Native AOT nor C++/WinRT is a proper replacement in capabilities, and replaced UWP/WinUI 2.0 with WinAppSDK/WinUI 3.0, which is missing lots of capabilities.


> Something seemingly lost on many new devs. I'm pretty convinced by now that the reason the Win32 API is so stable is that no developer is interested in "the old stuff", so nobody tinkers with it, hence stuff doesn't break.

I highly encourage you to take a look at the work Microsoft did on Xbox backwards compatibility for the Xbox One X/S; unlike their naming teams, MS has people who care deeply about this stuff. Not only did they spend a lot of time making sure that most games work on the One X, they also spent time doing things like having the newer platform upgrade the games on the fly where possible.

MS absolutely has people tinkering with their old APIs, sometimes to quite good effect.


> Windows backwards compatibility is also pretty phenomenal if you know what you're doing.

Windows backwards compatibility is also pretty phenomenal if you DON'T know what you're doing and are important enough.

There is the famous case of Windows 95 detecting SimCity and then switching to a more careful memory manager, as SimCity had quite a few use-after-free problems and would otherwise crash.



> Windows backwards compatibility is also pretty phenomenal

This is a point on which I give Microsoft high praise, and they have my great sympathy.

The issue for Windows is existential -- a large percentage (perhaps a majority) of people would ditch Windows in a heartbeat if they had to replace their software because of a new Windows version. Backward compatibility is Microsoft's moat.

It has been worth it to Microsoft to put serious resources toward that end, and they've worked miracles in terms of backward compatibility. It really is amazing.

And I have sympathy for them about it, too, because I guarantee that Microsoft would much rather run without carrying that burden.

I think this is the main reason why they are extremely forceful about updates. If everyone is using the same version, backward compatibility becomes easier because you don't have to keep it for as long.


I was super impressed by Windows' backwards compatibility efforts, in particular all the legendary articles about them testing specific apps and leaving alternate modes that reproduce old bugs so the apps can continue working.

And then I opened a Windows laptop again... and the words that came to mind were "missing the forest for the trees".

Like, I feel that Microsoft really cares a lot about each and every tree, and went above and beyond to make the most trees happy and cut down as few trees as they could at every iteration. And we lauded them for that, cheering at every tree that could stay alive.

Except now the forest is really badly planned; it grew and expanded a lot but without enough pruning, each user has to carve a path with a chainsaw to get anywhere livable, and we regularly fall into forgotten valleys with runes to unlock lost technology.

I won't argue it needs to be a walled garden that breaks everything at every release, but there must be a better middle ground for all of this.


The basic problem is that they didn't provide a proper evolution for Win32. So yes, Win32 apps still "work" if you can tolerate a tiny, non-scaled 1995-era UI. If you want a modern-looking UI that properly scales, then they said rewrite it all in WPF. And then WinUI. And then JS/HTML. And then WinUI 2. And then WinUI 3. At no point was there a smooth upgrade path.

It's hard, really hard, so I don't really blame them for failing. But Apple mostly managed it. AppKit/UIKit is pretty much a direct path from NeXTSTEP in the 1980s: 30+ years of evolution without breaking the whole API, until SwiftUI came along. Pretty impressive. Apple should get more credit for that.


Yes. To note, Apple still had its arm wrestling moments, famously with Microsoft and Adobe, where Apple wanted the apps devs to move on and got a resounding "nope" in return. That resulted in the Carbon API for instance lingering on for a while, but they could still manage a transition and deprecate it.

Just thinking about the two processor architecture transitions and the whole OS7 -> OSX switch, they really have an expertise no other company seems to even get close to.

Offering emulation where compatibility is broken definitely helps.


The current upgrade path from Win32 is Win32, I think. Microsoft now presents it alongside WPF, WinForms and UWP as an option for developing new apps in the "get started" guide [1]. They have also added (or restored?) lots of useful documentation [2] on Win32, including how to handle UI scaling [3].

[1] https://learn.microsoft.com/en-us/windows/apps/get-started/#...

[2] https://learn.microsoft.com/en-us/windows/win32/

[3] https://learn.microsoft.com/en-us/windows/win32/hidpi/high-d...


The third link says:

"if you're creating a new Windows app from scratch, it is highly recommended that you create a Universal Windows Platform (UWP) application. UWP applications automatically—and dynamically—scale for each display that they're running on."

but no fault on you for being confused; Microsoft announces changes in strategy so fast they can't update their docs to keep up, so their once-great MSDN has become a rat's nest of contradictory advice and recommendations.

At any rate, if you've tried to do Win32 HiDPI you will know that it hardly works, and this represents a defeat for Microsoft. For example, HiDPI is only even attempted for windows created from dialog resources, but most aren't, so if you run any classical Win32 apps they still have tiny 16x16 icons everywhere and tiny fonts, basically unusable on modern screens. Apple went through this pain with the original Retina transition, where icons on old apps were blurry for a while, but they updated all their own apps and brought the ecosystem with them on that journey. Microsoft never did.

And it's all like that. Win32 has barely evolved for decades. There are occasional new magic flags and an ever-greater pile of hacks, but there's been no serious attempt to keep it competitive. The assumption was that everyone would migrate to .NET and WinForms but that didn't happen for various reasons. By then, Win32 - never a particularly well designed or well loved API even inside MS - was so aged that they felt it'd be easier to start over from scratch. And once that Rubicon was crossed, they did it again and again.
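(For what it's worth, one of those magic flags is the per-application DPI opt-in, which lives in the app's manifest rather than in code. Roughly this, per Microsoft's manifest schema -- treat it as a sketch, not a complete manifest:

```xml
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <application xmlns="urn:schemas-microsoft-com:asm.v3">
    <windowsSettings>
      <!-- Opt this process into DPI scaling ("true/pm" = per-monitor aware).
           Apps that don't declare anything get bitmap-stretched instead. -->
      <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true/pm</dpiAware>
    </windowsSettings>
  </application>
</assembly>
```

That bitmap stretching is exactly where the blurry/tiny UI on old apps comes from.)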


Apple managed it by using hacks such as forced integer upscaling. It works reasonably well for them because they mostly control the hardware. Windows needs to be able to deal with scaling factors like say 150% to accommodate all that it runs on, which is quite a bit harder.


Yup. And yet, they managed it.


>It's an old citation but here's the snippet:

Really old citation, 1995...

>There was an article a couple of years ago about taking binaries from either windows 1...

The article below goes more in-depth and was written in 2020; it might even be the one you were thinking of:

https://soylentnews.org/article.pl?sid=20/05/10/1753203

>As previously noted, Windows 95 dropped support for 1.x and 2.x binaries. The same however was not true for Windows NT, which modern versions of Windows are based upon. However, running 16-bit applications is complicated by the fact that NTVDM is not available on 64-bit installations.

>After letting NTVDM install, a second attempt shows, yes, it is possible to run Windows 1.x applications on Windows 10 [32-bit].

>FONTTEST also worked without issue, although the TrueType fonts from Windows 3.1 had disappeared.


There's even a way to run 16-bit Windows binaries on 64-bit Windows now.

https://github.com/otya128/winevdm


Oh my gosh. That screenshot of the Calculator alone brought back memories from my childhood.


Oh neat, I might need to try this on Wolfenstein 3D.


"As previously noted, Windows 95 dropped support for 1.x and 2.x binaries. The same however was not true for Windows NT, which modern versions of Windows are based upon."

No way. I'll need major citations on that. Wikipedia, for one, firmly disagrees, as does every book I've read on that history.

edit: I quoted it and still misread it. I thought it said windows nt was based on windows 1&2.


Which part are you objecting to?


I object to NT natively supporting 16 bit apps. I think they dropped that in Win7 or before. Of course with VMs everything is possible but it's no longer built-in.


32 bit Windows NT supported 16 bit Windows apps (and DOS) right until 32 bit Windows was dropped with Windows 11. 64 bit Windows has never supported them. As 64 bit Windows became commonly installed around Windows 7, you might get the impression it was dropped then, but it was still around if you really wanted it.


Windows NT is 3.1 and above


Eh, not quite. Windows 3.1 ran on DOS; NT 3.1 was a separate OS. NT and legacy Windows ran in parallel for a while, up through Windows 2000/Windows Me. That ended with Windows XP, the first consumer Windows NT release.


I misread it. I thought it said Windows NT is based on Windows 1 and 2. Carry on.


Yes! A lot of people don't realize that Windows compatibility is not just about binary compatibility, but source compatibility.


I feel like culturally speaking that's fair? The Windows ecosystem expects closed-source programs that are purely given to users as binaries, the ecosystem around FOSS operating systems expects most programs to be published as source code and then compiled by users or distros.


Only up to when WinRT got introduced, now it is a mess.


How many games designed for windows 7 can run on windows 11?


I think those that have a dependency on Games for Windows Live are screwed, so not Microsoft first party games, but most should? Isn’t it more XP and earlier stuff that has a reputation for being finicky, not DX9/10 stuff?


Anything that depends on Safedisc is also screwed because Microsoft pulled the driver for security reasons. This includes a lot of first-party stuff like Mechwarrior 4 and Freelancer. Really wish they would put those up on GOG!


I think the only game I ever had trouble running on a newer version of Windows was James Bond 007 Nightfire but compatibility mode seemed to work okay. I haven't played heaps of games in the last decade but any issues have been more with getting the right version of the Visual C++ runtime or PhysX (which Steam usually handles for you anyway) rather than the underlying operating system.


It's not even a joke - OK, not GUI-related, but a few years ago I tried to replay Return to Castle Wolfenstein on Linux - the official Linux port would run but with no audio. The Windows version worked fine under WINE!



FWIW, you can use an OSS-to-ALSA wrapper and sound will work.

In general you can get most old Linux stuff to work if you find and install the relevant libraries; the main issue is that these libraries tend to be spread all over the place and sometimes even conflict with existing libraries.

With Windows you also need to do something similar with older games (e.g. use dgVoodoo2 to play old Direct3D and Glide games), but since Windows ships with a lot more stuff, and that stuff tends to be backwards compatible for the most part, there is much less to worry about.


Sometimes, yes - whether that wrapper's improved now I don't know, but at the time it wasn't sufficient. I found a forum post somewhere at the time which included some kind of deep library patching magic - it worked but I rather doubt I'd be able to find it again now.


Many old games work better under Wine than under modern Windows though.


The multiplayer version was open-sourced, and there exists a contemporary project to keep it updated, but also as backwards compatible as possible.

https://www.etlegacy.com/


Was it a Windows version or DOS version?


Return to Castle Wolfenstein is Windows. Using the Quake 3 engine.


And there was an official Linux port. Which no longer works properly.


Oh boy, I don't know how I missed that detail. Sorry!


No, definitely not easier at all. Especially because all the drawing code has changed.

But the Gtk5 port will be total madness: no more modal windows and no more traditional tree view. I can agree that it removes a lot of problems and maintenance from the Gtk team, but the cost for application developers is huge.

But just as with the Win32 API, we can all still use Gtk2.


It hasn't been a joke for a while.


> Of course, it's a sometimes repeated joke that the Linux GUI toolkit with the most binary compatibility is the Win32 API (via Wine).

Motif is pretty good here too.


Depends if they use Glade, or any of the removed APIs.


I don't understand the Glade problem. I thought both GTK+ 3 and GTK+ 4 had a GtkBuilder class which loads .ui files? And as for creating .ui files, Cambalache is a replacement for Glade?


From what I've seen, the .ui formats for Gtk3 and Gtk4 are different, and Glade does not support Gtk4 widgets or its additional libraries like libadwaita.

Cambalache is a successor to Glade but it also has issues. It uses its own storage format and you have to export to a .ui file for use in your Gtk4 project, and if you make changes to the .ui file it's very difficult or even impossible to use those changes in Cambalache again.

The general consensus on a few of the GNOME-related Matrix channels seems to be to write .ui files by hand.
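For anyone who hasn't seen one, a hand-written GTK4 .ui file is just GtkBuilder XML; a minimal sketch (the object ids here are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <!-- GTK4 GtkBuilder format; class names are the standard widget types -->
  <object class="GtkWindow" id="main_window">
    <property name="title">Hello</property>
    <child>
      <object class="GtkButton" id="hello_button">
        <property name="label">Click me</property>
      </object>
    </child>
  </object>
</interface>
```

You load it at runtime with gtk_builder_new_from_file() and fetch widgets by id with gtk_builder_get_object(); the pain point is that nothing graphical helps you write or refactor the XML itself.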


This is something for which I unfortunately have a "reasonable" explanation. It involves some historic idiosyncrasies and GNOME devs' decisions over time. If you want to hear (well, read) an interesting story, here's some of it:

* First: GNOME was born without an IDE. But there was an interesting little project (by Naba Kumar), started in the mid-to-late 90s, called Anjuta. IIRC, Anjuta 0.x to 1.x was what was called a "monolithic" IDE; that is, it didn't have any plugins and its features were "hard-coded". The 2.0 rewrite allowed it to have plugins, but they didn't cover (at first) all of the features of Anjuta 1.x. It was only around 2.4 or 2.6 that Anjuta finally got feature-rich and reasonably stable enough to become part of the GNOME project and "the IDE of the GNOME project", although most GNOME devs didn't use it.

* Second: GNOME was born without a GUI builder. The most successful one was Glade. At first, Glade generated C source code for the UI using GTK. Then people thought that was bad and someone created libglade. The libglade project was two things: an XML format specifying a UI description, and a library that could dynamically (at runtime) load it and build a GTK interface from that description. It could even connect signals without a single line of C source code.

Since libglade seemed like a good idea it was eventually incorporated into GTK in a format that is now called GtkBuilder.

Parallel to this, many events happened. Glade changed hands a few times and eventually its main developer became Chema Celorio. Celorio was a skydiver and one day he had a problem with his parachute and died. Glade development became dormant until it was picked up again by Tristan Van Berkom. Tristan modernized it, 'resurrected' it, improved its UI, rewrote many parts of it and eventually removed its code generation back-end to use only the GtkBuilder format.

* Third: At the same time, Anjuta development also changed hands. I remember Johannes Schmid and a few others. At some point there was an integration between Glade and Anjuta, but it was not very different from running the two tools separately, except for the fact that they shared the same window.

There was some hope when, during selection for Google Summer of Code, an interesting project was chosen: improving the integration, by Pavel Kostyuchenko. The project was successful and the integration between Glade and Anjuta improved because of it.

I was happy and hopeful about all this when one day I saw that (part of) the integration code had been chopped off. I was so intrigued that I decided to ask the Anjuta devs personally (on IRC) what had happened. The answer was something like: "it was not as good as we'd like; fixing it would be much more work than rewriting it."

That was when I decided to join the project to fix it myself. I improved the integration between Glade and Anjuta. I rewrote how it created functions inside the C source code and how it automatically discovered which .c and .h files were associated with the .ui file. I implemented automatic creation of members on the C struct that corresponded to the UI elements, fixed some bugs, and wrote documentation that was a good tutorial on how to use these features... I even implemented the preview feature in Glade. I really made the integration work to my liking. It was the workflow I had dreamed of.

The future seemed bright: there were new developers, frequent commits, new features implemented all the time... But GTK evolved faster than Glade. Tristan eventually abandoned it and it was picked up by Juan Pablo Ugarte. By around this time, Glade had become something of a monstrosity of technical debt, slow, crash-prone and buggy, but it worked. Implementing support for modern GTK features (like bindings and CSS) seemed like too much work and too hard to do.

* Fourth: Around 2013, Christian Hergert started developing a modern IDE for GNOME: GNOME Builder. He left his job, started a campaign to raise money to support him during the first versions, and quickly showed a lot of progress. It was said at the time that it was going to have Glade integration (it eventually did, but never a decent one). The project thrived and Red Hat hired him.

Development of Anjuta slowly became dormant after this. It is now archived.

Hergert started another project, Drafting, to replace Glade, but it was de-prioritized. Ugarte basically abandoned Glade and started Cambalache. Cambalache seems like a good idea and works, but it has no IDE integration and is still mostly a one-man show.

And that is how we arrived at where we are. GNOME still lacks a good UI builder with IDE integration and no current plans to fill this gap.


thank you for that great and informative history.

this deserves to be made into its own blog post.


Sure if you like to write XML by hand.

Cambalache is years away from reaching feature parity with Glade. Then there is the whole issue of using the Web to render the designer.


Their idea of GUI building is just too bad and backward. You define the GUI by dragging things onto a canvas, with some things specified in text. What you never want is to go through thousands of widgets in a sidebar, a bit here and a bit there.

It suffers from "we don't think about UI, only about technology". GUI builders done the old way stink (only Xcode's stinks like perfume). There is also a reason why Microsoft does not offer a GUI builder for their new WinUI 3.


This sort of long slog work is where proper funding is needed.

I'm sure all the distros are busy, but it sure would be nice if one of them would fund a developer to do this.


The XML isn't upgraded and there are no tools or XSLT to do it. You end up editing the .ui files by hand. Not fun or productive.

There are good reasons why professional developers abandoned GTK+, and this is just one of them.


Glade can't render both GTK3 and GTK4 in the same process and wasn't actively maintained. Cambalache was a replacement supporting both.


> To me, it seems every other day Qt and GTK make a point of ignoring backwards compatibility with their releases, making Application Development hard.

Can't talk about GTK, but Qt broke C++ API compatibility once between 2012 and 2023, for Qt 5 -> Qt 6 (and really, porting from 4 to 5 and from 5 to 6 is not particularly complicated). I wouldn't say it's fair to call that "making a point of ignoring backwards compatibility with their releases".


Is it really "breaking" the API when moving from v5 to v6 (or v2 to v3, or v3 to v4), when the old and new versions are co-installable, especially if they're all still at least receiving security support?

GTK2 will continue to be included with Debian 12 ("Bookworm") when it's released later this year (knock on wood). Any apps written for GTK2 20 years ago will still work against it as the GTK2 API (and ABI) has remained backwards compatible all that time.

(OK, GTK2 is EOL and not receiving any more updates since Dec 2020. So that's not great. But it does still work, and so will the apps that use it.)


A lot of distributions are trying to remove GTK2 though so it's only a matter of time until that breaks. Slackware will probably still have it though (and they still ship GTK1 which is fantastic!)


Recently I fixed the Gtk1 backend for Lazarus[0], which uses Gdk_pixbuf for pretty much all bitmap operations as it provides more advanced functionality than plain Gdk. Sadly, while Slackware had Gtk1, it didn't have Gdk_pixbuf.

I did hunt down some old version of Gdk_pixbuf and got it to build (at least on my system)[1], so it isn't that big of a deal.

I attempted to remove the dependency at some point (when I wrote the text at the second site I hadn't explored that part of the code much, so I was under the impression that it made little use of it) and while possible, it needs a lot of functionality to be reimplemented in terms of fpimage (graphics functionality that I guess didn't exist when Gdk_pixbuf was originally adopted). I did stub out all the Gdk_pixbuf code though, and while glitchy, it does work[2].

I might poke at it now and then, but I'm not part of the Lazarus dev team, and whenever I fix any bug on Linux it is for the Gtk2 backend, since that is what I'm using (I have no plans for nor interest in the later versions of Gtk).

[0] https://www.lazarus-ide.org/

[1] http://runtimeterror.com/pages/badsector/gtk12/

[2] https://i.imgur.com/su9JCUY.png


There are still some people salty that Gtk3 broke themes (not the API or ABI, but themes). It was communicated explicitly that themes were going to break, that the theme engine implementation was not fully ready when Gtk3 was originally released, and that themes are not considered part of the public API. Yet some people didn't take that warning seriously, and eventually found out the hard way that it was meant seriously.


Not a big deal considering there was just one major version upgrade during this time. And Qt also didn't improve any of its traditional GUI parts during Qt5 development; it just added stuff for its embedded customers.


GTK broke its C API in 2011 and 2020. Not bad, IMO.


Ten years is the timespan of one technology transformation (yes, using the JavaScript framework of the day is not new technology). So it's not bad, but it's also far from good, or anything special.


> There was a time, when I was young, backwards compatibility was a big part of our job. To me, it seems every other day QT and GTK makes a point of ignoring backwards compatibility with their releases, making Application Development hard.

Which is odd because desktop user interfaces haven't fundamentally changed in the past two decades. Theoretically you could generate UI code for Win32, Qt, GTK, and others from a common markup format.
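Both big toolkits already describe UIs in markup, in fact: Qt Designer produces .ui XML files and GTK has GtkBuilder files. As a rough sketch (GTK3-style GtkBuilder; the ids are illustrative), a window with a button looks something like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <object class="GtkWindow" id="main_window">
    <property name="title">Demo</property>
    <child>
      <object class="GtkButton" id="ok_button">
        <property name="label">OK</property>
      </object>
    </child>
  </object>
</interface>
```

The hard part isn't the markup itself; it's that the widget vocabularies of the different toolkits only partially overlap.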


> Theoretically you could generate UI code for Win32, Qt, GTK, and others from a common markup format.

In theory yes, but the attempts I've seen at this run into issues with things like widgets present on one platform not being present on others (e.g. Win32 has no miller column widget like Cocoa's NSBrowser) and the UI toolkit chasing identical behavior between platforms.

For this sort of idea to work, I think an approach that fills gaps (following previous example, providing a Windows implementation of a miller column widget to mirror NSBrowser) and doesn't try to fight the behaviors of the UI toolkits it's built upon is what's ultimately required. Without that, an approach that reimplements everything from scratch is probably better.


wxWidgets [1] and SWT [2] are probably the most successful attempts at striking this balance (using native widgets). But usually the apps look slightly off on every platform.

[1] https://www.wxwidgets.org/about/screenshots/

[2] https://www.eclipse.org/swt/


> Win32 has no miller column widget

Neither does GTK, Qt or pretty much any other toolkit I've used. It's not exactly a great example of a standard widget.

We could also limit our definition of modern UI toolkits to only ones that support the Touch Bar and POWER9, but it's kinda silly. The vast majority of interaction metaphors are the same between systems. If your OS has a different or unique function that the vendor wants to see used, it's up to them to expose it well.


It ain't pretty but does exist in Qt

https://doc.qt.io/qt-6/qcolumnview.html#details


Indeed, I stand corrected. Maybe Qt 6 will give it a glowup befitting of a non-early-2000s Autodesk tool.


> to only ones that support the Touch Bar and POWER9

I'll give you the touch bar, since that very much does interact with the way GUIs work, but if your graphical toolkit needs to care about the CPU architecture at all I feel like something has gone terribly wrong. The closest thing that seems reasonable is to allow graphical acceleration, but requiring even that is a great way to ensure that things will break on a regular basis.


> Theoretically you could generate UI code for Win32, Qt, GTK, and others from a common markup format.

Lazarus's LCL components pretty much do that. One source, can target Win32, Qt, GTK, and Cocoa.



Qt, Gtk UI is already cross-platform, and now you want another layer of (leaky) abstraction on top of it :)


Perhaps that kind of backwards compatibility requires a level of completeness or maturity that despite their age, GTK and Qt have yet to achieve (or are just now achieving).

It's not too hard to get an AppKit/Cocoa project from the early 2000s compiling on modern macOS, but Cocoa had already accrued 20 years of age by 2005, which no doubt has reduced the degree of changes that've been necessary in the 18 years since. GTK only hit that 20 year mark a few years ago and hasn't enjoyed the same level of corporate firepower to develop it.


There is a widespread lack of understanding of just how staggeringly large a high-quality GUI toolkit is.

Across a number of minority language communities I participate in, there's a constant stream of "Why isn't there a good native GUI toolkit in $MY_FAVORITE_LANGUAGE yet? All the ones I can find suck." questions, to which the answer is: you're asking for a project that can easily be on the order of all the other open source projects in the language combined, if not greater. What you get with less effort than that are the aforementioned "suck"y GUI toolkits. Which I celebrate and praise, but realistically, just getting a production-grade rich text widget alone is a staggering undertaking, let alone everything else that goes into a GUI toolkit.

Just a project to bind an existing GUI toolkit into your favorite language can easily overwhelm a dedicated team of multiple people, and as annoyingly complicated as that can be, it's wildly easier than building the entire toolkit.

Getting to "maturity" is a really, really hard problem.


A big part of that historically was a GUI toolkit might need to build up a whole object model, render model, various container types, event loop, timer queue, networking, IPC, file system abstractions, internationalization and tools to deal with that, debugging and visualization tools, etc, etc, etc. For cross platform toolkits a whole OS and build system layer to deal with that as well. At least the ones people ended up using widely (gtk, qt, win32, cocoa, etc).

That's to say that a useful GUI toolkit has typically been a whole lot more than simply "draw me some widgets and let me poll on input events"

I wonder if that's really still as true though in newer (not C/C++) languages like Rust for example, where you can reuse a lot more of what the ecosystem and language already provide.


I'm sure it shrinks the footprint a lot to just do "widgets" rather than being an effective VM, but even that is plenty huge. I'm not aware of any non-mainstream language having even that much of a toolkit. Plenty of partial starts, even starts that, with another 100x the effort, could compete fairly well, but nothing terribly close to "done".

Someone may reply to me with one. I'd have to look at it and analyze. But I know I'm not going to get dozens of citations of high-quality toolkits of, say, the calibre where I can put rich text widgets, with high-quality multi-font Unicode support and everything else I want out of such a widget, into a tree control with those rich text widgets as my leaves, and have a million nodes' worth of rich text controls because they can be lazy-loaded based on visibility, just to name one use case.


It’s crazy how many newer UI toolkits (including those by giants like Microsoft) don’t even have a tableview/datagrid widget, which is one of the most common desktop widgets right after buttons. You’re expected to haul in some third party widget that’s half-baked and lacks battle testing or roll your own, which is ridiculous and a massive productivity + UX killer.


> don’t even have a tableview/datagrid widget, which is one of the most common desktop widgets right after buttons

The newer UI Toolkits have only labels. Buttons, scrollbars, borders, menus, comboboxes cannot be implemented with the current state of the art technology. /s


Because "mobile". Tables/grids do not work with mobile, so they are often skipped entirely. Annoying, I agree.


> You’re expected to haul in some third party widget that’s half-baked and lacks battle testing or roll your own, which is ridiculous and a massive productivity + UX killer.

Reminds me of the early-to-mid-00s and Delphi components. The standard components were fine but lacked many important features (including Unicode!), so you'd hunt/shop around for third-party components.


This was the case for REALBasic too, which is where I first dipped my toe into programming. It was a wonderful intuitive learning environment (want to change what a button does? Double-click it to edit its code!) and could build to OS 9, OS X, and Windows with one click which was amazing, but it was very easy to hit the limits of what its standard widgets were capable of, which meant that you had to go get third-party widgets, most of which were commercially licensed (and meant most serious REALBasic apps used commercial widgets).

That probably wasn't as much of a problem for a working adult but it wasn't much fun as a broke teenager itching to make things (as I was at that point).


> Just a project to bind an existing GUI toolkit into your favorite language can easily overwhelm a dedicated team of multiple people

I have to disagree; it's not that hard and can be done by one person. The Red language did it: it has a very clear declarative GUI toolkit for Win/macOS/GTK in a <2MB executable. So no, it's not that hard.


At 2MB in size, I guarantee you beyond a shadow of a doubt that it does not satisfy the test I gave in my other message, nor does it pass the test that if you ask a designer to design you a form, the toolkit will be able to do what the designer wants.

At that size you've got a programmer GUI toolkit, that as long as it is used by a programmer who is willing to live within its constraints, it will work fine, but it won't satisfy anyone else.

To put it another way, Window's Wordpad is an OK document editor, if you're happy to live within its constraints... but it's no replacement for a Word document being shared over Sharepoint with change tracking between multiple people before finally going out to 1,000 people via a mail merge. There's a ton of Wordpad-class GUI toolkits. There's not very many Word-class GUI toolkits.

I mean, I'm sure Tk is much more featureful than can be fit into 2MB and it's honestly barely in the "works for programmers" class. It's really nice for programmers, too, but it isn't going to replace QT anytime soon.


What does Sharepoint have to do with a GUI toolkit?


In that metaphor, the 2MB GUI toolkit is Wordpad, and QT is Word.

I find the analogy particularly apt because it is often said of Word "Everyone uses only 10% of Word but everyone uses a different 10%", and the same is true of GUI toolkits. Once you get out of the super-basics of text widgets and radio buttons, you're in a hugely diverse world where nobody is using the same thing but to be a "real GUI toolkit" you need to support them, like, table widgets with complicated lazy loading, rich text widgets rich enough to implement a full custom editor, arbitrarily complicated graphing libraries, OpenGL support, internationalization and FULL Unicode support (far beyond merely the rendering of text), and so on and so on. I mean, take a minute and click through this: https://doc.qt.io/qt-5/classes.html Some of it is because QT is also a framework, but a lot of it is there for a GUI toolkit reason.

Just blipping through that, a list of features that are absolute drop dead requirements for someone out there that I didn't think to mention includes: Printer support, animation support, OAuth support, canvas support, filesystem support (for that open dialog and friends), font interaction for if you need direct rendering, help support, alternate calendar support, video support, and who knows what else I missed. Few apps need all these things, but there's plenty of apps in the world that absolutely, positively need at least one of those things.


> It's not too hard to get an AppKit/Cocoa project from the early 2000s compiling on modern macOS, but Cocoa had already accrued 20 years of age by 2005

That's a mindboggling thought. How much did 2005 Cocoa have in common with 1985 Cocoa? (That's a real question, I have no idea)

Qt and GTK were released (or first labelled stable) in 1995 and 98 respectively, so 20 years gives us 2015 and 2018 which is well within the Qt5 and GTK3 era.

For Qt, I would say Qt4 in 2005 marks a point of maturity, if not terminal stability. Ten years. After that there have been whole substructures and programming idioms added and removed and all sorts of things tidied up, but anything you wrote directly to Qt4 is going to be conceptually similar in current Qt versions.

The Qt2 to Qt3 and Qt3 to Qt4 transitions (I never used Qt 1) broke almost every line of Qt code, but from 4 to 6 is a different prospect. It's a question of updating some details and seeking replacements for specific APIs that haven't been carried across. That can be difficult or totally blocking but it's quite different from having to rewrite everything.


1985 Cocoa and 2005 Cocoa are very similar; I would guess fewer things changed between them than between 2005 and modern Cocoa.

Except that memory management, which is not strictly Cocoa, has changed a lot and might have changed some APIs.

1985 just had malloc and free, or [Class alloc] and [object free]. 1986/7 introduced autorelease and retain. 2009 (?) added garbage collection, and then 2010 (?) automated autorelease and retain (ARC).


Memory management has not changed; it's just hidden by the compiler. There are only three ways to do it: refcounting, GC, or manual.

But a few things changed considerably: the backing model, and the move away from NSCell objects to child views, was the required step to get stuff onto the GPU. The next one is recycling views everywhere, folding the concept of NSOutlineView into NSTableView.
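The three models differ mostly in who performs the counting. As a hedged sketch (in Python for brevity; the `Obj`/`ref`/`unref` names are my own illustration, not GLib's or Cocoa's actual API), the manual refcounting discipline in the GObject/pre-ARC Cocoa style looks like:

```python
class Obj:
    """Toy object with manual reference counting, in the spirit of
    g_object_ref()/g_object_unref() or retain/release."""

    def __init__(self):
        self.refcount = 1      # the creator owns the first reference
        self.freed = False

    def ref(self):
        self.refcount += 1     # a new owner takes a reference
        return self

    def unref(self):
        self.refcount -= 1
        if self.refcount == 0: # last owner gone: release the object
            self.freed = True

o = Obj()
o.ref()                        # a second owner appears
o.unref()                      # first owner done; object survives
assert o.refcount == 1 and not o.freed
o.unref()                      # second owner done; object is "freed"
assert o.freed
```

ARC and GC mainly automate where the ref()/unref() calls happen; the ownership discipline underneath stays the same.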


A good example of why backwards compatibility is important! I've been using GIMP for years with no clue that it was using a 10-year old version of GTK. And that's great.


What does that have to do with backwards compatibility? Gnome hasn’t received a major update in ages because it required so much work to support gtk4.

Of course gimp would keep working if you keep it on gtk3.


Why are you agreeing so adversarially?


Not trying to be adversarial. But I’m not agreeing. Obviously gimp will keep working if they don’t upgrade the dependencies. I don’t understand how that is impressive.


I guess I was imagining a scenario where Gnome 55 on Gtk 7 completely breaks compatibility with older Gtk apps and all the distros stopped providing Gtk 2/3 in their repositories and there were conflicts/headaches trying to get it all working.

As long as Gtk continues to have backwards compat, old/unmaintained apps should never just 'stop working'.


But your Gtk2 apps will not work on later versions of Gtk, and most distros are on the track to remove Gtk2 from the repos. So this isn't a hypothetical.

And it will be the same once Gtk3 is phased out: https://docs.gtk.org/gtk4/migrating-3to4.html


> Now time for gtk4

Port it back to its original toolkit, Motif, now that that's open. Won't be needing an update after that ;)


Given how many look&feel trends we've seen over the last few decades it would even look kinda cool&trendy in a postmodern-ish kinda way!


Motif Window Manager (mwm) is very like twm but with nice regular sized colour icons and Alt-Tab already defined in the system .mwmrc.

Very Windows 3 with iconised windows on the desktop.

https://en.wikipedia.org/wiki/Motif_Window_Manager


You are not wrong: compatibility is a factor in the success of the Linux kernel, and the lack of it is a huge reason that desktop Linux has never really hit the mainstream.

Flatpak looks like it may partially fix that problem for desktop apps. Certainly, Flatpak makes it more realistic to support a commercial app on Linux.

For gaming, somewhat ironically, the Windows APIs are serving to provide that compatibility role. If I wanted to reach Linux gamers, I think I would write a Windows game that was properly tested for good compatibility and performance on Proton. Not only can I probably effectively reach more Linux desktops that way today but my game will probably still run well a couple of years from now whereas a “native” Linux binary will probably have broken by then.


> I admit, I do not understand GTK/QT development model at all.

It's very easy to understand.

QT is a commercial and de-facto proprietary product and the development model is: do what the paying customer wants.

GTK has no customers and after it was taken over by the GNOME crowd the development model is to deliberately sabotage the FOSS ecosystem. Small long tail projects can not afford to follow the relentless stream of backwards incompatible changes produced by upstream and die off. Even big projects like GIMP struggle with this. The result is a broken ecosystem which serves certain market participants (e.g. that offer desktop operating systems) very well.


On what basis are we claiming malicious intent by GNOME?


GNOME has developed it for decades. They broke API twice. Fewer times than Qt.


This is of course a lie. They declare widely used APIs as "internal interface" and break backwards compatibility on a regular basis even on sub minor version updates.


I think you are talking about the styling API during the 3.10-3.16 years. Yes, they did, but styling was never an official API. If you can't find the documentation in Devhelp, then it's nothing you should use.

For people who thought they were smart, it was of course a fuckery.

The most painful breaks came from the drawing code: Gtk2 moved to Pango/Cairo, and Gtk4 moved to GPU textures. Gtk5 will now break the ageing view models, and I hope Gtk6 will break the single-threading of the whole GUI.


If you really didn't work with GTK2, then there's no way you can understand the problem; hence your comment.


CADT


It is interesting that I read your comment today; I recently had an idea about enabling seamless binary compatibility and binary polymorphism without having to change and recompile sum types.

Widget frameworks are very complicated and have intricate APIs.

With present day computer technologies it is difficult to mutate an API or interface that others are implementing or using. I think people also rely on side effects that aren't promised by APIs, which makes it harder to change things.

I want to be capable of refactoring my logic and code without breaking my data structure and I want to change my data structure without breaking my logic. These are at odds.

How many times has a library upgrade caused a compiler error for you?

When someone implements my Java API, it is difficult for me to change it. Breaking changes are my common experience when upgrading software libraries.

My idea is that we should trace the method names, field/property names, and method parameters used in an Abstract Syntax Tree of the code, turn that into a structure, and hash the symbols in the structure to a corresponding in-memory layout index offset. (We put the data for a member symbol, for example "children", in the same memory position every time.)

We can use heuristics over this AST so that, from any given symbol name, there is a traversal from one symbol to another.

My idea is to handle the seamless migration of types - such as replacing one type for another, new type, merging types, splitting up types into members, changing the associations of a type, changing the plurality of the type (is it a 1-to-1 or 1-to-many mapping or many-to-many mapping?)

The idea is to create a struct data layout from an AST symbol trace and turn it into deterministic arrangement of data at the machine code layer.

When I refer to a symbol on a computer, it is at a numerical position in memory, relative to other symbols, such as relative to a symbol with RIP relative addressing in assembly.

If the same symbol corresponds to the same numerical position every time regardless of the AST, we get binary compatibility regardless of the AST that created that symbol traversal.

If I change the AST, but the data has the same underlying associations, the data structure can be inferred. For example, git detects file moves based on file hashes.

We could define a list of keywords and hash them into buckets and then people use these symbol names and they get binary compatibility for free.

For example, what if a library upgrade changes the object hierarchy and splits something off into a new object, or introduces a plurality (one-to-many) where there was none before? Such as having multiple addresses for a customer, or multiple email addresses and contact numbers for an account where previously there was only one.

The hash of the relationship diagram can map to a struct layout deterministically.

For example, take the following data associational hierarchy:

business_unit -> department -> product -> manager

and managers now need to be responsible for departments, the associational diagram changes to

business_unit -> manager -> department -> product

can we sort and hash the fields of these relationship diagrams so that they always place fields at the same index offsets in memory?

  hash "business_unit -> department -> product -> manager"

  hash "business_unit -> manager -> department -> product"
so that business_unit -> manager and manager -> department both produce the same indexes.

There are fields on business_unit pointing to manager, children of the manager object for departments, and children on department for products, or something like this.

Or similar. These kinds of schema changes should be programmatically deterministic, because these kinds of migrations are so common.

This idea could also solve database migrations too.
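A minimal sketch of the core trick, hashing symbol names to deterministic slots so the layout survives a schema reshuffle (the hash choice and names here are my own illustration, not a real system):

```python
import hashlib

def slot_for(symbol: str, num_slots: int = 2**32) -> int:
    """Deterministically map a symbol name to a slot index: the layout
    depends only on the name, never on declaration order or the AST."""
    digest = hashlib.sha256(symbol.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_slots

# Two versions of the schema from above, with the association chain
# reordered between releases:
v1 = ["business_unit", "department", "product", "manager"]
v2 = ["business_unit", "manager", "department", "product"]

layout_v1 = {sym: slot_for(sym) for sym in v1}
layout_v2 = {sym: slot_for(sym) for sym in v2}

# Every symbol lands in the same slot in both versions, so code compiled
# against v1's offsets still finds "manager" after the reshuffle.
assert layout_v1 == layout_v2
```

A real implementation would still need collision handling and a story for renames, which is where the git-style similarity detection mentioned earlier would come in.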


Some of the ideas are implemented in Unison[1].

[1] https://www.unison-lang.org


I wasn't really aware that GIMP wasn't ported to the latest GTK+ version yet. This is a bit funny given that GTK originally stood for "GIMP Toolkit".


GTK+3 isn't the latest version either


Yes that’s why it’s always been 2.x and now will finally go to version 3.


Whenever I install Gimp, I have to remember this tedious sequence of steps:

Edit > Preferences > Interface > Toolbox > uncheck "Use tool groups"

Please make this the default! Who on Earth thought it was a good idea to hide the tools in an image editor?


I like it; the toolbar feels less threatening, and the groupings make sense. All the keyboard bindings still work, so there's no interruption to workflow; the screen just looks a little tidier. I salute the effort to maximise the workspace.


Thanks for the tip.

Does "mystery meat navigation" ring a bell for anyone? That metaphor is what I think of when I have to click each tool group to figure out what's inside.

I must have read some funny blog post explaining the concept of Mystery meat navigation.


First result on DDG is a Wikipedia article that says

> The term was coined in 1998 by Vincent Flanders, author of the book and accompanying website Web Pages That Suck.


In addition to this I always use the legacy icons, it's much more obvious what they mean. I find the symbolic ones impossible to use, and the modern color ones are only a slight improvement over that.


I don't think I've ever set this setting. Did it change recent-ish? How does this change the UI?


screenshots: https://imgur.com/a/STMtkS0

If I recall correctly, they started hiding the tools by default 3-5 years ago.


I also always go to Preferences > Interface > Icon Theme and change it back to Legacy, and the Theme to System (a light one for me). I really don't like the default dark theme and the lack of color in the icons. I suppose people who use it all the time just learn keyboard shortcuts. My casual usage for nearly 20 years though insists on being able to have the unvocalized thought process: "I need the eraser, so I'll click on the big pink eraser icon".


Holy shit, combined with the above that is infinity times better. Thanks.


This is something I was actively resisting. It turns out that for me, I find GTK3 to be less aesthetically pleasing than GTK2. There are far fewer themes for GTK3, and they aren't as nice as the GTK2 ones either. Sad to see a step back in user interface design, in my opinion at least.

It won't be long before all the distributions catch up and start removing the GTK2 builds, which are likely to be deprecated.


Also because of the extremely irritating animations, which some found impossible to disable.


Theming is a thing of the past. Unless it comes directly from the App Developer. The idea of general theming looking good is pretty stupid.


I believe GIMP is the oldest F/OSS app I have used more or less continuously since I first logged into a Linux system decades ago.

It's amazing how dedicated they are after all these years. Hats off to you, people involved with GIMP over the years ... your work has made a huge difference in many lives and careers.


> I believe GIMP is the oldest F/OSS app I have used more or less continuously since I first logged into a Linux system decades ago.

X (Xorg now), xterm, Gimp, Emacs... These guys were there when I started with Linux and I still use them to this day!


Seconded. GIMP kicks serious ass, no matter what version of GTK it uses.


I stopped using Gimp a few years ago because it made me lose data, and I could not find a way to work safely with Gimp.

I was modifying several PNG images. Once I was done, I closed the window. As usual, Gimp warned me that I hadn't saved anything, since it considers that saving to PNG is not saving, it's just exporting. So I ignored the message, like always. It turned out I hadn't saved/exported an image, and I swore to use a saner application and never touch Gimp again.


So:

- You didn't save your file

- You didn't export your file

- You ignored GIMP's warning about it

And somehow it's the program's fault? I guess they could implement some form of temporary autosave like some office applications do, but that comes with its own can of worms (i.e. stuff you really didn't want to save staying on your drive).


You misunderstood. I had several tabs in Gimp, each with an image. I thought I had exported every one (to .png). Gimp told me I hadn't saved any (to .xcf) and Gimp doesn't care about exporting. *I had no way to know if my images were all exported.*


> I had no way to know if my images were all exported.

You had a way of knowing that though. When GIMP lists all the unsaved images, it specifically says which images have been exported and to what files. It does so by appending "(exported)" to the filename (just like in the window's title when you are editing) and mentioning the exported file in the second line.


Oh, that makes more sense. Still, if GIMP had this kind of temp autosave you wouldn't have lost these files, since you could've opened the .xcf of the relevant file and then exported. Overall the best workflow with these applications is to always save the project file, in GIMP, Photoshop, or Affinity.


That's interesting, so it doesn't expect you to work directly on a PNG?


They changed the behavior a number of years back.

It used to be that you just put whatever file extension you wanted in the save dialog and GIMP would save to that file format. Honestly, I liked it; I always thought it was one of the saner ways to handle a complex subject.

However, GIMP's native format is .xcf, and .xcf is the only format that will save all of GIMP's internal data. Due to the ease of losing data, it was decided to split the save system into two parts: "save" only saves in .xcf format, and all other formats are exported.

So no, you can no longer just work on a PNG; GIMP will nag you to save to .xcf every time. I understand the change, but I can't help thinking we lost something along the way: something about tools that stay out of your way and do what you ask of them.


For me it was Audacity.


Time to start working on the Gtk 4 port? https://docs.gtk.org/gtk4/index.html


Gtk 3.0 was released in 2011, the last 3.x release was supposed to be in 2016, but later 3.24 came along. 3.24 is now up to the 29th point release with 3.24.29. Kind of an OpenSSL 0.9esque version number...

Meanwhile Gtk 2.0 from 2002 was apparently the first GTK+ version with the GObject system. I'm not sure if distros can stop shipping Gtk 2 any time soon, as it's quite commonly used in commercial software and not always vendored (since vendoring Gtk can be tricky).


> 3.24 is now up to the 29th point release with 3.24.29. Kind of an OpenSSL 0.9esque version number...

It's just semver. Gtk 3 isn't receiving new features, so it's frozen at 3.24. It still gets bug/security fixes, so 3.24.x patch releases abound (and it seems the current version is actually 3.24.37). Similar situation with Gtk2, current version is 2.24.33.


They slightly cheated and added some new features in 3.24.x to ease the transition and better integrate with newer tech like sandboxing.


> Meanwhile Gtk 2.0 from 2002 was apparently the first GTK+ version with the GObject system

Technically true but what I remember from Gtk+ 1.2 is that it had an object system that was a lot like gobject. It was more like gobject was factored out into its own library and given a new name and some different polish.


And Gtk3 was not usable for good-quality apps until around 3.16, and Gtk4 just became usable with the recent 4.10 (fixing the ugly scrolling bug).

Wisdom is knowing that a technology needs to ripen for at least five years before it is usable; that's what I did with Swift and Kotlin too. SwiftUI, Jetpack Compose, and WinUI3 are all way too young for me to waste time learning them at the moment.


Before 2.0 it was GtkObject. 2.0 just factored it out so you could use it for non-GUI code as well.


I'd like someone to move Solvespace to GTK4. Its platform-specific code is all in one file (about 1K LoC), and it uses a bare minimum of the UI toolkit.

https://github.com/solvespace/solvespace/issues/853

You know, in case someone wants to get their feet wet in either GTK4 or Solvespace ;-)


I haven't tried but I imagine the LLM stuff might actually work well for these super-menial coding tasks of porting between related APIs compared to other coding tasks. There are usually a lot of "how do I port this snippet of code from version x.y to (x+1).y" on Q&A sites, forums, mailing lists etc. which is probably in the training data.


I love the Solvespace UI. Yes, it's weird and different, but it works well and is internally consistent, except when a GTK dialog jumps out at you; that is jarring and breaks the flow.

I guess my point is: the obviously GTK bits of Solvespace are its worst parts. If I were a better UI programmer, one of my goals would be to get rid of all the modal windows in Solvespace and make them inline, operating like the rest of the native widgets.


Hopefully not. GTK4 deliberately removed support for subpixel font antialiasing, making fonts noticeably more blurry/ugly on non-HiDPI screens. There's a ton of FHD (1920x1080) or 1920x1200 screens out there that are perfectly fine for most usecases, but GTK4 turns them into landfill. :-(


If they could just do one easy thing with a proper designer: space out the tool icons and menu text better. Right now it's too tight and looks like a 10-year-old kid's first pass at a Tcl/Tk app.


Davies Media Design^1 has the best GIMP tutorials I've ever seen on YouTube. I'm so happy that people are making such high-quality tutorials for this tool; it makes it a lot easier to learn.

1. https://www.youtube.com/watch?v=_L_MMU22bAw



I'm a bit excited about the very last point. I always found the GtkTreeModel/GtkCellRenderer stuff hard to use. Hopefully the new system is less confusing.

My first experience with trying to develop against Gtk4 (about a year ago) ran into an issue where none of the example programs in the Gtk repo worked. Apparently the details on how to actually get the program to pick up the now mandatory CSS file were not right? All it did was to convince me to stick with Gtk3 for the meantime.


Are there any plans to support the web's dynamic CSS rendering model?


I kind of hope not since that would likely involve bundling almost a full web browser in every GTK app. Basically turning it into another electron clone. One of the nice things about GTK apps is that you can run them on more constrained environments and get a lot of work done.


I have that in checklist form on solvespace github here:

https://github.com/solvespace/solvespace/issues/853


Offtopic shoutout for solvespace, it is amazing! Is separating out the geometry code into a stand alone library on the roadmap at all? Or plans to incorporate the manifold library?

https://github.com/elalish/manifold


If you mean the NURBS code, there are no plans to separate it for now. It does live almost entirely in the src/surf folder and does not depend on the sketch entities or constraint solver.

My hope is to squash the remaining bugs in that code. I'm currently working on an issue with coincident and tangent surfaces.

The constraint solver is available as a library and is currently used in FreeCAD assembly 3 workbench, as well as the Blender CAD Sketcher addon.


Is migrating from 3 to 4 easier or harder than migrating from 2 to 3?


That is a very broad question. For something the size and complexity of gimp with lots of rendering it is still a big task. Broadly speaking I think 3->4 is often easier though.


Gtk devs could learn much from projects like Linux, Windows, OpenJDK, Clojure, Emacs, or FLTK wrt backward compatibility. When Linus announced Kernel 5.0, it was "just another release" [1]. But, unfortunately, I'm getting the impression that Gtk and GNOME are run by teen boys who want to impress girls rather than get the job done.

[1] https://itsfoss.com/linux-kernel-5/


> When Linus announced Kernel 5.0, it was "just another release"

In fairness, Linux major version numbers are just made up these days, while GTK is actually using the major version number to demarcate meaningful changes.


GP's point is that those meaningful changes shouldn't also include breaking backwards compatibility because you don't want to support the old stuff.


Half the comments here are "I wish the world hadn't moved beyond GTK2". I don't see the problem myself (GTK3 isn't libadwaita and some of the GTK2 controls like the file picker belong in a museum) but there's clearly a vocal group of people out there who will use GTK2/Gimp 2 for as long as they possibly can. It also saves a lot of problems with stable distributions (Debian, Ubuntu, etc.) because they can't just upgrade dependencies willy-nilly, and nobody wants a custom Debian patched GIMP to port back features from the next minor release. Taking the time to do a rewrite can help to prevent a half-broken intermediate state with wrapper objects and facades everywhere to deal with the version difference.


>But, unfortunately, I'm getting the impression that Gtk and GNOME are run by teen boys who want to impress girls rather than get the job done.

Why? Because they decided that new features/APIs/etc. are more important (possibly for a good reason) than spending a great amount of time on maintaining compatibility?

Sometimes you have to choose.

PS: not to mention that number of contributors for Gtk and Linux (not to mention Windows lol) is quite different.


Desktop widget libraries aren't wide open spaces for innovation. What new features/APIs can they add that's worth porting an app from gtkN to gtk(N+1)?


Touch, animations, styling, portability, rendering performance, cleaner abstractions…

GTK4 is very nice.


None of which required breaking backwards compatibility as demonstrated by "Linux, Windows, OpenJDK, Clojure, Emacs, or FLTK" (and I'll add Xorg too). You can add new things without breaking old things.


That just isn’t true, you clearly haven’t used GTK. FLTK has never done core redesigns like GTK.


> That just isn’t true, you clearly haven’t used GTK. FLTK has never done core redesigns like GTK.

One, I've wasted six years of my life writing GTK2 and GTK3 apps. Two, what does "core redesigns" have to do with breaking backwards compatibility? Did the redesign necessitate removing GtkStatusIcon, for example^1? What about ANY of the breaking changes made?

1: The reasoning given to justify breaking compatibility here paints a clearer picture of how GTK devs treat breaking backwards compatibility. Remember, this is coming from the developers of what is supposedly a cross-platform toolkit.


A core redesign of GTK2->3->4 was moving from a model built around X11/xlib to one built around Wayland. GtkStatusIcon only functioned correctly on one backend, X11. It was very poorly supported on win32 and macOS.

Other clear examples that had to happen was all rendering used to be done on the CPU with Cairo as part of the API. Cairo is very slow on high resolutions, or when doing animations, or just in general. The modern GTK is built around GPUs using OpenGL. There is a backwards compat story, in that you can use Cairo yourself still, but the API is all around modeled with modern hardware in mind. Something FLTK clearly is not and as such it cannot accomplish much of the things GTK4 can in terms of hidpi, animations, shaders, styling, zero-copy rendering, multimedia playback, etc.

Yes, given infinite resources, one could both redesign a library and not break API, but it is a small group with limited resources.

I genuinely think they have done an incredible job, and there are many new possibilities with the toolkit now that there were not previously.

If the argument truly is just "never break API", I fundamentally disagree and think most platforms that follow it are bad platforms full of legacy cruft.


Clearly you've never maintained a large codebase with dependencies.


> FLTK has never done core redesigns like GTK.

That is the point.


fractional scaling? hidpi ? hdr etc ?


Why do any of those break existing code? Didn't win32 add those without breaking existing code?


GTK is maintained by 2-5 people. They cannot maintain every API forever.

win32 is also a terrible example of good API, just stable.


> GTK is maintained by 2-5 people. They cannot maintain every API forever.

But they like to rewrite it every couple of years. /s


Not really. Win32 DPI scaling is a mess.


Read about the CADT model from JWZ.


So, what does a GTK+3 Gimp look like? Is it prettier, or more integrated with Gnome?


I wish one day to share your optimism.


Right. I read "GIMP has been ported to GTK+3" and all I can think is "parts of it will look broken and using some dialogs will make it crash"


A GTK+3 Gimp would finally have support for hidpi and wayland (or at least the gui toolkit will no longer block hidpi and wayland migration). GUI automation/scripting/plugin development will probably be easier thanks to gobject introspection. Various widgets will look nicer.


Theming should be more consistent.


Consistent with other GTK+3 apps, but less consistent if you're not using GTK apps a lot.

But, GTK+3 support should bring HiDPI support.


I'm on KDE, and GTK3 apps look better than GTK2 ones, because Breeze Dark is rather tricky to get working correctly with GTK2 on modern versions of KDE.

GTK3 looks almost exactly native (down to window tinting), except that the current version of breeze-gtk seems to struggle a bit with client-side decoration (the minimize/maximize/close buttons that are integrated in the toolbar), making them for example too big. (It's a known regression and should be fixed in the next version, though.)

GTK4 and the whole lack of libadwaita theming thing does look out of place though. But I use many GTK3 apps on KDE and they look near perfect. The only thing I can think of is some color-changes when gaining/losing focus are sometimes missing in Electron apps whose menu bar is themed via GTK3, iirc.


Interesting to note that GTK3 was launched in 2011


And it was a huge step back from GTK2 at the time, taking ages to mature. Yes, the GNOME project is stagnating rather badly. Compare that with Qt, which is much better by comparison.


Ah too bad. The file chooser after GTK2 became absolutely unusable.


Yep. The ability to paste file paths (without invoking some other key sequence first) into the file chooser widget (gtkfilechooserwidget.c) remains broken in Gtk3 and Gtk4. Broken in the sense that you'll get an error message. They say Gtk3.x isn't frozen, but it is when it comes to gtkfilechooserwidget.c: they won't work on it and they won't accept fixes from outside sources.


AFAIK GIMP does not use GTK's builtin file chooser, specifically because it lacked image thumbnails up until recently.


Port to Qt when?-)

Semi-jokes aside, Gimp needs a better development model and more backing. Current pace of change with these kind of migrations is abysmally slow.


If Krita got more "Photoshop" features this problem would be solved.


I agree, there is a big intersection and Krita already has a better designed interface.


It has a better name as well, and it uses a better GUI toolkit.


Getting more backing like Blender has would be huge.


Are non destructive layers a thing yet?


https://www.gimp.org/docs/userfaq.html#when-will-gimp-suppor...

"Currently the plan is to introduce non-destructive editing in GIMP 3.2. This is a huge change that will require rethinking the workflow and parts of the user interface."

and https://developer.gimp.org/core/roadmap/#non-destructive-lay... for more details


That's excellent news. Does this mean GIMP would work well with high DPI monitor now? I.e. the text on menus and window titles won't be so small?


Ironic that GIMP is still on an old version of the "GIMP Tool Kit"


That's what happens when you break backwards compatibility.


I hope they can make the function names shown in GUI be searchable in python command list.


People saying that it's time to port to GTK4 are missing it. They've been on GTK2 for 20 years. They've got plenty of time on GTK3.


And GTK4 is worse for complex desktop applications, more poorly documented, and is going to be deprecated even faster than GTK3 according to the GNOME devs. :D

Rather switch to Qt tbh


> Rather switch to Qt tbh

This a thousand times, although they would probably have to rename it Quimp:)


So, what you're saying is there are now two benefits, and only one "impossible to say over the phone" drawback but also hella easier to search for, so I guess three benefits


might as well go for Kwimp then and become part of KDE


I'm not sure the dependency on a desktop manager would be a good thing.


KDE isn't just the desktop environment these days! (In fact, the DE part of KDE is now called Plasma Desktop.)

The KDE project contains many applications, including ones you may have used, like Krita or Kdenlive. I think KCacheGrind also fills a quite unique niche on Linux, namely GUI profiling tools. But there are also many that stay more within the KDE ecosystem, like Kate (which is a pretty decent text editor) or Dolphin (the KDE file manager, and probably the most featureful mainstream Linux file manager). A lot of these applications are also decently cross platform and work fine on for example Windows, and even Haiku.

Also, KDE Gear (as the application suite is called) integrates many things quite nicely. For example, their LaTeX editor Kile is basically just a glued-together version of Kate and Okular (the KDE document viewer), with some extra stuff to work with all the LaTeX stuff. It means you get all the features of both, like Kate's quite good Vi mode, or Okular's capability to trim a document's margins.

Of course, KDE would very likely refuse to take "Kwimp" under its umbrella because they already have Krita, but even if you're not on KDE there may be some worthwhile applications that do fall under their umbrella. (Just like GNOME Boxes is an excellent choice for beginner-friendly virtualization, even if you're not running GNOME.)


sounds like it would make sense to sit it out and wait for gtk5

https://www.phoronix.com/news/GTK5-Likely-After-GTK-4.12


Is it as good as Krita yet?


It's give and take. I think the crop tool is better in gimp than in krita, for example. Because in gimp, you can easily constrain to x by y pixels (absolute) or x by y ratio (relative) and it will stick to it no matter what. In krita you have something like "1.777777778:1" which looks bad imo, and you can somehow change it accidentally when dragging frames in viewport.

But, just a single example.
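Out of curiosity, the arithmetic behind an absolute x:y ratio constraint is simple enough to sketch. This is just an illustrative Python sketch, not GIMP's or Krita's actual code; the function name and signature are made up:

```python
def constrain_crop(w, h, ratio_w, ratio_h):
    """Shrink a w-by-h crop rectangle so it exactly matches ratio_w:ratio_h.

    The aspect ratios are compared via cross-multiplication (w/h vs
    ratio_w/ratio_h) to avoid float-division rounding in the comparison.
    """
    if w * ratio_h > h * ratio_w:
        # Too wide for the target ratio: shrink the width.
        w = h * ratio_w / ratio_h
    elif w * ratio_h < h * ratio_w:
        # Too tall for the target ratio: shrink the height.
        h = w * ratio_h / ratio_w
    return w, h
```

A tool that "sticks to the ratio no matter what" would just re-run something like this after every drag of the crop handles.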


Personally I've always preferred GIMP over Krita, but that's probably just because I rarely use either, and GIMP is "the devil I know". And I'm comfortable enough with it for the few tiny things I need to do (I'm decidedly not an artist!) that "the grass is greener" just doesn't apply.


gimp is kind of a mess, something about being started as a university project many many years ago and being only halfheartedly maintained since. Nothing wrong with that, I love projects like that, but some people take offense. Anyway....

My understanding is that krita is a better painting tool; if you are drawing, it will probably be the better experience. gimp is better at image manipulation (it is in the name): if you are modifying an already existing image, it has better tools.

Either way, it is not really the tool but what you do with it. There is this guy who does these amazing cut-away drawings for books; when asked what drawing program he uses, the surprising answer was MS Paint.

https://www.youtube.com/watch?v=PdKkR_lbLN0


Read this as GPT-3 and was trying to figure out what it means to "port GIMP to" an LLM


Goodbye gtk2 :(


I still compile for it. It gives me better results.


GIMP: I don't understand the choice of name.


Every time I see something about it I'm usually compelled to make note of what a poor idea it has been to keep this name. There's a whole lot of proverbial water under that bridge, but I definitely think the choice to keep it has always been childish, ridiculous, and perhaps literally the greatest barrier that has kept it from being a much more serious competitor to Adobe stuff.


> I definitely think the choice to keep it has always been childish, ridiculous, and perhaps literally the greatest barrier that has kept it from being a much more serious competitor to Adobe stuff.

I seriously doubt that. There's a project called Glimpse which is (afaik) just a rebranding of GIMP to be more acceptable. It was popular for a brief 15 minutes, then was forgotten.


That was, I think too little too late. I believe they were more of a fork than a pure rebranding?


I based that off a HN comment I read in the past, but looking into it now, that doesn't seem too far off base. The changes mentioned in the release notes[1] are all about rebranding - replacing on-screen text, logos, docs, etc. There are a couple non-branding-related changes like changes to keyboards shortcuts, but it seems fair to call it more of a rebrand than a fork. It does help me appreciate how much work goes into just this "surface-level" stuff of cross-platform public use Open Source software.

[1] https://github.com/glimpse-editor/Glimpse/releases


Fair, I'd still maintain too little too late. "Glimpse" didn't happen until much later.

A point I made above; I think people are really WAY too inside their own silos. I work at a university, I've also done non-profit work with children who are LEARNING this stuff. And now, what does it look like, me like "Hey kids, try GIMP?"


I'm only seeing this now (so not sure if you'll read this), but:

The thing is, vast swathes of the world doesn't know what that word means. I didn't know it until I learned about this controversy. If the name had been the "greatest barrier" to its acceptance, GIMP would have been taken up in a LOT of the poorer countries, both because it's free and almost no one knows the word well enough to see it as offensive. Maybe the name played a role in some small way in US academic institutions or something similar to that. But without more fundamental issues (usability issues, stability problems in the past, UI latency, a thousand other little papercuts), the name itself would not have been a great barrier for GIMP.


I worked in a webdev shop where we relied heavily on graphical tools like ps and gimp. Never once have I heard anyone take issue with the name, at least not when I was there, so the name hasn't kept gimp from being a competitor to Adobe stuff in that company at least.

I have to ask: are there really that many companies out there that base their tooling decisions on tool naming? I imagine they won't use git or bash either lol.


See, you're talking companies and tooling.

The rest of the creative world, people learning, people who might use these tools later; students and children. I work in a university. Etc. All of those people know "photoshop" as a verb.

And now I'm here saying -- "Try this thing, it's called...GIMP?"

Sometimes people here are WILDLY out of touch.


I think some people here are as wildly out of touch as the people they disagree with. Most people don't care about this and will use the tool that does the job, regardless of what it is named.


Nope. It's not that individuals are offended or whatnot. It's that a ton of people will literally never be able to use this tool because they've never heard of it.

They will not have heard of it because many of the kinds of people who might be able to introduce them to it will not take it seriously because it is a fundamentally unserious name.


Adobe reminds me of a damn old building brick in Spanish...


Exactly. Adobe also sounds weird to me, whereas Gimp doesn't mean anything at all.


Just try renaming it, like this:

sudo mv /usr/bin/gimp /usr/bin/krita

Do make sure to disconnect from internet first. I didn't and the new name somehow leaked out and backformed into a fully-fledged, actively maintained FOSS project. (It even works quite well, although it was a pain changing out every single gimp path for a krita path, by hand, plus of course pasting in the krita binary/ascii data.)


They could just call it GNU Imp.


Or Gim Paint, and then we can argue over how to pronounce Gim. is it Jim or gimm?


ok. I'll bite.

Sh looks like shut up. X looks like a porno venture. Gnome is not politically correct. Red Hat is communism. Daemons are satan's children.

Where do we stop ? /s


Again, I'm not talking about "being personally offended" or something like that, more the second-order effects. Most everything you've named isn't something that could meaningfully be a big winner for free/open source, and also, none of those names are "as offensive" (again, I don't make the rules, and I don't personally care much either way; here, I care about getting good software into the hands of more people).

GIMP is a unique missed opportunity because zillions of people, way more than those who are "in tech," understand the concept of "Photoshop." I honestly believe the entire landscape of "photo-editing" could have been much better and open if this name was something less, well, publicly stupid.

There is no way to me that the value of "keeping the name" is remotely close to the lost value of "way more people being able to use great and open software" here.



Just in time!



