> If it did, corporations would just start crapping up Linux the way they've crapped up Windows.
They do already, my work laptop runs the corporate spin of Ubuntu, complete with Crowdstrike, which goes absolutely crazy and chews all the CPU whenever I do a Yocto build.
I used to be able to reliably BSOD a work computer by doing a largish git pull inside WSL2, with the culprit seemingly being the McAfee realtime scanner. VirtualBox VMs were fine though. Not confidence-inspiring!
Several of the lean GUI text editors are built on Scintilla (https://scintilla.org/), which provides a cross-platform editing component that can be integrated into GTK, Windows or Mac GUI apps. Maybe that has too many bells and whistles for you, since it's about both editing and presentation.
I guess I might be misunderstanding what Scintilla is? Everything I've seen with it has it coupled with native controls, like a winform control or a Qt control. Are you saying that the library can be used, on its own, without a graphical component? If so, that might fit the bill!
Yes, Scintilla is a text editor engine library. It's not tied to any particular UI or technology. Out of the box it's not a text editor yet; you provide the frontend. You get all the "table stakes" right away if you build your editor on this library.
Same engine, different frontends. The engine has a series of hooks that you implement in whichever way you please for your particular interface. It's definitely the presumptive choice here.
Ah, I see! Very cool! Yeah, this is the kind of thing I was looking for, so this should give me what I need to test some proof of concepts. Thanks for the links! I do wish there were something a little more ergonomic, but I'm way too far into the begging to be choosing, here, so I'm quite happy to take what I can get.
In any case, I really do appreciate the dual links. It's so much harder to suss out the boundaries of a library with only one implementation. This was really helpful.
> Could turn it around as "everything you can do in C++ you can do in C with a lot less language complexity".
No, you can't: C lacks a lot of what C++ brings to the table. C++ has abstraction capabilities, with generic programming and, dare I say it, OO, that C has no substitute for. C++ has compile-time computation facilities that C has no substitute for.
My point is trivially true as far as computability goes, but that is not what I meant.
All those abstraction capabilities can be a big detriment to any project, because they always come with a cost, and runtime is far from the only concern.
Specifically in an embedded project, toolchain complications and memory use (both RAM and code size) are potentially much bigger concerns than for desktop applications, and your selection of programmers is more limited as well. It might be much more feasible to lock your developers into an acceptable C coding standard than to make e.g. "template metaprogramming" a prerequisite for your codebase and then have to teach your applicants electrical engineering.
Both object-oriented programming and compile-time computation are doable in a C codebase; they just need more boilerplate and maybe a code-generator step in your build, respectively. But that might well be an advantage, discouraging frivolous use of complexity that you don't actually need and that introduces hidden costs (understanding, ease of change, compile time) elsewhere.
Is there an example of the generic programming that you've found useful?
The extent of my experience has been being able to replace functions like convert_uint32_to_float and convert_uint32_to_int32 by using templates to something like convert_uint32<float>(input_value), and I didn't feel like I really got much value out of that.
My team has also been using CRTP for static polymorphism, but I also feel like I haven't gotten much more value out of having e.g. a Thread base class and a derived class from that that implements a task function versus just writing a task function and passing it xTaskCreate (FreeRTOS) or tx_thread_create (ThreadX).
Typed compile-time computation is nice, though, good point. constexpr and such versus untyped #define macros.
The generic algorithms that come with the C++ standard library are useful. Once you get used to using them you start to see that ad-hoc implementations of many of them get written repeatedly in most code. Since most of the algorithms work on plain arrays as well as more complex containers they are still useful in embedded environments.
I had been programming for a long time before I learned OOP. After some years playing with it, I came to the conclusion there's not much I can't do about as well using simple functions and structs. The key is a well thought out and organized codebase. Always felt polymorphism in particular seemed more trouble than it was worth.
I still use modern languages on a regular basis, but when I drop back to more basic languages there are only a few ergonomics that I truly miss (eg. generics).
std::array can sometimes give you the best of both worlds for stack allocation: you statically constrain the allocation size (no alloca) while guaranteeing that your buffers are large enough for your data. You can also do a lot of powerful things with constexpr that are just not possible with plain C arrays. It is very convenient for maintaining static mappings from enums to other values.
Traditionally it has been done because the low three bits of an object pointer are typically zero due to alignment, so you can just put a tag there and mask it off (or load it with lea and an offset, especially useful if you have a data structure where you'd use an offset anyway, like pairs or vectors). On 64-bit architectures there are two unused bytes at the top (one byte with five-level paging), but they must be masked, since the hardware requires them to be 0x00 or 0xff for valid pointers. On 32-bit archs the high bits were used and unsuitable for tags. All in all, I think the low bits are still the most useful for tags, even if 32-bit is not an important consideration anymore.
One good use of "It turns out..." is to report negative results. Something like "You can overclock a Mac Mini to 8GHz using liquid nitrogen. It turns out this is not a stable configuration <picture of burning Mac Mini hooked up to a physics experiment>"
> So it looked like the Mac but was infinitely worse.
"Infinitely worse"? Some people really need to cool off the hyperbole.
Having each window be a self-contained unit is a far better metaphor than having each window transform a global element when it is selected, and it scales better to bigger screens. An edge case like that may well be unfortunate, but it could be the price you pay for the overall better solution.
That was the point of Tog's conclusion: edges of the screen have infinite target size in one cardinal direction, and corners have infinite target size in two. Any click target that isn't infinite has, in comparison, an infinitely smaller area, which I suppose you could call infinitely worse if clickable area is your primary metric.
This wasn't just the menu bar either. The first Windows 95-style interfaces didn't extend the start menu click box to the lower left corner of the screen. Not only did you have to get the mouse down there, you had to back off a few pixels in either direction to open the menu. Same with the applications in the task bar.
The concept was similar to NEXTSTEP's dock (which was even licensed by Microsoft for Windows 95), but missed the infinite-area aspect that putting it on the screen edge allowed.
The infinitely worse part was when you maximized the window so the menu bar was at the top, but Windows still had the border there, which was unclickable.
So now you broke the infinite click target even though it looked like it should have one.
Go is a total non-starter, it's not interactive at all. The competitors are things like Matlab, Mathematica, R or Python (with the right math libs). If you're weird you could use something like Haskell, APL or Lisp in this role, but you'd pay a hefty price in available libs.
In what situations would a non-interactive language be a non-starter? I have never felt that I missed having a REPL when coding C++ or Rust. The only reason it is even useful in Python is that the type info is laughably bad, so you need to poke at things interactively to figure out what shape of data to expect.
(I'll take strong static typing every day, it is so much better.)
REPLs/notebooks are really nice in situations where you don't know what you want ahead of time and are throwing away 90% of the code you write, such as trying to find better numerical algorithms to accomplish some goal, exploring poorly documented APIs (most of them), making a lot of plots for investigating, or working a bunch with matrices and Dataframes (where current static typing systems don't really offer much.)
Yeah, this is an entirely different domain than what I work in (hard real-time embedded and hard real-time Linux).
Poorly documented APIs exist everywhere, though, but they are not something you can rely on anyway: if behaviour isn't documented, it can change without counting as a breaking change. It would be irresponsible to (intentionally) depend on undocumented behaviour; rather, you should seek to get it documented. Otherwise there is a big risk your code will break some years down the line when someone tries to upgrade a dependency. Most software I deal with is long-lived: there is code I wrote 15 years ago that is still in production, in a codebase that is still evolving, and I see no reason that won't still be true in another 15 years.
At least you should write tests to cover any dependencies on undocumented behaviour. (You do have comprehensive tests right?)
People working with math or stats are often in an explorative mode, testing different things, applying different transforms to the data, plotting variables or doing one-off calculations. You need some form of interactive session for that to be feasible, whether it is a REPL or a notebook. There actually is a C++ REPL just for this use case from CERN, because they have a ton of HEP C++ code that they want to use interactively.
I don't see what the role of AI is in this. You don't need an AI to aggregate data from a bunch of sources. You'd be better off having the AI write a scraper for you than burning GPU time on an agent doing the same thing every time.