The benefits I was referring to are being able to write fast, correct code easily. Other languages that allow one to do that include D, Go, Clojure, and Scala. If you want a scripting language, you're stuck with LuaJIT for now, but at least to me that's not an important distinction for anything I'd want to do with LuaJIT.
LuaJIT comes with warnings like "Callbacks are slow" and "The following operations are currently not compiled and may exhibit suboptimal performance, especially when used in inner loops:" which push it further down the list for me.
Several months ago I still used a multi-site-licensed Windows 7 Ultimate and relied on its "Services for Unix" NFS support. My servers are Unix based. I went outside for a while, and when I came back my Windows 7 Ultimate had been replaced by Windows 10 Pro. I really do not know how this happened, because I always disable automatic installs.
My system is dual-boot Windows / Linux using Grub, and somehow that did not break (showing that MS knows how not to break dual-boot). So I used Windows 10 Pro for a while, only to discover that they removed the "Services for Unix" NFS support, which is now only available in Windows 10 Enterprise.
I've been using Linux for 20 years and am finally looking into not using Windows at all anymore. People may call it childish (speaking from 20 years of advocating for Linux / BSD etc.), but I just do not trust them anymore. I also did not forget how they compared open source with cancer and communism, something I think a professional company should never do.
Since I started hearing about WebAssembly I cannot stop thinking about the possibilities. For example: NPM compiling C-dependencies together with ECMAScript/JavaScript into a single WebAssembly package that can then run inside the browser.
For people thinking this will close down the web even more because the source will not be human-readable: remember that JavaScript already gets minified, and other languages already get compiled into it (using Emscripten, for example). The benefits I see compared to what we have now:
- Better sharing of code between different applications (desktop, mobile apps, server, web etc.)
- People can finally choose their own favorite language for web-development.
- Closer to the way it will be executed, which should improve performance.
- Code compiled from different languages can work / link together.
Then for the UI part there are those common languages / vocabularies we can use to communicate with us humans: HTML, SVG, CSS etc.
I only hope this will improve the "running same code on client or server to render user-interface" situation as well.
>For example: NPM compiling C-dependencies together with ECMAScript/JavaScript into a single WebAssembly package that can then run inside the browser.
The reason you'd write stuff in C is (aside from performance) to access native APIs. Browsers and WASM don't let you do that.
WASM in Node could let you do that - meaning you would get cross-platform assembly packages instead of ELF or whatever binaries - but you would still need APIs on the platform. For C that is often handled with preprocessor macros that choose which platform you are compiling for, so you can't just "compile to WASM and then magic": even with WASM you'd have to compile to "WASM + POSIX" and "WASM + Win32" if you want to run on POSIX/Win32, and so on for all platform/API permutations.
TL;DR WASM is big, but it won't quite be an abstract virtual machine like, say, the JVM or CLR.
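To make the permutation point concrete, here is an analogy in Python rather than WASM itself (the helper name is mine, purely for illustration): Python bytecode is also cross-platform, yet any code that touches native conventions still has to branch per platform at some point, which is what C does at compile time with `#ifdef _WIN32` / `#ifdef __unix__`.

```python
import sys

def home_dir_env_var():
    # Hypothetical helper: which environment variable holds the user's
    # home directory. The bytecode is portable, but the platform API
    # difference still has to be handled somewhere -- the runtime
    # analogue of C's preprocessor macros.
    if sys.platform == "win32":
        return "USERPROFILE"   # Win32 convention
    return "HOME"              # POSIX convention

print(home_dir_env_var())
```

A portable binary format removes the ELF-vs-PE problem, but not the API problem; each platform/API pairing still needs its own handling.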
I've had many issues with NetworkManager, Avahi and Pulseaudio. I already decided to avoid them a long time ago. My distribution of choice, after using Slackware for several years, has always been Debian. There have been moments I tried to switch to FreeBSD and Gentoo. Since Debian decided to use Systemd I started to use Gentoo again, with the hardened profile, and avoid most of the software I do not agree with. I even created Ebuild packages for the latest MATE desktop and removed all dependencies on anything related to GNOME 3, except GTK 3, which is optional as GTK 2 can be used as well.
Somewhere along the line a kind of "takeover" happened. I am not going to say that the people doing this are "evil" or something, as they at least contributed something that can be used for free. Difficult to argue with that. But I cannot stop myself from having the feeling that there is some hidden agenda of interests behind it as well. For me this "takeover" looks similar to what happened to the W3C, where certain people (companies?) started to ignore them and created HTML5 instead. I really like the well-formedness of XHTML: XML itself is very strict and has many built-in features which are still missing from what replaced it in popularity, like JSON. The people of the W3C knew what they were doing and strived for every change to be stable and future proof, which can be a slow process. Too slow for some people, it seems.
Of course I tried to use Systemd and was really happy with it till the moment it started crashing and taking some clusters down with it. I migrated 2 servers to the new Debian and everything went well. Even the LXC containers started just fine, until I started to upgrade the containers to the new Debian as well. At that time I did not even know the author of Systemd was the same person who started Pulseaudio, but somehow these issues gave me the same feeling of uncertainty that reminded me of it.
For the servers I am now using Gentoo (as almost every other distro has already switched to systemd) in combination with OpenRC, without any issues. Compilation of source packages happens on a separate server where I can test them before deploying the binary packages I just built to the test and production servers.
Actually, they are all valid ways when using the ReST philosophy. The idea is that the client sends the latest state along with every request (if needed). This way the state of the request can be reconstructed on the server without depending on centralized storage or a single server. The server may update the state and return it for the client to store again (some parts encrypted if needed). There is no need for the server to keep the state (this way it can remain stateless).
HTTP basic auth does this automatically.
Creating a resource for a session is optional, but may be useful if the state to be sent back and forth is too big. This will, however, require centralized storage.
The state may be stored inside a cookie where parts of it can be encrypted. It is also possible to specify that parts can only be updated by the server.
You may also use Local Storage and send the data across using an XMLHttpRequest, or even use an HTML form (which may have limited support for the HTTP verbs).
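A minimal sketch of the client-held-state idea described above, using Python's standard library (the function names and the cookie layout are my own, not any particular framework): the state travels in a cookie value, and an HMAC signature lets any server copy detect tampering without consulting shared storage.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # hypothetical key, shared by all server copies

def encode_state(state: dict) -> str:
    """Serialize client-held state and sign it so only the server can alter it."""
    payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def decode_state(cookie: str) -> dict:
    """Verify the signature and recover the state; works on any server copy."""
    payload, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("state was tampered with")
    return json.loads(base64.urlsafe_b64decode(payload))

cookie = encode_state({"user": "alice", "cart": ["book"]})
print(decode_state(cookie))  # round-trips without a server-side session store
```

Signing only proves integrity; parts the client should not read would additionally need encryption, as the comment above notes.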
But it seems to me that ReST does dictate, specifically, that there must be no need for shared state. The system must work without the client having knowledge of the server and vice versa. But a cookie, as used to hold a session token which references information on the server, is very much shared state. So it should be forbidden.
The state is passed back and forth. The server does not keep the state (that is why they call it stateless), the client may keep it. If something does need to be stored on the server a resource should be created. The state that is passed back and forth may then contain a reference to the resource, a resource identifier (in case of the web usually a URI is being used).
The server is stateless because it receives the latest state from the client (location, cookie and other headers) and sends a new, updated state (location, set-cookie and other headers) back to the client, along with an optional representation. There is no need for the server to keep the state in memory anymore. Because the server does not keep the state, it does not matter which copy of the server (in case of load-balancing or fail-over etc.) handles the next request, as everything needed can be derived from the state sent along with the request by the client.
Keep in mind that it is best practice to be able to reproduce a representation from a resource identifier alone; like "pure functions" in functional programming, it should be deterministic. The content of a representation may change, but not the semantics of it.
The part that is stateful is the state that is being passed back and forth between the client and the server.
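A toy sketch of that statelessness (a hypothetical handler, not a real framework): the handler is a pure function of the incoming state, so successive requests can be served by different server copies as long as the client carries the state back.

```python
def handle(request_state: dict, action: str) -> dict:
    """Pure function of the incoming state -- the server holds no memory
    between requests, so the updated state must be returned to the client."""
    state = dict(request_state)  # never mutate the caller's copy
    state["count"] = state.get("count", 0) + 1
    state["last_action"] = action
    return state

# The client carries the state; a load balancer may pick either copy.
state = {}
state = handle(state, "view")   # handled by server copy A
state = handle(state, "buy")    # handled by server copy B -- still works
print(state)  # {'count': 2, 'last_action': 'buy'}
```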
Have been using the ReST philosophy (or pattern?) for almost 15 years without any issues. ReST itself does not define the use of a specific technology and is not a standard by itself; it is pretty abstract. Using the URI and HTTP standards is just one way to apply it. Other technologies can be used as well, like SMTP / MIME, or something you develop yourself, as long as the ReST constraints can be applied. Adding the ReST constraints is what brings the advantages. It is really flexible and allows for easy composition of "layers" through "connectors". A caching proxy is an example of a "connector". If ReST could be compared to something in programming, it is very similar to "monads" as used in functional programming.
Recursion can be done by the use of nested "resource identifiers" (SSI or ESI for example) inside the returned "representation". A "connector" may then decide to do another request to replace the "identifier". If the "connector" uses caching it may not even need to do another sub-request. I use this to do something like what Facebook's Relay does. As the full state is always sent along with every request, it is very easy to scale, because the servers, wherever they are, always have enough context to work with it. A server may even ignore what it does not understand, leaving it for the next layer to handle; this is called "partial understanding" and is supported by XML, for example. Unfortunately, when using JSON most of these advantages are lost. That is why I prefer a less verbose "XML Infoset", similar to JSON, while still keeping all the advantages.
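A rough sketch of that nested-identifier recursion with a caching connector (the ESI-like include syntax, the URIs and the in-memory resource table are all made up for illustration): each representation may embed further resource identifiers, and the connector replaces them recursively, skipping the sub-request when the cache already holds the representation.

```python
import re

# Hypothetical representations keyed by resource identifier.
RESOURCES = {
    "/page": "Header <esi:include src='/user'/> Footer",
    "/user": "Hello <esi:include src='/name'/>",
    "/name": "Alice",
}

cache: dict = {}  # the caching "connector": avoids repeated sub-requests

def fetch(uri: str) -> str:
    """Stand-in for a sub-request; serves from cache when possible."""
    if uri not in cache:
        cache[uri] = RESOURCES[uri]
    return cache[uri]

def resolve(uri: str) -> str:
    """Recursively replace nested identifiers inside the representation."""
    body = fetch(uri)
    return re.sub(r"<esi:include src='([^']+)'/>",
                  lambda m: resolve(m.group(1)), body)

print(resolve("/page"))  # Header Hello Alice Footer
```

A second call to `resolve("/page")` would be served entirely from the cache, which is the point of pushing this into a connector rather than the application.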