Neat series. There was another recent post on using a C# version of Redux as well, at https://spin.atomicobject.com/2017/03/13/adapting-redux-c-sh... . It's nice to see Redux's concepts spreading outside the JavaScript world. I've had similar thoughts regarding some of the Python services I work with.
If anyone is interested in learning Redux (or React), I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Specifically intended to be a great starting point for anyone trying to learn the ecosystem, as well as a solid source of good info on more advanced topics.
You're welcome! And yes, that's absolutely my intent :) There's lots of "getting started" tutorials for React and Redux, but the official docs for both can only cover so much information. So, I've deliberately tried to collect articles on more advanced topics - stuff like "higher order components" for React, "normalizing state" for Redux, "hot module replacement" for Webpack, etc.
We've been working with Orleans for a few months to build out the next iteration of our infrastructure. I have nothing but good things to say about it. One of my favourite aspects of developing with Orleans is its single-threaded nature. Because a grain can by default only process one call at a time on a single thread, it's easy to make guarantees about data consistency inside that grain regardless of how many processes are reading/writing to it at once.
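The turn-based concurrency guarantee described above can be sketched outside of Orleans too. This is not the Orleans scheduler itself, just a minimal TypeScript illustration of the idea: every call on the "grain" is chained onto a queue, so state mutations never interleave even when callers fire requests concurrently.

```typescript
// Sketch of turn-based concurrency (what an Orleans grain gives you for free):
// each call is serialized through a promise queue, so only one "turn" touches
// the state at a time, regardless of how many callers there are.
class TurnBasedGrain {
  private state = { counter: 0 };
  private queue: Promise<unknown> = Promise.resolve();

  increment(): Promise<number> {
    return this.enqueue(async () => {
      // No other turn can run while this one is in progress.
      const next = this.state.counter + 1;
      this.state = { counter: next };
      return next;
    });
  }

  private enqueue<T>(turn: () => Promise<T>): Promise<T> {
    const result = this.queue.then(turn);
    // Keep the chain alive even if a turn rejects.
    this.queue = result.catch(() => undefined);
    return result;
  }
}
```

Because every caller goes through `enqueue`, concurrent calls resolve with strictly sequential results, which is exactly the consistency property the comment describes.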
The idea of a stateful grain is also really powerful and something we're harnessing extensively: if everything is a grain, all of your data is already cached in the activated grain, so there's no need for a separate caching system. Additionally, any updates to the data are immediately available to the "cache" before you send it to the database. So transient data can be cached even if you're not sure you eventually want to persist it. This allows you to do partial application of data to grains, and then eventually commit the data to storage (in the case where writing is expensive). This is of course totally possible with more traditional architecture, but it's impossibly simple in Orleans, and it's hard to go back once you get to use it.
I also want to give a big thanks to the folks who are active in the Orleans Gitter channel. Honestly probably the most helpful open source community I've ever encountered. They're extremely thorough and always willing to help.
How close are we to getting it on Dotnet Core, though? It's effectively impossible to ship on Windows in most shops, as it'd take a total reworking of tooling, the introduction of a lot of expertise, and the loss of Kubernetes or Mesos for management.
.NET Core support was still a little ways out last time I checked, but it seems to be running at this point. IIRC, the Orleans team is working through kinks and compatibility issues to get to 100% feature parity.
This was an interesting read! The stack is out of the ordinary and intriguing. I'd love to see performance numbers (requests per second and whatnot) compared to a traditional architecture to back up the conclusion:
> This architecture is highly efficient, almost infinitely scalable, yet very light on infrastructure.
I'm curious how scalable such fine-grained grains (pun intended) (i.e. one grain per user?) are. Wouldn't communication across all those grains lead to poor performance?
Another big question I'd be interested in is testing and debugging: Redux's tree/reducers model lets you easily test things. How's the testing/debugging experience in Orleans? The author mentions "time travel"; I'd be curious to hear about his experience so far.
> An Actor system makes sure that every data element / actor object lives in only one place (one machine in the cluster), so there is no duplication of data, which simplifies things dramatically.
What happens when that machine goes down? My understanding of Orleans is that it's fault tolerant and handles this, but what is the impact on performance? And are there any guarantees the data will be stored without corruption in the external storage (Azure Table storage, in the author's case)?
This was an experiment, and I didn't do a performance test. The Orleans people will tell you that the current production version will process something on the order of a couple hundred thousand messages per server per second. If your actors are modelled well, this scales to hundreds of servers, so it would be possible to have on the order of tens of millions of messages per second between actors, including persistence. I expect the upcoming Dotnet Core version to do even better.
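Back-of-the-envelope, those figures multiply out as follows (illustrative arithmetic only, not a benchmark):

```typescript
// Rough capacity math from the figures above (not a measured result):
const messagesPerServerPerSecond = 200_000; // "a couple hundred thousand"
const servers = 100;                        // low end of "hundreds of servers"
const totalMessagesPerSecond = messagesPerServerPerSecond * servers;
// 200k/server * 100 servers = 20 million messages/second, i.e. "tens of millions"
```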
> How's the testing/debugging experience
I normally test with xunit, standard dotnet stuff that works very well. Orleans provides a TestingHost that lets you spin up silos for testing grain code in a 'real' environment. This adds some startup time.
The time travel idea needs some devtools UI, which I didn't build. I'm sure this would be possible, though, with the limitation that the "edit and continue" feature of dotnet is not as capable as in JavaScript: you can change code and data while paused in the debugger, but often Visual Studio will tell you that your change requires a full application restart.
> What happens when the machine goes down?
Orleans would respawn the grain on another server (using the last-known state) and retry the operation up to two times by default. If you follow a safe implementation pattern (call WriteStateAsync after a state change), your state will be correct. In my blog code, I await successful completion of the message on the client. If something went wrong in Orleans and the state wasn't persisted, that promise would be rejected, and the error could be shown to the user.
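The client side of that pattern can be sketched in TypeScript. This is a hypothetical illustration, not the blog's actual code: `sendActionToServer` is a made-up transport call that resolves only once the server has applied and persisted the action, and rejects otherwise.

```typescript
// Sketch of the client-side pattern: await the server's acknowledgement that
// the action was applied AND persisted; a rejection means state was not saved
// and should be surfaced to the user rather than silently swallowed.
type Action = { type: string; payload?: unknown };

async function dispatchRemote(
  action: Action,
  sendActionToServer: (a: Action) => Promise<void> // hypothetical transport
): Promise<{ ok: true } | { ok: false; error: string }> {
  try {
    // Resolves only after the server-side state change was persisted.
    await sendActionToServer(action);
    return { ok: true };
  } catch (e) {
    // Persistence failed server-side: report it instead of pretending it saved.
    return { ok: false, error: String(e) };
  }
}
```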
> often visual studio will tell you that your change requires a full application restart
I suspect this is due to a .NET runtime limitation.
When you edit your code, as long as you only make modifications inside a method body, without altering the method's declaration or signature, you should be fine. Basically, editing compiled code is fine as long as you are just altering the method's internal IL code.
If, however, you end up doing something which alters a method's name or signature in any way, you are effectively mutating a class definition. And there's no part of the .NET runtime which allows that.
As a workaround, you could generate a new class on the fly, but then you would have to update all the existing references to this class in the code... And that probably means invalidating some type signatures down the line, causing new class-level mutations.
And when all that is resolved, then you would have to port and preserve state for your mutated classes, fake a new stack, etc etc. And then you have turtles all the way down.
Basically at this point, a full recompile is probably the only sane option.
It's a by-product of the static nature of the languages and runtime. I suspect the same applies to the equivalent edit-and-continue features in JVM-based Java IDEs.
Very excited to see this. I built a large Actor-based architecture (homebrew originally, now ported to Orleans) and while spinning myself up on React/Redux recently I started realizing how closely our data flow architecture mimics Redux.
My plan had been to work to make the parallels more explicit so it's easier for other devs to wrap their head around the architecture ("it's just Redux" is a lot easier to explain than "it's this custom homebrew thing", just like "it's just Orleans" helps significantly with detailing the Actor model).
The idea of an actual server-side .NET Redux implementation is really exciting, particularly coming out of the Orleans community.
I built a fairly large backend system using event sourcing ideas and am now using Redux for another backend tool I'm building. Redux and event sourcing are very similar. One big advantage of using Redux is there's tons of documentation and blog posts and plugins so you get a lot of power out of the box. E.g. it took me 5 minutes to add in support for the Redux devtools https://github.com/zalmoxisus/remote-redux-devtools.
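The similarity is essentially that both event sourcing and Redux rebuild state as a left fold over an ordered log. A minimal sketch (the event types here are made up for illustration):

```typescript
// Both event sourcing and Redux compute state as:
//   state = log.reduce(reducer, initialState)
type BankEvent = { type: "Deposited" | "Withdrawn"; amount: number };

const applyEvent = (balance: number, e: BankEvent): number =>
  e.type === "Deposited" ? balance + e.amount : balance - e.amount;

const log: BankEvent[] = [
  { type: "Deposited", amount: 100 },
  { type: "Withdrawn", amount: 30 },
];

// Replaying the whole log yields the current state...
const balance = log.reduce(applyEvent, 0); // 70

// ...and replaying a prefix yields "time travel" to any earlier point.
const balanceAfterFirstEvent = log.slice(0, 1).reduce(applyEvent, 0); // 100
```

That shared shape is why tooling like the Redux devtools transfers so directly to event-sourced backends.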
Our findings in production were very much in line with this, without the additional wrinkle of Redux server-side (which I decline to comment on), and of course Erlang doesn't virtualize actors.
Sadly the SPOF-mitigation I did was never really released integrated with Fuzed.
But this kind of system offers FANTASTIC flexibility when done correctly, and allows you to do what many AWS-based folks also enjoy doing: allowing new and development resources to run peered to production resources for limited trials and easy experimentation.
Oh man this brings back memories! Some years ago I needed to provide http access to an old code base in a fringe programming language and decided to write a language agnostic web server.
This was when I was just getting started with Erlang so I decided ports were the way to go; I could write a small adapter in the target language and it would just work. After a few days prototyping I found fuzed (it also used Yaws, if I'm not mistaken). I spent many nights reading the source code, so thanks!
The project never saw the light of day because Zed Shaw released Mongrel2, which is based on the same concept, around the same time.
I skimmed through the paper, and one possible problem is that when actors communicate with each other it might create a lot of network traffic. Especially if one node goes down, it has to re-sync, which might create a load feedback loop as the load on the other nodes increases. A better way is to let the client do the orchestration of talking to different nodes, so that no server-to-server communication is needed.
These are distributed virtual actor systems, which is the pattern of Orleans. Of course there are many "normal" distributed actor systems (the Erlang Actor model).
> The Redux reducer that processes the action server-side in C# is almost the same as the one in typescript working client-side
... bother anyone else? Surely this is the worst-of-the-worst kind of non-DRY code you can write: two nearly identical implementations in two different languages, each of which also has other responsibilities. Even if you did write tests for both, you would have to make sure the tests were testing the same things.
Having thought about this a bit since I started typing, I think your server-side redux has to be different in some fundamental ways. It won't be an in-memory representation like the client has. It will have to actually perform the DB operations. The server will care about different state than the client. The server often won't care about in-memory representations, so won't need to produce a new state upon every action. If it does, it might be in a different form - a cache that keeps (f)recent data, for instance, rather than an entire state tree that evolves.
I'd like to hear from someone with experience on this, who has written two reducers for the same dataset, perhaps in different languages. Did you find yourself with subtle re-implementation bugs? Was it actually more productive than providing a regular REST API and having your client-side Redux store persist itself?
I personally have to wrangle lots of highly structured, partly denormalised data, with quite a few layers of SQL indirection. Moving one part of the nested data to another branch, for example, is tricky to get right, so I test it thoroughly. And then I have to deal with the fact that my requirements include undo and save buttons – there is a constant distinction between previously-persisted and newly-added objects. I could not imagine having to write this kind of code twice and keep the two implementations in sync. But here's a shot:
- The server would act like a side-effect hook à la @ngrx/effects.
- Make a redux reducer tree out of multiple grains. Each one gets dispatched all the different actions received by the root, so you can have both a `Parent` grain and a `Child` grain react to a `ChildMovedToOtherParent` action. This is only necessary for me because of an over-denormalised schema, but I can imagine other uses.
- Handling undos is tricky. On the client, it's easy, because everything's in-memory and immutable; just store previous states. On the server, it's much harder. Event Sourcing would have you just pick a point in time and rebuild from there. I can see that kinda working: instead of writing to the DB immediately, queue up the actions in-memory and execute them when the server hears a Save action, creating a point you can't rewind past.
- All of this sounds a bit unstable, however. When you queue up DB calls, you have to execute them all sequentially, even if it's just modifying the same record over and over again. The most efficient way to do this, without running a hundred useless DB calls on save, would be simply to hold all your objects in memory, using a reducer to compute their appropriate final to-be-persisted state. This is way too complicated.
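The "reduce the queue to a final state, write once on Save" idea from the list above can be sketched like this. The action and record shapes are invented for illustration; the point is that N queued edits to the same record collapse into one to-be-persisted value, i.e. one UPDATE instead of N.

```typescript
// Sketch: queue the actions in memory, fold them with a reducer, and persist
// only the final result when the user hits Save.
type EditAction =
  | { type: "Renamed"; name: string }
  | { type: "Moved"; parentId: string };

interface Item { name: string; parentId: string }

const reduceItem = (item: Item, a: EditAction): Item => {
  switch (a.type) {
    case "Renamed": return { ...item, name: a.name };
    case "Moved":   return { ...item, parentId: a.parentId };
  }
};

// On Save: fold the whole queue into the single state to write to the DB.
function finalState(initial: Item, queued: EditAction[]): Item {
  return queued.reduce(reduceItem, initial);
}
```

Because the reducer is pure, this also gives you undo for free before Save: dropping the tail of the queue and re-folding rewinds to any earlier point.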
Has anyone got any experience working with an architecture like this? Or is everybody that's using Redux for state management only as ambitious as a counter or a one-way data funnel and not one bit more?
> Surely this is the worst-of-the-worst kind of non-DRY code you can write, where you have to have two identical implementations in two different languages, but they also have to have other responsibilities? Even if you did write tests for both, you would have to make sure that the tests were testing the same things.
It's always been somewhat common in client-server programming since the dawn of time. Even sometimes in the same language on both sides like many classic C/C++ game models where the client and server are intentionally kept at a remove and use different codebases. In the classic game architecture paradigm, the client can't be trusted, but you also want to make sure that the client responds as quickly as possible.
It's interesting to see some of those game architecture approaches returning to web development: things like "optimistic updates" in the Redux world are a very similar pattern to classic game state client/server management. Client reducers act as if the server-side action was carried out in advance of server response and server response corrects only if necessary.
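That pattern reduces to two cases in a reducer: apply the action locally as if the server already agreed, and accept the server's authoritative state if it later disagrees. A minimal sketch (action and field names are illustrative, not from any real codebase):

```typescript
// Sketch of the optimistic-update pattern from game-style client/server state:
// the client acts immediately, and the server corrects it only if necessary.
interface State { likes: number }

type Msg =
  | { type: "LIKE_OPTIMISTIC" }
  | { type: "SERVER_CORRECTION"; authoritative: State };

const reducer = (state: State, msg: Msg): State => {
  switch (msg.type) {
    case "LIKE_OPTIMISTIC":
      return { likes: state.likes + 1 }; // assume the server will accept it
    case "SERVER_CORRECTION":
      return msg.authoritative;          // the server wins if it disagrees
  }
};
```

In the common case the correction never arrives (or matches what the client already shows), so the UI feels instant while the server remains the source of truth.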
I don't think I can comment that deeply on the rest of your post, but maybe the above digression helps a bit?
I also maintain a catalog of Redux-related addons, utilities, and tools, at https://github.com/markerikson/redux-ecosystem-links .