Aside from that: the US, UK and Israel have all proven that it is possible to do better than the EU here. They didn't know any better either, yet ordered way sooner and in larger quantities. AFAIR the UK even recently claimed to have vaccinated more people than the EU in absolute numbers.
The EU bet pretty heavily on AstraZeneca and that's not the Eastern European countries' fault. The EU already ordered AstraZeneca in August, while it only ordered from Biontech and Moderna in November (on the 11th and 25th). It wasn't wrong to buy AstraZeneca; it's cheap and, more importantly, easy to handle. The failure was to bet so much on it. AstraZeneca wasn't just something that was ordered as well, it was and still is THE vaccine intended for the largest part of the population.
Money also can't be an issue: the EU came up with 750B euros to fight the economic impact of the pandemic, and that was already in April or so. For the vaccines, the EU only had around 2.7B available for most of 2020. The UK alone spent more than that, the US over $10B. The worst part: 2B of those 2.7B euros were simply repurposed from an already existing fund, meaning the EU states together only had to come up with 700M euros in total. Even ignoring all of that, the vaccine is so cheap compared to the cost of lockdowns (and lives) that the price simply doesn't matter much. Ironically, the price they got is the part the EU is especially proud of, and EU politicians are quick to point out that Israel paid twice as much per dose.
What they don't say is that the Pfizer/Biontech vaccine's price depends on the amount and the delivery date. So you could actually pay more to get the vaccine sooner. It seems the EU chose not to do that because they felt confident enough with AstraZeneca (which was supposed to start production in October). Unfortunately that information isn't public, so we don't know for sure. But it would explain both why the EU didn't expect significant shipments of non-AZ vaccines in the first quarter and why they got them cheaper.
Unfortunately the EU still communicates that there is no problem with the procurement, only with evil pharmaceutical companies not delivering as promised. There will not be any remorse.
It really must hurt them that even the heavily criticized politicians Johnson, Netanyahu and Trump did a much better job with procurement than the EU.
In that particular interview, I tend to agree. However, I've seen/read multiple other interviews and talk shows where EU politicians talk quite differently. They may admit some general mistakes (without naming any) but still dispute each and every criticism.
In one talk show, for example, a representative of the European Commission mentioned at least twice that in Africa even fewer people got vaccinated than in the EU. As if that were the frame of reference.
Yes, that's true, but it also has to be said that producing these vaccines is not so simple. It's all new technology, no mRNA vaccine was ever approved before, and you can't just dial up production so easily. Unfortunately. You can't just convert any old pill factory into one that produces these vaccines. Apparently there are supply chain issues as well - some of the things needed to produce this vaccine are not available in the quantities required.
But all that being said, I think we should move to war-time production here.
I can absolutely imagine that scaling production is extremely hard.
However, I don't have confidence that the same politicians who just claimed they couldn't foresee problems with mass production did everything in their power to help here last summer. I mean, the EU only ordered from Biontech and Moderna in November (and fewer doses than the companies offered). That doesn't really look like an incentive for companies to look into opening another factory already in summer.
Just throwing the same numbers in here: the EU had ordered 4+ doses per inhabitant by Q4 2020. Deliveries were initially scheduled through Q3 2021, with enough doses to reach herd immunity by the end of Q2. Not sure how ordering even more, without knowing when said doses would have been available, would have helped.
EU politicians really screwed up in summer 2020, though. Besides ordering (which was outsourced to the EU anyway), they had one job: planning and setting up operations to vaccinate millions of people in the first six months of 2021. That would have included coordinating patient appointments, manufacturing and deliveries (ideally involving the EU), making sure back-up plans were in place, getting processes up and running to make it as easy as possible to get vaccinated, making sure manufacturers could get the necessary support in securing their upstream supply chains when needed, and so on.
None of that happened. Instead, everyone was so, so happy that Europe had a great summer vacation. Then everyone so, so hoped the unsurprising increase in cases starting in October would just go away. Then everybody so, so hoped they could save Christmas shopping and Christmas markets. And then everybody fell back on the only lesson they learned during the first wave: people like politicians who act tough. They just didn't realise that back then, acting tough, read lockdowns, was in line with expert advice. Basically, the EU had over six months to get ready for an EU-wide vaccination campaign. Member states also had six months. And they did, it seems, by no means enough, if they did anything at all.
This is now showing, and everyone is just happy to point at the manufacturers and the EU. We'll see how long that story holds water.
I can confirm the Czech government did basically nothing to prepare for vaccinations until the very last moment - something resembling a mass vaccination plan was only published on December 22nd (!), and only now does the system seem to be in a somewhat working state, likely because there aren't that many vaccines to process yet.
FWIW Canada did the same thing and our federal gov't is paying the same political price.
Trump reserved the US-manufactured supply exclusively for the US, so Canada is reliant on EU exports, which is kind of crazy to think about, geographically.
My understanding is many Canadian officials were caught off guard by the fact that the vaccine became available Q1 2021, they were thinking Q2/Q3 2021 was more likely and so much of the purchase deals were geared around that.
They ordered only slightly later than the UK and the US (let's ignore Israel, which is basically a large-scale trial). The main difference: the EU used normal certification processes, just sped up considerably. The UK and US used emergency certification.
The volumes the EU initially ordered were absolutely sufficient at 4+ doses per EU inhabitant. Manufacturing capacities were sufficient for that as well. It all started to go south as soon as member states looked for scapegoats for why vaccination was happening so slowly: first the manufacturers, then the EU, then the federal government (where applicable). It is a last-mile distribution issue now, if you will, not a manufacturing one.
I think that's a very charitable interpretation of events. Realistically, the EU negotiated on behalf of the 27 member countries to try to get the best deal on price and, perhaps more importantly, to avoid the inevitable tensions that would follow from one member country securing more than another. This slowed down the negotiations.
Whether there is any truth in the EU's certification being conducted differently is a bit moot, given that they have approved the same vaccines based on the same trial data.
I don't think it is charitable. The EU had to juggle 27 individual countries, one central certification and Brexit. They had to avoid a situation in which rich countries, e.g. Germany, outbid poorer countries, e.g. Hungary or Greece. They managed to do that. They over-purchased and split orders between suppliers, further minimizing the risk.
And they did all that using normal certification. They even pointed out, quite clearly, that the actual management of vaccination campaigns, vaccine ordering and national distribution is up to the member states.
That last part shows very different results: in Germany, for example, Mecklenburg-Vorpommern is far ahead in per-capita vaccinations, while Bavaria is behind. All we have won by focusing on the supply of vaccines so far is a shutdown of Pfizer's plant in Belgium to produce more doses, which are not needed, at a later point in time. And a nasty contract dispute with AZ after the media and, at least IMHO, politicians singled out AZ as a scapegoat.
There is the caveat that generated machine code can embed addresses of objects that get relocated by the GC. In this case the code needs to be patched even though the code itself doesn't move.
I'm not familiar with Ravenbrook's GC. How does it handle GC roots outside the managed heap? I presume it updates the GC roots as it moves the objects they point to, or does it not support GC roots outside the managed heap at all? If the code refers to managed objects, you need to mark those locations in the machine code as containing GC roots; otherwise you might GC the referenced objects, which is just as bad as moving them.
I don't really know that particular GC, but not necessarily. Code objects are usually stored in a separate area and might be managed differently than regular objects. So a copying GC might not relocate code - maybe not even collect it. In case objects are relocated by the GC, there is usually "relocation information" which can be used to patch the code after it got relocated.
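To make the "relocation information" idea concrete, here's a toy sketch (not specific to any real GC - the data structures and names are made up for illustration): the collector keeps, per object address, a list of offsets in the machine code where that address is embedded, and patches those sites when the object moves.

```python
# Toy model: "machine code" is a list of words, and a relocation table
# maps an object's address to the code offsets that embed that address.
def relocate(heap, code, reloc_table, old_addr, new_addr):
    """Move an object and patch every code location that embeds its address."""
    heap[new_addr] = heap.pop(old_addr)           # move the object
    for offset in reloc_table.get(old_addr, []):  # patch sites in the code
        assert code[offset] == old_addr
        code[offset] = new_addr
    reloc_table[new_addr] = reloc_table.pop(old_addr, [])
    return heap, code

heap = {0x1000: "obj"}
code = ["mov", "r1", 0x1000, "call"]  # slot 2 embeds the object's address
reloc = {0x1000: [2]}
relocate(heap, code, reloc, 0x1000, 0x2000)
# the code stayed in place; only the embedded address was rewritten
```

Real collectors patch raw bytes in executable pages (and have to worry about instruction-cache flushes), but the bookkeeping is essentially this.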
Releasing memory is definitely useful for mobile devices or browsers. Even in some server use cases it's useful, e.g. when you pay for memory usage. I guess that's why the JVM has this -XX:SoftMaxHeapSize option.
I guess it depends whether GCs are always scheduled in an allocation or can be triggered another way. Either way that should be easy to disable.
I read somewhere that D doesn't have write barriers, so I would assume they have a hard time implementing more advanced GC features like generational collection or concurrent marking. It's not surprising that the GCs in the JVM achieve much better pause times.
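For anyone wondering why write barriers matter for generational GC, here's a minimal sketch (my understanding, not how D or the JVM literally implement it): a minor collection only scans the young generation, so every store of a young reference into an old object must be recorded, otherwise the young object looks unreachable.

```python
# Toy model of a write barrier: stores into old-generation objects that
# point at young-generation objects get recorded in a "remembered set",
# which the minor GC then treats as extra roots.
class Obj:
    def __init__(self, gen):
        self.gen = gen      # "old" or "young"
        self.field = None

remembered_set = set()

def write_field(obj, value):
    """A store with a write barrier: record old->young pointers."""
    if obj.gen == "old" and value is not None and value.gen == "young":
        remembered_set.add(obj)  # minor GC will scan obj as a root
    obj.field = value

old, young = Obj("old"), Obj("young")
write_field(old, young)
# without the barrier, a minor GC that only scans the young generation
# would never see this reference and could free `young`
```

Without compiler support for emitting this check on every pointer store (which D apparently lacks), the collector has no cheap way to find old-to-young references short of scanning the whole heap.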
> What LLVM has going for it versus GCC, is the license, specially beloved by embedded vendors and companies like Sony, Nintendo, SN Systems, CodePlay can save some bucks in compiler development.
The license is probably considered an advantage by many companies. However, it is definitely not the only reason for LLVM's success. There are many technical reasons as well, e.g. cleaner code and architecture. My personal impression is that a lot of research and teaching has moved from GCC to LLVM as well; universities usually don't care that much about the license.
Yes, GCC has GIMPLE (and before that just RTL), but it is not as self-contained as LLVM's IR. In GCC, the front end and the middle end are deliberately tangled, for political reasons. Nevertheless, I agree that LLVM isn't as revolutionary as the poster you are replying to claims; reusing an IR for multiple languages was done before. However, I don't think any other system was as successful at it as LLVM. Rust, Swift, C/C++ via clang, Julia, Fortran, JITs like JSC and Azul's JVM are/were using LLVM as a compilation tier, GPU drivers, etc. Those are all hugely successful projects, and if you ask me this is an impressive list already, while not even complete. It seems most new languages these days use LLVM under the hood (Go being the exception). IMHO this is also because LLVM's design was flexible enough to enable all those widely different use cases. GCC supports multiple languages as well, but it never took off to the degree that LLVM did.
I don't know all the compilers you mentioned, but how many of those were still maintained and available on the systems people cared about by the time LLVM got popular? Are those proper open-source projects?
Yes that is an impressive list, but I bet if LLVM had gone with a license similar to GCC, and everything else remained equal, its adoption wouldn't be as it is.
No, those projects aren't open source at all. They used their own compilers, or forked variants of GCC which they couldn't reveal thanks to NDAs. Now, thanks to clang's license, they have replaced their implementations, only contributing back what they feel is relevant to open source.
I would still consider WASM a stack machine and not a register machine. Yes, there are mutable local variables in WASM, but Java bytecode has them as well - and you consider that a stack machine. BTW, the designers of WASM explicitly call it a stack machine here: https://github.com/WebAssembly/design/blob/master/Rationale..... With WASM's MVP it was necessary to store e.g. loop state in local variables; thanks to recent changes this no longer seems necessary. I think this was the main argument for why that blog post considered WASM a register machine. javac also makes heavy use of variables in bytecode, but somehow no one considers the JVM a register machine.
> my observations are that people with experience in the field tend to prefer register machines
That's actually the opposite of my observation, they seem to prefer stack machines.
The clang disassembly given in the article sure makes it look like WASM is a nested expression tree, which leaves the choice of stack versus register to the implementation.
The stack machine is a model for the semantics of wasm, in the sense that the safety properties of wasm are defined in terms of stack types rather than SSA or register properties (for those unfamiliar with stack machines: this stack is a different concept from the call stack, which in wasm is used to store the local variables). Whether it is better implemented during execution as a register machine or a stack machine is an implementation detail.
Java has instructions like dup, swap etc. To me, that is the critical difference here, and where I draw the line between “stack machine” and “register machine”.
I have to admit this line seems arbitrary to me. So WASM is a register machine to you, but if they simply added those two instructions, would it suddenly become a stack machine? Those instructions would actually be trivial to add. I think these terms are relatively well defined, and when you argue that WASM is a register machine even though its inventors explicitly claim it's a stack machine, you should have really good arguments for that. Personally, I would be surprised if you could point me to any literature that supports your definition.
Turing completeness sounds like an arbitrary distinction to those outside of the field of CS, but it’s not.
To me, the distinction here is that the stack machine in WASM is restricted to the point that it corresponds 1:1 with an expression tree - not even a graph, just a tree. This means that every function in WebAssembly can be thought of as a collection of statements and expressions, and the stack machine abstraction is nothing more than a serialization format for the expressions.
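The "serialization format for expressions" point can be shown in a few lines (a toy model, not real WASM encoding): postorder-flattening an expression tree yields exactly the stack-machine code, and running that code with a stack recomputes the tree's value.

```python
# Toy model: an expression tree is either an int or (op, left, right).
def flatten(tree):
    """Postorder-serialize an expression tree into stack ops."""
    if isinstance(tree, int):
        return [("const", tree)]
    op, left, right = tree
    return flatten(left) + flatten(right) + [(op, None)]

def run(ops):
    """Evaluate the serialized form with an operand stack."""
    stack = []
    for op, arg in ops:
        if op == "const":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

tree = ("mul", ("add", 1, 2), 4)   # (1 + 2) * 4
assert run(flatten(tree)) == 12
```

With dup/swap (or multi-value), the code can describe DAGs that no single tree corresponds to, which is exactly why those instructions mark the boundary being argued about here.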
Maybe dial it back a bit with the challenge to point at literature. The literature has not really caught up with the existence of WASM yet.
> Maybe dial it back a bit with the challenge to point at literature. The literature has not really caught up with the existence of WASM yet.
It wasn't me who claimed that WASM is "obviously" a register machine, despite the inventors saying otherwise. They even explicitly state that they decided against a register machine. I guess it's then reasonable for me to ask what definition of stack vs register machines you are basing this opinion on. Let me be clear: I was not asking for literature about WASM specifically, but for a definition of register/stack machines that supports your claim.
WASM's instruction encoding is very much based on a stack machine. Even with the initial limitations you mentioned I don't think it qualifies as "obviously a register machine". As already mentioned in multiple comments those restrictions were already lifted with the multi value proposal.
I understand that there is a grey area, but simply claiming "obviously a register machine" doesn't seem right to me. Implementation-wise, WASM is a stack machine, even if it needs/needed locals to be Turing-complete.
The multi-value proposal breaks the ability to turn Wasm into expressions easily, and thus makes it even more of a stack machine than it already is. Dup and swap may still be added in the future.
A defining feature of a register machine is that the actual instruction encoding has direct references to source and destination registers in it. Wasm doesn't have those, it has explicit get_local instructions instead.
That said, if you turn off LLVM's WebAssemblyRegStackify pass, all LLVM IR's values will end up in locals, with little to no stack usage. Still no register machine, but a bit more of a grey area :)
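To illustrate the encoding point from above (a rough sketch, not the actual WASM binary format): a stack-machine add is a bare opcode with no operands, while a register-machine add has to name its destination and source registers inside the instruction. Reads from locals are their own explicit instructions.

```python
# Stack-machine style: operands are implicit (top of stack), locals are
# accessed via explicit local.get instructions.
stack_code = [
    ("local.get", 0),   # push local 0
    ("local.get", 1),   # push local 1
    ("i32.add",),       # no operands: pops two values, pushes the sum
]

# Register-machine style: every instruction encodes its register numbers.
register_code = [
    ("add", 2, 0, 1),   # r2 <- r0 + r1
]

def run_stack(code, locals_):
    stack = []
    for ins in code:
        if ins[0] == "local.get":
            stack.append(locals_[ins[1]])
        elif ins[0] == "i32.add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def run_register(code, regs):
    for op, dst, a, b in code:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
    return regs[2]

assert run_stack(stack_code, [3, 4]) == run_register(register_code, [3, 4, 0])
```

Both compute the same thing; the difference is purely where the operand references live, which is the distinction being drawn in this comment.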
Maybe, but remember that you would then need to buy new hardware to use new WASM features.
Also WASM isn't really ideal for interpretation, this could make implementing the CPU harder (however I have no clue about implementing CPUs, so this is just a guess).
What would be the advantage? Performance? Probably not much after JIT compiling WASM to native machine instructions. If there is an actual problem there, I guess it would be better to just add new native instructions that support WASM semantics. The JIT can then use these instructions if available.
Right now WASM can't do much without a runtime, so I think a WASM-only CPU is probably infeasible for some time.