deaddodo's comments

"De facto" is the keyword there. Only the nation of origin has any say on company management and infrastructure in a de jure manner. The only power non-origin nations/entities have is via leveraging their ability to do business in the region and/or their local holdings.

> The only power non-origin nations/entities have is via leveraging their ability to do business in the region and/or their local holdings.

Which is absolutely enormous, so this distinction is splitting hairs.


Depends on the area/region. But not really.

But believe what you like.


Have you tried using Manifest V3 adblockers on Chrome? They're not nearly as capable or useful as the old ones.


The dependencies they're likely referring to aren't core libraries, they're shared interfaces. If you're using protobufs, for instance, and share the interface definitions in a common repo, updating Service A's interface(s) means every service that communicates with it gets rebuilt and redeployed as well (whether it uses the changes or not). Larger organizations might maintain a true dependency-management tree for something like this, but for smaller/scrappier teams that's out of scope, so they just redeploy everything in a domain.
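As a sketch of that setup (repo layout and names are hypothetical), every consumer generates code from the same file, so any change to it touches every consumer's build:

    // shared-interfaces/orders.proto (hypothetical shared repo)
    syntax = "proto3";

    // Service A owns this interface; Services B and C generate their
    // clients from this same file, so regenerating after a change
    // touches all three builds.
    service OrderService {
      rpc CreateOrder(OrderRequest) returns (OrderReply);
    }

    message OrderRequest { string sku = 1; }
    message OrderReply   { string order_id = 1; }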


> If you're using protobufs, for instance, and share the interface definitions in a common repo, updating Service A's interface(s) means every service that communicates with it gets rebuilt and redeployed as well (whether it uses the changes or not).

This is not true! This is one of the core strengths of protobuf. Non-destructive protobuf changes, such as adding new API methods or new fields, do not require clients to update. On the server-side you do need to handle the case when clients don't send you the new data--plus deal with the annoying "was this int64 actually set to 0 or is it just using the default?" problem--but as a whole you can absolutely independently update a protobuf, implement it on the server, and existing clients can keep on calling and be totally fine.
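For the presence problem specifically, proto3's explicit field presence (the optional keyword, available since protoc 3.15) is one way out; a sketch, with a hypothetical message:

    syntax = "proto3";

    // Hypothetical message. Marking a proto3 scalar "optional" gives it
    // explicit presence, so generated code gets a presence check (e.g.
    // has_quantity() in C++) instead of conflating 0 with "never set".
    message OrderRequest {
      string order_id = 1;
      optional int64 quantity = 2;
    }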

Now, that doesn't mean you can go crazy: doing things like deleting fields, changing field numbers, or renaming APIs will break clients. But that's just the reality of building distributed systems.
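To make the safe/breaking line concrete, a sketch of one compatible evolution (message and field names are hypothetical):

    // v1
    message UserProfile {
      string name = 1;
      int32 age = 2;
    }

    // v2 of the same file, later: adding "email" under a fresh number is
    // safe, since old clients simply ignore field 3. Removing "age" is
    // handled by reserving its number and name so they can never be
    // recycled into something wire-incompatible.
    message UserProfile {
      string name = 1;
      reserved 2;
      reserved "age";
      string email = 3;
    }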


What you're talking about is simply keeping the API (whether a library or a service) backwards-compatible. There are plenty of strategies to achieve that, and it can be done with almost any interface layer (HTTP, protobuf, JSON, SQL, ...).


I was oversimplifying for the sake of example, but yes, you are correct. Properly managed protobufs don't require a client update on strict interface expansion, so they shouldn't always require a redeploy.


Oh god no.

I mean I suppose you can make breaking changes to any API in any language, but that’s entirely on you.


A demake would be a reimagining of a modern game in the style and aesthetics of an earlier era. E.g. taking God of War and turning it into a 2D Shinobi-style platformer for the Sega Genesis, or turning Gran Turismo into a Mode 7-style racer on the SNES.

In this case, the creator wrote a custom 3D renderer and recreated the models/meshes to bring as close an approximation of the N64 experience as possible to the GBA.

I wouldn't call it a port necessarily ("recreation" seems more apt), but it's closer to that than a demake.


The game was built for the Saturn and ported to the PS1 (versus being built independently for it). During that process, graphical fidelity and effects were lost, and audio was transcoded in a lossy manner: sprites were lower quality, the water effects were worse, textures were downgraded, shadows were lost, etc.

This is a pretty bad demonstration, since they made literally zero effort to sync the two side-by-side until the end, but it gives you an idea:

https://www.youtube.com/watch?v=UL6Z9xvt7h0

That being said, it's not so much worse that the PS1 version is unplayable. It's just that, if you had the choice between the two, the Saturn version is slightly more pleasant to look at and listen to.


Do you not know that the US is a Federal system and there are (at minimum) 50 different ways that schools are funded?

California's schools (for instance) aren't funded by local taxes; they're funded by the state and allocated funding based on a formula[1] of performance, need, population, etc. That can be augmented by local taxes, but in practice that's rare, as the wealthy just avoid the system altogether, opting instead for private institutions.

That's at least 12% of the population that is not funded in the manner you outline.

1 - https://www.cde.ca.gov/fg/aa/lc/


Equity remains a valid criticism of LCFF in California specifically.

For one unremarkable observation in this area, see the following think tank report:

> States often commission cost studies to establish the level of funding required to help students meet state standards. LPI analyzed five of the more recent of these studies [...] All of these studies recommended additional weighted funding to support English learners and students considered "at-risk," which was most often defined by a measure of family income and also included other factors [...] The recommended weights for English learners in these studies ranged from 15% to 40% of the base grant level in each state. The recommended weights for at-risk students ranged from 30% to 81%. Compared to the recommended funding in these states, the LCFF’s supplemental grant weight of 20% is at the lower end of the recommended range of weights for English learners and below the range of weights for at-risk students.

https://files.eric.ed.gov/fulltext/ED670929.pdf
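To put rough numbers on that gap, a back-of-the-envelope sketch in Python (the $10,000 base grant is hypothetical; real LCFF base grants vary by grade span, and concentration grants are omitted):

    # Illustrative only: compares LCFF's 20% supplemental weight against
    # the 30%-81% at-risk range from the studies LPI reviewed. The base
    # grant figure is hypothetical.
    base_grant = 10_000
    lcff_supplement = base_grant * 0.20   # $2,000 under LCFF's 20% weight
    study_low = base_grant * 0.30         # $3,000, low end of the range
    study_high = base_grant * 0.81        # $8,100, high end of the range
    print(lcff_supplement, study_low, study_high)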


A good chunk of EULAs are partially to completely unenforceable under US contract law as well.

It just doesn’t stop corporations from using them as a scare tactic.


Win98 was head and shoulders above Mac OS 9 from a stability perspective. It had protected memory, actual preemptive multitasking, a somewhat functional driver system built on top of an actual HAL, functional networking, etc.

To be clear, Win98 was a garbage fire of an OS (when it came to stability), which makes it so much worse that Mac OS 8-9 were so bad.


98's multitasking and memory 'protection' were a joke. On the same mid-to-high-end machine for the era, 2K and XP were miles ahead of 98 under mid-to-high load.

Maybe not on a Pentium, but once you hit 192MB of RAM and a ~500MHz P3/AMD K7, NT-based OSes were tons better.

You only had to open a dozen IE windows to see it: Win98, even SE, would sweat, while 2K would fly.

On single tasks, such as near-realtime multimedia (emulators, games, or video players with single-threaded decoders), Win98 could be better. Under multiprocessing/multithreading, Win98 crawled against Win2K even on P4s with 256MB of RAM.


Well, Windows NT is an actual operating system, while Win98 and classic Mac OS are just horribly overgrown home computer shells in an environment they should never have been exposed to.


And yet, OS 8 and OS 9 couldn't even match that joke.


Ahem, Win98 would BSOD if you sneezed hard near it. Installing a driver? BSOD. IE page fault? BSOD. Hit the 128K stack limit? Either a grinding halt or a BSOD. And so on...


I worked at a company that was delivering a client-side app in Java launched from IE. I think we had an ActiveX plugin as the "launcher." This predated "Java Web Start." It was hysterically bad. We were targeting 32 meg Win 98 systems and they were comically slow and unstable. Most of our developers had 64 and 128 meg boxes with NT 4.0. I mostly worked on the server side stuff, and used them as a terminal into the Solaris and Linux systems.


> To be clear, Win98 was a garbage fire of an OS (when it came to stability), which makes it so much worse that Mac OS 8-9 were so bad.


Win98 SE and Mac OS 9 were on par. Ditto with System 7.5.3 and Windows 95 OSR2.


I disagree, and gave the technical reasons. So now we're just going into opinion, which I'm not interested in.

Either way, you're welcome to believe what you like.


If you call it Rock-Paper-Scissors it still follows logically:

Rock loses to Paper loses to Scissors
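The "loses to" relation is just a cycle; a quick Python sketch, treating each move as losing to the next in a wrap-around list:

    # Each move loses to the next one in the cycle (wrapping around).
    ORDER = ["rock", "paper", "scissors"]

    def loses_to(move):
        return ORDER[(ORDER.index(move) + 1) % len(ORDER)]

    assert loses_to("rock") == "paper"
    assert loses_to("paper") == "scissors"
    assert loses_to("scissors") == "rock"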


Why would you want to describe it as "loses to" rather than "beats"? People want to win, not lose.


Or any other ordering


Yes, but I was specifically choosing the ordering that is by far the most popular, at least in American English.


Intel's 18A is closer to availability (functional, ramping to production) than Samsung's SF2 (still in the dev/testing phase); both are roughly analogous to TSMC's N2.

TSMC is ahead, as usual, but Intel is closer than Samsung (in this specific case).


> Such metrics are often closely guarded trade secrets. But according to the Dailian report, Samsung's yields for SF2 are in the 50% to 60% range, just high enough for commercial production. The same report puts TSMC's upcoming N2 node at 80%.

Looks like Samsung is actually closer to production than Intel 18A, which is still having yield issues.

https://www.pcgamer.com/hardware/graphics-cards/samsungs-nex...


Dailian is being overly charitable to Samsung and downplaying Intel. They’re a Korean news outlet with a vested interest in the Chaebol.

That being said, take it however you like. Apple is talking to Intel to make their deal with TSMC more favorable. They could have done the same with Samsung. Either way, TSMC will still be fabbing (at least a good chunk of) their 2nm chips.

