cptskippy's comments

They certainly look viable as replacements for my Tesla P40 for virtual workloads.

Support for Single Root IO Virtualization (SR-IOV) to enable compute and Graphics workloads in virtualized environments.

> The government does most things poorly and with little regard to budget or quality.

That's a common line by conservatives who are actively sabotaging government with policies and laws which they then point to as evidence of such inefficiencies.


> It is interesting that IBM dominated this generation of consoles, and was vanquished in the next.

IBM's Power was the only logical option at the time.

These consoles were being designed around 2000. Intel and AMD weren't partnering on bespoke CPUs at that time; I don't even think AMD would have been considered a viable partner. Neither had viable 64-bit options, and part of console marketing at the time was ever-increasing bit depths.

Prior console generations had used MIPS, which wasn't keeping up with ever-increasing performance expectations, and players like Toshiba and Sony were looking for a higher-performance CPU architecture. IBM's Power architecture was really the only option. Sony, Toshiba, and IBM partnered to develop a new 64-bit microarchitecture called Cell.

Microsoft's first console was basically a PC and that's how everyone saw it. The 360 was an opportunity for Microsoft to show that it could compete with the big boys. It was also an opportunity to keep a toe dipped in RISC, because it had dropped support for RISC CPUs with Windows 2000.


By the way, the AMD Athlon 64 launched in 2003. The PS3 launched in 2006. I had an AMD64 processor in my laptop in 2005.

What wasn't viable?


Yeah that part didn't make sense, not to mention that neither the PS3 nor the 360 were running 64-bit software. They didn't have enough memory for it to be worth it.

You don't need memory to make 64-bit software worth it, just 64-bit mathematics requirements. Which basically no video game console uses, as from what I understand 32-bit floating point continues to be the state of the art in video game simulations.

Fundamentally it's still a memory limitation, just in terms of memory latency/cache misses instead of capacity. If you double the size of your numbers you're doubling the space they take up and all the problems that come with it.

No it isn't. The 64-bit capabilities of modern CPUs have almost nothing to do with memory. The address space is rarely 64 bits of physical address space anyways. A "64-bit" computer doesn't actually have the ability to deal with 64 bits of memory.

If you double the size of numbers, sure, it takes up twice the space. If the total size is still less than one page it isn't likely to make a big difference anyways. What really makes a difference is trying to do 64-bit mathematics with 32-bit hardware. This implies some degree of emulation with a series of instructions, whereas a 64-bit CPU could execute that in one instruction. That one instruction very likely executes in fewer cycles than a series of other instructions; otherwise no one would have bothered with it.
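Rough sketch of what that emulation looks like on a generic 32-bit target (made-up names, just to show the shape of what a compiler has to emit):

    #include <stdint.h>

    /* Illustrative only: a 64-bit add built from 32-bit pieces, roughly how
       it gets lowered on a CPU without 64-bit registers. */
    typedef struct { uint32_t lo, hi; } u64_pair;

    u64_pair add64_on_32bit(u64_pair a, u64_pair b) {
        u64_pair r;
        r.lo = a.lo + b.lo;              /* 32-bit add of the low halves        */
        uint32_t carry = (r.lo < a.lo);  /* wrap-around means a carry came out  */
        r.hi = a.hi + b.hi + carry;      /* second add, plus the carry          */
        return r;
    }

    /* On a 64-bit CPU the whole thing is one add instruction:
       uint64_t add64(uint64_t a, uint64_t b) { return a + b; } */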


"Bitness" of a CPU almost always refers to memory addressing.

Now you could build a weird CPU that has "more memory" than it has addressable width (the 8086 is kind of like this with segmentation and 8/16 bit) but if your CPU is 64 bit you're likely not to use anything less than 64 bit math in general (though you can get some tricks with multiple adds of 32 bit numbers packed).

But a 32 bit CPU can do all sorts of things with larger numbers, it's just that moving them around may be more time-consuming. After all, that's basically what MMX and friends are.


The original 8087 implemented 80-bit operands in its stack.

It would also process binary-coded decimal integers, as well as floating point.

"The two came up with a revolutionary design with 64 bits of mantissa and 16 bits of exponent for the longest-format real number, with a stack architecture CPU and eight 80-bit stack registers, with a computationally rich instruction set."

https://en.wikipedia.org/wiki/Intel_8087


Typically, it doesn't have the ability to deal with a full 64 bits of memory, but it does have the ability to deal with more than 32 bits of memory, and all pointers are 64 bits long for alignment reasons.

It's possible but rare for systems to have 64-bit GPRs but a 32-bit address space. Examples I can think of include the Nintendo 64 (MIPS; apparently commercial games rarely actually used the 64-bit instructions, so the console's name was pretty much a misnomer), some Apple Watch models (standard 64-bit ARM but with a compiler ABI that made pointers 32 bits to save memory), and the ill-fated x32 ABI on Linux (same thing but on x86-64).

That said, even "32-bit" CPUs usually have some kind of support for 64-bit floats (except for tiny embedded CPUs).


The 360 and PS3 also ran like the N64. On PowerPC, 32-bit mode on a 64-bit processor just enables a 32-bit mask on effective addresses. All of the rest is still there, like the upper halves of GPRs and instructions like ldd.

You misread my comment. I'm not saying that it limits the amount of memory, I'm saying that _using more memory has cost_.

> If the total size is still less that one page it isn't likely to make a big difference anyways

It makes a significant difference when you're optimizing around cache behavior and SIMD lanes.
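To make that concrete (just a sketch with typical sizes, not from either console's SDK): with fixed-width vector registers and cache lines, doubling the element size halves what you can process per instruction and per line.

    #include <stdio.h>

    int main(void) {
        /* Assume a 128-bit SIMD register (SSE/AltiVec class) and a 64-byte cache line. */
        const int vec_bits = 128, line_bytes = 64;

        printf("floats  per vector: %d, per cache line: %d\n",
               vec_bits / (8 * (int)sizeof(float)),  line_bytes / (int)sizeof(float));
        printf("doubles per vector: %d, per cache line: %d\n",
               vec_bits / (8 * (int)sizeof(double)), line_bytes / (int)sizeof(double));
        /* 4 vs 2 lanes, 16 vs 8 elements per line: same data set, twice the
           memory traffic and half the SIMD throughput if you move to 64-bit. */
        return 0;
    }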


Parts of the 360 did. The hypervisor ran in 64-bit mode, and used multiple simultaneous mirrors of the physical address space with different security properties as part of its security model.

It's not like the games weren't running in 64 bit mode too (on both consoles)

They had full access to the 64 bit GPRs. There wasn't anything technically stopping game code from accessing the 64 bit address space by reinterpreting a 64 bit int as a pointer (except that nothing was mapped there).

It's only the pointers that were 32 bit, and that was nothing more than a compiler modification (like the linux x32 ABI).

They did it to minimise memory space/bandwidth. With only 512 MB of memory, it made zero sense to waste the full 8 bytes per pointer. The savings quickly add up for pointer heavy structures.
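Back-of-the-envelope version (a hypothetical node layout, not anything from Sony's SDK; 32-bit offsets stand in for 32-bit pointers):

    #include <stdint.h>
    #include <stdio.h>

    /* A typical pointer-heavy scene-graph node: the three links cost 12 bytes
       with 32-bit pointers and 24 bytes with 64-bit ones. */
    struct node32 { uint32_t parent, first_child, next_sibling; float xform[16]; };
    struct node64 { void *parent, *first_child, *next_sibling;  float xform[16]; };

    int main(void) {
        printf("32-bit pointers: %zu bytes/node\n", sizeof(struct node32));
        printf("64-bit pointers: %zu bytes/node\n", sizeof(struct node64));
        /* Across hundreds of thousands of nodes in a 512 MB console, that's
           megabytes of RAM plus the extra cache misses to walk it. */
        return 0;
    }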

I remember this being a pain point for early PS3 homebrew. Stock gcc was missing the compiler modifications, and you had a choice between compiling 32-bit code (which couldn't use the 64-bit GPRs) or wasting bandwidth on 64-bit pointers (with a bunch of hacky adapter code for dealing with 32-bit pointers from Sony libraries).


Games themselves ran in 32 bit mode.

The difference is that on PowerPC, 32bit mode on 64bit processors (clearing the SF bit in the MSR) is just enabling a hardware 32bit mask on the effective address before it gets translated into a virtual address.

Unlike on x86-64 and arm64, there's no free (or even that cheap) way to do an ILP32 abi purely in software. x86 and arm allow encodings for memory reference instructions that only use the bottom half of the registers (the E* registers on x86, and the W* registers on arm64). No such encoding exists on PowerPC for memory reference instructions, so you'd be stuck manually masking each generated pointer.
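In other words, a purely software ILP32 ABI on PPC64 would have to emit something like the following around every dereference (purely illustrative, not actual codegen), which is why the hardware mask mode gets used instead:

    #include <stdint.h>

    /* Illustrative only: with 32-bit pointers but no 32-bit memory-reference
       encodings, every load through a pointer held in a 64-bit GPR needs an
       explicit mask first (x86-64 and arm64 get this for free via their
       32-bit register forms). */
    static inline uint32_t load32_masked(uint64_t reg_holding_ptr) {
        uint64_t ea = reg_holding_ptr & 0xFFFFFFFFull;  /* clamp to a 32-bit EA */
        return *(const uint32_t *)(uintptr_t)ea;        /* then do the load     */
    }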

Because of that, the compiler hacks you're talking about are kind of the opposite of what you're describing. The hacks exist because in the upstream gcc PowerPC backend, having 32-bit pointers in hardware and having operations on 64-bit quantities shared the same feature flag, despite technically being separately enableable on actual hardware. It was just very rare to do so. So the goal of the hacks was to describe to the compiler that the target has 32-bit hardware pointers but can still issue instructions like ldd that operate on the full 64-bit GPRs.


I have some confidence that AMD's acquisition of ATI had a huge impact.

That allowed both a CPU and an advanced GPU to be on the same die.

They also wisely spun off GlobalFoundries, and were able to scale with TSMC.


You have to remember that the AMD and Intel of today are very different companies than they were 20-25 years ago. AMD split off its fab capabilities, acquired ATI, adopted TSMC as a fab, and developed a custom silicon business.

At that time AMD wasn't in the custom CPU business, AMD64 was a new unproven ISA, and x86 based CPUs of that time were notoriously hot for a console. These were also some of the reasons why Microsoft moved away from the Pentium III it had used in the original Xbox.

The PS3 was launched in 2006 but the hardware design was decided years earlier to provide a reference platform for the software.


Because consoles don't use off-the-shelf CPUs for many reasons. Neither Intel nor AMD of that time would even consider making a bespoke CPU for Sony or MS.

Even if they could use an off-the-shelf SKU it wouldn't be viable - neither one had a part that fit in the power envelope (not that it helped the Xbox...)


Consoles used off-the-shelf CPUs until the 6th generation. Even the Dreamcast and the first Xbox used off-the-shelf CPUs, it was only the PS2 and the GameCube that started the trend of using custom-made CPUs.

Not entirely accurate.

The PSX's CPU is semi-custom. The core is a reasonably stock R3000 CPU, but the MMU is slightly modified and they attached a custom GTE coprocessor.... I guess you can debate if attaching a co-processor counts as custom or not (but then the ps4/xbone/ps5/xbs use unmodified AMD jaguar/zen2 cores)

IMO, the N64's CPU counts as off-the-shelf... however the requirements of the N64 (especially cost requirements) might have slightly leaked into the design of the R4300i. But the N64's RSP is a custom CPU, a from scratch MIPS design that doesn't share DNA with anything else.

But the Dreamcast's CPU is actually the result of a joint venture between Hitachi and Sega. There are actually two variants of the SH4, the SH4 and SH4a. The Dreamcast uses the SH4a (despite half the documentation on the internet saying it uses the SH4), which adds a 4-way SIMD unit that's absolutely essential for processing vertices.

We don't know how much influence Sega's needs had over the whole SH4 design, but the SIMD unit is absolutely there for the Dreamcast; I'm pretty sure it's the first 4-way floating point SIMD on the market. The fact that both the SH4/SH4a were then sold to everyone else doesn't mean they were off the shelf.

Really, the original Xbox using an off-the-shelf CPU is an outlier (technically it's a custom SKU, but really it's just a binned die with half the cache disabled).


They would have started designing the systems in 2003, and one of the first choices is CPU partner.

Do you trust the new line of CPUs that just launched that year?


> actually the hardest part of a locally hosted voice assistant isn't the llm. it's making the tts tolerable to actually talk to every day.

I would argue that the hardest part is correctly recognizing that it's being addressed. 98% of my frustration with voice assistants is them not responding when spoken to. The other 2% is realizing I want them to stop talking.


> ...your scenario just does not happen.

It happens to us all of the time.

My partner is on a conference call, I hop in the car to go run an errand. Suddenly I'm on a conference call.

My partner is in the kitchen listening to a podcast, I hop in our other car and suddenly I'm listening to a podcast.

My partner is sitting in the car having a driveway moment, I arrive home with the other car and now I'm having her driveway moment.

My partner is on a conference call at her desk and picks up her phone to respond to a message and then you hear "shit shit shit, hold on a moment!" and then frantic typing and clicking.


Core evolved from the Banias (Centrino) CPU core, which was based on the P3, not the P4. Banias used the front-side bus from the P4 but not the cores.

Banias was hyper optimized for power, the mantra was to get done quickly and go to sleep to save power. Somewhere along the line someone said "hey what happens if we don't go to sleep?" and Core was born.


Renting camera equipment is fairly common and there are rental services that do overnight and next day.


Yes, just not every lens in every part of the world.


Who pays for the laptop when the school bully pours water on a kid's backpack? Or a kid has their bag in a seat and someone sits on it accidentally?

What happens when a kid's laptop is broken, regardless of the reason, and the family is unable to afford to repair it? Are we going to run into a similar situation that we had when kids couldn't pay for school lunch? Do teachers write "pay for a new laptop" in sharpie on the kid's arm for the parent?

A child's educational environment is a lot more chaotic, violent, and uncontrolled compared to an office environment. If you're issuing my child a $600 laptop and making me responsible for any damages, guess what's going to be kept at home in a secure location?

Making a child responsible for securing a laptop in an insecure environment isn't accountability, it's just a form of imprisonment.


What happens when a backpack full of paper books is destroyed? When I was a kid, we were charged between $50-100 for a book that was written in or destroyed. I bet these days it would be $200 each. Yeah we were running around with $500-600 of books in our backpacks all the time.


Back in the day it was also our (kids/parents) responsibility to provide book covers. We always used paper grocery bags, but you could buy some that were purpose built.


1) the bully, or the bully's insurance
2) whoever sat on it
Alternatively: AppleCare? :)


Does everyone pay for bully insurance or is it a tax on the bullied?


I think it's highly dependent on the product line. My previous two phones were the Moto One which had very minimal bloat or customization.

