This guy uses "security" as a value prop for SPARC and Solaris 3 times in this article. But there is no such value prop for either technology. Solaris is not more secure than FreeBSD or even many Linux distributions. And despite the mocking jab, the Itanium processor has a far more powerful and flexible security model than the SPARC does.
Murphy is an unabashed Solaris/Sparc partisan, but he usually makes good points (in past posts, not much data on this one). I appreciate his dissenting from the majority viewpoint - thundering herds are prone to occasionally going off cliffs, as recently demonstrated.
I also believe he's right in thinking Sun is getting a raw deal from 'the street'. Of course, it would help if Sun could sell its way out of a wet paper bag ...
It does seem that many of the problems come down to short-term views from the "investors" forcing certain courses of action. The evidence does not show that bankers are any better at running a computer company than they are at running a bank.
(disclosure of bias: my startup runs on Sun + Solaris)
I remember Theo de Raadt pointing out several design errors in x86 chips that could be used as attack vectors. I'm not sure how many of those the SPARCs or Itaniums have.
Different x86 microarchitectures have had errata (such as TLB consistency) that, if you're a kernel VM developer, can bite you in the ass. That's annoying, but it's not an intrinsic flaw in the x86 architecture, and it is not as if there are latent x86 vulnerabilities lying around waiting to be exploited.
On the other hand, at least two times now, Sun managed to screw up floating point in a way that compromised security. For instance:
Witness, for example, Intel’s “new” Nehalem line - it’s great right? By Wintel standards, it is - but if Sun hadn’t lent AMD some people to design x64 and then supported the company by building its own motherboards to demonstrate what x86 multi-core could do, you’d be paying HP’s prices for Itanium desktops - Itanium performance, and Itanium security.
I call [whatever the highest HN-agreeable word that substitutes for "bullshit"].
The Itanium was never positioned for the desktop, and even if it was, it never had the performance to get the job done. It's slow, hot, and is a compromised design for desktop use.
AMD did to Intel what Toyota, Honda and Nissan did to GM in the 70s-90s: surprise and delight at the bottom end, and move upmarket slowly while the incumbent sleeps.
Intel made the mistake that Joel Spolsky famously warns us about -- don't rebuild things you don't have to; AMD bet on the opposite color and won big time.
most of the market, and many among Sun’s own staff, have no idea how the SPARC products line up against x86; no appreciation for the differences between Solaris and Linux; and no interest in learning how unloved products from Sun Ray to the CMT cryptology processors can produce huge savings and/or productivity improvements for customers.
Honestly, the more you know about "how the SPARC products line up against x86" the less palatable it is.
Sun has too many irons in the fire. On one hand you have the few-core UltraSPARC processors that power the big iron. The servers are big, hot, and not clockspeed competitive with tier-1 x86 hardware costing 1/10th the price.
On the other end you have the T1/2/2+ many-core processors that are woefully anemic price:performers except possibly in trivially parallelized tasks.
My experience (as an engineer in a multi-billion dollar company building from the ground-up over the last couple years) is that there's no compelling reason whatsoever to go with SPARC hardware unless you have to. We've purchased a few products with a legacy on SPARC, and these systems have been treated like appliances by the vendors -- they don't want us using Zones or ZFS, and they don't support us analyzing the applications using DTrace.
Linux is doing to Sun what Sun did to IBM. The difference is IBM had the benefit of more entrenchment and vastly deeper reserves. They managed to evolve away from "pushing iron as an end" to "pushing iron as a means to an end" -- where that end is consulting dollars.
"Sun has too many irons in the fire. On one hand you have the few-core UltraSPARC processors that power the big iron. The servers are big, hot, and not clockspeed competitive with tier-1 x86 hardware costing 1/10th the price."
I find it interesting to see comparisons based on clock speed becoming common again. Different architectures behave differently, and SPARC performs differently (read: "better, most of the time") than x86 at the same clock. I thought people had learned that from the PowerPC Macs. I find this supremely ironic coming from a person nicknamed "iigs". Any half-competent programmer could make a 6502 run rings around a Z80 with four times its clock in the Apple II days.
"On the other end you have the T1/2/2+ many-core processors that are woefully anemic price:performers except possibly in trivially parallelized tasks."
Like... Oh! web-serving! And virtualization. And database workloads. You must be right: those T1/T2 have no useful purpose.
While Linux and x86 killed Sun on some workloads (it was Windows who killed the Unix workstation, BTW) being able to properly cast hardware to a given problem is a skill far too few people possess these days. Perhaps you should look more closely into your "legacy" Sun stuff and learn about the good parts.
Yeah, I knew when I wrote that I chose poorly. I mean perf per core: that is to say, tasks that require clock speed.
web serving, virtualization, database workloads
Virtualization: Virtualizing Solaris is a practical non-starter -- every workload I've seen falls into one of two categories: either full-system integrated applications (the appliance model that the majority of third-party application providers seem to insist on these days) or common internet workloads (DNS, SMTP, HTTP). The former are more or less uniformly disallowed by vendor best practices -- nobody wants to do it. The latter are arguably better served by the tried and true virtualization of the 1970s: preemptively multitasking multiple processes in a single host image.
I have been attempting to advocate Solaris virtualization in our organization and have failed. I can't make a case for it that has justified even one server purchase (or reclaim).
Web serving: $1000 PCs have been doing wire rate 100mbit static file serving and SSL for a decade or so. A lot of web workloads tend to coagulate around serializing resources (databases being the big one). In cases where that's not applicable, an arguably better solution is to cluster cheaper standalone machines. If there was a web workload that didn't apply well across multiple machines but could be run efficiently on multiple cores in the same image the T1/2/2+ would be a win. I don't know what that workload would actually be, however.
Database workloads: Maybe, and probably for certain loads, but a Google search for "T2000 oracle" doesn't look promising.
properly cast hardware
I'll see your "properly cast hardware" and raise you "do the right thing for the business". Esoteric hardware shopping is fun for engineers, but it's not sound decision-making for the stakeholders. Shame, really, as I quite prefer the IIgs's Apple Desktop Bus keyboard to the spongy thing on this ThinkPad.
I'm really kind of mystified by how DTrace has captured people's imaginations. How could something like DTrace justify an entire server platform? I have it on my Mac, I'm a professional reverser, and it's never the first thing I reach for. I'm serious, what's the huge win here?
You never, ever hear people say that Detours is a reason to switch to Win32. But Detours seems, if anything, more useful than DTrace.
DTrace is useful because you can run it without having to interrupt any processes on the machine, and it is safe (won't cause kernel panics) and lightweight (won't slow the machine down too much) enough to use on a production site.
DTrace lets you instrument one process, a collection of processes, or all processes at the syscall level. Want to know which processes are using disk I/O, and what the size of their writes is? You can do that. For instance, you might want to tune your RAID controller's stripe size to match your I/O-intensive application.
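As a minimal sketch (not from the original comment), a D script along these lines aggregates write sizes per process using DTrace's stock syscall provider:

```d
/* Histogram of write(2) sizes per process; arg2 is the byte count.
   The power-of-two buckets from quantize() make it easy to see what
   stripe size your I/O-heavy workload would actually benefit from. */
syscall::write:entry
{
    @sizes[execname] = quantize(arg2);
}
```

Run it for a while, hit Ctrl-C, and DTrace prints one histogram per process name.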
I'm pretty sure syscall-level instrumentation isn't the win with DTrace, since every modern Unix provides a safe, lightweight method for collecting that info. In fact, if you know how DTrace actually works, what Sun is doing is actually scarier than, say, truss.
I'm not saying there isn't a DTrace win; just that you haven't shown it.
DTrace's innovation is that it provides an easy-to-use interface that lets people get the results they want in a way that would otherwise require significant low-level expertise and engineering time. That people like it should come as no surprise to the entrepreneurial crowd here - it's just another twist on Make Something People Want.
One thing I particularly like about DTrace is that its awk-like language makes it easy to borrow snippets of useful code from existing DTrace scripts. When I needed the equivalent of Linux's Paco [1] tool on my OS X box, I found I could easily build it from the code of /usr/bin/dtruss [2] and /usr/bin/pathopens.d [3].
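Something in this spirit (a hedged sketch using the stock syscall provider, not the actual dtruss/pathopens.d code) is only a few lines of D:

```d
/* Print each path successfully opened, with the opening command --
   roughly what pathopens.d does, and the basis of a Paco-style
   "which files did this install touch" tool. */
syscall::open*:entry
{
    /* Stash the path pointer; copy it in at return, when it's valid. */
    self->path = arg0;
}

syscall::open*:return
/self->path && (int)arg0 >= 0/
{
    printf("%s opened %s\n", execname, copyinstr(self->path));
}

syscall::open*:return
{
    self->path = 0;
}
```

The thread-local `self->path` ties the entry and return probes together per thread; the second return clause just cleans up whether or not the open succeeded.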
DTrace is also apparently what powers Sun's Storage Analytics [4].
It's funny, because the awk-like D language is my least favorite aspect of the system. DTrace had an example to build on in this space: libpcap, which is the lingua franca of packet-level network programming on every platform at every performance level. If they had just taken their probes and promoted an API with Python bindings, I'd have been all over it.
While there are benefits to that approach, I'm glad they didn't do things that way. Putting performance issues aside, I think it's a lot easier to develop, test and maintain a small single-purpose language than a cross-platform library with a public API and a suite of bindings to an evolving language like Python.
Given the size of its team, DTrace would never have succeeded as a library with bindings to even one dynamic language like Python, because the support burden of doing that is enormous. Developing, maintaining and testing dynamic language bindings on multiple platforms has got to be one of the suckiest programming tasks in the industry. Just ask anyone who has ever had to support a PyQt or wxPython application on multiple platforms. There's a good reason why Python's only stable cross-platform GUI is the one whose underlying GUI library has been dead for a decade.
Maintaining cross-platform bindings to a popular dynamic language like Python, Ruby or Perl is like fighting a war on three fronts. You have to expose the basic features of the library in a useful way while the OS vendors are breaking things in their zeal to support new features and hardware, and while, at the same time, the dynamic-language developers are "improving" their languages in significant semantic ways.
In contrast, the DTrace developers and porters only need to worry about kernel internals - the D language is trivial to maintain, and nobody who matters in that community wants it to be anything other than what it is: a tool for admins to find out who or what to blame when something isn't working as it should. The people who love DTrace the most, and are willing to part with their cash for it, aren't programmers or hackers; they're DBAs and storage guys who get wet dreams about stability, uptime and security, and who don't tolerate willy-nilly changes to things that work for them. Worrying that D will evolve too quickly is like worrying that something like the gdb command set will get out of control.
"Want to know which processes are using disk I/O, and what the size of their writes is?"
You don't need DTrace for that, or even strace. The I/O accounting built into most operating systems will tell you. iotop exposes it in an easy-to-read form that doesn't require learning a new programming language.
Oddly enough, the last reason I heard for DTrace was intercepting printfs to show SQL calls. Kind of like using MySQL's debug mode, but, like the previous example, it requires learning a new programming language for no benefit.
It seems many DTrace advocates lack basic system administration skills.
DTrace (and SystemTap on Linux) is useful for tracing kernel-level issues if you're a kernel developer. Most people pay a vendor to do this.
I think the idea is that Sun is the technical steward for the project and it's worth supporting Sun (via Solaris) to continue to fund efforts to create tools of these types. I don't believe the idea is "I need a platform with great debugging so I pester my management to get me a Sun to keep around".
I don't know that I agree, but I believe that's the rationale.
Interesting and wrong-headed: the idea was both to optimize the architecture for C function calls and to allow the processor to "scale" (hence the name) by adding to the register file without breaking the ABI (which you'd have to do to, say, add an %EFX register to x86).
But: call-stack depth in modern programs pushed register performance up against L1 cache performance for the C stack, and SPARC's convoluted register spill/fill seems to have been a liability. More importantly, adding registers has not, over the last decade, been the primary mechanism by which processor speed is "scaled".
There are hardware people here that could articulate this more accurately and correctly than me; I've got some SPARC assembly game, but I'm not a hardware design guy. But when I say the ISA has been discredited, I'm talking about things like register windows.