To (hopefully) install a shim that lets their help desk support their customers more effectively. If I were trying to support millions of 'ordinary users' and had them calling me every time anything didn't work the way they expected, I'd want something on their desktop that let me help them (a gotomypc-type agent, for example).
How is this different from Adobe Reader, where the ability to execute code within a document-reading application has resulted in worldwide exploits of operating systems?
If my document reader can execute any code in any language, then any document that I read has the potential to execute malicious code on my computer, and I now have an exploit vector that I need to consider when downloading documents & opening e-mail attachments.
I understand that the code can be sandboxed, but before I implicitly trust the sandboxing technology, I'd have to see an example of an unexploitable sandbox. I don't know of any - but that doesn't mean they don't exist.
Right, but "the best" is a very misleading term for anyone not in the know. It too has failed to do the job. But, of course, no code is perfect. Just keep that in mind.
I find it rather silly to be worried about security given that Apple's one of the largest browser vendors in the world, directly or indirectly via WebKit.
More accurately: any increase in capabilities from non-immediately-apparent sources IS cause for concern. But that has to be weighed against the exhibited competence of the vendor. I find it unlikely that we-vet-everything our-brand-name-is-safe-computing-experiences Apple wouldn't have considered security in this move.
Agreed - I certainly don't see anything that I'd call 'innovation', and there is nothing on any open source desktop that would send me off to my friends' and relatives' houses to switch them from OS X or Windows.
How does a retailer using Square manage PCI compliance?
Are retailers using Square automatically non-compliant? My understanding is that the PCI Council has not approved mobile applications under PA-DSS, and merchants who accept cards using software that is not PA-DSS approved are automatically non-compliant with PCI-DSS.
"Why are you doing this? What will you achieve by it?"
Changing the port does not improve security. It does, however:
- dramatically reduce the noise associated with the fleet of password-guessing bots that hit open SSH servers daily.
- make it reasonable to assume that a password-guessing attempt is specifically targeting your server, and is therefore worth consideration for escalation and follow-up.
Signal-to-noise ratio. Less noise makes it possible to discover the signal.
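As a sketch, the port change being discussed is a one-line sshd_config edit (assuming OpenSSH; 2222 is an arbitrary example port, not a recommendation):

```
# /etc/ssh/sshd_config -- move sshd off port 22 to cut bot noise
Port 2222
```

Clients then connect with `ssh -p 2222 host`; remember to allow the new port through the firewall before restarting sshd, or you can lock yourself out.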
> - dramatically reduce the noise associated with the fleet of password-guessing bots that hit open SSH servers daily.
But if you're already using fail2ban or denyhosts (as suggested in the trifecta) then you won't get that much noise anyway, and if you're only using public key auth then the noise from password guessing bots doesn't matter anyway.
> - make it reasonable to assume that a password-guessing attempt is specifically targeting your server, and is therefore worth consideration for escalation and follow-up.
Unfortunately a failed authentication attempt, regardless of port, isn't enough to conclude that it's a targeted attack. Plenty of bots port-scan common ports before running their tools, to make sure they're attacking the right service. In fact, some bots can do full port scans of hosts (although this is rare, since it's quicker to scan only for the attacks they have built in, so they get more attack attempts in less time) - usually this is done to build a database of services, so that they can be exploited later when a new vulnerability comes out.
Regardless, as you say, it doesn't improve security, so there's no reason for it to be in any security-related trifecta.
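For reference, the key-only setup mentioned above is also just a few sshd_config directives (assuming OpenSSH; these are standard option names):

```
# /etc/ssh/sshd_config -- keys only, so password-guessing bots are harmless
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```

With this in place the bot noise is cosmetic: every password guess fails by policy, and fail2ban/denyhosts only matter for trimming logs.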
One factor likely at play here is the low cost of licensing MS products under the enterprise/volume licensing agreements. Where I'm at it's costing well under $100/person/year to license Windows, Office, Exchange, SharePoint and a few other odds & ends. IIRC, for just under $50/person/year, we get Windows & the basic Office suite, add another $10/person/year to get client access licenses for Exchange & SharePoint; add another $10 for Visio, Project...
I've never had a good sense of why people are so desperate for Linux to take Windows' desktop market share. Linux already clobbered Windows on the server, and probably will in perpetuity because of its momentum, price, and developer support. It's a good place for Linux. No one can legitimately call open source unviable at this point.
I can understand it from an ideological perspective, but many of the people I see talking about it aren't ideologues.
Mainly because as Sys admins/Users we don't want to be forced into using Windows. It's selfish but seriously for those who know Linux, they are generally much more productive on it. But this often is not the case for people in more administrative type jobs. So the question becomes should it be easier to administrate and use for us (sys admins and power users) or for the average user in the office.
I'm lucky to work in a place where Windows doesn't exist, except for one of our project managers who likes Windows. No one complains about it, and she is capable of supporting herself. (As a research lab, our tech support team is really small; desktop users usually have to be self-sufficient.)
> I've never had a good sense of why people are so desperate for Linux to take Windows' desktop market share.
Why?
Linux is a platform, and therefore depends on network effects: The more people use it as a desktop, the more software (apps, games) will be available. The more software is available, the more people will use Linux as a desktop.
Thus, market share seems to matter -- especially for those who aren't ideologues.
Unfortunately, the theory only holds if there's money to be made by developing desktop applications and games. This is still more complicated than it should be.
I'm less concerned about Linux taking Windows market share, but I do not want Windows to be considered the only viable option, as so many IT departments do.
Even just a 10% drop in market share, with Macs and Linux splitting the gains, would be awfully nice.
I call bullshit on IDC's figures. If you take public-facing websites -- something that can be checked -- MS has a 20% market share[1]. Some of these are multiple sites on one server, and some are multiple servers on one site, which probably cancel each other out so c.286M sites is roughly the same number of servers.
By contrast, IDC's figures are for a total of less than 2M servers sold per quarter. If each server lasts for 5 years, that means there are a total of 40M in use worldwide, which is way too low.
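The estimate above is simple arithmetic; a quick sketch using the comment's own assumptions (2M shipments/quarter, 5-year life -- not verified figures):

```python
# Rough sanity check of the installed-base estimate from IDC shipment numbers.
shipments_per_quarter = 2_000_000   # "less than 2M servers sold per quarter"
service_life_quarters = 5 * 4       # assume each server lasts 5 years

servers_in_use = shipments_per_quarter * service_life_quarters
print(servers_in_use)  # 40000000 -- the 40M ceiling the comment calls too low
```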
I don't see Linux share on that? I ask because we use Apache on WS08. I'm sure we're not the common case, but it would be useful to know if Apache==99% Linux or 75% Linux.
Taking a double digit percentage of a proprietary juggernaut's market with something free is what I'd count as clobbering.
That report also seems to cover shipments, as in prefab servers with the OS already installed. Don't most larger operations use homemade distributions and configurations? That's on top of any number of spawned instances of Linux in a virtualized environment. I wonder how many Linux instances are running on EC2 at any given time.
I don't see anything in the IDC report about overall OS share. Just shipments. A computerworld blog referencing a zdnet blog (which also cites the IDC report) for marketshare numbers seems fishy, as though they were fudging facts to support a headline.
Well, one consideration mentioned by Linux adherents in the 1990s was that if Windows is all the people who make decisions about enterprise computing know, they will tend to block the use of Linux even when Linux is appropriate.
Most of the highest traffic sites on the web use linux/unix with the exception of MS owned sites and a scattering of other sites. Smaller sites tend to gravitate toward linux as a matter of course due to lower operating costs and wider availability of hosting.
Oh, there was no mention of this statement being just web servers.
I'd think that across the server market as a whole that Windows had won that one.
Every organization I come across seems to have a small percentage of Linux servers, but Windows dominates. I'd be pleasantly surprised if the facts had it the other way around.
> It's hard to carry the FOSS banner at that price.
Are they also counting the extra downtime, anti-malware packages and so on?
Because a Linux desktop usually just works.
And, BTW, for SharePoint to be as cost-effective as the worst open-source alternative (as in "groupware for masochists"), Microsoft would have to pay you a truckload of money.
With Windows 7, there should be no more (or less) downtime than any other operating system.
"anti-malware packages and so on?"
Anti-virus for a large enterprise can be as cheap as $1/desktop/year.
Automated enterprise patch management is expensive, but when added to MS licensing, you still should be under $100/person/year.
Keep in mind that if you put an OS X or Linux desktop on my enterprise network, I will make you install some form of enterprise-grade automated patch management on your desktop. I.e., not only do I need you to have automated patch management, I need to know that you are patched, when you last patched, what you patched, etc.; which implies an enterprise-class solution.
"for SharePoint to be as cost-effective"
I'm curious, do you have any reasonably objective data to back up that statement?
I ran enterprise class document management and collaboration with FOSS tools. At $10/person/year, SharePoint is a steal.
So here's a question. Suppose 1,000 consumers get together and petition MSFT to buy copies of the Windows du jour.
Will MSFT give them the same price as a 1,000-user company?
And note that this story is about government. If governments are being suckered into cut-rate deals whose discounts disadvantage small businesses, education, and consumers, then there's a failure somewhere.
And how exactly does this relate to the way they deploy their code? As far as I can tell, they actually review code before marking it ready for deploy. Those kinds of changes would be an issue with "usual" large-batch deploys as well.
I've also seen many devs look at a broken release and instantly realize their mistake: a forgotten production config, a hard-coded variable, or an empty cache. If code is in fact reviewed before going to production, the risk is significantly lower.