
App whitelisting is rarely rolled out, but it's just such a definitive win these days.

a) You just kill ~85% of malware (rough estimate, probably technically higher, but I'm basing that on stats around interpreter-based and LOLBAS malware, i.e. living-off-the-land binaries). Anything that isn't targeted is probably dead in the water.

b) You know exactly what's running on everyone's computers, more or less, so you have a way easier time baselining and building monitoring.

It's honestly easy-mode for security. But it's hard to roll out to a company a decade after it's been running, so you really need to do it early on. And most orgs don't care about security until after a breach, at which point they're too large and slow to get something like that done.



It's easy-mode for security because it offloads the cost of security to employees. Done right, many hands make light work. Done wrong, it's the very picture of "if nobody can use it, hackers can't either."

I'm interested in learning how to do this right, because I've only ever seen it done wrong. How do you streamline the process for getting programs approved? How do you accommodate developers who need to generate and run code?


> It's easy-mode for security because it offloads the cost of security to employees.

Yeah, absolutely. But I consider this to be, when done right, a good thing. A single security team is going to drown trying to scale your company's security asymmetrically with your company's growth. For every 1000 developers you might have 5-50 security engineers depending on how serious the company is about it. Spreading out security work across the company scales extremely well.

Of course, you want to minimize burden too.

There are a number of ways you can go. Personally, at my company, we're so small that it's trivial. We rarely onboard new employees and pretty much all approvals are handled within a few minutes of the employee getting their laptop.

As you scale it may make more sense to crowdsource this. 'Upvote' is one such tool. https://github.com/google/upvote

Upvote-like systems basically give you reputation based approvals. If you can convince a number of your coworkers that the app is worth installing, you can install it. I've seen a number of systems built this way that combine crowdsourced approvals with other forms of reputation.
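To make the reputation idea concrete, here's a toy sketch of that kind of vote-threshold approval (hypothetical names and thresholds, not the actual google/upvote logic): an app becomes runnable once enough coworkers vouch for it, with a higher bar for unsigned binaries.

```python
# Hypothetical Upvote-style approval flow: apps get allowed once enough
# coworkers vote for them. Thresholds here are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class AppRequest:
    name: str
    signed: bool                  # is the binary code-signed?
    voters: set = field(default_factory=set)

    def vote(self, employee: str) -> None:
        self.voters.add(employee)

    def approved(self, threshold: int = 3, unsigned_multiplier: int = 2) -> bool:
        # Unsigned binaries need more votes before they're trusted.
        needed = threshold if self.signed else threshold * unsigned_multiplier
        return len(self.voters) >= needed

req = AppRequest("dbeaver", signed=True)
for dev in ["alice", "bob", "carol"]:
    req.vote(dev)
print(req.approved())  # True: three votes meets the signed-app threshold
```

Security only needs to step in for apps that never clear the bar, or that get flagged during review.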

At my company we use Chromebooks. Apps/Extensions are only allowed once approved. The Linux environment is where development happens.

This is nice because there's a strong separation between environments. That said, we don't do any sort of application whitelisting in the Linux environment today, due to a lack of tooling support.

If we did, what I'd like to do is just have simple rules like "if the binary was created by gcc or another approved compiler, allow it to execute." Santa allows for process-tree-based rules like this.
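The compiler rule above can be modeled in a few lines. This is a toy model of the transitive-allowlisting idea (not Santa's actual implementation, and the paths/tables are invented): a binary may run if it's explicitly allowlisted, or if it was written by a process that's an approved compiler.

```python
# Toy model of compiler-based transitive allowlisting. All paths and
# tables are hypothetical; a real agent would get write/exec events
# from the kernel rather than a static dict.
ALLOWED_COMPILERS = {"/usr/bin/gcc", "/usr/bin/clang"}
ALLOWLIST = {"/usr/bin/python3"}

# binary path -> path of the process that wrote it (from write events)
produced_by = {
    "/home/dev/a.out": "/usr/bin/gcc",
    "/tmp/dropper": "/usr/bin/curl",
}

def may_execute(path: str) -> bool:
    if path in ALLOWLIST:
        return True
    writer = produced_by.get(path)
    return writer in ALLOWED_COMPILERS

print(may_execute("/home/dev/a.out"))  # True: built locally by gcc
print(may_execute("/tmp/dropper"))     # False: written by curl, blocked
```

The nice property is that developer-built binaries run without any approval round-trip, while a downloaded payload still gets blocked.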

Another option is developer VMs, like in EC2. These can be nice for devs because they're often much more powerful than a laptop anyway. It requires a bit of tooling to work smoothly with local IDEs and whatnot, though.


We're trying to do much of what you mention here. It ain't easy. Here are some of my thoughts, as one of the people in Development trying to achieve it.

The app whitelist impairs my efficiency, and honestly takes a fair amount of fun out of work. I can no longer use many of the tools that help me, and instead am confined into the corporate-approved structure. This is both a productivity issue and a job satisfaction issue.

Trying to separate our development workstations, and our dev environment overall, from the rest of the sordid mess is extraordinarily difficult. We're trying to do something similar to what you recommend, with developer VMs running on our internal ESX cluster. However, that technology conflicts with Microsoft's Hyper-V (we're an MS shop). We're trying to get to Docker/k8s, but we can't run Docker inside a VM that runs on ESX. So we're trying to offload the actual execution of the system, even for local testing, into DevSpaces. But there's a lot to figure out, and DevSpaces is a young product. Further, private AKS environments are a new thing in Azure, and a couple of times now we've run into roadblocks from MS's own growing pains.

Your "upvote" system seems to have merit, but with, I dunno, 50 or 60 devs, spread across several teams focused on different tech, it doesn't seem like it would scale well. Coming up with the right mix of apps, and convincing our Risk team, is very difficult. Especially when so much of their pushback seems confusing (like, DBeaver was denied as a FOSS product, but approved when we paid for the Enterprise version).


> I can no longer use many of the tools that help me, and instead am confined into the corporate-approved structure. This is both a productivity issue and a job satisfaction issue.

Out of curiosity, like what?

> Trying to separate

Yeah, like I said, starting early is going to make things much easier. If you're trying to get to this place later on, it's just wayyyyyy harder and most orgs can't get there. This is often the case with security - if you build your code, infra, policy, etc. with security in mind from day 1, it's orders of magnitude simpler than doing it even just a few years later.

> Coming up with the right mix of apps, and convincing our Risk team, is very difficult.

Oh yeah, a risk team can be the real killer. That's why upvote is nice - security only gets involved if something is flagged. But if your risk team isn't willing to work with you that's a problem, and it sounds like yours isn't doing the work consistently.

To me, approval should be easy. Even if a malicious app is approved, that's often still a huge win - the attacker can't use many of the tools and techniques they're used to. Obviously you want to avoid that compromise too, but it isn't the entire goal.

We whitelist some vendors entirely. Our app whitelist request form is:

* What risks are there with this extension/app?

* Is there another approved app that can do this, and if so, why do we need this one?

* How will this app help you? What is it for?

That first question is really important because people usually have a decent understanding of what the app should/shouldn't be doing, or whether the risk should be trivial. It's all about spreading the assessment out.
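A minimal sketch of how answers to that form could be triaged (all names and routing rules here are hypothetical, just to show the shape): fully-whitelisted vendors skip review, low-risk requests go to a lightweight approver, and anything with a flagged risk gets routed to security.

```python
# Hypothetical triage of the three-question request form. Vendor list
# and routing outcomes are invented for illustration.
TRUSTED_VENDORS = {"jetbrains", "mozilla"}

def triage(vendor: str, risks: str, alternative: str, purpose: str) -> str:
    if vendor.lower() in TRUSTED_VENDORS:
        return "auto-approve"      # vendor is whitelisted entirely
    if "none" in risks.lower() and alternative.strip() == "":
        return "manager-approve"   # trivial risk, no approved alternative
    return "security-review"       # flagged risks get a real look

print(triage("jetbrains", "none", "", "IDE"))          # auto-approve
print(triage("acme", "reads clipboard", "", "notes"))  # security-review
```

The point is that a security engineer only ever sees the last bucket, which is what keeps the process scaling with headcount.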



