Hacker News

If you need remote access, given sshd-in-container versus sshd-in-host as an access path, the former is clearly more secure by design, even if the containment eventually fails.

what is your alternative means of sandboxing the apps

Well, I don't have a finished solution, but I could already write a fairly hefty book on the approaches I've evaluated. Basically it combines:

- the normal kernel tools: aggressive capability restrictions, connectivity restrictions, read-only root, device restrictions, mount restrictions (no /sys, for example), and per-subsystem resource limits;

- a formal release process: testing, versioning, multi-party signoff;

- additional layers of protection: host-level firewalling, VLAN segregation, network-infrastructure firewalling, IO bandwidth limitation, diskless nodes, a unique filesystem per guest, unique UIDs/GIDs per guest, aggressive service-responsiveness monitoring with STONITH and failover, and mandating the use of hardened toolchains;

- kernel hardening and security-toolkit features: for example, automated syscall-whitelist policy development via learning modes plus test-suite execution.
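As a rough sketch of the kernel-tools layer above, here is a hypothetical systemd service fragment; the unit name, paths, and limit values are made up for illustration, and the directives shown map onto the restrictions listed (capabilities, read-only root, no /sys, connectivity, syscall whitelist, resource limits):

```ini
# guest-app.service -- illustrative only; names and limits are assumptions
[Service]
User=guest1                         # unique UID/GID per guest
CapabilityBoundingSet=              # drop all capabilities
NoNewPrivileges=yes
ProtectSystem=strict                # read-only root filesystem
PrivateDevices=yes                  # device restrictions
InaccessiblePaths=/sys              # mount restrictions: no /sys
IPAddressDeny=any                   # connectivity restrictions...
IPAddressAllow=10.0.0.0/24          # ...except the (hypothetical) service VLAN
SystemCallFilter=@system-service    # syscall whitelist
MemoryMax=512M                      # subsystem-specific resource limits
TasksMax=64
IOReadBandwidthMax=/srv 10M         # IO bandwidth limitation
```

A learning-mode workflow would typically start from a permissive SystemCallFilter, log what the test suite actually exercises, and tighten from there.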

Fail-safe? No. Better than most? Probably. Security is a process, after all...

That doesn't change if it's run on the host

It does, because you're now contending for host resources, and your tooling has to comprehend an abstract model of hosts and guests whose identities live remotely, i.e. normal tools, which assume one node per address, won't work out of the box.

neither a "free" solution in terms of complexity

You have highlighted a tiny difference in the process space, which is basically free. But in doing so, you have ignored the other aspects. A single read-only bind mount per guest is very cheap in complexity terms.
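For reference, the per-guest read-only bind mount mentioned above can be expressed as a single fstab line; the paths here are made up for illustration:

```
# /etc/fstab -- illustrative paths
/srv/images/guest1  /jails/guest1/root  none  bind,ro  0 0
```

One caveat: older util-linux versions silently ignored the ro flag on bind mounts, requiring a separate `mount -o remount,bind,ro` step after the initial bind; recent versions handle it in one go.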


