Kids, this is story of How I Met... my VPS hacked (corrspt.com)
192 points by lelf on Jan 20, 2014 | hide | past | favorite | 86 comments


I'd think twice before declaring victory; I don't see any reason to believe that the server is now "clean". If this had happened to me, I'd spin up a new VPS, configure it appropriately, then install my app and migrate any needed data. What I wouldn't do is continue running on an instance that had been exploited and assume I'd successfully cleaned it up.


Agreed, particularly when he's already admitted this was the second time.

However, I still praise him for not only taking the time to investigate how the breach happened, but also blogging about it so we can all benefit from the experience.


Oh yeah, I agree. I guess I failed to address that in my original comment, but the analysis is actually cool and it's a valuable submission.


The author answered that criticism in the comments to the Reddit post.

> And you're right, I don't know and it's probable that they left something else. I'll probably need to nuke the server and rebuild, but I wanted to have a better idea of how I could improve my situation in the future. And I believe that exposing what I did and how I did, the community would be of a assistance.

http://www.reddit.com/r/programming/comments/1vo7zv/kids_thi...


I praise his curiosity for actually digging into the problem and trying to understand how his server got hacked. It makes a much better post than: I nuked my installation and reinstalled everything from scratch, crossing my fingers that this will never happen again.


And the real issue was (is?) the fact that the web application process had write access to a location it was able to execute code from.

It has been well known for decades (at least since PHP got popular, which was in the '90s!) that such locations should only be writeable by the user(s) who maintain the code and not by anyone else. Sometimes such a setup can't be done on dirt-cheap FTP-only shared hosting services (shrugs), but it certainly can be on VPSes.

Ignorance is bliss.


Yep, this can be a lesson to developers who read this. If your web application must write to files, keep them outside of the DocumentRoot!

edit: I know a lot of HN'ers are fans of Ubuntu but, FWIW, SELinux (default on RHEL and derivatives) would have prevented this.


Outside of any executable location, not only the DocumentRoot. The latter is just an easier and more popular target.

I mean, all writeable directories (if any) must not only be non-executable from the server's perspective (e.g. Apache with mod_php), but should also be on devices mounted with noexec, as is often the case with /tmp.
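For the Apache + mod_php case, a sketch of switching execution off for a writable directory (the path and the exact directive set are examples; adjust for your layout):

```apache
# Treat everything under the uploads dir as inert data, never as code
<Directory "/var/www/app/uploads">
    php_admin_flag engine off    # mod_php: don't interpret .php here
    RemoveHandler .php .phtml    # drop any handler mappings for these suffixes
    Options -ExecCGI             # and no CGI execution either
</Directory>
```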


I'm not sure that'll help like you think it will.

I can't verify it at the moment, but create a /tmp/phpinfo.php file with "<?php phpinfo(); ?>" in it and then run "php /tmp/phpinfo.php". Regardless of whether /tmp is mounted noexec I think you'll find that it executes (because the php binary is under /usr which almost certainly isn't mounted noexec).
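The same effect shows up with any interpreter; a quick sketch (Python and sh standing in for php here, and the noexec-blocking behaviour in the comment only applies if /tmp really is mounted noexec):

```python
import os
import subprocess
import tempfile

# Write a "script" into a world-writable directory such as /tmp.
fd, path = tempfile.mkstemp(suffix=".sh", dir="/tmp")
os.write(fd, b"echo hello\n")
os.close(fd)
os.chmod(path, 0o755)

# Direct execution goes through execve(), which noexec blocks: on a noexec
# /tmp, subprocess.run([path]) would fail with EACCES.  But handing the file
# to an interpreter merely open()s it as data, so this works regardless:
out = subprocess.run(["sh", path], capture_output=True, text=True).stdout
print(out.strip())   # hello
os.unlink(path)
```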


Whoops, seems that you're right, I was wrong about interpreters taking noexec into account - either I misremembered something or things changed, but the fact is they don't. Sorry.

Anyway, it's also better to have noexec than not. :)

At least, it'll help if there's no complete shell access but only ability to run some binary by name, without args, or modify environment variables when running some executable.


BTW, is there any reason not to hook something like ClamAV into your upload function and scan files before they are saved?


Obligatory xkcd: http://xkcd.com/463/

Why add more moving parts when they don't do anything but make more work? Scanning is a helpful idea, but not AV scanning. Regular vulnerability scanning can assess the platform security. At the very least, it can warn about potential security holes. It might also be plagued with false positives, causing more work for no added benefit. Safely running services on the internet is hard.


> Why add more moving parts when they don't do anything but make more work?

Because in this case we're talking about intentionally accepting files from users, to either integrate into the system or offer to other users. Why would you not at least scan those files and reject any that fail, instead of blindly accepting them just because you wanted to enable file uploads?

Vulnerability scanning isn't going to tell you much when you want to accept files from users.


CentOS/RHEL guy here, and I have to say, SELinux FTW. So many times I've been frustrated and annoyed when people's first reaction is 'disable it' when they hit some problem (almost always due to using stuff in a wrong or nonstandard way)* instead of spending 5 min to get it working and having far more security.

* (That, and custom programs that use perversely wrong paths, e.g. web content in /home, logs in the application directory, etc.)


Ubuntu has AppArmor, which could also do the task. The problem is that no one actually uses SELinux or AppArmor.


For SELinux on RHEL, you have to take explicit action to disable it. It is likely that admins who haven't added SELinux to their skillset only learn how to turn it off, rather than how to set permissions correctly. However, that is almost as bad as running chmod 777 on everything (which I've seen plenty of people do). The minimum that anyone admining a RHEL-derived system should learn is the chcon (change security context) command, plus the tools for diagnosing SELinux issues.

If you must, start off with setting SELinux to "Permissive" instead of disabling it completely. Then after a few days of running, go through your audit logs, fix any of the errors that come up, then set it back to Enforcing.
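Roughly, that workflow with the standard RHEL/CentOS tooling (the uploads path is an example, not from the article):

```shell
setenforce 0                      # Permissive: denials are logged, not blocked
ausearch -m avc -ts today         # review what would have been denied
# Fix labels instead of disabling, e.g. for an app's writable data dir:
semanage fcontext -a -t httpd_sys_rw_content_t '/srv/app/uploads(/.*)?'
restorecon -Rv /srv/app/uploads
setenforce 1                      # back to Enforcing
```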


>If you must, start off with setting SELinux to "Permissive" instead of disabling it completely. Then after a few days of running, go through your audit logs, fix any of the errors that come up, then set it back to Enforcing.

This is the best bet for something that's in production already, but ideally, you want to have SELinux set to enforcing in testing environments and create the policies there in the first place.


I totally agree with you. Security is the no. 1 concern before any public launch.

The issues you mentioned are very basic ones that should be avoided in the first place. A VPS should be well maintained by the owner himself, or with professional help.

For a better architecture, a web server should be separated from the app server and database server, because the HTTP port is always open on the firewall, while the rest can only be accessed locally.

I have a blog post showing step by step how to configure an application development environment on Windows and a deployment environment on Linux. Maybe it's helpful.

http://bingobo.info/blog/contents/how-to-configure-applicati...


>web application process had write access to a location it was able to execute code from

Like many CMSes seem intent on doing?


I agree that this would be the best option, but what is the point of remote deployment APIs then? E.g. my CI deploys WAR files to a Tomcat instance using Tomcat's remote deployment API, so Tomcat must be able to write to its webapps directory. Would you suggest always deploying WARs via scp and restarting the Tomcat instance instead?


Sorry, I don't know anything about Tomcat. Maybe you could separate its deployment API and run it as a separate worker process that has elevated privileges? I really have no idea, sorry.

Guess there are always exceptional cases to anything.


Ugh. I totally thought that someone met his/her spouse after his/her VPS was hacked.

Now I'm slightly disappointed. Am I the only one? :\


It's a very odd title.


I'd go one step further: it's a crappy title.


I'd guess something may have got lost in (mental) translation: in Portuguese, the same verb may be used to mean meeting people and finding things.


I was really hoping for this as well. There was some good cyberpunky storyline to be had I thought.


The title reminded me a little of this.

https://www.youtube.com/watch?v=YDW7kobM6Ik


Me too, and everybody should watch that talk, the hackery build up is quite something.


Long story short, he was running an instance of JBoss with a vulnerability that allowed an attacker to execute commands as the JBoss user, who ended up Bitcoin mining on his VPS. Isn't JBoss one of these bloated “enterprisey” JVM container frameworks? I would think something like Play/Vert.x would suit a small VPS better.


Well, his choice of runtime was well over 5 years old (JBoss 4), and apparently not being kept up to date. I bet there are plenty of frameworks with security holes when not updated for five years. It also sounds like he may have been exposing the entire app server via his frontend proxy.

Both mistakes are amazingly easy to fall into, unfortunately.


JBoss is the least bloated of the container frameworks, since it allows you to rip out all of the bits of the J2EE environment you aren't actually using (profile thinning).

Using software released in 2004 without patching up vulnerabilities is probably a more significant problem.


Back when it was J2EE, development was bloated with interfaces, implementations and XML files you had to create for each EJB. JavaEE is POJO based and is actually very light-weight. JBoss itself is a micro-kernel with a bunch of services plugged in, so you can actually configure a very small server instance that is still compliant with the API specifications.

I was ready to give up on JavaEE around version 1.4, but it's so much fun to program JavaEE now. A lot more like TurboGears or RoR (lots of meta-programming and default "scaffolding").


Choice of framework is not that important. The real problem was (and, judging from the article it's still not fixed) that his server has write access to directories it can execute code from.

I mean, this is a classic example of badly configured filesystem permissions.


This is one of the things that scares me about deploying PHP apps. I'm being asked to take over hosting a couple of Joomla & Wordpress sites at work, and find it terrifying that they both ask for permission to install php scripts. I much prefer having a clear separation, but it seems that that isn't really an option.


Yes, PHP has weaker security but fast implementation, because it allows direct database access from the front-end pages. There is a lot of discussion about it. I always disable PHP on the HTTPD to avoid potential issues.


Play is one thing, Vert.x is another.

You deploy Play apps inside an application container (like JBoss, Tomcat, Jetty, etc.).

Vert.x is based on Netty and thus does not run in an application container.

I'd say that Vert.x or Netty-based apps are safer from this point of view, but there may be other kinds of vulnerabilities in them.

Also, it's important to note that the owner of the VPS instance didn't take proper precautions: "I overlooked the deployment of the web console and HTTP Invoker and I paid for that". If you go by the book and do everything right, JBoss and other application containers are safe to use.


No, JBoss is an application server. If you used something like Play, you could deploy it to JBoss (though Play doesn't require it).


I feel an often overlooked prevention for these semi-random attacks is a change in the SSH default port. I've posted this idea elsewhere and people seem convinced that this is pointless because an attacker can just port-scan your machine. While true, this generally only happens when you are being specifically targeted by an attacker. More random attacks like the one mentioned here are likely just people scanning the entire IPv4 space looking for open port 22 and then testing known exploits. Since I don't run a super-popular site, I'm more likely to be the victim of the second kind of attack. I used to have bad logins hitting my box on a regular basis; after switching ports for SSH, the login attempts went to zero.


Just disable password authentication, problem solved. Changing the SSH port does not add security against people who are determined to break into your system, but as you said, it helps to keep the noise down so you're more likely to notice suspicious activity early.
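For reference, the relevant sshd_config lines (a sketch; adjust the root-login policy to taste, and reload sshd after editing):

```
# /etc/ssh/sshd_config -- key-only logins
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin without-password
```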


The article makes it sound like the SSH brute-forcing requests were part of the attack, but it's unlikely - they are very common. My servers get several of these attacks a day.

I don't like changing my default SSH port, but I don't like people trying to brute-force my SSH passwords either. Instead I use iptables to drop SSH connections from any IP address that attempts to connect overly frequently. This is highly efficient (compared to scripts like fail2ban) and very simple to implement:

  # SSH daemon - tcp Port 22 - drop any more than 3 new connections from one address every 5 mins
  $IPTABLES -I INPUT -p tcp -i eth+ --dport 22 -m state --state NEW -m recent --set
  $IPTABLES -I INPUT -p tcp -i eth+ --dport 22 -m state --state NEW -m recent --update --seconds 300 --hitcount 3 -j DROP
  $IPTABLES -A INPUT -p tcp -i eth+ --dport 22 -j ACCEPT
Enjoy!


Congrats, you have just opened yourself to a DoS for no gain whatsoever. Mind telling me the host and where you usually connect from?


You realise this only blocks connections from the abusive address, not all connections? If so, please enlighten me as to how this enables a DoS.


There is no such thing as an "abusive address". There are only "abusive attackers", but what you are blocking are addresses, not attackers. Blocking attackers would require authenticating them, which is impossible, because attackers usually won't cooperate in authenticating themselves. That is why we usually authenticate legitimate users instead - which is what the sshd would do if you just let it do its job.

What you do instead is offer a service which anyone, without any authentication, can use to block any address they like from accessing your ssh server: just send three packets with the address you wish to block in the source address field, with different source ports so as to make netfilter consider them NEW flows. Functionality that allows anyone to reduce the availability of a service, especially when it takes as little effort as three packets, is what is commonly called a Denial of Service vulnerability.


Thanks for the explanation - I see your point, and it would be wise to consider this possibility when looking at this technique.

For my use case though, this reduces load on my server (and prevents clogging my auth log files) by stopping incessant password brute-forcing attempts. I must admit to quickly adding an over-riding ALLOW for the handful of IP addresses that should have access, though!


Because source IPs really aren't easy to spoof, right? </s>


I don't think changing your SSH port will prevent a web server exploit (which is what the article is talking about).


Well, these hits on your ssh port target only the most low-hanging of the low-hanging fruit - i.e. common username/password combinations. Nobody's brute-forcing anything; they're just trying some common user/pass pairs at random. Disable remote login by password in favour of key authentication and you can sleep soundly while listening to the soothing hum of failed login attempts crashing against your airtight SSH configuration.

In the long run, I've found that years pass and connections configs get lost and you forget which fancy port you used for your SSH connection on that server. Maybe YOU have an ironclad convention, but your co-worker had another one, and you can't remember what port he used. And he's left the company or died or joined a cult.

Kids, leave your SSH ports alone. A config is just a config. But keys are forever.


Eh, it seems just as likely that you'd lose your keys if you lose your config. At least, I keep both in ~/.ssh.


You can just let nmap run over it if you don't know the port anymore...


I stick with iptables IP whitelists. We're talking /31 or /32 whitelists. I really don't need SSH access 24/7, so it's fine having just a couple chairs permitted. I don't need to SSH from the airport.

That's naturally not the only layer of security, but I figure it's a nicer option than non-default port.


I have four VPSs, mostly on different hosting companies. 99% of ssh attacks come from China, the rest are eastern Europe (mainly Romania or Ukraine) or Brazil.

The Chinese attacks use a group of about 15 IP addresses, then, every so often, they all change the addresses to new ones at once. This has just happened, last week, in fact. So now I have dozens of attackers all coming from a group of about 15 IP addresses, which are different to the 15 or so IP addresses they used a couple of weeks ago. (No kidding, the regularity that this happens, it would not surprise me if their military is training a new class of crackers and has been assigned a different set of addresses to use this term.)

When I get a new IP address in the log, I do a whois and rewrite the "inetnum:|NetRange:" field to a class A|B|C address and then DROP it in iptables. Fuck 'em. The whole darn network class gets dropped. Not that I'm likely to be logging in from China any time soon anyway.

I now have a list of network classes with about 35 address ranges that get dropped, if anyone is interested in the list.


But I often DO need to ssh from the airport!


That's what I use. Since I access my VPS from just two places (home, work), only those two addresses are allowed. Everything else is dropped.


This was my first thought as well, although the attack this time wasn't really SSH-based. It really does greatly reduce the frequency of attacks.


I connect to many SSH servers that aren't under my control. It's pretty annoying to remember an arbitrary SSH port for my own.

Binding to IPv6-only is more effective at reducing log spam: IP scanning 2^128 addresses is impractical, and scanners often cannot connect because of misconfiguration/incompatibility or lack of a routable IPv6 host address.


> scanners often cannot connect because of misconfiguration/incompatibility or lack of a routable IPv6 host address

Isn't this likely to keep you out of your own system too, at some point (accessing from unusual location without IPv6)?


Yeah. It could be problematic. Then again, 6to4/6in4 tunneling is fairly straightforward: https://en.wikipedia.org/wiki/List_of_IPv6_tunnelbrokers. Binding to a backup IPv4 port is another option.


I agree, and I've always thought that the people opposed to it are getting the 'security by obscurity' thing all wrong and taking it to ridiculous levels. Denying information to your attacker is virtually always a good idea. The only exception is widely-used, core stuff like encryption algorithms, web browsers, server framework, etc.

So sure, change the port and all, just as long as you're aware of what it does and doesn't do. I don't think anyone believes that you can allow root login with a password of 12345 if only you change the port, but it can be a good layer in a well-designed system.


The port change is a cosmetic difference that does not alter the security of your box one iota. All it currently does is reduce the number of unauthorized attempted logins.

If your box is more vulnerable on port 22 than port 22221 then your problems run orders of magnitude deeper than which port your ssh server runs on.


Based on that pool address the attacker seems to be mining Batcoin, not Bitcoin, which appears to be a scrypt-based altcoin, far more profitable to CPU-mine than Bitcoin. (In case anyone was wondering why an attacker was bothering to mine with a CPU.)


Came to say this. Indeed this has nothing to do with Bitcoin.


Always useful to see attack vectors. Nice, well-written blog post.


The attack vector is some random part of the application server stack (JBoss) demarshalling strings received directly from web clients and running commands from them.

A rather horrible practice to begin with. There's always so much that can go wrong when parsing tainted strings into native data structures, let alone into full objects using the built-in marshalling functions - they're not meant to be used on objects from untrusted sources at all!


Not long ago, Rails had a very similar attack vector, IIRC.


Yep, same thing but with YAML deserialization. Deserialization vulnerabilities are common in Java, Python, Ruby, and PHP web apps, because deserializing untrusted input is nearly akin to running eval() on it.
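The failure mode is easy to demonstrate in a few lines with Python's pickle (a sketch; `echo pwned` stands in for an attacker's payload):

```python
import pickle
import subprocess

class Exploit:
    # pickle consults __reduce__ to learn how to rebuild the object;
    # returning (callable, args) means the callable runs at load time.
    def __reduce__(self):
        return (subprocess.check_output, (["echo", "pwned"],))

payload = pickle.dumps(Exploit())   # what an attacker would send you
result = pickle.loads(payload)      # "deserializing" executes the command
print(result)                       # b'pwned\n'
```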


1. Define exactly what the deserialization output should be. 2. Implement it exactly that way. Now it's simple: does the definition in part (1) include execution of arbitrary commands?


A deserializer might be able to instantiate arbitrary classes, so any class with a constructor that could execute an arbitrary command makes the deserializer vulnerable.

Of course, the correct answer is not to use the deserializer that can instantiate arbitrary classes when you have a well-defined list of classes that can be instantiated.
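A sketch of that "well-defined list" approach, using the restricted-Unpickler pattern from the Python docs (the allowlist contents here are just an example):

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    # Only a well-defined allowlist of classes may ever be instantiated.
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"forbidden: {module}.{name}")
        return super().find_class(module, name)

data = pickle.dumps({"a": 1})
print(SafeUnpickler(io.BytesIO(data)).load())   # {'a': 1}

# Anything outside the allowlist is rejected instead of instantiated:
evil = pickle.dumps(len)   # a harmless stand-in for a dangerous callable
try:
    SafeUnpickler(io.BytesIO(evil)).load()
except pickle.UnpicklingError as e:
    print(e)               # forbidden: builtins.len
```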


I pasted a gist of the backdoor script the hacker downloaded onto that server: https://gist.github.com/anonymous/8527149

Original URL is http://pdd-nos.info/.tmp/back.conn.txt


Not loading for me. Here's Google's cached version: http://webcache.googleusercontent.com/search?q=cache:ZbXfxz_...


And kids, here's another story of how my VPS got DDoS'd by Hackernews/Reddit.



I'm interested in setting up a Linux-based server at home, but one of the things that has kept me from doing it is security concerns. Are there any server distributions which are very strict about which packages are included/enabled, and configured with as little remote access as possible by default?


Yeah, Slackware =]. People tend to laugh when you use it ("LOL WHO USES SLACKWARE!?!??!111"), but its packages are hand-picked to be stable and rock solid. On top of this, it's very easy to get a bare-bones install that doesn't enable a bunch of useless shit (and open ports) at startup. Beware: package management is by hand for the most part and doesn't support dependencies, so if you install a package, you have to know what other packages it depends on. Also, be ready to compile some of your favorite services by hand, since a lot of them aren't in the distributed packages (think Nginx, HAProxy, etc.). Things like Apache, MariaDB, Python, Ruby, and PHP are included though.

It's one of the oldest Linux distros and doesn't really hold your hand much. You'll be editing a lot of config files by hand. However, once you get used to it you'll definitely feel a much stronger connection to your box than with a lot of distros that try to make things easier. /r/slackware is a good resource - there are a good number of lurkers who jump at any chance to help someone out - and linuxquestions.org is a good forum as well.

EDIT: forgot to mention: don't rule out FreeBSD/OpenBSD either. They are known to be pretty solid for hosting as well.


Slackware is OK if you want to play around with Linux on a non-public-facing system, but it's bad advice if you want to run a secure public-facing server. It's too difficult to do automatic package updates on Slackware. Since so little software is packaged, you end up having to install most software manually, which means you become responsible for monitoring that software for security advisories and upgrading it in a timely manner when there's an advisory.

Here's my advice for a secure public-facing server: use Debian Stable, set up automatic upgrades every night, install as much as possible from the official Debian repository, and be sure to upgrade to the next Debian release before your current release loses security support. This way, the Debian Security Team is responsible for monitoring security advisories and rebuilding packages instead of you having to do it yourself.

Source: I just finished transitioning to Debian after 10 years of Slackware use, in large part because I found it too difficult to keep my Slackware installations secure.


CentOS or Debian. Go for a minimal install of either, then add only what you need (beware, minimal installs are very minimal; they actually don't even have vim by default, only a cut down vi). That said, if you forward the SSH port from the internet, use public key only for authentication (and if you forward ports for other things such as a webserver, keep it up to date) and you're fine.

Avoid consumer-focused distros like Ubuntu or Fedora; you'll also learn more by avoiding them as they tend to hand-hold the user and hide functionality and configuration from them.


The problem with old JBoss versions is that their configurations were insecure by default with several potential attack vectors available. If you didn't know about this and just deployed your JBoss out of the box, chances were good you'd get "hacked". Newer versions (>= 7) fortunately solved that issue and are not that easy to take over.


Anyone running PHP: move the location from /cgi-bin/php. We've seen a few takeovers of unpatched systems to run miners. Not everyone can take their server down, and changing the location in the Apache/Nginx config at least stops the bots from finding you immediately.


People are still running PHP as a CGI? I thought the way to do it now was a PHP-FPM pool with minimal privileges.


Oh yes. There are some rather ancient things still out there that give me the shivers, but you come across them from time to time.


> Things could have been worse If the attacker found a way to upgrade the privileges of the user running jboss (it’s a sudoer, but the password is really hard)

NEVER GIVE THE USER YOUR APPLICATION SERVER RUNS UNDER SUDO PERMISSIONS!


Was the attacker gathering results through remote logrotate communications?

Reminds me of WordPress attacks - you quickly wish for FS diffs in order to identify any changes in your code/data...


No, there was no real connection to the logrotate application. The miner that was running simply replaced argv[0] with "logrotate", presumably to look benign if/when the administrator took a look at the list of running processes.
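The argv[0] trick is a one-liner with execve(); a Linux-only sketch (/bin/sleep stands in for the miner binary, and it reads /proc to show what the process list would display):

```python
import os
import time

pid = os.fork()
if pid == 0:
    # args[0] of execv becomes argv[0]: the process runs /bin/sleep
    # but advertises itself as "logrotate" in ps output.
    os.execv("/bin/sleep", ["logrotate", "5"])
else:
    time.sleep(0.2)                       # give the child time to exec
    with open(f"/proc/{pid}/cmdline", "rb") as f:
        argv0 = f.read().split(b"\0")[0]
    print(argv0)                          # b'logrotate'
    os.kill(pid, 15)
    os.waitpid(pid, 0)
```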


Miners report progress directly to the configured pool.

It's pretty much set-it-and-forget-it mining.


Tripwire can be used; you just need to add your webroot and keep the database updated. These tools aren't new, just not often utilized by the new kids on the block.


You're right, I found out about it afterwards. I wonder why this hosting company didn't provide it (or an equivalent) out of the box.


I hope he flattened that VM and reimaged...



