HipChat security notice (hipchat.com)
143 points by el_duderino on April 24, 2017 | hide | past | favorite | 109 comments


Doubt this will be a popular view around here, but using a 3rd party service for internal business communications is just a bad idea.

I've seen companies post root passwords, SSH keys, salaries, internal financial details, etc. in Slack and HipChat. It's just waiting for disaster to strike, and every additional company adds value to the target. Maybe this breach won't be the last straw, but it's a consistent risk.

You can run your own Mattermost or XMPP server quite easily, and even lock it down behind a VPN to minimize security risks almost completely.


I had a gig at a company that used HipChat once. I generally like the app, but I was shocked/appalled that files posted to the chat got uploaded to the AWS cloud with a URL retrievable from anywhere. Do you know how often things like logs, config files, etc. get posted to chats? That place wrote insurance software too, so there was plenty of juicy financial information in their systems.


HipChat has been a pile of crap for a long time. Why people willingly use that over Slack remains a mystery.


As far as usability and all that I really like Hipchat. I haven't gotten the chance to use Slack, but I've got no complaints on the Hipchat application itself. My current megacorp employer uses it as part of a big contract with Atlassian, but ours is hosted internally. I don't know the intimate details of self-hosting, but I assume that mitigates most of the security concerns.


Slack does the same thing... files are uploaded to S3 as well. Slack isn't a panacea when it comes to security.


Being uploaded to S3 isn't the problem; it's public access to the S3 URL that's the issue. You can't publicly access an uploaded Slack S3 file.
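The usual fix for exactly this problem is time-limited signed URLs (S3 calls them presigned URLs). As a rough illustration of the idea only, not Amazon's actual SigV4 scheme, a server can append an expiry and an HMAC that only it can produce; the `SECRET_KEY` and URL format here are hypothetical:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # hypothetical key, never sent to clients

def sign_url(path, expires_in=3600, now=None):
    """Append an expiry timestamp and an HMAC signature to a file path."""
    expires = int((time.time() if now is None else now) + expires_in)
    msg = ("%s:%d" % (path, expires)).encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return "%s?expires=%d&sig=%s" % (path, expires, sig)

def verify_url(path, expires, sig, now=None):
    """Reject expired links and links whose signature doesn't match."""
    if (time.time() if now is None else now) > expires:
        return False
    msg = ("%s:%d" % (path, expires)).encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Anyone without the key can neither forge a link nor extend an expired one, which is the property the bare HipChat upload URLs lacked.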


HipChat offers a self-hosted option, which is a requirement for many companies that very specifically don't want their chat logs and files living in some S3 bucket.


In my experience, using IRC or XMPP mostly results in people not using it unless a) the team is largely technical or b) there's a common, easy interface like GChat used to be.


Quite a lot of organizations use Spark, which is a straight-up XMPP client; they also license an enterprise XMPP server.


> Quite a lot of organizations use Spark which is a straight up XMPP client, they also license an enterprise XMPP server.

The same open source community (IgniteRealtime.org[1]) that maintains the Spark[2] XMPP client also maintains OpenFire[3], a very good and easy-to-set-up XMPP server.

[1] http://igniterealtime.org/

[2] http://igniterealtime.org/projects/spark/index.jsp

[3] http://igniterealtime.org/projects/openfire/index.jsp


Huh... I was confused when somebody told me they bought a Spark enterprise server license... now even more so.

I think they probably just bought a license for a commercial fork of OpenFire.


> I think they probably just bought a license for a commercial fork of OpenFire.

That's very possible. Cisco bundled/bundles OpenFire into several of their enterprise appliances, including the Cisco Finesse product. Other companies do similar things. OpenFire is licensed under the Apache license.

There's also the possibility that your friend bought an enterprise license to OpenFire back when it was a commercial product under the name WildFire (Spark was commercial back then too). That would have been many, many years ago, back before Jive Software open sourced WildFire/OpenFire, Spark, Smack (XMPP Java Library), and several other pieces of software for real time communications.


That's why I suggested Mattermost: it can be self-hosted and has a very nice interface. There are also quite good XMPP clients like Conversations and Spark. The best bet for less technical people is to suggest quality clients.


Its Android client is so-so. I'd suggest Matrix/Riot.


Why are you giving your employees a choice in the matter of something so important? Set up an XMPP server, tell them that's what is used for internal communication. Period.

And if they're too lazy/dumb/entitled to download Adium/Pidgin and enter their email address+password; well, you should probably find better employees.


Friends don't let friends use libpurple based messengers. Sadly, Adium development is pretty stale and unresponsive to even major security issues such as https://threatpost.com/code-execution-vulnerability-found-in...


"Not using it" means "compared to using offline means for communication or just not communicating," not "compared to using an unauthorized online means for communication."

Your job as IT is to deliver business value. It's certainly possible that people not communicating is better for the business than people communicating over a hackable service, but it's not the conclusion most people have come to.


You probably should not let all your (non-technical) employees install random software they download.


Oh for God's sake.


What about private code on GitHub? Email/files on Google? Heck, customer data in AWS?


You can host your own versions of them. For that extra layer of security, you can stick them behind a domain that's only accessible over a company-wide VPN.


What about using one of those pesky popular operating systems that everyone else uses? They're such big targets; using them only increases the bounty someone would get for exploiting them. You can build your own internal operating system to minimize the risk that anyone can break into your company's machines.


That's a bit different - no one here is suggesting recreating tools, merely using existing tools in a more secure manner and segmenting them off from the general public and sometimes the rest of your network too. Lowering your attack surface is often the cheapest way to stop attacks.


What's being suggested is still a significant increase in spend for infrastructure. Self-hosting is not free. You lose economies of scale on the services. You need to hire an inhouse IT and/or infrastructure/ops team to support them. Your probability of downtime increases significantly, which comes with a cost. It's the same tradeoff you're talking about, with the only difference being the scale of cost.


You need to have one person who can run apt-get upgrade a few times. Most of the good development and operations people are capable of it in my experience at least. It requires at most a few days upfront and a couple hours of monthly work to keep things up to date.

It's not remotely comparable to re-developing the whole system. If your developers can maintain their own machines, they can probably handle this.


What about them? They are just as prone to the same problems.


>You can run your own MatterMost or XMPP server quite easily and even lock it down to behind VPN only to minimize security risks almost completely.

What about Matrix?


My assumption has always been that companies large enough to have a security team also have better security practices than my small company. E.g. I'd guess Atlassian infrastructure folks don't share ssh keys over chat. Maybe they do.


When a zero day drops, the size of the security team is irrelevant.


Who do you think is responsible for detecting a breach with that 0day? How about containing (and ensuring your believed containment is effective) and eradicating it? Would you rather have a dedicated security team do this, or would you prefer to have your devs wipe and rebuild naively, hoping they got everything? Even if you go MSSP, do they know your network?

Security is just as much (if not more) responding to a breach effectively and quickly as it is preventing one.


You can run your own HipChat server as well.


I think that's a totally legitimate view.


The best is when people accidentally type their passwords into the hipchat window, which if you have a whole company spread across a few rooms, happens every fucking day without fail. The cloud is not secure, sorry.


Is there a single case of a company suffering a loss because of a GitHub or Slack data breach? Is there a single case of a company suffering a loss because their own systems were breached? It happens all the time. Look at Sony: their data would have been safer stored on Dropbox than on their own internal servers.

You claim "risk," of using 3rd party services but can you quantify it with actual data?

Slack's entire business is secure business communication. Are we to think that our teams are better than Slack's when Slack's core competency is secure communication?

Should companies install their own phone lines because the 3rd party phone companies can't be trusted? Is there not risk when your internal teams who aren't necessarily domain experts, are building and maintaining systems that are outside of the company's core competency?

The security value of doing it yourself is nothing but anecdotal and not based on any actual data.


> Look at Sony -- their data would have been safer stored on DropBox than their own internal servers.

Not really. If I recall correctly, Sony's whole Windows network was compromised via trojans in a PDF attachment exploit, which had nothing to do with local vs. cloud storage. They certainly couldn't have replaced their desktops with Dropbox.

> Are we to think that our teams are better than Slack's when Slack's core competency is secure communication?

Not necessarily, but they are much better able to restrict things to only your employees by applying VPNs and HTTPS+LDAP-auth-only proxies, preventing you from being affected by public breaches like these. The value there is well documented: look what happened when the world moved from exposed machines to being behind NAT routers.

Big public breaches happen all the time. You mentioned Dropbox: I've gotten several reset emails from Dropbox due to compromise, and I'm guessing the attackers didn't walk away empty-handed in those cases. A quick search reveals that just last year 68 million Dropbox accounts were compromised.

On the other hand, when was the last time an even moderately well maintained SMB file server behind a LAN was compromised directly? Unique zero-day attacks are much more likely to be used on public services too due to the nature of their value.


Needless to say, their (login) servers crashed under the pressure of people resetting their credentials.

"Hey, you know what might be a good idea? Let's email all of the accounts at the same time using an Appriver blast!"

Atlassian. I hate to hate you.


While I can see your point in this case I think it was the appropriate action, their ops team should've just beefed up their resources in conjunction with the email blast.

Only emailing a rolling subset of your customers becomes a shit show of support: who do you email first? Who do you email last? How long do you wait between groups? For whom is security most important: your biggest customers, your highest paying, your most security conscious? It's a real shit show to figure out, and one you'd absolutely get wrong. Letting everyone know as fast as possible is the only acceptable response to a security breach.


The servers should have definitely been prepared for the increased load. Perhaps I'm overly optimistically using the plural form in this case.

Truth be told, I can only assume this was done in a short burst, given my limited sample of a (hopefully ever-narrowing) circle of people who use Atlassian products. But would distributing the bulk mail over an hour (or two, or three) using a randomized sample of their customer base really have made a significant impact on security or their support?

I wonder how I'd do it, really, if let's say, beefing up my infrastructure for some reason isn't an option.


The HipChat desktop client had a trivial MITM vulnerability which took them several months to fix after I reported it. They never made any kind of public notice about it, so I'm almost surprised to see them talking about security here.


Where does that vulnerability report fit in with the Atlassian acquisition? (circa spring 2012)


It was first reported around this time last year, so "after".


I wonder which "popular third-party library" caused the problem


Probably left-pad


I assume they'll reveal that information once the library fixes the issue.



More importantly, I wonder how much they were paying for this library, or to what extent they were supporting it internally. Because if the answer is zero and they weren't, I would put a lot of the blame on HipChat engineering.


I'm not sure I understand you - You would blame the users of a third-party library if the library was found to have a vulnerability and it was exploited against the people using the library?


I read it as "if it's open-source, a company of Atlassian's size should be being good stewards and taking care of things that are helping them make money."


Open source code now carries a moral maintenance obligation? Do we say the same thing about any large company that uses OpenSSL or any other open source libs that people use or depend on? That doesn't seem fair or reasonable.


> Open source code now carries a moral maintenance obligation?

Yes, and it always has and it can't be discharged. Pay-it-forward is the right thing to do.

> Do we say the same thing about any large company that uses openssl or any other open source libs that people use or depend on?

I certainly do. A red line, I-will-quit condition is and always has been "I won't participate in the development of private forks of open-source software" and I have at multiple employers gotten checks straight-up cut to open-source software maintainers. I have also entreated (and in two cases succeeded in convincing) maintainers to start up maintenance programs so we could pay them a yearly fee--because donations are way harder to push than support plans.

And, in turn, I open-source useful tools[1][2][3], including major parts of my consulting business, because it, too, is the right thing to do.

You should do likewise, because it is the decent and human thing to do.

> That doesn't seem fair or reasonable.

I consider not paying forward kindnesses paid to you way, way more unfair and unreasonable.

[1] - https://github.com/bossmodecg

[2] - https://github.com/eropple/auster

[3] - https://github.com/eropple/cfer-provisioning


> Yes, and it always has and it can't be discharged. Pay-it-forward is the right thing to do.

Interesting - it may be a nice thing to do, but I don't agree that there is any sort of obligation to the project just for using the project.

> I certainly do. A red line, I-will-quit condition is and always has been "I won't participate in the development of private forks of open-source software" and I have at multiple employers gotten checks straight-up cut to open-source software maintainers. I have also entreated (and in two cases succeeded in convincing) maintainers to start up maintenance programs so we could pay them a yearly fee--because donations are way harder to push than support plans.

I have also worked at companies that funded open source projects through donations or maintenance, but there was never a moral obligation there. It was more a method of risk management than altruism.

> I consider not paying forward kindnesses paid to you way, way more unfair and unreasonable.

Paying forward kindness and being morally obligated to maintain an open source library just because you use it are different things in my mind. It seems like being paid a kindness creates an obligation, which is not that nice.

---

Interesting perspective, even though I strongly disagree. I'll continue to use open source projects 'AS IS'[0] and I still won't feel morally obligated to maintain them. Similar to how I use Linux/BSD and don't feel obligated to maintain the kernel. I'm certainly grateful, of course, but I don't feel like there was any sort of contract/exchange between myself and the maintainers that creates an obligation on my part.

[0] https://github.com/eropple/auster/blob/master/LICENSE.txt


It's not a "nice thing to do" and it's not really a moral issue. It's simply the point that if the security of your thriving business depends on a software component you did not author, you're obliged to do everything you can to ensure that component is safe for deployment, and for open source components one thing you can easily do is contribute to development.

There's really no way around this; it's a straightforward practical problem.


Where I come from, if you aren't paying it forward, you aren't grateful. (You are also probably hurting yourself, in that problems with it are problems for you, too.)


Not only moral, but mostly legal.

Warranties are not included. So it's a bit lame to blame "a popular third party library".

The OP was trying to say that a company of Atlassian's size should dedicate the resources to vet (and fix) those libraries if they use them for these purposes.


I don't think anyone is trying to shift blame, just to explain what happened. I am not affiliated with Atlassian so I'm only guessing.


Sorry, I wasn't trying to imply they shifted blame. But Atlassian does bear the legal and moral burden of securing their product here.


I agree with you. I don't agree that they have a moral obligation to make pull requests to the open source library that had the issue. Hopefully they will, but there is no obligation there in my mind.


>Open source code now carries a moral maintenance obligation?

Many have always argued that it has.

>Do we say the same thing about any large company that uses openssl or any other open source libs that people use or depend on?

Many do.

>That doesn't seem fair or reasonable.

Many argue that any company failing to contribute to the OSS projects they depend upon isn't being fair or reasonable.


> Many have always argued that it has.

Alright, I'm arguing that it doesn't.

> Many do.

shrug I don't.

> Many argue that any company that failing to contribute to the OSS projects they depend upon isn't fair or reasonable.

This sort of attitude bothers me. At this point the software is not really free in my opinion. I am not a lawyer :P Just my $0.02


Nothing in life is free.

Quality software doesn't create itself out of thin air (yet, if ever). That means someone has to make an investment.

You don't have to invest in the upkeep of the foundation of your house, but if some bugs, say termites, were to sneak in you can't blame the original builders for the donated foundation.

Please downvote for the bad analogy.


I think of it like the Heartbleed vulnerability - was everyone affected to blame for the vulnerability? Was everyone simultaneously morally obligated to be contributing patches back to openssl? I don't think so.


Everyone affected was to blame for their own vulnerability, to the extent they relied on OpenSSL.

I worked for a company that needed to push out an out-of-cycle patch for Heartbleed. We were building a virtualization product that included OpenSSL and other free-software libraries in the core product, plus an entire Linux distro to support our install-this-on-dedicated-hardware product. We made the business decision that we could reuse Ubuntu and not develop our own operating system and control plane. Others, like Microsoft, made the decision to implement it all themselves. Others, like VMware, took a decision somewhere in the middle.

We got a significant amount of functionality for free - and a significant amount of risk for free. Whatever code worked for our needs, we could profit from. Whatever code introduced security vulnerabilities in our application (and it was not all upstream security vulnerabilities, since we intentionally designed our system to anticipate that local root exploits would be easy), we took responsibility for. That was part of saying that this was our product, and not just a shell script for building a similar product on your own.


IMO, frankly, yes.

If you use someone else's code, especially if you're not paying anything for it, you get what you put into it: nothing.

The liability for this breach is ultimately owned by Atlassian, not the third party library writer. To quote the most permissive license out there:

"THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED."


To use an analogy, do you blame everyone that has ever used the linux kernel whenever bugs/vulnerabilities are discovered in the kernel?


If they use the upstream kernel, yes. Linus Torvalds has been very clear that security is not a priority.

"We have one rule in the kernel: don't break userspace. Everything else is kind of a guideline. The whole security thing? It's a guideline that we shouldn't do stupid shit. But people do stupid shit all the time and I don't get that upset." https://www.youtube.com/watch?v=1Mg5_gxNXTo#t=8m28

Imagine, Torvalds said, that terrorists exploited a flaw in the Linux kernel to cause a meltdown at a nuclear power plant, killing millions of people. “There is no way in hell the problem there is the kernel,” Torvalds said. “If you run a nuclear power plant that can kill millions of people, you don’t connect it to the Internet.” Or if you do, he continued, you build robust defenses such as firewalls and other protections beyond the operating system so that a bug in the Linux kernel is not enough to create a catastrophe. http://www.washingtonpost.com/sf/business/2015/11/05/net-of-...

And the Linux kernel has a track record for not being the world's most secure piece of software. Which is fine, it's a project he started for fun.

It's certainly possible to pay people for a kernel that they'll stand behind commercially. If you're using, say, a RHEL kernel and a bug or vulnerability is discovered that impacts you, by all means get upset at Red Hat. If you're using a Fedora kernel, though, you made the choice to use it. It's completely unfair for you to get the benefits of running a kernel you put neither time nor money into, and not also the risk of running that kernel.


I would certainly blame Google if their Android phones were backdoored, especially if they tried to foist the blame off on the Linux kernel developers - a much more apt analogy since they sell Android phones.


I would be very surprised if something as complicated as Android phones didn't contain anything that can be back doored.

Obviously that is Google's problem, but I haven't seen Google (nor Atlassian in this case) claim anyone else is to blame.


Surely that's completely unreasonable. Who can claim that any piece of software is without bugs/security holes?

The problem is absolutely owned by Atlassian, but they actually did do something to fix it.

I don't believe anyone (apart from maybe Daniel J Bernstein) can claim any piece of software is bug/hole free, and neither does anyone need to!


Why is that unreasonable? If I write a complex piece of software, and something goes wrong, the blame is on me. As you say, there's no way to claim that it is without bugs.

Why, when I use someone else's software without so much as telling them, let alone signing a support contract with them, does the blame shift to them? My responsibility is to deliver a service, not to write software. If writing software is the easiest way to do it, great. If using someone else's software is the easiest way to do it, great. But I'm the one running the service, either way.


If it's free software? Yes, absolutely. To do otherwise is a chilling effect against hobbyist free-software authors in favor of large companies that have the ability to take on that liability. I, as an individual, want to be able to write code in my free time, put it on GitHub, and let it get popular without worrying that maybe it has a bug in it. If people want to start holding me responsible for it, I'm just not going to release it. Maybe you can pay my employer for a commercial license if we think that the cost of auditing it and taking on the liability for bugs is commercially reasonable, but it's also unlikely my employer will be interested.


I am absolutely not advocating that the library author be held liable.


I think this is a part of the general meme that big companies are making money by taking open source without giving back, and that big for-profit companies could do more for the open-source community.


When you use any third party library, you're responsible for its behaviors (or misbehaviors) on your customer's machine. How is it possible to view this in any other way?


I 100% agree with you. Atlassian is 100% responsible. I'm not sure I would say they are at 'fault', though. Maybe I'm just quibbling over semantics.

I don't know the details of the vulnerability - I would say they were at fault if they did not update/patch a fixed vulnerability.


This really depends on the use case of the library. It's entirely possible to find bugs that are outside your expertise to fix.

I can't speak to this case, obviously.


Do they really need a captcha on BOTH username AND password input? I get that they are different submit pages, so they are likely querying the database on each page, but is that really necessary? I don't see any benefit from it, as a user, while trying to log in to my account.


My recent password update policy: wait for a service to be hacked, then change the password to a long random string generated by KeePass. I was thinking of spending a whole day updating the passwords for all the services I have accounts on, but at the rate sites are getting hacked, it won't be long before I have unique random passwords for all of them anyway.
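What a KeePass-style generator produces is, strictly speaking, a random string rather than a hash; the same thing can be sketched with Python's `secrets` module (the length and alphabet here are arbitrary choices):

```python
import secrets
import string

def generate_password(length=32):
    """Generate a unique random password from a mixed alphabet,
    using the OS's cryptographically secure randomness source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Unlike `random.choice`, `secrets.choice` draws from the OS CSPRNG, which is what you want for credentials.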


I miss IRC. All it needs is a few tweaks to bring it to 2017.


Then you might like Matrix. From the IRC point of view, it's basically IRC, but updated to become a 2017 protocol.


By the way, HipChat still has no two-factor authentication.
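For context, TOTP, the six-digit-code flavor of 2FA most services bolt on, is small enough to sketch from the standard library alone. This is a minimal RFC 6238 (SHA-1 variant) implementation for illustration, not anything HipChat ships:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This reproduces the RFC 6238 Appendix B test vectors (secret `12345678901234567890`, i.e. `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ` in base32): at T=59 the 8-digit code is 94287082.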


Is this implying their database was leaked?


Yes, possibly.


I understand not disclosing the vulnerability itself, but if they won't even disclose the affected library's name then they are being grossly irresponsible or are under an NSL. If they are under an NSL and not just being irresponsible that would mean the vulnerability is part of one of the stolen NSA exploit kits.

If I use the library, and it is non-essential for my business, then I should know what it is so that I can remove it.


> If you are a user of HipChat.com and have not received an email from our Security Team with these instructions, we have found no evidence that you are affected by this incident.

Well, I didn't receive anything, but couldn't log in either. I had to google this to find out what the hell is going on - error messages on the login page are not helpful either, they just refuse login even after resetting the password.


Just got hit by this. Everybody on my team was using HipChat as the primary online communication tool. So it was nice to see nobody in the room for a while.

Fine, but I wonder why they didn't reset the API tokens immediately while resetting passwords. Are they managed by different servers/services?


well written blog post imho


"This weekend our Security Intelligence Team detected" ...

"Security Intelligence Team" ... yeah, because that team actually exists.


It's sitting right next to the emoji team.


Why are they force resetting everyone's password if they are bcrypt'ed?


Bcrypt isn't magic. It will help slow down a full crack against everyone when every password takes tens of milliseconds to check. But even with a hefty work factor of 1 second on a beefy EC2 instance, you can check a million accounts for 'password' in 11.5 days on a single machine, and much faster if you spin up more instances or leverage a botnet of many less powerful machines. And if you want to target an individual user, you can try a million different passwords against their account over the same period. That's why it's best practice for everyone to rotate, though if your password is complex and you're not a particularly juicy target, you can probably get away with not doing it right now.
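The 11.5-day figure is straightforward arithmetic; here is a quick sanity check (the one-second work factor and one-million-account count are the parent's hypotheticals, not measured values):

```python
SECONDS_PER_DAY = 86400.0

def days_to_spray_one_guess(accounts, seconds_per_hash):
    """Days to test a single candidate password (e.g. 'password') against
    every account, at one bcrypt verification per account on one machine."""
    return accounts * seconds_per_hash / SECONDS_PER_DAY

# One million accounts at a 1-second bcrypt work factor:
days = days_to_spray_one_guess(1_000_000, 1.0)  # ~11.57 days
```

The same function shows why lowering the work factor by 10x (common in practice, since 1 s logins annoy users) drops that to about a day.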


The webserver could have been compromised causing plaintexts for login attempts to be exposed as well. In fact, that is a very plausible explanation for how the database was accessed since it is usually firewalled off.


CYA. If they didn't force resetting passwords, they would at least appear to not have done "everything possible" to protect the account.


it's considered best security practice to do so


By whom?


Science.

You might find the numbers below interesting. Note that this performance is from a single workstation with 8x GTX 980s. Even the mighty bcrypt won't save you if your password is bad (as a sidebar, look at the SHA-512 numbers). Now consider social media mining to enhance the word list. Now consider that (anecdotally) I have never done a hashcat audit without having to have a conversation with someone about choosing better passwords:

  Hashtype: bcrypt, Blowfish(OpenBSD)
  Workload: 32 loops, 2 accel

  Speed.GPU.#1.:  6398 H/s
  Speed.GPU.#2.:  6507 H/s
  Speed.GPU.#3.:  6513 H/s
  Speed.GPU.#4.:  6643 H/s
  Speed.GPU.#5.:  6534 H/s
  Speed.GPU.#6.:  6512 H/s
  Speed.GPU.#7.:  6689 H/s
  Speed.GPU.#8.:  6542 H/s
  Speed.GPU.#*.: 52338 H/s

https://gist.github.com/epixoip/c0b92196a33b902ec5f3
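Plugging that 52,338 H/s aggregate into the same kind of arithmetic shows why password quality still matters under bcrypt; the 8-character all-lowercase keyspace below is my illustrative choice, not a figure from the gist:

```python
SECONDS_PER_DAY = 86400.0

def brute_force_days(keyspace, hashes_per_second):
    """Days to exhaust a candidate keyspace at a given aggregate hash rate."""
    return keyspace / hashes_per_second / SECONDS_PER_DAY

lowercase_8 = 26 ** 8                        # ~2.09e11 candidates
days = brute_force_days(lowercase_8, 52338)  # ~46 days on the 8x GTX 980 rig
```

Forty-six days for a weak 8-character password on one workstation; a curated word list plus social media mining gets there far faster, which is the parent's point.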


Does HipChat hash client side or server side?


the most popular passwords are love, secret, sex and god


WTF? I got the email from HipChat. It includes this sentence, without any additional non-techy context:

"HipChat hashes passwords using bcrypt with a random salt."

This is a good example of how not to do mass e-mails targeting the general population.


If they'd just said "We securely store your passwords", they would have tech people with pitchforks and torches about how they need to name their password hashing algorithm.

If they'd explained both ways, somebody would accuse them of the notification being too long as part of a scheme to bury the details below the fold.

As a company reporting an issue, no matter what you do, the internet's gonna nitpick.


Serious question: Do you need the non-techy explanation?


No, but I could be a non-IT user using HipChat. This sentence is likely meaningless to me.


Is that a problem? I guess they could have put a "technical details:" in front of it, to make that clearer, but it has to be in that e-mail (otherwise the techies complain) and isn't really something they can explain in a useful way there.


Then you would be fine with the standard boilerplate "your data is secure with us"?

I'm happy they say this... now I'd like some sort of proof :-)


A follow-up sentence that reiterates the same sentiment in non-technical terms would accommodate non-technical recipients.

Poor UX is my point here.


Better would be "We securely[1] store your passwords"

blah blah...We take your privacy very seriously...blah blah...

[1] Technically, we use bcrypt with a random salt.


An argument could be made to take this a step further and not even go into buzzword details at all, and leave it with boilerplate words.

Security is hard. Describing security is even harder.

I'm for transparency in security for review, but you can achieve the same via a 3rd party audit.

Just my 2c.


Googling bcrypt should do the trick and most people know how to do that. </sarcasm>


You're joking, I assume. If not, don't be in the UX field.


Sure. I get your point. Just asking if YOU specifically needed the explanation, which I'm sure a number of folks would be happy to provide.



