I'm not sure the description is what actually happened. It doesn't have the ring of truth to it.
That said, LastPass is not deserving of any trust as a password product of any kind. That a password was captured by a keylogger on a Dev Ops home computer shows that they don't understand how to secure remote computers, the meaning of defense in depth, the importance of proper login authentication, or how to secure data at rest. Each of these points is close to the core of their business.
I don't wish them ill. I hope they recover from this, but they need to understand security to produce a security product.
> I hope they recover from this, but they need to understand security to produce a security product.
It's because the market doesn't actually pay for a secure product, only the appearance of one.
The end-user doing the buying cannot really discern whether the company's product is actually secure. There's no standard third-party auditing (by, say, a gov't organization).
Banks are run securely not because I personally audit them, but because the gov't mandates liability onto the banks for losses from their insecure systems. So the banks are secure because they stand to lose a lot. The same must be mandated for all companies imho, or insecure companies will continue to exist and thrive.
Eh banks run securely because it’s very difficult to steal money.
Hard currency theft requires a physical attack and “digital currency” is just essentially a spreadsheet that requires a settlement mechanism such as correspondent banking to work.
Bank transfers are nothing more than messages going between different branches and banks; nothing is being transferred other than orders.
The attack surface on modern banks, especially large ones, is actually ridiculously small, since you need to defraud or compromise not just a single bank but the entire system and all the other banks using it: once the offended bank notices some inconsistency, it can issue a notice to reverse any offending transactions.
Also, bank transfers are often liabilities for the banks involved: e.g. when an account at First Capital transfers $1M to someone at First Direct, First Capital now owes First Direct $1M, which makes First Direct a creditor. That's why First Direct will likely quarantine the funds until the transaction is fully verified and settled, and even then there's still likely going to be a cooldown period to reduce the risk even further.
Most of the security within banks is designed to deal with internal threats since the entire banking system is essentially based on mutual trust which gives individuals even fairly low ranking branch employees the ability to authorize fairly substantial transactions.
> Eh banks run securely because it’s very difficult to steal money.
i think you got the cause and effect wrong - banks are run securely because it's made to be very difficult to steal money. And stolen money gets tracked (if you did steal a large amount) by anti-money laundering laws, which makes it hard to spend it.
Why is banks' attack surface small? Why are all these other "systems" in place to make stealing money difficult?
Why isn't the same happening with stolen credentials, or data?
It's telling that in areas where banks don't bear the liability (criminals breaking into individual accounts and draining them, for example) they are insecure. SMS 2FA, short PINs, etc.
> He said Barclays told him it would do an internal fraud investigation which later resulted in Mr de Simone being held liable for all the losses.
> "They could not identify a point of compromise from the back end - to them it looked like the pin had been entered.
> "The only thing they could suggest was that someone knew the code therefore it's gross negligence on my part apparently.
In the olden days banks used to have decent security. Once you gained access to the account, to pay a new payee you'd need your bank card and PIN to do the 2FA. Now it's all on the same phone.
> After eight months of evidence gathering and dealing with the police, an investigator at the Financial Ombudsman Service (FOS) upheld Mr de Simone's complaint against Barclays which now, if it disagrees with this, has the opportunity to ask the ombudsman to examine the case.
This. At the end of the day, if it is a significant transfer there are people in locked rooms that have to call each other and independently verify a transaction brokered by a third party. And if you don't pass a background check, and don't need to be there, you don't get in that room.
All this might be true in the US where realtime transfers don’t really exist.
But here in the UK, where we've had realtime, high-volume transfers for at least a decade, it's very possible to steal digital money and move it on before the bank notices.
Faster Payments in the UK are expected to be credited and spendable within 20 minutes; normally it's spendable within milliseconds. The actual settlements happen every few hours, and all of it is pretty much automated. Additionally, because the money can be credited so fast, there is no recourse for a bank to recover a payment they've authorised: they have to settle it. Sending the transfer message is as good as sending the money. Once it's sent, they have to pay; if they refuse, the money is taken from their collateral at the payment network to pay their debt, and they're disconnected from the payment network.
I'm from the UK. That's for relatively small amounts and it's insured; it's the cost of doing business, and there are still a lot of fraud checks, especially for new payees.
If your account all of a sudden has tens, let alone hundreds or thousands, of transfers coming in, it will be quarantined.
HSBC locked my account just last week over 7 or 8 transfers of circa £10 each, all made on the same day by colleagues from my work, since we'd bought a gift for someone who was leaving and were settling the payment.
Authorized push payment fraud still does happen but it’s fairly negligible in the big scheme of things.
> If your account all of a sudden has tens, let alone hundreds or thousands, of transfers coming in, it will be quarantined.
> HSBC locked my account just last week over 7 or 8 transfers of circa £10 each, all made on the same day by colleagues from my work, since we'd bought a gift for someone who was leaving and were settling the payment.
I would be careful assuming that HSBC is representative of how banks in the UK work. They're well known to be the most trigger-happy bank when it comes to account and transfer freezes. HSBC was probably the number one cause of complaints related to transfers while I was working in the fincrime team of a different bank.
> Eh banks run securely because it’s very difficult to steal money.
Banks run securely because they have serious and mandated change control processes, applied against a formal classification of the importance of systems.
Something you generally won't see outside the T100 of non-bank companies.
This means their IT evolves glacially slow, but it does keep things stable.
Sometimes organizations that look less secure are actually more secure just because they degrade gracefully under attack and/or can more easily mitigate/revert the consequences of successful attacks.
There's no such thing as a secure product, only secure enough. And LastPass, storing billions of passwords, is a very high-value target. They probably have hackers banging on their firewalls all day, every day.
Also, people choose convenience over security. Like how Signal was the most secure option out there, but people went to Telegram instead because, even if it was (slightly?) less secure, it's a lot more convenient.
While it's very civil of you to wish recovery upon LastPass, I don't really think the product is deserving of redemption. This is not the first major incident and it demonstrates little growth in relation to prior breaches. The world as a whole would probably be better off if LastPass were to breathe its last.
I agree with the GP. Why would selling it solve the issues with the product?
How much of the product can be salvaged?
They have a well-known brandname, but it is arguably radioactive now.
The product as software can be rebranded, but why go through this effort if the underlying software has proven faulty so many times in the past?
A similar effort can be invested in making open-source password managers better, so there is a clear opportunity cost to salvaging LastPass.
Plus a sale would surely only directly benefit those most responsible for LastPass' issues. It would mean they are directly rewarded for their incompetent execution.
It seems like nothing is necessarily wrong with the software itself. It's the opsec surrounding the software. The most secure software in the world can be pwned if you can get access to the lead dev's system or the build system itself.
Yeah, because the description is inadequate. Is this BYOD? (… seems like not the employee's fault.) Did the employee use the same password on the laptop and at home, get credential stuffed, and LastPass isn't using MFA¹? (… seems like not the employee's fault.) Was there some jump from a compromised home laptop to the corp laptop? (The network is never to be trusted. … seems like not the employee's fault.)
The buck is supposed to stop at security, not at each employee's personal hygiene … if your game plan depends on the latter, it's game over.
There's more here than is being written, and I can only imagine because the truth probably stinks.
¹except TFA mentions MFA … but the mention of it doesn't really make sense.
> The buck is supposed to stop at security, not at each employee's personal hygiene … if your game plan depends on the latter, it's game over.
I have to take security trainings twice a year that literally talk about the buck stopping at my digital hygiene and I better not fuck it up for The Company.
Companies have to understand breaches will happen, but preparing employees on how to spot attacks or understand when they've been breached is a huge component of their ability to repel or minimize attacks.
Security is basically layering imperfect solutions on top of each other until the statistical probability of breaching ALL of them gets small enough to satisfy the requirements of the organization.
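As a crude illustration of that layering math (toy numbers, and it assumes the layers fail independently, which real layers often don't):

```python
# Toy model: chance an attacker gets through ALL layers, assuming each layer
# is breached independently with the given (made-up) probability.
layer_breach_probs = [0.10, 0.05, 0.20]

p_total = 1.0
for p in layer_breach_probs:
    p_total *= p

print(f"Chance of breaching every layer: {p_total:.4%}")  # 0.1000% for these numbers
```

Each additional imperfect layer multiplies the amount of luck the attacker needs, which is the whole point of defense in depth.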
In the case of LastPass, they're holding data (which, to be fair, they shouldn't have been) that's INCREDIBLY attractive to everyone from script kiddies to nation state actors. When it comes to keeping out nation states, the threat model can get kind of wild and difficult to engineer for while building what's ultimately a consumer-facing product. However, in this particular case it's fairly obvious that bad IT policies led to an issue and LastPass got burned.
I would be more forgiving of LastPass, but letting employees do BYOD means you have to trust everyone who ever uses that computer, including spouses and children. It's just really dumb.
> I have to take security trainings twice a year that literally talk about the buck stopping at my digital hygiene and I better not fuck it up for The Company.
this is mostly so they can pin it on you when it inevitably happens (rather than the management)
Right, but consider your average user. If your machine is infected with a keylogger that results in a stolen password because of a vulnerability that wasn't identified and corrected in time, that's not on you as an employee; that's on the security team for not implementing compensating controls / defence in depth.
Yes, you have a responsibility to spot phishing emails, not write down passwords, not plug in random USBs, etc. But if something happens completely behind the scenes during your normal business, it's not on the employee.
Almost 20 years ago, I worked with this (not particularly competent) sysadmin. Policy said the root password had to be rotated once a month. So, in July 2003, they set the root password to blah0307 (where blah was some random word, which I forget now but knew at the time.) I wasn’t actually supposed to know the root password, but one of my colleagues let me in on the secret, including the repetitive pattern. I think the security auditor ticked the box “root password changed every 30 days”, but never asked what the password actually was.
I know some places have rules like “must have at least N characters different from previous passwords”. However, depending on the exact rule, people can come up with easy-to-remember workarounds: e.g. the “blah” bit is ”foo” in even months and “bar” in odd ones.
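To make it concrete, here's a minimal sketch of how little an attacker has to enumerate once the rotation scheme itself leaks (the words and date range are made up, echoing the pattern above):

```python
# Hypothetical "word + YYMM" rotation like the one described above.
# Once the pattern leaks, "rotated every 30 days" buys essentially nothing.
def candidates(words, years):
    for word in words:
        for year in years:
            for month in range(1, 13):
                yield f"{word}{str(year)[2:]}{month:02d}"   # e.g. blah0307 for July 2003

guesses = list(candidates(["blah", "foo", "bar"], range(2002, 2005)))
print(len(guesses))  # 108 guesses cover three words over three whole years
```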
> An attacker can only hack the paper with physical access to my office.
... and there are lots of unrelated people with physical access to your office. Cleaning staff, building maintenance, HVAC technicians, printer service staff... and all of these may not have the same level of background checks as your company has.
And even if you hire all of these yourself (which makes sense at a certain scale), that still doesn't protect you against marketing inviting a camera crew and walking around everywhere in one of these typical "life at the office" short films for Linkedin. IT staff offices seem to be very popular for such films since they're usually the most personalized rooms with lots of nerd stuff on the walls and desks.
Besides: swiping a photo of a post-it leaves no evidence, whereas installing a physical keylogger certainly does.
If you start dragging a password from 1Password, the window disappears and you can release your mouse over a field elsewhere, and it will get typed in there (instead of pasting). This may or may not work for your case.
What I want is a secure shell (somehow) where my env variables are encrypted and on access I get a prompt to either use a fingerprint reader or a password to unlock them for the process.
Anyone know of any such option? What I've come to use are separate env files that I source in various directories before running the commands that need credentials, or a tool that decrypts a file, loads it into a subprocess's env vars, and runs a program (something like mozilla/sops), but I still find that too cumbersome. I'd like it transparent and integrated with my shell.
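Not aware of anything that does exactly that out of the box, but the core of it is small enough to sketch: decrypt on demand, hand the secrets only to the child process, never export them into the interactive shell. A rough sketch in the spirit of sops (the JSON secrets format, paths, and the gpg backend are all my assumptions):

```python
#!/usr/bin/env python3
"""Sketch of a "decrypt, inject into env, run" wrapper. Everything here
(JSON secrets file, gpg as the decryption backend) is an assumption."""
import json, os, subprocess, sys

def run_with_secrets(encrypted_file, command):
    # gpg prompts for the key passphrase (or asks the agent), so the plaintext
    # never needs to sit unencrypted on disk or in your shell's environment.
    plaintext = subprocess.run(
        ["gpg", "--quiet", "--decrypt", encrypted_file],
        check=True, capture_output=True,
    ).stdout
    secrets = json.loads(plaintext)           # e.g. {"AWS_SECRET_ACCESS_KEY": "..."}
    env = {**os.environ, **secrets}           # only the child process ever sees them
    return subprocess.run(command, env=env).returncode

if __name__ == "__main__":
    sys.exit(run_with_secrets(sys.argv[1], sys.argv[2:]))
```

A shell alias or a direnv-style hook around something like this gets you most of the transparency; the fingerprint part would have to come from whatever unlocks the key (e.g. an agent backed by a hardware token).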
> ¹except TFA mentions MFA … but the mention of it doesn't really make sense.
I guess there was a loophole in their MFA integration. Maybe they accepted the same TOTP twice - in a multi-region setup I guess this might be a trade-off that somebody might risk. With a keylogger one can theoretically steal a TOTP anyway or do other more sophisticated shenanigans.
> We enabled Microsoft’s conditional access PIN-matching multifactor authentication using an upgrade to the Microsoft Authenticator application which became generally available during the incident.
Maybe switching to push notifications with number matching is their mitigation for that (e.g. without affecting multi-region replication / performance).
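For what it's worth, closing the specific loophole described above (the same TOTP being accepted twice) mostly comes down to remembering the last time step each user redeemed. A toy sketch of that bookkeeping (the `expected_code_for_step` hook stands in for whatever TOTP library is actually in use):

```python
import hmac, time

STEP_SECONDS = 30
last_accepted_step = {}   # user -> last TOTP time step already redeemed

def verify_totp(user, submitted, expected_code_for_step):
    step = int(time.time()) // STEP_SECONDS
    if not hmac.compare_digest(submitted, expected_code_for_step(user, step)):
        return False
    if last_accepted_step.get(user, -1) >= step:
        return False                    # this window's code was already spent: reject the replay
    last_accepted_step[user] = step     # in a multi-region setup this has to be a shared store
    return True
```

The multi-region caveat is exactly the trade-off mentioned above: without a shared (or at least sticky) store for that last-used step, one region can happily accept the replay another region already consumed.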
We are looking at a highly sensitive asset, though. A vault that is only used by four employees (per the original press release, not in the 9to5mac article) and contains keys for the production backups.
At this level of sensitivity you need to consider "should the employee's personal plex server be able to connect to this" and the answer might be a No. At this point you should be issuing them a PAW and a physical security key and auditing the shit out of it. It's not like lastpass can't afford it.
I understand what I'm asking for is "educated conjecture" more or less, but what would you surmise might be actual plausible situations rather than what LastPass is putting out as PR? Just asking as a layman who is curious with no skin in the game.
Are there any reliable ways to secure remote computers from keyloggers _and_ still provide an efficient software development environment for non-trivial projects?
All of the software engineers I have seen have a fairly unrestricted environment -- Linux machines, with sudo access, often with passwordless root access via the "docker" group, and with a non-intrusive "endpoint protection" system. It would be normal for someone to run "npm install" on their machine, or check out a random GitHub repo they read about and run code from it.
Such a machine would be a prime target for malware. And the endpoint protection I have seen seems to be really stupid -- basically hooking "exec" calls and checking for an exact hash match (!). Any serious malware should be able to bypass it without much effort, and if it only stays on a single computer, the detection chance is pretty low.
(I have also seen some poor souls who were stuck on locked-down Windows machines... but they usually ended up using their machines as remote terminals, doing most of their actual work on some remote server. And that server is sudo-capable Linux with light/no protection, so see the previous paragraph. I suppose if _that_ is infected, at least LastPass might not be stolen... unless people start a browser on the server and log into LastPass there; I've seen this happen.)
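On the "exact hash match" point: that style of check boils down to something like the toy sketch below, and a one-byte repack is enough to slip past it, which is why it only ever catches known, unmodified samples.

```python
import hashlib

# Toy version of "hook exec, compare the binary's hash against known-bad samples".
known_bad = {hashlib.sha256(b"pretend this is a known malware sample").hexdigest()}

def exec_allowed(binary: bytes) -> bool:
    return hashlib.sha256(binary).hexdigest() not in known_bad

sample = b"pretend this is a known malware sample"
repacked = sample + b"\x00"   # one-byte change: same behaviour, brand-new hash
print(exec_allowed(sample), exec_allowed(repacked))   # False True -- trivially bypassed
```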
Personally, I'm not a fan of the answers that amount to a cloud-hosted thin client. I use these at work, they're absolute technological marvels, but they suck.
The real answer is a zero trust network that implements:
- multi factor auth
- deployment approval gates
- end to end service encryption
- ALE for secrets and keys
- password managers
- WireGuard tunneling or equivalent
- read only production environments by default; major levers to pull in order to write (a toy sketch of that gate follows this list)
- fully partitioned environments, all of which partitioned away from the corporate network of laptops, printers, and security cameras
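To put the "read only by default" bullet in concrete terms, here's a deliberately toy sketch of the kind of gate a deploy tool could enforce. All the names and fields are made up; it's the shape of the check that matters:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class WriteRequest:
    user: str
    mfa_verified: bool
    approvals: Set[str] = field(default_factory=set)   # sign-offs from other humans
    break_glass_ticket: Optional[str] = None            # the "major lever" to pull

def may_write_to_production(req: WriteRequest) -> bool:
    # Reads need nothing special; writes need MFA, at least one other approver,
    # and an explicit break-glass ticket recorded for the audit trail.
    return (
        req.mfa_verified
        and len(req.approvals - {req.user}) >= 1
        and req.break_glass_ticket is not None
    )

print(may_write_to_production(WriteRequest("alice", mfa_verified=True)))          # False
print(may_write_to_production(WriteRequest("alice", True, {"bob"}, "INC-1234")))  # True
```

The point is that a write never happens just because a credential exists; it happens because MFA, a second human, and a recorded reason all line up at the same time.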
> - read only production environments by default; major levers to pull in order to write
Yes. In general, it's a good idea to split state management from business logic.
In the simplest thing, that means that eg you have a database that's separate from the rest of your site. But the principle applies more generally.
Useful for keeping things simple.
To go further: if you want to log something, you send it to a log server that is super simple and can only write to one location. So if someone takes over your business logic service, they can't write arbitrarily.
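A sketch of what such a deliberately dumb log sink could look like (the port and path are made up); the only thing it can ever be made to do is append lines to one file:

```python
import socketserver

LOG_PATH = "/var/log/app/append-only.log"   # the single location this service can touch

class AppendOnlyLogHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Even if the business-logic service is fully compromised, the worst it
        # can make this process do is write more log lines to this one file.
        with open(LOG_PATH, "a", encoding="utf-8", errors="replace") as log:
            for line in self.rfile:
                log.write(line.decode("utf-8", errors="replace"))

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 9514), AppendOnlyLogHandler) as server:
        server.serve_forever()
```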
You've already included the answer - "using their machines as remote terminals, doing most their actual work on some remote server".
The developer uses MFA (TOTP, Push Notification, Yubikey etc) into a virtual desktop inside the organisation (Citrix, VMWare Horizon, etc).
From there, the developer can SSH / whatever into their development environment - which is hosted "inside" the corporate network, or their cloud provider, via internal links.
All code, and dev boxes live "inside" the corporate network, and only keypresses, mouse movement, and screen diffs are sent back and forth.
Most remote access packages can prevent clipboard, USB device, file transfer etc.
If you need a password manager for work purposes, then it lives on the corporate managed network - not your remote laptop/desktop - and to be really paranoid - you only ever "copy/paste" those passwords - you don't type them in.
If you really want to lock it down further, give the remote workers dedicated corporate equipment that they only use to access the remote desktops, so you can prevent some things like screen capturing, and really lock down the software to prevent things like keylogging software/malware.
You also should have the entire development environment segregated from the "business" corporate network as well.
It's only really an issue if you want to have offline developers - in which case I don't have any thoughts ready to hand - (but would expect it to be a very locked down machine, possibly with an even more locked down VM inside it).
As someone who regularly uses multiple layers of Virtual Desktop -> Virtual Desktop -> Remote Desktop, provided the network can handle it (on both your local network, and the corporate network), it works surprisingly well.
This is both a misunderstanding of the problem and an attempt to solve an administrative problem using technology... which cannot really solve it.
Developers have nothing to do with this. It's a common practice in companies that have an "expensive" production environment (e.g. VMs rented from AWS) that developers never get any kind of access to the production environment. Ever. At all. No need to tie developers' hands by putting them behind a ton of unnecessary firewalls. They have no need for the sensitive information and shouldn't be burdened with protecting it.
The few people who do have access to company's "expensive" production environment are / should be very few people, most likely in the infra / DevOps department. These people do need to follow special protocol for communicating with the "expensive" environment, which, likely, doesn't happen all that often. Depends on the product, of course, but unlikely to be more than once a day, or even once a week.
----
PS. In many, many years of being in infra / system / automation I had never typed any passwords for any important services I had to use. They are usually difficult to type due to having all kinds of Unicode characters I wouldn't know how to reproduce w/o a little research. It's also very rare that they end up in system clipboard, since I usually end up using something like vi+tmux over SSH in Emacs' ascii-term to copy the password from somewhere and paste it somewhere else. So, stuff like AWS keys would have to be stolen by taking screenshots of my screen or something like that...
I mean, why on Earth would anyone deploy to production environment from their personal laptop? Normally, deployment is made from some sort of a testing / staging environment where the system was being tested / archived before shipping it to the next stop... It sounds like some kind of emergency / unplanned situation where a DevOps had to log into the remote system from their laptop.
Are you misunderstanding the term "DevOps"? You build it, you run it. If a DevOps team only runs things other developers have build, it is not a DevOps team.
In this case, DevOps shouldn't be rearchitecting, developing, or changing a password management solution's crypto, architecture, or design in any way. Not in the slightest.
I'm all in for VM based privilege separation, but that won't protect you from infected endpoint. Assuming this was a targeted attack, folks that achieved RCE on DevOp engineer's machine could have waited for her to authenticate and then inject keystrokes into VM, SSH, VNC, Remote Desktop, Citrix or whatever remote management system they're using.
Honestly, this HN thread is full of bad advice and factually incorrect patronizing. Okta-style system asking to accept every single permission would not have protected from an attack, because Okta caches and reuses authentication tokens. Clipboard snooping / keylogger detection wouldn't have worked because none of these solutions are robust against targeted attacks.
The only thing I can think of which would have (and should have) helped is an alert SOC / incident response team. Good luck finding one, though.
Glad to see someone else with the same reaction, because a lot of this advice is... interesting, like people who are worried about keyloggers but think the clipboard is safe.
In my experience, I find that working with Virtual Desktops is the most frustrating user experience as a developer you could have. I prefer working in containerized environments which are more efficient and do not require the same amount of configuration processes as a Virtual Desktop.
For most development work, this would probably cause a serious productivity drop, but it definitely makes sense for the portion of the work that involves accessing critical production resources. For DevOps roles, that could well be the majority.
My company operates in a Windows centric industry and our software team uses it as well.
It turns out you don't need administrative privileges for a lot of dev work (installing and running vs code, python, node, many databases, etc...).
My experience is that sudo apt-get install is a Linux distro thing; most programs don't need special permissions as long as they are installed in user scope.
So, answering your question, our devs are like regular users: when they need to install something that needs privileges they call IT. Surprisingly, that rarely happens.
Privilege escalation on Windows is super easy though, every red teamer I know has a bunch of ready to use exploits (most of them public) up their sleeve. And it is virtually impossible to get a good baseline of a developer's machine, so I'm pretty sure every SOC out there is simply allowlisting huge swaths of your software.
You can sorta kinda harden these systems, but that would only work against common malware. And you generally can't isolate senior engineers in their own little DMZ, so any RAT on their machines usually leads to catastrophic consequences.
Privilege escalation is a red herring. Everything you need to compromise production from a developer PC is either available for a regular user or not available at all.
Anyone with this level of access should know not to run random github projects or npm install in the most sensitive context. The choice is between easy or secure. You can't have both and that's a reality one has to accept. It's not that difficult to spin up a VM when you want to fuck around and isolate it from sensitive data.
Especially as a "DevOps engineer", gatekeeping and providing least privilege access is in the job description. I understand getting lazy and relaxing the rules in some contexts but not when running a password manager on this scale, unacceptable.
Don't do development on any system that has access to production. Develop on dev lane resources.
Production access should only be allowed from a locked down system with no open ports and a very small whitelist set of software. Operations for said system should be simple. Deploy version x with necessary provisioning. Backup system. Restore system. View monitoring and logs.
You must minimize the surface area connected to production.
On the network side the system with the keys for production is also firewalled off from general Internet access. So potential malware can't phone home.
> That a password was captured by a keylogger on a Dev Ops home computer shows that they don't understand how to secure remote computers
I tend to disagree. The potential for any single employee to do substantial harm to any business is incredible and designing a system to make that not possible is nigh impossible.
It's neither the humans' nor the institution's fault. It's just that systems involving humans are incredibly hard stuff. You are constantly weighing the rigidity of the system against the potential to do harm within the system. How much slack can it give so that humans can do their jobs in a complicated world?
If you go down the path of "we need a process for everything" you are going to end up with a lot of processes. The inherent problem with that approach is that (for most businesses that are not exactly Amazon) a lot of processes cannot feasibly be systematically enforced and rely on being honoured by a human at juncture points, to an extent that makes you very uncomfortable when you consider what mechanisms your system has for when they just don't.
As of now, most systems simply rely on humans doing the right thing at the right time, for no other reason than it being the right thing and them knowing that, for the whole world to not go up in flames.
If a password was captured by a key logger, rather than a session token being stolen, they didn't implement 2FA for this login.
They are also talking about a home computer. In my company, VPN access is limited to trusted devices; therefore, sensitive systems can only be accessed from a corporate machine.
Security at LastPass seems substandard for a company storing security credentials. Unfortunately, from my experience, this is relatively common, and regulators need to start issuing significant fines or prison sentences for this to improve. Unfortunately, it is too easy for CTO/CISO to find a scapegoat and avoid scrutiny.
There isn't enough information to tell. With a keylogger you can steal the password every time it's used; MFA will just prevent / limit its use. So it doesn't tell us anything about their MFA implementation, or whether the attackers reused a session or did some other trick (e.g. time-based tokens can by design be used multiple times within a given time period, or you could hijack the first MFA token while it's being sent to the server and present an error; now you can use this token yourself while the user successfully logs in with the second token).
Once you get password vault, it's very likely that you also get creds necessary to set up VPN. Besides, there are ways to bypass (poorly implemented) VPN and relying on VPNs isn't even the best practice nowadays.
I agree with you that a few CISOs getting sentences would be the fastest way to raise the bar across the tech sector, but that's never going to happen.
A secure MFA implementation requires a second device to authorise the login. As you correctly point out, generating a code and then entering it on a compromised machine is SFA, as it treats the token as just a second password. If MFA is implemented correctly, the only attack vector should be the session token.
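A rough sketch of what "the second device authorises the login" looks like with number matching (all names here are illustrative, not any particular vendor's API): the login screen shows a number, and the user has to type that number into the authenticator on a separate device, so a keylogger on the compromised machine captures nothing it can reuse.

```python
import secrets

pending_logins = {}   # login session id -> the number shown on the login screen

def start_login(session_id):
    challenge = secrets.randbelow(90) + 10     # two-digit number displayed to the user
    pending_logins[session_id] = challenge
    # ...push a notification to the enrolled phone asking it to confirm this session...
    return challenge

def approve_from_phone(session_id, number_typed_on_phone):
    # The approval only ever arrives from the second device, never from the
    # (potentially keylogged) machine the login is happening on.
    return pending_logins.pop(session_id, None) == number_typed_on_phone
```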
LastPass has a different threat model than most companies. They should hold themselves to a higher standard than everyone else. An employee's private PC should not have access to production data. They should use a VPN and multi-factor auth, and everyone in the company should be trained.
Restricting access to corporate environments from trusted machines is trivial using any form of MDM. No one should be working from their personal machines. That's gross negligence.
> Restricting access to corporate environments from trusted machines is trivial using any form of MDM.
Until somebody pulls out their personal cell phone, and takes a photo of a screen containing highly confidential data, to then send it to someone else, because, dang it, they had to get something done NOW and it seemed very convenient.
Some consumer electronics companies have security guards enforcing that no cellphone gets on premises. If you want security you can get security, but is the price worth paying? It all depends on the cost of a breach. Given that LastPass still has customers, maybe they estimated their costs just right ;) That cannot be said of some certificate authorities...
To me it says their security is not up to par, and that employees were allowed to take secrets home. Which is fine to a point, but it has to be a company managed, remote-wipable and locked down system if it's people that hold the literal keys to the secure castle.
Same with the Canadian ban on TikTok: why are they even allowed to have phones they can install arbitrary software on?
BYOD and personalization is fine to a point, but only if you don't have high level access.