Hacker News | pbar's comments

Unfortunately with SSH specifically, the dissectors aren't very mature - you only get valid parsing up to the key-exchange completion message (NEWKEYS), and after that, even if the encryption is set to `none` via custom patches, the rest of the message flow is not parsed.

That seems to be because dumping SSH session keys is not at all a common thing. It's just a matter of effort, though - if someone put in the time to improve the SSH story for the dissectors, most of the groundwork is there.
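For anyone poking at this, the dissected portion is still easy to get at from the CLI; a sketch with tshark (the capture filename and port here are made up):

```shell
# Decode traffic on a made-up nonstandard port as SSH and dump the full
# dissected detail; output stays meaningful only up to NEWKEYS.
tshark -r ssh.pcap -d tcp.port==2222,ssh -O ssh

# List just the SSH message codes the dissector recognized.
tshark -r ssh.pcap -T fields -e ssh.message_code -Y ssh.message_code
```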


Interesting, I thought it was possible to decrypt SSH in Wireshark a la TLS, but it seems I'm mistaken. It still would have been my first go-to, likely with encryption patched out as you stated. With well-documented protocols, it's generally not too difficult to decipher the raw interior bits as needed, with the orientation provided by the dissected pieces. So let me revise my statement: this probably would have been a fairly easy task with protocol-analysis-guided code review (or simply CR alone).
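For the TLS side, the usual trick is the NSS key log: point Wireshark's tls.keylog_file preference at a file of per-session secrets. Python's ssl module can write one directly; a small sketch (the path is just a placeholder):

```python
import os
import ssl
import tempfile

# ssl.SSLContext.keylog_filename (Python 3.8+, OpenSSL 1.1.1+) writes
# per-session secrets in the NSS key log format that Wireshark's
# tls.keylog_file preference understands.
keylog_path = os.path.join(tempfile.mkdtemp(), "tls.keylog")  # placeholder path

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path  # every handshake on this context gets logged

# Any connection made with ctx now appends its secrets to keylog_path,
# so a packet capture of that session can be decrypted after the fact.
print(ctx.keylog_filename)
```

Browsers do the same thing via the SSLKEYLOGFILE environment variable.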

It all depends on the key exchange mechanism (KEM) used at the start of the TLS session. Some KEMs have a property called “perfect forward secrecy” (PFS), which means it’s not possible to decrypt the TLS session after the fact unless one of the nodes logs the session key(s). Ephemeral Diffie-Hellman (DHE) and ephemeral elliptic-curve Diffie-Hellman (ECDHE) are two key exchanges that provide a PFS guarantee.
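A toy sketch of why the ephemeral variants give PFS: each side picks a fresh secret per session, both derive the same shared key, and the secrets can be thrown away afterwards. The parameters here are deliberately tiny and insecure, purely illustrative:

```python
import secrets

# Toy finite-field Diffie-Hellman. p = 2**127 - 1 is prime but far too
# small for real use; TLS uses standardized 2048-bit+ groups or ECDHE.
p = 2**127 - 1
g = 3

# Each side picks a fresh (ephemeral) secret for this session only.
a = secrets.randbelow(p - 2) + 1  # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 1  # Bob's ephemeral secret

A = pow(g, a, p)  # sent in the clear
B = pow(g, b, p)  # sent in the clear

# Both sides arrive at the same shared secret without ever sending it.
assert pow(B, a, p) == pow(A, b, p)

# The PFS part: once a and b are deleted after the session, a recorded
# transcript (A, B, ciphertext) is not enough to recompute the secret.
del a, b
```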

Was it intentional to reply with another no true Scotsman in turn here?


Yeah, I was also reading their response and was confused. "Creation comes from the soul, which we all know machines, and business people, don't have" ... "far deeper than you'll ever know", I mean, come on.


If you have to ask, then you missed it


Ah, quick and painless then


Depends if you have any useful info to them. Just a tip: they can sense your heartbeat and know when you are lying


It’s all fun and games until your KDC goes down!


Interesting, what were some of those unique optimizations?


It’s a spreadsheet world and we’re just living in it ;)


To their point, S3 buckets must be uniquely named globally, across all of AWS


Yeah, can’t escape things with global namespaces.


It’s always surprising to folks, but true, that Houston has a wealth of arts/culture/dining, and even a modicum of public transit (the metro rail, heh). Entertainment could be better, but the rest blow Austin out of the water!


As a transplant to Houston, I will never understand why Austin became a tech center and Houston has not. There is a lot of raw tech talent here, incubators, etc. No matter what policies are in place or what investments are made, it never seems to take hold.


As someone who's spent time in both Austin and Houston, I'd agree with the sentiment that Houston is pretty objectively a much better place to live. These phenomena are probably more driven by superficial appearances, though, and that's where Austin has always had an edge. It's hilly, it's perceived as a college town, and on the surface it has a lot of access to nature. It looks better on the surface than Houston to an observer who hasn't lived in both places. Really, it's just like the Bay Area -- it looks good, but it actually is a very rough and empty place to live for most folks.


A lot of this stuff is just sensitive dependence on initial conditions.

I think a huge element has to be the success of SXSW. That has given a lot of people the personal positive exposure to Austin that makes them think moving there is plausible.


Wouldn't surprise me, only because the Houston metro is >3x bigger than Austin's


Single responsibility - that same container is gonna get shipped to prod/etc. I would hope the database isn’t in the container there!


I would think the database data does not live in the database container but is mounted into it?

If you update your database version, you do not ship the database data to production either.
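That's the usual pattern, yes - the image holds the engine, a named volume holds the data. A minimal sketch, assuming Postgres (service and volume names are made up):

```yaml
# docker-compose.yml sketch: the data lives in a named volume, not in the
# image, so upgrading the postgres image does not touch the data files.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder
    volumes:
      - dbdata:/var/lib/postgresql/data

volumes:
  dbdata:
```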


This kind of thinking is why engineering departments have tech debt.


I get your point but don't think you're getting mine. In a bigger project / organization? Yes, let's have those processes and tools. But for simple apps as described in the article? Use a correspondingly simple solution. As always in software development, it's all about context.


I get your point, and I simply disagree. I and many others have had success using containerized devenvs on projects both large and small, and have likewise felt some pain with respect to repeatability when not - especially with the Python stack. Containers are synonymous with repeatability. Your future self is just another collaborator, and they’ll appreciate it down the line when they’ve got a new laptop and new env, for example ;)
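For what a minimal version of that looks like, a sketch of a containerized Python devenv (versions and layout are illustrative, not a recommendation):

```dockerfile
# Pin the interpreter and base OS so every collaborator (and future you)
# gets the same environment regardless of host machine.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer stays cached until the
# dependency list actually changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "-m", "app"]
```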


I agree. With you and with the other opinions (why should only one perspective be right?).

I've been the lead developer on teams where I introduced Docker to solve consistency/reproducibility issues in AWS and Azure.

I've also done smaller applications in DotNet Core, Go, Node, Python, and Ruby. In those cases I've used other alternatives, including:

- Known Linux version, with git push-to-deploy (my favourite)

- Packer (the server), with embedded codebase (still quite simple)

- Docker (for most non-trivial deployments)

- Known Linux version, with chef or ansible (as an alternative to Docker)

- Terraform the machine, upload the codebase, run scripts on the server (ugh)

Every method had its place, time, and reason. If possible, for simplicity, I'd go with the first option every time, and then the others in that order.
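The push-to-deploy option above is typically just a bare repo plus a post-receive hook on the server; a sketch with made-up paths:

```shell
# On the server: a bare repo, a live directory, and a post-receive hook.
mkdir -p /srv/site
git init --bare /srv/site.git
cat > /srv/site.git/hooks/post-receive <<'EOF'
#!/bin/sh
# Check the pushed commit out into the live directory on every push.
GIT_WORK_TREE=/srv/site git checkout -f main
EOF
chmod +x /srv/site.git/hooks/post-receive

# On the dev machine: deploying is just a push.
git remote add prod ssh://user@server/srv/site.git
git push prod main
```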

The thing is, though, I may have an order of preference but that is totally overridden by the requirements of the project and whether or not the codebase is ever to be shared. For solo projects and small sites, I've not benefited from Docker as I have never had any server/OS issues (and I've been doing dev stuff for decades).

However the moment there was a need for collaborators or for pulling in extra dependencies (and here is the crunch point for me) such as headless browsers or other such 'larger' packages, then I would move on to either Packer/Terraform for fairly slow-changing deployment targets or Docker for fast-changing targets, as otherwise I inevitably started to find subtle issues creeping in over time.

In other words keep it simple while (a) you can and (b) you don't need to share code, but complexity inevitably changes things.

