We schedule log reviews just like we schedule backup tests. (Similar stuff gets caught during normal troubleshooting, but reviews are more comprehensive.)
It only takes one debug statement leaking to prod - catching this has to be a process, not a one-off event.
Create a user with an extremely unusual password and create a script that logs them in once an hour. Use another script to grep the logs for this unusual password, and if it appears fire an alert.
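Roughly, the idea as a Python sketch - the endpoint, field names, log path, and canary values are all invented for illustration:

    import sys
    import urllib.parse
    import urllib.request

    # Hypothetical values - substitute your real login endpoint and log path.
    LOGIN_URL = "https://example.com/login"
    CANARY_USER = "canary-user"
    CANARY_PASSWORD = "correct-horse-battery-staple-8c1f"  # long and unique, so false positives are unlikely
    LOG_PATH = "/var/log/app/app.log"

    def login_canary():
        # Log the canary user in the same way a real client would.
        data = urllib.parse.urlencode(
            {"username": CANARY_USER, "password": CANARY_PASSWORD}
        ).encode()
        urllib.request.urlopen(LOGIN_URL, data=data, timeout=10)

    def scan_logs():
        # If the canary password ever shows up in the logs, credentials are leaking.
        with open(LOG_PATH, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if CANARY_PASSWORD in line:
                    print(f"ALERT: canary password at {LOG_PATH}:{lineno}", file=sys.stderr)
                    return True
        return False

    if __name__ == "__main__":
        login_canary()
        if scan_logs():
            sys.exit(1)  # non-zero exit so cron/monitoring can fire the alert

Run it from cron once an hour; the non-zero exit status is what the alerting hooks into.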
Security reviews are important but we should be able to automate detection of basic security failures like this.
It would also be a good idea to search for the hashed version of that user’s password. It’s really bad to leak the unencrypted password when it comes in as a param, but it’s only marginally better to leak the hashed version.
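One caveat: that only works directly if the hash is an unsalted digest. Salted schemes like bcrypt produce a different string on every run, so there is no fixed pattern to grep for. Assuming an unsalted digest purely for illustration, the search list from the sketch above could be extended like this:

    import hashlib

    CANARY_PASSWORD = "correct-horse-battery-staple-8c1f"

    # Works only if the app hashes without a per-password salt (bare SHA-256/MD5).
    NEEDLES = [
        CANARY_PASSWORD,
        hashlib.sha256(CANARY_PASSWORD.encode()).hexdigest(),
        hashlib.md5(CANARY_PASSWORD.encode()).hexdigest(),
    ]
    # ...then scan the logs for any of NEEDLES instead of just the cleartext.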
This only works if your automation exercises every possible code path. If you're logging passwords during some obscure error in the login flow, an automated login very likely won't catch it.
Log review is done for every single project at my workplace too (Walmart Labs). So I don't think this is a novel idea. And it does not stop there. Our workplace has a security risk and compliance review process which includes reviewing configuration files, data on disk, data flowing between nodes, log files, GitHub repositories, and many other artifacts to ensure that no sensitive data is being leaked anywhere.
Any company that deals with credit card data has to be very, very sure that no sensitive data is written in the clear anywhere. Even while in memory, the data needs to be hashed and the cleartext erased as soon as possible. From what I have heard from friends and colleagues, other big companies like Amazon, Twitter, Netflix, etc. have similar processes.
It's novel to me; I've never worked anywhere that required high-level PCI compliance or that scheduled log reviews. Ad-hoc log review, sure. I think it's a fantastic idea regardless of PCI compliance obligations.
We just realised the software I'm working on has been writing RSA private keys to the logs for years. Granted, it was at debug level and only when using a rarely-used functionality, but still.
We also do log reviews, but 99% of the time they simply complain about the volume rather than the contents.
Do you enable debug logging in production? In our setup we log at info and above by default, but then have a config setting that lets us switch to debug logging on the fly (without a service restart).
This keeps our log volume down, while letting us troubleshoot when we need it. This also gives us an isolated time of increased logging that can be specifically audited for sensitive information.
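One way to wire up the on-the-fly switch using Python's stdlib logging - the SIGUSR1 trigger is just one option (a config poll or an admin endpoint would work equally well), and the names here are made up:

    import logging
    import signal

    logging.basicConfig(level=logging.INFO)  # info and above by default
    log = logging.getLogger("app")

    def toggle_debug(signum, frame):
        # Flip between INFO and DEBUG without a service restart: kill -USR1 <pid>
        root = logging.getLogger()
        new_level = logging.DEBUG if root.level != logging.DEBUG else logging.INFO
        root.setLevel(new_level)
        log.warning("log level switched to %s", logging.getLevelName(new_level))

    signal.signal(signal.SIGUSR1, toggle_debug)  # Unix-only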
(We caught ourselves doing it 4-5 months back, and went through _everything_ checking... It was only a random accident that brought it to the attention of anyone who bothered to question it, too... Two separate instances, by different devs, of 'if (DEBUG_LEVEL = 3){ }' instead of '== 3' - both missed by code reviews as well...)
I know it's irrational, but I really dislike Yoda notation. Every time I encounter it while reading code I have to take a small pause to understand it; I don't know why. My brain just doesn't like it. I don't think I'm the only one either; I've seen a few coding styles in the wild that explicitly disallow it.
Furthermore, any decent modern compiler will warn you about an assignment in a condition and ask you to add an extra set of parens if you really mean it, so I don't think Yoda notation is worth it anymore. And of course Yoda notation won't save you if you're comparing two variables (while the warning will).
I don't think "Yoda notation" is good advice. How do you prevent mistakes like the following with Yoda notation?
if ( level = DEBUGLEVEL )
When both sides of the equality sign are variables, the assignment will succeed. Following Yoda notation provides a false sense of security in this case.
As an experienced programmer I have written if-statements so many times in life that I never ever, even by mistake, type:
if (a = b)
I always type:
if (a == b)
by muscle memory. It has become second nature. Unless of course I really mean it, like the classic 'while ((c = getchar()) != EOF)'.
One way to not write any bugs is to not write any code.
If you must write code, errors follow, and “defence in depth” applies. Use an editor that serves you well, use compiler flags, use your linter, and consider Yoda notation, which catches some classes of errors - but yes, not every error.
These kinds of issues are excellent commercials for why the strictness of a language like F# (or OCaml, Haskell, etc.) is such a powerful tool for correctness:
1) Outside of initialization, assignment is done with the `<-` operator, so you can only get confused in the opposite direction (an intended assignment accidentally being parsed as a boolean comparison).
2) Return types and inputs are strictly checked, so an accidental assignment (which returns `void`/`unit`) would not compile where a bool is expected.
3) Immutable by default, so even if #1 and #2 were somehow bypassed, the compiler would still halt and complain that you were trying to write to something unwritable.
Any of the above would stop compilation outright, long before the code ever hit a code review or got into production... Correctness from the ground up prevents whole categories of bugs, at the cost of enforced coding discipline :)
Note these are only constant pointers: your data is still mutable if the underlying data structure is mutable (e.g. HashMap). I haven't used Java in a few years, but I made religious use of final, even on locals and params.
I don't really see how unless you've never actually read imperative code before; either way you need to read both sides of the comparison to gauge what is being compared. I'm dyslexic and don't write my comparisons that way and still found it easy enough to read those examples at a glance.
But ultimately, even if you do find it harder to parse (for whatever reason), that's only a training thing. After a few days or weeks of writing your comparisons that way, I'm sure you'll find it more jarring to read them the other way around. Like all arguments about coding style, what makes the most difference is simply what you're used to reading and writing, rather than the actual code layout. (I say this as someone who's programmed in well over a dozen different languages over something like 30 years - you just get used to reading a different coding style after a few weeks of using it.)
Often when I glance over code to understand what it is doing I don't really care about values. When scanning from left to right it is easier when the left side contains the variable names.
Also I just find it unnatural if I read it out loud. It is called Yoda for a reason.
But again, none of the problems you've described are things training can't fix. Source code itself doesn't read like how one would structure a paragraph for human consumption, but we programmers learn to parse source code because we read and write it frequently enough. Just like how one might learn a human language by living and speaking in a country where it's spoken.
If you've ever spent more than 5 minutes listening to the arguments and counterarguments regarding Python whitespace vs C-style braces - or whether the opening C-style brace should trail your statement or sit on its own line - then you'd quickly see that all these arguments about coding styles are really just personal preference based on what that particular developer is most used to (or pure aesthetics about what looks prettiest, but that's just a different angle on the same debate). Ultimately you were trained to read
if (variable == value)
and thus equally you can train yourself to read
if (value == variable)
All the reasons in the world why you can't or shouldn't are just excuses to avoid retraining yourself. That's not to say I think everyone should write Yoda-style code - that's purely a matter of personal preference. But my point is that arguing your preference as some tangible legibility issue is being dishonest with yourself and every other programmer.
There are always assumptions being made, no matter what you do. But "uppercase -> constant" is such a generic and cross-platform convention that it should always be followed. This code should never have passed code review for this glitch alone.
Rust has an interesting take on that: it's not a syntax error to write "if a = 1 { ... }", but it will fail to compile, because the expression "a = 1" evaluates to nothing (the unit type) while "if" expects a boolean, so you get a type-check error.
Of course, a consequence of that is that you can't chain assignments (a = b = c), but it's probably a good compromise.
Well, in Rust you couldn't have = return the new value in general anyway, because the value has been moved. So even if chained assignment did work, it'd only work for Copy types, and really, a feature that only saves half a dozen characters, on a fraction of the assignments you do, and that can only be used a fraction of the time, doesn't seem worth it at all.
Unless you follow a zero-warning policy, compiler warnings are almost useless. If you have a warning that should be ignored, add a pragma disable to that file. Or disable that type of warning project-wide if it's too spammy.
I've been working on authentication stuff for the last two weeks and the answer is "more than you'd like".
But luckily it's something we cover in code reviews, and the logging mechanism has a "sensitive data filter" that defaults to on (and alerts if it's ever off in production).
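A minimal sketch of what such a filter can look like, using Python's stdlib logging - the patterns and names are invented, and a real filter would be tuned to the application:

    import logging
    import re

    class SensitiveDataFilter(logging.Filter):
        # Scrubs anything that looks like a credential before it reaches a handler.
        PATTERNS = [
            re.compile(r"(password|passwd|secret|token)\s*[=:]\s*\S+", re.IGNORECASE),
            re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----.*", re.DOTALL),
        ]

        def filter(self, record):
            msg = record.getMessage()
            for pattern in self.PATTERNS:
                msg = pattern.sub("[REDACTED]", msg)
            record.msg, record.args = msg, None  # replace the formatted message
            return True  # never drop the record, just scrub it

    handler = logging.StreamHandler()
    handler.addFilter(SensitiveDataFilter())  # on by default; alert if removed in prod
    logging.getLogger().addHandler(handler)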