Yeah, maybe at some point -- I'd need to learn about all the formal stuff first, it's new to me. For now it's cocotb test benches, and I also run it through the Klaus functional tests (https://github.com/Klaus2m5/6502_65C02_functional_tests), which are pretty comprehensive. There are definitely edge cases that I only found when actually running it on physical hardware, though (e.g., I ran into some IRQ-servicing issues), so it's not easy to be 100% sure.
InAuth | Santa Monica, CA or Remote | Site Reliability Engineer
InAuth is looking for SREs to help operate and scale our real-time mobile and browser security platform. We use a variety of technologies and are looking for people with experience in:
The role has a lot of opportunity for growth, as the SRE team is new. We have an office in Santa Monica, CA, but we're also willing to have someone work remotely (if they're the right fit).
If you are interested or have questions email Chris Moos at: moos [at] inauth.com
One of the main differences is this one uses lookup tables for faster decoding of Huffman compressed data as opposed to using a tree (which is what the stdlib one uses). It also offers some additional encoding options for users that want more control of header field indexing.
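To make the table idea concrete, here's a toy sketch (a made-up three-symbol code, nothing like the real HPACK table or either library's internals): rather than walking a tree one bit at a time, the decoder indexes an array with the next few bits and gets both the symbol and its code length in a single lookup.

```java
public class TableHuffman {
    // Toy canonical code: A = 0 (1 bit), B = 10, C = 11 (2 bits each).
    static final int MAX_BITS = 2;
    static final char[] SYMBOL = new char[1 << MAX_BITS];
    static final int[] LENGTH  = new int[1 << MAX_BITS];
    static {
        // Every table slot whose leading bits match a code maps to that code's symbol.
        SYMBOL[0b00] = 'A'; LENGTH[0b00] = 1;   // prefix 0
        SYMBOL[0b01] = 'A'; LENGTH[0b01] = 1;   // prefix 0
        SYMBOL[0b10] = 'B'; LENGTH[0b10] = 2;
        SYMBOL[0b11] = 'C'; LENGTH[0b11] = 2;
    }

    // Decode a string of '0'/'1' bits: peek MAX_BITS, emit the symbol,
    // consume only that symbol's code length.
    static String decode(String bits) {
        StringBuilder out = new StringBuilder();
        int pos = 0;
        while (pos < bits.length()) {
            int peek = 0;
            for (int i = 0; i < MAX_BITS; i++) {        // zero-pad past the end
                peek <<= 1;
                if (pos + i < bits.length() && bits.charAt(pos + i) == '1') peek |= 1;
            }
            out.append(SYMBOL[peek]);                   // one table hit per symbol
            pos += LENGTH[peek];
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("011011"));  // 0, 11, 0, 11 -> ACAC
    }
}
```

A real decoder typically peeks more bits at a time and handles codes longer than the table width with multi-level tables, but the win is the same: one array index replaces a per-bit tree traversal.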
Yup! That documentation is too daunting IMO but it's complete and not many steps, despite all the words. Especially if you're a Go programmer already. (just go get the code review tool)
I'd start with filing a bug against Go to track the overall process.
My issue is with the application running in an environment like an Android app, where an attacker could easily change logging and see the sensitive information, as opposed to having to do a more sophisticated attack to get to the decrypted data (finding the key, patching class files, etc.).
I've decompiled Android APKs before, it wasn't a big technical barrier. Just extract the APK using any zip utility, use Dex2Jar, and then run this: http://jd.benow.ca
I certainly wouldn't patch class files. I'd just extract the private key, then write a new Java application, utilise the same libraries, and point it at the XML. Boom, decrypted.
Is changing a text file a little easier? Perhaps. But extracting the private key is only slightly more work, and the benefits of being able to debug are worth it, since the security arguments are pretty weak, borderline non-existent.
If you're really paranoid about this just hash log4j.properties and check it on startup. Then crash out with "corrupted log4j.properties, please reinstall" if it has been modified.
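A minimal sketch of that startup check, assuming SHA-256 and that the expected digest is baked in at build time (the temp file in main just stands in for the real log4j.properties):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class ConfigGuard {
    static String sha256Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // True if the config file still matches the hash recorded at build time.
    static boolean configIntact(Path config, String expectedHex) throws Exception {
        return sha256Hex(Files.readAllBytes(config)).equals(expectedHex);
    }

    public static void main(String[] args) throws Exception {
        Path cfg = Files.createTempFile("log4j", ".properties");
        Files.write(cfg, "log4j.rootLogger=ERROR, stdout\n".getBytes("UTF-8"));
        // Stand-in for the constant you'd embed when publishing:
        String expected = sha256Hex(Files.readAllBytes(cfg));

        System.out.println(configIntact(cfg, expected));  // unmodified -> true

        Files.write(cfg, "log4j.rootLogger=DEBUG, stdout\n".getBytes("UTF-8"));
        System.out.println(configIntact(cfg, expected));  // tampered -> false
    }
}
```

In the real app you'd run this before any XML decryption and bail out with the "please reinstall" message on a mismatch.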
I've reverse engineered plenty of Android apps before and yeah, unpacking it and seeing .class files is pretty straightforward. More sophisticated than modifying a text file, but still pretty easy.
Extracting the private key though is not that easy if it is obfuscated well. The key isn't just stored as a static variable and used as-is. I think the overall thing I'm trying to explain is:
* There are different classes of attackers
* Everything can be broken, but we want to stop as many people as we can
* Layering security is a good thing
* Is it really necessary to have the library log the information, as opposed to letting applications decide?
Fair enough. Did you see my point about hashing the contents of log4j.properties? I assume you won't be modifying it after you publish, since you don't want debugging enabled. As long as you check the hash before you decrypt any XML, this should address your concerns.
In order for someone to then abuse the debugging functionality on Santuario they would need to modify your APK which is frankly just as big of a barrier as finding and pulling the private key(s).
It would be possible but not that straightforward: you can change how log4j loads/finds its properties file, for example, so it would be hard to enforce.
It's pretty easy to unpack an APK, change the log4j level to DEBUG, repack, and run, versus unpacking the APK, disassembling the class files, digging through them, finding how the key is stored and the deobfuscation routine, etc.
It probably has something to do with NUMA. Java 7 has a NUMA-aware allocator that makes it more likely for threads to access memory on their local node. I'd be interested to see the golang test running with multiple processes, each one pinned to an individual NUMA node.
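For anyone wanting to try that, numactl is the usual tool; this sketch assumes a two-node machine and uses ./bench as a hypothetical name for the Go test binary:

```shell
# Pin one copy of the Go test to each NUMA node, CPUs and memory together
# (hypothetical binary name ./bench; assumes numactl is installed).
numactl --cpunodebind=0 --membind=0 ./bench &
numactl --cpunodebind=1 --membind=1 ./bench &
wait

# For comparison, the Java 7 side gets its NUMA-aware allocation from the
# parallel collector via:
#   java -XX:+UseParallelGC -XX:+UseNUMA Bench
```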
1) How much do the founders need you? This will be helpful in negotiations.
2) How will your role in the company affect its success?
3) How important is salary to you?
All the above will give you an idea of how much equity you can get (or are worth). Use the answers to these questions to help you negotiate.
If salary is really important to you then expect less equity. If you really believe in the company and are willing to work to make it successful then maybe you should go towards more equity and take less salary. This will pay off (much more) in the long run.
I would recommend that you let them make you an offer first. This will give you a baseline on what their expectations are. They might say, for example, $60K/year and 20,000 options. You need to ask them about the options:
1) What percentage of the company is the employee options pool? This might be 10 or 20%.
2) How many shares are in the option pool, or, put differently, what percentage of the pool those 20,000 options represent? This will help you determine what actual percentage of the company you may be entitled to.
3) What is the vesting schedule?
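To see how those answers combine, here's the arithmetic with made-up numbers (only the 20,000 options figure comes from the example above; the pool size and pool percentage are hypothetical):

```java
public class EquityMath {
    public static void main(String[] args) {
        // Hypothetical numbers for illustration:
        double poolPctOfCompany = 0.15;   // pool is 15% of fully diluted shares
        long sharesInPool = 1_000_000;    // total shares reserved in the pool
        long yourOptions = 20_000;        // the offer from the example above

        double pctOfPool = (double) yourOptions / sharesInPool;   // 0.02 -> 2% of the pool
        double pctOfCompany = pctOfPool * poolPctOfCompany;       // 0.003 -> 0.3% of company

        System.out.printf("%.1f%% of pool, %.2f%% of company%n",
                pctOfPool * 100, pctOfCompany * 100);
    }
}
```

So 20,000 options can mean very different ownership depending on the pool size and the pool's share of the company, which is exactly why you need answers to questions 1 and 2 before the number means anything.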
Remember one thing when negotiating: You need to ask or you will not receive :).
An in-app browser isn't the only way to provide OAuth, on most platforms you can invoke a URL and the browser application will open on the phone.
For example, on iPhone you can invoke an HTTP(S) URL, your app will exit, Mobile Safari opens, and you can then log in knowing that what you're typing is as secure as the OS/app sandboxing. Take a look at how the Facebook iPhone SDK flow works; it's actually quite nice and very easy for users.
In reality, if you want to know that you aren't giving your username/password to a malicious third party, as an end user you have to deal with a little inconvenience, such as being redirected in your browser to an SSL page that you trust.
If the "third party" is actually a malicious native application, they can just simulate the launch of Safari, and most users probably won't even notice.
In this threat model, OAuth is practically a security no-op and a huge usability negative.