whiterknight's comments

1000 lines are easier to secure than 5 million lines


“You can write software that has no obvious bugs or you can write software that obviously has no bugs.”

I think that was EWD?


You can, of course, also write programs that have known bugs. Or even programs that have bugs that obviously shouldn't be there, but are anyway.


Not if 1000 lines are written by you alone and not checked by anyone else vs 5 million lines of code written by thousands of people and checked by countless more. Linux is probably more secure than 1000 lines of C code from a junior developer.


I think this is vastly overrated:

- how much code actually gets read outside of top 2-3 projects?

- how many of those readers can detect security problems?

- why are others inherently better at detecting problems than the author?

Wouldn’t 1000 lines read by 2 people be better than a million read by 10?


Not if you’re the only author!


The point is that many of the problems Rust aims to solve become much less relevant. For example, if your program only does 10 mallocs and frees, you can probably track down the memory bugs.
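A minimal, purely illustrative sketch of what that looks like: when the whole allocation story fits on one screen, a reviewer can pair every malloc with its free by hand, which is the kind of checking the borrow checker automates.

  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical example: a program small enough that every allocation
     can be audited by eye. Each malloc below has exactly one free. */
  static char *copy_string(const char *src) {
      char *dst = malloc(strlen(src) + 1);   /* allocation */
      if (dst)
          strcpy(dst, src);
      return dst;
  }

  int main(void) {
      char *a = copy_string("hello");        /* owns allocation #1 */
      char *b = copy_string("world");        /* owns allocation #2 */
      /* ... use a and b ... */
      free(a);                               /* releases allocation #1 */
      free(b);                               /* releases allocation #2 */
      return 0;
  }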


I agree that these techniques help you write better code, but enforcing something is better than not. Obviously it’s a spectrum, so I wouldn’t say doing that is bad, but it does not really mean Rust is irrelevant.

And Rust brings more to the table than just the borrow checker.


Sure, it just invalidates the impending-doom, ban-C-programming narrative.


I’m not sure I would characterize it this way, but it doesn’t satisfy the criteria of “memory safety by default,” which is what more and more organizations are desiring.

Time will tell.


Side note: tell your startup to switch its “hardware with Ubuntu Linux inside” to BSD. You will have a much more stable and simple platform that can last a long time.


The recommendation is solid, but FWIW no one looking for stability would choose Ubuntu, among the Linuxen!


I promise, you will be just fine without the security updates.


This is probably misguided. Apple includes the OS version number in the user agent, so an attacker can actually pay to have code delivered only to users with vulnerable versions of macOS (advertising marketplaces allow bidding by user agent).
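A rough, purely illustrative sketch of how targeting could key on that field (not any particular ad platform's API): pull the OS version out of the user-agent string and only bid when it matches a vulnerable release. As noted further down, Firefox and Safari freeze the value at 10.15, so this only works for browsers that report the real version.

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical sketch: extract the macOS version that an ad platform
     would see in a browser user-agent string. The separator is '_' in
     some browsers and '.' in others. */
  static int parse_macos_version(const char *ua, int *major, int *minor) {
      const char *p = strstr(ua, "Mac OS X ");
      if (!p)
          return 0;
      p += strlen("Mac OS X ");
      return sscanf(p, "%d_%d", major, minor) == 2 ||
             sscanf(p, "%d.%d", major, minor) == 2;
  }

  int main(void) {
      const char *ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_7) ...";
      int major, minor;
      if (parse_macos_version(ua, &major, &minor))
          printf("reported macOS version: %d.%d\n", major, minor);
      return 0;
  }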


Are you thinking of a Safari exploit that allows JavaScript to get out of the Safari process? What’s the attack scenario?


The user agent is defined by the browser.

And it only contains: Intel Mac OS X 10_15_7 irrespective of what Mac you are using.


I’m seeing Mozilla/5.0 (Macintosh; Intel Mac OS X 14_7). What do you think the 14_7 stands for on macOS 14.7?


I currently use an M3 Max MacBook Pro running macOS 14.6.1 (23G93).

Firefox 130.0.1

  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:130.0) Gecko/20100101 Firefox/130.0"
Safari 17.6

  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.6 Safari/605.1.15"
Seems like a Google Chrome-specific behavior, but I don't have Google Chrome installed to test.


> Intel Mac OS X 10_15_7.

This is on an M4 MacBook Pro running 15.0.

So not correct.


Heya, I couldn’t find a way to contact you privately but I’d assume you want to delete your comment until (presumably) next month! Correct me if I’m wrong tho :)

Alternatively, a mod could help to edit it instead


Why would they want to do that?


To be honest, I’m not entirely sure.

It’s a product that isn’t officially announced yet. Anyone could mention that they own that device of course, but it’s the extra credibility of him being an ex-Apple SWE (judging from his comments) that convinced me to drop that comment.

Dunno if there could be any legal implications, if not - all good!


When it comes to marketing I think we can take a page from the Apple playbook. Never show anything incomplete. Critics and the general public don’t understand the artistic process and will only become uneasy. Nobody cares how you did it. Just show results.


Yes, I feel like this "building in public" works only for non-Veblen goods; otherwise it just devalues the whole creation process.


At a more microscopic level, the same goes for git commits. Complete, sensible ideas are much easier to interact with and understand than someone's chaotic half thought through work-in-progress. There's value to this approach at all scales of magnification--completeness and polish aids digestibility and helps {colleagues, users, buyers, prospects} understand the value proposition.


As far as I know the first Apple computer was not really a computer but a box of parts that the customer had to assemble themselves. The early days of Apple are probably the best example of "building in public" because so much of it was actually just selling the dream. In any case, the opposite is infinitely worse. Too many people wasted away for years "building in private". If and when they did finally release, it was to the sound of crickets. Hacking for pleasure is one thing. If you're trying to build a business then you must always be selling and marketing.


That's true. But despite being the same legal entity, there is very little in common between 70s Apple and the Apple of today. We shouldn't really expect consistent behavior.


Yeah and Apple didn’t make any money until after that phase.

> If you're trying to build a business then you must always be selling and marketing.

Yes. I am only giving marketing advice. Sales is unfortunately embedded in so many other areas of life besides commerce.


You’re attributing these life challenges to your STEM PhD speciality?

I am sensitive to your hardship, but optimization has nothing to do with it. People are succeeding with bachelor’s degrees in Latin.


The point of the post is that in my long experience there was not much about optimization that in any significant practical career sense was "applied", i.e., no jobs even to keep one from living on the streets, far from a career to buy a house and support a family.

The point about my Ph.D. with a lot in optimization is that I was quite well qualified in the field, but even with all those qualifications "applied" was not real, i.e., there just was nothing like a career in optimization applications.

Can suspect that, yes, "Ph.D." did seriously damage all career prospects, optimization, math, even computing.

The successes I did have were in computing. At the time the math was a small aid.

Did get paid for some work in applied optimization on some military problems. Looking back, there was some stranger in the office eager to discuss the weather, etc. with me. Maybe I said the wrong things about some of the US foreign wars, and then the stranger was gone. Might have been some high end military job interview that needed only gung ho attitudes toward foreign policy.

Delicate political situation, and I was oblivious about politics. Like, "stick to two subjects, the weather and everyone's health" and avoid "sex, politics, and religion".

Early on while I was in a grad program teaching math, some recruiters came from DC desperate for anyone with some math/physics education. I interviewed: Got an offer and took a job right away. Soon bought a new car and got married.

In those days, around DC, to get a job, just look in the WaPo, apply, go on the interview, show some knowledge of some of computing, get an offer, compare a few offers, with, say, a 15% raise, and accept one -- worked great. For a while, at GE time sharing national HQ, was the main guy for the applied math library, e.g., the FFT, regression analysis. Later a good background in "applied" optimization, worthless.


You should have tried Wall Street. At this point they are the real supporters of mathematicians. We had optimization problems everywhere and had physics PhDs reinventing mathematical algorithms and keeping things “proprietary”. Right now I work in a startup that essentially writes optimization routines for portfolio problems.

I will blame your PhD advisor.


Wall Street?

I was in NY and close enough to NYC. I'd just published a paper in anomaly detection in complex systems, gave a talk at the main NASDAQ server farm, and later at Morgan Stanley. No real interest.

Sent a copy of my anomaly paper to a hedge fund, got an interview, was asked by one of their junior people "If you know the correlation between A and B and that between B and C, what about A and C?" Okay, maybe: Start with the cosine of the sum of two angles???

Asked them for a reference on investing math -- did already know about the old Markowitz work, the efficient frontier, the role of quadratic optimization, about to do more with stochastic differential equations for the Black-Scholes work -- got the book they mentioned, saw that its math was all junk, and didn't follow up with the A and C contact, a mistake.

Did send a resume to Simons.

Looked into deterministic optimal control, Athans and Falb, talked with Athans, later talked with an Athans student who knew something about Simons and claimed that he hired mostly Russian mathematicians. So gave up. A mistake. I was naive.

Later, of course, Simons explained that he liked people who, say, via math but any math, had shown some ability, and I had some evidence I could have shown. My Math SAT was high enough that maybe I even beat Simons?

I was naive: Assumed that a carefully written resume was necessary and sufficient and that anything else was superfluous and unwelcome.

Nope: In practice in the real world, keep trying different things. Do send reprints of published papers. E.g., when I was at Georgetown, computer center staff and teaching computer science, a prof had some teaching software as a front end to the IBM SSP (scientific subroutine package) and in testing found that two of the IBM routines were too slow and the third had poor numerical accuracy. So, I wrote plug-compatible versions -- used some n log(n) software and some tricky double use of memory to replace the n^2 software and used some Forsythe and Moler work to fix the accuracy problem -- seemed too simple to me, but COULD have sent Simons that work. Once did get a lecture on differential geometry from a student of A. Gleason and had a copy of some S. Chern notes -- could have studied those and sent something to Simons. How'd I know Simons knew Chern??

There is a recent remark: "Don't give up. Keep plugging".

I was naive. Knew much more about math and computing than people and personality.

Since WWII, the US military has pushed hard to have more -- students, professors, and research -- in math and science. In high school, taught myself the math, learned the physics at a glance, otherwise goofed off (had a girlfriend drop dead gorgeous), but did well on the state standardized tests, so got sent to summer math/physics enrichment programs. I swallowed the bait hook, line, and sinker. I'd recommend:

"Always look for the hidden agenda."

"Believe none of what you hear, half of what you see, still that will be twice too much."

"Who you know can be more important than what you know."

There were some opportunities to "know" some powerful people, but I was naive.

My Ph.D. advisor was a nice guy, but I got to him after the fallout of a bad civil war in the faculty and never much talked with him. For my dissertation, some applied math, and had the main idea on an airplane flight before the Ph.D. program, in the first year wrote a 50-page first draft, later cleaned up the math, used Fubini's theorem in a short proof that my math was optimal, wrote some illustrative software, typed in the paper, showed it to my advisor and the rest of the department, had a famous guy as Chair of the orals committee to review the dissertation, and graduated. My advisor and one of the faculty (connected in DC and later President at an Ivy) knew a LOT about politics, but I was naive.

For a while, my career, in computing but with some math on the side, e.g., the FFT and digital filtering of Navy sonar signals, was going well, so I got the Ph.D. in applied math just to do better in THAT career and with ZERO intentions to be a professor or do academic research. That career direction was MY idea, mine alone, ..., a BAD situation!!! I was naive.


> I was naive. Knew much more about math and computing than people and personality.

Do you think not learning math would have helped you understand people at a younger age? It sounds like you just needed time to grow socially and practically. For most people on this forum, that’s a challenge regardless.


About people in math and the more technical parts of computing, I've guessed that poor socialization has played a role.

But when my career was okay, it was in computing, and I did well enough in the socialization.

Can consider these and those issues, but my experience was that "applied" optimization, as in the book title in the OP here, was too near the empty set.

It isn't just me: My professors in applied math and the ones in optimization were not getting much if anything in consulting. I've been recruited and hired, but never for optimization.

Here I'm trying to do a service to the readers: Be very careful about the idea that there is significant career help via "applied" optimization.


Sounds like you opened up the newspaper and scanned for “mathematician”. Leveraging PhD research into a great job is tough. Re-skilling into a normie engineer/technician/analyst is not.

My point is not to criticize your job-hunting skills, it’s to suggest that this is an undue psychological burden in your life and is perhaps masking other causes and personal challenges.


Naw: The WaPo period was before my Ph.D. The ads were for computing -- math not mentioned. For some years, the career was computing but with some math, e.g., the FFT (fast Fourier transform), ....

I never wanted the Ph.D., what I learned there, the research I did there, to be the basis of a career. Instead, before the Ph.D. I had a good career going with computing and, at times a crucial help, some math, and went for the Ph.D. ONLY to do better at THAT career. For my career, the day I entered the Ph.D. program was a BIG step down, and what I'd learned about optimization was, in a word, WORTHLESS.

My main point here is on the word "applied" for optimization: I was well qualified, and happened to publish some research in optimization, but discovered that "applied" optimization was not the basis of a good career. Here I'm just reporting that fact. I doubt that there is still any real career opportunity in "applied" optimization.

So, a book title with "Applied" Optimization is to me an outrage.

I wasn't stuck on "optimization". For a while worked in the first wave of AI (artificial intelligence via the Rete algorithm). Then published in mathematical statistics. I was perfectly willing to mow grass, shine shoes, ..., do anything that would support me financially, be reasonably safe, and not seriously illegal but discovered that "Ph.D." on the resume blocked any such. Thought about taking "Ph.D." off the resume but was afraid that I'd get into trouble due to the gap in time.

Here my point, complaint, warning, contribution to others, is: My long experience was that there is nearly no career in "applied" optimization. A second point could be, outside of academics, a Ph.D. can hurt your career. Try leaving it off your resume. A Ph.D. might be worse for your career than a felony conviction; no joke (my legal history is totally clean).

In life, we are forced to make important decisions without good information. In my career, at times I did well, and at times I didn't.

E.g., by middle school it seemed accepted and true that education helps, more education helps more, education in the STEM fields is the best, a Ph.D. is the best education, and, thus, a Ph.D. in a STEM field should be really good, e.g., easily enough to buy a house and support a family.

Truth: Nope, too simple. I couldn't take care of my wife, kitty cats, get a job, any job, at all, ANY job, got run out of the house by the Sheriff with guns.

With a BS "With Honors" in math, I got strongly recruited. With a Ph.D. in applied math, including optimization, I got strongly rejected.

Yup, it hurt. I was manipulated, lied to, and hurt.

"psychological burden": Maybe those are the right words. But millions of people have suffered worse, e.g., The Great Depression, wars, Covid in the family, and much more, and still did well.

Don't know the solution in general.

For me, now, still good in math and computing, with .NET, etc. got a Web site, with some math at the core, running easily enough, and intending to go live, get some viewers, run simple ads (standard sized rectangles), and make some money. In this, want to remain anonymous and not be a public person.

And want to OWN the business. Have someone list what papers I need to file for a business, an LLC, etc. Get an accountant. Get and receive revenue. In simple terms, add up the expenses and keep the rest. Eventually sell the business and pursue, say, mathematical physics.


> I doubt that there is still any real career opportunity in "applied" optimization.

I agree there are no ready-made jobs for that.

But you yourself know there are optimization problems all over real life. It’s a sales problem. Companies don’t know what they need or who has it.

> there is nearly no career in "applied" optimization

Agreed. But that’s true of all PhDs. The only difference is business guys see “computer science” and have an idea of where it fits in their org. It’s easier to sell. But in reality there is no business for experts in complexity theory or category theory type systems.

Making money involves solving practical problems. Even professors take a two job approach, mixing official research to get tenure with stuff they are actually interested in.

> With a BS "With Honors" in math, I got strongly recruited

This is very unfortunate. Because professors grew up competing in an academic tournament for their jobs, they think that’s how the whole world works.


Correct!


> you will forget almost all of the stuff your learned

Speak for yourself.

You are also developing a meta-skill of being able to read technical material, and a fluency with basic concepts (algorithms, signals, etc.).

Also sometimes learning something isn’t knowing exactly how to do it on the spot, but knowing it’s the right thing to look up when you need it.


Billions of views and total YouTube dominance disagree with you.


No, they're not exclusive at all. As the guide itself says:

> Your goal here is to make the best YOUTUBE videos possible. That’s the number one goal of this production company. It’s not to make the best produced videos. Not to make the funniest videos. Not to make the best looking videos. Not the highest quality videos.. It’s to make the best YOUTUBE videos possible.

You can get billions of views and total YouTube dominance without making particularly engaging content, and I think that's the interesting point here.


It does not define best.

Best revenue? Best profit margin? Most viewed?


They spell it out pretty clearly in the PDF.

> The three metrics you guys need to care about is Click Thru Rate (CTR), Average View Duration (AVD), and Average View Percentage (AVP).

> How to measure the success of content

> Like I said at the start of this the metrics you care about in regards to virality are CTR, AVD, and AVP. If you want to know if the contents of a video are good, just look at the AVD and AVP of a video after we upload it.
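For concreteness, a small illustrative sketch of how those three metrics relate to raw counts (field names and numbers are made up, and it assumes roughly one view per click):

  #include <stdio.h>

  /* Hypothetical stats for one upload; all figures are invented. */
  struct video_stats {
      long impressions;      /* times the thumbnail was shown        */
      long clicks;           /* times it was clicked (roughly views) */
      double watch_seconds;  /* total seconds watched, all views     */
      double video_seconds;  /* length of the video                  */
  };

  static double ctr(const struct video_stats *s) {  /* Click Thru Rate         */
      return (double)s->clicks / (double)s->impressions;
  }

  static double avd(const struct video_stats *s) {  /* Average View Duration   */
      return s->watch_seconds / (double)s->clicks;
  }

  static double avp(const struct video_stats *s) {  /* Average View Percentage */
      return avd(s) / s->video_seconds;
  }

  int main(void) {
      struct video_stats s = { 1000000, 50000, 30000000.0, 900.0 };
      printf("CTR %.1f%%, AVD %.0f s, AVP %.0f%%\n",
             100.0 * ctr(&s), avd(&s), 100.0 * avp(&s));
      return 0;
  }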


I feel it’s more that the average person likes to watch something simple sometimes, myself included


In other words, the commenter’s suggestion would not make the video better fit that need.


> recording ui interactions, etc

This is a sign that the software has grown to become its own operating system. They also have to add their own version control… remote editing, etc.

Unix-style versions of tools are interesting in their own right (Photoshop vs ImageMagick).


> This is a sign that the software has grown to become its own operating system. They also have to add their own version control… remote editing, etc.

Not really, though most of the commercial CAD systems are comparable in size to an operating system already. NX just spits out the journal file; it's assumed the dev will take the code and modify it to their needs using their normal dev workflow and their own version control, IDE, etc.


Why?


Can't we come up with puzzles where at least something of value is created when the puzzle is solved (and a tremendous amount of resources is not wasted)?


The use of the word “we” is curious. You didn’t come up with the puzzle, you didn’t “waste” the resources. The purpose of the “we” is to appoint yourself judge and arbiter and to steal into the in-group. Just post your judgement: you don’t like that someone else did something you don’t like with their resources.


That sounds like an ad-hominem attack to avoid the question, tbh.


At the risk of sounding snarky: It wasn’t. It does however, answer the question. “We” do not need to change our allocation strategy whatsoever because “we” didn’t allocate any resources towards this and “we” aren’t the arbiter of what others can or cannot do with their resources.


"We" as in "us humans".


You have my permission. This is snark.


We can and do, all the time. And all puzzles are a "waste of resources", really.

I'm not into crypto and I do think Bitcoin is stupid and wasteful, but I don't find it "sick" or all that upsetting that this kind of puzzle exists, though I think some smart contract-based Ethereum puzzles could be much more interesting, demanding solutions to more interesting problems that don't directly relate to the blockchain itself. Imagine a smart contract with a pot anybody can pay into that pays out to whoever could crack a particular previously unsolvable problem. Basically a public bounty. The only downside is that it has to be a problem that can be validated algorithmically.


This isn't really a puzzle, though. A puzzle requires intellectual curiosity and creativity to solve.

This was just a race to see who could burn the most CPU/GPU cycles the fastest.

Even when a real puzzle has a monetary reward for solving it, a big component of the reward is the solving itself. For this, the reward is just money.


I agree with you. I think it's a bit wasteful and dumb, I just don't find it either sick or confusing.


Puzzles are training and intellectual entertainment, something you cannot have a web server without, because sad nerds are unproductive.


Why should “we”? You can hear “we should/must” from all corners here, but then remember it’s a US start-uppers’ forum with people who plan morning meetings for email regexps.

Bitcoin may be an inefficiency, but is it the only one? Most everyday things modern first-world people do are equivalent to burning oil and shredding trees for little to no reason. You just can’t see it as clearly as in PoW crypto.


> puzzles where at least something of value is created when the puzzle is solved

What puzzles create something of value when they're solved today? A puzzle is typically a thing you do for fun and entertainment, not something you try to solve for the purpose of creating value.

I guess you're thinking more about logic/mathematical puzzles and alike? Would make sense in that case, but that's not the only type of puzzle.


Pretty sure all puzzles are a tremendous waste of time and create no value.


That wouldn't be a puzzle, then. It would be some kind of engineering challenge. A puzzle starts by knowing the answer and then putting some circuitous path between it and the player, which they have to figure out how to navigate. It's inherently wasteful to construct puzzles.


Unless the people solving the puzzles learn something valuable on the way.

Anyway, I don't agree that puzzles by definition have known answers, unless you want to nitpick and I just change my "puzzle" into "challenge".


The sibling comments are all correct that you're special-pleading the criterion that a puzzle create something of value.

But, as it happens, this one does: it offers economic incentive to develop more efficient attacks on elliptic curves. The curve Bitcoin uses isn't widely used outside of it, but that doesn't mean that an efficient attack on Secp256k1 wouldn't apply elsewhere.

Is this modest as positive externalities go? Probably yes. Could someone with a better attack on the curve just empty wallets? Not necessarily, and probably not: the point of the puzzle is that the entropy has been deliberately reduced to make it crackable with brute force, so, say someone worked out a factor of four improvement: that isn't going to get you into the Genesis Wallet, but it substantially lowers the price of claiming some of the puzzles.
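To put a rough, illustrative number on that: a constant-factor speedup subtracts log2 of the factor from the effective search space, so a 4x better attack turns, say, a 2^66 brute-force job into roughly the cost of a 2^64 one. The key size here is an assumption for the sake of arithmetic, not a claim about any particular puzzle address.

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      double bits = 66.0;     /* hypothetical reduced-entropy puzzle key */
      double speedup = 4.0;   /* hypothetical attack improvement         */
      /* A 4x speedup removes log2(4) = 2 bits of effective work. */
      printf("effective work: 2^%.0f instead of 2^%.0f\n",
             bits - log2(speedup), bits);
      return 0;
  }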

Also, being a cryptographer and being a thief are unrelated professions. Some people might be inclined to both, but I would guess that most are not.

