
I think the reason it looks fake is that they actually have the math wrong about how optics and apertures work. They make some (really bad) approximations, but from a product standpoint they can please 80% of people.

I could probably make a better camera app with the correct aperture math. I wonder if people would pay for it, or if mobile phone users just wouldn't be able to tell the difference and don't care.



There are a few projects now that simulate defocus properly to match what bigger (non-phone-camera) lenses do. I hope to get back to working on it this summer, but you can see some examples here: https://x.com/dearlensform

Those methods come from the world of non-realtime CG rendering, though. Running truly accurate simulations, with the aberrations changing across the field, on phone hardware at any decent speed is pretty challenging...


most people just want to see blurry shit in the background and think it makes it professional. if you really want to see it fall down, put things in the foreground and set the focal point somewhere in the middle. it'll still get the background blurry, but it gets the foreground all wrong. i'm guessing the market willing to pay for "better" faked shallow depth of field would be pretty small.


> and think it makes it professional.

That's a bit cynical. Blurring the background can make the foreground object stand out more, objectively (?) improving the photo in some cases.


I get what he means. The gold standard for professional “bokeh” portraits, an 85mm f/1.4 prime, is typically a $1,000-$2,000 lens.


Yeah that's why I didn't write the app already. I feel like the people who want "better faked depth" usually just end up buying a real camera.


Lytro had dedicated cameras with inferior resolution, so they failed to gain enough traction to stay viable. You might have a better chance since yours would still be on the same device, but a paid app would be a hard sell.

However, you could just make the app connect to localhost and hoover up the user's data to monetize and then offer the app for free. That would be much less annoying than showing an ad at launch or after every 5 images taken. Or some other scammy app dev method of making freemium apps successful. Ooh, offer loot boxes!!!


Sample of one, but I’m interested. I used to use a real camera and now very rarely do. But I also often find the iPhone blurring very fake and I’ve never understood why. I assumed it was just impossible to do any better, given the resources they throw at the problem. If you could demonstrate the difference, maybe there would be a market, even if just for specific use cases like headshots or something.


Could you point out in more detail where Apple got the math wrong and which inaccurate approximations they use? I'm genuinely curious and want to learn more about it.


It's not that they deliberately made a math error; it's that the algorithm is very crude. It basically just blurs everything outside whatever is deemed the subject, using a triangular, Gaussian, or other computationally cheap kernel.
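A minimal sketch of that crude approach (function names and parameters are illustrative, not anyone's actual pipeline; a box kernel stands in for the cheap triangular/Gaussian kernels mentioned):

```python
import numpy as np

def box_blur(image, r):
    """Depth-independent box blur -- the kind of computationally cheap,
    uniform kernel described above (a real pipeline might use a
    triangular or Gaussian kernel instead)."""
    k = 2 * r + 1
    h, w = image.shape[:2]
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for dy in range(k):          # O(k^2) sliding sum; fine for a sketch
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def naive_portrait_blur(image, subject_mask, r=3):
    """Crude 'portrait mode': blur the whole frame with one uniform
    kernel, then composite the sharp subject back over the blurred
    background using a binary subject mask."""
    blurred = box_blur(image, r)
    mask = subject_mask[..., None].astype(float)
    return mask * image + (1.0 - mask) * blurred
```

Note that the blur here is the same everywhere regardless of depth, which is exactly what the real-optics list below the parent comment contrasts against.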

What real optics does:

- The blur kernel is a function of the shape of the aperture, which is typically circular at wide apertures and hexagonal at smaller ones. It's not Gaussian and not triangular; and because the kernel is a function of the depth map itself, it doesn't parallelize efficiently

- The amount of blur is a function of the distance to the focal plane, and it typically follows something closer to a hyperbola; most phone camera apps just use a constant blur and don't account for this at all

- Lens aberrations, which are often thought of as defects, but if you generate something too perfect it looks fake

- Diffraction at the sharp corners of the mechanical aperture creates starbursts around highlights

- When out-of-focus highlights get blown out, they blow out more than just the center: some of the blurred area blows out too. If you clip and then blur, your blurred areas will be less than fully blown out, which also looks fake

Probably a bunch more things I'm not thinking of, but you get the idea
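The first two points can be sketched as a slow, gather-style reference implementation: a per-pixel disc (circular-aperture) kernel whose radius follows the thin-lens circle-of-confusion curve. Everything here (names, the `strength` scale factor) is an illustrative assumption, not a description of any shipping algorithm:

```python
import numpy as np

def coc_radius(depth, focus_depth, strength, max_radius):
    """Circle-of-confusion radius grows like |1/z - 1/z_focus| under
    the thin-lens model -- hyperbolic in depth, not constant."""
    r = strength * np.abs(1.0 / depth - 1.0 / focus_depth)
    return np.clip(r, 0.0, max_radius)

def disc_blur(image, depth, focus_depth, strength=8.0, max_radius=6):
    """Gather-style defocus: each output pixel averages a disc whose
    radius comes from its own depth. (Physically, defocus *scatters*
    source pixels; gathering is itself an approximation, and one reason
    foreground occlusions come out wrong.)"""
    h, w = image.shape[:2]
    radii = coc_radius(depth, focus_depth, strength, max_radius)
    out = np.zeros(image.shape, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(round(radii[y, x]))
            yy, xx = np.mgrid[max(0, y - r):min(h, y + r + 1),
                              max(0, x - r):min(w, x + r + 1)]
            # Disc kernel: a round aperture, not a Gaussian.
            disc = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
            out[y, x] = image[yy[disc], xx[disc]].mean(axis=0)
    return out
```

The per-pixel varying kernel is exactly what hurts parallel throughput; fast implementations typically bucket pixels by radius or scatter splats instead.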


The iPhone camera app does a lot of those things. The blur is definitely not Gaussian; you can clearly see a circular aperture shape.

The blurring is also a function of distance; it's not constant.

And blowouts are pretty convincing too. The HDR sources probably help a lot with that. They are not just clipped then blurred.

Have you ever looked at an iPhone portrait mode photo? For some subjects they are pretty good! The bokeh is beautiful.

The most significant issue with iPhone portrait-mode pictures is the boundaries, which look bad. Frizzy hair always ends up as a blurry mess.


The Adobe one has a pretty decent ML model for picking out those stray hairs and keeping them in focus. They actually have two models: a lower-quality one that keeps everything on-device, and a more advanced cloud one.


Any ideas what the Adobe algorithm does? It certainly has a bunch of options for things like the aperture shape.


re: parallelization, could a crude 3D-FFT-based postprocessing step achieve a slightly improved result relative to the current splat-ish approach while still being a fast-running approximation?

i.e., train a very small ML model to map camera parameters to the resulting reciprocal-space transfer function.
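As a concrete starting point (all names here are illustrative assumptions, not a shipping pipeline): a common fast compromise splits the scene into a few depth layers and convolves each layer with its own kernel via the FFT, since convolution becomes a pointwise product with the kernel's transfer function in reciprocal space -- exactly the quantity such a model could learn to predict:

```python
import numpy as np

def fft_disc_blur(channel, radius):
    """Blur one channel with a disc kernel via the convolution theorem:
    FFT -> multiply by the kernel's transfer function -> inverse FFT.
    Cost is O(N log N) regardless of kernel radius."""
    h, w = channel.shape
    yy, xx = np.mgrid[:h, :w]
    # Disc kernel built at the image center, normalized so a flat
    # field keeps its brightness.
    disc = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2).astype(float)
    disc /= disc.sum()
    # ifftshift moves the kernel center to (0, 0) so the output isn't shifted.
    transfer = np.fft.fft2(np.fft.ifftshift(disc))
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * transfer))
```

A layered-depth pipeline would run this once per depth slice with that slice's radius and composite back to front; a small ML model could, in effect, replace `transfer` with a learned function of the camera parameters.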


Thanks!


I'm pretty happy with the results my Pixel produces (apart from occasional depth map errors). Is Google doing a better job than Apple with the blurring, or am I just blissfully ignorant? :-)


If it's all done in post anyway, then it might be a lot simpler to skip building a whole camera app and just give people a way to apply more accurate bokeh to existing photos. I would pay for that.



