Depth of field blur: the Swiss army knife that improved the framerate (joostdevblog.blogspot.com)
111 points by exch on April 9, 2012 | hide | past | favorite | 19 comments


Downsampling images before a blur seems to be a pretty standard technique to get more perceived bang (blur) for your buck. If you don't downsample the image too far, the blur adequately covers any artifacts the downsampling (and re-scaling) would introduce. The result is that an N-radius blur can be stretched a lot farther, because the downsampling (and re-scaling) adds a blur of its own.
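As a rough sketch of the idea in numpy (a box blur stands in for whatever kernel is actually used, and all the function names here are illustrative, not from the article):

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur: average of all shifted copies within the radius."""
    k = 2 * radius + 1
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def downsample(img, factor):
    """Average each factor x factor block (dimensions assumed divisible)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour rescale back to full size."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def cheap_blur(img, radius, factor=4):
    """Blur at 1/factor resolution; the down/upsampling contributes blur
    of its own, so the effective radius is roughly radius * factor."""
    return upsample(box_blur(downsample(img, factor), radius), factor)
```

The same N-tap kernel covers factor-times more of the original image, which is where the "stretched a lot farther" comes from.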

This performance optimization doesn't really work for variable-width blurs (like depth of field) on 3D scenes though, because some parts of your image need to stay crisp while others are blurred. Downsampling the entire image would lose resolution on the crisp parts.

3D depth of field blurs are actually pretty interesting though. They're variable width, which is just another way of saying that each pixel in the final image might be more or less blurry than another pixel. Implementing this kind of variable blurring is a tough task, and it's typically done by scaling a disk of random sampling coordinates up or down for each pixel. When the disk of coordinates is large, the sampling coordinates sample further from the pixel it's centered on, so the resulting pixel gets a bigger blur. When the sampling disk is small, the surrounding coordinates are closer to the center pixel, creating a smaller blur. The size of this sampling disk is controlled by the pixel's depth relative to the near and far focus planes of the virtual camera.
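A minimal CPU-side sketch of that scaled-sampling-disk idea (real implementations run this in a fragment shader; the radius mapping here is a made-up linear one, just to illustrate):

```python
import numpy as np

def coc_radius(depth, focus, focus_range, max_radius):
    """Blur radius grows linearly with distance from the focal plane."""
    return np.clip(abs(depth - focus) / focus_range, 0.0, 1.0) * max_radius

def dof_blur(image, depth, focus, focus_range, max_radius=8, n_samples=16):
    """Per-pixel variable blur: scale one fixed disk of sample offsets
    by each pixel's circle-of-confusion radius and average the taps."""
    rng = np.random.default_rng(0)
    angles = rng.uniform(0, 2 * np.pi, n_samples)
    radii = np.sqrt(rng.uniform(0, 1, n_samples))  # uniform over the unit disk
    disk = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = coc_radius(depth[y, x], focus, focus_range, max_radius)
            total = 0.0
            for dy, dx in disk * r:  # scale the whole disk by this pixel's CoC
                sy = min(max(int(round(y + dy)), 0), h - 1)
                sx = min(max(int(round(x + dx)), 0), w - 1)
                total += image[sy, sx]
            out[y, x] = total / n_samples
    return out
```

Pixels at the focus depth get a zero-radius disk (all taps land on the pixel itself), so they stay crisp, which is exactly why the whole-image downsampling trick can't be used here.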

I've glossed over a lot (like the artifacts that can result from 3D DoF), but Nvidia's GPU Gems has a great article on the subject: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch28.html


> This performance optimization doesn't really work for variable-width blurs...

An alternative would be to use Summed-area tables [1] where each texel contains the sum of all texels that are above and to the left of the current texel. This allows variable-width blurs to be computed in constant time, using only four texture samples (!).
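The construction and the four-sample lookup can be sketched in a few lines of numpy (function names mine; the table is padded with a zero row/column so the corner cases fall out):

```python
import numpy as np

def summed_area_table(img):
    """sat[y, x] = sum of img[0:y, 0:x]; row/column 0 are zeros."""
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return sat

def box_sum(sat, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from exactly four table lookups."""
    return sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
```

Dividing `box_sum` by the box area gives a box-filtered value, and since the cost is four reads no matter how large the box is, the blur width can vary freely per pixel.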

And the concept of a variable-width blur is an approximation to a related concept in photography/optics called the "circle of confusion." [2] The circle of confusion, combined with bokeh sprites, is usually what AAA games use for depth of field [3].

See [4] for more details.

[1] http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html

[2] http://en.wikipedia.org/wiki/Circle_of_confusion

[3] http://udn.epicgames.com/Three/BokehDepthOfField.html

[4] http://mynameismjp.wordpress.com/2011/02/28/bokeh/
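For what it's worth, the thin-lens circle-of-confusion diameter from [2] is simple enough to sketch directly (distances all in the same units; this is the textbook formula, not any particular game's approximation):

```python
def circle_of_confusion(subject_dist, focus_dist, focal_len, f_number):
    """Thin-lens CoC diameter:  c = A * |S2 - S1| / S2 * f / (S1 - f),
    where A = f / N is the aperture diameter, S1 the focus distance,
    S2 the subject distance, f the focal length, N the f-number."""
    aperture = focal_len / f_number
    return (aperture
            * abs(subject_dist - focus_dist) / subject_dist
            * focal_len / (focus_dist - focal_len))
```

A subject on the focal plane gets a CoC of zero, and stopping down (larger f-number) shrinks it, which matches the sampling-disk behaviour described above.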


I love this stuff. I haven't had a chance to implement a summed-area table for anything yet, but I did do variance shadow maps very recently (did you mean to cite that?). I did read about them, though, and one drawback I remember is that you might get a precision overflow in the bottom/top corner where the summations are greatest?


The GPU Gems article talks about summed-area tables in the context of variance shadow maps, specifically percentage-closer soft shadows with variance shadow maps, where variable-width blurs are used extensively. I didn't really mean to cite the stuff about VSMs (sorry), more the SAT material toward the latter half of the article, as it would apply to DoF.

I believe the precision/overflow issues can be mitigated on modern GPU hardware (the authors mention this in section 8.5.2 of the article). Nowadays it's pretty much standard to have floating-point render targets (especially with deferred rendering and light pre-pass renderers), so 16/32 bits per component should be adequate, especially if you apply the tricks the authors present in the article. Not to mention that we now have DirectCompute/OpenCL/CUDA.

Though I don't really see people talking about SATs that much (I've never implemented them myself). Maybe there's an underlying reason why people don't? Bandwidth, maybe? I would imagine it would be pretty taxing on the system to recompute the SAT every frame.


In 3D games with significant view distance, engines often use low-resolution copies for displaying distant objects. These low-resolution copies are progressively replaced with ever more complex versions as you approach. I suspect you could mix in some blur to get a smoother transition and enable even lower-resolution copies while avoiding the 'pop' that occurs when you suddenly get the high-res version of an object.


My first thought on reading this is that it's essentially (at least in spirit) mipmapping[1] applied to his 2D (2.5D?) engine.

1. http://en.wikipedia.org/wiki/Mipmap
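The analogy holds because a mip chain is just repeated 2x2 box-downsampling, the same averaging the parent comment's downsample-then-blur trick relies on. A minimal sketch (power-of-two dimensions assumed; odd edge rows/columns are simply dropped for brevity):

```python
import numpy as np

def mip_chain(img):
    """Build a mipmap pyramid by repeated 2x2 averaging, down to 1x1."""
    levels = [np.asarray(img, dtype=float)]
    while min(levels[-1].shape) > 1:
        cur = levels[-1]
        h, w = (cur.shape[0] // 2) * 2, (cur.shape[1] // 2) * 2
        cur = cur[:h, :w]  # trim any odd edge row/column
        levels.append(cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels
```

Each level is effectively a pre-blurred copy of the one above it, which is why sampling a coarser mip reads as blur.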


Pet peeve: it's "farther."


My understanding is that "further" and "farther" can be used interchangeably when talking about distances (so "further" can be used wherever "farther" can, but not the other way around).

This is for example what the usage note at http://www.merriam-webster.com/dictionary/farther says, although they state that the usage is becoming more polarized.


"Farther away", but "improved even further". I know this is what you mean, but others might get confused.


I don't think there's a consistent distinction on those. The OED lists greater-distance meanings for both words, and has quotations going back centuries for both (Shakespeare used "further" for distance). Spot-checking on Google Books, it looks like "further away" and "farther away" have roughly equal contemporary usage, as well.


Hmm, that's true. I use them in the way previously described, but don't correct people on it, as it's too late for that. We can still save "it's/its" "their/there/they're" and "your/you're", though!


This is wildly off-topic, but here goes.

We really can’t; you’re going to have to accept one day that your position is untenable. Where languages evolve, we’re powerless to prevent their inexorable transformation. They’re like species—there’s no such thing as “devolution”. Some would that it were different—they say “we could’ve won, if only”. Try though we might, the globalising power of the Internet, and its effect on communication, mean that it’s ultimately futile to fight change—even when it makes us uncomfortable.

We really cant, your gonna hafta accept one day that your position is untenable. Were languages evolve, were powerless to prevent there inexorable transformation. There like species, theres no such thing as “devolution”. Some would that it were different—they say “we could of won, if only”. Try tho we might, the globalising power of the internet, and its effect on communication, mean that its ultimately futile to fight change, even when it makes us uncomfterble.


I consider loss of specificity (which is what is happening in the latter case) to be devolution, and do not find descriptivism a convincing (or even meaningful) argument against that position. Other arguments might prove more successful, of course. As for futility - well, we'll see, won't we?

As you say, though, this is wildly off topic.


There are two contradictory forces shaping a language: effectiveness and clarity. Languages walk the narrow path between those two. Sometimes languages move in one direction, sometimes in the other direction. A loss of specificity (like the disappearing distinction between “it’s” and “its”) increases effectiveness (for the writer) at the cost of clarity.

I’m not sure with what justification you can call that devolution. It’s a change that affects clarity only minimally: There aren’t many situations where using “it’s” instead of “its” is confusing and doesn’t communicate what the writer wants to communicate.

It just makes sense that every flourish in a language that adds rarely needed specificity will be ground away with time.

Stylistically you are certainly within your rights to call it a devolution (I also don’t like it aesthetically), but that’s very subjective. I don’t think that pointing at specificity is the way to go here, though.


One of the fascinating general trends in that respect, which I don't think the reasons for are fully settled yet, is the tendency of languages to simplify grammar over time. Ancient Greek and Latin had more complex grammatical features than modern Greek and Italian do, for example, and similarly with Sanskrit and its descendants. Of course, it can't be a universal trend, because the complexity must have arisen at some point as well.


Complexity comes from languages that arise in isolation. Basically, the fewer people speak a language, the less chance its natural complexity will degrade. Invasions and other interlinguistic contact create simplifications, most visibly pidgins and creoles. Check out Our Magnificent Bastard Tongue by John McWhorter, which shows how English underwent this very process. What we speak today is a hilarious pidgin of Old English and Norman French, which are themselves mashups.

I wonder what will win: practically no new natural languages are being created, so simplification is running amok, but invasions are now rare, and we use computers almost exclusively in our own native tongues. Maybe we end up suffering global economic collapse and reverting to the natural language wars.


English dropped grammatical gender some centuries ago. Despite being 'less specific' in some ways, I'm not sure too many would argue that this was a step backward.


If anything, I think this one is going the other way: traditionally there wasn't a distinction, but one seems to be emerging more recently.


Indeed, thank you.



