All deflators need memory-allocation and CPU counters that trigger errors on massive resource utilization. I'd guess that detecting all zip bombs is uncomputable, like the Halting Problem.
It's an interesting point. For classical ZIPs I actually disagree that it's needed, though I do agree for some other potential compression algorithms :)
In the case of ZIPs: an implementation can look through the Central Directory index of files, sum up the "uncompressed size" fields of all the files, and check the sum against the configured limits - no decompression is needed for this (it is neither CPU-intensive nor does it require a lot of memory allocation).
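As a minimal sketch of that check in Python (the limit value and function name are made up for illustration), `zipfile` parses the Central Directory lazily, so summing the declared sizes never touches the compressed streams:

```python
import io
import zipfile

def declared_size_ok(zip_bytes: bytes, limit: int) -> bool:
    """Sum the 'uncompressed size' fields from the Central Directory
    and compare against a limit -- no decompression happens here."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        total = sum(info.file_size for info in zf.infolist())
    return total <= limit
```

Only after this cheap pre-check passes would the implementation go on to actually decompress anything.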
The obvious "gotcha" is that the declared "uncompressed size" might be low while the actual data in the compressed stream is much larger - and that is detectable only by decompressing, so it would seem we fall back to your idea (memory-allocation / CPU counters). But that isn't needed either: all good decompression libraries offer a way to "decompress at most N bytes". The implementation simply uses the previously declared "uncompressed size" as the cap, which guarantees the actual total decompressed size stays within the total limit checked in the previous step.
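A sketch of the capped decompression with Python's `zlib` (the function name and the "raise on overflow" policy are my choices, not from the comment above; `max_length` on `decompressobj.decompress` is the standard-library way to say "at most N bytes"):

```python
import zlib

def decompress_capped(raw_deflate: bytes, declared_size: int) -> bytes:
    """Decompress a raw DEFLATE stream, producing at most declared_size
    bytes; raise if the stream would decompress to more than that."""
    d = zlib.decompressobj(-15)  # wbits=-15: raw DEFLATE, as inside ZIP entries
    out = d.decompress(raw_deflate, declared_size)
    if not d.eof and d.decompress(d.unconsumed_tail, 1):
        # The stream still yields output past the declared size: the
        # "uncompressed size" field was lying -- treat it as a zip bomb.
        raise ValueError("stream exceeds declared uncompressed size")
    return out
```

Because the cap is enforced inside the decompressor, a bomb that declares 1 KiB but inflates to gigabytes gets cut off after 1 KiB of output, regardless of what its stream actually contains.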
That said, I do recognize that some decompression algorithms might have inputs that get really CPU-intensive even for a single output byte, though that's not the case for typical DEFLATE-based ZIPs (you might structure the compressed stream in a way that causes a lot of cache misses, but that's about it).
For non-DEFLATE compression YMMV, and your method comes to mind as a decent solution.