
I've thought of that, but the problem is that it needs to interpolate linearly between the more accurate values, and depending on how fine-grained the interpolation is, you would need a pretty big fixed-point multiplier to do it accurately.

If you didn't want to interpolate with an accurate slope and just used a linear interpolation with a slope of 1 (using the approximations 2^x ~= 1 + x and log_2(x + 1) ~= x for x in [0, 1)), then there's the issue that I discuss with the LUTs.
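
To see how rough the slope-1 approach is, here's a quick Python check of both approximations over [0, 1) (illustrative only; the function names are mine):

    import math

    # Slope-1 approximations: 2^x ~= 1 + x and log_2(1 + x) ~= x on [0, 1).
    def pow2_approx(x):
        return 1.0 + x

    def log2_approx(x):
        return x

    xs = [i / 10000.0 for i in range(10000)]
    worst_pow2 = max(abs(pow2_approx(x) - 2.0 ** x) for x in xs)
    worst_log2 = max(abs(log2_approx(x) - math.log2(1.0 + x)) for x in xs)
    print(worst_pow2)  # ~0.086, near x ~ 0.53
    print(worst_log2)  # ~0.086, near x ~ 0.44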

In the paper I mention that you need at least one more bit in the linear domain than in the log domain (i.e., the `alpha` parameter in the paper is 1 + the log significand's fractional precision) for the values to be unique (such that log(linear(log_value)) == log_value), because the slope varies significantly from 1. If you instead just took the remainder bits and used them as a linear extension with a slope of 1 (i.e., pasted the remainder bits on the end, with `alpha` == the log significand's fractional precision), then log(linear(log_value)) != log_value everywhere. Whether this is a real problem in practice is debatable, but it probably has some effect on numerical stability if you don't preserve the identity.
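
A brute-force check of that uniqueness claim (a sketch, where round() stands in for the round-to-nearest conversions and frac/alpha mirror the paper's parameters):

    import math

    def round_trips(frac, alpha):
        # v encodes x = v / 2^frac in [0, 1); 2^x is rounded to alpha
        # fractional bits in the linear domain, then converted back.
        for v in range(2 ** frac):
            x = v / 2.0 ** frac
            lin = round(2.0 ** x * 2 ** alpha)
            back = round(math.log2(lin / 2.0 ** alpha) * 2 ** frac)
            if back != v:
                return False
        return True

    for frac in (3, 4, 5):
        print(frac, round_trips(frac, frac), round_trips(frac, frac + 1))
    # alpha == frac fails (distinct log values collide in the linear
    # domain near x = 0, where the slope of 2^x is only ~0.69), while
    # alpha == frac + 1 preserves log(linear(log_value)) == log_value.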

Based on my tests, I'm skeptical about training in 8 bits for general problems even with the exact linear addition; it doesn't work well. If you know what the behavior of the network should be, then you can tweak things enough to make it work (as people do today with simulated quantization during training, or with int8 quantization, for instance), but generally, when someone tries something new today and it doesn't work, they tend to blame their architecture rather than the numerical behavior of IEEE 754 binary32 floating point. Some things in ML (e.g., Poincaré embeddings) have issues even at 32 bits, in both dynamic range and precision. It would be a lot harder to know what the problem is in 8 bits, where everything is in question, if you don't know what the outcome should be.

This math type can and should be used for many more things than neural network inference or training, though.



> It would be a lot harder to know what the problem is in 8 bits when everything is under question if you don't know what the outcome should be.

I might have a solution for that: I work on methods to both quantify the impact of your precision on the result and locate the sections of your code that introduce significant numerical errors (as long as your numeric representation respects the IEEE standard).

However, my method is designed for testing or debugging the numerical stability of a code base, not for production use (since it impacts performance).
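
Something in that flavor, as a deliberately minimal sketch (this is not my actual tool, just the idea: shadow a float32 computation with a float64 one and report how many bits of the result survive; numpy is used only to emulate float32):

    import math
    import numpy as np

    def significant_bits(lo, hi):
        # How many bits of the low-precision result agree with the
        # high-precision "shadow" of the same computation.
        if hi == 0.0:
            return float('inf') if lo == 0.0 else 0.0
        rel_err = abs((float(lo) - float(hi)) / float(hi))
        return float('inf') if rel_err == 0.0 else max(0.0, -math.log2(rel_err))

    x, big = np.float32(1e-4), np.float32(1e8)
    lo = (x + big) - big          # computed in float32
    hi = (1e-4 + 1e8) - 1e8       # shadow in float64
    print(lo, hi, significant_bits(lo, hi))  # 0.0 bits survive: this
    # subtraction is where the significant error was introduced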


None of the representations considered in the paper (log, linear posit, or log posit) respect the IEEE standard, deliberately so :)


You drop denormals and change the distribution, but do you keep the 0.5 ulp (round-to-nearest) guarantee from the IEEE standard? And are your rounding errors exact numbers in your representation (can you build Error-Free Transforms)?


For (linear) posit, what the "last place" is varies: relative to a fixed-size significand there is no 0.5 ulp guarantee, but within the regime of full precision there is one. The rounding also becomes logarithmic rather than linear in some domains (towards 0 and +/- inf) when the exponent scale is not 0, in which case it is 0.5 ulp on a log scale rather than a linear one.
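
To make "the last place varies" concrete, here's a back-of-the-envelope count of fraction bits in an n-bit posit as the regime grows (a sketch that ignores regime truncation at the word boundary):

    def posit_fraction_bits(n, es, p):
        # Bits left for the fraction of an n-bit posit (es exponent
        # bits) holding a value with power-of-two exponent p:
        # 1 sign bit + regime run + es bits; the remainder is fraction.
        r = p >> es                              # regime value
        regime_len = r + 2 if r >= 0 else -r + 1
        return max(0, n - 1 - regime_len - es)

    for p in (0, 2, 6, 14, 24):
        print(p, posit_fraction_bits(16, 1, p))
    # 12, 11, 9, 5, 0 fraction bits: precision tapers away from
    # |x| ~ 1, whereas IEEE binary16 always has 10 (for normal values).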

For my log numbers under ELMA (with or without posit-ing), I don't think the sum of two numbers alone can be analyzed in a simple ulp framework, given the hybrid log/linear nature. The two summands are each approximated in the linear domain (to 0.5 ulp in the linear domain, assuming alpha >= frac + 1), then summed exactly, but the conversion back to the log domain at the end is approximate, to 0.5 ulp in the log domain. The result is of course not necessarily within 0.5 ulp in the log domain. Multiplication, division, and square root always give the exact answer, however (no rounding). The sum of two log numbers could also be done via traditional LNS summation, in which case there is <0.5 ulp log-domain error.
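
As a toy model of that hybrid path (Python ints as the exact fixed-point adder, round() standing in for the LUT-based conversions, FRAC/ALPHA playing the role of the paper's frac and alpha):

    import math

    FRAC = 4             # log-domain fraction bits
    ALPHA = FRAC + 1     # linear-domain fraction bits (alpha >= frac + 1)

    def to_linear(vlog):
        # 0.5 ulp (linear domain) approximation of 2^(vlog / 2^FRAC)
        return round(2.0 ** (vlog / 2.0 ** FRAC) * 2 ** ALPHA)

    def to_log(vlin):
        # 0.5 ulp (log domain) conversion of the exact sum back
        return round(math.log2(vlin / 2.0 ** ALPHA) * 2 ** FRAC)

    a, b = 3, 11                              # log codes: 2^(3/16), 2^(11/16)
    s = to_log(to_linear(a) + to_linear(b))   # the integer add is exact
    exact = math.log2(2 ** (3 / 16.0) + 2 ** (11 / 16.0)) * 16
    print(s, exact)   # three separate roundings, so the end-to-end
                      # error need not stay within 0.5 ulp (log domain)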

Kulisch accumulation throws another wrench into the analysis. Summing many log-domain numbers via ELMA will usually be far more accurate than traditional LNS summation with 0.5 ulp (log-domain) rounding at each step, because the compounding of error is minimized, especially when the summands have different (or slightly different) magnitudes. Kulisch accumulation of linear numbers is of course exact, so the sum of any set of numbers, rounded back to traditional floating point, is accurate to 0.5 ulp.
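
For the linear case, a Kulisch accumulator is easy to mimic in Python, since every binary float is a dyadic rational and Python ints are unbounded (the width here is illustrative, sized for binary64 inputs):

    import math

    FRAC_BITS = 1200          # binary point; enough for binary64 subnormals

    def kulisch_sum(xs):
        acc = 0               # fixed-point value: acc * 2^-FRAC_BITS
        for x in xs:
            m, e = math.frexp(x)              # x = m * 2^e exactly
            acc += int(m * 2 ** 53) << (FRAC_BITS + e - 53)
        return acc / 2 ** FRAC_BITS           # one rounding, at the end

    print(kulisch_sum([1e16, 1.0, -1e16]))    # 1.0 (exact)
    print((1e16 + 1.0) - 1e16)                # 0.0 in ordinary float64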


The rounding errors for linear posit are exact numbers in the representation (excepting division), assuming sufficient precision. The rounding errors for LNS add/sub are not exact numbers in the representation in the general case: 2 and sqrt(2) are represented exactly, but (2 + sqrt(2)) is not.
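
A quick numeric illustration (not a proof, just showing the residual never lands on zero at any LNS fraction width):

    import math

    # In an LNS, a value is exact iff its base-2 log is representable.
    # log2(2) = 1 and log2(sqrt(2)) = 0.5 are exact at any width, but:
    target = 2 + math.sqrt(2)
    x = math.log2(target)                        # 1.7715533...
    for frac in (8, 16, 32):
        q = round(x * 2 ** frac) / 2.0 ** frac   # nearest LNS code
        print(frac, abs(2.0 ** q - target))      # residual stays nonzero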



