Fewer Bits
Arithmetic is expensive. If one has fewer bits to add and multiply, one can do so faster and cheaper. Hence the interest in low-bit neural nets. Zhou et al. have an approach, tested on image recognition, showing that one can get away with fewer bits of precision, for example 6-bit instead of 32-bit gradients when training AlexNet. Read the whole thing:
https://arxiv.org/abs/1606.06160
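To make the idea concrete, here is a minimal sketch of uniform k-bit quantization, the building block such schemes rest on: snap a value in [0, 1] onto one of 2^k evenly spaced levels. This is my own toy illustration, not the paper's full training pipeline, which also rescales weights and activations and quantizes gradients stochastically.

```python
import numpy as np

def quantize_k(x, k):
    """Uniform k-bit quantization: snap values in [0, 1] onto one of
    2**k evenly spaced levels. A toy sketch of the quantizer low-bit
    training schemes build on, not the paper's full pipeline."""
    n = 2**k - 1                     # number of intervals between levels
    return np.round(x * n) / n

rng = np.random.default_rng(0)
g = rng.random(5)                    # stand-in values in [0, 1]
print(quantize_k(g, 6))              # 6 bits: 64 levels, rounding error <= 1/126
print(quantize_k(g, 2))              # 2 bits: 4 levels, visibly coarse
```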
Now here’s another place where lower precision beckons: climate models. It turns out that as you go to smaller length scales in a climate model, you need fewer bits of precision. Check out Tim Palmer on climate models in the following video, and keep in mind the matrioshka doll analogy: the smallest dolls need the fewest bits. In the second half, he gets into the reduced precision requirements of small-length-scale representations, tying them into energy efficiency. The neat part is that it might be as simple as turning down the voltage on CMOS.
One hour seventeen minutes, including questions, which are quite good.
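If you want to poke at Palmer's point numerically, here is a quick sketch of emulating reduced precision in software; his group has real emulator tooling for this, while the NumPy version below is just my own stand-in. It rounds float64 values down to a chosen number of significand bits, so you could keep, say, 20 bits for a large-scale field and 10 for a fine-scale one and see what actually changes.

```python
import numpy as np

def round_to_bits(x, significand_bits):
    """Emulate low precision by rounding float64 values (52 significand
    bits) to the given number of significand bits, round-to-nearest.
    Bit-twiddling sketch: add half a ULP of the target precision, then
    clear the discarded bits. Ignores NaN/inf/subnormal edge cases."""
    x = np.asarray(x, dtype=np.float64)
    drop = np.uint64(52 - significand_bits)   # mantissa bits to discard
    if drop == 0:
        return x
    half = np.uint64(1) << (drop - np.uint64(1))
    mask = ~((np.uint64(1) << drop) - np.uint64(1))
    return ((x.view(np.uint64) + half) & mask).view(np.float64)

# A smooth "large-scale" field survives aggressive rounding:
# relative error on the order of 2**-11 at 10 significand bits.
field = np.sin(np.linspace(0.0, 2.0 * np.pi, 8))
print(np.max(np.abs(field - round_to_bits(field, 10))))
```

The appeal of emulating like this is that you can measure how much precision each part of a model really needs before betting on lower-voltage hardware.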