Friday, October 18, 2024

Teaching computers a new way to count could make numbers more accurate

Understand that a value that ought to be exactly 2 often comes out of a calculation as 1.9999999999999998 or so, depending on the chip. A 64-bit double only carries about 15 to 16 significant decimal digits. Again this barely matters at all until you actually do computer things, like deal with a million entries that are not independent. This is the fundamental problem of scientific computer programming.
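What this looks like in practice, as a short sketch using only the Python standard library:

```python
import sys
from decimal import Decimal

# 0.1 has no exact binary representation; Decimal reveals the
# 64-bit double that is actually stored.
stored = Decimal(0.1)
print(stored)  # 0.1000000000000000055511151231257827021181583404541015625

# A 64-bit double carries about 15-16 significant decimal digits.
print(sys.float_info.dig)  # 15

# Adding the stored value ten times therefore misses 1.0.
total = sum(0.1 for _ in range(10))
print(total == 1.0)  # False
```

The error is tiny per operation, which is exactly why it goes unnoticed until the operations number in the millions.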

Now imagine a program producing a stock index by adjusting continuously for every new entry. Depending on rounding defaults, the index will drift up or down slowly and never self-correct. Folks who only know programming will make this mistake over and over.
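The drift in a long-running accumulation can be reproduced in a few lines; a minimal sketch, with the standard library's math.fsum standing in as the error-compensated comparison:

```python
import math

# Naively accumulating a million small entries lets rounding error
# build up; math.fsum compensates for it and returns the sum rounded
# to the nearest representable double.
values = [0.1] * 1_000_000
naive = 0.0
for v in values:
    naive += v

exact = math.fsum(values)
print(naive == 100000.0)  # False: the running total has drifted
print(exact == 100000.0)  # True
```

Nothing in the naive loop ever pulls the total back toward the true value, which is the "never self-correct" part of the problem.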


So fixing this is actually a hardware problem first, and no one is rushing in. Recall that instead of calculating values for pi or for logarithms from scratch, we often use a lookup table for accuracy and speed.
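A minimal sketch of the lookup-table idea, here for sine with linear interpolation between the two nearest entries; the table size is an illustrative choice, not what any particular chip does:

```python
import math

# Precompute sine at N+1 evenly spaced points over one full period,
# then answer queries by linear interpolation, trading memory for
# speed and a predictable error bound.
N = 1024
TABLE = [math.sin(2 * math.pi * i / N) for i in range(N + 1)]

def sin_lookup(x: float) -> float:
    """Approximate sin(x) by interpolating the precomputed table."""
    pos = (x % (2 * math.pi)) / (2 * math.pi) * N
    i = int(pos)
    frac = pos - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

print(abs(sin_lookup(1.0) - math.sin(1.0)) < 1e-4)  # True
```

With 1024 entries the interpolation error is bounded well below 1e-4, and the cost per query is one modulo, one multiply and one table read, regardless of the argument.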


Teaching computers a new way to count could make numbers more accurate

A new way to store numbers in computers can dynamically prioritise accuracy or range, depending on need, allowing software to quickly switch between very large and small numbers



14 October 2024


There are many ways for computers to store numbers

Andrew Ostrovsky/Panther Media GmbH/Alamy

https://www.newscientist.com/article/2451034-teaching-computers-a-new-way-to-count-could-make-numbers-more-accurate/?

Changing the way numbers are stored in computers could improve the accuracy of calculations without needing to increase energy consumption or computing power, which could prove useful for software that needs to quickly switch between very large and small numbers.

Numbers can be surprisingly difficult for computers to work with. The simplest are integers – whole numbers with no decimal point or fractional part. As integers grow larger, they require more storage space, which can lead to problems when we attempt to reduce those requirements – the infamous millennium bug arose from computer programs storing the year as a two-digit number (99 for 1999), leading to the potential for confusion when the year rolled over to 2000.
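The two-digit failure is easy to reproduce; a minimal sketch, where the helper name two_digit is an illustrative choice:

```python
# The millennium bug in miniature: with only two digits stored,
# the year 2000 sorts before 1999, so intervals computed from the
# stored values go wrong.
def two_digit(year: int) -> int:
    return year % 100  # keep only the last two digits

born, now = 1999, 2000
age = two_digit(now) - two_digit(born)
print(age)  # -99, not 1
```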



This space issue is why very large or small numbers, or even just ones with a decimal component, are stored using a technique called floating-point arithmetic. These floating-point numbers work like scientific notation: storing the significant digits, or mantissa, as one integer and an exponent as another – for example, 1,234,000 would be stored as 1234 (the mantissa) and 3 (the exponent), for 10³. Computers actually do this in binary, rather than the base 10 we are used to, but the principle is the same.
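The article's decimal example, and its binary equivalent, can be checked directly; a sketch using Python's standard-library math.frexp to expose a float's binary mantissa and exponent:

```python
import math

# Decimal analogy from the article: 1,234,000 = 1234 * 10**3.
mantissa, exponent = 1234, 3
print(mantissa * 10**exponent)  # 1234000

# Computers do the same in base 2: frexp returns a float's binary
# mantissa m (with 0.5 <= m < 1) and exponent e, so that x = m * 2**e.
m, e = math.frexp(1234000.0)
print(m * 2**e == 1234000.0)  # True
```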

But floating-point numbers are only a useful approximation of real numbers; they are inherently inaccurate because of the limited number of digits they can store. For example, a system designed to store 1,234,000 might struggle with 1,234,567, because this would require more digits for the mantissa. Such limitations mean that floating point numbers can sometimes cause real-world issues.
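The struggle the article describes can be made concrete; a short sketch showing where 32-bit and 64-bit floats run out of mantissa digits:

```python
import struct

# A 32-bit float's 24-bit mantissa cannot hold 9 decimal digits:
# round-tripping 123456789 through float32 loses the low digits.
as_f32 = struct.unpack('<f', struct.pack('<f', 123456789.0))[0]
print(as_f32)  # 123456792.0

# 64-bit doubles hit the same wall later: above 2**53, consecutive
# integers become indistinguishable.
print(2.0 ** 53 + 1 == 2.0 ** 53)  # True
```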

Now Itamar Cohen at Ariel University and Gil Einziger at Ben-Gurion University, both Israeli institutes, have designed an adapted version of floating-point representation called floating-floating-point that they say can improve accuracy and keep the amount of data required to represent the numbers to a minimum.
