# Floating-point for decimals

There is a common misconception that floating-point types cannot be used for decimal numbers. It has turned into a kind of religious dogma, as you can guess from the way this very popular StackOverflow question is stated. Let us dissect what is going on here.

Consider the following problem. You are given up to 1000 decimal numbers up to 1 billion each, with at most two digits after the decimal point. An example input might look like this:

```
1.01
2.02
3.03
```

Your task is to find their sum. The expected result for this example is `6.06`. Let us try to find it with this simple Kotlin code using double-precision floating-point arithmetic:

```kotlin
fun main(args: Array<String>) {
    val input = generateSequence { readLine() }
    println(input.map { it.toDouble() }.sum())
}
```

If you run this code, you see that it produces `6.0600000000000005`. Why? Go read “What Every Programmer Should Know About Floating-Point Arithmetic” if you don’t know why this is happening, and then get back here to read what to do about it.

The obvious, and much publicized, solution to this problem is to use decimal types to represent decimal numbers. In Kotlin/JVM you can use `BigDecimal` (the `.toBigDecimal` extension is available in Kotlin 1.2):

```kotlin
import java.math.BigDecimal

fun main(args: Array<String>) {
    val input = generateSequence { readLine() }
    println(input
        .map { it.toBigDecimal() }
        .fold(0.toBigDecimal()) { a, b -> a + b })
}
```

It works as expected and produces the correct answer of `6.06`. Is there any problem with `BigDecimal`? It depends on your domain. It is perfectly fine most of the time, but it is slower and takes more memory than floating-point arithmetic, which is natively supported by all modern CPUs. So can we use floating point to get a correct solution? Yes, we can!

A double-precision IEEE 754 number has 15+ decimal digits of precision, but it cannot precisely represent trivial decimal fractions like `0.1`, because *binary* floating-point numbers are used. Internally, a number like `0.1` is represented by a number that is very close to, but not exactly equal to, it (`0.1` is actually represented by `0.1000000000000000055511151231257827021181583404541015625`, if you are curious).
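You can see this exact value for yourself on the JVM. As a quick sketch, `BigDecimal`’s `Double` constructor preserves the exact binary value of its argument, while `Double.toString` prints the shortest decimal that round-trips back to the same double:

```kotlin
import java.math.BigDecimal

fun main() {
    // BigDecimal(Double) captures the exact binary value stored in the double
    println(BigDecimal(0.1))
    // prints 0.1000000000000000055511151231257827021181583404541015625

    // Double.toString prints the shortest decimal that round-trips to the same double
    println(0.1)
    // prints 0.1
}
```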

This is not a problem as long as you don’t do *arithmetic* with those numbers. If you just *store* the numbers (read and write them), then it all “does the right thing” and you get your decimal number like `0.1` back unharmed in your output, because `Double.toString` on JVM is specifically designed to allow this kind of use. However, when you start doing arithmetic, like adding numbers, those representation errors start to add up, leading to the infamous `0.1 + 0.2 != 0.3` problem.
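The infamous example is easy to reproduce:

```kotlin
fun main() {
    // each operand carries a tiny representation error; the sum exposes it
    val x = 0.1 + 0.2
    println(x)         // prints 0.30000000000000004
    println(x == 0.3)  // prints false
}
```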

The way to solve this problem is to use problem/domain-specific knowledge. In our problem we are given numbers with up to two digits after the decimal point, and we know that they have at most two digits after any addition. Moreover, because there are at most 1000 numbers of up to a billion each, their sum cannot exceed a trillion (10¹²), so with two digits after the decimal point it comfortably fits into the double-precision floating-point type, if only we can prevent those errors from accumulating. We need to add our numbers so that `1.01 + 1.02 == 2.03`, while a regular addition of those two doubles produces the result of `2.0300000000000002`. How? We just need to round them to two digits after the decimal point (we are using Kotlin 1.2 math functions here):

```kotlin
import kotlin.math.round

fun round2(x: Double) = round(x * 100) / 100
```
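A quick sanity check of this rounding function on the pair of numbers above (a self-contained sketch repeating the `round2` definition):

```kotlin
import kotlin.math.round

fun round2(x: Double) = round(x * 100) / 100

fun main() {
    println(1.01 + 1.02)          // prints 2.0300000000000002
    println(round2(1.01 + 1.02))  // prints 2.03
}
```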

We get `round2(1.01 + 1.02) == 2.03`, because the integer `100` is perfectly represented in double, and both multiplication and division produce correctly rounded results. Armed with this rounding function, we can now solve our original problem in floating point:

```kotlin
fun main(args: Array<String>) {
    val input = generateSequence { readLine() }
    println(input
        .map { it.toDouble() }
        .fold(0.0) { a, b -> round2(a + b) })
}
```

It is important to invoke `round2` after each addition, not just once at the end, to prevent accumulated errors from garbling the last digit of the answer.

Takeaway: you can use floating-point arithmetic to solve certain problems with decimal numbers if you know how. Should you do it? It depends.