# [PD] Infinite and NaN float values?

Matteo Sisti Sette matteosistisette at gmail.com
Mon Aug 16 23:24:04 CEST 2010

```
On 08/16/2010 09:20 PM, Mathieu Bouchard wrote:
> On Mon, 16 Aug 2010, Matteo Sisti Sette wrote:
>
>> However there seem to be some inconsistency: 5/4 returns 0 (as I was
>> used to), not +inf.
>
> 5/4 is not a division by 0, it's a division by 4.

Sorry I meant 5/0

> and then, if it were a division by 0... that should give NaN.
> It can't be NaN [I guess you mean: can't be infinity] because,
> among other things, you can't even know which sign of
> the infinity it should give,

> If you accept limits (as in "the limit of 1/x as x goes towards 0") as a
> suitable mathematical device in this context, then it gives +Inf when x
> decreases towards 0, but it gives -Inf when x increases towards 0. In
> such cases, the limit is usually said to "not exist".

I think in this case the limit is said to be "infinity" (without sign),
meaning that,
for y = f(x),
for every M > 0 (no matter how large) you can always find a T > 0 such
that for all x with 0 < |x| < T you have |f(x)| > M.
I think that is called an infinite limit (rather than a non-existent
limit), and it applies to both real and complex numbers.

I think there exists a "compactification" of the real numbers that takes
this "limit" as one more number, Infinity (without sign), thus
making R compact (visually represented by mapping the real numbers to the
points of a circle rather than a straight line). I don't know whether
this applies to the complex numbers as well.

Actually, in ActionScript (which is not a great example language, but it
is the one I could check with right now), 5/0 gives Infinity and
-5/0 gives -Infinity.

Don't most programming languages behave like this? I always thought NaN
was only the result of 0/0 (and Inf - Inf, and other operations where any
"limit" would be nonexistent), but I didn't consider the problem you
mention about the sign.

>
> If you copy the sign of the zero to decide whether the result is +Inf or
> -Inf, then you violate the rule that the sign of the zero shouldn't
> matter...

That's right.
Indeed, how can one postulate that 0 == -0 is true and at the same time
maintain that inf != -inf?

In ActionScript:

0   ==  -0    -> true
1/0 == 1/-0   -> false

```