Floating Point Numbers by Jarvis Wonkinwill - Tue, 31 Jul 2018 18:41:40 EST ID:JrM6mJRf No.37608
can someone explain wtf Floating Point Numbers are?
>>
Fuck Blytheham - Wed, 01 Aug 2018 05:09:32 EST ID:cIyjo+iF No.37609
It's a fraction and an exponent and usually a sign. It's kind of like scientific notation with the goal of sacrificing precision so you can load and operate on the numbers efficiently in hardware. FP is used pretty much everywhere except finance where the lack of precision will get you sued.
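A quick Python sketch of that precision sacrifice in action, and why finance code avoids binary floats:

```python
# 0.1 has no finite binary expansion, so binary floating point
# can only store an approximation of it.
a = 0.1 + 0.2
print(a)          # 0.30000000000000004
print(a == 0.3)   # False

# Money code uses exact decimal arithmetic (or integer cents) instead:
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```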
>>
Jenny Dartdock - Thu, 02 Aug 2018 01:58:27 EST ID:Xm/W+3lL No.37610
>>37609
Well...kinda.

The IEEE 754 floating-point standard that pretty much everyone uses was created with the goal of reusing many pre-existing hardware pieces, and you'll find that some integer hardware still works on it: equality tests, sign-bit checks, zero checks, and ordering comparisons (less-than/greater-than) are all mostly compatible with their integer counterparts. New hardware was still required, though, for things like floating-point exceptions, denormal numbers, infinities, and NaNs.

However, as it turns out different pieces of hardware handle floating point numbers differently. GPUs, for instance, get to follow most of the rules of the spec while ignoring some other rules that allow them to accelerate their hardware greatly while still remaining mostly compatible:
https://docs.microsoft.com/en-us/windows/desktop/direct3d10/d3d10-graphics-programming-guide-resources-float-rules
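To see what "the integer hardware still works on it" means concretely, here's a Python sketch: for positive, non-NaN floats, comparing the raw bit patterns as unsigned integers orders them exactly like comparing the floats themselves, which is part of why integer comparison circuits could be reused.

```python
import struct

def bits(f: float) -> int:
    # Reinterpret a 32-bit float's bytes as an unsigned integer.
    return struct.unpack(">I", struct.pack(">f", f))[0]

# Ordering by raw bit pattern matches ordering by value
# for positive floats.
xs = [100.25, 0.5, 2.0, 1.5, 1.0]
assert sorted(xs) == sorted(xs, key=bits)
print([hex(bits(x)) for x in sorted(xs)])
```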
>>
Rebecca Socklefoot - Sat, 04 Aug 2018 00:05:05 EST ID:x6K3CZQk No.37616
Watch the floating point lecture
https://www.cs.cmu.edu/~213/schedule.html
>>
Jenny Fuckingcocke - Thu, 01 Nov 2018 03:05:45 EST ID:E/AtBUMC No.37667
A floating point number is of course just bits in memory.
The first bit is the sign bit: 0 means the number is positive, 1 means the number is negative. Note that this sign-magnitude style is different from how negative integers are usually represented in binary (two's complement).

Next after the sign comes the exponent. It's either 8 or 11 bits long depending on whether the floating point number is a float or a double (32 or 64 bits in total).
This exponent is biased, which means an implicit value is subtracted from the unsigned binary integer stored in the exponent field. For 32-bit floats the bias is 127, so an exponent field of 00000001 (binary) means
1 - 127 = -126, 10000000 (binary) means 128 - 127 = 1, etc. The exponent field shouldn't be all 0s or all 1s; those are reserved for special cases: all 0s for zero and denormal numbers, all 1s for infinities and NaN.

The last part is the significand. This is somewhat like your normal integer value: each bit on the left is worth double the bit to its right. The difference is that the leftmost bit is worth 0.5, the next 0.25, etc.
Also an implicit 1 is added (another bias, if you will). Knowing some math, you can see the significand of a normal number is always between 1 (included) and 2 (excluded).

Finally the value of the floating point number is
(-1)^sign * 2^exponent * significand.

The sign part is just mathematical notation "if sign is 1, then negative, otherwise positive". Remember the biases in exponent and significand.

So a

0 10000010 11010000000000000000000 would be

sign: positive
exponent: 128+2 -127 = 3
significand: 1 + 0.5 + 0.25 + 0.0625 = 1.8125 (don't forget the implicit 1)

and the final value: 2^3 * 1.8125 = 14.5
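Careful to include the implicit leading 1 when decoding by hand. Here's a Python sketch that decodes that same bit pattern step by step and cross-checks it against what struct gives you:

```python
import struct

# The 32-bit pattern from the example above: sign | exponent | significand
pattern = "0" + "10000010" + "11010000000000000000000"

sign_bit = int(pattern[0])
exponent = int(pattern[1:9], 2) - 127   # remove the 127 bias
significand = 1.0                       # start with the implicit 1
for i, bit in enumerate(pattern[9:], start=1):
    significand += int(bit) * 2 ** -i   # leftmost bit is worth 0.5

value = (-1) ** sign_bit * 2 ** exponent * significand
print(value)  # 14.5

# Cross-check: reinterpret the same 32 bits as a float directly.
as_float = struct.unpack(">f", int(pattern, 2).to_bytes(4, "big"))[0]
assert value == as_float
```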
>>
Charlotte Punkinfoot - Tue, 18 Dec 2018 13:41:59 EST ID:LpbOCWiY No.37705
>>37608
it's scientific notation for computers that lets you represent fractional numbers.


1) The "fraction" (sometimes referred to as the mantissa if you want to overcomplicate things) is the actual value, just with the decimal point in the wrong place. It's the binary representation of your number's digits, scaled to lie between 0 and 1

so it can be like .5

2) then you have the "exponent value", which is used for shifting the decimal point around

so if it is 2 your mantissa .5 becomes 50, and if it is -2 your mantissa becomes .005

(Note: the stored exponent is biased — you subtract a fixed constant (127 for 32-bit floats) from the stored unsigned value to get the actual exponent. There are more fiddly details, but that's the gist.)

3) and the sign bit: if it is 0 you have a positive number, and if it is 1 you have a negative number.

It gets a little more specific when dealing with something like IEEE floating point numbers but this is just a loose way to think of floating point numbers.
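That loose "mantissa plus a shift" picture maps directly onto Python's math.frexp, which splits any float into a fraction in [0.5, 1) and a power-of-two exponent:

```python
import math

# math.frexp(x) returns (frac, exp) with x == frac * 2**exp
# and 0.5 <= abs(frac) < 1 (for nonzero x) -- exactly the
# "value with the decimal in the wrong place, plus a shift" idea.
for x in [6.5, 0.15625, -3.0]:
    frac, exp = math.frexp(x)
    print(f"{x} = {frac} * 2**{exp}")
    assert x == math.ldexp(frac, exp)  # ldexp undoes the split
```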
>>
Walter Bambleman - Wed, 19 Dec 2018 17:59:52 EST ID:ztur2YT8 No.37707
The name "floating point" comes as a contrast to the older "fixed point" numbering scheme, in which an integer with N bits was broken up into two parts: a "whole part" and a "fractional part". This fixed-point format was very fast on mostly integer-based computers, and it wasn't too difficult for most people to understand. However, fixed-point numbers really only worked well when you knew up front what range of values you would be working with. If you didn't, you were likely to run into integer overflow problems in the "whole number" part, or to find that the precision of your "fractional part" wasn't high enough to perform calculations effectively.

Floating point came around as a solution to the problems of fixed-point systems, though floating point brings new problems of its own compared to fixed point (such as non-uniform precision across its range, and rounding/imprecision issues).
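A toy sketch of the fixed-point idea in Python (the format and names here are made up for illustration, not any real library): a "16.16" format where a plain integer stores value * 2^16, so addition is just integer addition but precision and range are locked in:

```python
# Toy 16.16 fixed point: 16 bits of whole part, 16 bits of fraction.
SCALE = 1 << 16

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    return n / SCALE

a, b = to_fixed(1.5), to_fixed(2.25)
print(from_fixed(a + b))           # 3.75 -- addition is plain int addition
print(from_fixed(to_fixed(1e-9)))  # 0.0  -- below the fixed precision, lost
```

The second line shows the precision problem the post describes: anything smaller than 2^-16 silently rounds to zero, no matter how much room the whole part has to spare.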

