2.  Notation


In computers, the two essential atoms that make up the floating point molecule are the exponent and the mantissa.

The exponent is equivalent to the power of 10 in scientific notation, although in computers it is usually a power of 2 instead. The exponent is allowed to be either negative or positive and is usually implemented as a special kind of integer designed for that purpose. In other words, when I say 4.5*10^2, the value 2 is the exponent of that number (the power of 10).

The term mantissa may be new to you.  In mathematics, it means the fractional part of a logarithm as opposed to the integer part. Logarithms are just the flip side of scientific notation (powers) in many respects, so it is natural to draw terms from that field.

In floating point notation, the mantissa represents the first part of the scientific notation value, from 1 to 10 (excluding 10). For our purposes here, when I say 4.5*10^2, the value 4.5 is the mantissa of that number. And in computers, the mantissa is also implemented as a special kind of integer crafted for its special needs.

The exponent part

Before I really twist your head with the mantissa, let's take on the more natural of the two: the exponent.

In computers these days, the exponent part of a floating point value is a power of 2.  So this is the first thing to keep in mind.  No longer are numbers represented as some value times a power of 10.  Instead, it's slightly different.  In computers, floating point numbers are represented by some value (mantissa) times a power of 2.  This is done because it makes things a little faster inside the computer with less actual logic.

The powers of 2 can be both negative and positive, so the exponent must be allowed to do the same.  Luckily, signed two's-complement integers do this well.  Unluckily, signed two's-complement integers make direct comparison for magnitude a little tricky, because negative numbers "look" larger than positive values unless you take their sign bit into account.  Since magnitude comparison is essential for addition and subtraction, and since the slight extra work involved in comparing negative and positive two's-complement integers wasn't desired, most exponents are implemented as biased exponents.  (Discussing two's-complement notation is beyond the scope here.  If you don't understand it, just accept my statement that biased exponents make things easier.)
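To see the comparison problem for yourself, here is a small sketch in Python (not Quick BASIC, and not part of FLOATQB; the helper name twos_complement_byte is mine). It shows that a negative exponent stored in two's complement "looks" bigger than a positive one, while the same exponents compare in the right order once a bias is added:

```python
# Sketch: why biased exponents simplify magnitude comparison.
# In 8-bit two's complement, -1 is stored as 0xFF (255), which
# "looks" bigger than +1 (0x01) if you compare the raw bits.
BIAS = 127

def twos_complement_byte(n):
    """Return the 8-bit two's-complement bit pattern of n, as an unsigned value."""
    return n & 0xFF

# Raw two's-complement bits give the WRONG order: -1 appears larger than +1.
print(twos_complement_byte(-1) > twos_complement_byte(1))   # True (wrong!)

# Biased values sort in the same order as the true exponents.
print((-1 + BIAS) > (1 + BIAS))                             # False (right)
```

With the bias applied, a plain unsigned comparison of the stored exponent fields is all the hardware needs.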

The result is something like 1.5*2^(exponent - bias). I know that makes it look worse to you. But it looks better to a computer.  And you can guess whose view wins!

The value of the bias actually depends on how many bits of the floating point value are reserved for the exponent. In 32-bit precision formats, the value of the bias is 127 because the exponent occupies 8 bits of space and 2^(8-1) - 1 = 127. Okay, so that's a little cryptic as to why, but it does give you a formula. In 64-bit precision formats, the exponent occupies 11 bits of space, so the value of the bias is 2^(11-1)-1 = 1023. I know this is just more to remember, but computers like it this way.
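To make that formula concrete, here is a short Python sketch (again, not Quick BASIC; the function name bias is mine, for illustration only). It computes the bias from the number of exponent bits, exactly as described above:

```python
# Sketch: the bias is 2^(bits-1) - 1, where "bits" is the width
# of the exponent field in the floating point format.
def bias(exponent_bits):
    return 2 ** (exponent_bits - 1) - 1

print(bias(8))    # 32-bit format: 8 exponent bits -> 127
print(bias(11))   # 64-bit format: 11 exponent bits -> 1023

# So a true exponent of 3 is stored as 3 + 127 = 130 in 32-bit format.
print(3 + bias(8))   # 130
```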

If you run my FLOATQB program, you will see the bit field for the exponent clearly labeled for you. You can switch between 32-bit and 64-bit formats by adding a ! or # to your number, in keeping with the way Quick BASIC uses them. The program will automatically switch for you.

The mantissa part

As if that wasn't enough, there is the mantissa.  Remember the scientific notation?  And that the exponent switched from a power of 10, used in scientific notation, to a power of 2 for computers?  So, 1.5*10^1 becomes 1.875*2^3.   Nasty, eh?

If you play with this idea a bit, converting various scientific numbers from powers of 10 to powers of 2, you'll also discover that the mantissa only needs to go from 1 to 2 (excluding 2).  In fact, all of the mantissas you will ever need to write will always be of the form 1.xxxxxxxx.  Kind of handy that way.  The first digit is always a 1.  All you need to figure out is the fractional part for the mantissa.
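If you'd rather not do the conversions by hand, here's a Python sketch (not part of FLOATQB; the function name normalize is mine). Python's standard math.frexp splits a value into a mantissa in [0.5, 1) times a power of 2; doubling the mantissa and dropping the exponent by one gives the 1.xxx form used here:

```python
import math

# Sketch: math.frexp returns (m, e) with value == m * 2**e and 0.5 <= m < 1.
# Doubling m and subtracting 1 from e gives a mantissa in [1, 2).
def normalize(value):
    m, e = math.frexp(value)
    return m * 2, e - 1

print(normalize(15.0))   # (1.875, 3), i.e. 15 = 1.875 * 2^3
```

Try a few values and you'll see the mantissa always lands between 1 and 2.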

Which brings up one more detail about mantissas in computers: hidden-bit notation.  Some smart person noticed that since the mantissa always starts with a 1, there is no need to explicitly say so.  In other words, you can just provide the fractional part and ignore the 1, knowing that everyone else knows the 1 should be there and can put it back.  Similarly, computers today also throw away that leading 1 when saving a floating point value.  It's not much of an advantage, but it buys one binary bit, and that can increase your precision in the same space, and that is worth doing.
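You can watch the hidden bit disappear by pulling apart the stored bits of a 32-bit value. This Python sketch (mine, not FLOATQB's; it uses the standard struct module to get at the raw bits) takes 15.0 apart into the fields discussed so far:

```python
import struct

# Sketch: unpack the raw bits of a 32-bit float.
# Layout: 1 sign bit, 8 exponent bits (biased by 127), 23 fraction bits.
bits = struct.unpack('>I', struct.pack('>f', 15.0))[0]

sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF

print(sign)                    # 0 (positive)
print(exponent - 127)          # 3, the true power of 2
print(1 + fraction / 2**23)    # 1.875, with the hidden 1 put back
```

Notice the leading 1 of 1.875 is nowhere in the stored bits; only the .875 fraction is, and we restore the 1 ourselves.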

The only exception occurs in the special case of zero.  That's the only time when the leading digit isn't a one.  But in floating point format, that's easy to handle as a special case.

By the way, a mantissa is stored in a form just like scientific numbers are, with the leading binary bit always in the same position.  This is quite different from the normal way of storing integers in computers, and the processes of going between normal integer form and this special form are called normalization and denormalization.  Floating point numbers store their mantissas in normalized form.  And when certain calculations are performed, such as addition and subtraction, they are denormalized, combined, and then renormalized.
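Here's a much-simplified Python sketch of that denormalize/combine/renormalize cycle (my own illustration, not how FLOATQB or real hardware does it; real hardware shifts integer bit patterns, handles rounding, and deals with many special cases this skips):

```python
# Sketch: add two values given as (mantissa, exponent) pairs with
# mantissa in [1, 2), by the denormalize/combine/renormalize steps.
def add(m1, e1, m2, e2):
    # Denormalize: shift the smaller-exponent operand to match the larger.
    if e1 < e2:
        m1, e1 = m1 / 2 ** (e2 - e1), e2
    elif e2 < e1:
        m2, e2 = m2 / 2 ** (e1 - e2), e1
    # Combine: the exponents now agree, so the mantissas just add.
    m, e = m1 + m2, e1
    # Renormalize: bring the mantissa back into [1, 2).
    while m >= 2:
        m, e = m / 2, e + 1
    while 0 < m < 1:
        m, e = m * 2, e - 1
    return m, e

# 15 + 5: (1.875 * 2^3) + (1.25 * 2^2) = 20 = 1.25 * 2^4
print(add(1.875, 3, 1.25, 2))   # (1.25, 4)
```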

What About The Sign?

Almost forgot!  The entire number needs to be either "positive" or "negative." This means that the mantissa needs a sign. So floating point formats include a special bit just for that purpose. Its handling is so different from the mantissa's, though, that it's almost like another atom. But it applies to the mantissa, so maybe it's best to consider it that way.

Summary

So, floating point involves a hidden-bit, normalized mantissa, a mantissa sign (+ or -), and a biased exponent.  It's kind of tricky in implementation, but the concepts are simple enough. The results are enormously useful, too.

If you get a chance, load up FLOATQB or FLOAT and see what it really looks like on a PC. Try out different values and see if you can develop a finer understanding.


Last updated: Friday, July 09, 2004 13:13