I'm a little confused by the very last example. Here's the excerpt:
When using a floating-point type, you can specify a precision (the total number of allowable digits both to the left and to the right of the decimal point) and a scale (the number of allowable digits to the right of the decimal point), but they are not required. These values are represented in Table 2-3 as p and s. If you specify a precision and scale for your floating-point column, remember that the data stored in the column will be rounded if the number of digits exceeds the scale and/or precision of the column. For example, a column defined as float(4,2) will store a total of four digits, two to the left of the decimal and two to the right of the decimal. Therefore, such a column would handle the numbers 27.44 and 8.19 just fine, but the number 17.8675 would be rounded to 17.87, and attempting to store the number 178.375 in your float(4,2) column would generate an error.
Why doesn't 178.375 in a float(4,2) column just get rounded to 178.4? Why does it produce an error instead? Wouldn't 178.4 satisfy a precision of 4 and a scale of 2, albeit with only one digit to the right of the decimal?
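To make the question concrete, here's a minimal sketch of the statements I have in mind (assuming MySQL; the table and column names are just placeholders):

```sql
CREATE TABLE test_float (val FLOAT(4,2));

INSERT INTO test_float (val) VALUES (27.44);    -- fits: two digits left of the decimal, two right
INSERT INTO test_float (val) VALUES (17.8675);  -- rounded to 17.87, per the excerpt
INSERT INTO test_float (val) VALUES (178.375);  -- the excerpt says this raises an error rather than rounding to 178.4
```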