Binary precision of a number type


Gilles Dubochet
Hello,

(Sorry for the long introduction; I didn't manage to make it shorter
while keeping it clear. The question comes at the end of the message.)

JDBC can return (via ResultSetMetaData's getPrecision method) the
minimal number of decimal digits required to store a number. At runtime,
I want to store this number in the smallest possible native Java type
(Integer, Long or BigInteger, depending on the number's precision). Java
types' precisions are, of course, defined in binary digits.

I want my system to support various DBMSs, so I cannot (I think) use the
column type to determine the precision, since the precision of a given
type is not standardised in SQL (an integer, for example, isn't
guaranteed to be 32 bits). Column type names aren't standardised either.
So I believe I need to use the precision value.

Here is an example of my problem: a 32-bit signed number in the
database can represent values between -2,147,483,648 and
2,147,483,647. This could typically be stored in a Java int.

The (conservative) number of decimal digits required to represent these
32-bit numbers is 10 (if the minus sign comes for free), and that is
what getPrecision returns. But 10 digits actually allow numbers ranging
from -9,999,999,999 to 9,999,999,999, quite a bit more than the actual
range, so getPrecision overstates the precision.

If I take the full 10-digit range (up to 9,999,999,999) to be the
actual range of this value, I would need 35 bits (including the sign
bit) to represent it. This means I would use a Java long, which isn't
really the smallest possible Java type for this value.

On the other hand, I could consider the binary precision for tinyint,
smallint, integer or bigint to really be "the maximum number of bits
whose full range still fits into that many decimal digits" (of course
taking into account whether the type is signed). This is the more
optimistic approach, and it would correctly yield 32 as the number of
bits in my example.
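Concretely, the optimistic reading amounts to assuming the reported precision came from one of the usual two's-complement widths and picking the widest one whose magnitude still needs no more than that many digits. A hypothetical sketch (the method name is my own invention):

```java
import java.math.BigInteger;

public class OptimisticBits {
    // Optimistic: assume the reported decimal precision came from one of the
    // standard two's-complement widths, and pick the widest width that still
    // needs no more than that many decimal digits.
    static Class<?> optimisticType(int decimalPrecision) {
        if (decimalPrecision <= 3)  return Byte.class;    // |min| = 2^7  = 128 (3 digits)
        if (decimalPrecision <= 5)  return Short.class;   // |min| = 2^15 = 32768 (5 digits)
        if (decimalPrecision <= 10) return Integer.class; // |min| = 2^31 = 2147483648 (10 digits)
        if (decimalPrecision <= 19) return Long.class;    // |min| = 2^63 (19 digits)
        return BigInteger.class;                          // anything wider
    }

    public static void main(String[] args) {
        System.out.println(optimisticType(10).getSimpleName()); // Integer
    }
}
```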

And now the question: is the optimistic approach safe? Is there no case
where this assumption is too optimistic? What about numeric or decimal
types whose scale is zero: can they be treated the same way? Also in
DBMSs other than PostgreSQL? And what about float or real values? I'd
appreciate anyone's thoughts on this.

Cheers,
Gilles Dubochet.



---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend