

The following bug has been logged on the website:
Bug reference: 15812
Logged by: Kaleb Akalework
Email address: [hidden email]
PostgreSQL version: 11.3
Operating system: Windows/Linux
Description:
I need to divide a big numeric(20) number by 10000000000 to feed it
into a floor function. The division operation rounds the number up, which
causes problems. I need the division to just divide the number without
rounding up or down. For my purposes, 3691635539999999999/10000000000 should
return 369163553.9999999999, not 369163554. This happens when the data is
retrieved from a column. Below are queries to reproduce the problem:
create table test_table
(
REQUEST_UUID varchar(50) not null,
BIG_NUM numeric(20,0) not null
);
INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values ('TEST',
3691635539999999999);
INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values('TEST',
3691635530099999999);
INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values('TEST',
3691635530999999999);
SELECT BIG_NUM, FLOOR(BIG_NUM/10000000000), BIG_NUM/10000000000 from
test_table;
The following is the result of the above select. You can see that column 2
for the first row was rounded up.
"3691635539999999999" "369163554" "369163554.00000000"
"3691635530099999999" "369163553" "369163553.01000000"
"3691635530999999999" "369163553" "369163553.10000000"


On 2019-May-17, PG Bug reporting form wrote:
> create table test_table
> (
> REQUEST_UUID varchar(50) not null,
> BIG_NUM numeric(20,0) not null
> );
>
> INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values ('TEST',
> 3691635539999999999);
> INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values('TEST',
> 3691635530099999999);
> INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values('TEST',
> 3691635530999999999);
>
> SELECT BIG_NUM, FLOOR(BIG_NUM/10000000000), BIG_NUM/10000000000 from
> test_table;
Well, your column definition has room for zero decimal places, so I'm
not sure this result is all that surprising. Maybe you should cast the
column to one that has a few decimal places, say
select big_num::numeric(30,10) / 10000000000 from test_table;
and see whether that helps your case.

Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Hi Alvaro,
Thank you for getting back to me. Well, that is not correct. The column is defined as numeric(20,0) because we don't expect decimal numbers in the column,
but operations on the value should not be dictated by the column definition. My table has millions of rows and I cannot change the table definition due to the number of rows and business reasons. The question is: why is the result of the operation dictated by the column definition?
If you just did select (3691635539999999999/10000000000) you would get the correct result, minus all the digits after the decimal point:
SELECT (3691635539999999999/10000000000)
"369163553"
This seems to be a bug, no? I have data centers with SQL Server and Oracle, and they don't exhibit this behavior.
Thank you again for getting back to me quickly. Looking forward to hearing from you.
Thank you
Kaleb Akalework
Original Message
From: Alvaro Herrera < [hidden email]>
Sent: Friday, May 17, 2019 12:02 PM
To: Kaleb Akalework < [hidden email]>; [hidden email]
Subject: Re: BUG #15812: Select statement of a very big number, with a division operator seems to round up.


Hi,
(on postgresql lists, please quote emails nicely and trim irrelevant
pieces)
On 2019-05-17 16:10:52 +0000, Kaleb Akalework wrote:
> Thank you for getting back to me. Well that is not correct the column is defined as numeric(20,0) because we don't expect decimal numbers in the column,
> but operations on the value should not be dictated by the column
> definition. My table has millions of rows and cannot change the table
> definition due to number of rows and business purposes. The question
> is why is the result of the operation dictated by the column
> definition?
It doesn't have to be the column definition - you could just indicate the
desired precision in the divisor. I'd assume that
SELECT BIG_NUM, FLOOR(BIG_NUM/10000000000::numeric(21,10)), BIG_NUM/10000000000::numeric(21,10) from test_table;
would give you precisely the result you want?
> If you just did select (3691635539999999999/10000000000) you would get the correct result minus all the numbers after the decimal point
>
> SELECT (3691635539999999999/10000000000)
>
> "369163553"
>
> This seems to be bug, no?
That's just because the types here are assumed to be bigint (i.e. 64-bit
integers):
postgres[22538][1]=# SELECT pg_typeof(3691635539999999999), pg_typeof(10000000000), pg_typeof(3691635539999999999/10000000000), 3691635539999999999/10000000000;
┌───────────┬───────────┬───────────┬───────────┐
│ pg_typeof │ pg_typeof │ pg_typeof │ ?column?  │
├───────────┼───────────┼───────────┼───────────┤
│ bigint    │ bigint    │ bigint    │ 369163553 │
└───────────┴───────────┴───────────┴───────────┘
(1 row)
Greetings,
Andres Freund


On Fri, May 17, 2019 at 9:11 AM Kaleb Akalework <[hidden email]> wrote:
> The question is why is the result of the operation dictated by the column definition?
Because PostgreSQL, and SQL in general, is a typed language and the output of the division operation is defined to be of the exact same type as its inputs. Since you are dividing:
numeric(20,0) / bigint
PostgreSQL converts that to:
numeric(20,0) / numeric(20,0) = numeric(20,0)
Then applies the rules for rounding a scaled value to an unscaled one (i.e., round half away from zero) to the result.
Writing:
numeric / bigint = numeric (same scale/precision as the numeric value)
Basically ends up the same since for these particular values the scale of the input is 0 and so the scale of the output is also 0 (TBH, I'm a bit confused writing this in face of third column's result...)
Thus:
numeric(30,10) / bigint = numeric(30,10)
Also...
SELECT 3691635539999999999/10000000000
is
bigint / bigint = bigint (with fractional truncation instead of rounding)
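To illustrate the two behaviors side by side outside of PostgreSQL, here is a rough sketch using Python's decimal module (an editor's analogy, not PostgreSQL's actual arithmetic code):

```python
from decimal import Decimal, ROUND_HALF_UP

big_num, divisor = 3691635539999999999, 10000000000

# bigint / bigint truncates the fraction; Python's // behaves the same
# way for non-negative operands:
assert big_num // divisor == 369163553

# numeric / numeric computes the exact quotient and then rounds it to
# the chosen result scale (8 in this case), half away from zero:
exact = Decimal(big_num) / Decimal(divisor)
rounded = exact.quantize(Decimal("1E-8"), rounding=ROUND_HALF_UP)
assert rounded == Decimal("369163554.00000000")
```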
David J.


Hi,
On 2019-05-17 12:02:11 -0400, Alvaro Herrera wrote:
> On 2019-May-17, PG Bug reporting form wrote:
>
> > create table test_table
> > (
> > REQUEST_UUID varchar(50) not null,
> > BIG_NUM numeric(20,0) not null
> > );
> >
> > INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values ('TEST',
> > 3691635539999999999);
> > INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values('TEST',
> > 3691635530099999999);
> > INSERT INTO test_table (REQUEST_UUID, BIG_NUM) values('TEST',
> > 3691635530999999999);
> >
> > SELECT BIG_NUM, FLOOR(BIG_NUM/10000000000), BIG_NUM/10000000000 from
> > test_table;
>
> Well, your column definition has room for zero decimal places, so I'm
> not sure this result is all that surprising. Maybe you should cast the
> column to one that has a few decimal places, say
> select big_num::numeric(30,10) / 10000000000 from test_table;
> and see whether that helps your case.
Arguably it's less the column's and more the divisor's precision that's
the problem. Note that even if big_num were numeric (i.e. without an
implied precision) you'd get the OP's results - the precision is not
"widened" to the appropriate width for the max precision needed for the
division.
Greetings,
Andres Freund


Hi,
Thank you Alvaro, Andres and David.
I tried Andres's suggestion in his last email and that seems to work as a workaround. Please see below.
SELECT BIG_NUM, FLOOR(BIG_NUM /10000000000.0000000000), BIG_NUM/10000000000.0000000000 from test_table
"3691635539999999999" "369163553" "369163553.9999999999"
"3691635530099999999" "369163553" "369163553.0099999999"
"3691635530999999999" "369163553" "369163553.0999999999"
But should this be considered a bug? Could there be better support for this, instead of having the query writer know how many decimal digits to type to get the correct type?
It seems a little awkward and error-prone to have to type .0000... etc.?
Thank you all again for the fast response.
Kaleb Akalework
Original Message
From: Andres Freund < [hidden email]>
Sent: Friday, May 17, 2019 12:24 PM
To: Alvaro Herrera < [hidden email]>
Cc: Kaleb Akalework < [hidden email]>; [hidden email]
Subject: Re: BUG #15812: Select statement of a very big number, with a division operator seems to round up.


On Fri, 17 May 2019 at 17:36, Kaleb Akalework < [hidden email]> wrote:
>
> I tried Andres suggestion in his last email and that seems to work as a work around. Please see below.
>
> SELECT BIG_NUM, FLOOR(BIG_NUM /10000000000.0000000000), BIG_NUM/10000000000.0000000000 from test_table
>
> "3691635539999999999" "369163553" "369163553.9999999999"
> "3691635530099999999" "369163553" "369163553.0099999999"
> "3691635530999999999" "369163553" "369163553.0999999999"
>
> But should this be a bug? Can their a better support of this, instead of having the query writer to know how many decimal numbers to put to get the correct type?
>
> It seems a little awkward and error prone to have to type .0000.... etc?
>
I would suggest using the numeric div() function, which divides a pair
of numeric values, and returns the truncated integer result. I.e.,
div(big_num, 10000000000). For example:
SELECT div(3691635539999999999::numeric(20, 0), 10000000000);
returns 369163553.
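For non-negative values, div()'s truncating behavior matches integer ("divide-integer") division in other environments; a quick cross-check with Python's decimal module (an editor's sketch, not part of the thread's SQL):

```python
from decimal import Decimal

# PostgreSQL's div(x, y) returns the integer quotient with the fraction
# truncated; Decimal's // operator does the same (divide-integer):
q = Decimal(3691635539999999999) // Decimal(10000000000)
assert q == Decimal(369163553)
```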
See https://www.postgresql.org/docs/current/functions-math.html

Regards,
Dean


>>>>> "PG" == PG Bug reporting form < [hidden email]> writes:
PG> I have a need to divide a big number numeric(20) by 10000000000 to
PG> feed it into a floor function. The division operation rounds up the
PG> number which causes problems. I need the division to just divide
PG> the number without rounding up or down. For my purposes
PG> 3691635539999999999/10000000000 should return 369163553.9999999999
PG> not 369163554. This happens if the data is retrieved from a column.
There seems to have been a bit of confusion in the prior responses here.
The first thing to understand is that numeric/numeric _must_ in general
round the result to _some_ precision, since otherwise the output of
1.0/3.0 would be infinitely long. (Whereas numeric addition,
subtraction, and multiplication can always give exact results.)
The question is how many digits of precision the division function
chooses. The documentation seems to be silent on this; the code says:
* The result scale of a division isn't specified in any SQL standard. For
* PostgreSQL we select a result scale that will give at least
* NUMERIC_MIN_SIG_DIGITS significant digits, so that numeric gives a
* result no less accurate than float8; but use a scale not less than
* either input's display scale.
NUMERIC_MIN_SIG_DIGITS is defined to be 16.
So here you're dividing 3691635539999999999::numeric, which has a
display scale of 0, by 10000000000::numeric, which also has a display
scale of 0. 369163553.9999999999 is 19 significant digits; the chosen
result scale is 8 because that gives at least the minimum 16 significant
digits.
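As a sanity check of that arithmetic (an editor's sketch in Python's decimal module, assuming the result scale of 8 stated above):

```python
from decimal import Decimal, ROUND_HALF_UP

exact = Decimal(3691635539999999999) / Decimal(10000000000)
# the exact quotient, 19 significant digits:
assert exact == Decimal("369163553.9999999999")

# rounding to the chosen result scale of 8 reproduces the value the
# reporter saw in the third output column:
assert exact.quantize(Decimal("1E-8"), rounding=ROUND_HALF_UP) \
       == Decimal("369163554.00000000")
```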
(Now, it can be argued that PG's choice of result scale for division is
more surprising than it could be. But coming up with a nonsurprising
rule is not easy.)
When a numeric value comes from a table column that has a declared
scale, like numeric(20,0), then the value always has the specified
scale. You can force the scale to a specific value using round(x,n)
(usually more convenient than adding a ::numeric(blah,n) cast).
For your example, it might be more convenient to do:
select BIG_NUM*(1.0/10000000000) ...
which will always give exact results when the divisor is a power of 10.
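Why this trick is safe can be sketched with Python's decimal module (an editor's illustration, assuming a power-of-ten divisor): multiplying by an exactly representable power-of-ten reciprocal only shifts the decimal point, so no rounding occurs.

```python
from decimal import Decimal

# 1/10^10 is exact in decimal arithmetic, so the multiplication merely
# shifts the decimal point ten places without rounding:
factor = Decimal(1) / Decimal(10000000000)          # 1E-10 exactly
shifted = Decimal(3691635539999999999) * factor
assert shifted == Decimal("369163553.9999999999")

# floor now gives the value the reporter wanted:
assert int(shifted) == 369163553   # int() truncates toward zero
```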

Andrew (irc:RhodiumToad)

