Hi all,
A couple of days ago I shared a WIP patch [1] implementing BRIN indexes
based on bloom filters. One inherent limitation of that approach is that
it can only support equality conditions - that's perfectly fine in many
cases (e.g. with UUIDs it's rare to use range queries, etc.).

[1] https://www.postgresql.org/message-id/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com

But in other cases that restriction is pretty unacceptable, e.g. with
timestamps that are queried mostly using range conditions. A common
issue is that while the data is initially well correlated (giving us
nice narrow min/max ranges in the BRIN index), this degrades over time
(typically due to DELETEs/UPDATEs, with new rows then routed to free
space). There are not many options to prevent this, and fixing it pretty
much requires a CLUSTER on the table.

This patch addresses that by extending BRIN indexes with a more complex
"summary". Instead of keeping just a single minmax interval per block
range, we maintain a list of minmax intervals, which allows us to track
"gaps" in the data.

To illustrate the improvement, consider this table:

    create table a (val float8) with (fillfactor = 90);
    insert into a select i::float from generate_series(1,10000000) s(i);
    update a set val = 1 where random() < 0.01;
    update a set val = 10000000 where random() < 0.01;

That means the column 'val' is almost perfectly correlated with the
position in the table (which would be great for BRIN minmax indexes),
but then roughly 1% of the values is set to 1 and another 1% to
10,000,000.
That means pretty much every range will be [1,10000000], which makes
this BRIN index mostly useless, as illustrated by these explain plans:

create index on a using brin (val) with (pages_per_range = 16);

explain analyze select * from a where val = 100;

                              QUERY PLAN
--------------------------------------------------------------------
 Bitmap Heap Scan on a  (cost=54.01..10691.02 rows=8 width=8)
                        (actual time=5.901..785.520 rows=1 loops=1)
   Recheck Cond: (val = '100'::double precision)
   Rows Removed by Index Recheck: 9999999
   Heap Blocks: lossy=49020
   ->  Bitmap Index Scan on a_val_idx
         (cost=0.00..54.00 rows=3400 width=0)
         (actual time=5.792..5.792 rows=490240 loops=1)
         Index Cond: (val = '100'::double precision)
 Planning time: 0.119 ms
 Execution time: 785.583 ms
(8 rows)

explain analyze select * from a where val between 100 and 10000;

                              QUERY PLAN
------------------------------------------------------------------
 Bitmap Heap Scan on a  (cost=55.94..25132.00 rows=7728 width=8)
                        (actual time=5.939..858.125 rows=9695 loops=1)
   Recheck Cond: ((val >= '100'::double precision) AND
                  (val <= '10000'::double precision))
   Rows Removed by Index Recheck: 9990305
   Heap Blocks: lossy=49020
   ->  Bitmap Index Scan on a_val_idx
         (cost=0.00..54.01 rows=10200 width=0)
         (actual time=5.831..5.831 rows=490240 loops=1)
         Index Cond: ((val >= '100'::double precision) AND
                      (val <= '10000'::double precision))
 Planning time: 0.139 ms
 Execution time: 871.132 ms
(8 rows)

Obviously, the queries do scan the whole table and then eliminate most
of the rows in "Index Recheck". Decreasing pages_per_range does not make
a measurable difference in this case - it eliminates maybe 10% of the
rechecks, but most pages still have a very wide minmax range.
With the patch, it looks about like this:

create index on a using brin (val float8_minmax_multi_ops)
    with (pages_per_range = 16);

explain analyze select * from a where val = 100;

                              QUERY PLAN
-------------------------------------------------------------------
 Bitmap Heap Scan on a  (cost=830.01..11467.02 rows=8 width=8)
                        (actual time=7.772..8.533 rows=1 loops=1)
   Recheck Cond: (val = '100'::double precision)
   Rows Removed by Index Recheck: 3263
   Heap Blocks: lossy=16
   ->  Bitmap Index Scan on a_val_idx
         (cost=0.00..830.00 rows=3400 width=0)
         (actual time=7.729..7.729 rows=160 loops=1)
         Index Cond: (val = '100'::double precision)
 Planning time: 0.124 ms
 Execution time: 8.580 ms
(8 rows)

explain analyze select * from a where val between 100 and 10000;

                              QUERY PLAN
------------------------------------------------------------------
 Bitmap Heap Scan on a  (cost=831.94..25908.00 rows=7728 width=8)
                        (actual time=9.318..23.715 rows=9695 loops=1)
   Recheck Cond: ((val >= '100'::double precision) AND
                  (val <= '10000'::double precision))
   Rows Removed by Index Recheck: 3361
   Heap Blocks: lossy=64
   ->  Bitmap Index Scan on a_val_idx
         (cost=0.00..830.01 rows=10200 width=0)
         (actual time=9.274..9.274 rows=640 loops=1)
         Index Cond: ((val >= '100'::double precision) AND
                      (val <= '10000'::double precision))
 Planning time: 0.138 ms
 Execution time: 36.100 ms
(8 rows)

That is, the timings drop from 785ms/871ms to 9ms/36ms. The index is a
bit larger (1700kB instead of 150kB), but it's still orders of magnitude
smaller than a btree index (which is ~214MB in this case).

The index build is slower than for regular BRIN indexes (about
comparable to btree), but I'm sure it can be significantly improved.
Also, I'm sure it's not bug-free.
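To make the idea behind the multi-interval summary concrete, here is a
toy sketch in Python (hypothetical names and a greatly simplified merge
step, nothing like the actual C implementation) of how a short list of
intervals absorbs outliers that would ruin a single minmax interval:

```python
# Toy model of a BRIN "multi-minmax" summary. A plain minmax summary
# keeps one [min, max] interval per block range; a multi-minmax summary
# keeps a small list of intervals, so a few outliers do not widen the
# summary to cover the whole domain.

def minmax_summary(values):
    """Classic BRIN minmax: a single [min, max] interval."""
    return [(min(values), max(values))]

def multi_minmax_summary(values, max_intervals=3):
    """Start with one interval per distinct value, then repeatedly merge
    the two closest intervals until we fit the budget."""
    intervals = sorted((v, v) for v in set(values))
    while len(intervals) > max_intervals:
        # distance between adjacent intervals = gap between them
        gaps = [intervals[i + 1][0] - intervals[i][1]
                for i in range(len(intervals) - 1)]
        i = gaps.index(min(gaps))
        intervals[i:i + 2] = [(intervals[i][0], intervals[i + 1][1])]
    return intervals

# One block range: well-correlated values plus two outliers.
values = list(range(5000, 5100)) + [1, 10000000]

print(minmax_summary(values))        # [(1, 10000000)] - matches everything
print(multi_minmax_summary(values))  # [(1, 1), (5000, 5099), (10000000, 10000000)]
```

A query like "val BETWEEN 100 AND 10000" overlaps none of the
multi-minmax intervals except the middle one, so block ranges whose
actual data lies elsewhere can be skipped instead of rechecked.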
Two additional notes:

1) The patch does break the current BRIN interface, because it requires
passing all scan keys to the "consistent" BRIN function at once
(otherwise we couldn't eliminate individual intervals in the summary),
while currently BRIN only deals with one scan key at a time. And I
haven't modified the existing brin_minmax_consistent() function (yeah,
I'm lazy, but this should be enough for interested people to try it out,
I believe).

2) It only works with float8 (and also timestamp data types) for now,
but it should be straightforward to add support for additional data
types. Most of that will be about adding catalog definitions anyway.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list ([hidden email])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Apparently I've managed to botch the git format-patch thing :-( Attached
are both patches (the first one adding BRIN bloom indexes, the other one
adding the BRIN multi-range). Hopefully I got it right this time ;-)

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
attached is a patch series that includes both the BRIN multi-range
minmax indexes discussed in this thread, and the BRIN bloom indexes
initially posted in [1]. It seems easier to just deal with a single
patch series, although we may end up adding just one of the proposed
opclasses.

There are 4 parts:

0001 - Modifies bringetbitmap() to pass all scan keys to the consistent
function at once. This is actually needed by the multi-minmax indexes,
but not really required for the others. I'm wondering if this is a safe
change, considering it affects the BRIN interface - custom BRIN
opclasses (perhaps in extensions) will be broken by it. Maybe we should
extend the BRIN API to support two versions of the consistent function:
one that processes scan keys one by one, and one that processes all of
them at once.

0002 - Adds BRIN bloom indexes, along with opclasses for all built-in
data types (or at least those that also have regular BRIN opclasses).

0003 - Adds BRIN multi-minmax indexes, but only with float8 opclasses
(which also covers timestamp etc.). That should be good enough for now,
but adding support for other data types will require some sort of
"distance" operator, which is needed for merging ranges (to pick the two
"closest" ones). For float8 it's simply a subtraction.

0004 - Moves dealing with IS [NOT] NULL scan keys from the opclasses to
bringetbitmap(). The code was exactly the same in all opclasses, so
moving it to bringetbitmap() seems right. It also allows some nice
optimizations where we can skip the consistent() function entirely,
although maybe that's useless. Also, maybe there are opclasses that
actually need to deal with NULL values in the consistent() function?
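Why 0001 matters for multi-minmax can be sketched like this (a Python
toy with hypothetical names, not the actual C API): with all scan keys
available at once, an interval is eliminated as soon as any key rules it
out; with one key at a time, each key on its own may match *some*
interval, so the block range can never be skipped.

```python
# Summary = list of (lo, hi) intervals for one block range.
# Each scan key is (op, value) with op in ('=', '>=', '<=').

def key_matches(interval, op, value):
    """Could some value inside this interval satisfy the key?"""
    lo, hi = interval
    if op == '=':
        return lo <= value <= hi
    if op == '>=':
        return hi >= value
    if op == '<=':
        return lo <= value
    raise ValueError(op)

def consistent_all_keys(summary, keys):
    """All keys at once: range matches only if at least one interval
    satisfies *all* keys simultaneously."""
    return any(all(key_matches(iv, op, v) for op, v in keys)
               for iv in summary)

def consistent_one_key_at_a_time(summary, keys):
    """What the one-key-at-a-time interface can conclude: the range
    survives if every key matches *some* (possibly different) interval."""
    return all(any(key_matches(iv, op, v) for iv in summary)
               for op, v in keys)

summary = [(1, 1), (5000, 5099), (10000000, 10000000)]
keys = [('>=', 100), ('<=', 4000)]   # WHERE val BETWEEN 100 AND 4000

print(consistent_all_keys(summary, keys))           # False: range skipped
print(consistent_one_key_at_a_time(summary, keys))  # True: false positive
```

The ">= 100" key matches the middle interval and the "<= 4000" key
matches the first one, so key-at-a-time processing cannot skip the
range even though no single interval satisfies both.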
regards

[1] https://www.postgresql.org/message-id/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
Apparently there was some minor breakage due to duplicate OIDs, so here
is the patch series updated to current master.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Sun, Nov 19, 2017 at 5:45 AM, Tomas Vondra
<[hidden email]> wrote:
> Apparently there was some minor breakage due to duplicate OIDs, so here
> is the patch series updated to current master.

Moved to CF 2018-01.
--
Michael
> On Nov 18, 2017, at 12:45 PM, Tomas Vondra <[hidden email]> wrote:
>
> Apparently there was some minor breakage due to duplicate OIDs, so here
> is the patch series updated to current master.
>
> <0001-Pass-all-keys-to-BRIN-consistent-function-at-once.patch.gz>
> <0002-BRIN-bloom-indexes.patch.gz>
> <0003-BRIN-multi-range-minmax-indexes.patch.gz>
> <0004-Move-IS-NOT-NULL-checks-to-bringetbitmap.patch.gz>

After applying these four patches to my copy of master, the regression
tests fail for F_SATISFIES_HASH_PARTITION 5028 as attached.

mark
On 12/19/2017 08:38 PM, Mark Dilger wrote:
> After applying these four patches to my copy of master, the regression
> tests fail for F_SATISFIES_HASH_PARTITION 5028 as attached.

D'oh! There was an incorrect OID referenced in pg_opclass, which was
also used by the satisfies_hash_partition() function. Fixed patches
attached.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
> On Dec 19, 2017, at 5:16 PM, Tomas Vondra <[hidden email]> wrote:
>
> D'oh! There was an incorrect OID referenced in pg_opclass, which was
> also used by the satisfies_hash_partition() function. Fixed patches
> attached.

Thanks! These fix the regression test failures. On my mac, all tests are
now passing. I have not yet looked any further into the merits of these
patches, however.

mark
> On Dec 19, 2017, at 5:16 PM, Tomas Vondra <[hidden email]> wrote:
>
> D'oh! There was an incorrect OID referenced in pg_opclass, which was
> also used by the satisfies_hash_partition() function. Fixed patches
> attached.

Your use of type ScanKey in src/backend/access/brin/brin.c is a bit
confusing. A ScanKey is defined elsewhere as a pointer to ScanKeyData.
When you define an array of ScanKeys, you use pointer-to-pointer style:

+       ScanKey   **keys,
+                 **nullkeys;

But when you allocate space for the array, you don't treat it that way:

+       keys = palloc0(sizeof(ScanKey) * bdesc->bd_tupdesc->natts);
+       nullkeys = palloc0(sizeof(ScanKey) * bdesc->bd_tupdesc->natts);

But then again when you use nullkeys, you treat it as a two-dimensional
array:

+       nullkeys[keyattno - 1][nnullkeys[keyattno - 1]] = key;

and likewise when you allocate space within keys:

+       keys[keyattno - 1] = palloc0(sizeof(ScanKey) * scan->numberOfKeys);

Could you please clarify? I have been awake a bit too long; hopefully, I
am not merely missing the obvious.

mark
On 12/20/2017 03:37 AM, Mark Dilger wrote:
> Your use of type ScanKey in src/backend/access/brin/brin.c is a bit
> confusing. A ScanKey is defined elsewhere as a pointer to ScanKeyData.
> When you define an array of ScanKeys, you use pointer-to-pointer style,
> but when you allocate space for the array, you don't treat it that way:
>
> +       keys = palloc0(sizeof(ScanKey) * bdesc->bd_tupdesc->natts);
> +       nullkeys = palloc0(sizeof(ScanKey) * bdesc->bd_tupdesc->natts);
>
> Could you please clarify? I have been awake a bit too long; hopefully,
> I am not merely missing the obvious.

Yeah, those palloc0() calls should probably say sizeof(ScanKey *). It's
harmless, though, because ScanKey itself is a pointer, so the size is
the same.
Attached is an updated version of the patch series, significantly
reworking and improving the multi-minmax part (the rest of the patch is
mostly as it was before).

I've significantly refactored and cleaned up the multi-minmax part, and
I'm much happier with it - no doubt there's room for further
improvement, but overall it's much better. I've also added proper sgml
docs for this part, and support for more data types, including
variable-length ones (all integer types, numeric, float-based types
including timestamps, uuid, and a couple of others).

At the API level, I needed to add one extra support procedure that
measures the distance between two values of the data type. This is
needed because we only keep a limited number of intervals for each
range, and once in a while we need to decide which of them to "merge"
(and we simply merge the closest ones).

I've put the indexes through significant testing and fixed a couple of
silly bugs / memory leaks. Let's see if there are more.

Performance-wise, CREATE INDEX seems a bit slow - it's about an order of
magnitude slower than regular BRIN. Some of that is expected (we simply
need to do more stuff to maintain multiple ranges), but perhaps there's
room for additional improvements - that's something I'd like to work on
next.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
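The per-type "distance" support procedure described above only has to
produce values whose ordering is meaningful, so that the build can merge
the two closest intervals. A hedged Python sketch of what such
procedures might look like for a few types (hypothetical function names;
the patch implements these in C, and float8 distance really is just a
subtraction):

```python
from datetime import datetime
from uuid import UUID

def distance_float8(a, b):
    # plain subtraction, as described for the float8 opclasses
    return b - a

def distance_timestamp(a, b):
    # seconds between two timestamps; any monotonic measure would do
    return (b - a).total_seconds()

def distance_uuid(a, b):
    # one possible choice: treat the 128-bit value as an integer
    return b.int - a.int

print(distance_float8(1.5, 4.0))                # 2.5
print(distance_timestamp(datetime(2018, 1, 1),
                         datetime(2018, 1, 2))) # 86400.0
print(distance_uuid(UUID(int=3), UUID(int=10))) # 7
```

This also shows why types like text or bytea are awkward: there is no
obvious numeric "gap" between two strings, which is exactly the open
question raised later in the thread for those opclasses.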
This stuff sounds pretty nice. However, have a look at this report:
https://codecov.io/gh/postgresql-cfbot/postgresql/commit/2aa632dae3066900e15d2d42a4aad811dec11f08

it seems to me that the new code is not tested at all. Shouldn't you add
a few more tests?

I think 0004 should apply to unpatched master (except for the parts that
concern files not in master); sounds like a good candidate for first
apply. Then 0001, which seems mostly just refactoring. 0002 and 0003 are
the really interesting ones (minus the code removed by 0004).

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 01/23/2018 09:05 PM, Alvaro Herrera wrote:
> This stuff sounds pretty nice. However, have a look at this report:
>
> https://codecov.io/gh/postgresql-cfbot/postgresql/commit/2aa632dae3066900e15d2d42a4aad811dec11f08
>
> it seems to me that the new code is not tested at all. Shouldn't you
> add a few more tests?

I have a hard time reading the report, but you're right I haven't added
any tests for the new opclasses (bloom and minmax_multi). I agree that's
something that needs to be addressed.

> I think 0004 should apply to unpatched master (except for the parts
> that concern files not in master); sounds like a good candidate for
> first apply. Then 0001, which seems mostly just refactoring. 0002 and
> 0003 are the really interesting ones (minus the code removed by 0004).

That sounds like a reasonable plan. I'll reorder the patch series along
those lines in the next few days.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 01/23/2018 10:07 PM, Tomas Vondra wrote:
> That sounds like a reasonable plan. I'll reorder the patch series along
> those lines in the next few days.

Attached is an updated patch series that moves the IS [NOT] NULL tweak
to the beginning of the series, and also adds proper regression tests
both for the bloom and multi-minmax opclasses. I've simply copied the
brin.sql tests and tweaked them for the new opclasses.

I've also added a bunch of missing multi-minmax opclasses. At this point
all data types that have a minmax opclass should also have a
multi-minmax one, except for these types:

* bytea
* char
* name
* text
* bpchar
* bit
* varbit

The reason is that I'm not quite sure how to define the 'distance'
function, which is needed when picking ranges to merge while
building/updating the index.

BTW while working on the regression tests, I've noticed that brin.sql
fails to test a couple of minmax opclasses (e.g. abstime/reltime). Is
that intentional or is that something we should fix eventually?

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
> BTW while working on the regression tests, I've noticed that brin.sql
> fails to test a couple of minmax opclasses (e.g. abstime/reltime). Is
> that intentional or is that something we should fix eventually?

I believe abstime/reltime are deprecated. Perhaps nobody wanted to
bother adding test coverage for deprecated classes? There was another
thread that discussed removing these types. The consensus seemed to be
in favor of removing them, though I have not seen a patch for that yet.

mark
On 02/05/2018 09:27 PM, Mark Dilger wrote:
>> BTW while working on the regression tests, I've noticed that brin.sql
>> fails to test a couple of minmax opclasses (e.g. abstime/reltime). Is
>> that intentional or is that something we should fix eventually?
>
> I believe abstime/reltime are deprecated. Perhaps nobody wanted to
> bother adding test coverage for deprecated classes? There was another
> thread that discussed removing these types. The consensus seemed to
> be in favor of removing them, though I have not seen a patch for that
> yet.

Yeah, that's what I've been wondering about too. There's also this
comment in nabstime.h:

/*
 * Although time_t generally is a long int on 64 bit systems, these two
 * types must be 4 bytes, because that's what pg_type.h assumes. They
 * should be yanked (long) before 2038 and be replaced by timestamp and
 * interval.
 */

But then why add BRIN opclasses at all? And if adding them, why not test
them? We all know how long deprecation takes, particularly for data
types.

For me the question is whether to bother with adding the multi-minmax
opclasses, of course.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas Vondra <[hidden email]> writes:
> Yeah, that's what I've been wondering about too. There's also this
> comment in nabstime.h:
>
> /*
>  * Although time_t generally is a long int on 64 bit systems, these two
>  * types must be 4 bytes, because that's what pg_type.h assumes. They
>  * should be yanked (long) before 2038 and be replaced by timestamp and
>  * interval.
>  */
>
> But then why add BRIN opclasses at all? And if adding them, why not
> test them? We all know how long deprecation takes, particularly for
> data types.

There was some pretty recent chatter about removing these types; IIRC
Andres was annoyed about their lack of overflow checks.

I would definitely vote against adding any BRIN support for these types,
or indeed doing any work on them at all other than removal.

			regards, tom lane
On 02/06/2018 12:40 AM, Tom Lane wrote:
> Tomas Vondra <[hidden email]> writes:
>> But then why add BRIN opclasses at all? And if adding them, why not
>> test them? We all know how long deprecation takes, particularly for
>> data types.
>
> There was some pretty recent chatter about removing these types;
> IIRC Andres was annoyed about their lack of overflow checks.
>
> I would definitely vote against adding any BRIN support for these
> types, or indeed doing any work on them at all other than removal.

Works for me. Ripping the two opclasses out of the patch is trivial.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
Attached is an updated patch series, fixing the duplicate OIDs and
removing the opclasses for the reltime/abstime data types, as discussed.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
On 2018-02-25 01:30:47 +0100, Tomas Vondra wrote:
> Note: Currently, this only works with float8-based data types.
> Supporting additional data types is not a big issue, but will
> require extending the opclass with "subtract" operator (used to
> compute distance between values when merging ranges).

Based on Tom's past stances I'm a bit doubtful he'd be happy with such a
restriction. Note that something similar-ish also has come up in
0a459cec96. I kinda wonder if there's any way to not have two similar
but not equal types of logic here? Quoting that commit message:

    That problem is resolved here by adding the ability for btree
    operator classes to provide an "in_range" support function that
    defines how to add or subtract the RANGE offset value. Factoring it
    this way also allows the operator class to avoid overflow problems
    near the ends of the datatype's range, if it wishes to expend effort
    on that. (In the committed patch, the integer opclasses handle that
    issue, but it did not seem worth the trouble to avoid overflow
    failures for datetime types.)

- Andres
Andres Freund <[hidden email]> writes:
> On 2018-02-25 01:30:47 +0100, Tomas Vondra wrote:
>> Note: Currently, this only works with float8-based data types.
>> Supporting additional data types is not a big issue, but will
>> require extending the opclass with "subtract" operator (used to
>> compute distance between values when merging ranges).

> Based on Tom's past stances I'm a bit doubtful he'd be happy with such
> a restriction. Note that something similar-ish also has come up in
> 0a459cec96.

> I kinda wonder if there's any way to not have two similar but not
> equal types of logic here?

Hm. I wonder what the patch intends to do with subtraction overflow, or
infinities, or NaNs. Just as with the RANGE patch, it does not seem to
me that failure is really an acceptable option. Indexes are supposed to
be able to index whatever the column datatype can store.

			regards, tom lane
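Tom's concern is easy to reproduce: with IEEE doubles, a plain
subtraction-based "distance" is not well behaved at the edges of the
domain. A quick Python illustration (the same arithmetic a float8
subtraction in C would perform):

```python
import math

inf, nan = math.inf, math.nan

# Subtraction-based distances misbehave for the extreme values a float8
# column can legitimately store:
print(inf - 1.0)   # inf: every gap involving +inf compares as equally huge
print(inf - inf)   # nan: no meaningful distance at all
print(nan - 0.0)   # nan: a single NaN value poisons every comparison
print(nan < 1.0, nan > 1.0, nan == nan)   # False False False

# Fixed-width integer types have the analogous problem in C:
# INT64_MAX - INT64_MIN overflows, so an int8 distance function cannot
# be a plain subtraction either.
```

Since NaN compares as neither smaller nor larger than anything, a merge
step that picks the "smallest" distance can silently make arbitrary
choices (or fail outright) unless the distance function handles these
cases explicitly.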