Can I add Index to make a query faster which involves joins on unnest ?

Arup Rakshit-4
The query below is meant to return the result in the order the sizes appear in the list.

explain analyze
select
        "price_levels"."name",
        "price_levels"."size"
from
        "price_levels"
join unnest(array['M', 'L', 'XL', '2XL', '3XL', '4XL', '5XL', '6XL', 'S'])
        with ordinality t(size, ord)
        using (size)
order by
        t.size;

I have a B-tree index on the size column.

Explain output is:

Merge Join  (cost=4.61..5165.38 rows=60000 width=46) (actual time=0.157..57.872 rows=60000 loops=1)
  Merge Cond: ((price_levels.size)::text = t.size)
  ->  Index Scan using price_levels_size_idx on price_levels  (cost=0.29..4111.05 rows=60000 width=14) (actual time=0.044..25.941 rows=60000 loops=1)
  ->  Sort  (cost=4.32..4.57 rows=100 width=32) (actual time=0.108..3.946 rows=53289 loops=1)
        Sort Key: t.size
        Sort Method: quicksort  Memory: 25kB
        ->  Function Scan on unnest t  (cost=0.00..1.00 rows=100 width=32) (actual time=0.030..0.033 rows=9 loops=1)
Planning time: 0.667 ms
Execution time: 62.846 ms



Thanks,

Arup Rakshit



Re: Can I add Index to make a query faster which involves joins on unnest ?

lup


On Sep 13, 2018, at 12:17 PM, Arup Rakshit <[hidden email]> wrote:



There are not enough distinct values of size for it to be a worthwhile index key.
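To illustrate the point: with only 9 distinct sizes spread over 60,000 rows, each size matches thousands of rows, so a plain b-tree on size adds very little selectivity. A quick way to check the column's cardinality (a sketch against the table from the thread):

```sql
-- Count rows per size; a handful of very large groups means
-- an index on size alone narrows the search very little.
select size, count(*)
from price_levels
group by size
order by count(*) desc;
```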



Re: Can I add Index to make a query faster which involves joins on unnest ?

Arup Rakshit-4
Hello Rob,


So how do I improve this query's speed?


Thanks,

Arup Rakshit



On 14-Sep-2018, at 12:27 AM, Rob Sargent <[hidden email]> wrote:



There are not enough distinct values of size for it to be a worthwhile index key.
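One approach that may help here (a sketch, not tested against this schema): the query returns every row and the plan already walks `price_levels_size_idx`, so much of the cost is the heap fetches for `name`. A covering index can sometimes turn that scan into an index-only scan. On PostgreSQL 11 or later that could look like:

```sql
-- Covering index: size drives the merge-join ordering, and name is
-- carried in the index so the query can (visibility map permitting)
-- be answered from the index alone, skipping the heap fetches.
create index price_levels_size_name_idx
        on price_levels (size) include (name);

-- Keep the visibility map current so index-only scans actually apply.
vacuum analyze price_levels;
```

On versions before 11, a plain two-column index on `(size, name)` gives a similar effect, at the cost of a slightly wider index key.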