migration of 100+ tables

Julie Nishimura
Hello friends, I will need to migrate 500+ tables from one server (8.3) to another (9.3). I cannot dump and load the entire database due to storage limitations (the source is > 20 TB, while the target is only about 1.5 TB).

I was thinking about using pg_dump with a customized -t flag, then using pg_restore. The table names would come from a list, or I could dump their names into a table. What would you suggest to do this more efficiently?
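For illustration, a rough sketch of that approach (the file and database names here are placeholders, not from the original post):

  # tables.txt holds one schema-qualified table name per line;
  # build one -t flag per table (names must not contain spaces).
  TABLE_FLAGS=$(sed 's/^/-t /' tables.txt | tr '\n' ' ')

  # Dump just those tables in custom format, then restore them.
  pg_dump -Fc $TABLE_FLAGS -f tables.dump source_db
  pg_restore -d target_db tables.dump

  # If the names were dumped into a table instead, psql could
  # generate the file (table and column names are hypothetical):
  # psql -Atc "SELECT tablename FROM migration_list" source_db > tables.txt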

Thank you for your ideas, this is great to have you around, guys!


Re: migration of 100+ tables

Adrian Klaver
On 3/10/19 5:53 PM, Julie Nishimura wrote:
> Hello friends, I will need to migrate 500+ tables from one server
> (8.3) to another (9.3). I cannot dump and load the entire database
> due to storage limitations (the source is > 20 TB, while the target
> is only about 1.5 TB).
>
> I was thinking about using pg_dump with a customized -t flag, then
> using pg_restore. The table names would come from a list, or I could
> dump their names into a table. What would you suggest to do this
> more efficiently?

Are the sizes you mention above for the uncompressed raw data?

Are the tables all in one schema or multiple?

Where I am going with this is pg_dump -Fc --schema.

See:
https://www.postgresql.org/docs/10/app-pgdump.html

Then pg_restore -l to get a TOC (table of contents).

Comment out the items you do not want in the TOC.

Then pg_restore --use-list with the edited TOC.

See:

https://www.postgresql.org/docs/10/app-pgrestore.html
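Put together, the sequence might look like this (the schema, database, and file names are placeholders):

  # 1) Dump one schema in custom format.
  pg_dump -Fc --schema=myschema -f myschema.dump source_db

  # 2) Write the archive's table of contents to a file.
  pg_restore -l myschema.dump > myschema.toc

  # 3) Edit myschema.toc, putting a ';' at the start of each line
  #    you want to skip.

  # 4) Restore only the entries left active in the edited TOC.
  pg_restore --use-list=myschema.toc -d target_db myschema.dump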



--
Adrian Klaver
[hidden email]

Re: migration of 100+ tables

Julie Nishimura
Oh, this is great news! Yay, thanks!



Re: migration of 100+ tables

Adrian Klaver
On 3/10/19 9:07 PM, Julie Nishimura wrote:
> Oh, this is great news! Yay, thanks!

Just be aware of this note in the pg_dump docs (where -n = --schema):

https://www.postgresql.org/docs/10/app-pgdump.html

"
Note

When -n is specified, pg_dump makes no attempt to dump any other
database objects that the selected schema(s) might depend upon.
Therefore, there is no guarantee that the results of a specific-schema
dump can be successfully restored by themselves into a clean database.
"

So any relationships across schemas will not be preserved.
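One way to look for such cross-schema dependencies up front is a catalog query along these lines (a sketch only; it covers just foreign keys, so views, functions, and sequence ownership that cross schemas would need separate checks):

  # Check the source database for foreign keys whose referenced
  # table lives in a different schema than the constrained table.
  psql -d source_db -c "
    SELECT c.conname,
           n1.nspname AS table_schema, t1.relname AS table_name,
           n2.nspname AS ref_schema,   t2.relname AS ref_table
    FROM pg_constraint c
    JOIN pg_class t1     ON t1.oid = c.conrelid
    JOIN pg_namespace n1 ON n1.oid = t1.relnamespace
    JOIN pg_class t2     ON t2.oid = c.confrelid
    JOIN pg_namespace n2 ON n2.oid = t2.relnamespace
    WHERE c.contype = 'f'
      AND n1.nspname <> n2.nspname;"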



--
Adrian Klaver
[hidden email]