If you have a large Oracle database with a tablespace holding, say, 2 gigabytes worth of data, and you go in and delete a large number of rows from a large number of tables, shrinking it down to about 300 megabytes worth of data, and then create an Oracle export using exp, you might expect you could import that dump file into another database and have it take up 300 MB.
You’d be wrong.
The dump file ends up with all of the create table and create index commands using an INITIAL extent storage setting based on the size of the old table at its fullest. (Deleting rows doesn't give allocated extents back, and by default exp rolls everything the table ever allocated into that one INITIAL setting.) So when you run the import of the dump file, it eats up 2 gigabytes of tablespace for 300 MB of data. You can't edit the INITIAL values in the dump file, since it's binary, and editing it corrupts it. Oracle doesn't seem to have any great way to fix this, so here's my hack:
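You can actually see the problem: the dump is binary, but the DDL statements embedded in it are plain text. Here's a quick read-only peek, assuming exp's default expdat.dmp filename:

```sh
# Pull the readable DDL out of the binary dump -- look, don't edit.
# Expect lines like STORAGE(INITIAL 2147483648 ...), i.e. each table's
# old peak size, not its current 300 MB footprint.
strings expdat.dmp | grep 'STORAGE(INITIAL'
```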
- Do the full export, with compress=n (this is useful regardless). Commands for all of these steps are sketched after the list.
- Generate a create tables script (I used my SQLDeveloper GUI client) that just creates the tables (no INITIAL settings).
- Generate a create constraints script (I used my SQLDeveloper GUI client) that just creates the constraints/indexes.
- Run the create tables script on the new database.
- Run the import with these options: ignore=yes constraints=no indexes=no
- Run the create constraints script.
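Put together, the whole thing looks roughly like this. It's a sketch, assuming the classic exp/imp tools (not Data Pump), placeholder credentials, and script names matching whatever your GUI client spat out:

```sh
# 1. Full export from the old database; compress=n keeps exp from rolling
#    every allocated extent into one giant INITIAL extent.
exp scott/tiger@olddb full=y compress=n file=expdat.dmp

# 2/3. Generate the create-tables and create-constraints scripts with your
#      GUI client (SQLDeveloper in my case) -- no STORAGE/INITIAL clauses.

# 4. Create the empty tables in the new database first.
sqlplus scott/tiger@newdb @create_tables.sql

# 5. Import data only: ignore=y skips the "table already exists" errors,
#    and constraints/indexes are held back for the script in step 6.
imp scott/tiger@newdb full=y file=expdat.dmp ignore=y constraints=n indexes=n

# 6. Put the constraints and indexes back.
sqlplus scott/tiger@newdb @create_constraints.sql
```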
Now you have a 300 MB database. If you export from this, you end up with a dump file that will create other 300 MB databases, and you can share it with your friends.
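That shareable export is just the same exp command pointed at the new database (placeholder credentials and filename again):

```sh
# Re-export from the compact database; the DDL in this dump now carries
# small extent sizes, so imports of it stay around 300 MB.
exp scott/tiger@newdb full=y compress=n file=small.dmp
```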
Good luck!
P.S. Oracle DBAs might have a better way of doing this. I don’t know.