Importing a dmp with partitioned tables into Oracle XE

1. Import the schema using the normal command and verify the logs (Oracle XE lacks the Partitioning option, so the partitioned tables will fail to create)

impdp userid=scott56 schemas=scott56 directory=DB_DUMP dumpfile=scott56_dump_backup.dmp

2. Generate a SQL file using the SQLFILE option of impdp

impdp userid=scott56 directory=DB_DUMP dumpfile=scott56_dump_backup.dmp SQLFILE=scott56.sql

3. Find the partitioned tables in the generated SQL file
Convert their DDL to non-partitioned form and create those tables in the schema, as sketched below
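
For example, a minimal sketch of the conversion (the TBLMCONCURRENTUSERS columns and partition bound are hypothetical):

-- DDL as it appears in the generated scott56.sql:
CREATE TABLE TBLMCONCURRENTUSERS (
  USER_ID    NUMBER,
  LOGIN_TIME DATE
)
PARTITION BY RANGE (LOGIN_TIME)
(
  PARTITION P2013_7 VALUES LESS THAN (TO_DATE('01-08-2013','DD-MM-YYYY'))
);

-- Same table recreated without the PARTITION BY clause for XE:
CREATE TABLE TBLMCONCURRENTUSERS (
  USER_ID    NUMBER,
  LOGIN_TIME DATE
);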

4. Import the data using the CONTENT=DATA_ONLY option with impdp

impdp userid=scott56 TABLES=TBLMCONCURRENTUSERS  CONTENT=DATA_ONLY directory=DB_DUMP dumpfile=scott56_dump_backup.dmp

Note:
From Oracle 11g onwards, Data Pump can merge all partitions of a table into a single non-partitioned table during import. Refer to the PARTITION_OPTIONS parameter of the impdp utility.
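
For example, on 11g the manual convert-and-reload steps above collapse into a single command (a sketch reusing the same dump file):

impdp userid=scott56 schemas=scott56 directory=DB_DUMP dumpfile=scott56_dump_backup.dmp partition_options=merge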

http://stackoverflow.com/questions/2555942/importing-a-dmp-with-partitioned-tables-into-oracle-xe

How is expdp faster than exp? What did Oracle change internally to speed up Data Pump?

Data Pump works in block mode while exp works in byte mode, and block mode is generally faster than byte mode.

ORACLE Export (exp) vs Datapump (expdp)

Data Pump uses the Direct Path data access method, which permits the server to bypass SQL processing and read data blocks directly from disk. This access method has been rewritten to be much more efficient and now supports both Data Pump Export and Import.

http://dbaranjit.blogspot.in/2013/08/how-expdp-is-faster-than-exp-what.html

Refresh the tables data from one schema to another schema

/opt/u02/app/oracle/product/11.2.0/db_1/bin/impdp $user_name/$pwd@$netstr remap_schema=SCOTT:SCOTTMH directory=DUMP_DIR TABLES=SCOTT.TBLSCOTTCUSTOMER,SCOTT.TBLMSESSIONUSAGESUMMARY,SCOTT.TBLSQUOTATYPE,SCOTT.TBLMQUOTAPROFILE,SCOTT.TBLSCOMMONSTATUS,
SCOTT.TBLMEMAILTMPLT,SCOTT.TBLMSMSTMPLT,SCOTT.TBLMSTAFF,SCOTT.TBLMADDON,SCOTT.TBLMDEFAUTHRULE,SCOTT.TBLMADDONDEFAULTAUTHRULEREL,
SCOTT.TBLMADDONSUBSCRIBERREL,SCOTT.TBLMSERVICETYPE,SCOTT.TBLSUSAGEMETERINGTYPE,SCOTT.TBLMPCCRULE,SCOTT.TBLMDEFBEARERPCCRULEREL,
SCOTT.TBLMSESSIONUSAGECDR,SCOTT.TBLTELPROVHISTORY,SCOTT.TBLSTELCORDIAERROR,SCOTT.TBLCIRCLEMRPMAPPING,SCOTT.TBLCDR network_link=PRD_NVTXDBMH table_exists_action=replace
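
The network_link=PRD_NVTXDBMH parameter assumes a database link with that name already exists in the importing schema; a minimal sketch of creating one (the credentials and connect string are placeholders):

CREATE DATABASE LINK PRD_NVTXDBMH CONNECT TO scott IDENTIFIED BY password USING 'PRD_NVTXDB';

With network_link, the rows are pulled straight over the link, which is why no dumpfile parameter appears in the command above.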

Extract Database DDL

1 – Workaround Import show=Y & indexfile=index.sql

imp show=Y

Importing with show=Y will list the contents of the file being imported, without actually performing any changes on the database.

imp userid=/ file=rdbm.dmp full=y show=y


imp indexfile=index.sql

Indexfile: This is another option of the import (imp) utility, used to generate the index DDL script from an export dump. The INDEXFILE option spools the CREATE INDEX statements (including storage parameters) to the file name you specify. It does not generate DDL for primary key indexes.
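
For example, mirroring the show=y command above (a sketch using the same example dump file):

imp userid=/ file=rdbm.dmp full=y indexfile=index.sql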

2 – Workaround impdp(sqlfile=ddl.sql)

Question: In the old import utility we were able to use show=y to see the DDL contents of an export dump file. What is the equivalent in Data Pump?

Answer: Data Pump import (impdp) brought several improvements:

The dump file metadata is now stored as XML, so the contents of a dump file can be browsed.
The show=y option has been replaced by the SQLFILE argument.

$ impdp directory=expdir dumpfile=myexp.dmp sqlfile=ddl.sql

SQLFILE:
This parameter enables you to extract the DDL from an export dump file without importing any data. SQLFILE includes DDL for tablespaces, users, role grants, packages, procedures, functions, tables, indexes, primary and foreign keys, etc.
The INCLUDE parameter lets you target the DDL you are interested in; for example, specifying INCLUDE=USER will give you only the CREATE USER statements, as shown below.
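
A sketch combining the two parameters (directory and dump file names follow the earlier example):

impdp directory=expdir dumpfile=myexp.dmp sqlfile=ddl.sql include=USER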

http://snippetserver.com/?mode=article_detail&article=445
http://www.dba-oracle.com/t_data_pump_sqlfile_parameter.htm
http://chandu208.blogspot.in/2012/02/datapump-sqlfile-parameter-during.html

3 – Workaround Database API

DBMS_METADATA can be used to extract DDL definitions from a database:

GENERAL USAGE
set long 300000 pages 0 lines 140
SELECT DBMS_METADATA.GET_DDL('OBJECT_TYPE','OBJECT_NAME','OWNER') FROM DUAL;

select dbms_metadata.get_ddl('TABLE','EMP') from dual;
select dbms_metadata.get_dependent_ddl('INDEX','EMP') from dual;
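
The output can be tuned with DBMS_METADATA.SET_TRANSFORM_PARAM; a sketch using two documented transform parameters (append a SQL terminator, drop STORAGE clauses):

BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
END;
/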

https://sites.google.com/site/padipaco/generating-create-scripts-through-dbms_metadata-package

http://oracleandunix.blogspot.in/2009/07/dbmsmetadatagetddl.html

4 – Workaround Free Tool
SQL Developer
Export Oracle DDL to a flat file using SQL Developer

We can export DDL and data from Oracle to a flat file using SQL Developer.
I hope whoever uses Oracle has SQL Developer, since it is free 🙂

Process
1) Select Tools in the menu bar and then select Database Export.
2) Select any file/location; the default name is export.sql.
3) Select the connection.
4) Select the DDL options (the defaults are Terminator and Pretty Print).
5) Click Next.
6) Select the object types. If you don't need data, uncheck Data.
7) Click Next.
8) Select Go.
9) Select the tables, procedures, functions, or anything you want (or select all), then click >>.
10) Click Next.
11) Review the export summary.
12) Click Finish.

You now have a backup of your DDL scripts.
http://perumal.org/sql-developer-database-export-wizard-to-export-ddl-and-data-as-dml/

Export specific table partitions using expdp

Production Server
=================
expdp hr/hr@orcl tables=TEST1:P2013_7,TEST2:P2013_7 directory=DB_DUMP dumpfile=exp_testcdr_P2013_7.dmp

-bash-3.2$ expdp hr/hr@orcl tables=TEST1:P2013_7,TEST2:P2013_7 directory=DB_DUMP dumpfile=exp_testcdr_P2013_7.dmp

Export: Release 11.2.0.3.0 - Production on Thu Aug 22 11:09:13 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "hr"."SYS_EXPORT_TABLE_01": hr/********@orcl tables=TEST1:P2013_7,TEST2:P2013_7 directory=DB_DUMP dumpfile=exp_testcdr_P2013_7.dmp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "hr"."TEST1":"P2013_7" 142.5 MB 370614 rows
. . exported "hr"."TEST2":"P2013_7" 116.6 KB 159 rows
Master table "hr"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for hr.SYS_EXPORT_TABLE_01 is:
/backup/datapump/exp_testcdr_P2013_7.dmp
Job "hr"."SYS_EXPORT_TABLE_01" successfully completed at 11:09:25

Testbed-QA Server
==================
First of all, create the tables TEST1 and TEST2 in the target schema (see the sketch below).
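
One way to get their DDL is the SQLFILE workaround described earlier (a sketch; create_tests.sql is a hypothetical file name, and the generated script should be reviewed before running):

impdp scott/scott directory=DB_DUMP dumpfile=exp_testcdr_P2013_7.dmp sqlfile=create_tests.sql remap_schema=hr:scott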

[oracle@QA dpdump]$ impdp scott/scott directory=DB_DUMP dumpfile=exp_testcdr_P2013_7.dmp remap_schema=hr:scott table_exists_action=append

Import: Release 11.2.0.1.0 - Production on Thu Aug 22 13:36:38 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "scott"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "scott"."SYS_IMPORT_FULL_01": scott/******** directory=DB_DUMP_scott dumpfile=exp_testcdr_P2013_7.dmp remap_schema=hr:scott table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "scott"."TEST1" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
ORA-39152: Table "scott"."TEST2" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "scott"."TEST1":"P2013_7" 142.5 MB 370614 rows
. . imported "scott"."TEST2":"P2013_7" 116.6 KB 159 rows
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "scott"."SYS_IMPORT_FULL_01" completed with 2 error(s) at 13:37:01

How to take a logical backup of the database?

Simple export and import

EXP/IMP :==> exp help=y, imp help=y

Data Pump export and import

EXPDP/IMPDP :==> expdp help=y, impdp help=y

Step-1 Create Logical Directory

CREATE OR REPLACE DIRECTORY DB_DUMP AS '/u01/app/oracle/backup/';
GRANT READ, WRITE ON DIRECTORY DB_DUMP TO username;

Note: You can also use an existing directory if one has already been created:
SELECT * FROM DBA_DIRECTORIES;

SCHEMA BACKUP

Syntax of Data Pump

expdp username/password@ora_net_string1 schemas=username directory=DB_DUMP dumpfile=exp_username_datetime.dmp logfile=exp_username_datetime.log

impdp username/password@ora_net_string2 schemas=username directory=DB_DUMP dumpfile=exp_username_datetime.dmp 

Example-1: When the source and destination databases use the same username

expdp scott/scott@orcl1 schemas=scott directory=DB_DUMP dumpfile=exp_scott_20032012.dmp logfile=log_scott_20032012.log

impdp scott/scott@orcl2 schemas=scott directory=DB_DUMP dumpfile=exp_scott_20032012.dmp

Example-2: When the source and destination databases do not use the same username

expdp scott1/scott1@orcl1 schemas=scott directory=DB_DUMP dumpfile=exp_scott_20032012.dmp logfile=log_scott_20032012.log

impdp scott2/scott2@orcl2 remap_schema=scott:scott2 directory=DB_DUMP dumpfile=exp_scott_20032012.dmp

TABLE BACKUP

Syntax of Data Pump

expdp username/password@ora_net_string tables=table_name directory=DB_DUMP dumpfile=exp_username_datetime.dmp 
logfile=exp_username_datetime.log

impdp username/password@ora_net_string tables=table_name directory=DB_DUMP dumpfile=exp_username_datetime.dmp 

Example:

expdp scott/scott@orcl tables=employees directory=DB_DUMP dumpfile=exp_scott_20032012.dmp    logfile=log_scott_20032012.log

impdp scott/scott@orcl tables=employees directory=DB_DUMP dumpfile=exp_scott_20032012.dmp

More Examples

How to Backup the Remote Database Using Data Pump

Oracle 10g

Add the remote database entry inside the local tnsnames.ora

Check the remote database using tnsping remote_orcl

Both the local and remote users require the EXP_FULL_DATABASE role, as granted below.
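
A minimal sketch (run as a DBA on each database; localuser is the example user from the command below):

GRANT EXP_FULL_DATABASE TO localuser;  -- on the local database
GRANT EXP_FULL_DATABASE TO scott;      -- on the remote database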

Create the database link as the local user:
CREATE DATABASE LINK remote_orcl CONNECT TO scott IDENTIFIED BY tiger USING 'remote_orcl';

$ expdp userid=localuser/localuser@orcl tables=scott.EMP network_link=remote_orcl directory=DB_DUMP dumpfile=exp_emp_080512.dmp logfile=log_emp_080512.log

Export: Release 10.2.0.1.0 - Production on Tuesday, 08 May, 2012 18:40:44

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "localuser"."SYS_EXPORT_TABLE_01": userid=aaastats/********@orcl tables=scott.emp network_link=remote_orcl directory=DB_DUMP dumpfile=exp_emp_080512.dmp logfile=log_emp_080512.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12.34 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EMP" 6.671 GB 41239928 rows
Master table "localuser"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for localuser.SYS_EXPORT_TABLE_01 is:
/home/oracle/exp_emp_080512.dmp
Job "localuser"."SYS_EXPORT_TABLE_01" successfully completed at 18:50:46