Datapump Import failed with ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [TABLE_STATISTICS]

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [TABLE_STATISTICS]
ORA-06502: PL/SQL: numeric or value error
LPX-00225: end-element tag "HIST_GRAM_LIST_ITEM" does not match start-element tag "EPVALUE"

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 9001

----- PL/SQL Call Stack -----
object line object
handle number name
0x34e4888f8 20462 package body SYS.KUPW$WORKER
0x34e4888f8 9028 package body SYS.KUPW$WORKER
0x34e4888f8 16665 package body SYS.KUPW$WORKER
0x34e4888f8 3956 package body SYS.KUPW$WORKER
0x34e4888f8 9725 package body SYS.KUPW$WORKER
0x34e4888f8 1775 package body SYS.KUPW$WORKER
0x34c75ef78 2 anonymous block

Solution:

Add EXCLUDE=STATISTICS to the impdp command so the problematic statistics metadata is skipped, then regather optimizer statistics on the target schema after the import completes.
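A minimal sketch of the workaround, reusing the directory, dump file and schema names from the parfile further below; the DBMS_STATS call is just one way to regather statistics afterwards:

-bash-4.1$ impdp userid=system/manager directory=DB_DIR1 dumpfile=RMschema.dmp remap_schema=RM:RMUAT EXCLUDE=STATISTICS logfile=import_RMUAT_excl_stats.log

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'RMUAT');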

http://javeedkaleem.blogspot.in/2010/03/install-database-components.html
http://dbinourhand.blogspot.in/2013/09/remap-table-option-in-datapump.html
http://www.moreajays.com/2013/03/ora-39126-worker-unexpected-fatal-error.html

Source (Oracle ASM) to Target (Oracle ASM) – Import All Tablespaces into a Single Tablespace

The impdp parameter file below remaps the schema RM to RMUAT and maps every source tablespace to the single target tablespace RMUAT:

directory=DB_DIR1
REMAP_SCHEMA=RM:RMUAT
dumpfile=RMschema.dmp
logfile=import_RMUAT_node2.log
REMAP_TABLESPACE=IDX_NETVERTEX:RMUAT
REMAP_TABLESPACE=QM:RMUAT
REMAP_TABLESPACE=RM:RMUAT
REMAP_TABLESPACE=TBS_EDR_1:RMUAT
REMAP_TABLESPACE=TBS_EDR_12:RMUAT
REMAP_TABLESPACE=TBS_EDR_13:RMUAT
REMAP_TABLESPACE=TBS_EDR_14:RMUAT
REMAP_TABLESPACE=TBS_EDR_15:RMUAT
REMAP_TABLESPACE=TBS_EDR_2:RMUAT
REMAP_TABLESPACE=TBS_EDR_3:RMUAT
REMAP_TABLESPACE=TBS_EDR_4:RMUAT
REMAP_TABLESPACE=TBS_IDX_EDR:RMUAT
REMAP_TABLESPACE=TBS_IDX_NETVERTEXCUSTOMER01:RMUAT
REMAP_TABLESPACE=TBS_IDX_NETVERTEXCUSTOMER02:RMUAT
REMAP_TABLESPACE=TBS_IDX_NETVERTEXCUSTOMER03:RMUAT
REMAP_TABLESPACE=TBS_IDX_NETVERTEXCUSTOMER04:RMUAT
REMAP_TABLESPACE=TBS_IDX_NETVERTEXCUSTOMER05:RMUAT
REMAP_TABLESPACE=TBS_IDX_NETVERTEXCUSTOMER06:RMUAT
REMAP_TABLESPACE=TBS_IDX_NETVERTEXCUSTOMER07:RMUAT
REMAP_TABLESPACE=TBS_IDX_NETVERTEXCUSTOMER08:RMUAT
REMAP_TABLESPACE=TBS_NETVERTEXCUSTOMER01:RMUAT
REMAP_TABLESPACE=TBS_NETVERTEXCUSTOMER02:RMUAT
REMAP_TABLESPACE=TBS_NETVERTEXCUSTOMER03:RMUAT
REMAP_TABLESPACE=TBS_NETVERTEXCUSTOMER04:RMUAT
REMAP_TABLESPACE=TBS_NETVERTEXCUSTOMER05:RMUAT
REMAP_TABLESPACE=TBS_NETVERTEXCUSTOMER06:RMUAT
REMAP_TABLESPACE=TBS_NETVERTEXCUSTOMER07:RMUAT
REMAP_TABLESPACE=TBS_NETVERTEXCUSTOMER08:RMUAT
REMAP_TABLESPACE=TBS_SESS_USAGE_SMRY01:RMUAT
REMAP_TABLESPACE=TBS_SESS_USAGE_SMRY02:RMUAT
REMAP_TABLESPACE=TBS_SESS_USAGE_SMRY03:RMUAT
REMAP_TABLESPACE=TBS_SESS_USAGE_SMRY04:RMUAT
REMAP_TABLESPACE=TBS_SESS_USAGE_SMRY05:RMUAT
REMAP_TABLESPACE=TBS_SESS_USAGE_SMRY06:RMUAT
REMAP_TABLESPACE=TBS_SESS_USAGE_SMRY07:RMUAT
REMAP_TABLESPACE=TBS_SESS_USAGE_SMRY08:RMUAT
REMAP_TABLESPACE=TBS_TBLCORESESSION1:RMUAT
REMAP_TABLESPACE=TBS_TBLCORESESSION2:RMUAT
REMAP_TABLESPACE=TBS_TBLCORESESSION3:RMUAT
REMAP_TABLESPACE=TBS_TBLCORESESSION4:RMUAT
REMAP_TABLESPACE=TBS_TBLCORESESSION5:RMUAT
REMAP_TABLESPACE=TBS_TBLCORESESSION6:RMUAT
REMAP_TABLESPACE=TBS_TBLCORESESSION7:RMUAT
REMAP_TABLESPACE=TBS_TBLCORESESSION8:RMUAT
REMAP_TABLESPACE=TBS_TBLMADDONSUBSCRIBERREL1:RMUAT
REMAP_TABLESPACE=TBS_TBLMADDONSUBSCRIBERREL2:RMUAT
REMAP_TABLESPACE=TBS_TBLMADDONSUBSCRIBERREL3:RMUAT
REMAP_TABLESPACE=TBS_TBLMADDONSUBSCRIBERREL4:RMUAT
REMAP_TABLESPACE=TBS_TBLMADDONSUBSCRIBERREL5:RMUAT
REMAP_TABLESPACE=TBS_TBLMADDONSUBSCRIBERREL6:RMUAT
REMAP_TABLESPACE=TBS_TBLMADDONSUBSCRIBERREL7:RMUAT
REMAP_TABLESPACE=TBS_TBLMADDONSUBSCRIBERREL8:RMUAT
REMAP_TABLESPACE=TBS_TBLMDYNASPR01:RMUAT
REMAP_TABLESPACE=TBS_TBLMDYNASPR02:RMUAT
REMAP_TABLESPACE=TBS_TBLMDYNASPR03:RMUAT
REMAP_TABLESPACE=TBS_TBLMDYNASPR04:RMUAT
REMAP_TABLESPACE=TBS_TBLMDYNASPR05:RMUAT
REMAP_TABLESPACE=TBS_TBLMDYNASPR06:RMUAT
REMAP_TABLESPACE=TBS_TBLMDYNASPR07:RMUAT
REMAP_TABLESPACE=TBS_TBLMDYNASPR08:RMUAT
REMAP_TABLESPACE=TBS_TBLSESSIONRULE1:RMUAT
REMAP_TABLESPACE=TBS_TBLSESSIONRULE2:RMUAT
REMAP_TABLESPACE=TBS_TBLSESSIONRULE3:RMUAT
REMAP_TABLESPACE=TBS_TBLSESSIONRULE4:RMUAT
REMAP_TABLESPACE=TBS_TBLSESSIONRULE5:RMUAT
REMAP_TABLESPACE=TBS_TBLSESSIONRULE6:RMUAT
REMAP_TABLESPACE=TBS_TBLSESSIONRULE7:RMUAT
REMAP_TABLESPACE=TBS_TBLSESSIONRULE8:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_1:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_10:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_11:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_12:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_13:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_14:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_15:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_2:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_3:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_4:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_5:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_6:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_7:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_8:RMUAT
REMAP_TABLESPACE=TBS_USAGECDR_9:RMUAT
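Assuming the parameters above are saved in a parameter file (the file name import_rmuat.par below is an assumption), the import would then be run as:

-bash-4.1$ impdp userid=system/manager parfile=import_rmuat.par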

http://international-dba.blogspot.in/2014/05/impdp-remapdatafile-parameter.html
https://community.oracle.com/thread/498114?start=0&tstart=0

Datapump – Hands on

DB version: 11.2.0.3.0
Case 1: Import a backup when you don't know the schema (username) in the dump

SQLFILE: Writes all the SQL DDL that the import would execute to a specified file, without actually performing the import.

-bash-4.1$ impdp userid=system/manager remap_schema=rm:appgt directory=DB_BKP dumpfile=exp_appgt_11051606200101.dmp SQLFILE=rm.sql
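The generated SQL file can then be scanned to discover the schema names present in the dump; a minimal sketch (the grep patterns are assumptions, the exact DDL in rm.sql depends on the export):

-bash-4.1$ grep -i "CREATE USER" rm.sql
-bash-4.1$ grep -i "CREATE TABLE" rm.sql | head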

REMAP_TABLESPACE: Tablespace objects are remapped to another tablespace.
REMAP_SCHEMA: Objects from one schema are loaded into another schema.
EXCLUDE=TABLE:"IN('EDR','CORESESSIONS','SESSIONRULE')"

impdp userid=system/manager remap_schema=QM:netvertexgt directory=DB_BKP dumpfile=exp_netvertex_11051606200101.dmp remap_tablespace=QM:netvertexgt EXCLUDE=TABLE:"IN('EDR','CORESESSIONS','SESSIONRULE')" TABLE_EXISTS_ACTION=REPLACE

Generate consistent backup through Oracle Datapump export

The original export utility, when run with consistent=y, creates a dump file in which all exported objects are consistent as of the time the export session began.

Conventional Export
==============

exp userid="/ as sysdba" owner=<schema> file=<dumpfile>.dmp log=<logfile>.log consistent=y

However, Data Pump export does not have a CONSISTENT parameter. By default, a Data Pump export is consistent for one table at a time, but not for the entire dump file.

We can use the FLASHBACK_SCN or FLASHBACK_TIME parameter to make the whole backup consistent. The equivalent commands for a consistent backup with both the original export and Data Pump export are shown below.

Datapump Export
==============

vi expdp.par

schemas=<schema>
directory=<directory>
dumpfile=<dumpfile>.dmp
logfile=<logfile>.log
FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"
job_name=<job_name>

expdp userid="/ as sysdba" parfile=expdp.par
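Alternatively, FLASHBACK_SCN can be used instead of FLASHBACK_TIME. A minimal sketch, assuming the current SCN has been looked up first (for example from V$DATABASE.CURRENT_SCN) and is supplied as a placeholder value:

expdp userid="/ as sysdba" schemas=<schema> directory=<directory> dumpfile=<dumpfile>.dmp logfile=<logfile>.log flashback_scn=<current_scn>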

Data Migration using Oracle Datapump QUERY option

Consider the case where we want to migrate only selected records from a source database to a target database. From 10g onwards, the technique below is available.

I used the QUERY option of the Oracle Datapump tool to satisfy the data migration requirements from the source database to the target database.
🙂

Approach-1
1. Take a backup using the Oracle Datapump tool on the source database
2. Move the backup to the target database
3. Import the backup

OR

Approach-2
1. Create and configure a database link in the target database
2. Import into the target database directly from the source database over that database link using the NETWORK_LINK option, with no intermediate dump file (see the sketch below)
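A minimal sketch of Approach-2, assuming a database link named SRC_DB_LINK has already been created on the target database (the link name, schema and directory are assumptions); a QUERY parameter such as the one in the parfile below can be added in the same way:

impdp userid=system/manager network_link=SRC_DB_LINK schemas=SCOTT directory=DB_ORCL logfile=impdp_scott_netlink.log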

expdp scott/******* parfile=mstpara.par

-bash-4.1$ cat mstpara.par
TABLES=EMP
directory=DB_ORCL
DUMPFILE=expdp_emp_test.dmp
logfile=expdp_emp_test.log
query=EMP:"where dept = '505'"
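The resulting dump file contains only the rows that satisfy the QUERY predicate, so completing Approach-1 is just a normal import of that dump on the target; a minimal sketch, assuming the dump file has been copied into the target's DB_ORCL directory (the log file name and TABLE_EXISTS_ACTION choice are assumptions):

impdp scott/******* TABLES=EMP directory=DB_ORCL dumpfile=expdp_emp_test.dmp logfile=impdp_emp_test.log TABLE_EXISTS_ACTION=APPEND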

References

http://dba-oracle.com/t_oracle_expdp_tips.htm
http://amit7oracledba.blogspot.in/2013/12/query-parameter-in-datapump-utility.html
http://arjudba.blogspot.in/2008/04/exportimport-datapump-parameter-query.html
https://community.oracle.com/message/12587311
http://shaharear.blogspot.in/2008/09/using-query-parameter-in-expdpimpdp.html

Oracle Data Pump: Interactive Mode – Attach/Detach

I had a customer who was running a long Data Pump import job. The client needed to go home and still have a way to monitor the import job.

Starting with Oracle 10g, Data Pump has a useful feature for situations in which you start a job at one location and need to check on it at a later time from a different location.
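A minimal sketch of attaching to a running job (the job name shown is a typical default and is an assumption; the actual name can be found in DBA_DATAPUMP_JOBS):

-bash-4.1$ impdp userid=system/manager attach=SYS_IMPORT_FULL_01

Import> status
Import> continue_client
Import> exit_client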

http://blogs.xtivia.com/home/-/blogs/oracle-data-pump%3A-interactive-mode-attach-detach