Unloading/Loading Data Using External Tables

Oracle has incorporated support for Data Pump technology into external tables.
The ORACLE_DATAPUMP access driver can be used to unload data to Data Pump export files and subsequently reload it.
The unload of data occurs when the external table is created using the "AS" (CREATE TABLE ... AS SELECT) clause.
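These examples assume the TEST_DIR directory object already exists and the unloading user can write to it. A minimal sketch of creating it (the filesystem path and grantee are illustrative):

CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/dump';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;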

CREATE TABLE emp_xt
  ORGANIZATION EXTERNAL
   (
     TYPE ORACLE_DATAPUMP
     DEFAULT DIRECTORY test_dir
     LOCATION ('emp_xt.dmp')
   )
   AS SELECT * FROM emp;

The data can then be queried as follows.

SELECT * FROM emp_xt;

The syntax to create an external table pointing to an existing file is similar, but without the "AS" clause. In this case we create it in the same schema, but it could equally be created in a different schema in the same instance, or in an entirely different instance.

DROP TABLE emp_xt;

CREATE TABLE emp_xt (
  EMPNO     NUMBER(4),
  ENAME     VARCHAR2(10),
  JOB       VARCHAR2(9),
  MGR       NUMBER(4),
  HIREDATE  DATE,
  SAL       NUMBER(7,2),
  COMM      NUMBER(7,2),
  DEPTNO    NUMBER(2))
  ORGANIZATION EXTERNAL (
     TYPE ORACLE_DATAPUMP
     DEFAULT DIRECTORY test_dir
     LOCATION ('emp_xt.dmp')
  );
SELECT * FROM emp_xt;

Creating an external table using the ORACLE_DATAPUMP access driver is restricted to dump files created by the external table unload.
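To reuse the dump file in a different instance it first has to be copied there. A hedged sketch using DBMS_FILE_TRANSFER (the destination directory object and the database link name are assumptions):

BEGIN
  -- Copy emp_xt.dmp to a directory object on the remote instance over a DB link
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'TEST_DIR',
    source_file_name             => 'emp_xt.dmp',
    destination_directory_object => 'TEST_DIR',
    destination_file_name        => 'emp_xt.dmp',
    destination_database         => 'remote_db');   -- assumed database link name
END;
/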

Oracle 11g result_cache – New Feature for DWH

Introduction
The Result Cache is a new feature in Oracle 11g.
Caching itself is nothing new; Oracle has cached data blocks for a long time.
What is new is the caching of query results. This is similar to how a materialized view works, but is more dynamic.
The new "result_cache" hint asks Oracle to cache the results of a query.
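The cache itself can be inspected and flushed through the DBMS_RESULT_CACHE package (requires suitable privileges); a small sketch:

SET SERVEROUTPUT ON
-- Print a memory usage report for the result cache
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT;

-- Invalidate and purge all cached results
EXEC DBMS_RESULT_CACHE.FLUSH;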

Hands On

Example-1

SQL> show parameter result

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
client_result_cache_lag              big integer 3000
client_result_cache_size             big integer 0
result_cache_max_result              integer     5
result_cache_max_size                big integer 200M
result_cache_mode                    string      MANUAL
result_cache_remote_expiration       integer     0
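With result_cache_mode set to MANUAL, only statements that carry the RESULT_CACHE hint are cached. If required, the mode and cache size can be changed at the system level; the values below are illustrative:

ALTER SYSTEM SET result_cache_mode = FORCE;       -- cache all eligible queries
ALTER SYSTEM SET result_cache_max_size = 200M;    -- upper limit for the result cache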

-bash-3.00$ sqlplus hitesh/*****
SQL*Plus: Release 11.2.0.3.0 Production on Wed Jan 28 15:45:19 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> SET TIMING ON
SQL> SET AUTOTRACE TRACEONLY
SQL> select /*+ RESULT_CACHE */ count(*) from test_tbl, test_tbl, test_tbl, test_tbl;

Elapsed: 00:00:44.76

Execution Plan
----------------------------------------------------------
Plan hash value: 1168074429

-----------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |     1 |  1379K  (1)| 04:35:53 |
|   1 |  RESULT CACHE             | 3b8wqvk98cp33ck9s25pay4zwg |       |            |          |
|   2 |   SORT AGGREGATE          |                            |     1 |            |          |
|   3 |    MERGE JOIN CARTESIAN   |                            |  100M |  1379K  (1)| 04:35:53 |
|   4 |     MERGE JOIN CARTESIAN  |                            | 1000K |  13798  (1)| 00:02:46 |
|   5 |      MERGE JOIN CARTESIAN |                            | 10000 |    141  (1)| 00:00:02 |
|   6 |       TABLE ACCESS FULL   | TEST_TBL                   |   100 |      3  (0)| 00:00:01 |
|   7 |       BUFFER SORT         |                            |   100 |    138  (1)| 00:00:02 |
|   8 |        TABLE ACCESS FULL  | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
|   9 |      BUFFER SORT          |                            |   100 |  13797  (1)| 00:02:46 |
|  10 |       TABLE ACCESS FULL   | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
|  11 |     BUFFER SORT           |                            |   100 |  1379K  (1)| 04:35:53 |
|  12 |      TABLE ACCESS FULL    | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(HITESH.TEST_TBL); attributes=(single-row);
       name="select /*+ RESULT_CACHE */ count(*) from test_tbl, test_tbl, test_tbl, test_tbl"

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
22 recursive calls
0 db block gets
81 consistent gets
8 physical reads
0 redo size
526 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> select /*+ RESULT_CACHE */ count(*) from test_tbl, test_tbl, test_tbl, test_tbl;

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
Plan hash value: 1168074429

-----------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |     1 |  1379K  (1)| 04:35:53 |
|   1 |  RESULT CACHE             | 3b8wqvk98cp33ck9s25pay4zwg |       |            |          |
|   2 |   SORT AGGREGATE          |                            |     1 |            |          |
|   3 |    MERGE JOIN CARTESIAN   |                            |  100M |  1379K  (1)| 04:35:53 |
|   4 |     MERGE JOIN CARTESIAN  |                            | 1000K |  13798  (1)| 00:02:46 |
|   5 |      MERGE JOIN CARTESIAN |                            | 10000 |    141  (1)| 00:00:02 |
|   6 |       TABLE ACCESS FULL   | TEST_TBL                   |   100 |      3  (0)| 00:00:01 |
|   7 |       BUFFER SORT         |                            |   100 |    138  (1)| 00:00:02 |
|   8 |        TABLE ACCESS FULL  | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
|   9 |      BUFFER SORT          |                            |   100 |  13797  (1)| 00:02:46 |
|  10 |       TABLE ACCESS FULL   | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
|  11 |     BUFFER SORT           |                            |   100 |  1379K  (1)| 04:35:53 |
|  12 |      TABLE ACCESS FULL    | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(HITESH.TEST_TBL); attributes=(single-row);
       name="select /*+ RESULT_CACHE */ count(*) from test_tbl, test_tbl, test_tbl, test_tbl"

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
13 recursive calls
0 db block gets
32 consistent gets
0 physical reads
0 redo size
526 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> select /*+ RESULT_CACHE */ count(*) from test_tbl, test_tbl, test_tbl, test_tbl;

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1168074429

-----------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |     1 |  1379K  (1)| 04:35:53 |
|   1 |  RESULT CACHE             | 3b8wqvk98cp33ck9s25pay4zwg |       |            |          |
|   2 |   SORT AGGREGATE          |                            |     1 |            |          |
|   3 |    MERGE JOIN CARTESIAN   |                            |  100M |  1379K  (1)| 04:35:53 |
|   4 |     MERGE JOIN CARTESIAN  |                            | 1000K |  13798  (1)| 00:02:46 |
|   5 |      MERGE JOIN CARTESIAN |                            | 10000 |    141  (1)| 00:00:02 |
|   6 |       TABLE ACCESS FULL   | TEST_TBL                   |   100 |      3  (0)| 00:00:01 |
|   7 |       BUFFER SORT         |                            |   100 |    138  (1)| 00:00:02 |
|   8 |        TABLE ACCESS FULL  | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
|   9 |      BUFFER SORT          |                            |   100 |  13797  (1)| 00:02:46 |
|  10 |       TABLE ACCESS FULL   | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
|  11 |     BUFFER SORT           |                            |   100 |  1379K  (1)| 04:35:53 |
|  12 |      TABLE ACCESS FULL    | TEST_TBL                   |   100 |      1  (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(HITESH.TEST_TBL); attributes=(single-row);
       name="select /*+ RESULT_CACHE */ count(*) from test_tbl, test_tbl, test_tbl, test_tbl"

Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
526 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Example-2

SQL> select /*+ RESULT_CACHE */ count(*) from test_tbl;
Elapsed: 00:00:00.01
Execution Plan
----------------------------------------------------------
Plan hash value: 215328055
-------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |     3   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 37qx1wzc3051davww9ksc0b0rx |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| TEST_TBL                   |   100 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
Result Cache Information (identified by operation id):
------------------------------------------------------
1 - column-count=1; dependencies=(HITESH.TEST_TBL); attributes=(single-row);
name="select /*+ RESULT_CACHE */ count(*) from test_tbl"
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
15 consistent gets
0 physical reads
0 redo size
526 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> select /*+ RESULT_CACHE */ count(*) from test_tbl;

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 215328055
-------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |     3   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 37qx1wzc3051davww9ksc0b0rx |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| TEST_TBL                   |   100 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=1; dependencies=(HITESH.TEST_TBL); attributes=(single-row);
name="select /*+ RESULT_CACHE */ count(*) from test_tbl"
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
526 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Connect sys/*********

SQL> select id, type, status, name from v$result_cache_objects;

        ID TYPE       STATUS
---------- ---------- ---------
NAME
--------------------------------------------------------------------------------
         0 Dependency Published
HITESH.TEST_TBL

         2 Result     Published
select /*+ RESULT_CACHE */ count(*) from test_tbl

         1 Result     Published
select /*+ RESULT_CACHE */ count(*) from test_tbl, test_tbl, test_tbl, test_tbl
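Overall cache effectiveness (creates, finds, invalidations) can also be checked from V$RESULT_CACHE_STATISTICS, for example:

SELECT name, value FROM v$result_cache_statistics;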

References
http://blog.yannickjaquier.com/oracle/sql-result-cache.html
http://dbaora.com/sql-result-cache-11g/
http://www.ooug.org/presentations/2010slides/Oracle_11g_Results_Cache.ppt

Data Purging Management - Oracle Table Daily Interval Auto Partition

create or replace function get_high_value_as_date(
  p_table_name     in varchar2,
  p_partition_name in varchar2
) return date as
  v_high_value varchar2(1024);
  v_date        date;
begin
  select high_value into v_high_value from user_tab_partitions
    where table_name = upper(p_table_name)
      and partition_name = upper(p_partition_name);
  execute immediate 'select ' || v_high_value || ' from dual' into v_date;
  return v_date;
end;
/
create or replace procedure proc_app_cleanup_cdr
as
  c_days_to_keep constant integer := 15;
  x_last_partition exception;
  pragma exception_init(x_last_partition, -14758);
begin
  for rec in (select table_name, partition_name
    from user_tab_partitions
      where table_name = 'TBLCDR' and partition_name != 'P_FIRST'
        and get_high_value_as_date(table_name, partition_name) <
        sysdate - c_days_to_keep) loop
    begin
	  dbms_output.put_line('alter table ' || rec.table_name || ' drop partition ' || rec.partition_name ||' update indexes');
      execute immediate 'alter table ' || rec.table_name || ' drop partition ' || rec.partition_name ||' update indexes';
    exception
      when x_last_partition then
        null;
    end;
  end loop;
end;
/
BEGIN
  -- Job defined entirely by the CREATE JOB procedure.
  DBMS_SCHEDULER.create_job (
    job_name => 'JOB_APP_CLEANUP_CDR',
    job_type => 'PLSQL_BLOCK',
    job_action => 'BEGIN PROC_APP_CLEANUP_CDR(); END;',
    START_DATE => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=03;BYMINUTE=32;',
    end_date => NULL,
    enabled => TRUE,
    comments => 'Job defined for JOB_APP_CLEANUP_CDR.');
END;
/
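Once created, the job can be verified and, if needed, fired immediately instead of waiting for the scheduled 03:32 run:

-- Confirm the job exists and is enabled
SELECT job_name, enabled, repeat_interval
  FROM user_scheduler_jobs
 WHERE job_name = 'JOB_APP_CLEANUP_CDR';

-- Run it once on demand
EXEC DBMS_SCHEDULER.RUN_JOB('JOB_APP_CLEANUP_CDR');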

🙂

Automatic Daily – Interval Partitioned Table Management


spool APP_DBC_PARTITION-TBLCDR_v1.log

--create tablespaces for TBLCDR
DEFINE db_datafile="&&dbf_storage_directory"
prompt &db_datafile

CREATE TABLESPACE tbs_cdr_1   DATAFILE '&&db_datafile/tbs_cdr_1.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_2   DATAFILE '&&db_datafile/tbs_cdr_2.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_3   DATAFILE '&&db_datafile/tbs_cdr_3.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_4   DATAFILE '&&db_datafile/tbs_cdr_4.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_5   DATAFILE '&&db_datafile/tbs_cdr_5.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_6   DATAFILE '&&db_datafile/tbs_cdr_6.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_7   DATAFILE '&&db_datafile/tbs_cdr_7.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_8   DATAFILE '&&db_datafile/tbs_cdr_8.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_9   DATAFILE '&&db_datafile/tbs_cdr_9.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_10  DATAFILE '&&db_datafile/tbs_cdr_10.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_11  DATAFILE '&&db_datafile/tbs_cdr_11.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_12  DATAFILE '&&db_datafile/tbs_cdr_12.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_13  DATAFILE '&&db_datafile/tbs_cdr_13.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_14  DATAFILE '&&db_datafile/tbs_cdr_14.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_cdr_15  DATAFILE '&&db_datafile/tbs_cdr_15.dbf' size 250M AUTOEXTEND ON;

CREATE TABLESPACE tbs_idxcdr  DATAFILE '&&db_datafile/tbs_idxcdr.dbf' size 250M AUTOEXTEND ON;

connect &&username/&&password

set long 10000
set pagesize 0
select dbms_metadata.get_dependent_ddl('INDEX','TBLCDR') from dual;

SELECT DBMS_METADATA.GET_DDL('TABLE','TBLCDR') FROM DUAL;

RENAME TBLCDR TO OLD_TBLCDR;

CREATE TABLE TBLCDR
  (
    CDRID            NUMBER(20,0),
    SESSIONID        VARCHAR2(100 BYTE),
    USERNAME         VARCHAR2(264 BYTE),
    FRAMMEDIPADDRESS VARCHAR2(32 BYTE),
    GATEWAYADDRESS   VARCHAR2(32 BYTE),
    REQUESTTYPE      VARCHAR2(30 BYTE),
    SERVICETYPE      VARCHAR2(30 BYTE),
    MONITORINGKEY    VARCHAR2(64 BYTE),
    INPUTOCTETS      NUMBER(32,0),
    OUTPUTOCTETS     NUMBER(32,0),
    TOTALOCTETS      NUMBER(32,0),
    USAGETIME        VARCHAR2(64 BYTE),
    PARAM1           VARCHAR2(64 BYTE),
    PARAM2           VARCHAR2(64 BYTE),
    PARAM3           VARCHAR2(64 BYTE),
    PARAM4           VARCHAR2(64 BYTE),
    PARAM5           VARCHAR2(64 BYTE),
    CREATEDATE       TIMESTAMP (6),
    LASTMODIFIEDDATE TIMESTAMP (6),
    TIMESTAMP        TIMESTAMP (6),
    AGGREGATEKEY  VARCHAR2(128 BYTE),
    IMSI          VARCHAR2(100 BYTE),
    APN           VARCHAR2(25 BYTE),
    LAC           VARCHAR2(5 BYTE),
    USEREQUIPMENT VARCHAR2(25 BYTE)
  )
PARTITION BY RANGE (CREATEDATE)
INTERVAL (NUMTODSINTERVAL(1,'day')) store in (tbs_cdr_1,tbs_cdr_2,tbs_cdr_3,tbs_cdr_4,tbs_cdr_5,tbs_cdr_6,tbs_cdr_7,tbs_cdr_8,tbs_cdr_9,tbs_cdr_10,tbs_cdr_11,tbs_cdr_12,tbs_cdr_13,tbs_cdr_14,tbs_cdr_15)
(PARTITION p_first VALUES LESS THAN (TO_DATE('19-12-2013', 'DD-MM-YYYY')))
NOLOGGING;

 CREATE UNIQUE INDEX IDX_UNQ_CDR ON TBLCDR(CDRID)
 TABLESPACE tbs_idxcdr
 NOLOGGING;

 ALTER TABLE TBLCDR INITRANS 200;

 ALTER INDEX IDX_UNQ_CDR INITRANS 200;

 ALTER SEQUENCE SEQ_CDR CACHE 2000;

 ALTER TABLE TBLCDR ENABLE ROW MOVEMENT;

 set long 10000
 set pagesize 0
 select dbms_metadata.get_dependent_ddl('INDEX','TBLCDR') from DUAL;

 SELECT DBMS_METADATA.GET_DDL('TABLE','TBLCDR') FROM DUAL;
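A quick way to confirm the daily interval mechanism works is to insert a test row and list the partitions; Oracle creates a system-named (SYS_Pnnn) partition for that day automatically. A sketch (the CDRID value is illustrative):

INSERT INTO TBLCDR (CDRID, CREATEDATE) VALUES (1, SYSTIMESTAMP);
COMMIT;

SELECT partition_name, tablespace_name, high_value
  FROM user_tab_partitions
 WHERE table_name = 'TBLCDR'
 ORDER BY partition_position;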

--Create a procedure to TRUNCATE PARTITIONs that are older than 6 months
-- TRUNCATE PARTITION for partitions older than 6 months (manual OAM step)

-- ALTER TABLE TBLCDR TRUNCATE PARTITION name_of_partition UPDATE INDEXES;

spool off;

Automatic Maintenance

create or replace procedure sp_netvertex_data_purge_cycle(t_name varchar2)
as
    v_date     DATE;
    v_date_cut DATE := trunc(SYSDATE) - 15;
    --v_date_cut DATE := add_months(trunc(SYSDATE, 'mm'), purge_month_interval);
BEGIN

/*

-- Automatic Daily Interval Partition Configured on below modules
    1.) TBLCDR
    2.) TBLEDR
    3.) TBLMSESSIONUSAGECDR

-- This code configures the automatic data-purging cycle: it drops (or truncates) the
   partitions that are older than the retention window defined by v_date_cut (15 days here)

-- How to use this procedure

 EXEC sp_netvertex_data_purge_cycle('TBLEDR');
 EXEC sp_netvertex_data_purge_cycle('TBLCDR');
 EXEC sp_netvertex_data_purge_cycle('TBLMSESSIONUSAGECDR');

 We can also run this through the database scheduler during off-peak hours to automate the maintenance

*/

    FOR r IN (SELECT table_name,high_value,interval,partition_name FROM   user_tab_partitions tp WHERE  tp.table_name = t_name)
    LOOP
        EXECUTE IMMEDIATE 'SELECT ' || r.high_value || ' from dual' INTO v_date;
        IF v_date < v_date_cut
        THEN
            IF r.interval = 'NO'
            THEN
                EXECUTE IMMEDIATE 'alter table ' ||  r.table_name || ' truncate partition ' ||r.partition_name || ' UPDATE INDEXES';

                --Debug Command
                DBMS_OUTPUT.PUT_LINE('alter table ' || r.table_name || ' truncate partition ' ||r.partition_name|| ' UPDATE INDEXES');
            ELSE
                EXECUTE IMMEDIATE 'alter table ' ||  r.table_name || ' drop partition ' ||r.partition_name || ' UPDATE INDEXES';

                --Debug Command
			          DBMS_OUTPUT.PUT_LINE('alter table ' || r.table_name || ' drop partition ' ||r.partition_name|| ' UPDATE INDEXES');
            END IF;
        END IF;
    END LOOP;

END;
/

ALTER TABLE TBLEDR ADD PARTITION "P2" VALUES LESS THAN (TIMESTAMP' 2013-12-16 00:00:00');
Error report –
SQL Error: ORA-14760: ADD PARTITION is not permitted on Interval partitioned objects

You cannot drop the oldest (root) partition of an interval-partitioned table.

ORA-14758: Last partition in the range section cannot be dropped
https://forums.oracle.com/message/10220371
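A commonly used workaround is to temporarily switch the table back to plain range partitioning (which converts existing interval partitions into ordinary range partitions), drop the old boundary partition, and then re-enable the interval. A sketch for TBLCDR:

ALTER TABLE TBLCDR SET INTERVAL ();                          -- back to pure range partitioning
ALTER TABLE TBLCDR DROP PARTITION p_first UPDATE INDEXES;    -- now permitted
ALTER TABLE TBLCDR SET INTERVAL (NUMTODSINTERVAL(1,'day'));  -- restore the daily interval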

APP TABLE PARTITION – Tuning

--create tablespaces for TBLMSESSIONUSAGECDR
DEFINE db_datafile="&&dbf_storage_directory"
prompt &db_datafile

CREATE TABLESPACE tbs_usagecdr_1   DATAFILE '/ORADATA2/tbs_usagecdr_1.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_2   DATAFILE '/ORADATA2/tbs_usagecdr_2.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_3   DATAFILE '/ORADATA2/tbs_usagecdr_3.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_4   DATAFILE '/ORADATA2/tbs_usagecdr_4.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_5   DATAFILE '/ORADATA2/tbs_usagecdr_5.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_6   DATAFILE '/ORADATA2/tbs_usagecdr_6.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_7   DATAFILE '/ORADATA2/tbs_usagecdr_7.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_8   DATAFILE '/ORADATA2/tbs_usagecdr_8.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_9   DATAFILE '/ORADATA2/tbs_usagecdr_9.dbf'  size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_10  DATAFILE '/ORADATA2/tbs_usagecdr_10.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_11  DATAFILE '/ORADATA2/tbs_usagecdr_11.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_usagecdr_12  DATAFILE '/ORADATA2/tbs_usagecdr_12.dbf' size 250M AUTOEXTEND ON;

connect &&username/&&password

--Keep history of TBLMSESSIONUSAGECDR independent of the year: partitions are keyed
--on the calendar month, so the same partition is reused in the next year as well

--Backup the existing table
SELECT COUNT(1) FROM TBLMSESSIONUSAGECDR;

RENAME TBLMSESSIONUSAGECDR TO TBLMSESSIONUSAGECDR_BCK031213;

CREATE TABLE TBLMSESSIONUSAGECDR
(CREATE_DATE        TIMESTAMP(6),
USERID             VARCHAR2(50),
MONITORINGKEY       VARCHAR2(100) NOT NULL ENABLE,
USAGETIME           NUMBER(20,0) DEFAULT 0 NOT NULL ENABLE,
UPLOADOCTETS        NUMBER(20,0) DEFAULT 0 NOT NULL ENABLE,
DOWNLOADOCTETS      NUMBER(20,0) DEFAULT 0 NOT NULL ENABLE,
TOTALOCTETS         NUMBER(20,0) DEFAULT 0 NOT NULL ENABLE,
AGGREGATEKEY        VARCHAR2(128),
LASTUPDATETIME      TIMESTAMP (6),
METERINGLEVEL       VARCHAR2(50),
CUI                 VARCHAR2(50),
V_CREATE_DATE NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR(CREATE_DATE,'MM'))) VIRTUAL VISIBLE
)
PARTITION BY LIST (V_CREATE_DATE)
(
PARTITION p_1 VALUES (1)  tablespace tbs_usagecdr_1,
PARTITION p_2 VALUES (2)  tablespace tbs_usagecdr_2,
PARTITION p_3 VALUES (3)  tablespace tbs_usagecdr_3,
PARTITION p_4 VALUES (4)  tablespace tbs_usagecdr_4,
PARTITION p_5 VALUES (5)  tablespace tbs_usagecdr_5,
PARTITION p_6 VALUES (6)  tablespace tbs_usagecdr_6,
PARTITION p_7 VALUES (7)  tablespace tbs_usagecdr_7,
PARTITION p_8 VALUES (8)  tablespace tbs_usagecdr_8,
PARTITION p_9 VALUES (9)  tablespace tbs_usagecdr_9,
PARTITION p_10 VALUES(10) tablespace  tbs_usagecdr_10,
PARTITION p_11 VALUES(11) tablespace  tbs_usagecdr_11,
PARTITION p_12 VALUES(12) tablespace  tbs_usagecdr_12
);

ALTER TABLE TBLMSESSIONUSAGECDR     INITRANS 100;

ALTER TABLE TBLMSESSIONUSAGECDR ENABLE ROW MOVEMENT;
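Rows are routed by the virtual month-number column, so data created in March always lands in p_3 regardless of the year. A sketch (values are illustrative; the remaining NOT NULL columns take their defaults):

INSERT INTO TBLMSESSIONUSAGECDR (CREATE_DATE, USERID, MONITORINGKEY)
VALUES (TIMESTAMP '2014-03-15 10:00:00', 'user1', 'mk1');
COMMIT;

SELECT COUNT(*) FROM TBLMSESSIONUSAGECDR PARTITION (p_3);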

--Create a procedure to TRUNCATE PARTITIONs that are older than 6 months
create or replace PROCEDURE PROC_NVUSAGECDR_MAINTAINANCE
AS
  p_name varchar2(100);
BEGIN

  /*
  EliteCSM NetVertex Module
  EliteCore Technologies Pvt. Ltd.
  */

  FOR REC IN (SELECT DISTINCT 'p_' || TO_NUMBER(TO_CHAR(CREATE_DATE, 'MM')) p_name
                FROM TBLMSESSIONUSAGECDR
               WHERE CREATE_DATE + 185 < SYSDATE)
  LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE TBLMSESSIONUSAGECDR TRUNCATE PARTITION ' || REC.p_name || ' UPDATE INDEXES';

    DBMS_OUTPUT.PUT_LINE ('ALTER TABLE TBLMSESSIONUSAGECDR TRUNCATE PARTITION ' || REC.p_name || ' UPDATE INDEXES; -------> successfully completed');

  END LOOP;
END;
/
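The procedure can also be run by hand during a maintenance window; with SERVEROUTPUT on it echoes each TRUNCATE it issues:

SET SERVEROUTPUT ON
EXEC PROC_NVUSAGECDR_MAINTAINANCE;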

-- Run once a year on 30 June at 07:05:
BEGIN
  -- Job defined entirely by the CREATE JOB procedure.
  DBMS_SCHEDULER.create_job (
    job_name        => 'JOB_USAGECDR_MAINTAINANCE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN PROC_NVUSAGECDR_MAINTAINANCE(); END;',
    START_DATE      => SYSTIMESTAMP,
    repeat_interval => 'freq=yearly;bymonth=6;bymonthday=30;byhour=7;byminute=5;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job defined entirely by the CREATE JOB procedure.');
END;
/

begin
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => '"NETVERTEXP"',
    tabname          => '"TBLMSESSIONUSAGECDR"',
    estimate_percent => 1
  );
end;
/

--create tablespaces for TBLEDR
DEFINE db_datafile="&&dbf_storage_directory"
prompt &db_datafile

CREATE TABLESPACE tbs_edr_1   DATAFILE '/ORADATA1/tbs_edr_1.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_edr_2   DATAFILE '/ORADATA1/tbs_edr_2.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_edr_3   DATAFILE '/ORADATA1/tbs_edr_3.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_edr_4   DATAFILE '/ORADATA1/tbs_edr_4.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_edr_5   DATAFILE '/ORADATA1/tbs_edr_5.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_edr_6   DATAFILE '/ORADATA1/tbs_edr_6.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_edr_7   DATAFILE '/ORADATA1/tbs_edr_7.dbf' size 250M AUTOEXTEND ON;
CREATE TABLESPACE tbs_edr_8   DATAFILE '/ORADATA1/tbs_edr_8.dbf' size 250M AUTOEXTEND ON;

connect &&username/&&password

--Keep history of TBLEDR independent of the calendar: partitions are keyed on the
--day of the month, so the same partitions are reused month after month

--Backup the existing table
SELECT COUNT(1) FROM TBLEDR;
RENAME TBLEDR TO TBLEDR_BCK03122013;

CREATE TABLE TBLEDR
(EDRID       NUMBER(20,0),
SESSIONID   VARCHAR2(100),
USERNAME    VARCHAR2(264),
EXPIREDRULE VARCHAR2(100),
NEWRULE     VARCHAR2(100),
OLDBW       VARCHAR2(32),
NEWBW       VARCHAR2(32),
PARAM1      VARCHAR2(64),
PARAM2      VARCHAR2(64),
CREATEDATE  TIMESTAMP (6),
LASTMODIFIEDDATE TIMESTAMP (6),
TIMESTAMP        TIMESTAMP (6),
V_CREATEDATE     NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR(CREATEDATE,'DD'))) VIRTUAL
)
PARTITION BY LIST (V_CREATEDATE)
(
PARTITION p_1 VALUES  (1)   tablespace  tbs_edr_1,
PARTITION p_2 VALUES  (2)   tablespace  tbs_edr_2,
PARTITION p_3 VALUES  (3)   tablespace  tbs_edr_3,
PARTITION p_4 VALUES  (4)   tablespace  tbs_edr_4,
PARTITION p_5 VALUES  (5)   tablespace  tbs_edr_5,
PARTITION p_6 VALUES  (6)   tablespace  tbs_edr_6,
PARTITION p_7 VALUES  (7)   tablespace  tbs_edr_7,
PARTITION p_8 VALUES  (8)   tablespace  tbs_edr_8,
PARTITION p_9 VALUES  (9)   tablespace  tbs_edr_1,
PARTITION p_10 VALUES (10)   tablespace tbs_edr_2,
PARTITION p_11 VALUES (11)   tablespace tbs_edr_3,
PARTITION p_12 VALUES (12)   tablespace tbs_edr_4,
PARTITION p_13 VALUES (13)   tablespace tbs_edr_5,
PARTITION p_14 VALUES (14)   tablespace tbs_edr_6,
PARTITION p_15 VALUES (15)   tablespace tbs_edr_7,
PARTITION p_16 VALUES (16)   tablespace tbs_edr_8,
PARTITION p_17 VALUES (17)   tablespace tbs_edr_1,
PARTITION p_18 VALUES (18)   tablespace tbs_edr_2,
PARTITION p_19 VALUES (19)   tablespace tbs_edr_3,
PARTITION p_20 VALUES (20)   tablespace tbs_edr_4,
PARTITION p_21 VALUES (21)   tablespace tbs_edr_5,
PARTITION p_22 VALUES (22)   tablespace tbs_edr_6,
PARTITION p_23 VALUES (23)   tablespace tbs_edr_7,
PARTITION p_24 VALUES (24)   tablespace tbs_edr_8,
PARTITION p_25 VALUES (25)   tablespace tbs_edr_1,
PARTITION p_26 VALUES (26)   tablespace tbs_edr_2,
PARTITION p_27 VALUES (27)   tablespace tbs_edr_3,
PARTITION p_28 VALUES (28)   tablespace tbs_edr_4,
PARTITION p_29 VALUES (29)   tablespace tbs_edr_5,
PARTITION p_30 VALUES (30)   tablespace tbs_edr_6,
PARTITION p_31 VALUES (31)   tablespace tbs_edr_7
);

ALTER TABLE TBLEDR
ADD CONSTRAINT "PK_PEDR" PRIMARY KEY ("EDRID");

ALTER TABLE TBLEDR INITRANS 100;

ALTER INDEX PK_PEDR INITRANS 100;

ALTER TABLE TBLEDR ENABLE ROW MOVEMENT;
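Once statistics have been gathered, the spread of rows across the 31 day-of-month partitions can be checked with:

SELECT partition_name, tablespace_name, num_rows
  FROM user_tab_partitions
 WHERE table_name = 'TBLEDR'
 ORDER BY partition_position;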

--Create a procedure to TRUNCATE PARTITIONs that are older than 15 days
create or replace PROCEDURE PROC_TBLEDR_MAINTAINANCE
AS
  p_name varchar2(100);
BEGIN

  /*
  EliteCSM NetVertex Module
  EliteCore Technologies Pvt. Ltd.
  */

  FOR REC IN (SELECT DISTINCT 'p_' || TO_NUMBER(TO_CHAR(CREATEDATE, 'DD')) p_name
                FROM TBLEDR
               WHERE CREATEDATE + 15 < SYSDATE)
  LOOP

    EXECUTE IMMEDIATE 'ALTER TABLE TBLEDR TRUNCATE PARTITION ' || REC.p_name || ' UPDATE INDEXES';

    DBMS_OUTPUT.PUT_LINE ('ALTER TABLE TBLEDR TRUNCATE PARTITION ' || REC.p_name || ' UPDATE INDEXES; -------> successfully completed');

  END LOOP;

END;
/

-- Run on the 14th and 28th of every month at 06:33:
BEGIN
  -- Job defined entirely by the CREATE JOB procedure.
  DBMS_SCHEDULER.create_job (
    job_name        => 'JOB_TBLEDR_MAINTAINANCE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN PROC_TBLEDR_MAINTAINANCE(); END;',
    START_DATE      => SYSTIMESTAMP,
    repeat_interval => 'freq=monthly;bymonthday=14,28;byhour=6;byminute=33;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job defined entirely by the CREATE JOB procedure.');
END;
/

begin
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => '"NETVERTEXP"',
    tabname          => '"TBLEDR"',
    estimate_percent => 1
  );
end;
/

🙂

😦 Data was being inserted into a single tablespace only, which limits insert performance. To take advantage of the SAN storage, specify multiple tablespaces so that the partitions (and their I/O) are spread across them.
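As already done for TBLCDR above, the STORE IN clause spreads automatically created interval partitions round-robin across several tablespaces. A hedged sketch (t_demo is a hypothetical table):

CREATE TABLE t_demo (create_date DATE)
  PARTITION BY RANGE (create_date)
  INTERVAL (NUMTODSINTERVAL(1,'day'))
  STORE IN (tbs_cdr_1, tbs_cdr_2, tbs_cdr_3)
  (PARTITION p_first VALUES LESS THAN (DATE '2014-01-01'));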

Data Management of Table MAXVALUE partition

--Data verification on PRD

SELECT MAX(CREATE_DATE) FROM TBLEMPSTATISTICS PARTITION (P2013_10);

SELECT MAX(CREATE_DATE) FROM TBLEMPSTATISTICS PARTITION (PMAXDATE);

SQL>  set colsep "|"
SQL>  set linesize 32767
SQL>  set pagesize 10000
SQL>  COLUMN table_name     FORMAT A20
SQL>  COLUMN partition_name FORMAT A20
SQL>  COLUMN high_value     FORMAT 999
SQL>  COLUMN num_rows       FORMAT 999999999
SQL>
SQL>  SELECT table_name,
  2          partition_name,
  3          high_value,
  4          num_rows
  5   FROM   user_tab_partitions
  6   WHERE table_name = 'TBLEMPSTATISTICS'
  7   ORDER BY table_name, partition_name;

TABLE_NAME          |PARTITION_NAME      |HIGH_VALUE                     |  NUM_ROWS
--------------------|--------------------|-------------------------------|----------
TBLEMPSTATISTICS    |P2013_10            |TIMESTAMP' 2013-11-01 00:00:00'|   8308323
TBLEMPSTATISTICS    |P2013_11            |TIMESTAMP' 2013-12-01 00:00:00'|
TBLEMPSTATISTICS    |P2013_12            |TIMESTAMP' 2014-01-01 00:00:00'|
TBLEMPSTATISTICS    |P2013_5             |TIMESTAMP' 2013-06-01 00:00:00'|   6116188
TBLEMPSTATISTICS    |P2013_6             |TIMESTAMP' 2013-07-01 00:00:00'|  23133572
TBLEMPSTATISTICS    |P2013_7             |TIMESTAMP' 2013-08-01 00:00:00'|   8822588
TBLEMPSTATISTICS    |P2013_8             |TIMESTAMP' 2013-09-01 00:00:00'|   9575553
TBLEMPSTATISTICS    |P2013_9             |TIMESTAMP' 2013-10-01 00:00:00'|
TBLEMPSTATISTICS    |PMAXDATE            |MAXVALUE                       |   4100000

--Temporarily move the November data to a holding table

CREATE TABLE TBLEMPSTATISTICS_P2013_11
AS
SELECT * FROM TBLEMPSTATISTICS PARTITION (PMAXDATE);

--Drop the MAXVALUE partition

ALTER TABLE TBLEMPSTATISTICS DROP PARTITION PMAXDATE UPDATE INDEXES;

--Add new partitions for the next 12 months

ALTER TABLE TBLEMPSTATISTICS ADD PARTITION P2013_11 VALUES LESS THAN (TO_DATE('2013-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2013_11;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2013_12  VALUES LESS THAN (TO_DATE('2014-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2013_12;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_01  VALUES LESS THAN (TO_DATE('2014-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_01;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_02  VALUES LESS THAN (TO_DATE('2014-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_02;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_03  VALUES LESS THAN (TO_DATE('2014-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_03;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_04  VALUES LESS THAN (TO_DATE('2014-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_04;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_05  VALUES LESS THAN (TO_DATE('2014-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_05;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_06  VALUES LESS THAN (TO_DATE('2014-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_06;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_07  VALUES LESS THAN (TO_DATE('2014-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_07;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_08  VALUES LESS THAN (TO_DATE('2014-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_08;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_09  VALUES LESS THAN (TO_DATE('2014-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_09;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION P2014_10  VALUES LESS THAN (TO_DATE('2014-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'))
TABLESPACE tbsuserstatistics2014_10;

ALTER TABLE TBLEMPSTATISTICS  ADD PARTITION PMAXDATE  VALUES LESS THAN (MAXVALUE) TABLESPACE tbsuserstatisticsmaxdate;

--OUTPUT ================

SQL>  set colsep "|"
SQL>  set linesize 32767
SQL>  set pagesize 10000
SQL>  COLUMN table_name     FORMAT A20
SQL>  COLUMN partition_name FORMAT A20
SQL>  COLUMN high_value     FORMAT 999
SQL>  COLUMN num_rows       FORMAT 999999999
SQL>
SQL>  SELECT table_name,
  2          partition_name,
  3          high_value,
  4          num_rows
  5   FROM   user_tab_partitions
  6   WHERE table_name = 'TBLEMPSTATISTICS'
  7   ORDER BY table_name, partition_name;

TABLE_NAME          |PARTITION_NAME      |HIGH_VALUE                                                                      |  NUM_ROWS
--------------------|--------------------|--------------------------------------------------------------------------------|----------
TBLEMPSTATISTICS   |P2013_10            |TIMESTAMP' 2013-11-01 00:00:00'                                                 |   8308323
TBLEMPSTATISTICS   |P2013_11            |TIMESTAMP' 2013-12-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2013_12            |TIMESTAMP' 2014-01-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2013_5             |TIMESTAMP' 2013-06-01 00:00:00'                                                 |   6116188
TBLEMPSTATISTICS   |P2013_6             |TIMESTAMP' 2013-07-01 00:00:00'                                                 |  23133572
TBLEMPSTATISTICS   |P2013_7             |TIMESTAMP' 2013-08-01 00:00:00'                                                 |   8822588
TBLEMPSTATISTICS   |P2013_8             |TIMESTAMP' 2013-09-01 00:00:00'                                                 |   9575553
TBLEMPSTATISTICS   |P2013_9             |TIMESTAMP' 2013-10-01 00:00:00'                                                 |   8116638
TBLEMPSTATISTICS   |P2014_01            |TIMESTAMP' 2014-02-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_02            |TIMESTAMP' 2014-03-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_03            |TIMESTAMP' 2014-04-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_04            |TIMESTAMP' 2014-05-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_05            |TIMESTAMP' 2014-06-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_06            |TIMESTAMP' 2014-07-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_07            |TIMESTAMP' 2014-08-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_08            |TIMESTAMP' 2014-09-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_09            |TIMESTAMP' 2014-10-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |P2014_10            |TIMESTAMP' 2014-11-01 00:00:00'                                                 |
TBLEMPSTATISTICS   |PMAXDATE            |MAXVALUE

--Load the data from 1-14 November back into the actual table

INSERT INTO TBLEMPSTATISTICS
SELECT * FROM TBLEMPSTATISTICS_P2013_11;

--Clean up the temporary table

drop table TBLEMPSTATISTICS_P2013_11;

--Check for INVALID objects

SELECT OBJECT_NAME FROM USER_OBJECTS
WHERE STATUS != 'VALID';
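If anything shows up as invalid after the rename and rebuild, one option is to recompile the schema (a sketch):

EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => USER, compile_all => FALSE);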

--Cross-check the PRD DDL

set long 10000
set pagesize 0
select dbms_metadata.get_ddl('TABLE','TBLEMPSTATISTICS') from dual;
select dbms_metadata.get_dependent_ddl('INDEX','TBLEMPSTATISTICS') from dual;

exposure on VLDB 🙂