Configure the Database Environment to Support Optimal Data Access Performance

1. Documentation in Tahiti -> Masters Book List -> Performance Tuning Guide -> 8 I/O Configuration and Design

Documentation in Tahiti -> Masters Book List -> Database Installation Guide for Linux -> D Optimal Flexible Architecture

2. "Configure the database environment to support optimal data access performance." After going back and forth over what is actually required at this point, it still seems a very vague description of a goal, so I will use it as a catch-all drawer for my notes related to data access (I/O).

3. The DB_BLOCK_SIZE and DB_FILE_MULTIBLOCK_READ_COUNT parameters

We have already spoken about DB_BLOCK_SIZE, but we return to it because of its importance. I like the guidelines given in the documentation (http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#CHDEFEGC). I summarize them here:

Small rows + random access => small block size (DB_BLOCK_SIZE = 8192)
Small rows + sequential access => large block size (DB_BLOCK_SIZE = 16384 or 32768)
Small rows + random and sequential access => large block size (DB_BLOCK_SIZE = 16384 or 32768)
Large rows (e.g., with LOB objects) => large block size (DB_BLOCK_SIZE = 16384 or 32768)

This is merely a recommendation, although it works well; in the end it depends on the specific application(s) using the database. Remember that we can create the database with a default block size (the one that most benefits the predominant access pattern) and then add several tablespaces with a different block size for the data sets that benefit from it.

Let's take an example. We have a data warehouse (DW) where all of a company's sales are recorded. The database is used to generate some large reports. From various tests in the development environment we have seen that the optimal default block size is 16K (16384). We have a star schema (STAR SCHEMA; we will see it in the "Data Management" section) with one large fact table (FACT TABLE) and several small related tables, the dimension tables (DIMENSION TABLES). Because our reports traverse the whole FACT TABLE, we want each read to bring as much data as possible into memory, so we create one or more 32K tablespaces to host it. The dimension tables, on the other hand, need only small reads to be quick, so we can place them in an 8K tablespace. Summarizing the example (see the creation sketch after the summary):

Default block size for the DW database => DB_BLOCK_SIZE = 16K
One or more tablespaces for the FACT TABLE => BLOCKSIZE 32K
One or more tablespaces for the DIMENSION TABLES => BLOCKSIZE 8K
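Before a tablespace with a non-default block size can be created, a buffer cache for that block size must be configured. A minimal sketch of the idea, assuming our 16K default block size (the tablespace names, datafile paths and cache sizes are illustrative, not taken from our environment):

-- Reserve buffer caches for the non-default block sizes
ALTER SYSTEM SET DB_32K_CACHE_SIZE = 64M SCOPE = BOTH;
ALTER SYSTEM SET DB_8K_CACHE_SIZE = 32M SCOPE = BOTH;
-- Create the tablespaces with the desired block sizes
CREATE TABLESPACE FACTS_32K DATAFILE '/u02/app/oracle/oradata/OCM/facts01.dbf' SIZE 1G BLOCKSIZE 32K;
CREATE TABLESPACE DIMS_8K DATAFILE '/u02/app/oracle/oradata/OCM/dims01.dbf' SIZE 256M BLOCKSIZE 8K;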

The DB_FILE_MULTIBLOCK_READ_COUNT parameter specifies the maximum number of blocks that can be read in a single I/O operation during a sequential scan (FULL TABLE SCAN or INDEX FAST FULL SCAN). Put another way, it is the number of contiguous blocks the database can read in one operation. A high value lowers the optimizer's cost for full scans and therefore favors plans based on them, so its value has a major effect on the way the database accesses disk. When not set explicitly, the parameter is calculated as follows:

DB_FILE_MULTIBLOCK_READ_COUNT = min(1048576 / DB_BLOCK_SIZE, DB_CACHE_SIZE / (SESSIONS * DB_BLOCK_SIZE))
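As a quick sanity check of the formula with made-up values (DB_BLOCK_SIZE = 8192, DB_CACHE_SIZE = 1 GB, SESSIONS = 200):

DB_FILE_MULTIBLOCK_READ_COUNT = min(1048576 / 8192, 1073741824 / (200 * 8192)) = min(128, 655) = 128

That is, sequential reads of up to 1 MB (128 blocks of 8K) per I/O operation.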

Tom Kyte, in his book "Effective Oracle by Design", does an interesting exercise with this parameter. Let's do something similar:

-- We set DB_FILE_MULTIBLOCK_READ_COUNT to 1 and get the cost of a query against the table SEG$
show parameter db_file_multiblock_read_count
SET AUTOTRACE TRACEONLY EXPLAIN
ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT = 1;
SELECT * FROM SEG$;

-- Now we modify the parameter value to 32 and get the execution plan for the same query again
-- We'll see how the cost (Cost %CPU) has been reduced considerably
-- The higher the value of this parameter, the lower the cost of a FULL SCAN operation
ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT = 32;
SELECT * FROM SEG$;

-- Now let's use the "trick" Tom Kyte employs to find the real maximum number of blocks read from disk

-- First we disable AUTOTRACE
SET AUTOTRACE OFF

-- We create a tablespace where we will put a big table (BIGTAB)
CREATE TABLESPACE TEST DATAFILE '/u01/app/oracle/oradata/OCM/test01.dbf' SIZE 512M;
-- We create the table BIGTAB (using a script by Tom Kyte)
CREATE TABLE BIGTAB TABLESPACE TEST AS SELECT ROWNUM ID, A.* FROM ALL_OBJECTS A WHERE 1 = 0;
-- We disable LOGGING mode on the table in order to avoid unnecessary REDO
ALTER TABLE BIGTAB NOLOGGING;
-- We populate the table
DECLARE
  L_CNT  NUMBER;
  L_ROWS NUMBER := 1000000;
BEGIN
  INSERT /*+ APPEND */ INTO BIGTAB SELECT ROWNUM, A.* FROM ALL_OBJECTS A;
  L_CNT := SQL%ROWCOUNT;
  COMMIT;
  WHILE (L_CNT < L_ROWS)
  LOOP
    INSERT /*+ APPEND */ INTO BIGTAB
    SELECT ROWNUM + L_CNT,
           OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID, OBJECT_TYPE, CREATED,
           LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
    FROM BIGTAB
    WHERE ROWNUM <= L_ROWS - L_CNT;
    L_CNT := L_CNT + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
END;
/

-- We get the trace file of our session
SELECT P.TRACEFILE FROM V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.SID = SYS_CONTEXT('USERENV', 'SID');

-- We activate tracing for our session and raise the DB_FILE_MULTIBLOCK_READ_COUNT parameter to a very high value
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT = 1000000;
SET AUTOTRACE TRACEONLY

-- We launch the query again, forcing a FULL SCAN
SELECT /*+ NOPARALLEL(BIGTAB) FULL(BIGTAB) */ * FROM BIGTAB;

# We look for the 'blocks' value of the 'db file scattered read' wait events
grep scattered /u01/app/oracle/diag/rdbms/ocm/OCM/trace/OCM_ora_18573.trc
# Many lines with the event 'db file scattered read' will come out:
# => WAIT #2: nam='db file scattered read' ela= 158 file#=7 block#=42163 blocks=8 obj#=74006 tim=1356020746340933
# => WAIT #2: nam='db file scattered read' ela= 156 file#=7 block#=42433 blocks=8 obj#=74006 tim=1356020746341211

-- As we can see, the maximum read on our VM is 8 blocks * 8192 bytes = 64KB per I/O operation
-- So the appropriate setting for our environment would be DB_FILE_MULTIBLOCK_READ_COUNT = 8

-- We clean up the environment
DROP TABLESPACE TEST INCLUDING CONTENTS AND DATAFILES;
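-- We also switch off the 10046 trace and AUTOTRACE that we enabled earlier
ALTER SESSION SET EVENTS '10046 trace name context off';
SET AUTOTRACE OFF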

4. At this point we have already done a lot of work to optimize disk access. We have based our layout on the Optimal Flexible Architecture (OFA), which defines a hierarchy of filesystems and directories for the database installation, datafiles, FRA, and so on. This distribution spreads the load appropriately across different disks and lets multiple Oracle installations coexist in an orderly fashion. Below is the directory hierarchy we have followed (as you will see, we have not used '/u03' and '/u04', but it would be advisable in this case); a quick verification query follows the table:
Directory                                       Description
/                                               Root directory
/u01/                                           Mount point 1 for user data
/u01/app/                                       Tree for application software
/u01/app/oracle/                                ORACLE_BASE directory
/u01/app/oracle/admin/                          Administration file tree for the databases
/u01/app/oracle/admin/OCM/                      Administration tree for the OCM database
/u01/app/oracle/doc/                            Online documentation (we have not installed it)
/u01/app/oracle/fast_recovery_area/             File tree for recovery
/u01/app/oracle/fast_recovery_area/OCM/         Recovery files for the OCM database
/u02/app/oracle/oradata/                        Directory for Oracle data
/u03/app/oracle/oradata/                        Directory for Oracle data (recommended if you do not have striping or mirroring)
/u04/app/oracle/oradata/                        Directory for Oracle data (recommended if you do not have striping or mirroring)
/u01/app/oracle/product/                        Tree to host the Oracle installations
/u01/app/oracle/product/11.2.0/dbhome_1         ORACLE_HOME for the Oracle database
/u01/app/oracle/product/11.2.0/grid             ORACLE_HOME for Grid Infrastructure for a Standalone Server (we will install it later)
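A quick way to verify that our files actually follow this layout is to ask the database where they live; a simple check:

COL NAME FORMAT A60
SELECT NAME FROM V$DATAFILE
UNION ALL
SELECT MEMBER FROM V$LOGFILE
UNION ALL
SELECT NAME FROM V$CONTROLFILE;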

5. Since we are using filesystems to host our datafiles, the FILESYSTEMIO_OPTIONS parameter is relevant to us. It has four possible values (we check its current value right after this list):

DIRECTIO => Enables direct I/O only (bypassing the OS cache)
ASYNCH   => Enables asynchronous operations only (reduces certain bottlenecks)
SETALL   => Enables both asynchronous operations and direct I/O
NONE     => No option enabled
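We can check the current value of the parameter before changing it:

show parameter filesystemio_options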

-- We enable the SETALL option and restart the instance
ALTER SYSTEM SET FILESYSTEMIO_OPTIONS = SETALL SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP

-- We check that asynchronous I/O operations are enabled
COL NAME FORMAT A50
SELECT F.NAME, I.ASYNCH_IO FROM V$DATAFILE F, V$IOSTAT_FILE I WHERE F.FILE# = I.FILE_NO AND I.FILETYPE_NAME = 'Data File';

-- Another way to check is to verify that the processes use the OS AIO (Asynchronous I/O) functions
-- We will trace the DBWR process's system calls looking for the AIO functions
SELECT SPID FROM V$PROCESS WHERE PNAME = 'DBW0';

-- In another session we execute the following OS command (with the SPID we obtained earlier)
-- /usr/bin/strace -p 22062
-- The DBWR process should be using the asynchronous functions io_submit and io_getevents (instead of pread and pwrite)
-- io_submit(46919427829760, 29, 0x2aac47ca77f8 {{0, 1, 0, ... 20}
-- io_getevents(46919427829760, 2, 128, {{0x2aac47ca67c0 ...
-- If nothing shows up because the database is very idle, we can force a CHECKPOINT with the following statement:
-- ALTER SYSTEM CHECKPOINT;
-- Metalink also suggests consulting the slabinfo file to check that there are structures in use by AIO
-- cat /proc/slabinfo | grep kio   # e.g.: "kioctx 26 36 320 12 1" and "kiocb 3 15 256 15 1"

-- To check whether direct I/O is being used, we restart the database and trace the DBWR process again
SHUTDOWN IMMEDIATE
STARTUP MOUNT
SELECT SPID FROM V$PROCESS WHERE PNAME = 'DBW0';

-- In another session we execute the following OS command (with the SPID we obtained earlier)
-- /usr/bin/strace -p 22949

-- We open the database
ALTER DATABASE OPEN;

-- If we review the trace output, we will see the O_DIRECT flag, indicating that the file has been opened in direct I/O mode
-- open("/u01/app/oracle/oradata/OCM/undotbs02.dbf", O_RDWR|O_SYNC|O_DIRECT) = 24

Important note! With ASM we do not have to worry about the FILESYSTEMIO_OPTIONS parameter (it applies only to filesystems). ASM gives us asynchronous I/O and direct I/O by default.

6. There is one specific topic that fits this objective: answering the question, what is the optimal size for the ONLINE REDO log files? Each database has a different transaction load, and the REDO volume also varies from one database to another. The general recommendation is that at least 20 minutes should pass between each LOGFILE SWITCH. We can go to the database ALERT log and see how often the switches occur. If it is less than 20 minutes, we should increase the size of the redo logs. In heavily loaded databases it is easy to see 2GB redo logs.

# We will review our alert log, taking the opportunity to use the ADRCI command
ADRCI

# We open the alert log of our database directly
show alert

# Here we can see how often a redo switch ("Thread 1 advanced to log sequence 94") occurs
# The time between switches should not be less than 20 minutes
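Instead of eyeballing the alert log, we can also measure the switch frequency from the dictionary; a small sketch that counts switches per hour using V$LOG_HISTORY (more than 3 in a single hour means less than 20 minutes between switches):

SELECT TO_CHAR(FIRST_TIME, 'YYYY-MM-DD HH24') SWITCH_HOUR, COUNT(*) SWITCHES
FROM V$LOG_HISTORY
GROUP BY TO_CHAR(FIRST_TIME, 'YYYY-MM-DD HH24')
ORDER BY 1;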

There is another approach: setting the FAST_START_MTTR_TARGET parameter. This parameter specifies the desired recovery time (CRASH RECOVERY) when starting the database after it was not shut down cleanly. When we set it, the database starts performing regular incremental checkpoints so that recovery after an instance crash does not exceed the time set in FAST_START_MTTR_TARGET. The aim is to prioritize availability over performance; that is, we prefer the database to be available again as quickly as possible at the cost of some checkpoint overhead. In the end, we look for a balanced value.

When we set the FAST_START_MTTR_TARGET parameter, the MTTR Advisor is activated, and among the information it gives us is the optimal size (which in practice is the minimum) for the redo log files. More information about the MTTR Advisor at http://docs.oracle.com/cd/E11882_01/server.112/e16638/instance_tune.htm#i1018945.

-- The FAST_START_MTTR_TARGET parameter can take a value between 1 and 3600 (seconds)
-- Suppose our database is highly critical and has to recover as quickly as possible after a failure
-- Let's find the lowest achievable value with the following procedure
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 1 SCOPE = BOTH;
SHUTDOWN IMMEDIATE
STARTUP

-- The TARGET_MTTR column indicates the minimum time needed to perform CRASH RECOVERY
-- The ESTIMATED_MTTR column tells us how long recovery would take if the database crashed in its current state
SELECT TARGET_MTTR, ESTIMATED_MTTR FROM V$INSTANCE_RECOVERY;

-- Since we are only interested in availability, we could set FAST_START_MTTR_TARGET to the TARGET_MTTR value
-- We can analyze the value that suits us best by querying the V$MTTR_TARGET_ADVICE view
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 9 SCOPE = BOTH;
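-- A sketch of such a query (column names as in the 11.2 reference; the advisor collects data only while STATISTICS_LEVEL is TYPICAL or ALL):
SELECT MTTR_TARGET_FOR_ESTIMATE, ESTD_TOTAL_IOS, ESTD_TOTAL_IO_FACTOR FROM V$MTTR_TARGET_ADVICE;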

-- We can also obtain the recommended size (in MB) for the redo log files
SELECT OPTIMAL_LOGFILE_SIZE FROM V$INSTANCE_RECOVERY;
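-- If the recommended size is bigger than our current redo logs, we would act on it by adding
-- new, larger groups and dropping the old ones once they become INACTIVE (check V$LOG.STATUS);
-- a hypothetical sketch (path and size are examples only):
-- ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/app/oracle/oradata/OCM/redo04.log') SIZE 512M;
-- ALTER SYSTEM SWITCH LOGFILE;
-- ALTER DATABASE DROP LOGFILE GROUP 1;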

-- We clean up the environment and disable the MTTR Advisor
ALTER SYSTEM RESET FAST_START_MTTR_TARGET SCOPE = SPFILE;
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 0 SCOPE = MEMORY;
