Find Out Values Of Bind Variables Across Multiple Executions Of A SQL

In the case of skewed data distribution, different values of bind variables can result in different execution plans.
The various execution plans and the corresponding values of the bind variables can be determined both while the
statement is still in the shared pool and after it has been captured in AWR.

DBA_HIST_SQLTEXT
V$SQL_BIND_CAPTURE
DBA_HIST_SQLBIND
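
As an illustrative sketch (substitute the SQL_ID of the statement you are investigating), bind values still in the shared pool can be queried from V$SQL_BIND_CAPTURE, and historical values from DBA_HIST_SQLBIND:

SQL> select name, position, datatype_string, value_string, last_captured
       from v$sql_bind_capture
      where sql_id = '&sql_id'
      order by last_captured, position;

SQL> select snap_id, name, position, value_string
       from dba_hist_sqlbind
      where sql_id = '&sql_id'
      order by snap_id, position;

Note that bind capture is sampled rather than continuous, so not every execution's bind values will appear in these views.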

http://oracleinaction.com/find-out-values-of-bind-variables-across-multiple-executions-of-a-sql/

11g R2 Clusterware and ASM: which one comes up first, and which one depends on the other?

In Oracle 10g RAC and 11gR1 RAC, Oracle Clusterware and ASM are installed
in different Oracle homes, and the Clusterware has to be up before the ASM
instance can be started, because the ASM instance uses the Clusterware to
access the shared storage.
Oracle 11g R2 introduced the Grid Infrastructure home, which combines Oracle
Clusterware and ASM.
The OCR and voting disk of 11g R2 Clusterware can be stored in ASM.
So it seems that ASM needs the Clusterware up first to access the shared
storage, while the Clusterware needs ASM up first before it can access its
key data structures: the OCR and voting disk.
So which one needs to be up first, and which one has to wait for the other?
This looks like a chicken-or-the-egg problem.
Oracle's solution is to combine the Clusterware and ASM into a single Grid
Infrastructure home and to use a startup procedure with a complex sequence
that interleaves the different components of the Clusterware and the ASM
instance.
The clusterware startup command, $GI_HOME/bin/crsctl start crs, follows this
sequence to bring both the Clusterware and ASM online.
From the sequence and timestamps of the log messages, we get some understanding
of the startup order of the Clusterware and the ASM instance:
1)	CSSD and CTSSD are up before ASM.
2)	Voting disks used by CSSD are discovered by reading the headers of
the disks, not through ASM.
3)	Startup of the CRS service has to wait until the ASM instance is up and
the diskgroup for the OCR and voting disk is mounted.
4)	EVMD and ACFS are up after ASM starts.
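
One way to observe this dependency chain on a running cluster (an illustrative sketch; the exact resource list varies by version) is to list the lower-stack resources managed by OHASD:

[grid@host01 ~]$ crsctl stat res -t -init

The -init flag shows the infrastructure resources themselves (ora.cssd, ora.ctssd, ora.asm, ora.crsd, ora.evmd and so on) together with their current state, which reflects the startup ordering described above.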

https://kyuoracleblog.wordpress.com/2011/01/19/in-11g-r2-rac-clusterware-and-asm-which-one-is-up-first-and-which-one-is-depending-on-other/

gpnptool

In a cluster environment, the location of the SPfile for ASM is read from the GPnP profile.
[grid@host01 peer]$ gpnptool getpval -asm_spf
Warning: some command line parameters were defaulted. Resulting command line:
         /u01/app/11.2.0/grid/bin/gpnptool.bin getpval -asm_spf -p=profile.xml -o-
+DATA/cluster01/asmparameterfile/registry.253.793721441
The output of the command shows that the SPfile is on ASM in the DATA diskgroup. To find out the
location of the ASM disks, the following command is issued:
[root@host01 peer]# gpnptool getpval -asm_dis
ASM-Profile id="asm" DiscoveryString=""
The device headers of every device in the disk discovery string returned by the above command (configured, if at all, at initial ASM setup time) are scanned. Here the discovery string is blank because the ASM_DISKSTRING parameter has not been set; hence, the headers of all the ASM disks are scanned.
Below, I have shown the output only for the disk that contains the SPfile (spfflg is not null).
[root@host01 ~]#  kfed read /dev/sdb3 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                       16 ; 0x0f4: 0x00000010
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001
In the output above, we see that:
     the device /dev/sdb3 contains a copy of the ASM SPfile (spfflg=1);
     the ASM SPfile starts at a disk offset of 16 allocation units (spfile=16).
Considering the allocation unit size (kfdhdb.ausize = 1M), let’s dump the ASM spfile from the device:
[root@host01 ~]#  dd if=/dev/sdb3 of=spfileASM_Copy2.ora skip=16  bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.170611 seconds, 6.1 MB/s
[root@host01 ~]# strings spfileASM_Copy2.ora
+ASM1.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM2.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM3.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM3.asm_diskgroups='FRA'#Manual Mount
+ASM2.asm_diskgroups='FRA'#Manual Mount
+ASM1.asm_diskgroups='FRA'#Manual Mount
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Using the parameters in the SPfile, ASM is started.
Once ASM is up, the OCR is read by CRSD and the various resources on the node are started.
Each node reads the network information in the GPnP profile and, using GNS, negotiates an appropriate network identity for itself. Hence, nodes can be dynamically added or deleted.
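
For reference, the entire GPnP profile can be dumped as XML with the get verb (writing to standard output with -o-, as in the earlier getpval examples):

[grid@host01 peer]$ gpnptool get -o-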

http://oracleinaction.com/gpnp-profile/

Whenever the GPnP profile is not present on a node, it is automatically copied from the existing nodes by GPnPd as pending.xml. When the profile update completes, it is automatically renamed to profile.xml.

http://oracleinaction.com/lose-gpnp-profile/

How to Restore/Recover a Small Table in a Large Database

As a DBA, you will receive requests from developers or users indicating that they deleted some data from a small table in a large database a few hours prior. They will probably want you to recover the data as soon as possible, and it will likely be a critical production database. Flashback will not be enabled, and the recycle bin will have been purged. Restoring the full database using RMAN might take you over 10 hours, and you will need a spare server with large storage. It looks like it is going to be a difficult and time-consuming task.

In Oracle Database 12c, there is a method available which allows us to recover the table more efficiently, and at a lower cost. The method is to create a second database (often called a stub database) using the backup of the first database. In this situation, we restore the SYSTEM, SYSAUX, and UNDO tablespaces and the individual tablespaces that contain the data we want to restore. After the restore is complete, we take any tablespaces that we did not restore offline. We then apply the archived redo logs up to the point in time that we want to restore the table to. Having restored the database to the appropriate point in time, we use Oracle Data Pump to export the objects, and then import them into the original database, again using Oracle Data Pump. Oracle Database 12c introduces new functionality in RMAN that supports point-in-time restore of individual database tables and individual table partitions.
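
RMAN automates all of these steps under a single RECOVER TABLE command. A minimal sketch (the table name, target time, auxiliary destination, and remapped table name are placeholders):

RMAN> recover table scott.emp
        until time "to_date('2015-01-19 09:00:00','yyyy-mm-dd hh24:mi:ss')"
        auxiliary destination '/u01/aux'
        remap table scott.emp:emp_recovered;

RMAN creates the auxiliary (stub) instance under the auxiliary destination, performs the point-in-time restore and recovery there, runs the Data Pump export and import into the target database, and then removes the auxiliary instance.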

www.pythian.com/blog/12c-how-to-restore-recover-a-small-table-in-a-large-database/

USING 'DELAY' OPTION TO PROTECT AGAINST LOGICAL/PHYSICAL CORRUPTIONS

You may utilize the DELAY option (if you have multiple standby sites) to protect against physical/logical corruption propagating from your primary. For instance, your standby #1 may have no delay so that it can act as your disaster recovery standby database, while you implement a delay of minutes or hours on your standby #2 to allow recovery from a possible physical or logical corruption on your primary database. The delay is specified in minutes (5 in the example below).
SQL> alter database recover managed standby database delay 5 disconnect;
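
If you later need to apply the delayed redo immediately (for example, once you have confirmed the primary is clean), the delay can be overridden with the NODELAY clause:

SQL> alter database recover managed standby database nodelay disconnect;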

http://www.pafumi.net/Standby_Concepts_Configuration.html