Exadata Storage Expansion

I recently got a chance to work on this DBA task. In this article, I will summarize and walk through the procedure for adding a new cell to an existing Exadata Database Machine.

Most of us know the capabilities that an Exadata Database Machine delivers. It’s a known fact that Exadata comes in fixed rack sizes:

    • 1/8 rack (2 DB nodes, 3 cells)
    • quarter rack (2 DB nodes, 3 cells)
    • half rack (4 DB nodes, 7 cells)
    • full rack (8 DB nodes, 14 cells)

When you want to expand capacity, the expansion traditionally had to happen in fixed steps as well: 1/8 to quarter, quarter to
half, and half to full.

 

With the Exadata X5 elastic configuration, you can also size the rack to your needs, extending capacity
by adding any number of DB servers, storage servers, or a combination of both, up to the maximum allowed capacity
of the rack.

Preparing to Extend Exadata Database Machine


[0] Validate the environment
Before starting the activity, collect an Exachk report and validate the environment.
Also check for any open cell hardware alerts:

dcli -g /root/cell_group -l root "cellcli -e list alerthistory where endTime=null and alertShortName=Hardware and alertType=stateful and severity=critical"

[1] Ensure the hardware is placed in the rack and all necessary network and cabling requirements are completed.
(Two IP addresses from the management network are required for the new cell.)

[2] Re-image or upgrade of cell image

2.1 Extract the imageinfo from one of the existing cell servers.
2.2 Log in to the new cell through ILOM, connect to the console as the root user, and get the imageinfo.
2.3 If the image version on the new cell does not match the existing image version, either
download the exact image version and re-image the new cell, or upgrade the image on the existing servers.

Review MOS Doc ID 2151671.1 if you want to re-image the new cell.

[3] Add the IP addresses acquired for the new cell to the /etc/oracle/cell/network-config/cellip.ora file on each DB node.

Start by backing up the file on the first DB server in the cluster:

cd /etc/oracle/cell/network-config
cp cellip.ora cellip.ora.orig
cp cellip.ora cellip.ora-bak
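The backup alone does not register the new cell; you still need to append its entry to cellip.ora and copy the updated file to every DB node. A rough sketch of the edit (the IP addresses below are placeholders, not real ones):

```python
# Sketch: append a cellip.ora-style entry for the new cell's two management IPs.
# All IPs here are placeholders; on a real system you would edit
# /etc/oracle/cell/network-config/cellip.ora on every DB node.

def add_cell(cellip_text: str, *ips: str) -> str:
    """Return the cellip.ora content with a line for the new cell appended."""
    entry = 'cell="{}"'.format(";".join(ips))
    return cellip_text.rstrip("\n") + "\n" + entry + "\n"

existing = 'cell="10.20.14.11"\ncell="10.20.14.12"\ncell="10.20.14.13"\n'
updated = add_cell(existing, "10.20.14.90", "10.20.14.91")
print(updated)
```

On the real system you would make this edit in place and then push the updated file to the remaining DB nodes (for example with dcli -f).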

[4] If ASR alerting was set up on the existing storage cells, configure ASR alerting for the cell being added.

List the cell attributes required for configuring cell ASR alerting.
Run the following command from any existing storage grid cell:

CellCLI> list cell attributes snmpsubscriber

Apply the same SNMP values to the new cell by running the following command as the celladmin user,
for example:

CellCLI> alter cell snmpSubscriber=((host='10.20.14.21',port=162,community=public))

[5] Configure cell alerting for the cell being added.

List the cell attributes required for configuring cell alerting.
Run the following command from any existing storage grid cell:

CellCLI> list cell attributes notificationMethod,notificationPolicy,
smtpToAddr,smtpFrom,smtpFromAddr,smtpServer,smtpUseSSL,smtpPort

Apply the same values to the new cell by running the following command as the celladmin user,
for example:

CellCLI> alter cell notificationmethod='mail,snmp',notificationpolicy='critical,warning,clear',
smtptoaddr= 'dba@email.com',smtpfrom='Exadata',smtpfromaddr='dba@email.com',smtpserver='10.20.14.21',
smtpusessl=FALSE,smtpport=25
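If you script this step, the attribute listing from an existing cell can be replayed into a single ALTER CELL statement rather than typed by hand. A minimal sketch (the values below are the illustrative ones from this example, not defaults):

```python
# Sketch: render an ALTER CELL statement from notification attributes queried
# off an existing cell, so the same settings can be replayed on the new cell.
# All values below are illustrative placeholders.

attrs = {
    "notificationMethod": "'mail,snmp'",
    "notificationPolicy": "'critical,warning,clear'",
    "smtpToAddr": "'dba@email.com'",
    "smtpFrom": "'Exadata'",
    "smtpFromAddr": "'dba@email.com'",
    "smtpServer": "'10.20.14.21'",
    "smtpUseSSL": "FALSE",
    "smtpPort": "25",
}

# Join the attribute/value pairs into one CellCLI statement.
stmt = "alter cell " + ",".join(f"{k}={v}" for k, v in attrs.items())
print(stmt)
```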

[6] Create cell disks on the cell being added

Log in to the cell as celladmin and run the following command:

CellCLI> create celldisk all

[7] Check that the flash log was created by default:

CellCLI> list flashlog

You should see the name of the flash log. It should look like cellnodename_FLASHLOG, and its status should be “normal”. If the flash log does not exist, create it using:

CellCLI> create flashlog all

[8] Check the current flash cache mode and compare it to the flash cache mode on existing cells:

CellCLI> list cell attributes flashcachemode

To change the flash cache mode to match the flash cache mode of existing cells, do the following:

1. If the flash cache exists and the cell is in WriteBack flash cache mode,
you must first flush the flash cache:

CellCLI> alter flashcache all flush

Wait for the command to return.

2. Drop the flash cache:

CellCLI> drop flashcache all

3. Change the flash cache mode:

CellCLI> alter cell flashCacheMode=writeback

The value of the flashCacheMode attribute is either writeback or writethrough.
The value must match the flash cache mode of the other storage cells in the cluster.

4. Create the flash cache:

CellCLI> create flashcache all

[9] Create grid disks on the cell being added.

Query the size and caching policy of the existing grid disks from an existing cell:

CellCLI> list griddisk attributes name,asmDiskGroupName,cachingpolicy,size,offset
  • For each disk group found by the command above, create grid disks on the new cell that is being added to the cluster.
  • Match the size and the caching policy of the existing grid disks for each disk group.
  • Create the grid disks in order of increasing offset to ensure the same layout and performance characteristics as the existing cells.

For example, the list griddisk command could return something like this (disk group, caching policy, size, offset):

DATAC1   default  5.6953125T         32M
DBFS_DG  default  33.796875G         7.1192474365234375T
RECOC1   none     1.42388916015625T  5.6953582763671875T

When creating the grid disks, begin with DATAC1, then RECOC1, and finally DBFS_DG, using the following commands:

CellCLI> create griddisk ALL HARDDISK PREFIX=DATAC1, size=5.6953125T, cachingpolicy='default',
comment="Cluster cluster-clux6 DR diskgroup DATAC1"

CellCLI> create griddisk ALL HARDDISK PREFIX=RECOC1, size=1.42388916015625T, cachingpolicy='none',
comment="Cluster cluster-clux6 DR diskgroup RECOC1"

CellCLI> create griddisk ALL HARDDISK PREFIX=DBFS_DG, size=33.796875G, cachingpolicy='default',
comment="Cluster cluster-clux6 DR diskgroup DBFS_DG"

CAUTION: Be sure to specify the EXACT size shown along with the unit (either T or G).
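The rule about increasing offsets can be scripted: parse the queried attributes, sort by offset, and emit the CREATE GRIDDISK commands in the right order. A rough sketch using the example values above (the unit conversion assumes CellCLI's M/G/T size suffixes):

```python
# Sketch: generate CREATE GRIDDISK commands ordered by increasing offset,
# using the example attribute values queried from an existing cell.

def to_bytes(s: str) -> float:
    """Convert a CellCLI size like '32M', '33.796875G', or '5.6953125T' to bytes."""
    units = {"M": 2**20, "G": 2**30, "T": 2**40}
    return float(s[:-1]) * units[s[-1]]

# (prefix, cachingpolicy, size, offset) as reported by "list griddisk"
rows = [
    ("DATAC1",  "default", "5.6953125T",        "32M"),
    ("DBFS_DG", "default", "33.796875G",        "7.1192474365234375T"),
    ("RECOC1",  "none",    "1.42388916015625T", "5.6953582763671875T"),
]

# Sort by offset so the new cell mirrors the existing layout.
commands = [
    f"create griddisk ALL HARDDISK PREFIX={p}, size={s}, cachingpolicy='{c}'"
    for p, c, s, o in sorted(rows, key=lambda r: to_bytes(r[3]))
]
print("\n".join(commands))
```

Sorting by offset reproduces the DATAC1, RECOC1, DBFS_DG order used above.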

[10] Verify the newly created grid disks are visible from the Oracle RAC nodes.
Log in to each Oracle RAC node and run the following command:

$GI_HOME/bin/kfod op=disks disks=all | grep cellName_being_added

This should list all the grid disks created above.

[11] Add the newly created grid disks to the respective existing ASM disk groups:

ALTER DISKGROUP disk_group_name ADD DISK 'comma_separated_disk_names';

The command above kicks off an ASM rebalance at the default power level.
Monitor the progress of the rebalance by querying gv$asm_operation:

SQL> select * from gv$asm_operation;

Once the rebalance completes, the addition of the cell to the Oracle RAC cluster is complete.
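If the new cell exposes many grid disks per disk group, building the comma-separated disk list by hand is error-prone; a small script can assemble the statement from the kfod output instead. A sketch with placeholder disk paths (the cell IP and disk names are illustrative):

```python
# Sketch: build the ALTER DISKGROUP ... ADD DISK statement from the grid-disk
# paths kfod reports for the new cell. The paths below are placeholders.

disks = [
    "o/10.20.14.90/DATAC1_CD_00_newcell",
    "o/10.20.14.90/DATAC1_CD_01_newcell",
]

# Quote each path and join them into a single ADD DISK clause.
stmt = "ALTER DISKGROUP DATAC1 ADD DISK " + ", ".join(f"'{d}'" for d in disks)
print(stmt)
```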

[12] Run the latest Exachk to ensure that the resulting configuration implements the latest best practices for Oracle Exadata.

Thanks to Oracle ACE Syed Jaffar Hussain for sharing his experience.

Thank you for visiting this blog 🙂
