
Technical Deep Dive: Adding X10 Storage Cells to an Existing Oracle Exadata Environment

  • Jason Beattie
  • Aug 4
  • 2 min read

Objective


This post details the end-to-end process of expanding an existing Oracle Exadata environment by integrating two additional X10 storage cells. The steps include validating system compatibility, creating grid disks, integrating with Oracle ASM, and verifying post-implementation health and capacity.


Note: Images and specific IP-related details have been masked for security reasons. Please reach out if you have any questions about this blog post.


System Overview


Environment Setup (Before Expansion):


  • Existing Storage Cells: Cell01, Cell02, Cell03

  • Database Nodes: DB01, DB02

  • ASM Disk Groups: DATA, DONOTUSE, RECOVER

  • Redundancy Mode: High


Storage Cells Added:


  • New Cell A

  • New Cell B


Note: All hostnames and IPs referenced in this blog are anonymized for confidentiality.


1. Initial Validation


Before adding the new cells, we validated the following:


  • Network Reachability: Ensured new storage cells were reachable from both DB nodes via public and private interfaces.


  • Image Consistency: Confirmed all storage servers were running matching image versions.


  • CLI Tools: Verified that CellCLI was operational on both new storage cells.
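
A minimal sketch of how these checks can be run from a DB node, assuming a cell_group file listing the two new cell hostnames (the filename is illustrative):

# Check reachability and image version on the new cells (run as root from a DB node).
dcli -g cell_group -l root "imageinfo -ver"

# Confirm CellCLI responds on each new cell.
dcli -g cell_group -l root "cellcli -e list cell detail"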


2. Grid Disk Provisioning


On each of the new cells, grid disks were created manually using CellCLI. Each physical disk was divided among the ASM disk groups according to defined sizing policies.


Sample Commands (Executed via CellCLI):

create griddisk DATA_CD_00_<cellname> celldisk=CD_00_<cellname>, size=8.111328125T
create griddisk DONOTUSE_CD_00_<cellname> celldisk=CD_00_<cellname>, size=2.77734375T
create griddisk RECOVER_CD_00_<cellname> celldisk=CD_00_<cellname>, size=1.4443359375T

This pattern was repeated across six celldisks per cell for both new storage servers.
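
Once created, the grid disks can be spot-checked on each cell. A quick verification sketch using standard CellCLI syntax:

cellcli -e list griddisk attributes name,size,status

Newly created grid disks should show a status of active before being presented to ASM.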


3. ASM Integration

With the grid disks created, the next step was to integrate them into ASM. First, we confirmed the new disks were visible as candidates using:

asmcmd lsdsk --candidate

Then, disks were added to each disk group:


DATA

ALTER DISKGROUP DATA ADD DISK 'o/<private_ip_set>/DATA_CD_00_<cellA>', ... 'o/<private_ip_set>/DATA_CD_05_<cellB>';


RECOVER

ALTER DISKGROUP RECOVER ADD DISK 'o/<private_ip_set>/RECOVER_CD_00_<cellA>', ... 'o/<private_ip_set>/RECOVER_CD_05_<cellB>';

DONOTUSE

ALTER DISKGROUP DONOTUSE ADD DISK 'o/<private_ip_set>/DONOTUSE_CD_00_<cellA>', ... 'o/<private_ip_set>/DONOTUSE_CD_05_<cellB>';

Each ALTER DISKGROUP statement triggered an automatic rebalance, handled internally by ASM.
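
Rebalance progress can be monitored from any ASM instance, and the rebalance power can be raised if a faster redistribution is acceptable. A sketch (the POWER value of 8 is illustrative, not what was used here):

-- Shows active rebalance operations and the estimated minutes remaining.
SELECT GROUP_NUMBER, OPERATION, STATE, POWER, EST_MINUTES FROM V$ASM_OPERATION;

-- Optionally increase rebalance power for the DATA disk group.
ALTER DISKGROUP DATA REBALANCE POWER 8;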


4. Post-Integration Checks


Once disks were added, we ran several validations:


ASM Usage Verification

SELECT NAME, TOTAL_MB/1024 AS TOTAL_GB, FREE_MB/1024 AS FREE_GB FROM V$ASM_DISKGROUP;
  • Verified all newly added disks were visible.

(Screenshots redacted.)
  • Confirmed that rebalance operations completed successfully.


  • Checked logs for errors (none found in ASM or alert logs).
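
For disk-level confirmation, a query along these lines can be run against the ASM instance (the column selection is illustrative):

-- Newly added disks should show HEADER_STATUS = MEMBER and MODE_STATUS = ONLINE.
SELECT GROUP_NUMBER, PATH, HEADER_STATUS, MODE_STATUS, STATE FROM V$ASM_DISK ORDER BY GROUP_NUMBER, PATH;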


5. Capacity Results

Disk Group    Capacity Before (TB)    Capacity After (TB)
DATA          13.59                   25.16
DONOTUSE      5.24                    8.75
RECOVER       2.72                    4.35

Total usable capacity increased substantially across all disk groups: roughly 85% for DATA (25.16 vs. 13.59 TB), 67% for DONOTUSE, and 60% for RECOVER.


6. Summary

Task                        Status
Image version check         ✅ Completed
Networking configuration    ✅ Successful
Grid disk creation          ✅ Completed
Disk integration into ASM   ✅ Rebalance successful
Capacity validation         ✅ Verified

Conclusion

The integration of two new X10 storage cells into the existing Oracle Exadata environment was completed successfully. With zero errors during validation and capacity gains of roughly 60% to 85% across the core disk groups, this expansion lays the foundation for enhanced performance and future scalability.
