Migrating a database to Exadata

Category: Oracle

Qualogy is supporting a large global insurance company with the transition from the current AIX based applications and databases to the new Exadata & Exalogic infrastructure. Included in this new architecture are ZFS Storage Appliances for backup and file share purposes.

The new architecture is highly available: any component failure is automatically compensated by a peer component. Disaster recovery techniques at the database and application layer ensure that a second datacenter can take over any functionality if the need arises. The new architecture is monitored with Oracle Enterprise Manager 13c Cloud Control, which will also be used for further automation and provisioning. The picture below shows the new architecture.

The first production database has just been migrated to the Exadata platform by the Qualogy Exadata Team*. The application still runs on AIX for the time being. In this blog, I will explain the database migration process and I will also share the first customer experience with their database running on Exadata.

*The Qualogy Exadata Team consisted of Andrei Ilie, Massimiliano Dolphi and Rob Lasonder

Phase 1: Preparation Phase

Oracle delivered the Exadata, Exalogic and ZFS Storage appliances in the datacenter with a basic configuration. Although the appliances were up and running after Oracle's delivery, several configuration steps were still required to fully integrate them into the customer's IT infrastructure. Qualogy performed this task using customized protocols, based on Oracle and Qualogy best practices. During this phase the Exadata machine was also connected to the ZFS Storage Appliance for backup purposes. All appliances were integrated into the Oracle Enterprise Manager 13c Cloud Control environment and monitoring templates were applied. Oracle Enterprise Manager is intended to play a central role in further automation, provisioning and lifecycle management of the new environment. During this phase the customer's IT staff was also trained by Qualogy on the new Exadata and Exalogic infrastructure.

Phase 2: Migration Phase

During the migration phase the database was migrated to Exadata, monitoring of the database was enabled, and database backup and recovery were configured and tested. During this time the performance of the database and the application was also examined and compared with the old situation.

The best migration method depends on a number of factors, such as the allowed downtime and the database size, version and current platform. In this case the source platform was AIX, and two factors were of particular importance for this migration. First of all, the database contained application code generated by the application vendor, which could not be changed by the DBA. Secondly, the migration was not a one-time occurrence, but needed to take place every night (!): the source database on AIX was refreshed on a daily basis with Transportable Tablespaces, and therefore the target database on Exadata needed to be refreshed on a daily basis as well.

Taking all these factors into consideration, RMAN cross-platform Transportable Tablespaces was chosen as the migration method. This method enabled the database to be refreshed very quickly on a daily basis with minimal downtime. RMAN takes care of the migration of the tablespaces containing the application data, including metadata export/import and endian conversion. The whole operation was scripted, and the customer's Batch Scheduler was used to schedule the nightly batch jobs. Using the existing Batch Scheduler had the advantage that the customer's standard operational procedures for batch scheduling, execution and monitoring were already in place. For this purpose, Batch Scheduler agents were installed and configured on the Exadata compute nodes. The figure below shows the whole migration process:
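Such a nightly refresh can be sketched with Oracle 12c's backup-based cross-platform transport, where RMAN handles the endian conversion and the metadata export/import in one pass. This is a minimal illustration, not the customer's actual script; the tablespace name (APP_DATA), staging paths and ASM diskgroup name are hypothetical:

```
-- On the AIX source, with the tablespace set read-only:
RMAN> BACKUP FOR TRANSPORT
        FORMAT '/stage/app_data_ts.bck'
        DATAPUMP FORMAT '/stage/app_data_dmp.bck'
        TABLESPACE app_data;

-- On the Exadata target (Linux, little-endian), RMAN converts the
-- endianness and imports the tablespace metadata in a single step:
RMAN> RESTORE FOREIGN TABLESPACE app_data
        FORMAT '+DATAC1'
        FROM BACKUPSET '/stage/app_data_ts.bck'
        DUMP FILE FROM BACKUPSET '/stage/app_data_dmp.bck';
```

Wrapped in a shell script, steps like these can be driven every night by the Batch Scheduler agents on the compute nodes.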

Backup and recovery were configured for the database on Exadata as part of the migration as well. The backup of the 500 GB database runs online and takes only 5 minutes to complete, an impressive rate of roughly 1.6 GB per second, showing the enormous potential of Exadata for handling large volumes of data in a short period of time.
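The quoted rate follows directly from the figures above; a quick back-of-the-envelope check (database size and backup duration taken from the text):

```python
# Back-of-the-envelope check of the quoted backup throughput.
# Figures from the text: a 500 GB database backed up in 5 minutes.
db_size_gb = 500
backup_seconds = 5 * 60

throughput_gb_s = db_size_gb / backup_seconds
print(f"Sustained backup throughput: {throughput_gb_s:.2f} GB/s")  # ~1.67 GB/s
```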

During the migration the performance of the database and the application was also evaluated. Although the Exadata database was much faster than the current AIX database, additional performance improvements were hard to achieve, both because the database code was generated by the application (sometimes containing really bad code, leading to massive Cartesian joins) and because the database was refreshed every day. Because the performance of this database was a crucial factor for success or failure, it was decided to host the entire database on Flash disks only. For this reason, part of the Flash Cache was reconfigured as Flash grid disks and the database was configured to run completely on Flash. This resulted in an additional performance boost for the database. And because the Exadata X6 comes with a lot of Flash (19.2 TB for a 1/8 Rack X6-2), sufficient Flash remained to serve as Flash Cache for the other planned Exadata databases. See the outline below showing how the Flash was reconfigured for this purpose.

Phase 3: Production Phase

After all tests and trial runs had completed successfully and the daily scheduled refresh had run stably for some time, the Exadata database was finally put into production. During the first week of production the DBA Exadata team was on call and monitored the database and its performance. The application team monitored the application and its performance, and compared them with the original AIX database. See the passage below, which contains part of the application performance comparison:

Within 2 weeks after the 'go live', the results about the Exadata project were also published on the customer Intranet:

"We are pleased to inform you that we have upgraded our infrastructure for <Customer application>. All users will experience a significant improvement in speed. Loading the data in all tools and navigating from one view to another will be much faster than before. Improving our performance is a vital part of the larger plan where we want to make your, and our customers' and brokers', online experience with <customer> as fast, stable and safe as possible.

ITS has been able to achieve this through Oracle's Exadata Database Machine. This machine is the highest-performing infrastructure made by Oracle. It runs queries and data loads much faster, enabling <customer application> to perform a lot better. We have all been waiting eagerly for this improvement. The feedback from our test customers on the increased speed has been very positive. Please use this opportunity to (further) promote <customer application> in your markets."

To be continued

Migrating this AIX database to Exadata was just the first step in launching the new Exadata and Exalogic infrastructure. More AIX databases are scheduled to be migrated to Exadata, and AIX applications will be migrated to Exalogic as well. Qualogy continues to support the customer in this effort, and Oracle Enterprise Manager Cloud Control will be used heavily to manage the new IT landscape.


Step 1: View current Flash Cache settings

  [root@exa-dm1dbadm01 ~]# dcli -g ~/cell_group -l root cellcli -e list flashcache detail | grep -e SIZE -e STATUS
  exa-dm1celadm01: SIZE: 5.821319580078125T
  exa-dm1celadm01: STATUS: normal
  exa-dm1celadm02: SIZE: 5.821319580078125T
  exa-dm1celadm02: STATUS: normal
  exa-dm1celadm03: SIZE: 5.821319580078125T
  exa-dm1celadm03: STATUS: normal

Step 2: Drop the current Flash Cache

Perform as root on the first compute node:

  [root@exa-dm1dbadm01 ~]# dcli -g ~/cell_group -l root cellcli -e DROP flashcache ALL
  exa-dm1celadm01: Flash cache exa_dm1celadm01_FLASHCACHE successfully dropped
  exa-dm1celadm02: Flash cache exa_dm1celadm02_FLASHCACHE successfully dropped
  exa-dm1celadm03: Flash cache exa_dm1celadm03_FLASHCACHE successfully dropped
Step 3: Recreate the Flash Cache with a smaller size

  [root@exa-dm1dbadm01 ~]# dcli -g ~/cell_group -l root cellcli -e CREATE flashcache ALL SIZE=4T
  exa-dm1celadm01: Flash cache exa_dm1celadm01_FLASHCACHE successfully created
  exa-dm1celadm02: Flash cache exa_dm1celadm02_FLASHCACHE successfully created
  exa-dm1celadm03: Flash cache exa_dm1celadm03_FLASHCACHE successfully created

  [root@exa-dm1dbadm01 ~]# dcli -g ~/cell_group -l root cellcli -e list flashcache detail | grep -e SIZE -e STATUS
  exa-dm1celadm01: SIZE: 4T
  exa-dm1celadm01: STATUS: normal
  exa-dm1celadm02: SIZE: 4T
  exa-dm1celadm02: STATUS: normal
  exa-dm1celadm03: SIZE: 4T
  exa-dm1celadm03: STATUS: normal
Step 4: Recreate the Flash grid disks

  [root@exa-dm1dbadm01 ~]# dcli -g ~/cell_group -l root cellcli -e CREATE griddisk ALL flashdisk prefix=flash
  exa-dm1celadm01: GridDisk flash_FD_00_exa_dm1celadm01 successfully created
  exa-dm1celadm01: GridDisk flash_FD_01_exa_dm1celadm01 successfully created
  exa-dm1celadm02: GridDisk flash_FD_00_exa_dm1celadm02 successfully created
  exa-dm1celadm02: GridDisk flash_FD_01_exa_dm1celadm02 successfully created
  exa-dm1celadm03: GridDisk flash_FD_00_exa_dm1celadm03 successfully created
  exa-dm1celadm03: GridDisk flash_FD_01_exa_dm1celadm03 successfully created
Step 5: Create the FLASH diskgroup in ASM

  SQL> create diskgroup FLASHC1 high redundancy
       disk '/o/*/flash*'
       attribute 'compatible.rdbms'='',
       'compatible.asm'='',
       'cell.smart_scan_capable' = 'TRUE',
       'au_size' = '4M';

  Diskgroup created.

  SQL> @/home/oracle/scripts/asm_diskgroup1.sql
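With the FLASH diskgroup in place, the database can be pointed at it so that its datafiles live entirely on Flash. A minimal sketch of one generic way to do this; the diskgroup name FLASHC1 comes from the step above, but the parameter setting is a standard Oracle-managed-files approach, not necessarily the customer's actual configuration:

```
-- Direct new datafile creation to the Flash diskgroup
-- (generic approach, assumes Oracle-managed files):
SQL> ALTER SYSTEM SET db_create_file_dest = '+FLASHC1' SCOPE=BOTH;
```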
About the author: Rob Lasonder, Senior Exadata Specialist at Qualogy.
Comments (2)
  1. At 12:12

    These steps must be followed while transferring a database to Exadata. Good post.

  2. At 16:04

    Did you have a single OEM installation on Exadata with all your prod and non-prod targets, or did you have a prod OEM installation for your prod targets and another non-prod OEM for the non-prod targets? (Any license implications when having prod and non-prod on Exadata?)