How to Migrate Oracle Databases to Nutanix NDB

This is Part 3 of our series. If you are just joining, start with Part 1 for the business case and Part 2 for planning and sizing. In this post we focus on execution.

You will pick a migration path, build a runbook, and perform a clean cutover to Nutanix Database Service (NDB), then validate and govern the new environment.

Choose the Right Migration Method

Different workloads call for different paths. Use the summaries below to shortlist methods.

Oracle Data Pump (expdp/impdp)
  • Typical use: Small to medium schemas, cross-version upgrades, schema-only moves.
  • Downtime profile: Cutover outage required.
  • Notes: Good for cleanup and reorg. Consider network throughput and parallelism.

RMAN Backup and Restore
  • Typical use: Like-for-like moves, same endian, full database.
  • Downtime profile: Short outage with final redo apply.
  • Notes: Restore to target, recover with archived redo, then open with resetlogs at cutover.

Transportable Tablespaces
  • Typical use: Very large databases, minimal movement of datafiles.
  • Downtime profile: Low to moderate outage.
  • Notes: Export metadata, copy datafiles, plug in at target. Watch tablespace prerequisites.

Logical Replication
  • Typical use: Near-zero-downtime scenarios.
  • Downtime profile: Minimal outage at switchover.
  • Notes: Use logical replication to sync changes, then switch. More moving parts to manage.

Tip for DBAs. If you are unfamiliar with AHV-specific best practices during cutover, review Best Practices for Migrating Oracle Databases to Nutanix AHV.

Prepare the NDB Target

Set up the destination before you touch production.

  • Create database profiles and placement policies in NDB for each Oracle version and edition.
  • Align storage. Map ASM disk groups or file systems to Nutanix storage containers and protection policies.
  • Network and security. Confirm VLANs, MTU, DNS, NTP, and service accounts. Pre-create listener entries.
  • Backup and snapshot strategy. Define snapshot schedules and retention to meet RPO and RTO targets.
  • Tagging. Tag VMs and databases with environment, app, owner, and cost center for inventory and license reporting.
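Listener and client aliases can be staged on the target ahead of time. As a sketch, a tnsnames.ora entry for the future NDB-hosted database might look like the following (the alias, host, and service names are placeholders):

```
# Hypothetical tnsnames.ora entry for the NDB target; replace names with your own
ORCLPRD_NDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ndb-ora01.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orclprd.example.com)
    )
  )
```

Staging entries like this lets you test connectivity from application hosts well before the cutover window.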

For day-two patching guidance on the platform you will land on, see House of Brick’s post on patching Oracle and SQL Server in NDB.

Build a Migration Runbook

A strong runbook prevents surprises.

  1. Baseline. Capture AWR or Statspack, OS metrics, and storage latency on the source. Keep the numbers handy for post-cutover comparison.
  2. Dry run. Migrate a non-production copy using the exact method you will use on production. Time each step. Record throughput and any errors.
  3. Roles and comms. Name a cutover lead, a rollback lead, and an approver. Set up a dedicated chat channel and conference bridge.
  4. Change windows. Confirm maintenance windows and business freeze periods with application owners.
  5. Rollback plan. Write explicit steps for a controlled return to source if validation fails.
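The baseline step can be bracketed with manual AWR snapshots so a matching report exists for post-cutover comparison. A sketch in SQL*Plus (AWR requires the Diagnostics Pack license; use Statspack if you are not licensed for it):

```sql
-- Take a manual snapshot at the start and end of a representative workload window
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Identify the snapshot IDs that bracket the window
SELECT snap_id, begin_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id DESC
FETCH  FIRST 5 ROWS ONLY;

-- Then generate the baseline report interactively with @?/rdbms/admin/awrrpt.sql
```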

Method Playbooks

Data Pump

  1. Prepare target schemas, tablespaces, and security roles in NDB.
  2. On source, run expdp with a parameter file that defines SCHEMAS or FULL and includes consistent settings such as FLASHBACK_TIME.
  3. Move dump files over a high throughput channel. Consider parallel file copies.
  4. On target, run impdp with PARALLEL, REMAP_SCHEMA as needed, and transform clauses to match storage and segments.
  5. Rebuild invalid objects, refresh statistics, and compare row counts. Validate application connectivity and jobs.

When to use. Good for cleanup, version upgrades, and schema isolation. Expect a defined outage during the import phase.
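The export and import steps are usually driven by parameter files. A minimal sketch, assuming a directory object named DP_DIR and a source schema named HR (all names are placeholders):

```
# export.par — hypothetical expdp parameter file
DIRECTORY=DP_DIR
DUMPFILE=hr_%U.dmp
LOGFILE=hr_exp.log
SCHEMAS=HR
PARALLEL=4
FLASHBACK_TIME=SYSTIMESTAMP

# import.par — matching impdp parameter file
DIRECTORY=DP_DIR
DUMPFILE=hr_%U.dmp
LOGFILE=hr_imp.log
REMAP_SCHEMA=HR:HR_NEW
PARALLEL=4
TRANSFORM=SEGMENT_ATTRIBUTES:N
```

Run them as expdp system PARFILE=export.par on the source and impdp system PARFILE=import.par on the target, adjusting parallelism to match your dry-run timings.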

RMAN Backup and Restore

  1. Take an online backup with archive logs at the source.
  2. Restore the backup to the target VM on NDB.
  3. Configure listeners, pfiles, and ASM on the target in advance.
  4. Apply archived redo logs until the cutover checkpoint.
  5. At cutover, stop application writes, ship final archive logs, apply, and open with resetlogs.
  6. Repoint application services and validate.

When to use. Best for like-for-like moves with limited change. Outage is the final redo apply and service switch.
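The restore and recover phase can be scripted as an RMAN command file. A minimal sketch (the UNTIL sequence is a placeholder for your cutover checkpoint):

```
# restore.rman — hypothetical RMAN command file for the NDB target
RUN {
  SET UNTIL SEQUENCE 12345 THREAD 1;  # placeholder: your cutover checkpoint
  RESTORE DATABASE;
  RECOVER DATABASE;
}
# After the final archive logs are shipped and applied at cutover:
# ALTER DATABASE OPEN RESETLOGS;
```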

Transportable Tablespaces

  1. Verify self-contained tablespaces and set them read only.
  2. Export metadata with expdp TRANSPORT_TABLESPACES.
  3. Copy datafiles to the target. Consider Nutanix storage acceleration for faster copies.
  4. Import metadata on target and set tablespaces read write.
  5. Validate constraints, synonyms, and grants.

When to use. Large databases with a need to move quickly. Requires careful prep and validation of self-containment.
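Self-containment can be verified before the tablespaces go read only. A sketch, assuming a tablespace set named SALES_DATA and SALES_IDX (placeholder names):

```sql
-- Check the tablespace set for external dependencies
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('SALES_DATA,SALES_IDX', TRUE);

-- Any rows here must be resolved before transport
SELECT * FROM transport_set_violations;

-- Then make each tablespace read only ahead of the metadata export
ALTER TABLESPACE sales_data READ ONLY;
```

The metadata export itself then runs with expdp and TRANSPORT_TABLESPACES=SALES_DATA,SALES_IDX.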

Logical Replication

  1. Stand up target schemas and seed an initial load using Data Pump or RMAN.
  2. Start logical replication from source to target.
  3. Monitor lag and reconcile conflicts.
  4. Schedule a short outage to quiesce the source, allow final apply, switch DNS, and resume on the target.
  5. Decommission replication once stable.

When to use. Near-zero-downtime projects or when batch windows are tight. Requires more coordination and monitoring.
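Most logical replication tools read changes from the redo stream and need supplemental logging enabled on the source. A quick prerequisite check, as a sketch:

```sql
-- Returns YES when minimal supplemental logging is already enabled
SELECT supplemental_log_data_min FROM v$database;

-- Enable it if needed (requires appropriate privileges; adds some redo overhead)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```

Your chosen tool may also require table-level supplemental logging; check its documentation before the initial load.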

Cutover Without Drama

Treat cutover as a controlled change, not a heroic push.

  • Change freeze. Stop non-essential changes five to seven days before cutover.
  • Final sync. Apply final redo, finalize imports, and checkpoint all services.
  • Backups. Take a final on-platform backup in NDB before opening for production traffic.
  • DNS and routing. Update service endpoints, connection strings, and connection pools.
  • Smoke test. Validate logins, critical transactions, report runs, and batch jobs with application owners present.
  • Post-cutover watch. Keep engineers online for at least one business cycle.
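The smoke test can start with a few quick checks before application owners run their scenarios. A sketch (the application table name is a placeholder):

```sql
-- Confirm the instance is open
SELECT instance_name, status FROM v$instance;

-- Invalid object count should match the pre-cutover baseline
SELECT COUNT(*) AS invalid_objects FROM dba_objects WHERE status = 'INVALID';

-- Spot-check a critical application table
SELECT COUNT(*) FROM app_owner.orders;
```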

Validate and Tune on NDB

After traffic moves, verify that the target meets or beats baseline.

  • Performance comparison. Compare AWR baselines and top SQL on source versus target. Check logical reads, parse rates, and wait events.
  • Storage and IOPS. Confirm latency is at or below targets. Validate Nutanix container placement and caching benefits.
  • Backups and DR. Perform a test restore. Validate snapshot replication to the secondary site or cloud.
  • Security posture. Confirm CIS and NIST control checks pass at both VM and database layers.
  • Patching. Use NDB patch orchestration to bring the environment to the desired level. See House of Brick patch guidance.
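For the performance comparison, the AWR Compare Periods report puts the source baseline and a post-cutover window side by side (Diagnostics Pack required; the script prompts for the two snapshot ranges):

```sql
-- Compare a pre-migration snapshot range against a post-cutover range
@?/rdbms/admin/awrddrpt.sql
```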

For fine-tuning performance on the new platform, review Optimizing NDB Workloads in Hybrid Cloud.

Common Pitfalls and How to Avoid Them

  • Network throughput bottlenecks. Test parallel streams and MTU in advance. Use multiple channels for expdp, RMAN, and file copies.
  • Character set and collation mismatches. Confirm AL32UTF8 or desired target settings before export. Avoid implicit conversions.
  • Forgotten jobs and links. Inventory database links, external jobs, wallets, ACLs, and scheduler chains. Validate post-cutover.
  • License creep. Watch options and packs. Track feature usage on the target from day one.
  • Undersized VMs. Right-size based on real workload metrics, not nameplate configs. Revisit allocations after one week of production.
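The character set check in particular is cheap to run on both source and target before any export. As a sketch:

```sql
-- Compare these values between source and target before exporting
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```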

For additional field tips, see the House of Brick expert migration guide.

Put Governance in Place From Day One

Modern infrastructure accelerates change. Add a continuous layer that watches the things auditors and finance care about.

  • License intelligence. Track cores and option usage against entitlements in near real time. See License Intelligence for NDB Automation.
  • Drift detection. Alert on VM moves, parameter changes, or security configuration changes that could affect cost or compliance.
  • Compliance scoring. Run automated checks against CIS and NIST at the VM and database layers.
  • Inventory and tagging. Keep the catalog accurate so owners, tiers, and cost centers are never in doubt.

This layer turns a successful cutover into a stable operating model.

Frequently Asked Questions

Which migration method do most teams choose first?

RMAN backup and restore is the most common starting point because it preserves the database layout and keeps the change window small. Teams add redo apply at the end to minimize outage.

When is Data Pump better than RMAN?

Use Data Pump when you want to clean up schemas, move to a newer Oracle version, or separate application components. It requires a defined outage during import and careful parallel tuning.

How do I minimize downtime for a customer-facing application?

Seed the target with RMAN or Data Pump, then use logical replication for ongoing change. Quiesce briefly at the end, apply the final changes, and switch service endpoints.

Can I move Oracle RAC to NDB?

Yes. Validate RAC network requirements and test cluster services in a non production environment first. Confirm VIPs, SCAN listeners, and multicast or equivalent settings.

What should I do in the first week after cutover?

Compare performance to baseline, right size VMs, test restore and replication, confirm compliance scores, and set alerts for feature usage and VM moves.

Key Takeaway

Successful migrations are methodical. Pick the method that fits your downtime and size, rehearse it, validate on NDB, and add continuous governance. You will protect cost savings and reduce audit risk while giving DB teams a platform that is easier to operate.