If you run into trouble after upgrading the OS image of your VM Cluster nodes to a newer minor or major version, one possible solution is to roll back the update. In this blog post I will demonstrate how to do this.
Image Version
Run the following command as root to get extended details about the currently installed image version.
$> imageinfo --all-options
Kernel version: 5.15.0-308.179.6.16.el8uek.x86_64 #2 SMP Thu Sep 18 11:19:34 PDT 2025 x86_64
Uptrack kernel version: 5.15.0-312.187.5.3.el8uek.x86_64 #2 SMP Sun Sep 21 08:53:14 PDT 2025 x86_64
Image kernel version: 5.15.0-308.179.6.16.el8uek
Image version: 25.2.3.0.0.251015.2
Image created: 2025-10-16 01:07:26 -0700
Image activated: 2025-11-13 09:20:48 +0100
Image status: success
Image label: OSS_25.2.3.0.0_LINUX.X64_251015.2
Exadata software version: 25.2.3.0.0.251015.2
Node type: GUEST
Install type: KVM Guest with ROCE and Secure Fabric
System partition on device: /dev/mapper/VGExaDb-LVDbSys1
In the output above, "Image version" shows the currently installed image version and "System partition on device" shows the actively used system partition.
imageinfo uses the file /opt/oracle.cellos/image.id to extract parts of the above information. To identify the version of the inactive system partition, mount the partition first.
$> mkdir /mnt/inactive
$> mount /dev/mapper/VGExaDb-LVDbSys2 /mnt/inactive/
$> egrep "^version:" /mnt/inactive/opt/oracle.cellos/image.id | cut -d ':' -f 2
25.1.8.0.0.250805
$> umount /mnt/inactive
$> rmdir /mnt/inactive
System Partitions
An ExaCC guest VM has the following two pairs of system partitions to store the operating system:
- /dev/mapper/VGExaDb-LVDbSys1 and /dev/mapper/VGExaDb-LVDbVar1
- /dev/mapper/VGExaDb-LVDbSys2 and /dev/mapper/VGExaDb-LVDbVar2
During an operating system update, the running system is first backed up to the inactive partitions; the update itself is then applied to the active partitions. Which pair is active is determined by the partition labels DBSYS and VAR, which are set on the active partitions.
$> blkid -s LABEL | egrep "DBSYS|VAR"
/dev/mapper/VGExaDb-LVDbSys1: LABEL="DBSYS"
/dev/mapper/VGExaDb-LVDbVar1: LABEL="VAR"
Rollback
A rollback is achieved by switching these partition labels to the inactive system partitions and rebooting the VM. The rollback is initiated with patchmgr; I am running the following command from a dedicated admin server.
Put the names of the VM Cluster nodes that need a rollback into the dbnodes file. To avoid complete downtime, add the parameter --rolling.
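The dbnodes file itself is just a plain list with one hostname per line, for example (written to the current directory here; in the command below it lives under /stage):

```shell
# One VM Cluster node name per line; oranode2 is the node from this
# post's example run. Add further nodes on their own lines as needed.
cat > dbnodes <<'EOF'
oranode2
EOF
```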
$> patchmgr --rollback --dbnodes /stage/dbnodes --allow_active_network_mounts --rolling --log_dir /stage/rollback_test
************************************************************************************************************
NOTE patchmgr release: 25.250622 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
NOTE
NOTE Database nodes will reboot during the rollback process.
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
************************************************************************************************************
WARNING Your patchmgr is more than 60 days old. Check MOS 1553103.1 and make sure you have the most recent version.
2025-11-13 10:35:07 +0200 :INFO : Checking hosts connectivity via ICMP/ping
2025-11-13 10:35:08 +0200 :INFO : Hosts Reachable: [oranode2]
2025-11-13 10:35:08 +0200 :INFO : All hosts are reachable via ping/ICMP
2025-11-13 10:35:08 +0200 :Working: Verify SSH equivalence for the root user to oranode2
2025-11-13 10:35:08 +0200 :WARNING: patchmgr is launched from a remote/shared filesystem
2025-11-13 10:35:08 +0200 :WARNING: Ensure the filesystem is not hosted from any of the target nodes listed
2025-11-13 10:35:09 +0200 :INFO : SSH equivalency verified to host oranode2
2025-11-13 10:35:09 +0200 :SUCCESS: Verify SSH equivalence for the root user to oranode2
2025-11-13 10:35:10 +0200 :Working: Initiate rollback on 1 node(s).
2025-11-13 10:35:10 +0200 :Working: Check for enough free space on oranode2 to transfer and unzip files.
2025-11-13 10:35:14 +0200 :SUCCESS: Check for enough free space on oranode2 to transfer and unzip files.
2025-11-13 10:35:31 +0200 :Working: Initiate rollback on oranode2
2025-11-13 10:35:34 +0200 :Working: dbnodeupdate.sh running a rollback step on oranode2.
2025-11-13 10:40:46 +0200 :INFO : oranode2 is ready to reboot.
2025-11-13 10:40:46 +0200 :SUCCESS: dbnodeupdate.sh running a rollback step on oranode2.
2025-11-13 10:40:47 +0200 :Working: Initiate reboot on oranode2.
2025-11-13 10:40:48 +0200 :SUCCESS: Initiate reboot on oranode2.
2025-11-13 10:40:48 +0200 :Working: Waiting to ensure oranode2 is down before reboot.
2025-11-13 10:42:26 +0200 :SUCCESS: Waiting to ensure oranode2 is down before reboot.
2025-11-13 10:42:26 +0200 :Working: Waiting to ensure oranode2 is up after reboot.
2025-11-13 10:43:09 +0200 :SUCCESS: Waiting to ensure oranode2 is up after reboot.
2025-11-13 10:43:09 +0200 :Working: Waiting to connect to oranode2 with SSH.
2025-11-13 10:43:23 +0200 :SUCCESS: Waiting to connect to oranode2 with SSH.
2025-11-13 10:43:23 +0200 :Working: Wait for oranode2 is ready for the completion step of rollback.
2025-11-13 10:48:24 +0200 :SUCCESS: Wait for oranode2 is ready for the completion step of rollback.
2025-11-13 10:48:24 +0200 :Working: Initiate completion step from dbnodeupdate.sh on oranode2.
2025-11-13 10:57:27 +0200 :SUCCESS: Initiate completion step from dbnodeupdate.sh on oranode2.
2025-11-13 10:58:19 +0200 :SUCCESS: Initiate rollback on oranode2.
2025-11-13 10:58:19 +0200 :SUCCESS: Initiate rollback on 1 node(s).
2025-11-13 10:58:21 +0200 :SUCCESS: Completed run of command: /stage/patchmgr --rollback --dbnodes /stage/nodes.dat --allow_active_network_mounts --rolling --log_dir /stage/rollback_test
2025-11-13 10:58:21 +0200 :INFO : Rollback performed on dbnode(s) in file /l/ora/nodes.dat: [oranode2]
2025-11-13 10:58:21 +0200 :INFO : Current image version on dbnode(s) is:
2025-11-13 10:58:21 +0200 :INFO : oranode2: 25.1.8.0.0.250805
2025-11-13 10:58:21 +0200 :INFO : For details, check the following files in /stage/rollback_test:
2025-11-13 10:58:21 +0200 :INFO : - <dbnode_name>_dbnodeupdate.log
2025-11-13 10:58:21 +0200 :INFO : - patchmgr.log
2025-11-13 10:58:21 +0200 :INFO : - patchmgr.trc
2025-11-13 10:58:22 +0200 :INFO : Collected dbnodeupdate diag in file: Diag_patchmgr_dbnode_rollback_130524103456.tbz
2025-11-13 10:58:22 +0200 :INFO : Exit status:0
2025-11-13 10:58:22 +0200 :INFO : Exiting.
After the successful rollback, the old image version is active again. The active system partitions have changed as well.
$> imageinfo --all-options
Kernel version: 5.15.0-308.179.6.14.el8uek.x86_64 #2 SMP Sun Jul 27 21:02:28 PDT 2025 x86_64
Uptrack kernel version: 5.15.0-310.184.5.2.el8uek.x86_64 #2 SMP Wed Jul 9 16:08:33 PDT 2025 x86_64
Image kernel version: 5.15.0-308.179.6.14.el8uek.x86_64
Image version: 25.1.8.0.0.250805
Image created: 2025-08-05 04:38:23 -0700
Image activated: 2025-11-13 11:47:06 +0100
Image image type: production
Image status: success
Image label: OSS_25.1.8.0.0_LINUX.X64_250805
Exadata software version: 25.1.8.0.0.250805
Node type: GUEST
Install type: KVM Guest with ROCE and Secure Fabric
System partition on device: /dev/mapper/VGExaDb-LVDbSys2
$> blkid -s LABEL | egrep "DBSYS|VAR"
/dev/mapper/VGExaDb-LVDbSys2: LABEL="DBSYS"
/dev/mapper/VGExaDb-LVDbVar2: LABEL="VAR"
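To verify the result on several nodes in one pass, a small loop can help. This is a sketch, assuming root SSH equivalence is in place (patchmgr required it anyway) and that "imageinfo -ver" prints just the version string:

```shell
# Compare the active image version of every node in a dbnodes file
# against the expected (rolled-back) version.
check_versions() {  # usage: check_versions <expected-version> <dbnodes-file>
  expected=$1
  while read -r node; do
    actual=$(ssh "root@$node" imageinfo -ver)
    if [ "$actual" = "$expected" ]; then
      echo "$node OK ($actual)"
    else
      echo "$node MISMATCH (got: $actual)" >&2
    fi
  done < "$2"
}

# e.g. check_versions 25.1.8.0.0.250805 /stage/dbnodes
```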
OCI CLI / Web Console
The OCI Web Console only offers the option to roll back a VM update when the update itself has failed; in that case the option is displayed on the VM Cluster overview page.
Currently the OCI CLI provides no function call to initiate a rollback.