Tuesday, 10 July 2012

Solaris - Live Upgrade Installation (Solaris 10 Update 9)

Looking into Live Upgrade to allow patching of the servers with the ability to roll back if any issues occur.

Background:

I have previously tried patching using plain ZFS snapshots - upgrading from Update 9 to Update 10 and then applying the latest Recommended set. The rollback worked, but the system needed two reboots to get back to normal.

I'm now trying Live Upgrade, which should give a better outcome (less downtime).

Test server is a SPARC box running Solaris 10 Update 9 with ZFS root (mimicking the live environment).

Commands:

lustatus
lucreate
luactivate
ludelete
luupgrade

Steps:

- Take a ZFS snapshot on the server (rollback option if the install fails)
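
A recursive snapshot of the root pool gives a point-in-time fallback. A minimal sketch, assuming the root pool is named rpool (as it is later in this post) and an arbitrary snapshot name:

# zfs snapshot -r rpool@pre-liveupgrade
# zfs list -t snapshot -r rpool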

- Go to the following page on MOS: Solaris Live Upgrade Software Patch Requirements [ID 1004881.1]

Apply the prerequisite patches listed in that document for Solaris 10 5/08 (Update 5) or later - there are separate patch lists for SPARC and x86, so use the one matching your architecture.

Once the patches are copied over to the server, run the following for each one:
# unzip /<path>/<zip_file_name>
# patchadd /<path>/<patch_folder_name>
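
Before adding anything, it may be worth confirming the Live Upgrade packages are present and checking whether a given patch is already installed (<patch_id> here is just a placeholder):

# pkginfo SUNWlucfg SUNWlur SUNWluu
# showrev -p | grep <patch_id>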

- Create a Boot Environment called zfsABE within the same ZFS root pool

# lucreate -n zfsABE
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsABE>.
Source boot environment is <root_ds>.
Creating file systems on boot environment <zfsABE>.
Populating file systems on boot environment <zfsABE>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/root_ds> on <rpool/ROOT/root_ds@zfsABE>.
Creating clone for <rpool/ROOT/root_ds@zfsABE> on <rpool/ROOT/zfsABE>.
Creating snapshot for <rpool/ROOT/root_ds/var> on <rpool/ROOT/root_ds/var@zfsABE>.
Creating clone for <rpool/ROOT/root_ds/var@zfsABE> on <rpool/ROOT/zfsABE/var>.
Mounting ABE <zfsABE>.
Generating file list.
Copying data from PBE <root_ds> to ABE <zfsABE>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <zfsABE>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <root_ds>.
Making boot environment <zfsABE> bootable.
Population of boot environment <zfsABE> successful.
Creation of boot environment <zfsABE> successful.
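
With the new BE built, the actual patching can be done against zfsABE while the current environment stays live. A rough sketch using luupgrade, assuming the Recommended set has been unpacked under /<path>/patches and contains the usual patch_order file:

# luupgrade -t -n zfsABE -s /<path>/patches `cat /<path>/patches/patch_order`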


- To activate zfsABE and start using it
NOTE: You will get a message stating how to recover in the event of a boot failure - MAKE A NOTE OF THIS (copy and paste it into an editor)

# luactivate zfsABE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsABE>.

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/root_ds
     zfs set mountpoint=<mountpointName> rpool/ROOT/root_ds
     zfs mount rpool/ROOT/root_ds

4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. umount /mnt
7. zfs set mountpoint=/ rpool/ROOT/root_ds
8. Exit Single User mode and reboot the machine.

**********************************************************************
Modifying boot archive service
Activation of boot environment <zfsABE> successful.



Run lustatus, which should now show "yes" under "Active On Reboot" for zfsABE

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
root_ds                    yes      yes    no        no     -
zfsABE                     yes      no     yes       no     -


# init 6

- To check the status post-reboot
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
root_ds                    yes      no     no        yes    -
zfsABE                     yes      yes    yes       no     -
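
If the patched BE misbehaves after the reboot, falling back is just a reverse activation - a sketch using the BE names from this post:

# luactivate root_ds
# init 6

Once zfsABE has proven itself, the old BE can be removed to reclaim the snapshot space:

# ludelete root_ds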


Documentation:
Solaris Live Upgrade Software Patch Requirements [ID 1004881.1]


Useful Live Upgrade links:

https://blogs.oracle.com/bobn/entry/getting_rid_of_pesky_live

https://blogs.oracle.com/bobn/entry/live_upgrade_survival_tips

https://blogs.oracle.com/bobn/entry/common_live_upgrade_errors

https://blogs.oracle.com/bobn/entry/var_tmp_and_live_upgrade

https://blogs.oracle.com/bobn/entry/live_upgrade_and_zfs_versioning

1 comment:

  1. I am planning to upgrade from Solaris 10 Update 9 to Solaris 10 Update 11.

    The same server has LDoms (ldm) configured - how can I do this activity?
