Tuesday 24 July 2012

VMware - SLES

I found out recently that, due to the VMware licenses we have, we were/are entitled to run SUSE SLES fully supported within the VI.

I'll upload the links once I find them again.....

General information about SUSE (which some VMware licenses allow you to run within a vSphere environment - for free and supported!):

http://www.softpanorama.info/Commercial_linuxes/Suse/index.shtml

Friday 20 July 2012

Solaris - Adding patches while in single user mode

Had a request in from the DBAs to add certain patches to a server running a number of Whole Root Zones.

Reading through the bumpf on the patch (putting aside the fact that there are Zones involved), the recommendation was to put the server into single user mode....

Digging around I found a couple of options:

# init 1
Which drops to single user mode without restarting the server - this could leave errant applications running that never received a kill signal when the system left the multi-user state.

# reboot -- -s
Which reboots the server into single user mode while making sure no applications are still running.

We went for the second option as it is cleaner and carries less risk.

Rebooting into single user mode stopped the Zones from autostarting, which meant downtime, so all the work had to be done out of hours unless we got agreed downtime (which we did!).
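For what it's worth, the rough sequence looked something like this (the patch location is a placeholder, and remember that single user mode means working from the console):

# reboot -- -s

Then, from the single user prompt:

# zoneadm list -cv
(the Zones should show as installed, not running)
# patchadd /<path>/<patch_folder_name>
# init 6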

Thursday 12 July 2012

Solaris - Using Live Upgrade to Upgrade Solaris 10 (U9 to U10)

Documenting how to upgrade Solaris 10 Update 9 (U9) to Update 10 (U10) using Live Upgrade (LU) on ZFS.

Starting point

Okay, so I've got LU working on Solaris 10 U9 (see http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html). Now I need to patch the Boot Environments (BEs).

From the research I've done, the best method for just patching (not upgrading) is to follow these steps:

1. Install the 10_sparc_0811_patchset (6 zip files - unzip then run the install script)
2. Install the latest 10_Recommended.zip (1 zip file - unzip then run the install script)
3. Install the latest Firmware for your server.
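A rough sketch of the first two steps (paths and zip names are placeholders - the exact install script and its options are given in each bundle's README; the Recommended bundles of this era used ./installpatchset --s10patchset):

# cd /<path_to_patches>
# unzip 10_sparc_0811_patchset_1of6.zip
(repeat for the remaining five zip files, and likewise for 10_Recommended.zip)
# cd <patchset_directory>
# ./installpatchset --s10patchset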

The patchset would update the common packages that exist in both U9 and U10, but not install any packages that are new in U10 - you need to upgrade to get those.

Since I'm using LU, why not go for the upgrade option instead of just patching? (I'll cover patching a BE in another blog post.)

Pre-Upgrade preparation work

The first step is to upgrade the existing Live Upgrade packages to the U10 version (as per most Oracle documentation, but to be specific: the Oracle whitepaper "How to Upgrade and Patch with Oracle Solaris Live Upgrade").

Why? A new BE will be created in which the upgrade to U10 will occur - the new BE won't work correctly post-upgrade unless the LU packages have been updated first.

I'm upgrading using the U10 ISO, which is stored on an NFS SAN volume.

1. Mount the NFS volume (http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/02/useful-commands.html)
2. Mount the ISO volume (http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/02/useful-commands.html)
3. # cd /<mount_point>/Solaris_10/Tools/Installers
4. # ./liveupgrade20 -noconsole -nodisplay
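For reference, a minimal sketch of steps 1 and 2 (the server, share and ISO file names are assumptions - substitute your own):

# mkdir -p /mnt/nfs
# mount -F nfs <nfs_server>:/<share> /mnt/nfs
# lofiadm -a /mnt/nfs/sol-10-u10-ga2-sparc-dvd.iso
/dev/lofi/1
# mkdir /dvd
# mount -F hsfs -o ro /dev/lofi/1 /dvd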

All ready to go? No, think again. You need to re-apply the LU patches (see http://sysadmin-tips-and-tricks.blogspot.co.uk/2012/07/solaris-live-upgrade-installation.html).

Once the LU patches are applied, the BE that will be used for the U10 upgrade can be created. Skipping this step will result in BE creation failures (well, it did for me....).

# lucreate -n u10
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <u10>.
Source boot environment is <zfsBE>.
Creating file systems on boot environment <u10>.
Populating file systems on boot environment <u10>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@u10>.
Creating clone for <rpool/ROOT/zfsBE@u10> on <rpool/ROOT/u10>.
Creating snapshot for <rpool/ROOT/zfsBE/var> on <rpool/ROOT/zfsBE/var@u10>.
Creating clone for <rpool/ROOT/zfsBE/var@u10> on <rpool/ROOT/u10/var>.
Mounting ABE <u10>.
Generating file list.
Copying data from PBE <zfsBE> to ABE <u10>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <u10>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <zfsBE>.
Making boot environment <u10> bootable.
Population of boot environment <u10> successful.
Creation of boot environment <u10> successful.


I'm now ready to upgrade.....

You may want to check what the new BE is composed of:
# lufslist u10
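You can also look at the underlying ZFS datasets directly (assuming the default root pool name, rpool):
# zfs list -r rpool/ROOT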

Upgrading the BE from U9 to U10

The U10 ISO is still connected and mounted as /dvd from the liveupgrade20 installation, so I just needed to run the following command:
# luupgrade -u -n u10 -s /dvd

The upgrade takes a while (30 minutes plus).

NOTE: I've amended this slightly so it runs as a background task:
# nohup luupgrade -u -n u10 -s /dvd >> /patches/u10log &
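Progress can then be followed with:
# tail -f /patches/u10log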

Once completed, I activated the u10 BE (getting it ready to take over after the reboot):
# luactivate u10
A Live Upgrade Sync operation will be performed on startup of boot environment <u10>.

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/zfsBE
     zfs set mountpoint=<mountpointName> rpool/ROOT/zfsBE
     zfs mount rpool/ROOT/zfsBE

4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. umount /mnt
7. zfs set mountpoint=/ rpool/ROOT/zfsBE
8. Exit Single User mode and reboot the machine.

**********************************************************************
Modifying boot archive service
Activation of boot environment <u10> successful.
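Before rebooting, a quick lustatus should show "yes" under "Active On Reboot" for u10:
# lustatus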


Reboot the server:
# init 6

Once rebooted, the u10 BE has taken over as the working BE:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -
u10                        yes      yes    yes       no     -


Check the release version of Solaris:
bash-3.2# cat /etc/release
                   Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
  Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
                            Assembled 23 August 2011

Job done!

- The u10 BE has been upgraded
- The only impact to the original BE was the LU upgrade - which shouldn't affect the running of the server if things go wrong.

Tuesday 10 July 2012

Solaris - Live Upgrade Installation (Solaris 10 Update 9)

Looking into Live Upgrade to allow for patching of the servers with the ability to roll back if any issues occur.

Background:

I have tried patching while using ZFS snapshots - patching from Update 9 to 10 then applying the latest Recommended set - the rollback worked but the system needed 2 reboots to get back to normal....

I'm now trying Live Upgrade, which should end up with a better outcome (less downtime).

Test server is a SPARC box running Solaris 10 Update 9 with ZFS (mimicking the live environment).

Commands:

lustatus - show the Boot Environments (BEs) and their state
lucreate - create a new BE
luactivate - mark a BE as active on the next reboot
ludelete - delete a BE
luupgrade - upgrade or patch a BE

Steps:

- Run a ZFS snapshot on the server (rollback option if the install fails - see the sketch after the patch commands below)

- Go to the following page on MOS: Solaris Live Upgrade Software Patch Requirements [ID 1004881.1]

Apply these patches:
Solaris 10 5/08 (Update 5) or later:
SPARC:

x86:

Once the patch zips are copied over to the server, run the following for each patch:
# unzip /<path>/<zip_file_name>
# patchadd /<path>/<patch_folder_name>
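For the ZFS snapshot in the first step, something like this creates a recursive rollback point (assuming the root pool is named rpool; the snapshot name is arbitrary):
# zfs snapshot -r rpool@pre-lu-install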

- Create a Boot Environment called zfsABE within the same ZFS root pool:

# lucreate -n zfsABE
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsABE>.
Source boot environment is <root_ds>.
Creating file systems on boot environment <zfsABE>.
Populating file systems on boot environment <zfsABE>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/root_ds> on <rpool/ROOT/root_ds@zfsABE>.
Creating clone for <rpool/ROOT/root_ds@zfsABE> on <rpool/ROOT/zfsABE>.
Creating snapshot for <rpool/ROOT/root_ds/var> on <rpool/ROOT/root_ds/var@zfsABE>.
Creating clone for <rpool/ROOT/root_ds/var@zfsABE> on <rpool/ROOT/zfsABE/var>.
Mounting ABE <zfsABE>.
Generating file list.
Copying data from PBE <root_ds> to ABE <zfsABE>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <zfsABE>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <root_ds>.
Making boot environment <zfsABE> bootable.
Population of boot environment <zfsABE> successful.
Creation of boot environment <zfsABE> successful.


- Activate zfsABE so it becomes the running BE after the next reboot
NOTE: You will get a message stating how to recover in the event of a boot failure - MAKE NOTE OF THIS (copy and paste into an editor)

# luactivate zfsABE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsABE>.

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/root_ds
     zfs set mountpoint=<mountpointName> rpool/ROOT/root_ds
     zfs mount rpool/ROOT/root_ds

4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. umount /mnt
7. zfs set mountpoint=/ rpool/ROOT/root_ds
8. Exit Single User mode and reboot the machine.

**********************************************************************
Modifying boot archive service
Activation of boot environment <zfsABE> successful.



Run lustatus, which should now show "yes" under "Active On Reboot" for zfsABE:

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
root_ds                    yes      yes    no        no     -
zfsABE                     yes      no     yes       no     -


# init 6

- To check the status post-reboot:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
root_ds                    yes      no     no        yes    -
zfsABE                     yes      yes    yes       no     -


Documentation:
Solaris Live Upgrade Software Patch Requirements [ID 1004881.1]


Useful Live Upgrade links:

https://blogs.oracle.com/bobn/entry/getting_rid_of_pesky_live

https://blogs.oracle.com/bobn/entry/live_upgrade_survival_tips

https://blogs.oracle.com/bobn/entry/common_live_upgrade_errors

https://blogs.oracle.com/bobn/entry/var_tmp_and_live_upgrade

https://blogs.oracle.com/bobn/entry/live_upgrade_and_zfs_versioning