Tuesday, 7 May 2013
An update on patching zones, which is how this came to light.
I have noticed that when you create a BE (boot environment) on a server running local zones and then patch it, an additional zone directory is created. But when I delete the old BE, the old zone directory doesn't go with it... If lucreate creates the directory, why doesn't ludelete remove it? Maybe it was just naive of me to expect it to be removed.
For example, if I wanted to patch a server running a zone called ZONE1, I would first create a new BE and call it, for argument's sake, newBE. Once the BE creation had completed, the zone directory locations would look something like this:
/export/ZONE1 (running BE)
/export/ZONE1-newBE (BE to be patched)
Then I would run the unzipped CPU (Critical Patch Update) against newBE.
Once all the patching has completed, newBE has been activated and the server has been rebooted, ZONE1 is running from /export/ZONE1-newBE.
At some point in the near future I would do some tidying up: remove the old BE by running ludelete, then check the number of BEs by running lustatus.
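For reference, the whole cycle boils down to something like this (assuming a ZFS root, so lucreate needs no -m options; I've left the CPU installation itself as a placeholder because that step follows the patchset's own README, and <old BE name> is whatever lustatus shows for the original BE):
# lucreate -n newBE
(apply the unzipped CPU to newBE, per its README)
# luactivate newBE
# init 6
(after the reboot, tidy up)
# ludelete <old BE name>
# lustatus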
But when I check the /export directory, both folders are still there.
Luckily you can remove the old directories with no issues, but to be safe, first check which directory the zone is running from by typing:
zoneadm list -iv
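ZONE1's PATH in that output should now show /export/ZONE1-newBE, which leaves the original /export/ZONE1 as the stale copy. With the example names above, the tidy-up is then just something like:
# rm -rf /export/ZONE1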
Now to amend the Work Instruction so others at work know what needs to be done...
FYI - Oracle say this is normal behavior, but I don't recall reading anything about it (not that it isn't documented somewhere - I just haven't seen it!).
Wednesday, 30 January 2013
Solaris - recover a root password in a local zone
Someone managed to reset the root password of a local zone incorrectly, which left us with a zone we couldn't log in to as root...
To fix it, log in to the global zone as root.
Edit the shadow file of the offending local zone (for example, a local zone called LZ01):
# vi /zones/LZ01/root/etc/shadow
Edit the root entry in the shadow file like so:
root::15435::::::
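For comparison, the entry will have started out looking something like the first line below (the hash shown is a made-up example); clearing the second field is what removes the password:
root:$1$examplehash$XXXXXXXXXXXXXXXXXX:15435::::::   (before - hypothetical hash)
root::15435::::::                                    (after - password field cleared)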
Then save the file (Esc, then :wq!).
Log in to the console of the local zone from the global zone:
# zlogin -C LZ01
Log in as root (which now just logs you in without prompting for a password).
Reset the root password.
# passwd root
Follow the prompts...
Wednesday, 22 February 2012
Solaris Zones - Fibre channel presentation
This process will persistently mount an FC LUN in a Non Global Zone
On the Global Zone present the required FC LUN, format as UFS and manually mount:
mkdir /<folder name>
fcinfo hba-port
fcinfo remote-port -slp <wwn>
format
newfs /dev/rdsk/<device id>
mount -F ufs /dev/dsk/<device id> /<folder name>
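As a purely illustrative example, with a made-up mount point of /fcdata and a made-up device of c2t1d0, the same sequence would run something like this:
# mkdir /fcdata
# fcinfo hba-port                           (note the HBA port WWN)
# fcinfo remote-port -slp 10000000c9abcdef  (the WWN here is the HBA port WWN from the previous command)
# format                                    (label the newly presented LUN)
# newfs /dev/rdsk/c2t1d0s0
# mount -F ufs /dev/dsk/c2t1d0s0 /fcdata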
On the Global Zone add the newly formatted file system to the required Non Global Zone as type = UFS
global# zonecfg -z <my-zone>
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/<folder name>
zonecfg:my-zone:fs> set special=/dev/dsk/<device id>
zonecfg:my-zone:fs> set raw=/dev/rdsk/<device id>
zonecfg:my-zone:fs> set type=ufs
zonecfg:my-zone:fs> end
zonecfg:my-zone> verify
zonecfg:my-zone> commit
zonecfg:my-zone> exit
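Before the reboot you can double-check that the fs resource took, with something along the lines of:
global# zonecfg -z <my-zone> info fs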
On the Global Zone unmount the newly created file system, reboot the Non Global Zone and delete the now defunct mount point:
umount /<folder name>
zoneadm -z <my-zone> reboot
rm -r /<folder name>
Log in to the Non Global Zone and run the mount command to check the file system is mounted read/write.
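One quick way to do that from the global zone, for example:
# zlogin <my-zone> mount | grep <folder name>
The mount options shown for the file system should include read/write.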
To remove a file system:
global# zonecfg -z <my-zone>
zonecfg:my-zone> remove fs dir=/<folder name>
zonecfg:my-zone> verify
zonecfg:my-zone> commit
zonecfg:my-zone> exit