Had a bit of a problem when one of the guys restored a file to the /tmp directory instead of its original location.
The ownership of /tmp changed to the restored folder's owner, which had a knock-on effect on some scripts.
To fix the problem the following commands were applied:
# chmod 1777 /tmp
# chown root:root /tmp
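For reference, mode 1777 is world-writable with the sticky bit set, so users can only delete their own files. A quick sketch of what that looks like, using a scratch directory rather than the real /tmp:

```shell
# Demonstrate mode 1777 (sticky bit) on a scratch directory,
# so we don't have to touch the real /tmp.
mkdir -p /tmp/stickydemo
chmod 1777 /tmp/stickydemo
# The trailing 't' in the mode shows the sticky bit is set.
ls -ld /tmp/stickydemo | cut -c1-10
# drwxrwxrwt
rmdir /tmp/stickydemo
```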
Showing posts with label Solaris.
Thursday, 19 December 2013
Monday, 16 December 2013
Solaris - ZFS quota
Setting up a ZFS quota:
1. Create mount point
# zfs create -o mountpoint=/mp rpool/mp
2. Set a 5 GB quota
# zfs set quota=5G rpool/mp
The mount is now created with a quota.
What happens if you now want to increase the quota?
Increasing the quota to 20 GB:
# zfs set quota=20G rpool/mp
At some point you may want to get rid of the mount point and its quota (note this destroys the dataset and any data in it):
# zfs destroy rpool/mp
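You can confirm the quota and how much of it is used with zfs list. Since a ZFS box isn't always to hand, here's the check sketched with the command's output stubbed (on a real system you would run zfs list -o name,used,quota rpool/mp):

```shell
# On a real system: zfs list -o name,used,quota rpool/mp
# Stubbed output (assumption: zfs unavailable in this sketch):
line="rpool/mp  1.2G  20G"
# Pull out the used and quota columns.
echo "$line" | awk '{print "used=" $2 " quota=" $3}'
# used=1.2G quota=20G
```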
Wednesday, 30 January 2013
Solaris - recover a root password in a local zone
Someone managed to reset the root password of a local zone incorrectly, which left us with a machine we couldn't log in to as root......
To fix it log in to the global zone as root.
Edit the shadow file of the offending local zone (e.g. Local zone is called LZ01)
# vi /zones/LZ01/root/etc/shadow
Edit the root entry in the shadow file like so:
root::15435::::::
Then save the file (Esc, then :wq!).
Log in to the console of the local zone from the global zone:
# zlogin -C LZ01
Log in as root (which now just logs you in without prompting for a password).
Reset the root password.
# passwd root
Follow the prompts.......
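For context, each shadow entry is colon-separated: username, password hash, last-change date, then the ageing fields. Blanking the second field is what removes the password. A quick illustration of the fields, using the example entry from above:

```shell
# Illustrate the shadow(4) fields for the blanked root entry.
entry="root::15435::::::"
# Field 2 is the password hash; empty means no password is required.
echo "$entry" | awk -F: '{print "user=" $1 " hash=[" $2 "] lastchg=" $3}'
# user=root hash=[] lastchg=15435
```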
Wednesday, 19 December 2012
Solaris - Find a process using a specific port
One of the DBAs put together a useful script that finds which process is using a given port number.
Background:
From a netstat the DBAs found that an unknown process was hogging a port that another application they were installing wanted to use.
Netstat syntax:
# netstat -a | grep <port-number>
Process-finding script:
#!/bin/bash
# Find the process which listens on a port
# $1 is the port we are looking for
if [ $# -lt 1 ]
then
    echo "Please provide a port number parameter for this script"
    echo "e.g. $0 22"
    exit 1
fi
echo "Grepping for your port, please be patient (CTRL+C breaks) ... "
for i in `ls /proc`
do
    pfiles $i 2>/dev/null | grep AF_INET | grep $1
    if [ $? -eq 0 ]
    then
        echo "Is owned by pid $i"
    fi
done
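The script hinges on grep's exit status: $? is 0 only when the port appears in the pfiles output for that pid. The pattern can be illustrated without Solaris pfiles by feeding grep a sample line (the sockname line below is made up):

```shell
# Illustrate the $? check used above with a sample pfiles-style line
# (no pfiles needed; the sockname line is just an example).
echo "sockname: AF_INET 0.0.0.0  port: 22" | grep AF_INET | grep 22 > /dev/null
if [ $? -eq 0 ]; then
    echo "match"
fi
# match
```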
Wednesday, 12 December 2012
Solaris - ZFS disk failure reporting
Our previous servers had Solaris 9 running and when we replaced the hardware we also moved to Solaris 10.
On the old servers they had a SVM script, running regularly via crontab, which monitored disk status. Any failure messages would be emailed to a shared monitored mailbox.
After a bit of digging around it didn't make sense to run the same script on the new servers, as we had also switched to ZFS. Instead we could monitor the status of the pools (the disks are in mirrored pairs) using the zpool status command.
Steps:
1. Create the script.
2. Configure the mail relay.
3. Schedule the script frequency in crontab.
1. Create the script
# vi /usr/local/zfscheck
#!/usr/bin/ksh
zpool status -x | grep 'all pools are healthy'
if [ $? -ne 0 ]; then
date > /var/tmp/zfscheck.log
echo >> /var/tmp/zfscheck.log
hostname >> /var/tmp/zfscheck.log
echo >> /var/tmp/zfscheck.log
zpool status -xv >> /var/tmp/zfscheck.log
cat /var/tmp/zfscheck.log | mail -s "Disk failure in server : `hostname`" name@mailaddress
fi
(save and exit)
# chmod +x /usr/local/zfscheck
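The alert logic can be exercised without a ZFS box by stubbing the zpool status -x output: healthy pools print "all pools are healthy", and anything else trips the mail branch.

```shell
# Stub of the health check above: replace `zpool status -x`
# with a canned unhealthy status to show the alert path firing.
status="pool: rpool
 state: DEGRADED"
echo "$status" | grep 'all pools are healthy' > /dev/null
if [ $? -ne 0 ]; then
    echo "would log and mail the failure"
fi
# would log and mail the failure
```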
2. Configure the mail relay
Edit /etc/mail/sendmail.cf and set the smart relay host line:
# vi /etc/mail/sendmail.cf
Before:
# "Smart" relay host (may be null)
DS
After:
# "Smart" relay host (may be null)
DSmailrelay.yourdomain.com
(On our Exchange front ends we added the server's IP address to the SMTP node to allow relaying; by default Exchange denies relaying.)
Restart the sendmail service:
# svcadm restart sendmail
3. Schedule the task to run every 30 minutes
To set an editor for crontab:
# bash
# export EDITOR=vi
# crontab -e
NOTE: If you are in the default shell (Bourne) then you have to use the following to be able to edit with vi:
# EDITOR=vi
# export EDITOR
Edit crontab with the following setting then exit and save:
# ZFS pool check
0,30 * * * * /usr/local/zfscheck
Check that the entry has taken:
# crontab -l
I checked that the whole thing worked by setting it all up on a test server and then pulled a drive out. After a few minutes I got an email!
Monday, 10 December 2012
Solaris - Searching for growing files
Still being quite new to Solaris and *NIX variants, we hit a minor issue when one of the servers reported its disk nearly full, and we thought: how do we find out what's filling the space up?
The application team wanted to know which files were involved (system or application?), so after a bit of hunting around and testing we came up with:
# find / -type f -mount -size +100000000c -mtime -1
(look for files on the local filesystem that are larger than about 100 MB and have been modified in the last 24 hours)
You'll get an output giving the file path and name of the file which you can then go and check how big the actual file is:
# cd /location
# ls -hal
In this case it was a log (not in /var/adm) which was logging a failed service.
We stopped the service, deleted the log (after having a quick peek to see what was up), checked and fixed the issue, then restarted the service.
We found the files causing the issues and took remedial action and disk usage went down from 97% to 78%, the lowest it had been since the server was installed.
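The same find pattern can be tried safely on a scratch directory; here with a 1,000,000-byte threshold so the test files stay small (the /tmp/finddemo paths are just for illustration):

```shell
# Create a scratch area with one big and one small file,
# then find files over 1,000,000 bytes modified in the last day.
mkdir -p /tmp/finddemo
dd if=/dev/zero of=/tmp/finddemo/big.log bs=1024 count=2000 2>/dev/null
dd if=/dev/zero of=/tmp/finddemo/small.log bs=1024 count=10 2>/dev/null
find /tmp/finddemo -type f -size +1000000c -mtime -1
# /tmp/finddemo/big.log
rm -rf /tmp/finddemo
```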
On the same theme to find directories that are large then:
# cd /location
# du -dks * | sort -n
From the above output you can see which directories are the largest and go hunting (cd into one of the offending large directories and repeat the command).
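A portable version of the directory hunt (GNU and BSD du don't take Solaris's -d flag, so plain -ks is used here; the demo directories are invented):

```shell
# Build two directories of different sizes and rank them by usage.
mkdir -p /tmp/dudemo/bigdir /tmp/dudemo/smalldir
dd if=/dev/zero of=/tmp/dudemo/bigdir/file bs=1024 count=500 2>/dev/null
dd if=/dev/zero of=/tmp/dudemo/smalldir/file bs=1024 count=5 2>/dev/null
cd /tmp/dudemo
# Largest directory sorts last.
du -ks * | sort -n
cd /
rm -rf /tmp/dudemo
```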
Thursday, 15 November 2012
Solaris - checking whether an account is locked or not
Recently we needed to check the status of an account on one of the servers because of an access problem.
Was it locked out? How do we check from a terminal session?
# passwd -s <account_name>
This returns the account's status. In this instance it came back with:
<account_name> LK
Status information:
PS = a normal working account.
LK = locked out account.
NP = account has no password.
Okay, so the account is locked. How do I unlock it?
# passwd -u <account_name>
Account is now unlocked - now to find the script that locked the account in the first place......
NOTE: If you want to lock the account on purpose
# passwd -l <account_name>
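The flag in the passwd -s output maps to the three states above; a small helper function (hypothetical, just for illustration) that translates the flag:

```shell
# Hypothetical helper: translate a passwd -s status flag
# into a human-readable description.
describe_status() {
    case "$1" in
        PS) echo "normal working account" ;;
        LK) echo "locked account" ;;
        NP) echo "no password set" ;;
        *)  echo "unknown status" ;;
    esac
}
describe_status LK
# locked account
```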
Friday, 9 November 2012
Solaris - Projects
Had to apply a Solaris project to an account so that Oracle could be installed; 4 GB of shared memory was required:
Steps:
1. Create the user account (oracle)
2. Apply project settings
# projadd -U oracle -K "project.max-shm-memory=(privileged,4G,deny)" user.oracle
# projmod -s -K "project.max-sem-nsems=(priv,256,deny)" user.oracle
# projmod -s -K "project.max-sem-ids=(priv,100,deny)" user.oracle
# projmod -s -K "project.max-shm-ids=(priv,100,deny)" user.oracle
3. Check settings
# projects -l
or
# cat /etc/project
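An /etc/project entry is colon-separated: project name, project ID, comment, users, groups, then the attributes. Parsing a sample line (the line below is an invented example in that format, not output from a real system):

```shell
# Parse a sample /etc/project line into its fields.
# (Sample entry only; the real file lives at /etc/project.)
line="user.oracle:100::oracle::project.max-shm-memory=(privileged,4294967296,deny)"
echo "$line" | awk -F: '{print "project=" $1 " id=" $2 " users=" $4}'
# project=user.oracle id=100 users=oracle
```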
I found a useful article where they applied the same settings but to a group.