Wednesday, 19 December 2012

Solaris - Find a process using a specific port

One of the DBAs put together a useful script which searches for the process that is using a given port number.

Background:

From a netstat the DBAs found that an unknown process was hogging a port number that another application they were installing wanted to use.

Netstat syntax:

# netstat -a | grep <port-number>
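Two small refinements help here: -an keeps the output numeric (so port 22 shows as .22 rather than being resolved to "ssh"), and anchoring the grep pattern on the dot separator and a trailing space avoids matching other ports that merely contain the same digits. A quick sketch of the pattern against a made-up line of Solaris netstat output:

```shell
# netstat -an | grep "\.2049 "  would be the real invocation; here we
# test the pattern against an illustrative sample line instead.
sample='*.2049                *.*                0      0 49152      0 LISTEN'
echo "$sample" | grep "\.2049 "     # matches port 2049
echo "$sample" | grep "\.204 " || echo "no match for port 204"
```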


Process finding script:



#!/bin/bash
# Find the process which has the given port open.
# $1 is the port we are looking for.

if [ $# -lt 1 ]
then
    echo "Please provide a port number parameter for this script"
    echo "e.g. $0 22"
    exit 1
fi

echo "Grepping for your port, please be patient (CTRL+C breaks) ... "
for i in `ls /proc`
do
    # Anchor on the "port:" field so port 22 doesn't also match 2200;
    # discard pfiles errors from processes we can't inspect.
    pfiles $i 2>/dev/null | grep AF_INET | grep "port: $1\$"
    if [ $? -eq 0 ]
    then
        echo "Is owned by pid $i"
    fi
done
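pfiles reports TCP/UDP endpoints with lines like `sockname: AF_INET 0.0.0.0  port: 22`, so anchoring the match on the `port:` field keeps a search for port 22 from also matching port 2200 or an address that happens to contain "22". A quick check of that pattern against a made-up pfiles line:

```shell
# Illustrative sample of a pfiles sockname line (not captured output):
sample='      sockname: AF_INET 0.0.0.0  port: 22'
echo "$sample" | grep "port: 22\$"                          # matches
echo "$sample" | grep "port: 2200\$" || echo "no match for 2200"
```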

Wednesday, 12 December 2012

Solaris - ZFS disk failure reporting

Our previous servers had Solaris 9 running and when we replaced the hardware we also moved to Solaris 10.

On the old servers they had a SVM script, running regularly via crontab, which monitored disk status. Any failure messages would be emailed to a shared monitored mailbox.

After a bit of digging around it didn't make sense to run the same script on the new servers, as we had also switched to ZFS. Instead we could monitor the status of the ZFS pools (the disks are in pairs and therefore mirrored) using the zpool status command.

Steps:

1. Create script.
2. Configure mail relay.
3. Schedule script frequency in crontab.

1. Create script


# vi /usr/local/zfscheck


#!/usr/bin/ksh
# Silence the grep so cron doesn't mail stdout on healthy runs.
zpool status -x | grep 'all pools are healthy' > /dev/null
if [ $? -ne 0 ]; then
    date > /var/tmp/zfscheck.log
    echo >> /var/tmp/zfscheck.log
    hostname >> /var/tmp/zfscheck.log
    echo >> /var/tmp/zfscheck.log
    zpool status -xv >> /var/tmp/zfscheck.log
    mail -s "Disk failure in server : `hostname`" name@mailaddress < /var/tmp/zfscheck.log
fi


(save and exit)

# chmod +x /usr/local/zfscheck
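The script's healthy/unhealthy branch can be exercised on any machine, without real pools or pulled drives, by stubbing zpool with a shell function. The stub and its output below are made up for illustration; only the grep-and-branch logic is taken from the script:

```shell
# Hypothetical stub standing in for the real zpool command:
zpool() {
    if [ "$SIMULATE_FAILURE" = "yes" ]; then
        echo "pool: rpool  state: DEGRADED"
    else
        echo "all pools are healthy"
    fi
}

# The same test the script performs:
check() {
    zpool status -x | grep 'all pools are healthy' > /dev/null
    if [ $? -ne 0 ]; then
        echo "would send failure mail"
    else
        echo "pools healthy, no mail"
    fi
}

SIMULATE_FAILURE=no
check                      # prints: pools healthy, no mail
SIMULATE_FAILURE=yes
check                      # prints: would send failure mail
```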


2. Configure mail relay


Edit the sendmail.cf file with your mail relay information by editing the line:
# "Smart" relay host (may be null)
DS

# vi /etc/mail/sendmail.cf

# "Smart" relay host (may be null)
DSmailrelay.yourdomain.com

(On our Exchange Front Ends we edited the SMTP node with the server IP address to enable relay. By default Exchange is set to deny relaying.)

Restart the sendmail service:
# svcadm restart sendmail

3. Schedule the task to run every 30 minutes


To set an editor for crontab:
# bash
# export EDITOR=vi
# crontab -e

NOTE: If you are in the default (Bourne) shell then you have to use the following to be able to edit using vi:
# EDITOR=vi
# export EDITOR


Edit crontab with the following setting then exit and save:

# ZFS pool check
0,30 * * * * /usr/local/zfscheck


Check that the entry has taken:
# crontab -l

I checked that the whole thing worked by setting it all up on a test server and then pulled a drive out. After a few minutes I got an email!



 

Monday, 10 December 2012

Solaris - Searching for growing files

Still being quite new to Solaris and *NIX variants, we hit a minor issue when one of the servers reported its disk space as nearly full, and we thought "How do we find out what's filling the space up?".

The application team wanted to know what files were involved - was it system or application? After a bit of hunting around and testing we came up with:

# find / -type f -mount -size +100000000c -mtime -1

(look for regular files on the local filesystem that are larger than 100,000,000 bytes, roughly 100 MB, and were modified within the last day)
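The predicates can be tried out safely in a scratch directory with a smaller threshold before running them against /. The directory, file names, and the +1000000c (~1 MB) cutoff below are made up for the demonstration; the article's real command uses +100000000c:

```shell
# Create a scratch directory with one large and one small file:
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/big.log" bs=1024 count=2048 2>/dev/null   # ~2 MB
echo "tiny" > "$demo/small.log"

# Same predicates as the real command, smaller threshold;
# only big.log is over 1,000,000 bytes and modified today:
find "$demo" -mount -type f -size +1000000c -mtime -1

rm -rf "$demo"
```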

The output gives the path and name of each matching file, which you can then inspect to see how big it actually is:

# cd /location
# ls -hal

In this case it was a log (not in /var/adm) which was logging a failed service.

Stopped the service, deleted the log (after having a quick peek to see what was up), checked and fixed the issue, then restarted the service.

We found the files causing the issues, took remedial action, and disk usage went down from 97% to 78% - the lowest it had been since the server was installed.

On the same theme, to find directories that are large:

# cd /location
# du -dks * | sort -n

From the above output you can see which directories are the largest and go hunting (cd into one of the offending large directories and repeat the command).
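On Solaris, du -d keeps the scan on the local filesystem (the Linux equivalent is -x). The pattern itself can be sketched in a scratch directory; the directory and file names below are made up, and -d is dropped so the sketch also runs on non-Solaris systems:

```shell
# Scratch directory with one big and one small subdirectory:
demo=$(mktemp -d)
mkdir "$demo/logs" "$demo/conf"
dd if=/dev/zero of="$demo/logs/app.log" bs=1024 count=512 2>/dev/null  # ~512 KB
echo "small" > "$demo/conf/app.conf"

# Sizes in KB, smallest first - the last line is the biggest directory:
cd "$demo"
du -ks * | sort -n
cd - >/dev/null

rm -rf "$demo"
```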