Welters Helpful Unix Administrative Scripts


Over the years I have used many scripts to perform various tasks. Some of those scripts came from the the-welters website. There you will find some great Perl and shell scripts for tasks ranging from displaying Cisco CDP packet information to running symaccess commands on EMC storage arrays.

These scripts are distributed under standard GNU General Public License terms. You are free to use and distribute them, provided you preserve the attribution comments in them. They are not that sophisticated, but if you find them useful or have suggestions for improvement, please email unix@the-welters.com and let the author know what you think. Most of these are written in Bourne shell.

  • cdpinfo – display Cisco CDP packet info via tcpdump or snoop
  • createVgCloneScript – duplicate a volume group, volume, and filesystem structure
  • dimmslots – display the dimm slot arrangement on an IBM Power System
  • lvinfo – a script to help document Logical Volume Manager information on an AIX system.
  • wackVG – Delete the contents of a volume group, including all files and volumes.
  • pstree – this script analyzes “ps” command output and displays the parent / child relationships of the running processes.
  • dumpall – a set of scripts to backup Solaris and Digital Unix systems.
  • vgbackup – a very basic script for doing backups on AIX systems.
  • mtfcount – count the file marks on a tape.
  • vmlog – a script that records time stamped vmstat output to a file.
  • hogs – A script to show the top CPU or Memory users on a system.
  • rex – An rexec client for executing commands on remote systems.
  • syslog maintenance scripts.
  • webcat and webload – These two scripts can be used to fetch web pages from the command line or generate system load on web servers.
  • logroll – A script to manage old log files.
  • format_clone – A script to copy Solaris disk format layouts from one disk to another.
  • inetd_cleanup – A script to tighten up the security of Solaris inetd configurations.
  • ping_scan – A script to ping IP addresses in sequence.
  • hostbyname – Get IP address for a host name via gethostbyname
  • hostbyaddr – Reverse lookup of hostnames by IP address, using gethostbyaddr
  • hostxcheck – Forward and reverse lookup cross check
  • rc4 – A perl implementation of the “crypt” rc4 encryption command.
  • repack – Recreate a Solaris package file from an installed package
  • makePackage – create a simple package through simple prompts.
  • sh_ex – This Bourne shell script doesn’t do anything, it just has a series of syntax examples to jog your memory when coding.
  • perl_ex – This Perl script doesn’t do anything, it is just a series of syntax examples like sh_ex.

Storage oriented scripts

  • emc-maskrep – VMax / symaccess masking view report
  • emc-fastcfg – Display VMax FAST VP configuration
  • sw-alicheck – Brocade status and alias checking script
  • sw-savecfg – Backup brocade switch configurations

dimmslots

Display the layout and size of dimm slots on an IBM Power or pSeries server. This uses the lscfg -vp command to display the hardware config. You might want to run this command when planning hardware upgrades to make sure that you have enough slots to install additional memory.
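The parsing itself is simple pattern matching on `lscfg -vp` output. A minimal awk sketch of the same idea, run against a canned fragment of that output since `lscfg` only exists on AIX (the FRU, size, and location values below are fabricated):

```shell
# Total up DIMM count and memory size from canned "lscfg -vp" style output.
# The records below are fabricated; on a real system you would pipe in
# "lscfg -vp" itself.
sample='  Memory DIMM:
        FRU Number..................77P6500
        Size........................4096
        Physical Location: U78A0.001.AAA1234-P1-C14
  Memory DIMM:
        FRU Number..................77P6500
        Size........................4096
        Physical Location: U78A0.001.AAA1234-P1-C15'

summary=$(printf '%s\n' "$sample" | awk '
    /Memory DIMM/ { count++ }                                # one record per DIMM
    /Size/        { n = split($0, f, "[.]"); total += f[n] } # size is the last field
    END           { printf "%d DIMMs %d MB\n", count, total }')
echo "$summary"
```

The Perl script does the same dot-splitting to pull the size and FRU out of the dotted filler that `lscfg` prints.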

#!/usr/bin/perl
#
# Display the DIMM slots in use on a system, provide a total
# slot count, memory count.  This script uses the lscfg -vp
# command to provide the data.  Output parsing depends on the
# order of the output of the lscfg command.  This script
# was tested under AIX 5.3 and 6.1, on Power5 P520, P550, P570,
# Power6 P570 systems and power7 p770s.  Your mileage may vary.
#
# This script should report correctly for the whole system even
# in LPARed environments.
#
# Andy Welter
# V1.0 - 2008
#

open (LSCFG, "lscfg -vp |") || die "cannot run lscfg\n";

#
# Scan for Memory DIMM lines, then get the location and size.
# break out of the inner loop when you get the size.
$totsize=0;
$count=0;
print "Location                         FRU             SIZE\n";
print "-----------------------------------------------------\n";
while ($_=<LSCFG>) {
	chomp;
	if (m/memory dimm/i) {
		$count++;
		#
		# Get the location and size for the DIMM;
		while ( $_ = <LSCFG>) {
			chomp;
			if (m/fru number/i) {
				@list=split/\./;
				$index=@list;
				$fru=$list[$index-1];
			} elsif (m/size/i) {
				@list=split /\./,$_;
				$index=@list;
				$size=$list[$index-1];
				$totsize+=$size;
			} elsif (m/physical location/i) {
				@list=split/:/,$_;
				$location=$list[1];
				@list=split/-/,$location;
				$cec=$list[0];
				if ($cec eq $prevcec) {
					$cecmb+=$size;
				} else {
					if ( $prevcec ne "" ) {
						print "CEC Total: $cectotal DIMMs $cecmb MB\n";
					};
					$cectotal=0;
					$cecmb=$size;
					$prevcec=$cec;
				};
				$cectotal++;
				print "$location\t$fru\t$size\n";
				last;
			};
		};
	};
};
print "CEC Total: $cectotal DIMMs $cecmb MB\n";
print "\nSystem Total: $count DIMMs $totsize MB\n";

Platforms supported: IBM AIX 5.x and 6.1


lvinfo

This script is handy for documenting the logical volume manager configuration for an AIX system. Having this information all in one place can be useful when trying to analyze disk utilization. It can also be valuable as part of the disaster recovery documentation for a system.
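The report is just labeled command output. The repeated header printing could be factored into a small helper like this (a sketch, not part of the original script; the `section` function name is made up):

```shell
# Print a section header in the same style lvinfo uses, then the command
# output below it.  "section" is a hypothetical helper, not in the script.
section () {
    printf "\n\n%s\n" "$1"
    printf "==========================\n"
}

section "VOLUME GROUPS:"
echo "rootvg"      # stand-in for real "lsvg" output
```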

#!/bin/ksh
#
# Simple script to document LVM configurations.
#
# Andy Welter
# www.the-welters.com
#
exec 2>&1
printf "AIX DISK AND LVM INFORMATION\n"
printf "*********************************************************\n"

printf "\nDF\n"
printf "==========================\n"
df -k

printf "\nVOLUME GROUPS:\n"
printf "==========================\n"
lsvg

printf "\n\nPHYSICAL VOLUMES:\n"
printf "==========================\n"
lspv

printf "\n\nPVs BY VOLUME GROUP\n"
printf "==========================\n"
lsvg | while read VG; do
	VGLIST="$VGLIST $VG"
	printf "\n$VG\n"
	printf "--------------------------\n"
	lspv | grep $VG
done

printf "\n\nPV INFORMATION:\n"
printf "==========================\n"
lspv | while read PV; do
	printf "\n$PV\n"
	printf "--------------------------\n"
	lspv $PV
done

printf "\n\nVG INFORMATION\n"
printf "==========================\n"
for VG in $VGLIST; do
	printf "\n$VG\n"
	printf "--------------------------\n"
	lsvg $VG
	lsvg -l $VG
done

printf "\n\nLV INFORMATION\n"
printf "==========================\n"
for VG in $VGLIST; do
	printf "\nVolume Group: $VG\n"
	printf "--------------------------\n"
	lsvg -l $VG | egrep -v "^$VG:" | egrep -v "^LV NAME" | while read LV JUNK;
do
		printf "\nLogical Volume: $LV\n"
		printf "--------------------------\n"
		lslv $LV
	done
done

Platforms supported: IBM AIX 4.x


cdpinfo

Display Cisco Discovery Protocol packet information via tcpdump or snoop. This information includes the name of the network switch the network interface is connected to, plus the port number, vlan, and duplex of the port.
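Under the hood the script walks CDP's type/length/value fields: two bytes of type, two bytes of total field length (the length includes the 4 type/length bytes themselves), then the value. A sketch of that walk over a fabricated TLV byte stream; the type codes match the table in the script, but the switch and port names are made up:

```shell
# Walk a fabricated CDP-style TLV stream: 2-byte type, 2-byte length
# (length includes the 4 type/length bytes), then the value bytes.
# 0x0001 = Device-ID, 0x0003 = Port-ID, per the table in cdpinfo.
bytes='00 01 00 0a 73 77 69 74 63 68 00 03 00 08 47 69 2f 31'

decoded=$(
    set -- $bytes
    while [ $# -ge 4 ]; do
        type=$(( 0x$1 * 256 + 0x$2 ))
        len=$((  0x$3 * 256 + 0x$4 ))
        shift 4
        val=''
        i=4
        while [ $i -lt $len ]; do
            # turn each hex value byte back into its ASCII character
            val="$val$(printf "\\$(printf '%03o' $(( 0x$1 )))")"
            shift
            i=$(( i + 1 ))
        done
        echo "type=$type len=$len value=$val"
    done
)
echo "$decoded"
```

This is the same loop `decodePacket` runs with `unpack` and `substr`, just spelled out in shell on hand-made data.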

#!/usr/bin/perl
#
# Listen for Cisco Discovery Protocol (CDP) packets
# and print out key values such as switch, port, vlan, and duplex.
#
# This script depends on either "snoop" (Solaris) or
# "tcpdump" (AIX, and others).  Both of those programs generally
# must be run as root.
#
# It has been tested on Solaris 10 and AIX 5.3.
#
# Andy Welter
# Version 1.1
# July 2007
#	Support timeout values while waiting on the cdp packet.
# Version 1.0
# December 2006
#	Initial Version.
#
#

$usage="cdpinfo -i <ethernet interface> [-t timeoutvalue] [-v]\n-i use the enX device name for the interface to watch\n-t timeout value in seconds.  Don't wait for a cdp packet longer than this. default is 60 seconds.  zero means no limit.\n-v verbose output\n";
use Getopt::Std;
if ( getopts ('i:t:v') == 0) {
	print "$usage";
	exit 1;
};

$idev=$opt_i;
if ($opt_i) {
	$iface="-i $opt_i";
};
$verbose=$opt_v;
$timeout=$opt_t;

#
# convert string data to hex characters.
#
sub hexprint {
        my ($string)=@_;
        my $hex="";
        my ($ii, $len);
        $len=length ($string);
        $ii=0;
        @bytes=unpack "C*",$string;
        foreach $byte (@bytes) {
            $hex=$hex . sprintf "%02x ",$byte;
            $ii++;
        };
return $hex;
};

#
# Parse TCP dump output to acquire a CDP packet
sub tcpdump {
my ($cmd)=@_;
# tcpdump omits the first 14 bytes of a packet in the hex dump
# so put some filler in the packet string
my $packet="01234567890123";

open (GETPACKET, "$cmd") || die "cannot open $cmd\n";
while ( $_ = <GETPACKET> ) {
	chomp;
	#
 	# look for a line that starts with white space, followed by at least
	# 2 hex characters
	if (m/^\s+([\da-fA-F]+ )/) {
		s/^\s+//;
		@data=split /\s+/,$_,8;
		foreach $bytes (@data) 	{
			$verbose && print "$bytes ";
			$packet=$packet . pack "H4", $bytes;
		};
		$verbose && print "\n";
	} ;
};
close GETPACKET;
return $packet;
};

#
# Parse "snoop" output for the packet
sub snoop  {
my ($cmd)=@_;
my $packet="";
open (GETPACKET, "$cmd") || die "cannot open $cmd\n";
while ( $_ = <GETPACKET> ) {
	chomp;
	print "-- $_\n";
	if (/^\s+\d+:/) {
		s/^\s+//;
		@data=split /\s+/,$_,10;
		shift @data;
		pop @data;
		foreach $bytes (@data) 	{
			$packet=$packet . pack "H4", $bytes;
		};
	};
};
close GETPACKET;
return $packet;
};

#
# Parse the acquired CDP packet for key values.
#
sub decodePacket {
my ($packet)=@_;
my ($plen,$string,$ii,$flength,$switchName,$switchPort,$ftype,$vlan,$duplex);
# decode the packet
# ethernet layout:
# 0-7   8 byte preamble
# 8-13  6 byte dest mac addr
# 14-19 6 byte source mac addr
# 20-21 2 byte type field
# 22-23 2 byte check sum
# 24-25 2 byte ???
# 26-27 2 byte first CDP data field
# 28-29 2 byte field length (including field type and length)
# 30--  Variable data.
#       4 byte CRC field.
#
# Field type indicators
# Device-ID  => 0x01
# Version-String  => 0x05
# Platform  => 0x06
# Address  => 0x02
# Port-ID  => 0x03
# Capability  => 0x04
# VTP-Domain  => 0x09
# VLAN-ID  => 0x0a
# Duplex  => 0x0b
# AVVID-Trust  => 0x12
# AVVID-CoS  => 0x13

$verbose && printf "packet len=%d\n",length($packet);
#
# The CDP packet data starts at offset 26
$ii=26;
$plen=length ($packet);
while ( $ii < $plen-4) {
        $ftype=unpack "S", substr ($packet, $ii, 2);
        $flength=unpack "S", substr ($packet, $ii+2, 2);
	if ( $ftype == 1 ) {
		$switchName=substr ($packet,$ii+4,$flength-4);
	} elsif ( $ftype == 3 ) {
		$switchPort=substr ($packet,$ii+4,$flength-4);
	} elsif ( $ftype == 10 ) {
		$vlan=unpack "s",substr ($packet,$ii+4,$flength-4);
	} elsif ( $ftype == 11 ) {
		$duplex=unpack "c",substr ($packet,$ii+4,$flength-4);
	};
	$string=substr ($packet,$ii+4,$flength-4);
	$fvalue=hexprint ($string);
	$string=~s/\W/./g;
        $verbose && printf "\noffset=%d, type 0x%04x, length 0x%04x\nHex Value:\n%s\nASCII value:\n%s\n\n",
		$ii,$ftype, $flength-4,$fvalue,$string;
	if ($flength == 0 ) {
		$ii=$plen;
	};
        $ii=$ii+$flength;
};
return sprintf "\"%s\",\"%s\",\"%d\",\"0x%02x\"",
	$switchName,$switchPort,$vlan,$duplex;
};

#
# MAIN ROUTINE
#
# determine whether we are a snoop or tcpdump kinda system
$cmd=`which tcpdump`;
chomp $cmd;
if ( $cmd ne "" ) {
	$cmd= "$cmd $iface -s 1500 -x -c 1 'ether [20:2] = 0x2000' 2>/dev/null |";
} else {
	$cmd=`which snoop`;
	chomp $cmd;
	if ( $cmd ne "" ) {
		$cmd="$cmd $iface -s 1500 -x0 -c 1 'ether[20:2] = 0x2000' 2>/dev/null |";
	} else {
		print "ERROR: neither snoop nor tcpdump in my path\n";
		exit 1;
	};
};

sub timeout {
	die "TIMEOUT";
};
$SIG{ALRM}=\&timeout;

eval {
alarm ($timeout);
#
# use tcpdump or snoop to get a CDP packet
if ( $cmd=~m/snoop/ ) {
	$packet=snoop ($cmd);
} elsif ( $cmd=~m/tcpdump/ ) {
	$packet=tcpdump($cmd);
} else {
	print "ERROR: snoop or tcpdump not found\n";
	exit 1;
};
alarm(0);
};
if ($@ =~ "TIMEOUT") {
	$packet="";
};

#
# Decode the acquired packet and print the results.
print  '"' . $idev . '",' . decodePacket ($packet) . "\n";

Platforms supported: IBM AIX, Sun Solaris


createVgCloneScript

This script can be used in disaster recovery to rebuild the non-rootvg volume groups and volumes on a system. Its output is another script that runs the mklv and crfs commands needed to duplicate a system's file system structure. It needs to be run prior to a disaster in order to produce the volume creation script, and the volume creation script requires some manual editing to adapt it to the specific hardware being used for recovery: in a real recovery the disk names and sizes will differ from the original system configuration, so the mklv commands must be edited by hand.
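One subtlety the generated script handles at the end: mount points have to be created and mounted parent-before-child, which falls out of a plain lexicographic sort of the mount paths. A quick illustration with fabricated paths:

```shell
# Sorting mount points lexicographically guarantees /data is created and
# mounted before /data/logs, and so on down the tree (paths fabricated).
sorted=$(printf '%s\n' /data/logs /home /data /data/logs/archive | sort)
echo "$sorted"
# prints /data, /data/logs, /data/logs/archive, /home in that order
```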

#!/usr/bin/perl
#
# This script is used in disaster recovery to rebuild the
# non-rootvg volume groups and volumes on a system. The output
# of this script is a script that will run the required mklv and
# crfs commands needed to duplicate a system's file system structure.
#
# This script needs to be run prior to a disaster in order to
# produce the volume creation script. The volume creation script
# requires some manual editing in order to adapt to the specific
# hardware being used for recovery.  In the event of a real recovery
# the disk names and sizes will differ from the original system
# configuration, which requires that the mklv commands be manually
# edited.
#
# Andy Welter - ajw8
# MeadWestvaco
# February 2005
# Version 1.4
# v1.4 NOTE: use "lsfs" to get nbpi, frag size, bf, and other file system parms.
# v1.3 NOTE: added jfslog and jfs2log volume support.  Fixed $lps vs $pps bug.
# v1.2 NOTE: fixed file system type error with mklv.  Changed output comments.
#
sub getfsparms {
#
# Use lsfs to determine the parameters used in creating the file system.
my ($fsname)=@_;
my $commandOptions="";
my $parms=`lsfs -cq $fsname | tail -1`;

chomp $parms;
$parms=~s/[\(\)]//g;
@parmList=split /:/,$parms;
foreach $option (@parmList) {
	@parsed=split /\s+/,$option;
	if ($parsed [0] eq "frag" ) {
		$commandOptions=$commandOptions . " -a frag=$parsed[2]";
	} elsif ( $parsed [0] eq "nbpi" ) {
		$commandOptions=$commandOptions . " -a nbpi=$parsed[1]";
	} elsif ( $parsed [0] eq "compress" ) {
		$commandOptions=$commandOptions . " -a compress=$parsed[1]";
	} elsif ( $parsed [0] eq "ag" ) {
		$commandOptions=$commandOptions . " -a ag=$parsed[1]";
	} elsif ( $parsed [0] eq "logname" ) {
		$commandOptions=$commandOptions . " -a logname=$parsed[1]";
	};
};
return $commandOptions;
};

$scriptOutput=$ARGV[0];

if ( $scriptOutput ne "" ) {
    if ( -f $scriptOutput ) {
        rename ($scriptOutput,"$scriptOutput.old");
    };
    open (SCRIPT, ">$scriptOutput") ||
        die "cannot write output file $scriptOutput\n";
    select SCRIPT;
};

open (LSVG, "lsvg |") || die "cannot get vglist\n";

while ( $_ = <LSVG>) {
    chomp;
    if ( $_ ne "rootvg" ) {
	push (@vglist, $_);
    };
};
close LSVG;

print "#!/usr/bin/ksh\n";
print "##
## This script auto generated by /root/createVgCloneScript
## Edit this script as needed, and run it to recreate
## non-rootvg volume groups and file systems.
##\n";
($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime (time);
printf ("## Created: %02d/%02d/%04d %02d:%02d:%02d \n",
        $mon+1, $mday, $year+1900, $hour, $min, $sec);

print "echo 'script to clone volume groups:'\n";
print "echo \"@vglist\"\n\n";

print "echo 'Create Volume groups:'\n";
print "echo 'edit script to add physical volumes to each mkvg command'\n";
print "echo 'ex: mkvg -f -y myvgname hdisk1 hdisk2'\n";
print "\n";
print "echo 'NOTE: once this script has been edited with the proper hdisks,'\n";
print "echo '      remove the exit statement from the script so it will run'\n";
print "#################################################\n";
print "## REMOVE THIS EXIT AFTER EDITING MKLV COMMANDS:\n";
print "#################################################\n";
print "exit 1\n\n\n";
foreach $vg (@vglist) {
    print "# volume group: $vg\n";
    open (VGINFO, "lsvg $vg |") ||die "cannot get vginfo for $vg\n";
    while ($_ = <VGINFO> ) {
	chomp;

	@info=split (/\s+/);
	if ( m/^VG STATE/ ) {
		$ppSize{$vg}=$info[5];
	} elsif ( m/^ACTIVE PVs/) {
		$vgActive{$vg} = $info[5];
	} elsif ( m/TOTAL PPs/) {
		$vgSize{$vg}="$info[5] $info[6] $info[7]";
	} elsif ( m/USED PPs/) {
		$vgUsed{$vg}="$info[4] $info[5] $info[6]";
	};
    };
    if ( $vgActive {$vg} ne "yes" ) {
	$noStart="-n";
    } else {
	$noStart="";
    };
    print ("#####################\n");
    print ("# CREATE $vg\n");
    print ("#    ORIGINAL SIZE: $vgSize{$vg}\n");
    print ("#    ORIGINAL USED: $vgUsed{$vg}\n");
    print ("# mkvg -f -s $ppSize{$vg} $noStart -y $vg <INSERT HDISK LIST>\n");
    close VGINFO;
};

foreach $vg (@vglist) {
   #
   # Create jfs and jfs2 log volumes.  Need to do this before we
   # create the file systems.
   #
   open (LVLIST, "lsvg -l $vg | tail +3 |") || die "cannot get lvlist for $vg\n";
   while ( $_ = <LVLIST> ) {
	($volname, $type, $lps, $pps, $pvs, $state, $mount) =
		split /\s+/;
	if ($type eq "jfslog" || $type eq "jfs2log") {
		$logType=$type;
		$logType=~s/log//;
		print "#\n# $logType log file.\n";
		print "mklv -t $type -y $volname $vg $lps\n";
		print "logform -V $logType /dev/$volname\n";
	};
   };
   close LVLIST;

   open (LVLIST, "lsvg -l $vg | tail +3 |") || die "cannot get lvlist for $vg\n";
   while ( $_ = <LVLIST> ) {
	($volname, $type, $lps, $pps, $pvs, $state, $mount) =
		split /\s+/;
	if ($type eq "jfs" || $type eq "jfs2" || $type eq "paging") {
		push (@fsList, $mount);
		$lvInfo {$vg} = "$type $lps $mount";
		if ( $type eq "paging" ) {
			print "mkps -s $lps -t lv $vg\n";
		} else {
			$fsParms=getfsparms($mount);
			# convert number of PPs to number of
			# 512 byte blocks
			$fsSize=$lps * $ppSize{$vg} * 2048;
			print "#\n# $mount file system size = $fsSize 512-byte blocks\n";
			print "lsfs $mount 2> /dev/null >&2\n";
			print "if [ \$? -eq 0 ]; then\n\trmfs $mount\nfi\n";
			print "mklv -t $type -y $volname $vg $lps\n";
			print "crfs -v $type -A yes -d $volname $fsParms -m $mount\n"
		};
	};
   };
   close LVLIST;
};
#
# Sort the file systems by mount point name.  Mount points
# need to be created, and hierarchical file systems need to
# be mounted in the proper order in order to create the
# sub directory mount points.

print "\n##\n## Create mount points and mount file systems\n##\n";
@fsList=sort @fsList;
foreach $fs (@fsList) {
    print "if [ ! -d $fs ]; then\n";
    print "    mkdir -p $fs\n";
    print "fi\n";
    print "mount $fs\n";
};

Platforms supported: IBM AIX 5.x


wackVG

This script performs a basic cleanup of a volume group in order to provide a somewhat secure removal of data. This is obviously not DOD or financial-system grade data erasure, but it may be sufficient for many environments when turning systems over to a recycling company or at the end of an offsite disaster recovery test. For each volume in the indicated volume group, the script kills any processes using file systems in the volume group, recursively removes all files in the volume, and removes the file system with rmfs; once all the volumes are removed, it removes the volume group via reducevg. For best results, follow up the deletion of the volume group by creating a new volume group that re-uses the PVs.
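The PV-removal pass at the bottom is just a column match on `lspv` output. The same selection can be sketched with awk over a canned listing (disk names and volume group assignments are fabricated):

```shell
# Select the physical volumes that belong to one volume group from
# canned "lspv" output (column 3 is the owning VG).  Fabricated data.
lspv_out='hdisk0 00c8a12b9d4e11f3 rootvg active
hdisk1 00c8a12b9d4e22a4 datavg active
hdisk2 00c8a12b9d4e33b5 datavg active
hdisk3 none None'

pvs=$(printf '%s\n' "$lspv_out" | awk -v vg=datavg '$3 == vg { print $1 }')
echo "$pvs"    # hdisk1 and hdisk2, one per line
```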

#!/bin/ksh
#
# This script performs a basic cleanup of a volume group in
# order to provide a somewhat secure remove of data.  This is
# obviously not DOD or financial system grade data erasure,
# but may be sufficient for many environments when turning
# systems over to a recycling company or at the end of an
# offsite disaster recovery test.
#
# The script does the following actions:
#    For each volume in the indicated volume group:
#	- kill any processes using file systems in the volume group
#	- recursively remove all files in the volume
#	- remove the file system with rmfs
#    - once all the volumes are removed, remove the volume group
#      via reducevg.
#
# For best results,
# follow up the deletion of the volume group with a creation
# of a new volume group re-using the PVs.
#
# Andy Welter
# January 2005
#

VG=$1
TESTRUN=$2
if [ "$VG" = "" ]; then
        print "ERROR: must specify a volume group"
        exit 1
fi

print "!!!!!!!! WARNING !!!!!!!"
print "!!! This script will !!!"
print "!!! delete all data  !!!"
print "!!! on the following !!!"
print "!!! volume group:    !!!"
print
lsvg -l $VG
print "Whack $VG ?\n\n"
read ANS
case $ANS in
        y|Y) print "Ok, here we go"
                sleep 2
                ;;
        *) print "Quitting."
                exit 1
                ;;
esac
lsvg -l $VG | sort -b +6r| egrep "jfs |jfs2 " | grep -v "^LV NAME" | \
        while read LVNAME TYPE LP PP PV STATE MOUNT; do
        print "removing $MOUNT..." >&2
        if [ "$TESTRUN" = "" ]; then
                fuser -ck $MOUNT
                find $MOUNT -xdev -depth -exec rm {} \;
                ls -alR $MOUNT
                umount $MOUNT
                rmfs $MOUNT
        else
                print "test: $MOUNT"
        fi
done
lspv  | while read PVNAME VOLID CURVG STATUS; do
        case $CURVG in
        $VG) print "remove $PVNAME from $VG"
                if [ "$TESTRUN" = "" ]; then
                        reducevg -df $VG $PVNAME
                fi
                ;;
        *) print "ignore $PVNAME... belongs to $CURVG"
                ;;
        esac
done

Platforms supported: IBM AIX 5.x


pstree

This script can be handy for visualizing the parent/child relationships of the processes running on your system. When analyzing performance or other system problems, you often want to find out which process belongs to which. The script relies on a really handy feature of newer “ps” commands, the “-o” option, which controls exactly which output fields you get from “ps” and the order they appear in. This makes it much easier to pick apart the output of “ps” in a script.

When issued without any parameters, it displays the ancestry of all processes running on the system relative to the “sched” process (pid 0). You can also supply a specific pid, and only the ancestry of that process will be shown.

This script is kinda interesting in that it is a Bourne shell script that uses recursion. Because of this, though, it is really inefficient (each recursive call spawns a new process). It would be much better written in Perl, but I originally wrote it before I knew Perl very well.
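For comparison, the ancestry walk can be done without recursion by loading the pid-to-ppid table into awk. A sketch over canned `ps -o pid -o ppid -o user -o args` output (all pids and commands below are fabricated):

```shell
# Build a pid -> ppid table from canned "ps -o pid -o ppid -o user -o args"
# output, then climb from one pid up to init and print oldest-first.
# All pids and commands below are fabricated.
ps_out='1 0 root /etc/init
120 1 root /usr/sbin/sshd
4321 120 andy -ksh
5000 4321 andy ./pstree'

ancestry=$(printf '%s\n' "$ps_out" | awk -v pid=5000 '
    { ppid[$1] = $2; line[$1] = $0 }
    END {
        n = 0
        for (p = pid; p in ppid; p = ppid[p]) chain[n++] = line[p]
        for (i = n - 1; i >= 0; i--) print chain[i]   # oldest first
    }')
echo "$ancestry"
```

One awk pass replaces the repeated `grep` calls and re-invocations of the script, which is the efficiency the text above is pointing at.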

#!/bin/sh
#
# Author: Andy Welter
# www.the-welters.com
# January 15, 2000
#
# Display a parent child relationship for a process or all processes
# on a system.
#
# This script uses recursive calls to itself in a bourne shell script
# which is kinda cool, but is really inefficient.
#
# It takes advantage of the "-o" option on the PS command to put
# the PS output into a more easily parsed format and this also
# makes it more portable between Unix flavors.  Linux does not
# support this option, so unfortunately it doesn't work on Linux.
#

lookup_ancestors () {
        #
        # Find parent of current process, then recursively call this
        # script to display its line of ancestors.
        #
        # Note that Orphan processes show pid 1 as their parent.
        #
	PID=$1
        PROC=`cat $FILE | grep "^$PID "`
	PPID=`echo "$PROC" | (read pid ppid user args;echo $ppid)`
        if [ -n "$PPID" ]; then
                if [ $PID -ne 0 ]; then
                        $0 -p $PPID
                fi
        fi
}

lookup_descendents () {
        #
        # Find list of children of the current process, and recursively
        # call this script for each of the children found to display
        # their children.
        cat $FILE | grep " $1 " |  \
        while read PID PPID USER ARGS; do
		if [ "$PID" != "$PPID" ]; then
                	$0 -c $PID "$indent"
		fi
        done
        }

display_process () {
        #
        # display the ps info of the indicated process.
	# (can be used for parents or children)
        # $1 is the pid to display
        # $indent is the amount of indentation to display before the process
        #
        cat $FILE | grep "^$1 " | while read PID PPID USER ARGS; do
        echo "$indent$USER	$PID	$PPID	$ARGS"
	done
}

step="  "

PS="ps -ea -o pid -o ppid -o user -o args"

if [ "$1" = "-p" ]; then
        #
        # look up parent pid then display the current process.
        # ancestors will be in oldest to newest order.
        #
        # don't bother indenting for parents.
        #indent="$3"
        lookup_ancestors $2
        #indent="$indent$step"
        display_process $2
elif [ "$1" = "-c" ]; then
        #
        # lookup child pid
        #
        indent=$3
        display_process $2
        indent="$indent$step"
        lookup_descendents $2
else
	if [ "$1" = "" ]; then
		START=0
	else
		START=$1
	fi
        #
        # This is the initial call to the script.
	# Display ancestors and descendents.
	#
        echo
        echo "================="
        echo "= Looking up information for process $START"
        echo "================="
        # Get the output to ps and normalize it's output
        FILE=/tmp/pstree.$$
        export FILE
        indent=""
        $PS | tail +2 | sed 's/^ *//g' > $FILE
        display_process $START
        #
        echo
        echo "================="
        echo "= Ancestors of process $START"
        echo "================="
        lookup_ancestors $START
        #
        echo
        echo "================="
        echo "= Descendents of process $START"
        echo "================="
        lookup_descendents $START

        /bin/rm $FILE
fi

Platforms supported: Unix flavors whose “ps” command supports the “-o” option (e.g. Solaris, Digital Unix, AIX, but not Linux)


vmlog

This script provides a simple way to log performance data to a file. Running accounting or installing a full shrink-wrapped package like HP Perfview can obviously provide you with more information, but if you want something cheap and easy, this can be a good start. The first step in solving performance complaints is to know what the system has done in the past, and the first step in justifying hardware upgrades is to show that the system is busy and that usage has increased. The way I install this script is to run it via cron at midnight. By default, it records a day's worth of observations, 300 seconds apart. Command line options can be used to alter the interval between observations or the number of observations.
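For example, a crontab entry for the midnight run, plus the arithmetic behind the default observation count (the install path in the crontab line is just an assumption):

```shell
# Typical installation (path is an assumption): run from cron at midnight
# and let the defaults cover the day, with a crontab line like:
#   0 0 * * * /usr/local/bin/vmlog
#
# With the default 300 second interval, the script sizes the run to one
# day's worth of observations:
INTERVAL=300
COUNT=`expr 86400 / $INTERVAL`
echo "$COUNT observations per day"    # 288 observations per day
```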

  • The vmlog.txt script:

#!/usr/bin/ksh
#
# Usage:
USAGE="vmlog [-i interval] [-c count] [-l logfile]"
#
# Logfile	- If not specified, this defaults to /var/adm/vmstat/vmlog
#		with the day of the month appended.  This means that the
#		data will be kept for 31 days, and will be overwritten each
#		month much like SAR data is.
# Interval 	- If not specified, this defaults to 300 seconds
# Count		- If not specified, this defaults to 1 day worth of
#                 observations
#
# Version 1.0
#
if [ $# -eq 1 ]; then
	echo $USAGE
	exit 1
fi

while [ $# -ge 2 ]; do
case $1 in
	-c)	COUNT=$2
		shift 2
		;;
	-i)	INTERVAL=$2
		shift 2
		;;
	-l)	LOG=$2
		shift 2
		;;
	*)	echo $USAGE
		exit 1
		;;
esac
done

if [ -z "$INTERVAL" ]; then
	INTERVAL=300
fi
if [ -z "$COUNT" ]; then
	# Number of seconds in a day = 60*60*24 = 86400
	COUNT=`expr 86400 / $INTERVAL`
fi
if [ -z "$LOG" ]; then
	LOG=`date +"/var/adm/vmstat/vmlog.%d"`
else
	LOG=`date +"$LOG.%d"`
fi

#
# Create the log dir if needed.
DIR=`dirname $LOG`
if [ ! -d $DIR ]; then
	mkdir $DIR
fi

 vmstat -t $INTERVAL $COUNT > $LOG

  • A clean up script, vmlog_cleanup.txt, to remove old log files:
#!/bin/sh
#
# This program deletes vmlog output files.
#
# Andy Welter 10/01/98
# www.the-welters.com
#
#
# TIMELIMIT is used to control how long a file is kept by the system.
# Every time this script runs, it will delete files older than the
# specified number of days.
#
# TIMELIMIT must start with a "+" in order to function properly.
#
TIMELIMIT="+30"
LOGDIR=/var/tmp/stats
DATE=`date +"%y%m%d"`
if [ -d $LOGDIR ]; then
	find $LOGDIR -type f -name 'vmlog.*' -mtime $TIMELIMIT -exec rm {} \;
fi

Platforms supported: Most Unix flavors


dumpall

“dumpall” is a set of system backup scripts for Sun Solaris and Compaq Tru64 Unix. The dumpall scripts are more complicated than the other scripts on this page, so they have their own page detailing the scripts and the installation procedures. In brief, these scripts are capable of backing up all file systems on a system, and for Solaris systems they allow for relatively secure backup across the network.

<!DOCTYPE HTML PUBLIC "-//SoftQuad//DTD HoTMetaL PRO 4.0::19971010::extensions to HTML 4.0//EN"
 "hmpro4.dtd">

<HTML>

  <HEAD>
    <META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
    <META NAME="KEYWORDS"
    CONTENT="unix, solaris, digital, ufsdump, vxdump, vdump, backup, perl, korn shell, systems administration, consultant, welter, zazaindex1zaza, zazaindex2zaza">
    <TITLE>The dumpall backup script</TITLE>
  </HEAD>

  <BODY BGCOLOR="#FFFFFF">
    <TABLE WIDTH="100%" BGCOLOR="#0080FF">
      <TR>
        <TD>

        <H1>dumpall backup script</H1> </TD>
      </TR>
    </TABLE>

    <H3><A NAME="top">Script overview:</A></H3>

    <P>This script will dump all local mounted file systems on a system to
      tape. It can send output to a log file and via email.&nbsp; It is portable
      between Solaris 2.x and Digital/Compaq Unix 4.0x. Because of differences
      in the underlying &quot;ufsdump&quot; and &quot;vdump&quot; commands, this
      script has different capabilities between Sun and DEC.&nbsp;&nbsp; </P>
    <UL>
      <LI><A HREF="#solaris">Solaris information</A></LI>
      <LI><A HREF="#digital">Digital Unix information</A></LI>
      <LI><A HREF="#options">Command options</A></LI>
      <LI><A HREF="#tape">Tape format</A></LI>
      <LI><A HREF="#installation">Installation and setup</A></LI>
      <LI><A HREF="#security">Security concerns</A></LI>
      <LI><A HREF="#disaster">Disaster Recovery issues</A></LI>
      <LI><A HREF="#Appendix:">Appendix- view and download the scripts</A></LI>
    </UL>

    <H3><A NAME="solaris">Solaris:</A></H3>

    <P>This script can back up both standard UFS file systems and Veritas vxfs
      file systems. Because it uses device names rather than mount points in the
      ufsdump or vxdump command, it can be run as a non-root user. ufsdump and
      vxdump support being run as a user in the &quot;sys&quot; group because
      the disk device files are readable by the &quot;sys&quot; group. This is
      especially useful when performing network based backups. This allows a the
      central computer controlling the backups to run it's &quot;rsh&quot;
      commands as a non root user. </P>

    <P>Sun supports the use of a tape drive over the network in the ufsdump
      program, which allows the use of a central tape host. And because the
      central tape host does not need a root user id for allowing inbound tape
      requests, this also helps security. </P>

    <H3><A NAME="digital">Digital / Compaq Unix</A></H3>

    <P>As of Digital Unix 4.0D, the &quot;vdump&quot; command can backup either
      UFS or ADVFS file systems. So this script assumes that &quot;vdump&quot;
      will work on whatever file systems are locally mounted. If you are running
      UFS file systems on a pre 4.0D system, you can modify the &quot;OSF1_dump&quot;
      script to check for the fstype. </P>

    <P>Unfortunately, Digital Unix does not lend itself as easily to network
      based tape backups as Solaris. So this script makes no provisions for
      network backups for a Digital host. </P>

    <P>One way to use vdump with a remote tape drive is with a command like
      this: </P>

<PRE>vdump -b64 0uf - /somefilesystem | rsh tapehost dd of=/dev/nrmt0h bs=64k</PRE>

    <P>The block size parameters are needed to make sure the sending and
      receiving programs agree on what block size to use. </P>

    <H3><A NAME="options">Options:</A></H3>

<PRE>&nbsp;-l|-logfile&nbsp;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Write output to the specified log file.
&nbsp;-mail|-email&nbsp;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Send log file output to the specified email address
&nbsp;-d|-dev&nbsp;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Specifies the tape device to use.&nbsp; Must be the non-rewinding&nbsp;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; device for the script to work properly.&nbsp; Default device is
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $TAPE if set, or /dev/nrmt0h
&nbsp;-unload
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Unloads the tape from the drive when the backup is done.</PRE>

    <P>These options can be specified either via the command line or via a
      configuration file. The configuration file resides in the same directory
      as the executable, and can either be named &quot;dumpall.rc&quot; or &quot;dumpall.&lt;hostname&gt;.rc&quot;.
      This allows you to easily override default settings with command line
      parameters, or store the configuration files in a central spot. The
      program will take all of its parameters from the first location it finds.
      It searches for parameters in the following order: </P>
    <UL>
      <LI>Command line parameters</LI>
      <LI>A host specific dumpall.&lt;hostname&gt;.rc file located in the same
        directory as the &quot;dumpall&quot; command</LI>
      <LI>A &quot;dumpall.rc&quot; file located in the same directory as the &quot;dumpall&quot;
        command</LI>
    </UL>

    <H3><A NAME="tape">Tape Format:</A></H3>

    <P>Each file system backed up will be its own file mark on the backup
      tape. (This is why a non-rewinding tape device must be used.) There
      will be one file mark for each file system, plus an additional file mark
      that consists of the backup script's output file written to the tape in &quot;tar&quot;
      format. The individual backup file marks will be in the file format needed
      for backing up each file system. For Sun, the format will be either &quot;ufsdump&quot;
      format or &quot;vxdump&quot; format. For Digital Unix, it will be in &quot;vdump&quot;
      format for either ADVFS or UFS file systems. </P>

    <H3><A NAME="installation">Installation and Setup</A></H3>

    <P>If you are performing backups to a local tape drive, and are running the
      backups as root, the installation is as simple as can be. Simply copy &quot;dumpall&quot;
      to a local or NFS mounted directory on the system being backed up. Create
      a &quot;dumpall.rc&quot; or &quot;dumpall.&lt;system name&gt;.rc&quot;
      file to contain the default parameters, and add a line to the root crontab
      if so desired. </P>
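
    <P>For example, a root crontab entry along these lines (the schedule and
      install path are only illustrative) would run &quot;dumpall&quot; weekly
      with the options described above: </P>

<PRE>0 2 * * 0 /usr/local/bin/dumpall -d /dev/nrmt0h -unload -mail root</PRE>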

    <P>If you are running the command via &quot;rsh&quot; from a central
      server, and/or are writing to a tape drive over the network, the
      installation gets slightly more complicated. The simplest thing to do
      would be to just set up the /.rhosts files so that root could rsh from the
      central backup server to the client host, and vice versa. The problem with
      this is that this is a huge security hole. </P>

    <P>The &quot;dumpall&quot; client system can be set up so that it is run as
      a non-root user from the central backup server. And the tape server can be set
      up so that the user id for remote tape access is restricted so that it can
      not do anything except run the tape drive. This requires setup on both the
      backup server and backup client systems. In our example, we have two
      systems, &quot;tapehost&quot; and &quot;backupclient&quot;. The &quot;dumpall&quot;
      command will be run via &quot;rsh&quot; from tapehost, and will run on &quot;backupclient&quot;
      as user &quot;backup&quot;. The tape drive on &quot;tapehost&quot; will be
      controlled from &quot;backupclient&quot; using the user name &quot;tape&quot;.
      Note, depending on how your systems are set up, you may need fully
      qualified hostnames in the .rhosts file, such as &quot;backupclient.mydomain.com&quot;.
    </P>

    <H4>Backup client modifications:</H4>

    <P>The dumpall script must be copied to the backup client, and a dumpall.rc
      file should be set up in the same directory as the dumpall script. The
      backup client needs to have a non root user id added to it that is in the
      &quot;sys&quot; group. This user will need to have its .rhosts file set
      up to allow inbound &quot;rsh&quot; calls from the backup server. For
      security, the script files, .rhosts file, and /home/backup directory
      should all be owned by root and only writeable by root. The &quot;backup&quot;
      user can have a locked password because it does not need to support
      logins; it just needs to be able to run commands via &quot;rsh&quot;. For
      example: </P>

<PRE>/etc/passwd entry:
backup:x:59999:3:Backup User:/home/backup:/bin/ksh

/home/backup/.rhosts contents:
tapehost root</PRE>

    <H4>Backup server modifications:</H4>

    <P>The backup server needs to have a non root user id added to it that can
      be used by the backup client for tape drive communication. Because any
      user can read and write to the tape drive, this id does not need to belong
      to any special group, and it is better to make it as unprivileged an
      account as possible. This id will need a .rhosts file that will allow rsh
      commands from root or backup on the backup client systems. In order to
      prevent the id from being used for anything but tape control, the start up
      script for the account will be a Perl script that limits what the account
      can do. When &quot;ufsrestore&quot; and &quot;ufsdump&quot; access a tape
      drive over the network, they do so by issuing an &quot;rsh tapehost
      /etc/rmt&quot; command. The &quot;/etc/rmt&quot; command then reads and
      writes data over standard in and standard out over the &quot;rsh&quot;
      network connection. The tape id start up script will only allow the &quot;/etc/rmt&quot;
      command to be executed. If any other command is passed to the start up
      script, it will exit quietly. You will need to copy the rmt perl script
      into the home dir for the &quot;tape&quot; id. As with the backup user id,
      all script files, home dirs, and .rhosts files for the &quot;tape&quot; id
      should be owned by, and only writeable by, root. </P>

<PRE>/etc/passwd entry on tapehost:
tape:x:60000:60001:Tape drive control account for rdump:/home/tape:/home/tape/rmt

/home/tape/.rhosts contents:
backupclient&nbsp;&nbsp;&nbsp; root
backupclient&nbsp;&nbsp;&nbsp; backup</PRE>

    <P>In addition to these steps, the backup server will need a script that
      calls the &quot;dumpall&quot; script on each client machine via &quot;rsh&quot;.
      While dumpall could be run via cron on each individual client system, the
      scheduling of the dumpall process can be a problem. Each backup needs to
      wait until the previous backup finishes using the tape drive. If dumpall
      was run independently on each client system, you would need to provide a
      large buffer of time between each backup to handle variations in backup
      time. It is more reliable to control backups centrally. See the
      <A HREF="#Appendix:">appendix</A> at the bottom for a simple central
      backup script. </P>

    <H3><A NAME="testing">Testing the setup:</A></H3>

    <P>Before running the backup scripts, you should test the rsh connectivity
      between the backup client and the tapehost system. </P>
    <UL>
      <LI>On the backup client, su to &quot;backup&quot; and run the command &quot;rsh
        -l tape tapehost date&quot;. This command should return immediately
        without any output or error messages. If the command returns the date,
        then the &quot;tape&quot; id is not restricting the commands that can be
        run as &quot;tape&quot;. If you get &quot;permission denied&quot;, then
        the .rhosts file is not setup properly. If the command returns &quot;command
        not found&quot;, then either Perl or /home/tape/rmt is not installed
        properly on the tapehost.</LI>
      <LI>On the tape host system, as root, run the command &quot;rsh -l backup
        backupclient id&quot;. The command should return the user id &quot;backup&quot;.
        See the previous troubleshooting tips if the command fails.</LI>
    </UL>

    <H3><A NAME="security">Security Concerns:</A></H3>

    <P>This backup method is more secure than the typical method of performing
      network backups, but it still introduces possible areas for security
      exploits. This method is more secure than some other methods because: </P>
    <UL>
      <LI>Backup clients do not have to allow root &quot;rsh&quot; commands.</LI>
      <LI>Backup clients only have to allow &quot;rsh&quot; commands as the
        user &quot;backup&quot; from a single host, the central backup server or
        tape host system.</LI>
      <LI>The tape host server does have to allow &quot;rsh&quot; commands from
        each client system. But the user id is a non-root id, and because it
        does not have a usable startup shell, it does not allow any commands
        other than the &quot;rmt&quot; tape control command. Because the startup
        script is not listed in /etc/shells, the account can not be used for FTP
        either.</LI>
    </UL>

    <P>The vulnerabilities that this script introduces are: </P>
    <UL>
      <LI>Because the tape drive on the tape host is accessible over the
        network, any user that can log into or &quot;rsh&quot; to the tapehost
        can use the ufsrestore command to read files from the backup tape. This
        window of exposure is lessened by ejecting the tape immediately after
        the backup has completed. But once a previously used tape is inserted
        into the system, its contents are potentially vulnerable until the
        backup process begins.</LI>
      <LI>Backup tapes could be overwritten under the same circumstances as
        mentioned in the last item.</LI>
      <LI>If &quot;root&quot; is compromised on &quot;tapehost&quot;, then the &quot;backup&quot;
        id on each client system is also compromised.</LI>
    </UL>

    <P>If these potential vulnerabilities are too great for your environment,
      then your alternatives might have to involve solutions such as local tape
      drives, encrypting the &quot;ufsdump&quot; data stream prior to sending it
      over the network, or using &quot;ssh&quot; to make for a more secure data
      channel. </P>

    <H3><A NAME="disaster">Disaster Recovery Concerns:</A></H3>

    <P>Using network based backup for the OS does complicate disaster recovery
      scenarios. When restoring a file system, you will have to remember that
      the backup tape contains backup files for multiple systems. You will need
      to skip over the unneeded backup file marks using &quot;mt&quot;. This
      process will be easier if you have access to the log files that correspond
      to the backup tape you are processing.  In addition, a typical DR
      procedure involves booting off of CD-ROM media and assumes that you can
      restore from a local tape drive. Possible solutions to this problem
      include: </P>
    <UL>
      <LI>Temporarily connecting a local tape drive to the system being
        restored.</LI>
      <LI>Booting from CD-ROM, followed by configuring the CD-ROM based system
        to allow network access, then using the remote tape drive capabilities
        included in ufsrestore.</LI>
      <LI>Installing a minimal OS image that includes network access on an
        alternate boot disk. This minimal image can then be booted up, and the
        normal boot disk can be restored using ufsrestore over the network.</LI>
    </UL>

    <H3><A NAME="Appendix:">Appendix:</A></H3>

    <UL>
      <LI>download the <A HREF="dumpall.txt">dumpall</A> script</LI>
      <LI>download a sample <A HREF="dumpall.tapehost.rc.txt">dumpall.rc</A>
        file for local tape drives</LI>
      <LI>download a sample <A HREF="dumpall.backupclient.rc.txt">dumpall.rc</A>
        file for a remote tape drive</LI>
      <LI>download the <A HREF="rmt.txt">rmt</A> script</LI>
      <LI>download a sample <A HREF="central_backup.txt">central backup</A>
        script</LI>
    </UL>
    <HR>
    <TABLE WIDTH="100%">
      <TR>
        <TD><I><A HREF="index.html">Back</A></I></TD>
        <TD ALIGN="RIGHT"><I>Last Updated November 2, 1999</I></TD>
      </TR>
    </TABLE>
  </BODY>
</HTML>

Platforms supported: Sun Solaris, Compaq Tru64 Unix (aka Digital Unix)


vgbackup

This script needs to be reworked; it isn’t the greatest right now. It is a useful starting point, but it doesn’t send its results via email, and it won’t warn you if you forget to back up a volume group. All it does is run a mksysb, followed by a dump of the file systems in each additional volume group you specify. It maintains a log file and can automatically print it out.

#!/bin/ksh
#
# Simple backup script for a list of volume groups.  All filesystems
# in each of the volume groups specified will be backed up using the
# "backup" command.
#
# Andy Welter
# www.the-welters.com
# February 17, 1999
#
# parms:
# -log  If this parm is used, script output will be logged to
#       /var/adm/logs/backup.YYMMDD.log.  Otherwise script output
#       will be written to the screen.
# <vglist>  A list of additional volume groups to backup.  If not present, the
#       The script defaults to backing up "rootvg" only.
#
DEV=/dev/rmt0.1
PRINT=""

while [ "$1" != "" ]; do
case $1 in
	"-log")
		# Redirect output to the log file
		LOG=`date +"/var/adm/logs/backup.%y%m%d.log"`
		exec 2> $LOG >&2
		;;
	"-print")
		PRINT="yes"
		;;
	*)
		VGLIST="$VGLIST $1"
	esac
shift
done

# rootvg is always backed up by the mksysb step, so VGLIST holds only
# the extra volume groups named on the command line.
# Get list of mount points
for VG in $VGLIST; do
	lsvg -l $VG | grep " jfs " | \
	while read LV junk2 junk3 junk4 junk5 junk6 FS; do
		FSLIST="$FSLIST $FS"
	done
done
if [ "$FSLIST" = "" ]; then
	echo "NOTE:  No extra file systems specified."
	echo "NOTE:  Only rootvg will be backed up."
fi
RC=0
date +"begin backup at %y/%m/%d %H:%M"
echo "Performing a mksysb and backing up the following file systems: $FSLIST"
echo "Run mksysb..."
mkszfile && mksysb -m -i -X $DEV
RC2=$?
RC=`expr $RC + $RC2`
date +"mksysb complete at %y%m%d %H:%M"

for FS in $FSLIST; do
	date +"backup $FS %y/%m/%d %H:%M"
	/etc/backup -0 -f $DEV $FS
	RC2=$?
	RC=`expr $RC + $RC2`
	if [ $RC -eq 0 ]; then
		echo "SUCCESS: done with $FS"
	else
		echo "ERROR: Backup of $FS failed"
	fi
	echo
done
# eject the tape

if [ $RC -eq 0 ]; then
	echo "ejecting the tape..."
	mt -f $DEV offline
else
	echo "ERROR: at least one backup had errors."
fi

date +"done with backup at %y/%m/%d %H:%M"

if [ "$PRINT" = "yes" ]; then
	lp $LOG
fi

Platforms supported: AIX 3.x and 4.x.


mtfcount

This script counts the number of file marks on a tape. It can optionally position the tape just past the last file mark when it is done. This can be handy when trying to figure out what is on an unlabeled tape, when verifying that a backup worked correctly, or when you want to append additional backups to the end of a tape that has already been rewound.

#!/bin/sh
#
# Count file marks on a tape.  It can optionally position the tape
# after the last mark so you can append to the end of it.
#
# Andy Welter
# www.the-welters.com
#
USAGE="mtfcount [-f <devname>] [-append|-a]"
if [ "$TAPE" = "" ]; then
	DEV=/dev/rmt/0n
else
	DEV=$TAPE
fi
while [ $# -ge 1 ]; do
	case $1 in
	-f)
		DEV=$2
		shift 2
		;;
	-append|-a)
		APPEND=yes
		shift
		;;
	*)
		echo "$USAGE"
		exit 1
		;;
	esac
done
mt -f $DEV rewind
COUNT=0
RC=0
while [ $RC -eq 0 ]; do
	echo "Skipping file $COUNT..."
	mt -f $DEV fsf 1
	RC=$?
	if [ $RC -eq 0 ]; then
		COUNT=`expr $COUNT + 1`
	fi
done

echo "$COUNT files found on tape"
echo "rewinding tape..."
mt -f $DEV rewind
if [ "$APPEND" = "yes" ]; then
	if [ $COUNT -ge 1 ]; then
		echo "positioning tape to the end..."
		mt -f $DEV fsf $COUNT
	fi
fi

Platforms supported: Most Unix flavors


hogs

This simplistic script uses the “-o” parameter of the ps command to put the ps output into a more easily parsed format. This is a handy technique for any script that has to parse “ps” output. Unfortunately, not all ps commands support this option; Linux is one of the systems that does not. The “top” program available elsewhere on the net is a more powerful way of monitoring CPU utilization, but I still like this simple tweak to a ps command.

#!/bin/ksh
#
# Display the top memory users or CPU users via ps.
#
# Andy Welter
# www.the-welters.com
#
# BUGS:
#	This script doesn't sort CPU time properly for processes using
#	more than 24 hours of CPU time.  Re-writing this in perl is the
#	most practical way to fix this bug since that will give me a lot
#	better string manipulation and pattern matching control.
#
USAGE="hogs [-mem|-cpu]"

if [ $# -ge 1 ]; then
	OPT=$1
else
	OPT="-mem"
fi

case $OPT in
-c*)
	# The long string of sed commands is used to normalize the elapsed CPU time field so
	# that the processes are sorted by time properly.
	echo "Top CPU users"
	echo "cputime     vsize  started	pid user	command"
	ps -ea -o time -o vsz -o stime -o pid -o user -o comm | \
		tail +2 | \
		sed 's/  *..:.. / ZZ:&/' | sed 's/  *ZZ:  */ 00:/' | sed 's/^  */ /' | sed s'/00:0:/00:00:/' |
		sort -rn | head -20
	;;
-m*)
	echo "Top Memory users"
	echo "vsize    cputime  started	pid	user	command"
	ps -ea -o vsz -o time -o stime -o pid -o user -o comm | \
		tail +2 | sort -rn | head -20
	;;
*)
	echo "$USAGE"
	exit 1
esac
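
The sort bug called out in the header comment stems from ps printing CPU times in mixed widths. A small sketch of the failure, and of the zero-padding idea behind the sed chain in the -c branch (the times are made up):

```shell
# ps prints short CPU times as MM:SS and longer ones as HH:MM:SS.
# A plain reverse numeric sort therefore ranks 59 minutes of CPU
# time above an hour:
times="59:59
01:02:03"
echo "$times" | sort -rn | head -1      # prints 59:59

# Zero-padding MM:SS out to HH:MM:SS first lines the fields up:
echo "$times" | sed 's/^[0-9][0-9]*:[0-9][0-9]$/00:&/' | sort -r | head -1
                                        # prints 01:02:03
```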

Platforms supported: Versions of Unix that support the “-o” option for “ps” (i.e., not Linux)


rex

This Perl script acts as a “rexecd” client.

The rexec protocol is a method for executing commands on a remote system using a username and password for authentication. This differentiates rexec from rsh. Rsh commands use .rhosts and hosts.equiv files to set up trust relationships between systems, and allow command execution without a separate password challenge. There are security drawbacks to each approach.

Trust relationships can be used to compromise other systems once one system is breached. And rexec has no logging for failed login attempts, which allows it to be used as a conduit for dictionary password-guessing attacks on a system. Systems directly exposed to the Internet should not run rexecd. Systems on controlled networks should use software such as TCP Wrappers or “logdaemon” in order to put logging in place for this service. Future enhancements will include sending standard error and standard output to different file descriptors, and changing the command’s ARGV list so that the command line options (i.e., the password) are not visible via “ps” while the script is running.

inetd listens for rexec requests via TCP connections on port 512. The rexec request format, as documented in the man page for rexec, is an input stream of null-separated values:

port for standard error\0username\0password\0command and args\0
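
The request stream is easy to build and inspect from the shell. This sketch assembles a sample request with made-up credentials and pipes the null-separated fields through tr so they become visible:

```shell
# assemble a sample rexec request: stderr port, user, password, command,
# each field terminated by a NUL byte (the credentials are fictitious)
printf '0\0jdoe\0secret\0uname -a\0' | tr '\0' '\n'
# prints the four fields, one per line
```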

#!/usr/local/bin/perl
#
# Andy Welter
# www.the-welters.com
# January 10, 2000
#
# This script acts as a "rexec" client.  The rexec protocol is a
# method for executing commands on a remote system using a
# username and password for authentication.  This differentiates
# rexec from rsh.  Rsh commands use .rhosts and hosts.equiv files
# to set up trust relationships between systems, and allow command
# execution without a separate password challenge.
#
# There are security drawbacks to each approach.  Trust relationships
# can be used to compromise other systems once one system is breached.
# And rexec has no logging for failed login attempts.  This allows it
# to be used as a conduit for dictionary password guessing attacks on
# a system.
#
# Systems exposed to the Internet should not run rexecd.  Systems on
# controlled networks should use software such as TCP Wrappers in order
# to put logging in place on this service.  inetd listens for rexec requests
# via TCP connections on port 512.
#
# rexec format as documented in the man page for rexec:  The input stream
# consists of null separated values.
# port for standard error\0username\0password\0command and args\0
#
use Socket;

sub sendcmd {
$sockaddr = 'S n a4 x8';
($name, $aliases, $proto) = getprotobyname('tcp');
($name, $aliases, $type, $len, $thisaddr) = gethostbyname($host);
$thisport = pack($sockaddr, &AF_INET, 0, $thisaddr);
$thatport = pack($sockaddr, &AF_INET, $port, $thisaddr);

socket(S, &PF_INET, &SOCK_STREAM, $proto) ||
	die "cannot create socket\n";
	connect(S,$thatport) || die "cannot connect socket\n";

# Set socket to write after each print
select(S); $| = 1; select(STDOUT);

#
# Send command
#
printf S "0\0%s\0%s\0%s\0",$user,$passwd,$command;
#
# Read responses from server and print them out
#
while ( $_ = <S> ) {
	printf ("$_");
};
close(S);
};

#
# MAIN
#
#
$port=512;
$host=$ARGV[0];
$user=$ARGV[1];
$passwd=$ARGV[2];
$command=$ARGV[3];
sendcmd;

exit 0;

TCP Wrappers and Logdaemon available from ftp://coast.cs.purdue.edu/pub/tools/unix

Platforms supported: Most Unix flavors. Maybe NT too if you have a good Perl port.


Syslog Configuration:

Syslog is a standard Unix utility for reporting system messages. Messages can be kept on the local system, or forwarded to central loghost machines. Many network devices such as routers and firewalls can utilize syslog as a reporting mechanism.

Messages are processed by the “syslogd” daemon process, and are sent to it either through the syslog() library call or through the “logger” command. Syslog configuration is generally controlled via the /etc/syslog.conf file.

Syslog configuration checklist:
  • Create an /etc/syslog.conf file with the desired syslog rules. If you are using a central syslog server, ensure that the rules on the server send the syslog output to the desired files.
  • Ensure that the destination file names are valid for the syslog.conf rules. Create empty files if needed.
  • Refresh the syslogd configuration by either sending a “SIGHUP” signal to the syslogd process, or by restarting the process.
  • Test the syslog configuration using the “logger” command.
  • Implement automatic log pruning and cleanup to prevent old syslog messages from filling up file systems.
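
A minimal example of the pieces involved (the facility, file names, and loghost are illustrative, and classic syslogd requires tabs, not spaces, between the selector and the action):

```
# /etc/syslog.conf: send daemon warnings and worse to a local file,
# and everything at err or above to a central loghost
daemon.warning	/var/adm/syslog/daemon.log
*.err	@loghost

# after a SIGHUP of syslogd, verify the rules with logger:
#   logger -p daemon.warning "syslog configuration test"
```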

There are several sources for Syslog daemons for Windows 95 and Windows NT. In addition, the “swatch” utility can be used to automatically filter syslog messages based on message content.

Syslog scripts

  • sample syslog.conf file
  • sample syslog.conf file for loghost client
  • logmaint.txt – Script to periodically roll over to new syslog files. Can also be used to initially create syslog output files.
    #!/bin/sh
    #
    # This program rolls over old log files and creates new ones.
    #
    # The program loops through the list of files used by syslog, copies
    # the old data to a date and time stamped file, then truncates the
    # active log file.
    #
    # This program supports one command line option "-compress" or "-nocompress",
    # which controls whether or not the program compresses the archived log
    # file.  The default behavior is controlled by the COMPRESS variable at the
    # start of the program.
    #
    # Andy Welter 9/23/98
    # www.the-welters.com
    #
    COMPRESS="yes"
    USAGE="logroll [-compress|-nocompress]"
    LOGDIR=/var/adm/syslog
    FILES="auth.log daemon.log kern.log user.log syslog.log local0.log messages"
    DATE=`date +"%y%m%d"`
    
    case $1 in
    "-compress") COMPRESS=yes
    	;;
    "-nocompress") COMPRESS=no
    	;;
    "") # no option given; keep the default set in COMPRESS above
    	;;
    *) echo $USAGE
    	exit 1
    	;;
    esac
    
    for F in $FILES; do
    	FF="${LOGDIR}/${F}"
    	if [ -f $FF ]; then
    		while [ -f ${FF}.${DATE} ]; do
    			DATE=`date +"%y%m%d.%H%M%S"`
    		done
    		cp -p $FF $FF.$DATE
    		chgrp adm $FF.$DATE
    		chmod 660 $FF.$DATE
    		if [ "$COMPRESS" = "yes" ]; then
    			/usr/bin/compress $FF.$DATE
    		fi
    	fi
    	cat /dev/null > $FF
    	chgrp adm $FF
    	chmod 660 $FF
    done
    
    

     

  • logdel.txt – Deletes syslog files over the specified age.
    #!/bin/sh
    #
    # This program deletes old syslog files
    #
    # Andy Welter
    # www.the-welters.com
    #
    #
    # TIMELIMIT is used to control how long a file is kept by the system.
    # every time this script runs, it will delete files that are greater
    # than the specified number of days.
    #
    # TIMELIMIT must start with a "+" in order to function properly.
    #
    TIMELIMIT="+14"
    LOGDIR=/var/adm/syslog
    DATE=`date +"%y%m%d"`
    if [ -d $LOGDIR ]; then
    	find $LOGDIR -type f -name '*.log.*' -mtime $TIMELIMIT -exec rm {} \;
    fi
    

     

  • logtest.txt – Tests syslog configuration with the “logger” command by writing a test message to each facility for each priority.
    #!/bin/sh
    #
    # This is a test of the "logger" and syslog configuration.
    # It will loop through all the possible facillities and severity levels
    # and send a syslog message for each one.
    #
    # Andy Welter 9/23/98
    # www.the-welters.com
    #
    FACILITIES="auth daemon kern lpr mail news user syslog uucp local0"
    SEV="debug info notice warn err crit alert emerg"
    DATE=`date +"%D %T"`
    
    for F in $FACILITIES; do
    	for S in $SEV; do
    		echo "$F.$S ..."
    		logger -p$F.$S "syslog test $DATE $F.$S"
    	done
    done
    
    

     

Platforms supported: Most Unix flavors


webcat and webload

Webcat is a perl program that takes URLs as input and fetches the specified web page. It can also be used to execute CGI programs that use the GET method and URL encoding for their parameters. When given a URL on the command line, the script fetches a single web page. When no URL is specified on the command line, the program will read URLs from stdin, one URL per line.

The script does no parsing of the file returned and does not fetch images or URLs listed in frames.
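
The heart of webcat is splitting each URL into host, port, and file name before opening the socket. The same split can be sketched with plain shell parameter expansion (the URL below is a made-up example, and this sketch ignores URLs with no path at all):

```shell
url="http://www.example.com:8080/docs/index.html"

rest=${url#http://}              # strip the scheme, as webcat does
hostport=${rest%%/*}             # everything up to the first slash
filename=/${rest#*/}             # the document path, leading slash restored
host=${hostport%%:*}
port=${hostport#*:}
[ "$port" = "$hostport" ] && port=80   # no explicit port: default to 80

echo "$host $port $filename"     # prints: www.example.com 8080 /docs/index.html
```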

Webload is a simple Bourne shell script that uses webcat to retrieve lists of URLs, reporting the amount of time needed to retrieve each page. Webload can save the files to a specified directory or send them to /dev/null. It can also loop through the list and download them repeatedly. There are much better tools out there for this task now, but when I originally wrote this that wasn’t the case. Think of this as an old example of how you can do things.

webcat.txt

#!/usr/bin/perl
#
# This script is a command line program for fetching web pages.
# It takes URLs as input and then connects to the specified web
# server and retrieves the specified page using an HTTP/1.0 GET
# request.  It does not parse the resulting web page, and does
# not retrieve any associated images, included files, or source
# files for embedded frames.
#
# A single URL can be given on the command line, or the program
# will read URLs from stdin, one URL per line.
#
#
# Andy Welter
# www.the-welters.com
# 9/1/1999
#
use Socket;
if ( "$ARGV[0]" ne "" ) {
	$url="$ARGV[0]";
} else {
	$interactive=1;
	chop ($url = <STDIN>);
};

#
# quiet means don't print prompts and diagnostics
#
$quiet=1;

#
# Read std in and write it to the server
# Open a socket for each command
#
printf ("url: ") unless $quiet;
while ( $url ) {
$url=~s.http://..;
($host,$filename)=split /\//, $url, 2;
($host,$port)=split /:/,$host;
if ($port eq "") {
	$port=80;
};
$filename="/$filename";

printf ("set up socket to $host:$port\n") unless $quiet;
$sockaddr = 'S n a4 x8';
($name, $aliases, $proto) = getprotobyname('tcp');
($name, $aliases, $type, $len, $thisaddr) = gethostbyname($host);
$thisport = pack($sockaddr, &AF_INET, 0, $thisaddr);
$thatport = pack($sockaddr, &AF_INET, $port, $thisaddr);

	printf ("Opening socket...\n") unless $quiet;
	socket(S, &PF_INET, &SOCK_STREAM, $proto) ||
		die "cannot create socket\n";
		connect(S,$thatport) || die "cannot connect socket\n";

	# Set socket to write after each print
	select(S); $| = 1; select(STDOUT);

	#
	# Send command
	#
	printf ("Sending $filename") unless $quiet;
	print S "GET $filename HTTP/1.0\n\n";
	#
	# Read responses from server
	#
	while ( $_ = <S> ) {
		print $_;
	};
	close(S);
	if ($interactive) {
		printf ("url: ") unless $quiet;
		chop ($url=<STDIN>);
		}
	else {
		$url="";
		};
};

exit 0;

webload.txt

#!/bin/sh
#
# Andy Welter
# www.the-welters.com
#
# This script uses the webcat program to download a list of URLs.
# It can be used to repeatedly download the list of files in order to
# generate a steady traffic load on a web server, and can also save
# the resulting files in a directory.
#
# Options:
# -file <filename>
#	The file name containing the list of URLs to retrieve.
# -delay <seconds>
#	The number of seconds to pause between GET requests
#	by default, there is no delay between requests.
# -save <dirname>
#	The name of an existing directory where the output will
#	be saved
# -loop <count>
#	The number of times that the script will loop through the file list.
#	The default is 1.  A negative number will cause an infinite loop.
#
#
USAGE="webload -file <filename> [-loop <count>] [-save <dirname>] [-delay <seconds>]"

DELAY=0
LOOP=1

while [ $# -gt 1 ]; do
case $1 in
-file|-f) URLS=$2
	;;
-delay|-d) DELAY=$2
	;;
-save|-s) SAVEDIR=$2
	;;
-loop|-l) LOOP=$2
	;;
*) echo $USAGE
	exit 1
	;;
esac
shift 2
done

COUNT=0
echo $URLS
while [ $COUNT -ne $LOOP ]; do
	cat $URLS | while read URL; do
		echo $URL
		date
		if [ -n "$SAVEDIR" ]; then
			FILE=`echo $URL | sed 's,[ /],.,g'`
			webcat $URL > $SAVEDIR/$FILE
		else
			webcat $URL > /dev/null
		fi
		date
		echo ---------------------
	sleep $DELAY
	done
COUNT=`expr $COUNT + 1`
done

Platforms supported: Most Unixes; requires perl5. webcat should work on NT with a good perl port.


logroll

This script is used to manage log files on a system. It maintains a specified number of log file copies, named “logfilename”, “logfilename.0”, “logfilename.1”, and so on, with “logfilename.0” being the most recent archived log file.
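
As a sketch, here is one rotation done by hand on scratch files, using the same copy-then-truncate approach the script uses (the file names and scratch directory are arbitrary):

```shell
dir=$(mktemp -d)
touch "$dir/app.log" "$dir/app.log.0" "$dir/app.log.1"

# age the numbered copies: the highest number is the oldest
mv "$dir/app.log.1" "$dir/app.log.2"
mv "$dir/app.log.0" "$dir/app.log.1"

# copy, then truncate, so processes holding the log open keep writing to it
cp -p "$dir/app.log" "$dir/app.log.0"
: > "$dir/app.log"

ls "$dir"    # now contains: app.log app.log.0 app.log.1 app.log.2
```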

#!/bin/ksh
#
# Roll log files.
# This script will maintain a specified number of old log files,
# named by appending a number to the end of it, with the older
# files having higher numbers.  Numbering will start at zero.
#
# Andy Welter
# www.the-welters.com
# 1/4/2001
#

stat_check () {
RC=$1
MSG="$2"
if [ "$RC" != "0" ]; then
	print "ERROR: $MSG"
	exit $RC
fi
}

FILE=$1
if [ ! -f "$FILE" ]; then
	stat_check 1 "No such log file $FILE"
fi

if [ $# -eq 1 ]; then
	COUNT=6
else
	COUNT=`expr $2 - 1`
fi

CUR=$COUNT

while [ $COUNT -ge 1 ]; do
	NEXT=`expr $COUNT - 1`
	if [ -f $FILE.$COUNT ]; then
		rm $FILE.$COUNT
		stat_check $? "Removing $FILE.$COUNT"
	fi
	if [ -f $FILE.$NEXT ]; then
		mv $FILE.$NEXT $FILE.$COUNT
		stat_check $? "Moving $FILE.$NEXT"
	fi
	COUNT=`expr $COUNT - 1`
done
#
# Copy then truncate the active log file to avoid problems with
# open file descriptors
cp -p $FILE $FILE.0
stat_check $? "Copying $FILE to $FILE.0"
cat /dev/null > $FILE

Platforms supported: Most Unixes.


format_clone

This script is used to copy the disk partitioning from one disk to another. It does not copy any data; it simply replicates the partition layout, as you could do by hand with the “format” program. This script comes in handy when setting up a batch of new disk drives, or when setting up disk mirroring with Solstice DiskSuite.

NOTE: the source and destination disks must have the same geometry (size, cylinders, sectors, etc.). This script does not check to see if the destination disk is in use, but it does save a backup copy of the old disk partitioning in /tmp/format.dat.<diskname>.bak.

#!/bin/sh
#
# Clone a disk format.  This can be useful when setting up
# a bunch of new disks, or when setting up a disk that will
# be a mirror of an existing disk.
#
# NOTE:  This program assumes that the source and destination
# disk drives are the exact same make and model.
#
# Andy Welter
# www.the-welters.com
# 1/4/2001
#
SOURCE=$1
DEST=$2
echo "Copy format from $SOURCE to $DEST"
echo "Are you sure that $DEST is not in use? [y|n]"
read ANS
if [ "$ANS" != "y" ]; then
	echo "Exiting."
	exit 1
fi

if [ ! -h /dev/dsk/${SOURCE}s0 -o \
	! -h /dev/dsk/${DEST}s0 ]; then
    echo "no such disk"
    exit 1
fi
FORMATDAT="/tmp/format.dat.$$"
FORMATBAK="/tmp/format.dat.$DEST.bak"
if [ -f $FORMATDAT -o \
	-f $FORMATBAK ]; then
	echo "$FORMATDAT or $FORMATBAK already exists"
	exit 1
fi
echo "Saving source format table..."
format $SOURCE << EOF > /dev/null 2>&1
save
$FORMATDAT
quit
EOF

#
# Get the disk type and partition table name from the file
DNAME=`grep "^disk_type" $FORMATDAT | cut -d'"' -f2`
PNAME=`grep "^partition" $FORMATDAT | cut -d'"' -f2`

#
# Save the old format table for the destination device
echo "Saving backup copy of $DEST format table in $FORMATBAK..."
format $DEST << EOF > /dev/null
save
$FORMATBAK
quit
EOF

#
#
echo "Copy new format table to $DEST ..."
format -x $FORMATDAT -t $DNAME -p $PNAME $DEST << EOF > /dev/null
label
y
quit
EOF

rm $FORMATDAT
echo "done."
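The disk_type and partition name extraction used above can be tried on its own. The format.dat lines in this sketch are made up, but follow the layout the script greps for:

```shell
# Pull the quoted names out of saved format.dat entries with cut.
FMT=/tmp/fmt.demo.$$
cat > $FMT <<'EOF'
disk_type = "SUN9.0G" : ctlr = SCSI : ncyl = 4924
partition = "clonepart" : disk = "SUN9.0G" : ctlr = SCSI
EOF
DNAME=`grep "^disk_type" $FMT | cut -d'"' -f2`
PNAME=`grep "^partition" $FMT | cut -d'"' -f2`
echo "$DNAME $PNAME"
rm $FMT
```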

Platforms supported: Solaris 2.x.


inetd_cleanup

Most systems ship with an overly permissive inetd.conf file. This script disables services that are potential security hazards. More full-fledged security tightening tools exist, such as the Bastille Linux project and Titan, but this script is handy if all you want to do is sweep through a bunch of inetd.conf files.

#!/bin/ksh
#
# Andy Welter
# www.the-welters.com
# 1/3/2001
#
# This script is used to tighten up the security of a system's
# inetd.conf file, which is used to control what services the inet daemon
# will start up on a system.
#
# Most Unix systems ship by default with some services that are better
# left disabled.  This script has 4 levels of tightening it can do:
# Default - Only the worst of the default services are turned off.
#           Suitable for non-mission critical servers on a relatively
#	    trustworthy internal network.
# medium  - Slightly more restrictive, internal servers would be a good
#           candidate for this level.
# high    - Much more restrictive.  Used for systems that are exposed to the
#           internet either directly, or within a DMZ.  Web servers, ftp
#	    servers, mail gateways, etc.
# max     - ???
#
# Usage:
USAGE='inetd_lockdown [-medium|-high|-max] [-install] [<inetd.conf filename>]'
#

#
# Check return codes, and exit if not zero.
rccheck () {
RC=$1
if [ $RC -ne 0 ]; then
	print "Error at $2"
	exit 1
fi
}

#
# Define services to disable for each level
# Services that have names are defined in the /etc/services file.  The services with numbers here
# are services that use remote procedure calls. (rpc).  Many rpc based services have a history
# of buffer overrun security issues.
DEFLIST="comsat exec talk uucp tftp name finger systat netstat echo discard chargen rquotad walld rexd rusersd"
MEDLIST="$DEFLIST shell login 100232 xaudio"
HIGHLIST="$MEDLIST rstatd printer kerbd ufsd timed dtspc 100068 100083 100235 100221 100229 100230"
MAXLIST="$HIGHLIST ftp"
LIST=$DEFLIST
FILE=/etc/inet/inetd.conf

while [ $# -ge 1 ]; do
    case $1 in
	-medium) LIST=$MEDLIST
		;;
	-hi|-high) LIST=$HIGHLIST
		;;
	-m|-max) LIST=$MAXLIST
		;;
	-i|-install) INSTALL="yes"
		;;
	-v) VERBOSE="yes"
		;;
	-*) print "$USAGE"
		exit 1
		;;
	*) FILE=$1
		;;
    esac
    shift
done

if [ ! -r $FILE ]; then
	print "ERROR: No such file or file unreadable - $FILE"
	exit 1
fi

if [ $VERBOSE ]; then
	print "Disabling the following services:"
	print "$LIST"
    if [ $INSTALL ]; then
	print "The new $FILE will be installed automatically"
    fi
fi

TMPFILE=$FILE.tmp
NEWFILE=$FILE.new
BAKFILE=`date +"$FILE.%y%m%d"`
cp $FILE $BAKFILE
rccheck $? "backup"
cp $FILE $TMPFILE
cp $FILE $NEWFILE

#
# Loop through the list of services and use sed to comment the service out
for SVC in $LIST; do
	sed "s/^$SVC/#&/" < $TMPFILE > $NEWFILE
	rccheck $? "sed"
	cp $NEWFILE $TMPFILE
	rccheck $? "cp"
done
rm $TMPFILE

#
# Install the new file and make inetd re-read its config file
if [ "$INSTALL" = "yes" ]; then
	if [ $VERBOSE ]; then
		print "Installing file now..."
	fi
	if [ -w $FILE ]; then
		cp $NEWFILE $FILE
		rccheck $? "install"
		PID=`ps -eaf -o pid -o comm | grep -w "inetd" |\
			 (read pid cmd; echo $pid)`
		kill -HUP $PID
	fi
fi
if [ $VERBOSE ]; then
	print "Difference between old and new files:"
	diff $NEWFILE $BAKFILE
fi
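The heart of the script is the sed edit that comments a service out by matching its name at the start of the line; untouched services pass through unchanged. A minimal sketch using a hypothetical two-line inetd.conf:

```shell
# Comment out one service and count the commented lines.
CONF=/tmp/inetd.demo.$$
cat > $CONF <<'EOF'
ftp stream tcp nowait root /usr/sbin/in.ftpd in.ftpd
finger stream tcp nowait nobody /usr/sbin/in.fingerd in.fingerd
EOF
# "&" in the replacement re-inserts the matched service name after the "#"
sed "s/^finger/#&/" $CONF > $CONF.new
COMMENTED=`grep -c "^#" $CONF.new`
echo "$COMMENTED line(s) commented out"
rm $CONF $CONF.new
```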

Platforms supported: most Unix systems, but the service lists are Solaris oriented.


ping_scan

Sequentially ping a range of IP addresses. This script is not as efficient as a program such as nmap, but it is a handy exercise in manipulating IP addresses in Perl.

#!/usr/local/bin/perl
#
# This script will ping a range of IP addresses given
# a starting and ending address.  There are other faster
# ways to do this, for example, the nmap program, but this
# is also an example on how to manipulate IP addresses in Perl.
#
# Andy Welter
# www.the-welters.com
# January 2001
#

use Socket;
$packet_count=1;
$usage="ping_scan <starting IP address> [ending ip address]\n";
($#ARGV >= 0) || die "$usage";
$cur_str=$ARGV[0];
if ($#ARGV == 0 ) {
	$end_str=$cur_str;
} else {
	$end_str=$ARGV[1];
};
#
# Convert "1.2.3.4" notation address into an integer.  Unpacking
# with "N" (network byte order) keeps the arithmetic correct on
# little-endian hosts as well.
$cur_bin= unpack "N", inet_aton $cur_str;
$end_bin= unpack "N", inet_aton $end_str;

while ( $cur_bin <= $end_bin ) {
	#
	# Convert integer into a 1.2.3.4 notation address.
	$cur_str=inet_ntoa (pack "N", $cur_bin);
	print "$cur_str ... ";
	#
	# Ping the host
	open (PING, "ping $cur_str $packet_count |") ||
		die "Cannot execute ping command\n";
	while ( $_ = <PING> ) {
		print $_;
	};
	close PING;
	$cur_bin++;
};
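The address arithmetic amounts to treating a dotted-quad address as a base-256 number. A quick shell sketch of the same conversion (the address used is arbitrary):

```shell
# Convert a dotted-quad IP address to the integer the script increments.
IP="10.1.2.3"
set -- `echo $IP | tr '.' ' '`
NUM=`expr \( \( $1 \* 256 + $2 \) \* 256 + $3 \) \* 256 + $4`
echo "$IP = $NUM"
```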

Platforms supported: most Unix systems.


hostbyaddr

This script does a reverse address lookup, returning a host name when given an IP address. It uses the gethostbyaddr function call, which means it will use whatever name resolution method the host running the program uses. It is meant to provide more easily parsed output than nslookup, and is mostly useful inside other scripts.

#!/usr/local/bin/perl
#
# This script does a reverse address lookup, returning
# a host name when given an IP address.  It uses the
# gethostbyaddr function call, which means it will use
# whatever name resolution method the host running the
# program uses.  This is meant to provide a more easily
# parsed format for output than nslookup provides.
#
# Andy Welter
# www.the-welters.com
# January 16, 2001
#
use Socket;
$addr=$ARGV[0];
$hostname=gethostbyaddr(inet_aton ($addr), AF_INET);
if ( $hostname ) {
	print "$hostname\n";
} else {
	exit 1;
};

Platforms supported: Unix and NT with a good Perl port


hostbyname

When given a host name, this script looks up the corresponding IP address. It uses the gethostbyname function call, which will use whatever name resolution method the host running the program uses. Like the previous script, it is meant to provide more easily parsed output than nslookup, and is mostly useful inside other scripts.

#!/usr/local/bin/perl
#
# Obtain an IP address for a host name
# The gethostbyname function uses whatever name service the system
# you run it on is configured to use.  For example, this may be
# local hosts files, NIS, NIS+, or of course, DNS.
#
# This output is meant to be easier to use than the information
# returned by nslookup.
#
# Andy Welter
# www.the-welters.com
# January 16, 2001
#
$host=$ARGV[0];
$addr=gethostbyname($host);
if ( $addr ) {
	($a,$b,$c,$d)=unpack ('C4', $addr);
	print "$a.$b.$c.$d\n";
} else {
	exit 1;
};

Platforms supported: Unix and NT with a good Perl port


hostxcheck

When given an IP address or host name, this script performs forward and reverse lookups using gethostbyname and gethostbyaddr. This is useful for verifying that reverse DNS entries have been set up correctly, or for making sure that an IP address does not have a spoofed reverse entry.

#!/usr/local/bin/perl
#
# Cross check a host name using forward and reverse DNS
# lookups.  If the name passed in is a "cname" (aka alias),
# then the second hostname returned will not match the first.
# That is ok.  The biggest thing to watch for is that the addresses
# match.
#
# The gethostbyname or gethostbyaddr functions use whatever name service
# the system you run it on is configured to use.  For example, this may be
# local hosts files, NIS, NIS+, or of course, DNS.
#
# Andy Welter
# www.the-welters.com
# May 16, 2001
#

use Socket;
#
# Get the address for the name passed in.  If someone
# passed us an address, gethostbyname will return the address anyway.
$name1=$ARGV[0];
$addr1=gethostbyname($name1);
if ( $addr1 ) {
	($a,$b,$c,$d)=unpack ('C4', $addr1);
	$addr1txt="$a.$b.$c.$d";
} else {
	die "Lookup failed for name1=$name1\n";
};
#
# Do a reverse look up to get the name that goes with this address
$name2=gethostbyaddr (inet_aton ($addr1txt), AF_INET);
if ( $name2 ) {
	$addr2=gethostbyname($name2);
} else {
	die "Lookup failed for addr1=$addr1txt\n";
};
#
# Now do a forward lookup on the address to get the name.
if ( $addr2 ) {
	($a,$b,$c,$d)=unpack ('C4', $addr2);
	$addr2txt="$a.$b.$c.$d";
} else {
	die "Lookup failed for name2=$name2\n";
};

printf ("%-32s\t%s\n", "Hostname:","Address:");
printf ("%-32s\t%s\n%-32s\t%s\n",
	$name1, $addr1txt, $name2, $addr2txt);

Platforms supported: Unix and NT with a good Perl port.


rc4

This program is an rc4-based implementation of the Unix "crypt" program. It is a simple two-way encryption filter that encrypts a file or data stream using a supplied key. Running an encrypted file back through the program with the same key decrypts it.

#!/usr/bin/perl
#
# A perl implementation of the "crypt" program.
#
# This program implements the rc4 encryption algorithm
# using a single variable length key.
#
# It takes a file or data stream and encrypts it using the
# provided key.  Running the output of the program back
# through the program with the same key will decrypt it.
#
# USAGE: rc4 <keyval> [file name]
# If no file name is provided, the script will read from
# STDIN.
#
# Andy Welter
# www.the-welters.com
# May 2001
#

#
# Encrypt a buffer at a type.  Encryption is a stateful
# process, so we use the "@state" global variable to track
# the state.
sub rc4 {
my ($buf) = @_;
my ($ebuf, $char);
for(unpack('C*',$buf)) {
	$x++;
	$y=($state[$x%=256]+$y)%256;
	@state[$x,$y]=@state[$y,$x];
	#&swap;
	$char= pack ('C',
		$_^=$state[ ($state[$x] + $state[$y]) %256 ]);
	$ebuf= $ebuf . $char;
	};
return $ebuf;
};

sub prepkey {
#
# Prepare the encryption key
#
my @key=@_;
my @hexkey=unpack('C*',pack('H*',shift @key));
my ($x, $y);
my @t;
my @state;
#
# prepare key
for(@t=@state=0..255){
	$y=($hexkey[$_%@hexkey]+$state[$x=$_]+$y)%256;
	@state[$x,$y]=@state[$y,$x];
	#&swap;
}
return @state;
};

local @state=prepkey("$ARGV[0]");
my $x=0;
my $y=0;
my $file;
if ( $ARGV[1] eq "" ) {
	# If no file name was specified,
	# use standard-in as the file name
	$file="<&=0";
} else {
	$file=$ARGV[1];
};

open (IN,"$file") || die "error: can not open $file";
while (read (IN,$buf, 1024)) {
	print rc4 ($buf);
};
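The reason the same command both encrypts and decrypts is that rc4 is a stream cipher: each data byte is XORed with a keystream byte, and XOR is its own inverse. A one-byte sketch using shell arithmetic (the keystream byte here is made up):

```shell
# XOR a byte with a keystream byte, then XOR again to recover it.
PLAIN=65        # ASCII 'A'
KEYBYTE=151     # one hypothetical keystream byte
CIPHER=$(( PLAIN ^ KEYBYTE ))
BACK=$(( CIPHER ^ KEYBYTE ))
echo "plain=$PLAIN cipher=$CIPHER decrypted=$BACK"
```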

Platforms supported: Unix and NT with Perl.


repack

This script uses the "/var/sadm/install/contents" file in Solaris to recreate a Solaris package file from the files currently installed on the system. This can be useful for backing up a Solaris package before removing it, or for recreating an installation package when you no longer have the package file it came in. It can also be used to build a modified / customized package for installation on other systems: change file contents, ownerships, or permissions to suit your needs, repackage the files, and use that package for installation elsewhere.

#!/bin/ksh
#
# Re-create a Solaris package installation file from its installed
# components.
#
# Andy Welter
# www.the-welters.com
#
# Limitations:
#   This script will only capture files that have entries in the
#   /var/sadm/install/contents file.  It will not capture other files such
#   as config files that were added for the package later.  It will also
#   not capture any pre or post install scripts that may have been used
#   when the package was originally installed.
#
#   The file ownerships and permissions in the new package will be the
#   same as the actual file ownerships and permissions, not what they were
#   when the file was originally installed.
#
DIR=/tmp
USAGE="repack <package name>"
if [ $# -lt 1 ]; then
	echo "$USAGE"
	exit 1
fi
PKG=$1
pkginfo $PKG
if [ $? -ne 0 ]; then
	echo "No such package $PKG."
	exit 1
fi

TMPDIR=$DIR/$PKG.$$
if [ ! -d $TMPDIR ]; then
	mkdir $TMPDIR
else
	echo "$TMPDIR already exists.  Exiting"
	exit 1
fi
cd $TMPDIR
echo "PKG=$PKG" > pkginfo
echo 'BASEDIR="/"' >> pkginfo
pkginfo -l $PKG | while read KEYWORD VALUE; do
	case $KEYWORD in
	NAME:)
		# make sure the name reflects the fact that this
		# is a repackaged Solaris package.
		echo "NAME=$VALUE - repackaged" >> pkginfo
		;;
	CATEGORY:)
		echo "CATEGORY=$VALUE" >> pkginfo
		;;
	ARCH:)
		echo "ARCH=$VALUE" >> pkginfo
		;;
	VERSION:)
		# make sure the version reflects the fact that this
		# is a repackaged Solaris package.
		echo "VERSION=$VALUE - repackaged" >> pkginfo
		;;
	VENDOR:)
		echo "VENDOR=$VALUE" >> pkginfo
		;;
	EMAIL:)
		echo "EMAIL=$VALUE" >> pkginfo
		;;
	esac
done

echo "i pkginfo=./pkginfo" > prototype
grep $PKG /var/sadm/install/contents | cut -f1 -d" "| pkgproto >> prototype

if [ -d /var/spool/pkg/$PKG ]; then
	echo "Package already exists in /var/spool/pkg."
	echo "Do you want to overwrite this package? "
	read ANS
	if [ "$ANS" = "y" -o \
		"$ANS" = "yes" ]; then
		pkgmk -o -r /
	else
		echo "Ok, exiting"
		exit 1
	fi
else
	pkgmk -r /
fi
cd /var/spool/pkg
rm -r $TMPDIR
echo "Do you want to turn the pkg directory into a single package file? "
read ANS
if [ "$ANS" = "y" -o \
	"$ANS" = "yes" ]; then
	pkgtrans -s /var/spool/pkg $PKG.pkg
	rm -r /var/spool/pkg/$PKG
fi
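Each contents file line begins with the installed path name, which is what the cut -f1 above extracts before handing the list to pkgproto. The sample contents line in this sketch is hypothetical:

```shell
# Extract the path name field from a contents-style line.
LINE="/usr/bin/foo f none 0755 root bin 1234 56789 981234567 SUNWfoo"
PATHNAME=`echo "$LINE" | cut -f1 -d" "`
echo "$PATHNAME"
```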

Platforms Supported: Solaris 2.x


makePackage

This script creates a simple Solaris package by interactively prompting for the basic values of a package. It assumes that you already have the files installed on the system in the proper location, with the desired ownerships and permissions.

#!/usr/bin/ksh
#
# This script creates a simple solaris package by interactively
# prompting for the basic values of a package.  It assumes that
# you already have the files installed on the system in the proper
# location, with the desired ownerships and permissions.
#
# Andy Welter
# Version 1.0
# 2005
#

#
copyScript () {
	if [ "$2" = "" ]; then
		return
	elif [ -d $1 ]; then
		cp $1/$2 $TMPDIR
		print "i $2=./$2" >> prototype
	elif [ -f $1 ]; then
		cp $1 $TMPDIR/$2
		print "i $2=./$2" >> prototype
	else
		print "ERROR!  bad file name or path $1"
	fi
};

print "Creating pkginfo file..."
print "Enter values for solaris package variables: "
print -n "Package name (8 chars or less, no spaces) [PKG]: "
read pkg
TMPDIR="/var/spool/pkg/$pkg.$$"
if [ -d $TMPDIR ]; then
	print "ERROR: $TMPDIR already exists"
	exit 1
else
	mkdir $TMPDIR
	cd $TMPDIR
fi
print "PKG=$pkg" > pkginfo

print -n "\t Package long name [NAME]: "
read name
print "NAME=$name" >> pkginfo

print -n "\t package CATEGORY: [ex. utility, application] "
read cat
print "CATEGORY=$cat" >> pkginfo

print "ARCH=sparc" >> pkginfo

print -n "\t VERSION: "
read version
print "VERSION=$version" >> pkginfo

print -n "\t VENDOR: "
read vendor
print "VENDOR=$vendor" >> pkginfo

#print -n "\t BASEDIR: "
#read basedir
basedir="/"
print "BASEDIR=$basedir" >> pkginfo

print "EMAIL=" >> pkginfo

print "Creating package prototype file..."
cat /dev/null > prototype
print "Solaris packages can run scripts at 3 points"
print "= a 'checkinstall' script for checking package dependencies"
print "= a 'preinstall' script that prepares a system for the installation of the files"
print "= a 'postinstall' script that runs after the other package files have been installed"
print "The files must be named checkinstall, preinstall, or postinstall."
print
print "Enter the FULL path name to any checkinstall script you wish to run (optional): "
read script
copyScript $script checkinstall

print "Enter the path name to any preinstall script you wish to run (optional): "
read script
copyScript $script preinstall

print "Enter the path name to any postinstall script you wish to run (optional): "
read script
copyScript $script postinstall

print -n "Enter the name of a file containing a list of the package files: "
read manifest
if [ -f "$manifest" ]; then
	print "i pkginfo=./pkginfo" >> prototype
	cat $manifest | pkgproto >> prototype

else
	print "ERROR: Must specify a file listing the contents of the package"
	exit 1
fi
if [ -d /var/spool/pkg/$pkg ]; then
	echo "Package already exists in /var/spool/pkg."
	echo "Do you want to overwrite this package? "
	read ANS
	if [ "$ANS" = "y" -o \
		"$ANS" = "yes" ]; then
		pkgmk -o -r /
	else
		echo "Ok, exiting"
		exit 1
	fi
else
	pkgmk -r /
fi

cd /var/spool/pkg
rm -r $TMPDIR
echo "Do you want to turn the pkg directory into a single package file? "
read ANS
if [ "$ANS" = "y" -o \
	"$ANS" = "yes" ]; then
	pkgtrans -s /var/spool/pkg $pkg.pkg
	rm -r /var/spool/pkg/$pkg
fi

Platforms Supported: Solaris 2.x


sh_ex

This script doesn’t do anything… it is just a file with examples of Bourne shell script syntax. It can be handy if you don’t write sh often and want your memory jogged, or if you can never remember which test operator checks whether a file is a symbolic link.

#!/bin/ksh

#
# Some script syntax examples:

#
# reading output from one program to use in a variable
ps -eaf | grep httpd | while read USER PID PPID THEREST; do
	grep "error: $PID" /var/adm/somelogfile
done

#
# The problem with the previous example is that you can't do
# something with standard out in the middle of the loop, since
# you are already using standard out.   Another way to do it is this:
ps -eaf | grep httpd | while read USER PID PPID THEREST; do
	PIDLIST="$PIDLIST $PID"
done
for CURPID in $PIDLIST; do
	grep "error: $CURPID" /var/adm/somelogfile >> /tmp/someotherlog
done

#
# You can also format the output of "ps" the way you want using the
# "-o" option.
ps -eaf -o pid -o comm | grep "httpd" | while read PID CMD; do
	kill $PID
done

#
# You can also use program output this way:
# Suppose you want to see what the permissions and ownerships were on some
# command that was in your path:
ls -al `which su`

#
# Or suppose you wanted to save a date/time stamp so that you could use
# it multiple times:
DTIME=`date +"%m%d%y.%H%M"`
mv log1 log1.$DTIME
mv log2 log2.$DTIME

#
# Some examples of the "test" operation:
# -r  file exists and is readable
# -w  file exists and is writable
# -x file exists and is executable
# -f  file exists and is a regular file. (not a dir)
# -d  file is a directory
# -h  file is a symbolic link
# -c  file is a character special file
# -b  block special file
# -p  named pipe
# -u  setuid file
# -g  setgid file
# -k  sticky bit set
# -s  file exists and has a size greater than 0
# -z  test for zero len string
# -n  non-zero length string
#
# Numeric test operators:
# -lt less than
# -le less than or equal to
# -gt greater than
# -ge greater than or equal to
# -eq equal to
# -ne not equal to
#
# String test operators
# = equal
# != not equal
# > greater than
# < less than
#
# boolean operators
# -a and
# -o or
# ! not
#
if [ -f /tmp/myfile ]; then
	cat /tmp/myfile
fi

#
# elif structure
if [ ! -f $FILE ]; then
	echo "No such file $FILE"
elif [ ! -r $FILE ]; then
	echo "$FILE not readable"
else
	echo "File is readable"
fi

#
# Processing command line options:
while [ $# -ge 2 ]; do
	case $1 in
	-f) FILE=$2
		shift 2
		;;
	-d) DEVICE=$2
		shift 2
		;;
	*) echo "usage: $0 [-f <filename>] [-d <device name>]"
		exit 1
		;;
	esac
done

#
# This is an example of a subroutine definition and call
hup_daemon () {
DAEMON="$1"
ps -e -o pid,comm | grep "$DAEMON" | while read PID COMM; do
	echo "send HUP to $PID $COMM"
	kill -HUP $PID
done
};

hup_daemon inetd

# Writing data to syslog
/usr/local/bin/someCommand > /var/adm/someCommand.log
if [ $? -eq 0 ]; then
	logger -p user.notice "NOTE:  someCommand worked just fine"
else
	logger -p user.err "ERROR: someCommand had a problem"
fi

# sending a file as an attachment in email
uuencode /var/adm/someCommand.log someCommand.log | \
	mailx -s "Log file from someCommand" someUser@somedomain.com

Platforms supported: generic Bourne shell script syntax.


perl_ex

This script doesn’t do anything… It is just a file with examples of simple Perl syntax. It can be handy if you don’t write in perl often and want your memory jogged.

#!/usr/bin/perl

###########################################################################
# This is an example Perl script that doesn't do that much.  It is mostly an
# example of Perl syntax.  The last part of the example is a basic web
# server log analysis program.
###########################################################################

###########################################################################
# Types of variables
###########################################################################
#
# Scalar variables always start with a "$" whether they are on the left or
# right hand side of an expression.  This differs from bourne, korn,
# and C-shell scripts.
$logDir="/var/log/httpd";
$logFile="$logDir/access_log";

#
# arrays
# Arrays can be referenced in two contexts: a list context or a
# scalar context.  In list context, the array name starts with
# a @.  A single element is accessed with a $ and a numeric
# subscript, since an individual element is a scalar value.
# When an array is evaluated in scalar context, such as in
# arithmetic, you get the number of elements in the array.
@months=("", "January", "February", "March", "April", "May", "June",
	"July", "August", "September", "October", "November", "December");
$mar=$months[3];
$monthCount=@months - 1;

#
# Hashes or associative arrays
# One of the most useful innovations in Perl.  A hash
# is like an array whose index is a string rather than an integer.
# When referenced with a %, the variable is the whole hash.  When
# used with a $ and a subscript in curly braces, you get the value
# of a single hash element.
%namesAndNicknames=("Andrew", "Andy", "William", "Bill", "James", "Jim");
$nickname=$namesAndNicknames{"Andrew"};
$namesAndNicknames{"William"} = "Billy Bob";
#
# Get a list of hash keys
@names=keys (%namesAndNicknames);
#
# Get a list of the values in the hash
@values=values (%namesAndNicknames);

###########################################################################
# A subroutine definition and call
###########################################################################
sub isEqual {
my ($parm1, $parm2) = @_;
if ( $parm1 eq $parm2) {
	return 1;
} else {
	return 0;
};
};

$rc=isEqual ("abc", "def");

###########################################################################
# Simple control structures
###########################################################################
if ( $nickname eq "Billy Bob" ) {
	nascarfan("true");
} elsif ( $nickname eq "Nigel" ) {
	f1fan("true");
};

$length = @names;
for ($ii=0; $ii < $length; $ii++) {
	print ("$names[$ii] aka $namesAndNicknames{$names[$ii]}\n");
};

foreach $name (@names) {
	printf ("%s aka %s \n",
		$name, $namesAndNicknames {$name});
};

#
# open a file for output, read from standard in, and write it to the outfile.
open (OUTFILE, "> /var/tmp/junk");
while ( $_ = <>) {
	print (OUTFILE "$_");
};

#
# The first parm in the open command is the file handle name, File handles
# have odd syntax... they do not have any special character in front of them
# such as $, %, or &.
open (LOG,"<$logFile") || die "error: can not open $logFile";
#
# split the lines of the log file.  Ex.
# host.someisp.net - someuserid [01/Jan/1998:08:30:15 -0500] "GET /~welter/index.html HTTP/1.1" 304 -
while (<LOG>) {
	($host,$junk,$userid,$date,$time,$request,$url,$protocol,$junk2) =
		split (/\s+/);
	if (($url =~ /welter/) && !($url =~ /jpg/)) {
		$urls {"$url"} ++;
		$hosts {"$host"} ++;
		$dates {"$date"} ++;
	};
};

print "URL Requests by host.\n";
while (($key ,$count) = each %hosts) {
	print "$count\t$key\n";
};

print "\nURL Requests by URL.\n";
while (($key ,$count) = each %urls) {
	print "$count\t$key\n";
};

# sort some hash by value
@sorted = sort { $hash{$a} cmp $hash{$b} } keys %hash;
for $key (@sorted) {
	print "$hash{$key}\n";
};
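The count-by-field report in the log analysis example above can also be sketched in shell with awk and sort; the access_log lines here are made up:

```shell
# Count requests per host with awk, then sort descending by count.
LOG=/tmp/access.demo.$$
cat > $LOG <<'EOF'
hostA - u [d:t -0500] "GET /~welter/index.html HTTP/1.1" 200 -
hostB - u [d:t -0500] "GET /~welter/index.html HTTP/1.1" 200 -
hostA - u [d:t -0500] "GET /~welter/pics.html HTTP/1.1" 200 -
EOF
TOP=`awk '{ count[$1]++ } END { for (h in count) print count[h], h }' $LOG | sort -rn | head -1`
echo "$TOP"
rm $LOG
```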

Platforms supported: generic Perl syntax.


emc-maskrep

This Perl script produces a VMax / symaccess masking view report in HTML or plain text. It uses the XML output option of the symcli commands to get the data in a more parsable format.

#!/usr/bin/perl
#
# symaccess masking view report
#
# Produces a report summarizing the masking views and storage groups for an EMC VMax.
# This script uses the symaccess command with the "-output XML" option to obtain its
# information and requires the use of the perl XML module to do the parsing.
#
# The script can produce plain text, or a simple HTML page.  The HTML page has
# intra document links to make it easier to navigate around. The script can also
# produce a "light" or "list only" version of its output, where it just prints the
# devices and the masking views that the devices are in.
#
# In a nutshell, the full output of this script consists of a prettied up version of
# "symaccess list view" with the disk space used by each view listed.  Then the script
# runs "symaccess show view <viewname>" for each view.  It also runs a symaccess show
# for any storage groups that are not included in any masking views.
#
# The script does not print information about old style mapping/masking commands, it
# only documents symaccess structures.
#
# The script can take a while to run if a system has many masking views.  Running
# the script with a -v for verbose gives feedback on what the script is doing, so
# it can be handy to use that option when initially testing the script in a new
# environment.
#
# This script has been tested on Linux and AIX.  It requires SYMCLI.
#
# Options:
# -s sid  - specifies the SID of the VMax.  Otherwise we depend on SYMCLI_SID
# -v      - Verbose.  Prints diag messages as it runs various symcli commands.
# -l      - light or list output.
#           prints device ids and the masking view that they are in.
# -h      - prints usage statement
# -H      - HTML output
#
# Andy Welter
#
# V1.0 March 2011, initial version.
# V1.1 May 2011, cleanup unneeded code.
# V1.2 May 2011, add "light" option that only lists devices with their masking view.
# V1.2 Aug 2011, add reporting for storage groups that are not in views.
# V1.3 Dec 2011, added usage statement and some comments.
# V1.4 June 2012, added error checking for failed symaccess calls.
#

# use module
use XML::Simple;
use Data::Dumper;
use Getopt::Std;
$usage="USAGE: emc-maskrep <-s sid> <-l> <-H> <-v>\n-s specify SID if not using SYMCLI_SID env variable\n-l sg list only, aka light output\n-H html output\n-v verbose output\n";
getopts ('s:Hhlv') || die "$usage";
if ( $opt_s ne "" ) {
	$ENV{SYMCLI_SID}=$opt_s;
};
if ( $ENV{SYMCLI_SID} eq "" ) {
	print "ERROR: must specify an array with SYMCLI_SID or -s\n";
	exit 1;
};
if ( $opt_H ne "" ) {
	$html=1;
};
#
# Note: RDF info not currently supported.
if ( $opt_r ne "" ) {
	# get RDF info with symrdf list
	$rdf=1;
};
if ( $opt_l ne "" ) {
	# "list" output, only list devices by storage view
	$light=1;
};
if ( $opt_v ne "" ) {
	# verbose output... print info to std err to track progress.
	$verbose=1;
};
if ( $opt_h ne "" ) {
	# help and exit
	print "$usage";
	exit 0;
};

sub refToList  {
# when we have one item in a list in the XML, it
# gets parsed as a scalar instead of an array.  So we sometimes
# want to be able to force that to be one item list to make
# coding easier.
#
# This is complicated, so I'll break it down:
# @_ is the array of parameters to the function.
# @_[0] is the first parameter, which in this case is a reference
# The ref test then checks to see whether that reference is to a scalar
# or is a reference to an array.
my @list;
my $item;
if ( ref (@_[0]) eq "ARRAY" ) {
    # The first parameter is a reference to an array.  But what I want
    # is the array itself, not the reference to it.  So that is what the @{} gets me.
    #@list=@{@_[0]};
    foreach $item (@{@_[0]}) {
	if ($item ne "") {
		push (@list, $item);
	};
    };
} else {
        # the reference is to a scalar, and that is easier to decode.  I then turn
        # it into an array.
        my ($item)=(@_);
        @list=($item);
};
# and now I return the array.
return @list;
};

sub getSgList() {
my $xml = new XML::Simple (KeyAttr=>[]);
$verbose && print STDERR "list -type storage...\n";
my $cmdout=`/usr/symcli/bin/symaccess list -type storage -output XML 2> /dev/null`;
$cmdout || die "ERROR in getSgList: symaccess list -type storage -output XML \n";
my $sym=$xml->XMLin($cmdout);
my @sglist=refToList ($sym->{Symmetrix}{Storage_Group});
my $sg;
my %sgdb;
my $name;
foreach $sg (@sglist) {
    $name=$sg->{Group_Info}{group_name};
    $sgdb{$name}="";
};
return %sgdb;

};

sub getMaskViews() {
my $xml = new XML::Simple (KeyAttr=>[]);
$verbose && print STDERR "list view...\n";
my $cmdout=`/usr/symcli/bin/symaccess list view  -output XML 2> /dev/null`;
if ( $cmdout eq "" ) {
	print "ERROR in getMaskViews: symaccess list view \n";
	return;
};
my $sym=$xml->XMLin($cmdout);
my @viewlist=refToList ($sym->{Symmetrix}{Masking_View});
my $view;
my ($viewXML,$showViewOut);
my %viewdb;
my ($name,$ig,$pg,$sg,$size);
foreach $view (@viewlist) {
    $name=	$view->{View_Info}{view_name};
    $ig=	$view->{View_Info}{init_grpname};
    $pg=	$view->{View_Info}{port_grpname};
    $sg=	$view->{View_Info}{stor_grpname};
    my $size=0;
    if ( $light != 1 ) {
    	$verbose && print STDERR "get size view $name...\n";
    	$showViewOut=`/usr/symcli/bin/symaccess show view $name  -output XML 2> /dev/null`;
	if ( $showViewOut eq "" ) {
		print "ERROR in getMaskViews: /usr/symcli/bin/symaccess show view $name  -output XML\n";
		exit 1;
	};
    	$viewXML=$xml->XMLin($showViewOut);
    	my ($device,@devlist);
    	@devlist=refToList($viewXML->{Symmetrix}{Masking_View}{View_Info}{Device});
    	my @cap;
    	foreach $device (@devlist) {
		@cap=refToList($device->{capacity});
    		$size+=$cap[0];
    	};
    };
    $viewdb{$name}="$ig,$pg,$sg,$size";
};
return %viewdb;
};

sub printListViews {
my (%maskviews)=@_;
my ($view,@values);
$date=`date`;
if ( $html == 1 ) {
	print "<h1>Masking View Report</h1>\n<h2>$date $ENV{SYMCLI_SID}</h2>\n<table border cellpadding=5>\n";
	print "<tr><th>view <th>init group <th>port group  <th>storage group <th>capacity</tr>\n";
} else {
    print "#########################################################################################\n";
    print "# Masking View Report $date $ENV{SYMCLI_SID}\n";
    print "# view              init group          port group          storage group        capacity\n";
    print "#################   #################   #################   ##################  #########\n";
};
my $total=0;
foreach $view (sort (keys (%maskviews))) {
	@values=split /,/,$maskviews{$view};
	$total+=$values[3];
	if ( $html == 1 ) {
	    printf "<tr><td><a href=\"#%s\">%s</a><td>%s<td>%s<td>%s<td align=right>%d</tr>\n",
		$view,$view,$values[0],$values[1],$values[2],$values[3];
	} else {
	    printf "%-19s %-19s %-19s %-19s %9d\n",
		$view,$values[0],$values[1],$values[2],$values[3];
	};
};
if ( $html == 1 ) {
	print "<tr><td>&nbsp<td>&nbsp<td>&nbsp<td>total MB<td>$total</table>\n<br>\n";
	print "<hr>\n";
	print "\n<h2><a href=#unused-sglist>Link to Unmasked Storage Group List</a></h2>\n";
	print "<hr>\n";
	print "\n<h2>Masking View Details</h2>\n";
} else {
	print "%80s%10d\n"," ", $total;
};
};

#
# NOTE: rdf info option not currently supported.
sub getRDF {
my $xml = new XML::Simple (KeyAttr=>[]);
$verbose && print  STDERR "run symrdf list...\n";
return;	# RDF reporting is not currently supported; the code below is unreachable.
my $cmdout=`/usr/symcli/bin/symrdf list -output XML 2> /dev/null`;
my $sym=$xml->XMLin($cmdout);
my @devlist=refToList ($sym->{Symmetrix}{Device});
my $dev;
my %rdfhash;
my ($local, $remote, $type, $group);
foreach $dev (@devlist) {
	$local=$dev->{Local}{dev_name};
	$type=$dev->{Local}{type};
	$group=$dev->{Local}{ra_group_num};
	$remote=$dev->{Remote}{dev_name};
	$rdfhash{$local}="$type:$group\t$remote";
};
return %rdfhash;
};

sub getLightView {
my ($view)=@_;
my $xml = new XML::Simple (KeyAttr=>[]);
$verbose && print STDERR "show view $view...\n";
my $cmdout=`/usr/symcli/bin/symaccess show view $view -output XML 2>/dev/null`;
$cmdout || die "ERROR in getLightView: /usr/symcli/bin/symaccess show view $view -output XML \n";
my $sym=$xml->XMLin($cmdout);
my @devlist=refToList ($sym->{Symmetrix}{Masking_View}{View_Info}{Device});
my (@devs, $dev);
foreach $dev (@devlist) {
	push (@devs,$dev->{dev_name});
};
return @devs;
};

sub showLightView {
my ($view)=@_;
my @list=getLightView($view);
foreach $dev (@list) {
	print "$dev\t$view\n";
};
};

sub showView {
my ($view)=@_;
my $xml = new XML::Simple (KeyAttr=>[]);
$verbose && print STDERR "show view $view...\n";
my $cmdout=`/usr/symcli/bin/symaccess show view $view 2> /dev/null` ;
$cmdout || die "ERROR in showView /usr/symcli/bin/symaccess show view $view\n";
if ( $html == 1 ) {
	print "<a name=\"$view\">\n";
	print "<a href=#top>[top]</a>\n";
};
print "\n\n############################################################################\n";
print "## View $view\n";
print "############################################################################\n";
print "$cmdout";
};

############################################################################
# Main routine
############################################################################

my (%sglist)=&getSgList();
my (%maskviews)=&getMaskViews();

if ( $html == 1 ) {
	print "<html>\n<pre>\n";
};
if ( $light != 1 ) {
	printListViews(%maskviews);
};
my @masklist=sort keys (%maskviews);
my %rdfhash;
if ( $rdf ne "" ) {
	%rdfhash=getRDF();
};
foreach $view (@masklist) {
    if ( $light != 1 ) {
	showView($view);
    } else {
	showLightView($view);
    };
    # mark the storage group for this view as used
    my @mvparms=split/,/,$maskviews{$view};
    $sglist{$mvparms[2]}=$view;
};

if ( $light != 1 ) {
# print out the list of unused storage groups
my $sg;
if ( $html == 1 ) {
	print "<hr>\n";
	print "<a name=\"unused-sglist\">\n";
	print "\n<h2>Unmasked Storage Groups</h2>\n";
	print "<a href=#top>[top]</a>\n";
} else {
	print "\n\nUnmasked Storage Groups\n";
};
foreach $sg (sort keys (%sglist)) {
    if ( $sglist{$sg} eq "" ) {
	if ( $html == 1 ) {
		print "<a href=\"#sg-$sg\">$sg</a>\n";
    	} else {
		print "$sg\n";
    	};
    };
};
# show details of each unused sg
foreach $sg (sort keys (%sglist)) {
    if ( $sglist{$sg} eq "" ) {
    	my $cmdout=`/usr/symcli/bin/symaccess show $sg -type storage 2> /dev/null` ;
	$cmdout || die "ERROR from /usr/symcli/bin/symaccess show $sg -type storage \n";
    	if ( $html == 1 ) {
		print "<a name=\"sg-$sg\">\n";
		print "<h2>$sg</h2>\n";
		print "<a href=#unused-sglist>[unmasked list]</a>\n";
		print "$cmdout"
    	} else {
		print "\n$cmdout";
    	};
    };
};
};

if ( $html == 1 ) {
	print "</pre>\n</html>\n";
};

Platforms supported: Perl with XML::Simple and Data::Dumper; tested with Symcli 7.1 and 7.2 on EMC VMax storage arrays.


emc-fastcfg

This ksh script produces a VMax FAST VP configuration report in plain text.

#!/usr/bin/ksh
#
# Simple script to document the current state of an EMC FAST VP
# configuration.
# Andy Welter
# V 1.0
# Feb 2012
#
# The code that redirects standard error to /dev/null cleans
# up some extra newlines that symcli writes to stderr.
# The tail commands strip some junk from the start of the
# command output.
#

# set our standard env variables, including SID
# ex.
# export SYMCLI_SID=00019260xxxx
. /usr/local/bin/emc/emcenv > /dev/null

print "############################################################"
print "## F A S T    P O L I C I E S"
print "############################################################"
symfast list -fp -vp 2> /dev/null | tail +2

# build a list of policy names.
symfast list -fp -output xml 2> /dev/null| grep policy_name | cut -f2 -d">" | cut -f1 -d"<" | while read name; do
	list="$list $name"
done
#
# Print details about each policy
for name in $list; do
print "############################################################"
print "#### $name"
print "############################################################"
	symfast show -fp_name $name 2> /dev/null | tail +5
done

#
# Print capacity and usage info about the tiers
print "\n\n"
print "############################################################"
print "## F A S T    T I E R S"
print "############################################################"
symfast list -tech ALL -demand -vp 2> /dev/null | tail +4

#
# Print allocation details about all the associations.
print "\n\n"
print "############################################################"
print "## F A S T   A L L O C A T I O N S  B Y   G R O U P"
print "############################################################"
symfast list -association -demand 2> /dev/null | tail +4

Platforms supported: Tested with Symcli 7.3 and EMC VMax storage arrays.
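One portability note on the junk-stripping step above: the `tail +N` form used in the script is the traditional syntax and still works on Solaris and AIX, but GNU coreutils requires `tail -n +N`. A small sketch of the portable form, using sample text in place of real symfast output:

```shell
# Portable equivalent of "tail +4": print from line 4 onward.
# (the printf line stands in for symfast command output)
printf 'junk1\njunk2\njunk3\ndata1\ndata2\n' | tail -n +4
```

On GNU systems, replacing each `tail +N` in the script with `tail -n +N` gives the same result.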


sw-alicheck

Perl script to check the status of Brocade fabrics. It uses Net::Telnet to log in to each switch, runs zoneshow and switchshow, and merges the output to show the zoning alias for the device connected to each port. It flags ports connected to HBAs that have no alias, and flags aliases that are not logged in anywhere. This is handy for spotting problems or pruning out stale aliases.
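The cross-check the script performs can be sketched with standard tools: given one sorted file of aliased WWNs and one of logged-in WWNs (sample data below; the real input comes from parsing alishow and switchshow), `comm` isolates the two problem sets:

```shell
# Sample data; both files must be sorted for comm.
printf '10:00:00:00:c9:aa:aa:aa\n10:00:00:00:c9:bb:bb:bb\n' > /tmp/aliased.txt
printf '10:00:00:00:c9:bb:bb:bb\n10:00:00:00:c9:cc:cc:cc\n' > /tmp/loggedin.txt
# Logged-in WWNs with no alias (the script prints NO-ALIAS for these):
comm -13 /tmp/aliased.txt /tmp/loggedin.txt
# Aliased WWNs not logged in anywhere (stale aliases or failed links):
comm -23 /tmp/aliased.txt /tmp/loggedin.txt
```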

#!/usr/bin/perl
use Net::Telnet;

#
# sw-alicheck
# This script produces a report of the switchshow and alishow output
# from a list of brocade switches.  The value of the script is that
# it merges the alishow and switchshow output. This makes it easier
# to spot WWNs that are connected to the switch without having aliases
# and spot aliases that are no longer connected to the switch.  Those
# instances could either be the result of obsolete alias entries, or
# failed connections to the switches.
#
# NOTE: YOU MUST EDIT THIS SCRIPT TO INCLUDE YOUR OWN USER ID,
# PASSWORD, AND SWITCH LIST.

#
# History:
# V1.0 Fall 2010 Andy Welter. Initial version.
# V1.1 Dec 2010 Andy Welter. Add comments, filter out unlicensed ports.
# V1.2 Jan 2011 Andy Welter. Add support for NPIV connections
# V1.3 Jan 2011 Andy Welter. Add support for cascaded switches.
#

#
# to send commands use
# $telnet->print('somecommand');
# $output=$telnet->waitfor('/\$ /');
# or
# $output=$telnet->cmd ('somecommand');
#
use Getopt::Std;
my $usage='sw-alicheck <-u user> <-p password> <-s "sw1,sw2 sw3,sw4 ..."> <-c command>';
if ( getopts ('s:u:p:c:h:i') == 0) {
	print "$usage\n";
	exit 1;
};

sub sort_by_value {
        local(*x) = @_;
        sub _by_value { $x{$a} cmp $x{$b}; }
        sort _by_value keys %x;
}

#
# read the aliases from the switch and store them in a hash.
sub aliget  {
#
# some of the cfg, zone, and alias data is spread out across multiple lines.
# we want each zone or alias to be on the same line.
#
# The last line is going to be a command prompt.
# the $lastline logic is what keeps us from printing that.
#
my ($host,@output)=@_;
$name="";
%alilist=();
foreach $_  (@output) {
    chomp;
    # get rid of leading white space
    s/^\s+//g;
    # stop processing alias $name if we see a item
    if (m/zone:|cfg:|configuration:/) {
	$name="";
    } elsif ( m/alias:/) {
	# we have a new alias name, process  it
	($type,$name,$wwids)=split /\s+/;
	# look for wwids
	foreach $wwid (split /\s+/,$wwids) {
	    if ($wwid=~m/..:..:..:..:..:..:..:../) {
		$alilist{$wwid}=$name;
	    };
	};
    } elsif (m/..:..:..:..:..:..:..:../) {
	# found a wwid.  Add it to the alias list if we are working on a $name
	$wwid=$_;
	if ($name ne "" ) {
		$alilist{$wwid}=$name;
	};
    };

};
};

#
# get list of WWIDs on an NPIV device
sub npivget  {
#
# This assumes that the telnet connection is still active.
($port)=@_;
my $line;
my @wwidlist=();
my (@output) = $telnet->cmd ("portshow $port");
foreach $line (@output) {
    if ($line=~m/^portWwn of device/) {
	# start scanning for WWNs
	$startscan=1;
    } else {
	if ( $startscan == 1) {
	    # process WWNs until we run out of them
	    if ($line=~m/..:..:..:..:..:..:..:../) {

		$line=~s/\s+//g;
		push @wwidlist, ($line);
	    } else {
		$startscan=0;
	    };
	};
    };
};
return @wwidlist;
};

#
# Display the switch status and determine which WWIDs are logged into each port.
sub swget  {
#
# for each port with a connected WWID, record the port number, and print
# the alias for that WWID.  Print a warning if no alias is found for the WWID.
my ($host,@output)=@_;
$name="";
foreach $_  (@output) {
    chomp;
    # ignore switch ports with no module.
    if ( ! m/No POD Lic/ ) {
	@wwidlist=();
	print "$_";
	@line=split /\s+/;
	if ( m/NPIV/ ) {
	    # an NPIV device is on this port.  get the list of WWIDs
	    @wwidlist=npivget($line[1]);
	} else {
		# regular device with one WWID
		if ($line[$#line]=~m/..:..:..:..:..:..:..:../ &&
			$line[0] ne "switchWwn:") {
			@wwidlist=($line[$#line]);
		};
	};
	foreach $wwid (@wwidlist) {
	    $portlist{$wwid}=$host . "-" . $line[1];
	    if ( $alilist{$wwid} eq "" ) {
		print "NO-ALIAS ";
	    } else {
		print "$alilist{$wwid} ";
	    };
	};
	print "\n";
    };
};
};

$user=$opt_u;
$passwd=$opt_p;
$cmd=$opt_c;
if ( $opt_h eq "" ) {
	$opt_h=24;
};
#
# EDIT SWITCH LIST HERE
if ( $opt_s eq "" ) {
	@fablist=("fab-a-sw1,fab-a-sw2,fab-a-sw3", "fab-b-sw1,fab-b-sw2,fab-b-sw3");
} else {
	@fablist=split /\s+/,$opt_s;
};

#
# EDIT USER ID HERE
if ( $user eq "" ) {
	$user="admin";
};
#
# EDIT PASSWORD HERE
if ( $passwd eq "" ) {
	$passwd="myVerySecurePassword";
};
if ( $cmd eq "" ) {
	$cmd="zoneshow";
};
@ltime=localtime (time());
$date=sprintf ("%d/%02d/%02d-%02d:%02d:%02d",
	$ltime[5]+1900,$ltime[4]+1,$ltime[3],$ltime[2],$ltime[1],$ltime[0]);

foreach $fabric  (@fablist) {
    %portlist=();
    @hostlist=split /,/,$fabric;
    foreach $host (@hostlist) {
    $telnet = new Net::Telnet (Timeout=>20,
	Prompt=>'/\> /',
	Errmode=>'return');

    print "\n\n###\n### $host\n###\n";
    if ($telnet->open ($host)) {
	if ($telnet->login ($user,$passwd)) {
		@output= $telnet->cmd ("alishow");
		#
		# analyze the alias info
		aliget ($host,@output);
		@output= $telnet->cmd ("switchshow");
		#
		# parse and print the switchshow output
		swget ($host,@output);
		$telnet->print ('exit');
	} else {
		print "login failed for $host\n";
	};
    } else {
	print "open failed for $host\n";
    };
    $telnet->close();

    };
    #
    # print the alias list
    print "\n";
    foreach $wwid (sort_by_value(*alilist)) {
	printf ("%-24s %s",$alilist{$wwid}, $wwid);
	if ($portlist{$wwid} eq "") {
		print "\tNOT-LOGGED-IN";
	}  else {
		print "\t$portlist{$wwid}"
	};
	print "\n";
    };
};

exit ($rc);

Platforms supported: Perl with Net::Telnet, Brocade switches supporting telnet logins.


sw-savecfg

Perl script to save the configuration of a list of Brocade switches. It uses Net::Telnet to log in to each switch, runs "configshow -all", and saves the output for each switch in a separate file. Assumes a configuration repository named /home/config/MonthYear.
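The MonthYear directory name the script builds with localtime/sprintf in Perl can be reproduced in shell with date(1), e.g. when pre-creating the repository (the /home/config path is the layout the script assumes):

```shell
# Compute this month's snapshot directory the way the Perl below does,
# e.g. /home/config/Dec2011 (%b = abbreviated month name, %Y = year).
dir="/home/config/$(date +%b%Y)"
echo "$dir"
```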

#!/usr/bin/perl
use Net::Telnet;

#
# Use telnet and the "configshow" command to backup brocade switch
# configurations. Telnet to each switch in the list and run configshow
# then write the output to /home/config/MonthYear/swcfg-hostname.txt
#
# You will need to edit this script to your custom switch name list
# user id, and password. Or those can be specified on the command line.
#

# V1.0 Dec 2011	Andy Welter based on sw-runcmd script
#
#

# Net::Telnet example:
# to send commands use
# $telnet->print('somecommand');
# $output=$telnet->waitfor('/\$ /');
# or
# $output=$telnet->cmd ('somecommand');
#
my $usage='sw-savecfg <-u user> <-p password> <-s "switchname1 switchname2 ...">';
use Getopt::Std;
if ( getopts ('s:u:p:') == 0) {
	print "$usage\n";
	exit 1;
};

$user=$opt_u;
$passwd=$opt_p;
if ( $opt_s eq "" ) {
	$opt_s="fabasw1 fabasw2 fabbsw1 fabbsw2";
	chomp $opt_s;
};
@hostlist=split /\s+/,$opt_s;
if ( $user eq "" ) {
	$user="admin";
};
if ( $passwd eq "" ) {
	$passwd="YourReallSecurePassword";
	chomp $passwd;
};

my $cmd="configshow -all";
#
# Construct the directory name for the save location.
my @months=("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec");
my @dtime=localtime(time());
my $dir=sprintf ("/home/config/%s%d",$months[$dtime[4]],$dtime[5]+1900);

foreach $host  (@hostlist) {
    print "\n##########\n## system: $host \ncommand: $cmd\n##########\n";
    $telnet = new Net::Telnet (Timeout=>20,
	Prompt=>'/\> /',
	Errmode=>'return');
    my @output;
    if ($telnet->open ($host)) {
	if ($telnet->login ($user,$passwd)) {
		open (CFG, "> $dir/swcfg-$host.txt") || die "Cannot open $dir/swcfg-$host.txt\n";
		@output=$telnet->cmd ($cmd);
		#
		# remove the last output line from the output.  That will be a command prompt.
		pop @output;
		print CFG @output;
		close CFG;
		$telnet->print ('exit');
	} else {
		print "login failed for $host\n";
	};
    } else {
	print "open failed for $host\n";
    };
    $telnet->close();
    print "\n";
};

Platforms supported: Perl with Net::Telnet, Brocade switches supporting telnet logins.
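Once two monthly snapshots exist, diff(1) makes configuration drift easy to spot. A sketch, with sample files standing in for real saved configs (the paths and switch name are illustrative):

```shell
# Compare a switch's saved config across two months.
mkdir -p /tmp/Nov2011 /tmp/Dec2011
printf 'zone: app1\n' > /tmp/Nov2011/swcfg-fabasw1.txt
printf 'zone: app1\nzone: app2\n' > /tmp/Dec2011/swcfg-fabasw1.txt
# diff exits nonzero when the files differ, so tolerate that in scripts:
diff /tmp/Nov2011/swcfg-fabasw1.txt /tmp/Dec2011/swcfg-fabasw1.txt || true
```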

