ciphermethod.com

Admin Sarcasm On Why The Server Is So Slow

by on Jun.22, 2011, under Jokes, Politics

Per section 6.1, Navigational Deflector, of the starship maintenance manual, I think it's encountering slip stream collisions due to a failure to properly install the secondary deflector. Suggest installing/repairing the secondary deflector to aid stabilization and enhance performance.

“When entering Quantum Slip Stream speeds, the auxiliary deflector system plays a critical role in establishing a stable slip stream conduit through which matter can pass. The ventral deflector contains a series of 4 graviton flux inverters which allow the near-perfect stabilization of a slip stream conduit without the need for perpetual polaric modulation by the main deflector system. The entire deflector network is tied directly into the compute and NAV COM core, which then ties into the slip stream generator and the Flight Control data processors. While slip stream speeds can be attained without the use of the secondary deflector, it severely limits the vessel’s flight time to three minutes every six hours, and greatly increases the risk of slip stream collision.”



Red Hat 5 – hdc: open failed: No medium found

by on Jun.21, 2011, under Red Hat

Error:
Starting monitoring for VG VolGroup00: /dev/hdc: open failed: No medium found

Fix:
[root@rh5-32 cache]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2

Before:
[root@rh5-32 ~]# /etc/init.d/lvm2-monitor restart
Stopping monitoring for VG VolGroup00: /dev/hdc: open failed: No medium found
6 logical volume(s) in volume group "VolGroup00" unmonitored
[ OK ]
Starting monitoring for VG VolGroup00: /dev/hdc: open failed: No medium found
6 logical volume(s) in volume group "VolGroup00" monitored
[ OK ]

The stale /etc/lvm/cache/.cache still lists /dev/hdc:
# This file is automatically maintained by lvm.

persistent_filter_cache {
valid_devices=[
"/dev/VolGroup00/LogVol00_home",
"/dev/mapper/VolGroup00-LogVol00_home",
"/dev/VolGroup00/LogVol00_usr",
"/dev/VolGroup00/LogVol00_var",
"/dev/mapper/VolGroup00-LogVol00_var",
"/dev/mapper/VolGroup00-LogVol00_usr",
"/dev/sda1",
"/dev/mapper/VolGroup00-LogVol00_root",
"/dev/hdc",
"/dev/VolGroup00/LogVol00_root",
"/dev/VolGroup00/LogVol00_tmp",
"/dev/root",
"/dev/mapper/VolGroup00-LogVol00_tmp",
"/dev/sda2",
"/dev/mapper/VolGroup00-LogVol00_swap",
"/dev/VolGroup00/LogVol00_swap"
]
}

After:
[root@rh5-32 cache]# /etc/init.d/lvm2-monitor restart
Stopping monitoring for VG VolGroup00: 6 logical volume(s) in volume group "VolGroup00" unmonitored
[ OK ]
Starting monitoring for VG VolGroup00: 6 logical volume(s) in volume group "VolGroup00" monitored
[ OK ]

New /etc/lvm/cache/.cache:
[root@rh5-32 cache]# cat /etc/lvm/cache/.cache
# This file is automatically maintained by lvm.

persistent_filter_cache {
valid_devices=[
"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0-part1",
"/dev/ram11",
"/dev/VolGroup00/LogVol00_home",
"/dev/mapper/VolGroup00-LogVol00_home",
"/dev/ram10",
"/dev/ram",
"/dev/disk/by-uuid/59b6dbd6-938b-4b6c-9b3d-570403ab5aff",
"/dev/VolGroup00/LogVol00_usr",
"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0-part2",
"/dev/ram12",
"/dev/VolGroup00/LogVol00_var",
"/dev/mapper/VolGroup00-LogVol00_var",
"/dev/mapper/VolGroup00-LogVol00_usr",
"/dev/ram6",
"/dev/ram13",
"/dev/ram14",
"/dev/ram5",
"/dev/ram1",
"/dev/sda1",
"/dev/ramdisk",
"/dev/mapper/VolGroup00-LogVol00_root",
"/dev/VolGroup00/LogVol00_root",
"/dev/ram0",
"/dev/VolGroup00/LogVol00_tmp",
"/dev/root",
"/dev/mapper/VolGroup00-LogVol00_tmp",
"/dev/disk/by-label/boot",
"/dev/ram2",
"/dev/sda2",
"/dev/ram8",
"/dev/ram9",
"/dev/ram15",
"/dev/mapper/VolGroup00-LogVol00_swap",
"/dev/VolGroup00/LogVol00_swap",
"/dev/ram3",
"/dev/ram4",
"/dev/ram7"
]
}
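
vgscan rebuilds LVM's persistent filter cache, but if the phantom device keeps reappearing (an empty CD-ROM drive, for example), LVM can be told to skip it outright with a device filter. A minimal sketch, assuming /dev/hdc is the only device to exclude:

# /etc/lvm/lvm.conf, inside the devices { } section:
# reject /dev/hdc, accept everything else
filter = [ "r|/dev/hdc|", "a|.*|" ]

Then remove the stale cache and rescan:

rm /etc/lvm/cache/.cache
vgscan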


How to disable Spotlight in Snow Leopard

by on Jun.09, 2011, under Snow Leopard

To disable Spotlight for all volumes:
sudo mdutil -a -i off

To enable Spotlight for all volumes:
sudo mdutil -a -i on
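
Both commands act on every indexed volume (-a); mdutil can also target a single volume by path. A sketch, where the volume name is a placeholder:

sudo mdutil -i off /Volumes/ExternalDisk  # disable indexing for one volume only
sudo mdutil -E /                          # erase the current index; it is rebuilt only if indexing is still enabled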


20110511: raidcheck.sh

by on May.11, 2011, under Scripts

#!/bin/bash

# raidcheck.sh - Search for SAS1064 or Adaptec AAC-RAID controller and report failed/degraded drives
#
# 20091222 jah - Created by Jamey Hopkins
# 20110511 jah - Added support for more than 1 drive. Still defaults to controller 1.
#                rm UcliEvt.log
#                Code/error message cleanup/clarification.

process_mpt-status() {
   # pull the controller ID from the probe output, then count drives vs. ONLINE drives
   ID=`grep Found status.0 | cut -f2 -d= | cut -f1 -d,`
   /usr/sbin/mpt-status -i $ID -s >status.1
   C1=`grep phys_id status.1 | wc -l`
   C2=`grep phys_id status.1 | grep ONLINE | wc -l`
   [ "$C1" = "0" ] && echo "No Drives Found"
   [ "$C1" = "$C2" ] && echo "$C2 of $C1 Drives Are Online"
   #echo "Controller ID=$ID"

   if [ $C2 -lt $C1 ]
   then
      echo "ERROR: Failed SAS Drive Found"
      echo "$C2 of $C1 Drives Are ONLINE"
      echo
      exit 2
   fi
}

AACRAID="0";SAS1064="0"

# search for SAS1064 controller
DATA=`/sbin/lspci 2>/dev/null | grep SAS1064`
if [ "$DATA" != "" ]
then
   #echo Process SAS
   SAS1064="1"
   # check if mptctl module is loaded
   MPT=`/sbin/lsmod | grep mptctl`
   [ ! -n "$MPT" ] && echo "mptctl module not loaded"
   /usr/sbin/mpt-status -p >status.0 2>&1
   grep "not found" status.0 >/dev/null
   if [ "$?" = "0" -a ! -n "$MPT" ]
   then
      echo "mpt-status not found in /usr/sbin"
   else
      process_mpt-status
   fi
fi

# search for Adaptec AAC-RAID controller
DATA=`/sbin/lspci 2>/dev/null | grep AAC-RAID`
if [ "$DATA" != "" ]
then
   #echo Process AAC-RAID
   AACRAID="1"
   STATE=`/usr/StorMan/arcconf getconfig 1 | grep "Logical devices/Failed/Degraded" | cut -f2 -d: | xargs echo`
   #echo state is -${STATE}-
   #STATE="1/0/1"  # Set STATE for Testing
   STATE2=`echo $STATE | cut -f2 -d'/'`
   STATE3=`echo $STATE | cut -f3 -d'/'`
   if [ "$STATE2" != "0" -o "$STATE3" != "0" ]
   then
      echo "ERROR: AAC-RAID Error - Devices/Failed/Degraded $STATE"
      echo
      exit 2
   else
      echo "AAC-RAID: No Failed or Degraded Drives Found."
   fi
fi

if [ $SAS1064 = 0 -a $AACRAID = 0 ]
then
   echo "No supported controllers found."
fi

rm status.0 status.1 >/dev/null 2>&1
rm UcliEvt.log >/dev/null 2>&1

exit 0
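
Because the script exits non-zero when a controller reports trouble, it drops into cron with a small wrapper. A sketch, where the install path and recipient are assumptions:

#!/bin/bash
# raidmail.sh - hypothetical cron wrapper: mail the report only when raidcheck.sh exits non-zero
cd /tmp                           # raidcheck.sh writes its scratch files to the current directory
OUT=`/usr/local/bin/raidcheck.sh`
if [ $? -ne 0 ]
then
   echo "$OUT" | mail -s "RAID alert on `hostname`" root
fi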


Move LVM Volume Group To New Larger Drive

by on May.09, 2011, under Linux

Shut down the server and add the new drive to the system; then, after power on:

fdisk /dev/sdb # create partition, set partition type to 8e (Linux LVM)
pvcreate /dev/sdb1
vgextend Volume00 /dev/sdb1
pvmove -n /dev/mapper/Volume00-external /dev/sda2 /dev/sdb1
lvextend -L +47G /dev/mapper/Volume00-external
ext2online /dev/mapper/Volume00-external

Key: sdb = new drive, Volume00 = volume group, Volume00-external = logical volume
Supporting commands: lvdisplay, vgdisplay, sfdisk -s, df
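
If the old drive is being retired, the volume group still references it after the pvmove above; a follow-up sketch, assuming /dev/sda2 holds no other extents once the move completes:

pvmove /dev/sda2              # evacuate any remaining extents from the old PV
vgreduce Volume00 /dev/sda2   # drop the old PV from the volume group
pvremove /dev/sda2            # clear the LVM label so the disk can be pulled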


