Showing posts with label AIX. Show all posts

Tuesday, February 11, 2014

Handy Linux and AIX system information




Number of Processors
  Linux (physical CPU sockets):
    # cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
  Linux (cores):
    # cat /proc/cpuinfo | egrep "core id|physical id" | tr -d "\n" | sed s/physical/\\nphysical/g | grep -v ^$ | sort | uniq | wc -l
  Linux (hardware threads):
    # cat /proc/cpuinfo | grep processor | wc -l
  AIX:
    # prtconf | grep "Number of Processors"

Processor Type
  Linux:
    # cat /proc/cpuinfo | grep "model name" | uniq
  AIX:
    # prtconf | grep "Processor Type"

Processor Clock Speed
  Linux (the model name line includes the clock speed):
    # cat /proc/cpuinfo | grep "model name" | uniq
  AIX:
    # prtconf | grep "Processor Clock Speed"

Disk Information
  Linux:
    # fdisk -l
  AIX:
    # lspv

Network Card
  Linux:
    # lspci | grep Ethernet
  AIX:
    # lsdev -Cc if
    # lsdev -Cc adapter
    # entstat -d en0

Memory
  Linux:
    # cat /proc/meminfo | grep MemTotal
  AIX:
    # prtconf | grep "Memory Size"

IP Address
  Linux:
    # hostname -i
  AIX:
    # prtconf | grep "IP Address"

HBA
  Linux:
    # lspci | grep HBA
  AIX:
    # lsdev -Cc adapter | grep "FC Adapter"
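The Linux counting pipelines above can be sanity-checked against a canned /proc/cpuinfo fragment. A small sketch with fabricated values (two sockets, two hardware threads each):

```shell
# Fabricated cpuinfo fragment: 2 sockets x 2 threads
# -> 4 "processor" entries, 2 distinct "physical id" values.
cpuinfo='processor	: 0
physical id	: 0
processor	: 1
physical id	: 0
processor	: 2
physical id	: 1
processor	: 3
physical id	: 1'
echo "$cpuinfo" | grep "physical id" | sort | uniq | wc -l   # sockets: 2
echo "$cpuinfo" | grep processor | wc -l                     # threads: 4
```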

A simple Perl script that gives system and OS information on a Linux box

#!/usr/bin/perl -w

#
# A simple Perl script that gives system information like Linux kernel version
# CPU, Memory, IP address, and HBA
#

#
# Function that returns Linux kernel information
#

sub os {
    open (MYOS, "uname -a|");

    while (<MYOS>) {
        $my_os = $_;
        chomp($my_os);
        @kernel_version = split(/\s+/, $my_os);
        print "OS Version: $kernel_version[0] $kernel_version[2]\n";
    }
    close (MYOS);
}

#
# Function that returns the RedHat release information
#

sub rhat {
    open (MYRH, "cat /etc/redhat-release|");

        while (<MYRH>) {
            $my_rh = $_;
            chomp($my_rh);
            print "RedHat Release Version: $my_rh\n\n";
        }
        close (MYRH);
}

#
# Function that returns number of CPU sockets on the machine
#

sub numcpu {
    open (MYPROC, "cat /proc/cpuinfo  | grep \"physical id\" | sort | uniq | wc -l |");

    while (<MYPROC>) {
        $num_cpu = $_;
        chomp($num_cpu);
        print "Number of CPU/Sockets: $num_cpu\n\n";
    }
    close (MYPROC);
}

#
# Function that returns the number of hardware threads in the processor
#

sub numhwthreads {
    open (MYHWTHREADS, "cat /proc/cpuinfo | grep processor | wc -l |");

    while (<MYHWTHREADS>) {
        $num_threads = $_;
        chomp($num_threads);
        print "Number of Hardware threads: $num_threads\n\n";
    }
    close (MYHWTHREADS);
}

#
# Function that returns the processor type.
#

sub proctype {
    open (MYPROCTYPE, "cat /proc/cpuinfo | grep \"model name\" | uniq |");

    while (<MYPROCTYPE>) {
        $proc_type = $_;
        chomp($proc_type);
        @proc_info = split(/:/,$proc_type);
        print "Processor Type: $proc_info[1]\n\n";
    }
    close (MYPROCTYPE);
}

#
# Function that returns the Memory information
#

sub memory {
    open (MYMEMORY, "cat /proc/meminfo | grep MemTotal |");

    while (<MYMEMORY>) {
        $my_memory = $_;
        chomp($my_memory);
        @memory_kb = split(/\s+/,$my_memory);
        $memory_gb = $memory_kb[1]/1048576;
        print "Memory: $memory_gb GB\n\n";
    }
    close (MYMEMORY);
}

#
# Function that returns the IP address of the host
#

sub ipaddress {
    open (MYIP, "hostname -i |");

    while (<MYIP>) {
        $my_ip = $_;
        chomp($my_ip);
        print "IP address: $my_ip\n\n";
    }
    close (MYIP);
}

#
# Function that returns the HBA information on the host
#

sub hba {
    open (MYHBA, "lspci | grep HBA|");

    while (<MYHBA>) {
        $my_hba = $_;
        chomp($my_hba);
        print "HBA: $my_hba\n";
    }
    close (MYHBA);
}

os();
rhat();
numcpu();
numhwthreads();
proctype();
memory();
ipaddress();
hba();
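For the AIX column of the table, a hedged one-screen counterpart to the script above, built from the prtconf/lsdev commands already shown. The filter is written as a function that reads its input from a pipe, so its pattern can be exercised anywhere; the label strings should be verified against your AIX level:

```shell
# Sketch of an AIX counterpart: the interesting prtconf lines are
# picked out with one egrep-style filter.
aix_info() {
    grep -E "Number of Processors|Processor Type|Processor Clock Speed|Memory Size|IP Address"
}
# On an AIX host:
#   prtconf | aix_info
#   lsdev -Cc adapter | grep "FC Adapter"
# Demo of the filter on a canned line:
echo "Processor Clock Speed: 3000 MHz" | aix_info
```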

Tuesday, January 28, 2014

VNC for AIX

I wanted to run VNC on my AIX 6.1.9 machine. I downloaded vnc-3.3.3r2-3.aix5.1.ppc.rpm from
http://www-03.ibm.com/systems/power/software/aix/linux/toolbox/alpha.html

After downloading the vnc rpm, I installed it using

# rpm -i vnc-3.3.3r2-3.aix5.1.ppc.rpm

After the rpm was installed, the vncserver, vncpasswd, vncconnect, and vncviewer could be found under /usr/bin/X11

# cd /usr/bin/X11
#  ls vnc*
vncconnect  vncpasswd   vncserver   vncviewer

You will see the vncserver as shown above, but when you run vncserver it fails, with the following error messages in its log:

# cat /home/root/.vnc/triton10:1.log
29/01/14 13:27:01 Xvnc version 3.3.3r2
29/01/14 13:27:01 Copyright (C) AT&T Laboratories Cambridge.
29/01/14 13:27:01 All Rights Reserved.
29/01/14 13:27:01 See http://www.uk.research.att.com/vnc for information on VNC
29/01/14 13:27:01 Desktop name 'X' (triton10.storage.tucson.ibm.com:1)
29/01/14 13:27:01 Protocol version supported 3.3
29/01/14 13:27:01 Listening for VNC connections on TCP port 5901
29/01/14 13:27:01 Listening for HTTP connections on TCP port 5801
29/01/14 13:27:01   URL http://triton10.storage.tucson.ibm.com:5801
Font directory '/usr/lib/X11/fonts/Speedo/' not found - ignoring

Fatal server error:
could not open default font 'fixed'
xrdb: A remote host refused an attempted connect operation.
1356-605 xrdb: Can't open display 'triton10:1'
1356-265 xsetroot:  Unable to open display:  triton10:1.
mwm: 2053-015 Could not open display.
Warning: This program is an suid-root program or is being run by the root user.
The full text of the error or warning message cannot be safely formatted
in this environment. You may get a more descriptive message by running the
program as a non-root user or by removing the suid bit on the executable.
xterm Xt error: Can't open display: %s


Below is the xstartup file that I used. This file is under the .vnc directory

#cat xstartup
#!/bin/sh

xrdb $HOME/.Xresources
xsetroot -solid grey
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
#twm &
mwm &
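If you create xstartup by hand, it also has to be executable; an assumed but easy-to-miss step:

```shell
# xstartup must be executable or the session will come up empty.
chmod +x $HOME/.vnc/xstartup
```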

Below is a diff between a fixed, working vncserver script and the vncserver that the rpm installed under /usr/bin/X11.
#diff  vncserver /usr/bin/X11/vncserver
38,39c38
< #$vncClasses = "/usr/local/vnc/classes";
< $vncClasses = "/opt/freeware/vnc/classes";
---
> $vncClasses = "/usr/local/vnc/classes";
140,141c139
< #$cmd .= " -auth $xauthorityFile";
< $cmd .= " -ac";
---
> $cmd .= " -auth $xauthorityFile";
152d149
< $cmd .= " -fp /usr/lib/X11/fonts/,/usr/lib/X11/fonts/misc/,/usr/lib/X11/fonts/75dpi/";

Next we start VNC using the newly updated vncserver script.
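A typical first start looks like this (a sketch; display numbers and paths may differ on your system):

```shell
# Set the VNC password once, then start a server on display :1.
# A viewer then connects to host:1 (TCP port 5901).
/usr/bin/X11/vncpasswd
./vncserver :1
```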




Monday, January 14, 2013

Mapping Storage Array (SAN) Volumes to an AIX host

I have an older post where I talk about mapping SAN volumes to a Linux host and how to identify the volumes that were created on the array.

In this post I will show how to identify volumes created on a Storwize V7000 Storage Array on the AIX host.


From the Storwize V7000 GUI we can see that I created one volume: test_UID, with UID 60050768027F0001F00000000000016C. The volume was then mapped to an AIX host.

On the AIX host, run the lspv command; this will list the volumes currently available on the host. If you don't see the newly created volume, run the cfgmgr command, then run lspv again and you will see the new volume that was created and mapped to the host.

 isvp14_ora> lspv
hdisk0          00f62a6b98742dad                    rootvg          active
hdisk59         00f62a6bf28975ed                    swapa           active
hdisk230        none                                None

Running the lsattr command on the AIX host, as shown below, displays detailed information about the LUN, including the unique_id       3321360050768027F0001F00000000000016C04214503IBMfcp

From the highlighted part of the unique_id field, 3321360050768027F0001F00000000000016C04214503IBMfcp, we can tell that this LUN maps to the newly created volume on the storage array.

isvp14_ora> lsattr -El hdisk229
PCM             PCM/friend/fcpother                                 Path Control Module              False
algorithm       fail_over                                           Algorithm                        True
clr_q           no                                                  Device CLEARS its Queue on error True
dist_err_pcnt   0                                                   Distributed Error Percentage     True
dist_tw_width   50                                                  Distributed Error Sample Time    True
hcheck_cmd      test_unit_rdy                                       Health Check Command             True
hcheck_interval 60                                                  Health Check Interval            True
hcheck_mode     nonactive                                           Health Check Mode                True
location                                                            Location Label                   True
lun_id          0x14000000000000                                    Logical Unit Number ID           False
lun_reset_spt   yes                                                 LUN Reset Supported              True
max_retry_delay 60                                                  Maximum Quiesce Time             True
max_transfer    0x40000                                             Maximum TRANSFER Size            True
node_name       0x50050768020000d2                                  FC Node Name                     False
pvid            none                                                Physical volume identifier       False
q_err           yes                                                 Use QERR bit                     True
q_type          simple                                              Queuing TYPE                     True
queue_depth     8                                                   Queue DEPTH                      True
reassign_to     120                                                 REASSIGN time out value          True
reserve_policy  single_path                                         Reserve Policy                   True
rw_timeout      30                                                  READ/WRITE time out value        True
scsi_id         0x10900                                             SCSI ID                          False
start_timeout   60                                                  START unit time out value        True
unique_id       3321360050768027F0001F00000000000016C04214503IBMfcp Unique device identifier         False
ww_name         0x50050768022000d2                                  FC World Wide Name               False

We can use the script below to show the UIDs of all the LUNs on the AIX host.

#!/usr/bin/ksh

for disk in $(lsdev -Cc disk | awk '{print $1}')
do
    echo "$disk:"
    lsattr -EHl "$disk" -a unique_id
    echo ---------------------------------------
done
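Since the array UID is embedded in the unique_id string, it can also be cut out and compared against the storage GUI directly. A hedged sketch: the 5-character prefix before the 32-hex-digit UID is taken from the example above, so verify the offset on your own system:

```shell
# Extract the 32-hex-character volume UID from an lsattr unique_id
# value.  Offsets match the example string above (5-char prefix).
uid="3321360050768027F0001F00000000000016C04214503IBMfcp"
echo "$uid" | cut -c6-37    # prints 60050768027F0001F00000000000016C
```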






Monday, September 10, 2012

Accessing GUI running on AIX using putty



I usually use PuTTY to access my remote AIX server; I then start the VNC server on it and connect from my Windows XP laptop with a VNC client to access any GUI, like the Oracle runInstaller.

There are times when I have to connect to a remote AIX machine that does not have a VNC server on it, so in those cases I access the GUI through PuTTY using SSH X11 forwarding.

The prerequisite for this is to have SSH and SSL installed and running on the AIX server. OpenSSH and OpenSSL can be installed on an AIX 7.1 host following the steps in my earlier post http://shettymayur.blogspot.com/2012/09/open-secure-shellssh-and-open-secure.html

With the help of the instructions at the following links, http://mynotes.wordpress.com/2009/12/11/setting-x-display-using-putty-and-xming/ and http://tacomadata.com/node/15, I was able to access the GUI (xclock, runInstaller, etc.)

Configuring PuTTY:
Using putty I created an SSH session to access the AIX server.

Configuring Xming:
Install Xming from http://sourceforge.net/projects/xming/
On starting Xming, you should see its icon in the bottom panel (system tray) of your Windows XP machine.

Configure sshd on AIX:
Edit /etc/ssh/sshd_config, then uncomment or add the following:

X11Forwarding yes
X11UseLocalhost yes
XauthLocation /usr/bin/X11/xauth
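After saving these changes, sshd must reread its configuration. On AIX the OpenSSH server typically runs under SRC control (the subsystem name sshd is assumed; check with lssrc):

```shell
# Restart the sshd subsystem so the X11 forwarding settings take effect.
stopsrc -s sshd
startsrc -s sshd
```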

Connect to the AIX host test.ibm.com using the ssh session that was created earlier, and call xclock to test. Xclock should be displayed on the Windows machine.

test> echo $DISPLAY
localhost:10.0
test> xclock

Saturday, September 8, 2012

Open Secure Shell(SSH) and Open Secure Socket Layer(SSL) on AIX 7.1

I installed Open Secure Shell (SSH) and Open Secure Socket Layer (SSL) on my AIX 7.1 host, but when I ran sshd -V I got the below error message.

isvp14_ora> /usr/sbin/sshd -V
OpenSSL version mismatch. Built against 908070, you have 90812f

Here we see the list of SSH and SSL software that is currently installed on the AIX host.

isvp14_ora> lslpp -l | grep ssh
  openssh.base.server     4.7.0.5301  COMMITTED  Open Secure Shell Server
isvp14_ora> lslpp -l | grep openssh
  openssh.base.server     4.7.0.5301  COMMITTED  Open Secure Shell Server
isvp14_ora> lslpp -l | grep ssl
  openssl.base            0.9.8.1802  COMMITTED  Open Secure Socket Layer
  openssl.license         0.9.8.1802  COMMITTED  Open Secure Socket License
  openssl.man.en_US       0.9.8.1802  COMMITTED  Open Secure Socket Layer
  openssl.base            0.9.8.1802  COMMITTED  Open Secure Socket Layer
isvp14_ora>

I deleted the SSH software and started afresh with compatible versions, installing from OpenSSH_5.8.0.6102.tar, which I got from
http://www-03.ibm.com/systems/power/software/aix/expansionpack/index.html

On untarring OpenSSH_5.8.0.6102.tar I saw the following files in the directory.

isvp14_ora> ls
.toc                    openssh.msg.Ja_JP       openssh.msg.es_ES
OpenSSH_5.8.0.6102.tar  openssh.msg.KO_KR       openssh.msg.fr_FR
openssh.base            openssh.msg.PL_PL       openssh.msg.hu_HU
openssh.license         openssh.msg.PT_BR       openssh.msg.it_IT
openssh.man.en_US       openssh.msg.RU_RU       openssh.msg.ja_JP
openssh.msg.CA_ES       openssh.msg.SK_SK       openssh.msg.ko_KR
openssh.msg.CS_CZ       openssh.msg.ZH_CN       openssh.msg.pl_PL
openssh.msg.DE_DE       openssh.msg.ZH_TW       openssh.msg.pt_BR
openssh.msg.EN_US       openssh.msg.Zh_CN       openssh.msg.ru_RU
openssh.msg.ES_ES       openssh.msg.Zh_TW       openssh.msg.sk_SK
openssh.msg.FR_FR       openssh.msg.ca_ES       openssh.msg.zh_CN
openssh.msg.HU_HU       openssh.msg.cs_CZ       openssh.msg.zh_TW
openssh.msg.IT_IT       openssh.msg.de_DE       openssl-0.9.8.1802
openssh.msg.JA_JP       openssh.msg.en_US       openssl-0.9.8.1802.tar

I then installed openssh.base using smitty; here is the new version of SSH on the AIX machine.
NOTE: Remember to accept the license agreement while installing using smitty.

isvp14_ora> lslpp -l | grep ssh
  openssh.base.client     5.8.0.6102  COMMITTED  Open Secure Shell Commands
  openssh.base.server     5.8.0.6102  COMMITTED  Open Secure Shell Server
  openssh.license         5.8.0.6102  COMMITTED  Open Secure Shell License
  openssh.man.en_US       5.8.0.6102  COMMITTED  Open Secure Shell
  openssh.msg.en_US       5.8.0.6102  COMMITTED  Open Secure Shell Messages -
  openssh.base.client     5.8.0.6102  COMMITTED  Open Secure Shell Commands
  openssh.base.server     5.8.0.6102  COMMITTED  Open Secure Shell Server
isvp14_ora>

isvp14_ora> /usr/sbin/sshd -V
sshd: illegal option -- V
OpenSSH_5.8p1, OpenSSL 0.9.8r 8 Feb 2011
usage: sshd [-46Ddeiqt] [-b bits] [-f config_file] [-g login_grace_time]
                           [-h host_key_file] [-k key_gen_time] [-o option] [-p port] [-u len]
isvp14_ora>  
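If you prefer scripting the install over the smitty panels, the same filesets can be installed non-interactively with installp. A hedged sketch; the directory path is hypothetical and the flags should be checked against your AIX level:

```shell
# Non-interactive alternative to smitty, run from the directory that
# holds the untarred filesets (the one containing the .toc).
# -a apply, -Y accept licenses, -g pull prerequisites, -X expand
# filesystems if needed, -d install source.
cd /path/to/openssh_filesets   # hypothetical path
inutoc .                       # regenerate the .toc if needed
installp -aYgXd . openssh.base openssh.license
```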


To configure passwordless SSH on Oracle RAC nodes, here is a link to the Oracle docs that talk about it.
http://docs.oracle.com/cd/E11882_01/install.112/e24614/manpreins.htm

Wednesday, March 28, 2012

Increasing the jfs2 file system size on AIX beyond 64GB

In my earlier post I talk about increasing the size of a file system on AIX:

http://shettymayur.blogspot.com/2011/01/how-to-increase-file-system-size-on-aix.html

In this post I created a 100GB SAN volume, which I mapped to my AIX 7.1 host.

Using smitty I created a 64GB jfs2 file system on the volume.


/dev/fslv02       64.00     52.61   18%     5459     1% /bigfs
isvp14_ora> lslv fslv02
LOGICAL VOLUME:     fslv02                 VOLUME GROUP:   bigfs
LV IDENTIFIER:      00f62a6b00004c0000000136568e8fcf.2 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            512                    PP SIZE:        128 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                512                    PPs:            512
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    32
MOUNT POINT:        /bigfs                 LABEL:          /bigfs
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
isvp14_ora>

The volume that was mapped to the host was 100GB, but when I tried to increase the file system size beyond 64GB I got the below message.

isvp14_ora> chfs -a size=90G /bigfs
0516-787 extendlv: Maximum allocation for logical volume fslv02 is 512.

We see that it's complaining that the maximum allocation for the logical volume is 512 LPs. Each LP is 128 MB, so 512 x 128 MB = 64 GB.

Below is the command to increase the maximum LPs to 1024; this will give us a maximum file system size of 128 GB.


isvp14_ora> chlv -x'1024' fslv02
isvp14_ora> lslv fslv02
LOGICAL VOLUME:     fslv02                 VOLUME GROUP:   bigfs
LV IDENTIFIER:      00f62a6b00004c0000000136568e8fcf.2 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            1024                   PP SIZE:        128 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                512                    PPs:            512
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    32
MOUNT POINT:        /bigfs                 LABEL:          /bigfs
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO

isvp14_ora> chfs -a size=90G /bigfs
Filesystem size changed to 188743680
isvp14_ora> df -g
/dev/fslv02       90.00     78.60   13%     5459     1% /bigfs
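The MAX LPs value needed for a given target size is simple arithmetic on the PP size (both numbers come from the lslv output above); a quick sketch:

```shell
# MAX LPs = target size (MB) / PP size (MB).
# A 128 GB ceiling with 128 MB PPs therefore needs 1024 LPs.
target_gb=128
pp_mb=128
echo $(( target_gb * 1024 / pp_mb ))    # prints 1024
```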

Monday, October 10, 2011

Increasing the Maximum number of PROCESSES allowed per user in AIX 7.1

One thing to remember while running Oracle benchmark clients is to increase the maximum number of processes that the user (oracle in this case) can run on the database server (AIX) side. You can use "ulimit -a" to check the current "max user processes".

To increase the "max user processes" on an AIX server, run smitty -> select System Environments -> select  Change / Show Characteristics of Operating System -> Maximum number of PROCESSES allowed per user       [600]  ->  press Enter
 



isvp14_ora> smitty
Log in as user oracle and run "ulimit -a". You will see that the maximum user processes has increased to 600.
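The same limit can also be raised from the command line, without going through the smitty panels (a hedged alternative; verify the attribute on your AIX level):

```shell
# maxuproc is a sys0 attribute; chdev changes it immediately and the
# value persists across reboots.
chdev -l sys0 -a maxuproc=600
lsattr -El sys0 -a maxuproc    # confirm the new value
```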

Friday, May 13, 2011

mount: 0506-324 Cannot mount /dev/oradata_lv on /oradata: There is a request to a device or address that does not exist.

I have a two-node cluster sharing a Storwize V7000 between them. Volume groups "oradata_vg" and "oralogs_vg" were created on the shared LUNs.

I then mounted file systems /oradata and /oralogs on the volume groups. I was also able to manually switch the file systems over between the two nodes of the cluster.

But now when I tried to mount /oradata on the first node I got the "mount: 0506-324" error message.


isvp14_ora> mount /oradata
mount: 0506-324 Cannot mount /dev/oradata_lv on /oradata: There is a request to a device or address that does not exist.

On doing an lspv on the node I noticed that the volume group was not active. I did a varyonvg on that volume group, but it failed.

isvp14_ora> lspv
hdisk0          00f62a6c98758859                    rootvg          active
hdisk1          00f62a6ca2279d3e                    swap            active
hdisk2          00f62a6cdf8874f5                    None
hdisk3          00f62a6cdf8875ff                    None
hdisk4          00f62a6ce57d00d4                    oradata_vg
hdisk5          00f62a6bdf88762c                    None
hdisk6          00f62a6bdf887797                    oralogs_vg
hdisk7          00f62a6bdf887802                    None

I noticed that the volume groups were active on the second node, so I did a varyoffvg of the volume groups there. I then did a varyonvg of the volume groups on the first node and was able to mount the file systems.


isvp15_ora> lspv
hdisk0          00f62a6c98758859                    rootvg          active
hdisk1          00f62a6ca2279d3e                    swap            active
hdisk2          00f62a6cdf8874f5                    None
hdisk3          00f62a6cdf8875ff                    None
hdisk4          00f62a6ce57d00d4                    oradata_vg      active
hdisk5          00f62a6bdf88762c                    None
hdisk6          00f62a6bdf887797                    oralogs_vg      active
hdisk7          00f62a6bdf887802                    None
isvp15_ora> varyoffvg oradata_vg
isvp15_ora> varyoffvg oralogs_vg

isvp14_ora> varyonvg oradata_vg
isvp14_ora> varyonvg oralogs_vg
isvp14_ora> mount /oradata

Thursday, April 28, 2011

Got wget, bash, unzip for AIX 7.1?

Well, I was looking around for all these utilities for a while until I came across this page:
ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/README.txt


      isvp14_ora> ftp ftp.software.ibm.com
      Name> ftp
      Password> your e-mail address
          ftp> cd aix/freeSoftware/aixtoolbox/RPMS/ppc/wget
          ftp> binary
          ftp> get wget-1.9.1-1.aix5.1.ppc.rpm
          ftp> quit
      isvp14_ora> rpm -hUv wget-1.9.1-1.aix5.1.ppc.rpm
      isvp14_ora> wget -r -nd ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc

You will now have the following files in the directory that you created:


isvp14_ora> ls
getapp-dev.sh       getgnome.base.sh    getkde3.all.sh
Xsession.kde        getbase.sh          getkde2.all.sh      getkde3.base.sh
Xsession.kde2       getdesktop.base.sh  getkde2.base.sh     getkde3.opt.sh
getgnome.apps.sh    getkde2.opt.sh     

isvp14_ora> chmod +x get*.sh

Run the script getbase.sh; this will create a directory called base and ftp the rpms into it.


isvp14_ora>
isvp14_ora> cd base
isvp14_ora> ls
bash-3.2-1.aix5.2.ppc.rpm          rpm-3.0.5-52.aix5.3.ppc.rpm
bzip2-1.0.5-3.aix5.3.ppc.rpm       rpm-build-3.0.5-52.aix5.3.ppc.rpm
gettext-0.10.40-8.aix5.2.ppc.rpm   rpm-devel-3.0.5-52.aix5.3.ppc.rpm
gzip-1.2.4a-10.aix5.2.ppc.rpm      tar-1.14-2.aix5.1.ppc.rpm
info-4.6-1.aix5.1.ppc.rpm          unzip-5.51-1.aix5.1.ppc.rpm
patch-2.5.4-4.aix4.3.ppc.rpm      
popt-1.7-2.aix5.1.ppc.rpm

isvp14_ora>

Install the rpms that you need:

isvp14_ora> rpm -hUv unzip-5.51-1.aix5.1.ppc.rpm
isvp14_ora> rpm -hUv zip-2.3-3.aix4.3.ppc.rpm
isvp14_ora> rpm -hUv bash-3.2-1.aix5.2.ppc.rpm

There we go, we now have bash on AIX 7.1
isvp14_ora> bash
bash-3.2#

Monday, February 21, 2011

How to find the Network Interface Card (Network Adapter) information on AIX 7.1

isvp17> lsattr -El ent0
alt_addr      0x000000000000   Alternate Ethernet address                True
flow_ctrl     no               Request Transmit and Receive Flow Control True
jumbo_frames  no               Request Transmit and Receive Jumbo Frames True
large_receive yes              Enable receive TCP segment aggregation    True
large_send    yes              Enable hardware Transmit TCP segmentation True
media_speed   Auto_Negotiation Requested media speed                     True
multicore     yes              Enable Multi-Core Scaling                 True
rx_cksum      yes              Enable hardware Receive checksum          True
rx_cksum_errd yes              Discard RX packets with checksum errors   True
rx_clsc       1G               Enable Receive interrupt coalescing       True
rx_clsc_usec  95               Receive interrupt coalescing window       True
rx_coalesce   16               Receive packet coalescing                 True
rx_q1_num     8192             Number of Receive queue 1 WQEs            True
rx_q2_num     4096             Number of Receive queue 2 WQEs            True
rx_q3_num     2048             Number of Receive queue 3 WQEs            True
tx_cksum      yes              Enable hardware Transmit checksum         True
tx_isb        yes              Use Transmit Interface Specific Buffers   True
tx_q_num      512              Number of Transmit WQEs                   True
tx_que_sz     8192             Software transmit queue size              True
use_alt_addr  no               Enable alternate Ethernet address         True

isvp17> lsdev |grep -i ethernet
en0        Available          Standard Ethernet Network Interface
en1        Defined            Standard Ethernet Network Interface
ent0       Available          Logical Host Ethernet Port (lp-hea)
ent1       Available          Virtual I/O Ethernet Adapter (l-lan)
et0        Defined            IEEE 802.3 Ethernet Network Interface
et1        Defined            IEEE 802.3 Ethernet Network Interface
lhea0      Available          Logical Host Ethernet Adapter (l-hea)
isvp17>

How to find the HBA information on an AIX 7.1 server

Run the following command on the host system. The highlighted area tells us that we have a Virtual Fibre Channel Adapter, which means we are a VIOS client and the adapter is a virtual one provided by the VIOS server.


isvp17> lscfg -vpl fcs0
  fcs0             U8233.E8B.065D51P-V1-C36-T1  Virtual Fibre Channel Client Adapter

        Network Address.............C05076037CF00000
        ROS Level and ID............
        Device Specific.(Z0)........
        Device Specific.(Z1)........
        Device Specific.(Z2)........
        Device Specific.(Z3)........
        Device Specific.(Z4)........
        Device Specific.(Z5)........
        Device Specific.(Z6)........
        Device Specific.(Z7)........
        Device Specific.(Z8)........C05076037CF00000
        Device Specific.(Z9)........
        Hardware Location Code......U8233.E8B.065D51P-V1-C36-T1


  PLATFORM SPECIFIC

  Name:  vfc-client
    Node:  vfc-client@30000024
    Device Type:  fcp
    Physical Location: U8233.E8B.065D51P-V1-C36-T1

Now, let us log onto the VIOS server itself and see what kind of HBA it is using to connect to the storage system. This tells us that the adapter is a dual-port 8Gb adapter.
telnet (vios server name)

IBM Virtual I/O Server

login: padmin
padmin's Password:
Last unsuccessful login: Mon Jan 24 09:13:56 MST 2011 on /dev/pts/0 from sig-9-65-56-25.mts.ibm.com
Last login: Sun Feb 20 14:34:26 MST 2011 on /dev/pts/0 from 9.57.85.97

$ oem_setup_env
# lscfg -vpl fcs0
  fcs0             U78A0.001.DNWK129-P1-C1-T1  8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)

        Part Number.................10N9824
        Serial Number...............1B046059F8
        Manufacturer................001B
        EC Level....................D77040
        Customer Card ID Number.....577D
        FRU Number..................10N9824
        Device Specific.(ZM)........3
        Network Address.............10000000C9AA5388
        ROS Level and ID............02781174
        Device Specific.(Z0)........31004549
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........09030909
        Device Specific.(Z4)........FF781116
        Device Specific.(Z5)........02781174
        Device Specific.(Z6)........07731174
        Device Specific.(Z7)........0B7C1174
        Device Specific.(Z8)........20000000C9AA5388
        Device Specific.(Z9)........US1.11X4
        Device Specific.(ZA)........U2D1.11X4
        Device Specific.(ZB)........U3K1.11X4
        Device Specific.(ZC)........000000EF
        Hardware Location Code......U78A0.001.DNWK129-P1-C1-T1


  PLATFORM SPECIFIC

  Name:  fibre-channel
    Model:  10N9824
    Node:  fibre-channel@0
    Device Type:  fcp
    Physical Location: U78A0.001.DNWK129-P1-C1-T1

Update: I also found a great tool called "hbainfo" at http://www.tablespace.net/hbainfo/ I tested it on 6.1 as well as 7.1, and it works great on both. The output below is from 6.1, though.



# uname -a
AIX sonasisvp1_ora1 1 6 00F61CE14C00
# ./hbainfo
Total Adapters:                 2
This Adapter Index:             0
Adapter Name:                   com.ibm-df1000f114108a0-1
Manufacturer:                   IBM
SerialNumber:                   1C00908221
Model:                          df1000f114108a0
Model Description:              FC Adapter
HBA WWN:                        20000000C99D9E4C
Node Symbolic Name:
Hardware Version:
Driver Version:                 6.1.4.6
Option ROM Version:             02781174
Firmware Version:               111304
Vendor Specific ID:             0
Number Of Ports:                1
Driver Name:                    /usr/lib/drivers/pci/efcdd
Port Index:                     0
Node WWN:                       20000000C99D9E4C
Port WWN:                       10000000C99D9E4C
Port Fc Id:                     330496
Port Type:                      Fabric
Port State:                     Operational
Port Symbolic Name:
OS Device Name:                 fcs0
Port Supported Speed:           Unknown - transceiver incable of reporting
Port Speed:                     Unknown - transceiver incable of reporting
Port Max Frame Size:            2112
Fabric Name:                    10000005336C6CF9
Number of Discovered Ports:     2
Seconds Since Last Reset:       0
Tx Frames:                      159
Tx Words:                       9
Rx Frames:                      313
Rx Words:                       20
LIP Count:                      0
NOS Count:                      0
Error Frames:                   0
Dumped Frames:                  0
Link Failure Count:             0
Loss of Sync Count:             5
Loss of Signal Count:           0
Primitive Seq Protocol Err Cnt: 0
Invalid Tx Word Count:          16
Invalid CRC Count:              0
#



Thursday, February 17, 2011

cp : 0653-447 Requested a write of 4096 bytes, but wrote only 3584.

As user oracle I was trying to copy the Oracle binaries, which I had downloaded to /home/root on an AIX 7.1 machine.

$ cp -p /home/root/aix.ppc64_11gR2_database_1of2.zip .
cp : 0653-447 Requested a write of 4096 bytes, but wrote only 3584.
$ ulimit -a
time(seconds)        unlimited
file(blocks)         2097151
data(kbytes)         131072
stack(kbytes)        32768
memory(kbytes)       32768
coredump(blocks)     2097151
nofiles(descriptors) 2000
threads(per process) unlimited
processes(per user)  unlimited

$ ulimit -f unlimited
ksh: ulimit: 0403-045 The specified value is outside the user's allowable range.
I edited the /etc/security/limits file and added fsize = -197151 for the oracle user, as shown below.
isvp18> vi /etc/security/limits

pconsole:
stack_hard = 131072
data = 1280000
data_hard = 1280000

oracle:
fsize = -197151


isvp18> su - oracle
$ ulimit -f unlimited
$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         131072
stack(kbytes)        32768
memory(kbytes)       32768
coredump(blocks)     2097151
nofiles(descriptors) 2000
threads(per process) unlimited
processes(per user)  unlimited

$ cp /home/root/aix.ppc64_11gR2_database_1of2.zip .
$ cp /home/root/aix.ppc64_11gR2_database_2of2.zip .
$ cksum /home/root/aix.ppc64_11gR2_database_1of2.zip
1915658395 1564425851 /home/root/aix.ppc64_11gR2_database_1of2.zip
$ cksum aix.ppc64_11gR2_database_2of2.zip
1152318705 1007010341 aix.ppc64_11gR2_database_2of2.zip

The copy went through fine in the end :-)

Wednesday, February 16, 2011

mount: 0506-324 Cannot mount /dev/lv03 on /oraarch: The media is not formatted or the format is not correct.

I was trying to mount one of my file systems on AIX 7.1 when I got this error message:

isvp18> mount /oraarch
Replaying log for /dev/lv03.
mount: 0506-324 Cannot mount /dev/lv03 on /oraarch: The media is not formatted or the format is not correct.
0506-342 The superblock on /dev/lv03 is dirty.  Run a full fsck to fix.

So I ran fsck on the file system as suggested, and answered yes to the questions that fsck asked.
isvp18> fsck /oraarch



** Checking /dev/rlv03 (/oraar)
** Phase 0 - Check Log
log redo processing for /dev/rlv03
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Inode Map
** Phase 6 - Check Block Map
Bad Block Map; SALVAGE? yes
** Phase 6b - Salvage Block Map
Superblock is marked dirty; FIX? yes
9 files 1528152 blocks 191934120 free
***** Filesystem was modified *****
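For an unattended repair, fsck's SALVAGE/FIX prompts can be pre-answered with the -y flag; a sketch of the whole recovery sequence, using the device and mount point from this example (the commands are only printed here, since they apply to that AIX box):

```shell
# Dirty-superblock recovery: unmount, full fsck answering yes, remount
printf '%s\n' \
  'umount /oraarch' \
  'fsck -y /dev/rlv03' \
  'mount /oraarch'
```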


Tried to mount the file system again after the fsck completed, and it worked fine.
isvp18> mount /oraarch
isvp18> df -m
Filesystem    MB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4       10240.00   3502.43   66%    52007     7% /
/dev/hd2       16896.00  14625.86   14%    44235     2% /usr
/dev/hd9var      512.00    215.45   58%     5765    11% /var
/dev/hd3        3584.00   3332.43    8%      483     1% /tmp
/dev/hd1       15360.00   7167.52   54%     3716     1% /home
/dev/hd11admin    256.00    255.61    1%        5     1% /admin
/proc                 -         -    -         -     -  /proc
/dev/hd10opt     512.00    327.88   36%     7015     9% /opt
/dev/livedump    256.00    255.64    1%        4     1% /var/adm/ras/livedump
/dev/lv02      94464.00  93567.66    1%       21     1% /oralog
/dev/fslv00    94464.00  92401.49    3%       22     1% /oradata
/dev/bkuplv00   9440.00   7707.68   19%       28     1% /bak/test/home/test
vanhalen:/vanhalen/tools    512.00    486.43    5%      541     1% /testlab/tools
/dev/lv03      94464.00  93717.83    1%       18     1% /oraarch

Wednesday, February 9, 2011

fshop_make: 0506-252 A file system with nbpi = 4096 cannot exceed 134217728 512-byte blocks

I was trying to create a 100 GB file system on my IBM Storwize V7000 volume, and was getting this error message from smitty while creating the file system:
Error message: fshop_make: 0506-252 A file system with nbpi = 4096 cannot exceed 134217728 512-byte blocks

Increasing the nbpi allowed me to create the file system.
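The number in the error message works out to the cap as follows (a quick conversion in plain shell):

```shell
# nbpi=4096 limits a JFS file system to 134217728 512-byte blocks;
# convert that cap to GiB:
limit_blocks=134217728
limit_bytes=$((limit_blocks * 512))
echo "$((limit_bytes / 1024 / 1024 / 1024)) GiB"   # 64 GiB
# A 100 GB file system is over the cap, hence the need for a larger nbpi.
```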


smitty made the following entry in /etc/filesystems for my file system:

/oradata:
        dev             = /dev/lv01
        vfs             = jfs
        log             = /dev/loglv00
        mount           = true
        options         = rw
        account         = false


smitty also created the /oradata mount point:
isvp17> ls -l /oradata
total 0

Mounting /oradata:
isvp17> mount /oradata
isvp17> df -m
Filesystem    MB blocks      Free %Used    Iused %Iused Mounted on

/dev/lv01      94464.00  93717.84    1%       17     1% /oradata

Wednesday, January 26, 2011

Mounting a file system on AIX 7.1

I created a 10 GB volume on my Storwize V7000 storage, which I then presented to my AIX 7.1 server.

isvp17> lspv
hdisk0          00f65d51a5aa3cf1                    rootvg          active
hdisk1          00f65d51bfba4e2e                    test1           active
hdisk2          none                                None
hdisk3          none                                None
hdisk4          none                                None
isvp17>
hdisk4 is the 10 GB volume that I had created. Next, I'll create a volume group called "metro".

isvp17> mkvg -y metro hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
metro
isvp17>
isvp17> lspv
hdisk0          00f65d51a5aa3cf1                    rootvg          active
hdisk1          00f65d51bfba4e2e                    test1           active
hdisk2          none                                None
hdisk3          none                                None
hdisk4          00f65d51c465f8eb                    metro           active
isvp17>

Create a file system using the "smitty crjfs" command, then select "Add a Standard Journaled File System" from the menu. It will ask you which volume group you want to use; at that point select the volume group "metro" that we created earlier.

The next screen asks a bunch of questions which, when answered correctly, create the file system for us. All we have to do after that is mount our newly created file system.
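For the record, the same smitty dialog can also be driven from the command line with crfs; a sketch under assumed names and sizes (the mount point /metrofs and the 9G size are made up, and the commands are only printed here since crfs exists only on AIX):

```shell
# crfs creates the logical volume, the /etc/filesystems stanza, and the
# mount point in one step; -A yes mounts it automatically at boot.
printf '%s\n' \
  'crfs -v jfs2 -g metro -a size=9G -m /metrofs -A yes' \
  'mount /metrofs'
```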


Tuesday, January 25, 2011

How to increase the file system size on an AIX 7.1 server

I was running out of file system space on my AIX 7.1 box.

The df output showed that things were pretty tight:
isvp17> df -m
Filesystem    MB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4         512.00    330.59   36%     9762    12% /
/dev/hd2       16896.00  14787.64   13%    43253     2% /usr
/dev/hd9var      512.00    232.75   55%     5726    10% /var
/dev/hd3        3584.00   3581.39    1%       40     1% /tmp
/dev/hd1         256.00    120.70   53%      183     1% /home
/dev/hd11admin    256.00    255.62    1%        5     1% /admin
/proc                 -         -    -         -     -  /proc
/dev/hd10opt     512.00    336.83   35%     7007     9% /opt
/dev/livedump    256.00    255.64    1%        4     1% /var/adm/ras/livedump
isvp17>

The volume group rootvg in my case had one physical volume (hdisk0). All the physical volumes in a volume group are divided into physical partitions (PPs) of the same size.
To check whether there was any free space on the physical volume backing rootvg, I ran:
isvp17> lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            559         454         111..97..22..112..112

454 is the number of free physical partitions (PPs). Since there was plenty of free space, I grew the / file system to 5 GB:
isvp17> chfs -a size=5G /
Filesystem size changed to 10485760
isvp17> df -m
Filesystem    MB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4        5120.00   4937.89    4%     9762     1% /
/dev/hd2       16896.00  14787.64   13%    43253     2% /usr
/dev/hd9var      512.00    232.68   55%     5726    10% /var
/dev/hd3        3584.00   3581.39    1%       40     1% /tmp
/dev/hd1         256.00    120.68   53%      183     1% /home
/dev/hd11admin    256.00    255.62    1%        5     1% /admin
/proc                 -         -    -         -     -  /proc
/dev/hd10opt     512.00    336.83   35%     7007     9% /opt
/dev/livedump    256.00    255.64    1%        4     1% /var/adm/ras/livedump
isvp17>
isvp17> lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            559         436         111..97..04..112..112
isvp17> 
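The lsvg output before and after the chfs also lets us back out the PP size; a quick check using the numbers above:

```shell
# / grew from 512 MB to 5120 MB while free PPs dropped from 454 to 436,
# so each physical partition covers:
grown_mb=$((5120 - 512))
pps_used=$((454 - 436))
echo "$((grown_mb / pps_used)) MB per PP"   # 256 MB per PP
```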
 
Next I increased the size of the /home file system to 15 GB:
isvp17> chfs -a size=15G /home
Filesystem size changed to 31457280
isvp17> lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            559         377         111..42..00..112..112
isvp17>
 
It's good to know that I still have 377 physical partitions free in case I need
to increase it further in the future. 

Friday, January 21, 2011

Peek into AIX 7.1

I've been mainly a Linux and Solaris guy for most of my tech career, and now I finally have a chance to tinker
around with AIX, AIX 7.1 to be precise.

AIX Version 7
Copyright IBM Corporation, 1982, 2010.
login: root
root's Password:
*******************************************************************************
*                                                                             *
*                                                                             *
*  Welcome to AIX Version 7.1!                                                *
*                                                                             *
*                                                                             *
*  Please see the README file in /usr/lpp/bos for information pertinent to    *
*  this release of the AIX Operating System.                                  *
*                                                                             *
*                                                                             *
*******************************************************************************
Last login: Fri Jan 21 11:01:47 MST 2011 on /dev/vty0

AIX Level is: 7.1.0.0
isvp17>
Yay!!! that wasn't that hard.


Ok, so let's see what uname gives us here. As you can see, it tells us the OS name, version, and that we are on a PowerPC system:
isvp17> uname -ap
AIX isvp17 1 7 00F65D514C00 powerpc

Next, how about the amount of physical memory on this system? rmss with the -p flag tells us that around 24 GB of memory has been allocated to this system/LPAR.
isvp17> rmss -p
Simulated memory size is 24576 Mb.
I would also like to know whether this is an LPAR or a fully allocated system. prtconf will give me a detailed list of the hardware configuration. Wow!!! this is actually pretty detailed, and handy information to have about the system we are working on.

isvp17> prtconf
System Model: IBM,8233-E8B
Machine Serial Number: 065D51P
Processor Type: PowerPC_POWER7
Processor Implementation Mode: POWER 7
Processor Version: PV_7_Compat
Number Of Processors: 6
Processor Clock Speed: 3000 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 1 isvp17
Memory Size: 24576 MB
Good Memory Size: 24576 MB
Platform Firmware level: AL710_099
Firmware Version: IBM,AL710_099
Console Login: enable
Auto Restart: true
Full Core: false

Network Information
        Host Name: isvp17.storage.tucson.ibm.com
        IP Address: x.xx.xx.xx
        Sub Netmask: 255.255.254.0
        Gateway: x.xx.xx.x
        Name Server:
        Domain Name:

Paging Space Information
        Total Paging Space: 512MB
        Percent Used: 3%

Volume Groups Information
==============================================================================
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            559         454         111..97..22..112..112
==============================================================================

INSTALLED RESOURCE LIST

The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
*   = Diagnostic support not available.

  Model Architecture: chrp
  Model Implementation: Multiple Processor, PCI bus

+ sys0                                                           System Object
+ sysplanar0                                                     System Planar
* vio0                                                           Virtual I/O Bus
* ent1             U8233.E8B.065D51P-V1-C3-T1                    Virtual I/O Ethernet Adapter (l-lan)
* vscsi0           U8233.E8B.065D51P-V1-C2-T1                    Virtual SCSI Client Adapter
* hdisk0           U8233.E8B.065D51P-V1-C2-T1-L8100000000000000  Virtual SCSI Disk Drive
* vsa0             U8233.E8B.065D51P-V1-C0                       LPAR Virtual Serial Adapter
* vty0             U8233.E8B.065D51P-V1-C0-L0                    Asynchronous Terminal
+ fcs0             U8233.E8B.065D51P-V1-C36-T1                   Virtual Fibre Channel Client Adapter
+ fscsi0           U8233.E8B.065D51P-V1-C36-T1                   FC SCSI I/O Controller Protocol Device
* lhea0            U78A0.001.DNWK129-P1                          Logical Host Ethernet Adapter (l-hea)
+ ent0             U78A0.001.DNWK129-P1-C6-T1                    Logical Host Ethernet Port (lp-hea)
+ L2cache0                                                       L2 Cache
+ mem0                                                           Memory
+ proc0                                                          Processor
+ proc4                                                          Processor
+ proc8                                                          Processor
+ proc12                                                         Processor
+ proc16                                                         Processor
+ proc20                                                         Processor
isvp17>
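prtconf's flat "Label: value" lines are easy to slice with awk. A minimal sketch, with a hypothetical subset of the output stubbed into a variable so it runs anywhere (the real command is AIX-only):

```shell
# Stub of a few prtconf lines; split each on ": " and pull the value
sample='Processor Type: PowerPC_POWER7
Number Of Processors: 6
Memory Size: 24576 MB'
printf '%s\n' "$sample" | awk -F': ' '/^Memory Size/ {print $2}'   # 24576 MB
```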


How about some information on the disks and the mounted file systems? Let me try out the good old df command and see what it brings back.
isvp17> df -m
Filesystem    MB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4         512.00    330.74   36%     9749    12% /
/dev/hd2       16896.00  15105.72   11%    43011     2% /usr
/dev/hd9var      512.00    245.90   52%     5684    10% /var
/dev/hd3        3584.00   3581.45    1%       36     1% /tmp
/dev/hd1         256.00    255.54    1%       17     1% /home
/dev/hd11admin    256.00    255.62    1%        5     1% /admin
/proc                 -         -    -         -     -  /proc
/dev/hd10opt     512.00    336.88   35%     7007     9% /opt
/dev/livedump    256.00    255.64    1%        4     1% /var/adm/ras/livedump
vanhalen:/vanhalen/tools    512.00    486.43    5%      541     1% /testlab/tools

To mount a file system in Solaris you would make an entry in /etc/vfstab, and in Red Hat Linux in /etc/fstab.
Neither of these files exists in AIX; here it's done through the /etc/filesystems file. Let's see what it has.
I won't cut and paste the whole thing, but it has something like this.

isvp17> cat /etc/filesystems
* This version of /etc/filesystems assumes that only the root file system
* is created and ready.  As new file systems are added, change the check,
* mount, free, log, vol and vfs entries for the appropriate stanza.
*

/:
        dev             = /dev/hd4
        vfs             = jfs2
        log             = /dev/hd8
        mount           = automatic
        check           = false
        type            = bootfs
        vol             = root
        free            = true

/home:
        dev             = /dev/hd1
        vfs             = jfs2
        log             = /dev/hd8
        mount           = true
        check           = true
        vol             = /home
        free            = false

/usr:
        dev             = /dev/hd2
        vfs             = jfs2
        log             = /dev/hd8
        mount           = automatic
        check           = false
        type            = bootfs
        vol             = /usr
        free            = false


How about some disk information, something like fdisk -l in Linux or format in Solaris? Out in AIX land the command is lspv, so let's try it out. Well, I would like some more information than that, so let me look for flags that give a bit more detail. Looks like -p gives us
more info, but it's very different from what I'm used to in Linux and Solaris.
isvp17> lspv
hdisk0          00f65d51a5aa3cf1                    rootvg          active


isvp17> lspv -p hdisk0
hdisk0:
PP RANGE  STATE   REGION        LV NAME             TYPE       MOUNT POINT
  1-1     used    outer edge    hd5                 boot       N/A
  2-112   free    outer edge
113-114   used    outer middle  hd6                 paging     N/A
115-126   used    outer middle  lg_dumplv           sysdump    N/A
127-127   used    outer middle  livedump            jfs2       /var/adm/ras/livedump
128-224   free    outer middle
225-225   used    center        hd8                 jfs2log    N/A
226-226   used    center        hd4                 jfs2       /
227-230   used    center        hd2                 jfs2       /usr
231-231   used    center        hd9var              jfs2       /var
232-232   used    center        hd3                 jfs2       /tmp
233-233   used    center        hd1                 jfs2       /home
234-234   used    center        hd10opt             jfs2       /opt
235-235   used    center        hd11admin           jfs2       /admin
236-236   used    center        hd4                 jfs2       /
237-240   used    center        hd2                 jfs2       /usr
241-241   used    center        hd9var              jfs2       /var
242-242   used    center        hd10opt             jfs2       /opt
243-243   used    center        hd3                 jfs2       /tmp
244-301   used    center        hd2                 jfs2       /usr
302-313   used    center        hd3                 jfs2       /tmp
314-335   free    center
336-447   free    inner middle
448-559   free    inner edge
isvp17>

isvp17> lspv -l hdisk0
hdisk0:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
hd3                   14      14      00..00..14..00..00    /tmp
hd9var                2       2       00..00..02..00..00    /var
hd2                   66      66      00..00..66..00..00    /usr
hd4                   2       2       00..00..02..00..00    /
hd10opt               2       2       00..00..02..00..00    /opt
hd1                   1       1       00..00..01..00..00    /home
hd8                   1       1       00..00..01..00..00    N/A
hd6                   2       2       00..02..00..00..00    N/A
hd5                   1       1       01..00..00..00..00    N/A
lg_dumplv             12      12      00..12..00..00..00    N/A
livedump              1       1       00..01..00..00..00    /var/adm/ras/livedump
hd11admin             1       1       00..00..01..00..00    /admin