The Subsystem Device Driver (SDD) is a pseudo device driver designed to support multipath configurations with the IBM TotalStorage Enterprise Storage Server, the IBM TotalStorage DS family, and the IBM System Storage SAN Volume Controller. It resides in the host system alongside the native disk device driver and provides the following functions:
– Enhanced data availability
– Dynamic I/O load-balancing across multiple paths
– Automatic path failover protection
– Concurrent download of licensed internal code
– Path-selection policies for the host system
On AIX it works more or less out of the box, but on Linux the story is quite different. The driver is closed source, so all you can do is download the driver kernel module for supported kernels from IBM, along with some userspace software.
For supported distributions, the driver can be downloaded from:
Subsystem Device Driver for Linux
When you install the package, it copies its files under /opt/IBMsdd (at least the Red Hat RPMs do). It also adds an init script called “sdd” under /etc/init.d, and adds a line like this to /etc/inittab:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1
This makes sure that if the sddsrv daemon dies, init restarts it automatically.
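If you add or change that inittab line by hand, init has to re-read the file before the respawn takes effect. A minimal check, assuming a standard SysV init:

```shell
# Ask init to re-read /etc/inittab
telinit q
# Confirm the daemon is running (the PID will change if you kill it
# and init respawns it)
ps -C sddsrv -o pid,cmd
```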
Basic configuration
After the disks have been configured in the SAN Volume Controller, they should show up when the buses of the Fibre Channel adapters are rescanned.
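On a 2.6 kernel the rescan can be triggered through sysfs. This is a sketch; host numbering varies per system, so check what is under /sys/class/scsi_host/ first:

```shell
# Wildcard rescan ("- - -" means any channel, any target, any LUN)
# on every SCSI host, including the FC HBAs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
```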
/proc/partitions shows the new disk:
252        0   73400320  vpatha
/etc/vpath.conf holds the names for the device IDs, and /etc/sddsrv.conf some miscellaneous settings.
Under /opt/IBMsdd/bin there are some useful utilities. The datapath command lets you query adapters and devices, and set them offline or online. lsvpcfg shows which SCSI disks map to which vpath disks, and cfgvpath lets you make changes to the configuration.
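For example (a sketch; the adapter number and the output depend entirely on your setup):

```shell
# Show the FC adapters and their state
datapath query adapter
# Show vpath devices and the SCSI paths behind them
datapath query device
# Take one adapter offline for maintenance, then bring it back
datapath set adapter 0 offline
datapath set adapter 0 online
```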
Using with LVM on RHEL4
To get LVM working correctly with SDD on RHEL4, there are a couple of things that must be taken care of.
Boot-up load order
The first thing is to make sure the sdd driver loads before LVM at boot. The SDD User Manual suggests starting sdd from /etc/rc.sysinit: it must be started after the root filesystem has been remounted read-write, but before LVM initializes:
# Remount the root filesystem read-write.
update_boot_stage RCmountfs
state=`awk '/ \/ / && ($3 !~ /rootfs/) { print $4 }' /proc/mounts`
[ "$state" != "rw" -a "$READONLY" != "yes" ] && \
        action $"Remounting root filesystem in read-write mode: " mount -n -o remount,rw /

# Starting SDD
/etc/init.d/sdd start

# LVM initialization
...
They also say the /etc/init.d/sdd script must be set not to start at boot-up:
[root@server ~]# chkconfig sdd off
The above configuration has a problem, though. The SDD software is installed under /opt, but if /opt is on an LVM partition, there is no way to load the SDD drivers from there before LVM has been started and /opt mounted. In fact, that line will not work if /opt resides on its own partition of any type.
One solution to that problem is to move the SDD drivers to the root partition. For me it seems to just work if I enable /etc/init.d/sdd at bootup (no changes needed in rc.sysinit):
[root@server ~]# chkconfig sdd on
This may be because my root partition is also on LVM: LVM then has to be initialized early in the initrd, and apparently it scans for devices again later during boot.
Volume initialization and detection problems
A couple of changes must be made to the LVM configuration file /etc/lvm/lvm.conf. If you don’t do that, you will run into an error message like this while trying to create a physical volume on a vpath device:
[root@server ~]# pvcreate /dev/vpatha1
  Device /dev/vpatha1 not found (or ignored by filtering).
Accepting vpath devices
In /etc/lvm/lvm.conf, you will see that this line is commented out:
# types = [ "fd", 16 ]
Remove the comment character, and change the line to look like this:
types = [ "vpath", 16 ]
That will add vpath to LVM’s list of accepted device types.
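The edit can also be scripted. The sketch below operates on a sample file under /tmp rather than the real /etc/lvm/lvm.conf, and assumes the stock comment line looks exactly like the one above; always check the result before applying it to the real file:

```shell
# Build a minimal sample containing the stock commented-out line
cat > /tmp/lvm.conf <<'EOF'
devices {
    # types = [ "fd", 16 ]
}
EOF
# Uncomment the line and switch the type from "fd" to "vpath"
sed -i 's/# *types = \[ "fd", 16 \]/types = [ "vpath", 16 ]/' /tmp/lvm.conf
# Show the result
grep 'types' /tmp/lvm.conf
```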
Rejecting the underlying devices
lvm.conf also has a filter option, which selects the devices LVM is allowed to use as physical volumes. The default is usually to allow every device. That is a problem with vpath devices, because each LUN shows up both as /dev/vpathX and as the regular SCSI disks behind it. So, if you have configured one disk per volume controller through two FC switches, you will see a total of four /dev/sdX disks for each vpath disk, and all five devices will show the same data (although only the vpath device is redundant, and should therefore be used). Since LVM scans all devices by default, it will warn about duplicate physical volumes unless you change the filtering rule. The messages look like this:
Found duplicate PV 1XlJrZHnI49tTtHVvwe7cXZ0cATNFTxw: using /dev/sdak not /dev/vpatha
This is particularly bad, as it means LVM has chosen to use sdak instead of the redundant vpatha path.
To prevent this, change the filter line in lvm.conf to look like this:
filter = [ "a/vpath[a-z]*/", "r/.*/" ]
That line makes LVM accept only devices named vpath[a-z]*, so it will choose the correct device. However, if you have physical volumes on internal SCSI disks, the rule will reject them too, so add an accept rule for those as well:
filter = [ "a/vpath[a-z]*/", "a/sda2/", "r/.*/" ]
After these changes, you should be able to create physical volumes on vpath devices, and no error messages should appear when handling volumes. Everything should also look the same the next time you boot.
Hi Mikko,
Very nice and helpful entry you have here. I only wish I had discovered this earlier as it would
Hi.
First of all, sorry for my English; I'm French.
I have an IBM x460 server with RHEL4 and an IBM ESS SAN.
I have one LUN mounted and I want to mount another one, but it doesn't work.
Here is some info:
cat /proc/partitions :
major minor #blocks name
8 0 143142912 sda
8 1 104391 sda1
8 2 143034727 sda2
8 16 102050784 sdb
8 17 102044848 sdb1
8 32 146484384 sdc
8 33 131072 sdc1
8 34 131072 sdc2
8 35 146464768 sdc3
8 39 146202624 sdc7
8 48 102050784 sdd
8 49 102044848 sdd1
8 64 146484384 sde
8 65 131072 sde1
8 66 131072 sde2
8 67 146464768 sde3
8 71 146202624 sde7
8 80 102050784 sdf
8 81 102044848 sdf1
8 96 146484384 sdg
8 97 131072 sdg1
8 98 131072 sdg2
8 99 146464768 sdg3
8 103 146202624 sdg7
8 112 102050784 sdh
8 113 102044848 sdh1
8 128 146484384 sdi
8 129 131072 sdi1
8 130 131072 sdi2
8 131 146464768 sdi3
8 135 146202624 sdi7
253 0 140967936 dm-0
253 1 2031616 dm-1
252 0 102050784 vpatha
252 1 102044848 vpatha1
252 64 146484384 vpathb
252 65 131072 vpathb1
252 66 131072 vpathb2
252 67 146464768 vpathb3
252 71 146202624 vpathb7
253 2 102043648 dm-2
df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
138755080 29089464 102617220 23% /
/dev/sda1 101086 16013 79854 17% /boot
none 8316612 0 8316612 0% /dev/shm
/dev/mapper/base_vg-base_lv
100441544 13137160 82202204 14% /base
vgscan
Reading all physical volumes. This may take a while…
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb1 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb3 not /dev/vpathb
Found volume group “base_vg” using metadata type lvm2
pvdisplay
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb1 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb3 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb1 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb3 not /dev/vpathb
— Physical volume —
PV Name /dev/vpatha1
VG Name base_vg
PV Size 97.32 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 24913
Free PE 0
Allocated PE 24913
PV UUID j6v90t-6rXx-GWs0-rPJE-0Nmd-10zW-FJtGXj
— NEW Physical volume —
PV Name /dev/vpathb
VG Name
PV Size 139.70 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID XMDFLa-694l-4pV6-d6vP-QQPT-5tjw-nLCMQG
datapath query device
Total Devices : 2
DEV#: 0 DEVICE NAME: vpatha TYPE: 2105800 POLICY: Optimized Sequential
SERIAL: 31327397
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Host1Channel0/sdb OPEN NORMAL 26864417 0
1 Host1Channel0/sdd OPEN NORMAL 26894334 0
2 Host2Channel0/sdf OPEN NORMAL 26873013 0
3 Host2Channel0/sdh OPEN NORMAL 26882503 0
DEV#: 1 DEVICE NAME: vpathb TYPE: 2105800 POLICY: Optimized Sequential
SERIAL: 31127397
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Host1Channel0/sdc CLOSE NORMAL 153 0
1 Host1Channel0/sde CLOSE NORMAL 134 0
2 Host2Channel0/sdg CLOSE NORMAL 156 0
3 Host2Channel0/sdi CLOSE NORMAL 194 0
vgcreate images_vg /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb1 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb3 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb1 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb3 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb1 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb3 not /dev/vpathb
Volume group “images_vg” successfully created
vgscan
Reading all physical volumes. This may take a while…
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb1 not /dev/vpathb
Found duplicate PV XMDFLa694l4pV6d6vPQQPT5tjwnLCMQG: using /dev/vpathb3 not /dev/vpathb
Found volume group “base_vg” using metadata type lvm2
I can’t mount the second LUN. Do you have any ideas to help me?
Thank you guys!! It also works for me.
I just enabled sdd at startup and the DS8K disks are now visible to me:
# chkconfig --level 345 sdd on
I didn't change anything in any file; I just typed this command, and hopefully all issues will be fixed.