


rrd4sar

29 April, 2010 02:29

Description: Fetches SAR statistics for a selected day from a selected remote machine and displays them graphically. Uses RRDtool (Copyright 1997-2004 by Tobias Oetiker), the sar command by Sebastien Godard, and PHP (libssh2).

License: Distributed under GNU GENERAL PUBLIC LICENSE - Version 3
Author: Praveen Kumar K S
Download: rrd4sar










SELinux, httpd ( apache ), file context ( httpd_sys_content_t ) and other settings

04 November, 2008 06:17

SELinux, when enforcing, can break a lot of things. Features developed on a server with SELinux disabled
may stop working once they reach production. SELinux confines
httpd (Apache), and here are a few pointers.
Errors:
  1. You see errors when you create a VirtualHost whose DocumentRoot is outside /var/www/html
  2. You see errors when you start Apache on non-standard ports, i.e. ports other than the ones listed below
                            http_cache_port_t              tcp      3128, 8080, 8118
                            http_cache_port_t              udp      3130
                            http_port_t                        tcp      80, 443, 488, 8008, 8009, 8443
                            pegasus_http_port_t            tcp      5988
                            pegasus_https_port_t           tcp      5989
  3. Your script fails when it tries to execute system binaries.
    Eg:
    Running /usr/bin/crontab from PHP.
So, what are we supposed to do? There are two possibilities.
SELinux can be configured by setting SELinux booleans, or by changing the context of the files involved. So what is a context? That is explained below.

If SELinux is not disabled, start by looking at

        cat /var/log/messages | grep SELinux

and then relax the SELinux restrictions on httpd accordingly.

Manual:
        man 8 httpd_selinux

Is SELinux enabled?
        dmesg | grep selinux
        cat /selinux/enforce

To see processes protected by selinux:
                ps -ZC httpd

For entire list
                ps -eZ

File attributes
                ls -Z /usr/bin/crontab
                        -rwxr-xr-x  root     root     system_u:object_r:bin_t          /usr/bin/crontab
                SELinux prevents:
                        ls -Z /home/praveen/test.php
                        -rw-rw-r--  praveen  praveen  user_u:object_r:user_home_t     /home/praveen/test.php
                SELinux allows:
                        ls -Z /home/praveen/test.php
                        -rw-rw-r--  praveen  praveen  user_u:object_r:httpd_sys_content_t      /home/praveen/test.php

SELinux booleans available for Apache
                getsebool -a | grep httpd

allow_httpd_anon_write --> off
allow_httpd_bugzilla_script_anon_write --> off
allow_httpd_mod_auth_pam --> off
allow_httpd_nagios_script_anon_write --> off
allow_httpd_squid_script_anon_write --> off
allow_httpd_sys_script_anon_write --> off
httpd_builtin_scripting --> on
httpd_can_network_connect --> off
httpd_can_network_connect_db --> off
httpd_can_network_relay --> off
httpd_disable_trans --> off
httpd_enable_cgi --> on
httpd_enable_ftp_server --> off
httpd_enable_homedirs --> on
httpd_rotatelogs_disable_trans --> off
httpd_ssi_exec --> off
httpd_suexec_disable_trans --> off
httpd_tty_comm --> off
httpd_unified --> on

List httpd ports
                semanage port -l | grep http
                        http_cache_port_t              tcp      3128, 8080, 8118
                        http_cache_port_t              udp      3130
                        http_port_t                    tcp      80, 443, 488, 8008, 8009, 8443
                        pegasus_http_port_t            tcp      5988
                        pegasus_https_port_t           tcp      5989

Add an httpd port to SELinux
                semanage port -a -t http_port_t -p tcp 81
                Then add it in httpd.conf:
                        Listen 81
                and do a graceful restart (apachectl graceful).

audit2allow is a script that interprets SELinux denial messages and constructs the policy rules needed to allow the blocked operations.
                /usr/bin/audit2allow -i /var/log/messages
                sealert -l <id>
                        (id from /var/log/messages)
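If a denial keeps coming back, audit2allow can also package the needed rules into a loadable policy module. A minimal sketch, assuming the AVC denials are in /var/log/messages (use /var/log/audit/audit.log if auditd is running), that your release supports loadable policy modules, and that "httpdlocal" is just a made-up module name:

                # build a local policy module from the logged denials (module name is arbitrary)
                grep avc /var/log/messages | /usr/bin/audit2allow -M httpdlocal
                # load the generated module and confirm it is installed
                /usr/sbin/semodule -i httpdlocal.pp
                /usr/sbin/semodule -l | grep httpdlocal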

Change the file context to make it accessible by httpd:
       chcon -h root:object_r:httpd_sys_content_t test.php

Sometimes restorecon -v test.php is enough (it restores the default context).

Recursively,
         restorecon -Rv <dir>

The command below allows Apache to access your (non-default) document root directory:
chcon -Rt httpd_sys_content_t <your document root dir>

The command below will allow apache to access user home directories as document root:
setsebool -P httpd_enable_homedirs=1

To run system commands from PHP or other scripts under Apache, the command's context should be changed to httpd_unconfined_script_exec_t.
Eg:
chcon -t httpd_unconfined_script_exec_t /usr/bin/crontab

For other policy-related errors not discussed here:
        tail -f /var/log/messages | grep SELinux
        sealert -l <id>
will tell you what to do.

The last resort is to disable SELinux protection for Apache alone:
       setsebool -P httpd_disable_trans 1
       /etc/init.d/httpd restart

In other words, disabling SELinux system-wide is not the answer to Apache issues.





Common Errors and Resolutions

04 August, 2008 07:34


Error:
rsync: recv_generator: mkdir "" failed: Too many links (31)
rsync: stat "" failed: No such file or directory (2)
rsync: mkstemp ".MXiMwF" failed: No such file or directory (2)
Resolution:
Switch to ReiserFS/GFS, or stay within ext3's subdirectory limit (roughly 32,000 entries per directory)


Context:
svnsync (reversal)
Error:
svnsync: PROPFIND of '/mysvn': Could not resolve hostname `myhost'
Resolution:
Use the IP address instead of the hostname.


Context:
svn-python
>>>import svn.repos
Error:
undefined symbol: gss_delete_sec_context
Resolution:
Edit Makefile
#SVN_APR_LIBS =  path-to-apache-x.x.x/lib/libapr-x.la -luuid -lrt -lcrypt  -lpthread -ldl
SVN_APR_LIBS =  path-to-apache-x.x.x/lib/libapr-1.la -luuid -lrt -lcrypt  -lpthread -ldl -lgssapi
ln -s /usr/lib/libgssapi.so.x.x.x /usr/lib/libgssapi.so
And make again.


Context:
MIME-tools: /usr/bin/perl -MCPAN -e 'install MIME::Parser'
Error:
No IO::File
Undefined subroutine &Mail::Internet::mailaddress
Resolution:
cpan> install IO::File
cpan> force install MIME::Parser


Context:
Chart: /usr/bin/perl -MCPAN -e 'install Chart::Base'
Error:
The module Chart::Base isn't available on CPAN
Resolution:
wget http://search.cpan.org/CPAN/authors/id/C/CH/CHARTGRP/Chart-2.4.1.tar.gz
tar zxvf Chart-2.4.1.tar.gz
cd Chart-2.4.1
perl Makefile.PL
make
make test
make install


Error:
Starting httpd: Warning: DocumentRoot [path-to-bugzilla] does not exist
Forbidden
You don't have permission to access /README on this server.
Additionally, a 403 Forbidden error was encountered while trying to use an
ErrorDocument to handle the request.
Resolution:
sestatus
cat /selinux/enforce
echo 0 >/selinux/enforce
newrole -r sysadm_r
cat /selinux/enforce
vi /etc/selinux/config
    SELINUX=disabled
reboot


Context:
mysqlhotcopy -u <user> -p <pass> <db> --debug
Error:
Using copy suffix '_
Filtering tables with '(?-xism:.*)'
Invalid db.table name 'db.table`.`field' at
path-to/mysqlhotcopy line 855.
Dirty fix:
Added a new line
vi path-to/mysqlhotcopy
:836
map { s/^.*?.//o } @dbh_tables;


Context:
mysqlhotcopy -u <user> -p <pass> <db> --debug --addtodest
Error:
DBD::mysql::db do failed: Access denied; you need the RELOAD privilege for
this operation at path-to/mysqlhotcopy line 473.
Resolution:
mysql>
GRANT RELOAD ON *.* TO user@localhost; FLUSH PRIVILEGES;


Context:
Can't connect to the database.
Error:
Too many connections
Is your database installed and up and running?
Do you have the correct username and password selected in localconfig?
Resolution:
vi /etc/my.cnf
#max_connections = 100
max_connections = 250
interactive_timeout = 180
wait_timeout = 180
/etc/init.d/mysqld restart
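Before raising max_connections, it may help to confirm the limit is really being hit. A small hedged check (root credentials assumed):
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'"
mysql -u root -p -e "SHOW STATUS LIKE 'Max_used_connections'"
mysql -u root -p -e "SHOW FULL PROCESSLIST"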


Error:
DBD::mysql::db do failed: MySQL server has gone away at mysqlhotcopy line 528.
Resolution:
vi /etc/my.cnf
interactive_timeout = 3600
wait_timeout = 3600
/etc/init.d/mysqld restart


Error:
apr-config not found
Resolution:
ln -s path-to/apr-1-config path-to/apr-config


Troubleshoot:
tail -f path-to-apache-x.x.x/logs/error_log
Error:
[] () Apache2::SizeLimit httpd process too big, exiting at SIZE=172222/0 KB  SHARE=5222/0 KB  UNSHARED=162222/70000 KB  REQUESTS=2 LIFETIME=0 seconds
Resolution:
Edit mod_perl.pl
        Raise the value of $Apache2::SizeLimit::MAX_UNSHARED_SIZE = 70000;


Dependency Errors:
missing: fig2dev
*** This tool is provided by docbook-utils ***
*** This tool is provided by sdvel ***
*** This tool is provided by transfig ***
Resolutions:
Load distro CD.
Change dir to CD.
find . | grep "docbook-utils"
rpm -q --provides -p RedHat/RPMS/docbook-utils-0.6.14-4.noarch.rpm
rpm -ivh RedHat/RPMS/docbook-utils-0.6.14-4.noarch.rpm
rpm -ivh RedHat/RPMS/docbook-style-dsssl-1.78-4.noarch.rpm
rpm -ivh RedHat/RPMS/jadetex-3.12-11.noarch.rpm
rpm -ivh RedHat/RPMS/tetex-2.0.2-22.EL4.7.x86_64.rpm
rpm -ivh RedHat/RPMS/tetex-fonts-2.0.2-22.EL4.7.x86_64.rpm
rpm -ivh RedHat/RPMS/tetex-latex-2.0.2-22.EL4.7.x86_64.rpm
rpm -ivh RedHat/RPMS/netpbm-progs-10.25-2.EL4.2.x86_64.rpm
rpm -ivh RedHat/RPMS/netpbm-10.25-2.EL4.2.x86_64.rpm
rpm -ivh RedHat/RPMS/tetex-dvips-2.0.2-22.EL4.7.x86_64.rpm
rpm -ivh RedHat/RPMS/docbook-utils-pdf-0.6.14-4.noarch.rpm
find . | grep "transfig"
rpm -ivh RedHat/RPMS/transfig-3.2.4-8.x86_64.rpm


Compile Warning:
missing: cvs2cl devel_product_release
The above warning can be ignored according to:
http://lists.mkgnu.net/pipermail/scmbug-users/2007-February/000786.html
Resolution:
Compile Scmbug with ./configure --without-doc


Context:
./etc/init.d/scmbug-server start
perl -MCPAN -e 'install Mail::Sendmail';
Error:
make test fails
Resolution:
cpan>
  look Mail::Sendmail
  perl Makefile.PL
  make
  make install




How can we use UCD-SNMP-MIB to snmpwalk a custom interface?

13 January, 2007 01:13

We have to consider two nodes here: the monitoring node and the application node.

1) On the monitoring node
In the case of JFFNMS, we can write custom interface types in PHP (that's what I am used to). We can add a script to parse the snmpwalk output and create RRD files out of it.

2) UCD-SNMP-MIB 1.3.6.1.4.1.2021
We can define our health-checking script in snmpd.conf.
In snmpd.conf of application node:
exec .1.3.6.1.4.1.2021.50 my-interface /bin/sh /home/app/status.sh

Eg:
exec echotest /bin/echo hello world
in snmpd.conf
Then,
snmpwalk -v 1 -c public localhost UCD-SNMP-MIB::extTable
will give,

UCD-SNMP-MIB::extIndex.1 = INTEGER: 1
UCD-SNMP-MIB::extNames.1 = STRING: echotest
UCD-SNMP-MIB::extCommand.1 = STRING: /bin/echo hello world
UCD-SNMP-MIB::extResult.1 = INTEGER: 0
UCD-SNMP-MIB::extOutput.1 = STRING: hello world
UCD-SNMP-MIB::extErrFix.1 = INTEGER: 0
UCD-SNMP-MIB::extErrFixCmd.1 = STRING:


3) On the application node
In /home/app/status.sh:
echo "[tac:`cat /home/app/status/tacs`] FATAL: Error while creating shs.data"
From our application we have to populate the tacs file with the hit count and append errors for status.sh to report on failure. Tools like log4php seem handy for this.

Summary:

From the monitor, we do an snmpwalk against the MIB on the application node and collect the extOutput values, which the monitor then parses. Using those values and the SLAs we can raise alerts, plot RRD graphs, etc.
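As an illustration only, here is a rough monitor-side sketch of that flow. The RRD layout, the file paths and the "[tac:<count>]" parsing are assumptions based on the example above:

APPNODE=app.example.com                 # assumed application node hostname
RRD=/var/rrd/app_hits.rrd               # assumed RRD location

# create the RRD once: one 5-minute COUNTER data source, one week of averages
[ -f "$RRD" ] || rrdtool create "$RRD" --step 300 DS:hits:COUNTER:600:0:U RRA:AVERAGE:0.5:1:2016

# fetch the exec output published by snmpd on the application node
OUT=`snmpget -v 1 -c public -Ovq "$APPNODE" UCD-SNMP-MIB::extOutput.1`

# pull the hit count out of "[tac:<count>] ..." and store it in the RRD
HITS=`echo "$OUT" | sed -n 's/.*\[tac:\([0-9]*\)\].*/\1/p'`
[ -n "$HITS" ] && rrdtool update "$RRD" "N:$HITS"

# raise an alert if the application logged a FATAL line
case "$OUT" in *FATAL*) echo "ALERT from $APPNODE: $OUT" ;; esac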




Howto MySQL DRBD HA

10 February, 2007 13:19

This is one of the MySQL High Availability strategies I discussed earlier. I have consolidated what I found on the net in bits and pieces, plus some of my own experience. Here are some tips to get it working.
In this approach I have noticed that failover is smooth and quick. If you are looking only for high availability of MySQL resources, this is the one.

Env:
I tried CentOS release 4.4 (Final) x86_64 on 2 servers.
The one with more RAM can be used as the active node; the other becomes the failover.

Partitioning during OS installation:
Reserve a large partition that will later be used as the DRBD volume.
Don't specify any file system type.

fdisk /dev/sda

Should print:

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 2611 20972826 83 Linux
/dev/sda2 2612 2872 2096482+ 82 Linux swap
/dev/sda3 2873 3003 1052257+ 8e Linux LVM
/dev/sda4 3004 9729 54026595 5 Extended
/dev/sda5 3004 9729 54026563+ 8e Linux LVM

We are going to use /dev/sda5 as a DRBD device.


DRBD:

Installation:
On machine1 and machine2

yum -y install drbd
yum -y install kernel-module-drbd-2.6.9-42.ELsmp
modprobe drbd

Configuration:
On both machines:

vi /etc/drbd.conf

#
# please have a a look at the example configuration file in
# /usr/share/doc/drbd/drbd.conf
#
# Our MySQL share
resource db
{
protocol C;
incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
startup { wfc-timeout 0; degr-wfc-timeout 120; }
disk { on-io-error detach; } # or panic, ...
syncer {
group 1;
rate 6M;
}
on machine1.myhost.com {
device /dev/drbd1;
disk /dev/sda5;
address 10.10.150.1:7789;
meta-disk internal;
}
on machine2.myhost.com {
device /dev/drbd1;
disk /dev/sda5;
address 10.10.150.2:7789;
meta-disk internal;
}
}

Start:

On both machines:
drbdadm adjust db
On machine1:
drbdsetup /dev/drbd1 primary --do-what-I-say
service drbd start
On machine2:
service drbd start
On both machines(see status):
service drbd status
On machine1:
mkfs -j /dev/drbd1
tune2fs -c -1 -i 0 /dev/drbd1
mkdir /db
mount -o rw /dev/drbd1 /db
On machine2:
mkdir /db

Test failover:
For a manual switchover (this won't be needed once HA is in place, as Heartbeat will do it for you):
On primary-
umount /db
drbdadm secondary db
On secondary-
drbdadm primary db
service drbd status
mount -o rw /dev/drbd1 /db
df

This finishes the DRBD part. You have created a DRBD-backed mount which will be used as the data directory for MySQL.


MySQL:
Now comes the hurdle.
machine1:
mkdir /db/mysql
NOTE: /db should be mounted to do this
mkdir /db/mysql/data
chown -R mysql /db/mysql/data
chgrp -R mysql /db/mysql/data
mv /home/mysql/data /db/mysql/data
ln -s /db/mysql/data /home/mysql/data
machine2:
mv /home/mysql/data /tmp
ln -s /db/mysql/data /home/mysql/data

Now, start MySQL on machine1, create a sample database and table, stop MySQL, do a manual switchover of DRBD, then start MySQL on machine2 and query that table. It should work (see the sketch below). But this is of no use if you have to switch over manually every time, so now we are heading to HA.
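A hedged sketch of that manual test, using the DRBD commands from above (the sample database and table names are made up; mysqld is assumed to be the init script):

On machine1:
/etc/init.d/mysqld start
mysql -e "CREATE DATABASE failtest; CREATE TABLE failtest.t (id INT); INSERT INTO failtest.t VALUES (1)"
/etc/init.d/mysqld stop
umount /db
drbdadm secondary db
On machine2:
drbdadm primary db
mount -o rw /dev/drbd1 /db
/etc/init.d/mysqld start
mysql -e "SELECT * FROM failtest.t"
The last query should return the row inserted on machine1.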


HA:

Installation:
yum -y install gnutls*
yum -y install ipvsadm*
yum -y install heartbeat*

Configuration:
Edit /etc/sysctl.conf and set net.ipv4.ip_forward = 1
vi /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 1
/sbin/chkconfig --level 2345 heartbeat on
/sbin/chkconfig --del ldirectord
You need to set up the following conf files on both machines:
a)/etc/ha.d/ha.cf
#/etc/ha.d/ha.cf content
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694 #(if you have multiple HA setups on the same network, use a different port for each)
bcast eth0 # Linux
auto_failback on #(this fails back to machine1 once it comes back up)
ping 10.10.150.100 #(your gateway IP)
apiauth ipfail gid=haclient uid=hacluster
node machine1.myhost.com
node machine2.myhost.com
On both machines:
b)/etc/ha.d/haresources
NOTE: Assuming 10.10.150.3 is virtual IP for your MySQL resource and mysqld is the LSB resource agent.
#/etc/ha.d/haresources content
machine1.myhost.com LVSSyncDaemonSwap::master IPaddr2::10.10.150.3/24/eth0 drbddisk::db Filesystem::/dev/drbd1::/db::ext3 mysqld
c)/etc/ha.d/authkeys
#/etc/ha.d/authkeys content
auth 2
2 sha1 YourSecretString
Now, make your authkeys secure:
chmod 600 /etc/ha.d/authkeys

Start:
On both machines (first on machine1):
Stop MySQL.
Make sure MySQL does not start on system init.
For that:
/sbin/chkconfig --level 2345 mysqld off
/etc/init.d/heartbeat start
These commands will give you status about this LVS setup:
/etc/ha.d/resource.d/LVSSyncDaemonSwap master status
ip addr sh
/etc/init.d/heartbeat status
df
/etc/init.d/mysqld status

Access your HA-MySQL server like:
mysql -h10.10.150.3

Shut down machine1 to see MySQL come up on machine2.
Start machine1 again to see MySQL fail back to machine1.
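A small hedged way to watch the failover from a client (the VIP is the one configured above; the credentials are assumed):

# keep querying the virtual IP once a second; @@hostname shows which machine is serving
while true; do
    mysql -h 10.10.150.3 -u root -e "SELECT @@hostname, NOW()" 2>/dev/null \
        || echo "`date '+%H:%M:%S'` connection refused (failover in progress?)"
    sleep 1
done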





MySQL clustering strategies and comparisons

17 April, 2007 12:39

After testing the following MySQL cluster alternatives, here are my notes back to the open source community, without which I could not have tried any of this in the first place. I have consolidated what I found on the net in bits and pieces, plus some of my own experience.
  1. MySQL Clustering (ndb-cluster storage)
  2. MySQL / GFS-GNBD / HA
  3. MySQL / DRBD / HA
  4. MySQL Write Master / Multiple MySQL Read Slaves
  5. Standalone MySQL Servers (functionally separated)


Scenario and usefulness:

1) MySQL Clustering (ndb-cluster storage)
What?
It's a mechanism provided by MySQL themselves, in the form of a storage engine.
It is transaction safe.
It replicates in real time.
You can use it in high-availability and load-balancing scenarios.
It needs a minimum of three nodes to show its real effect.
Cost:
First, go for this only if you can afford RAM, which grows in proportion to your DB size.
Second, you had better have Gigabit Ethernet.
Third, you may have to go for SCI cards from Dolphin, which cost around a grand per node.

Advantages:
Can be used in load-balancing scenarios
Can be used in high-availability scenarios
Highly scalable
True DB redundancy
Maintained properly

Disadvantages:
Cost factor (see above)

Speed:
Almost 10 times slower than a typical standalone server when run without Gigabit Ethernet and SCI cards. There are also a few storage-engine-related limitations.

When?
Redundancy, HA, Balanced Load.

2) MySQL / GFS-GNBD / HA
What?
How about having a shared disk as a data directory for multiple MySQL servers?
GFS/GNBD gives you that shared data disk.
GFS is a transaction safe FS.
You can have one MySQL server serving the shared data at a time.

Cost:
Cost of at most n powerful servers: one active and the others as failovers

Advantages:
High Availability
Redundancy to some extent
Scalable in terms of HA

Disadvantages:
No load balancing
No guaranteed redundancy
No scalability wrt load

Speed:
Twice the standalone. Fares well in reads.

When?
When your application is read-intensive and needs to be HA.

3) MySQL / DRBD / HA
What?
How about having a shared disk as a data directory for multiple MySQL servers?
DRBD gives you that shared data disk.
DRBD can be forced to be transaction safe.
You can have one MySQL server serving the shared data at a time.

Cost:
Cost of at most n powerful servers: one active and the others as failovers

Advantages:
High Availability
Redundancy to some extent
Scalable in terms of HA

Disadvantages:
No load balancing
No guaranteed redundancy
No scalability wrt load

Speed:
Almost the same as standalone for both reads and writes.

When?
When your application is read-intensive and needs to be HA.

4) MySQL Write Master / Multiple MySQL Read Slaves
What?
Consider having different DB handles for reads and writes.
The more reads you have, the more slaves you can add.
For writes you have one master.
So: 'n' slaves for reads and 1 master for writes.


Cost:
Cost of one powerful write server plus 'n' read slaves.

Advantages:
High Availability for reads.
Load balanced for reads.
Scalable in terms of Read-Load balancing

Disadvantages:
No load balancing for writes
No HA for writes
No scalability wrt writes

Speed:
Same as standalone. Fares well in reads.

When?
When your application is read-intensive and needs to be HA and load-balanced. An application that writes cautiously would do, because the write server is not HA.
The disadvantages weigh more here.



A) MySQL-HA-DRBD (active/passive) setup as the master

B) Slave1, Slave2, ... Slave'n' in a load-balanced setup

'A' will be available on a VIP, say 192.168.0.1

'B' will be available on a VIP, say 192.168.0.2

Now, in your application, use:

--> a MySQL connection to 192.168.0.1 for DB writes

--> a MySQL connection to 192.168.0.2 for DB reads


This gives us HA writes and HA/load-shared reads.
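A hedged sketch of that split from the application side (the VIPs are from the example above; the database, table, user and password are made up):

WRITE_VIP=192.168.0.1   # HA-DRBD master VIP
READ_VIP=192.168.0.2    # load-balanced slave pool VIP

# writes always go to the master VIP
mysql -h "$WRITE_VIP" -u app -psecret appdb -e "INSERT INTO hits (page) VALUES ('/index')"

# reads go to the slave VIP (replication may lag slightly behind the master)
mysql -h "$READ_VIP" -u app -psecret appdb -e "SELECT COUNT(*) FROM hits"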






Howto GFS GNBD

01 November, 2006 00:57

I have consolidated what I found on the net in bits and pieces, plus some of my own experience.

Infrastructure details on which I tried:

Three servers (it can also be implemented on two nodes)
Fedora Core 4 x86_64 on all nodes
I tried it on FC4 64-bit. If you plan to try it on another distribution or on a 32-bit arch, the procedure remains the same; since I built from source rather than from RPMs, you may simply have to supply the configure options with different CFLAGS.

Before you proceed, make sure you have a physical volume (something like /dev/sda1, /dev/sda4, etc.) with no data on it. This is going to be the GFS volume which you will export to the other nodes, and it should be on the node that is going to be your GNBD server. If you don't have such a volume, create one using fdisk.

I used the mounted GFS volume as the DocumentRoot for my (load-balanced) Apache server nodes.

I would suggest:

1) Do not try this in a production environment if you are looking for speed.
2) Also, there is a Linux issue related to unmounting: on halt or shutdown, Linux unmounts the devices, and this hangs in the case of GNBD. There is a fix, which may require a kernel recompile; otherwise you will have to hard-reboot.

A Red Hat cluster can prove effective if you have a SAN.



Plan:



Install instructions:

Step 1)
gnbd server: 

Now get rid of any old libdevmapper; otherwise it may trouble you later.
It may have come with your distribution.
Do:

locate libdevmapper

Move any such libdevmapper.* files to some junk location.


wget ftp://sources.redhat.com/pub/cluster/releases/cluster-1.00.00.tar.gz
wget ftp://sources.redhat.com/pub/dm/old/device-mapper.1.02.07.tgz
wget ftp://sources.redhat.com/pub/lvm2/old/LVM2.2.01.09.tgz
gunzip cluster-1.00.00.tar.gz
tar xvf cluster-1.00.00.tar
tar zxvf device-mapper.1.02.07.tgz
tar zxvf LVM2.2.01.09.tgz
cd cluster-1.00.00
CFLAGS="-m64 -fPIC" ./configure --kernel_src=/usr/src/kernels/2.6.11-1.1369_FC4-smp-x86_64/
make install

ln -s /usr/lib/libmagma* /usr/lib64/
ln -s /lib/libdevmapper.so* /usr/lib64/
ln -s /usr/lib/libgulm.* /usr/lib64/
ln -s /usr/lib/libmagma* /lib64/
ln -s /usr/lib/libmagma* /lib/
ln -s /usr/lib/libdlm* /usr/lib64/
ln -s /lib/liblvm2clusterlock* /lib64
ln -s /lib/liblvm2clusterlock* /usr/lib64/
ln -s /lib/liblvm2clusterlock* /usr/lib/
ln -s /lib/libdevmapper.so.1.* /usr/lib64/libdevmapper.so
ln -s /lib/libdevmapper.so.1.* /usr/lib/libdevmapper.so
ln -s /lib/libdevmapper.so.1.* /lib64/libdevmapper.so

Edit
vi /etc/lvm/lvm.conf
Look for
locking_type =
Change it to
locking_type = 2
Append this line
locking_library = "/lib/liblvm2clusterlock.so"

mkdir /etc/cluster
vi /etc/cluster/cluster.conf

Append the content shown in the image below to cluster.conf and edit it appropriately:
(Sorry, I tried putting these lines in as text, but they didn't appear; I am not too keen on blogging tricks.)
http://blog.chakravaka.com/images/clusterconf.jpg

Edit /etc/hosts and have an entry of each participating node.
cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
10.10.200.10 gnbdserv.mycluster.com gnbdserv localhost.localdomain localhost
10.10.200.11 node1.mycluster.com
10.10.200.12 node2.mycluster.com

cd ../device-mapper.1.02.07
CFLAGS="-m64 -fPIC" ./configure
make
make install

cd ../LVM2.2.01.09
CFLAGS="-m64 -fPIC" ./configure --with-clvmd --with-cluster=shared
make
make install

ln -s /lib/libdevmapper.so.1.02 /lib64/libdevmapper.so
ln -s /lib/libdevmapper.so.1.02 /usr/lib64/libdevmapper.so
ln -s /lib/libdevmapper.so.1.02 /usr/lib/libdevmapper.so
cp /lib/libdevmapper.so.1.02 /lib64
cp /lib/libdevmapper.so.1.02 /usr/lib64
cp /lib/libdevmapper.so.1.02 /usr/lib


Step 2)
gfs node1: 

Get rid of old libdevmapper if any.

wget ftp://sources.redhat.com/pub/cluster/releases/cluster-1.00.00.tar.gz
wget ftp://sources.redhat.com/pub/dm/old/device-mapper.1.02.07.tgz
wget ftp://sources.redhat.com/pub/lvm2/old/LVM2.2.01.09.tgz
gunzip cluster-1.00.00.tar.gz
tar xvf cluster-1.00.00.tar
tar zxvf device-mapper.1.02.07.tgz
tar zxvf LVM2.2.01.09.tgz
cd cluster-1.00.00
CFLAGS="-m64 -fPIC" ./configure --kernel_src=/usr/src/kernels/2.6.11-1.1369_FC4-smp-x86_64/
make install

ln -s /usr/lib/libmagma* /usr/lib64/
ln -s /lib/libdevmapper.so* /usr/lib64/
ln -s /usr/lib/libgulm.* /usr/lib64/
ln -s /usr/lib/libmagma* /lib64/
ln -s /usr/lib/libmagma* /lib/
ln -s /usr/lib/libdlm* /usr/lib64/
ln -s /lib/liblvm2clusterlock* /lib64
ln -s /lib/liblvm2clusterlock* /usr/lib64/
ln -s /lib/liblvm2clusterlock* /usr/lib/
ln -s /lib/libdevmapper.so.1.* /usr/lib64/libdevmapper.so
ln -s /lib/libdevmapper.so.1.* /usr/lib/libdevmapper.so
ln -s /lib/libdevmapper.so.1.* /lib64/libdevmapper.so

Edit
vi /etc/lvm/lvm.conf
Look for
locking_type =
Change it to
locking_type = 2
Append this line
locking_library = "/lib/liblvm2clusterlock.so"

mkdir /etc/cluster

Don't create cluster.conf. It will be copied automatically during cluster start.

Edit /etc/hosts and have an entry of each participating node.
cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
10.10.200.11 node1.mycluster.com node1 localhost.localdomain localhost
10.10.200.10 gnbdserv.mycluster.com
10.10.200.12 node2.mycluster.com

cd ../device-mapper.1.02.07
CFLAGS="-m64 -fPIC" ./configure
make
make install

cd ../LVM2.2.01.09
CFLAGS="-m64 -fPIC" ./configure
make
make install

ln -s /lib/libdevmapper.so.1.02 /lib64/libdevmapper.so
ln -s /lib/libdevmapper.so.1.02 /usr/lib64/libdevmapper.so
ln -s /lib/libdevmapper.so.1.02 /usr/lib/libdevmapper.so
cp /lib/libdevmapper.so.1.02 /lib64
cp /lib/libdevmapper.so.1.02 /usr/lib64
cp /lib/libdevmapper.so.1.02 /usr/lib

Step 3)
gfs node2: 
Same as step 2.

Step 4)

Execute each command on all nodes (proceed to the next command only after the current one has been run on all nodes):

depmod -a
modprobe dm-mod
modprobe gfs
modprobe lock_dlm
modprobe dlm
modprobe cman
modprobe lock_harness
ccsd
cman_tool join
fence_tool join
clvmd

You have started your cluster.
After this we have to create a logical volume on gnbdserv.

gnbdserv> umount /dev/sda4
(Replace /dev/sda4 with your volume)
gnbdserv> pvcreate /dev/sda4
gnbdserv> pvscan
gnbdserv> vgcreate mycluster /dev/sda4
Note: the name mycluster must be the same as the cluster name used in cluster.conf.
gnbdserv> pvdisplay /dev/sda4
gnbdserv> vgdisplay mycluster | grep "Total PE"
The above command gives you the total PE count. Use that number in the next command.
gnbdserv> lvcreate -l <total PE count> -n docrut mycluster
You can give any name instead of docrut.
gnbdserv> vgchange -aly

Export it to each node.
Import it on each node.
Make a GFS filesystem on it.

gnbdserv> modprobe gnbd
gnbdserv> /sbin/gnbd_serv -v
gnbdserv> gnbd_export -v -e export1 -d /dev/mycluster/docrut
gnbdserv> gnbd_export -v -l
gnbdserv> modprobe gfs
gnbdserv> gfs_mkfs -p lock_dlm -t mycluster:export1 -j 3 /dev/mycluster/docrut
The -j (journals) option depends on the number of nodes.
gnbdserv> mkdir /global
gnbdserv> mount -t gfs /dev/mycluster/docrut /global
You need not mount the GFS volume on gnbdserv; this just shows that it can be mounted if needed.

each node> modprobe gnbd
each node> gnbd_import -v -i gnbdserv.mycluster.com
each node> gnbd_import -v -l
each node> modprobe gfs
each node> gfs_mkfs -p lock_dlm -t mycluster:export1 -j 3 /dev/gnbd/export1
It is enough to run gfs_mkfs once (on gnbdserv); it does not need to be repeated on every node.
each node> mkdir /global
each node> mount -t gfs /dev/gnbd/export1 /global

That's it.
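A quick hedged sanity check of the shared mount (hostnames and mount point from the example above; the test file name is made up):

node1> echo "written from node1" > /global/gfs_test.txt
node2> cat /global/gfs_test.txt
The file written on node1 should be readable on node2 straight away.
node2> df -h /global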

Step 5)
Startup scripts: 

You should do the following as your startup scripts.

gnbdserv:
depmod -a
modprobe dm-mod
modprobe gfs
modprobe lock_dlm
modprobe dlm
modprobe cman
modprobe lock_harness
ccsd
cman_tool join
fence_tool join
clvmd
vgchange -aly
modprobe gnbd
/sbin/gnbd_serv -v
gnbd_export -v -l
gnbd_export -v -e export1 -d /dev/mycluster/docrut
modprobe gfs

nodes:
depmod -a
modprobe dm-mod
modprobe gfs
modprobe lock_dlm
modprobe dlm
modprobe cman
modprobe lock_harness
ccsd
cman_tool join
fence_tool join
clvmd
modprobe gnbd
gnbd_import -v -i gnbdserv.mycluster.com
gnbd_import -v -l
modprobe gfs
mount -t gfs /dev/gnbd/export1 /global


Step 6)
Shutdown scripts: 

You should do this for a graceful shutdown.

gnbdserv:
vgchange -aln
killall clvmd
fence_tool leave
killall gnbd_serv
killall gnbd_clusterd
cman_tool leave force
killall ccsd

nodes:
umount /dev/gnbd/export1
killall clvmd
fence_tool leave
killall gnbd_recvd
killall gnbd_monitor
cman_tool leave force
killall ccsd


Author: Praveen Kumar Karagadi Subramanya