
[MySQL Cluster Architecture] DRBD + heartbeat + MySQL master/backup high availability

火星人 @ 2014-03-03


This applies to Red Hat-based systems: build from source or from RPMs; on CentOS and similar systems you can also install via yum.

Installation by building RPM packages:

Prepare two machines as master and backup; first set the hostname on both machines.

IP addresses: 192.168.1.252 and 192.168.1.249
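On RHEL/CentOS of that era, one way to set the hostname persistently is the following (shown for linux252; do the same with linux249 on the other machine):

# hostname linux252
# vi /etc/sysconfig/network
HOSTNAME=linux252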

Kernels of the two machines:

# uname -a

Linux linux252 2.6.9-67.ELsmp #1 SMP Wed Nov 7 13:58:04 EST 2007 i686 i686 i386 GNU/Linux

# uname -a

Linux linux249 2.6.9-67.ELsmp #1 SMP Wed Nov 7 13:58:04 EST 2007 i686 i686 i386 GNU/Linux

# hostname

linux252

# hostname

linux249

# cat /etc/hosts        (the same on both machines)

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
192.168.1.252   linux252
192.168.1.249   linux249

Download the software packages: http://oss.linbit.com/drbd/8.3/drbd-8.3.1.tar.gz

ftp://ftp.eenet.ee/pub/gentoo/distfiles/libnet-1.1.2.1.tar.gz

http://www.ultramonkey.org/download/heartbeat/2.1.3/heartbeat-2.1.3.tar.gz

Installing DRBD:

DRBD can be compiled and installed from source; here I chose to build RPM packages from the source tarball and install those.

tar zxvf drbd-8.3.4.tar.gz

cd drbd-8.3.4

cp drbd.spec.in drbd.spec

make rpm KDIR=/usr/src/kernels/2.6.9-67.EL-smp-i686/

cd dist/RPMS/i386/

rpm -ivh drbd-8.3.4-3.i386.rpm

rpm -ivh drbd-km-2.6.9_67.ELsmp-8.3.4-3.i386.rpm

Install libnet first:

./configure

make; make install

Then install DRBD (when compiling from source instead of using the RPMs):

make all
make install
make install-tools

Check that the module loads into the kernel:

modprobe drbd

lsmod | grep drbd     (if this prints a result, the module loaded successfully)

Create the metadata area DRBD uses to record its state. Run this on each of the two hosts:

# drbdadm create-md r0

--==
Thank you for participating in the global usage survey
==--

The server's response is:

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

Note: the output above indicates the metadata was created successfully.

Notes:

1) "r0" is the resource name defined in drbd.conf.

2) Running "drbdadm create-md r0" may fail with the following error:

Device size would be truncated, which

would corrupt data and result in

'access beyond end of device' errors.

You need to either


* use external meta data (recommended)


* shrink that filesystem first


* zero out the device (destroy the filesystem)

Operation refused.

Command 'drbdmeta 0 v08 /dev/xvdb internal create-md' terminated with exit code 40

drbdadm create-md r0: exited with code 40

Workaround: wipe the start of the device (destroying any existing filesystem signature): dd if=/dev/zero bs=1M count=1 of=/dev/sdXYZ; sync

Tip: if you hit an exit code 50 error, overwrite the start of the partition with dd:

dd if=/dev/zero of=/dev/sda2 bs=1M count=1

Start DRBD on both machines at the same time:

# /etc/init.d/drbd start

Note: after starting drbd on the master, start drbd on the backup right away; otherwise the master's startup will not complete (it waits for its peer).
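If you also want drbd to come up automatically after a reboot, it can be enabled with chkconfig (optional; a sketch for RHEL/CentOS):

# chkconfig --add drbd
# chkconfig drbd on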

Check the DRBD status:

# cat /proc/drbd

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:27


0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:4008024

/proc/drbd shows DRBD's current state. The role field (ro; "st" in older releases) on the resource line shows that both hosts are still in the Secondary role, and the disk state (ds) is Inconsistent on both sides.

This is because DRBD cannot decide on its own which side is the primary, i.e. whose disk contents should be treated as the authoritative copy, so we have to promote one host ourselves.

Field meanings:

cs — connection state. Possible values:
    Unconfigured: the device is waiting to be configured.
    Unconnected: transitional state while the module connects.
    WFConnection: the device is waiting for the other side to be configured.
    WFReportParams: transitional state, waiting for the first packet on a new TCP connection.
    SyncingAll: all blocks of the primary node are being copied to the secondary.
    SyncingQuick: the secondary is updated by copying only the blocks that changed while it was out of the cluster.
    Connected: everything is normal.
    Timeout: transitional state.

st — state (role of the device). Possible values:
    Primary (local/remote)
    Secondary
    Unknown (not a role)

ns — network send (block count)
nr — network receive (block count)
dw — disk write (block count)
dr — disk read (block count)
of — in-flight (outdated) block count
pe — pending (block count)
ua — unacknowledged (block count; ideally 0)

Run the following on the primary server; this command is executed on the primary only:

# drbdsetup /dev/drbd0 primary -o     # promote this node to primary
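On DRBD 8.3 the same promotion can also be done with drbdadm (equivalent effect; shown only as an alternative):

# drbdadm -- --overwrite-data-of-peer primary r0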

Check the status again:

# cat /proc/drbd

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:27


0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----


ns:338940 nr:0 dw:0 dr:346240 al:0 bm:20 lo:1 pe:163 ua:229 ap:0 ep:1 wo:b oos:3674296


[>...................] sync'ed:  8.4% (3674296/4008024)K delay_probe: 21


finish: 0:01:06 speed: 55,620 (55,620) K/sec

On the backup server you can see the data synchronization in progress:

# cat /proc/drbd

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:27


0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r----


ns:0 nr:481216 dw:476096 dr:0 al:0 bm:28 lo:161 pe:6116 ua:160 ap:0 ep:1 wo:b oos:3531928


[=>..................] sync'ed: 12.0% (3531928/4008024)K queue_delay: 0.1 ms


finish: 0:01:06 speed: 52,896 (52,896) want: 204,800 K/sec

Notes:

The roles are now Primary/Secondary. The primary's disk state is UpToDate ("current"), while the backup's is still Inconsistent.

The progress line shows that data is being synchronized, i.e. the primary is streaming its disk contents to the backup; the "[>...................] sync'ed:" line shows how far along it is.

Wait a while; once the synchronization has finished, check the DRBD status again:

When both disk states show UpToDate, the initial sync is complete.
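Once fully synced, /proc/drbd on the primary should look roughly like this (abridged):

# cat /proc/drbd
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----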

On the master DRBD server (192.168.1.252), format the DRBD device and mount it:

# mkfs.ext3 /dev/drbd0

# mkdir /db2

# mount /dev/drbd0 /db2

# df -ha

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   49G  3.3G   43G   8% /
none                                0     0     0    - /proc
none                                0     0     0    - /sys
none                                0     0     0    - /dev/pts
usbfs                               0     0     0    - /proc/bus/usb
/dev/hda1                          99M   13M   81M  14% /boot
none                              252M     0  252M   0% /dev/shm
none                                0     0     0    - /proc/sys/fs/binfmt_misc
sunrpc                              0     0     0    - /var/lib/nfs/rpc_pipefs
/dev/drbd0                         20G  193M   19G   2% /db2
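Since mysqld's data has to live on the replicated volume, point MySQL's datadir under /db2. A minimal sketch (the /db2/mysql path and initialization steps are illustrative; adjust to your MySQL layout):

# mkdir /db2/mysql
# chown -R mysql:mysql /db2/mysql
# vi /etc/my.cnf
[mysqld]
datadir=/db2/mysql
# mysql_install_db --user=mysql --datadir=/db2/mysql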

Installing heartbeat

Compile and install. Add the user and group first:

groupadd haclient
useradd -g haclient hacluster

./ConfigureMe configure --enable-fatal-warnings=no
./ConfigureMe make --enable-fatal-warnings=no

make; make install


# cp /usr/share/doc/heartbeat-2.1.4/haresources /etc/ha.d

# cp /usr/share/doc/heartbeat-2.1.4/ha.cf /etc/ha.d

# cp /usr/share/doc/heartbeat-2.1.4/authkeys /etc/ha.d

# chmod 600 /etc/ha.d/authkeys


# vi /etc/ha.d/authkeys

auth 1

1 crc
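The example above uses crc, which provides integrity checking but no authentication. On a non-trusted network you would normally use sha1 with a shared secret instead; a sketch (the secret string is just a placeholder):

auth 1
1 sha1 ReplaceWithYourSharedSecret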

# cat /etc/ha.d/ha.cf

debugfile /var/log/ha-debug

logfile /var/log/ha-log

logfacility local0

keepalive 2

deadtime 6

warntime 4

initdead 12

auto_failback on

node linux252

node linux249

udpport 694

ucast eth0 192.168.1.249

ping_group group1 192.168.1.252 192.168.1.249

respawn hacluster /usr/lib/heartbeat/ipfail

apiauth ipfail gid=haclient uid=hacluster

hopfudge
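Note that ucast names the peer's address, so ha.cf is not byte-identical on the two nodes; on linux249 the corresponding line would point back at the master (an assumption based on the addressing used above):

ucast eth0 192.168.1.252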

# cat /etc/ha.d/haresources

linux252 drbddisk::r0 Filesystem::/dev/drbd0::/db2::ext3 192.168.1.248 mysqld

linux252 — the name of the current primary node (as given by uname -n).

drbddisk::r0 — tells heartbeat to manage the DRBD resource r0.

Filesystem::/dev/drbd0::/db2::ext3 — tells heartbeat to manage a filesystem resource; in practice this just runs mount/umount, and the values after the "::" separators are the Filesystem parameters: device, mount point and filesystem type.

/dev/drbd0::/db2 — heartbeat mounts this automatically once it starts.

mysqld — the script that starts MySQL. Copy the MySQL startup script to /etc/ha.d/resource.d/mysqld so that heartbeat can run it and bring the MySQL service up (see the sketch just below).

Heartbeat performs the mount and starts MySQL by itself; there is no need to mount the DRBD device or start mysql manually.
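A minimal sketch of wiring up the mysqld resource, assuming the stock MySQL init script is at /etc/init.d/mysqld:

# cp /etc/init.d/mysqld /etc/ha.d/resource.d/mysqld
# chmod 755 /etc/ha.d/resource.d/mysqld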

linux252 drbddisk::r0 Filesystem::/dev/drbd0::/db2::ext3 192.168.1.248 mysqld     (the IP here is the VIP, which floats between master and backup depending on which one is up)

Start heartbeat on both machines, then use "ip add" to see which machine currently holds the floating VIP; that one is the active master. If it goes down, the backup takes over the VIP and does the master's work in its place.

You can take a node down to test how the master/backup takeover behaves.
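A quick failover test, assuming the node names and VIP above:

# /etc/init.d/heartbeat start                      (on both nodes)
# ip addr show eth0 | grep 192.168.1.248           (run on each node; the one holding the VIP is active)
# /etc/init.d/heartbeat stop                       (on the active node, to simulate an outage)
# ip addr show eth0 | grep 192.168.1.248           (the VIP, the /db2 mount and mysqld should move to the other node)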

References:

http://bbs.linuxtone.org/forum-viewthread-tid-7413-highlight-drbd.html

http://www.wenzizone.cn/?p=282

http://phorum.study-area.org/index.php?topic=56862.0;wap2

Configuration files:

DRBD configuration

Master: 192.168.1.252

# cat /etc/drbd.conf

global {
    usage-count yes;
}

common {
    syncer { rate 10M; }
}

resource r0 {
    protocol C;

    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    }

    disk {
        on-io-error detach;
    }

    startup {
        # allow-two-primaries;
        become-primary-on both;
    }

    net {
        allow-two-primaries;
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }

    syncer {
        rate 10M;
        al-extents 257;
    }

    on linux252 {
        device    /dev/drbd0;
        disk      /dev/hdb;
        address   192.168.1.252:7788;
        flexible-meta-disk internal;
    }

    on linux249 {
        device    /dev/drbd0;
        disk      /dev/hdb;
        address   192.168.1.249:7788;
        meta-disk internal;
    }
}
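After editing, the configuration can be sanity-checked with drbdadm, which prints the parsed resource back:

# drbdadm dump r0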


Heartbeat configuration:

# cat /etc/ha.d/ha.cf

debugfile /var/log/ha-debug

logfile /var/log/ha-log

logfacility local0

keepalive 2

deadtime 6

warntime 4

initdead 12

auto_failback on

node linux252

node linux249

udpport 694

ucast eth0 192.168.1.249

ping_group group1 192.168.1.252 192.168.1.249

respawn hacluster /usr/lib/heartbeat/ipfail

apiauth ipfail gid=haclient uid=hacluster

hopfudge

# cat /etc/ha.d/authkeys

auth 1

1 crc

# cat /etc/ha.d/haresources

linux252 drbddisk::r0 Filesystem::/dev/drbd0::/db2::ext3 192.168.1.248 mysqld

Configuration files on the slave (192.168.1.249):

# cat /etc/drbd.conf

global {
    usage-count yes;
}

common {
    syncer { rate 10M; }
}

resource r0 {
    protocol C;

    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    }

    disk {
        on-io-error detach;
    }

    startup {
        # allow-two-primaries;
        become-primary-on both;
    }

    net {
        allow-two-primaries;
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }

    syncer {
        rate 10M;
        al-extents 257;
    }

    on linux252 {
        device    /dev/drbd0;
        disk      /dev/hdb;
        address   192.168.1.252:7788;
        flexible-meta-disk internal;
    }

    on linux249 {
        device    /dev/drbd0;
        disk      /dev/hdb;
        address   192.168.1.249:7788;
        meta-disk internal;
    }
}