Problems configuring GFS. Please help!
My steps:
1. Edit /etc/hosts on all three machines (identical on all three):
127.0.0.1 localhost.localdomain localhost
192.168.18.240 gfs-node01
192.168.20.224 gfs-node02
192.168.0.141 gnbd-server
// The 127.0.0.1 line was already there; I did not remove it.
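These entries can be sanity-checked mechanically. The sketch below uses an invented helper name (`hosts_check` is not a real tool); it looks a name up in /etc/hosts-style data and should return exactly one non-loopback address per cluster machine, since cman resolves node names through an ordinary hostname lookup.

```shell
# hosts_check is a hypothetical helper, not part of the cluster tools:
# print the non-loopback address mapped to a given hostname.
# The heredoc mirrors the /etc/hosts entries shown above.
hosts_check() {
  awk -v name="$1" '$1 != "127.0.0.1" && $2 == name { print $1 }' <<'EOF'
127.0.0.1 localhost.localdomain localhost
192.168.18.240 gfs-node01
192.168.20.224 gfs-node02
192.168.0.141 gnbd-server
EOF
}

hosts_check gfs-node01   # prints 192.168.18.240
```

On a real machine, `getent hosts gfs-node01` performs the same lookup through the system resolver.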
2. Generate the configuration with system-config-cluster:
Generate the configuration on 192.168.18.240 and save it as /etc/cluster/cluster.conf.
Copy it to 192.168.20.224, also saved as /etc/cluster/cluster.conf.
The configuration steps were:
Select DLM as the lock manager.
Add two nodes named gfs-node01 and gfs-node02, each with Quorum Votes set to 1.
Add a fence device. Type: Global Network Block Device, Name: gnbd, Server: gnbd-server.
Add a Failover Domain named gnbd-server and put the nodes created above, gfs-node01 and gfs-node02, into it.
For gfs-node01 and gfs-node02, under "Manage Fencing For This Node", select "Add a Fence Level".
// The server is a physical machine; the two nodes are virtual machines.
The generated configuration file:
<cluster config_version="2" name="alpha_cluster">
<fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="gfs-node01" votes="1">
<fence>
<method name="1"/>
</fence>
</clusternode>
<clusternode name="gfs-node02" votes="1">
<fence>
<method name="1"/>
</fence>
</clusternode>
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices>
<fencedevice agent="fence_gnbd" name="gnbd" server="gnbd-server"/>
</fencedevices>
<rm>
<failoverdomains>
<failoverdomain name="gnbd-server" ordered="0" restricted="0">
<failoverdomainnode name="gfs-node01" priority="1"/>
<failoverdomainnode name="gfs-node02" priority="1"/>
</failoverdomain>
</failoverdomains>
<resources/>
</rm>
</cluster>
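One quick consistency check on a config like this (a sketch; `conf_nodes` is an assumed helper, not a shipped tool): the clusternode names in cluster.conf must match the names registered in /etc/hosts exactly, because that is how each node is identified on the network.

```shell
# conf_nodes (hypothetical helper): extract the clusternode names from a
# cluster.conf so they can be compared against /etc/hosts on every machine.
conf_nodes() {
  sed -n 's/.*<clusternode name="\([^"]*\)".*/\1/p' "$1"
}

# Demo against a stripped-down copy of the config above:
cat > /tmp/cluster.conf.demo <<'EOF'
<clusternodes>
<clusternode name="gfs-node01" votes="1"/>
<clusternode name="gfs-node02" votes="1"/>
</clusternodes>
EOF
conf_nodes /tmp/cluster.conf.demo
```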
3. Load the modules on both node machines:
modprobe gnbd
modprobe gfs
modprobe lock_dlm
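To verify the three modules actually loaded, here is a sketch with an invented helper (`missing_mods` is not a real command): it reads `lsmod` output on stdin and prints any required module that is absent.

```shell
# missing_mods (hypothetical helper): read `lsmod` output on stdin and
# print each required module that is not loaded.
missing_mods() {
  awk -v want="gnbd gfs lock_dlm" '
    NR > 1 { seen[$1] = 1 }
    END {
      n = split(want, w, " ")
      for (i = 1; i <= n; i++) if (!(w[i] in seen)) print w[i]
    }'
}

# On a real node: lsmod | missing_mods
# Demo with canned input where only gnbd is loaded:
printf 'Module Size Used by\ngnbd 12345 1\n' | missing_mods
```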
4. Start the services on both node machines:
service ccsd start
service cman start
service fenced start
service clvmd start
service gfs start
service rgmanager start
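The order of these services matters: ccsd must be up before cman, and fenced before anything that touches the shared storage. A small sketch (`start_stack` is an assumed wrapper, not a standard command) that starts them in dependency order and stops at the first failure:

```shell
# start_stack (hypothetical wrapper): start the cluster daemons in
# dependency order, aborting on the first failure so a later service
# never starts on top of a missing layer. $1 is the launcher command.
start_stack() {
  for svc in ccsd cman fenced clvmd gfs rgmanager; do
    "$1" "$svc" start || { echo "failed: $svc" >&2; return 1; }
  done
}

# On a real node: start_stack service
# Dry run for illustration:
start_stack echo
```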
5. On gnbd-server, run the following:
gnbd_serv -n
gnbd_export -c -d /dev/sda1 -e global_disk
Result:
gnbd_export: created GNBD global_disk serving file /dev/sda1
6. On both nodes, run the following:
gnbd_import -i gnbd-server
Result:
gnbd_import: created directory /dev/gnbd
gnbd_import: created gnbd device global_disk
gnbd_recvd: gnbd_recvd started
fence_tool join
// Running ccsd and cman_tool join reported "already running", because I had already started those services earlier.
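Before moving on to mkfs, it is worth confirming that the imported device really exists as a block device. A sketch (`check_dev` is a made-up helper name):

```shell
# check_dev (hypothetical helper): report whether a path is a block
# device. On a real node, `blockdev --getsize64` on the same path would
# also show whether the export is large enough for the journals.
check_dev() {
  if [ -b "$1" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}

# On a node after gnbd_import:
check_dev /dev/gnbd/global_disk
```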
7. On gfs-node01, I ran:
gfs_mkfs -p lock_dlm -t alpha_cluster:gfs -j 2 /dev/gnbd/global_disk
It reported: gfs_mkfs: Partition too small for number/size of journals
I assumed there was not enough space, so I reduced the journal size: // the minimum is 32, the default is 128
gfs_mkfs -p lock_dlm -t alpha_cluster:gfs -j 2 -J 32 /dev/gnbd/global_disk
The output:
This will destroy any data on /dev/gnbd/global_disk.
It appears to contain a EXT2/3 filesystem.
Are you sure you want to proceed? y
Device: /dev/gnbd/global_disk
Blocksize: 4096
Filesystem Size: 9680
Journals: 2
Resource Groups: 8
Locking Protocol: lock_dlm
Lock Table: alpha_cluster:gfs
Syncing...
All Done
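The arithmetic behind the earlier error is worth spelling out. Assuming the sizes noted above are in MB (128 per journal by default, 32 at minimum), two journals at the default need 256 MB before any data blocks; -J 32 brings that down to 64 MB. The reported Filesystem Size of 9680 blocks at 4096 bytes is only about 38 MB of data space, so the whole export is evidently around 100 MB.

```shell
# Space consumed by the journals alone, in MB
# (128 is the gfs_mkfs default journal size, 32 the minimum).
journals=2
echo $(( journals * 128 ))   # at the default: 256 MB, more than the partition
echo $(( journals * 32 ))    # with -J 32: 64 MB, which fits
```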
That worked.
8. Create a gfstest directory under / on both node machines.
Then run:
mount -t gfs /dev/gnbd/global_disk /gfstest
The problem:
Running cat /proc/cluster/status on all three machines
gives the following results:
gfs-node01:
Protocol version: 5.0.1
Config version: 4
Cluster name: alpha_cluster
Cluster ID: 50356
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 1
Expected_votes: 1
Total_votes: 1
Quorum: 1
Active subsystems: 6
Node name: gfs-node01
Node addresses: 192.168.18.240
gfs-node02:
Protocol version: 5.0.1
Config version: 4
Cluster name: alpha_cluster
Cluster ID: 50356
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 1
Expected_votes: 1
Total_votes: 1
Quorum: 1
Active subsystems: 6
Node name: gfs-node02
Node addresses: 192.168.20.224
gnbd-server:
# cat /proc/cluster/status
Protocol version: 5.0.1
Config version: 0
Cluster name:
Cluster ID: 0
Cluster Member: No
Membership state: Not-in-Cluster
Running cat /proc/cluster/nodes
gives:
gfs-node01:
Node Votes Exp Sts Name
1 1 1 M gfs-node01
gfs-node02:
Node Votes Exp Sts Name
1 1 1 M gfs-node02
gnbd-server:
Node Votes Exp Sts Name
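In a healthy two-node cluster, each member's status would also list its peer, i.e. `Nodes: 2`. A sketch of pulling that field out of the status text (the helper name is invented):

```shell
# node_count (hypothetical helper): extract the "Nodes" field from
# /proc/cluster/status-style text on stdin.
node_count() {
  awk -F': *' '$1 == "Nodes" { print $2 }'
}

# On a real node: node_count < /proc/cluster/status
# Against the output quoted above, each node reports only itself:
printf 'Cluster Member: Yes\nNodes: 1\n' | node_count   # prints 1
```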
From this output it looks as if the three machines are each running on their own. Why?
How can this be? Why does no machine see the other nodes?
Please help!
[ This post was last edited by 04120103 on 2008-5-9 11:24 ]
《Solution》
How did you configure GNBD? As far as I can see, you haven't configured GNBD at all.