
CentOS4.4+GFS+Oracle10g RAC+VMWARE

火星人 @ 2014-03-04


I have spent the last few days setting up a RAC test environment with CentOS 4.4 + GFS 6.1 + Oracle 10g RAC + VMware Server 1.0.1. After a lot of effort, and with the help of articles on chinaunix, the installation is finally complete. I followed Oracle_GFS.pdf, which can be downloaded from the redhat website. I now have the following questions:

1. In the test environment, if one node goes down, the other node can no longer access the shared disk (that is, the GFS filesystem). Why is that? I am using lock_dlm.
2. I have no fence device at hand, so I chose fence_manual when building the cluster. Does the IBM HS21 BladeCenter already provide fence-device functionality? I plan to use the HS21 in the production environment.
《Solution》

1: GFS (the cluster) has a quorum requirement on the number of running nodes; the minimum is two, so once you took one node down the remaining node naturally lost quorum :)
2: The BladeCenter does have fence functionality; if I remember correctly you configure it by connecting to 192.168.70.125

[ Last edited by fuumax on 2006-12-25 16:28 ]
《Solution》

Could you share your installation steps?
《Solution》

CentOS4.4 + RHCS(DLM) + GFS + Oracle10gR2 RAC + VMWare Server 1.0.1 安裝

This write-up draws on many articles from this forum; my thanks to all of their authors!

****************************************************************************
* CentOS4.4 + RHCS(DLM) + GFS + Oracle10gR2 RAC + VMWare Server 1.0.1 安裝 *
****************************************************************************

1. Test environment
        Host: a single PC with a 64-bit AMD CPU and 4 GB of RAM, running CentOS-4.4-x86_64.
        Two virtual machines were created on this host, both running CentOS-4.4-x86_64, with no kernel customisation and updated online to the latest packages.

2. Install VMware Server 1.0.1 for Linux

3. Create the shared disks

        vmware-vdiskmanager -c -s 6Gb -a lsilogic -t 2 "/vmware/share/ohome.vmdk"   |for the shared Oracle Home
        vmware-vdiskmanager -c -s 10Gb -a lsilogic -t 2 "/vmware/share/odata.vmdk"  |for datafiles and indexes
        vmware-vdiskmanager -c -s 3Gb -a lsilogic -t 2 "/vmware/share/oundo1.vmdk"  |for node1 redo logs and undo tablespace
        vmware-vdiskmanager -c -s 3Gb -a lsilogic -t 2 "/vmware/share/oundo2.vmdk"  |for node2 redo logs and undo tablespace
        vmware-vdiskmanager -c -s 512Mb -a lsilogic -t 2 "/vmware/share/oraw.vmdk"  |for the Oracle Cluster Registry and the CRS voting disk

        Both virtual machines attach the same set of shared disks.
        (Note: the .vmx snippet in section 4 below also references an oundo3.vmdk that is not created above; either create it as well, as sketched below, or remove the scsi1:5 entry.)
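        If you keep that third disk, a sketch of the matching creation command (the size is simply copied from oundo1/oundo2 and is only a guess):
        vmware-vdiskmanager -c -s 3Gb -a lsilogic -t 2 "/vmware/share/oundo3.vmdk"  |for a third set of redo logs and undo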
       
4. Install the virtual machines
        1. In the VMware console create a VMware guest OS named gfs-node01: choose custom create -> Red Hat Enterprise Linux 4 64-bit and accept the defaults everywhere else.
           Give it 1 GB of memory (anything over 800 MB avoids the low-memory warning) and a 12 GB disk, and do not choose pre-allocated.

        2. After the guest OS has been created, add a second NIC (network card) to the guest.

        3. Close the VMware console, go to the node1 directory, open gfs-node1.vmx, and append the following at the end of the file:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"

scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.filename = "/vmware/share/ohome.vmdk"
scsi1:1.deviceType = "disk"

scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.filename = "/vmware/share/odata.vmdk"
scsi1:2.deviceType = "disk"

scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.filename = "/vmware/share/oundo1.vmdk"
scsi1:3.deviceType = "disk"

scsi1:4.present = "TRUE"
scsi1:4.mode = "independent-persistent"
scsi1:4.filename = "/vmware/share/oundo2.vmdk"
scsi1:4.deviceType = "disk"

scsi1:5.present = "TRUE"
scsi1:5.mode = "independent-persistent"
scsi1:5.filename = "/vmware/share/oundo3.vmdk"
scsi1:5.deviceType = "disk"

scsi1:6.present = "TRUE"
scsi1:6.mode = "independent-persistent"
scsi1:6.filename = "/vmware/share/oraw.vmdk"
scsi1:6.deviceType = "disk"

disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"

        This block controls how VMware handles the shared disks. Most people know to set disk.locking = "false" but forget the dataCache settings.

        After saving and exiting, reopen vmware-console and you will see all of these disks in the guest OS configuration. (Add the same scsi1 block to the second guest's .vmx file as well.)


5. Packages to install and the installation order

        You can install them with yum:
        1. Update CentOS 4.4
                yum update
        2. Install csgfs
                yum install yumex
                cd /etc/yum.repos.d
                wget http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo
                yumex

        Or install manually with rpm:
        Package download location: http://mirror.centos.org/centos/4/csgfs/x86_64/RPMS/

        1. Install the required packages on every node; see the GFS 6.1 user manual for the complete package list:

rgmanager             — Manages cluster services and resources
system-config-cluster — Contains the Cluster Configuration Tool, used to graphically configure the cluster and to display the current status of the nodes, resources, fencing agents, and cluster services
ccsd                  — Contains the cluster configuration services daemon (ccsd) and associated files
magma                 — Contains an interface library for cluster lock management
magma-plugins         — Contains plugins for the magma library
cman                  — Contains the Cluster Manager (CMAN), which is used for managing cluster membership, messaging, and notification
cman-kernel           — Contains the required CMAN kernel modules
dlm                   — Contains the distributed lock management (DLM) library
dlm-kernel            — Contains the required DLM kernel modules
fence                 — The cluster I/O fencing system, which allows cluster nodes to connect to a variety of network power switches, fibre channel switches, and integrated power management interfaces
iddev                 — Contains libraries used to identify the file system (or volume manager) with which a device is formatted

Optionally, you can also install Red Hat GFS on top of Red Hat Cluster Suite. Red Hat GFS consists of the following RPMs:
GFS                   — The Red Hat GFS module
GFS-kernel            — The Red Hat GFS kernel module
lvm2-cluster          — Cluster extensions for the logical volume manager
GFS-kernheaders       — GFS kernel header files


        2. Install the software in the following order
The installation script, install.sh:
#!/bin/bash

rpm -ivh kernel-smp-2.6.9-42.EL.x86_64.rpm
rpm -ivh kernel-smp-devel-2.6.9-42.EL.x86_64.rpm

rpm -ivh perl-Net-Telnet-3.03-3.noarch.rpm
rpm -ivh magma-1.0.6-0.x86_64.rpm

rpm -ivh magma-devel-1.0.6-0.x86_64.rpm

rpm -ivh ccs-1.0.7-0.x86_64.rpm
rpm -ivh ccs-devel-1.0.7-0.x86_64.rpm

rpm -ivh cman-kernel-2.6.9-45.4.centos4.x86_64.rpm
rpm -ivh cman-kernheaders-2.6.9-45.4.centos4.x86_64.rpm
rpm -ivh cman-1.0.11-0.x86_64.rpm
rpm -ivh cman-devel-1.0.11-0.x86_64.rpm

rpm -ivh dlm-kernel-2.6.9-42.12.centos4.x86_64.rpm
rpm -ivh dlm-kernheaders-2.6.9-42.12.centos4.x86_64.rpm
rpm -ivh dlm-1.0.1-1.x86_64.rpm
rpm -ivh dlm-devel-1.0.1-1.x86_64.rpm


rpm -ivh fence-1.32.25-1.x86_64.rpm

rpm -ivh GFS-6.1.6-1.x86_64.rpm
rpm -ivh GFS-kernel-2.6.9-58.2.centos4.x86_64.rpm
rpm -ivh GFS-kernheaders-2.6.9-58.2.centos4.x86_64.rpm

rpm -ivh iddev-2.0.0-3.x86_64.rpm
rpm -ivh iddev-devel-2.0.0-3.x86_64.rpm

rpm -ivh magma-plugins-1.0.9-0.x86_64.rpm

rpm -ivh rgmanager-1.9.53-0.x86_64.rpm

rpm -ivh system-config-cluster-1.0.25-1.0.noarch.rpm

rpm -ivh ipvsadm-1.24-6.x86_64.rpm

rpm -ivh piranha-0.8.2-1.x86_64.rpm --nodeps


Note: some of these packages depend on each other; if rpm complains, install the offending package with the --nodeps switch as shown above.
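After the script finishes, it is worth confirming on each node that the cluster packages actually installed (package names as used above):
# rpm -q ccs cman cman-kernel dlm dlm-kernel fence GFS GFS-kernel magma magma-plugins rgmanager system-config-cluster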


        3. Edit the /etc/hosts file on every node (identical on each node), for example:
        # cat hosts
        # Do not remove the following line, or various programs
        # that require network functionality will fail.
        127.0.0.1        localhost.localdomain localhost

        192.168.154.211 gfs-node1
        192.168.154.212 gfs-node2

        192.168.10.1    node1-prv
        192.168.10.2    node2-prv

        192.168.154.201 node1-vip
        192.168.154.202 node2-vip

        Note: the hostname, the cluster node name, and the public node name used in the ocs (Oracle Clusterware) configuration should ideally all be the same.
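        A quick sanity check on each node that every name resolves and that the local hostname matches its own entry:
        # for h in gfs-node1 gfs-node2 node1-prv node2-prv node1-vip node2-vip; do getent hosts $h; done
        # hostname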


6. Run system-config-cluster to do the configuration

Add the two nodes and set each node's vote weight to 1 (each node contributes one vote to the quorum).

The node names are:
gfs-node1
gfs-node2

Then edit the cluster.conf file so that it looks like this:

# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="alpha_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="gfs-node1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="F-Man" nodename="gfs-node1" ipaddr="192.168.10.1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="gfs-node2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="F-Man" nodename="gfs-node2" ipaddr="192.168.10.2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
               
        <cman/>

        <fencedevices>
                <fencedevice agent="fence_manual" name="F-Man"/>
        </fencedevices>
        
        <rm>
                <failoverdomains>
                        <failoverdomain name="web_failover" ordered="1" restricted="0">
                                <failoverdomainnode name="gfs-node1" priority="1"/>
                                <failoverdomainnode name="gfs-node2" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
        </rm>
</cluster>

[Note] Use fence_bladecenter. This requires that telnet be enabled on your BladeCenter management module (which may require a firmware update).
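        If you do switch to fence_bladecenter, the agent can be tested by hand before the cluster relies on it. A rough sketch only: the management-module address is the one mentioned earlier in this thread, and the login, password and blade number are placeholders you must replace:
        # fence_bladecenter -a 192.168.70.125 -l USERID -p PASSW0RD -n 1 -o status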

Use the scp command to copy this configuration file to node2.
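For example (assuming root ssh access between the nodes and that /etc/cluster already exists on node2):
# scp /etc/cluster/cluster.conf gfs-node2:/etc/cluster/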

7. Start the dlm, ccsd, fence and related services on nodes 01/02
        After the installation and configuration steps above, some of these services may already be running.

        1. Load the dlm module on both nodes

        # modprobe lock_dlm
        # modprobe lock_dlm

        2. Start the ccsd service
        # ccsd
        # ccsd

        3. Start the Cluster Manager (cman)
        root@gfs-node1 # /sbin/cman_tool join  
        root@gfs-node2 # /sbin/cman_tool join  

        4. Test the ccsd service
        (Note: wait until cman has finished starting before running the tests below.)

        # ccs_test connect
        # ccs_test connect

        # ccs_test connect should return the following on each node:
        node 1:
        # ccs_test connect
        Connect successful.
        Connection descriptor = 0
        node 2:
        # ccs_test connect
        Connect successful.
        Connection descriptor = 30

        5. Check the node status
        cat /proc/cluster/nodes should return:
        # cat /proc/cluster/nodes
        Node  Votes Exp Sts  Name
          1    1    3   M   gfs-node1
          2    1    3   M   gfs-node2

        #
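        Rather than starting everything by hand after each reboot, the RHCS init scripts can be enabled so the stack comes up automatically. A sketch, assuming the standard script names shipped with Red Hat Cluster Suite 4 (check that they exist under /etc/init.d on your install):
        # chkconfig ccsd on
        # chkconfig cman on
        # chkconfig fenced on
        # chkconfig clvmd on
        # chkconfig gfs on
        # chkconfig rgmanager on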

8. Join the fence domain:
# /sbin/fence_tool join
# /sbin/fence_tool join
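You can confirm that both nodes have joined the default fence domain by looking at the cluster services list:
# cat /proc/cluster/services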


9. Check the cluster status
(Note: the sample output below shows Nodes: 3 and Expected_votes: 3, apparently carried over from the three-node example in the reference document; with the two-node setup described here you would expect Nodes: 2 and matching vote counts.)
Node 1:
# cat /proc/cluster/status
Protocol version: 5.0.1
Config version: 1
Cluster name: alpha_cluster
Cluster ID: 50356
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 3
Expected_votes: 3
Total_votes: 3
Quorum: 2   
Active subsystems: 1
Node name: gfs-node1
Node ID: 1
Node addresses: 192.168.10.1

Node 2
# cat /proc/cluster/status
Protocol version: 5.0.1
Config version: 1
Cluster name: alpha_cluster
Cluster ID: 50356
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 3
Expected_votes: 3
Total_votes: 3
Quorum: 2   
Active subsystems: 1
Node name: gfs-node2
Node ID: 2
Node addresses: 192.168.10.2

10. Prepare the shared disks on node 1
        Use dmesg | grep scsi to check the SCSI devices, for example:
    # dmesg | grep scsi
        scsi0 : ioc0: LSI53C1030, FwRev=00000000h, Ports=1, MaxQ=128, IRQ=169
        Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
        # pvcreate /dev/sdb
                Physical volume "/dev/sdb" successfully created
        # pvcreate /dev/sdc
                Physical volume "/dev/sdc" successfully created
        # pvcreate /dev/sdd
                Physical volume "/dev/sdd" successfully created
        # pvcreate /dev/sde
                Physical volume "/dev/sde" successfully created
       
        # system-config-lvm
                change the physical extent size to 128k
                sdb -> volume group common  -> logical volume ohome
                sdc -> volume group oradata -> logical volume datafiles
                sdd -> volume group redo1   -> logical volume log1
                sde -> volume group redo2   -> logical volume log2
        (A command-line equivalent is sketched after the note below.)

[Note] You can also partition the disks with fdisk before running pvcreate.
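If you prefer to skip the GUI, roughly the same layout can be built with the LVM command-line tools. This is only a sketch: the 128k extent size mirrors the GUI step above, and the logical-volume sizes are illustrative; adjust them to the free space that vgdisplay reports for each volume group.
        # vgcreate -s 128k common  /dev/sdb
        # vgcreate -s 128k oradata /dev/sdc
        # vgcreate -s 128k redo1   /dev/sdd
        # vgcreate -s 128k redo2   /dev/sde
        # lvcreate -n ohome     -L 5G common
        # lvcreate -n datafiles -L 9G oradata
        # lvcreate -n log1      -L 2G redo1
        # lvcreate -n log2      -L 2G redo2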

11. Create the GFS filesystems
        # mkfs.gfs -J 32 -j 4 -p lock_dlm -t alpha_cluster:log1 /dev/redo1/log1
        # mkfs.gfs -J 32 -j 4 -p lock_dlm -t alpha_cluster:log2 /dev/redo2/log2
        # mkfs.gfs -J 32 -j 4 -p lock_dlm -t alpha_cluster:ohome /dev/common/ohome
        # mkfs.gfs -J 32 -j 4 -p lock_dlm -t alpha_cluster:datafiles /dev/oradata/datafiles

        To verify:
        dmesg | grep scsi
        lvscan

        Edit /etc/fstab and append the following at the end of the file:
        /dev/common/ohome       /dbms/ohome     gfs _netdev 0 0
        /dev/oradata/datafiles  /dbms/oradata   gfs _netdev 0 0
        /dev/redo1/log1         /dbms/log1      gfs _netdev 0 0
        /dev/redo2/log2         /dbms/log2      gfs _netdev 0 0

        The _netdev option is also useful, as it ensures these filesystems are unmounted before the cluster services shut down.
        Edit fstab on both nodes, and create /dbms and its subdirectories by hand first.
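        For example, to create the mount points and mount everything from fstab in one go (run on both nodes, with the cluster services from step 7 already running):
        # mkdir -p /dbms/ohome /dbms/oradata /dbms/log1 /dbms/log2
        # mount -a -t gfs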

12. Create the raw partitions
        The certified configuration of Oracle 10g on GFS requires that the two clusterware files (the OCR and the CRS voting disk) be located on shared raw partitions
        visible to all RAC nodes in the cluster.
        # fdisk /dev/sdg
        Create two 256 MB partitions to be used as raw devices.

        If the other nodes were already up and running while you created these partitions, they must re-read the partition table from disk:
        # blockdev --rereadpt /dev/sdg

        Make sure the rawdevices service is enabled on every RAC node for the run levels that will be used; this enables it for run levels 3 and 5:
        # chkconfig --level 35 rawdevices on

        The device-to-raw mapping is configured in /etc/sysconfig/rawdevices:
        # raw device bindings
        # format: <rawdev> <major> <minor>
        #         <rawdev> <blockdev>
        # example: /dev/raw/raw1 /dev/sda1
        #          /dev/raw/raw2 8 5
        /dev/raw/raw1 /dev/sdg1
        /dev/raw/raw2 /dev/sdg2

        These raw device files must always be owned by the oracle user that installs the software. A 10-second delay is needed to give the rawdevices service
        a chance to populate the /dev/raw directory, so add the following lines to /etc/rc.local (which is symbolically linked to /etc/rc?.d/S99local):
        echo "Sleep a bit first and then set the permissions on raw"
        sleep 10
        chown oracle:dba /dev/raw/raw1
        chown oracle:dba /dev/raw/raw2
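        After editing the mapping you can restart the service and check the bindings without a reboot:
        # service rawdevices restart
        # raw -qa
        # ls -l /dev/raw/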

13. Edit /etc/sysctl.conf

kernel.shmmax = 4047483648
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 65536
#
# These settings are for the Oracle RAC core GCS (Global Cache Service) traffic
#
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 1048576
net.core.wmem_max = 1048576
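Apply the new kernel settings on both nodes without rebooting:
# sysctl -p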

14. Create the oracle user
# groupadd oinstall
# groupadd dba
# useradd oracle -g oinstall -G dba

Configure the /etc/sudoers file (edit it with visudo) so that the Oracle admin users can safely execute root commands:
# User alias specification
User_Alias SYSAD=oracle, oinstall
User_Alias USERADM=oracle, oinstall
# User privilege specification
SYSAD ALL=(ALL) ALL
USERADM ALL=(root) NOPASSWD:/usr/local/etc/yanis.client
root ALL=(ALL) ALL

Do all of the above on every node.
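Oracle RAC expects the oracle account to have identical UID and GID values on every node, so verify this after creating the user on each node:
# id oracle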

15. As the oracle user, create a clean passwordless ssh environment
        1. On every node run ssh-keygen -t dsa and press Enter at each prompt until it finishes.
        2. On node1, collect all of the ~/.ssh/id_dsa.pub files into a single ~/.ssh/authorized_keys file and distribute it to the other node:
                ssh gfs-node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
                ssh gfs-node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
                scp ~/.ssh/authorized_keys gfs-node2:~/.ssh

        3. Run each of the following commands once (to accept the ssh host keys):
                 $ ssh gfs-node1 date
                 $ ssh node1-prv date
                 $ ssh gfs-node2 date
                 $ ssh node2-prv date

                Do the same on node2.


After that comes the Oracle 10g installation itself; be sure to choose the cluster installation option.
《Solution》

Reply to post #2 by fuumax

Is it true that GFS requires at least two servers? If one machine fails in production, does that mean the other one can no longer work either? What would be the point of a two-node cluster then?
《Solution》

Failover should be handled automatically by the heartbeat mechanism.
《Solution》

RAC does not involve a failover switch at all!
RAC is designed for high availability.
《Solution》

I am about to do this for real, and the OP's experience is very useful to me. Thanks!
《Solution》

What is the fence actually for? Does it have to be configured in cluster.conf?
《Solution》

:mrgreen: :mrgreen:

Source: http://coctec.com/docs/service/show-post-5844.html