
Thread: Oracle 11gR2 RAC on VMware with Grid Infrastructure & SCAN (Part 1)


    Warning: before proceeding, make sure the host machine has at least 6 GB of RAM,
    otherwise the installation will not work.


    Virtual Machine 1 configuration

    Guest OS - RHEL 5.4
    Name - rac1
    Public IP (eth0) - 192.168.1.100
    Interconnect IP (eth1) - 192.168.2.100
    Memory - 2GB
    Disk space - 30GB
    Location - C:\11gRAC\rac1\


    Virtual Machine 2 configuration

    Guest OS - RHEL 5.4
    Name - rac2
    Public IP (eth0) - 192.168.1.200
    Interconnect IP (eth1) - 192.168.2.200
    Memory - 2GB
    Disk space - 30GB
    Location - C:\11gRAC\rac2\


    Shared Storage configuration


    Voting Disk + OCR

    "C:\11gRAC\shared\Disk1" size 2GB
    "C:\11gRAC\shared\Disk2" size 2GB
    "C:\11gRAC\shared\Disk3" size 2GB

    Database Storage

    "C:\11gRAC\shared\Disk4" size 12GB
    "C:\11gRAC\shared\Disk5" size 12GB

    Flash Recovery Area

    "C:\11gRAC\shared\Disk6" size 12GB
    "C:\11gRAC\shared\Disk7" size 12GB
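    If you are using VMware Workstation, the seven shared disks above can be pre-created on the host with vmware-vdiskmanager. This is a sketch, not part of the original walkthrough: it assumes the tool is on your PATH, that you run it from C:\11gRAC\shared\, and that preallocated disks (-t 2) are what you want for shared RAC storage.

```shell
# Sketch: pre-create the shared disks with vmware-vdiskmanager
# (ships with VMware Workstation; run from C:\11gRAC\shared\).
# -c create, -s size, -a adapter type, -t 2 = preallocated virtual disk.
for d in 1 2 3; do
  vmware-vdiskmanager -c -s 2GB -a lsilogic -t 2 "Disk${d}.vmdk"
done
for d in 4 5 6 7; do
  vmware-vdiskmanager -c -s 12GB -a lsilogic -t 2 "Disk${d}.vmdk"
done
```

    After creating them, attach each disk to both virtual machines and enable disk sharing for those disks in each VM's settings.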



    Create two virtual machines in VMware.

    Sample configuration for the first machine.


    Screen name -> Configuration

    Select appropriate configuration -> Custom
    Virtual machine hardware compatibility -> 6.5-7.x
    Guest operating system installation -> I will install the operating system later
    Select a guest operating system -> Linux -> Red Hat Enterprise Linux 5
    Name the virtual machine -> name: rac1, location: C:\11gRAC\rac1\
    Processor configuration -> 1 processor, 1 core per processor
    Memory -> 2048 MB
    Network type -> Use bridged networking
    I/O controller types -> defaults
    Select a disk -> Create a new virtual disk
    Disk type -> SCSI
    Maximum disk size -> 30GB
    Disk file -> C:\11gRAC\rac1\rac1.vmdk
    Click on Customize Hardware -> remove the floppy drive
                                   remove the sound card
    CD/DVD -> use an ISO image if you have one, otherwise the physical drive

    Add a second network card -> click Add -> Network Adapter -> Bridged -> Finish

    Click (Close), then click (Finish).



    Linux installation steps

    Click Power On.


    Boot screen - hit Enter
    Test CD - Skip
    Welcome - click (Next)
    Language - Next
    Keyboard - Next
    Installation number - Skip entering installation number
    Warning "Would you like to initialize this drive, erasing ALL DATA?" - Yes
    Disk partitioning - Create custom layout
    Partitions
    Partitions

    Code:
    /boot		ext3		100 MB
    swap				4096 MB
    /		ext3		25000 MB

    click(next)


    Grub configuration click(next)

    network configuration

    eth0

    untick enable ipv6 configuration

    in IPv4 select manual configuration

    192.168.1.100 255.255.255.0
    click(ok)


    eth1

    untick enable ipv6 configuration

    in IPv4 select manual configuration

    192.168.2.100 255.255.255.0
    click(ok)

    Hostname - rac1.example.com

    Gateway 192.168.1.1


    Map - choose your country and city
    untick system clock uses UTC


    password for root

    Package selection - customize now

    Packages to install

    Desktop Environment
    GNOME

    Applications
    Editors
    Graphical Internet
    Text-based Internet

    Development
    Development Libraries
    Development Tools
    Legacy Software Development

    Servers
    Server Configuration Tools

    Base System
    Administration Tools
    Base
    Java
    Legacy Software Support
    System Tools
    X Window System


    About to install - click(next)


    Post installation

    welcome
    - next
    firewall - disabled
    selinux - disabled
    kdump - next
    date and time - next
    create user - next
    sound card - next
    additional CDs - next
    reboot system - ok



    RPMs to be installed


    binutils-2.17.50.0.6
    compat-libstdc++33*
    elfutils-libelf*
    elfutils-libelf-devel*
    elfutils-libelf-devel-static*
    gcc-4*
    gcc-c++*
    glibc-2*
    glibc-common*
    glibc-devel*
    glibc-headers*
    kernel-headers-2*
    ksh*
    libaio*
    libaio-devel*
    libgcc*
    libgomp*
    libstdc++*
    libstdc++-devel*
    make*
    pdksh*
    sysstat*
    unixODBC*
    unixODBC-devel
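    The list above can be installed in one pass with yum. A sketch: it uses the package base names so yum resolves the versions, and it assumes a working yum repository on both nodes (for example, one built from the RHEL 5.4 install media).

```shell
# Install all prerequisite RPMs on both nodes (run as root).
yum install -y binutils compat-libstdc++-33 elfutils-libelf \
    elfutils-libelf-devel elfutils-libelf-devel-static gcc gcc-c++ \
    glibc glibc-common glibc-devel glibc-headers kernel-headers ksh \
    libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel \
    make pdksh sysstat unixODBC unixODBC-devel
```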


    Configure DNS on NODE 1


    yum install -y *bind* caching-nameserver



    [root@rac1 ~]# ifconfig eth0

    eth0
    Code:
    Link encap:Ethernet  HWaddr 00:0C:29:C7:15:90  
              inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fec7:1590/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:1787 errors:0 dropped:0 overruns:0 frame:0
              TX packets:63 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:142477 (139.1 KiB)  TX bytes:8809 (8.6 KiB)
              Interrupt:67 Base address:0x2424

    [root@rac1 ~]# cd /var/named/chroot/etc/

    [root@rac1 etc]# cp named.caching-nameserver.conf named.conf

    Sample named.conf file

    [root@rac1 etc]# vi named.conf
    Code:
    //
    // named.caching-nameserver.conf
    //
    // Provided by Red Hat caching-nameserver package to configure the
    // ISC BIND named(8) DNS server as a caching only nameserver 
    // (as a localhost DNS resolver only). 
    //
    // See /usr/share/doc/bind*/sample/ for example named configuration files.
    //
    // DO NOT EDIT THIS FILE - use system-config-bind or an editor
    // to create named.conf - edits to this file will be lost on 
    // caching-nameserver package upgrade.
    //
    options {
            listen-on port 53 { 192.168.1.100; };
    #       listen-on-v6 port 53 { ::1; };
            directory       "/var/named";
            dump-file       "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
    
            // Those options should be used carefully because they disable port
            // randomization
            // query-source    port 53;
            // query-source-v6 port 53;
    
            allow-query     { any; };
            allow-query-cache { any; };
    };
    logging {
            channel default_debug {
                    file "data/named.run";
                    severity dynamic;
            };
    };
    view localhost_resolver {
            match-clients      { any; };
            match-destinations { 192.168.1.100; };
            recursion yes;
            include "/etc/named.rfc1912.zones";
    };



    [root@rac1 etc]# vi named.rfc1912.zones


    Code:
    // named.rfc1912.zones:
    //
    // Provided by Red Hat caching-nameserver package 
    //
    // ISC BIND named zone configuration for zones recommended by
    // RFC 1912 section 4.1 : localhost TLDs and address zones
    // 
    // See /usr/share/doc/bind*/sample/ for example named configuration files.
    //
    zone "." IN {
            type hint;
            file "named.ca";
    };
    
    zone "example.com" IN {
            type master;
            file "forward.zone";
            allow-update { none; };
    };
    
    zone "localhost" IN {
            type master;
            file "localhost.zone";
            allow-update { none; };
    };
    
    zone "1.168.192.in-addr.arpa" IN {
            type master;
            file "reverse.zone";
            allow-update { none; };
    };
    
    zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
            type master;
            file "named.ip6.local";
            allow-update { none; };
    };
    
    zone "255.in-addr.arpa" IN {
            type master;
            file "named.broadcast";
            allow-update { none; };
    };
    
    zone "0.in-addr.arpa" IN {
            type master;
            file "named.zero";
            allow-update { none; };
    };
    [root@rac1 etc]# chgrp named named.conf
    [root@rac1 etc]# cd /var/named/chroot/var/named/

    [root@rac1 named]# cp localdomain.zone forward.zone
    [root@rac1 named]# cp named.local reverse.zone
    [root@rac1 named]# vi forward.zone

    Code:
    $TTL    86400
    @               IN SOA  rac1.example.com. root.example.com.  (
                                            42              ; serial (d. adams)
                                            3H              ; refresh
                                            15M             ; retry
                                            1W              ; expiry
                                            1D )            ; minimum
                    IN NS           rac1.example.com. 
    rac1            IN A            192.168.1.100
    [root@rac1 named]# vi reverse.zone

    Code:
    $TTL    86400
    @       IN      SOA     rac1.example.com. root.rac1.example.com.  (
                                          1997022700 ; Serial
                                          28800      ; Refresh
                                          14400      ; Retry
                                          3600000    ; Expire
                                          86400 )    ; Minimum
            IN      NS      rac1.example.com.
    100     IN      PTR     rac1.example.com.
    [root@rac1 named]# chgrp named forward.zone
    [root@rac1 named]# chgrp named reverse.zone

    [root@rac1 named]# cat /etc/hosts

    Code:
    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1                localhost.localdomain localhost
    192.168.1.100           rac1.example.com        rac1
    Add these entries on all the nodes.

    [root@rac1 named]# cat /etc/resolv.conf

    Code:
    search example.com
    nameserver 192.168.1.100
    On rac1
    [root@rac1 named]# cat /etc/sysconfig/network

    Code:
    NETWORKING=yes
    NETWORKING_IPV6=no
    HOSTNAME=rac1.example.com
    On rac2

    [root@rac2 named]# cat /etc/sysconfig/network
    Code:
    NETWORKING=yes
    NETWORKING_IPV6=no
    HOSTNAME=rac2.example.com

    [root@rac1 named]# service named restart
    Code:
    Stopping named:                                            [  OK  ]
    Starting named:                                            [  OK  ]

    Execute the following from both nodes

    [root@rac1 named]# dig rac1.example.com

    Code:
    ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5 <<>> rac1.example.com
    ;; global options:  printcmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2650
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0
    
    ;; QUESTION SECTION:
    ;rac1.example.com.              IN      A
    
    ;; ANSWER SECTION:
    rac1.example.com.       86400   IN      A       192.168.1.100
    
    ;; AUTHORITY SECTION:
    example.com.            86400   IN      NS      rac1.example.com.
    
    ;; Query time: 4 msec
    ;; SERVER: 192.168.1.100#53(192.168.1.100)
    ;; WHEN: Tue Aug 28 22:56:32 2012
    ;; MSG SIZE  rcvd: 64



    [root@rac2 ~]# dig -x 192.168.1.100
    Code:
    ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5 <<>> -x 192.168.1.100
    ;; global options:  printcmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64577
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
    
    ;; QUESTION SECTION:
    ;100.1.168.192.in-addr.arpa.    IN      PTR
    
    ;; ANSWER SECTION:
    100.1.168.192.in-addr.arpa. 86400 IN    PTR     rac1.example.com.
    
    ;; AUTHORITY SECTION:
    1.168.192.in-addr.arpa. 86400   IN      NS      rac1.example.com.
    
    ;; ADDITIONAL SECTION:
    rac1.example.com.       86400   IN      A       192.168.1.100
    
    ;; Query time: 3 msec
    ;; SERVER: 192.168.1.100#53(192.168.1.100)
    ;; WHEN: Tue Aug 28 23:04:13 2012
    ;; MSG SIZE  rcvd: 104

    [root@rac2 ~]# nslookup rac1.example.com
    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    Name:   rac1.example.com
    Address: 192.168.1.100
    [root@rac2 ~]# nslookup 192.168.1.100
    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    100.1.168.192.in-addr.arpa      name = rac1.example.com.

    Once your basic DNS server is working, do the following.

    Append the following to /var/named/chroot/var/named/forward.zone



    Code:
    ; Oracle RAC Nodes
    rac1	                IN A        192.168.1.100
    rac2	                IN A        192.168.1.200
    rac1-priv               IN A        192.168.2.100
    rac2-priv	        IN A        192.168.2.200
    rac1-vip                IN A        192.168.1.251
    rac2-vip                IN A        192.168.1.252
    
    
    ; Single Client Access Name (SCAN) virtual IP
    rac-cluster-scan    IN A        192.168.1.150
    rac-cluster-scan    IN A        192.168.1.151
    rac-cluster-scan    IN A        192.168.1.152

    Append the following to /var/named/chroot/var/named/reverse.zone

    Code:
    ; Oracle RAC Nodes
    100                     IN PTR      rac1.example.com.
    200                     IN PTR      rac2.example.com.
    251                     IN PTR      rac1-vip.example.com.
    252                     IN PTR      rac2-vip.example.com.
    
    
    ; Single Client Access Name (SCAN) virtual IP
    150                     IN PTR      rac-cluster-scan.example.com.
    151                     IN PTR      rac-cluster-scan.example.com.
    152                     IN PTR      rac-cluster-scan.example.com.


    [root@rac1 named]# service named restart
    Code:
    Stopping named:                                            [  OK  ]
    Starting named:                                            [  OK  ]
    [root@rac1 named]# chkconfig named on
    [root@rac1 named]# chkconfig named --list
    Code:
    named           0:off   1:off   2:on    3:on    4:on    5:on    6:off
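    The configuration and zone files can also be validated with the checkers that ship in the bind package. A sketch; the paths match the chroot layout used above:

```shell
# Validate named.conf inside the chroot, then each zone file.
named-checkconf -t /var/named/chroot /etc/named.conf
named-checkzone example.com /var/named/chroot/var/named/forward.zone
named-checkzone 1.168.192.in-addr.arpa /var/named/chroot/var/named/reverse.zone
```

    Each named-checkzone run should end with "OK"; fix any reported syntax errors before relying on the lookups below.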



    Perform the following tests on both nodes

    nslookup rac1.example.com

    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    Name:   rac1.example.com
    Address: 192.168.1.100
    nslookup rac2.example.com

    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    Name:   rac2.example.com
    Address: 192.168.1.200

    nslookup rac1

    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    Name:   rac1.example.com
    Address: 192.168.1.100
    nslookup rac2

    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    Name:   rac2.example.com
    Address: 192.168.1.200

    nslookup 192.168.1.100

    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    100.1.168.192.in-addr.arpa      name = rac1.example.com.
    nslookup 192.168.1.200

    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    200.1.168.192.in-addr.arpa      name = rac2.example.com.
    nslookup rac-cluster-scan

    Code:
    Server:         192.168.1.100
    Address:        192.168.1.100#53
    
    Name:   rac-cluster-scan.example.com
    Address: 192.168.1.152
    Name:   rac-cluster-scan.example.com
    Address: 192.168.1.150
    Name:   rac-cluster-scan.example.com
    Address: 192.168.1.151
    Append the following to /etc/hosts on all the nodes.


    Code:
    # Public Network - (eth0)
    192.168.1.100    rac1.example.com           rac1
    192.168.1.200    rac2.example.com           rac2
    
    # Private Interconnect - (eth1)
    192.168.2.100    rac1-priv.example.com      rac1-priv
    192.168.2.200    rac2-priv.example.com      rac2-priv
    
    # Public Virtual IP (VIP) addresses - (eth0:1)
    192.168.1.251    rac1-vip.example.com       rac1-vip
    192.168.1.252    rac2-vip.example.com       rac2-vip

    Sample /etc/hosts file on both nodes.

    Code:
    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1                localhost.localdomain localhost
    
    
    
    # Public Network - (eth0)
    192.168.1.100    rac1.example.com           rac1
    192.168.1.200    rac2.example.com           rac2
    
    # Private Interconnect - (eth1)
    192.168.2.100    rac1-priv.example.com      rac1-priv
    192.168.2.200    rac2-priv.example.com      rac2-priv
    
    # Public Virtual IP (VIP) addresses - (eth0:1)
    192.168.1.251    rac1-vip.example.com       rac1-vip
    192.168.1.252    rac2-vip.example.com       rac2-vip

    Execute the following commands from both nodes and verify connectivity.

    ping -c 3 rac1.example.com
    ping -c 3 rac2.example.com
    ping -c 3 rac1-priv.example.com
    ping -c 3 rac2-priv.example.com

    ping -c 3 rac1
    ping -c 3 rac2
    ping -c 3 rac1-priv
    ping -c 3 rac2-priv
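    The same connectivity checks can be scripted. A sketch; note that the VIP and SCAN addresses are deliberately left out, since they are not pingable until Grid Infrastructure brings them online.

```shell
# Ping every public and private name three times; report each result.
for h in rac1 rac2 rac1-priv rac2-priv \
         rac1.example.com rac2.example.com \
         rac1-priv.example.com rac2-priv.example.com; do
  if ping -c 3 "$h" > /dev/null 2>&1; then
    echo "$h: reachable"
  else
    echo "$h: UNREACHABLE"
  fi
done
```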


    Deconfigure NTP services on both nodes

    /sbin/service ntpd stop
    chkconfig ntpd off
    mv /etc/ntp.conf /etc/ntp.conf.original
    rm /var/run/ntpd.pid
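    With NTP deconfigured, Oracle's Cluster Time Synchronization Service (ctssd) will run in active mode once Grid Infrastructure is installed. A quick sketch to confirm NTP is really gone on each node:

```shell
# Confirm ntpd is stopped, disabled, and its config file is gone.
service ntpd status || true      # expect "ntpd is stopped"
chkconfig --list ntpd            # expect "off" in runlevels 2-5
[ ! -f /etc/ntp.conf ] && echo "ntp.conf removed - ctssd can run in active mode"
```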


    Create necessary groups and users for Grid on both nodes


    [root@rac1 ~]#groupadd -g 1000 oinstall
    [root@rac1 ~]#groupadd -g 1200 asmadmin
    [root@rac1 ~]#groupadd -g 1201 asmdba
    [root@rac1 ~]#groupadd -g 1202 asmoper
    [root@rac1 ~]#useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid

    [root@rac1 ~]# id grid
    Code:
    uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)


    Assign password to grid user on both nodes.


    [root@rac1 ~]#passwd grid


    Set Bash Profile for grid user on both nodes.



    [root@rac1 ~]#su - grid

    Replace the content of the grid user's .bash_profile with the following, on both nodes.


    Code:
    # ---------------------------------------------------
    # .bash_profile
    # ---------------------------------------------------
    # OS User:      grid
    # Application:  Oracle Grid Infrastructure
    # Version:      Oracle 11g Release 2
    # ---------------------------------------------------
    
    # Get the aliases and functions
    if [ -f ~/.bashrc ]; then
          . ~/.bashrc
    fi
    
    alias ls="ls -FA"
    
    # ---------------------------------------------------
    # ORACLE_SID
    # ---------------------------------------------------
    # Specifies the Oracle system identifier (SID)
    # for the Automatic Storage Management (ASM)instance
    # running on this node.
    # Each RAC node must have a unique ORACLE_SID.
    # (i.e. +ASM1, +ASM2,...)
    # ---------------------------------------------------
    ORACLE_SID=+ASM1; export ORACLE_SID
    
    # ---------------------------------------------------
    # JAVA_HOME
    # ---------------------------------------------------
    # Specifies the directory of the Java SDK and Runtime
    # Environment.
    # ---------------------------------------------------
    JAVA_HOME=/usr/local/java; export JAVA_HOME
    
    # ---------------------------------------------------
    # GRID_BASE
    # ---------------------------------------------------
    # Specifies the base of the Oracle directory structure
    # for Optimal Flexible Architecture (OFA) compliant
    # installations. The Oracle base directory for the
    # grid installation owner is the location where
    # diagnostic and administrative logs, and other logs
    # associated with Oracle ASM and Oracle Clusterware
    # are stored.
    # ---------------------------------------------------
    GRID_BASE=/u01/app/grid; export GRID_BASE
    
    ORACLE_BASE=$GRID_BASE; export ORACLE_BASE
    
    # ---------------------------------------------------
    # GRID_HOME
    # ---------------------------------------------------
    # Specifies the directory containing the Oracle
    # Grid Infrastructure software. For grid
    # infrastructure for a cluster installations, the Grid
    # home must not be placed under one of the Oracle base
    # directories, or under Oracle home directories of
    # Oracle Database installation owners, or in the home
    # directory of an installation owner. During 
    # installation, ownership of the path to the Grid 
    # home is changed to root. This change causes 
    # permission errors for other installations.
    # ---------------------------------------------------
    GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
    
    ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
    
    # ---------------------------------------------------
    # ORACLE_PATH
    # ---------------------------------------------------
    # Specifies the search path for files used by Oracle
    # applications such as SQL*Plus. If the full path to
    # the file is not specified, or if the file is not
    # in the current directory, the Oracle application
    # uses ORACLE_PATH to locate the file.
    # This variable is used by SQL*Plus, Forms and Menu.
    # ---------------------------------------------------
    ORACLE_PATH=/u01/app/oracle/dba_scripts/sql; export ORACLE_PATH
    
    # ---------------------------------------------------
    # SQLPATH
    # ---------------------------------------------------
    # Specifies the directory or list of directories that
    # SQL*Plus searches for a login.sql file.
    # ---------------------------------------------------
    # SQLPATH=/u01/app/oracle/dba_scripts/sql; export SQLPATH
    
    # ---------------------------------------------------
    # ORACLE_TERM
    # ---------------------------------------------------
    # Defines a terminal definition. If not set, it
    # defaults to the value of your TERM environment
    # variable. Used by all character mode products. 
    # ---------------------------------------------------
    ORACLE_TERM=xterm; export ORACLE_TERM
    
    # ---------------------------------------------------
    # NLS_DATE_FORMAT
    # ---------------------------------------------------
    # Specifies the default date format to use with the
    # TO_CHAR and TO_DATE functions. The default value of
    # this parameter is determined by NLS_TERRITORY. The
    # value of this parameter can be any valid date
    # format mask, and the value must be surrounded by 
    # double quotation marks. For example:
    #
    #         NLS_DATE_FORMAT = "MM/DD/YYYY"
    #
    # ---------------------------------------------------
    NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
    
    # ---------------------------------------------------
    # TNS_ADMIN
    # ---------------------------------------------------
    # Specifies the directory containing the Oracle Net
    # Services configuration files like listener.ora, 
    # tnsnames.ora, and sqlnet.ora.
    # ---------------------------------------------------
    TNS_ADMIN=$GRID_HOME/network/admin; export TNS_ADMIN
    
    # ---------------------------------------------------
    # ORA_NLS11
    # ---------------------------------------------------
    # Specifies the directory where the language,
    # territory, character set, and linguistic definition
    # files are stored.
    # ---------------------------------------------------
    ORA_NLS11=$GRID_HOME/nls/data; export ORA_NLS11
    
    # ---------------------------------------------------
    # PATH
    # ---------------------------------------------------
    # Used by the shell to locate executable programs;
    # must include the $GRID_HOME/bin directory.
    # ---------------------------------------------------
    PATH=.:${JAVA_HOME}/bin:$JAVA_HOME/db/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
    PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
    PATH=${PATH}:/u01/app/oracle/dba_scripts/bin
    export PATH
    
    # ---------------------------------------------------
    # LD_LIBRARY_PATH
    # ---------------------------------------------------
    # Specifies the list of directories that the shared
    # library loader searches to locate shared object
    # libraries at runtime.
    # ---------------------------------------------------
    LD_LIBRARY_PATH=$GRID_HOME/lib
    LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$GRID_HOME/oracm/lib
    LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
    export LD_LIBRARY_PATH
    
    # ---------------------------------------------------
    # CLASSPATH
    # ---------------------------------------------------
    # Specifies the directory or list of directories that
    # contain compiled Java classes.
    # ---------------------------------------------------
    CLASSPATH=$GRID_HOME/JRE
    CLASSPATH=${CLASSPATH}:$GRID_HOME/jdbc/lib/ojdbc6.jar
    CLASSPATH=${CLASSPATH}:$GRID_HOME/jlib
    CLASSPATH=${CLASSPATH}:$GRID_HOME/rdbms/jlib
    CLASSPATH=${CLASSPATH}:$ORACLE_HOME/oc4j/ant/lib/ant.jar
    CLASSPATH=${CLASSPATH}:$ORACLE_HOME/oc4j/ant/lib/ant-launcher.jar
    CLASSPATH=${CLASSPATH}:$JAVA_HOME/db/lib/derby.jar
    CLASSPATH=${CLASSPATH}:$GRID_HOME/network/jlib
    export CLASSPATH
    
    # ---------------------------------------------------
    # THREADS_FLAG
    # ---------------------------------------------------
    # All the tools in the JDK use green threads as a
    # default. To specify that native threads should be
    # used, set the THREADS_FLAG environment variable to
    # "native". You can revert to the use of green
    # threads by setting THREADS_FLAG to the value
    # "green".
    # ---------------------------------------------------
    THREADS_FLAG=native; export THREADS_FLAG
    
    # ---------------------------------------------------
    # TEMP, TMP, and TMPDIR
    # ---------------------------------------------------
    # Specify the default directories for temporary
    # files; if set, tools that create temporary files
    # create them in one of these directories.
    # ---------------------------------------------------
    export TEMP=/tmp
    export TMP=/tmp
    export TMPDIR=/tmp
    
    # ---------------------------------------------------
    # UMASK
    # ---------------------------------------------------
    # Set the default file mode creation mask
    # (umask) to 022 to ensure that the user performing
    # the Oracle software installation creates files
    # with 644 permissions.
    # ---------------------------------------------------
    umask 022

    On rac1, make sure the profile sets: ORACLE_SID=+ASM1; export ORACLE_SID
    On rac2, make sure the profile sets: ORACLE_SID=+ASM2; export ORACLE_SID
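    If you copied the same profile to rac2 unchanged, the SID can be flipped there with a one-line sed. A sketch; run as root on rac2 only:

```shell
# On rac2 only: change the ASM SID in the grid user's profile from +ASM1 to +ASM2.
sed -i 's/^ORACLE_SID=+ASM1;/ORACLE_SID=+ASM2;/' /home/grid/.bash_profile
grep '^ORACLE_SID=' /home/grid/.bash_profile   # verify the change took effect
```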


    Reload the .bash_profile on both nodes using

    [grid@rac1 ~]$ . ./.bash_profile


    Create the necessary groups and users for the Oracle software on both nodes



    [root@rac1 ~]#groupadd -g 1300 dba
    [root@rac1 ~]#groupadd -g 1301 oper
    [root@rac1 ~]#useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

    [root@rac1 ~]# id oracle
    Code:
    uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

    Assign a password to the oracle user on both nodes.


    [root@rac1 ~]#passwd oracle


    Set Bash Profile for oracle user on both nodes.


    [root@rac1 ~]#su - oracle


    Replace the content of the oracle user's .bash_profile with the following, on both nodes.


    Code:
    # ---------------------------------------------------
    # .bash_profile
    # ---------------------------------------------------
    # OS User:      oracle
    # Application:  Oracle Database Software Owner
    # Version:      Oracle 11g Release 2
    # ---------------------------------------------------
    
    # Get the aliases and functions
    if [ -f ~/.bashrc ]; then
          . ~/.bashrc
    fi
    
    alias ls="ls -FA"
    
    # ---------------------------------------------------
    # ORACLE_SID
    # ---------------------------------------------------
    # Specifies the Oracle system identifier (SID) for
    # the Oracle instance running on this node.
    # Each RAC node must have a unique ORACLE_SID.
    # (i.e. orcl1, orcl2,...)
    # ---------------------------------------------------
    ORACLE_SID=orcl1; export ORACLE_SID
    
    # ---------------------------------------------------
    # ORACLE_UNQNAME
    # ---------------------------------------------------
    # In previous releases of Oracle Database, you were 
    # required to set environment variables for
    # ORACLE_HOME and ORACLE_SID to start, stop, and
    # check the status of Enterprise Manager. With
    # Oracle Database 11g Release 2 (11.2) and later, you
    # need to set the environment variables ORACLE_HOME 
    # and ORACLE_UNQNAME to use Enterprise Manager. 
    # Set ORACLE_UNQNAME equal to the database unique
    # name.
    # ---------------------------------------------------
    ORACLE_UNQNAME=orcl; export ORACLE_UNQNAME
    
    # ---------------------------------------------------
    # JAVA_HOME
    # ---------------------------------------------------
    # Specifies the directory of the Java SDK and Runtime
    # Environment.
    # ---------------------------------------------------
    JAVA_HOME=/usr/local/java; export JAVA_HOME
    
    # ---------------------------------------------------
    # ORACLE_BASE
    # ---------------------------------------------------
    # Specifies the base of the Oracle directory structure
    # for Optimal Flexible Architecture (OFA) compliant
    # database software installations.
    # ---------------------------------------------------
    ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
    
    # ---------------------------------------------------
    # ORACLE_HOME
    # ---------------------------------------------------
    # Specifies the directory containing the Oracle
    # Database software.
    # ---------------------------------------------------
    ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
    
    # ---------------------------------------------------
    # ORACLE_PATH
    # ---------------------------------------------------
    # Specifies the search path for files used by Oracle
    # applications such as SQL*Plus. If the full path to
    # the file is not specified, or if the file is not
    # in the current directory, the Oracle application
    # uses ORACLE_PATH to locate the file.
    # This variable is used by SQL*Plus, Forms and Menu.
    # ---------------------------------------------------
    ORACLE_PATH=/u01/app/oracle/dba_scripts/sql:$ORACLE_HOME/rdbms/admin; export ORACLE_PATH
    
    # ---------------------------------------------------
    # SQLPATH
    # ---------------------------------------------------
    # Specifies the directory or list of directories that
    # SQL*Plus searches for a login.sql file.
    # ---------------------------------------------------
    # SQLPATH=/u01/app/oracle/dba_scripts/sql; export SQLPATH
    
    # ---------------------------------------------------
    # ORACLE_TERM
    # ---------------------------------------------------
    # Defines a terminal definition. If not set, it
    # defaults to the value of your TERM environment
    # variable. Used by all character mode products. 
    # ---------------------------------------------------
    ORACLE_TERM=xterm; export ORACLE_TERM
    
    # ---------------------------------------------------
    # NLS_DATE_FORMAT
    # ---------------------------------------------------
    # Specifies the default date format to use with the
    # TO_CHAR and TO_DATE functions. The default value of
    # this parameter is determined by NLS_TERRITORY. The
    # value of this parameter can be any valid date
    # format mask, and the value must be surrounded by 
    # double quotation marks. For example:
    #
    #         NLS_DATE_FORMAT = "MM/DD/YYYY"
    #
    # ---------------------------------------------------
    NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
    
    # ---------------------------------------------------
    # TNS_ADMIN
    # ---------------------------------------------------
    # Specifies the directory containing the Oracle Net
    # Services configuration files like listener.ora, 
    # tnsnames.ora, and sqlnet.ora.
    # ---------------------------------------------------
    TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
    
    # ---------------------------------------------------
    # ORA_NLS11
    # ---------------------------------------------------
    # Specifies the directory where the language,
    # territory, character set, and linguistic definition
    # files are stored.
    # ---------------------------------------------------
    ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
    
    # ---------------------------------------------------
    # PATH
    # ---------------------------------------------------
    # Used by the shell to locate executable programs;
    # must include the $ORACLE_HOME/bin directory.
    # ---------------------------------------------------
    PATH=.:${JAVA_HOME}/bin:$JAVA_HOME/db/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
    PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
    PATH=${PATH}:/u01/app/oracle/dba_scripts/bin
    export PATH
    
    # ---------------------------------------------------
    # LD_LIBRARY_PATH
    # ---------------------------------------------------
    # Specifies the list of directories that the shared
    # library loader searches to locate shared object
    # libraries at runtime.
    # ---------------------------------------------------
    LD_LIBRARY_PATH=$ORACLE_HOME/lib
    LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
    LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
    export LD_LIBRARY_PATH
    
    # ---------------------------------------------------
    # CLASSPATH
    # ---------------------------------------------------
    # Specifies the directory or list of directories that
    # contain compiled Java classes.
    # ---------------------------------------------------
    CLASSPATH=$ORACLE_HOME/jdbc/lib/ojdbc6.jar
    CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
    CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
    CLASSPATH=${CLASSPATH}:$ORACLE_HOME/oc4j/ant/lib/ant.jar
    CLASSPATH=${CLASSPATH}:$ORACLE_HOME/oc4j/ant/lib/ant-launcher.jar
    CLASSPATH=${CLASSPATH}:$JAVA_HOME/db/lib/derby.jar
    CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
    export CLASSPATH
    
    # ---------------------------------------------------
    # THREADS_FLAG
    # ---------------------------------------------------
    # All the tools in the JDK use green threads as a
    # default. To specify that native threads should be
    # used, set the THREADS_FLAG environment variable to
    # "native". You can revert to the use of green
    # threads by setting THREADS_FLAG to the value
    # "green".
    # ---------------------------------------------------
    THREADS_FLAG=native; export THREADS_FLAG
    
    # ---------------------------------------------------
    # TEMP, TMP, and TMPDIR
    # ---------------------------------------------------
    # Specify the default directories for temporary
    # files; if set, tools that create temporary files
    # create them in one of these directories.
    # ---------------------------------------------------
    export TEMP=/tmp
    export TMPDIR=/tmp
    
    # ---------------------------------------------------
    # UMASK
    # ---------------------------------------------------
    # Set the default file mode creation mask
    # (umask) to 022 to ensure that the user performing
    # the Oracle software installation creates files
    # with 644 permissions.
    # ---------------------------------------------------
    umask 022
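
    A quick way to see what umask 022 buys you: newly created files come out 644 (rw-r--r--) and directories 755, which is exactly what the installer expects. A small throwaway demonstration:

    ```shell
    # demonstrate the effect of umask 022 on newly created files
    umask 022
    tmp=$(mktemp -d)
    touch "$tmp/f"
    mkdir "$tmp/d"
    stat -c '%a' "$tmp/f"   # prints 644
    stat -c '%a' "$tmp/d"   # prints 755
    rm -rf "$tmp"
    ```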


    Make sure the setting on rac1 is ORACLE_SID=orcl1; export ORACLE_SID
    Make sure the setting on rac2 is ORACLE_SID=orcl2; export ORACLE_SID

    Source the .bash_profile on both nodes.


    [oracle@rac1 ~]$ . ./.bash_profile
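
    After sourcing the profile, a short loop confirms the key variables actually made it into the environment; anything the profile failed to export shows up as UNSET. The variable names are just a sample from the profile above.

    ```shell
    # print each profile variable, flagging any that is not set
    for v in ORACLE_SID ORACLE_HOME ORACLE_PATH TNS_ADMIN NLS_DATE_FORMAT; do
        eval "val=\${$v:-UNSET}"
        printf '%s=%s\n' "$v" "$val"
    done
    ```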


    Verify that the user nobody exists on both nodes.


    [root@rac1 ~]# id nobody
    uid=99(nobody) gid=99(nobody) groups=99(nobody)



    Create directory structures on both nodes.


    [root@rac1 ~]# mkdir -p /u01/app/grid
    [root@rac1 ~]# mkdir -p /u01/app/11.2.0/grid
    [root@rac1 ~]# chown -R grid:oinstall /u01
    [root@rac1 ~]# mkdir -p /u01/app/oracle
    [root@rac1 ~]# chown oracle:oinstall /u01/app/oracle
    [root@rac1 ~]# chmod -R 775 /u01


    Insert the following in /etc/security/limits.conf on both nodes.


    grid soft nproc 2047
    grid hard nproc 16384
    grid soft nofile 1024
    grid hard nofile 65536
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
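
    Once the entries above are active (log in again as oracle or grid first), the limits can be read back with ulimit; the four commands should then report 1024, 65536, 2047, and 16384 respectively.

    ```shell
    # shell resource limits for the current session
    ulimit -Sn   # soft nofile (open file descriptors)
    ulimit -Hn   # hard nofile
    ulimit -Su   # soft nproc (max user processes)
    ulimit -Hu   # hard nproc
    ```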

    Insert the following in /etc/pam.d/login on both nodes.

    session required pam_limits.so


    As root, append the following to /etc/profile on both nodes.



    Code:
    if [ $USER = "oracle" ] || [ $USER = "grid" ]; then 
        if [ $SHELL = "/bin/ksh" ]; then
            ulimit -p 16384
            ulimit -n 65536
        else
            ulimit -u 16384 -n 65536
        fi
        umask 022
    fi
    Edit the kernel parameters in /etc/sysctl.conf on both nodes.


    Leave the existing entries for

    kernel.shmall
    kernel.shmmax

    as they are, and append the following values to /etc/sysctl.conf.
    Code:
    # Controls the maximum number of shared memory segments system wide
    kernel.shmmni = 4096
    
    # Sets the following semaphore values:
    # SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value
    kernel.sem = 250 32000 100 128
    
    # Sets the maximum number of file-handles that the Linux kernel will allocate
    fs.file-max = 6815744
    
    # Defines the local port range that is used by TCP and UDP
    # traffic to choose the local port
    net.ipv4.ip_local_port_range = 9000 65500
    
    # Default setting in bytes of the socket "receive" buffer which
    # may be set by using the SO_RCVBUF socket option
    net.core.rmem_default=262144
    
    # Maximum setting in bytes of the socket "receive" buffer which
    # may be set by using the SO_RCVBUF socket option
    net.core.rmem_max=4194304
    
    # Default setting in bytes of the socket "send" buffer which
    # may be set by using the SO_SNDBUF socket option
    net.core.wmem_default=262144
    
    # Maximum setting in bytes of the socket "send" buffer which 
    # may be set by using the SO_SNDBUF socket option
    net.core.wmem_max=1048576
    
    # Maximum number of allowable concurrent asynchronous I/O requests
    fs.aio-max-nr=1048576
    Activate kernel settings on both nodes.

    [root@rac1 ~]# sysctl -p
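
    The live values can also be read straight out of /proc (equivalent to sysctl -n), which is a quick way to confirm the kernel picked up the new settings; after sysctl -p, kernel.sem should read 250 32000 100 128 and fs.file-max should read 6815744.

    ```shell
    # read the running kernel's values back from /proc
    cat /proc/sys/kernel/sem
    cat /proc/sys/fs/file-max
    cat /proc/sys/net/ipv4/ip_local_port_range
    ```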


    Configure shared storage for the cluster.


    1) Shut down both nodes.

    Do the following from the first node only.

    2) On node 1, click (Edit virtual machine settings)
    3) Click (Add)
    4) Select Hard Disk, click (Next)
    5) Select Create a new virtual disk, click (Next)
    6) Disk type: SCSI, click (Next)
    7) Maximum disk size: 2GB, select Allocate all disk space now, click (Next)
    8) Specify file location "C:\11gRAC\shared\Disk1.vmdk", click (Finish)


    Following the steps above, create a total of seven disks with the following specifications.

    Voting Disk + OCR

    "C:\11gRAC\shared\Disk1" size 2GB
    "C:\11gRAC\shared\Disk2" size 2GB
    "C:\11gRAC\shared\Disk3" size 2GB

    Database Storage

    "C:\11gRAC\shared\Disk4" size 12GB
    "C:\11gRAC\shared\Disk5" size 12GB

    Flash Recovery Area

    "C:\11gRAC\shared\Disk6" size 12GB
    "C:\11gRAC\shared\Disk7" size 12GB

    Edit the C:\11gRAC\rac1\rac1.vmx file.

    The configuration file for rac1 will already contain configuration information for the seven new SCSI virtual hard disks:
    Code:
    ...
    scsi0:1.present = "TRUE"
    scsi0:1.fileName = "Disk1.vmdk"
    scsi0:2.present = "TRUE"
    scsi0:2.fileName = "Disk2.vmdk"
    scsi0:3.present = "TRUE"
    scsi0:3.fileName = "Disk3.vmdk"
    scsi0:4.present = "TRUE"
    scsi0:4.fileName = "Disk4.vmdk"
    scsi0:5.present = "TRUE"
    scsi0:5.fileName = "Disk5.vmdk"
    scsi0:6.present = "TRUE"
    scsi0:6.fileName = "Disk6.vmdk"
    scsi0:8.present = "TRUE"
    scsi0:8.fileName = "Disk7.vmdk"
    ...
    (rac2 will not contain these entries yet, since the disks were added only to rac1.) Remove the auto-generated entries for the seven new hard disks on rac1 and replace them with the configuration shown below.

    Code:
    #
    # ----------------------------------------------------------------
    # SHARED DISK SECTION - (BEGIN)
    # ----------------------------------------------------------------
    # -  The goal in meeting the hardware requirements is to have a
    #    shared storage for the two nodes. The way to achieve this in
    #    VMware is the creation of a NEW SCSI BUS. It has to be of
    #    type "virtual" and we must have the disk.locking = "false"
    #    option.
    # -  Just dataCacheMaxSize = "0" should be sufficient with the
    #    diskLib.* parameters, although I include all parameters for
    #    documentation purposes. 
    # -  maxUnsyncedWrites should matter for sparse disks only, and
    #    I certainly do not recommend using sparse disks for
    #    clustering.
    # -  dataCacheMaxSize=0 should disable cache size completely, so
    #    other three dataCache options should do nothing (no harm,
    #    but nothing good either).
    # ----------------------------------------------------------------
    #
    
    diskLib.dataCacheMaxSize = "0"
    diskLib.dataCacheMaxReadAheadSize = "0"
    diskLib.dataCacheMinReadAheadSize = "0"
    diskLib.dataCachePageSize = "4096"
    diskLib.maxUnsyncedWrites = "0"
    
    disk.locking = "false"
    
    # ----------------------------------------------------------------
    #   Create one HBA
    # ----------------------------------------------------------------
    
    scsi1.present = "TRUE"
    scsi1.sharedBus = "virtual"
    scsi1.virtualDev = "lsilogic"
    
    # ----------------------------------------------------------------
    #   Create virtual SCSI disks on single HBA
    # ----------------------------------------------------------------
    
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "C:\11gRAC\shared\Disk1.vmdk"
    scsi1:0.redo = ""
    scsi1:0.mode = "independent-persistent"
    scsi1:0.deviceType = "disk"
    
    scsi1:1.present = "TRUE"
    scsi1:1.fileName = "C:\11gRAC\shared\Disk2.vmdk"
    scsi1:1.redo = ""
    scsi1:1.mode = "independent-persistent"
    scsi1:1.deviceType = "disk"
    
    scsi1:2.present = "TRUE"
    scsi1:2.fileName = "C:\11gRAC\shared\Disk3.vmdk"
    scsi1:2.redo = ""
    scsi1:2.mode = "independent-persistent"
    scsi1:2.deviceType = "disk"
    
    scsi1:3.present = "TRUE"
    scsi1:3.fileName = "C:\11gRAC\shared\Disk4.vmdk"
    scsi1:3.redo = ""
    scsi1:3.mode = "independent-persistent"
    scsi1:3.deviceType = "disk"
    
    scsi1:4.present = "TRUE"
    scsi1:4.fileName = "C:\11gRAC\shared\Disk5.vmdk"
    scsi1:4.redo = ""
    scsi1:4.mode = "independent-persistent"
    scsi1:4.deviceType = "disk"
    
    scsi1:5.present = "TRUE"
    scsi1:5.fileName = "C:\11gRAC\shared\Disk6.vmdk"
    scsi1:5.redo = ""
    scsi1:5.mode = "independent-persistent"
    scsi1:5.deviceType = "disk"
    
    scsi1:6.present = "TRUE"
    scsi1:6.fileName = "C:\11gRAC\shared\Disk7.vmdk"
    scsi1:6.redo = ""
    scsi1:6.mode = "independent-persistent"
    scsi1:6.deviceType = "disk"
    #
    # ----------------------------------------------------------------
    # SHARED DISK SECTION - (END)
    # ----------------------------------------------------------------
    #

    Also append the same configuration to the C:\11gRAC\rac2\rac2.vmx file.

    Close VMware, reopen it, and power on the nodes one at a time.

    You may see an error saying "clustering is not supported for VMware Workstation"; it is safe to ignore.




    Partition disks for shared storage.


    Do this from the first node only.


    [root@rac1 ~]# fdisk -l

    Code:
    Disk /dev/sda: 32.2 GB, 32212254720 bytes
    255 heads, 63 sectors/track, 3916 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14        3379    27037395   83  Linux
    /dev/sda3            3380        3901     4192965   82  Linux swap / Solaris
    
    Disk /dev/sdb: 2147 MB, 2147483648 bytes
    255 heads, 63 sectors/track, 261 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Disk /dev/sdb doesn't contain a valid partition table
    
    Disk /dev/sdc: 2147 MB, 2147483648 bytes
    255 heads, 63 sectors/track, 261 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Disk /dev/sdc doesn't contain a valid partition table
    
    Disk /dev/sdd: 12.8 GB, 12884901888 bytes
    255 heads, 63 sectors/track, 1566 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Disk /dev/sdd doesn't contain a valid partition table
    
    Disk /dev/sde: 12.8 GB, 12884901888 bytes
    255 heads, 63 sectors/track, 1566 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Disk /dev/sde doesn't contain a valid partition table
    
    Disk /dev/sdf: 12.8 GB, 12884901888 bytes
    255 heads, 63 sectors/track, 1566 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Disk /dev/sdf doesn't contain a valid partition table
    
    Disk /dev/sdg: 12.8 GB, 12884901888 bytes
    255 heads, 63 sectors/track, 1566 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Disk /dev/sdg doesn't contain a valid partition table
    
    Disk /dev/sdh: 2147 MB, 2147483648 bytes
    255 heads, 63 sectors/track, 261 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Disk /dev/sdh doesn't contain a valid partition table


    Partition each disk as follows, from the first node only.

    [root@rac1 ~]# fdisk /dev/sdb
    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
    Building a new DOS disklabel. Changes will remain in memory only,
    until you decide to write them. After that, of course, the previous
    content won't be recoverable.

    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

    Command (m for help): n
    Command action
    e extended
    p primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-261, default 1):
    Using default value 1
    Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
    Using default value 261

    Command (m for help): w
    The partition table has been altered!

    Calling ioctl() to re-read partition table.
    Syncing disks.
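
    The interactive session above has to be repeated for each of the seven disks. The same keystrokes (n, p, 1, two defaults, w) can be fed to fdisk in a loop instead; this is only a sketch that assumes the device names shown above, and the block-device guard makes it safe to dry-run on a machine where the disks are absent.

    ```shell
    # create one primary partition spanning each shared disk
    for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh; do
        if [ ! -b "$d" ]; then
            echo "skipping $d: not a block device"
            continue
        fi
        # keystrokes: new, primary, partition 1, default first/last cylinder, write
        printf 'n\np\n1\n\n\nw\n' | fdisk "$d" || echo "fdisk on $d failed (are you root?)"
    done
    ```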



    After partitioning all the disks, execute the following command on both nodes.


    [root@rac1 ~]# partprobe



    Sample output from both nodes after partitioning

    [root@rac2 ~]# fdisk -l

    Code:
    Disk /dev/sda: 32.2 GB, 32212254720 bytes
    255 heads, 63 sectors/track, 3916 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14        3379    27037395   83  Linux
    /dev/sda3            3380        3901     4192965   82  Linux swap / Solaris
    
    Disk /dev/sdb: 2147 MB, 2147483648 bytes
    255 heads, 63 sectors/track, 261 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1         261     2096451   83  Linux
    
    Disk /dev/sdc: 2147 MB, 2147483648 bytes
    255 heads, 63 sectors/track, 261 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1               1         261     2096451   83  Linux
    
    Disk /dev/sdd: 12.8 GB, 12884901888 bytes
    255 heads, 63 sectors/track, 1566 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1               1        1566    12578863+  83  Linux
    
    Disk /dev/sde: 12.8 GB, 12884901888 bytes
    255 heads, 63 sectors/track, 1566 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1               1        1566    12578863+  83  Linux
    
    Disk /dev/sdf: 12.8 GB, 12884901888 bytes
    255 heads, 63 sectors/track, 1566 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdf1               1        1566    12578863+  83  Linux
    
    Disk /dev/sdg: 12.8 GB, 12884901888 bytes
    255 heads, 63 sectors/track, 1566 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdg1               1        1566    12578863+  83  Linux
    
    Disk /dev/sdh: 2147 MB, 2147483648 bytes
    255 heads, 63 sectors/track, 261 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdh1               1         261     2096451   83  Linux






    Configure ASM


    Check your kernel version


    [root@rac1 ~]# uname -a
    Linux rac1.example.com 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux

    Download the ASMLib RPMs matching your kernel from the following website.

    http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

    In my case the OS is 32-bit (x86), so every file downloaded below is the 32-bit (x86) build.


    In the Library and Tools section, download the following:

    oracleasm-support-2.1.7-1.el5.i386.rpm
    oracleasmlib-2.0.4-1.el5.i386.rpm


    Download one additional RPM matching your kernel version:

    oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm




    Install the ASMLib packages on both nodes.


    [root@rac1 asm11g]# rpm -Uvh oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm \
    > oracleasmlib-2.0.4-1.el5.i386.rpm \
    > oracleasm-support-2.1.7-1.el5.i386.rpm


    Output

    Code:
    warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
    Preparing...                ########################################### [100%]
       1:oracleasm-support      ########################################### [ 33%]
       2:oracleasm-2.6.18-164.el########################################### [ 67%]
       3:oracleasmlib           ########################################### [100%]


    Check on both nodes

    [root@rac1 ~]# rpm -qa | grep asm
    Code:
    oracleasm-2.6.18-164.el5-2.0.5-1.el5
    ibmasm-3.0-9
    oracleasm-support-2.1.7-1.el5
    nasm-0.98.39-3.2.2
    ibmasm-xinput-2.1-1.el5
    oracleasmlib-2.0.4-1.el5
    Configure ASMLib on both nodes.

    [root@rac1 ~]# /usr/sbin/oracleasm configure -i

    Configuring the Oracle ASM library driver.

    This will configure the on-boot properties of the Oracle ASM library
    driver. The following questions will determine whether the driver is
    loaded on boot and what permissions it will have. The current values
    will be shown in brackets ('[]'). Hitting <ENTER> without typing an
    answer will keep that current value. Ctrl-C will abort.

    Default user to own the driver interface []: grid
    Default group to own the driver interface []: asmadmin
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: done



    Execute the following on both nodes.

    [root@rac1 ~]# /usr/sbin/oracleasm init

    Code:
    Creating /dev/oracleasm mount point: /dev/oracleasm
    Loading module "oracleasm": oracleasm
    Mounting ASMlib driver filesystem: /dev/oracleasm

    Execute the following commands from first node only.

    [root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
    Marking disk "VOL1" as an ASM disk: [ OK ]
    [root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
    Marking disk "VOL2" as an ASM disk: [ OK ]
    [root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
    Marking disk "VOL3" as an ASM disk: [ OK ]
    [root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL4 /dev/sde1
    Marking disk "VOL4" as an ASM disk: [ OK ]
    [root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL5 /dev/sdf1
    Marking disk "VOL5" as an ASM disk: [ OK ]
    [root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL6 /dev/sdg1
    Marking disk "VOL6" as an ASM disk: [ OK ]
    [root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL7 /dev/sdh1
    Marking disk "VOL7" as an ASM disk: [ OK ]
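
    The seven createdisk calls follow a simple pattern (VOL1 -> /dev/sdb1 ... VOL7 -> /dev/sdh1), so they can also be scripted. A sketch assuming that same volume-to-device mapping; when oracleasm is not installed it only prints what it would run.

    ```shell
    # label each shared partition as an ASM disk, VOL1..VOL7
    ASM=/etc/init.d/oracleasm
    i=1
    for dev in sdb sdc sdd sde sdf sdg sdh; do
        if [ -x "$ASM" ]; then
            "$ASM" createdisk "VOL$i" "/dev/${dev}1"
        else
            echo "would run: $ASM createdisk VOL$i /dev/${dev}1"
        fi
        i=$((i+1))
    done
    ```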

    Execute the following from both nodes.

    [root@rac2 ~]# /usr/sbin/oracleasm scandisks
    Code:
    Reloading disk partitions: done
    Cleaning any stale ASM disks...
    Scanning system for ASM disks...
    Instantiating disk "VOL1"
    Instantiating disk "VOL2"
    Instantiating disk "VOL3"
    Instantiating disk "VOL4"
    Instantiating disk "VOL5"
    Instantiating disk "VOL6"
    Instantiating disk "VOL7"
    Execute the following from both nodes.

    [root@rac1 ~]# /usr/sbin/oracleasm listdisks
    Code:
    VOL1
    VOL2
    VOL3
    VOL4
    VOL5
    VOL6
    VOL7


    Download Oracle software from the following link

    http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linuxsoft-085393.html

    Download the following

    Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86
    Oracle Grid Infrastructure 11g Release 2 (11.2.0.1.0) for Linux x86

    The grid infrastructure and Oracle software will be installed from node 1.

    As the grid user, create the staging directory, copy the grid infrastructure zip file into it, and unzip it.

    [root@rac1 ~]# su - grid
    [grid@rac1 ~]$ mkdir -p /home/grid/software/oracle
    [grid@rac1 ~]$ unzip


    As the oracle user, create the staging directory, copy the Oracle software zip files into it, and unzip them.

    [oracle@rac1 ~]$ mkdir -p /home/oracle/software/oracle
    [oracle@rac1 ~]$ unzip
    [oracle@rac1 ~]$ unzip


    Locate and install the cvuqdisk RPM on both nodes.

    [root@rac1 ~]# cp /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm /tmp/
    [root@rac1 tmp]# scp /tmp/cvuqdisk-1.0.7-1.rpm rac2:/tmp/
    root@rac2's password:
    cvuqdisk-1.0.7-1.rpm 100% 7831 7.7KB/s 00:00

    Install it as follows on both nodes.

    [root@rac1 ~]# cd /tmp/
    [root@rac1 tmp]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
    [root@rac1 tmp]# rpm -iv cvuqdisk-1.0.7-1.rpm
    Preparing packages for installation...
    cvuqdisk-1.0.7-1

    Confirm installation on both nodes.

    [root@rac1 tmp]# ls -l /usr/sbin/cvuqdisk

    -rwsr-xr-x 1 root oinstall 8272 May 28 2009 /usr/sbin/cvuqdisk

  2. #2
    Additional Description:

    nobody user:

    A program that runs under a local username will not have enough permissions to actually perform tasks like updating log files or processing the mail queue. On the other hand, a program that runs as root can do anything, even completely wipe the server.

    In order to avoid the latter, the nobody user has more permissions than the local user but less than root. It is designed to function only within the parameters of system services.


    Native Threads vs. Green Threads:

    Native threads use the operating system's native ability to manage multi-threaded processes.
    The kernel schedules and manages the various threads that make up the process.

    Green threads emulate multithreaded environments without relying on any native OS capabilities.
    Sun wrote green threads to enable Java to work in environments that do not have native thread support.

    Always use native threads, unless the OS does not support them.
