Friday, August 22, 2008
Oracle RAC Installation Complete - Postinstallation tasks
Verify the Oracle Cluster Database Registry Configuration
------------------------------------------------------------------
$ srvctl config database -d racdb ( srvctl -- Server Control utility )
node1 racdb1 /u01/app/.../db_1 ( node instance oracle_home)
node2 racdb2 /u01/app/.../db_1
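You can also confirm that the database and its instances are actually running; a quick check with the same srvctl utility, assuming the racdb database registered above:
$ srvctl status database -d racdb -- reports each instance and the node it is running on
$ srvctl status nodeapps -n node1 -- VIP, GSD, listener, and ONS status for a node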
Backup the root.sh script
-----------------------------
$ cd $ORACLE_HOME
$ cp root.sh root.sh.bak
Back up the voting disk
--------------------------
$ dd if=/dev/raw/raw7 of=/RACdb/OCR/backup/vdisk.bak
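The OCR should be backed up as well; Oracle Clusterware takes automatic OCR backups that you can list and verify with the ocrconfig and ocrcheck utilities (run as root; the export path below is only an example, matching the backup directory used above):
# ocrconfig -showbackup -- lists the automatically created OCR backups
# ocrcheck -- verifies OCR integrity and shows its size and location
# ocrconfig -export /RACdb/OCR/backup/ocr.bak -- manual logical export of the OCR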
Download and install the required patch updates
-------------------------------------------------------
Check Managed Targets
-----------------------------
Use the Grid Control console for this. Open a browser and enter the address for your Grid Control console. Click the Targets tab to verify that all the targets appear here.
Oracle RAC Installation Step 6 - Perform cluster database creation
$ cd /u01/app/oracle/product/10.2.0/db_1/bin
$ ./dbca
Database Configuration Assistant: Welcome
---------------------------------------------
Check - Oracle Real Application Clusters database
--> Next
Step 1 of 15: Operations
-------------------------
Select the operation that you want to perform
check - Create a Database
Step 2 of 15: Node Selection
-----------------------------
Select the nodes on which you want to create the cluster database.
Click Select All (on the right-hand side).
Step 3 of 15: Database Templates
-----------------------------------
check - General Purpose
Step 4 of 15: Database Identification
-------------------------------------
Global Database Name: RDBA -- up to 30 characters; must begin with an alphabetic character
SID Prefix: RDBA -- System Identifier prefix, used to generate unique SID names for the two
instances that make up the cluster database, e.g. RDBA1 and RDBA2
Step 5 of 15: Management Options
------------------------------------
Check - Configure the Database with Enterprise Manager
Click - Use Grid Control for Database Management -- if Grid Control is installed somewhere on the network
-- and you need to manage databases in a large enterprise deployment
Management Service: https://node1.us.oracle.com:1159/em
Click - Use Database Control for Database Management
-- optionally Enable Email Notifications when alerts occur -- an outgoing mail (SMTP) server and email address are required
-- optionally Enable Daily Backup -- supply a backup start time and OS user credentials
Step 6 of 15: Database Credentials
-----------------------------------
You must supply passwords for the user accounts created by the DBCA when configuring your database.
Click - Use Same Password for All Accounts
OR
Click - Use Different Passwords
Step 7 of 15: Storage Options
------------------------------
Cluster File System
Automatic Storage Management (ASM) -- requires an ASM instance on each cluster node
(DBCA can create them for you)
Raw Devices -- provide a fully qualified mapping file name, or set the DBCA_RAW_CONFIG environment variable to point to it.
Step 8 of 15: ASM Disk Group
-------------------------------
Select one or more disk groups to be used as storage for the database. You can also create a new disk group to be used by your cluster database.
Select the check box corresponding to the disk group.
Step 9 of 15: Database file locations
------------------------------------
Specify locations for the Database files to be created.
Click - Use Oracle-Managed Files
Database Area : +DATA -- group name in database area field.
Step 10 of 15: Recovery Configuration
---------------------------------------
Choose the recovery options for the database.
Check - Specify Flash Recovery Area
Flash Recovery Area: +FLASH -- required for all backup and recovery operations, including automatic backups using EM
Flash Recovery Area Size: 2048 MB
Check - Enable Archiving -- if needed
Step 11 of 15: Database Content
--------------------------------------
Check - Sample Schemas --only if needed
Step 12 of 15: Database Services
---------------------------------
You can add database services to be configured during database creation.
TAF Policy -- Transparent Application Failover
None : Do not use TAF.
Basic : Establish connections at failover time.
Pre-connect : Establish one connection to a preferred instance and another connection to a backup instance that you have selected to be available.
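For the Basic and Pre-connect policies, the connect-time behaviour is driven by the FAILOVER_MODE clause of the net service name. A minimal sketch of a TAF-enabled tnsnames.ora entry, using the VIP host names from the Clusterware installation below and an illustrative service name:
RACDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip.us.oracle.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip.us.oracle.com)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (FAILOVER = on)
    (CONNECT_DATA =
      (SERVICE_NAME = racdb.us.oracle.com)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )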
Step 13 of 15: Initialization Parameters
----------------------------------------
You can set important database parameters, grouped under four tabs:
Memory -- shared pool, buffer cache, java pool, large pool, PGA
Sizing -- database block size (default 8 KB)
Character Sets -- default language, date format
Connection Mode -- dedicated or shared server mode
Step 14 of 15: Database Storage
--------------------------------
You can specify storage parameters for database creation:
-- control files
-- tablespaces
-- datafiles
-- rollback segments
-- redo log groups
Size, location, and all aspects of extent management are under your control here.
Note: If you select a database template that includes datafiles, you will not be able to add or remove datafiles,
tablespaces, or rollback segments. However, you can change the destination of the datafiles, control files, and redo log groups.
Step 15 of 15: Creation Options
--------------------------------
Review all options, parameters and so on that have been chosen for your database creation.
By clicking the Password Management button, you can manage the database accounts created by the DBCA.
Oracle RAC Installation Step 5 - Install EM agent on cluster nodes
$ cd /cdrom/cd0
$ ./runInstaller
Management Agent Installation: Specify Installation Type
-----------------------------------------------------------
There are two management tools available for your cluster database
-Database Control
-Grid Control
Both tools are based on Enterprise Manager
-Grid Control is the superior tool for deploying and managing cluster databases in an enterprise setting.
-To use Grid Control
---The Management Agent must be installed on each managed node in your cluster.
To install the Management Agent
---Go to the Enterprise Manager Installation CD or a software staging area and start the Oracle Universal Installer.
The Installation page provides several install types to choose from. Click the Additional Management Agent option button to install an agent only.
Specify Installation Location
--------------------------------
/u01/app/oracle/product/10.2.0
$ pwd
/u01/app/oracle/product/10.2.0
$ ls -l
total 20
drwxrwx--- 3 oracle dba 4096 Apr 12 07:59 agent
drwxr-x--- 54 oracle dba 4096 Apr 14 07:00 asm
drwxr-x--- 54 oracle dba 4096 Apr 14 07:16 db_1
Specify Hardware Cluster Installation Mode
---------------------------------------------
Click the Select All button to choose all the nodes of the cluster.
Prerequisite Check and OMS Location
---------------------------------------
Specify Oracle Management Service Location
Management Service Host Name: node1.us.oracle.com
Management Service Port: 4889 --default used by Grid Control
Agent Registration Password
------------------------------
Specify Agent Registration Password
Password : ******* (password for the Grid Control server located on node1.us.oracle.com)
The Management Agent communicates with the Management Service in secure mode.
Management Agent Installation finish
---------------------------------------
Open a terminal window and, as root, run /u01/app/oracle/product/10.2.0/agent/agent10g/root.sh
on all cluster nodes, e.g. node1, node2.
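Once root.sh completes, verify that each agent is up and uploading to the Management Service; a quick check using the agent home installed above:
$ /u01/app/oracle/product/10.2.0/agent/agent10g/bin/emctl status agent -- shows the agent version, heartbeat, and last upload
$ /u01/app/oracle/product/10.2.0/agent/agent10g/bin/emctl upload agent -- forces an immediate upload if targets do not yet appear in Grid Control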
Oracle RAC Installation Step 4 - Perform Oracle Database 10g software installation
$ /cdrom/database/runInstaller
Welcome Screen Appears --> Next
Select Installation Type
------------------------
Check - Enterprise Edition (1.24 GB)
Specify Home Details
----------------------
Name - OracleDb10g_home1
Path - /u01/app/oracle/product/10.2.0/db_1
Specify Cluster Installation
----------------------------
Check - Cluster Installation
Select nodes (in addition to the local node) in the hardware cluster where the installer should install products that you select in this installation.
Product-Specific Prerequisite Checks
--------------------------------------
0 requirements to be verified.
You must manually verify and confirm any items that are flagged with warnings and any items that require manual checks.
Select Configuration Option
----------------------------
check - Install database Software only
Start Install and finally run root.sh script
-----------------------------------------------
/u01/app/oracle/product/10.2.0/db_1/root.sh on all nodes eg. node1,node2
Oracle RAC Installation Step 3 - Perform ASM installation
$ id
oracle
$ /cdrom/database/runInstaller
Select Installation Type
------------------------
Oracle Database 10g 10.2.0.1.0
Check -- Enterprise Edition (1.24GB)
Specify Home Details
----------------------
Name: OraASM10g_home1
Path: /u01/app/oracle/product/10.2.0/asm
Specify Hardware Cluster Installation Mode
--------------------------------------------
Check -- Cluster Installation
Select all nodes where the installer should install the products that you select in this installation.
Product-Specific Prerequisite Checks
-------------------------------------
0 requirements to be verified.
You must manually verify and confirm any items that are flagged with warnings and any items that require manual checks.
Select Configuration Option
----------------------------
Check - Configure Automatic Storage Management (ASM)
Specify ASM SYS Password and Confirm
Configure Automatic Storage Management
-------------------------------------------
Disk Group Name: eg. DATA
Redundancy -- check - External
Add Disks -- check - Candidate Disks and select from the listed disks.
Start Install and finally Execute Configuration Script
-----------------------------------------------------
/u01/app/oracle/product/10.2.0/asm/root.sh on all nodes eg. node1,node2
Oracle RAC Installation Step 2 - Perform Oracle Clusterware installation
$ /cdrom/clusterware/runInstaller
Specifying the Inventory Directory
-----------------------------------
-Enter the full path of the inventory directory
/u01/app/oracle/oraInventory
-Specify Operating System group name
oinstall
Specify Home Details
----------------------
Name : OraCrs10g_home
Path: /u01/crs1020
Product-Specific Prerequisite Checks
--------------------------------------
0 requirements to be verified (0 requirements, 0 warnings) -- otherwise verify the flagged items manually
Specify Cluster Configuration
-----------------------------
Cluster Name : cluster1
Cluster Nodes :
Public Node Name Private Node Name Virtual Host name
node1.us.oracle.com node1-priv.us.oracle.com node1-vip.us.oracle.com
node2.us.oracle.com node2-priv.us.oracle.com node2-vip.us.oracle.com
Specify Network Interface Usage
---------------------------------
Interface name Subnet Interface Type
eth0 192.68.10.128 Public
eth1 192.68.15.120 Do Not Use
eth2 191.2.10.110 Private
Oracle Cluster Registry File
-----------------------------
Specify Oracle Cluster Registry (OCR) Location -- the OCR stores cluster and database configuration information
check -- External Redundancy # in 10g the OCR cannot be stored in ASM, so place it on a raw device or a cluster file system
Specify OCR Location: /dev/raw/raw1 -- requires 100 MB of free space
Specify Voting Disk Location
-----------------------------
Specify Voting Disk Location -- the voting disk contains cluster membership information and arbitrates cluster ownership among the nodes of your cluster in the event of a network failure
check -- External Redundancy # like the OCR, the voting disk cannot be stored in ASM in 10g, so place it on a raw device or a cluster file system
Specify Voting Disk Location: /dev/raw/raw7 -- must be a different raw device from the OCR (this is the device backed up in the postinstallation tasks above)
Start Install
-------------
Run Configuration Scripts on All Nodes
---------------------------------------
/u01/app/oracle/oraInventory/orainstRoot.sh -- node1, node2
/u01/crs_10.2.0/root.sh -- node1, node2
Verifying Oracle Clusterware Installation
_____________________________________________________________
Check for Oracle Clusterware processes with the ps command and crsctl
-------------------------------------------------------------------------
$ ps -ef | grep css
$ crsctl check css
CSS appears healthy
$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
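A few more commands help confirm the OCR, voting disk, and managed resources at this point (all ship with Oracle Clusterware 10g):
$ olsnodes -n -- lists the cluster member nodes and their numbers
$ crsctl query css votedisk -- shows the configured voting disk location(s)
$ crs_stat -t -- tabular status of all Clusterware-managed resources
# ocrcheck -- verifies the integrity and location of the OCR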
Check the Oracle Clusterware startup entries in the /etc/inittab file
# cat /etc/inittab
#Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
Oracle RAC Installation Step 1 - Oracle Clusterware Installation and Configuration
NOTE: This Installation is on RHEL5/OEL5/CentOS5
Complete preinstallation tasks
System Hardware requirements
System Requirement
------------------------
At least 1 GB of physical memory is needed.
# grep MemTotal /proc/meminfo
A minimum of 1 GB of swap space is required.
# grep SwapTotal /proc/meminfo
The /tmp directory should be at least 400 MB.
# df -k /tmp
The Oracle Database 10g software requires up to 4 GB of disk space.
Network Hardware Requirement
----------------------------------
Each node must have at least two network adapters.
Each public network adapter must support TCP/IP.
The interconnect adapter must support User Datagram Protocol (UDP).
The host name and IP address associated with the public interface must be registered in the domain name service (DNS) or the /etc/hosts file.
Software requirements
Network Software Requirement
---------------------------------
Supported interconnect software protocols are required:
-TCP/IP
-UDP
-Reliable Datagram
-Token Ring (not supported on AIX platforms)
Package Requirements
-----------------------
Package versions are checked by the cluvfy utility.
RPMs needed for RHEL 4.0 64-bit
Hangcheck-timer Module Configuration
----------------------------------------
The hangcheck-timer module monitors the Linux kernel for hangs.
Make sure that the hangcheck-timer module is running on all nodes:
# /sbin/lsmod | grep -i hang
Add an entry to start the hangcheck-timer module on all nodes, if necessary:
# vi /etc/rc.local
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
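To confirm the module and the parameter values it was loaded with, a quick check (the load message in the system log records the tick and margin actually in effect):
# /sbin/lsmod | grep -i hang
# /sbin/modinfo hangcheck-timer -- lists the hangcheck_tick and hangcheck_margin parameters
# grep -i hangcheck /var/log/messages | tail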
Required UNIX Groups and Users
-----------------------------------
Create on each node.
user - oracle
group - dba, oinstall
# groupadd -g 500 oinstall
# groupadd -g 501 dba
# useradd -u 500 -d /home/oracle -g "oinstall" -G "dba" -m -s /bin/bash oracle
Verify the existence of the nobody nonprivileged user:
# grep nobody /etc/passwd
nobody:x:99:99:Nobody:/:/sbin/nologin
Environment configuration
The oracle User Environment
------------------------------
Set umask to 022.
Set the DISPLAY environment variable.
Set the ORACLE_BASE environment variable.
Set the TMP and TMPDIR variables, if needed.
$ cd
$ vi .bash_profile
umask 022
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
TMP=/u01/mytmp; export TMP
TMPDIR=$TMP; export TMPDIR
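After editing .bash_profile, re-source it and confirm the settings took effect; a small sanity check:
$ . ~/.bash_profile
$ env | grep -E 'ORACLE_BASE|TMP'
$ umask -- should report 0022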
User Shell Limits
-------------------
Add the following lines to the
/etc/security/limits.conf file:
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
Add the following line to the /etc/pam.d/login file:
session required /lib/security/pam_limits.so
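The Oracle preinstallation notes also commonly add a ulimit block for the oracle user to /etc/profile so that the higher limits apply at login; a sketch of that block:
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi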
Configuring for Remote Installation
------------------------------------
The OUI supports User Equivalence or Secure Shell (ssh)
for remote cluster installations. To configure user equivalence:
Edit the /etc/hosts.equiv file.
Insert both private and public node names for each node in your cluster.
#vi /etc/hosts.equiv
node1
node2
Test the configuration using rsh as the oracle user.
$ rsh node1 uname -r
$ rsh node2 uname -r
To configure Secure Shell:
--------------------------
Create the public and private keys on all nodes:
[node1]$ /usr/bin/ssh-keygen -t dsa
[node2]$ /usr/bin/ssh-keygen -t dsa
Concatenate id_dsa.pub from all nodes into the authorized_keys file on the first node:
[node1]$ ssh node1 "cat ~/.ssh/id_dsa.pub" >> ~/.ssh/authorized_keys
[node1]$ ssh node2 "cat ~/.ssh/id_dsa.pub" >> ~/.ssh/authorized_keys
Copy the authorized_keys file to the other nodes:
[node1]$ scp ~/.ssh/authorized_keys node2:/home/oracle/.ssh/
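Before starting the OUI, test passwordless ssh from each node to every public and private node name so that no password or host-key prompts appear during the remote installation, for example:
[node1]$ ssh node2 date
[node1]$ ssh node2-priv date
[node2]$ ssh node1 date
[node2]$ ssh node1-priv date
Each command should print the remote date without prompting.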
Linux Operating Systems Parameters
--------------------------------------
Parameter Value File
semmsl 250 /proc/sys/kernel/sem
semmns 32000 /proc/sys/kernel/sem
semopm 100 /proc/sys/kernel/sem
semmni 128 /proc/sys/kernel/sem
shmall 2097152 /proc/sys/kernel/shmall
shmmax ½ physical memory /proc/sys/kernel/shmmax
shmmni 4096 /proc/sys/kernel/shmmni
file-max 65536 /proc/sys/fs/file-max
rmem_max 262144 /proc/sys/net/core/rmem_max
rmem_default 262144 /proc/sys/net/core/rmem_default
wmem_max 262144 /proc/sys/net/core/wmem_max
wmem_default 262144 /proc/sys/net/core/wmem_default
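These values are normally made persistent in /etc/sysctl.conf and loaded with sysctl -p; a sketch built from the table above (kernel.sem takes semmsl, semmns, semopm, and semmni in that order; the shmmax value shown assumes 1 GB of physical memory):
# vi /etc/sysctl.conf
kernel.sem = 250 32000 100 128
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
fs.file-max = 65536
net.core.rmem_max = 262144
net.core.rmem_default = 262144
net.core.wmem_max = 262144
net.core.wmem_default = 262144
# /sbin/sysctl -p -- applies the settings without a reboot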
# ipcs -l
------ Shared Memory Limits --------
max number of segments = 4096 // SHMMNI
max seg size (kbytes) = 32768 // SHMMAX
max total shared memory (kbytes) = 8388608 // SHMALL
min seg size (bytes) = 1
------ Semaphore Limits --------
max number of arrays = 1024 // SEMMNI
max semaphores per array = 250 // SEMMSL
max semaphores system wide = 256000 // SEMMNS
max ops per semop call = 32 // SEMOPM
semaphore max value = 32767
------ Messages: Limits --------
max queues system wide = 1024 // MSGMNI
max size of message (bytes) = 65536 // MSGMAX
default max size of queue (bytes) = 65536 // MSGMNB
Cluster Setup Tasks
---------------------
View the Certifications by Product section at http://metalink.oracle.com/.
Verify your high-speed interconnects.
Determine the shared storage (disk) option for your system:
-OCFS or other shared file system solution
-Raw devices
-ASM
ASM cannot be used for the OCR and Voting Disk files!
Install the necessary operating system patches.
Obtaining OCFS (Optional)
----------------------------
To get OCFS for Linux, visit the Web site at http://oss.oracle.com/projects/ocfs/files.
Download the following Red Hat Package Manager (RPM) packages:
ocfs-support-1.0-n.i386.rpm
ocfs-tools-1.0-n.i386.rpm
Download the following RPM kernel module:
ocfs-2.4.21-EL-typeversion.rpm, where typeversion is the Linux version.
Using Raw Partitions
----------------------
Install shared disk.
Identify the shared disks to use.
Partition the device
# fdisk -l
Disk /dev/sda: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
# fdisk /dev/sda
Number of Partitions Partition Size(MB) Purpose
1 500 SYSTEM tablespace
1 300+250 per Inst. SYSAUX tablespace
1 per Inst. 500 UNDOTBSn tablespace
1 160 EXAMPLE tablespace
1 120 USERS tablespace
2 per Inst. 120 2 online Redo Logs per Inst.
2 120 First and Second controlfile
1 250 TEMP tablespace
1 5 Server parameter file (SPFILE)
1 5 Password file
1 100 Volume for OCR
1 20 Clusterware voting disk
Binding the Partitions
Identify the devices that are already bound:
# /usr/bin/raw -qa
Edit the /etc/sysconfig/rawdevices file:
# cat /etc/sysconfig/rawdevices
# raw device bindings
/dev/raw/raw1 /dev/sda1
...
Adjust the ownership and permissions of the OCR raw device to root:dba and 640, respectively.
Adjust the ownership and permissions of all other raw devices to oracle:dba and 660, respectively.
Execute the rawdevices service command to activate the bindings, as shown in the sketch below.
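A sketch of the ownership, permission, and binding commands implied above, assuming /dev/raw/raw1 is the OCR device and /dev/raw/raw2 is one of the remaining raw devices:
# chown root:dba /dev/raw/raw1
# chmod 640 /dev/raw/raw1
# chown oracle:dba /dev/raw/raw2 -- repeat for each remaining raw device
# chmod 660 /dev/raw/raw2
# service rawdevices restart -- re-reads /etc/sysconfig/rawdevices and applies the bindings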
Raw Device Mapping file
-------------------------
Create a database directory and set proper permissions:
# mkdir -p $ORACLE_BASE/oradata/dbname
# chown oracle:oinstall $ORACLE_BASE/oradata
# chmod 775 $ORACLE_BASE/oradata
Edit the $ORACLE_BASE/oradata/dbname/dbname_raw.conf file:
# cd $ORACLE_BASE/oradata/dbname
# vi dbname_raw.conf
Set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
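A minimal sketch of the mapping file and the environment variable; the keywords name the database files, and the raw devices shown are purely illustrative (raw1 and raw7 are kept for the OCR and voting disk):
system=/dev/raw/raw2
sysaux=/dev/raw/raw3
undotbs1=/dev/raw/raw4
undotbs2=/dev/raw/raw5
temp=/dev/raw/raw6
users=/dev/raw/raw8
redo1_1=/dev/raw/raw9
redo1_2=/dev/raw/raw10
redo2_1=/dev/raw/raw11
redo2_2=/dev/raw/raw12
control1=/dev/raw/raw13
control2=/dev/raw/raw14
spfile=/dev/raw/raw15
pwdfile=/dev/raw/raw16
$ export DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf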
Verifying Cluster Setup with cluvfy
Install the cvuqdisk rpm required for cluvfy:
# su root
# cd /stage/10201-production/clusterware/rpm
# export CVUQDISK_GRP=dba
# rpm -iv cvuqdisk-1.0.1-1.rpm
Run the cluvfy utility as the oracle user as shown below:
$ cd /u01/stage/10gR2/clusterware/cluvfy
$ ./runcluvfy.sh stage -post hwos -n all -verbose
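runcluvfy.sh is also useful immediately before the Clusterware installation itself; typical pre-installation checks with the node names used throughout these notes:
$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
$ ./runcluvfy.sh comp nodecon -n node1,node2 -verbose -- checks node connectivity across all interfaces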