Setup Solaris Cluster

*****************
# Pre-tasks

1. Label all shared disks with the EFI disk type.

2. Set the hostname on each server; the server and cluster hostnames are added to /etc/hosts on every server in step 4.

Host-1
– svccfg -s system/identity:node listprop config
– svccfg -s system/identity:node setprop config/nodename="host-1"
– svccfg -s system/identity:node setprop config/loopback="host-1"
– svcadm restart system/identity:node

Host-2
– svccfg -s system/identity:node listprop config
– svccfg -s system/identity:node setprop config/nodename="host-2"
– svccfg -s system/identity:node setprop config/loopback="host-2"
– svcadm restart system/identity:node

3. Set up the Solaris and ha-cluster package repositories

– pkg set-publisher -g http://pkg.oracle.com/solaris/release/ solaris
– pkg set-publisher -g file:///repo/cluster/repo/ ha-cluster

4. Add host entries to /etc/hosts on host-1 and host-2

# hosts
10.0.0.11 host-1
10.0.0.12 host-2
10.0.0.13 host-cluster

# External storage
1.1.1.11 int_host-1
1.1.1.12 int_host-2
1.1.1.13 int_SCSI-server

5. Install the Oracle Solaris Cluster packages on host-1 and host-2

– svccfg -s name-service/switch setprop config/host = astring:'("files dns")'
– svccfg -s name-service/switch setprop config/ipnodes = astring:'("files dns")'
– svcadm refresh svc:/system/name-service/switch
– pkg set-publisher -p file:///repo/cluster/repo/
– pkg install consolidation/cacao/cacao-incorporation
– pkg install ha-cluster-full
– pkg verify -v
– pkg fix
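
A quick check (a sketch, assuming the publisher and package names used above) that both publishers are configured and the cluster package installed:
– pkg publisher
– pkg list ha-cluster-full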

6. Run the following preparation commands on host-1 and host-2 before configuring the cluster

– netadm enable -p ncp defaultfixed
– svccfg -s name-service/switch setprop config/host = astring:'("cluster files")'
– svccfg -s name-service/switch setprop config/ipnodes = astring:'("cluster files dns")'
– svccfg -s name-service/switch setprop config/netmask = astring:'("cluster files")'
– svcadm refresh svc:/system/name-service/switch
– svccfg -s name-service/switch listprop config/host
– svccfg -s name-service/switch listprop config/ipnodes
– svccfg -s name-service/switch listprop config/netmask

– ipadm set-prop -p smallest_anon_port=9000 tcp
– ipadm set-prop -p smallest_anon_port=9000 udp
– ipadm set-prop -p largest_anon_port=65500 tcp
– ipadm set-prop -p largest_anon_port=65500 udp
– ipadm set-prop -p max_buf=1048576 tcp
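
To confirm the TCP/UDP tunables took effect (a sketch; same property names as set above):
– ipadm show-prop -p smallest_anon_port,largest_anon_port tcp
– ipadm show-prop -p smallest_anon_port,largest_anon_port udp
– ipadm show-prop -p max_buf tcp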

– ipadm set-prop -p hostmodel=weak ipv4
– ipadm set-prop -p hostmodel=weak ipv6
– ipadm show-prop -p hostmodel ip

– svccfg -s svc:/network/rpc/bind:default setprop config/local_only = boolean: false
– svcadm restart svc:/network/rpc/bind:default
– svcadm enable svc:/network/rpc/scrinstd:default
– svccfg -s network/rpc/bind listprop config/local_only
– svccfg -s rpc/bind listprop config/enable_tcpwrappers
– svcs svc:/network/rpc/bind:default
– svcs svc:/network/rpc/scrinstd:default

7. Reboot host-1 and host-2 to apply the changes

– reboot

8. Verify all services

– svcs -xv

# host-1
– /usr/cluster/bin/clauth enable -n host-2

# host-2
– /usr/cluster/bin/clauth enable -n host-1

———————————————————————————————
9. On the iSCSI server, map and share the data and quorum disks to host-1 and host-2

Add the iSCSI disks on host-1
– svcs -a| grep iscsi
– svcadm restart svc:/network/iscsi/initiator:default
– iscsiadm add discovery-address 1.1.1.13:3260
– iscsiadm list discovery-address -v
– iscsiadm add static-config iqn.2005-10.org.freenas.ctl:target1,1.1.1.13:3260
– iscsiadm list discovery
– iscsiadm list target
– iscsiadm modify discovery --static enable
– devfsadm -i iscsi

Add the iSCSI disks on host-2
– svcs -a| grep iscsi
– svcadm restart svc:/network/iscsi/initiator:default
– iscsiadm add discovery-address 1.1.1.13:3260
– iscsiadm list discovery-address -v
– iscsiadm add static-config iqn.2005-10.org.freenas.ctl:target2,1.1.1.13:3260
– iscsiadm list discovery
– iscsiadm list target
– iscsiadm modify discovery --static enable
– devfsadm -i iscsi
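
On each host, confirm the new LUNs are visible before continuing (a sketch; device names will differ in your environment):
– iscsiadm list target -S
– echo | format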

Remove a static iSCSI config (if needed)
– iscsiadm remove static-config iqn.2005-10.org.freenas.ctl:target1
– iscsiadm remove static-config iqn.2005-10.org.freenas.ctl:target2

Remove an iSCSI discovery address (if needed)
– iscsiadm remove discovery-address 1.1.1.13:3260
———————————————————————————————

Set up Solaris Cluster

*** All shared disks must use the EFI label ***

10. Run scinstall on host-2

– scinstall
——————————————————————————–

*** Main Menu ***

Please select from one of the following (*) options:

* 1) Create a new cluster or add a cluster node
2) Update this cluster node
3) Manage a dual-partition update
* 4) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option: 1
——————————————————————————–
*** New Cluster and Cluster Node Menu ***

Please select from any one of the following options:

1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster

?) Help with menu options
q) Return to the Main Menu

Option: 1
——————————————————————————–
*** Create a New Cluster ***

This option creates and configures a new cluster.

Press Control-D at any time to return to the Main Menu.

Do you want to continue (yes/no) [yes]? yes
——————————————————————————–
>>> Typical or Custom Mode <<<

This tool supports two modes of operation, Typical mode and Custom
mode. For most clusters, you can use Typical mode. However, you might
need to select the Custom mode option if not all of the Typical mode
defaults can be applied to your cluster.

For more information about the differences between Typical and Custom
modes, select the Help option from the menu.

Please select from one of the following options:

1) Typical
2) Custom

?) Help
q) Return to the Main Menu

Option [1]: 1
——————————————————————————–
>>> Cluster Name <<<

Each cluster has a name assigned to it. The name cannot contain
spaces. Each cluster name should be unique within the namespace of
your enterprise.

What is the name of the cluster you want to establish? Cluster_test
——————————————————————————–
>>> Cluster Nodes <<<

This Oracle Solaris Cluster release supports a total of up to 16
nodes.

List the names of the other nodes planned for the initial cluster
configuration. List one node name per line. When finished, type
Control-D:

Node name (Control-D to finish): host-1

Node name (Control-D to finish): host-2
Node name (Control-D to finish): ^D

This is the complete list of nodes:

host-2
host-1

Is it correct (yes/no) [yes]? yes
——————————————————————————–
>>> Cluster Transport Adapters and Cables <<<

You must identify the cluster transport adapters which attach this
node to the private cluster interconnect.

Select the first cluster transport adapter for "host-2":

1) net1
2) net2
3) Other

Option: 1
——————————————————————————–
Searching for any unexpected network traffic on "net1" … Done
Verification completed. No traffic was detected over a 10 second
sample period.

Select the second cluster transport adapter for "host-2":

1) net2
2) Other

Option: 1
——————————————————————————–
>>> Resource Security Configuration <<<

The execution of a cluster resource is controlled by the setting of a
global cluster property called resource_security. When the cluster is
booted, this property is set to SECURE.

Resource methods such as Start and Validate always run as root. If
resource_security is set to SECURE and the resource method executable
file has non-root ownership or group or world write permissions,
execution of the resource method fails at run time and an error is
returned.

Resource types that declare the Application_user resource property
perform additional checks on the executable file ownership and
permissions of application programs. If the resource_security property
is set to SECURE and the application program executable is not owned
by root or by the configured Application_user of that resource, or the
executable has group or world write permissions, execution of the
application program fails at run time and an error is returned.

Resource types that declare the Application_user property execute
application programs according to the setting of the resource_security
cluster property. If resource_security is set to SECURE, the
application user will be the value of the Application_user resource
property; however, if there is no Application_user property, or it is
unset or empty, the application user will be the owner of the
application program executable file. The resource will attempt to
execute the application program as the application user; however a
non-root process cannot execute as root (regardless of property
settings and file ownership) and will execute programs as the
effective non-root user ID.

You can use the "clsetup" command to change the value of the
resource_security property after the cluster is running.

Press Enter to continue: Enter
——————————————————————————–
>>> Quorum Configuration <<<

Every two-node cluster requires at least one quorum device. By
default, scinstall selects and configures a shared disk quorum device
for you.

This screen allows you to disable the automatic selection and
configuration of a quorum device.

You have chosen to turn on the global fencing. If your shared storage
devices do not support SCSI, such as Serial Advanced Technology
Attachment (SATA) disks, or if your shared disks do not support
SCSI-2, you must disable this feature.

If you disable automatic quorum device selection now, or if you intend
to use a quorum device that is not a shared disk, you must instead use
clsetup(1M) to manually configure quorum once both nodes have joined
the cluster for the first time.

Do you want to disable automatic quorum device selection (yes/no) [no]? yes
——————————————————————————–
Is it okay to create the new cluster (yes/no) [yes]? yes
——————————————————————————–
During the cluster creation process, cluster check is run on each of
the new cluster nodes. If cluster check detects problems, you can
either interrupt the process or check the log files after the cluster
has been established.

Interrupt cluster creation for cluster check errors (yes/no) [no]? no
——————————————————————————–
Cluster Creation

Log file – /var/cluster/logs/install/scinstall.log.5711

Starting discovery of the cluster transport configuration.

The following connections were discovered:

host-2:net1 switch1 host-1:net1
host-2:net2 switch2 host-1:net2

Completed discovery of the cluster transport configuration.

Started cluster check on “host-2”.
Started cluster check on “host-1”.
cluster check failed for “host-2”.
cluster check failed for “host-1”.

The cluster check command failed on both of the nodes.

Refer to the log file for details.
The name of the log file is /var/cluster/logs/install/scinstall.log.5711.

Configuring “host-1” … done
Rebooting “host-1” … done

Configuring “host-2” … done
Rebooting "host-2" …

Log file – /var/cluster/logs/install/scinstall.log.5711

Rebooting …
——————————————————————————–

Verify all services after host-1 and host-2 have automatically rebooted.

– svcs -xv
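
Once both nodes are back up, check that they both joined the cluster (a sketch):
– /usr/cluster/bin/clnode status
– /usr/cluster/bin/cluster status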

——————————————————————————–
11. Add the quorum disk

Scan disks into the cluster
– cldev list -v
– cldev clear
– cldev populate

Add quorum
– cldev show d4
– clquorum add d4
– clquorum list -v
– clquorum status

## Check & change cluster mode
– cluster show -t global | grep installmode
– cluster set -p installmode=disabled
– cluster set -p installmode=enabled

12. Create cluster resource group

clresourcegroup create -n [nodename1],[nodename2] [cluster resource group name]
– clresourcegroup create -n host-1,host-2 -p RG_description="osc-rsg test" osc-rsg

clresourcegroup manage [cluster resource group name]
– clresourcegroup manage osc-rsg

clresourcegroup online -M [cluster resource group name]
– clresourcegroup online -M osc-rsg
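
Confirm the group is managed and online (a sketch):
– clresourcegroup status osc-rsg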

13. Create the logical hostname resource

clreslogicalhostname create -g [cluster resource group name] -h [hostname_VIP] [logicalhostname resource name]
– clreslogicalhostname create -g osc-rsg -h host-cluster logical_resource
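
Once the resource is online, the virtual IP should answer on the node hosting the group (a quick check, using the host-cluster entry from step 4):
– clresource status logical_resource
– ping host-cluster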

14. Create the HAStoragePlus resource for the shared pool

Register the SUNW.HAStoragePlus resource type
– clresourcetype register SUNW.HAStoragePlus
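
The shared ZFS pool referenced below (dbpool) must already exist on the shared iSCSI data disk. A minimal sketch (the disk name is a placeholder; take the actual device from the format or cldev list output):
– zpool create dbpool [shared iSCSI data disk]
– zpool export dbpool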

clresource create -g [cluster resource group name] -t SUNW.HAStoragePlus -p zpools=[Share pool] -p Resource_dependencies="[logicalhostname resource name]" [HAStoragePlus resource name]
– clresource create -g osc-rsg -t SUNW.HAStoragePlus -p zpools=dbpool -p Resource_dependencies="logical_resource" hasp_resource

Test switching the resource group
– clresourcegroup switch -n host-2 osc-rsg
– clresourcegroup switch -n host-1 osc-rsg
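
After each switch, confirm the group and the pool moved together (a sketch; run zpool list on the node that now hosts the group):
– clresourcegroup status osc-rsg
– zpool list dbpool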

15. Install Oracle database

16. Verify the Oracle installation

– su - "oracle_user"
– ls -l $ORACLE_HOME/bin/oracle

17. Enable access for the user and password to be used for fault monitoring

# sqlplus "/ as sysdba"

sql> create user hacluster identified by hacluster;
sql> alter user hacluster default tablespace system quota 1m on system;
sql> grant select on v_$sysstat to hacluster;
sql> grant select on v_$archive_dest to hacluster;
sql> grant select on v_$database to hacluster;
sql> grant create session to hacluster;
sql> grant create table to hacluster;
sql> exit;
#
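
A quick test that the fault-monitoring account works (a sketch; the hacluster user and the v$sysstat grant come from the statements above):

# sqlplus hacluster/hacluster

sql> select count(*) from v$sysstat;
sql> exit;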

18. Create the Oracle listener resource

Register the SUNW.oracle_listener resource type
– clresourcetype register SUNW.oracle_listener

clresource create -g [cluster resource group name] -t SUNW.oracle_listener -p ORACLE_HOME=[Oracle home] -p LISTENER_NAME=[Listener name] -p resource_dependencies=[HAStoragePlus resource name] [lsnrctl resource name]
– clresource create -g osc-rsg -t SUNW.oracle_listener -p ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/db_1 -p LISTENER_NAME=ORCL -p resource_dependencies=hasp_resource lsnrctl_resource
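
Confirm the listener resource comes online (a sketch):
– clresource status lsnrctl_resource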

19. Create the Oracle server resource

Register the SUNW.oracle_server resource type
– clresourcetype register SUNW.oracle_server

On host-2, import dbpool
– zpool import dbpool

clresource create -g [cluster resource group name] -t SUNW.oracle_server -p ORACLE_SID=[ORACLE SID] -p ORACLE_HOME=[ORACLE HOME] -p Alert_log_file=[ORACLE alert log] -p Connect_string=[user/password] -p resource_dependencies=[lsnrctl resource name] [oracle resource name]
– clresource create -g osc-rsg -t SUNW.oracle_server -p ORACLE_SID=orcl -p ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/db_1 -p Alert_log_file=/u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log -p Connect_string=hacluster/hacluster -p resource_dependencies=lsnrctl_resource oracle_resource

On host-2, export dbpool
– zpool export dbpool

Test switching the resource group
– clresourcegroup switch -n host-2 osc-rsg
– clresourcegroup switch -n host-1 osc-rsg
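
Finally, an overall health check of the cluster and its resources (a sketch):
– cluster status
– clresourcegroup status
– clresource status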