CFS Crucial Package Failed: Unable to Join Cluster

We will look at how the errors above can be resolved in as short a time as possible, because troubleshooting can drag on and a rebuild is sometimes the more efficient way out of this kind of situation. I used to face these issues after patching activities, when the OS would not boot even from the backup image or into single-user mode. The errors above occurred after the OS was rebuilt and the cluster packages were freshly installed. The node appeared to hang while trying to bring up the CFS package. The package log gave no clues; it was simply stuck starting GAB (Group Membership Services/Atomic Broadcast), the service Serviceguard uses to communicate between nodes in the same cluster.

Below is a sample of the CFS package log:

06/29/19 02:32:56 Monitoring vxconfigd (pid= 532) every 20 secs
06/29/19 02:32:56 Stopping GAB
06/29/19 02:32:56 Stopping GAB.. Done
06/29/19 02:32:56 Stopping LLT
06/29/19 02:32:56 Stopping LLT.. Done
06/29/19 02:32:56 rm -f /etc/llttab /etc/llthosts /etc/gabtab
06/29/19 02:32:56 Starting service SG-CFS-cmvxpingd
06/29/19 02:32:56 cmrunserv SG-CFS-cmvxpingd >> /etc/cmcluster/cfs/SG-CFS-pkg.log 2>&1 /usr/lbin/cmvxpingd -t 132
06/29/19 02:32:56 rm -f /var/adm/cmcluster/cmvxd.socket
06/29/19 02:32:56 Starting service SG-CFS-cmvxd
06/29/19 02:32:56 cmrunserv SG-CFS-cmvxd >> /etc/cmcluster/cfs/SG-CFS-pkg.log 2>&1 /usr/lbin/cmvxd run -s /var/adm/cmcluster/cmvxd.socket -t 132
06/29/19 02:32:56 Creating LLT configuration
06/29/19 02:32:56 mktemp -d /etc
06/29/19 02:32:56 touch /etc/006771
06/29/19 02:32:56 chmod 644 /etc/006771
06/29/19 02:32:56 chmod 444 /etc/006771
06/29/19 02:32:56 mv /etc/006771 /etc/llttab
06/29/19 02:32:56 touch -r /etc/cmcluster/cfs/.SG-CFS-pkg.ref /etc/llttab
06/29/19 02:32:56 Creating GAB configuration
06/29/19 02:32:56 mktemp -d /etc
06/29/19 02:32:56 touch /etc/006788
06/29/19 02:32:56 chmod 644 /etc/006788
06/29/19 02:32:56 chmod 444 /etc/006788
06/29/19 02:32:56 mv /etc/006788 /etc/gabtab
06/29/19 02:32:56 touch -r /etc/cmcluster/cfs/.SG-CFS-pkg.ref /etc/gabtab
06/29/19 02:32:56 chmod 544 /etc/gabtab
06/29/19 02:32:56 Creating initial LLT hosts file
06/29/19 02:32:56 mktemp -d /etc
06/29/19 02:32:56 touch /etc/006808
06/29/19 02:32:56 chmod 644 /etc/006808
06/29/19 02:32:56 chmod 444 /etc/006808
06/29/19 02:32:56 mv /etc/006808 /etc/llthosts
06/29/19 02:32:56 touch -r /etc/cmcluster/cfs/.SG-CFS-pkg.ref /etc/llthosts
06/29/19 02:32:56 Starting Veritas stack
06/29/19 02:32:56 /etc/cmcluster/cfs/vx-modules.1 start
06/29/19 02:32:56 /sbin/init.d/llt start
06/29/19 02:32:56 Starting LLT
06/29/19 02:33:04 /sbin/init.d/gab start
06/29/19 02:33:04 Starting GAB

After that, a kernel panic occurs, with "crucial package failed" messages appearing just before the node reboots. For emergency remediation, the cluster can be brought up on one node by running the command below:

# cmruncl -n <nodename>
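
Before deciding to rebuild, it can help to confirm where the stack is actually stuck. As a rough sketch (assuming the LLT/GAB utilities are in /sbin, where the Serviceguard CFS bundle installs them), the heartbeat layers can be inspected on the surviving node:

# /sbin/lltstat -nvv | more        (LLT link status as seen from this node)
# /sbin/gabconfig -a               (GAB port memberships; missing ports show where the join stalls)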

Cloning Partner Node OS

I found that cloning the partner node's OS is the fastest and most efficient way to solve the "crucial package failed" issue on CFS. Hunting for a fix for the incompatibility of the Veritas filesystem and cluster components (GAB, vxfen and LLT) between the nodes in the cluster was a waste of time, as I could not find any answer even in the Veritas manuals or on the web. The hang during CFS startup was caused by an incompatible version of the cluster components: 6.10 on the rebuilt node versus 5.0.1 on the existing running node. Below are the filesystem bundle versions on the existing running node:

Nodename:home/userid$ swlist|grep -i vx

  B9116DB                                       B.05.01.01     Full VxVM License for Veritas Volume Manager 5.0.1

  Base-VXFS                                     B.11.31        Base VxFS File System 4.1 Bundle for HP-UX

  Base-VxFS-501                                 B.05.01.03     Veritas File System Bundle 5.0.1 for HP-UX

  Base-VxTools-501                              B.05.01.04     VERITAS Infrastructure Bundle 5.0.1 for HP-UX

  Base-VxVM-501                                 B.05.01.04     Base VERITAS Volume Manager Bundle 5.0.1 for HP-UX

Nodename:home/userid$

Below are the steps for cloning the partner node's OS:

1) Backup image on the partner node.

2) Restore the image on this node.

3) Setting up the network configurations.

4) Bring the node up into the cluster.

 

Backing up an Image on Existing Partner Node

The most common way to back up an image is make_net_recovery, which can be run as below:

/opt/ignite/bin/make_net_recovery -s Ignite-UX_server
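
By default only an essential subset of the root volume group is archived; to capture the whole of vg00 for a full clone, a hedged example would be (server name is a placeholder; check the make_net_recovery(1M) man page for the options supported by your version):

/opt/ignite/bin/make_net_recovery -s <ignite_server> -x inc_entire=vg00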

Restore the image on this node.

dbprofile -dn igniteboot -sip <server_ip_address> -cip <node_ip_address> -gip <node_gateway> -m <node_netmask> -b "/opt/ignite/boot/nbp.efi"
Verify the details of the dbprofile by running "dbprofile" at the EFI shell prompt. After that, we may boot over the network using lanboot:
lanboot select -dn igniteboot

Setting up the network configurations.
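
Since the restored image carries the partner node's identity, the network parameters have to be set back to this node's own values. A minimal sketch of the usual places to touch (file and variable names are the standard HP-UX ones; the values are placeholders):

# vi /etc/rc.config.d/netconf      (HOSTNAME, IP_ADDRESS[0], SUBNET_MASK[0], ROUTE_GATEWAY[0])
# vi /etc/hosts                    (correct this node's own entry)
# /sbin/set_parms hostname         (or simply reboot so the new identity takes effect)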

Bring the node up into the cluster.
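
Once the node has its own identity and can see the cluster heartbeat networks, it can be joined back into the running cluster; as a sketch:

# cmrunnode <nodename>
# cmviewcl -v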

to be continued…

 

References

http://unixmemoires.blogspot.com/2012/01/man-page-makenetrecovery.html

https://community.hpe.com/t5/Ignite-UX/how-to-do-make-net-recovery-from-my-server-to-a-remote-server/td-p/4782498#.XS3khntS82w

http://wiki-ux.info/wiki/How_does_a_make_net_recovery_looks_like

https://docstore.mik.ua/manuals/hp-ux/en/5992-5309/ch09s06.html

 

Mirroring Disk to Migrate Data from an Old Storage Array to a New Array using VxVM in HP-UX

Basic Steps for LUN Migration

There were several reasons why the migration needed to be done, among them storage-related issues flagged by the monitoring tool or by the OS itself. Below is an example of captured errors pointing to storage issues:

node :home/userid$ grep -i sync /var/adm/syslog/syslog.log|tail

Jul  5 00:56:22 node vmunix: Asynchronous write failed on LUN (dev=0x1000030)

Jul  5 09:05:51 node vmunix: Asynchronous write failed on LUN (dev=0x100000f)

Jul  6 16:43:58 node vmunix: Asynchronous write failed on LUN (dev=0x1000030)

Jul  6 20:26:57 node vmunix: Asynchronous write failed on LUN (dev=0x1000030)

Jul  7 04:04:35 node vmunix: Asynchronous write failed on LUN (dev=0x1000030)

Jul  7 11:47:17 node vmunix: Asynchronous write failed on LUN (dev=0x1000030)

node :home/userid$

 

Below are the basic migration steps, regardless of the software or utility managing the volumes (LVM or VxVM):

1. Create LUNs on the new disk array
2. Present them to the HP-UX server
3. Add the LUNs into the appropriate volume groups
4. Mirror the data from the current LUNs to the new LUNs
5. Verify that the data has been successfully mirrored
6. Reduce the mirrors from the old LUNs
7. Reduce the old LUNs out of the VG
8. Repeat as needed for each VG

Step 1 will normally be handled by the storage team, with the LUNs allocated at the storage level.

 

2. Present them to the HP-UX server

We have to scan the I/O system for new LUNs using the command below:

# ioscan -fC disk

A sample of the output is shown below:

Node:home/userid$ ioscan -funC disk
Class I H/W Path Driver S/W State H/W Type Description
==================================================================

disk 311 0/0/0/5/0/0/2.1.54.0.0.3.1 sdisk CLAIMED DEVICE 3PARdataVV
/dev/dsk/c11t3d1 /dev/rdsk/c11t3d1

Install special device files and enable VxVM configuration daemon:

# insf -vC disk

# vxdctl enable

Initialize the newly added disk so it can be added to the disk group:

/opt/VRTS/bin/vxdisksetup -i c11t3d1

 

3. Add the LUNs into the appropriate volume groups

Associate the new disk with the disk group that is going to be mirrored:

vxdg -g dg01 adddisk dg01_disk02=c11t3d1

Bring it into the Volume Manager "world" using:
vxdctl enable

 

4. Mirror the data from the current LUNs to the new LUNs

vxassist -g dg01 mirror lvol1 dg01_disk02

If we encounter the errors below, it means we have to reset and refresh the in-core database of the Volume Manager:

VxVM vxassist ERROR V-5-1-1080 Unexpected inconsistency in configuration
Disk Access and Disk Media records don’t match

/usr/sbin/vxconfigd -k -m enable

 

5. Verify that the data has been successfully mirrored

Node:home/userid$ vxtask list
TASKID PTID TYPE/STATE PCT PROGRESS
161 ATCOPY/R 22.91% 0/69632000/15955968 PLXATT lvol1 lvol1-02 dg01
Node:home/userid$

If the volume has been successfully mirrored, there will no longer be a task running in the vxtask list. You may want to check the progress regularly during the mirroring process.
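
Besides vxtask, the plex states can also be checked; a quick sketch using the names from the example above:

# vxprint -g dg01 -ht lvol1        (both plexes should show ENABLED/ACTIVE once the sync is complete)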

 

6. Reduce the mirrors from the old LUNs

vxplex -g dg01 -o rm dis lvol2-01

 

7. Reduce the old LUNs out of the VG

vxdg -g dg01 rmdisk dg01_disk01

/opt/VRTS/bin/vxdisksetup -i c3t0d3

Initializing the old disk is part of a proper handover to storage support so they can reclaim the LUNs and re-use them for other purposes. If we do not initialize the disk, old data such as the old disk group configuration will still be present and will be detected when the disk is re-scanned. After initialization, the disk is considered unused. You can re-scan the disks using vxdisk -o alldgs list; if nothing is associated with a disk in the output, you know it is one of the old ones. Thanks.
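
As described above, a quick sketch of that final check (device name as used in this example):

# vxdisk scandisks
# vxdisk -o alldgs list | grep c3t0d3     (should show no disk group association for the released LUN)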

 

 

References:

http://etcfstab.com/hpux/hpux_san_add_vxvm.html
https://sort.veritas.com/ecls/umi/V-5-1-1080
https://vox.veritas.com/t5/Storage-Foundation/Unable-to-mirror-a-volume/td-p/503624
https://www.veritas.com/support/en_US/article.100023745
https://community.hpe.com/t5/LVM-and-VxVM/Mirror-data-with-Mirrordisk-UX-between-two-LUNs/td-p/5700095#.XSQ46Y9S82w
https://community.hpe.com/t5/LVM-and-VxVM/Moving-data-from-old-SAN-to-new-SAN/td-p/6718937#.XSQ4bY9S82w
https://vox.veritas.com/t5/Storage-Foundation/vxassist-multiple-volume-of-a-same-subdisk/td-p/644077
https://sort.veritas.com/public/documents/sf/5.0/hpux/manpages/vxvm/vxassist_1m.html
https://sort.veritas.com/public/documents/sf/5.1/aix/html/vxvm_admin/ch09s10s02.htm

Failed to start package crsp_s1, rollback steps

Symptoms

Node# tail /var/adm/cmcluster/log/crsp_s1.log
Sep 25 01:16:50 - Node "" *** /opt/cmcluster/SGeRAC/toolkit/crsp/toolkit_oc.sh called with start argument. ***
Sep 25 01:16:50 - Node "" : Starting Oracle Clusterware at Tue Sep 25 01:16:50 UTC 2018
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
Sep 25 01:16:50 - Node "" ERROR: Function oc_start_cmd: Failed to start Oracle Clusterware
Sep 25 01:16:50 [email protected] master_control_script.sh[5486]: ##### Failed to start package crsp_s1, rollback steps #####
Sep 25 01:16:50 - Node "" *** /opt/cmcluster/SGeRAC/toolkit/crsp/toolkit_oc.sh called with stop argument. ***
Sep 25 01:16:50 - Node "" : Stopping Oracle Clusterware at Tue Sep 25 01:16:50 UTC 2018
Sep 25 01:16:50 - Node "" Oracle Clusterware is already stopped
Sep 25 01:16:50 [email protected] master_control_script.sh[5486]: ###### Failed to start package for crsp_s1 ######

Node:home/ # cmviewcl

CLUSTER STATUS
<clustername> up

SITE_NAME Node_pri

NODE STATUS STATE
Node1 up running
Node2 up running

PACKAGE STATUS STATE AUTO_RUN NODE
prismp_sc up running enabled Node2

NODE STATUS STATE
Node3 up running

SITE_NAME Node_sec

NODE STATUS STATE
Node4 up running
Node5 up running
Node6 up running

MULTI_NODE_PACKAGES

PACKAGE STATUS STATE AUTO_RUN SYSTEM
SG-CFS-pkg up running enabled yes
SG-CFS-crsp_s1 up running enabled no
SG-CFS-crsp_s2 up running enabled no
crsp_s1 up (2/3) running enabled no
crsp_s2 up running enabled no
SG-CFS-prismp_s1 up running enabled no
SG-CFS-prismp_s2 down halted enabled no
prismp_s1 up (2/3) running enabled no
prismp_s2 down halted enabled no
Node:home/ #

Causes

It looks like a network connectivity issue, as per the log below:

Node1:/ $ tail /u01/app/grid/11203/log/Node1/cssd/ocssd.log
2018-09-21 10:47:08.187: [ CSSD][27]clssnmvDHBValidateNcopy: node 2, Node2, has a disk HB, but no network HB, DHB has rcfg 414478488, wrtcnt, 225000299, LATS 275224262, lastSeqNo 225000296, uniqueness 1519012441, timestamp 1537526827/1334746757
2018-09-21 10:47:08.187: [ CSSD][27]clssnmvDHBValidateNcopy: node 3, Node3, has a disk HB, but no network HB, DHB has rcfg 414478488, wrtcnt, 224639603, LATS 275224262, lastSeqNo 224639600, uniqueness 1519018579, timestamp 1537526827/1328775359
2018-09-21 10:47:08.190: [ CSSD][30]clssnmvDHBValidateNcopy: node 3, Node3, has a disk HB, but no network HB, DHB has rcfg 414478488, wrtcnt, 224639604, LATS 275224264, lastSeqNo 224639601, uniqueness 1519018579, timestamp 1537526827/1328775836
2018-09-21 10:47:08.197: [ CSSD][36]clssgmWaitOnEventValue: after CmInfo State val 3, eval 1 waited 0
2018-09-21 10:47:08.200: [ CSSD][33]clssnmvDHBValidateNcopy: node 3, Node3, has a disk HB, but no network HB, DHB has rcfg 414478488, wrtcnt, 224639605, LATS 275224274, lastSeqNo 224639602, uniqueness 1519018579, timestamp 1537526828/1328775956
2018-09-21 10:47:09.196: [ CSSD][30]clssnmvDHBValidateNcopy: node 2, Node2, has a disk HB, but no network HB, DHB has rcfg 414478488, wrtcnt, 225000300, LATS 275225270, lastSeqNo 225000021, uniqueness 1519012441, timestamp 1537526828/1334747680
2018-09-21 10:47:09.196: [ CSSD][30]clssnmvDHBValidateNcopy: node 3, Node3, has a disk HB, but no network HB, DHB has rcfg 414478488, wrtcnt, 224639607, LATS 275225270, lastSeqNo 224639604, uniqueness 1519018579, timestamp 1537526828/1328776846
2018-09-21 10:47:09.197: [ CSSD][27]clssnmvDHBValidateNcopy: node 2, Node2, has a disk HB, but no network HB, DHB has rcfg 414478488, wrtcnt, 225000302, LATS 275225272, lastSeqNo 225000299, uniqueness 1519012441, timestamp 1537526828/1334747769
2018-09-21 10:47:09.207: [ CSSD][36]clssgmWaitOnEventValue: after CmInfo State val 3, eval 1 waited 0
2018-09-21 10:47:09.210: [ CSSD][33]clssnmvDHBValidateNcopy: node 3, Node3, has a disk HB, but no network HB, DHB has rcfg 414478488, wrtcnt, 224639608, LATS 275225284, lastSeqNo 224639605, uniqueness 1519018579, timestamp 1537526829/1328776966
Node1:/ $

 

When we tried to ping the CI gateway, it failed:

Node1:11203/bin # ping CI-GW
PING CI-GW: 64 byte packets

 

Resolutions

The CI network interface is currently configured on lan1, so it needs to be moved to another working LAN interface whose link state is UP.
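
To identify a working interface to move the CI network onto, a sketch of the usual checks on HP-UX (the PPA number is a placeholder):

# lanscan                          (list LAN interfaces, hardware paths and states)
# lanadmin -x 2                    (speed, duplex and link state for PPA 2)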

After changing to another working LAN interface, it works fine:

Node1:11203/bin # ping CI-GW
PING CI-GW: 64 byte packets
64 bytes from CI-GW: icmp_seq=0. time=0. ms
64 bytes from CI-GW: icmp_seq=1. time=0. ms


Then the crsp toolkit can be started:

Node1:11203/bin # /opt/cmcluster/SGeRAC/toolkit/crsp/toolkit_oc.sh start
Sep 25 02:46:46 - Node "Node1" *** /opt/cmcluster/SGeRAC/toolkit/crsp/toolkit_oc.sh called with start argument. ***
Sep 25 02:46:46 - Node "Node1" : Starting Oracle Clusterware at Tue Sep 25 02:46:46 UTC 2018
Sep 25 02:46:46 – Node “Node1” Oracle Clusterware is already started
Node1:11203/bin #

After that, package switching for the crsp package needs to be enabled:

Node:11203/bin # cmmodpkg -e -v -n Node1 crsp_s1
Enabling node Node1 for switching of package crsp_s1
Successfully enabled package crsp_s1 to run on node Node1
cmmodpkg: Completed successfully on all packages specified
Node1:11203/bin # cmrunpkg crsp_s1
Package crsp_s1 is already running on all active nodes
cmrunpkg: All specified packages are running
Node1:11203/bin #

We may verify the running packages with the cmviewcl command:

Node1:11203/bin # cmviewcl

CLUSTER STATUS
<clustername> up

SITE_NAME Site_pri

NODE STATUS STATE
Node1 up running
Node2 up running

PACKAGE STATUS STATE AUTO_RUN NODE
prismp_sc up running enabled Node3

NODE STATUS STATE
Node3 up running

SITE_NAME Site_sec

NODE STATUS STATE
Node4 up running
Node5 up running
Node6 up running

MULTI_NODE_PACKAGES

PACKAGE STATUS STATE AUTO_RUN SYSTEM
SG-CFS-pkg up running enabled yes
SG-CFS-crsp_s1 up running enabled no
SG-CFS-crsp_s2 up running enabled no
crsp_s1 up running enabled no
crsp_s2 up running enabled no

#################################################

Unable to run package on node

Symptoms

When you try to bring up the package in Serviceguard, the package won't come up, failing with the errors below:

[[email protected] ~]# cmrunpkg <packagename>
Running package <packagename> on node node2
The package script for <packagename> failed with no restart. <packagename> should not be restarted
Unable to run package <packagename> on node node2
Check the syslog and pkg log files for more detailed information
cmrunpkg: Unable to start some package or package instances.

The same happens when we try to bring up the package on the other node.

Cause

When we look at the log file located in /usr/local/cmcluster/run/log/<packagename>.log, the errors below are found:

Sep 20 00:09:03 – Node “node2”: Exporting filesystem on /opt/apps
exportfs: internal: no supported addresses in nfs_client
exportfs: <ip_address>:/opt/apps: No such file or directory

exportfs: internal: no supported addresses in nfs_client
exportfs: <ip_address>:/opt/apps: No such file or directory

exportfs: internal: no supported addresses in nfs_client
exportfs: <ip_address>:/opt/apps: No such file or directory

exportfs: internal: no supported addresses in nfs_client
exportfs: <ip_address>:/opt/apps: No such file or directory

exportfs: internal: no supported addresses in nfs_client
exportfs: <ip_address>:/opt/apps: No such file or directory
ERROR: Function export_fs
ERROR: Failed to export -o rw @nfs1:/opt/apps
Sep 20 00:09:04 – Node “node2”: Unexporting filesystem on @nfs1:/opt/apps

## Failed to start package <packagename>, rollback steps #####
Sep 19 23:44:20 [email protected] tkit_module.sh[32107]: Install directory operation mode selected.
WARNING: Stoping rmtab synchronization proces: /usr/local/cmcluster/conf/<packagename>/sync_rmtab.PID does not exist
Sep 19 23:44:20 – Node “node2”: Unexporting filesystem on @nfs1:/opt/apps
exportfs: Could not find ‘@nfs1:/opt/apps’ to unexport.
ERROR: Function un_export_fs
ERROR: Failed to unexport @nfs1:/opt/apps

Sep 20 00:09:05 [email protected] master_control_script.sh[31933]: ###### Failed to start package for <packagename> ######

Check the status of the NFS services:

[[email protected] ]# /etc/init.d/nfs status
rpc.svcgssd is stopped
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
[[email protected]]#

The reason the cluster package won't start up is that the NFS services are stopped; they need to be running.

 

Resolutions

We may start the NFS services:

[[email protected]]# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: rpc.mountd: svc_tli_create: could not open connection for udp6
rpc.mountd: svc_tli_create: could not open connection for tcp6
rpc.mountd: svc_tli_create: could not open connection for udp6
rpc.mountd: svc_tli_create: could not open connection for tcp6
rpc.mountd: svc_tli_create: could not open connection for udp6
rpc.mountd: svc_tli_create: could not open connection for tcp6
[ OK ]
Starting NFS daemon: rpc.nfsd: address family inet6 not supported by protocol TCP
[ OK ]
Starting RPC idmapd: [ OK ]

Verify the NFS services:
[[email protected]]# /etc/init.d/nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 17790) is running…
nfsd (pid 17810 17809 17808 17807 17806 17805 17804 17803) is running…
rpc.rquotad (pid 17773) is running…

Then, the package can be run:
[[email protected]]# cmrunpkg <packagename>
Running package <packagename> on node node2
Successfully started package <packagename> on node node2
cmrunpkg: All specified packages are running
[[email protected]]#

Lastly, verify the status of the packages in the cluster:

[[email protected] ~]# cmviewcl

CLUSTER STATUS
<clustername> up

SITE_NAME Site1_pri

NODE STATUS STATE
node1 up running

SITE_NAME Site2_sec

NODE STATUS STATE
node2 up running

PACKAGE STATUS STATE AUTO_RUN NODE
<packagename> up running disabled node2
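
Since the package depends on NFS being up, it may also be worth enabling the service at boot so the same failure does not recur after a reboot (a RHEL-style sketch; adjust to the init system in use):

# chkconfig nfs on
# chkconfig --list nfs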

##################################################

Unable to Change Directory to the Mount Point as Root – Permission Denied on HP-UX

Hello… I will show you how to solve the "permission denied" error that appears when trying to change directory into a specific directory. Below is an example, already running as root:

# cd /usr/local/sap/tools/
ksh: /usr/local/sap/tools/: permission denied
# ll /usr/local/sap/tools/
/usr/local/sap/tools/ not found
#

When I tried to display all the mount points, there was no mount point matching the directory I wanted to change into except /usr, but I believe the directory in question does not live under /usr and must be coming over the network. On top of that, changing the mode of the directory did not work either, as per the example below:

# pwd
/usr/local
# chmod 755 sap/
chmod: can’t change sap/: Permission denied

When I look at the mounted partitions on a working server, I can see the mount point is NFS, imported from an NFS server; please see below:

tools-x.xx.xxx.net:/usr/local/sap
4145152 2287273 1741912 57% /usr/local/sap

To confirm, I checked the properties of the exported mount points on the NFS server:

#showmount -e <nfs_server>
export list for <nfs_server>:
/usr/local/sap (everyone)

So, from the result above, I know the mount point is exported to everyone, and there should be no issue mounting it from the client side.

Cause

The issue is that when I try to mount the NFS share on the client side, a "Device busy" error shows up:

# mount <nfs_server>:/usr/local/sap /usr/local/sap
nfs mount: /usr/local/sap: Device busy

And I can see the mount point above is already mounted:

# mount | grep -i 'local/sap'
/usr/local/sap on /etc/auto_direct ignore,direct,dev=4000044 on Fri Aug 31 15:49:08 2018
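
The "on /etc/auto_direct ignore,direct" portion of the output suggests the path is under the automounter's control; as a quick sketch, the corresponding direct map entry can be checked with:

# grep -i 'local/sap' /etc/auto_direct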

Resolution

This can be resolved by first unmounting the partition and then mounting it back properly. You may verify the mount point using the 'bdf' command, as per the example below:

# umount /usr/local/sap
# mount tools-<nfs_server>:/usr/local/sap /usr/local/sap
# bdf

tools-ent.<nfs_server>:/usr/local/sap
4145152 2287273 1741912 57% /usr/local/sap

Lastly, you can also change directory into the partition above and list its files without any problem.

Cloud Computing Awareness and Adoption in SME

A brief survey questionnaire that explains why cloud computing should or should not be adopted, to help participants better understand the purpose of cloud computing and to motivate us to share our thoughts.

This is one of the platforms where you can get the bigger picture behind this online survey, so take the time to consider what the data collected here means for your organization. Please go ahead with the survey form at the link below, and thanks for your time:

https://goo.gl/forms/ATwDZI6R7xmxIanS2

Brief of State of Network Security

  1. What is network security?

Network security is the process of taking measures to protect an organisation's network infrastructure from unauthorized access by creating a secure platform for servers, including mitigating risk to critical devices.

2. How are risk, threat and vulnerability related to each other?

Risk can be expressed as: Risk = Threat x Vulnerability. A threat is a potential harm that can exploit a vulnerability and/or intrude into the computer system, while a vulnerability is a weakness that may allow a threat to act on the system.

3. List the key characteristics of attacks?

Attacks are growing dramatically: cyber attack activity has increased exponentially, as have instances of malware.

Threats are more sophisticated: attacks have become more sophisticated and are often unexpected, because attackers stay one step ahead or exploit loopholes that have been taken for granted.

Known outnumbered by unknowns: focus on what is known, and always be ready for both known and unknown attacks.

Current approach is ineffective: the current approach is insufficient to address the level and type of attacks now occurring, due to the ever-changing nature of attacks.

What is the current approach to handling security?

Define the goals of the security principles (confidentiality, integrity and availability) in network security?

Confidentiality: Prevent the unauthorized disclosure of sensitive information.

Integrity: Prevent fabrication of information by unauthorized users, prevent unauthorized fabrication of information by authorized users, and preserve internal and external consistency.

Availability: Provide authorized users with timely and uninterrupted access to the information in the network system.

  1. What are the main reasons for unreported security breaches?

- To protect the company's reputation

- The company does not know when a breach has been committed.

2. Briefly describe two main types of attacks?

- Passive attacks: sniffing and information gathering.

- Active attacks: denial of service, breaking into a site.

3. What aspects make up a good cyber security approach for dealing with attacks?

The aspects of a good cyber security approach are:

- Management buy-in

- Policy development with regular updates and revisions

- Policy reviews

- Knowledgeable network staff

- Training

- Tested processes

- Third-party assessment

What is Kerberos?

Kerberos is an authentication protocol that involves three parties: a client, a server, and a Key Distribution Center (KDC) running an Authentication Server (AS) and a Ticket Granting Server (TGS), before a connection to the application is established.

The client first connects to the AS to obtain a TGS session key and a ticket-granting ticket. With these, the client requests a service (application) session key and service ticket from the TGS.

The client then presents its service ticket and session key material to the application server to initiate the connection between the client and the application server.
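
As a practical illustration (MIT Kerberos client tools; the principal name is a placeholder), obtaining and inspecting a ticket looks like this:

$ kinit user@EXAMPLE.COM           (contacts the AS and obtains a ticket-granting ticket)
$ klist                            (lists the TGT and any service tickets obtained through the TGS)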

 

 

Set up an SVM mirror in Solaris 10

Part Tag Flag Cylinders Size Blocks
0 root wm 70 – 1143 8.23GB (1074/0/0) 17253810
1 swap wu 3 – 69 525.56MB (67/0/0) 1076355
2 backup wm 0 – 1170 8.97GB (1171/0/0) 18812115
3 unassigned wu 0 0 (0/0/0) 0
4 unassigned wu 0 0 (0/0/0) 0
5 unassigned wu 0 0 (0/0/0) 0
6 unassigned wu 0 0 (0/0/0) 0
7 home wm 1144 – 1170 211.79MB (27/0/0) 433755
8 boot wu 0 – 0 7.84MB (1/0/0) 16065
9 alternates wu 1 – 2 15.69MB (2/0/0) 32130

Partition 0 is /
Partition 1 is swap
Partition 8 is /boot
Partition 9 is where the metadevice state database replicas will be placed

metadb -a -f -c3 /dev/dsk/c0d0s9

# metainit -f d12 1 1 c0d0s0

# metainit -f d22 1 1 c0d0s1

# metainit -f d32 1 1 c0d0s8

# metastat -p

# metainit d10 -m d12

# metaroot d10

# metainit d20 -m d22

# metainit d30 -m d32
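
Note that metaroot updates /etc/vfstab and /etc/system for the root metadevice only; the swap entry in /etc/vfstab normally has to be edited by hand before the reboot so that it points at the mirror (a sketch, assuming d20 is the swap mirror as above):

/dev/md/dsk/d20 - - swap - no -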

# shutdown -y -g0 -i6

Then create the metadevices for the other side of the mirror and attach them

metainit -f d11 1 1 c0d1s0
metainit -f d21 1 1 c0d1s1
metainit -f d31 1 1 c0d1s8

metattach d10 d11
metattach d20 d21
metattach d30 d31

metadb -a -f -c3 /dev/dsk/c0d1s9

Solaris Volume Manager (SVM) x86: How to Replace a Failed SCSI Disk Mirrored with SVM

Verify failed disk (in this example, c1t0d0 is the failed disk)

# metastat -c

#format

#tail /var/adm/messages

# metastat -c (We can see that the disk is no longer an active member of the mirror.)

 

Remove failed disk from existing mirror group

# metadetach <mirror> <submirror>

# iostat -iEn c1t0d0

#cfgadm -al

# cfgadm -c unconfigure c1::dsk/c1t0d0

Maybe there is a need to delete the metadb with 'metadb -d c1t0d0s7' before 'cfgadm -c unconfigure …' can complete.

This command will remove the block and character (raw) device nodes that the symbolic links in /dev/[r]dsk point to.

Physically replace the disk. Configure the new disk back into Solaris.

# cfgadm -c configure c1::dsk/c1t0d0

# ls -lL /dev/dsk/c1t0d0s*    <-- check the device nodes
# ls -lL /dev/rdsk/c1t0d0s*

# format

# iostat -iEn c1t0d0

if boot disk, run below:
# fdisk -b /usr/lib/fs/ufs/mboot /dev/rdsk/c1t0d0p0

if not, run below:
# fdisk /dev/rdsk/c1t0d0p0
# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
# /sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
# metadb
# metadb -d /dev/dsk/c1t0d0s7    <-- remove old metadb replicas
# metadb -a -c3 /dev/dsk/c1t0d0s7    <-- re-add new metadb replicas
# metadb
# metadevadm -u c1t0d0

#metainit -f d11 1 1 c1t0d0s0
#metainit -f d21 1 1 c1t0d0s1
#metainit -f d31 1 1 c1t0d0s3

#metattach d10 d11
#metattach d20 d21
#metattach d30 d31

#metastat -c     (below is the sample output)

d20        m 525MB d22 d21 (resync-19%)
d22 s 525MB c0d0s1
d21 s 525MB c0d1s1
d30        m 211MB d32 d31 (resync-33%)
d32 s 211MB c0d0s7
d31 s 211MB c0d1s7
d10       m 8.2GB d12 d11 (resync-0%)
d12 s 8.2GB c0d0s0
d11 s 8.2GB c0d1s0