Wednesday, 18 October 2017

scp is slow on Linux servers



When using scp to transfer large files between nodes, the transfer speed is very slow.

[1] Use iperf to test the network performance; it is fine.

# iperf -c 3.239.5.40
iperf: ignoring extra argument -- 3.239.5.40
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 3.34.11.129 port 5001 connected with 3.239.5.40 port 39813
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-11.0 sec  5.38 MBytes  4.11 Mbits/sec
[  5] local 3.34.11.129 port 5001 connected with 3.239.5.40 port 39837
[  5]  0.0-10.7 sec  3.88 MBytes  3.03 Mbits/sec
[  4] local 3.34.11.129 port 5001 connected with 3.239.5.40 port 39863
[  4]  0.0-10.9 sec  3.25 MBytes  2.51 Mbits/sec
[  5] local 3.34.11.129 port 5001 connected with 3.239.5.40 port 39909
[  5]  0.0-10.6 sec  4.25 MBytes  3.36 Mbits/sec
[  4] local 3.34.11.129 port 5001 connected with 3.239.5.40 port 41279
[  4]  0.0-10.8 sec  3.88 MBytes  3.01 Mbits/sec
[  5] local 3.34.11.129 port 5001 connected with 3.239.5.40 port 41591
[  5]  0.0-11.0 sec  4.25 MBytes  3.23 Mbits/sec

[2] Use the dd command to test I/O performance; it is also fine.

# dd if=/dev/zero of=/home/oracle/hugefile bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 0.750502 s, 1.4 GB/s


scp performance is affected by the overhead of encrypting and decrypting packets.

Using the -c (cipher) option of scp to specify a lighter cipher may speed up the file transfer.

scp -c arcfour filename root@destinationserver.com:/destination/path
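
Note: on newer OpenSSH releases the arcfour cipher has been removed, so the command above may fail with an "Unknown cipher type" error. A minimal sketch of choosing a faster supported cipher instead, assuming the file name and destination path are placeholders:

# ssh -Q cipher                                # list the ciphers the local ssh client supports
# scp -c aes128-gcm@openssh.com filename root@destinationserver.com:/destination/path

On CPUs with AES-NI, the AES-GCM ciphers are usually among the fastest choices.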


Thank you, and I hope this helps you. :)


Thursday, 24 August 2017

Replacing a failed Disk on Zpool and Error: Component system is busy, try again: failed to offline

                     Replacing a failed Disk on Zpool

Error: Component system is busy, try again: failed to offline

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


1. Check the zpool status

# zpool status
 pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scan: scrub repaired 0 in 1h12m with 0 errors on Mon Aug 21 03:12:50 2017
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  DEGRADED     0     0 7.37K  too many errors
errors: No known data errors

2. Offline the disk in the zpool

# zpool offline rpool  c1t0d0s0
# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scan: scrub repaired 0 in 1h12m with 0 errors on Mon Aug 21 03:12:50 2017
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  OFFLINE      0     0 7.37K

3. Unconfigure the disk

 # cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             scsi-sas     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t1d0                 disk         connected    configured   unknown
c2                             fc-fabric    connected    configured   unknown
 # cfgadm -c unconfigure c1::dsk/c1t0d0
Aug 24 00:33:45 corpt710 rcm_daemon[3599]: rcm script es_rcm.pl: VxVM vxdmpadm ERROR V-5-1-13080 Attempt to disable all paths through portid and enclosure failed. Last path to the disk can not be disabled.
cfgadm: Component system is busy, try again: failed to offline:
     Resource              Information
------------------  -------------------------
/dev/dsk/c1t0d0s2   Device being used by VxVM


4. Here is the error:


cfgadm: Component system is busy, try again: failed to offline:

5. Cause:


This host uses ZFS to manage its internal disks and Veritas Volume Manager (VxVM) to manage its SAN-attached disks. VxVM keeps track of the internal disks, even though it does not actually manage them, and may not allow you to unconfigure them. The steps below work around this restriction.

# vxdmpadm getsubpaths ctlr=c1
NAME         STATE[A]   PATH-TYPE[M] DMPNODENAME  ENCLR-TYPE   ENCLR-NAME   ATTRS
================================================================================
c1t0d0s2     ENABLED(A)    -          disk_0       Disk         disk            -
c1t1d0s2     ENABLED(A)    -          disk_1       Disk         disk            -

# vxdmpadm disable path=c1t0d0s2
VxVM vxdmpadm ERROR V-5-1-13080 Attempt to disable all paths through portid and enclosure failed. Last path to the disk can not be disabled.

# vxdmpadm -f disable path=c1t0d0s2
 corpt710 vxdmp: NOTICE: VxVM vxdmp V-5-0-111 [Warn] disabled dmpnode 295/0x8
# vxdmpadm getsubpaths ctlr=c1
NAME         STATE[A]   PATH-TYPE[M] DMPNODENAME  ENCLR-TYPE   ENCLR-NAME   ATTRS
================================================================================
c1t0d0s2     DISABLED(M)    -          disk_0       Disk         disk            -
c1t1d0s2     ENABLED(A)    -          disk_1       Disk         disk            -

6. Now we can unconfigure the disk


# cfgadm -c unconfigure c1::dsk/c1t0d0
# cfgadm -al|grep -i c1t0d0
c1::dsk/c1t0d0                 disk         connected    unconfigured unknown

7. Replace the disk physically


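Before pulling the drive, it can help to confirm the serial number of the disk being replaced so that the correct one is removed from the slot; a minimal sketch using iostat (output varies by platform and driver):

# iostat -En c1t0d0

The Vendor, Product, and Serial No fields in that output identify the physical drive.
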
8. Now configure the disk


 # devfsadm
 # cfgadm -al|grep -i c1t0d0
c1::dsk/c1t0d0                 disk         connected    unconfigured unknown
# cfgadm -c configure c1::dsk/c1t0d0
# cfgadm -al|grep -i c1t0d0
c1::dsk/c1t0d0                 disk         connected    configured   unknown

9. Replace the disk in the zpool with the new disk


# zpool replace rpool c1t0d0s0
cannot replace c1t0d0s0 with c1t0d0s0: device is too small

Note: Here we need to lay out the new disk by copying the partition table from the surviving mirror member

# prtvtoc /dev/dsk/c1t1d0s2 > /tmp/vtoc_root.out
# fmthard -s /tmp/vtoc_root.out /dev/rdsk/c1t0d0s2
fmthard:  New volume table of contents now in place.
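
Optionally, verify that the new disk now carries the same slice layout as the surviving mirror member before retrying the replace:

# prtvtoc /dev/dsk/c1t0d0s2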

10. Now replace the disk and bring it online


# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
 # zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Thu Aug 24 00:54:08 2017
    461M scanned out of 111G at 15.4M/s, 2h2m to go
    457M resilvered, 0.41% done
config:
        NAME                STATE     READ WRITE CKSUM
        rpool               DEGRADED     0     0     0
          mirror-0          DEGRADED     0     0     0
            c1t1d0s0        ONLINE       0     0     0
            replacing-1     DEGRADED     0     0     0
              c1t0d0s0/old  OFFLINE      0     0 7.37K
              c1t0d0s0      ONLINE       0     0     0  (resilvering)
errors: No known data errors

Note: Wait until the new disk has resilvered and is online; the time this takes depends on the disk size.

# zpool status rpool ;date
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scan: resilvered 111G in 1h30m with 0 errors on Thu Aug 24 02:24:25 2017
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
errors: No known data errors
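
Note: if the pool being repaired is a boot pool on SPARC (as rpool is here), the new disk also needs a ZFS boot block installed so that the system can boot from it; a sketch assuming Solaris 10 on SPARC:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0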



Wednesday, 23 August 2017

Setting up the ILOM console IP on a new SPARC server

Setting up the ILOM console IP on a new SPARC server


1. Connect a console cable to the server's serial management (console) port and power on the server.
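
For example, from a Linux laptop with a USB-to-serial adapter (the device path below is an assumption), the ILOM serial port defaults to 9600 baud, 8 data bits, no parity, 1 stop bit:

# screen /dev/ttyUSB0 9600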

2. The default username and password are root and changeme.


login: root
Password:
Login timed out after 60 seconds
SUNSP-BDL10031B1 login: root
c Password:
Waiting for daemons to initialize...
Daemons ready
Sun(TM) Integrated Lights Out Manager
Version 3.0.6.1.d r48331

Copyright 2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Third-party software, including font technology, is copyrighted and licensed
from Sun suppliers.
Portions may be derived from Berkeley BSD systems, licensed from U. of CA.
Sun, Sun Microsystems, and the Sun Logo are trademarks or registered trademarks
of Sun Microsystems, Inc. in the U.S. and other countries.
Federal Acquisitions: Commercial Software -- Government Users Subject to
Standard License Terms and Conditions.

Copyright 2009 Sun Microsystems, Inc. Tous droits réservés.
Distribué par des licences qui en restreignent l'utilisation.
Le logiciel détenu par des tiers, et qui comprend la technologie relative
aux polices de caractères, est protégé par un copyright et licencié par
des fournisseurs de Sun.
Des parties de ce produit pourront être dérivées des systèmes Berkeley BSD
licenciés par l'Université de Californie.
Sun, Sun Microsystems, et le logo Sun sont des marques de fabrique ou des
marques déposées de Sun Microsystems, Inc. aux Etats-Unis et dans d'autres pays.

Warning: password is set to factory default.


3. To check the existing IP details


-> show /SP/network

 /SP/network
    Targets:
 test
    Properties:
 commitpending = (Cannot show property)
 dhcp_server_ip = (none)
 ipaddress = (none)
 ipdiscovery = (none)
 ipgateway = (none)
 ipnetmask = (none)
 macaddress = (none)
 pendingipaddress = (none)
 pendingipdiscovery = (none)
 pendingipgateway = (none)
 pendingipnetmask = (none)
 state = disabled
    Commands:
 cd
 set
 show

4. Set the new IP for the server console


-> cd /SP/network
/SP/network

-> set pendingipaddress=172.28.101.9
Set 'pendingipaddress' to '172.28.101.9'

-> set pendingipdiscovery=static
Set 'pendingipdiscovery' to 'static'

-> set pendingipnetmask=255.255.255.0
Set 'pendingipnetmask' to '255.255.255.0'

-> set pendingipgateway=172.28.101.50
Set 'pendingipgateway' to '172.28.101.50'

->
->
-> set commitpending=true
Set 'commitpending' to 'true'


-> show

 /SP/network
    Targets:
 test
    Properties:
 commitpending = (Cannot show property)
 dhcp_server_ip = none
 ipaddress = 172.28.101.9
 ipdiscovery = static
 ipgateway = 172.28.101.50
 ipnetmask = 255.255.255.0
 macaddress = 00:21:28:50:0A:A1
 pendingipaddress = 172.28.101.9
 pendingipdiscovery = static
 pendingipgateway = 172.28.101.50
 pendingipnetmask = 255.255.255.0
 state = disabled
    Commands:
 cd
 set
 show
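
To verify that the new address is reachable, ILOM provides a ping test target under /SP/network (it appears as "test" in the output above); a minimal sketch, pinging the gateway configured earlier:

-> set /SP/network/test ping=172.28.101.50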

5. Setting up auto-boot


-> set /HOST/bootmode script="setenv auto-boot? false"
Set 'script' to 'setenv auto-boot? false'

6. Setting up the input and output devices


-> set /HOST/bootmode script="input-device virtual-console"
Set 'script' to 'input-device virtual-console'
-> set /HOST/bootmode script="output-device virtual-console"
Set 'script' to 'output-device virtual-console'
-> show
