InsightIQ: How to change the InsightIQ network settings and hostname

Purpose: Change the InsightIQ network settings and hostname

Steps:
1. Log in to the InsightIQ web administration GUI on port 5480:
Link: https://<InsightIQ_HostName>:5480/#core.Login
2. Click the 'Network' tab > edit the settings > click 'Save Settings'
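
Optional: before logging in, you can confirm the appliance answers on port 5480 from your workstation. A minimal check with curl (a sketch; -k skips certificate validation because the appliance typically presents a self-signed certificate; a 200 or 3xx status code means the GUI is up):
# curl -k -s -o /dev/null -w "%{http_code}\n" https://<InsightIQ_HostName>:5480/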

Isilon Script: View the Status of the newly started SyncIQ Job with Amount of NETWORK BYTES transferred, Current THROUGHPUT, Number of Workers Assigned, Current CPU UTILIZATION

Purpose: View the Status of the newly started SyncIQ Job with Amount of NETWORK BYTES transferred, Current THROUGHPUT, Number of Workers Assigned, Current CPU UTILIZATION
Modify these attributes in the script:
1. <PolicyName>
2. <PolicyID>
3. <Replication Name>

# get the Policy ID for the Sync Policy Name
IsilonCluster1-2# isi sync policies view --policy=<PolicyName> | grep -i ID
                         ID: <bxxxxxxxxxxxxxxxxxxxxxxxxxxxa>

# get the Replication Name with the Sync Policy ID
Syntax: isi_repstate_mod -ll pol_id - (list reps in a directory)
IsilonCluster1-2# isi_repstate_mod -ll <bxxxxxxxxxxxxxxxxxxxxxxxxxxxa>
<bxxxxxxxxxxxxxxxxxxxxxxxxxxxa>_snap_rep_base - Replication Name
<bxxxxxxxxxxxxxxxxxxxxxxxxxxxa>_select_238749
Syntax (list all Worker Entries): isi_repstate_mod -wa pol_id rep_name - (print all work entries)
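
To avoid copying the ID by hand, the two lookups can be combined in the shell; a sketch (assumes the node's shell and the "ID:" output line shown above):
IsilonCluster1-2# POLICY_ID=$(isi sync policies view --policy=<PolicyName> | awk '/^ *ID:/ {print $2}')
IsilonCluster1-2# isi_repstate_mod -ll "$POLICY_ID"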


Script:
IsilonCluster1-2# while true; do
    print "*******START******"
    print "======================"
    date
    print "======================"
    print "/-SYNC JOB VERBOSE VIEW"
    print "======================"
    isi_classic sync job rep -v <PolicyName>
    echo " "
    print "======================"
    print "/-NETWORK BYTES(look for a change every run)"
    print "======================"
    isi_classic sync job rep -v <PolicyName> | grep -A4 "Bytes:"
    echo " "
    print "======================"
    print "/-SYNC JOB VIEW THROUGHPUT"
    print "======================"
    isi_classic sync job rep
    echo " "
    print "======================"
    print "/-MAX WORKERS"
    print "======================"
    isi_repstate_mod -wa <PolicyID> <Replication Name> | grep workitem | wc -l
    echo " "
    print "======================"
    print "/-CPU UTILIZATION"
    print "======================"
    isi statistics system list --nodes=all
    print "======================"
    print "*******FINISH******"
    sleep 300
done

Output:
*******START******
======================
Thu Aug 30 08:30:58 PDT 2018
======================
/-SYNC JOB VERBOSE VIEW
======================
Policy name: PolicyName
    Action: sync
    Sync Type: initial
    Job ID: 1
    Started: Wed Aug 29 21:33:12 PDT 2018
    Run time: 10:57:46
    Status: Running
    Details:
        Directories:
            Visited on source: 117198
            Deleted on destination: 0
        Files:
            Total Files: 1708005
            New files: 1708005
            Updated files: 0
            Automatically retransmitted files: 0
            Deleted on destination: 0
            Skipped for some reason:
                Up-to-date (already replicated): 0
                Modified while being replicated: 0
                IO errors occurred: 0
                Network errors occurred: 0
                Integrity errors occurred: 0
        Bytes:
            Total Network Traffic: 8.6 TB (9482042338352 bytes)
            Total Data: 8.6 TB (9471597284558 bytes)
            File Data: 8.6 TB (9471597284558 bytes)
            Sparse Data: 0B
        Phases (1/3):
            Treewalk (STF_PHASE_TW)
                Start: Wed Aug 29 21:37:39 PDT 2018
                End: N/A
                Start: Wed Aug 29 21:33:24 PDT 2018
                End: Wed Aug 29 21:37:39 PDT 2018

======================
/-NETWORK BYTES(look for a change every run)
======================
        Bytes:
            Total Network Traffic: 8.6 TB (9483514244073 bytes)
            Total Data: 8.6 TB (9473067646801 bytes)
            File Data: 8.6 TB (9473067646801 bytes)
            Sparse Data: 0B

======================
/-SYNC JOB VIEW THROUGHPUT
======================
Name          | Act  | St      | Duration | Transfer | Throughput
--------------+------+---------+----------+----------+-----------
PolicyName | sync | Running | 10:57:53 |   8.6 TB |   1.8 Gb/s

======================
/-MAX WORKERS
======================
      36

======================
/-CPU UTILIZATION
======================
 Node   CPU    SMB  FTP  HTTP    NFS  HDFS  Total  NetIn  NetOut  DiskIn  DiskOut
---------------------------------------------------------------------------------
  All  6.4%   7.4M  0.0 457.1 395.1k   0.0   7.8M  73.6M  272.8M  328.4M   292.7M
    1  2.6%  67.8k  0.0   0.0   2.9k   0.0  70.7k   5.6M   19.5M  423.4k    52.4k
    2  3.2%  23.1k  0.0   0.0 391.2k   0.0 414.3k   6.3M   21.2M    5.3M   432.5k
    3  3.5%  45.6k  0.0 457.1   48.9   0.0  46.1k   5.7M   19.8M  379.2k   272.0k
...........................................
   24 14.8%    0.0  0.0   0.0    0.0   0.0    0.0   6.4M   23.3M   35.6M    11.8M
   25 15.1%   4.3M  0.0   0.0    0.0   0.0   4.3M   5.0M   31.3M   15.9M    12.1M
---------------------------------------------------------------------------------
Total: 26
======================
*******FINISH******
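
If you only need a quick throughput check between full runs, a lighter variant of the same loop (a sketch reusing commands from the script above):
IsilonCluster1-2# while true; do date; isi_classic sync job rep; sleep 60; done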

How to collect Isilon logs on a single node

Purpose:  To collect logs from a single Isilon node, open an SSH (PuTTY) session to that node, run the command below, and then use WinSCP to download the log package from the node.


ISILONCLUSTER1-10# isi_gather_info --local-only
Unlocking gather-status
Gather-status unlocked

This may take several minutes.  Please do not interrupt the script.

..............Information gathering completed..
..............creating compressed package...
Packaging complete...
Package: /ifs/data/Isilon_Support/pkg/IsilonLogs-<ISILONCLUSTER>-<YYYYMMDD>-<HHMMSS>.tgz
Uploading in progress.  If problems are encountered during the
upload process, the package will need to be sent manually.
Trying Passive FTP...
Uploaded Succeeded (FTP - Passive). File IsilonLogs-<ISILONCLUSTER>-<YYYYMMDD>-<HHMMSS>.tgz
Cleaning up temporary data... done.
Gather-status unlocked
ISILONCLUSTER1-10#
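
If the automatic FTP upload fails, the package stays on the cluster and must be sent manually (for example, download it with WinSCP). Its location is printed at the end of the run and can be confirmed with a directory listing:
ISILONCLUSTER1-10# ls -lh /ifs/data/Isilon_Support/pkg/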

How to gather SPCollects for a VNX1 or VNX2 series array

There are a number of methods to gather SPCollects:
  • Start and retrieve SPCollects from each SP using Unisphere.
  • Launch Unisphere Service Manager either directly or from within Unisphere.  This approach has the advantage of automating the whole SPCollect gathering process and gathering diagnostic data too.
  • Start and retrieve SPCollects from each SP using Navisphere Secure CLI.

Unisphere


  1. Launch Unisphere and login.
  2. Select the VNX series array from either the dashboard or from the Systems drop-down menu.  Click System on the toolbar.
  3. On the right pane, under Diagnostic Files, select 'Generate Diagnostic Files - SP A'.  Confirm that it is OK to continue.  "Success" will be displayed when SPCollect generation starts, but this only means the script has started; it will still take several minutes to complete.
  4. Repeat step 3 for SP B immediately.
  5. It will take around 10-15 minutes to generate the complete SPCollect file.
  6. Still on the right pane, select 'Get Diagnostic Files - SP A'.  
  7. When the SPCollect file generation has completed, a file with the following name will be listed: <Array Serial Number>_SPA_<date/time (GMT)>_Code_data.zip 
  8. Sorting by descending date is a good way to find the latest SPCollect; the zip file will generally be over 10 MB.  If the file has not appeared, press refresh every minute or so until the correct _data.zip file appears.
  9. On the right-hand side of the box, select the location on the local computer where the SPCollect file should be transferred to.
  10. On the left-hand side of the box, select the file to be transferred.  Note: if a file is listed that ends in runlog.txt, the SPCollect is still running.  Wait until the _data.zip file is created.
  11. Repeat Steps 6-10 on SP B to retrieve its SPCollect file.

Unisphere Service Manager

  1. Log in to the Unisphere client.
  2. Select the VNX, either from the dashboard or from the Systems drop-down.  Click System on the toolbar.
  3. On the right pane, under Service Tasks, select 'Capture Diagnostic Data'.  This will launch USM.  Alternatively USM can be launched directly from the Windows Start menu.
  4. Select the Diagnostics tab and select Capture Diagnostic Data.  This will launch the Diagnostic Data Capture Wizard.
  5. The Wizard will capture and retrieve SPCollect files from both SPs and Support Materials from the File storage, which are then combined into a single zip file.

Navisphere Secure CLI

Perform the following steps (a scripted sketch of the CLI sequence follows the list):
  1. Open a command prompt on the Management Station.
  2. Type cd "C:\Program Files\EMC\Navisphere CLI" - This is the default installation folder on Windows, but the software may have been installed to a different path.  Other platforms, such as Linux, have a different folder structure, but the commands are the same.  The CLI folder may already be in the path statement, in which case the commands can be run from any directory.
  3. Type naviseccli -h <SP_A_IP_address> spcollect
  4. Type naviseccli -h <SP_B_IP_address> spcollect
  5. These commands start the SPCollect script on each SP.  Additional security information may also need to be specified; see KBA 483583, How to gather Service Data from a Dell-EMC Unity array.
  6. Wait a minimum of 10-15 minutes for the SPCollects to run before attempting to retrieve them.
  7. Type naviseccli -h <SP_IP_address> managefiles -list                  
  8. This will list the files created by spcollect.  Check that a file with the current date and time in GMT has been created, ending with _data.zip.  If there is a file ending with .runlog instead, then the SPCollect is still running, so wait for a while longer before retrying this.
  9. Type naviseccli -h <SP_IP_address> managefiles -retrieve       
    This will display the files that can be moved from the SP to the Management Station.

    Example:
    Index Size in KB     Last Modified            Filename
    0     339       06/25/2013 00:45:42  admin_tlddump.txt
    ...
    10    24965     06/24/2013 23:39:53  APM0000000XXXX_SPA_2013-06-24_21-35-43_325146_data.zip
    11    41577     06/25/2013 00:17:17  APM0000000XXXX_SPB_2013-06-24_21-35-52_325147_data.zip
    ...
  10. Enter files to be retrieved with index separated by comma (1,2,3,4,5) OR by a range (1-3) OR enter 'all' to retrieve all file OR 'quit' to quit> 11

    This will pull index number 11 (the most recent ~_data.zip file) from the corresponding SP and copy it to the c:\program files\emc\navisphere cli directory, with a filename of APM0000000XXXX_SPB_2013-06-24_21-35-52_325147_data.zip
  11. For information on how to get the SPCollect files to EMC Technical Support, see KBA 459010, Where do I upload Service Request related information for analysis by EMC Support.
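
For repeatability, the CLI sequence above can be scripted; a minimal bash sketch (assumes naviseccli is on the PATH, no extra security arguments are required, and the SP IP addresses are placeholders):

#!/bin/bash
# Start the SPCollect script on both SPs.
naviseccli -h <SP_A_IP_address> spcollect
naviseccli -h <SP_B_IP_address> spcollect

# Wait ~15 minutes for the collections to complete before retrieving.
sleep 900

# List the generated files; look for the newest *_data.zip on each SP,
# then fetch it interactively with 'managefiles -retrieve'.
naviseccli -h <SP_A_IP_address> managefiles -list
naviseccli -h <SP_B_IP_address> managefiles -list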

EMC ViPR SRM 4.x - Isilon Collector Error: HttpRequestGroup::handleResponse(): Unable to connect to host (403) for request https://@{host}:8080/platform/1/zones in request group Isilon-Zones.

Error:
WARNING  -- [2018-08-20 15:08:03 PDT] -- HttpRequestGroup::handleResponse(): Unable to connect to host <Host Name> (403) for request https://@{host}:8080/platform/1/zones in requets group Isilon-Zones. Server returned the following message: Forbidden
SEVERE   -- [2018-08-20 15:08:03 PDT] -- HttpRequestRetriever::execute(): Unable to retrieve stream on any configured request group!
SEVERE   -- [2018-08-20 15:08:03 PDT] -- AbstractJobExecutor::executeJobRunner(): Error while executing job ISILON2-CLUSTER-CAPACITY -> HttpRequestRetriever removing it from the queue
com.watch4net.apg.concurrent.JobExecutionException: Unexpected error when running step in job ISILON2-CLUSTER-CAPACITY -> HttpRequestRetriever
    at com.watch4net.apg.ubertext.parsing.concurrent.SimpleStreamHandlerJob.step(SimpleStreamHandlerJob.java:65)
    at com.watch4net.apg.concurrent.executor.AbstractJobExecutor$SequentialJob.step(AbstractJobExecutor.java:460)
    at com.watch4net.apg.concurrent.executor.AbstractJobExecutor.executeJobRunner(AbstractJobExecutor.java:130)
    at com.watch4net.apg.concurrent.executor.AbstractJobExecutor.access$500(AbstractJobExecutor.java:25)
    at com.watch4net.apg.concurrent.executor.AbstractJobExecutor$JobRunnerImpl.run(AbstractJobExecutor.java:287)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.watch4net.apg.concurrent.JobExecutionException: SimpleStreamHandlerJob produced a null StreamHandlerStep during execution!
    at com.watch4net.apg.ubertext.parsing.concurrent.SimpleStreamHandlerJob.step(SimpleStreamHandlerJob.java:51)
    ... 7 more

Cause: ViPR SRM Isilon data collection does not work when using a non-admin credential that has only the AuditAdmin role. To confirm, open https://Isilon_IP:8080/platform/1/zones in a browser and authenticate with the non-admin account;
you will get an error like the one below.

{
   "errors" :
   [
      {
         "code" : "AEC_NOT_FOUND",
         "message" : "Path not found: /1/zones%20in."
      }
   ]
}

Fix: Add the service account that ViPR SRM uses to discover the Isilon to the "SecurityAdmin" role on the Isilon cluster.
>> Log in to the OneFS web console > Click "Access" > Select "Membership & Roles" > Click "Roles" > Select "SecurityAdmin" > Click "View/Edit" > Click "Edit Role" > "+ Add members to this role" > Search for the account > Select the account > Click "Save Changes"
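
The same change can also be made from the OneFS CLI; a sketch (assumes OneFS 8.x syntax and a hypothetical service account named svc_srm):
# isi auth roles modify SecurityAdmin --add-user=svc_srm
# isi auth roles view SecurityAdmin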

Verify: https://<HostName>:8080/platform/1/zones
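
The same check can be scripted with curl (a sketch; -k skips certificate validation, and <ServiceAccount> is a placeholder for the discovery account):
# curl -k -u <ServiceAccount> https://<HostName>:8080/platform/1/zones
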
Output: if the response resembles the following, the account has the required access:

{
   "zones" :
   [
      {
         "all_auth_providers" : false,
         "alternate_system_provider" : "lsa-file-provider:System",
         "audit_failure" : [ "create", "delete", "rename", "set_security", "close" ],
         "audit_success" : [ "create", "delete", "rename", "set_security", "close" ],
         "auth_providers" :
         [
            "lsa-activedirectory-provider:<Domain>",
            "lsa-local-provider:System",
            "lsa-file-provider:System"
         ],
         "default_block_size" : 27,
         "default_checksum_type" : "none",
         "hdfs_ambari_namenode" : "",
         "hdfs_ambari_server" : "",
         "hdfs_authentication" : "all",
         "hdfs_enabled" : true,
         "hdfs_keytab" : "/etc/hdfs.keytab",
         "hdfs_root_directory" : "/ifs",
         "home_directory_umask" : 63,
         "id" : "System",
         "ifs_restricted" : [],
         "map_untrusted" : "",
         "name" : "System",
         "netbios_name" : "",
         "odp_version" : "",
         "path" : "/ifs",
         "protocol_audit_enabled" : false,
         "skeleton_directory" : "/usr/share/skel",
         "syslog_audit_events" : [ "create", "delete", "rename", "set_security" ],
         "syslog_forwarding_enabled" : false,
         "system" : true,
         "system_provider" : "lsa-file-provider:System",
         "user_mapping_rules" : [],
         "webhdfs_enabled" : true,
         "zone_id" : 1
      }
   ]
}

Reference: Dell EMC Knowledge Base Article: 000524394

Isilon: Monitoring Client to Host Performance - Steps to capture Isilon PCAPs for one client to analyze performance

Here are the steps to capture Isilon PCAPs for one client to analyze performance. Run Wireshark on the host end at the same time.

Copy the isiperf_v3.sh script (linked at the end of this section) to the location used in step 2 (/ifs/data/Isilon_Support/).

1. Make the following directory:
# mkdir -p /ifs/data/Isilon_Support/$(date +%m%d%Y)

2. In a second SSH session, start isiperf_v3.sh (it will run for 10 minutes) so it runs in the background while the steps below are performed:
# /bin/bash /ifs/data/Isilon_Support/isiperf_v3.sh -i 10 -e 5 -r 12 -d -g lwio,lsass,netlogon

3. Check which node the client is connected to:
#  isi_for_array -s netstat -an | grep "10.252.194.19"

4. Start a packet trace with a snaplength of 320 on the node the client is connected to:
# ifconfig | grep flags= | awk -F: '{print $1}' | egrep -v 'lo0|ib0|ib1' | while read ifcon; do tcpdump -s 320 -i "${ifcon}" -w /ifs/data/Isilon_Support/$(date +%m%d%Y)/`hostname`.$(date +%m%d%Y_%H%M%S)."${ifcon}".pcap host 10.252.194.19 & done

5. Start a packet trace on the client.

6. Reproduce the issue.

7. Stop the packet trace and verify tcpdump has stopped (a file-listing check is sketched after these steps):
#  isi_for_array "killall -2 tcpdump";sleep 2;  isi_for_array ps -auwx | grep tcpdump | grep -iv grep

8. Upload all the files to support:
# isi_gather_info --local-only --nologs -s "isi_hw_status -i" -f /ifs/data/Isilon_Support/$(date +%m%d%Y)
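
To confirm the capture files were actually written, list the directory created in step 1:
# ls -lh /ifs/data/Isilon_Support/$(date +%m%d%Y)/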

Script: isiperf_v3.sh
https://drive.google.com/open?id=1gC6ndNjdxW-gGAC0UKNztZqRYvb6PVXR