FLARE Code Upgrade Steps



A CLARiiON/VNX FLARE upgrade usually takes 2 to 3 hours, depending on the current and target code, and on the array model and configuration.


A FLARE upgrade is considered to be an online upgrade (or "NDU", a Non-Disruptive Upgrade). However, there are certain conditions that MUST be met for the upgrade to be non-disruptive:


1. All hosts connected to the CLARiiON/VNX should be connected to both SPA and SPB; for high availability, the recommendation is to attach each host to each SP twice.


2. All hosts should have up-to-date and supported multipathing software.


3. Array utilization must leave headroom for single-SP operation: the combined SPA utilization (%) plus SPB utilization (%) should be less than 100%, so that one SP can carry the full I/O load while the other reboots.
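The utilization rule can be sanity-checked by hand once you have each SP's busy percentage (from Analyzer, or from the CLI). A minimal sketch with made-up sample values; the `getcontrol` flag mentioned in the comment is an assumption to verify against your Navisphere CLI release:

```shell
#!/bin/sh
# Illustrative check of the combined-utilization rule. In practice the
# two busy percentages would come from Analyzer or something like
#   naviseccli -h <SPA_IP> getcontrol -busy
# (flag name is an assumption; confirm against your CLI release).
spa_busy=55   # sample value: SPA utilization in %
spb_busy=38   # sample value: SPB utilization in %

combined=$((spa_busy + spb_busy))
if [ "$combined" -lt 100 ]; then
    echo "OK: combined SP utilization ${combined}% is below 100%"
else
    echo "WARNING: combined SP utilization ${combined}% - one SP cannot carry the load"
fi
```

If the combined figure is at or above 100%, reschedule the NDU for a quieter window rather than risking host timeouts while one SP reboots.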


RCM can perform CLARiiON/VNX FLARE code upgrades remotely via ESRS, WebEx or Modem, depending on the customer's environment and availability.


Tasks that should be performed prior to a CLARiiON/VNX FLARE upgrade:


Steps to validate your configuration:


Hosts
Hosts can be verified using the following tools:


· For UNIX and VMware: EMC Grab


· For Windows: EMCReports


These tools can be found on support.emc.com: select your product, then Downloads. Once you have collected this data, send it to E-Lab Advisor for validation:


· For UNIX and Windows: HEAT


· For VMware: VMHEAT


You can batch files and submit them together (details can be found on the HEAT pages).


The input from these files is matched up against the current E-Lab Interoperability Navigator and you will receive back an email containing an analysis of your hosts that indicates any issues that were found. It is your responsibility to address these issues.


Switches


You can do the same review for your switches using the SWAT tool which can be found on E-Lab Advisor. This page will tell you what diagnostic file(s) to gather from your switches, and guide you through sending the information to EMC. Again, you will receive an analysis and recommendations for your equipment, and it is your responsibility to address any issues.


Connectivity


High Availability Verification Tool (HAVT)


HEAT and SWAT will check the configuration of your hosts and switches, but they will not check the state of the connectivity of your hosts to the storage system. This is done via HAVT. This utility is part of the Unisphere (Navisphere) Server Utility, which can be found on support.emc.com: select your product, then Downloads.

Select the Verify Server High Availability option. This generates a report that indicates the number of paths to each SP and flags any issues with those paths; if it cannot detect any failover software, it will tell you so. See the attached matrix for currently supported operating systems and failover software, and the linked Failover Software Operating Systems tables.

This tool must be run on each host that you want to remain online during the upgrade. The resulting output can be automatically transferred to the storage system for inclusion in the host checking done by the Software Assistant in the Navisphere Service Taskbar.


If your host is not listed, you will have to manually check that:


1. The host has at least two valid paths, one to each Storage Processor in the storage system, from each of at least two HBAs.


2. All expected initiators show in the connectivity status for the storage system (in Navisphere Manager or Unisphere) as registered and logged in. (Note that HP-UX will only show as logged in if there is active I/O on that path.)


3. Failover software is installed and configured correctly.
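For hosts that HAVT does not cover, the path check can be eyeballed from the multipathing software's own output. A minimal sketch that counts paths per SP in PowerPath-style output; the sample text below is fabricated for illustration, and real `powermt display dev=all` formatting varies by PowerPath version:

```shell
#!/bin/sh
# Illustrative manual path check: confirm the host has at least one
# live path to each SP. The sample output is made up; parse your real
# 'powermt display dev=all' output with patterns that match its format.
sample_output='
Pseudo name=emcpower0a
   0 qla2xxx sdc SP A0 active alive
   1 qla2xxx sdd SP B0 active alive
   2 qla2xxx sde SP A1 active alive
   3 qla2xxx sdf SP B1 active alive
'
spa_paths=$(printf '%s\n' "$sample_output" | grep -c 'SP A')
spb_paths=$(printf '%s\n' "$sample_output" | grep -c 'SP B')

if [ "$spa_paths" -ge 1 ] && [ "$spb_paths" -ge 1 ]; then
    echo "OK: host sees SPA (${spa_paths} paths) and SPB (${spb_paths} paths)"
else
    echo "WARNING: host is missing paths to one SP - it will lose access during the NDU"
fi
```

The point of the check is simply that losing either SP must leave the host with at least one surviving path.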


Array Settings


These control how the host and the storage system will communicate with each other. The CLARiiON Failovermode, Arraycommpath, Initiator Type and Unit Serial Number settings must be verified for each host. Currently recommended settings can be found in Knowledgebase article emc99467 (What are the Initiator, Arraycommpath and Failovermode settings for PowerPath, DMP, PVLinks and native failover software). This information can be seen in the XML output generated by the Server Utility or in Navisphere Manager Connectivity Status.
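As a reminder of where these settings live on the CLI side, here is a dry-run sketch of the classic commands used to inspect them. The command and flag names are from older navicli/naviseccli releases and should be treated as assumptions; confirm the exact syntax against emc99467 and your CLI version. Nothing is executed here, the plan is only printed:

```shell
#!/bin/sh
# Dry-run: print the classic CLI calls used to inspect failover-related
# array settings. SP address is a placeholder; command names are from
# memory of older releases and must be verified before use.
SP="spa_hostname"   # hypothetical SP address
plan=""
for cmd in \
    "navicli -h $SP failovermode" \
    "navicli -h $SP arraycommpath" \
    "naviseccli -h $SP port -list -all"
do
    echo "would run: $cmd"
    plan="${plan}${cmd}\n"
done
count=$(printf '%b' "$plan" | grep -c 'navi')
```

Per-initiator values (rather than array defaults) are most easily read from the Server Utility XML output or Connectivity Status, as noted above.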


Other actions


· EMC Technical Advisories (ETAs)
Review any EMC Technical Advisories (ETAs) that have been issued for storage system models, software, or 3rd party hardware that you have in your environment. These are warnings of hardware or software issues that may affect your environment. You can choose to be automatically notified of those pertaining to your particular configuration, and there are lists by product family and a complete list of all ETAs. These are available on support.emc.com. Select your product, and then review the ETAs on the left-hand pane.






· Unisphere Service Manager (USM)/Navisphere Service Taskbar (NST) Technical Advisories
These are notifications displayed when you first start USM/NST. They contain information about issues that could affect the activity you are about to perform, or the general behavior of your storage system.


· E-Lab Interoperability Navigator
This provides access to EMC interoperability support matrices. Using a guided query, you can retrieve information about supported configurations. If you have run HEAT or SWAT, you have already used this tool. It can be found on support.emc.com: select E-Lab Interoperability Navigator under Product and Support Tools. If you have questions as to whether certain versions of software are supported together, use this tool.


· PowerPath Configuration Checker (PPCC)
This is a utility that can be used to check that hardware and software for a particular host is configured to support PowerPath failover. (It also verifies other PowerPath multipathing features.) EMCReports (Windows hosts) and EMC Grab (UNIX hosts) are used as input. It can be found on Powerlink at Home > Support > Product and Diagnostic Tools > Environment Analysis Tools > PowerPath Configuration Checker (PPCC) .


After performing these checks, run the Prepare for Install option in NST/USM. This option verifies that all servers are in an expected availability state using the files created in the HAVT step above.
If the upgrade is going to be performed by modem (via a management station or attached NAS control station) or via WebEx, please make sure the code bundle is downloaded to the local machines beforehand, so the RCM Engineers can perform the upgrade at the scheduled time without delays. Also make sure that the customer has Unisphere Service Manager (USM) and Navisphere CLI installed.


FLARE code can be downloaded from ftp.emc.com/pub/rcm/code/block or http://support.emc.com


The tools (USM and Naviseccli) can be downloaded from ftp://ftp.emc.com/pub/rcm/app/vnx/ or http://support.emc.com
If the upgrade is over ESRS, RCM will stage the code remotely, provided that the ESRS connection is stable enough. If not, RCM will ask for local assistance to stage the code.
The RCM Engineer will perform pre-upgrade health checks prior to the upgrade by analyzing pre-upgrade SPCollects. The customer or CE will be notified of any issues found before starting the actual upgrade.
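Alongside the SPCollect analysis, the same kind of health check can be previewed from the CLI. A dry-run sketch below prints, without executing, the naviseccli sub-commands typically used; the sub-command names exist on recent FLARE releases but should be verified against your CLI version:

```shell
#!/bin/sh
# Dry-run sketch of pre-upgrade health checks. Nothing is executed;
# the plan is only printed. SP address is a placeholder.
SP="spa_hostname"   # hypothetical SP address
checks="
naviseccli -h $SP faults -list     # any outstanding hardware faults
naviseccli -h $SP getcache         # cache state and settings
naviseccli -h $SP getdisk -state   # disk states (no rebuilding/removed disks)
naviseccli -h $SP getlog           # recent SP event log entries
"
printf '%s\n' "$checks"
n_checks=$(printf '%s\n' "$checks" | grep -c 'naviseccli')
```

Any fault, rebuilding disk, or disabled cache found at this stage should be resolved before the NDU window, not during it.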


What to expect during a FLARE upgrade:


After scheduling an upgrade with RCM, it will be assigned automatically to an RCM Engineer. The Engineer will perform health checks, notify relevant parties and start the upgrade as scheduled. The actual upgrade sequence is:


1. Install software on Secondary SP.


2. Reboot Secondary SP.


3. Wait for Secondary SP to come back from reboot and start accepting I/O requests (also called the NDU Delay).


4. Install software on Primary SP.


5. Reboot Primary SP.


6. Make sure the LCC upgrade is complete and no hardware faults exist.


7. Commit Code.


8. Perform post upgrade health checks.
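The sequence above maps onto the naviseccli `ndu` sub-commands. The sketch below prints the command plan rather than executing it; the sub-command names are real but exact syntax and package naming vary by release, so treat the details as assumptions and check the target release notes:

```shell
#!/bin/sh
# Dry-run sketch of the NDU command sequence. Nothing is executed.
SP="spa_hostname"          # hypothetical SP address
PKG="FLARE-bundle.pbu"     # hypothetical package file name
steps="
naviseccli -h $SP ndu -install $PKG   # installs on secondary SP, reboots, then primary
naviseccli -h $SP ndu -status         # poll until the operation completes
naviseccli -h $SP faults -list        # confirm no hardware faults before commit
naviseccli -h $SP ndu -commit         # commit the new revision
"
printf '%s\n' "$steps"
n=$(printf '%s\n' "$steps" | grep -c 'naviseccli')
echo "planned steps: $n"
```

Note that the commit (step 7) is deliberately last: until the code is committed, the array can still revert, so faults must be cleared first.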



After the software upgrade, perform the following checks:


· View the event logs of both SPs to ensure there are no unexpected events that could signal a problem that occurred during or immediately after the software upgrade


· Restart all stopped SAN Copy sessions and any other replication software activities that were stopped prior to this upgrade


· Is PowerPath running on the hosts?


· If yes, are the LUNs listed under their default owner?


· If no, perform a powermt restore. Confirm that all LUNs are now listed under their default owner.
For AIX hosts only: If it is necessary to use the powermt restore command, see Knowledgebase article ETA 2753: AIX, PowerPath: AIX Host Lost access to storage during powermt restore.
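Deciding whether a `powermt restore` is needed comes down to comparing each LUN's default owner with its current owner. A minimal sketch using fabricated getlun-style lines (real Navisphere output formats differ, so adjust the field positions to match your output):

```shell
#!/bin/sh
# Illustrative detection of trespassed LUNs after an NDU: flag any LUN
# whose current owner differs from its default owner. Sample lines are
# made up for this sketch.
sample='
LUN 12 Default Owner: SP A Current Owner: SP B
LUN 13 Default Owner: SP B Current Owner: SP B
'
trespassed=$(printf '%s\n' "$sample" | awk '
    /Default Owner/ {
        if ($6 != $10) print $2   # default SP letter vs current SP letter
    }' | tr '\n' ' ')

if [ -n "$trespassed" ]; then
    echo "Trespassed LUNs: ${trespassed}- run powermt restore on the host (see ETA 2753 for AIX)"
else
    echo "All LUNs are on their default owner"
fi
```

In this sample, LUN 12 is flagged (default SP A, current SP B) while LUN 13 is not.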


· If required, upgrade ESRS IP Client for CLARiiON to the version that corresponds to the FLARE that was upgraded to in this procedure.


· Using Unisphere Release Notes and E-Lab Interoperability Navigator, check that all host based and array-based software is at an acceptable/compatible revision. If not, then update as required.


· Confirm that there are no array faults (note that the SPS units may still be charging because of the SP reboots; this is normal). Write cache will automatically re-enable, if it was enabled prior to the NDU, as soon as one SPS is fully charged and ready.


· Confirm that the cache settings are set properly. If new replication software (layered applications) is being added, cache settings may not be able to be set as they were previously


· Are there any unowned private LUNs?


· If yes, refer to EMC Knowledgebase article emc105448 (Clone Private LUNs (CPLs) or MirrorView write intent log (WIL) private LUNs or SnapView reserved pool LUNs become unowned after NDU). CPLs, WIL private LUNs, or SnapView reserved pool LUNs may show as un-owned after a code upgrade. If the storage processor (SP) is not tracking changes, the LUN will be un-owned. Once the clones, mirrors, or snapshots start up, ownership will revert to "owned". You can also try de-allocating/re-allocating LUNs from the CPL, WIL, or reserved LUN pool to reinstate ownership of these LUNs.


· Is there a VMware server attached to the array?


· If yes, are all the LUNs on the same SP? After a non-disruptive upgrade (NDU), it is common for all VMware LUNs to end up on one storage processor (SP). If so, proceed as below.
VMware currently does not use PowerPath, and its native failover will not restore LUNs to their default SPs after a path or SP has been restored. VMware failover is configured (or should have been configured) to use the MRU (Most Recently Used) path. This means that if a LUN with a default owner of SP B was trespassed to SP A, the ESX server continues to access it through SP A; it does not test whether the old path has been restored. The most extreme example is an NDU, where both SPs reboot and all the LUNs end up on the first SP that rebooted: it would appear that all the LUNs are on SP B and none on SP A.
Since the ESX server does not automatically restore the LUNs, they must be manually trespassed from Navisphere Manager or from the command line with: navicli -h <default SP_IP_address> trespass mine. This command must be issued to both SPs.
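Since the command has to be issued against each SP in turn, a small loop over both SP addresses is the usual pattern. The sketch below prints the plan rather than executing it (the SP addresses are placeholders):

```shell
#!/bin/sh
# Dry-run: issue 'trespass mine' to BOTH SPs so each SP reclaims the
# LUNs it owns by default. Replace the placeholder addresses with the
# real SP management IPs before running for real.
ran=""
for sp in SPA_IP SPB_IP; do
    cmd="navicli -h $sp trespass mine"
    echo "would run: $cmd"
    ran="$ran $sp"
done
```

Issuing it to only one SP restores only that SP's default-owned LUNs, which is why both addresses are in the loop.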



How to schedule a FLARE upgrade with RCM:


If you would like to schedule an upgrade with RCM, please contact rcmscheduling@emc.com or use the RCM Schedule
