August 17th, 2015 03:00

VNX OE 8.1.8 / 05.33.008 new features and enhancements

FYI,

new software has just been released for the VNX2 series.

Below is the list of new features and enhancements from the release notes.

The relevant manuals and white papers were also updated and are available from support.emc.com

Rainer

Fixed-block deduplication with file system data

For VNX unified systems, this release introduces the ability to enable and disable fixed-block deduplication on a specific file storage pool that is built from VNX for Block storage pool thick or thin LUNs, or on a file system that is created on a metavolume. The Deduplication enabler must be installed on the system. This feature lets you maximize efficiency for file data that contains duplicate information and frees up space on VNX for Block storage pools for reuse.

You can enable deduplication on a VNX for File storage pool if it does not use shared disks (volumes that are sliced or striped from a LUN) and if it contains only file systems that use the split ufs log type (the default). You can also enable deduplication on a file system that is created on a metavolume if the metavolume does not use shared disks and the file system uses a split log.
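If you want to check these prerequisites from the Control Station before enabling deduplication, the standard VNX for File CLI can be used along these lines (a rough sketch; the exact output fields vary by OE release, so treat the field names you look for as assumptions, and replace the placeholder names with your own):

     # show the file system's backing pool, volume layout, and log type (<fs_name> is a placeholder)
     /nas/bin/nas_fs -info <fs_name>

     # show how the file storage pool is built, including whether its volumes are sliced from LUNs
     /nas/bin/nas_pool -info <pool_name>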

File space reclaim (file system and savvol)

For file systems that are built on thin LUNs from VNX for Block storage pools, you can now free up (reclaim) space in the pool after files and checkpoints have been deleted. You can reclaim space in the storage pool from either a Production File System (PFS) on a thin LUN or a checkpoint (SavVol) on a thin LUN.

To do this, choose one of the following methods:

     • Manually run a file system or checkpoint space reclaim process when needed.

     • Create a schedule that automatically runs a file system or checkpoint space reclaim process daily, weekly, or monthly.

Thin LUN low-space handling enhancements

For VNX unified systems, this release introduces the ability to handle low-space conditions in a VNX for Block storage pool that contains thin LUNs used for VNX for File storage. When the Free Capacity in a VNX for Block storage pool drops below the Low Space Threshold, affected file systems and read/write checkpoints are remounted as read-only, and replication sessions are stopped. This helps prevent out-of-space conditions.

The Low Space Threshold is a system-assigned value for each VNX for Block storage pool in which VNX for File OE file systems are in use and are built on thin LUNs only. The value does not apply to thick LUNs.

The value of the Low Space Threshold varies by system. It is based on the total number of Data Movers multiplied by the memory of each Data Mover. For example, a VNX5400 system that has four Data Movers, each with 6 GB memory, is assigned a Low Space Threshold of 24 GB for each VNX for Block storage pool where file systems are in use and built on thin LUNs.
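As a quick sketch of that sizing rule (Low Space Threshold = number of Data Movers x memory per Data Mover), the arithmetic looks like this; the figures below are the example values from above, not something read from a live array:

     # hypothetical sizing sketch for the example VNX5400 above
     DM_COUNT=4       # four Data Movers
     DM_MEM_GB=6      # 6 GB of memory per Data Mover
     echo "Low Space Threshold: $((DM_COUNT * DM_MEM_GB)) GB per pool"    # prints 24 GB per pool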

To recover from a low-space condition, you must take action to bring the free capacity of the VNX for Block storage pool back above the Low Space Threshold, for example by adding storage to the pool, migrating LUNs, or performing space reclaim on file systems.

Once space becomes available, file systems, read/write checkpoints, and replication sessions can be restored to normal operation, either by using Unisphere (Storage > File Systems > Mounts > Restore) or by running the CLI command server_mount server_2 -restore -all. If the low-space condition occurred on the source side of a replication session, the session is automatically restarted once the restore operation completes. However, if the low-space condition occurred on the destination side, the replication session must be manually stopped and then restarted from the source side.
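As a rough sketch of that recovery flow from the Control Station (the restore command is the one named above; the listing commands are standard VNX for File CLI and are shown here as an assumption about your environment):

     # after freeing space in the block storage pool, restore the mounts on each affected Data Mover
     /nas/bin/server_mount server_2 -restore -all

     # list the mounts to confirm the file systems are read/write again
     /nas/bin/server_mount server_2

     # check the state of the replication sessions
     /nas/bin/nas_replicate -list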

16Gb/s 4-port Fibre Channel I/O module

A new 4-port 16 Gb/s Fibre Channel I/O module is available and supported on the VNX2 storage processors (SPs). It can be ordered with new systems, added as an upgrade into empty SP I/O module slots on existing VNX2 systems, or installed as a swap upgrade of existing 8 Gb/s Fibre Channel I/O modules. The swap upgrade is performed by a service provider. This I/O module does not support Fibre Channel connectivity to File OE Data Movers.

Automatic notification of new firmware updates

When you are logged in to a system, the latest version of USM automatically scans for disk firmware updates and notifies you if one is available. If an update is available, a popup message containing the name of the system to which the firmware update applies is displayed at the bottom of the System page.

Clicking on either the popup message or the icon displayed at the bottom of the page will launch the USM Online Disk Firmware Upgrade (ODFU) wizard, where you can see which updates are available for your system and proceed with a non-disruptive disk firmware update.

This feature is supported on all VNX systems, but is not supported on legacy CLARiiON, Celerra, or VNX Gateway systems.

Changes to Proactive Sparing (PACO) – New Resiliency mode

The criteria for faulting a drive once it enters proactive sparing have been changed in this release. Drives being proactively spared are no longer faulted because of media errors. This enables the RAID group to stay redundant for a longer period when drives report media errors during proactive sparing. This behavior is called Resiliency mode.

Drives that are reporting a large number of media errors may exhibit slow response times, which may propagate to the host. In environments where hosts cannot ride through these periods of slowness, the original proactive sparing criteria, which fault drives that continue to report media errors, may be preferable. To make this change, contact EMC Support.

At this time, this feature is available on select drives only (TLAs 005049675 and 005049677).

Support for IPv6 configurability

For Block-only systems, the main enhancement is the ability to customize all 128 bits of the IPv6 address using the naviseccli command-line interface. In addition, some IPv6 settings can be enabled, disabled, or set from the Unisphere GUI.

Use the following naviseccli example to custom-configure all 128 bits of the IPv6 address: naviseccli -h <SP_IP_address> networkadmin -set -ipv6 -manual -address 3ffe:80c0:22c:4c:1319:8a2e:370:7348 -gateway fe80::21e:67ff:fe6a:50a9
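To confirm the resulting settings, a query along these lines should work (shown as an assumption; check the naviseccli reference for your release for the exact options):

     naviseccli -h <SP_IP_address> networkadmin -get -all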

For Unified/File arrays, the setting of IPv6 addresses on the Unified/File SPs has also been enhanced, but with several restrictions:

     • Before using IPv6 on the SPs, the IPv6 address and gateway must first be set on the primary Control Station.

     • Also, the first 64 bits of the SP IPv6 address (the Control Station prefix) must be based on the prefix value used by Control Station 0.

     • Never use naviseccli to set IPv6 addresses on the SPs for Unified/File systems, as it will break IPv6 communications.

     • Always use the Control Station command-line script, clariion_mgmt, to configure the Proxy ND service and SP IPv6 addresses for Unified/File systems.

Two main options:

Use the following example to start the Proxy ND service while automatically configuring IPv6 addresses on the SPs based on the Control Station prefix and the array's WWN identifier (EUI-64 format): /nas/sbin/clariion_mgmt -start -use_proxy_nd

Or, use the following example to start the Proxy ND service while specifying the first 64 bits based on the Control Station prefix value and customizing the last 64 bits of the address (the interface ID): /nas/sbin/clariion_mgmt -start -spa_ip6 2620:0:170:420a::11 -spb_ip6 2620:0:170:420a::12 -use_proxy_nd
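To review the resulting SP management network configuration afterwards, the same script can be queried (shown as an assumption; verify the option against your Control Station documentation):

     /nas/sbin/clariion_mgmt -info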

Configuring virtual ports

When you configure a second or subsequent virtual port (vport) on an existing physical interface on the SP, only the newly added virtual port is initialized; the physical interface and any previously configured virtual ports are not reinitialized.

CBFS Technical Debt

The CBFS file deduplication functionality was enhanced. Concurrency of CBFS metadata access was improved to allow more parallel metadata operations, and latency, CPU usage, and lock contention were reduced. Deduplication memory usage was also reduced, which frees resources and speeds up other functionality.

Localization updates

Localization of Unisphere, USM, VIA: Chinese, Japanese, and Korean.

Localization of Unisphere Initialization wizard: Chinese, Japanese, Korean, French, German, Italian, Spanish, and Portuguese.

Java support

The following 32-bit Java platforms are verified by EMC as compatible for use with Unisphere, Unified Service Manager (USM), and the VNX Installation Assistant (VIA):

     • Oracle Standard Edition 1.7 up to Update 75

     • Oracle Standard Edition 1.8 up to Update 25

The 32-bit JRE is required, even on 64-bit systems.

JRE Standard Edition 1.6 is not recommended because Oracle has stopped support for this edition.

IMPORTANT: Some new features or changes may not take effect in the Unisphere GUI after an upgrade. To avoid this, EMC recommends clearing your Java cache after an upgrade.
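A quick way to confirm which JRE a client machine will use is to check the version banner; a 32-bit JRE reports 1.7.0_xx or 1.8.0_xx and does not mention a 64-bit VM:

     java -version     # should report 1.7.0_xx or 1.8.0_xx without "64-Bit Server VM" in the output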

VNX2 Block-to-Unified Upgrades

In this latest release, EMC will support Block-to-Unified upgrades for the VNX2 family of products. This type of upgrade changes an existing VNX 5200, 5400, 5600, 7600, or 8000 block-only model to a Unified system of the same model number.

Block-to-Unified upgrades are available to customers through an EMC Professional Services engagement only and are not a customer-installable procedure.

VNX2 In-Family Data-in-Place Conversion

This version of VNX OE supports in-family data-in-place (DIP) conversion from a VNX5200/5400/5600/5800 to a higher-model VNX2 series system, up to a VNX7600. In-family DIP conversions on the VNX2 series DPE-based systems reuse the chassis of the initial system, replacing only the storage processors and Data Movers (File or Unified) with those needed for the resulting system, and do not involve disconnecting any component cabling.

This conversion is available to customers through an EMC Professional Services engagement and is not a customer-installable procedure.
