S44036729's Posts

Correct, do not use the "unmap" option whenever a shared PG is involved.
As long as IG1 and IG2 contain distinct hosts with distinct WWNs, removing access to MV1 will not affect MV2's hosts, even with a shared PG.
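For example, deleting just the masking view (no -unmap) should look something like this; the SID and view names below are placeholders:

    # delete only MV1; omitting -unmap keeps the devices mapped to the shared PG's ports
    symaccess -sid 1234 delete view -name MV1

    # verify MV2 and its devices are untouched
    symaccess -sid 1234 show view MV2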
You could try either setting the global variable SYMCLI_MODE=V76 or running the command each time with the "-mode V76" switch.
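For example (bash syntax; the command shown is just illustrative):

    # session-wide compatibility mode
    export SYMCLI_MODE=V76

    # or per command
    symdev list -mode V76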
Since it's a Linux host, I hope you issued the command to rescan the SCSI host bus (HBA ports).
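For example, on most Linux distributions you can rescan each HBA port like this (host numbers vary per system):

    echo "- - -" > /sys/class/scsi_host/host0/scan
    echo "- - -" > /sys/class/scsi_host/host1/scan

    # or, if sg3_utils is installed
    rescan-scsi-bus.sh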
Thanks for the update on AF support for CKD. Yes, we can create FBA GK devices: VMAX3 HYPERMAX OS needs FBA for internal space for containers/VMs, so some percentage of space is dedicated to FBA during sizing, which means there is always a narrow FBA pool even on a CKD array. You can create FBA GK devices and present them to open-systems management through FC ports. Since we do not expect anything to be written to a GK device, we can turn these GK volumes WD (write disabled), just to be safe with this narrow FBA space on a dedicated CKD SRP. As you'd know, in VMAX2 we could present FBA GK devices without binding, but VMAX3 binds automatically.
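As a rough sketch (the SID, device count/size, and device number are placeholders, not a recommendation):

    # create a few small FBA thin devices to serve as gatekeepers
    symconfigure -sid 1234 -cmd "create dev count=6, size=3 cyl, emulation=FBA, config=TDEV;" commit

    # write-disable a gatekeeper on its front-end ports once presented
    symdev -sid 1234 write_disable 0ABC -SA all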
From what I know, the All Flash models (F models) do not yet support mainframe (CKD), whereas the hybrid models do. Managing a dedicated VMAX3 array is similar to a VMAX 40K from the z/OS level, whereby you provision CKD GK devices through FICON channels. If managing through open systems, present GK (FBA) devices over FC to a management host and install SYMCLI and Unisphere there. Optionally, at the initial discussion you can ask for embedded Unisphere, which is hosted on the VMAX3 itself; you just manage it by pointing a browser to the IP assigned to embedded Unisphere. You can also manage multiple VMAX3 arrays from one external management host by provisioning FBA GK devices from each.
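Once the FBA GK devices are visible to the management host, discovery is the usual SYMCLI routine, e.g.:

    # build/refresh the local SYMAPI database, then list the arrays found
    symcfg discover
    symcfg list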
Mapping to VPLEX while hosts have direct access to the VMAX can cause issues, and for RP to replicate, traffic must go through VPLEX and not the VMAX. Therefore follow the encapsulation process by taking host downtime and ensuring there is no direct access to the VMAX; once the host sees the LUNs through VPLEX only, you are set for replication through RecoverPoint.
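One way to make sure the direct path is gone, assuming the host is masked via its own initiator group (the SID, group name, and WWN below are placeholders):

    # pull the host's initiator out of the VMAX masking so its only path is through VPLEX
    symaccess -sid 1234 -type initiator -name Host_IG remove -wwn 10000000c9abcdef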
Each cache slot is 64 KiB, hence the available cache: 13,706,440 slots x 64 KiB = 856,652.5 MiB.
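Worked out step by step:

    13,706,440 slots x 64 KiB/slot = 877,212,160 KiB
    877,212,160 KiB / 1024 = 856,652.5 MiB (roughly 836.6 GiB)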
The scenario you described doesn't match the cascaded snapshot scenario that is depicted. A cascaded snapshot is a snapshot taken on the target volume (the one your DBA is indirectly referring to as the Prod DB copy) that is given to the test server. In your scenario, each new snapshot is taken on the ProdDB volumes and you link/unlink it to the test server; should anything go wrong and you want to roll back, all you need to do is shut down/unmount on the test server and relink to the last snapshot you were linked with. Also, Relink is not the same as Restore. With Relink, you can either point to a different point in time or revert all the changes made on the target by simply relinking to the last snapshot. With Restore, you are actually restoring from the target back to Prod.
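In SnapVX terms the cycle looks roughly like this (the SID, SG, and snapshot names are placeholders):

    # take a new snapshot of the Prod storage group
    symsnapvx -sid 1234 -sg ProdDB_SG establish -name daily_snap

    # link it to the test server's storage group
    symsnapvx -sid 1234 -sg ProdDB_SG -lnsg Test_SG -snapshot_name daily_snap link

    # roll the test copy back (or move to another point in time) with relink
    symsnapvx -sid 1234 -sg ProdDB_SG -lnsg Test_SG -snapshot_name daily_snap relink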