September 9th, 2015 07:00

Ask The Expert: The New ScaleIO Node Release

Welcome to the EMC ScaleIO Node Support Community Ask the Expert conversation. EMC ScaleIO recently launched the ScaleIO Node. During this discussion we will cover any technical or feature-related questions about this new and powerful product. Our seasoned experts have extensive experience with ScaleIO and are here to answer any and all of your questions. If you missed the live announcement, view it here and ask your questions.


The video below, presented by Navin Sharma, provides an overview of the ScaleIO Node. Please watch it, and if you have any questions, post them on this ATE thread for the SMEs to answer.



Meet Your Experts:

Sagy Volkov

Sr. Consulting Engineer - EMC ScaleIO

Sagy has been running very large Linux clusters (1,000+ nodes) for the last 10 years. A performance geek at heart, he mainly concentrates on Linux performance, the interaction of the kernel with hardware devices, hardware design (he designed a chassis in the past) and application performance (Oracle, GPDB, Hadoop variations). Sagy is a kettlebell fanatic and does a lot of cross-country runs. Twitter: @ClusterGuru.

Navin Sharma

Product Manager - EMC ScaleIO

I currently lead the ScaleIO product management team. I have over 10 years of experience in the technology sector, including product management, software engineering, hardware engineering and performance engineering, as well as investment banking and consulting. I have several patents and papers on storage systems and clustering. I am an outdoor enthusiast and usually spend my free time hiking and camping with my wife and our dog, a 16 lb Schnoodle. Twitter: @navin101.


Jason Sturgeon

Product Manager - EMC ScaleIO

Jason is a Product Manager on the ScaleIO product and a true technologist at heart. He works on all aspects of ScaleIO and is always interested in people's thoughts on storage, networking, technology and the ways all these intersect. In previous roles, Jason has been a Technical Trainer, a Corporate Systems Engineer, an IT Manager and, once upon a time, a Support Tech. Twitter: @osaddict


This discussion takes place from Sept. 17th to Oct. 1st. Get ready by bookmarking this page or signing up for e-mail notifications.


Share this event on Twitter or LinkedIn:

>> Join our Ask The Expert: ScaleIO’s New Release http://bit.ly/1Ni9gID #EMCATE <<

2 Intern

 • 

718 Posts

September 17th, 2015 07:00

This Ask the Expert session is now open for questions. For the next couple of weeks our Subject Matter Experts will be around to reply to your questions, comments or inquiries about our topic.

Let’s make this conversation useful, respectful and entertaining for all. Enjoy!

5 Practitioner

 • 

274.2K Posts

September 17th, 2015 11:00

Hey Experts,

If I buy the switches with the nodes, what sort of uplink options do I have?

110 Posts

September 17th, 2015 11:00

The optional data switches have multiple uplink options. Depending on which data switch you choose, you can use either 4 or 8 40GbE ports. These are QSFP+ ports, and optical adaptors can be purchased as well to link back to a core switch.

If your core switch does not have 40GbE ports, or that is more bandwidth than you need to the rack, you can use some of the 10GbE ports to link to a core switch. These are SFP+ ports, and you can either purchase the optical adaptor or use a copper Twinax cable. The copper cable has a limited length, though, which may not be long enough to reach your core switch.
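To put rough numbers on those uplink options (illustrative line-rate arithmetic only; real-world throughput will be lower and depends on the switch model):

```python
# Line-rate aggregate uplink bandwidth for the options described above.
# Port counts and speeds come from the post; everything else is arithmetic.
def aggregate_gbps(port_count: int, port_speed_gbe: int) -> int:
    """Aggregate line-rate bandwidth in Gb/s for a set of uplink ports."""
    return port_count * port_speed_gbe

# 40GbE QSFP+ uplinks: 4 or 8 ports, depending on the data switch chosen
assert aggregate_gbps(4, 40) == 160   # Gb/s to the core
assert aggregate_gbps(8, 40) == 320

# Falling back to 10GbE SFP+ uplinks, e.g. using 4 of them
assert aggregate_gbps(4, 10) == 40
```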

As you can see, we are working to provide a very flexible solution.

--Jason

10 Posts

September 17th, 2015 15:00

At what scale, performance requirement and use case does it make sense to use ScaleIO vs., say, XtremIO?

Thanks!

110 Posts

September 17th, 2015 18:00

My thoughts are:

XtremIO is a better fit where you need deduplication and very dense performance that will be dedicated to just storage. VDI is a great use case for XtremIO.

ScaleIO is great when you want storage that can start small (3 nodes) and expand to very high scale (1,024 nodes), and you prefer a system that runs on standard hardware and can change as your environment changes. Examples include migrating from dedicated storage servers to running applications and storage together, or OS changes: say from Linux servers, to VMware, to OpenStack, to Windows, or all of those at the same time. They can all contribute their local disks to the same ScaleIO storage cluster, sharing it back to all those systems and other application servers at the same time. As new generations of hardware and storage media come out, they can be added and the old ones removed without data migrations. So, yes, flexibility is great with ScaleIO. Performance can be very high with ScaleIO as well; however, it will most likely take more rack space to get there with ScaleIO than with an all-flash product. Lastly, ScaleIO is able to deliver all this performance using just standard Ethernet switches, without the need to maintain an FC infrastructure.

As far as use cases go, there are places where either ScaleIO or other products can be used. An example of this might be Splunk. We have customers running Splunk with XtremIO and some using ScaleIO for the hot/warm buckets, and many are also adding Isilon for the cold bucket to provide really dense long-term storage. A lot of this comes down to how the customer wants to build and maintain their data center going forward. Many customers want to get to a place where they can run applications and storage together on standard hardware, but can't make the jump directly. ScaleIO's flexibility allows them to take baby steps in that direction: creating what we call a 2-tier deployment with dedicated storage nodes, and then gradually running applications on those same nodes serving up the storage. Or they can wait until they need more application servers, buy those application servers with local disks, and join them to the cluster. When they do this, they have a mix of servers doing both app and storage as well as servers just providing storage, and ScaleIO doesn't care. It's just simple software that shares your local drives out, aggregating them into one or more large, fast pools.
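The pooling idea described above can be illustrated with a toy model (the server names and disk sizes below are invented; this is not ScaleIO's actual placement logic):

```python
# Toy model: heterogeneous servers each contribute their local disks (GB)
# to one shared pool; raw capacity is simply aggregated across contributors.
nodes = {
    "linux-app-01": [960, 960],          # app server with two local SSDs
    "esxi-host-02": [1920],              # hypervisor contributing one SSD
    "storage-03":   [4000, 4000, 4000],  # dedicated storage node, three HDDs
}

pool_capacity_gb = sum(sum(disks) for disks in nodes.values())
assert pool_capacity_gb == 15840

# With two-copy mirroring (as ScaleIO uses), usable capacity is roughly
# half the raw capacity, before spare capacity is set aside.
usable_gb = pool_capacity_gb // 2
assert usable_gb == 7920
```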

Obviously, performance is going to be a factor in deciding which to use as well. We have a sizing guidance tool that can help with this. Please reach out to your account team and they can give you information to help you decide which way is best for you.

2 Posts

September 21st, 2015 22:00

Hi Experts,

1. Do the ongoing read/write operations between the SDSs and SDCs involve/require the (clustered) MDMs? From what I read, the SDSs and SDCs communicate directly among themselves.

2. Based on Q1, in a clustered-MDM scenario, a failure of one MDM does not affect operation. What if 2 out of 3 nodes fail (power down), i.e. 2 MDMs, or 1 MDM and 1 Tie Breaker, or even all 3 MDMs? Does it impact the ongoing operation/production between the SDSs and SDCs?

3. Let's say ScaleIO is initially set up with a single MDM. Can I upgrade it to a clustered MDM by adding a secondary MDM and a Tie Breaker? If yes, can I perform the cluster upgrade without disruption (NDU)?

4. What does the zero padding policy actually do? What's the impact of enabling it vs. disabling it (performance, throughput, bandwidth, etc.)?

5. The connection for data movement is between the SDS and SDC. Let's say one of the connections fails: what are the number of retries and the timeout in seconds for the SDS before identifying it as failed? Are there settings for these two parameters?

6. Referring to Q5, same for the SDC: what are the number of retries and the timeout in seconds?

7. Can ScaleIO nodes of mixed versions operate together? The current version is 1.32; let's say I add new SDSs and SDCs with a newer version (e.g. 1.33, 1.34) into the current ScaleIO environment. Is that possible, or do I need to upgrade the existing ScaleIO version to match the new SDSs and SDCs?

8. Let's say I have a 1 TB HDD. Is it supported/recommended to partition it into smaller sizes and add each partition as a separate device? (1 TB = 250 GB x 4, 4 devices)

9. Can devices of mixed capacities be added to the same pool? Is it recommended, or is it better for each pool to have devices of the same size from each SDS?

10. Let's say ScaleIO consists of 6 SDSs and 3 of them fail (shut down). Presumably the ongoing operation will stop and data will be corrupted. If I bring the 3 SDSs back up, will the data be rebuilt, since the existing data is still sitting on the disks in those 3 SDSs?

11. Based on Q10, let's say the data can be recovered as long as the devices in the SDS are still healthy. In a scenario with 3 SDSs, is it possible to switch their HDDs to 3 new SDSs and keep all the data (volumes) intact? (Inactivate the protection domain, bring down the existing 3 SDS servers, install the SDS on 3 new servers, and physically move the HDDs from the old SDSs to the new ones.)

12. Is there configuration on the SDS and SDC? If yes, how do I back it up?

13. Is there a way to back up the MDM configuration? Let's say I want to refresh the existing MDM servers to new servers. How do I migrate the existing configuration?

Thanks!

2 Intern

 • 

718 Posts

September 22nd, 2015 18:00

Hi folks, I wanted to let you know that on this thread I just posted a video presented by Navin Sharma (one of our SMEs). The video is an edited version of a webinar which provides an overview of the ScaleIO Node. We're making it available to you to better inform you and inspire you to come up with additional questions for our experts.

2 Intern

 • 

718 Posts

October 1st, 2015 09:00

This Ask the Expert event has officially ended, but don't let that deter you from asking more questions. At this point our SMEs are still welcome to answer and continue the discussion, though they are not required to. This is where we ask our community members to chime in and assist other users if they're able to provide information.

Many thanks to our SMEs, who selflessly made themselves available to answer questions. We also appreciate our users for taking part in the discussion and asking so many interesting questions.

ATE events are made for your benefit as members of ECN. If you're interested in pitching a topic or suggesting Subject Matter Experts, we would be interested in hearing it. To learn more about what it takes to start an event, please visit our Ask the Expert Program Space on ECN.

110 Posts

October 9th, 2015 13:00

Answers below

1. Do the ongoing read/write operations between the SDSs and SDCs involve/require the (clustered) MDMs? From what I read, the SDSs and SDCs communicate directly among themselves.
Answer: In the normal IO path there is no need for the MDM to be involved in the communications between the SDCs and the SDSs. The MDM has to be involved only if there is a failure that requires its intervention, like a drive or a node going offline.

2. Based on Q1, in a clustered-MDM scenario, a failure of one MDM does not affect operation. What if 2 out of 3 nodes fail (power down), i.e. 2 MDMs, or 1 MDM and 1 Tie Breaker, or even all 3 MDMs? Does it impact the ongoing operation/production between the SDSs and SDCs?
Answer: This question doesn't really relate to Q1 (since the MDM is not required for IOs). But if more than 1 MDM fails (2 or 3), the system will freeze and the data will become unavailable (though there will be no data loss). The reason is that one of the MDM's responsibilities is keeping the backend cluster (the SDS modules) alive. It grants a time-based lease to the logical storage maintained by the SDSs. Without an active MDM the leases are not renewed, so the leases for all SDSs will expire and the storage cluster will become unavailable. Once the MDM cluster is brought back, the storage becomes available again.
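The lease mechanism described above can be sketched as follows (an illustrative model with an invented lease duration; not ScaleIO's actual implementation):

```python
import time

# Sketch: the MDM grants each SDS a time-based lease. While an MDM is
# active it keeps renewing the lease; without renewal the lease expires
# and the SDS stops serving IO (data unavailable, not lost).
LEASE_SECONDS = 10.0  # invented value for illustration

class SdsLease:
    def __init__(self):
        self.expires_at = time.monotonic() + LEASE_SECONDS

    def renew(self):
        # called periodically while an active MDM exists
        self.expires_at = time.monotonic() + LEASE_SECONDS

    def may_serve_io(self) -> bool:
        # checked before serving IO on the data path
        return time.monotonic() < self.expires_at

lease = SdsLease()
assert lease.may_serve_io()                # MDM alive: IO flows
lease.expires_at = time.monotonic() - 1.0  # simulate MDMs down past expiry
assert not lease.may_serve_io()            # storage freezes, no data loss
```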

3. Let's say ScaleIO is initially set up with a single MDM. Can I upgrade it to a clustered MDM by adding a secondary MDM and a Tie Breaker? If yes, can I perform the cluster upgrade without disruption (NDU)?
Answer: Yes

4. What does the zero padding policy actually do? What's the impact of enabling it vs. disabling it (performance, throughput, bandwidth, etc.)?
Answer: Zero padding ensures that any area that was never accessed before is regarded as zeroed. On a read, any IO to an area that was never accessed before returns zeros. Any write that goes to a new chunk that was never accessed before results in additional writes ("padding") of zeroes to the areas of the chunk that were not written. It might reduce write performance, since we write more data for the purpose of padding (but only on the first write to a chunk). Zero padding is also a prerequisite for some functionality, such as the background scanner's data-comparison mode and RecoverPoint. Note that this attribute can be set on a Storage Pool only at creation time and cannot be changed later on.
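Those semantics can be sketched with a toy model (illustrative only; the chunk size here is invented and far smaller than a real allocation unit):

```python
# Toy zero-padded device: reads of never-written areas return zeros, and
# the first write to a fresh chunk costs one extra "padding" write.
CHUNK = 4  # invented chunk size, in bytes, for illustration

class ZeroPaddedDevice:
    def __init__(self):
        self.chunks = {}          # only chunks that were ever written
        self.padding_writes = 0   # extra writes caused by padding

    def write(self, offset, data):
        c, start = divmod(offset, CHUNK)
        if c not in self.chunks:
            self.chunks[c] = bytearray(CHUNK)  # pad whole chunk with zeros
            self.padding_writes += 1           # only on first touch
        self.chunks[c][start:start + len(data)] = data

    def read(self, offset, length):
        c, start = divmod(offset, CHUNK)
        if c not in self.chunks:
            return bytes(length)               # never written: all zeros
        return bytes(self.chunks[c][start:start + length])

dev = ZeroPaddedDevice()
assert dev.read(8, 2) == b"\x00\x00"  # untouched area reads as zeros
dev.write(0, b"hi")
assert dev.padding_writes == 1        # first write to the chunk padded it
dev.write(2, b"!!")
assert dev.padding_writes == 1        # same chunk: no extra padding write
```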

5. The connection for data movement is between the SDS and SDC. Let's say one of the connections fails: what are the number of retries and the timeout in seconds for the SDS before identifying it as failed? Are there settings for these two parameters?
Answer: ScaleIO automatically maintains the connections between components. If there is an error, ScaleIO will repeatedly try to reconnect until successful. There are internal tunables for retries and timeouts, but these are not exposed to customers.
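The reconnect behavior described here is essentially a retry loop with a timeout; a minimal sketch (the retry count and delay below are invented, since the real tunables are internal):

```python
import time

def connect_with_retry(try_connect, attempts=5, delay_s=0.01):
    """Keep retrying a connection attempt until it succeeds or we give up."""
    last_err = None
    for _ in range(attempts):
        try:
            return try_connect()
        except ConnectionError as err:
            last_err = err
            time.sleep(delay_s)  # back off briefly before the next attempt
    raise last_err

# Simulate a peer that is unreachable twice, then comes back.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("peer unreachable")
    return "connected"

assert connect_with_retry(flaky_connect) == "connected"
assert calls["n"] == 3  # two failures, then success
```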

6. Referring to Q5, same for the SDC: what are the number of retries and the timeout in seconds?
Answer: See answer to question #5

7. Can ScaleIO nodes of mixed versions operate together? The current version is 1.32; let's say I add new SDSs and SDCs with a newer version (e.g. 1.33, 1.34) into the current ScaleIO environment. Is that possible, or do I need to upgrade the existing ScaleIO version to match the new SDSs and SDCs?
Answer: The SDC can be an older or newer version (it is forwards and backwards compatible). With the SDS it's a bit trickier, since certain functionality won't be available unless all SDSs are at the same level. As a rule of thumb, all SDSs in the same Protection Domain should be at the same level. ScaleIO supports NDU, so there is no reason to stay in a mixed configuration of 1.32 and 1.33 (these are of course hypothetical version numbers). During the NDU there are times when there is a mix of SDSs at different code levels, but again, you shouldn't stay in that state.

8. Let's say I have a 1 TB HDD. Is it supported/recommended to partition it into smaller sizes and add each partition as a separate device? (1 TB = 250 GB x 4, 4 devices.)
Answer: One should have a good reason to do so, since it creates an unwanted dependency between all the partitions (they are all on the same physical device); we therefore recommend leaving the device whole and letting ScaleIO handle it as such. Having said that, you can do either. Both will work.

9. Can devices of mixed capacities be added to the same pool? Is it recommended, or is it better for each pool to have devices of the same size from each SDS?
Answer: It is recommended to work with devices of similar (not necessarily identical) capacity and performance characteristics in the same storage pool. Nevertheless, ScaleIO can handle devices of different sizes.

10. Let's say ScaleIO consists of 6 SDSs and 3 of them fail (shut down). Presumably the ongoing operation will stop and data will be corrupted. If I bring the 3 SDSs back up, will the data be rebuilt, since the existing data is still sitting on the disks in those 3 SDSs?
Answer: First things first: the data will not be corrupted, it will be unavailable. If you bring the SDSs back up, ScaleIO can use the devices with the existing data. There will be some rebalancing operations.
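A toy model of two-copy placement shows why a multi-node outage makes data unavailable rather than corrupt (the six node names and the pair-wise placement below are invented for illustration):

```python
import itertools

# Each chunk has two copies on two different SDS nodes. A chunk becomes
# unreachable only if BOTH of its nodes are down; the bits on disk are
# still intact, so bringing the nodes back restores availability.
nodes = ["sds1", "sds2", "sds3", "sds4", "sds5", "sds6"]
placement = list(itertools.combinations(nodes, 2))  # simplified spread

down = {"sds1", "sds2", "sds3"}
unreachable = [pair for pair in placement if set(pair) <= down]
assert unreachable == [("sds1", "sds2"), ("sds1", "sds3"), ("sds2", "sds3")]

# Once the failed nodes come back, every chunk is reachable again and
# ScaleIO rebuilds/rebalances instead of restoring from scratch.
down.clear()
assert all(not set(pair) <= down for pair in placement)
```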

11. Based on Q10, let's say the data can be recovered as long as the devices in the SDS are still healthy. In a scenario with 3 SDSs, is it possible to switch their HDDs to 3 new SDSs and keep all the data (volumes) intact? (Inactivate the protection domain, bring down the existing 3 SDS servers, install the SDS on 3 new servers, and physically move the HDDs from the old SDSs to the new ones.)
Answer: Yes. This is a manual process and should be done by ScaleIO support only.

12. Is there configuration on the SDS and SDC? If yes, how do I back it up?
Answer: Yes. For both the SDC and SDS, any non-volatile configuration can be re-created using information from the MDM. There is no need to back up anything.

13. Is there a way to back up the MDM configuration? Let's say I want to refresh the existing MDM servers to new servers. How do I migrate the existing configuration?
Answer: The MDM cluster configuration can be changed dynamically without downtime. ScaleIO provides the capability to change the MDM cluster membership during normal operation, and the MDM configuration is backed up by the cluster members. Moving to new MDM servers can be done by replacing the MDMs in the cluster configuration; this is a non-disruptive process. Furthermore, in the next version ScaleIO will support an MDM cluster with 3 repository copies (5 members).

1 Message

December 29th, 2015 18:00

Hello Experts,

I am doing an internal product evaluation of ScaleIO to propose this product in our upcoming solutions.

I have gone through the ScaleIO architecture but still have the following questions; if you can answer them, it would be a great help.

Customers with such a large scale of storage normally have heterogeneous infrastructure with the following performance requirements:

Tier 1: 400-600 IOPS, 5-7 ms latency, 400 TB capacity, 1,000 OS instances

Tier 2: 300-400 IOPS, 10-15 ms latency, 800 TB capacity, 3,000 OS instances

Tier 3: 200-300 IOPS, 15-20 ms latency, 1.5 PB capacity, 6,000 OS instances

Tier 4: 100-200 IOPS, >20 ms latency, 1 PB capacity, 4,000 OS instances

Q.1 In such an environment, if we have to introduce ScaleIO, which tier will be the right fit for ScaleIO?

Q.2 This environment consists of IBM SVC, EMC VMAX, VNX, Hitachi GAD and NetApp. How will I migrate the data from the existing environment to the new ScaleIO environment? If it's going to be host-based replication, will there be any downtime requirement? If yes, how many downtime windows will I require, and what data transfer throughput can I expect?

Q.3 The current environment is 80% virtualized, running on VMware and Hyper-V. The VMware environment runs on NFS datastores; how will an NFS datastore be migrated to ScaleIO? Will the migration process be the same as above, or is there different treatment for NFS datastores?

Q.4 Does ScaleIO have integration with vCenter and SCVMM for storage management, or will I have to depend completely on ScaleIO's native capabilities?
A.4: I have seen that ScaleIO can integrate with vCenter (the plug-in must register with vCenter), and from the Web Client all EMC ScaleIO functions can be performed. But can it be managed directly from the vCenter server, and in the case of Hyper-V can it be managed through SCVMM?

Q.5 If my VMware environment requires RDM disks, will ScaleIO support the RDM disk configuration? And in the case of Hyper-V, if I require a quorum disk, will ScaleIO support that as well?

Q.6 In VMware and Hyper-V environments my hosts will require vMotion and Hyper-V Replica; will ScaleIO support that?
A.6: I have read that ScaleIO supports vMotion (VMs migrating between hosts/nodes), but will it support Hyper-V Replica?

Q.7 What kind of replication does ScaleIO support? Is it sync or async?
A.7: ScaleIO works with EMC RecoverPoint to provide replication and disaster recovery protection for ScaleIO environments.

Is that correct? Is any other replication technology supported by ScaleIO?

Q.8 Does ScaleIO support QoS?

A.8: Yes, ScaleIO supports QoS (you can adjust the amount of bandwidth and storage any given ScaleIO data client can use; page 45), but ViPR does not support QoS for ScaleIO. Is that correct?

Q.9 How does ScaleIO support the backup environment? Does it support snapshot-based backup, or only host-based backup?

Q.10 Does ScaleIO support multi-tenancy?

A.10: ScaleIO does support multi-tenancy with Protection Domains and Storage Pools.

Is that correct?

Q.11 If a customer has to move off ScaleIO, what will be the method to move the data from the ScaleIO environment to a third-party storage environment?

Kindly provide some PoCs and proof points (any written EMC links stating this would be great).

9 Posts

March 18th, 2016 08:00

Hello

When deploying the MDM, SDC, etc. from the Gateway server, it does not allow you to enter another login ID (other than root and Administrator). Are you planning on changing that? In our environment only server admins have the Administrator login/password, but we have domain admin accounts, which would allow me to install if I could enter a login to use on the GW server. Otherwise we have to install manually, which leads to my next question.

I have followed the manual deployment steps, and in Task 2 (Creating the MDM Cluster) it notes an scli command, but that command does not seem to exist. I have installed the MDM, SDC, SDS and LIA on the server.

I have done a find on the server and it does not find that command.

Thanks

Jeff

5 Practitioner

 • 

274.2K Posts

April 4th, 2016 17:00

Have you had a chance to develop any ScaleIO deployment best practices? I work in global alliances and we have some service-provider-specific challenges; I'd like to gather any deployment best practices I can get my hands on.
