February 24th, 2016 06:00

ScaleIO v1.32 - Architectural Noob Questions

Hello all!!

How's everybody doing?! Hopefully awesome!

This past week I deployed ScaleIO (v1.32) inside ESX (7 servers total, each with a bunch of disks). I read a lot of docs and watched some videos on Edutube, but I still have some questions (probably noob questions, sorry about that), and it's better to ask them, right?

Here we go.

  1. What does the ScaleIO Gateway VM do?
  2. During the network step I had to define 3 networks: one for management (where I had to assign an IP to the ScaleIO Gateway, one to the primary MDM, one to the secondary MDM, one to the tie breaker, and one to each ESX host). I thought only the MDMs were supposed to have an IP on the management network. What's the purpose of this network?
  3. Still on the network side, in the same step of the installation I had to assign IPs for DATA-1 and DATA-2. Is it correct to assume that one of these DATA networks is designated for SDS traffic (replication/rebalance) and the other for SDC traffic (SDCs accessing the data)?
  4. Deciding on DATA-1 and DATA-2 led me to think about a large deployment. If my deployment consisted of 500 nodes, DATA-1 and DATA-2 would each need to be a /23 network (since I can't configure a gateway for those DATA networks, right?). That seems odd to me, and it could get me in trouble in future deployments, since network admins don't like networks bigger than /24.
  5. When I was creating the storage pools (I created 2) I had to select which disks would be part of each SP. I have some SSDs on each server and I thought I could use them as cache (like we do with VMware vSAN), but I guess caching is only done in RAM, right? So the best thing we can do with SSDs in ScaleIO is put them in their own storage pool and mark that pool as "optimized for flash". Right?
  6. After the deployment was done, each ESX host had a VM for the SDS. Inside this VM I saw a lot of disks (which I believe are the physical disks of the host, since I used RDM). Those VMs can't vMotion to any other host, right? They should stay on their own ESX hosts... Right?

That's it. A lot of questions... Sorry...

Thanks a lot!

Rafael Bianco

51 Posts

February 24th, 2016 22:00

Rafaelbn,

1. The ScaleIO Gateway provides a REST API, as well as a web-based installation/upgrade manager (I've sketched a quick REST example after this list).

2. The management network is separate from the data networks, as you will generally need to manage hosts from IP addresses outside the data networks. It also doesn't need to be 10 GbE, as we recommend the ScaleIO data networks to be.

3. Not quite.  Unless you've configured them to be dedicated to one kind of traffic, interfaces/IPs on an SDS serve both SDS-SDC traffic (volume access) and SDS-SDS traffic (rebuilds, rebalances, and second copy writes) equally. 

4. I'd best let a Solutions Architect answer this conclusively, but don't forget that you will also need an IP on each subnet for every SDC.

5. XtremCache software is an option, as is LSI RAID controller CacheCade. Without either, you are correct that your best bet is a separate storage pool for the SSDs.

6. Correct, the SVMs (ScaleIO VMs) live and die with the host they belong to. 
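For what it's worth, here is roughly what talking to the Gateway's REST API looks like from Python. This is only a sketch: the gateway address and credentials are placeholders, and the endpoints should be double-checked against the REST API guide for your version. The pattern is that /api/login returns a token, which you then pass as the password on subsequent calls.

```python
# Rough sketch of authenticating against the ScaleIO Gateway REST API.
# The gateway address and credentials are placeholders; check the exact
# endpoints against the REST API guide for your ScaleIO version.
import requests

GATEWAY = "https://scaleio-gw.example.local"  # placeholder gateway address
USER, PASSWORD = "admin", "MyPassword1!"      # placeholder credentials

# 1. Log in: the call returns an auth token. verify=False only because the
#    gateway ships with a self-signed certificate by default.
token = requests.get(f"{GATEWAY}/api/login",
                     auth=(USER, PASSWORD), verify=False).json()

# 2. Reuse the token as the password for later calls, e.g. listing the
#    System objects the gateway knows about.
systems = requests.get(f"{GATEWAY}/api/types/System/instances",
                       auth=(USER, token), verify=False).json()
print(systems)
```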

6 Posts

February 25th, 2016 03:00

Thank you Mr. Rush! 

There are probably a lot of ways to control how ScaleIO uses each of its networks, but only after you install it; the wizard in vCenter is kind of limited...

Hopefully an architect can shed some light on that big deployment question (number 4). I think it's possible (or at least it should be), but it's probably a post-install configuration done through the CLI...

Thank you!!

6 Posts

February 25th, 2016 10:00

Rush,

I dug a bit more and found the answer to question 4. Apparently each SDS can have up to 8 IPs (serving various purposes), and for each IP you can decide its role. That's done through CLI commands.

Check this video (ScaleIO 1.32 - IP Roles) on Edutube: https://edutube.emc.com/html5/videoPlayer.htm?vno=6aBZq4xVt6PZpZrgkAEaOA==
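For anyone who finds this later, the commands end up looking roughly like the sketch below (run on the primary MDM). The SDS name, the IPs, and even the exact scli flag names are my own placeholders written from memory, so verify them with `scli --help` / the user guide before using:

```python
# Rough sketch (syntax NOT verified) of assigning per-IP roles to an SDS with
# scli, wrapped in Python just for illustration -- in practice you would run
# the scli commands directly on the primary MDM. The SDS name, IPs and flag
# names are placeholders from memory; confirm them with `scli --help`.
import subprocess

def run_scli(args):
    """Run an scli command and print its output (raises on a non-zero exit)."""
    print(subprocess.check_output(["scli"] + args, text=True))

# Dedicate one data IP to SDS-SDS traffic (rebuilds/rebalance/second copies)
# and the other to SDS-SDC traffic (volume access) on a hypothetical "sds01".
run_scli(["--modify_sds_ip_role", "--sds_name", "sds01",
          "--sds_ip", "192.168.151.11", "--new_sds_ip_role", "sds_only"])
run_scli(["--modify_sds_ip_role", "--sds_name", "sds01",
          "--sds_ip", "192.168.152.11", "--new_sds_ip_role", "sdc_only"])
```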

Thanks!

Rafael Bianco
