Peter_Sero
4 Operator
•
1.2K Posts
1
January 13th, 2017 00:00
X200 and X210 are treated as different node types if the drive and other specs differ significantly.
The good news is that when the specs are identical or close, you have the /choice/
to declare the node types equivalent or not equivalent, where the latter
means the cluster will split into two pools, as you want.
All relevant docs are linked here: Node Compatibility (Equivalency) and SSD Compatibility in OneFS 7.2 and Later - Isilon Info Hub
All relevant docs are linked here: Node Compatibility (Equivalency) and SSD Compatibility in OneFS 7.2 and Later - Isilon Info Hub
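For reference, a rough sketch of the relevant CLI. The exact syntax of the compatibilities commands changed between OneFS 7.2 and 8.x, so treat these as illustrative and check `--help` on your own cluster first:

```shell
# List the current node pools to see whether the X200s and
# X210s landed in one pool or two.
isi storagepool nodepools list

# If you later decide to merge them, create a node-class
# compatibility. NOTE: this syntax is from the 7.2-era docs;
# on 8.x the subcommand layout differs, so verify with
# `isi storagepool compatibilities --help` before running.
isi storagepool compatibilities active create X200 X210

# Confirm the compatibility took effect.
isi storagepool compatibilities active list
```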
hth
Peter
AdamFox
254 Posts
1
January 13th, 2017 07:00
Even if you have a system that meets the node compatibility matrix (same drives, equivalent memory), by default the cluster will still put the new nodes in a separate node pool unless you tell it to make the nodes compatible. So if you don't do that, you will have two node pools, and SmartPools can move data between those pools.
So either way it should work. This use case is exactly why compatibility has to be turned on manually.
Hope this helps.
DHCSTI
1 Rookie
•
48 Posts
0
January 13th, 2017 08:00
Thank you!
DHCSTI
1 Rookie
•
48 Posts
0
January 13th, 2017 08:00
The specifications for the two sets of nodes are very close; neither has SSDs.
Original nodes: X200-2U-Single-24GB-2x1GE-2x10GE SFP+-36TB
New nodes: X210-36T/24G/2X10GE 2X1GE (X210-SATA-010)
So I'm assuming that I'll be able to choose "not equivalent". I'll look at the link you provided. Thanks! Dawn
DHCSTI
1 Rookie
•
48 Posts
0
January 31st, 2017 11:00
I do have a follow-up question. After smartfailing the old nodes out (one at a time), should I renumber the new nodes so they are 1-4 again? I seem to remember reading, once upon a time, that some things automatically looked to node 1.
Peter_Sero
4 Operator
•
1.2K Posts
0
January 31st, 2017 23:00
Speaking of side effects, I've never experienced an impact to normal cluster operation when renumbering nodes.
It's just hard to keep track of node identities over time for purposes like incident and performance monitoring, where logical node numbers are used instead of node IDs.
At least the good news is that in recent versions of OneFS, the system logs include both numbers in each log entry.
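To see how logical node numbers map to device IDs on a given cluster, something like the following should work. The `isi_nodes` format-string fields here are from memory and may differ between OneFS releases, so verify against the man page on your version:

```shell
# Print hostname, logical node number (LNN), and device ID
# for every node in the cluster. The %{...} fields follow the
# isi_nodes format-string syntax; field names are assumptions
# to verify on your release.
isi_nodes "%{name}: lnn=%{lnn} devid=%{id}"

# `isi status` also shows both identifiers in its node table.
isi status -q
```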
Cheers
-- Peter
sluetze
2 Intern
•
300 Posts
0
January 31st, 2017 23:00
It's not node 1 specifically, but the node with the lowest Logical Node Number.
So you don't have to renumber, and I also wouldn't do it, because I'm wary of side effects.
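For completeness, if you do decide to renumber, it's done from the `isi config` console. A hedged sketch (the `lnnset` subcommand exists in the OneFS versions I've used, but confirm in your release's CLI guide before running it):

```shell
# Enter the cluster configuration console as root.
isi config

# Inside the console: renumber the node that currently has
# LNN 5 so it becomes LNN 1, then commit and exit.
#   >>> lnnset 5 1
#   >>> commit
#   >>> quit
```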
sluetze
2 Intern
•
300 Posts
0
February 1st, 2017 03:00
Sometimes I'm scared without a logical reason.
DHCSTI
1 Rookie
•
48 Posts
0
February 1st, 2017 06:00
Thank you. I too am wary of side effects, and would rather not renumber if it is not necessary.
Sent from my iPad