From a purely technical point of view, a disk is always a disk: you can always add hdisk4 to a VG that already spans hdisk2 and hdisk3. The real question is: "Is it safe to extend my VG onto a different storage array?"
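For example, on AIX the extension itself is a one-liner regardless of which array sits behind the new hdisk. This is only an illustrative sketch; the VG and disk names are assumptions, not taken from any particular host:

```shell
# Illustrative only -- "datavg" and the hdisk numbers are assumed names.
# Discover the newly zoned LUN from the second array:
cfgmgr
# Add the new disk to the existing volume group:
extendvg datavg hdisk4
# Confirm the VG now spans all three disks:
lsvg -p datavg
```

The OS will happily do this; whether the resulting multi-vendor VG is supportable is the whole debate below.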
If you trust both vendors enough, you can extend your VG onto a different array. But if a problem occurs (any problem that may cause DU (data unavailability) or even DL (data loss)), you can't complain to either vendor; you are on your own. Not that bad, is it?
I think it's better NOT to extend your existing VGs onto a different storage array.
From an OS perspective there is no technical roadblock preventing you from doing this, but from a planning and availability perspective...
We have a Best Practice implemented in our environment that hosts should not be connected to multiple arrays (except during migrations). Out of several hundred SAN-attached hosts, we have only made an exception to this rule once.
The reasoning behind this (as Stephano pointed out) is that you are increasing the number of factors that could affect your storage integrity.
I should mention (since dynamox's comments reminded me) that we do allow multiple LPARs on our AIX boxes to access different arrays, but they have to use separate physical HBAs to do it.
And the one exception we allowed in our environment was for a host that required DMX3 for the performance of its primary application space, plus a large amount of low-performance, high-capacity disk for working space. They didn't want to pay for 3 TB of high-performance DMX3 space for "throw-away" data.
Good points. Allen even took it to the next level. We have some hosts where we present different arrays, but typically they are from the same vendor (EMC), so there is no issue with failover software. On big HP-UX Superdomes that are divided into partitions, we use CLARiiON for the dev/QA partitions while Symmetrix storage is used for prod. On one of my dev boxes I have CLARiiON, 8830, Hitachi, and StorageTek storage in one volume group. I know, I know... but it's a dev box, so who cares.
Nightmare! Especially when management wants one big dashboard with reports on storage utilization. Sure, maybe they saved a few bucks here and there by going multi-vendor, but weigh that against the time and effort it takes to keep up with each vendor's failover software and supported HBA firmware/drivers... and considering storage folks are not cheap, it would probably pay for itself to stick with one or two vendors.
I was offering the suggestion of nesting filesystems to try to meet the request (same mountpoint name) while still keeping the different storage in separate VGs. The data breakdown will show whether that is even possible: if the filesystem is a single directory holding all the storage, or hundreds of directories each with little bits of storage, this idea gets tossed right out the window...
You are saying that each vendor should have its own volume group, and I agree that it's a good idea. However, the original idea was to extend an existing filesystem onto a different storage array, i.e. to use the same VG spanning disks from different vendors.
I would agree with everyone: you really should not mix different vendors' storage within one volume group and one filesystem.
As a compromise, to make support a little cleaner: if you can find a good logical splitting point within the filesystem in question (e.g. maybe they have two directories, each with half of the actual data), you could create a mount point that lives on top of the other filesystem, living in its own volume group.
Something like this:
emcvg containing emclv which has a mountpoint /filesystem
and
ibmvg containing ibmlv which has a mountpoint of /filesystem/directory
Not clean, but in my mind better in the long run for supportability.
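The layout above can be sketched with standard AIX LVM commands. The disk names, LV sizes, and the /filesystem paths here are only illustrative (the VG/LV names come from the example above), and mount order matters: the outer filesystem must be mounted before the nested one.

```shell
# Illustrative sketch only -- disk names and sizes are assumptions.
# Outer filesystem on the EMC disks:
mkvg -y emcvg hdisk2 hdisk3
mklv -y emclv -t jfs2 emcvg 256
crfs -v jfs2 -d emclv -m /filesystem -A yes
# Nested filesystem on the IBM disk, mounted inside the first one:
mkvg -y ibmvg hdisk4
mklv -y ibmlv -t jfs2 ibmvg 256
crfs -v jfs2 -d ibmlv -m /filesystem/directory -A yes
# The outer filesystem must come up first:
mount /filesystem
mount /filesystem/directory
```

One caveat with nested mounts is unmount order: /filesystem/directory has to be unmounted before /filesystem, which backup scripts and shutdown procedures need to respect.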
Forgive me if I've been rude. Your idea is nice; however, as you noted, there are requirements that must be met before going that way. But it's a good idea.
xe2sdc · May 12th, 2008 01:00
Allen Ward · May 12th, 2008 08:00
Allen Ward · May 12th, 2008 08:00
dynamox · May 12th, 2008 08:00
dynamox · May 12th, 2008 09:00
Allen Ward · May 12th, 2008 10:00
RRR · May 13th, 2008 01:00
xe2sdc · May 13th, 2008 01:00
bodnarg · May 14th, 2008 12:00
xe2sdc · May 14th, 2008 12:00
bodnarg · May 14th, 2008 12:00
xe2sdc · May 14th, 2008 12:00
bodnarg · May 15th, 2008 05:00
The better suggestion is a new filesystem with its own volume group, etc., but sometimes you can't do that for technical or political reasons.