
January 4th, 2011 14:00

Dell PERC 6/i: Expander Support?

Hello,

I initially bought this card with its future capacity in mind: "32 physical drives".

(Source: http://accessories.us.dell.com/sna/productdetail.aspx?c=us&l=en&s=biz&cs=555&sku=341-9960&baynote_bnrank=0&baynote_irrank=7&~ck=dellSearch plus the PDF documentation)

Now that I've reached the standard 8-drive configuration, I looked into how to expand it and found that I need a "SAS expander" to attach more drives to the RAID card...

But Dell isn't able to suggest any model! Worse: even for the 6/E model (which isn't my concern), they recommended an MD1000 enclosure without considering my initial need: a SAS expander.

And I'm worried, because I found (by googling) that many users report a lack of expander support on the 6/i model!

I can't believe it, for two reasons:

- It's officially documented as supporting up to 32 drives
- Reaching that number seems impossible without a SAS expander...

Has anyone already tried to set up more than 8 drives on a PERC 6/i?

Thank you,

Sincerely,

946 Posts

January 6th, 2011 06:00

Can you tell me a little more about your setup? What system is this in? Do you have an external array?

10 Posts

January 7th, 2011 02:00

"Can you tell me little more about your setup. What system is this in? Do you have an external array?"
Hello,

My setup is a classic workstation (standard motherboard, standard CPU, etc.), but I added a PERC 6/i and use a custom rack so I can have multiple drives in a RAID volume (RAID 6).

Let's say I'm not worried about cabling, drive slots, etc. What I want to know is how, technically, to set it up :) ?

(Well, technically I know how, but there seem to be some incompatibilities.)

Thank you.

946 Posts

January 7th, 2011 07:00

OK, I don't think I fully understand, but:

The PERC 6/i adapter can support 32 drives, but that is limited by the backplane of the external enclosure it is attached to. Also, the adapter does not support creating virtual disks containing both internal and external drives. Once you have an external JBOD attached to the controller, you should be able to go into the controller BIOS or OpenManage and see/configure all the drives into arrays.
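For example, if you have OpenManage Server Administrator installed, its command line should list what the controller sees; a quick sketch (controller=0 assumes the PERC is the first/only controller in the system):

    omreport storage controller
    omreport storage pdisk controller=0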

10 Posts

January 9th, 2011 08:00

I'll try to simplify :) :

- I have a Dell PERC 6/i RAID controller.
- This controller has 2 internal ports (SFF-8484 connectors), also called "channels".
- On each channel, by default, you plug an "SFF-8484 to 4x SATA" cable, so you can attach 2 x 4 = 8 SATA drives. As an example: http://www.familie-neumann.name/ebay/perc5i_kabel.jpg (it shows a PERC 5/i, but it's just to visualize).
- Now I want to add extra drives (i.e., plug in more than the current 8): indeed, the specs say the PERC 6/i can support up to 32 drives.
- I tried to find out how to do it. The answer seems to be: unplug the current cables, connect the PERC to a SAS expander, then plug all the drives into that expander so the PERC "sees" all of them (bypassing the limit imposed by the 2 original cables). Expander example: http://www.norcotek.com/web/hpexpander.jpg
- BUT when I went to buy a SAS expander, I learned, reading many posts, that nobody has found an expander compatible with the PERC 6/i!
- This astonished me; I can't understand such a nonsensical situation...
- I contacted Dell for advice and it drove me crazy: they tried to make me forget the published spec (up to 32 drives) and wanted me to buy a brand-new Dell card instead, without considering my needs!
- So I'm searching and CAN'T find anyone who has tested the PERC 6/i with expanders! And I can't believe that NO ONE has tried to attach more than 8 drives to a PERC 6/i...
- I contacted various expander vendors; they can't answer my compatibility questions and tell me to contact Dell!
- Just going crazy...

I hope that's clear :)...

Thank you again.

Sincerely,

16 Posts

January 13th, 2011 20:00

Just plug the PERC into an external SAS/SATA JBOD with SES support, and as long as it has fewer than 32 bays you will be fine. The expander is part of the backplane. Xyratex, Newisys, LSI, Dell, Quanta, HP, and Supermicro all have a wide range of products, and newer SAS-2 (6Gbit) gear is now shipping. Many other vendors slap their names on one of the products I just mentioned, too.

I would choose something that supports the faster speed right from the beginning.

Now, not all enclosures are equal, and there are implementation and compatibility issues with SATA disks. First, if they aren't enterprise class, then forget it... you'll end up with data loss.

I have personally tested several dozen enclosures over the last 2 years, from a variety of vendors. Some are much better than others. If you are investing in an enclosure, you want one that supports SES and has the expander and backplane all in the same unit that you run cables to. Don't get a dumb box.

(One of the things I do professionally is write enclosure management software and diagnostics.)

So, bottom line: get an external SES-compliant "intelligent" enclosure, and cabling is all you need.

10 Posts

January 14th, 2011 06:00

Hello,

Thank you for your answer :).

What relieves me: all my research led me to the same conclusions, and they match your advice :) !

Indeed, even though I like to tinker a lot with my computer setups, for this purpose I've concluded that it's better to buy a turnkey solution :).

So I ended up opting for Supermicro solutions.

And I also opted for chassis+backplane+expander solutions that include SES-2 management :).

I was thinking about a product like this one:

http://www.supermicro.com/products/chassis/3U/936/SC936E1-R900.cfm

(Well, it's only SAS-1, but it gives you an idea of the model.)

Indeed, as it has a certain price (my project is for personal use :) ), it's better to choose a good one from the beginning that will last for some years :).

What do you think about this brand/grade of product?

By the way, as you're a professional, your advice is valuable :)

But, concerning my initial need:

I have a PERC 6/i, which has INTERNAL ports...

And I can't find anyone who seems to have tested, or can confirm, that it could work with an adapter...

Indeed, the main worry found in forums: despite the Dell specs, no one seems to have managed to use more than 8 drives (it's supposed to support up to 16 (or 32?) physical drives! And I know that Dell sells the T710 with 16 drives (but 2.5"..?)).

Here is my main problem: as I can't be sure, I'd like to avoid spending a lot on a beautiful new chassis only to find that it doesn't work...

Well, I think I'm facing an obligation: replace everything (including the RAID controller) :/

Thank you very much !!!!!

16 Posts

January 14th, 2011 07:00

Boot the system and let me know what the LSI-MPT BIOS version is, and also the chip P/N. (The BIOS version shows during POST. The chip info can be obtained several ways, depending on what O/S you use, but visual inspection is OK too.) Since your 6/i has internal ports, probably the best thing you can do is just sell it on eBay and use the money to get a controller with external ports. Supermicro, Dell, HP, and IBM all OEM the same LSI boards.
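For example, on Linux something like this shows the chip (a sketch; the output format varies by distro):

    lspci -nn | grep -i lsi

On Windows, Device Manager works, or a WMI query:

    wmic path Win32_SCSIController get Name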

Get one with dual ports. I've picked up HP's equivalent of the card on eBay for $75. What O/S?

16 Posts

January 14th, 2011 07:00

I've had several Supermicro enclosures here for testing. I found a few firmware bugs, but nothing that would affect day-to-day operations; they only matter if you want to do some enclosure management (so you know when a power supply or fan fails, or want to turn on an LED in response to a drive failure). Signal/data quality is above average.

The PERC controller is just an off-the-shelf LSI controller with essentially the same firmware LSI ships; the only changes are the VID/PID info and a few vendor-specific bit settings that I have not bothered to dig into. My company has a development agreement with LSI, so I have the programming manuals and APIs to dig into it.

Personally, I would not use that controller. The price of SAS-2 has come way down, and with that many disks you certainly want to be using 6Gbit/sec links instead of 3Gbit/sec. Pay a bit of a premium and future-proof. I would also go with the LSI-branded versions, not the PERCs (sorry, Dell). From a technology point of view, the reason is that patches/updates are available much sooner; Dell takes its sweet time modifying firmware upgrades and putting them online.

Right now in the lab I've got 60 HDDs hooked up to just one port of a SAS-2 controller. All 60 HDDs are in the same enclosure, a SAS-2 4U unit made by LSI (though that product isn't shipping yet). The point is that once you get into SAS-2, you really don't have to worry as much about outgrowing things.

I've hooked up 96 disks to LSI SAS-1 controllers and it ran fine. But in all fairness, you have to consider that there is a difference between using them with JBOD vs. RAID firmware. I almost always use software RAID, as I work mostly with appliance vendors... and the dirty secret is that all those expensive server appliances use SOFTWARE RAID and LSI controllers. You get much better performance and flexibility.

10 Posts

January 14th, 2011 14:00

What a pleasure to read detailed answers :) !

Indeed, I've read that some "limitations" may be introduced by Dell's firmware compared to the original LSI firmware, which has led some people to flash LSI firmware onto the PERC.

Initially I wanted an LSI one, but I mainly wanted to improve my RAID skills on a low budget, so I opted for a cheap PERC on eBay :).

Now that I've tested it enough, I will indeed reconsider my next choices.

Concerning hardware RAID, I chose it for two reasons:

- I was always "educated" with the statement: "hardware RAID is the true/best RAID mode"

- I had very bad experiences with fakeraid (ICHx motherboards).

Nowadays, after using RAID a lot, I'm reconsidering those statements:

- fakeraid ISN'T software RAID!
- however unbelievable it may seem, software RAID can be better/safer than hardware RAID

In parallel with using the PERC, I've been testing software RAID (mainly on Debian).
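For reference, my tests were along these lines (just a sketch; /dev/md0 and /dev/sd[b-i] are example device names):

    # 8-drive RAID 6 array with ext4 on top
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
    mkfs.ext4 /dev/md0
    # watch the initial sync
    cat /proc/mdstat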

I'm trying to find the current best choice(s) for software RAID... what would you advise: which RAID mode / filesystem?

(I'm currently using RAID 6)

RAID-Z2/Z3 on ZFS?

PERC 6/i:

Firmware Package Version: 6.2.0-0013 (I haven't yet installed the brand-new 6.3.x)

Firmware Version: 1.22.02-0612

The PERC 6/i is based on the LSI SAS1078 RoC chip.

I'm using Windows 7 as the OS, on a separate drive.

Which HP card?

Thanks :)

16 Posts

January 14th, 2011 15:00

ZFS is significantly better in performance, reliability, and features. (Heck, what other file system has integrated data compression, variable block-size I/O, MPIO, hot snapshots, online expansion, and N-way RAID, akin to RAID 6, 7, 8...?) You can build a pool with 10 SATA drives in a RAID 6 equivalent (RAIDZ2), then add a pair of SSDs into the same pool and the O/S will automatically move things around as necessary to give you the speed.
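A minimal sketch of that layout (pool and disk names are placeholders; on Solaris-type systems the disks look like c0t0d0 rather than disk0):

    # ten SATA disks in a double-parity (RAIDZ2) pool
    zpool create tank raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9
    # add a pair of SSDs as L2ARC read cache for speed
    zpool add tank cache ssd0 ssd1
    zpool status tank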

I'm working with a site now that puts 100TB per host using ZFS... and that is just the development machine. Part of the architecture is that the file system is effectively an extension of the kernel, and it will use as much available RAM as it can, on demand, to improve I/O and minimize reads. All writes, conversely, are flushed to disk, so data loss is difficult.

That account is using SAS-2 based hardware; the H700 is the Dell equivalent.

10 Posts

January 14th, 2011 16:00

Thank you very much for your answer :).

I'm going to consider/think about all the details you gave me :).

Sincerely,

January 20th, 2011 08:00

I don't know if it's too late, but I am using a Dell PERC 6/i RAID card with a Chenbro expander. I have 20 hard drives hooked up in RAID 6 for a media server. It's a custom solution using a Dell PERC 6/i + Chenbro expander + Supermicro bays.
I had compatibility issues in the beginning, but after lots of work, firmware updates, and running into problems, I now have a running system:
http://usa.chenbro.com/corporatesite/products_detail.php?sku=75

10 Posts

January 20th, 2011 23:00

Hello,

I can't believe it! I've spent so much time with Dell and reading posts that I had no hope left :)...

Everyone told me that it wasn't possible because of compatibility issues! Even with the CK13601!

I have to admit I can hardly believe your post :)! By the way, your setup is exactly the one I wanted initially :): Dell PERC 6/i + Chenbro SAS expander + Supermicro chassis :)!

Is there any post/thread or article describing all your (difficult) steps to get a working solution :)?

By the way, I wanted the Chenbro one because of a particular spec: it is powered by Molex, not by the PCI slot :) (unless I'm wrong: the PCI slot is only meant to hold the card :))...

Please, ashrafi1983, let me know how to get a working PERC 6/i with the CK13601 (which, by the way, has firmware bugs according to many posts) :)!

Thank you VERY VERY much!

Sincerely,

XZed

January 21st, 2011 00:00

The only problem you would have with the card is detecting the drives, due to Direct PD Mapping. You can use the MegaCli utility to disable it with a single command; I will help if you run into this problem, and you should be fine.
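The property is toggled per adapter with MegaCli, roughly like this (from memory, so please double-check the exact property syntax against the MegaCLI documentation for your version; -a0 assumes the PERC is adapter 0):

    MegaCli -AdpGetProp DirectPDMapping -a0
    MegaCli -AdpSetProp DirectPDMapping 0 -a0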
Yes, the Chenbro in my setup is connected to the RAID card with a SAS cable, and it is powered via a Molex connector.
I am using a Gigabyte motherboard, but I don't recommend that, as the Dell PERC does not work well with Gigabyte. In my opinion the Intel RAID drivers conflict with the RAID card, and you end up with a system that keeps restarting. The only workaround is to physically disable some pins.
What is your system configuration?
I used WD Green 2TB drives, and I don't recommend them. Even though you can make them work, for a long-lived, reliable RAID I would suggest you invest in Hitachi Deskstar drives or buy proper RAID/enterprise drives.
You should research the following (I assume at least a 10-drive setup):
- RAID type (5, 50, 6, 60): you must decide which RAID level you want to implement.
- Stripe size / cache size.
- Be aware of the 2TB limitation in Windows due to MBR (you will want to split your RAID into an OS drive and use GPT partitioning, or segregate it however you like); see the diskpart sketch below.
- Make sure you update the firmware.
- Install a fan on the Dell PERC 6/i card's passive heatsink.
- Depending on what chassis/case you intend to buy, you might need modifications.
- Allocation unit size when partitioning.
- If you will be using Windows Server, use Windows 2008 R2. There are issues due to MISALIGNMENT of drives (partition alignment in RAID); Win 2008 R2 takes care of this by default for hard drives, and you will see significantly better performance.
- Don't forget global hot spares.
Well, this is all off the top of my head at the moment.
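For the MBR/GPT point above, here is roughly what I mean in diskpart (a sketch; "disk 1" is just an example number, and convert gpt only works on an empty disk, so do it before creating partitions on the array):

    diskpart
    select disk 1
    convert gpt
    create partition primary align=1024

The align=1024 (in KB) also covers the partition-alignment point on Windows versions older than 2008 R2.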
Good luck!

10 Posts

January 21st, 2011 02:00

Well, I read your post many times to be sure:
For the PERC part, I think all is good. Why? Because, for my current setup, I already did all the research needed to get it functional (considering motherboard chipsets, etc.).

Just FYI: Gigabyte GA-73PVM-S2H (the PERC works without any pin modification) / Seagate 7200.12 1TB drives / RAID 6.

By the way, thank you very much for all the advice (misalignment, etc.).

But to summarize, if I understood correctly: you're telling me that, apart from enabling/disabling the "Direct PD Mapping" value, it should work "out of the box"?

I read that there are 2 hardware revisions of the Chenbro card: I suppose I should opt for "Rev B"?

Another question: once I'm back home I'll look into this value (Direct PD Mapping), but I'm worried: does changing the value wipe out my current RAID setup?

Indeed, I'm wondering if it will screw up my current RAID 6 array by changing the drive mapping...

Then, I'd like to know about your cabling:

- One cable like this one (http://www.pc-pitstop.com/sas_cables_adapters/8784-05M.asp) goes from the PERC to the CK13601?
(I worry about the "direction" of the cable: host-to-device or device-to-host.)
(So the second internal PERC port is unused?)
- And, for each mini-SAS port, one cable like this one (http://www.elpeus.com/images/cbl-sff8087ocr-06m.jpg) going to the drives?

Thank you very much.

In fact, I'm waiting for your answer before spending money on a CK13601 :).