
July 13th, 2009 10:00

Hi, can anyone tell me where exactly DAS/NAS/SAN is implemented?

Hi, I'm Nikhil, and I am currently pursuing my GNIIT certification course on Storage Foundation and Storage Design (by EMC). I want to know where exactly NAS is used and where a SAN solution is used. Some real-world scenarios would help me understand better. Thank you!

July 14th, 2009 03:00


DAS is directly attached to the server, sort of 1-to-1. A lot of SAN devices can be used as DAS, but nowadays, with the way storage is growing, and with all the high-availability features within operating systems (e.g. VMware ESX) that make use of SAN environments or shared storage, a lot of companies are putting in SANs.


Celerra is a good example. EMC calls it unified storage because it can have both. So, for example, imagine a university with lots of physical file servers. They could use the NAS part of the Celerra (CIFS or NFS), get rid of those physical servers, and host the file servers inside the Celerra, saving lots of physical space and cost. Now they just create some shares in the Celerra (CIFS or NFS), and everyone can connect to the shares just like before; the students won't even realise that there's been a change (probably because they are too busy getting drunk). They connect to their files and home directories just as before.

Second, the university has databases like Exchange (email servers) and SQL servers, which need performance (number crunching). If the university also bought the Fibre Channel option for the Celerra (or they could also use iSCSI here), then they can host the databases using what we call block-level storage. They will need some dedicated hosts/servers attached to the Celerra via either Fibre Channel or iSCSI, so these servers will be on a SAN, no NAS here.
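To make the file-level vs block-level distinction above concrete, here is a minimal sketch, simulated with in-memory objects. Nothing here is EMC-specific; the paths, block size, and function names are all invented for illustration.

```python
import io

# File-level access (NAS): the client names a file; the server owns the
# file system and translates that name to blocks behind the scenes.
shares = {"/home/nikhil/notes.txt": b"lecture notes"}

def nas_read(path):
    return shares[path]               # server-side lookup by name

# Block-level access (SAN): the host sees a raw volume and addresses it
# by block number; the host's own file system does the name-to-block work.
BLOCK_SIZE = 512
volume = io.BytesIO(bytes(4096))      # a tiny simulated LUN

def san_read(block_no):
    volume.seek(block_no * BLOCK_SIZE)
    return volume.read(BLOCK_SIZE)

print(nas_read("/home/nikhil/notes.txt"))   # b'lecture notes'
print(len(san_read(3)))                     # 512
```

The point of the sketch: with NAS the server decides where data lives; with SAN the host does, which is why databases that want tight control over their I/O tend to sit on block storage.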

Those are pretty cut-down explanations. Any help?


July 14th, 2009 10:00

Hi Nikhil,

I'm John McKenna, a Senior Technical Education Consultant for EMC's TS Education IP Storage team. I understand you want to know where NAS and SAN fit in the real world. I think I can explain that.

SAN (storage area networking) is the use of a specialized network of storage devices. This network is typically Fibre Channel attached, and presents storage volumes directly to a host device. It is sometimes referred to as "host-attached". As the host server that is attached to the SAN perceives SAN storage to be a locally attached disk, SAN connections are referred to as being "behind the server". Since the host-SAN connection is typically made on a limited-distance FC connection, response times are deterministic (i.e., they can be measured, and are reliably consistent within that measurement). This makes SAN attachment advantageous for applications that depend on transaction timing (e.g., database applications).

NAS (network attached storage) involves clients accessing data from network locations, using a specialized file server running a real-time, multi-threaded OS which has had all services not related to file I/O removed and has been specially optimized for file I/O performance. Although IP transmission is non-deterministic, NAS is a popular choice because of its relative ease of implementation, and because the Ethernet infrastructure necessary to support it is ubiquitous (much more so than FC infrastructure). Technically, NAS offers three capabilities that have no parallel in SAN attachment:

     The connection of very large numbers of clients to the same data source;

     The connection of clients over extended (essentially unlimited) distance, and;

     The connection of clients using CIFS and NFS for remote access to the same file system.

Applications that require common access to shared data (e.g. Video/Content Distribution, Telco, Software Development) are well-suited for NAS.

EMC also offers a NAS/SAN hybrid solution called MPFS (Multi-Path File System). This solution uses an IP connection to a NAS server, like any other NAS solution, but adds a separate data path between the NAS client and the SAN array. This is accomplished by adding either an FC HBA or a separate iSCSI NIC to the client, and then using these separate communication paths to the SAN array. In the MPFS solution, network clients make metadata requests (file open and close, allocation, locking, etc.) to the NAS server, but make data requests (read, write) directly to the array, using the HBA or iSCSI connection and a software agent running on the client. MPFS allows the access flexibility of NAS combined with the performance potential of SAN. Applications that could benefit from an MPFS connection include CAD/CAM, Video Streaming and Distributed/Grid computing.
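The MPFS split described above can be sketched as two cooperating components: a metadata path and a data path. This is a hypothetical toy model, not the real MPFS agent; the class names, file names, and extent layout are all invented.

```python
class NasServer:
    """Metadata path only: open/close, allocation, locking."""
    def __init__(self):
        # name -> (starting block on the array, number of blocks)
        self.extents = {"report.doc": (100, 4)}

    def open_file(self, name):
        return self.extents[name]     # answers with metadata, never data

class SanArray:
    """Data path: block reads over FC or iSCSI."""
    def __init__(self):
        self.blocks = {100 + i: f"block{i}".encode() for i in range(4)}

    def read_blocks(self, start, count):
        return b"".join(self.blocks[start + i] for i in range(count))

# The MPFS-style client agent: ask the NAS server where the file lives
# (over IP), then fetch the blocks directly from the array (over FC/iSCSI).
nas, san = NasServer(), SanArray()
start, count = nas.open_file("report.doc")    # metadata request
data = san.read_blocks(start, count)          # data request, bypassing NAS
print(data)   # b'block0block1block2block3'
```

The bulk data never passes through the NAS server, which is where the SAN-like performance comes from.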

I hope this helps!


July 17th, 2009 09:00

Greetings Nikhil,

I am a colleague of John McKenna and just want to add some food for thought.

Your question is a loaded one and has the dreaded answer we all hate: "It depends".

To add another perspective to John's comprehensive answer: data utilization profiles generally dictate the technology used to access the data. For example, if a particular database has a high-transaction, low-latency access profile, the natural tendency is to place it on some form of Fibre Channel SAN storage, or direct-attached storage with high-performance drives (FC SCSI or solid state). However, if we change this profile for the same database to be globally accessed, then we may want to use some form of NAS hybrid connectivity, Multi-Path File System, or even iSCSI access with high- to medium-performance disk drives (FC SCSI). If we change the profile once again, for the same database, to low-transaction, medium-to-high latency (e.g. web queries), then we could change this to perhaps an NFS access profile running on lower-performing hard drives (ATA).

So I guess what I'm really trying to say is that in most cases the access profile of the data will dictate the technology: DAS, NAS/hybrid, or FC SAN.
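The "it depends" decision above could be caricatured as a tiny lookup function. The thresholds and categories here are invented for illustration; real sizing depends on far more than three inputs.

```python
def suggest_storage(transactions, latency_tolerance, global_access=False):
    """Map a crude access profile to a storage technology, following the
    three example profiles described in the post above."""
    if global_access:
        # globally accessed data: NAS hybrid (MPFS) or iSCSI, FC drives
        return "NAS hybrid (MPFS) or iSCSI on FC SCSI drives"
    if transactions == "high" and latency_tolerance == "low":
        # high-transaction, low-latency: block storage, fast drives
        return "FC SAN or DAS on FC SCSI / solid-state drives"
    # low-transaction, latency-tolerant (e.g. web queries): cheap and simple
    return "NFS on lower-performing drives (ATA)"

print(suggest_storage("high", "low"))
print(suggest_storage("low", "high"))
print(suggest_storage("high", "low", global_access=True))
```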

I do hope that this helps you define the role played by each of these technologies.




July 22nd, 2009 12:00

I certainly don't profess any expertise in this area, but I came across an article that might help.


July 22nd, 2009 14:00

This is an interesting conversation, at least for me. And if I may, I'd like to weigh in on it just a little.

I have a good understanding of the DAS/SAN options and which is practical and which is functional. But with NAS, the considerable benefits I am seeing are:

1) Multi-platform access -- this is probably more common than what I see in my environment. We are primarily talking about a file server, and we would just as soon run that through an AD member server.

2) Optimized for file I/O -- a number of services are not running? But I have to ask, is the optimization any greater than what I could accomplish by simply locking down a 2003/2008 server? As Mr. Miyagi would say, "show me optimization".

3) Scalability -- we are currently considering a solution to archive medical imaging on a Centera about 1500 miles away. A local NAS head does this well.

An argument I have against file servers built on NAS heads is: the more data on the server (and a NAS can really handle large amounts of data), the longer it takes to back it up, no matter what the archive bit's value. This is simply because the backup agent has to look at every single file to determine the archive bit and then back it up as appropriate (my file server backups are in the lower 25% when it comes to backup performance). That is based on traditional backup strategies, not on snapshots, cloning, etc.
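The scan-every-file cost described above is easy to see in a sketch. This toy incremental backup uses mtime against a recorded baseline as a stand-in for the archive bit; the function name and parameters are invented.

```python
import os

def incremental_backup(root, last_backup_time, backup):
    """Walk every file under root, copying only the changed ones.

    Note that `examined` grows with the total file count even when
    `copied` stays near zero -- the cost the post above complains about.
    """
    examined = copied = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            examined += 1                               # every file is touched...
            if os.path.getmtime(path) > last_backup_time:
                copied += 1                             # ...even if few are copied
                backup(path)
    return examined, copied
```

Snapshot- or block-based strategies sidestep this because they track changed blocks instead of re-examining every file.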

I guess, in summary, I have only been faced with one real example of the need for a NAS. When I hear "file server", I think home directories and departmental shares. But when I came across a real reason for block-level access via the IP network, NAS technology hit a home run.

I'm going to the ISM book this evening to re-read the NAS section.



July 27th, 2009 11:00

Greetings Bart,

It certainly shows that you have a fair grasp of the concepts for SAN and NAS attached storage. I will try and embellish your answers a little.

1) One of the key areas that NAS addressed was the collaborative work environment: the ability to lock down certain areas of a file system for write access while still allowing read access to those areas. This is also tied to the way the application was written and how well it shares its information. By opening up this collaborative use of data, we can now apply many more server and people resources to address business requirements and meet deadlines, software version builds, and backup windows, and reduce all sorts of other bottlenecks we encountered when these processes were locked into a single resource.

2) "Optimized for file level I/O" means many different things and I will try to touch on a few here. Firstly disk geometry is important here based upon the about of data that can be written efficiently to a disk in a contiguous fashion. This can be affected by the type of file system implemented on the particularly NAS device. If we look at a dedicated NAS device there are possible two (maybe more) ways of looking at data access. We can either optimize data for read retrieval or for write efficiency.

  • For read-retrieval performance, we could ensure that the data is always written to a contiguous space that can accommodate it. This slows down writes as the disk becomes fuller, since the amount of available contiguous space shrinks as more and more data is written. However, with features like read-ahead cache and contiguous file utilization by the application, this allows for efficient data retrieval, because the data is always contiguously aligned. In a nutshell, this method tries to reduce the amount of fragmentation on the disk.
  • For write-optimized performance, we write the data wherever there is space for it on the disk. This speeds up the write operation, but as the disk becomes full, and if the application is read-intensive, gathering the data from wherever it was written can become a very long process. Writing data wherever there is available space can lead to fragmentation, depending upon the source file structure (e.g. lots of little files as opposed to very large contiguous files). As we know, fragmentation can adversely affect a disk's performance.
  • Another way to optimize for file-level I/O is to ensure that the amount of data transferred matches the underlying disk block architecture, and that the protocol packet size can accommodate either the same data block size or some divisible fraction of it. This is also determined by the network environment, so that we minimize packet fragmentation and retransmission.
  • Reduce the size of the kernel by stripping out areas of the OS that are not deemed critical to functionality, e.g. not supporting a complex graphical user interface, not supporting services like print services, etc.

In general-purpose NAS devices, all or some of these are implemented for the environment.
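The alignment point in the list above can be shown with a little arithmetic: a transfer whose size is a multiple of the disk block size touches exactly that many blocks, while an unaligned one pays for a whole extra block. The block size and function name here are illustrative only.

```python
DISK_BLOCK = 4096  # bytes; a common but hypothetical block size

def blocks_touched(transfer_size):
    """Number of disk blocks a transfer spans, counting partial blocks
    fully (ceiling division)."""
    return -(-transfer_size // DISK_BLOCK)

print(blocks_touched(8192))   # aligned: exactly 2 blocks
print(blocks_touched(8193))   # one extra byte drags in a 3rd block
```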

3) As far as backup is concerned, there is a protocol written especially to address the concern of backing up NAS devices: the Network Data Management Protocol (NDMP). Depending upon the device you are using, you can implement this protocol to back up the NAS data to either a directly attached tape device or library, or you could create a virtual tape device within the NAS array and back up the data to a traditional tape format internally. This means that the actual NAS data does not travel back over the network to the backup server or node; only the metadata is transferred. Therefore precious bandwidth is preserved and the backup time can be compressed. If you include in the mix a snapshot or clone of the production data, this backup can be initiated at any time and the importance of the backup window is reduced.
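The NDMP flow described above boils down to: bulk data goes from the NAS to its locally attached tape, and only a catalog of metadata crosses the network to the backup server. This is a toy model, not the real protocol; all names are invented.

```python
class NasWithTape:
    """A NAS head with a locally attached tape device (NDMP-style)."""
    def __init__(self, files):
        self.files = files            # name -> bytes
        self.tape = []                # records written to the local tape

    def ndmp_backup(self):
        catalog = []
        for name, data in self.files.items():
            self.tape.append(data)                # data stays local
            catalog.append((name, len(data)))     # only metadata leaves
        return catalog                            # shipped to backup server

nas = NasWithTape({"a.txt": b"12345", "b.txt": b"xy"})
catalog = nas.ndmp_backup()
print(catalog)          # [('a.txt', 5), ('b.txt', 2)]
print(len(nas.tape))    # 2 records on the local tape
```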

I hope this helps you with your ongoing quest for information and your newfound excitement for block-level I/O over traditional network environments. It certainly does open up a whole new world of connectivity and exploration.

Kind Regards



July 28th, 2009 10:00

Thanks for the reply and the thorough answers, Simon. I have a fair grasp at best of the technology, but as Johnny Five said, "More input!"

I can't read enough of these posts.



July 28th, 2009 11:00

Greetings Bart,

You are most welcome. I am glad that I have been able to assist you. Thank you for making these forums worthwhile with your questions.

Take care and happy storage ;-)



August 4th, 2009 12:00

Excellent reply John.

Great information, thanks.

Hope to see more of your wisdom as I plough through my EMC Proven programmes.

Snippets like this help in the learning process.

Thanks for your input.



August 26th, 2009 12:00

I thought those of you active in this discussion may be interested in a new discussion that was started today by Stu Miniman, from EMC's CTO office. He's presenting an online seminar titled "The Journey Towards the Converged Data Center: Fibre Channel over Ethernet (FCoE) and iSCSI". He's looking for input from the Proven Professional community on the topic in this thread.

Seems like a good seminar if you are interested in data center technologies.


September 3rd, 2009 07:00

Hi, please try the most tasteless answer:


DAS: host + HBA <---------> Storage (such as Symmetrix, CLARiiON)


SAN: host + HBA <--> fabric switch <---> Storage (such as Symmetrix, CLARiiON)


NAS: host + NIC <-----> LAN <----> Storage (such as Celerra)


September 3rd, 2009 13:00

Greetings Yongkang (forgive me if I address you incorrectly)

Thank you for your illustrations. As they say "a picture is worth a thousand words".

Happy Storage



September 3rd, 2009 13:00

Yongkang, I like your diagram! I don't think you really meant "tasteless", though.