thomas_andy
1 Copper

File Clone

Has anyone used the new File Clone feature in OneFS 7.x? 

From my understanding, since Isilon does not allow writes to snapshots, File Clone is a read/write feature that keeps track of changes in a shadow file, so the clone can be written to and changed, then discarded when done.  I'm wondering whether a File Clone can be made from a snapshot or from replicated data that is read-only.

Thanks in advance


Re: File Clone

Here is a brief summary, "SnapshotIQ enables you to create file clones that share blocks with existing files in order to save space on the cluster. The blocks that are shared between a clone and cloned file are contained in a hidden file called a shadow store. Immediately after a clone is created, all data originally contained in the cloned file is transferred to a shadow store. Because both files reference all blocks from the shadow store, the two files consume no more space than the original file; the clone does not take up any additional space on the cluster. However, if the cloned file or clone is modified, the file and clone will share only blocks that are common to both of them, and the modified, unshared blocks will occupy additional space on the cluster."

For example, to clone test.txt from a snapshot directory to /ifs/test01/, you would log in to the OneFS CLI and use the 'cp -c' command:

cp -c /ifs/.snapshot/Snapshot2014Jun04/archive/test.txt /ifs/test01/test_clone.txt
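
If you want to convince yourself that the clone is independently writable even though the snapshot source is read-only, something like the following should work after running the command above (a rough sketch reusing the same example paths and snapshot name):

# The clone lives in the live file system, not in the snapshot, so it accepts writes
echo "scratch changes" >> /ifs/test01/test_clone.txt

# The read-only snapshot copy is untouched; only the modified blocks of the
# clone consume new space, while the unchanged blocks stay shared in the shadow store
diff /ifs/.snapshot/Snapshot2014Jun04/archive/test.txt /ifs/test01/test_clone.txt

# When you are done with the scratch copy, just remove it
rm /ifs/test01/test_clone.txt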

Happy cloning!

cadiletta
2 Iron

Re: File Clone

This actually sounds like the precise opposite of the Dedupe feature.  Dedupe finds files with duplicated content after the fact and collapses them to point to the same blocks, until one is edited and requires its own storage space.  With cloning we are just creating a new copy efficiently up front.  Sounds like a good idea.

Jeffey1
4 Germanium

Re: File Clone

The SmartDedupe software module enables deduplication to save storage space on a cluster by reducing redundant data. As you write files to the cluster, some of those files, or blocks of data within the files, might be duplicates. You can run a deduplication job that scans the file system to see if the data already exists. After duplicate blocks are discovered, SmartDedupe moves a single copy of those blocks to a special set of files known as shadow stores. During this process, duplicate blocks are removed from the actual files and replaced with pointers to the shadow stores.
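
If you want a rough feel for what block-level duplicate detection means, you can approximate it by hand from the shell (purely an illustration of the idea, not how SmartDedupe works internally; the path and the 8 KiB block size are made up):

# Split a file into fixed-size blocks and hash each block
split -b 8192 /ifs/test01/bigfile.dat /tmp/blk_

# Count how often each block hash repeats; repeated hashes are the blocks
# a dedupe pass could collapse into a single shadow-store copy
# (on Linux, use: md5sum /tmp/blk_* | awk '{print $1}')
md5 -q /tmp/blk_* | sort | uniq -c | sort -rn | head

# Clean up the temporary block files
rm /tmp/blk_*

In OneFS itself this is handled by the Dedupe job, which as I understand it samples and indexes blocks across the paths you configure rather than hashing everything in one pass.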
