Hello all, newbie here. I'm a Windows admin and know very little about Linux and nothing about Solaris, so here is my question. I need to back up a directory and its sub-directories on a nightly basis from both a Solaris server and a Linux (Red Hat) server to a share sitting on my Celerra NS20. I can make this share either NFS or CIFS (Samba), but I am not allowed to mount the share on either the Solaris or Linux server. I've been googling and haven't found anything that talks about rsyncing from a Solaris/Linux server to a Celerra, so I'm hoping someone here can help, or at least point me in the right direction.
Thanks in advance
Since the Celerra doesn't implement rsync natively, you would have to go through an intermediate server that mounts both the src and dst, as dynamox suggested.
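For the record, the intermediate-server approach would look roughly like this; every hostname and path below is made up for illustration:

```shell
# Run on a utility server that is allowed to mount both sides.
# solarisbox:/export/appdata and celerra:/backup are hypothetical names.
mount -o ro solarisbox:/export/appdata /mnt/src   # source, read-only
mount celerra:/backup /mnt/dst                    # destination Celerra share
rsync -a --delete /mnt/src/ /mnt/dst/appdata/     # mirror the tree nightly
umount /mnt/src /mnt/dst
```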
However I would try to negotiate with the Solaris and Linux admins.
Often they don't want NFS mounts because they fear the problems that non-working NFS mounts can cause during booting and runtime.
First - we have many customers that trust Celerra mounts enough to run their databases and other critical apps directly from a Celerra NFS share.
Second - you could optimize the mount: using bg,intr allows the client to boot even if the NFS mount isn't available,
or set up the automounter and only mount the share when needed - i.e. when the rsync runs.
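As a sketch of both options (server name, share path, and mount point are all placeholders; Solaris uses /etc/vfstab with slightly different columns):

```shell
# Linux /etc/fstab entry with the forgiving options:
#   bg   = retry the mount in the background so boot isn't blocked
#   intr = let processes stuck on a dead mount be interrupted
celerra:/backup  /mnt/backup  nfs  bg,intr,rw  0 0

# Or mount only on access via the automounter (Linux autofs):
# /etc/auto.master:
#   /mnt  /etc/auto.backup
# /etc/auto.backup:
#   backup  -rw,intr  celerra:/backup
```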
Running rsync directly on the client would be more efficient - it would also allow the Linux/Solaris admins to restore from there directly, and even access any snapshots of the data you might want to take.
Another alternative - if it's just a small number of files, or large files, you could use a small FTP mirror script on the clients;
the Celerra supports FTP natively, and it's not bad for large files.
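A minimal FTP push could look like this; the host, user, password, and paths are placeholders, and plain FTP sends credentials in the clear, so treat it strictly as a sketch:

```shell
#!/bin/sh
# Hypothetical names throughout -- adjust for the real Celerra share.
HOST=celerra.example.com
ftp -n "$HOST" <<'EOF'
user backupuser secretpassword
binary
prompt
lcd /app/data
cd /backup/appdata
mput *
bye
EOF
```

Note that classic ftp's `mput` does not recurse into subdirectories, so this suits a flat directory; a tool like lftp's `mirror` command handles whole trees.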
dynamox wrote: you don't need to mount the source on the utility server, only the destination (the Celerra in this case). The rsync client on the source machine sends data to an rsync daemon running on the utility server.
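That setup would mean an rsyncd.conf on the utility server plus a push from the source machine - roughly like this, with the module name, paths, and hostnames all invented for the example:

```shell
# /etc/rsyncd.conf on the utility server (which mounts the Celerra share):
#   [celerra-backup]
#       path = /mnt/celerra/backup
#       read only = false
#       hosts allow = solarisbox, linuxbox
#
# Then on the source machine, push straight to the daemon
# (the :: syntax addresses an rsync daemon module, not ssh):
rsync -a /export/appdata/ utilityserver::celerra-backup/appdata/
```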
True - I forgot.
You're still dragging the data over the network twice, though.
Thanks for the suggestions, and this is where my dilemma lies.
1) The 2 servers are turn-key systems: the vendor prohibits us from installing anything on them, will not allow any kind of exports to be mounted, period, and we don't even have access to log in to the servers.
2) We are basically a 99.99% Windows shop with the exception of these 2 non-Windows boxes, therefore we only have Windows admins here and nothing else.
3) Reasons 1 and 2 are why these 2 systems are turn-key: the vendor does 100% of the support on them, but they also come with all these restrictions.
4) The files are mostly average-sized application files, but they total 25GB of data, from what the vendor tells us, as we don't have access.
5) We thought about FTP, but another restriction is that we have to run their proprietary API to prep their server before the backup, and then another proprietary API after the backup - and there are 2 separate versions of the post-backup API, depending on whether the backup fails or succeeds.
6) This ONE script needs to be created by us without knowledge of their API, and we only know the name of the directory to back up, given to us by the vendor. The vendor will just take the script, drop it on their server, and schedule a cron job to run it - again, because we don't have access to the servers.
Basically the vendor expects us to work in the dark and magically create something that backs up their directory and works perfectly the first time and every time, without any kind of testing.
Why don't we just drop this vendor and move to someone better, you may ask? Well, let's just say it's all political at the highest level, and us peons have no say.
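For what it's worth, the one script the vendor wants could at least be reduced to a skeleton with every unknown stubbed out. Nothing below is the vendor's real API - each echo is a placeholder the vendor would swap for their actual prep command, transfer step, and success/failure commands:

```shell
#!/bin/sh
# All commands are stand-ins: the vendor substitutes their real
# prep API, transfer method, and success/failure post-backup APIs.
PRE_CMD="echo vendor-prep-api"           # vendor's pre-backup API (unknown)
TRANSFER_CMD="echo transfer-step"        # real transfer: ftp/rsync to Celerra
POST_OK_CMD="echo vendor-success-api"    # vendor's post-backup API, success
POST_FAIL_CMD="echo vendor-failure-api"  # vendor's post-backup API, failure

$PRE_CMD || exit 1        # stop if the prep step fails

if $TRANSFER_CMD; then
    $POST_OK_CMD          # transfer worked: run the success variant
else
    $POST_FAIL_CMD        # transfer failed: run the failure variant
fi
```

With the placeholders in place it prints vendor-prep-api, transfer-step, vendor-success-api, which at least lets the control flow be tested without ever touching the vendor's servers.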