September 20th, 2017 18:00

ecs-sync 3.2.3 using remote-copy (s3 CopyObject) fails

Hey folks,

We're working with ecs-sync 3.2.3 to move 500 million objects. The standard sync-server approach (pull each S3 object from the old endpoint, push it to the new endpoint) will take too long.

I'm looking at the "--remote-copy" option, which appears to indicate "in-box" support for a server-side bucket copy and which I'm hoping will cut the sync time by a factor of 10 or more. However, I'm hitting the following S3 error:

          com.emc.object.s3.S3Exception: This copy request is illegal because it is trying to copy an object to itself without changing object's metadata or encryption attributes.


It would appear that this should work; there's a sample XML config that shows an ECS S3 remote copy, but I can't seem to get past this error. Any pointers?
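
For context, my understanding is that remote copy turns each object into a single server-side CopyObject call instead of a download/upload round trip. Roughly like this, as a sketch with boto3 and made-up endpoint, credential and bucket names (not the actual ecs-sync internals):

import boto3

# Hypothetical names throughout; one client pointed at the ECS S3 endpoint,
# authenticated as the target side's object user.
s3 = boto3.client(
    's3',
    endpoint_url='https://ecs.example.com:9021',
    aws_access_key_id='TARGET_NAMESPACE_USER',
    aws_secret_access_key='TARGET_SECRET_KEY',
)

# Server-side copy: ECS moves the bytes internally, nothing streams through the client.
# My guess is that when the source and target resolve to the same bucket name and key,
# ECS sees a copy of an object onto itself and rejects it unless the metadata or
# encryption attributes change, which matches the error above.
s3.copy_object(
    Bucket='dd-cloud-tier-bucket',
    Key='some/object',
    CopySource={'Bucket': 'dd-cloud-tier-bucket', 'Key': 'some/object'},
)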


Thanks

Pete

September 21st, 2017 09:00

Thanks David,
See below. The source and target are the same ECS, but each has its own namespace and access/secret key.
Thanks

Pete
[Attachment: xml.PNG.png]

September 22nd, 2017 06:00

You cannot use remote copy to copy data between namespaces.  Namespaces are a logical separation of data and tenancy, which prevents this kind of operation.  If you absolutely need a different namespace, you will have to move the data (read it out and write it in).

Is there any way you can copy to a different bucket in the same namespace?
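
If you do end up moving the data between namespaces, every object boils down to a GET with the source namespace's credentials and a PUT with the target namespace's credentials. ecs-sync handles this for you; the sketch below (boto3, made-up endpoint, credential and bucket names, no error handling or multipart support) is just to show what "read it out and write it in" means:

import boto3

# Two clients against the same ECS endpoint, one per namespace (all names are made up).
src = boto3.client('s3', endpoint_url='https://ecs.example.com:9021',
                   aws_access_key_id='SRC_NAMESPACE_USER',
                   aws_secret_access_key='SRC_SECRET_KEY')
dst = boto3.client('s3', endpoint_url='https://ecs.example.com:9021',
                   aws_access_key_id='DST_NAMESPACE_USER',
                   aws_secret_access_key='DST_SECRET_KEY')

# Stream one object out of the source namespace and into the target namespace.
obj = src.get_object(Bucket='dd-cloud-tier-bucket', Key='some/object')
dst.put_object(Bucket='dd-cloud-tier-bucket', Key='some/object',
               Body=obj['Body'].read(),
               Metadata=obj.get('Metadata', {}))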

September 22nd, 2017 07:00

This makes sense, thanks Stu.
Our goal is to move data from a currently unencrypted bucket to an encrypted bucket. The catch is that the destination bucket(s) must eventually look identical to the originals, so that the Data Domain Cloud Tier which created them is unaware anything has changed on the ECS side. We could copy from the unencrypted source to a new encrypted destination bucket within the same namespace, but without the ability to rename that new bucket back to the original bucket name, this won't work from the Data Domain perspective.

Thanks

Pete

September 22nd, 2017 08:00

If you are using a different namespace, you must be using different credentials, which means the Data Domain config would have to change anyway. Can you not change the bucket name on the DD side?

September 22nd, 2017 13:00

So the issue is that the DD Cloud Tier, once set up, seems to have virtually no reconfigurable options other than updating the secret key. My thought was to move the data to a new encrypted namespace, then remove the old bucket owner and re-add it as the owner of the new namespace. A new secret key would be created, but that's not an issue, since the secret key is the one setting we can change on the DD Cloud Tier. There don't appear to be any options (at least, none exposed to the end user) to modify the DD Cloud Tier endpoint or the bucket names it creates when initially configured.

Thanks

Pete

September 25th, 2017 07:00

One other possibility would be to copy the data to a new plain-vanilla bucket (no SSE or anything special), delete the original bucket, re-create it with appropriate settings, then copy the data back in (all copies are remote).
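
As a rough sketch of that sequence (boto3, made-up bucket names; listing is paginated but error handling, multipart objects and metadata preservation are omitted, and in practice ecs-sync's remote-copy mode would do this at scale):

import boto3

s3 = boto3.client('s3', endpoint_url='https://ecs.example.com:9021')

def remote_copy(src_bucket, dst_bucket):
    # Server-side copy of every key; nothing streams through the client.
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=src_bucket):
        for item in page.get('Contents', []):
            s3.copy_object(Bucket=dst_bucket, Key=item['Key'],
                           CopySource={'Bucket': src_bucket, 'Key': item['Key']})

remote_copy('dd-bucket', 'dd-bucket-staging')  # 1. copy out to a plain staging bucket
# 2. delete 'dd-bucket' and re-create it with the desired (e.g. SSE) settings
remote_copy('dd-bucket-staging', 'dd-bucket')  # 3. copy back in

Both buckets sit in the same namespace here, which is what makes the remote copy legal.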

However, I will point out that in our experience, using remote copy is not much faster than moving the data out and back in (assuming you are on a local 10G connection).  My recommendation would be to configure the new bucket in a different namespace as you originally planned, but simply move the data, then reassign bucket ownership after the migration is complete.

September 26th, 2017 12:00

Thanks for the assistance, gentlemen. I figured this would be the answer, but wanted to make sure I wasn't missing something simple.

There doesn't seem to be a native S3 bucket rename. Feel like whipping up a method for a customer?

Thanks

Pete
