You can use the S3 API to create a bucket in a replication group. Because ECS uses custom headers (x-emc), the string to sign must be constructed to include these headers. In this procedure the s3curl tool is used. There are also several programmatic clients you can use, for example, the S3 Java client.
Prerequisites
To create a bucket, ECS must have at least one replication group configured.
Ensure that Perl is installed on the Linux machine on which you run s3curl.
Ensure that the curl tool and the s3curl tool are installed. The s3curl tool acts as a wrapper around curl.
To use s3curl with x-emc headers, minor modifications must be made to the s3curl script. You can obtain the modified, ECS-specific version of s3curl from the EMCECS Git Repository.
Ensure that you have obtained a secret key for the user who will create the bucket. For more information, see the ECS Data Access Guide, available from https://www.dell.com/support/.
About this task
The EMC headers that can be used with buckets are described in Bucket HTTP headers.
Steps
Obtain the identity of the replication group in which you want the bucket to be created, by typing the following command:
GET https://<ECS IP Address>:4443/vdc/data-service/vpools
The response provides the name and identity of all data service virtual pools. In the following example, the ID is urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global.
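One way to issue this call is with curl against the ECS Management API. A sketch, assuming you have already obtained a management session token by logging in; the node address and the token value are placeholders:

```shell
# Sketch: list replication groups (data service virtual pools) via the
# ECS Management API. <token> is a placeholder for an X-SDS-AUTH-TOKEN
# obtained from a prior login; 203.0.113.10 is a placeholder node address.
curl -ks -H "X-SDS-AUTH-TOKEN: <token>" \
    "https://203.0.113.10:4443/vdc/data-service/vpools"
```

The response lists each replication group's name and its urn:storageos:ReplicationGroupInfo ID, which is used in the x-emc-dataservice-vpool header below.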
Set up s3curl by creating a .s3curl file in which to enter the user credentials.
The .s3curl file must have permissions 0600 (rw-/---/---) when s3curl.pl is run.
In the following example, the profile my_profile references the user credentials for the user@yourco.com account, and root_profile references the credentials for the root account.
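The .s3curl file uses the Perl hash syntax that s3curl reads. A sketch of such a file; the key values are placeholders, not real credentials:

```perl
%awsSecretAccessKeys = (
    my_profile => {
        id  => 'user@yourco.com',
        key => '<secret key for user@yourco.com>',
    },
    root_profile => {
        id  => 'root',
        key => '<secret key for root>',
    },
);
```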
The example uses the x-emc-dataservice-vpool header to specify the replication group in which the bucket is created, and the x-emc-file-system-access-enabled header to enable the bucket for file system access, such as for NFS or HDFS.
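A sketch of the bucket-creation request with the modified s3curl, assuming the root_profile credentials from the .s3curl file and the replication group URN returned in step 1; the endpoint address is a placeholder:

```shell
# Sketch only: requires the ECS-modified s3curl.pl and a reachable ECS
# S3 endpoint (203.0.113.10:9020 is a placeholder). The modified script
# folds the x-emc headers into the string to sign.
./s3curl.pl --debug --id=root_profile --acl public-read-write --createBucket -- \
    -H 'x-emc-file-system-access-enabled:true' \
    -H 'x-emc-dataservice-vpool:urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global' \
    http://203.0.113.10:9020/S3B4
```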
NOTE: The -acl public-read-write argument is optional, but can be used to set permissions that enable access to the bucket. For example, use it if you intend to access the bucket over NFS from an environment that is not secured using Kerberos.
If successful (with --debug on), output similar to the following is displayed:
s3curl: Found the url: host=203.0.113.10; port=9020; uri=/S3B4; query=;
s3curl: ordinary endpoint signing case
s3curl: StringToSign='PUT\n\n\nThu, 12 Dec 2013 07:58:39 +0000\nx-amz-acl:public-read-write
\nx-emc-file-system-access-enabled:true\nx-emc-dataservice-vpool:
urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global:\n/S3B4'
s3curl: exec curl -H Date: Thu, 12 Dec 2013 07:58:39 +0000 -H Authorization: AWS
root:AiTcfMDhsi6iSq2rIbHEZon0WNo= -H x-amz-acl: public-read-write -L -H content-type:
--data-binary -X PUT -H x-emc-file-system-access-enabled:true
-H x-emc-dataservice-vpool:urn:storageos:ObjectStore:e0506a04-340b-4e78-a694-4c389ce14dc8: http://203.0.113.10:9020/S3B4
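The Authorization value in the exec line above is the Base64-encoded HMAC-SHA1 of the StringToSign, keyed with the user's secret key. A minimal sketch with openssl, using a made-up secret key, so the resulting signature differs from the output above:

```shell
# Reproduce the signature scheme from the debug output:
# Authorization = "AWS <user>:" + base64(HMAC-SHA1(secret_key, StringToSign)).
# The secret key below is a made-up placeholder, not a real credential.
secret='EXAMPLEsecretKEYexampleSECRETkeyEXAMPLE'
string_to_sign='PUT


Thu, 12 Dec 2013 07:58:39 +0000
x-amz-acl:public-read-write
x-emc-file-system-access-enabled:true
x-emc-dataservice-vpool:urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global:
/S3B4'
signature=$(printf '%s' "$string_to_sign" \
    | openssl dgst -sha1 -hmac "$secret" -binary \
    | openssl base64)
echo "Authorization: AWS root:$signature"
```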
Next steps
You can verify the bucket creation by listing the buckets through the S3 interface.
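A sketch of such a listing request with s3curl, assuming the my_profile credentials from the .s3curl file; the endpoint address is a placeholder:

```shell
# Sketch: a GET on the S3 service endpoint lists the requesting user's
# buckets. 203.0.113.10:9020 is a placeholder ECS node address.
./s3curl.pl --id=my_profile http://203.0.113.10:9020/
```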