ECS: How to configure and test s3cmd with ECS
Summary: This knowledge article explains how to configure s3cmd tools with ECS.
Instructions
Installation and configuration
1. Download the s3cmd tool from the additional info section.
2. Extract and install the s3cmd tool:
Commands:
# sudo tar xzf s3cmd-2.4.0.tar.gz
# cd s3cmd-2.4.0
# sudo python setup.py install
3. Run the s3cmd configuration:
Command:
# s3cmd --configure
Example:
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [XXXX]: objectuser for that bucket
Secret Key [XXX]: secret key
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [XXXXX]: XX.XX.XX.XXX:9020

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]:

Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No

On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: XXXXX
  Secret Key: XXXXXXXXX
  Default Region: US
  S3 Endpoint: XX.XX.XX.XXX:9020
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n]
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/home/admin/.s3cfg'
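The answers given during the dialog are saved to /home/admin/.s3cfg. A minimal sketch of the relevant entries is shown below; the values are illustrative placeholders, not actual credentials, and the key names (access_key, secret_key, host_base, host_bucket, use_https) are standard s3cmd configuration options:

```ini
# Sketch of the saved .s3cfg after the dialog above (illustrative values)
[default]
access_key = <object user access key>
secret_key = <object user secret key>
host_base = XX.XX.XX.XXX:9020
host_bucket = %(bucket)s.s3.amazonaws.com
use_https = False
gpg_command = /usr/bin/gpg
```

Editing this file directly is an alternative to re-running s3cmd --configure when only the endpoint or credentials change.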
4. Run s3cmd to list all buckets:
Command:
# s3cmd ls
Example:
admin@ecsnode:~/mrx> s3cmd ls
2024-06-05 06:54  s3://bucket
2024-05-01 15:46  s3://s3cmd_bucket
2024-05-09 11:56  s3://s3cmd_bucket1
2024-05-08 09:18  s3://winscp
5. List the contents of the bucket (no output is returned because the bucket is still empty):
Command:
# s3cmd ls s3://s3cmd_bucket
Example:
admin@ecsnode:~/mrx> s3cmd ls s3://s3cmd_bucket
admin@ecsnode:~/mrx>
6. Create a file and upload it to the bucket:
Commands:
# touch addressbook.xml
# s3cmd put addressbook.xml s3://s3cmd_bucket
Example:
admin@ecsnode:~/mrx> s3cmd put addressbook.xml s3://s3cmd_bucket
upload: 'addressbook.xml' -> 's3://s3cmd_bucket/addressbook.xml'  [1 of 1]
 0 of 0     0% in    0s     0.00 B/s  done
7. List the contents of the bucket:
Command:
# s3cmd ls s3://s3cmd_bucket
Example:
admin@ecsnode:~/mrx> s3cmd ls s3://s3cmd_bucket
2024-07-03 11:36         0  s3://s3cmd_bucket/addressbook.xml
admin@ecsnode8:~/mrx>
8. Read the file from the bucket:
Command:
# s3cmd get s3://s3cmd_bucket/addressbook.xml
Example:
admin@ecsnode:~/mrx> s3cmd get s3://s3cmd_bucket/addressbook.xml
download: 's3://s3cmd_bucket/addressbook.xml' -> './addressbook.xml'  [1 of 1]
 0 of 0     0% in    0s     0.00 B/s  done
Write performance test using PUT
1. Create a file:
Command:
# sudo fallocate -l 10G random10GB.bin
2. Use the time command to measure the duration of the write of file random10GB.bin into the s3cmd_bucket:
Command:
# time ./s3cmd put <file> s3://<bucket>
Example:
admin@ecsnode:~> time ./s3cmd put random10GB.bin s3://s3cmd_bucket
upload: 'random10GB.bin' -> 's3://s3cmd_bucket/random10GB.bin'  [part 683 of 683, 10MB] [1 of 1]
 10485760 of 10485760   100% in    0s    57.68 MB/s  done

real    3m8.872s
user    1m29.483s
sys     0m19.052s
To calculate the write throughput:
- 3 minutes 8 seconds = 188 seconds
- 10240 MB / 188 s = 54 MB/s (megabytes per second) is the upload speed.
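The arithmetic above can be reproduced in the shell. This is a minimal sketch using awk, with the size and elapsed time hard-coded from the run above (188 s is the "real 3m8.872s" line truncated to whole seconds):

```shell
# Upload throughput from the numbers above: 10240 MB written in 188 s.
size_mb=10240
elapsed_s=188
awk -v s="$size_mb" -v t="$elapsed_s" 'BEGIN { printf "%.0f MB/s\n", s/t }'
# prints: 54 MB/s
```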
Read performance test using GET
Use the time command to measure the duration of the read:
Command:
# time s3cmd get s3://<bucket>/<file>
Example:
admin@ecsnode8:~> time s3cmd get s3://s3cmd_bucket/random10GB.bin
download: 's3://s3cmd_bucket/random10GB.bin' -> './random10GB.bin'  [1 of 1]
 10737418240 of 10737418240   100% in   38s   263.31 MB/s  done

real    0m39.172s
user    0m23.688s
sys     0m12.637s
To calculate the read throughput:
- real time: 39 seconds
- 10240 MB / 39 s = 262.6 MB/s (megabytes per second) is the download speed.
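The same calculation can be scripted so the `real` value printed by time does not have to be converted by hand. This sketch (variable names are illustrative) parses a minutes-and-seconds string into seconds and divides the transfer size by it; using the full 39.172 s gives a slightly lower figure than the 39 s rounding used above:

```shell
# Parse the minutes/seconds string that `time` prints (e.g. "0m39.172s")
# into seconds, then divide the transfer size (in MB) by it.
real_time="0m39.172s"
secs=$(echo "$real_time" | awk -Fm '{ sub(/s$/, "", $2); print $1 * 60 + $2 }')
awk -v s=10240 -v t="$secs" 'BEGIN { printf "%.1f MB/s\n", s/t }'
# prints: 261.4 MB/s
```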
Additional Information
Download the s3cmd tool below:
https://s3tools.org/download
Testing reference:
https://geekmungus.co.uk/?p=4018