If I have a bucket with a very large number of files, what is the best way to recursively delete the objects so I can delete the bucket?
With s3cmd, the bucket is so large that a recursive delete fails. For one of the smaller buckets I'm doing s3cmd ls + s3cmd del with GNU parallel, but the larger bucket has 100,000,000+ objects and this method doesn't work.
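For reference, a minimal sketch of the ls + parallel approach described above (the bucket name and job count are placeholder assumptions, not from the thread):

```shell
# List every key recursively, extract the s3:// URL (4th field of
# "s3cmd ls" output), and fan the deletes out across 32 parallel jobs.
# Works for smaller buckets; listing alone becomes the bottleneck at
# 100M+ objects.
s3cmd ls --recursive "s3://my-bucket" \
  | awk '{print $4}' \
  | parallel -j 32 "s3cmd del {}"
```

Each delete is still one request per object, which is why this approach stops scaling on very large buckets.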
I can't view that page; it says Dell internal only.
If you are an External user and require access to inside Dell then please contact your Dell representative for more information. Please note External User access is limited to Dell approved customers, partners, and suppliers.
Research IS Systems Engineer III
Children's Hospital of Philadelphia Research Institute
The Roberts Center for Pediatric Research
2716 South Street
Philadelphia, PA 19104
Need faster service?
Try placing your request at http://cirrus.research.chop.edu before opening a manual service request.
Otherwise, you can submit an ad hoc request at https://chop.service-now.com/esp?sysparm_cancelable=true (general->other) and include 'please route to research IS'.
You can get the bucket-wipe tool here:
WARNING: This will erase the bucket and all of its data! Please make absolutely sure this is what you want.
usage: java -jar bucket-wipe.jar [options] <bucket-name>
 -a,--access-key <access-key>   the S3 access key
 -e,--endpoint <endpoint>       the endpoint to connect to, including
                                protocol, host, and port
 -h,--help                      displays this help text
 -hier,--hierarchical           enumerate the bucket hierarchically; this
                                is recommended for ECS
 --keep-bucket                  do not delete the bucket when done
 -l,--key-list <file>           instead of listing the bucket, delete the
                                objects matched in the source file key list
 --no-smart-client              disables the ECS smart-client; use this
                                option with an external load balancer
 -p,--prefix <prefix>           deletes only objects under the specified
                                prefix
 -s,--secret-key <secret-key>   the secret key
 --stacktrace                   displays the full stack trace of errors
 -t,--threads <count>           number of threads to use
 --vhost                        enables DNS buckets and turns off the
                                load balancer
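Putting the options together, a hedged example invocation (the endpoint, credentials, thread count, and bucket name below are placeholders you would substitute for your own):

```shell
# Wipe a very large bucket with hierarchical enumeration (-hier,
# recommended for ECS) and 32 worker threads. This DELETES the bucket
# and all of its data when it finishes.
java -jar bucket-wipe.jar \
  -e https://ecs.example.com:9021 \
  -a MY_ACCESS_KEY \
  -s MY_SECRET_KEY \
  -hier \
  -t 32 \
  my-huge-bucket
```

Add --keep-bucket if you only want to empty the bucket rather than remove it, and consider a dry run against a small test bucket first.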