AWS CLI Installer For Centmin Mod LEMP Stack Usage

For AWS CLI User Guide see https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html.

Download

Download https://awscli-get.centminmod.com/awscli-get.sh

installdir=/root/tools/awscli-get
mkdir -p $installdir
cd $installdir
curl -4s https://awscli-get.centminmod.com/awscli-get.sh -o $installdir/awscli-get.sh
chmod +x $installdir/awscli-get.sh

Usage

By default the script installs the 64-bit Linux aws-cli from https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip to /usr/local/bin/aws, and installs s5cmd from the latest release at https://github.com/peak/s5cmd/releases to /usr/local/bin/s5cmd.
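
As a rough sketch, the default install is roughly equivalent to the following manual steps (assuming the standard AWS CLI v2 bundled installer; the s5cmd version shown is only an example, the script detects the latest release itself):

# AWS CLI v2 bundled installer to /usr/local/bin/aws
cd /tmp
curl -4s https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip
unzip -q awscliv2.zip
./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update

# s5cmd release binary to /usr/local/bin/s5cmd (v2.2.2 is an assumed example version)
curl -4sL https://github.com/peak/s5cmd/releases/download/v2.2.2/s5cmd_2.2.2_Linux-64bit.tar.gz -o s5cmd.tar.gz
tar -xzf s5cmd.tar.gz s5cmd
mv s5cmd /usr/local/bin/s5cmd
chmod +x /usr/local/bin/s5cmd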

./awscli-get.sh help

Usage:

./awscli-get.sh {install|update} {default|profilename|defaultreset} AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION AWS_DEFAULT_OUTPUT
./awscli-get.sh regions
./awscli-get.sh regions-wasabi

Both the Amazon AWS CLI and s5cmd clients use the official Amazon S3 SDK, so you need to export the AWS Access Key and Secret Key credentials as environment variables for the script to pick them up; otherwise you get the following message. In your SSH session, type the export commands and fill in the variables with the credentials you generated via the AWS Console or that your 3rd party provider supplied.

./awscli-get.sh 

Error. AWS Key & Secret not detected
export relevant environment variables
and re-run script

example:

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
export AWS_DEFAULT_OUTPUT=text

To update /usr/local/bin/aws

./awscli-get.sh update
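
After an install or update, you can verify what is in place (output omitted; your versions will differ):

/usr/local/bin/aws --version
/usr/local/bin/s5cmd version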

Configuration

Running awscli-get.sh for the first time configures the default aws-cli profile

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh 

configure default aws-cli profile

aws-cli profile: default set:

aws_access_key_id: AKIAIOSFODNN7EXAMPLE
aws_secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
default.region: us-west-2
default output format: text
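
For reference, the resulting files should look roughly like the following (a sketch of the standard aws-cli file layout, not literal script output):

cat /root/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

cat /root/.aws/config
[default]
region = us-west-2
output = text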

If a default aws-cli profile is already set and detected, you'll get the below message about skipping default profile configuration. You'll also get info on how to use the defaultreset command to overwrite your existing default AWS CLI profile's AWS Key and Secret if you desire.

./awscli-get.sh                           

existing config file detected: /root/.aws/config
existing credential file detected: /root/.aws/credentials

skipping configuration...

detected existing default credentials
in config file: /root/.aws/config
in credential file: /root/.aws/credentials

To override the default profile, use defaultreset mode:

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh install defaultreset

Examples

List AWS S3 region codes

./awscli-get.sh regions
Africa (Cape Town)           af-south-1
Asia Pacific (Hong Kong)     ap-east-1
Asia Pacific (Mumbai)        ap-south-1
Asia Pacific (Osaka-Local)   ap-northeast-3
Asia Pacific (Seoul)         ap-northeast-2
Asia Pacific (Singapore)     ap-southeast-1
Asia Pacific (Sydney)        ap-southeast-2
Asia Pacific (Tokyo)         ap-northeast-1
AWS GovCloud (US-East)       us-gov-east-1
AWS GovCloud (US)            us-gov-west-1
Canada (Central)             ca-central-1
China (Beijing)              cn-north-1
China (Ningxia)              cn-northwest-1
Europe (Frankfurt)           eu-central-1
Europe (Ireland)             eu-west-1
Europe (London)              eu-west-2
Europe (Milan)               eu-south-1
Europe (Paris)               eu-west-3
Europe (Stockholm)           eu-north-1
Middle East (Bahrain)        me-south-1
South America (São Paulo)    sa-east-1
US East (N. Virginia)        us-east-1
US East (Ohio)               us-east-2
US West (N. California)      us-west-1

List Wasabi S3 region codes

./awscli-get.sh regions-wasabi
Wasabi US East 1 (N. Virginia)    s3.wasabisys.com or s3.us-east-1.wasabisys.com
Wasabi US East 2 (N. Virginia)    s3.us-east-2.wasabisys.com 
Wasabi US West 1 (Oregon)         s3.us-west-1.wasabisys.com
Wasabi EU Central 1 (Amsterdam)   s3.eu-central-1.wasabisys.com
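
Wasabi follows the same S3-compatible pattern shown for the providers in later sections: create a named profile and pass the matching Wasabi endpoint on the command line. A sketch, assuming a hypothetical profile named wasabi and a bucket named your_wasabi_bucket_name in Wasabi us-east-1:

export AWS_ACCESS_KEY_ID=your_wasabi_access_key
export AWS_SECRET_ACCESS_KEY=your_wasabi_secret_key
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh install wasabi

aws s3 ls --profile wasabi --endpoint-url=https://s3.us-east-1.wasabisys.com s3://your_wasabi_bucket_name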

Configure a custom profile

The AWS CLI command line client supports multiple AWS profiles for AWS IAM user generated Access Key and Secret Key pairs. To configure a custom aws-cli profile called myprofile, run the install command with the profile name.

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh install myprofile

existing config file detected: /root/.aws/config
existing credential file detected: /root/.aws/credentials

configure aws-cli profile: myprofile

aws-cli profile: myprofile set:

aws_access_key_id: AKIAIOSFODNN7EXAMPLE
aws_secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
default.region: us-west-2
default output format: text

list aws-cli profiles:

default
myprofile
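
The custom profile can then be selected with the --profile option on any aws-cli command, for example (your_bucket_name is a placeholder):

aws s3 ls --profile myprofile s3://your_bucket_name
aws s3 cp test.txt --profile myprofile s3://your_bucket_name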

Reset default aws-cli profile

To reset the default configured aws-cli profile

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh install defaultreset

existing config file detected: /root/.aws/config
existing credential file detected: /root/.aws/credentials

reset default aws-cli profile

aws-cli profile: defaultreset set:

aws_access_key_id: AKIAIOSFODNN7EXAMPLE
aws_secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
default.region: us-west-2
default output format: text

Cloudflare R2 S3 API Compatible Support

Create an aws-cli profile named r2 by running the install command with the profile name. When you generate new Cloudflare R2 API credentials, you'll get an Access Key ID and Secret Access Key. For Cloudflare R2, set AWS_DEFAULT_REGION=auto. The script automatically applies additional configuration adjustments, specifically max_concurrent_requests = 2, multipart_threshold = 50MB, multipart_chunksize = 50MB and addressing_style = path, to ensure Cloudflare R2 works properly.

export AWS_ACCESS_KEY_ID=CF_R2_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=CF_R2_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=auto
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh install r2

existing config file detected: /root/.aws/config
existing credential file detected: /root/.aws/credentials

configure aws-cli profile: r2
configure aws cli for Cloudflare R2
aws configure set s3.max_concurrent_requests 2 --profile r2
aws configure set s3.multipart_threshold 50MB --profile r2
aws configure set s3.multipart_chunksize 50MB --profile r2
aws configure set s3.addressing_style path --profile r2

aws-cli profile: r2 set:

aws_access_key_id: CF_R2_ACCESS_KEY_ID
aws_secret_access_key: CF_R2_SECRET_ACCESS_KEY
default.region: auto
default output format: text

list aws-cli profiles:

default
r2
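
After these adjustments, the r2 section of /root/.aws/config should look roughly like this (a sketch of the expected layout, not literal script output):

[profile r2]
region = auto
output = text
s3 =
    max_concurrent_requests = 2
    multipart_threshold = 50MB
    multipart_chunksize = 50MB
    addressing_style = path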

You need to pass --profile r2 and --endpoint-url=https://YOUR_CF_ACCOUNT_ID.r2.cloudflarestorage.com on the command line, where YOUR_CF_ACCOUNT_ID is your Cloudflare Account ID.

Example of uploading a test.txt file to a Cloudflare R2 bucket and then listing the contents of a bucket named YOUR_R2_BUCKET_NAME

aws s3 cp test.txt --profile r2 --endpoint-url=https://YOUR_CF_ACCOUNT_ID.r2.cloudflarestorage.com s3://YOUR_R2_BUCKET_NAME
upload: ./test.txt to s3://YOUR_R2_BUCKET_NAME/test.txt

aws s3 ls --profile r2 --endpoint-url=https://YOUR_CF_ACCOUNT_ID.r2.cloudflarestorage.com s3://YOUR_R2_BUCKET_NAME
2022-05-13 18:34:20          2 test.txt
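
The same profile and endpoint work with other aws s3 subcommands too, for example syncing a local directory (a sketch; /path/to/backups and the bucket name are placeholders):

aws s3 sync /path/to/backups --profile r2 --endpoint-url=https://YOUR_CF_ACCOUNT_ID.r2.cloudflarestorage.com s3://YOUR_R2_BUCKET_NAME/backups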

Linode Object Storage S3 API Compatible Support

Create an aws-cli profile named linode for a Linode Object Storage URL of your_linode_bucket_name.us-east-1.linodeobjects.com. The endpoint URL has the bucket name removed, so it becomes https://us-east-1.linodeobjects.com/. Here AWS_DEFAULT_REGION can be anything as it isn't used.

key=your_linode_object_key
secret=your_linode_object_secret
export AWS_ACCESS_KEY_ID=$key
export AWS_SECRET_ACCESS_KEY=$secret
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh install linode

existing config file detected: /root/.aws/config
existing credential file detected: /root/.aws/credentials

configure aws-cli profile: linode

aws-cli profile: linode set:

aws_access_key_id: your_linode_object_key
aws_secret_access_key: your_linode_object_secret
default.region: us-east-1
default output format: text

list aws-cli profiles:

default
myprofile
b2
do
linode

You need to pass --profile linode and --endpoint-url=https://us-east-1.linodeobjects.com/ on the command line. Example of uploading test.txt to and then listing the contents of a bucket named your_linode_bucket_name

aws s3 cp test.txt --profile linode --endpoint-url=https://us-east-1.linodeobjects.com/ s3://your_linode_bucket_name
upload: ./test.txt to s3://your_linode_bucket_name/test.txt  

aws s3 ls --profile linode --endpoint-url=https://us-east-1.linodeobjects.com/ s3://your_linode_bucket_name
2020-09-27 06:01:49          3 test.txt

If you try to use an endpoint URL that includes the bucket name, you will get an error like the one below

aws s3 ls --profile linode --endpoint-url=https://your_linode_bucket_name.us-east-1.linodeobjects.com/ s3://your_linode_bucket_name

An error occurred (NoSuchKey) when calling the ListObjectsV2 operation: Unknown

DigitalOcean Spaces Object Storage S3 API Compatible Support

Create an aws-cli profile named do with the endpoint https://sfo2.digitaloceanspaces.com. Here AWS_DEFAULT_REGION can be anything as it isn't used.

key=your_do_spaces_key
secret=your_do_spaces_secret
export AWS_ACCESS_KEY_ID=$key
export AWS_SECRET_ACCESS_KEY=$secret
export AWS_DEFAULT_REGION=sfo2
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh install do    

existing config file detected: /root/.aws/config
existing credential file detected: /root/.aws/credentials

configure aws-cli profile: do

aws-cli profile: do set:

aws_access_key_id: your_do_spaces_key
aws_secret_access_key: your_do_spaces_secret
default.region: sfo2
default output format: text

list aws-cli profiles:

default
myprofile
b2
do

You need to pass --profile do and --endpoint-url=https://sfo2.digitaloceanspaces.com on the command line. Example of uploading test.txt to and then listing the contents of a bucket named your_do_spaces_bucket_name

aws s3 cp test.txt --profile do --endpoint-url=https://sfo2.digitaloceanspaces.com s3://your_do_spaces_bucket_name
upload: ./test.txt to s3://your_do_spaces_bucket_name/test.txt

aws s3 ls --profile do --endpoint-url=https://sfo2.digitaloceanspaces.com s3://your_do_spaces_bucket_name
2020-09-27 05:37:53          3 test.txt

Backblaze S3 API Compatible Support

Create an aws-cli profile named b2 by running the install command with the profile name. When you generate new API credentials, Backblaze gives you a keyID and applicationKey, which are equivalent to the AWS Access Key ID and AWS Secret Access Key.

keyID=your_b2_keyid
applicationKey=your_b2_app_key
export AWS_ACCESS_KEY_ID=$keyID
export AWS_SECRET_ACCESS_KEY=$applicationKey
export AWS_DEFAULT_REGION=us-west-001
export AWS_DEFAULT_OUTPUT=text

./awscli-get.sh install b2

existing config file detected: /root/.aws/config
existing credential file detected: /root/.aws/credentials

configure aws-cli profile: b2

aws-cli profile: b2 set:

aws_access_key_id: your_b2_keyid
aws_secret_access_key: your_b2_app_key
default.region: us-west-001
default output format: text

list aws-cli profiles:

default
myprofile
b2

You need to pass --profile b2 and --endpoint-url=https://s3.us-west-001.backblazeb2.com on the command line. Example of listing the contents of a bucket named your_b2_bucket_name

aws s3 ls --profile b2 --endpoint-url=https://s3.us-west-001.backblazeb2.com s3://your_b2_bucket_name
2020-09-26 08:27:31          3 test.txt

s5cmd Usage

s5cmd is faster than s3cmd and aws-cli according to the AWS Blog and Joshua Robinson, and it uses the official AWS SDK to access S3, so it supports reading existing aws-cli configured credentials for the default user profile at /root/.aws/config and /root/.aws/credentials.
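
With the default aws-cli profile already configured, a quick sanity check is to list the buckets that profile can see (assuming at least one bucket exists; for S3-compatible providers you still need --endpoint-url as shown below):

s5cmd ls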

s5cmd known issues

To use a non-default aws-cli configured profile with s5cmd, export the AWS_PROFILE environment variable before running it, for example:

export AWS_PROFILE=myprofile
export AWS_PROFILE=b2
export AWS_PROFILE=default

s5cmd help

s5cmd help
NAME:
   s5cmd - Blazing fast S3 and local filesystem execution tool

USAGE:
   s5cmd [global options] command [command options] [arguments...]

COMMANDS:
   ls       list buckets and objects
   cp       copy objects
   rm       remove objects
   mv       move/rename objects
   mb       make bucket
   du       show object size usage
   cat      print remote object's contents to stdout
   run      run commands in batch
   version  print version
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --json                         enable JSON formatted output (default: false)
   --numworkers value             number of workers execute operation on each object (default: 256)
   --retry-count value, -r value  number of times that a request will be retried for failures (default: 10)
   --endpoint-url value           override default S3 host for custom services
   --no-verify-ssl                disable SSL certificate verification (default: false)
   --log value                    log level: (debug, info, error) (default: "info")
   --install-completion           install completion for your shell (default: false)
   --help, -h                     show help (default: false)

Comparing speed via the time command when listing a single file in the your_b2_bucket_name S3 bucket, s5cmd is approximately 56.5% faster than aws-cli.

export AWS_PROFILE=b2
time s5cmd --endpoint-url=https://s3.us-west-001.backblazeb2.com ls s3://your_b2_bucket_name
2020/09/26 08:27:31                 3 test.txt

real    0m0.809s
user    0m0.054s
sys     0m0.017s
time aws s3 ls --profile b2 --endpoint-url=https://s3.us-west-001.backblazeb2.com s3://your_b2_bucket_name
2020-09-26 08:27:31          3 test.txt

real    0m1.861s
user    0m1.039s
sys     0m0.067s

s5cmd supports cat for printing an object's contents to stdout. The S3 bucket file your_b2_bucket_name/test.txt contains the value 22

export AWS_PROFILE=b2
time s5cmd --endpoint-url=https://s3.us-west-001.backblazeb2.com cat s3://your_b2_bucket_name/test.txt
22

real    0m0.843s
user    0m0.083s
sys     0m0.018s

Back up only newer files. The -n, -s, -u options instruct s5cmd to only update the target if the source file has changed size or has a newer modification time than the target object.

export AWS_PROFILE=b2
time s5cmd --endpoint-url=https://s3.us-west-001.backblazeb2.com cp -n -s -u test2.txt s3://your_b2_bucket_name
cp test2.txt s3://your_b2_bucket_name/test2.txt

real    0m2.220s
user    0m0.127s
sys     0m0.057s

time s5cmd --endpoint-url=https://s3.us-west-001.backblazeb2.com cp -n -s -u test2.txt s3://your_b2_bucket_name

real    0m0.816s
user    0m0.072s
sys     0m0.018s

time s5cmd --endpoint-url=https://s3.us-west-001.backblazeb2.com ls s3://your_b2_bucket_name              
2020/09/26 08:27:31                 3 test.txt
2020/09/26 10:23:17                 3 test2.txt

real    0m0.813s
user    0m0.066s
sys     0m0.015s
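
A sketch of extending this to a whole directory using the same -n -s -u flags so only new or changed files are uploaded (s5cmd cp accepts wildcards; /home/backups and the destination prefix are placeholders):

export AWS_PROFILE=b2
s5cmd --endpoint-url=https://s3.us-west-001.backblazeb2.com cp -n -s -u '/home/backups/*' s3://your_b2_bucket_name/backups/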