AWS Roles Anywhere Part 2

17 Mar 2024

Using AWS Roles Anywhere and Glacier

#Glacier And AWS Roles Anywhere

In my previous post, I set up IAM Roles Anywhere to enable temporary credentials. These credentials let your processes work with AWS just like long-term credentials, but you don't have to distribute or embed long-term secrets in your application.

#aws_signing_helper

AWS Signing Helper is a tool you can use to get temporary credentials after you've set up a Roles Anywhere role. I downloaded the Linux build with wget:

wget https://rolesanywhere.amazonaws.com/releases/1.1.1/X86_64/Linux/aws_signing_helper

#Create a Vault

I want to store my backups in S3 Glacier, so I went to Amazon S3 Glacier > Vaults, pressed Create Vault, and created a new vault named GgblogBackup.

Make a note of its ARN; you'll need it later.

#Getting Credentials to work with Vault

Recall that aws_signing_helper is a command-line utility that gets us temporary credentials through Roles Anywhere:

./aws_signing_helper credential-process \
    --certificate ./ggblogcert.pem \
    --private-key ./ggblog_key.pem \
    --role-arn    arn:aws:iam::XXXXXXXXXXXX:role/GgblogGlacierBackup \
    --profile-arn arn:aws:rolesanywhere:us-east-2:XXXXXXXXXXXX:profile/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX \
    --trust-anchor-arn arn:aws:rolesanywhere:us-east-2:518748733169:trust-anchor/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX

{"Version":1, "AccessKeyId":"AXXXXXXXXXXXXXXXXXXX", "SecretAccessKey":"YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY", "SessionToken":"ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ==", "Expiration":"2023-11-17T03:09:23Z"}

What's important here are the AccessKeyId, SecretAccessKey, and SessionToken fields in the output. Store these values in the following environment variables to use them with your applications: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.

With jq, the fields can be parsed out automatically:

./aws_signing_helper credential-process --certificate ./ggblogcert.pem \
    --private-key ./ggblog_key.pem \
    --role-arn arn:aws:iam::518748733169:role/GgblogGlacierBackup \
    --profile-arn arn:aws:rolesanywhere:us-east-2:518748733169:profile/82cc2660-ee5d-4759-a680-500cde05f7d7 \
    --trust-anchor-arn arn:aws:rolesanywhere:us-east-2:518748733169:trust-anchor/9bdf4513-d765-47b9-be61-efebf8cb35fe \
    | jq -r '.AccessKeyId, .SecretAccessKey, .SessionToken'

And assigned to the environment variables automatically with:

read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN < <(
    ./aws_signing_helper credential-process \
        --certificate ./ggblogcert.pem \
        --private-key ./ggblog_key.pem \
        --role-arn arn:aws:iam::518748733169:role/GgblogGlacierBackup \
        --profile-arn arn:aws:rolesanywhere:us-east-2:518748733169:profile/82cc2660-ee5d-4759-a680-500cde05f7d7 \
        --trust-anchor-arn arn:aws:rolesanywhere:us-east-2:518748733169:trust-anchor/9bdf4513-d765-47b9-be61-efebf8cb35fe |
    jq -r '.AccessKeyId, .SecretAccessKey, .SessionToken' | tr '\n' ' '
)

#Python

To test my setup, I used boto3 in Python to upload a file from a remote server. First, install boto3:

pip3 install boto3

Then the script:

import boto3
import subprocess
import json
import os

glacier_vault_name = "GgblogBackup"  # the vault created above
region = 'us-east-2'

# Call aws_signing_helper
command = [
    './aws_signing_helper', 'credential-process',
    '--certificate', './ggblogcert.pem',
    '--private-key', './ggblog_key.pem',
    '--role-arn', 'arn:aws:iam::518748733169:role/GgblogGlacierBackup',
    '--profile-arn', 'arn:aws:rolesanywhere:us-east-2:518748733169:profile/82cc2660-ee5d-4759-a680-500cde05f7d7',
    '--trust-anchor-arn', 'arn:aws:rolesanywhere:us-east-2:518748733169:trust-anchor/9bdf4513-d765-47b9-be61-efebf8cb35fe'
]

result = subprocess.run(command, capture_output=True, text=True)
if result.returncode != 0:
    raise Exception(f"Error in aws_signing_helper: {result.stderr}")

output = result.stdout
aws_credentials = json.loads(output)

# You can now use these credentials with Boto3 or another AWS SDK
os.environ['AWS_ACCESS_KEY_ID'] = aws_credentials['AccessKeyId']
os.environ['AWS_SECRET_ACCESS_KEY'] = aws_credentials['SecretAccessKey']
os.environ['AWS_SESSION_TOKEN'] = aws_credentials['SessionToken']

# connect to glacier with boto3 client
client = boto3.client('glacier',  region_name=region)

# upload a file
with open('./test_file', 'rb') as file_to_upload:
    response = client.upload_archive(
        vaultName=glacier_vault_name,
        archiveDescription='test_upload',
        body=file_to_upload,
    )

try:
    status_code = response['ResponseMetadata']['HTTPStatusCode']
    print("Status Code:" + str(status_code))
    location = response['location']
    print("Location: " + location)
except KeyError as e:
    print(f"A KeyError occurred: {e}")
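One thing worth doing right away: Glacier only lets you retrieve or delete an archive by its archive ID, and the upload_archive response includes it, so keep a local index of everything you upload. A minimal sketch (the glacier_index.jsonl filename is my own choice):

```python
import json
from datetime import datetime, timezone

def record_archive(response, index_path="glacier_index.jsonl"):
    """Append the archive id and location from an upload_archive response to a local index file."""
    entry = {
        "archiveId": response.get("archiveId"),
        "location": response.get("location"),
        "uploaded": datetime.now(timezone.utc).isoformat(),
    }
    with open(index_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Calling record_archive(response) after each upload means you can still find your archives months later without waiting on an inventory job.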

Storing hard-coded paths and ARNs is a terrible idea, so introduce some indirection:
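The script below reads its settings from a .env file via python-dotenv. The variable names match the os.environ.get() calls in the script; the ARNs are the ones used earlier, and the file paths are placeholders for your own:

```
CERTIFICATE_PATH=./ggblogcert.pem
PRIVATE_KEY_PATH=./ggblog_key.pem
ROLE_ARN=arn:aws:iam::518748733169:role/GgblogGlacierBackup
PROFILE_ARN=arn:aws:rolesanywhere:us-east-2:518748733169:profile/82cc2660-ee5d-4759-a680-500cde05f7d7
TRUST_ANCHOR_ARN=arn:aws:rolesanywhere:us-east-2:518748733169:trust-anchor/9bdf4513-d765-47b9-be61-efebf8cb35fe
```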

import subprocess
import json
from datetime import datetime
import os
import sys
import boto3
import tarfile
from dotenv import load_dotenv

# Configuration
db_name = "mailserver"
output_dir = "/backup"  # Directory where dumps will be stored
glacier_vault_name = "GgblogBackup"

# Timestamp for file naming
timestamp = datetime.now().strftime("%Y%m%d%H%M%S")

# Dump MySQL database
def dump_mysql():
    dump_file = f"{output_dir}/mysql_dump_{timestamp}.sql"
    dump_cmd = f"mysqldump --defaults-file=/root/glacier/my.cnf {db_name} > {dump_file}"
    subprocess.run(dump_cmd, shell=True, check=True)
    return dump_file


def get_aws_credentials():
    load_dotenv()
    certificate_path = os.environ.get('CERTIFICATE_PATH')
    private_key_path = os.environ.get('PRIVATE_KEY_PATH')
    trust_anchor_arn = os.environ.get('TRUST_ANCHOR_ARN')
    profile_arn = os.environ.get('PROFILE_ARN')
    role_arn = os.environ.get('ROLE_ARN')
    command = [
        '/root/glacier/aws_signing_helper', 'credential-process',
        '--certificate', certificate_path,
        '--private-key', private_key_path,
        '--trust-anchor-arn', trust_anchor_arn,
        '--profile-arn', profile_arn,
        '--role-arn', role_arn
    ]

    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        raise Exception(f"Error in aws_signing_helper: {result.stderr}")

    output = result.stdout
    credentials = json.loads(output)

    return credentials

def upload_to_glacier(source_path, archive_description):
    """
    Upload a file to our AWS Glacier vault.
    """
    # connect to glacier
    client = boto3.client('glacier', region_name='us-east-2')  # same region as the vault

    # read in the database dump and upload it
    with open(source_path, 'rb') as file_to_upload:
        response = client.upload_archive(
            vaultName=glacier_vault_name,
            archiveDescription=archive_description,
            body=file_to_upload,
        )
    try:
        status_code = response['ResponseMetadata']['HTTPStatusCode']
        if status_code < 200 or status_code >= 300:
            print(response)
    except KeyError as e:
        print(f"A KeyError occurred: {e}")


def main():
    try:
        mysql_dump = dump_mysql()
        
        aws_credentials = get_aws_credentials()
        # You can now use these credentials with Boto3 or another AWS SDK
        os.environ['AWS_ACCESS_KEY_ID'] = aws_credentials['AccessKeyId']
        os.environ['AWS_SECRET_ACCESS_KEY'] = aws_credentials['SecretAccessKey']
        os.environ['AWS_SESSION_TOKEN'] = aws_credentials['SessionToken']

        upload_to_glacier(mysql_dump, 'Mysql Backup: ' + mysql_dump)

        # tar up the mail directory and upload it too (/var/vmail is an assumption;
        # adjust for your mail server's layout)
        mail_archive = f"{output_dir}/mail_{timestamp}.tar.gz"
        with tarfile.open(mail_archive, "w:gz") as tar:
            tar.add("/var/vmail", arcname="mail")
        upload_to_glacier(mail_archive, 'Mail Backup: ' + mail_archive)

    except Exception as e:
        print("An error occurred:", e)

if __name__ == "__main__":
    main()
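The helper's output includes an Expiration timestamp, so a cron job that runs close to the session limit can check whether the cached credentials are still usable before calling the helper again. A small sketch of that check:

```python
from datetime import datetime, timezone

def credentials_expired(credentials, margin_seconds=60):
    """Return True if the credential set is expired, or will expire within margin_seconds."""
    expiration = datetime.strptime(
        credentials["Expiration"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    remaining = (expiration - datetime.now(timezone.utc)).total_seconds()
    return remaining < margin_seconds
```

If credentials_expired(aws_credentials) returns True, call get_aws_credentials() again before uploading.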


The following standalone script uploads a single file named on the command line.

import sys

import boto3

# Amazon S3 settings.
AWS_GLACIER_VAULT_NAME = 'FaceXGvMail'

def upload_to_glacier(source_path, archive_description):
    """
    Upload a file to our AWS Glacier vault.
    """
    # connect to glacier
    client = boto3.client('glacier', region_name='us-east-1')

    # read the file in binary mode and upload it
    with open(source_path, 'rb') as file_to_upload:
        response = client.upload_archive(
            vaultName=AWS_GLACIER_VAULT_NAME,
            archiveDescription=archive_description,
            body=file_to_upload,
        )
    print(response)


def main():

    # we need a filename to upload
    if len(sys.argv) == 1:
        sys.exit()

    # do the upload
    filename = sys.argv[1]
    print(f'Uploading {filename} to Amazon Glacier...')
    upload_to_glacier(filename, 'test upload: ' + filename)


if __name__ == '__main__':
    main()

The following code gets an inventory of your vault.

import subprocess
import json
import time
import os
import boto3


def get_aws_credentials():
    command = [
        './aws_signing_helper', 'credential-process',
        '--certificate', './clientcert_2.pem',
        '--private-key', './clientkey_2.pem',
        '--trust-anchor-arn', 'arn:aws:rolesanywhere:us-east-1:412187556337:trust-anchor/2017aecc-ac25-4376-bf37-03f168292081',
        '--profile-arn', 'arn:aws:rolesanywhere:us-east-1:412187556337:profile/f6ec1ee7-5725-45b4-8cae-5f324c2d0ebd',
        '--role-arn', 'arn:aws:iam::412187556337:role/FaceXGlacierMail'
    ]

    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        raise Exception(f"Error in aws_signing_helper: {result.stderr}")

    output = result.stdout
    credentials = json.loads(output)

    return credentials


def list_vaults(client):
    response = client.list_vaults(
        accountId='-',  # '-' means use account id of the credentials used to sign the request
        limit='10'
    )
    return response


def upload_archive(client, vault_name, archive_data):
    # Logic to upload an archive
    pass

def get_inventory(client, vault_name):
    # Kick off an inventory-retrieval job; these typically take hours to complete.
    response = client.initiate_job(
        vaultName=vault_name,
        accountId='-',
        jobParameters={
            'Description': 'Inventory Job',
            'Format': 'CSV',
            'Type': 'inventory-retrieval',
        },
    )

    # poll until the job finishes
    while True:
        retval = client.describe_job(vaultName=vault_name, jobId=response['jobId'])
        if retval['Completed']:
            break
        time.sleep(500)

    output = client.get_job_output(vaultName=vault_name, jobId=response['jobId'])
    return output['body'].read()


def download_data(client, vault_name, data_identifier):
    # Logic to download data
    pass

def main():
    try:
        # Get credentials w/ client certs
        aws_credentials = get_aws_credentials()
        # You can now use these credentials with Boto3 or another AWS SDK
        os.environ['AWS_ACCESS_KEY_ID'] = aws_credentials['AccessKeyId']
        os.environ['AWS_SECRET_ACCESS_KEY'] = aws_credentials['SecretAccessKey']
        os.environ['AWS_SESSION_TOKEN'] = aws_credentials['SessionToken']

        # connect to glacier
        client = boto3.client('glacier',  region_name='us-east-1')
        vaults = list_vaults(client)
        print(vaults)
        inventory = get_inventory(client, 'FaceXGvMail')
        print(inventory)

    except Exception as e:
        print("An error occurred:", e)

if __name__ == "__main__":
    main()
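The inventory job above was requested in CSV format, so the body returned by get_job_output can be parsed with the standard csv module. A sketch (column names like ArchiveId follow the Glacier inventory format, but verify against your own output):

```python
import csv
import io

def parse_inventory_csv(body_bytes):
    """Turn a Glacier CSV inventory body into a list of row dicts keyed by the header."""
    text = body_bytes.decode("utf-8") if isinstance(body_bytes, bytes) else body_bytes
    return list(csv.DictReader(io.StringIO(text)))
```

From there it's easy to pull out the archive IDs you need for retrieval or deletion jobs.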