Mount AWS S3 as a shared drive

Jackie
5 min read · Feb 24, 2020


AWS S3 is a popular choice nowadays for cloud storage. As Amazon claims:

Amazon S3 is designed for 99.999999999% (11 9’s) of data durability because it automatically creates and stores copies of all S3 objects across multiple systems.

It is a common need to mount a cloud drive (here, an S3 bucket) as a shared drive or local disk, so that it can be accessed from other cloud services or even from your local OS. That makes viewing, writing and updating files in the cloud much more convenient.

The tool I have leveraged here is called s3fs, which mounts an S3 bucket as a FUSE file system.

Here are the steps:

Create the S3 bucket

As a preliminary, we need to have the S3 bucket created.

There are two ways to create the bucket, either from the web console or using the AWS CLI.

Some prefer to use the web console, as it’s quite intuitive to access the console and create the bucket there. In most cases you just click the Create bucket button and follow the steps:

You will be able to review and make changes if needed before the bucket is created:

In most cases you will want to Block all public access.

Note: when you create the bucket using the console, you need to select the region. Make sure you choose a region physically as close as possible to the server/computer from which you will access the bucket. These are the regions and corresponding codes at the moment of writing:

However, once the bucket is created, you will note that:

Don’t be confused: the S3 console displays all buckets regardless of region. You can find the actual region a bucket resides in from the corresponding column.

Alternatively, if you are more familiar with the AWS CLI, you might like to create the bucket from the command line.

aws s3api create-bucket --bucket data-bucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

You need to have your AWS CLI set up properly before you are able to run the command:

aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]: json

After this, it will generate valid config and credentials files:

~/.aws/config

~/.aws/credentials
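For reference, the two generated files look roughly like the sketch below. The keys shown are AWS’s documented placeholder examples, not real credentials, and the sketch writes into a temporary directory so it won’t clobber your real ~/.aws files:

```shell
# Sketch of what `aws configure` writes; keys are placeholders.
# A temp dir stands in for ~/.aws so nothing real is overwritten.
AWS_DIR=$(mktemp -d)

cat > "$AWS_DIR/config" <<'EOF'
[default]
region = us-west-2
output = json
EOF

cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

grep 'region' "$AWS_DIR/config"
```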

Set up the proper access

After the bucket is created, you need to set up the proper access. There are two approaches for S3 access: an S3 bucket policy or an IAM policy.

Personally I think the IAM policy should be the de facto place for controlling access to most AWS resources. The reason is that it’s decoupled: an IAM policy is specifically about access control alone, and is not tied to any specific role or bucket. An S3 bucket policy, on the other hand, is bucket specific, which works fine if your bucket is long-lived.

You have both choices, though, so make your own judgment here.

For IAM policy, you need to create the role, then associate the principal/person who needs to access the bucket with the role:

Note: you need to grant these actions on the bucket:

"s3:ListBucket",
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"

So in the policy, it would look something like this:

For the object access, make sure you grant both resources (s3:ListBucket applies to the bucket ARN itself, while the object actions apply to the objects under it):

arn:aws:s3:::bucket
arn:aws:s3:::bucket/*
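To make that concrete, here is a minimal sketch of such a policy document. The bucket name data-bucket follows the earlier CLI example and the policy name is made up for illustration; adapt both to your setup:

```shell
# Minimal IAM policy sketch covering the four actions above.
# Bucket name "data-bucket" is an assumption from the earlier example.
cat > /tmp/s3-mount-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::data-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::data-bucket/*"
    }
  ]
}
EOF

# Then create it in IAM (needs admin rights, so shown commented out):
# aws iam create-policy --policy-name s3-mount-policy \
#     --policy-document file:///tmp/s3-mount-policy.json

python3 -m json.tool /tmp/s3-mount-policy.json > /dev/null && echo "policy JSON is valid"
```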

You can use the AWS Policy Simulator to confirm your access setup is correct:

https://policysim.aws.amazon.com/home/index.jsp?#

Alternatively, you can grant access through an S3 bucket policy:

Install s3fs either from a package or from source

After you have the S3 bucket created with the proper access, you can proceed with the installation of s3fs.

You can either install the package directly, for example:

# Ubuntu
sudo apt install s3fs

Alternatively, in case you would like to build from source with any customization:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install

Mount by role

Now, with the bucket created and s3fs installed, you can do the real mounting:

mkdir /mnt-drive && s3fs -o iam_role="role-from-step-2" -o allow_other S3-bucket /mnt-drive

In most cases, especially if you would like to access the mount from other cloud services, you need to mount by role. Normally those roles are tied to the cloud resources.

For example, if you would like to access it from an EC2 instance, simply attaching the role to that EC2 instance would work.

Mount by key

In most cases you might not have the access keys, as these are normally owned by the system admin. But just in case you do, you can mount using your AWS access keys:

echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs

mkdir /mnt-drive && s3fs -o passwd_file=${HOME}/.passwd-s3fs -o allow_other S3-bucket /mnt-drive

From here, you will be able to access your S3 bucket from the mount point:

FTP

In addition, in case you would like to expose the mount point as an FTP server:

sudo apt install vsftpd

sudo systemctl start vsftpd

Then you can access it from your FTP client.
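Out of the box vsftpd serves each user’s home directory, so to serve the mount you need a couple of lines in /etc/vsftpd.conf. A sketch, assuming the /mnt-drive mount point from above and local (non-anonymous) FTP users:

```
# /etc/vsftpd.conf — serve the S3 mount to local FTP users
local_enable=YES
write_enable=YES
local_root=/mnt-drive
chroot_local_user=YES
allow_writeable_chroot=YES
```

Restart vsftpd (sudo systemctl restart vsftpd) after editing the config.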
