Multi-Folder Deployments in AWS S3 Storage Using CloudFormation and Ansible

To start with, this is nothing new. We are all familiar with multi-folder buckets, which provide hierarchy and let you apply different levels of policies to different folders.

This avoids the need to create separate buckets for different end users and keeps everything under a single umbrella, which makes management easier from the admin perspective.

Also, for each new blog post from now on, I will be writing with four main sections: What. Why. How. And………

What – was I facing?

I kind of covered the What part in the introduction itself, but let me give a quick recap. I had a requirement to create a multi-folder S3 bucket. This sounds easy, but the tricky part is that I had already invested a lot of effort in building my application with CloudFormation, and unfortunately AWS CFN, for all its claims of covering everything related to AWS configuration, apparently doesn't support it.

Why – was it tricky?

There are two whys for this issue. From the end-user perspective, why do they need it? With different levels of access required on the bucket, I had two options: create another bucket and control access with a policy, or create another folder in the same bucket and control access, … yes, you guessed it, with policies.

From my perspective, the reason I decided to write this blog and share the knowledge with you is that AWS CloudFormation doesn't support creating new folders in an S3 bucket during bucket creation itself.

How – did I do it?

So, if I cannot use CloudFormation, what other options do I have? What I found out is that if you create or upload a file to a specific path in the bucket, a "folder" appears with the name of the specified path. (S3 has no real folders; the key prefix is simply rendered as a folder in the console.)

Upload File - 

New file > s3://<bucket-name>/<folder-name>
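For example, the same trick can be sketched with the AWS CLI: either put a zero-byte object whose key ends in a slash, or upload any file under the desired prefix. The bucket and folder names below are placeholders, not the ones from my stack:

```shell
# Create an empty object whose key ends in "/"; the console renders it as a folder.
# "my-bucket" and "bucket-sub-folder" are placeholder names.
aws s3api put-object --bucket my-bucket --key bucket-sub-folder/

# Or upload any file under the prefix; the "folder" appears implicitly.
aws s3 cp ./info.txt s3://my-bucket/bucket-sub-folder/info.txt
```

Either way, once at least one key exists under the prefix, the folder shows up in the console.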

Let us explore a few ways of performing the above action:

  • Using the console and building the folders manually by uploading files with the path
  • Using the Ansible built-in module for AWS to upload from a central host

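For completeness, option 2 on its own might look something like the sketch below, using the `amazon.aws.s3_object` module with `mode: create` to create the directory key. The bucket name is a placeholder, and this assumes the `amazon.aws` collection is installed and AWS credentials are configured on the central host:

```yaml
# Sketch of option 2: creating the folder key directly from a central host.
- name: Create the sub-folder key in the bucket
  amazon.aws.s3_object:
    bucket: my-bucket            # placeholder bucket name
    object: /bucket-sub-folder/  # trailing slash makes it a "folder"
    mode: create
```
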
Since I wanted to perform this together with CloudFormation, I opted for option 2, using Ansible. To make it part of my stack, I mounted the bucket with s3fs from the bash user-data script in my CloudFormation template, so the new stack comes up with the bucket already mounted.

          !Sub |
            #!/bin/bash -xe
            mkdir -p /mybucketmainfolder/bucket-sub-folder
            echo "s3fs#${<bucket-name>}:/bucket-sub-folder /mybucketmainfolder/bucket-sub-folder fuse _netdev,allow_other,uid=1001,gid=1001,iam_role=${InstanceRole},use_cache=/tmp,url=https://s3.${AWS::Region}.amazonaws.com 0 0" >> /etc/fstab
            mount -a


The only reason I mounted the bucket on the instance first is that I wanted to reuse my existing Ansible playbook for the instance, so there was no point in running a separate Ansible playbook from my machine calling the S3 buckets directly through the built-in modules.

Instead, I can create a test file inside the EC2 instance, with the bucket mounted, and copy it from inside the instance to the bucket's new folder using the AWS CLI.
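Run by hand on the instance, those two steps boil down to roughly the following. The paths match my mount point above, and `<bucket-name>` is the placeholder for the actual bucket:

```shell
# Create a welcome file in the mounted sub-folder.
echo "Welcome to My Bucket Sub Folder, Customer Use Only" > /mybucketmainfolder/bucket-sub-folder/info.txt

# Push the folder's contents to the same prefix in the bucket via the AWS CLI.
cd /mybucketmainfolder/bucket-sub-folder/
aws s3 cp . s3://<bucket-name>/bucket-sub-folder --recursive
```

In the playbook, these become a copy task and a shell task, as shown next.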

    - name: Ansible creating a welcome file with content for the S3 folder
      ansible.builtin.copy:
        dest: /mybucketmainfolder/bucket-sub-folder/info.txt
        content: |
          Welcome to My Bucket Sub Folder, Customer Use Only

    - name: Simple PUT operation to the customer folder
      ansible.builtin.shell: aws s3 cp . s3://{{ bucketname }}/bucket-sub-folder --recursive
      args:
        executable: /bin/bash
        chdir: /mybucketmainfolder/bucket-sub-folder/


Of course it worked, else I wouldn't be writing about it. Let me know if you have tried the Ansible built-in module and whether it worked for you.

