How to share a volume between cloud servers using DigitalOcean Spaces (2022)

Caleb Lemoine

#cloud #terraform #tutorial #tooling

Overview

So you need to share files across multiple servers in the cloud, but then you come to find out you can only attach a volume to one host! What do you do?!

Well, you have a few options:

  1. Create an NFS mount using another server

    • You could create a Network File System (NFS) export on another server, but this introduces a few challenges:
      • Your storage capacity is bound to one underlying server, creating a single point of failure.
      • You need some decent Linux chops to get this working and automate it.
  2. Create a filesystem mount using s3fs and DigitalOcean Spaces

    • Since DigitalOcean Spaces implements Amazon's S3 API, we don't actually have to use AWS S3 to use s3fs; we just need storage that implements the protocol.
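
Before automating anything, it helps to see what a manual s3fs mount looks like. This is just a sketch, not yet the tutorial's method: the key, secret, and bucket name are placeholders, and the actual mount command is left commented out because it needs real credentials and the s3fs-fuse package installed.

```shell
# s3fs reads credentials from a "KEY:SECRET" file that must not be
# readable by other users (it refuses to start otherwise).
echo "SPACES_KEY:SPACES_SECRET" > "$HOME/.passwd-s3fs"
chmod 600 "$HOME/.passwd-s3fs"

# Create a mount point for the bucket.
mkdir -p /tmp/mount

# Mount the bucket (commented out: requires s3fs-fuse and real credentials).
# The url option points s3fs at the Spaces endpoint instead of AWS S3:
# s3fs my-bucket /tmp/mount -o url=https://nyc3.digitaloceanspaces.com
```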

Why

Why would you use object storage for this filesystem to share between servers?

  1. It's cloud native.
  2. It's highly available.
  3. It's performant.
  4. It's cheap: $5/month for 250 GiB!
  5. You don't have to maintain and secure a separate server for storage.
  6. You can use the object storage for other applications outside of your servers via HTTP.
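
That last point deserves a quick illustration: every object in a Space gets a plain HTTPS URL built from the bucket name, region, and key, so anything that speaks HTTP can reach it. The bucket and key below happen to be the ones this post creates later; the URL pattern is what matters:

```shell
BUCKET=s3fs-bucket
REGION=nyc3
KEY=index.html

# Spaces object URLs follow <bucket>.<region>.digitaloceanspaces.com/<key>
URL="https://${BUCKET}.${REGION}.digitaloceanspaces.com/${KEY}"
echo "$URL"

# Fetch it with any HTTP client, e.g. (works if the object is public-read):
# curl "$URL"
```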

How to

OK, so this sounds pretty great, right? You can have object storage power your storage needs and easily share files across your servers. Today, I'll show you how using Terraform.

Create a DigitalOcean Spaces access key

First, you'll want to log in to the DigitalOcean console and create a new access key + access key secret. This pair will be used to authenticate with your DigitalOcean Spaces bucket to ensure only you can access the storage.

[Screenshot: the Spaces access keys section of the DigitalOcean console]

When you click "Generate New Key", you'll need to type your key name into the text box, then click the blue check mark. After you click the blue check mark establishing your key name, you'll see these 2 new fields (save these for later):

[Screenshot: the generated access key and secret]

DigitalOcean API Key

Now that you have your access key and secret for the spaces bucket, you'll still need an API key to use with Terraform to create a few resources such as DigitalOcean droplets and a Spaces bucket. This can also be done in the "API" section of the console.
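
We'll feed these values in through a terraform.tfvars file in a moment, but if you'd rather keep secrets out of files entirely, Terraform also reads any environment variable named TF_VAR_<variable name> and maps it onto the input variable of the same name. A quick sketch with placeholder values:

```shell
# Terraform maps TF_VAR_foo onto its input variable "foo",
# so these line up with the variables declared in main.tf below.
export TF_VAR_do_token="XXX"
export TF_VAR_spaces_access_key_id="XXX"
export TF_VAR_spaces_access_key_secret="XXX"
```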

[Screenshot: the API section of the DigitalOcean console]

Terraform Code

OK, so now we have our Spaces Access Key + Secret as well as our DigitalOcean API key. We can now move on to actually creating some droplets, a bucket, and share files across the two using s3fs.

A quick overview of what's happening below: we're creating a new bucket and 2 droplets that will share files back and forth. This is done by taking some example input, such as a region, mount point (filesystem path), and bucket name, then using cloud-init to mount the bucket on the droplets when they first boot.

First, let's make a terraform.tfvars file that takes in our configuration. It looks like this:

```hcl
# Spaces Access Key ID
spaces_access_key_id = "XXX"

# Spaces Access Key Secret
spaces_access_key_secret = "XXX"

# DigitalOcean API Token
do_token = "XXX"

# SSH Key ID to be able to get into our new droplets
# (can leave this empty if no need to ssh)
ssh_key_id = ""
```

Now we need to create a file called main.tf with the following content below. This will create our bucket and droplets and will configure s3fs on our droplets to be able to read and write files to the same bucket.

Please refer to the comments for the walkthrough of each component:

```hcl
# Needed for terraform to initialize and
# install the digitalocean terraform provider
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

# Expected input, DigitalOcean Spaces Access Key ID
variable "spaces_access_key_id" {
  type      = string
  sensitive = true
}

# Expected input, DigitalOcean Spaces Access Key Secret
variable "spaces_access_key_secret" {
  type      = string
  sensitive = true
}

# Expected input, DigitalOcean API Token
variable "do_token" {
  type      = string
  sensitive = true
}

# SSH key in DigitalOcean that will allow us to get into our hosts
# (not necessarily needed)
variable "ssh_key_id" {
  type      = string
  sensitive = true
  default   = ""
}

# DigitalOcean region to create our droplets and Spaces bucket in
# Let's just go with nyc3
variable "region" {
  type    = string
  default = "nyc3"
}

# Name of our DigitalOcean Spaces bucket
variable "bucket_name" {
  type    = string
  default = "s3fs-bucket"
}

# Where to mount our bucket on the filesystem of the DigitalOcean droplets
# Let's just default to /tmp/mount for demo purposes
variable "mount_point" {
  type    = string
  default = "/tmp/mount"
}

# Configure the DigitalOcean provider to create our resources
provider "digitalocean" {
  token             = var.do_token
  spaces_access_id  = var.spaces_access_key_id
  spaces_secret_key = var.spaces_access_key_secret
}

# Create our DigitalOcean Spaces bucket to store files
# that will be accessed by our droplets
resource "digitalocean_spaces_bucket" "s3fs_bucket" {
  name   = var.bucket_name
  region = var.region
}

# Let's create a sample file in the bucket called "index.html"
resource "digitalocean_spaces_bucket_object" "index" {
  region       = digitalocean_spaces_bucket.s3fs_bucket.region
  bucket       = digitalocean_spaces_bucket.s3fs_bucket.name
  key          = "index.html"
  content      = "<html><body><p>This page is empty.</p></body></html>"
  content_type = "text/html"
}

# Configure our DigitalOcean droplets via cloud-init:
# - Install the s3fs package
# - Create a system-wide credentials file for s3fs to be able to access the bucket
# - Create the mount point directory (/tmp/mount)
# - Call s3fs to mount the bucket
locals {
  cloud_init_config = yamlencode({
    packages = [
      "s3fs"
    ],
    write_files = [{
      owner       = "root:root"
      path        = "/etc/passwd-s3fs"
      permissions = "0600"
      content     = "${var.spaces_access_key_id}:${var.spaces_access_key_secret}"
    }],
    runcmd = [
      "mkdir -p ${var.mount_point}",
      "s3fs ${var.bucket_name} ${var.mount_point} -o url=https://${var.region}.digitaloceanspaces.com"
    ]
  })
}

# Convert our cloud-init config to user data
# User data runs at first boot when the droplets are created
data "cloudinit_config" "server_config" {
  gzip          = false
  base64_encode = false
  part {
    content_type = "text/cloud-config"
    content      = local.cloud_init_config
  }
}

# Create 2 DigitalOcean droplets that will both mount the same Spaces bucket
# These 2 hosts will share files back and forth
resource "digitalocean_droplet" "s3fs_droplet" {
  count     = 2
  image     = "ubuntu-20-04-x64"
  name      = "s3fs-droplet-${count.index}"
  region    = var.region
  size      = "s-1vcpu-1gb"
  ssh_keys  = var.ssh_key_id != "" ? [var.ssh_key_id] : []
  user_data = data.cloudinit_config.server_config.rendered
}

# Output our IP addresses to the console so that we can easily copy/pasta to ssh in
output "s3fs_droplet_ipv4_addresses" {
  value = digitalocean_droplet.s3fs_droplet[*].ipv4_address
}
```
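
For reference, here's roughly what the rendered user data looks like once yamlencode has done its thing, written out as a standalone cloud-config file with the defaults above filled in. This is an illustration rather than output copied from Terraform; note that a standalone file needs the #cloud-config header, while in the Terraform config the text/cloud-config content type plays that role.

```shell
# Write an example of the cloud-config the droplets receive at first boot.
# SPACES_KEY:SPACES_SECRET stands in for the real credentials.
cat > /tmp/cloud-config-example.yml <<'EOF'
#cloud-config
packages:
  - s3fs
write_files:
  - owner: root:root
    path: /etc/passwd-s3fs
    permissions: "0600"
    content: "SPACES_KEY:SPACES_SECRET"
runcmd:
  - mkdir -p /tmp/mount
  - s3fs s3fs-bucket /tmp/mount -o url=https://nyc3.digitaloceanspaces.com
EOF
```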

Terraform Output

Now that we have our configuration defined above, we simply need to run terraform init && terraform apply -auto-approve to create our things!

```
❯ terraform apply -auto-approve

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.s3fs_droplet[0] will be created
  + resource "digitalocean_droplet" "s3fs_droplet" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "ubuntu-20-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "s3fs-droplet-0"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "nyc3"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + ssh_keys             = (sensitive)
      + status               = (known after apply)
      + urn                  = (known after apply)
      + user_data            = "dc35535cfb286b2994e31baa83c32ef808b9bdff"
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_droplet.s3fs_droplet[1] will be created
  + resource "digitalocean_droplet" "s3fs_droplet" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "ubuntu-20-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "s3fs-droplet-1"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "nyc3"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + ssh_keys             = (sensitive)
      + status               = (known after apply)
      + urn                  = (known after apply)
      + user_data            = "dc35535cfb286b2994e31baa83c32ef808b9bdff"
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_spaces_bucket.s3fs_bucket will be created
  + resource "digitalocean_spaces_bucket" "s3fs_bucket" {
      + acl                = "private"
      + bucket_domain_name = (known after apply)
      + force_destroy      = false
      + id                 = (known after apply)
      + name               = "s3fs-bucket"
      + region             = "nyc3"
      + urn                = (known after apply)
    }

  # digitalocean_spaces_bucket_object.index will be created
  + resource "digitalocean_spaces_bucket_object" "index" {
      + acl           = "private"
      + bucket        = "s3fs-bucket"
      + content       = "<html><body><p>This page is empty.</p></body></html>"
      + content_type  = "text/html"
      + etag          = (known after apply)
      + force_destroy = false
      + id            = (known after apply)
      + key           = "index.html"
      + region        = "nyc3"
      + version_id    = (known after apply)
    }

Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + s3fs_droplet_ipv4_addresses = [
      + (known after apply),
      + (known after apply),
    ]

digitalocean_spaces_bucket.s3fs_bucket: Creating...
digitalocean_droplet.s3fs_droplet[1]: Creating...
digitalocean_droplet.s3fs_droplet[0]: Creating...
digitalocean_spaces_bucket.s3fs_bucket: Still creating... [10s elapsed]
digitalocean_droplet.s3fs_droplet[1]: Still creating... [10s elapsed]
digitalocean_droplet.s3fs_droplet[0]: Still creating... [10s elapsed]
digitalocean_droplet.s3fs_droplet[1]: Still creating... [20s elapsed]
digitalocean_droplet.s3fs_droplet[0]: Still creating... [20s elapsed]
digitalocean_spaces_bucket.s3fs_bucket: Still creating... [20s elapsed]
digitalocean_spaces_bucket.s3fs_bucket: Creation complete after 28s [id=s3fs-bucket]
digitalocean_spaces_bucket_object.index: Creating...
digitalocean_spaces_bucket_object.index: Creation complete after 0s [id=index.html]
digitalocean_droplet.s3fs_droplet[0]: Still creating... [30s elapsed]
digitalocean_droplet.s3fs_droplet[1]: Still creating... [30s elapsed]
digitalocean_droplet.s3fs_droplet[0]: Still creating... [40s elapsed]
digitalocean_droplet.s3fs_droplet[1]: Still creating... [40s elapsed]
digitalocean_droplet.s3fs_droplet[1]: Creation complete after 43s [id=283287872]
digitalocean_droplet.s3fs_droplet[0]: Creation complete after 43s [id=283287873]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

s3fs_droplet_ipv4_addresses = [
  "165.227.106.47",
  "45.55.60.230",
]
```

Sharing files

Cool! Now that we have our bucket and some droplets already configured, let's SSH into both and check out that /tmp/mount path we set up in our Terraform configuration above.

[Screenshot: terminal sessions on both droplets showing the mounted bucket]

Let's do a recap of what's happening above.

On both s3fs-droplet-0 and s3fs-droplet-1, I ran df -h | grep s3fs, which shows disk usage for all of the mounted volumes, filtered for the term s3fs to shorten the list. This shows us that our bucket is mounted and available at /tmp/mount! Hooray!

```
root@s3fs-droplet-0:/tmp/mount# df -h | grep s3fs
s3fs            256T     0  256T   0% /tmp/mount
root@s3fs-droplet-1:/tmp/mount# df -h | grep s3fs
s3fs            256T     0  256T   0% /tmp/mount
```

Next, I ran ll /tmp/mount on both hosts to list the contents of the bucket. The index.html file that I created in the bucket in the Terraform code is there and is viewable by both droplets. Awesooooome!

```
root@s3fs-droplet-0:/tmp/mount# ll /tmp/mount/
total 5
drwx------  1 root root    0 Jan  1  1970 ./
drwxrwxrwt 12 root root 4096 Jan 22 18:54 ../
-rw-r-----  1 root root   52 Jan 22 18:48 index.html
root@s3fs-droplet-1:/tmp/mount# ll /tmp/mount/
total 5
drwx------  1 root root    0 Jan  1  1970 ./
drwxrwxrwt 12 root root 4096 Jan 22 18:54 ../
-rw-r-----  1 root root   52 Jan 22 18:48 index.html
```

OK, so next I ran a touch command on s3fs-droplet-0 which created a file in /tmp/mount:

```
root@s3fs-droplet-0:/tmp/mount# touch file_from_$(hostname)
```

I used $(hostname) to substitute the name of the droplet in the file name so that we can see said file on s3fs-droplet-1. Let's have a look and see if that file is viewable on the other server.

```
root@s3fs-droplet-1:/tmp/mount# ll /tmp/mount/
total 6
drwx------  1 root root    0 Jan  1  1970 ./
drwxrwxrwt 12 root root 4096 Jan 22 18:54 ../
-rw-r--r--  1 root root    0 Jan 22 19:00 file_from_s3fs-droplet-0
-rw-r-----  1 root root   52 Jan 22 18:48 index.html
```

It's there! We successfully shared files between our 2 droplets. Now let's go look at our spaces bucket in the DigitalOcean console:

[Screenshot: the bucket contents in the DigitalOcean console]

WOOT WOOT! Since we're using the spaces bucket, we can access these files from anywhere and in any application! NFS is looking pretty gross at this point. Yay for cloud object storage and thanks to DigitalOcean for providing us such a cool service!

Fin.
