# Static Assets in Production

In this section we will discuss how to deploy your Django static assets in production. There are two main ways to do this; you can either:

  - Use Nginx (or similar) and serve the assets from a static folder on your virtual server
  - Use a storage bucket designed to serve static assets in production, with a built-in global CDN

If you followed the instructions from the previous section, you will have deployed your static assets with Nginx, served directly from your VPS. A more robust method is to use a storage bucket, such as Amazon S3 or Spaces on DigitalOcean. Since we love DigitalOcean, and the pricing is better, we will cover that method in this tutorial.

# Get Started with Spaces

By now you should have a DigitalOcean account. If you do not have one yet, you can get one here: Digital Ocean Account.

Go to your dashboard and create a new Space.

Continue by selecting a datacenter, switching the CDN on, and entering a name for your Space.

Continue to create your new Space.

TIP

The name you select will be used later on as the "bucket name" - in other words, the bucket name is the Space name.

The next step is to create the access keys you will need later on. On the left navigation menu, click on "API". Scroll down to Spaces access keys and create a key pair. You will get: (1) an access key ID and (2) a secret access key. Save them and put them in a safe place.
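Rather than pasting the keys straight into settings.py later on, a safer pattern is to export them as environment variables and read them at startup. A minimal sketch - the DO_SPACES_KEY and DO_SPACES_SECRET names are our own convention, not something DigitalOcean prescribes:

```python
# settings.py - a minimal sketch; DO_SPACES_KEY / DO_SPACES_SECRET are
# hypothetical environment variable names, export them on your server first
import os

AWS_ACCESS_KEY_ID = os.environ['DO_SPACES_KEY']
AWS_SECRET_ACCESS_KEY = os.environ['DO_SPACES_SECRET']
```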

# Back in your Django App

There are a few changes you need to make:

  1. Install the packages you need for storages
  2. Edit your settings.py file with the storage settings
  3. Create a storage support file
  4. Run collectstatic to move your static files to your bucket
  5. Manually move media files (if you already had some)
  6. Restart your service with the changes

# Install Django Packages

```bash
pip install boto3
pip install django-storages
```

# Edit Settings file

First, add the storages app to INSTALLED_APPS:

```python
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'some-app',
    'another-app',
    'storages',  # NEW LINE
]
```

Then replace this:

```python
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_URL = '/uploads/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'uploads')
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')]
```

With this:

```python
USE_SPACES = True
if USE_SPACES:
    # settings
    AWS_ACCESS_KEY_ID = 'enter-access-key'
    AWS_SECRET_ACCESS_KEY = 'enter-access-key-secret'
    AWS_STORAGE_BUCKET_NAME = 'space-name'
    AWS_DEFAULT_ACL = 'public-read'
    # enter the datacenter URL here, we chose Amsterdam
    AWS_S3_ENDPOINT_URL = 'https://ams3.digitaloceanspaces.com'
    AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=86400'}
    # static settings
    AWS_LOCATION = 'static'
    # AWS_S3_ENDPOINT_URL already contains the scheme, so do not prepend https:// again
    STATIC_URL = f'{AWS_S3_ENDPOINT_URL}/{AWS_LOCATION}/'
    STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
    # public media settings
    PUBLIC_MEDIA_LOCATION = 'public'
    MEDIA_URL = f'{AWS_S3_ENDPOINT_URL}/{PUBLIC_MEDIA_LOCATION}/'
    DEFAULT_FILE_STORAGE = 'yourapp.storage_backends.PublicMediaStorage'
    # private media settings
    PRIVATE_MEDIA_LOCATION = 'uploads'
    PRIVATE_FILE_STORAGE = 'yourapp.storage_backends.PrivateMediaStorage'
else:
    STATIC_URL = '/static/'
    STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
    MEDIA_URL = '/uploads/'
    MEDIA_ROOT = os.path.join(BASE_DIR, 'uploads')
# ---------------- STATIC FILES SETTINGS END HERE ----------------
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')]
```

In the code above we created a USE_SPACES variable that we can turn on or off. When it is on, the Spaces settings are used. We can store:

  1. Static files
  2. Media files (in a public directory)
  3. Media files (in a private directory)
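Hard-coding the flag works, but you may prefer to flip it per environment instead of editing settings.py. A minimal sketch, assuming you export USE_SPACES=True on the server (the environment variable name is our own convention):

```python
# settings.py - toggle Spaces storage via a hypothetical USE_SPACES
# environment variable; any value other than 'True' keeps local storage
import os

USE_SPACES = os.environ.get('USE_SPACES', 'False') == 'True'
```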

The public/private options give you the flexibility to specify, per Django model, whether the stored files should be public or private. If you are uploading blog images, you might want them to be public so they can be crawled by bots and help with your SEO. If you are uploading company documents, you might want to make them private so they can only be accessed through the application, where you have authentication built in.
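To make the private case concrete, here is a minimal sketch of a view that only hands a private file to logged-in users. It assumes the UploadPrivate model defined in the models section below; because the storage uses a private ACL with querystring auth (the django-storages default), file.url resolves to a signed, expiring URL rather than a public link:

```python
# views.py - a minimal sketch, assuming the UploadPrivate model
# shown further down in this section
from django.contrib.auth.decorators import login_required
from django.shortcuts import get_object_or_404, redirect

from yourapp.models import UploadPrivate


@login_required
def private_file(request, pk):
    upload = get_object_or_404(UploadPrivate, pk=pk)
    # For a private ACL, django-storages generates a signed, expiring URL,
    # so only users who pass through this authenticated view get access.
    return redirect(upload.file.url)
```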

# Create Storage Helper

Create a new file in the same directory as settings.py, call it storage_backends.py, and paste this inside:

```python
from storages.backends.s3boto3 import S3Boto3Storage


class StaticStorage(S3Boto3Storage):
    # optional: point STATICFILES_STORAGE at this class instead of
    # using storages.backends.s3boto3.S3Boto3Storage directly
    location = 'static'
    default_acl = 'public-read'


class PublicMediaStorage(S3Boto3Storage):
    location = 'public'
    default_acl = 'public-read'
    file_overwrite = False  # keep both files if the same name is uploaded twice


class PrivateMediaStorage(S3Boto3Storage):
    location = 'uploads'
    default_acl = 'private'
    file_overwrite = False
    custom_domain = False  # serve via signed URLs, not the CDN domain
```

# Update your models

In the model files you would add:

```python
from django.db import models

from yourapp.storage_backends import PublicMediaStorage, PrivateMediaStorage


class Upload(models.Model):
    uploaded_at = models.DateTimeField(auto_now_add=True)
    file = models.FileField(storage=PublicMediaStorage())


class UploadPrivate(models.Model):
    uploaded_at = models.DateTimeField(auto_now_add=True)
    file = models.FileField(storage=PrivateMediaStorage())
```

TIP

If you do not specify the storage argument inside the FileField, the file will be stored using the DEFAULT_FILE_STORAGE backend.
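As a quick sanity check, you can exercise both models from the Django shell. A minimal sketch, assuming the Upload and UploadPrivate models above:

```python
# python manage.py shell - a minimal sketch using the models above
from django.core.files.base import ContentFile

from yourapp.models import Upload, UploadPrivate

public = Upload.objects.create(file=ContentFile(b'hello', name='hello.txt'))
print(public.file.url)   # plain public URL under the 'public' location

private = UploadPrivate.objects.create(file=ContentFile(b'secret', name='secret.txt'))
print(private.file.url)  # signed, expiring URL under the 'uploads' location
```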

# Collectstatic

Run the collectstatic command from inside your project directory:

```bash
python manage.py collectstatic
```

# Move Media Files

If you already have media files uploaded, you might want to move them to your bucket manually. Any new files you upload from this moment forward will be added to the storage bucket automatically. You do not have to re-create the folders in the bucket - the settings will sort that out.

How to Upload Files to Spaces

# Upload Programmatically

Using Boto3:


```python
import boto3

# configure session and client
session = boto3.session.Session()
client = session.client(
    's3',
    region_name='ams3',
    endpoint_url='https://ams3.digitaloceanspaces.com',
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
)

# create a new bucket
client.create_bucket(Bucket='your-bucket-name')

# upload a file
with open('test.txt', 'rb') as file_contents:
    client.put_object(
        Bucket='your-bucket-name',
        Key='test.txt',
        Body=file_contents,
    )

# download a file
client.download_file(
    Bucket='your-bucket-name',
    Key='test.txt',
    Filename='/tmp/test.txt',
)
```
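For private objects you can also generate a presigned URL that expires after a set time - the same mechanism django-storages relies on for PrivateMediaStorage. A short sketch reusing the client configured above:

```python
# generate a time-limited download link for a private object
url = client.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'your-bucket-name', 'Key': 'test.txt'},
    ExpiresIn=3600,  # the link stops working after one hour
)
print(url)
```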

Read more about Spaces SDKs here

# Use the s3cmd package

You can use s3cmd on the command line to move entire folders to Spaces in just one command, which can be really useful if you are migrating an existing application with multiple folders.

Start by installing it on your machine:


```bash
sudo apt-get update
sudo apt-get install s3cmd
```

Configure s3cmd:

```bash
s3cmd --configure
```

# Enter access keys

```text
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key []: EXAMPLE7UQOTHDTF3GK4
Secret Key []: exampleb8e1ec97b97bff326955375c5
Default Region [US]:
```

# Enter DigitalOcean endpoint

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: ams3.digitaloceanspaces.com
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars c
an be used if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket []: %(bucket)s.ams3.digitaloceanspaces.com
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: Yes
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you cant connect to S3 directly
HTTP Proxy server name:

# Confirm and save settings

```text
New settings:
 Access Key: EXAMPLE7UQOTHDTF3GK4
 Secret Key: exampleb8e1ec97b97bff326955375c5
 Default Region: US
 S3 Endpoint: ams3.digitaloceanspaces.com
 DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.ams3.digitaloceanspaces.com
 Encryption password: secure_password
 Path to GPG program: /usr/bin/gpg
 Use HTTPS protocol: True
 HTTP Proxy server name:
 HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] Y
Configuration saved to '/home/username/.s3cfg'
```

# Using s3cmd

List the contents of one or more Spaces:


```bash
s3cmd ls s3://spacename s3://secondspace
```

Upload a file to a Space:


```bash
s3cmd put file.txt s3://spacename/path/
```

Upload all the files in your current directory:


```bash
s3cmd put * s3://spacename/path/ --recursive
```

Check out more s3cmd commands here
