Getting started with Serverless Framework and Python (Part 2)

[Image: Radio tower at Stax Museum of American Soul Music]

This is a continuation of my previous post, which offered some tips for setting up Serverless Framework and concluded with generating a template service in Python. By this point you should have a file, serverless.yml, that will allow you to define your service. Again, the documentation is quite good. I suggest reading from Services down to Workflow. This gives a good overview and should enable you to start hacking away at your YAML file and adding your functions, but I’ll call out a few areas where I had some trouble.

Unit testing

Lambda functions are hard to test outside the context of AWS, but any testing within AWS is going to cost you something (even if it is pennies). The Serverless folks suggest that you “write your business logic so that it is separate from your FaaS provider (e.g., AWS Lambda), to keep it provider-independent, reusable and more easily testable.” Ok! This separation, if we can create it, would allow us to write typical unit tests for each discrete function.

All of my previous Lambdas contained the handler function as well as any other functions required to process the incoming event. I was not even aware that it was possible to import local functions into my handler, but you can! And it works great!

Here’s my handler:

from getDhash import image_to_dhash

def dhash(event, context):
    image = event['image']
    dhash = image_to_dhash(image)
    return dhash

The handler accepts an image file as a string, passes it to the imported image_to_dhash function, and returns the resulting dhash.
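Since the handler is just a Python function taking (event, context), you can also poke at it locally with a plain dict and no real context object. A minimal sketch, with the hashing step stubbed out so it runs anywhere (the stub names and sample hash are mine, for illustration only):

```python
def image_to_dhash_stub(image):
    # Stand-in for the real image_to_dhash, so this sketch has no dependencies.
    return 'db5b513373f26f6f'

def dhash(event, context):
    # Same shape as the real handler: pull the image out of the event dict.
    return image_to_dhash_stub(event['image'])

# Invoke it the way Lambda would, minus a real context object.
print(dhash({'image': b'raw image bytes'}, None))
```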

And here is the image_to_dhash function, which I’ve stored separately in getDhash.py:

from PIL import Image
import imagehash
from io import BytesIO

def image_to_dhash(image):
    # Wrap the raw bytes in a file-like object so Pillow can open them,
    # then compute the difference hash and return it as a hex string.
    return str(imagehash.dhash(Image.open(BytesIO(image))))
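For intuition about what imagehash.dhash is computing: a difference hash resizes the image to a small grayscale grid and records whether each pixel is brighter than its right-hand neighbour. A rough pure-Python sketch of that final comparison step (my own illustration, not the imagehash implementation, which also handles the resizing):

```python
def dhash_bits(pixels, width=9, height=8):
    # pixels: row-major grayscale values for an already-resized width x height image.
    # Compare each pixel to its right-hand neighbour: (width - 1) bits per row.
    bits = []
    for row in range(height):
        for col in range(width - 1):
            left = pixels[row * width + col]
            right = pixels[row * width + col + 1]
            bits.append(1 if left > right else 0)
    return bits  # 64 bits for the default 9x8 grid

def bits_to_hex(bits):
    # Pack the bit list into the kind of hex string imagehash returns.
    return '%0*x' % (len(bits) // 4, int(''.join(map(str, bits)), 2))
```

Images that look alike produce bit strings that mostly agree, which is what makes these hashes useful for near-duplicate detection.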

Now, I can simply write my tests against image_to_dhash and ignore the handler entirely. For my first test I have a local image (test/image.jpg) and a Python script containing my unit tests:

import unittest
from getDhash import image_to_dhash

class TestLambdaFunctions(unittest.TestCase):
    # Read the test image once, as bytes, for use in the tests below.
    with open('test/image.jpg', 'rb') as f:
        image = f.read()

    def testGetDhash(self):
        self.assertEqual(image_to_dhash(self.image), 'db5b513373f26f6f')

if __name__ == '__main__':
    unittest.main()

Running the tests should return some results:

(myenv) dfox@dfox-VirtualBox:~/myservice$ python 
Ran 1 test in 0.026s


Environment variables

AWS Lambdas support the use of environment variables. These variables can also be encrypted, in case you need to store some sensitive information along with your code. In other cases, you may want to use variables to supply slightly different information to the same Lambda function, perhaps corresponding to a development or production environment. Serverless makes it easy to supply these environment variables at deployment time, and its feature-rich variable system gives us a few options for doing so.

Referencing local environment variables:

    handler: handler.dhash
    environment:
      MYENVVAR: ${env:MYENVVAR}

Or, referencing a different .yml file:

    handler: handler.dhash
    environment:
      MYENVVAR: ${file(./serverless.env.yml):${opt:stage}.MYENVVAR}

The above also demonstrates how to reference CLI options, in this case the stage we provided with our deploy command:

serverless deploy --stage dev

And for completeness’ sake, the serverless.env.yml file, keyed by stage:

dev:
    MYENVVAR: "my environment variable is the best"
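However it gets there, the variable shows up in the function as an ordinary environment variable, read with os.environ. A small sketch (here I set the variable by hand to stand in for what Lambda does at deploy time):

```python
import os

# Stand-in for what the serverless.yml environment block does at deploy time:
os.environ['MYENVVAR'] = 'my environment variable is the best'

def get_my_var():
    # In the real handler you would just do this lookup at the top.
    return os.environ.get('MYENVVAR', 'unset')

print(get_my_var())
```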

Python dependencies

In the past, I found that dealing with Python dependencies and Lambda could be a real pain. Inevitably, my deployment package would be incorrectly compiled. Or, I’d build the package fine, but the unzipped contents would exceed the size limitations imposed by Lambda. Using Serverless along with a plugin, Serverless Python Requirements, makes life (specifically your Lambda-creating life) a lot easier. Here’s how it works.

Get your requirements ready:

pip freeze > requirements.txt

In my case, this produced a lengthy list of packages, including Pillow, numpy, and scipy.


Call the plugin in your serverless.yml file:

plugins:
  - serverless-python-requirements

And that’s it. 🙂

Now, if you have requirements like mine, you’re going to hit the size limitation (note the inclusion of Pillow, numpy, and scipy). So, take advantage of the plugin’s built-in zip functionality by adding the following to your serverless.yml file:

custom:
  pythonRequirements:
    zip: true

This means your dependencies will remain zipped. It also means you need to unzip them when your handler is invoked.

When you run the deploy service command, the Python requirements plugin will add a new script to your directory called unzip_requirements.py. This script will extract the required dependencies when they are needed by your Lambda functions. You will have to import this module before all of your other imports. For example:

import unzip_requirements
from PIL import Image

There does seem to be a drawback here, however. Until you run the deploy command, unzip_requirements.py will not be added to your directory, and therefore all of your local tests will fail with an ImportError:

ImportError: No module named unzip_requirements

Of course, I may be doing something wrong here.
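One workaround I can think of (my own idea, not something from the plugin’s docs) is to make the import tolerant of running outside the deployed package:

```python
try:
    import unzip_requirements  # present only in the deployed Lambda package
except ImportError:
    # Running locally (e.g. under unit tests), the zipped requirements
    # don't exist and nothing needs unzipping, so it's safe to carry on.
    pass
```

Locally the except branch fires and the tests proceed; in Lambda the module is found and the dependencies are unzipped as before.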

Open questions

  • There are actually two Python requirements plugins for Serverless. Am I using the best one?
  • As I add functions to my service, do I reuse the existing handler.py? Or do I create new handler scripts for each function?

Getting started with Serverless Framework and Python (Part 1)

[Image: Water tower of the former Edison laboratories]

For a while now I’ve been working with various AWS solutions (Lambda, Data Pipelines, CloudWatch) through the console, and sometimes using homegrown scripts that take advantage of the CLI. This approach has many limitations, but I’d actually recommend it if you’re just getting started with AWS as I found it to be a great way to learn.

But if you’re ready to truly embrace the buzzy concept of serverless architecture and let your functions fly free in the rarefied air of ‘the cloud,’ well then you’ll want to make use of a framework that is designed to make development and deployment a whole heck of a lot easier. Here’s a short list of serverless frameworks:
  • Serverless
  • Chalice
  • Zappa
  • Apex

There are many more, I’m sure, but these seem to be the popular choices. And each one has something to recommend it. Chalice is the official AWS offering. Serverless is widely used. Zappa has some neat dependency packaging solutions. Apex is clearly at the apex of serverless framework technology.

I selected Serverless for a few reasons. It does seem to be used quite a bit, so there is a lot of discussion on Stackoverflow and quite a few code examples on Github. As I need all the help I can get, these are true benefits. Also, there are numerous plugins available for serverless, which seems to indicate there is an active developer community. And I knew right off the bat that I would take advantage of at least two of these plugins:

  • Serverless Python Requirements
    • Coping with Python dependencies when deploying a Lambda is one of the more challenging aspects for beginners (like me). I appreciate that someone figured out how to do it well and made that method available to me with a few extra lines in the config.
  • Serverless Step Functions
    • I’m looking forward to making use of this relatively new service, and none of the other frameworks had anything built in for Step Functions yet.

The installation guide for Serverless is pretty good, actually, but I’ll call out a few things that might need some extra attention.

Once you’ve installed Serverless:

sudo npm install -g serverless

Your next concern will be authentication. Serverless provides some pretty good documentation on the subject, but as they describe a few different scenarios, here’s my recommendation.

Follow their instructions for generating your client key/secret, then authenticate using the CLI:

aws configure --profile serverless-user

This will walk you through entering your key, your secret, and your region. The reason I suggest authenticating using the CLI is that the CLI is darned useful. For example, you may want to ‘get-function’ just to see if serverless is doing what you think it is doing.

Also note that I have provided a profile to the command. I find profiles useful; you may not. But if you do like to use profiles, Serverless will let you take full advantage of them. For example, you can deploy with a particular profile:

serverless deploy --aws-profile serverless-user

Or check out this nice idea for per-stage profiles.

I found the idea of stages in AWS a bit confusing at first. This blog post does a good job of explaining the concept and how to implement it.

Installing the plugins was dead simple:

npm install --save serverless-python-requirements
npm install --save serverless-step-functions

And setting up a new service environment ain’t much harder:

serverless create --template aws-python --path myAmazingService
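The template drops a few starter files into the new directory. Assuming a recent version of the aws-python template, you should see something like:

```
myAmazingService/
├── handler.py
└── serverless.yml
```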

Now we are ready to dig in and start writing our functions. In my next post I’ll write a bit about Python dependencies, unit testing, and anything else that occurs to me in the meantime.