SAM Template for a "complex" system
Serverless Lambda with container images means having distributed containerized applications on the AWS cloud. SAM gives you the power to automate deployment without headaches, because it keeps track of what you deployed and of what exactly is expected: it works by creating change sets that alter your infrastructure with as few modifications as possible.
THE CASE
Suppose we need to create 2 different Lambda services (for testing, of course):
- The first is triggered by an S3 event and writes something to a queue
- The second is triggered by the queue used by the first service
So the resources we need to create are:
- The bucket which triggers the first Lambda
- The queue which triggers the second Lambda and which is filled by the first Lambda
- The role needed by the first Lambda to:
- work with S3
- work with Logs
- work with SQS
- The role needed by the second Lambda to:
- work with Logs
- work with SQS
- The first Lambda with the Bucket Event mapped
- The second Lambda with the SQS Event mapped
THE TEMPLATE
The template is not so short, so we will explore it section by section.
FunctionS3toSqsRole
It is an IAM Role with the policies for full access to S3, full access to SQS, and the basic logging permissions the function needs. It will be used by the first Lambda.
Please do not write policies by hand, there are plenty of tools for that. Personally, I like to keep a test IAM role where I attach the managed policies I need and then copy the inline value from the editor (converting it to YAML format). In a real setup you could also reference the managed policies directly via ManagedPolicyArns instead of inlining them.
FunctionS3toSqsRole:
Type: 'AWS::IAM::Role'
Properties:
RoleName: FunctionS3toSqsRole
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
Action:
- 'sts:AssumeRole'
Policies:
- PolicyName: AmazonS3FullAccess
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- 's3:*'
Resource: '*'
- PolicyName: AmazonSQSFullAccess
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- 'sqs:*'
Resource: '*'
- PolicyName: AWSLambdaBasicExecutionRole
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- 'logs:CreateLogGroup'
- 'logs:CreateLogStream'
- 'logs:PutLogEvents'
Resource: '*'
FunctionSqsRole
It is an IAM Role with the policies for full access to SQS and the basic logging permissions. It will be used by the second Lambda.
FunctionSqsRole:
Type: 'AWS::IAM::Role'
Properties:
RoleName: FunctionSqsRole
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
Action:
- 'sts:AssumeRole'
Policies:
- PolicyName: AmazonSQSFullAccess
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- 'sqs:*'
Resource: '*'
- PolicyName: AWSLambdaBasicExecutionRole
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- 'logs:CreateLogGroup'
- 'logs:CreateLogStream'
- 'logs:PutLogEvents'
Resource: '*'
S3 Bucket and SQS
These are the S3 bucket we will create (remember that the bucket name must be globally unique, so it is better to let CloudFormation generate it automatically, or to check first that it does not already exist) and the queue we will create. They are very simple: just a Retain deletion policy on the bucket and no particular configuration on SQS, since they are only for testing.
The VisibilityTimeout is important: it must be greater than or equal to the timeout of the function it triggers, or you will incur a stack error. Here it is 60 seconds, matching the Timeout: 60 of the second function.
S3Bucket:
Type: 'AWS::S3::Bucket'
DeletionPolicy: Retain
Properties:
BucketName: createobjectforlambda
MyQueue:
Type: AWS::SQS::Queue
Properties:
QueueName: "SampleQueue"
VisibilityTimeout: 60
S3LambdaFunction
It is the function triggered by S3, which then writes a message to SQS. It is packaged as an Image, it has the role we defined before (set on the SAM function via the Role property), it has an event of type S3 referencing the bucket created before, and it has all the Docker and code information it needs.
S3LambdaFunction:
Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
Properties:
FunctionName: S3LambdaFunction
MemorySize: 512
PackageType: Image
Role: !GetAtt FunctionS3toSqsRole.Arn
Architectures:
- x86_64
Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
Variables:
JAVA_TOOL_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1 # More info about tiered compilation https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-function-performance-for-java/
Events:
S3Event:
Type: S3 # More info about the S3 event source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#s3
Properties:
Bucket:
Ref: S3Bucket # This must be the name of an S3 bucket declared in the same template file
Events: s3:ObjectCreated:*
Metadata:
DockerTag: latest
DockerContext: ./S3LambdaFunction
Dockerfile: Dockerfile
SQSLambdaFunction
It is the function triggered by SQS. Like the other one it is packaged as an Image, it has the role we defined before, its event is the SQS queue we created before, plus all the Docker and code information it needs.
SQSLambdaFunction:
Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
Properties:
FunctionName: SQSLambdaFunction
MemorySize: 128
Timeout: 60
PackageType: Image
Role: !GetAtt FunctionSqsRole.Arn
Architectures:
- x86_64
Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
Variables:
JAVA_TOOL_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1 # More info about tiered compilation https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-function-performance-for-java/
Events:
SQSEvent:
Type: SQS
Properties:
Queue: !GetAtt MyQueue.Arn
BatchSize: 10
Enabled: true
Metadata:
DockerTag: latest
DockerContext: ./SQSLambdaFunction
Dockerfile: Dockerfile
Output
As output we just print some of the ARNs of the created resources. We don't actually need them to operate, because all the resources have fixed names, so we already know the values.
Outputs:
# Find out more about the implicit resources you can reference within SAM:
# https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst
S3LambdaFunction:
Description: "S3 Lambda Function ARN"
Value: !GetAtt S3LambdaFunction.Arn
S3LambdaFunctionIamRole:
Description: "Explicit IAM Role created for S3 function"
Value: !GetAtt FunctionS3toSqsRole.Arn
S3LambdaFunctionBucket:
Description: "S3 Bucket Arn"
Value: !GetAtt S3Bucket.Arn
QueueURL:
Description: "URL of new Amazon SQS Queue"
Value:
Ref: "MyQueue"
QueueARN:
Description: "ARN of new AmazonSQS Queue"
Value:
Fn::GetAtt:
- "MyQueue"
- "Arn"
QueueName:
Description: "Name of new Amazon SQS Queue"
Value:
Fn::GetAtt:
- "MyQueue"
- "QueueName"
THE CODE
The code is very, very simple and it is just for testing: it shows the effect of the S3 event, of the SQS message insert, and of the SQS message read (using a shared model).
S3LambdaFunction
[Code]
package helloworld;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;
import com.fasterxml.jackson.databind.ObjectMapper;

public class App {

    static final String QUEUE_NAME = "SampleQueue";

    public void handleS3Event(final S3Event input, final Context context) {
        // Read the bucket name from the first record and log it
        String bucketName = input.getRecords().get(0).getS3().getBucket().getName();
        context.getLogger().log("The bucket name is " + bucketName);
        try {
            String eventType = input.getRecords().get(0).getEventName();
            String fileName = input.getRecords().get(0).getS3().getObject().getKey();
            // Serialize the event information and send it to the queue
            String message = new ObjectMapper().writeValueAsString(FileEvent.build(eventType, fileName));
            sendMessageToQueue(message, context.getLogger());
        } catch (Throwable e) {
            // Just for testing: on failure, send the error itself to the queue
            sendMessageToQueue("Unable to create message due to " + e.getMessage(), context.getLogger());
        }
    }

    public void sendMessageToQueue(String message, LambdaLogger logger) {
        // Note: in real code the client should be created once and reused across invocations
        final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = sqs.getQueueUrl(QUEUE_NAME).getQueueUrl();
        SendMessageRequest sendMessageRequest = new SendMessageRequest()
                .withQueueUrl(queueUrl)
                .withMessageBody(message)
                .withDelaySeconds(5);
        sqs.sendMessage(sendMessageRequest);
        logger.log("Message sent");
    }
}
The handler function is handleS3Event, of course: it reads the name of the bucket and prints it to the logger. Then it creates a FileEvent object from the S3Event information and writes it to the queue as JSON (something like {"eventType":"ObjectCreated:Put","fileName":"test.txt"}). In case of error it sends an error message to the queue (which is of course not correct, but again it is just to see what happens).
The pom file just contains the AWS Lambda starter dependencies, Jackson for JSON serialization, and the SDK SQS client this function needs to send messages.
[pom.xml]
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-core</artifactId>
<version>1.2.1</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-events</artifactId>
<version>3.11.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.15.2</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-sqs</artifactId>
<version>1.12.530</version>
</dependency>
The Dockerfile is the following (note that the CMD is the full handler reference, in the form package.Class::method):
FROM public.ecr.aws/lambda/java:11
COPY target/classes /var/task/
COPY target/dependency /var/task/lib
# Command can be overwritten by providing a different command in the template directly.
CMD ["helloworld.App::handleS3Event"]
The package and class names are of course the worst ever :)
SQSLambdaFunction
[Code]
package helloworld;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class App {

    public void handleSqsEvent(final SQSEvent input, final Context context) {
        LambdaLogger lambdaLogger = context.getLogger();
        // Read the body of the first record and log it
        String bodyValue = input.getRecords().get(0).getBody();
        lambdaLogger.log("Body is " + bodyValue);
        try {
            // Deserialize the body back into the shared FileEvent model
            FileEvent fileEvent = new ObjectMapper().readValue(bodyValue, FileEvent.class);
            lambdaLogger.log(fileEvent.toString());
        } catch (JsonProcessingException e) {
            lambdaLogger.log("Unable to parse due to " + e.getMessage());
        }
    }
}
The handler is handleSqsEvent and it just writes out the message body and then converts it into a FileEvent object. If something goes wrong, it prints the error to the logger.
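One thing to keep in mind: since the event is configured with BatchSize: 10, a single invocation can receive up to ten records, while the handler above only reads the first one. A minimal sketch of a handler that processes the whole batch (same classes and imports as above):

public void handleSqsEvent(final SQSEvent input, final Context context) {
    LambdaLogger lambdaLogger = context.getLogger();
    ObjectMapper mapper = new ObjectMapper();
    // The event source mapping can deliver up to BatchSize records at once
    for (SQSEvent.SQSMessage sqsMessage : input.getRecords()) {
        String bodyValue = sqsMessage.getBody();
        lambdaLogger.log("Body is " + bodyValue);
        try {
            lambdaLogger.log(mapper.readValue(bodyValue, FileEvent.class).toString());
        } catch (JsonProcessingException e) {
            lambdaLogger.log("Unable to parse due to " + e.getMessage());
        }
    }
}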
The pom file just contains the AWS Lambda starter dependencies and Jackson for JSON serialization.
[pom.xml]
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-core</artifactId>
<version>1.2.1</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-events</artifactId>
<version>3.11.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.15.2</version>
</dependency>
The Dockerfile is of course the same:
FROM public.ecr.aws/lambda/java:11
COPY target/classes /var/task/
COPY target/dependency /var/task/lib
# Command can be overwritten by providing a different command in the template directly.
CMD ["helloworld.App::handleSQSEvent"]
The package and class names are of course the worst ever :)
FileEvent
public class FileEvent {
private String eventType;
private String fileName;
public static FileEvent build(String eventType, String fileName){
FileEvent fileEvent = new FileEvent();
fileEvent.eventType = eventType;
fileEvent.fileName = fileName;
return fileEvent;
}
public String getEventType() {
return eventType;
}
public void setEventType(String eventType) {
this.eventType = eventType;
}
public String getFileName() {
return fileName;
}
public void setFileName(String fileName) {
this.fileName = fileName;
}
    // Override toString so the second Lambda logs something readable
    @Override
    public String toString() {
        return "FileEvent{eventType='" + eventType + "', fileName='" + fileName + "'}";
    }
}
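Jackson relies on the no-args constructor and the setters above to rebuild the object. A quick round trip, as a sanity check of what travels through the queue (the event name here is just an example value):

import com.fasterxml.jackson.databind.ObjectMapper;

public class FileEventRoundTrip {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Serialize as the first Lambda does before sending to SQS
        String json = mapper.writeValueAsString(FileEvent.build("ObjectCreated:Put", "test.txt"));
        System.out.println(json); // {"eventType":"ObjectCreated:Put","fileName":"test.txt"}
        // Deserialize as the second Lambda does when reading the message body
        FileEvent parsed = mapper.readValue(json, FileEvent.class);
        System.out.println(parsed.getFileName()); // test.txt
    }
}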
COMPILE, BUILD AND DEPLOY
The structure of the project is the following; the stack has already been created once (I have the samconfig.toml and the .aws-sam directory).
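A sketch of the layout (the two folder names come from the DockerContext values in the template; template.yaml as the file name is an assumption, being the SAM default):

.
├── S3LambdaFunction/     (pom.xml, src/, Dockerfile)
├── SQSLambdaFunction/    (pom.xml, src/, Dockerfile)
├── template.yaml
├── samconfig.toml
└── .aws-sam/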
To build the two projects I created a batch script that loops over the subfolders:
@echo off
setlocal EnableExtensions EnableDelayedExpansion
FOR /f "tokens=*" %%G in ('dir /b /a:d "."') DO (
if exist %%G\pom.xml (
echo Start install for %%G
call mvn clean install -f %%G\pom.xml
echo Start copy-dependencies for %%G
call mvn dependency:copy-dependencies -DincludeScope=compile -f %%G\pom.xml
)
)
It goes through each folder and, if it finds a Maven project (i.e. a pom.xml is present), it runs clean install and then copy-dependencies (this step is fundamental to build the image, since the Dockerfile copies target/dependency into the container).
Because I need to launch the build and the deploy many times, and because the stack is already created, I wrote another batch script that launches sam build and retrieves the name of the stack from samconfig.toml before launching the deploy:
echo Start Sam Build
call sam build
for /f "tokens=* delims= USEBACKQ" %%b in (`findstr stack_name samconfig.toml`) do (
FOR /f "tokens=3" %%A in ("%%b") DO (
echo Start Sam Deploy %%A
call sam deploy --stack-name %%A
)
)
I know that building a pipeline would be the correct way to do this, but this approach gives me the ability to easily change the code and redeploy it, so for my test scope it is fine.
CONCLUSION
You will be surprised by how powerful SAM is and how easy it is to create a template for an entire system. If you split your system into different domains, I suggest every domain should have its own specific stack template, so that it can easily be recreated in another region or turned into environment-specific templates (dev, staging, prod, etc.).
This is a nice way to start building functions and see how they work together.