Last Thursday saw the seventh AWS User Group Liverpool meetup in the wonderful Tapestry building in Liverpool. This month's topic was AWS Lambdas and lessons learnt from trying to use them for everything. There were twelve of us in total, which was better than the five last month, and thanks to the numbers and the informal setting some great conversations and topic discussions happened. It was great to see everyone chipping in and sharing experiences; that brings far more value to these meetups than presenters simply talking at the room.
We started the evening with drinks and pizza and lots of conversation amongst the attendees. The Domino's pizza went down a treat, as did the couple of cans of IPA.
As always, the evening's talks started with "What's new in AWS this month", presented this month by Simon. It's a really good idea for a regular slot, because Amazon are bringing out new components and services all the time and it's hard to keep up.
A number of things were mentioned:
- CloudWatch Anomaly Detection
- AWS IQ - the first thing I thought of here was "security?", considering you are asking strangers to help with your AWS configuration problems and you need to give them access to your setup
- Savings Plans - pay for your EC2 and Fargate usage on a 1- or 3-year commitment basis
- AWS for WordPress is now available
- Amazon API Gateway supports wildcard domain names
Next it was time to grab a drink and get ready for the main talk of the night.
Things I wish I'd known about Lambda... and you should too! - Des Webster
Des works at MAG-O (Manchester Airport's technical division), which has a focus on "serverless as much as possible". Des was tasked with proving whether you could run a front-end web application on serverless, and the talk was his adventure through this task.
The problem with an end-user-facing web application is that response time is extremely important, and serverless containers take a while to spin up from a cold start (anywhere from 0.5 to 1 second); once they are running, they respond to subsequent requests pretty quickly.
When a request comes in for a Lambda which is not currently running, the 'function lifecycle' kicks in: AWS downloads your code and the appropriate container to the sandbox on the guest OS, starts the runtime and then runs your code. There are ways to get the Lambda to fire up from a partial cold start or a warm start, but they are up to you rather than something controlled by AWS itself.
Des said that they wrote their functions in C#, but after this experiment and some other tests he found that languages requiring a runtime environment or framework (such as C# or Java) have a slower cold start than languages such as JavaScript, Go and Python. The startup time did reduce for C# if you increased the instance size (more memory), but this also increases your costs.
Apart from choosing the 'correct' programming language, other optimisations available are keeping Lambdas warm by regularly pinging them, keeping the functions small so that they are fast to load and start, and using Lambda Layers. Des and his team used AWS X-Ray to trace and debug their investigation, using it to analyse the full stack and view the timings and bottlenecks.
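The keep-warm trick is usually just a scheduled rule (an EventBridge/CloudWatch Events timer, say) invoking the function with a marker payload, plus a guard at the top of the handler. A minimal Python sketch, assuming a `warmup` marker field of my own invention (not something Des showed):

```python
import json

def handler(event, context):
    # Scheduled keep-warm pings carry a marker field (assumed name:
    # "warmup"); bail out early so they keep the container alive
    # without doing any real work or incurring meaningful cost.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}

    # ... real request handling goes here ...
    return {"statusCode": 200, "body": json.dumps({"message": "hello"})}
```

The ping rate is a trade-off: frequent enough to beat the container's idle timeout, infrequent enough not to waste invocations.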
He then covered the Freeze/Thaw process: once a Lambda function has executed, AWS keeps the container warm for a while to reduce the number of cold starts. Not only does it keep the container warm, it 'freezes' the execution environment, so any declarations in the code are still there (no further initialisation required) and any stored data is still available, which optimises the function for reuse.
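You can observe this from inside a function: anything declared at module level survives between invocations on a warm container, which is why expensive setup (SDK clients, database connections) usually lives at module scope. A small illustration; nothing here is Lambda-specific, so it also runs locally:

```python
import time

# This runs once per container, during the cold-start init phase.
INIT_TIME = time.time()
_invocations = 0

def handler(event, context):
    # This runs on every invocation; on a warm (thawed) container the
    # module-level state above is reused rather than re-initialised.
    global _invocations
    _invocations += 1
    return {
        "cold_start": _invocations == 1,
        "container_age_seconds": round(time.time() - INIT_TIME, 3),
    }
```

Calling the handler twice in the same process shows the second call reporting a warm container.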
Pricing was addressed next, based on their estimate of 25 million Lambda invocations a month. Total costs were around £1k, but interestingly more than half of that was the CloudWatch metrics & logs and the X-Ray tracing features. We discussed whether it would be prudent to turn these features off once the system had 'bedded in', but then you lose any ability to investigate failures that happen, because in effect you have no logs.
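For context, the raw Lambda compute at that volume is surprisingly cheap, which is exactly why the observability costs stand out. A back-of-envelope calculator using the list prices at the time of writing (roughly $0.20 per million requests and $0.0000166667 per GB-second); the 200 ms / 512 MB figures below are my own assumptions, not numbers from the talk:

```python
def lambda_compute_cost(invocations, avg_duration_ms, memory_mb,
                        price_per_million_requests=0.20,
                        price_per_gb_second=0.0000166667):
    """Rough monthly Lambda compute cost in USD, ignoring the free tier."""
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * price_per_gb_second

# 25 million invocations at 200 ms on a 512 MB function:
monthly = lambda_compute_cost(25_000_000, avg_duration_ms=200, memory_mb=512)
# roughly $47 a month of compute - a small slice of the ~£1k total
```

Under those assumptions the execution itself is a rounding error next to the CloudWatch and X-Ray spend.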
If you have a VPC (Virtual Private Cloud) set up then by default AWS Lambdas can't connect to resources within it; however, you can create an elastic network interface for your Lambda in your VPC and it can then access your resources. In previous versions of Lambda this used to be really slow, but it has since been improved upon (see the very last section of the article Cold Starts in AWS Lambda).
Des finished with an explanation of the limits on the number of Lambdas that Amazon will spin up on demand for you, and it was quite surprising really. The initial burst of concurrency varies between regions from 500 to 3000, and after that initial burst it can scale by an additional 500 instances per minute. However, each account also has a regional concurrency limit, 1000 by default, so if you are expecting a massive initial hit you'd better start talking to Amazon to get your quota increased!
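Those scaling rules are easy to model: an initial regional burst, then a fixed ramp per minute, all capped by the account's concurrency limit. A sketch using the numbers quoted in the talk (the real behaviour is more nuanced than this):

```python
def available_concurrency(minutes_elapsed, initial_burst=3000,
                          ramp_per_minute=500, account_limit=1000):
    """Rough model of concurrent Lambda instances available: a regional
    burst up front, +ramp_per_minute afterwards, never exceeding the
    account's concurrency limit (1000 unless you get it raised)."""
    uncapped = initial_burst + int(minutes_elapsed) * ramp_per_minute
    return min(uncapped, account_limit)

# With the default 1000 limit, even a region with a 3000-instance burst
# is capped at 1000 until you ask AWS to raise the quota.
```

This makes the closing point concrete: the per-account limit, not the regional burst, is what bites first.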
Multiple conversations then 'spun up' from this presentation, which was great to see, and I'd like to personally thank Des Webster for an excellent and insightful talk. It was a brilliant evening: it was great to have a larger audience this time, and the discussions and networking that happened were superb.
Thank you to Paul, Paul and Simon for organising and Amazon for sponsoring.