The basics of Serverless applications in Amazon Web Services

Good day, dear Habr users!

Today I would like to talk about one of the actively growing technologies in the IT world, a cloud technology, namely serverless application architecture (Serverless). Recently, cloud technologies have been gaining more and more popularity, and for a simple reason: easy availability, relative cheapness, and no need for seed capital, whether in the knowledge required to deploy and maintain infrastructure or in money.


Serverless technology is becoming more and more popular, yet for some reason it gets very little coverage in the IT industry, unlike other cloud technologies such as IaaS, DBaaS, and PaaS.



For this article I used AWS (Amazon Web Services), as it is undoubtedly the largest and most well-thought-out service of its kind (based on Gartner's 2015 analysis).

Gartner's cloud solutions chart

We will need:

  • An AWS account (for testing and minimal development the Free Tier is enough);

  • A development platform (I prefer Linux Fedora, but you can use any distribution that supports Node version 4.3 or higher and NPM);

  • Serverless Framework 1.* beta; I will cover this framework in more detail in a separate chapter (https://github.com/serverless/serverless).

Well, let's start with the basics.

Serverless: what is behind its popularity


Serverless is a "serverless" application architecture. In fact, that is not entirely true: there are still servers. The core of the architecture is microservices, or functions (lambdas), each performing a specific task and running in logical containers hidden from prying eyes. That is, the end user is given only an interface for uploading the function's code (the service) and for connecting event sources to that function.


Taking Amazon as an example, the event sources can be many of Amazon's own services:

  1. S3 storage can generate events for almost any operation, such as adding, deleting, or editing files in a bucket (a tiny handler sketch for such an event follows this list).

  2. RDS and DynamoDB; moreover, DynamoDB allows you to generate events when data is added to or changed in a table.

  3. CloudWatch, a scheduling system in the likeness of cron.

  4. And, most interesting for us, API Gateway: a software emulator of the HTTP protocol that lets you turn requests into single events for the microservice.
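
To make this more concrete, here is a minimal sketch of a NodeJS handler wired to an S3 event. The handler name and logging are just an illustration; only the Records/bucket/key fields follow the standard S3 event shape:

// A minimal illustration of a handler reacting to S3 events.
exports.onUpload = function(event, context, callback) {
    // Each S3 notification carries one or more records describing what happened.
    var records = event.Records || [];
    records.forEach(function(record) {
        console.log('Event:', record.eventName,
                    'bucket:', record.s3.bucket.name,
                    'key:', record.s3.object.key);
    });
    callback(null, 'Processed ' + records.length + ' record(s)');
};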

Schematically, the operation of such a microservice can be represented as follows:


The operating principle of lambda functions

In fact, as soon as you upload the function code to Amazon, it is stored as a package on an internal file server (similar to S3). When the first event arrives, Amazon automatically launches a mini-container with the appropriate interpreter (or virtual machine, in the case of Java) to run the code, passing the formed event body as an argument. As follows from the principles of microservices, each such function cannot have state (it is stateless), since there is no access to the container and its lifetime is not defined. Thanks to this quality, microservices can freely scale horizontally depending on the number of requests and the load. In practice, resource balancing in Amazon works quite well, and functions spawn quickly enough even under an abrupt increase in load.
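
To illustrate statelessness, here is a small hypothetical sketch: the module-level counter below may survive between invocations while Amazon keeps the same container warm, but nothing guarantees when that happens, so no logic should rely on it:

// Module-level state lives only as long as this particular container stays warm.
var warmInvocations = 0;

exports.hello = function(event, context, callback) {
    warmInvocations++; // may silently reset to 1 whenever a fresh container starts
    callback(null, { invocationsInThisContainer: warmInvocations });
};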


On the other hand, another advantage of this stateless model is that payment for the service is, as a rule, based on the execution time of the specific functions. This payment scheme, known in the English literature as Pay-as-you-go, lets you launch startups or other projects without initial capital, because there is no need to buy hosting to host the code. Payment is made in proportion to the use of the service (which makes it easy to calculate the monetization your service needs).
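
As a back-of-the-envelope illustration of this payment model, here is a tiny calculation sketch; the prices below are assumptions for the sake of the example, not official rates, so always check the current AWS Lambda pricing page:

// Rough monthly cost estimate for a pay-as-you-go lambda (illustrative numbers only).
var invocationsPerMonth = 1000000;     // expected monthly requests
var avgDurationSeconds  = 0.2;         // average execution time per request
var memoryGb            = 128 / 1024;  // memory allocated to the function, in GB
var pricePerGbSecond    = 0.00001667;  // assumed compute price
var pricePerMillionReq  = 0.20;        // assumed request price

var computeCost = invocationsPerMonth * avgDurationSeconds * memoryGb * pricePerGbSecond;
var requestCost = (invocationsPerMonth / 1000000) * pricePerMillionReq;

console.log('Estimated monthly cost: $' + (computeCost + requestCost).toFixed(2));
// Around $0.62 for a million 200 ms invocations at 128 MB, before any free tier.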


Thus, the pros of this architecture are:

  1. No hardware part, i.e. no servers;
  2. No direct contact with the backend and no administration of it;
  3. Virtually unlimited horizontal scaling of your project;
  4. Payment only for the CPU time actually used.

The cons include:

  1. Lack of precise control over the containers (you never know where and how they are run, or who has access to them), which can often cause paranoia.

  2. Lack of "integrity" of the application: each function is an independent object, which often leads to a certain fragmentation of the application and difficulty in putting it all together.

  3. The cold start of a container leaves much to be desired (at least in Amazon): the first start of a container with a lambda function can often take 2-3 seconds, which is not always well received by users.

In general, the technology has its own demand segment and consumer market. I find it very suitable for the initial phase of startups, ranging from the simplest blogs to online games and beyond. The emphasis here is on independence from server infrastructure and limitless, automatic scaling of performance.


Serverless Framework


As mentioned above, one of the downsides of Serverless is the fragmentation of applications and the rather tedious management of all the necessary components, such as event sources, code, roles, and security policies. I must say that for a project even slightly more complicated than Hello World, keeping all these components in order is a huge headache, and not infrequently it leads to service failures during the next update.


To avoid this problem, some good people have written a very useful utility with the same name: Serverless. The framework is tied strictly to the AWS infrastructure (and, although the 0.5 branch was tuned entirely for NodeJS, a big plus was the reorientation of the 1.* branch toward all languages supported by AWS). In what follows we will focus on the 1.* branch because, in my opinion, its structure is more logical and flexible to use. Moreover, in version 1 most of the clutter was cleaned out and support for Java and Python was added.


So what is the usefulness of this solution? The answer is very simple: the Serverless Framework concentrates in itself all the necessary infrastructure of the project, namely source control, testing, creation and control of resources, roles, and security policies. Everything lives in one place and can easily be added to git for version control.


Having read the basic instructions for installing and setting up the framework, you have surely managed to install it already, but to keep the article useful for beginners, let me list the necessary steps. Since you have read this far, I hope you already have a console with CentOS open, so let's start our acquaintance by installing NPM/Node (because the serverless package is, for now, written in NodeJS).


Step one

I prefer NVM for managing Node versions:

curl https://raw.githubusercontent.com/creationix/nvm/v0.31.6/install.sh | bash

Step two

Reload the profile, as indicated at the end of the installation:

. ~/.bashrc

Step three

Now install a Node/NPM build (in the example I use 4.4.5, simply because it was at hand):

nvm install v4.4.5

Step four

After a successful installation it is time to configure access to AWS. In this article I will skip the step of setting up a dedicated AWS account for development and its role; detailed instructions can be found in the framework's manuals.


Step five

Usually, to use AWS keys it is enough to add two environment variables:

export AWS_ACCESS_KEY_ID=<key>
export AWS_SECRET_ACCESS_KEY=<secret>


Step six

Let us assume that the account is set up and configured. (Please note that the Serverless framework requires administrator-level access to AWS resources; otherwise you can spend hours trying to figure out why things are not working as expected.)


Step seven

Install Serverless globally:

npm install -g serverless@beta

Please note that without specifying the beta version you would surely end up with the 0.5 branch. As of today, 0.5 and 1.0 differ like heaven and earth, so the instructions for version 1.0 simply will not work with 0.5.
Step eight

Create a directory for the project. And here, a small digression about the architecture of the project.


The architecture of a Serverless project

Let us now look at how a lambda function can be uploaded to Amazon. There are two ways:

  • Via the web console, with a simple copy and paste. The method is very simple and convenient for one-off functions with simple code. Unfortunately, functions uploaded this way cannot include third-party libraries (the list of libraries available to lambda functions can be found in the Amazon documentation, but as a rule it is just the out-of-the-box language runtime and the AWS SDK, no more and no less).

  • Via the AWS SDK, which lets you upload a function as a package. This is a regular zip archive with all the necessary files and libraries (there is a limit of 50 MB on the maximum package size). Do not forget that a lambda is a microservice, and pouring an entire software suite into a single function makes no sense; since payment for the function is based on the code's runtime, do not forget to optimize. (A rough sketch of such a manual upload follows this list.)
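
For the curious, here is a rough sketch of what the second method looks like when done by hand with the AWS SDK for NodeJS; the function name, region, and zip path are placeholders for the sake of the example. Serverless does essentially this for you:

// Manual upload of a zip package to an already existing lambda function.
var fs  = require('fs');
var AWS = require('aws-sdk');

var lambda = new AWS.Lambda({ region: 'us-west-2' });

lambda.updateFunctionCode({
    FunctionName: 'my-function',                // placeholder name
    ZipFile: fs.readFileSync('package.zip')     // the whole archive, mind the size limit
}, function(err, data) {
    if (err) { return console.error('Upload failed:', err); }
    console.log('Uploaded, new code SHA256:', data.CodeSha256);
});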

In our case, Serverless uses the second method: it prepares the existing project and creates the zip package itself. Below I give an example of a project layout for NodeJS; the same logic applies to other languages.

Function
|__ lib // Internal libraries
|__ handler.js // Entry point of the function
|__ serverless.env.yaml // Environment variables
|__ serverless.yml // Project configuration
|__ node_modules // Third-party modules
|__ package.json

I would not want to overload the article, but unfortunately the documentation on configuring the framework is very incomplete and fragmented, so let me give an example from my own practice. The whole configuration of the service sits in the serverless.yml file, with the following structure:


The contents of the serverless.yml configuration file:

service: name-of-the-service

provider:
  name: aws
  runtime: nodejs4.3
  iamRoleStatement:
    $ref: ../custom_iam_role.json # A JSON file describing the IAM role for our functions: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_policy-examples.html. In this JSON file we keep only the Statement array.
  vpc: # Default VPC settings (in case the lambda function needs access to your VPC network)
    securityGroupIds:
      - securityGroupId1
    subnetIds:
      - subnetId1
  stage: dev # The name of the stage (essentially an arbitrary string that means something to you)
  region: us-west-2 # The Amazon region

package: # Description of the package
  include: # An important point: by default the framework includes only one file, namely the central entry point (handler). If you want to include additional directories, specify them below.
    - lib
    - node_modules # And yes, as I wrote earlier, Amazon does not provide any way to load modules automatically, so you must include them in the package.
  exclude: # If you specify only exclusions, the framework will include everything except this list.
    - tmp
    - .git

functions: # This part lists the functions. Do not be confused by the duality of concepts: here a function means a lambda function in Amazon, while the project as a whole is called a service. Curiously, each new function is in fact packaged separately and becomes a separate lambda function in Amazon.
  hello: # The name of the lambda function
    handler: handler.hello # Path to the entry point
    memorySize: 512 # Amount of memory
    timeout: 10 # Timeout
    events: # The events the function will respond to
      - s3: bucketName
      - schedule: rate(10 minutes)
      - http:
          path: users/create
          method: get
          cors: true
      - sns: topic-name
    vpc: # Custom VPC settings for a specific function
      securityGroupIds:
        - securityGroupId1
        - securityGroupId2
      subnetIds:
        - subnetId1
        - subnetId2

resources:
  Resources:
    $ref: ../custom_resources.json # A JSON file listing additional resources.


For the most part, this configuration file is very similar to the configuration of Amazon's CloudFormation service (which I will perhaps write about in the next article). In short, it is the service that controls all the resources in your Amazon account. Serverless relies fully on this service, and usually, if a function runs into an incomprehensible error during deployment, detailed error information can be found on the CloudFormation console page.

I would like to note one important detail about a Serverless project: you cannot include directories and files located higher in the directory tree than the project directory. In other words, ../lib will not work.

Now that we have the configuration, let us proceed to the function.


Step nine

Create a project with the default configuration:

sls create --template aws-nodejs

After this command you will see a project structure similar to the one described above.

Step ten

The function itself lives in the handler.js file. You can read about the principles of writing functions in the Amazon documentation, but in general, the entry point is a function with three arguments:

  1. event: the event object. It contains all the data about the event that triggered the function. In the case of AWS API Gateway, this object will contain the HTTP request (in fact, Serverless sets up a default mapping of the HTTP request to the API Gateway event, so the user does not need to configure anything manually, which is very convenient for most projects).

  2. context: an object containing the current state of the environment, such as the ARN of the current function and, sometimes, authorization information. I want to remind you that with the new NodeJS 4.3 runtime for Amazon Lambda, the function result must be returned through the callback rather than through context (i.e. context.done, context.succeed, context.fail).

  3. callback: a function of the form callback(error, data) that returns the result of processing the event.

For example, let's create a simple Hello World function:

exports.hello = function(event, context, callback) {
    // The first argument of the callback is the error; pass null on success.
    callback(null, {'Hello': 'World', 'event': event});
}
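
Before deploying, you can poke the handler locally with a tiny script; the fake event here is arbitrary, and no AWS resources are involved:

// test.js: invoke the handler locally with a fake event.
var handler = require('./handler');

handler.hello({ name: 'local test' }, {}, function(err, data) {
    if (err) { return console.error('Handler returned an error:', err); }
    console.log('Handler returned:', JSON.stringify(data));
});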

Step eleven

Deploy!

sls deploy

Usually it takes a while to package the project and create the functions and the environment in AWS, but in the end Serverless will print the ARN and the endpoint through which you can see the result.
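
You can then hit the returned endpoint with curl, a browser, or a couple of lines of NodeJS; the URL below is just a placeholder for whatever sls deploy prints for you:

// Quick smoke test of the deployed endpoint (replace the URL with your own).
var https = require('https');

https.get('https://example.execute-api.us-west-2.amazonaws.com/dev/users/create', function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
        console.log('Status:', res.statusCode);
        console.log('Body:', body);
    });
}).on('error', console.error);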

In closing


Although the article covers only the basics of Serverless technology, in practice the range of its applications is almost limitless: from simple portals (built as static pages with React or Angular, with the backend logic in lambda functions) to processing archives or files in S3 storage and complicated mathematical operations with load distribution. In my opinion, the technology is still at the very beginning of its life and will most likely keep evolving. So take your keyboard in hand and go try it out (thankfully, Amazon's Free Tier lets you do this absolutely for free at first).

Thank you for your attention; please share your experiences and observations in the comments! I hope you enjoyed the article, and if so, I will continue this series with a deeper dive into the technology.

Article based on information from habrahabr.ru
