Welcome back to BackSpace Academy. In this lecture we're going to look at all things serverless on AWS. We'll start off by looking at what serverless actually means, and then we'll go through the different offerings available on AWS for creating serverless applications. We'll look at the compute offerings, storage such as S3, database services such as DynamoDB, and API services that allow us to create an API back end that is completely serverless. We'll look at the application integration and orchestration services that let us tie all of these serverless offerings together, and finally we'll look at the analytics services that are completely serverless as well.

So what is serverless? It's anything that allows you to build and run applications and services without having to think about servers. If you run a file server on EC2, you need to worry about load balancing the incoming requests, auto scaling for changes in demand, redundancy, failover and all of those things, whereas if you use Amazon S3 for uploading and serving those files, you don't need to worry about what's going on with the servers at the back end. Your applications don't require the provisioning, maintaining and administering of servers for back-end components the way an EC2 architecture would, and under the shared responsibility model it shifts more of that operational responsibility over to AWS.

Because of that, serverless can be more expensive: on a compute-only cost basis, Lambda will be more expensive than EC2. But the total cost of ownership can be significantly lower, because with EC2 that server is sitting there even when it's not being used. A Lambda function is invoked when you need it and destroyed when you don't, so you lower the cost of wasted resources, and it also reduces your support costs because you don't need someone to design, provision and monitor a server and make sure it keeps running. So it lowers those support costs as well.
AWS has many serverless offerings on its serverless platform, and they are fully managed services that you can use to build and run your serverless applications. They include compute such as Lambda, storage such as S3, databases such as DynamoDB, application programming interfaces, application integration and orchestration of application components, serverless analytics, and developer tools to implement these serverless applications.

Microservices are small, independent services that make up a much larger application, and those small independent services communicate with each other using well-defined application programming interfaces. A microservices architecture can be facilitated using serverless technologies, and serverless technologies work very well in this sort of architecture. That said, you can also build a microservices architecture using containers running on EC2 servers, but serverless technologies are a great way of doing it. The benefits of a microservices architecture are increased agility, so you can get to market quicker; good ownership of each microservice by your developers; and reusable code, which is quite important because in a continuous integration and continuous deployment environment you deploy only what needs to be updated, only those individual microservices, so it really speeds up deployment of updates. You also achieve right-sized scaling: because your application is broken up into individual microservices, each one can scale independently, so when there's demand on one part of your application, that part gets the resources it needs. It's also resilient to single points of failure: although one individual microservice may not work, the rest of your application will. Unlike an application on a single server, which can go down as a single point of failure, microservices increase the resilience of the architecture.

The AWS Serverless Application Model, or SAM for short, is a framework for defining the serverless applications that you are going to deploy using CloudFormation.
Now, you don't necessarily need to use SAM to deploy a serverless application with CloudFormation, but it does provide a good structure for how you do that. SAM templates are YAML templates, the same as CloudFormation, and they look very similar to CloudFormation templates because they are an extension of CloudFormation. They define your serverless application at a very high level, and CloudFormation then transforms that into a full CloudFormation template that defines it at a lower level for you. There is also the SAM command-line interface, or CLI: you can use it to invoke your Lambda functions, to create a deployment package with the sam package command, and to deploy the applications created with that package using sam deploy.

When you talk to people about AWS serverless, they'll automatically think about AWS Lambda. Of course there's more to serverless than just Lambda, but it is a very popular service. It automatically runs code in response to triggers, and you only pay for the compute time that is used: when you don't use Lambda you're not paying for it, and when it's invoked you are. The code runs in parallel, so you can have multiple functions running at the same time, and each trigger is processed individually. That's great because it allows you to scale up and scale down, and when the functions are not being used they are terminated.

This is short-lived, raw compute power, and because of that Lambda cannot do everything an EC2 instance can. For example, if you want to run a traditional web server, you're not going to do that with Lambda. If you want a WebSocket connection, which is a persistent connection, you're not going to do that either, because a Lambda function just responds to an individual request and then terminates itself. So you need to take into consideration that Lambda cannot do everything EC2 can. You must also take care not to introduce unnecessary complication into your architecture by using Lambda in situations where you don't need to.
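To make that concrete, here is a minimal sketch (not from the lecture) of what a Lambda handler looks like in Python; the event shape and the response body are placeholder assumptions:

```python
import json

def lambda_handler(event, context):
    # Lambda invokes this function once per trigger; the execution environment
    # is created on demand and torn down afterwards, so keep the handler stateless.
    name = event.get("name", "world")  # placeholder: assumes a simple JSON payload
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

You would then point a trigger, such as an S3 event or an API Gateway route, at that handler; the S3 case is sketched in the next example.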
So here we have an application which consists of an Amazon S3 bucket. When a photograph is uploaded to the bucket, an S3 trigger invokes Lambda, and from there we can use Lambda to run some code that applies filters to the image, resizes it or does something else to it, and then saves it back to Amazon S3, where it can be used by different mobile or tablet clients. That's a great application, but consider: what if that photograph was taken by a mobile phone or a tablet? Wouldn't it be better to do that processing on the mobile device or tablet before it is uploaded? By doing that you don't need to use AWS Lambda at all, and you save that cost. So always take these things into consideration: if your client has compute resources, whether on a mobile phone or a desktop computer, why not use their compute resources rather than yours, which you have to pay for? The other advantage is that it's quicker and simpler for the end user, who will get much better latency on these requests.

Here we have another application: a web application hosted and served by Amazon S3, with the front-end code running in the browser on a mobile device, desktop or tablet, and it's a weather information application. When the user clicks to get their local weather information, the application running on that device connects to the API Gateway service, which can trigger a Lambda function to process the request, access the DynamoDB table to get the information that's needed, and send it back through API Gateway to the browser application. One thing to consider here is that you don't need Lambda for API Gateway to access DynamoDB; API Gateway can access it directly. So unless there's something specific that you can't do with API Gateway alone, you don't really need Lambda, because you can set up the API Gateway integration to access DynamoDB directly, depending on what the request is asking for.
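Here is a hedged sketch of that S3-triggered image processing pattern, not the lecture's actual code: the output bucket, key prefix and thumbnail size are placeholders, and it assumes the Pillow library is bundled in the deployment package.

```python
import io

import boto3
from PIL import Image  # assumption: Pillow is packaged with the function

s3 = boto3.client("s3")
THUMBNAIL_SIZE = (256, 256)          # placeholder size
OUTPUT_BUCKET = "my-thumbnails"      # placeholder bucket name

def lambda_handler(event, context):
    # S3 puts one record in the event for each upload that fires the trigger.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original))
        image.thumbnail(THUMBNAIL_SIZE)   # resize in place, preserving aspect ratio

        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        buffer.seek(0)

        # Write the resized copy to a separate bucket so the upload
        # does not re-trigger this same function in a loop.
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=f"thumb/{key}", Body=buffer)
```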
So that's a classic example of where Lambda may not be needed, and you can save the cost of using it. The other thing you may want to consider is that if there is processing required, maybe do it on the browser side, on the device the application is running on, and then send the request to API Gateway, which will access DynamoDB. Another thing to consider is that you're creating a custom API in API Gateway here to access DynamoDB, but remember that DynamoDB has its own API, and it can do a lot; there's not a lot you can't do when accessing DynamoDB directly that you would need API Gateway for. So there's no reason why you couldn't get rid of the API Gateway service as well, use your browser application to access DynamoDB directly, and have the appropriate security established for that.

Lambda@Edge is a feature of Amazon CloudFront that allows you to run a Lambda function in response to triggers generated by CloudFront. There are a number of use cases for it: you can add security headers to HTTP responses before they go back to the client, use it for bot mitigation, use it for search engine optimization by sending back responses that are SEO optimized, use it to process requests for data from DynamoDB, and use it for real-time image transformation. But remember that Lambda@Edge is not a free service, so make sure you do your costing before you implement it.

Here's a good application of how you might use Lambda@Edge. We've got a website, and when a user visits it we want to know whether it's a real user or a bot. CloudFront can trigger a Lambda@Edge function when the request is made, and that function can run code that checks whether the traffic is from a bot or from a real user, and block the bot or allow the real user through.
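As a minimal sketch of that bot check, a CloudFront viewer-request handler might look something like the following; the User-Agent substring test is a deliberately naive placeholder, and real bot mitigation would be more involved.

```python
# Sketch of a Lambda@Edge viewer-request handler (Python is one of the
# supported runtimes). The substring check is a naive placeholder.
BLOCKED_AGENTS = ("curl", "python-requests", "bot")  # placeholder patterns

def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    user_agent = headers.get("user-agent", [{"value": ""}])[0]["value"].lower()

    if any(pattern in user_agent for pattern in BLOCKED_AGENTS):
        # Returning a response object short-circuits CloudFront and blocks the request.
        return {
            "status": "403",
            "statusDescription": "Forbidden",
            "body": "Bots are not allowed.",
        }

    # Returning the request object lets CloudFront continue to the cache/origin.
    return request
```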
Here we have another application, where a user visiting your website may want the images on that website delivered and resized to smaller or larger sizes depending on whether they're using a desktop, a tablet or a mobile. If they're on a mobile they want small images sent to them rather than big ones. The way that works is that we upload our original high-definition images into an S3 bucket, and when someone accesses one of them, Amazon CloudFront invokes a Lambda@Edge function that resizes the image and presents it back to that user. At the same time, all of those responses are cached by CloudFront, so we're not paying every time someone accesses that image; it's stored in the cache, and that's going to save you a lot of costs.

AWS Fargate is a serverless environment that allows you to deploy Docker container applications, and the individual tasks of that application run in their own individual environments, which provides highly secure isolation compared to multiple containers running on a single EC2 instance. A big advantage of Fargate is that it allocates the right amount of compute resources to each of those individual containers. That means that if one part of your application is under heavy demand, it gives the compute resources to that part of your application, and you're only paying for the resources required to run that container. This is unlike Lambda, which runs in parallel: when you get multiple requests for a Lambda function you get multiple functions operating at once, and when those individual requests are finished each Lambda function terminates itself. AWS Fargate is different in that you deploy your application, define what it needs, and it is deployed in a serverless environment, but when it's deployed and not receiving requests it is still going to be a cost for you.
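As a rough illustration of that deployment model (not from the lecture), launching a one-off Fargate task with boto3 might look like the sketch below; the cluster name, task definition and subnet are placeholders, and the task definition would already have been registered with your container image and its CPU and memory requirements.

```python
import boto3

ecs = boto3.client("ecs")

# Placeholders: these resources would already exist in your account.
CLUSTER = "my-cluster"
TASK_DEFINITION = "my-app:1"   # family:revision registered with image, CPU and memory
SUBNETS = ["subnet-0123456789abcdef0"]

response = ecs.run_task(
    cluster=CLUSTER,
    launchType="FARGATE",            # no EC2 instances to provision or manage
    taskDefinition=TASK_DEFINITION,
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": SUBNETS,
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```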
So here we can see the advantage of using Fargate against not using it. Without Fargate, we build our container image, we have to define and deploy our EC2 instances, provision them, manage them and administer all of those EC2 resources; we have to provide isolation of our applications in separate virtual machines; we have to run and manage both the application and the infrastructure; and finally we pay for those EC2 instances, and there might be too many of them or not enough depending on how we've set that up. With Fargate it's a lot simpler and easier: we build our container image, we define what we need in terms of memory and compute resources, and we simply run our application on AWS Fargate. We pay for the compute resources we requested, and our application is isolated by design, because each of those components operates in its own isolated environment.

We already know quite a bit about Amazon Simple Storage Service. We've deployed a serverless application using Amazon S3, a serverless static website hosted completely on S3, and we also know we've got storage and archiving using Glacier. But we can also integrate it with Lambda and use the S3 service to trigger a Lambda function, so if we upload an image to a bucket we can trigger a Lambda function that processes the image, puts some filters on it, resizes it or whatever.

We also already know quite a bit about the EFS service. It's a fully managed network file system that enables parallel shared access to files by multiple EC2 instances. It scales on demand to petabytes, it does that without interruption, and it can grow and shrink automatically as you add or remove files. The data is stored in a region and shared across multiple availability zones, so it has redundancy within a region, but if you want more than that, in case a whole region goes down, you're going to have to set up another system in another region. You cannot mount EFS to a Lambda function; it just won't work, and the reason is that Lambda is not EC2. Lambda functions are short-lived: they're created and destroyed per request. If a request comes in, a Lambda function is created to respond to that request, and then it's terminated afterwards.
So that's no good for EFS: you cannot mount an EFS file system to a Lambda function.

DynamoDB, as we already know, is a fully managed NoSQL database. It can be accessed directly and securely from an application. For example, we may have a browser application that we deploy using Amazon S3 and that runs on a mobile device; we can set that up using the AWS JavaScript SDK and implement security for access to the DynamoDB database using the Amazon Cognito identity service. It can also be integrated with the Amazon API Gateway service, and we can secure access to that API Gateway using Cognito again.

So here's an application deployed using Amazon S3. It's running in a web browser on a device, be it a mobile device, desktop or laptop, and it's using Amazon Cognito for authentication, to authenticate who that person is and whether they have access to the DynamoDB database. If they do, the Amazon API Gateway service, which we set up as a custom API, receives the API calls from our web browser application and can trigger a Lambda function, which can then access Amazon DynamoDB. Now, as we saw before, we need to think about whether we need AWS Lambda there at all, or whether we can run that code in the web browser and set up our API Gateway to access Amazon DynamoDB without Lambda. We need to take that into consideration, because it is an extra cost. Again, we may not even require Amazon API Gateway, because DynamoDB has its own API for accessing data and we can secure that access using Cognito. So it's quite possible that you don't need AWS Lambda or Amazon API Gateway there; you don't necessarily need to create your own custom gateway to access DynamoDB, you can access it directly if you want to save costs.
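To illustrate what "DynamoDB has its own API" means, here is a minimal boto3 sketch; in the browser scenario above you would make the equivalent call with the JavaScript SDK using Cognito-issued credentials. The table name and key are placeholder assumptions.

```python
import boto3

# Placeholders: the table and its key schema are assumptions for illustration.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("WeatherByCity")

# The same GetItem API call a browser client could make via the JavaScript SDK,
# using temporary credentials issued by Amazon Cognito.
response = table.get_item(Key={"city": "Brisbane"})
item = response.get("Item")
print(item)
```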
The Amazon Aurora Serverless service is a fully managed, MySQL-compatible database service. Unlike standard Amazon Aurora, where you're deploying a cluster of servers, this is completely serverless: it does on-demand and auto scaling, and it can hibernate when not in use. But be careful: when it goes into hibernation it needs to come back out of hibernation, and that is not an instantaneous process. It's not like Lambda, where things instantaneously start and shut down; when something goes into hibernation, it takes a while to come back out of it. It's a simple, cost-effective option for infrequent, intermittent or unpredictable workloads, so it's very good for developers who want to quickly deploy something and have it basically shut itself down when they're not using it. You pay on a per-second basis for the database capacity you use, but you're still going to pay for the storage when it's not in use. When it goes into hibernation, although you're not paying for the database capacity, you are still paying for the storage that's sitting there ready to go when it comes out of hibernation. That's something to consider: it's not necessarily free when you're not using it.

Amazon API Gateway is a fully managed service to create, publish, maintain, monitor and secure application programming interfaces at any scale. If you've got an application with a massive number of requests, API Gateway can handle that for you. Application programming interfaces, or APIs for short, provide the front door for your applications to access data or process requests against AWS resources. A very good feature of API Gateway is that it has a cache for result caching: if someone comes back with the same request, it returns the cached result, saving you a significant amount of cost over going back to your database and getting those results again.
Another great feature is that it's WebSockets capable. If you've got a browser-based application that relies on real-time data and on having the user interface updated in real time, you can have a WebSocket connection to API Gateway and it can deliver that real-time data for you.

Okay, so here's an application of how we might use it. We've got some devices, some Internet of Things devices or a web and mobile application or whatever it may be, and we require a connection to our back-end data or back-end services. We receive the requests from the application at the Amazon API Gateway service: we create and design an API and publish it using API Gateway, it receives those requests and forwards them through to one of our services, which could be DynamoDB, Kinesis, Lambda or one of many other things, and the response is forwarded back to the device. If the same request comes in again, the result can be served from the API Gateway cache, and we can also monitor the performance of the API using Amazon CloudWatch.

There are a number of services available on AWS for integrating our serverless applications and for orchestrating the different components of our application. SNS, or Simple Notification Service for short, can be used for sending notification messages between applications and services such as Amazon CloudWatch, or even to an end user via email. Simple Queue Service can be used for queuing requests to our serverless applications (there's a small sketch of both of those calls after this overview). AWS Step Functions can be used to coordinate multiple serverless components of our application, making sure each process runs in sequence from one to the next. AWS AppSync allows us to create a GraphQL back end for our browser applications. And Amazon EventBridge is a serverless event bus service, which we'll talk a little more about in an upcoming slide.
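As a quick illustration of two of those integration services, here is a hedged boto3 sketch of an SNS publish and an SQS send; the topic ARN and queue URL are placeholders for resources you would already have created.

```python
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Placeholders: these resources would already exist in your account.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-queue"

# Publish a notification that any subscriber (email, Lambda, SQS, ...) can receive.
sns.publish(
    TopicArn=TOPIC_ARN,
    Subject="Order received",
    Message=json.dumps({"orderId": "12345"}),
)

# Queue a request so a serverless consumer can process it at its own pace.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"orderId": "12345"}),
)
```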
So you can use that 296 00:29:26,900 --> 00:29:34,600 to connect to DynamoDB, Aurora, Elasticsearch, a Lambda function or a 297 00:29:34,600 --> 00:29:42,580 HTTP endpoint. If you need to, it provides real-time data subscription and 298 00:29:42,580 --> 00:29:49,850 synchronization so you can have real live and reactive data applications 299 00:29:49,850 --> 00:29:56,540 The Apollo client is an application that has been created by the Meteor development 300 00:29:56,540 --> 00:30:02,930 group and what it does it provides that front end for your graphQL connection 301 00:30:02,930 --> 00:30:10,220 and so database appsync it provides plug-ins for that Apollo client to allow 302 00:30:10,220 --> 00:30:17,870 you to create really good reactive front ends with live updating data and also 303 00:30:17,870 --> 00:30:23,030 that have offline support so when that connection drops out you still have a 304 00:30:23,030 --> 00:30:30,110 case of that regularly used data in that Apollo client. Here we have an 305 00:30:30,110 --> 00:30:36,290 application that is running AWS Appsync. So it could be running on a web application 306 00:30:36,290 --> 00:30:40,430 it could be a mobile application a real-time dashboard it 307 00:30:40,430 --> 00:30:44,600 could be an Internet of Things device whatever it may be. So we have our 308 00:30:44,600 --> 00:30:50,330 application running on one of those devices and we want to access a data 309 00:30:50,330 --> 00:30:56,600 source it could be dynamodb or Aurora or whatever it may be and so we can define 310 00:30:56,600 --> 00:31:05,110 a graphQL schema about how that data is structured and then we can design 311 00:31:05,110 --> 00:31:10,740 resolvers that define how we go from that graph QL schema 312 00:31:10,740 --> 00:31:18,330 to our back-end data service and how we achieve that once it's all done I Debus 313 00:31:18,330 --> 00:31:24,270 appsync can talk to that client application look at that graphQL schema 314 00:31:24,270 --> 00:31:29,130 look at that resolver and get that data in the format that it's required if 315 00:31:29,130 --> 00:31:35,220 we've got the Apollo client running on those applications if that connection is 316 00:31:35,220 --> 00:31:41,520 broken then that data will still be available or a subset of that data will 317 00:31:41,520 --> 00:31:50,460 be available still on that device. 
Amazon EventBridge is a serverless event bus service. What that means is that you can create applications that connect to a software-as-a-service provider such as SugarCRM, Symantec or Zendesk, or to an AWS service, and they connect by reacting to events. The way we do that is that we define our data in a schema and store it in a schema registry, then we write code that reacts to an individual event, and by defining rules we integrate those events with our targets. Those targets could be a Lambda function, an SQS queue, an SNS topic, Step Functions or a Kinesis stream.

Here we have an application: on the left are our event sources, which could be an AWS application, a custom application or whatever else, and we want to react to events that occur on that source and do something to a target. On the right we've got our targets, which could be Lambda, a Kinesis Data Firehose, SNS or whatever it may be. EventBridge responds to an event from the event source, looks at its schema from the schema registry, looks at the rules around how it reacts to that event, and then does something to those targets.

And finally, we can have analytics services that run completely without servers. We can use a Kinesis stream, which is a fully managed data capture, streaming and analysis service. We can also use Amazon Athena if we've got data stored in Amazon S3: it provides a fully managed interactive query service for that S3 bucket. The way it works is that we define the bucket and the schema for the data stored in it, we query that data using standard SQL, and our results are delivered within seconds. We pay per terabyte scanned, so each time we scan we pay again based on the number of terabytes scanned.

So that brings us to the end of a very broad overview of the AWS serverless platform. I hope you enjoyed it and I look forward to seeing you in the next lecture.