Welcome back to BackSpace Academy. In this lecture we'll talk specifically about serverless architectures. We'll start off by talking about the core components, which we should already know, then the AWS Serverless Application Model, or SAM for short, and then some application architectures: first off REST APIs and how we can implement those, then a mobile backend, then how we can create a web application that is completely serverless, and finally we'll finish up by talking about how we can access VPC resources from our serverless environments.

In the compute services, first off we've got AWS Lambda, which allows us to run stateless serverless applications. By stateless, what we mean is that these are very short-lived server environments: once they've finished, after a millisecond or a second or whatever it is, they disappear. They're terminated, along with any temporary data that was associated with them.

Next we've got Amazon API Gateway, which allows us to run a fully managed REST application programming interface that integrates with Lambda.
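To make that concrete, here is a minimal sketch of the kind of stateless handler Lambda runs behind API Gateway. The event shape follows API Gateway's proxy integration; `fetch_item` is a hypothetical stand-in for a database lookup (for example a DynamoDB read), injected so the sketch stays self-contained rather than a definitive implementation.

```python
import json

def lambda_handler(event, context, fetch_item=None):
    """Handle one API Gateway proxy request and return a proxy-style response.

    The handler keeps no state between invocations, which is exactly the
    "stateless" property described above. `fetch_item` is a hypothetical
    injected lookup so the sketch runs without AWS access.
    """
    item_id = (event.get("pathParameters") or {}).get("id")
    if item_id is None:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    item = fetch_item(item_id) if fetch_item else None
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}
```

Every invocation starts fresh; anything the handler needs must arrive in the event or come from an external store.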
So we could have our web application calling our API backend, which could then integrate with Lambda and tell Lambda to go and retrieve something for us, for example some data from a database.

Finally we've got AWS Step Functions. Step Functions is similar to AWS Simple Workflow Service (SWF), but it's the newer service, and AWS recommends that you use Step Functions over SWF when you're making a decision on new applications. What it does is let you define your business process and any conditions within that business process; you could have applications that pass information from one application to another, and AWS Step Functions will allow you to orchestrate that entire process workflow.

Now there is another serverless compute service from AWS, and that is Lambda@Edge. It allows you to run code at a CloudFront edge location and have your code modify and interact with that CloudFront distribution. So here is an example from the CloudFront documentation.
What we've got there is an Amazon S3 origin that contains images that are quite large, and what we'd like to do is resize those images or modify them in some way: we might want to put a watermark on them, or put metadata inside the images for combating piracy, for example, and then deliver them to be stored in that CloudFront distribution at our edge locations. So what we can do is set up a Lambda@Edge function inside our CloudFront distribution. When CloudFront fetches an image that hasn't been processed for CloudFront previously, it will invoke an AWS Lambda function to process the image, and that image will then be forwarded on to the CloudFront distribution. That way our end user gets to see exactly what they want, and at the same time we're not processing all of these images, only the ones that we need, on demand.
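A rough sketch of that origin-response pattern in Python follows. The event nesting (`Records[0].cf.response`) matches the Lambda@Edge event shape, but the `x-processed` marker header and the `resize_and_watermark` helper are hypothetical placeholders I've introduced for illustration, not part of any real API; real image work would operate on the image bytes.

```python
def resize_and_watermark(body):
    # Hypothetical placeholder: real code would decode the image bytes,
    # resize them and stamp a watermark or anti-piracy metadata.
    return body

def handler(event, context):
    """Lambda@Edge origin-response sketch: runs after CloudFront fetches an
    object from the S3 origin, before the edge location caches it."""
    response = event["Records"][0]["cf"]["response"]
    headers = response.setdefault("headers", {})
    if "x-processed" not in headers:  # only process each image once
        response["body"] = resize_and_watermark(response.get("body", ""))
        headers["x-processed"] = [{"key": "X-Processed", "value": "true"}]
    return response
```

Because the processed object is cached at the edge, the function only runs for images that haven't been seen yet, matching the on-demand behaviour described above.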
In data and edge delivery of content, we've got Amazon DynamoDB, which as we know is a fully managed serverless NoSQL database, and we also have the DynamoDB Accelerator, or DAX. DAX is a cache that we can put in front of DynamoDB, and it allows us to cache frequently accessed requests to that DynamoDB backend, which will of course reduce our load on DynamoDB and our subsequent costs. Amazon S3, as we know, is great for website hosting and for storing objects. CloudFront is a content delivery network for caching, again, frequently accessed content, to reduce the load on Amazon S3 and, again, our costs. Finally we have Amazon Elasticsearch, which is a fully managed search engine that also has analytics tools available as well.

In messaging and streaming, first off there's the Simple Notification Service, a pub/sub messaging service. It allows us to create a topic; subscribers can subscribe to that topic, we can publish to that topic, and our subscribers will receive that publication. It also allows us to do mobile push notifications to Android, iOS and other devices.
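The topic/subscriber flow just described can be sketched with a boto3-style call. The `publish` parameters (`TopicArn`, `Message`, `Subject`) mirror the real SNS API, but the client here is injected as a fake so the sketch runs without an AWS account; the topic ARN is a placeholder.

```python
def notify_subscribers(sns_client, topic_arn, message, subject=None):
    """Publish one message to an SNS topic; every subscriber to the topic
    (email, SQS, mobile push, ...) receives the publication."""
    params = {"TopicArn": topic_arn, "Message": message}
    if subject:
        params["Subject"] = subject
    return sns_client.publish(**params)

class FakeSNSClient:
    """Stand-in for boto3's SNS client so the flow can be exercised locally."""
    def __init__(self):
        self.published = []

    def publish(self, **kwargs):
        self.published.append(kwargs)
        return {"MessageId": str(len(self.published))}
```

With a real boto3 client the call is identical; the publisher never needs to know who the subscribers are, which is the point of the pub/sub model.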
Next up we've got Amazon Kinesis for streaming: it collects and processes real-time data streams. We have analytics for Amazon Kinesis as well, which allows us to do real-time analytics using our standard SQL statements. And finally we've got Amazon Kinesis Firehose, which allows us to direct that stream to more permanent data storage: it transforms and loads the streaming data into Kinesis Analytics, Amazon S3, Redshift or Elasticsearch.

In user management and identity we've got Amazon Cognito. It gives us user sign-up and sign-in capabilities for our applications; we can manage those users with user pools, and we can also have the data for those users stored and synced in a sync store. It can also incorporate federated identities: for example you might have a third-party OAuth process, such as Login with Facebook or Login with Amazon, or a SAML process, or Microsoft Active Directory, that could also integrate with Amazon Cognito.

For deployment of our serverless environment we can use the Serverless Application Model, or SAM for short.
SAM allows us to define our applications in a CloudFormation template; we'll talk more about that in the next slide.

For monitoring we've got CloudWatch, of course, for metrics and logs, and we've also got AWS X-Ray, which allows us to carefully analyze the performance of our applications by creating traces and service maps of their performance. How it works is that in your application you will have the AWS X-Ray software development kit, and that will allow you to send information at different stages of your application back to an AWS X-Ray daemon, which will be running on a compute environment. That could be an EC2 instance; if your application is running on Elastic Beanstalk, you can have Elastic Beanstalk create that X-Ray daemon for you automatically; or you could have it running as a container on the same instance as your application. That daemon will collect the information being sent to it from your application and send it off to the AWS X-Ray service, and from there you can view a service map.
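As a toy illustration of what those per-stage timings give you (this is deliberately not the real `aws-xray-sdk`, which ships instrumented segments to the local daemon rather than keeping them in a dict), you can picture each instrumented stage as a timed subsegment:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def subsegment(name):
    # Time one named stage of a request, roughly what an X-Ray
    # subsegment records before the SDK hands it to the daemon.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

def slowest_stage():
    # The service map's headline answer: where is the time going?
    return max(timings, key=timings.get)
```

The real service views aggregate these timings across services, but the question they answer is the same one this sketch answers locally.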
The service map will break down the time taken by different parts of your application, so if your application is slow you can have a look and see what is causing the problem, and see what could be done concurrently rather than sequentially; it allows you to really thoroughly analyze the performance issues of your application.

Now, the Serverless Application Model isn't actually an AWS service as such; it's a model for defining your serverless applications. It defines the syntax you need to conform to so that your templates can be interpreted by the CloudFormation service and then deployed. Within your CloudFormation template you will have a Transform section, and that is where you declare the SAM model that describes your serverless environment; that will define those serverless resources and what you want to do with them. So here we've got a very simple CloudFormation template, and as you can see it's got the Transform section, which declares the AWS Serverless Application Model.
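Putting the two examples described here into one template, a SAM template along these lines declares the Transform plus a serverless function and a simple table; the bucket name, object key and resource names below are placeholders:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: s3://my-code-bucket/function.zip   # placeholder bucket/key
  MyTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: id
        Type: String
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
```

CloudFormation sees the Transform line, expands the `AWS::Serverless::*` types into plain CloudFormation resources, and deploys them.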
In our Resources section we've got an AWS::Serverless::Function, a Lambda function, and we have the properties for it: the handler, index.handler; the runtime, Node.js 8; and the code URI, an S3 bucket that holds our code package in a zip file. In this next CloudFormation template we're defining a DynamoDB table: again we've got our Transform section declaring the Serverless Application Model, and then in our resources we've got the type AWS::Serverless::SimpleTable, which is a simple DynamoDB table, and there we have the primary key and the provisioned throughput details of that DynamoDB table.

A very common use case for AWS serverless environments is to create a RESTful API, or application programming interface, for your mobile or web applications. The good thing about using a serverless service such as Amazon API Gateway is that it integrates with Amazon Cognito: that allows you to have Amazon Cognito sign-in and sign-out capability, user pools for storing that user information, and a sync store for personal information about each user.
You can also control access to what that user can get through that Amazon API Gateway. The Amazon API Gateway would communicate with AWS Lambda, and AWS Lambda will require an IAM role to allow it to access information in an Amazon DynamoDB table; AWS Lambda will also require a role to be able to send information and logs back to Amazon CloudWatch for metrics and logging of its performance. If we wanted to improve the performance of this architecture even further, we could have an Amazon API Gateway cache in front of that gateway service, which would cache our frequently accessed requests, and we could also have a cache, the DynamoDB Accelerator or DAX for short, in front of our DynamoDB table, which will reduce the load on our backend DynamoDB service.
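Both caches in this picture, the API Gateway cache and DAX, follow the same read-through idea. A minimal sketch, with `fetch` standing in for the expensive backend call (for example a DynamoDB query):

```python
def cached_get(cache, fetch, key):
    """Read-through lookup: serve hot keys from the cache and only hit the
    backing store (here a stand-in for DynamoDB) on a miss."""
    if key in cache:
        return cache[key]
    value = fetch(key)
    cache[key] = value
    return value
```

Every repeated read of a hot key is absorbed by the cache, which is exactly how these layers reduce both load and cost on the backend service. (Real caches also evict and expire entries, which this sketch omits.)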
AWS has a very good offering for mobile backends. First and foremost, we would secure our application, and all access to the AWS resources in our account, using Amazon Cognito: that allows for sign-in and sign-out capability, it can manage our users using Cognito user pools, and we can also store profile information about our users in a Cognito user pool. Our application might require our users to upload files or images to Amazon S3, so we can use Amazon S3 for receiving those, and we can deliver them back using Amazon CloudFront for our frequently accessed content. We could have a search engine capability built into our application, using Elasticsearch, with Lambda handling the communication between Elasticsearch and our application. We could also have an API backend delivering dynamic content from a database: for that, of course, we could use API Gateway, with a gateway cache in front of it to reduce the load on the API Gateway service, a Lambda function accessing the data from DynamoDB, and a DAX DynamoDB Accelerator caching our frequently accessed requests from DynamoDB. And finally we can use the SNS service.
SNS can provide mobile push notifications directly to the device that our user is using, and we could use a Lambda function to orchestrate that as well.

Amazon S3, as we already know, is great for hosting web applications. We also know that we can put a CloudFront distribution in front of that S3 bucket, which will reduce the latency for our end users and reduce our overall costs. We can use an Amazon Cognito user pool for sign-in and sign-out capability and management of our users, we can use an Amazon Cognito data sync store to save profile information about our users, and we can also use Cognito to secure the Amazon API Gateway that we could use for delivering data back to our web application. As before, we can quite easily put a gateway cache in front of Amazon API Gateway, and a DAX DynamoDB Accelerator in front of DynamoDB, to again increase performance, reduce latency and reduce our costs overall.

Now, up until now we've been creating architectures that are completely serverless, but what happens if we have a serverless environment that needs to communicate with something inside the private subnet of a VPC?
For example, you may have an RDS instance. If it's located in a public subnet, your Lambda function will be able to access it and connect to its endpoint, because the connection just goes through the internet gateway, as it would from your desktop at home, not a problem. But if it's located in a private subnet, by default it will be private, that is your private space within the cloud, and the AWS Lambda function won't be able to access it. So here we have an architecture that is a combination of a serverless environment and a server environment. If, for example, our Amazon Redshift, ElastiCache and RDS instances are located inside that private subnet, by default our AWS Lambda function won't be able to access them. It would be able to connect to, say, the endpoint of the RDS instance if that instance were in a public subnet, because the connection would just go through the internet gateway, as it would from your
desktop at home, not a problem. But if that RDS instance is inside of a private subnet, you're not going to be able to do that. So what you need to do, after you have launched your Lambda function, is define the private subnets and the security groups of the resources you need to access inside that private subnet. Once you've done that, the AWS Lambda service will orchestrate the creation of an elastic network interface into that private subnet. The other advantage is that you can also use this to access the Amazon S3 service, because you can go through your private subnet and then on to Amazon S3 through a VPC endpoint. Now, one thing to remember is that this is purely and simply for connecting to these instances; you're going to be connecting to the Amazon RDS instance through its endpoint. If you wanted to actually use the software development kit in your AWS Lambda function, for example to launch RDS instances, then your Lambda function would also need an IAM role allowing it to do that with the RDS service.
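The subnet and security-group details you supply have the following shape. This helper only builds the `VpcConfig` structure the Lambda API expects (applying it would be, for example, boto3's `update_function_configuration` call); the subnet and security-group IDs are placeholders.

```python
def vpc_config(subnet_ids, security_group_ids):
    """Build the VpcConfig block that tells the Lambda service which private
    subnets to create elastic network interfaces in, and which security
    groups to attach to those interfaces."""
    return {
        "SubnetIds": list(subnet_ids),
        "SecurityGroupIds": list(security_group_ids),
    }
```

Passing this structure to the function's configuration is the programmatic equivalent of the console steps described next.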
To enable this it's really quite easy: after you have launched your Lambda function, just scroll down to the Network section and fill in the details there. What we've got there is the VPC that we define, the subnets that have these resources inside them, and the security groups associated with those instances inside the private subnet. What will happen is that the Lambda service will automatically orchestrate all of that for you and create an elastic network interface to all the private subnets you have defined, and it will allow you to communicate with any instances inside those private subnets that are associated with the security groups you've defined here as well. So that brings us to the end of this lecture; I hope you've enjoyed it, and I look forward to seeing you in the next one.