Welcome back to BackSpace Academy. Coming up next, what I'm going to do is run through all of the products and services that are offered on AWS. Now, there's an enormous number of services, so rather than go through them all in one hit, I'm going to break it up into a lot of sections, and at the end of each section I'm going to give you an example of how to use those services. After that we'll do a hands-on session, or a lab, where you can actually use those services yourself. So what I need to do now is give you a very quick introduction to the different types of cloud computing models that are available.

Infrastructure as a service (IaaS) contains the basic building blocks for cloud IT. That means the nuts-and-bolts stuff. So if we want to launch a Linux server and manage that Linux server ourselves, infrastructure as a service is how we would do that, using the Elastic Compute Cloud, or EC2, service.

The next level is platform as a service, or PaaS, and that's where AWS takes a little bit more control over the underlying infrastructure.
AWS manages the underlying infrastructure, normally the hardware and operating system. A good example of that is the Relational Database Service: AWS provisions the operating system, the server and everything needed to run the database, but you still need to do the high-level administration of that database.

Finally, we've got software as a service, or SaaS. That is a complete product that normally runs in a browser, and it mostly refers to end-user applications. Good examples would be Office 365 or salesforce.com.

You'll hear another term used a lot with AWS, and that is serverless computing. That allows you to build and run applications without having to think about servers. You don't need to provision the server yourself; AWS will do that for you. It's also referred to as function as a service, or abstract services. One example is the Simple Storage Service, which we will be using at the end of this lecture, where we create a bucket and put objects and files into that bucket. We don't know what's behind that bucket. Obviously there's going to be an operating system, most probably a Linux operating system, a file server, and hard drives, but we don't need to worry about any of that because AWS looks after it all for us. AWS Lambda is where we can run code in the cloud, again without servers.
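To make "run code without servers" concrete, here's a minimal sketch of what a Lambda function can look like in Python. The handler name, the event field, and the API-Gateway-style response shape are assumptions for illustration; AWS simply invokes whichever handler you configure, on infrastructure you never see.

```python
import json

def lambda_handler(event, context):
    """Hypothetical Lambda handler: AWS calls this function for us --
    we never provision or manage the server it runs on."""
    name = event.get("name", "world")  # "name" is an assumed event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because a handler is just a plain function, you can run it locally with a test event before handing the code to AWS.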
You just provide AWS your code, and AWS looks after everything for you. DynamoDB is a NoSQL database in the cloud, delivered as a service, and the Amazon Simple Notification Service can send out notifications to your users.

So that's a pretty quick introduction to the different types of cloud computing models. The best way to really get a good knowledge of them is to go through all of these products and get hands-on with them; then you'll understand it all a lot better. So there's only one more thing to do, and that is: let's get into it!

Welcome back to BackSpace Academy. In this lecture we're going to run through some of the storage services that are available on AWS. Then we'll look at some examples of how you can use them, and finally we'll finish up with a hands-on lab using one of these services, the Amazon Simple Storage Service, or S3 for short. S3 is designed to store and access any type of data over the Internet. It's a serverless service, and as such we don't need to worry about what is behind it. There's obviously a file server, an operating system and hard drives, but we don't need to be concerned about any of that. We simply create this thing called a bucket and then upload objects to it. The bucket grows as we add objects, and its size is theoretically unlimited. AWS just looks after everything for us.
Amazon Glacier is the cheapest storage option on AWS, and it's used for long-term archiving of data. It's a serverless service just like Amazon S3, but it is not as readily accessible as S3, so it should only be used for content that is to be archived. You can also set up a lifecycle rule that will automatically migrate old data in Amazon S3 over to Glacier for long-term archiving.

Amazon Elastic Block Store, or EBS for short, is highly available, low-latency block storage, specifically for attaching to servers that are launched with the Amazon EC2 service. We'll learn more about the EC2 service coming up. It's similar to attaching a hard drive to your computer at home, and works in the same manner: it's block device storage.

Amazon Elastic File System, or EFS for short, is network-attached storage, specifically for Amazon EC2 servers. Because it is network attached, it allows multiple servers to access one data source, in a similar way to how a NAS on your network at home can be accessed by multiple computers on that network.

The AWS Storage Gateway enables hybrid storage between on-premises environments and the AWS cloud. It provides low-latency performance by caching frequently used data on-premises while storing the less frequently accessed data in Amazon cloud storage services.
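The lifecycle rule mentioned earlier is just a small configuration document that you attach to a bucket. As a hedged sketch (the rule ID, prefix and day count are made up for illustration), this is roughly the shape of document you would hand to S3, for example via boto3's `put_bucket_lifecycle_configuration`; here we only build and sanity-check it locally:

```python
import json

# Hypothetical lifecycle rule: after 90 days, transition objects under
# the "logs/" prefix from standard S3 storage to Glacier for archiving.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-logs",       # assumed rule name
            "Filter": {"Prefix": "logs/"},  # assumed key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# The document must serialize cleanly to JSON to be sent to the S3 API.
print(json.dumps(lifecycle_config, indent=2))
```

Once a rule like this is in place, the migration happens automatically; you don't move the objects yourself.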
A Snowball device is a portable, petabyte-scale data storage device that can be used to migrate large amounts of data from on-premises environments over to the AWS cloud. You simply download your data to the Snowball device, then you send it off to AWS, who will then upload that data to an AWS storage service for you.

Okay, so let's have a look at some examples of using the AWS storage services. In orange there we've got the AWS cloud. Now, we can create a VPC inside that AWS cloud, and that VPC, or virtual private cloud, is our own private space within the AWS cloud. It's an impenetrable fortress against attack, and no one will be able to enter our private space without us allowing that to happen.

So let's say we launch two servers in our VPC. We want these servers to have access to data and somewhere to store that data. In a normal environment you would just add a hard drive to the server, so in the same way we can attach an Amazon Elastic Block Store device to our server. That's great: we've now got high-speed access to our data. But what if we want that data to be available to both of those servers? Here we've got two EBS volumes; what if we want the data to be on one volume only? As we know, on our computers at home we can't attach a hard drive, a block device, to multiple computers; it just doesn't work like that.
In a situation like that on your home network, you would just go out and purchase a NAS, a network-attached storage device. You would attach it to your network, and you would set up the operating system on your desktop computers with a mount target for that network-attached storage, so that when you go to your G drive, or E drive, or F drive, whatever it is, it points to that network-attached storage. In the same way that we can do that with our network at home, we can do the same thing with AWS. Elastic File System is network-attached storage, and with a mount target it can enable multiple servers to access the one data source.

Now, what if we don't want to worry about mount targets and block devices and all this sort of stuff? We just want somewhere we can upload objects to, in a similar way to what we do with Google Drive or something like that, and we also want an automated solution that, over time, migrates that data to something lower cost and more long-term for archiving.
That is where Amazon S3 comes in. We can use Amazon S3 to create a bucket, store objects in that bucket, delete objects, do whatever we want with it, and we can also set up a lifecycle rule on that bucket so that, over a period of time, as objects age they can be migrated over to an Amazon Glacier vault for long-term archiving. The data will still be accessible; it just won't be as readily accessible as in the S3 bucket. But the advantage is that we'll be using the lowest-cost storage that's available on AWS.

Now, the S3 bucket is located in the AWS cloud; it's not located in our VPC. Remember, we said the VPC is our private space within the AWS cloud, and nothing gets through without us allowing it to come through. That is where the VPC endpoint comes in. We can create one of those, and it will allow traffic to flow in and out of our VPC specifically for the S3 service.

So let's have a look at a hybrid storage example, where we've got on-site storage in a corporate data center and we've also got that data stored in the AWS cloud, in Amazon S3. Why would we do that? Well, it's great as a disaster recovery solution, because it provides high-speed access to our data in our corporate data center, and at the same time we're taking advantage of the durability and availability of Amazon S3 for disaster recovery.
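The VPC endpoint just described can also carry a policy that restricts what is allowed to pass through it. As a sketch (the bucket name is hypothetical, and a real policy would usually be tighter), an endpoint policy is an ordinary IAM-style JSON document, which we can build and check locally:

```python
import json

# Hypothetical VPC endpoint policy: only allow reads and writes to one
# bucket ("my-example-bucket") through this endpoint to the S3 service.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

# The policy is attached to the endpoint as a JSON string.
print(json.dumps(endpoint_policy, indent=2))
```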
The first problem we're going to encounter is that this corporate data center will have petabytes of data, and transferring that over the Internet to the AWS cloud is not going to be practical. So AWS can send out to us a Snowball, a high-capacity device that can store petabytes of data. When we receive that Snowball device from AWS, we upload our data to it and then send it back to AWS, and they will upload the data for us into the Amazon S3 bucket. That solves that problem for us.

Then we've got to find a solution for making sure that the data in our corporate data center stays synced with the AWS cloud. That's where the AWS Storage Gateway comes in; it will orchestrate all of that for us. If you have a high-speed link between your corporate data center and the AWS cloud, which you can have with the AWS Direct Connect service, you can have the AWS Storage Gateway orchestrate and manage all of that for you. What it will do is take your popular content, the content that is frequently accessed, and store copies of it on-site in your on-site storage, while at the same time storing all of that data in an Amazon S3 bucket for you. So you've got the advantage of all of the durability and availability of Amazon S3 as a disaster recovery solution, but at
the same time you've got high-speed access to your data, which is cached in the corporate data center.

Let's have a go at using the Amazon S3 service. What we're going to do now is use the AWS Management Console to connect to the AWS cloud, and then we're going to create an Amazon S3 bucket, upload files to that bucket, download files from that bucket, and then finally empty and delete that bucket. Now, there are lab notes for this lab and the further ones coming up, so make sure that you download the Introduction to AWS lab notes that come with this course, and let's get into it.

Before we start the lab, you need to make sure that you have signed up for an AWS account. If you haven't, click on the sign-up button up here at the AWS website. Once you've completed that sign-up process, make sure that you take note of the email address and password that you used to sign up. Once that's done, you can go to My Account and select the AWS Management Console, and then you can log in using the email address and password that you used to create your account. Once you do that, you'll be in the AWS Management Console.
Okay, once we've logged in we'll be at the AWS Management Console. From there we can go to Services and try to find the S3 service; it will be in the Storage category. I actually don't like to use these categories, because they change quite a bit, and when you're looking for something it might have been moved into a different category, which makes life a little bit difficult. The easiest way is to select A to Z up in the top right-hand corner here, and then if you're looking for S3, just go to S3 and you'll find it there.

Once we've got into the S3 console, the first thing we need to do is create a bucket. Now, I've already created quite a number of buckets here for different labs and whatever else I've done in the past, but yours will obviously be empty, because you've never created a bucket and it's a brand new account, so your screen here might look a little bit different to mine. There will still be a Create Bucket button there. We'll click on that, and we need to give our bucket a name. I'm just going to put in anything there, a whole heap of rubbish, but you can call it whatever you want. Now, a bucket name needs to be unique across AWS, so if someone else has already used that name, you can't use it as well. It's a bit like domain names: once you've got it, no one else can use it after that.
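Besides being globally unique, bucket names also have to follow S3's naming rules: roughly, 3 to 63 characters of lowercase letters, digits, hyphens and dots, starting and ending with a letter or digit, and not shaped like an IP address. A small sketch of a checker (this covers the common cases, not every edge case in the S3 documentation):

```python
import re

# Common S3 bucket-naming rules (not exhaustive): 3-63 characters,
# lowercase letters, digits, hyphens and dots; must start and end with
# a letter or digit; names that look like IP addresses are rejected.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
_IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def looks_like_valid_bucket_name(name: str) -> bool:
    return bool(_BUCKET_RE.match(name)) and not _IP_RE.match(name)

print(looks_like_valid_bucket_name("backspace-lab"))  # True
print(looks_like_valid_bucket_name("My_Bucket"))      # False: uppercase and underscore
```

Even a name that passes these checks can still be rejected if someone else already owns it, since uniqueness is global.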
So if, for example, I typed backspace-lab or something, no doubt that would already be gone. I'll leave mine as it is, just a whole scramble of letters, and we'll click Next. We're not going to worry about versioning or anything like that, so just click Next. It's going to be a private bucket, so only I will have access to it. We're not going to make it public, and we're not going to change anything here, so click Next again. That's all we do there: it's got our name and region. We're going to be in the US East region, and for all of the labs, make sure that you are in the US East region. It's the largest region and has the most services, and when new services come out they normally appear first in US East. It's also normally the cheapest region to use. So that all looks okay to me; we'll just click Create Bucket.

If I scroll down, I should be able to find that bucket name. There we go, just that scramble of letters, and there will be a link to the bucket, so we'll click on that. There we can see our bucket is empty. A bucket is simply a repository to put objects in: it could be files, videos, or even a whole directory. So what we'll do now is upload a folder to this bucket. We click Upload. Now, we can click Add Files here, but I don't like to use that.
The easiest way is to drag and drop. If you're bringing in a folder, for example, it's much easier to grab the whole folder and just drop it on the form. So that's what I'm going to do now: we drop that whole folder onto the form and click Next. It's only going to be accessible to us, so it's going to be private; we'll leave that as it is. Then the storage class: we've got a number of different storage classes, depending on what availability we need, how long we're going to be keeping the data, how quickly we need access to it, all that sort of thing. We're just going to use the Standard storage class. We'll click Next, that's just a review screen, and we'll click Upload. There you can see, down the bottom, it's starting to upload those files, the whole folder with all the files inside it.

Now that it has uploaded, we can select the folder, open it up, and download a file to get it back again. So let's do that. We just click on the link for this folder to open it up, and I'm going to download these lab notes. I click on that link for the lab notes, then click Download, and there we go: it's downloading that PDF.
Now, I just want to talk about this screen, because a lot of people have a bit of trouble with it. Instead of clicking Download, they scroll down here and click on the object URL. If we click on that, and I'll do that now, this is what you get: access denied. What's happening is that you're trying to access the object directly through your browser, and it's a private object, so you can't access it. If you want a file to be accessible like that, you need to make it public, and you also need to enable website hosting in Amazon S3. So we'll just get out of that. If you get that screen, that's what's happened: you clicked the object URL instead of clicking the Download link here.

So that brings us to an end now. At the end of all of these labs I like to clean everything up and delete everything that we've done, simply because we don't want to get billed for it. If you're operating on a new account you'll be on the free tier, so all of the things we're doing here today will be free, but we still need to make sure that we clean up afterwards. So I'm just going to jump back, click the link up the top here to that bucket, select that folder, go to Actions, scroll down until I find Delete, and delete it. Now our bucket is empty.
So, one more thing we can do, if we really want to clean this up, is go back into the S3 management console by clicking on Amazon S3 up at the top left-hand side, scroll down to find that bucket, and there it is, and then we can just click Delete to delete the bucket. We need to put in the name of the bucket and confirm to delete it. There we go: if we have a look for that bucket, it's no longer there.

So that brings us to an end. Coming up next, we're going to be looking into some more of the services of AWS, in particular the database services. We're going to be creating a MySQL database and connecting to that database, so there's some pretty cool stuff coming up, and I look forward to seeing you in that one.