WEBVTT

00:07.400 --> 00:12.530
Data poisoning is an attack strategy in which the training data used to build machine

00:12.530 --> 00:16.520
learning or artificial intelligence is manipulated or poisoned.

00:16.550 --> 00:23.570
In 2016, Microsoft came out with a new AI bot for Twitter and in less than 24 hours, because of human

00:23.570 --> 00:30.710
interaction and manipulation, the AI bot became racist, sexist and hateful, not to mention spiteful,

00:30.710 --> 00:35.510
and Microsoft was forced to remove the AI from Twitter itself.

00:35.540 --> 00:42.890
Not only did it point out the complexities of the human mind, but it also pointed out how we can manipulate

00:42.920 --> 00:47.390
AI through the way we interact with its machine learning algorithm.

00:47.390 --> 00:50.570
When we talk about data poisoning, we're looking at data quality.

00:50.600 --> 00:52.580
How good is the quality of data?

00:52.610 --> 00:55.700
You have to remember that data is only as good as what we put in.

00:55.730 --> 00:58.400
If we put garbage in, we're going to get garbage out.

00:58.430 --> 00:59.540
It's like anything else.

00:59.540 --> 01:04.580
If you train something to act a certain way, and the quality of that training is poor,

01:04.610 --> 01:10.460
Don't expect the data that's feeding that program to be excellent by nature.

01:10.490 --> 01:13.490
Poor data is going to produce a poor program.

01:13.520 --> 01:18.860
We also need to institute data monitoring, because along with excellent data, we also need to ensure

01:18.860 --> 01:22.280
that that data is being received and written correctly.

01:22.280 --> 01:26.510
And so we want to monitor that data and how it interacts with the systems in place.

01:26.540 --> 01:31.790
Just as garbage in gives us garbage out, if I put in excellent data, I want to make sure that

01:31.790 --> 01:37.910
the data that's going through, being rewritten and utilized by the program, is not only of high quality,

01:37.910 --> 01:41.480
but it's maintaining that high quality standard as we move forward.

01:41.690 --> 01:44.510
We want to use outlier detection methodologies.

01:44.510 --> 01:48.830
How is the detection of that data being utilized within our processes?

01:48.830 --> 01:51.050
We don't want to just depend on one functionality.

01:51.080 --> 01:52.790
We want a redundant functionality.

01:52.820 --> 01:56.600
We want multiple systems to detect if something is going astray.
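The redundant outlier detection described here can be sketched in Python. This is a minimal illustration, not the lecture's method; the thresholds and the sample readings are assumptions:

```python
import statistics

def zscore_outliers(values, threshold=2.5):
    """Flag points more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

def iqr_outliers(values, k=1.5):
    """Redundant second detector: flag points far outside the quartile range."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return [v for v in values if v < q1 - k * iqr or v > q3 + k * iqr]

# A poisoned reading hiding among otherwise normal sensor values
readings = [10, 11, 9, 10, 12, 11, 10, 9, 250]
suspect = set(zscore_outliers(readings)) | set(iqr_outliers(readings))
```

Running two independent detectors means a value that slips past one check can still be caught by the other, which is the redundancy the lecture calls for.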

01:56.600 --> 02:00.260
We want to validate the model that we're using, just like statistics.

02:00.260 --> 02:05.910
If I provide statistical analysis to something, and those statistics are leaning left or right or sideways

02:05.910 --> 02:07.260
in any way, shape or form.

02:07.290 --> 02:12.840
Don't be surprised when the model that we're utilizing leans left, right or sideways as well.

02:12.840 --> 02:15.120
We want to verify the data processing.

02:15.120 --> 02:17.160
Is it being processed appropriately?

02:17.190 --> 02:23.550
We often see within data processing standards that when we utilize data and we're expecting an output

02:23.550 --> 02:28.620
for a specific scenario, we can't utilize that same data processing scenario.

02:28.650 --> 02:33.270
We need to rewrite how that data is processed for the specific output that we're expecting.

02:33.270 --> 02:39.090
We want our access controls to be stringent yet usable by the average employee.

02:39.120 --> 02:45.270
If I'm looking at physicists and I'm providing access to physicists for the data that they're requesting,

02:45.270 --> 02:49.290
I want to make sure that the generator down the street doesn't have access to the data as well.

02:49.320 --> 02:51.510
Finally, we want to patch our systems.

02:51.540 --> 02:58.680
Believe it or not, a failure to patch could cause permanent damage

02:58.680 --> 02:59.760
within our systems.

02:59.760 --> 03:01.620
If a vulnerability exists.

03:01.650 --> 03:07.560
If I fail to patch a system and it's processing data, even if the data is great and it's excellent,

03:07.560 --> 03:09.330
and we're using it just the way we want.

03:09.360 --> 03:15.000
If somebody takes advantage of a vulnerability because we didn't patch the system correctly to fix a

03:15.000 --> 03:17.430
flaw or a fault within the configuration.

03:17.460 --> 03:20.310
Don't expect the data to be great when it comes out of the end.

03:22.170 --> 03:24.870
We want to talk about broken access controls as well.

03:24.870 --> 03:29.040
We want to prevent unauthorized access to sensitive data and functionality.

03:29.070 --> 03:31.950
We can implement this in a variety of different ways.

03:31.950 --> 03:38.670
We can use role-based or rule-based access control, where we're using different rules or scenarios to limit

03:38.670 --> 03:41.130
the access of our different systems.

03:41.130 --> 03:45.000
We can do attribute-based access control, or ABAC.

03:45.030 --> 03:49.350
This is where we take different attributes of the user to identify whether or not they're able to have

03:49.350 --> 03:50.040
access.
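As a rough illustration of the role-based and attribute-based checks just described, here is a sketch; every role name, attribute, and resource below is hypothetical:

```python
# Hypothetical role-to-resource rules (role/rule-based style)
ROLE_PERMISSIONS = {
    "physicist": {"experiment_data"},
    "hr": {"payroll"},
}

def rbac_allows(role, resource):
    """Role-based check: the role alone decides access."""
    return resource in ROLE_PERMISSIONS.get(role, set())

def abac_allows(user, resource):
    """Attribute-based check: decide from the user's attributes instead."""
    if resource == "experiment_data":
        return user.get("department") == "physics" and user.get("clearance", 0) >= 2
    return False

alice = {"department": "physics", "clearance": 3}   # should get access
bob = {"department": "facilities", "clearance": 1}  # should be denied
```

The same request can pass a role check and still fail an attribute check, which is why ABAC gives finer-grained control.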

03:50.070 --> 03:53.460
Then we're going to use least privilege, which we discussed in a previous slide.

03:53.460 --> 03:58.590
We want to test and validate to make sure that access control is being utilized properly.

03:58.620 --> 04:05.240
Even if we have specific rule-based or attribute-based access controls in place.

04:05.240 --> 04:08.870
If we don't test it, we're not really sure if it works the way it's supposed to.

04:08.900 --> 04:11.090
We want to make sure error handling is correct.

04:11.090 --> 04:16.340
If I produce an error and a malicious actor is able to read that error, and then that error

04:16.340 --> 04:21.830
points to a specific fault or vulnerability, then a malicious actor could take advantage of

04:21.830 --> 04:24.860
that vulnerability because we didn't handle the error properly.
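One common way to handle this, sketched below: log the detailed error server-side and return only a generic message outward. The function name and messages are illustrative, not from the lecture:

```python
import logging

log = logging.getLogger("app")

def handle_db_error(exc):
    """Keep the detail in server-side logs; return only a generic message."""
    log.error("database failure: %s", exc)  # full detail never leaves the server
    return "An internal error occurred. Please try again later."

# A raw message like this would hand an attacker the service account name
message = handle_db_error(Exception("FATAL: auth failed for user 'app_svc'"))
```

The caller sees nothing about the database, the account, or the failure mode, so the error can no longer be used as a map to a vulnerability.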

04:24.890 --> 04:26.960
We want to implement proper session management.

04:26.960 --> 04:34.070
It's not uncommon for malicious actors to piggyback off a specific session, and then repeat that session

04:34.070 --> 04:37.100
via a replay attack and gain access to our systems.

04:37.100 --> 04:42.980
By using proper session management, we can dictate, hey, this access for the specific length of time

04:42.980 --> 04:44.840
was for this session alone.

04:44.840 --> 04:50.030
We're not going to allow you to do a replay attack and gain access to this specific information, again

04:50.030 --> 04:51.770
using a different session.
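A minimal sketch of session tokens that resist replay, assuming an HMAC-signed token carrying an expiry and a one-time nonce; the token format here is an assumption for illustration, not a production design:

```python
import hashlib
import hmac
import secrets
import time

SECRET = secrets.token_bytes(32)  # hypothetical server-side signing key
_seen_nonces = set()              # nonces already redeemed, to block replays

def issue_token(user, ttl=300):
    """Issue a signed token bound to a user, an expiry time, and a nonce."""
    payload = f"{user}|{int(time.time()) + ttl}|{secrets.token_hex(8)}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def validate_token(token):
    """Reject tampered, expired, or previously seen (replayed) tokens."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: token was tampered with
    _user, expiry, nonce = payload.split("|")
    if time.time() > int(expiry):
        return False  # session expired
    if nonce in _seen_nonces:
        return False  # replay attempt: this session token was already used
    _seen_nonces.add(nonce)
    return True
```

Presenting the same token a second time fails the nonce check, which is exactly the replay scenario the lecture warns about.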

04:52.040 --> 04:57.980
Again, we want to patch, and we want to provide web application firewalls, which are specific to input validation

04:57.980 --> 04:58.820
or web-based attacks.

04:58.820 --> 05:00.290
And then we want to address BOLA.

05:00.290 --> 05:07.910
And of course BFLA. When we talk about those access controls, we want to use business object level authorization,

05:07.910 --> 05:12.650
or BOLA, to restrict access to specific business objects or resources.

05:12.650 --> 05:18.050
We don't want to just give any business object or business department within our industry access to

05:18.080 --> 05:21.920
the same information that we would give operations or finance or HR.

05:21.950 --> 05:26.030
We want those to be specific to the level of authorization that they're allowed.
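Object-level authorization can be sketched as a per-record ownership check; the document store and department names below are made up for illustration:

```python
# Hypothetical record store: each business object has an owning department
DOCUMENTS = {
    101: {"owner": "finance", "body": "Q3 forecast"},
    102: {"owner": "hr", "body": "salary bands"},
}

def fetch_document(doc_id, department):
    """Object-level check: being logged in is not enough; the caller's
    department must own this specific record."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != department:
        return None  # deny: wrong department for this object
    return doc["body"]
```

The check runs on every individual object, so finance can read its own records but cannot enumerate HR's simply by guessing IDs.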

05:26.030 --> 05:29.180
And finally, business flow level authorization.

05:29.210 --> 05:34.700
We want to implement BFLA to ensure users can only perform actions appropriate to their current role.

05:34.700 --> 05:40.220
If I have an IT department that's filled with 30 different employees and one IT person is delegated

05:40.220 --> 05:45.110
to specific servers, another one is delegated to operating systems and one is delegated to help desk.

05:45.110 --> 05:46.610
I don't want those interchangeable.

05:46.610 --> 05:51.710
I obviously don't want my help desk guy having access to my server information, and vice versa.

05:51.710 --> 05:56.150
If there's no need for it, there's really no reason to have full access, even if they're working in

05:56.150 --> 05:57.080
the same department.

05:57.080 --> 06:03.750
And I need to limit those authorizations appropriately. Cryptographic failures denote a vulnerability

06:03.750 --> 06:08.880
or weakness within the system's cryptographic algorithms, protocols or key management systems.

06:08.880 --> 06:13.860
That's a lot to take into play, but you need to remember that cryptography is the combination of the

06:13.860 --> 06:20.340
algorithm and the key to denote a specific encryption over a specific protocol.

06:20.340 --> 06:26.880
If I have a failure or a vulnerability within the system, then I have a cryptographic failure.

06:26.880 --> 06:28.890
We start with a secure algorithm.

06:28.890 --> 06:37.170
Am I using an algorithm that is performing the encryption in such a way that it's not easily reversed?

06:37.200 --> 06:41.250
For instance, DES is an old encryption algorithm that we no longer use.

06:41.250 --> 06:47.460
But if you are using DES, even with a very strong key, then that algorithm is still weak and it

06:47.460 --> 06:49.170
provides a cryptographic failure.

06:49.170 --> 06:54.690
If we're using key management within the system, and I have a strong cryptographic algorithm such as

06:54.720 --> 07:00.480
AES using 128-bit keys, then I need to make sure that my key management is strong as well.

07:00.480 --> 07:06.690
I don't want a key that's like one, two, three, four while expecting my secure algorithm to keep my

07:06.690 --> 07:08.430
encrypted traffic secure.
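To see why a key like one-two-three-four undermines even a strong algorithm, compare the search spaces. This sketch only generates a key; the cipher itself should come from a vetted library, not hand-rolled code:

```python
import secrets

# A human-chosen key of 4 decimal digits: only 10,000 possibilities to try
weak_key_space = 10 ** 4

# A 128-bit key, as an algorithm like AES-128 expects, drawn from the OS CSPRNG
strong_key = secrets.token_bytes(16)            # 16 bytes = 128 bits
strong_key_space = 2 ** (8 * len(strong_key))   # 2**128 possible keys
```

A four-digit key falls to brute force instantly no matter how strong the algorithm is, while 2^128 possibilities put exhaustive search out of reach.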

07:08.460 --> 07:14.550
Obviously, having a strong key and strong key management within the overall cryptographic function

07:14.550 --> 07:15.240
is important.

07:15.240 --> 07:18.750
But it's not just about keeping my key at a respectable length.

07:18.750 --> 07:24.420
I also need to be aware that key management institutes a functionality where the key is being

07:24.420 --> 07:29.910
stored properly in a secure location that doesn't give everybody access to it.

07:29.910 --> 07:32.940
And so I need to have proper key management when it comes into play.

07:32.940 --> 07:34.710
I need to have password storage.

07:34.710 --> 07:39.630
Is the storage of the passwords being provided with the proper level of security?

07:39.660 --> 07:40.890
Am I using salt?

07:40.890 --> 07:42.180
Am I using a pepper?

07:42.180 --> 07:47.640
Am I using hashing or am I storing those passwords in the clear on a random server out in the middle

07:47.640 --> 07:49.980
of nowhere that anybody could hack into?
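A hedged sketch of salted password storage using Python's standard-library scrypt; the cost parameters shown are common illustrative choices, not a mandated standard:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store a per-user salt plus a slow hash -- never the clear-text password."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
```

Even if the database leaks, an attacker gets salted, memory-hard hashes rather than passwords in the clear.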

07:50.010 --> 07:56.400
I want to provide a secure random number generator. Before, we saw within several encryption algorithms

07:56.400 --> 08:02.460
or several multi-factor authentication systems, that the random number generator wasn't truly random,

08:02.460 --> 08:06.770
and you could identify what the next number in the series was going to be.

08:06.770 --> 08:12.410
This provided a unique flaw or vulnerability within the MFA architecture, so we need to make sure that

08:12.410 --> 08:18.020
our secure random number generator is truly random and it is secure, meaning that somebody can't gain

08:18.020 --> 08:23.750
access to it and then pinpoint the next random number to be generated. We need to provide certificate

08:23.750 --> 08:25.430
pinning and digital signatures.
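The difference between a predictable generator and a secure one can be shown directly: a seeded PRNG is reproducible by anyone who learns the seed, while the OS CSPRNG is not. The six-digit codes below stand in for MFA codes and are purely illustrative:

```python
import random
import secrets

# Predictable: anyone who learns the seed reproduces every "random" code
server = random.Random(42)
attacker = random.Random(42)
server_codes = [server.randint(0, 999999) for _ in range(3)]
attacker_guesses = [attacker.randint(0, 999999) for _ in range(3)]

# Unpredictable: secrets draws from the OS CSPRNG, appropriate for MFA codes
secure_code = secrets.randbelow(10 ** 6)
```

With the same seed, the attacker's guesses match the server's codes exactly, which is the MFA flaw the lecture describes.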

08:25.430 --> 08:27.380
Where is the authorization coming from?

08:27.380 --> 08:30.410
Are we authenticating the traffic that's going back and forth?

08:30.410 --> 08:34.010
Both of these constitute a specific security vulnerability.

08:34.010 --> 08:39.590
If the digital signature or the certificate pinning is improperly implemented or not kept secure, we need

08:39.590 --> 08:41.390
to provide an application firewall.

08:41.390 --> 08:47.090
It's not enough just to provide error handling and to make sure that our application is secure.

08:47.090 --> 08:52.880
If we can do an application firewall, then we control the access within our systems, and we can provide

08:52.880 --> 08:58.430
an extra level of security to ensure that our overall system does not suffer from a cryptographic failure.

08:58.430 --> 09:01.160
And just like patching before, it never changes.

09:01.160 --> 09:06.100
We're always going to keep our systems up to date with the newest patches to secure those known vulnerabilities,

09:06.100 --> 09:08.200
those known flaws within our systems.

09:10.090 --> 09:12.190
Finally, we have insecure design.

09:12.190 --> 09:15.370
Within insecure design, we have secure development lifecycle.

09:15.370 --> 09:19.960
We talked a little bit about development of software and how it needs to be secured from the

09:19.960 --> 09:22.510
beginning of the process all the way to the end.

09:22.540 --> 09:28.930
If you take a known piece of software and you try to put in security in the middle of the project,

09:28.930 --> 09:33.730
or even after the project is already done, you're going to suffer from vulnerabilities because there's

09:33.730 --> 09:36.400
no way you can 100% secure that software.

09:36.400 --> 09:38.110
It's also going to cost you more.

09:38.110 --> 09:43.510
When we talk about the secure development life cycle, security needs to be involved from

09:43.510 --> 09:49.150
the get go, we need to provide the secure coding practices and the secure design well into the beginning

09:49.150 --> 09:49.690
stages.

09:49.690 --> 09:53.440
Before we even start coding, we need to do a component analysis.

09:53.440 --> 09:56.410
How is that design being implemented as a whole?

09:56.410 --> 10:01.420
It's not enough just to put our network together and go, oh, I need another router to accommodate extra

10:01.450 --> 10:02.110
users.

10:02.110 --> 10:04.930
That's going to provide us with an insecure design.

10:04.930 --> 10:06.450
We need to establish that.

10:06.480 --> 10:13.830
Hey, I'm setting up this network now, and I'm expecting to have to expand or to add on to this network

10:13.830 --> 10:14.610
in the future.

10:14.610 --> 10:16.320
Let's design it from the get-go for expansion.

10:16.320 --> 10:22.140
We need dynamic application security testing when we're designing, or ensuring

10:22.140 --> 10:26.310
that we have applications being dynamically tested throughout the process.

10:26.310 --> 10:31.680
If I provide a new application or I pick up an application from a third party vendor, am I testing

10:31.680 --> 10:32.220
it properly?

10:32.220 --> 10:37.380
Am I going through and dynamically testing it to make sure there are no vulnerabilities or flaws, not

10:37.380 --> 10:41.850
just within the software itself, but how it interacts with the software that I'm already using on my

10:41.850 --> 10:44.940
systems and the operating system that it's supposed to run on.

10:44.940 --> 10:47.970
We need to do runtime application self-protection, or RASP.

10:47.970 --> 10:51.180
Again, we need to do static application security testing.

10:51.180 --> 10:57.450
I need to go code by code, line by line, to verify that there's no indication of vulnerabilities

10:57.450 --> 11:01.500
or flaws within that software that I'm getting ready to put on my systems.

11:01.500 --> 11:06.200
I need to maintain access controls and error handling. Configuration management is where we're

11:06.200 --> 11:07.550
configuring the software.

11:07.580 --> 11:11.720
We're configuring the design of the hardware to be robust and secure in nature.

11:11.720 --> 11:16.310
If it's interacting with other components on our network, how am I configuring it to maintain that

11:16.310 --> 11:17.210
security level?

11:17.210 --> 11:22.730
If I have a SIEM that's operating with a firewall and the firewall is feeding logs to the SIEM, am I

11:22.730 --> 11:28.430
doing it in such a secure manner to where those logs are being properly secured from point A, i.e.

11:28.460 --> 11:30.530
the firewall to the SIEM itself?

11:30.560 --> 11:32.000
Logging and monitoring.

11:32.030 --> 11:36.500
Again, if I'm going to provide logs and I need to do monitoring, how am I pulling that off?

11:36.500 --> 11:40.850
Am I using a SIEM? If I'm not using a SIEM, how am I maintaining those logs?

11:40.850 --> 11:46.430
How am I continuing to monitor those logs to ensure that if an attacker comes in, I can use detective

11:46.430 --> 11:51.770
methodologies to ensure that my system as a whole is not going to be taken control of for the next

11:51.770 --> 11:56.210
three years, because we're able to monitor it and detect those failings within our system at the get

11:56.240 --> 11:56.720
go.
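The detective monitoring described above can be sketched as a toy control over log lines; the log format, addresses, and threshold are assumptions for illustration, not from the lecture:

```python
from collections import Counter

# Hypothetical log lines, as a SIEM might receive them from a firewall
log_lines = [
    "2024-05-01T10:00:01 login_failed user=admin src=203.0.113.7",
    "2024-05-01T10:00:02 login_failed user=admin src=203.0.113.7",
    "2024-05-01T10:00:03 login_failed user=admin src=203.0.113.7",
    "2024-05-01T10:00:04 login_failed user=admin src=203.0.113.7",
    "2024-05-01T10:05:00 login_ok user=alice src=198.51.100.2",
]

def brute_force_sources(lines, threshold=3):
    """Detective control: flag any source with repeated failed logins."""
    fails = Counter(line.split("src=")[1] for line in lines if "login_failed" in line)
    return {src for src, count in fails.items() if count >= threshold}

alerts = brute_force_sources(log_lines)
```

Continuously applying even a simple rule like this surfaces an attack in minutes instead of letting it sit unnoticed for years.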

11:56.720 --> 11:58.760
And finally, application firewalls.

11:58.760 --> 12:04.100
How am I securing my applications properly, utilizing a firewall as a defense in depth measure?

12:04.100 --> 12:06.260
When we talk about security misconfigurations,

12:06.260 --> 12:09.260
we need to enforce our security policy.

12:09.260 --> 12:13.970
When we talk about security misconfigurations, nothing comes to my mind more than

12:13.970 --> 12:18.230
the recent troubles that we've seen with T-Mobile over the last 2 or 3 years.

12:18.260 --> 12:24.080
T-Mobile suffered from using hard-coded credentials or default credentials in their routers, and has

12:24.080 --> 12:27.170
suffered data breaches because of those credentials.

12:27.170 --> 12:32.510
Had their employees followed enforced security policies to change the default credentials from the get

12:32.540 --> 12:35.120
go, those data breaches never would have occurred.

12:35.120 --> 12:40.100
It's so imperative that we follow our own security policies, because failures to follow them make

12:40.100 --> 12:45.350
up nearly all of the data breaches that we see today.

12:45.380 --> 12:48.830
Now, I know some people are gonna be like, wait a second, did you just say almost all?

12:48.830 --> 12:49.670
Yes, I did.

12:49.670 --> 12:56.540
Human error makes up 93% of all security failings across the board, and that's a failure of our security

12:56.540 --> 12:57.170
policies.

12:57.170 --> 13:00.320
How are we enacting those security policies within our network?

13:00.350 --> 13:03.770
It's not always the IT or the cyber guy that's making that mistake.

13:03.770 --> 13:08.530
Sometimes it's somebody as unassuming as the receptionist at the front desk, clicking on a phishing email

13:08.560 --> 13:10.480
she should have never clicked on in the first place.

13:10.510 --> 13:15.580
Enforcing security policies is the best thing that you can do when it comes to configuration of our

13:15.580 --> 13:19.780
systems, the standard operating procedures or SOPs that we utilize.

13:19.810 --> 13:21.130
How are we operating?

13:21.130 --> 13:26.200
When we talked about the default credentials, I guarantee you there was an SOP in place at T-Mobile

13:26.230 --> 13:31.060
for that technician to follow when they were reconfiguring or replacing that router.

13:31.090 --> 13:33.640
Otherwise the default credentials would not have been in place.

13:33.640 --> 13:36.130
And finally, are we following proper change management?

13:36.130 --> 13:41.350
Am I going through the motions of change management, and am I prepping for it, or am I just going to

13:41.380 --> 13:44.200
throw things on my network willy nilly whenever I feel like it?

13:44.230 --> 13:48.760
Change management is there for a reason, and we need to follow the proper outlines and procedures associated

13:48.760 --> 13:51.730
with change management to make sure that we have the authorization.

13:51.730 --> 13:55.360
We have the proper process, we have the rollback if something goes wrong.

13:55.390 --> 13:58.990
Change management is so important for us as IT and cyber professionals.

13:58.990 --> 14:01.390
And yet it is often ignored.

14:01.390 --> 14:06.370
And that introduces that human error which, again, accounts for most of the vulnerabilities that we see within

14:06.370 --> 14:07.480
most networks.
