EC Council CEH 312-50 – Attacking Web Technologies Part 1
July 2, 2023

1. Introduction to Web Applications

In this section, we'll talk about common security threats and the SANS seven management errors. We'll discuss the progression of a professional hacker, the anatomy of a web application attack, and web application attack techniques. We'll cover the components of a generic web application system, URL mappings to the web application system, and pen testing tools and methodologies for web server assessment. We'll also discuss understanding web application security, common web application security vulnerabilities, authentication and session management, and password guessing and cracking tools as well.

2. Common Security Threats, Need for Monitoring, SANS Seven Management Errors

Now in this section we're going to talk about different web-based attacks. First off, let's talk about the common security threats. At the top of the list, as you can see, is misconfiguration. Somebody dropped the ball on configuring something, or did not set the access control list; whatever the case may be, it's misconfigured, and that's typically at the top of the list. Then there are vulnerabilities in server-side services, client-side risks because of something running on the client, vulnerabilities in web-based applications, and of course denial-of-service attacks, which we simply can't get around. If someone really wants to take you down, they can do so.

There are some things we can do to attempt to stop it, like buying more bandwidth from one of these providers and hoping no one will be able to take us down, but it is still very taxing and very difficult to deal with. Now, there is a real need for monitoring. Web services such as web shops and websites consist of numerous parts that make up the user experience. If a problem occurs in one of the components, the customer might not be able to continue the order process, use the web application, or read information on your web page, whatever the case may be. So as a service provider you risk losing goodwill, leads, and even revenue.

The worst part is that you might not even be aware of these problems if your site is up but performing badly: slow load times, crashing scripts, or failing third-party servers may or may not be brought to your attention. So in reality the problem is typically much greater than industry watchdogs realize. A lot of U.S. businesses don't even monitor online activity at the web application level. In reality, you need to be sure that your web server's performance is what you expect it to be, because when we build a website we typically build it for the type of traffic we expect, as the sketch below illustrates.
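To make that concrete, here's a minimal monitoring sketch in Python using the third-party requests library. The URL and the two-second threshold are made-up examples; a real monitor would run on a schedule and alert someone rather than print.

```python
# Minimal availability and response-time check for a web application.
import time
import requests

URL = "https://shop.example.com/"   # hypothetical site to watch
SLOW_THRESHOLD = 2.0                # seconds before we call the site "slow"

def check(url):
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        print(f"DOWN: {url} unreachable ({exc})")
        return
    elapsed = time.monotonic() - start
    if resp.status_code != 200:
        print(f"ERROR: {url} returned HTTP {resp.status_code}")
    elif elapsed > SLOW_THRESHOLD:
        print(f"SLOW: {url} took {elapsed:.2f}s")
    else:
        print(f"OK: {url} answered in {elapsed:.2f}s")

check(URL)
```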

This reminds me of something that happened when Kentucky Fried Chicken issued a coupon on the Oprah Winfrey Show for their new barbecue-based chicken. They didn't understand the power of Oprah, and unfortunately the demand crashed a lot of their servers because they weren't expecting it. Consequently, they weren't monitoring the way they should have been. A lot of times this puts a company into a reactive security posture, where nothing actually gets fixed until a situation occurs. So reactive security could mean sacrificing sensitive data as the catalyst for policy change. Or we get our website defaced, which doesn't happen as much anymore, but we get the website defaced.

That's how we know something went wrong. Now, I like to read these seven management errors, and I want you to think to yourself, if you work in the industry right now, whether any of these sound familiar. First: pretend the problem will simply go away. Second: authorize reactive, short-term fixes, so the problems reemerge rapidly, something we call the band-aid fix. This is what happens when you tell management, hey, we need to fix this, and that's all they do. Third: fail to realize how much money their information and organizational reputations are actually worth until they're gone. Fourth: rely primarily on the firewall and IDS. I had a manager that did this. We bought that expensive firewall.

"That's what we bought the firewall for; you got the expensive firewall." They think the firewall is the be-all, end-all device for security. Unfortunately it's not. It's a part of security, a very important part, but it is not the be-all, end-all device. So they rely primarily on that firewall and IDS and get a false sense of security. Fifth: fail to deal with the operational aspects of security; they make a couple of bug fixes and then don't follow through to make sure the problems stay fixed. Sixth: fail to understand the relationship of poor information security to the business problem. They understand physical security, of course, but they don't see the consequences of poor information security.

Their information and reputation are worth a lot more than they realize, and they don't understand that until it's gone. The seventh, the one that really hits home with me from when I worked at the mortgage company, is assigning untrained people to maintain security without even providing the training or the time to make it possible to do the job. So you have to ask yourself, do any of these sound familiar where you work? Now let's take a look at the anatomy of a web application attack. Step one starts with the scan: the hacker runs a port scan to detect any open HTTP and HTTPS ports for each server, grabbing a banner and then retrieving the default page from each open port.

Step two: information gathering. The attacker tries to grab as much information as possible, identifying the type of server running on each port, and each page is parsed to find normal links, in other words HTML anchors. Then the attacker analyzes the found pages, looking for any comments (and I can't tell you the number of times I found comments that got me in) and other possibly useful bits of data. These could refer to files and directories that were never intended to be public. Step three: testing. The hacker goes through a series of tests against each of the application scripts or dynamic functions of the application.

He's looking for development errors that enable him to gain further access into the application, or for debugging code that was left turned on. Then he plans the attack. At this point the hacker has identified every bit of useful information that can be gathered by passive, in other words undetectable, means. He then selects and deploys the attacks, which center on the information gathered during that passive process. Finally, he launches the attack. After all these procedures, the hacker engages in open warfare by attacking each web application that he identified as vulnerable during the initial review of the site. The results of the attack could be lost data, content manipulation, or even theft and loss of customers.
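As a minimal illustration of step one, here's a hedged sketch of a banner grab in Python against a single open HTTP port. The host target.example.com is hypothetical, and a real scan would iterate this over many ports and hosts.

```python
# Grab the response headers (the "banner") from an open HTTP port.
import socket

def grab_banner(host, port=80):
    with socket.create_connection((host, port), timeout=5) as sock:
        # A bare HEAD request is enough to make the server identify itself.
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        return sock.recv(2048).decode(errors="replace")

print(grab_banner("target.example.com"))  # hypothetical target
```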

3. Anatomy of a Web Attack, Web Attack Techniques, Typical Web App Components

Okay, so the next thing we want to talk about is web attack techniques, and I've listed four that are very common. Let's start with parameter manipulation. Parameter manipulation can be something as simple as an invalid value passed to the web application to coax the application into revealing some internal data by itself, or something as complex as passing a hidden SQL statement that can access useful data from a database. Parameter forcing is an attempt to exploit the programmer rather than the application itself, by attempting to discover debugging and testing flags.

Now, when these flags are present, they might be usable to enable special, normally hidden modes within the application. Then we have cookie tampering, which involves manipulating the contents of cookies passed between the user and the web application. If this information is unencrypted, tampering can result in the application permitting access to an otherwise unauthorized user. Lastly, we have the common file query, which involves looking for files that have been inadvertently left accessible by developers, administrators, or default application configurations. The result could be exposure of sensitive data that should otherwise have been removed from the application. The sketch below shows what the first two techniques, parameter manipulation and parameter forcing, look like on the wire.
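Here's a minimal sketch of those two techniques using Python's requests library. The endpoint, the id parameter, and the debug flag are all hypothetical, purely to show the shape of the requests.

```python
# Parameter manipulation and parameter forcing against a hypothetical endpoint.
import requests

BASE = "http://target.example.com/account"   # hypothetical application

# 1. Invalid value, hoping to coax an internal error message out of the app:
r1 = requests.get(BASE, params={"id": "-1"})

# 2. Hidden SQL piggybacked inside the parameter (a SQL injection probe):
r2 = requests.get(BASE, params={"id": "1042' OR '1'='1"})

# 3. Parameter forcing: guessing at debugging/testing flags left in the code:
r3 = requests.get(BASE, params={"id": "1042", "debug": "true"})

for r in (r1, r2, r3):
    # Anomalies in status code or response size hint that something gave way.
    print(r.status_code, len(r.text))
```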

The next thing we want to talk about is the typical web application system. We know we have a web client out here, typically known as a browser, and we make a request, generally via HTTP on port 80 or SSL on port 443, through some firewall somewhere, more than likely, to a web server; from the web server to a web app; and from that web app to a SQL database. It's also possible we may have something like a web application firewall in here somewhere as well. There are a couple of key components we want to talk through.

The client, typically known as the user agent or the web browser, is controlled by a user to operate the web application. The client's functionality can be expanded by installing plug-ins, add-ons, applets, and things of that nature. The firewall, either hardware or software, regulates communication between insecure networks, for example the Internet, and secure networks, in other words the corporate LAN. This communication is typically filtered by access rules, generally involving IP addresses and ports. Then we have something called a proxy, and a proxy is typically used to temporarily store web pages, as in a cache, for example.

However, proxies can also take on other functions, for example adapting content for users (in other words, customization) or user tracking. And finally, the web server: a web server is software that supports various web protocols like HTTP and HTTPS. It's important for us to know all of these various components and exactly how they fit together. Now let's discuss the URL mappings of the web application, so we get our nomenclature straight. While interacting with the application, a URL, which stands for Uniform Resource Locator, gets sent back and forth between the browser and the web server.

URLs typically have a format where we first identify the protocol. This could be HTTP or HTTPS, or FTP if we're contacting an FTP server. After the slashes we have the server, which is typically going to be in the form of a domain name. Next comes what is typically referred to as the catalog; some people refer to this as the directory or the folder. And then we have an application, some kind of web application that we're going to be asking the server to run. We'll generally pass various things to the web application, and we introduce them with the question mark character.

We give it a variable and then the value we assign to that variable, and if we have more than one parameter, we separate the parameters with the ampersand sign. The snippet below takes an example URL apart along exactly these lines.
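To pin the nomenclature down, here's a small Python snippet that decomposes a made-up URL into protocol, server, catalog/application, and parameters using only the standard library.

```python
# Decompose a URL into the pieces described above (the URL is made up).
from urllib.parse import urlparse, parse_qs

url = "https://www.example.com/catalog/app.php?user=joe&view=inbox"
parts = urlparse(url)

print(parts.scheme)           # protocol:  'https'
print(parts.netloc)           # server:    'www.example.com'
print(parts.path)             # catalog + application: '/catalog/app.php'
print(parse_qs(parts.query))  # parameters: {'user': ['joe'], 'view': ['inbox']}
```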

Then we have web application penetration methodologies, the steps we generally go through to look for things. We'll first start off documenting the application, in other words trying to build some kind of a sitemap or blueprint. Then we'll try to identify the characteristics of that application; in other words, we're going to try to fingerprint it. We'll then look for signature errors and response codes, look for various files that might be available, and try to enumerate the application. We'll look for things like forced browsing, hidden files, vulnerable CGI (which stands for Common Gateway Interface) scripts, and possibly even sample files that may be in there. And I'd like to add that we'll also look at any comments that might be in there as well. Then we'll attempt to take the client-side input and output data and manipulate it, and we'll typically do this with a web proxy. The forced-browsing sketch below gives the flavor of the enumeration step.
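As a taste of that enumeration step, here's a hedged forced-browsing sketch in Python. The target and the tiny inline wordlist are hypothetical; real tools use wordlists of thousands of common paths.

```python
# Probe for files and directories that were never meant to be public.
import requests

TARGET = "http://target.example.com"   # hypothetical target
CANDIDATES = ["admin/", "backup/", "test.php", "samples/", "cgi-bin/test-cgi"]

for path in CANDIDATES:
    url = f"{TARGET}/{path}"
    r = requests.get(url, timeout=5, allow_redirects=False)
    if r.status_code != 404:
        # Anything other than "not found" is worth a closer look.
        print(r.status_code, url)
```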

4. Logs, Canonicalization, and Other Attacks

Now, if you've seen something like what I'm pointing to here in your logs, you know the reason why. I often kid my classes and say there's this one kid in China that's constantly throwing things out at the whole Internet, hoping something lands up against a wall and sticks. In reality, what ends up happening is this phenomenon I like to call Internet background radiation. You'd easily be able to see this if you ever took a sniffer, attached it to your public IP address, and watched the amount of junk that is constantly being thrown against your particular interface. Keep in mind, we only have about 4 billion addresses, 2 to the 32nd power of possible IP addresses, in IP version 4.

So the probability is that somebody somewhere is trying to throw something up against our particular IP address to see if something sticks, and this is only going to get worse. Now, in reality, I predict it's going to get a lot better when we go to IP version 6, because we actually have 128 bits, a phenomenal amount of address space, so this will thin the scanning out a little bit. But the flip side of the coin is that it's going to be the Wild, Wild West all over again, because all the things we learned about version 4 took quite a number of years, and I'm sure there are going to be things found in version 6 as well. Let's talk real quickly about the concept of canonicalization. In computer science, canonicalization is abbreviated C14N, where the 14 represents the number of letters between the C and the N.

It's also sometimes called standardization or normalization (normalization in database terms means bringing data down to its lowest form). Canonicalization is a process for converting data that has more than one possible representation into a standard or normal form. This could be done to compare different representations for equivalence, to count the number of distinct structures, to improve the efficiency of various algorithms by eliminating repeated calculations, or to make it possible to impose some meaningful sorting order. It's typically used by our web search engines as well. The snippet below shows the idea in two everyday forms.
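Here's a small Python sketch of canonicalization: collapsing equivalent path encodings, and normalizing Unicode so visually identical strings compare equal. The paths are contrived examples.

```python
# Canonicalization: many representations, one normal form.
import posixpath
import unicodedata
from urllib.parse import unquote

# Three different spellings of the same resource all reduce to one form:
for p in ["/etc/./passwd", "/app/../etc/passwd", "/etc/%2e/passwd"]:
    print(posixpath.normpath(unquote(p)))   # each prints '/etc/passwd'

# Unicode: a composed 'e-acute' and 'e' plus a combining accent look
# identical but are different byte sequences until normalized.
a, b = "caf\u00e9", "cafe\u0301"
print(a == b)                                              # False
print(unicodedata.normalize("NFC", a) ==
      unicodedata.normalize("NFC", b))                     # True
```

This is also why security filters that compare raw strings get bypassed: the comparison has to happen after canonicalization, not before.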

The next thing I want to talk about are logic flaws, specifically logic flaws in a query string, and I like to illustrate this when I'm teaching a class. I'll stand in front of the class and step one foot forward: I would like to go into the mailbox for Sue. Now, having already stepped one foot forward, I do a lateral step, in other words a step to the side: now let's see if we can get into Joe's mailbox. Ideally, if the web application was developed correctly, it should stop us. But many times I've actually been able to get to another area after being authenticated to one area. This is generally a logic flaw in the application, and really the only way to thoroughly test for logic flaws is to do it manually. I've already told you my favorite tool for doing that, and yes, it's Burp Proxy.
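As a hedged sketch of what's missing when that lateral step succeeds, here's the kind of server-side check the mailbox example calls for. The Session class and load_mailbox function are hypothetical stand-ins for whatever the real application uses.

```python
# The logic flaw: authentication alone gets treated as authorization.
from dataclasses import dataclass

@dataclass
class Session:
    username: str
    is_authenticated: bool = True

def load_mailbox(user):
    return f"mailbox contents for {user}"   # stand-in for a real lookup

def get_mailbox(session, requested_user):
    if not session.is_authenticated:
        raise PermissionError("login required")
    # Without this second check, any logged-in user can sidestep into any
    # mailbox -- the flaw demonstrated by the Sue/Joe lateral step above.
    if session.username != requested_user:
        raise PermissionError("cannot read another user's mailbox")
    return load_mailbox(requested_user)

sue = Session("sue")
print(get_mailbox(sue, "sue"))          # allowed
try:
    print(get_mailbox(sue, "joe"))      # the lateral step
except PermissionError as exc:
    print("blocked:", exc)
```

Let's discuss one of the big issues with web servers, something called cross-site scripting.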

5. Web App Scanners, HTTrack, httprint, Proxies

Now, the next thing we want to talk about are the tools we'll typically use to do our attacks, or even vulnerability scanning, depending upon which side of the fence you're on. The one that I really like is a tool called Netsparker. I've been using this tool for a couple of years now, and I have to say it's one of the best out there, and very reasonably priced as well. Netsparker is a desktop application, available on Windows, and it's really easy to use. It uses advanced proof-based vulnerability scanning technology and has built-in penetration testing and reporting tools as well.

Now, Netsparker's unique proof-based scanning technology allows you to allocate more time to fixing the reported flaws. Netsparker automatically exploits the identified vulnerabilities in a read-only and safe way and produces a proof of exploitation, so you can immediately see the impact of the vulnerability and don't have to verify it manually. Historically, web application scanners have had a very high false positive rate, and what they're trying to do is bring that down somewhat. Netsparker's accurate scanning technology finds a lot more vulnerabilities than many of its competitors.

Its unique vulnerability scanning technology has much better coverage and finds more vulnerabilities than other scanners as well. The thing I really like about Netsparker is that it allows you to automate things. We've moved toward a rapid deployment, rapid testing mechanism, and what I try to preach to my classes is that you should set up Netsparker to do a unit test, or perhaps an entire application test, on every single compile and every addition to the source code repository. Let me explain what I mean here. Netsparker has the capability to do what's called incremental scanning.

It takes where the application was before, looks at any differences, and scans just those differences. This allows you to scan just the unit you changed, making it very quick, probably less than 10 to 15 seconds. This way you give exact feedback to the developer on things that he or she may be doing that might not be such a good idea. Otherwise, the typical scenario is that a couple of months down the line, somebody on the quality assurance team runs the scan and sends a big list of things back to the developer, and the developer thinks,

"I can't remember what I had for lunch yesterday, much less what I did three months ago when I was coding." So incremental scanning gives positive reinforcement to the developer, and I really like that. The next tool, HTTrack, is actually a free tool released under the GPL, and it's really easy to use. It's what's referred to as an offline browsing utility. The idea is for you to be able to download an entire application from the Internet to a local directory, recursively building all the directories and getting all the HTML, images, and that kind of information. It arranges the original site's relative link structure, so you simply open a page of the mirrored website in your browser and you can browse from link to link as if you were viewing it online.

HTTrack also gives us the capability to update an existing mirrored site and resume interrupted downloads. It's completely configurable, has an integrated help system, and has been around for quite a bit of time. Another tool is known as httprint. What we're trying to do here is fingerprint the application: one of the first tasks when conducting a web application penetration test is to identify the version of the web server, and of course the version of the application and possibly the version of the framework in use. The reason is that it allows us to discover any well-known vulnerabilities affecting the web server and the application. This is typically known as web application fingerprinting, and a quick sketch of the idea follows.
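As a rough illustration in Python (this is not httprint itself), we can start by reading the response headers of a hypothetical target. Note that the Server header can be altered or removed by administrators, which is precisely why dedicated fingerprinting tools also examine subtler protocol behavior rather than trusting the banner.

```python
# Naive fingerprinting from response headers (dedicated tools go deeper).
import requests

r = requests.head("http://target.example.com/", timeout=5)   # hypothetical host
print(r.headers.get("Server", "<no Server header>"))          # e.g. 'Apache/2.4.57'
print(r.headers.get("X-Powered-By", "<no X-Powered-By>"))     # e.g. 'PHP/8.2'
```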

The absolutely essential tool for a penetration tester, and I'd even argue for a developer, is some type of proxy, and I'm going to go through a couple of them. One tool's claim to fame is more along the lines of grabbing information: it keeps track of all the sites you have gone to in a given period. I always tell my classes that if I were ever stranded on a desert island and had to hack a web application, and I could only pick one tool, this would by far be the tool I would choose.

Burp Proxy is almost a Swiss Army knife of different security and testing applications and gizmos, with all kinds of things built into it. When I'm training individuals, as you've probably already surmised, I'm going to provide maybe an hour or an hour and a half of lecture for a week's worth of material. I teach a class that takes an entire week to cover web application security, and I'm going to do it in about an hour, because what we're trying to do is get you to pass your test, and to do that I'm going to hit all the high points.

Now, Burp Proxy is used to intercept data before it's sent to the server, or before it comes back to the web browser, and allows us to inspect it, manipulate it, and do other things to it as well. It even has the capability to create a fake certificate so that we can inspect SSL traffic too. It's a fantastic tool, and I should be doing a demo on it a little bit later. Another tool that I really like is called Fiddler. Fiddler is likewise used to intercept data before it's sent to the server and before it comes back to the browser; we can work in both places, intercepting and modifying HTTP and HTTPS traffic passing in both directions.

You can easily identify all kinds of content with automatic coloring of request and response syntax and rendering of web content; parse serialization schemes; apply fine-grained rules to determine which requests and responses are intercepted for manual testing and manipulation; view all the traffic in the detailed proxy history; and send interesting items to other tools with a single click. There's a cookie tool, there's a fuzzing tool available for it, and there are a lot of other add-ons for Fiddler; it's really a great little tool. One other claim to fame is that it allows you to define rules to automatically modify requests and responses without any manual intervention. A short sketch of driving traffic through a local intercepting proxy follows.
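Finally, here's a minimal sketch of pointing scripted traffic at a local intercepting proxy so it shows up in the proxy history. 127.0.0.1:8080 is Burp's usual default listener (Fiddler commonly uses 8888), and certificate verification is disabled because the proxy presents its own fake certificate in order to inspect HTTPS.

```python
# Route requests through a local intercepting proxy (Burp/Fiddler style).
import requests

proxies = {
    "http":  "http://127.0.0.1:8080",   # Burp's usual default listener
    "https": "http://127.0.0.1:8080",
}

# verify=False because the proxy substitutes its own certificate for HTTPS.
r = requests.get("https://www.example.com/", proxies=proxies, verify=False)
print(r.status_code)   # the exchange now appears in the proxy history
```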
