
actual data is stored is then marked free and will be reclaimed whenever the filesystem needs that space.

The result is that to truly erase the data, you need to overwrite it with nonsense before the filesystem delete operation is performed. Many times, this overwriting is implemented by simply zeroing all the bytes in the file. While this will certainly erase the file from the perspective of most conventional utilities, the fact that most data is stored on magnetic media makes this more complicated.

More sophisticated tools can analyze the actual media and reveal the data that was previously stored on it. This type of data recovery has a limit, however. If the data is sufficiently overwritten on the media, it does become unrecoverable, masked by the new data that has overwritten it. A variety of factors, such as the type of data written and the characteristics of the media, determine the point at which the interesting data becomes unrecoverable.

A technique developed by Peter Gutmann provides an algorithm involving multiple passes of data written to the disk to delete a file securely. The passes involve both specific patterns and random data written to the disk. The paper detailing this technique is available from http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html.
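
To give a sense of the pass structure (the counts come from Gutmann's paper; the macro and array names below are our own illustration, not an API from this book), the sequence consists of 4 random passes, 27 passes of fixed bit patterns, and 4 more random passes, for 35 in all:

    /* Shape of Gutmann's 35-pass sequence; names are illustrative only. */
    #define RANDOM_PASSES_BEFORE 4    /* passes 1-4:   random data        */
    #define FIXED_PATTERN_PASSES 27   /* passes 5-31:  fixed bit patterns */
    #define RANDOM_PASSES_AFTER  4    /* passes 32-35: random data        */

    /* Three of the 27 fixed patterns from the paper: */
    static const unsigned char pat5[] = { 0x55 };             /* 01010101... */
    static const unsigned char pat6[] = { 0xAA };             /* 10101010... */
    static const unsigned char pat7[] = { 0x92, 0x49, 0x24 }; /* targets MFM/RLL encodings */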

Unfortunately, many factors also work to thwart the feasibility of securely wiping the contents of a file. Many modern operating systems employ complex filesystems that may cause several copies of any given file to exist in some form at various different locations on the media. Other modern operating system features such as virtual memory often work to defeat the goal of securely obliterating any traces of sensitive data.

One of the worst things that can happen is that filesystem caching will turn multiple writes into a single write operation. On some platforms, calling fsync( ) on the file after one pass will generally cause the filesystem to flush the contents of the file to disk. But on some platforms that's not necessarily sufficient. Doing a better job requires knowing about the operating system on which your code is running. For example, you might be able to wait 10 minutes between passes, and ensure that the cached file has been written to disk at least once in that time frame. Below, we provide an implementation of Peter Gutmann's secure file-wiping algorithm, assuming fsync( ) is enough.
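
As a minimal sketch of the fsync( )-per-pass idea (the function name and buffer size here are our own, not this book's API), a single overwrite pass might look like this:

    /* One overwrite pass, assuming fsync() suffices to force the data to
     * disk. overwrite_pass() is a hypothetical helper for illustration. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>

    static int overwrite_pass(int fd, off_t length, unsigned char byte) {
      unsigned char buf[4096];
      off_t         left = length;

      memset(buf, byte, sizeof(buf));        /* fill with the pass pattern */
      if (lseek(fd, 0, SEEK_SET) != 0) return -1;
      while (left > 0) {
        size_t  n = left > (off_t)sizeof(buf) ? sizeof(buf) : (size_t)left;
        ssize_t w = write(fd, buf, n);
        if (w <= 0) return -1;
        left -= w;
      }
      return fsync(fd);  /* flush this pass before the next one begins */
    }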


On Windows XP and Windows Server 2003, you can use the cipher command with the /w flag to securely wipe unused portions of NTFS filesystems.
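
For example, the following command wipes the free space on the volume holding C:\ (cipher /w makes three passes over the unused space: zeros, ones, then random data):

    cipher /w:C:\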

We provide three functions:

spc_fd_wipe( )
    Overwrites the contents of a file identified by the specified file descriptor in accordance with Gutmann's algorithm. If an error occurs while performing the wipe operation, the return value is -1; otherwise, a successful operation returns zero.
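
As a rough, self-contained sketch of what a wipe driver in the spirit of spc_fd_wipe( ) looks like (this is not the book's implementation: it uses far fewer passes than Gutmann's 35, and rand( ) stands in for a proper random source):

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Write one pass over the whole file and flush it; pat == NULL means
     * fill the buffer with (weak) pseudo-random bytes instead. */
    static int wipe_pass(int fd, off_t len, const unsigned char *pat,
                         size_t patlen) {
      unsigned char buf[4096];
      off_t         left = len;
      size_t        i;

      for (i = 0; i < sizeof(buf); i++)
        buf[i] = pat ? pat[i % patlen] : (unsigned char)rand();
      if (lseek(fd, 0, SEEK_SET) != 0) return -1;
      while (left > 0) {
        size_t  n = left > (off_t)sizeof(buf) ? sizeof(buf) : (size_t)left;
        ssize_t w = write(fd, buf, n);
        if (w <= 0) return -1;
        left -= w;
      }
      return fsync(fd);
    }

    /* Hypothetical simplified wiper: three fixed patterns, then four
     * random passes. Returns 0 on success or -1 on error, mirroring the
     * convention described above. */
    int simple_fd_wipe(int fd) {
      static const unsigned char pats[3] = { 0x00, 0xff, 0x55 };
      struct stat sb;
      int         i;

      if (fstat(fd, &sb) == -1) return -1;
      for (i = 0; i < 3; i++)
        if (wipe_pass(fd, sb.st_size, &pats[i], 1) == -1) return -1;
      for (i = 0; i < 4; i++)
        if (wipe_pass(fd, sb.st_size, NULL, 0) == -1) return -1;
      return 0;
    }

A real implementation would follow the full Gutmann pass table and use a cryptographically strong source of randomness rather than rand( ).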

