Feature
2007 Issue 2, Raytheon Technology Today

A workflow usually comprises several steps. A typical Two-Person Review workflow is shown below.

Initiation Phase
Upload: File(s) or directory structure(s) are uploaded via Web form, remote file path or Java applet.
Select Workflow: The user selects a release workflow from those they have permission to use.

Typical Two-Person Review FTP Workflow
Step 1, Select Destination: The user selects destination(s) from the possible destinations defined in the workflow.
Step 2, Select Classification: The user selects the appropriate classification for the destination(s) from the available classifications.
Step 3, Set Remote File Path: The user can modify the names of the file(s)/directory structure(s) for the remote destination system.
Step 4, Self Sign: The user reviews the previously defined release information and asserts the appropriateness of the request by digitally signing the release package.
Step 5, Search: The system performs an automated "Dirty Word" review of the release package for classification-related issues, based on a contextual search of the released file(s).
Step 6, File Type Check: The system performs an automated review of the release package for inappropriate and/or allowed file types.
Step 7, Approve and Sign: A "second person" approver reviews the file(s) and the results of the automated checks before asserting the appropriateness of the request by digitally signing the release package. Release packages can also be reverted to correct information if required.
Step 8, FTP Send: Signed (or unsigned) release packages are transferred via FTP to the appropriate destinations.

Human Review Manager

As a release request is processed through an HRM workflow, the status of the request is tracked for display on the Request Manager Web interface, or its status is available for query by the HRM Request Client.
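The workflow steps and status tracking described above can be illustrated with a minimal sketch. All names here (ReleaseRequest, the check methods, the word and file-type lists) are hypothetical examples chosen for illustration; this is not the actual HRM implementation.

```python
# Illustrative sketch of a Two-Person Review release request moving
# through the workflow, with automated checks and an audit trail of
# status changes. All names and lists are hypothetical examples,
# not the actual HRM implementation.

DIRTY_WORDS = {"secret", "noforn"}        # example search terms only
ALLOWED_TYPES = (".txt", ".pdf")          # example allow-list only

class ReleaseRequest:
    def __init__(self, files):
        self.files = dict(files)          # file name -> text content
        self.status = "INITIATED"
        self.audit_trail = [("INITIATED", "request created")]

    def advance(self, status, event):
        """Record a workflow state change for later status queries."""
        self.status = status
        self.audit_trail.append((status, event))

    def dirty_word_search(self):
        """Step 5: flag files whose text contains a listed term."""
        return [(name, word) for name, text in self.files.items()
                for word in DIRTY_WORDS if word in text.lower()]

    def file_type_check(self):
        """Step 6: flag files whose extension is not on the allow-list."""
        return [name for name in self.files
                if not name.endswith(ALLOWED_TYPES)]

req = ReleaseRequest({"notes.txt": "routine status report",
                      "payload.bin": "opaque binary data"})
req.advance("SELF_SIGNED", "originator signed release package")  # Step 4
word_hits = req.dirty_word_search()                              # Step 5
type_hits = req.file_type_check()                                # Step 6
if word_hits or type_hits:
    req.advance("REVERTED", "automated checks flagged the package")
else:
    req.advance("APPROVED", "second-person reviewer signed")     # Step 7
```

In this toy run the disallowed ".bin" file causes the package to be reverted for correction rather than approved, mirroring the revert path described in Step 7.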
The HRM also automates e-mail notifications to reviewers, provides release packaging and meta-data generation, and produces a comprehensive audit trail of the release, review and transfer process.

The HRM has been deployed on dedicated Windows-based or Solaris-based machines and is composed of two Java Servlet Web applications with a backend MySQL database running under an Apache Tomcat Web server. The HRM application provides the workflow features for release and review, while a separate Web application known as the Login Enabler (Pending Patent #064747.1151) provides a reusable and extendable single sign-on and user/group management capability, which has been integrated into the HRM's functionality.

[Figure: Typical HRM Deployment Architecture. Publishers submit file transfer requests via the Web or through an application using the Request Manager API; releasing agents sign and approve via the Web; the HRM file server sits behind a firewall and delivers release packages to an FTP server or writes them to DVD, with e-mail notices and status reporting throughout.]

Raytheon has fielded HRMs in support of customers in both the U.S. and U.K. markets. The HRM meets Protection Level 2 (PL2) requirements, with configurations up to PL4 possible when combined with appropriate boundary devices. Within the U.K., the HRM has been evaluated to the SYS3 level (which approximates a Common Criteria 3 evaluation, without all of the formal paperwork).

• Monty McDougal firstname.lastname@example.org

PROFILE: JAY LALA

Upon earning his doctorate in instrumentation from MIT, Jay Lala, Ph.D., embarked on an impressive 25-year career at Draper Laboratory, where he designed and developed fault-tolerant computers for mission- and safety-critical applications. These included the swim-by-wire ship control computer for the SEAWOLF nuclear attack submarine and the flight-critical computer controlling all on-board functions of the NASA X-38 crew return vehicle.
In 1999, Lala joined the Defense Advanced Research Projects Agency (DARPA) as a program manager, where DARPA's Information Assurance & Survivability programs provided him with an opportunity to achieve his vision of integrating the two previously distinct and parallel disciplines of fault tolerance and computer security. Working at DARPA enabled Lala to change the security paradigm from prevention and detection to intrusion tolerance and self-healing.

"Intrusion tolerance moves from the classical computer and network security approach of prevention — where you build all types of forts and moats to keep attackers out — to intrusion tolerance, where you design systems that, even when some parts fail or are successfully attacked, continue to operate and degrade gracefully to perform all the mission-critical functions correctly," he explained. "Self-healing or self-regenerative systems go beyond that — they diagnose the root cause and remove the vulnerability exploited by the attacker."

At the end of his four years at DARPA (a congressionally mandated term limit), Lala was awarded the Office of the Secretary of Defense Medal for Exceptional Public Service for helping improve the security of our nation's networks.

Since joining Raytheon in 2003, Lala has been integral to several key wins. He understands our customers' needs, especially in Mission Assurance, and has a thorough comprehension of the science and technology landscape that enables him to provide state-of-the-art solutions. Lala's background and experience in fault-tolerant computers, as well as in changing the mindset from prevention to intrusion tolerance and self-healing systems, are closely aligned with Raytheon's pursuit of Mission Assurance.
Feature

Information Assurance and Survivability Research at DARPA: 1999–2003

In 1999, a group of five program managers, including myself, arrived at DARPA to initiate a major new push in countering the threat of large-scale, coordinated cyber attacks against the United States by nation-states, terrorist organizations and other adversaries. This new initiative, a suite of programs in Information Assurance and Survivability (IA&S), was started by former DARPA director Dr. Frank Fernandez, with ample encouragement from Congress. Seven new programs were created in IA&S, though two did not survive past the first year. DARPA prides itself on funding cutting-edge, high-risk research, and sometimes the risk manifests itself as an utter lack of progress. DARPA is also quick to take action when things go awry.

The program, initially called Intrusion Tolerant Systems, operated on a simple premise: Some attacks will penetrate defenses and successfully evade intrusion detection mechanisms. Consequently, a number of basic research questions arose. How can we design systems to continue to function correctly in the presence of such inevitable compromises? How can the system operate through attacks? Can fault-tolerance techniques and principles be used to defend against cyber attacks? (Before arriving at DARPA, my background was in designing systems to tolerate accidental faults, failures and errors.)

When defending against viruses, worms and denial-of-service attacks, one is dealing with an intelligent and adaptive adversary: a human being. This is a greater challenge than countering randomly occurring hardware faults or even software bugs. Nevertheless, the research results are encouraging in that we can, in fact, architect systems that are intrinsically resilient to cyber mischief. The program resulted in more than 100 refereed publications, of which 24 seminal papers were collected in a book edited by this author, with a preface by current DARPA director Dr. Tony Tether.[1]
Numerous prototypes were also built and subjected to attacks by red teams. A follow-on program, OASIS (Organically Assured and Survivable Information Systems) Demonstration and Validation, created, demonstrated and validated an intrusion-tolerant architecture for the Joint Battlespace Infosphere,[2] applying many of the concepts developed under the earlier program. A prototype system was subjected to sustained attacks by multiple red teams, including one from the National Security Agency.

For a long time, the principal information and communication security mechanisms focused on keeping the intruder out of critical systems. Systems were designed with multiple defense layers, like the multiple walls of a fortress. Various forms of electronic and physical access controls and cryptographic techniques were employed to maintain confidentiality. This worked fairly well until the advent of networked systems. Cyber attacks accelerated as the Internet provided a path for information sharing among networked systems while simultaneously opening an easy attack avenue.

As a result, DARPA started to fund research in network-based intrusion detection in the 1990s, and MIT Lincoln Laboratory created a network traffic data set that mixed real network traffic with attack packets. All DARPA-funded intrusion detectors were tested against this ground truth. After a few years of research, it became apparent that detection rates had "plateaued" at much less than 100 percent and could not be improved without simultaneously increasing false-positive rates. These mechanisms fared especially poorly in detecting novel attacks and zero-day worms.

It was clear that despite all the preventive approaches, some attacks would succeed — and some of those would not be detected. A new approach was needed to secure information systems. The Intrusion Tolerance approach can be thought of as the third generation of information assurance — the first two being Prevention and Detection.
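The trade-off between detection rate and false-positive rate can be illustrated with a toy thresholding example. The scores below are synthetic and purely illustrative; they are not drawn from the Lincoln Laboratory evaluations.

```python
# Toy illustration of the detection-rate vs. false-positive trade-off:
# a detector thresholds an anomaly score, and lowering the threshold
# to catch more attacks also flags more benign traffic. All scores
# are synthetic and purely illustrative.

benign_scores = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.55, 0.6]
attack_scores = [0.45, 0.5, 0.65, 0.7, 0.8, 0.9]

def rates(threshold):
    """Return (detection rate, false-positive rate) at a threshold."""
    detected = sum(s >= threshold for s in attack_scores)
    false_pos = sum(s >= threshold for s in benign_scores)
    return detected / len(attack_scores), false_pos / len(benign_scores)

strict_det, strict_fp = rates(0.65)  # strict threshold: no false alarms
loose_det, loose_fp = rates(0.45)    # loose threshold: catches everything
```

Raising the threshold eliminates false alarms but misses the stealthier attacks; lowering it catches them all at the cost of flagging benign traffic, which is the plateau behavior described above.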
Some of the many techniques researched to provide intrusion tolerance included redundancy coupled with design and implementation diversity (to avoid the same vulnerabilities appearing in every replica), redundancy management (intrusion detection, response and reconfiguration), randomness and deception to confuse attackers, and proof-carrying code to shift the security burden from the consumer to the software vendor.

Intrusion and fault tolerance can enable the continued correct operation of a system in the presence of attacks and faults. However, as the system ages and components experience failures or are compromised, the system's capacity to tolerate additional attacks and faults is depleted. A correctly designed system will degrade gracefully and still continue to perform all the critical functions. But at some point even this will not be possible, and the system will eventually fail to perform its mission.

The current approach to dealing with this situation is to repair or replace failed components, or to take the system offline and purge compromised components of infections. These are mostly manual and tedious procedures. Furthermore, back-up systems must be brought online while repairs are occurring. But what if systems could be designed to be self-healing? What if they could automatically regenerate their capabilities? Thus, a new DARPA program — Self-Regenerative Systems — was born.

The goal of Self-Regenerative Systems is to design systems that can automatically diagnose the root causes of attacks (i.e., the vulnerability exploited by an attacker), reflect on past responses, learn, and improve their performance when similar events are encountered in the future. This fourth generation of information security technology relies heavily on principles from human cognition. Accordingly, it has the potential to deal successfully with ever-morphing novel attacks and an intelligent and adaptive adversary: the human being.
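The first technique listed above, redundancy with design diversity, can be sketched as majority voting across independently implemented replicas. This is a minimal illustration under assumed names; the replicas are hypothetical stand-ins, not any actual DARPA prototype.

```python
# Minimal sketch of redundancy with design diversity: three diverse
# "replica" implementations compute the same function, and a majority
# voter masks a minority of faulty or compromised replicas. The
# replicas are hypothetical stand-ins, not any actual DARPA prototype.
from collections import Counter

def replica_a(x):              # straightforward implementation
    return sum(range(1, x + 1))

def replica_b(x):              # diverse implementation: closed form
    return x * (x + 1) // 2

def replica_c_compromised(x):  # compromised replica returns bad data
    return -1

def vote(replicas, x):
    """Return the majority answer, masking a minority of bad replicas."""
    answers = [r(x) for r in replicas]
    value, count = Counter(answers).most_common(1)[0]
    if count <= len(replicas) // 2:
        raise RuntimeError("no majority: too many replicas disagree")
    return value

result = vote([replica_a, replica_b, replica_c_compromised], 10)
```

Because the two honest replicas are implemented differently, a single exploit is unlikely to corrupt both the same way, and the voter's output stays correct even with one replica fully compromised.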
• Jay Lala email@example.com

[1] Foundations of Intrusion Tolerant Systems, edited by Jay Lala, IEEE Computer Society Press, 2003.
[2] DPASA Final Report, BBN Technologies, DARPA Contract No. F30602-02-C-0134, CDRL A011, 15 June 2006.