
2012 Annual RELIABILITY and MAINTAINABILITY Symposium

Closed-Loop Corrective Action Basics, Best Practices and Application

Brad Cline
PTC
Services Manager
Windchill Quality Solutions
41 W. Otterman Street
Suite 310
Greensburg, PA 15601 USA
Internet (e-mail): bcline@ptc.com

Ken Stillwell
PTC
Vice President, Strategy & Operations
Windchill Quality Solutions
41 W. Otterman Street
Suite 310
Greensburg, PA 15601 USA
Internet (e-mail): kstillwell@ptc.com

Brad Cline & Ken Stillwell
Tutorial Notes © 2012 AR&MS


SUMMARY & PURPOSE

This tutorial introduces basic information about closed loop corrective action processes. The closed loop corrective action process goes by many names, including FRACAS (Failure Reporting, Analysis, and Corrective Action System), CAPA (Corrective and Preventive Action), PRACA (Problem Reporting, Analysis, and Corrective Action), and others. These processes bring together a wide range of information, from test results to field data to repair records. Outputs can include both qualitative and quantitative results that are inherently customizable to the specific needs of the organization. This tutorial also addresses several of the key obstacles to achieving a successful closed loop corrective action process, as well as best practice suggestions and a proven methodology for realizing its full benefits, including improvements in quality, reliability, and productivity along with reduced costs. To conclude the tutorial, several case studies are presented that highlight success stories from a wide range of groups who have implemented effective closed loop corrective action processes.

Brad Cline

Brad Cline (Services Manager - PTC’s Windchill Quality Solutions) manages enterprise-level implementations, from requirements definition and design to system installation, with over 50 large-scale FRACAS implementations globally. Brad also manages consulting services, which functions as a full-service reliability engineering support center. With a BS in Applied Mathematics from Carnegie Mellon University and an MBA from the University of Pittsburgh, Brad has experience managing data warehousing implementations, data analysis activities, and web-based enterprise reporting tools.

Ken Stillwell

Ken Stillwell (Vice President – Strategy and Operations) is responsible for driving efficiency in operations and overall growth strategies for PTC’s Windchill Quality Solutions, and leads customer support, global education, and global services. Ken oversees enterprise-level FRACAS implementations, with more than 150 FRACAS deployments completed by the implementation team. With a BS in Business/Economics from the University of Pittsburgh and an MBA from the University of South Carolina, Ken has held executive positions in software/IT, biotech, and aerospace.

Table of Contents

1. Introduction
2. FRACAS Fundamentals
3. FRACAS Best Practices
4. Challenges to Effective FRACAS
5. Steps to Successful FRACAS Implementation
6. Practical Application
7. Conclusions
8. References
9. Tutorial Visuals



1. INTRODUCTION

To produce high-quality products in an ever more competitive marketplace, the ability to efficiently discover, track, and correct incidents or failures found during product development, testing, and operation is crucial. This need spans all industries and classifications, including hardware, software, process management, and services in both government and commercial sectors. Notably, a survey conducted by the former Reliability Analysis Center (RAC), now the Reliability Information Analysis Center (RIAC), indicated that companies view this subject, most commonly referred to as FRACAS (Failure Reporting, Analysis and Corrective Action System), to be among their top two most important reliability tasks (1). Additionally, as of mid-March 2011, per Directive-Type Memorandum (DTM) 11-003 – Reliability Analysis, Planning, Tracking, and Reporting, a failure reporting and corrective action system is required to be maintained through design, development, production, and sustainment as part of a comprehensive reliability and maintainability (R&M) program.

Manufacturers and service providers have employed a variety of methods for tracking product failures and managing corrective actions. These systems include paper-based approaches, electronic spreadsheets and personal databases, and even large enterprise systems that affect hundreds or thousands of customers, suppliers, and engineers worldwide. Conceptually, the benefits of a failure tracking and corrective action process are clear. However, implementing such a system so that it yields more reliable products and improved designs in an efficient and effective manner is complex.

Different groups have attempted to define structured approaches and guidelines for the implementation of failure reporting and corrective action systems. Organizations such as the RIAC, in accordance with the U.S. Department of Defense (DoD), have studied the requirements of both large and small companies to develop general guidelines for FRACAS. In addition to the RIAC guidelines, other similar efforts have been published for public use, including SEMATECH’s “Failure Reporting, Analysis and Corrective Action System” Planning and Implementation Guide (2) and NASA’s “Preferred Reliability Practices, Problem Reporting and Corrective Action System” documents (3). While most government-related initiatives commonly refer to corrective action systems as FRACAS, other names, including PRACA, Quality System, Corrective Action System, and others, abound. Regardless of what the systems are called, these published guidelines present process-based approaches to solving the fundamental problem of product improvement through failure recording, corrective action, and lessons learned.

Even with existing guidelines and advancements in computer and communications technologies, the implementation of a cost-effective FRACAS that produces more reliable products remains a challenging task. Having examined the deployment of FRACAS in a variety of industries, the authors have determined that the publicly available guidelines fail to address many of the practical complications that exist. These intricacies are neither process-related nor technical in nature but instead stem from each company’s unique organizational structure, goals, and expectations. With this tutorial, the authors seek to provide additional practical guidance for implementation of a FRACAS, considering best practices and a step-by-step process for a successful FRACAS.

2. FRACAS FUNDAMENTALS

2.1 What is FRACAS?

As defined earlier, FRACAS stands for Failure Reporting, Analysis, and Corrective Action System. The FRACAS process allows efficient tracking, analysis, and correction of problems found during product development, testing, and operation. Though most commonly used with fielded products, FRACAS can be applied to products, services, processes, and software applications thanks to its flexibility. A FRACAS is ultimately a closed loop system that improves the reliability of the product, service, process, or software application by focusing on failure reporting and resolution. Naming conventions vary widely. Variations include CAPA (Corrective and Preventive Action) System, Corrective Action System, Quality System, and others where the first letter of the FRACAS acronym is replaced by a “P” for Problem (PRACAS), “I” for Incident (IRACAS), or “D” for Data or Defect (DRACAS).

2.2 Why is FRACAS important?

FRACAS implementations track, analyze, and correct problems found during development, testing, and operation, with the goal of improving the reliability and quality of the target product. In many instances, FRACAS also identifies trends related to failures and prioritizes the top issues to be addressed. In any specific implementation, the reason for employing the FRACAS process should be identified clearly as the process is defined. In some cases, specific customers may explicitly require using FRACAS.

2.3 Who uses FRACAS?

The need for FRACAS spans all industries and classifications, including hardware, software, process management, and services, in both government and commercial sectors. Many industries use it extensively, including:

• Aerospace
• Automotive
• Defense
• Electronics
• Manufacturing
• Telecommunications
• Medical Devices
• Others

2.4 What are the results of a FRACAS?
Depending on the data collected in a FRACAS process, users can often generate quantitative results such as failure rate, mean time between failures (MTBF), mean time between critical failures (MTBCF), reliability, availability, and other results. FRACAS outputs also regularly include graphs and reports to identify important summary details visually based on the data collected.
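As a concrete illustration of the quantitative outputs above, the sketch below computes failure rate, MTBF, and MTBCF from a small incident log. The record fields, incident IDs, and operating-hours figure are assumptions for illustration only, not real FRACAS data or a standard schema.

```python
# Hypothetical incident records; the field names and numbers are illustrative
# assumptions, not real FRACAS data.
failures = [
    {"id": "INC-001", "critical": True},
    {"id": "INC-002", "critical": False},
    {"id": "INC-003", "critical": True},
]
operating_hours = 4380.0  # roughly six months of continuous operation

failure_rate = len(failures) / operating_hours  # failures per operating hour
mtbf = operating_hours / len(failures)          # mean time between failures

# MTBCF considers only the failures flagged as critical
critical = [f for f in failures if f["critical"]]
mtbcf = operating_hours / len(critical)

print(f"failure rate = {failure_rate:.6f} per hour")
print(f"MTBF  = {mtbf:.0f} hours")
print(f"MTBCF = {mtbcf:.0f} hours")
```

The same counts, grouped by part, subsystem, or time window, feed the graphs and reports mentioned above.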

2.5 How is FRACAS performed?

While specifics will vary, FRACAS processes generally include these steps:

1. Record the failures or incidents. Critical data associated with each failure or incident is recorded and stored under defined procedures, often in a database management system.
2. Analyze the reported failures or incidents. The root cause of the failure or incident is identified and stored in the database management system alongside the original data.
3. Identify necessary corrective action. A corrective action plan for mitigating the failure or incident is developed, implemented, and stored in the database management system.
4. Verify the corrective action. Finally, the effectiveness of the corrective action is reviewed and recorded in the database management system, and the incident or problem is closed out per established procedures.
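The four steps above can be sketched as a simple state machine in which each incident is advanced through the loop and every transition is recorded. The state names, the Incident fields, and the example notes are illustrative assumptions, not a standard FRACAS schema.

```python
from dataclasses import dataclass, field

# Illustrative closed-loop states mirroring the four steps in the text
STATES = ["reported", "analyzed", "corrected", "verified", "closed"]

@dataclass
class Incident:
    description: str
    state: str = "reported"
    history: list = field(default_factory=list)

    def advance(self, note: str) -> None:
        """Move the incident to the next step, recording what was done."""
        i = STATES.index(self.state)
        if i == len(STATES) - 1:
            raise ValueError("incident already closed")
        self.history.append((self.state, note))
        self.state = STATES[i + 1]

inc = Incident("Pump seal leak on unit 7")                         # step 1: record
inc.advance("Root cause: seal material degrades above 80 C")       # step 2: analyze
inc.advance("Corrective action: switch seal material")             # step 3: correct
inc.advance("Verified on three units over 500 h, no recurrence")   # step 4: verify
inc.advance("Closed out per established procedure")
print(inc.state)
```

Because every transition is stored alongside the original data, the loop is auditable: nothing reaches "closed" without an analysis, a corrective action, and a verification on record.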

2.6 What are the limitations of FRACAS?

As much as FRACAS can substantially improve products, services, processes, or computer applications, the difficulty of efficient application is its inherent limitation. Smart planning and execution during implementation are paramount; otherwise, the system may fail to manage data efficiently, effectively identify root causes of problems, or close the FRACAS process loop correctly.

2.7 Where can I learn more about FRACAS?

As noted in the introduction, many FRACAS resources are available, including some that define structured approaches and guidelines for implementation. Several key FRACAS resources are listed in the REFERENCES section.

3. FRACAS BEST PRACTICES

While the general closed loop corrective action process seems to follow a common sense approach, many factors can impede its application. Because of the potential challenges, the following best practices are strongly recommended for use during FRACAS implementation.

3.1 Set expectations and goals

Prior to deploying the FRACAS corrective action system, well-defined expectations and goals are essential. These include the roles and responsibilities of all stakeholders in the process as well as the objectives of the FRACAS system itself. All key stakeholders in the process must agree upon the explicitly defined set of clear expectations and goals. With these in place, the FRACAS implementation can progress with clarity of purpose and the system can deploy efficiently and effectively in line with the defined objectives.

3.2 Involve the stakeholders

The support and involvement of all FRACAS stakeholders is critical. Many of these stakeholders will originate within the organization, but customers and/or suppliers may be involved as well. Involving all appropriate parties leads to support for gathering sufficient failure data and unification of a common process across the whole organization.

3.3 Gain active management involvement

Management involvement and support have a strong impact on the success of the FRACAS. Active management participation often results in obtaining and maintaining necessary funding and resources, and may also provide the leadership needed to implement and maintain a successful FRACAS.

3.4 Keep the process simple

The most successful FRACAS solutions are easy to use, employ user-friendly software tools for automation, and overall require modest investments in resources and training. By keeping things simple, active participation from those outside the quality/reliability group is more likely. Ultimately, the FRACAS must be simple enough for both the expert and novice to use. Of course, the process design must also allow for a thorough and effective FRACAS.

3.5 Leverage software tools

Software, either a custom in-house tool or a customizable off-the-shelf solution, is one key way to help automate the FRACAS and make it easier to use. Software tools help to automate data entry, analysis, and output, and provide a central storage area for FRACAS data and results.

3.6 Provide for efficient data entry and analysis

Entering and analyzing data can be two of the most time consuming tasks for FRACAS users. Simple web-based forms can provide for efficient data entry, while automated calculations, graphs, and reports, along with the ability to filter the data, can increase the efficiency and effectiveness of data analysis.

3.7 Supply training

Even when the simplest FRACAS process is implemented, early training can alleviate the stakeholders’ concerns and foster active participation. As users generate feedback and the FRACAS evolves, providing additional training is beneficial for the same reasons.

3.8 Encourage and supply feedback

There are two types of desirable feedback. Feedback from the users of the FRACAS can help those in leadership roles focus and streamline the system, while feedback from those in leadership roles to all participants showcases the results of their hard work and provides encouragement.



4. CHALLENGES TO EFFECTIVE FRACAS

4.1 Challenge 1: Complex organization interaction

A proper FRACAS process can span many different functional groups within an organization. Data is contributed by and/or needs to be made available to the following groups:

• Manufacturing
• Engineering
• Operations
• Quality Assurance
• Reliability
• Customer Service
• Sales and Marketing
• Field Services
• Testing
• Suppliers
• Failure Review Board (FRB)
• Maintenance
• Others

With all these different departments involved in the process, their interaction can become complex. Some groups may want to be included at multiple points. This increases and complicates the steps required to close an incident.

Consider the following scenario of how this may occur. In this simple process, a field service representative reports or logs an incident. Quality Assurance then reviews this incident and assigns it to the Reliability group to perform a root cause analysis. The Reliability group recommends a corrective action that the Engineering group implements. Finally, the Failure Review Board approves the corrective action and closes out the incident.

Over time, additional groups request to be involved. For example, Customer Service believes that they too need to be part of the corrective action step. Because they directly interface with the customer, they want to understand and possibly shape the corrective action response that may occur. In addition, Sales and Marketing may also be needed to properly manage a customer’s account, and a Supplier may desire to be connected with the analysis and corrective actions that have occurred with its parts. In this situation, the number of entities involved increases rapidly and the process quickly becomes convoluted.

This results in a long cycle time between the opening of an incident and its eventual closing. If the process is too complex, incidents can even become “forgotten” and never worked through to completion. Ultimately, the timely reporting of trends is compromised and, in the worst case, the entire process breaks down. A typical warning sign of a breakdown in the process is the phrase, “We just need to communicate better.” Although this may be true, the process may simply be too tangled to allow effective communication. While all parties have the best intentions, the results in such situations can be disappointing.

4.2 Solution 1: How to overcome the challenge of complex organization interaction

While working with the stakeholders in the planning phases of the FRACAS, identify the simplest process or workflow possible while still keeping key stakeholders involved at various steps in the process. Whenever possible, automate the communication. This way, all groups that need to be aware of an incident’s status can be notified easily.

Assigning a key contact person for each functional group to attend FRACAS planning meetings and communicate important information to the group further streamlines communication.

Regular FRACAS process review meetings involving all key stakeholders from the various functional groups can be helpful as well. These meetings can decrease in frequency over time.

Lastly, make sure that all stakeholders sign off on the proposed communication plans before the final FRACAS is deployed to ensure organizational complexity has been adequately addressed.
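The "automate the communication" advice above can be sketched as a simple publish/subscribe mechanism: each functional group registers interest in a status, and a status change notifies every subscriber automatically. The group names, incident ID, and the notification mechanism (appending to a log instead of sending e-mail through a real workflow tool) are placeholders for illustration.

```python
from collections import defaultdict

# status -> list of (group name, callback) pairs interested in that status
subscribers = defaultdict(list)

def subscribe(status, group, callback):
    """Register a functional group's interest in a given incident status."""
    subscribers[status].append((group, callback))

def set_status(incident_id, status):
    """Update an incident's status and notify every interested group."""
    notified = []
    for group, callback in subscribers[status]:
        callback(incident_id, status)
        notified.append(group)
    return notified

log = []
subscribe("corrective_action", "Customer Service",
          lambda i, s: log.append(f"Customer Service: {i} entered {s}"))
subscribe("corrective_action", "Sales and Marketing",
          lambda i, s: log.append(f"Sales and Marketing: {i} entered {s}"))

groups = set_status("INC-042", "corrective_action")
print(groups)
```

The workflow itself stays simple: groups that only need awareness subscribe to notifications instead of becoming an extra approval step in the loop.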

4.3 Challenge 2: Lack of prioritized goals

Based on a recent survey conducted by PTC, the top three reasons for implementing a closed-loop analysis and corrective action system are:

1. Compliance with customer requirements
2. Gaining insights into the reliability of products
3. Improving the next generation of product design

Depending on the functional groups involved, the relative importance of these reasons may vary. Consider an example organization without clearly prioritized goals. A customer contract specifies that a FRACAS be utilized for a particular project. The project manager focuses on implementing what the customer has specifically required, but financial constraints mean the resources needed to complete a full-featured FRACAS implementation are unavailable. Project/program managers implement a temporary spreadsheet or database solution with full intentions of replacing this solution in the future with a more substantial system, but a full-featured FRACAS never materializes. Consequently, a minimal FRACAS becomes the standard, resulting in poor efficiency and a lack of cohesiveness.

Here’s another example: a company believes that a FRACAS is important to the success of a product, but they see it as something they will get to in the future. As the product lifecycle progresses, the company starts to appreciate the necessity of a FRACAS. By this time, the company is far past the point of the budgeting and planning processes. Therefore, a proper implementation does not occur and teams are left scrambling late in the project to find resources to implement a FRACAS. Once again, a minimal FRACAS develops.

In a third example, a program manager struggles to provide a FRACAS implementation while her executive management has much higher expectations for this system. They envision that the FRACAS will save the company money on warranty costs. In addition, the reliability group is expecting to gather significant data to explore trends and improve their product’s design over time. Consequently, these groups expect a full-featured implementation with significant data gathering, analysis, and reporting capabilities. Unfortunately, because the program manager has minimal financial resources, she can only implement a basic FRACAS. The deployed system is not set up early enough to sufficiently track information, which limits the data available to executive management and the reliability group. The system needs more time to gather enough data to perform meaningful analysis. The resulting implementation falls short of everyone’s expectations.

What caused the problems in these examples? First, the groups involved did not discuss and prioritize the list of goals and expectations. Second, there was no objective FRACAS leadership across the departments. Finally, effective executive sponsorship verifying and tracking the goals did not exist.

4.4 Solution 2: How to identify and prioritize goals

To overcome the problems caused by a lack of prioritized goals, several points must be considered while planning the FRACAS. First, make sure all stakeholders, including management, discuss and agree on the goals and expectations of the FRACAS. Objective FRACAS leadership ensures that all stakeholders’ goals are considered in the planning. Lastly, determine which goals can be thoroughly accomplished in the planned FRACAS, and make sure all stakeholders sign off on the goal prioritization.

4.5 Challenge 3: Ineffective and inefficient data tracking

As mentioned previously, the key to an effective FRACAS solution is the gathering and reporting of meaningful data. This seems to suggest that the more data gathered on an incident or failure, the better the FRACAS will be. However, this is not necessarily the case. Too much data may even prevent users from discovering meaningful trends because they can’t “see the forest for the trees”.

For example, many legacy FRACAS solutions collect 80 or more fields of information when recording a failure. Although much of this data is valuable, gathering that much data can result in several problems.

If it takes customer support personnel recording an incident five seconds to enter data in each field, they will spend 400 seconds, or 6.7 minutes, filling out the form for all 80 fields of data, not including any research time. Soon, many begin to feel that fully logging a problem is just too time-consuming. They start routinely skipping fields and not even reporting some issues at all.

To prevent this, designers develop input screens that will easily accept incident information. However, they often use free-flowing text fields to allow customer support personnel to type any information that they think is useful. The support users, though, become confused and do not always know what to enter. Consequently, they begin to include extraneous or irrelevant data, sometimes just leaving the field blank. The resulting “data” becomes unable to support any meaningful analysis, and manually scanning incidents for trends becomes a tedious chore.

4.6 Solution 3: How to effectively manage data

To avoid the problems of inefficient and ineffective data tracking, organizations will want to establish functional procedures before data collection begins. Use of simple, streamlined forms that store data in a central database is often the best approach. Taking the time to train the FRACAS users responsible for data entry helps to ensure that all important data is entered correctly and efficiently. It is also critical to remind those responsible for failure data entry of the importance of capturing data at the time of failure, and of the long term benefits to the organization that result from timely data capture. Whenever possible, data capture should be automated as well.
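A streamlined form of the kind recommended above can be approximated with a handful of required fields and controlled vocabularies in place of dozens of free-text entries. The field names, allowed values, and example records below are illustrative assumptions, not a prescribed FRACAS data model.

```python
# Controlled vocabularies keep entries analyzable; free text is limited to a
# single description field. All names and values here are illustrative.
ALLOWED = {
    "severity": {"minor", "major", "critical"},
    "detected_in": {"development", "testing", "operation"},
}
REQUIRED = ["part_number", "severity", "detected_in", "description"]

def validate_entry(entry: dict) -> list:
    """Return a list of problems; an empty list means the entry is accepted."""
    problems = [f"missing field: {f}" for f in REQUIRED if not entry.get(f)]
    for name, allowed in ALLOWED.items():
        value = entry.get(name)
        if value is not None and value not in allowed:
            problems.append(f"{name}: '{value}' not in {sorted(allowed)}")
    return problems

good = {"part_number": "PN-1001", "severity": "major",
        "detected_in": "testing", "description": "Connector pin bent"}
bad = {"part_number": "PN-1001", "severity": "huge", "detected_in": "testing"}

print(validate_entry(good))  # accepted: no problems
print(validate_entry(bad))   # missing description, unknown severity value
```

Rejecting malformed entries at the form, rather than discovering them during analysis, is what keeps the central database able to support trend reporting.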

5. STEPS TO SUCCESSFUL FRACAS IMPLEMENTATION

While this tutorial has discussed three of the most common challenges during FRACAS implementations, the issues presented are not all-encompassing. Instead, they outline typical problems that prevent a company, regardless of industry, from realizing the dramatic results that can be achieved with a FRACAS.

Few documented tools and techniques exist to aid the successful implementation of a closed-loop analysis and corrective action system. This set of steps for successful FRACAS implementation is intended to help companies overcome the obstacles outlined above. The eight step approach is meant as a framework that can be modified as needed to fit a specific situation.

5.1 Step 1: Define the goals and success factors

Defining the goals of all intended users and stakeholders is the foundation for a successful FRACAS implementation. Every step in the process of establishing a FRACAS will be based upon this definition. A mistake or misunderstanding at this step can have negative consequences later. Therefore, a commitment to thorough research and documentation is required; otherwise, issues may not come to light until months later.

To begin this process, hold a series of short team meetings with each of the groups (as identified in Issue 1) and the representative stakeholders within the FRACAS process. Using general facilitation techniques, map out specific goals or expectations of the FRACAS. Typical goals include lowering maintenance costs, improving overall reliability, and improving next generation product design. Be careful of the common pitfall of moving off of goal establishment and into detailed requirements. The purpose of this exercise is for each group to reach a consensus on the priority of its main goals.

Once each group has set its primary goals, call a cross-functional meeting. One representative from each group and an executive and/or management representative should attend to review, consolidate, and prioritize the goals. During this same meeting, attach what success realistically means for each of these goals in concrete terms. For example, if a goal is to lower maintenance costs, a quantifiable success factor may be to reduce these costs by ten percent over the next 12 months. At the conclusion of this meeting, each attendee signs a document that lists the consolidated and prioritized goals along with their success factors. This single technique will immediately highlight the level of agreement between the groups.

Finally, gain executive approval and support, and have one executive take overall ownership and support of the implementation. Encourage that person to communicate the goals and success factors back to the teams.

5.2 Step 2: Define the output

With goals and success factors in place, next define the results or output that each group expects from the FRACAS. Based on the goals, each group will decide the outputs they need from the FRACAS to achieve the previously established success factors. Typically, the results are stated in terms of calculations, charts, graphs, and reports. For example, the Reliability group may need a Pareto chart indicating the number of failures by part number. The Field Service group may require a report indicating the cost of warranty repairs by part number. The Quality group may require a reliability growth chart.

A common pitfall is that each group will want as much output as possible, resulting in a bloated “wish list” that is difficult to reasonably implement. The purpose of this exercise, though, is to focus only on the basic necessities for success based on the goals determined earlier. To make this clear, map each output requirement back to a goal and success factor.
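The Reliability group's Pareto output mentioned above reduces to counting failures by part number, sorting them in descending order, and tracking the cumulative percentage. The part numbers and incident list below are invented for illustration.

```python
from collections import Counter

# Illustrative incident log: one part number per recorded failure
incidents = ["PN-7", "PN-3", "PN-7", "PN-7", "PN-9", "PN-3", "PN-7"]

counts = Counter(incidents).most_common()  # sorted by frequency, descending
total = sum(n for _, n in counts)

# Print a text Pareto: the top rows are the failures to address first
cumulative = 0
for part, n in counts:
    cumulative += n
    print(f"{part}: {n} failures ({100 * cumulative / total:.0f}% cumulative)")
```

The same tabulation, keyed on repair cost instead of failure count, would produce the Field Service group's warranty report.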

5.3 Step 3: Map the Process/Workflow

Through a series of meetings and interviews with stakeholders, next discuss what process or workflow they expect to follow. Most of the groups will focus on their own internal process but may not understand the overall process to be followed. Based on each group’s individual input and through additional investigation, develop a draft of a single consolidated process diagram. Use this to search out inefficiencies and actively find ways to simplify the process.

Involve the stakeholders in simplifying and coordinating the process steps between the groups. Question any step that does not assist with producing the output requirements established previously. Also, ensure that the overall process does not grow too complex. Remember that many steps can slow the working of incidents or decrease the ability to report trends in a timely manner. Change to the overall process is inevitable, but this step establishes a workable starting point.

5.4 Step 4: Map data required and input method

Using the process diagram and the output requirements, determine the minimum data fields required to support the workflow process at each step. The investment of time in this step helps avoid collecting data that has no direct purpose and focuses efforts in the most important areas.

Once the data fields are established, determine how the user will view the data. One large form with all data fields may be appropriate, or specific forms with fields tailored to each group may simplify understanding. Involve the stakeholders to understand what they are expecting and why.

Additionally, specify how the input data will be gathered. Recall from Issue 3 that efficient collection of valid, meaningful data is paramount. Popular methods include pre-populating fields based on previous input, drop-down menu choices, and even bar code entry.
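To make this step tangible, a minimal incident record might be sketched as below. The field set and the drop-down choices are hypothetical, invented purely for illustration; an enumerated type plays the role of a drop-down menu by restricting entry to valid values:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class FailureType(Enum):
    # Stands in for a drop-down menu: only these values can be entered.
    HARDWARE = "Hardware"
    SOFTWARE = "Software"
    OPERATOR = "Operator"

@dataclass
class IncidentRecord:
    # A deliberately minimal field set tied to the workflow steps.
    part_number: str
    serial_number: str
    failure_type: FailureType
    reported_on: date
    description: str = ""

rec = IncidentRecord("PN-101", "SN-0042", FailureType.HARDWARE, date(2012, 1, 25))
print(rec.failure_type.value)  # Hardware
```

Constraining fields this way at entry time is one inexpensive route to the valid, consistent data the analysis steps depend on.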

5.5 Step 5: Implement a prototype FRACAS

Now that the goals, success factors, expected output, workflow process, and entry forms are defined, choose an implementation approach. A variety of automation tools exist that can be used to support a FRACAS, though they fall into three major levels of capability and sophistication.

The first level of software products that support FRACAS consists of off-the-shelf, general purpose tools such as spreadsheet and personal database programs. Microsoft Excel and Microsoft Access are two popular options in this category. They are relatively inexpensive and can be used for a variety of purposes outside the FRACAS realm. In addition, many companies already have these tools available. However, general purpose tools have limited capacity to handle large amounts of data and may not support data sharing between multiple users. Also, these tools do not intrinsically support FRACAS calculations and graphing. Furthermore, this approach may require internal support to create and maintain the custom application.

The second level of FRACAS tools includes Workgroup applications that specifically provide FRACAS support for a small group of users and moderate amounts of data. FRACAS Workgroup tools can typically support multiple users, workflow capabilities, and FRACAS calculations. They may also connect with other Reliability and Maintainability tools for more detailed analysis.

The third level of FRACAS tools includes Enterprise applications that provide intrinsic FRACAS support, including workflow, calculations, and graphs, as well as additional capabilities for handling large amounts of data and many users. Enterprise FRACAS tools typically support multiple means of data entry and reporting, including internet browser support that allows them to serve a large, distributed user base. Enterprise FRACAS tools also often integrate with other Enterprise systems, including ERP/PDM, Mail, Workflow, and Directory Services.

For both Workgroup and Enterprise FRACAS tools, a continuum of product offerings is available. Custom-developed solutions, integrated custom tools, and complete commercial off-the-shelf (COTS) options all exist on the market. With today's technologies and FRACAS product offerings, custom FRACAS solutions cannot easily be justified. Custom development is usually the most expensive option because of the time and resources required to design, implement, support, and grow a single FRACAS implementation. Given that FRACAS implementations typically have common functional and data requirements, a configurable COTS solution, which allows changes without programming skills, is a more cost effective approach to deploying a FRACAS.

To select the appropriate level of FRACAS tool, users must examine their immediate needs in relation to future plans. Selection criteria must include a firm understanding of the functionality or capabilities required, the number of potential users, and the data storage needs over time. These decisions may seem daunting, but careful selection of the approach is imperative. To validate the decision, implement a prototype FRACAS based on the selected approach.

5.6 Step 6: Accept feedback and modify FRACAS<br />

At this stage, reconvene the stakeholders and start testing<br />

the prototype system. Does the previously defined workflow<br />

process work in this environment? Can the data be efficiently<br />

entered into the system? Can the expected outputs be<br />

generated? Is the system easy to use and understand?<br />

Most importantly, revisit the goals and success factors.<br />

Does the system meet these goals? Will the success factors be<br />

met? Particular areas will likely need additional tweaking or<br />

reworking. This is the time to accept constructive feedback<br />

and make the necessary modifications.<br />

Before proceeding, gain signoff and approval from all<br />

stakeholders. With their continued involvement, the chances<br />

of gaining their support is high, and they can be the best<br />

advocates for the project when proceeding to the next step —<br />

general rollout.<br />

5.7 Step 7: Rollout and train

Companies can take several different approaches to general rollout, and each method has its own set of advantages and disadvantages.

The "Big Bang" approach brings all users onto the system at once. This approach is typically used where time is short. When all users begin to use the system at the same time, many problems are likely. With so many users involved, seemingly minor issues can become major obstacles. Although this approach can work, it requires significant planning and coordination. It should only be considered when absolutely necessary.

Alternatively, most companies prefer a phased approach in which several groups at a time are trained and brought online. In this case, unforeseen issues can be addressed without involving the entire user base. Additionally, the implementation and training teams can adapt to provide better support for later groups. The more individualized attention typically gains greater support and acceptance from the general user base.

Training is extremely important, especially because typical FRACAS processes involve users with many different areas of interest and levels of expertise. When users understand what is expected of them and are empowered by training to use the tools given to them, the system has a much greater chance of acceptance and success.

5.8 Step 8: Continue to change

Continual change should be expected for the FRACAS based on user feedback regarding what works and what needs improvement. As business objectives and processes evolve, the FRACAS needs to adapt to support these developments. As a result, active management needs to accept and validate changes to the FRACAS in relation to the original high-level goals. Through this oversight, a high-performance, functional system will remain viable for many years.

6. PRACTICAL APPLICATION

This section presents several case studies to address the practical application of FRACAS. In some instances the FRACAS was implemented as a result of a customer requirement, and in others as an internal decision by the organization. They represent a range of successful FRACAS implementations based on the systematic steps and FRACAS best practices described above.

6.1 Case Study #1: Traditional FRACAS

A manufacturer in the aerospace industry instituted and continues to use a traditional FRACAS. A desire for reliable products as well as a customer requirement drove the implementation. Management established several goals, most notably:

• Capture incident or failure information in a closed-loop system.
• Provide periodic reports to the customer with the closed-loop resolution for each incident.

The required outputs include metrics such as MTBF. Reports are generated as well, including Failure Summary, Failure Analysis, and Reliability reports.
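As a reminder of what the MTBF metric involves, it is commonly estimated for a repairable system as cumulative operating time divided by the number of relevant failures. A minimal sketch, with figures invented purely for illustration:

```python
def mtbf(total_operating_hours, failure_count):
    """Estimate Mean Time Between Failures: operating time / failures."""
    if failure_count == 0:
        return float("inf")  # no failures observed yet
    return total_operating_hours / failure_count

print(mtbf(10_000, 4))  # 2500.0 hours
```

A FRACAS automates exactly this kind of roll-up, drawing the operating time and failure counts from the incident records it already holds.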

For workflow, the process follows a series of defined steps: Incident Entry, Quality Review, Failure Investigation, Root Cause Analysis, Corrective Action, Failure Review Board, and Closeout.

Data inputs to the FRACAS include data from the previously implemented database system as a starting point. Subsequent data is entered manually using customized tables and forms for each workflow step.

Two programs initially deployed the prototype FRACAS. During prototype review, necessary additional reports, calculations, and alerts were identified, and the FRACAS was updated to support the added requirements.

The solution provider supplied training to key users of the FRACAS on both programs.

Since the initial implementation, the FRACAS has changed slightly to better suit the organization's needs. Modifications are ongoing as new needs arise.

This FRACAS has proven to be a success thanks to the systematic planning approach used during deployment.

6.2 Case Study #2: RMA System

A supplier to the aerospace and defense industry implemented and continues to use a FRACAS to manage and track its return material authorization (RMA) process.

The goals and success factors for the FRACAS are as follows:

• Efficiently track and manage the RMA process.
• Provide procedures and tools to gather reliability data from various groups, including customer service, repair and overhaul, original equipment, and other service centers.
• Attain reliability data to help with quoting expected maintenance costs.

• Identify reliability and maintainability metrics as possible inputs to other reliability analyses.

Numerous outputs were identified as required, including a range of reports. Example reports include:

• Incident Report by Serial Number
• Failure Modes by Part Number
• RMA Status Report

The organization defined its workflow steps for the RMA process as: Field Service Entry, RMA Creation, Return Item Review, Preliminary Inspection Evaluation, Test Evaluation, Disassembly Evaluation, Engineering Evaluation, Final Inspection, and Closeout.

Legacy data from the existing RMA database was imported to the new FRACAS, with plans to input new data manually going forward.

Based on the goals, available inputs, and desired outputs, a prototype FRACAS was created and initially deployed to power users. During the prototype review, users were satisfied with the FRACAS as implemented.

Initial key users of the FRACAS received training from the software vendor in a "train-the-trainer" program. These users then provided training to all other personnel before rolling out the new FRACAS for RMA across the organization.

A systematic step-by-step process allowed for the creation of a FRACAS process and system that met all requirements for the RMA.

6.3 Case Study #3: Business Intelligence System

A global information technology corporation instituted and continues to use a FRACAS as a business intelligence system to support better data mining and, ultimately, improved business decision-making. The FRACAS provides one means of gathering historical, current, and predictive views of business operations related to testing and fielded product performance. Prior to reorganization of the testing and fielded product data capture and analysis procedure, users entered data into a customer support system. Then, for critical cases, users streamlined the data for consistency and readied it for analysis by design engineers. These data mining processes were difficult and time-consuming, and the company required a new solution.

The goals and success factors for the FRACAS are as follows:

• Integrate previously disjoint data sets across the product lifecycle to create a single source of truth for failure analysis.
• Use actual reliability and availability data to refine targets from generation to generation and effectively track reliability growth.
• Track failures during testing to identify potential failure modes, easily recognize trends, and determine metrics, including MTBF.
• Perform fielded product performance analysis to manage outage information and reporting.

The organization identified many outputs, including numerous graphs and reports. The graphs and reports provided a view of data trends over multiple years, with comparisons to previous generations of products in an instant dashboard view.

Disparate requirements for testing and for tracking field failures required multiple workflows. For example, for fielded product failures, the workflow steps include Problem Definition, Root Cause Analysis, Corrective Action, and Closeout.

The new tool imported legacy data from the previous customer support system to streamline the data mining processes. Going forward, the customer support team will input its data directly to the FRACAS via the carefully designed forms.

Based on the goals, available inputs, and desired outputs, a prototype FRACAS was created and initially deployed to power users. After gathering user feedback, necessary changes were implemented, including additional graphs and reports to let both engineers and management quickly acquire up-to-date information from the FRACAS.

The software vendor team that implemented the FRACAS trained power users. These power users, in turn, trained the remaining user base on the role-specific features and functions. Following training, all customer support personnel, design engineers, and management began using the new tool.

A systematic step-by-step process facilitated the creation of a FRACAS system that improves the efficiency of data mining and other analysis, providing critical information that drives advantageous business decisions for the firm.

6.4 Case Study #4: Non-Traditional FRACAS

A manufacturer of highly specialized, leading-edge industrial components implemented and continues to use a FRACAS to track details related to raw materials inspection.

The primary goal and success factor for the FRACAS is to provide the process and system to track data pertaining to incoming product, work-in-progress, and final inspection. The required system allows inspectors to enter the results of their material reviews and track the progress of lots. The firm also anticipated that the FRACAS would help to determine metrics such as first pass yield and percent scrapped.
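For reference, both metrics reduce to simple ratios over inspection counts. A minimal sketch, with lot figures invented for illustration:

```python
def first_pass_yield(units_entering, units_passing_first_time):
    """Fraction of units that pass inspection without rework."""
    return units_passing_first_time / units_entering

def percent_scrapped(units_entering, units_scrapped):
    """Percentage of units scrapped out of those entering the step."""
    return 100.0 * units_scrapped / units_entering

print(first_pass_yield(200, 180))  # 0.9
print(percent_scrapped(200, 10))   # 5.0
```

The value of the FRACAS here is not the arithmetic but that inspectors' entries supply the counts consistently at each of the three inspection steps.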

The implementation team identified many outputs, including various reports. Example reports include:

• First Pass Yield Report
• Scrap Report
• Action Report

The workflow for the process was identified and includes three key inspection steps: Incoming, Work-in-Progress, and Final.

As no legacy data was easily available for the new FRACAS for raw materials inspection, all data is entered manually to the FRACAS using custom forms and tables.

With the goals, inputs, and outputs in place, the software vendor built the prototype FRACAS for initial deployment to power users. After gathering feedback, the vendor implemented certain changes, including the addition of alerts to automate notifications based on other entered data and better control of the dates associated with acceptance or rejection of the inspected materials.

The vendor also trained the initial key users of the FRACAS, who then provided subsequent training to all personnel. The new FRACAS for raw materials inspection was then rolled out corporate-wide.

A systematic step-by-step process allowed for the creation of a FRACAS process and system that meets the requirements of the raw materials inspection system.

6.5 Case Study #5: CAPA

A manufacturer of highly specialized, leading-edge industrial components implemented and continues to use a FRACAS to accommodate its Corrective Action Request (CAR)/Preventive Action Request (PAR) process.

Key goals and success factors for the FRACAS are as follows:

• Efficiently track and manage the CAR/PAR process.
• Track metrics such as the time to close each CAR/PAR.
• Incorporate alerts to give notice of overdue actions.

Various outputs were identified, including reports and graphs. Example reports and graphs include:

• Average Time to Close CAR/PAR
• Issues by Source
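The "Average Time to Close" metric listed above is a simple mean over closed records. A minimal sketch, with a hypothetical record layout (the `opened`/`closed` field names are invented for the example):

```python
from datetime import date

def average_days_to_close(records):
    """Mean open-to-close duration, in days, over closed CAR/PAR records.
    Records still open (no 'closed' date) are excluded."""
    durations = [(r["closed"] - r["opened"]).days
                 for r in records if r.get("closed")]
    return sum(durations) / len(durations) if durations else None

cars = [
    {"opened": date(2012, 1, 2), "closed": date(2012, 1, 12)},  # 10 days
    {"opened": date(2012, 1, 5), "closed": date(2012, 1, 25)},  # 20 days
    {"opened": date(2012, 2, 1), "closed": None},               # still open
]
print(average_days_to_close(cars))  # 15.0
```

Excluding still-open records keeps the metric honest; the overdue-action alerts mentioned in the goals cover the open ones.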

The FRACAS team defined the workflow steps for the CAR/PAR process as: CAR/PAR Entry, Review, Root Cause, Action Taken, Effectiveness, and Closeout.

The new tool imported legacy data from the existing CAR/PAR system, and new data will be input manually going forward.

Based on the goals, available inputs, and desired outputs, a prototype FRACAS was created and initially deployed to a small group of users familiar with the process. Updates to further streamline the forms for more efficient data entry, among other changes, were implemented based on user feedback.

The vendor trained users of the CAR/PAR FRACAS before the company rolled out the new tool to replace the previous system.

Again, a systematic step-by-step process was critical to the creation of a FRACAS that meets the requirements of the CAR/PAR process.

7. CONCLUSIONS

Closed loop corrective action systems like FRACAS provide an effective process for controlling, tracking, and analyzing failures. They can lead to improvements in quality, reliability, and productivity while simultaneously reducing costs. When working to implement a FRACAS, be aware of the challenges that may arise. Keep best practices in mind and, like the organizations described in the case studies, follow the recommended eight-step procedure for a successful FRACAS implementation.

8. REFERENCES

1. Failure Reporting, Analysis and Corrective Action System (FRACAS) Application Guidelines, Product Code FRACAS, Reliability Analysis Center, Sep. 1999, p. 5.
2. M. Villacourt and P. Govil, "Failure Reporting, Analysis and Corrective Action System (FRACAS)," Doc. ID 94042332A-GEN, SEMATECH, Jun. 1994. Available at http://www.sematech.org/docubase/document/2332agen.pdf
3. NASA Preferred Reliability Practices, "Problem Reporting and Corrective Action System," Practice No. PD-ED-1255. Available at http://klabs.org/DEI/References/design_guidelines/design_series/1255ksc.pdf
4. MIL-HDBK-2155, Department of Defense Handbook: Failure Reporting, Analysis and Corrective Actions Taken.
5. Reliability Analysis Center, Reliability Problem Solving, Failure Reporting and Corrective Action System (FRACAS) and Reverse Engineering. Available at http://src.alionscience.com/pdf/fracas.pdf
6. E. J. Hallquist and T. Schick, "Best Practices for a FRACAS Implementation," RAMS 2004, pp. 663-667.
7. J. S. Magnus, "Standardized FRACAS for non-standardized products," RAMS 1989, pp. 447-451.
8. A. Mukherjee, "Integrated FRACAS systems for F117 infrared acquisition designation system (IRADS) support yield higher MTBMA," RAMS 2005, pp. 26-29.
9. MIL-STD-721C, Definitions of Terms for Reliability and Maintainability, 1981.
10. Directive-Type Memorandum (DTM) 11-003, Reliability Analysis, Planning, Tracking, and Reporting, US DoD, March 2011.
