
Overview of New DoD Reliability Revitalization Initiatives

James G. McLeish, CRE
DfR Solutions, College Park MD
jmcleish@dfrsolutions.com

ABSTRACT

Since the mid-1990s, the U.S. Department of Defense (DoD) has recognized two disturbing trends. First, the percentage of new systems failing to meet reliability requirements was increasing, resulting in costly delays and redesign activities. Second, the cost of supporting fielded systems was also increasing due to decreasing durability and reliability performance. A number of initiatives are now under way to reverse these trends. This paper summarizes several of the key initiatives that have been announced in the press and at the 2010 RAMS conference, including:

• The reliability-related portions of the Weapon Systems Acquisition Reform Act, defining acquisition policy updates designed to strengthen oversight and accountability [1];
• Revitalization of the systems and reliability engineering processes being institutionalized to reduce risk [2, 3, 4];
• A Reliability Program Scorecard tool developed to standardize and assist new programs in applying reliability best practices and to track planned and completed reliability tasks [5];
• Reliability initiatives currently under development;
• The AVSI Reliability Prediction Technology Roadmap;
• Proposals for resolving the limitations of actuarial reliability prediction methods by updating MIL-HDBK-217 to include science-based Physics of Failure (PoF) reliability modeling and simulation methods [6, 7].

Key Words: Reliability Assurance, Reliability Assessment, Design for Reliability, Physics of Failure, Reliability Physics

INTRODUCTION

An unintended consequence of DoD acquisition reforms and cost reduction efforts during the 1990s was a reduction in the rigor and effectiveness of sustainment planning, systems engineering, and reliability assurance throughout materiel development programs. In the early 2000s, the DoD acquisition community started to recognize and document two disturbing trends in defense acquisition programs: 1) an increasing percentage of systems were not meeting their reliability requirements, and 2) the cost of supporting fielded systems was increasingly higher than expected. Two major reports recommended significant changes in acquisition policies to address these issues.

The first, "Setting Requirements Differently Could Reduce Weapon Systems' Total Ownership Costs," a Government Accountability Office (GAO) report [2], concluded that the requirements generation process should:

• Include "…total ownership cost, especially operating and support cost, and weapon system readiness rates as performance parameters equal in priority to any other performance parameters for major system before beginning the acquisition program";
• Require "…the product developer to establish a firm estimate of a weapon system's reliability based on demonstrated reliability rates at the component and subsystem level";
• Structure "…contracts for major systems acquisitions so that…the product developer has incentives to ensure that proper trade offs are made between reliability and performance prior to the production decision."

The second, the "Report of the Defense Science Board (DSB) Task Force on Developmental Test and Evaluation" [8], examined the cause of the growing trend in unsuitable systems and degraded reliability performance. The related recommendation of the DSB report concluded that:

"The single most important step necessary to correct high suitability failure rates is to ensure programs are formulated to execute a viable systems engineering strategy from the beginning, including a robust RAM program, as an integral part of design and development. No amount of testing will compensate for deficiencies in RAM program formulation."

The DoD chartered the Reliability Improvement Working Group (RIWG) to implement the recommendations of the DSB. The RIWG membership was drawn from stakeholders across the DoD. A number of key policy changes were derived from the RIWG recommendations [4]. Some of these reforms have been enacted into law by congressional legislation to ensure they become permanent DoD policy. Others are being developed and implemented under separate initiatives. The remainder of this paper reviews some of the key reforms that have been announced to date.

WEAPON SYSTEMS ACQUISITION REFORM ACT

The Weapon Systems Acquisition Reform Act (WSARA) [1] was originally called the "Weapon Acquisition System Reform Through Enhancing Technical Knowledge and Oversight Act." It is a bipartisan Act of Congress co-sponsored by Sen. Carl Levin (D-Mich.), Chairman of the Senate Armed Services Committee, and Sen. John McCain (R-Ariz.), ranking member of the Senate Armed Services Committee.

The Act was based on the 2008 Defense Science Board Task Force Report on Developmental Test and Evaluation and the report of the RIWG, which cited the need to reform the way the DoD contracts for and purchases major weapon systems.


The Act was introduced on Feb. 23, 2009, and quickly passed both the Senate and House unanimously (93-0 and 411-0). It was signed into law on May 22, 2009 by President Obama, who cited the need to end the "waste and inefficiency" in defense acquisition. The need for such reforms was clearly demonstrated by external audits, such as a Government Accountability Office evaluation of 95 major defense projects that uncovered cost overruns totaling $295 billion. In the President's words, the Act "will strengthen oversight and accountability by appointing officials who will be charged with closely monitoring the weapons systems we're purchasing" [9]. The Congressional Budget Office estimates that the new reforms will cost about $55 million to implement and should be in place by the end of 2010. It is expected that the reforms will save millions if not billions of dollars over the next decade [10].

Reliability-related portions of the Act are intended to reverse the 20-year trend of system development shortcuts in DoD acquisition processes, including reductions in the reliability and acceptance test workforce, that have resulted in excessive cost overruns and delays in weapon system fielding. The reliability-related items and objectives of the key sections of the Act, summarized below, are designed to revitalize (or institutionalize) up-front systems engineering, total lifetime planning, competent design analysis, and testing while improving program and cost oversight:

TITLE I--ACQUISITION ORGANIZATION
Sec. 101. Cost assessment and program evaluation.
Sec. 102. Directors of Developmental Test and Evaluation and Systems Engineering.
Sec. 103. Performance assessments and root cause analyses.
Sec. 104. Assessment of technological maturity.
Sec. 105. Role of the combatant command commanders in identifying joint military requirements.

TITLE II--ACQUISITION POLICY
Sec. 201. Trade-offs among cost, schedule, and performance objectives.
Sec. 202. Strategies to ensure competition.
Sec. 203. Prototyping requirements.
Sec. 204. Actions to identify and address systemic problems in major defense acquisition programs.
Sec. 205. Additional requirements for certain major defense acquisition programs.
Sec. 206. Critical cost growth in major defense acquisition programs.
Sec. 207. Organizational conflicts of interest in major defense acquisition programs.

TITLE III--ADDITIONAL ACQUISITION PROVISIONS
Sec. 301. Awards for Department of Defense personnel for excellence in acquisition.
Sec. 302. Earned value management.
Sec. 303. Expansion of national security objectives of the national technology and industrial base.
Sec. 304. Comptroller General reports on costs and financial information of major defense acquisition programs.

REVITALIZING SYSTEMS & SUSTAINMENT

One of the main reliability effects of WSARA is the codification of key findings and recommendations in the GAO report [2, 11] regarding incorporating sustainment planning into the systems engineering process, especially in the area of Total Ownership Cost analysis and control from the earliest program activities. The DSB 2008 Developmental Test and Evaluation report identified the importance of establishing a viable systems engineering process at the beginning of programs [8]. Unfortunately, the DoD and the services systematically dismantled systems engineering activities beginning in the early 1990s, and revitalization efforts in the reliability arena are incomplete. This issue is addressed in WSARA by requiring the DoD to: (1) evaluate whether the systems engineering, development planning, lifecycle management, and sustainability capabilities needed to ensure that key acquisition decisions are supported by rigorous systems analysis and systems engineering are in place, and (2) establish the organizations and develop the skilled employees needed to fill any gaps in such capabilities [12]. Similar capability evaluations and corrective actions are specified for test and evaluation activities.

Re-establishing these capabilities, along with other tasks, is the responsibility of the new Directors of Developmental Test and Evaluation (D,DT&E) and Systems Engineering (D,SE), positions that were created within the Office of the Secretary of Defense (OSD) by WSARA.

RELIABILITY PROGRAM SCORECARD [5]

One problem that often faces government source selection teams is how to evaluate the ability of an offeror to develop a "viable RAM strategy." The RIWG adapted an Army tool, the "Army Materiel Systems Analysis Activity (AMSAA) Scorecard," to help in the assessment of both the program office's and the contractor's system reliability efforts. The scorecard has focus areas for reliability requirements planning; reliability testing; failure tracking and reporting; and reliability verification and validation. When applied early in the program, the scorecard can identify areas for improvement before significant problems emerge. There are 40 elements organized into the following eight evaluation categories:

• Requirements and Planning
• Training and Development
• Reliability Analysis
• Reliability Testing
• Supply Chain Management
• Failure Tracking and Reporting
• Verification and Validation
• Reliability Improvements


The scorecard examines a supplier's use of reliability best practices, as well as the supplier's planned and completed reliability tasks. A risk assessment score is calculated from the individual reliability risk ratings assigned to each element. Each element is given a color-coded risk rating of high (red), which is allotted a score of 3; medium (yellow), which equals 2; low risk (green), which is rated as 1; or not evaluated (gray). The elements are weighted and normalized to produce a risk score for each of the eight evaluation categories, and these are combined into the overall program score. Elements that are not evaluated are removed from the risk score calculations. The risk scores for each element are adjusted by weighting factors to produce an overall reliability risk normalized to a value between 1 and 100. A low score equates to a low reliability risk and a high score indicates a high risk, correlated to a visual green (go), yellow (caution), and red (stop) scale.
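The roll-up arithmetic is simple; the sketch below illustrates one plausible way to implement it, assuming equal category weighting and a linear normalization between the all-green and all-red extremes. The element names, weights, and ratings are hypothetical examples, not values from the actual AMSAA scorecard.

```python
# Illustrative sketch of the scorecard risk roll-up logic described above.
RATING_SCORES = {"red": 3, "yellow": 2, "green": 1}  # high / medium / low risk
# "gray" (not evaluated) elements are simply excluded from the calculation.

def category_risk(elements):
    """Weighted, normalized risk for one evaluation category (scale 1-100)."""
    rated = [(w, RATING_SCORES[r]) for w, r in elements if r in RATING_SCORES]
    if not rated:
        return None  # entire category not evaluated
    weighted = sum(w * s for w, s in rated)
    worst = sum(w * 3 for w, _ in rated)   # all-red reference
    best = sum(w * 1 for w, _ in rated)    # all-green reference
    return 1 + 99 * (weighted - best) / (worst - best)

def program_risk(categories):
    """Combine category scores (equally weighted here) into an overall score."""
    scores = [s for s in (category_risk(e) for e in categories.values()) if s is not None]
    return sum(scores) / len(scores)

# Hypothetical (weight, rating) pairs for two of the eight categories:
example = {
    "Requirements and Planning": [(2, "green"), (1, "yellow"), (1, "gray")],
    "Reliability Testing":       [(3, "red"),   (1, "yellow")],
}
print(f"Overall reliability risk score: {program_risk(example):.1f} (1 = low, 100 = high)")
```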

Figure 1: Overall Program Risk Assessment Example

The scorecard provides an early evaluation of RAM capabilities in an acquisition program while helping to identify reliability gaps. It provides a guide to help program offices and contractors think about reliability early in the acquisition process and throughout the program's life cycle. The Reliability Program Scorecard is a valuable tool for evaluating risks and making an early reliability projection for a program. A copy of the scorecard can be obtained from the DoD Reliability Information Analysis Center's web site, http://www.theriac.org/, under "AMSAA Reliability Growth Tools and Scorecard."

RELIABILITY INITIATIVES CURRENTLY UNDER DEVELOPMENT

A number of initiatives to permanently institutionalize reliability activities into major defense acquisition programs were defined under WSARA.

Reliability-related duties of the new Director, Systems Engineering position include using systems engineering approaches to enhance reliability, availability, and maintainability (RAM) and ensuring that provisions related to reliability growth are included in requests for proposals on major defense acquisition programs.

Acquisition executives of military departments and defense agencies responsible for major defense acquisition programs are responsible for ensuring that their organizations provide the Systems Engineering (SE) and Developmental Test and Evaluation (DT&E) organizations with appropriate resources and trained personnel to:

• Define a robust RAM and sustainability program as an integral part of design and development within the systems engineering master plan;
• Identify systems engineering lifecycle management, RAM, and sustainability requirements and incorporate them into contract requirements;
• Define appropriate developmental testing requirements;
• Participate in the planning of DT&E activities;
• Participate in and oversee DT&E activities, test data analysis, and test reports.

To develop plans for implementing these initiatives, the Under Secretary of Defense for Acquisition, Technology, & Logistics (AT&L) directed the new DoD Director of Systems Engineering to convene a Reliability Senior Steering Group (RSSG) in April 2010. Subordinate Reliability Working Group (RWG) teams were organized and tasked to address reliability-related issues regarding policy, people, and practice across the DoD. These teams actively developed recommendations regarding the reconstitution of DoD reliability, test, and sustainment related policy, skills, and processes. Related policy and guidance announcements are expected in the near future.

AVSI RELIABILITY ROADMAP

The Aerospace Vehicle Systems Institute (AVSI) is a Texas A&M University research cooperative of aerospace companies, the DoD, and the Federal Aviation Administration that works to improve aerospace vehicles, their components, systems, and development processes. The AVSI has undertaken a project to chart the future of reliability research by developing a reliability technology roadmap for electronics reliability assessment practices.

The AVSI team is applying a Quality Function Deployment (QFD) process to analyze and prioritize the experiences and recommendations of a large number of industry experts to define future reliability methods and research. QFD is a widely used tool for sorting out, identifying, and prioritizing the requirements of complex issues in order to transform user needs and demands into criteria and plans that incorporate key characteristics from the viewpoint of potential end users.

This project has so far generated a prioritized "wish list" of 64 reliability assessment features that are evaluated against 25 needs criteria. The next step is to evaluate how well existing or near-term reliability assessment/prediction methodologies or tools fulfill the objectives of the wish list items. Current or near-term reliability prediction methods and tools that are determined to adequately fulfill wish list needs and features will be identified as easily achievable "low hanging fruit" items that will be recommended as suitable for immediate use in current and new programs.
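The heart of this kind of prioritization is a weighted relationship matrix between candidate features and needs criteria. The following sketch shows, under assumed data, how such a matrix can be rolled up into a ranked feature list; the feature names, criteria weights, and relationship scores are invented for illustration and are not the actual AVSI roadmap data (which spans 64 features and 25 criteria).

```python
# Hypothetical QFD-style prioritization: each feature is scored against the
# needs criteria using the common 9/3/1/0 strong/medium/weak/none scale.
criteria_weights = [5, 3, 4]                      # assumed importance of each need
features = {
    "PoF wearout models":          [9, 3, 9],
    "Updated failure-rate tables": [3, 9, 1],
    "Field-data feedback loop":    [1, 3, 9],
}

def priority(relationships):
    """Weighted sum of relationship scores across the needs criteria."""
    return sum(w * r for w, r in zip(criteria_weights, relationships))

ranked = sorted(features.items(), key=lambda kv: priority(kv[1]), reverse=True)
for rank, (name, rel) in enumerate(ranked, start=1):
    print(f"{rank}. {name} (score {priority(rel)})")
```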


The wish list items that cannot be easily implemented in the short term will be evaluated to estimate the effort needed to realize them. These items will be identified as near-term and long-term recommended efforts to help influence the future direction of reliability research. The Reliability Roadmap results will be provided to the DoD for reliability research planning and future revisions to military and aerospace handbooks and processes, and to help direct future industry reliability research and development.

UPDATING MIL-HDBK-217 RELIABILITY PREDICTION METHODS TO INCLUDE PoF

The DoD's Defense Standardization Program Office (DSPO) has initiated a multi-phase effort to update MIL-HDBK-217, the military's often imitated and frequently criticized reliability prediction "bible" for electronic equipment, which has not been updated since 1995 [4, 12]. There are numerous concerns about the actuarial reliability prediction methods defined in MIL-HDBK-217, which have been covered thoroughly elsewhere [14, 15]. The main concerns are:

1) 217 predictions are currently based on constant failure rates, which model only random failure situations and do not account for infant mortality and wearout issues (see the sketch following this list). Tabulation errors, where infant mortality and wearout issues are tallied as random failures, are another risk of this scheme and can produce significant inaccuracy in predicted failure rates.

2) Actuarial reliability predictions typically correlate poorly to actual field performance since they do not account for the physics or mechanics of failure. Hence, they cannot provide insight for controlling actual failure mechanisms, and they are incapable of evaluating new technologies that lack a field history on which to base projections.

3) The models are based upon industry-wide average failure rates that are not vendor, device, or event specific, and MTBF results provide no insight into the starting point, growth rate, or distribution range of true failure trends. Also, the MTBF concept is often misinterpreted by people without formal reliability training.

4) Overemphasis on the Arrhenius model and steady-state temperature as the primary factor in electronic component failure, while the roles of key stress factors such as temperature cycling, humidity, vibration, and shock are not modeled [15, 16, 17, 18].

5) Overemphasis on component failures, when 78% of electronic failures are due to other issues that are not modeled, such as design errors, PCB assembly defects, solder and wiring interconnect failures, PCB insulation resistance and via failures, software errors, etc. [19]

6) The last 217 update was in 1995; new components, technology advancements, and quality improvements developed since then are not reflected in the current actuarial data tables [15, 20].
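To illustrate concern 1), the short sketch below compares a constant-failure-rate (exponential) prediction against a Weibull wearout model; all parameter values are hypothetical and chosen only to show how the two views diverge as a unit ages toward its wearout region.

```python
import math

# Hypothetical comparison of a constant-failure-rate (exponential) prediction
# with a Weibull wearout model having the same characteristic life.
mtbf_hours = 50_000            # actuarial MTBF implying a constant failure rate
mission_hours = 40_000         # service exposure of interest
beta, eta = 3.0, 50_000        # assumed Weibull shape (>1 = wearout) and scale

# Reliability at the mission time under each model
r_exponential = math.exp(-mission_hours / mtbf_hours)
r_weibull = math.exp(-((mission_hours / eta) ** beta))

# Instantaneous hazard (failure) rate at the mission time under each model
h_exponential = 1.0 / mtbf_hours
h_weibull = (beta / eta) * (mission_hours / eta) ** (beta - 1)

print(f"Exponential: R = {r_exponential:.2f}, hazard = {h_exponential:.2e}/h")
print(f"Weibull:     R = {r_weibull:.2f}, hazard = {h_weibull:.2e}/h")
# The constant-rate model cannot represent a rising hazard, so it understates
# the failure rate of aging hardware and overstates it for new hardware
# dominated by infant mortality.
```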

In addition to the current effort to create 217 revision G, which entails a simple update of the actuarial failure rate tables, the team also developed a proposal for a future revision H in which improved and more holistic empirical MTBF models could be used for comparison evaluations during a program's acquisition and supplier selection activities. Later, science-based Physics of Failure (PoF) reliability modeling combined with probabilistic mechanics techniques would be used during the actual system design and development phase to evaluate and optimize the stress and wearout limitations of a design, to foster the creation of highly reliable, robust systems.

The Physics of Failure approach (also known as Reliability Physics) applies analysis early in the design process to predict the reliability and durability of specific design alternatives in specific applications. This provides knowledge that enables designers to make design and manufacturing choices that minimize failure opportunities in order to produce reliability-optimized, robust products.

PoF focuses on understanding the cause-and-effect physical processes and mechanisms that cause degradation and failure of materials and components [21]. It is based on analyzing the loads and stresses in an application and evaluating the ability of materials to endure them from a strength and mechanics-of-materials point of view. This approach, known as load-to-strength interference analysis, has been used for centuries in mechanical, structural, construction, and civil engineering; it integrates reliability into the design activity via a science-based process for evaluating materials, structures, and technologies (a simple numerical illustration of stress-strength interference appears after the list below). In PoF, failures are organized into three categories:

1) Overstress Failures, such as yield, buckling, and electrical surges, occur when the stresses of the application rapidly or greatly exceed the strength of a device's materials. This causes immediate or imminent failures.

2) Wearout Failures are defined as stress-driven damage accumulation in materials, which includes failure mechanisms like fatigue and corrosion.

3) Errors and Excessive Variation issues comprise the PoF view of infant mortality. Opportunities for error and variation touch every aspect of design, supply chain, and manufacturing processes. These issues are the most diverse and challenging of the PoF categories. The diverse, random, and stochastic events involved cannot be modeled using a deterministic PoF cause-and-effect approach. However, reliability improvements are still possible when PoF knowledge and lessons learned are used to implement error proofing and to select capable manufacturing processes that ensure robustness [22].
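As a concrete illustration of the load-to-strength interference idea mentioned above, the sketch below computes an overstress failure probability assuming both the applied stress and the material strength are normally distributed; the distributions and units are hypothetical, not drawn from any particular design.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Assumed application stress and material strength distributions (e.g., MPa)
stress_mean, stress_sd = 40.0, 8.0
strength_mean, strength_sd = 70.0, 10.0

# Failure occurs when stress exceeds strength; for normal distributions the
# safety margin (strength - stress) is also normally distributed.
margin_mean = strength_mean - stress_mean
margin_sd = sqrt(strength_sd**2 + stress_sd**2)
p_failure = normal_cdf(-margin_mean / margin_sd)

print(f"Reliability (strength exceeds stress): {1 - p_failure:.4f}")
print(f"Probability of overstress failure:     {p_failure:.2e}")
```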

The proposed PoF circuit card assembly section defines four categories of analysis techniques (see Figure 2) that can be performed with currently available Computer Aided Engineering (CAE) analysis tools. This methodology is aligned with the Analysis, Modeling, and Simulation methods recommended in Section 8 of SAE J1211, Handbook for Robustness Validation of Automotive Electrical/Electronic Modules [23]. The four categories are:

1) E/E Performance and Variation Modeling is used to evaluate whether stable E/E circuit performance objectives are achieved under static and dynamic conditions, including tolerancing and drift concerns.


2) Electromagnetic Compatibility (EMC) and Signal Integrity Analysis is used to evaluate whether an electronic assembly generates, or is susceptible to, disruptions from electromagnetic interference and whether the transfer of high-frequency signals is stable.

3) Stress Analysis is used to assess the ability of electronic packaging structures to maintain structural and circuit interconnection integrity, maintain a suitable environment for E/E circuits to function reliably, and determine whether a device is susceptible to overstress failures [24].

4) Wearout Durability and Reliability Modeling uses the results of the stress analysis to predict the long-term stress aging/stress endurance, gradual degradation, and wearout capabilities of a CCA [24]. Results are provided in terms of time to first failure, the expected failure distribution, and an ordered list of the 1st, 2nd, 3rd, etc. devices, features, mechanisms, and sites of the most likely expected failures (a simplified fatigue-life example appears below).

These techniques provide a multi-discipline virtual engineering prototyping process for early identification of design weaknesses and susceptibilities to failure mechanisms, and for predicting reliability at a stage when improvements can still be readily implemented at low cost.
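The following sketch hints at the kind of wearout-durability estimate produced in category 4, using a simplified Coffin-Manson style thermal-cycling fatigue relation and Miner's rule for damage accumulation. The model constants and the annual thermal duty cycle are placeholders; a real PoF analysis would derive cyclic strains and material constants from the preceding stress analysis rather than use a temperature-swing shortcut.

```python
def cycles_to_failure(delta_t, c=1.0e7, m=2.5):
    """Simplified Coffin-Manson form: Nf = C * (deltaT)^(-m). C and m are
    assumed placeholder constants, not calibrated material data."""
    return c * delta_t ** (-m)

# Assumed annual thermal-cycling environment: (temperature swing degC, cycles/year)
duty_cycle = [(20.0, 300), (40.0, 60), (80.0, 5)]

# Miner's rule: sum the fractional damage contributed by each cycling condition
annual_damage = sum(n / cycles_to_failure(dt) for dt, n in duty_cycle)

print(f"Annual fatigue damage fraction: {annual_damage:.4f}")
print(f"Estimated time to first wearout failure: {1.0 / annual_damage:.1f} years")
```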

CONCLUSIONS

The 1990s-era attempts at up-front cost reduction in DoD acquisition programs, achieved by reducing systems engineering, RAM, development testing, and sustainment planning efforts, resulted in large development cost overruns, excessive maintenance burdens, and increased support costs in a number of defense systems. The realization of the long-term consequences of these policies has led DoD leadership to institute a policy and guidance reversal to revitalize these capabilities and return to a leadership role in systems engineering, system assurance, RAM, and sustainment technologies and methods. Some of these policy changes have been incorporated into law by the Weapon Systems Acquisition Reform Act of 2009. The initiatives reported in this paper are only the starting point of the revitalization effort. The DoD Reliability Senior Steering Group (RSSG) and its Reliability Working Groups (RWG) are diligently and rapidly working to refine these efforts, develop additional initiatives, and launch activities to implement them. Announcements of the next steps are expected in late 2010 or early 2011.

REFERENCES

[1] The Weapon Systems Acquisition Reform Act of 2009, Public Law 111-23, Washington, DC: U.S. Congress, 2009.
[2] "Setting Requirements Differently Could Reduce Weapon Systems' Total Ownership Costs," GAO-03-57, Washington, DC: Government Accountability Office, Feb. 2003.
[3] P. M. Dallosta, U.S. Defense Acquisition University, "The Impact of Changes in DoD Policy on Reliability and Sustainment," RAMS 2010.
[4] Final Report of the Reliability Improvement Working Group (RIWG), Washington, DC, 2008.
[5] M. H. Shepler, N. Welliver, USAMSAA, "New Army and DoD Reliability Scorecard," RAMS 2010.
[6] L. Gullo, "The Revitalization of MIL-HDBK-217," IEEE Reliability Newsletter, Sept. 2008. http://www.ieee.org/portal/cms_docs_relsoc/relsoc/Newsletters/Sep2008/Revitalization_MIL-HDBK-217.htm
[7] J. G. McLeish, "Enhancing MIL-HDBK-217 Reliability Predictions with Physics of Failure Methods," RAMS 2010.
[8] Report of the Defense Science Board Task Force on Developmental Test & Evaluation, U.S. Dept. of Defense, May 2008.
[9] White House press release, "Remarks by the President at Signing of the Weapon Systems Acquisition Reform Act," May 22, 2009.
[10] R. Lake, "Weapon bill passes House, goes to Obama," MSNBC, May 21, 2009. http://firstread.msnbc.msn.com/_news/2009/05/21/4426481-weapons-bill-passes-house
[11] G. R. Schmieder, "Reintegration of Sustainment into Systems Engineering During the DoD Acquisition Process," RAMS 2010.
[12] Summary of the Weapon Systems Acquisition Reform Act of 2009, Senator C. Levin, press release #308525, Feb. 24, 2009.
[13] G. F. Decker, "Policy on Incorporating a Performance Based Approach to Reliability in RFPs," Dept. of the Army Memo, Feb. 15, 1995.
[14] F. R. Nash, "Estimating Device Reliability: Assessment of Credibility," AT&T Bell Labs/Kluwer Publishing, 1993.
[15] M. Pecht, "Why the Traditional Reliability Prediction Models Do Not Work - Is There an Alternative?," Electronics Cooling, Vol. 2, pp. 10-12, Jan. 1996.
[16] M. Osterman, "We Still Have a Headache with Arrhenius," Electronics Cooling, pp. 53-54, Feb. 2001.
[17] M. Pecht, P. Lall, E. Hakim, "Temperature as a Reliability Factor," 1995 Eurotherm Seminar No. 45: Thermal Management of Electronic Systems, pp. 36.1-22.
[18] O. Milton, "Reliability & Failure of Electronic Materials & Devices," Ch. 4.5.8, "Is Arrhenius Erroneous," Academic Press, San Diego, CA, 1998.
[19] D. D. Dylis, M. G. Priore, "A Comprehensive Reliability Assessment Tool for Electronic Systems," IIT Research/Reliability Analysis Center, Rome, NY, RAMS 2001.
[20] "PRISM vs. Commercially Available Prediction Tools," RIAC Admin Posting #558, May 17, 2007, RIAC.ORG. http://www.theriac.org/forum/showthread.php?t=12904
[21] R. Alderman, "Physics of Failure: Predicting Reliability in Electronic Components," Embedded Technology, July 2009.
[22] S. Salemi, L. Yang, J. Dai, J. Qin, J. B. Bernstein, Physics-of-Failure Based Handbook of Microelectronic Systems, Defense Technical Information Center/Air Force Research Lab Report, U. of MD & RIAC, Utica, NY, Mar. 2008.
[23] SAE J1211, "Handbook for Robustness Validation of Automotive E/E Modules," Section 8, Analysis, Modeling and Simulations, SAE, April 2009.
[24] S. A. McKeown, Mechanical Analysis of Electronic Packaging Systems, Marcel Dekker, New York, 1999.


BIOGRAPHY

James G. McLeish, CRE
DfR (Design for Reliability) Solutions
5110 Roanoke Place, Suite 101
College Park, Maryland 20740, USA
e-mail: jmcleish@dfrsolutions.com

Mr. McLeish holds a dual EE/ME master's degree in vehicle E/E control systems. He is a Certified Reliability Engineer and a core member of the Society of Automotive Engineers Reliability Standards Workgroup, with over 32 years of automotive and military electrical/electronics experience. He started his career as a practicing electronics design engineer who helped invent the first microprocessor-based engine computer at Chrysler Corp. in the 1970s. He has since worked in systems engineering, design, development, product validation, reliability, and quality assurance of both E/E components and vehicle systems at General Motors and GM Military. He is credited with the introduction of Physics-of-Failure methods to GM while serving as an E/E Reliability Manager and E/E QRD (Quality/Reliability/Durability) Technology Expert. Since 2006, Mr. McLeish has been a partner and manager of the Michigan office of DfR Solutions, a quality/reliability engineering consulting and laboratory services firm formed by senior scientists and staffers from the University of Maryland's CALCE Center for Electronic Products and Systems. DfR Solutions is a leader in providing PoF science and expertise to the global electronics industry.


[Figure 2 - Example of an analysis work flow plan based on fundamental engineering and Physics of Failure principles for producing highly reliable and robust electronics systems. The flow chart groups the analysis tasks into four series: Series 1 - E/E Circuits & Systems Analysis (circuit performance and variation analysis, E/E power and load analysis, and physical systems evaluations); Series 2 - EMC & Signal Integrity Analysis; Series 3 - Physical Stress Analysis (mechanical, vibration/shock, and thermal stress analyses); and Series 4 - PoF Durability & Reliability Analysis (circuit card flexure, drop endurance, vibration fatigue, shock overstress, and thermal-mechanical cycling fatigue of components and plated through holes/vias). The dotted arrows identify analytical work flow when one analysis task requires the results from a preceding analysis task.]
