ANSYS Solutions - Czech Technical University

Industry Spotlight:<br />
Chemical and Processing<br />

FEA is a valuable tool that aids doctors in orthopedic operations<br />

Researchers use <strong>ANSYS</strong> to develop micron-sized, self-powered mobile mechanisms<br />

Quickly vary geometry even without parametric CAD

Take a look at the future of product development...<br />

...a process that's more automated, more integrated, more innovative and truer to life. That’s where <strong>ANSYS</strong> is taking engineering simulation. By combining technologies like industry-leading meshing, nonlinear analysis and computational fluid dynamics, you can reduce costs and drive products to market quicker.<br />

Bring your products and processes to life with <strong>ANSYS</strong>.<br />

Visit www.ansys.com/secret/6 or call 1.866.<strong>ANSYS</strong>.AI.

Contents<br />

Industry Spotlight<br />

6 Chemical and Processing - A continuing series on the value of engineering simulation in specific industries<br />

Features<br />

10 <strong>ANSYS</strong> for Virtual Surgery - FEA is a valuable tool that aids doctors in orthopedic operations<br />

14 FEA in Micro-Robotics - Researchers use <strong>ANSYS</strong> to develop micron-sized, self-powered mobile mechanisms<br />

28 Design Insight for Legacy Models - Quickly vary geometry even without parametric CAD<br />

Departments<br />

2 Editorial - To Collaborate, You Need People<br />

3 Industry News - Recent Announcements and Upcoming Events<br />

16 Software Profile - The New Face of <strong>ANSYS</strong> ICEM CFD<br />

18 CFD Update - Simulation Helps Improve Oil Refinery Operations<br />

25 Managing CAE Processes - Upfront Analysis in the Global Enterprise<br />

26 Simulation at Work - Analysis of Artificial Knee Joints<br />

33 Tech File - Demystifying Contact Elements<br />

36 Tips and Techniques - Contact Defaults in Workbench and <strong>ANSYS</strong><br />

40 Guest Commentary - Putting Quality Assurance in Finite Element Analysis<br />

Want to continue receiving <strong>ANSYS</strong> <strong>Solutions</strong>?<br />

Visit www.ansys.com/subscribe to update your information. Plus, you’ll have the chance to sign up to receive CFX eNews and email alerts when the latest electronic version of <strong>ANSYS</strong> <strong>Solutions</strong> becomes available!<br />

About the cover<br />

There are many examples of successful chemical and processing companies using <strong>ANSYS</strong> simulation technology to improve products and processes. Our cover article describes how Twister BV used <strong>ANSYS</strong> CFX to reduce costs by 70% compared to the conventional route without CFD in developing gas separator equipment.<br />

For <strong>ANSYS</strong>, Inc. sales information, call 1.866.267.9724, or visit www.ansys.com on the Internet.<br />

Go to www.ansyssolutions.com/subscribe to subscribe to <strong>ANSYS</strong> <strong>Solutions</strong>.<br />

Editorial Director<br />

John Krouse<br />

jkrouse@compuserve.com<br />

Designers<br />

Miller Creative Group<br />

info@millercreativegroup.com<br />

Ad Sales Manager<br />

Ann Stanton<br />

ann.stanton@ansys.com<br />

Editorial Advisor<br />

Kelly Wall<br />

kelly.wall@ansys.com<br />

Managing Editor<br />

Jennifer L. Hucko<br />

jennifer.hucko@ansys.com<br />

Art Director<br />

Paul DiMieri<br />

paul.dimieri@ansys.com<br />

Circulation Manager<br />

Elaine Travers<br />

elaine.travers@ansys.com<br />

CFD Update Advisor<br />

Chris Reeves<br />

chris.reeves@ansys.com<br />

<strong>ANSYS</strong> <strong>Solutions</strong> is published for <strong>ANSYS</strong>, Inc. customers, partners, and others interested in the field of design and analysis applications.<br />

The content of <strong>ANSYS</strong> <strong>Solutions</strong> has been carefully reviewed and is deemed to be accurate and complete. However, neither <strong>ANSYS</strong>, Inc., nor Miller Creative Group guarantees or warrants accuracy or completeness of the material contained in this publication. <strong>ANSYS</strong>, <strong>ANSYS</strong> DesignSpace, CFX, <strong>ANSYS</strong> DesignModeler, DesignXplorer, <strong>ANSYS</strong> Workbench Environment, AI*Environment, CADOE and any and all <strong>ANSYS</strong>, Inc. product names are registered trademarks or trademarks of subsidiaries of <strong>ANSYS</strong>, Inc. located in the United States or other countries. ICEM CFD is a trademark licensed by <strong>ANSYS</strong>, Inc. All other trademarks or registered trademarks are the property of their respective owners.<br />

POSTMASTER: Send change of address to <strong>ANSYS</strong>, Inc., Southpointe, 275 Technology Drive, Canonsburg, PA 15317, USA. ©2004 <strong>ANSYS</strong>, Inc. All rights reserved.<br />

www.ansys.com <strong>ANSYS</strong> <strong>Solutions</strong> | Summer 2004

Editorial<br />


To Collaborate, You Need People<br />

Intellectual capital for creating innovative designs is lacking at manufacturers that skimp on jobs.<br />

By John Krouse<br />

Editorial Director<br />

<strong>ANSYS</strong> <strong>Solutions</strong><br />

jkrouse@compuserve.com<br />

One of the most significant – and possibly least recognized – aspects of engineering simulation is that the technology can be a tremendously effective communication and collaboration tool in product development. By using virtual prototyping, what-if studies and a wide range of other analyses to show how proposed products will perform, engineering simulation can give people in multi-functional product development teams tremendous insight into designs. The technology also provides an effective way for team members to interact, with disciplines outside engineering able to see the impact of their various ideas, suggestions, feedback and input. In this way, teams can investigate even the most unconventional ideas, some of which can turn out to be the basis of ground-breaking new products.<br />

Collaborative product development is a growing trend in manufacturing industries, getting engineers and analysts working with others across the extended enterprise: manufacturing, testing, quality assurance, sales, marketing and service – even those outside the company such as suppliers, customers, consultants and partners. These people typically don’t know how to build meshes, define boundary conditions, run analyses or perform optimizations. But they can see the impact of what simulations show, and they can provide valuable feedback in spotting, evaluating and fixing potential problems. Marketing could suggest a different contour that would make a consumer product more saleable, for example, or procurement might suggest alternate suppliers for stronger and less expensive components to reduce excess stress.<br />

This multi-functional synergy is the basis for the creativity necessary to develop innovative products and processes that might not immediately occur to individuals working separately. Collaboration taps into the intellectual capital of the enterprise – the combined know-how and insight of workers about the company’s operation, its products and its customers.<br />

Companies need people for multidisciplinary collaboration. But, unfortunately, jobs at manufacturing firms are in steady decline. According to the National Association of Manufacturers, after peaking at 17.3 million in mid-2000, manufacturing employment has fallen by 2.8 million, while employment in non-manufacturing sectors of the economy rose by 671,000 to 115 million. Data from the U.S. Bureau of Labor Statistics indicates there were 2,378 extended mass layoffs in manufacturing during 2002 alone, resulting in 454,034 workers being removed from their jobs.<br />

Meanwhile, the overall economy is rebounding, with the Dow Jones Industrial Average undergoing a strong sustained rally and corporate profits up. Forecasters at the National Association for Business Economics predict that the U.S. economy will show robust annual growth of 4.5% in 2004.<br />

Despite this strong economic growth, payrolls in manufacturing continue to go down as manufacturers operate with as few people as possible. Running these super-lean operations pumps up short-term profits. But manufacturers cannot sustain long-term growth based on savings from a barely adequate workforce being stretched to the limit. Product quality, customer service and brand image ultimately suffer, as do the product innovations that spring from collaborative design.<br />

To collaborate, you need people: ones with enough time in the workday to apply their knowledge on creative projects. When manufacturers cut jobs indiscriminately, they’re not just getting rid of salaried bodies, they’re discarding the company’s most valuable asset – the wealth of intellectual capital in its workforce. Companies that fall into this trap risk being left behind in the market by astute competitors with enough sense to invest in their workers and the knowledge they bring to the collaborative processes necessary to develop winning products. ■<br />







Industry News<br />


Recent Announcements<br />

EASA 3.0 - The New Standard for Efficient Application Development<br />

EASA enables ultra-rapid creation and deployment of Web-enabled applications that can drive most applications, including <strong>ANSYS</strong> and CFX. EASA also can be used to integrate several tools, thus automating processes involving, say, CAD, FEA and even in-house codes. EASA is available as a software product to author and publish your own custom applications. Alternatively, several ASDs are now using EASA to create turnkey applications to your specification as a service.<br />

New features in EASA 3.0 include:<br />

• Connectivity to Relational Databases such as SQL Server and Oracle, and with database applications such as ERP, CRM and PLM systems.<br />

• Improved Security for Internet Use using Secure Socket Layer (SSL) technology, enabling you to host applications for use over the Internet.<br />

• Multi-Language EASAPs — create your app in your language, and users see it in their preferred language. Character sets supported include Roman, Chinese, Japanese, Russian and Arabic.<br />

• New parametric study and optimization capabilities<br />

• New API — EASA’s differentiator has always been to allow non-programmers to create professional-grade Web-enabled applications around their underlying software. Now an API allows EASA authors who have programming skills to create applications at the next level by using custom code.<br />

For more information, visit www.ease.aeat.com.<br />

2004 International <strong>ANSYS</strong> Conference Hailed a Success<br />

Engineering professionals from throughout the world gathered at the Hilton Pittsburgh in May for the 2004 International <strong>ANSYS</strong> Conference to discover the true meaning behind what it is to Profit from Simulation.<br />

Vision and strategy set the theme for the general session. Kicking off the conference with a welcome address, <strong>ANSYS</strong> president and CEO Jim Cashman set the stage for keynote speaker Brad Butterworth of Team Alinghi. As the cunning strategist aboard the Team Alinghi yachts, Brad shared his experience and discussed how the America’s Cup winner is using <strong>ANSYS</strong>’ integrated simulation solutions to defend its title in the 2007 competition.<br />

After the morning break, <strong>ANSYS</strong> presented its Technology Roadmap, the company’s successful, ongoing strategy for integrating the power of the entire <strong>ANSYS</strong>, Inc. family of products into the ultimate engineering simulation solution. Then, Bruce Toal, director of Marketing and <strong>Solutions</strong>, High Performance <strong>Technical</strong> Computing Division at Hewlett-Packard Company, spoke about the company’s Adaptive Enterprise for Design and Manufacturing.<br />

Following a day of technical and general sessions, and visits to exhibitor booths, attendees enjoyed a conference social sponsored by Hewlett-Packard on Monday evening. Standing ovations and triumphant applause echoed throughout the ballroom as <strong>ANSYS</strong> president and CEO Jim Cashman presented Dr. John Swanson, <strong>ANSYS</strong> founder, with the 2004 AAES John Fritz Medal.<br />

<strong>ANSYS</strong>’ long-standing partners and key customers took to the podium for the Tuesday general session. LMS International’s Tom Curry, executive vice president and chief marketing officer, spoke about the product creation process. Tom guides the company’s growth in predictive computer-aided engineering, physical prototyping and related services.<br />

Herman Miller’s Larry Larder, director of engineering services, discussed how the company uses <strong>ANSYS</strong> simulation technologies to experiment and innovate in the office furniture industry.<br />



SGI’s director of product marketing, Shawn Underwood, presented the future of high performance computing, followed by Dr. Paresh Pattani, director of HPC and Workstation Applications at Intel Corporation, who focused on the paradigm shift in high performance computing.<br />


Jorivaldo Medeiros, technical consultant at PETROBRAS, offered his <strong>ANSYS</strong> success story on how the company drives development and innovation in equipment technology.<br />

In addition, <strong>ANSYS</strong> became the first engineering simulation company to solve a 111 Million Degrees of Freedom structural analysis model. After lunch, the Management Track addressed strategies on how to implement new technologies and explain the benefits of engineering simulation to management.<br />

<strong>ANSYS</strong> Breaks Engineering Simulation Solution Barrier<br />

<strong>ANSYS</strong>, Inc. has become the first engineering simulation company to solve a structural analysis model with more than 100 million degrees of freedom (DOF), making it possible for <strong>ANSYS</strong> customers to solve models of aircraft engines, automobiles, construction equipment and other complete systems.<br />

“Manufacturers are looking for more accurate, large system simulations to improve their time-to-money,” said Charles Foundyller, CEO at Daratech, Inc. “This announcement means that users now have a clear roadmap to improved productivity.”<br />

As hardware advances in speed and capacity, <strong>ANSYS</strong> is committed to being the leader in developing CAE software applications that take advantage of the latest computing power. This leadership provides customers with the best engineering simulation tools for their product development process to help achieve better cost, quality and time metrics.<br />

This powerful new offering from <strong>ANSYS</strong> speaks to its commitment to develop and deliver the best in advanced engineering solutions. In turn, <strong>ANSYS</strong> has entered into a three-year partnership with SGI to advance the capabilities of <strong>ANSYS</strong> in parallel processing and large memory solutions.<br />

In a joint effort with Silicon Graphics, Inc. (SGI), the 111 million DOF structural analysis problem was completed in only a few hours using an SGI® Altix® computer. DOF refers to the number of equations being solved in an analysis, giving an indication of a model’s size.<br />
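The DOF metric can be made concrete with a back-of-the-envelope sketch. The figures below are illustrative assumptions, not a statement about the specific model solved: solid structural elements typically carry 3 translational unknowns per node, and boundary constraints are ignored.

```python
# Back-of-the-envelope sketch of the DOF size metric described above.
# Assumption: 3 translational DOF per node (typical of solid structural
# elements); constrained DOF are ignored for simplicity.

def model_dof(num_nodes, dof_per_node=3):
    """Total equations solved, i.e. nodes times unknowns per node."""
    return num_nodes * dof_per_node

# Under these assumptions, a 111 million DOF solid-element model
# implies roughly 37 million nodes.
print(model_dof(37_000_000))  # 111000000
```
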

“<strong>ANSYS</strong>’ ability to solve models this large opens the door to an entirely new simulation paradigm. Prior to this capability, a simulation could be conducted only at a less detailed level for a complete model or only at the individual component level for a detailed model. Now, it will be possible to simulate a detailed, complete model directly, potentially shortening design time from months to weeks. Equally important, having a high-fidelity comprehensive model can allow trouble spots to be detected much earlier in the design process. This may greatly reduce additional design costs and can provide an even shorter time to market,” said Jin Qian, senior analyst at Deere & Company <strong>Technical</strong> Center.<br />

According to Marc Halpern, research director at Gartner, although simulation accelerates the delivery of quality products to market, users have faced major challenges to realizing the full value. For example, hardware and software limitations have historically made realistic simulations elusive when realism involves highly detailed models and complex physical behavior.<br />

Safe Technology Incorporates AFS Strain-Life Cast Iron Database in fe-safe<br />

Safe Technology Ltd has been granted a license to use the AFS cast iron database from the research report “Strain-Life Fatigue Properties Database for Cast Iron” in its state-of-the-art durability analysis software suite for finite element models, fe-safe. Safe Technology Ltd is a technical leader in the design and development of durability analysis software that pushes the boundaries of fatigue analysis software to ensure greater accuracy and confidence in modern fatigue analysis methods for industrial applications. The availability of the AFS database within fe-safe ensures that users will have access to the most up-to-date and accurate cast-iron materials data for their durability analyses.<br />

The AFS Ductile Iron and the Gray Iron Research Committees have developed a Strain-Life Fatigue Properties Database for Cast Iron. This database represents the capability of the domestic casting industry and is available as a special AFS publication. It is the culmination of a five-year effort in partnership with the DOE Industrial Technology Program.<br />

The scope of this information includes 22 carefully specified and produced castings from ASTM/SAE standard grades of irons, including Austempered Gray Iron (AGI) (specification is under development). Each grade is comprehensively characterized from an authoritative source with chemical analysis, microstructure analysis, hardness tests, monotonic tension tests and compression tests. This information is contained in user-friendly digital files on two CD-ROMs for importing into computer-aided design software. AFS Publications are described online at www.afsinc.org/estore/.<br />

For more information, visit www.safetechnology.com.<br />

Product Development Platform Will Simulate and Optimize Design Performance for Autodesk Inventor Professional Customers<br />

Autodesk will license <strong>ANSYS</strong> simulation technologies and package them as an integral part of the Autodesk Inventor Professional 9.0 product and future releases. Powered by <strong>ANSYS</strong>’ part-level stress and resonant frequency simulation technologies, Autodesk Inventor Professional 9.0 will enable design engineers to create more cost-effective and robust designs, based on how the products function in the real world, by facilitating quick and easy “what-if” studies right within the software’s graphical user interface.<br />


“Autodesk is proud to be working with an industry innovator like <strong>ANSYS</strong>,” said Robert Kross, vice president of the Manufacturing <strong>Solutions</strong> Division at Autodesk. “This reinforces our commitment to deliver proven and robust technologies to manufacturers, in order to help them deliver better quality products and bring them to market faster. Inventor Pro 9.0 will make simulation (CAE) functionality available to a broader mechanical design community, while protecting customers’ business investment by seamlessly integrating with other high-end <strong>ANSYS</strong> offerings. Our customers will surely benefit from this relationship.”<br />

The total solution will help product development teams make more informed decisions earlier in the design process, allowing them to reduce costs and development time while designing better and more innovative products.<br />

“This new offering from Autodesk will be viewed very strategically by their customers. As they deploy simulation tools throughout their product design process, the Autodesk-<strong>ANSYS</strong> offering will be a key component to a customer’s overall simulation strategy,” said Mike Wheeler, vice president and general manager of the Mechanical Business Unit at <strong>ANSYS</strong>. “<strong>ANSYS</strong> is proud to be part of the design effort to create this next generation tool as part of our overall <strong>ANSYS</strong> Workbench development plan.”<br />


Upcoming Events<br />

August 29-September 3: ICAS 2004, Yokohama, Japan<br />
September 5-8: RoomVent 2004, Coimbra, Portugal<br />
September 6-9: 17th International Symposium on Clean Room Technology, Bonn, Germany<br />
September 7-8: European UGM for Automotive Applications / Radtherm User Conference, Neu-Ulm, Germany<br />
September 19-20: German Aerospace Congress 2004, Dresden, Germany<br />
September 21-22: Numerical Analysis and Simulation in Vehicle Engineering, Troy, Michigan, USA<br />
September 22-25: 3rd International Symposium on Two-Phase Flow Modeling and Experimentation, Pisa, Italy<br />
September 29-30: Calculation & Simulation in Vehicle Building, Wurzburg, Germany<br />
September 29-30: Pump Users International Forum 2004, Karlsruhe, Germany<br />
September 28-October 2: ASME DETC/CIE Conference, Salt Lake City, Utah, USA<br />
October 4: 2004 PLM European Event, UK<br />
October 4-5: Daratech DPS, Novi, Michigan, USA<br />
October 12: <strong>ANSYS</strong> Multiphysics Seminar, Sweden<br />
October 13: Construtec Conference, Spain<br />
October 20: <strong>ANSYS</strong> 9.0 Update Seminar, Sweden<br />
October 28-29: <strong>ANSYS</strong> User Conference, Mexico<br />


Industry Spotlight<br />


Industry Spotlight:<br />
Chemical and Processing<br />

A continuing series on the value of engineering simulation in specific industries.<br />

The chemical and processing industries provide the building blocks for many products. By using large amounts of heat and energy to physically or chemically transform materials, these industries help meet the world’s most fundamental needs for food, shelter and health, as well as products that are vital to such advanced technologies as computing, telecommunications and biotechnology.<br />

According to the American Chemical Society, chemical and processing industries account for 25% of manufacturing energy use.<br />

These industries consume fossil resources as both fuel and feedstock, and produce large amounts of waste and emissions.<br />

In turn, as exemplified by the U.S. Government’s 2020 Vision, these industries face major challenges to meet the needs of the present without compromising the needs of future generations in the face of increasing industrial competitiveness. This translates into the need to make processes much more energy efficient, safer and more flexible, and to reduce emissions to meet the many competitive challenges within a global economy. Beyond the need to reduce design cycle times and costs, major challenges where simulation has an important role include:<br />

■ ‘Scale-up’, to extrapolate a process from laboratory and pilot plant scale to the industrial plant scale, which may require an investment of many millions of dollars.<br />

■ Process intensification, to combine different processes into smaller, compact and efficient units, instead of treating them as individual processes.<br />

■ Retrofitting, to upgrade a plant to become more efficient within the many constraints of the existing footprint of the plant.<br />

This issue of <strong>ANSYS</strong> <strong>Solutions</strong> provides examples of these in offshore oil production, waste water treatment and chemical processing; many other examples highlighting the benefits to be obtained can be found on the <strong>ANSYS</strong> CFX Website at www.ansys.com/cfx.<br />

These problems are inherently multi-scale, with the combination of different physical and chemical processes at the molecular level, and the macro-flow processes transporting a reacting fluid around the complex geometries of a large industrial chemical reactor. The recent advances in modeling capabilities, combined with the scalable parallel performance of low-cost hardware and the powerful geometrical and meshing tools in the <strong>ANSYS</strong> software modules, open up many new opportunities to achieve major new benefits in the complex and demanding world of the chemical and process industries.<br />

Offshore platform<br />


Case-in-point:<br />
Integral Two-Phase Flow Modeling in Natural Gas Processing<br />

Customized version of CFX reduces costs 70% compared to the conventional route without CFD in developing gas separator equipment.<br />

By Marco Betting, Team Leader Twister Technology; Bart Lammers, Fluid Dynamics Specialist; and Bart Prast, Fluid Dynamics Specialist, Twister BV<br />

Natural gas processing involves dedicated systems to remove water, heavy hydrocarbons and acidic vapors from the gas stream to make it suitable for transportation to the end-customer. From a process engineering perspective, these systems are combinations of flashes, phase separations, flow splitters, and heat and mass exchangers exhaustively designed to achieve required export specifications.<br />

While the process engineer is concerned with finding the optimal system configuration using pre-defined process steps and equilibrium thermodynamics, the flow-path designer tries to optimize the performance of each individual process step in the system based on an understanding of both two-phase flow behavior and non-equilibrium thermodynamics. The fluid dynamics interaction between subsequent process steps is not always taken into account to its full extent, even though this can strongly influence the total system performance.<br />

[Figure: Normalized total C8 fraction in the vortex section of the Twister Supersonic Separator, showing a uniform C8 distribution and the resulting C8 separation.]<br />

Developing and designing new equipment for the process industry is a time-consuming and expensive exercise. Twister BV (www.twisterbv.com) offers innovative gas processing solutions that can play an essential role in meeting these challenges.<br />

[Figure: Twister Supersonic Separator schematic. Labels: Saturated Feed Gas (100 bar, 20ºC); Laval Nozzle (30 bar, -40ºC); Supersonic Wing (Mach 1.3, 500 m/s); Cyclone Separator (300,000 g); Diffuser; Dry Gas (70 bar, 5ºC); Liquids + Slip-Gas (70 bar, 5ºC); Expander; Compressor; Liquid; Vapor.]<br />

In Twister, the feed gas is expanded to supersonic velocity, thereby creating a homogeneous mist flow. During the expansion, a strong swirl is generated via a delta wing, causing the droplets to drift toward the circumference of the tube. Finally, a co-axial flow splitter (vortex finder) skims the liquid-enriched flow from the dried flow in the core. The two flows are recompressed in co-axial diffusers, resulting in a final pressure approximately 35% lower than the feed pressure.<br />

Twister separator<br />
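The station conditions quoted in the schematic can be loosely cross-checked with textbook ideal-gas isentropic flow relations. This is an illustrative sketch only: the specific-heat ratio of 1.31 is an assumed value for a methane-rich gas, and the condensing, real-gas flow modeled in the customized CFX code deviates from these formulas, particularly in pressure.

```python
# Hedged cross-check of the Laval-nozzle station using ideal-gas
# isentropic relations. GAMMA = 1.31 is an assumed value for a
# methane-rich natural gas, not a figure from the article.

GAMMA = 1.31      # ratio of specific heats (assumed)
T0 = 293.15       # feed stagnation temperature in K (20 degC, from the schematic)
P0 = 100.0        # feed stagnation pressure in bar (from the schematic)
M = 1.3           # Mach number at the supersonic wing (from the schematic)

def static_temperature(t0, mach, gamma=GAMMA):
    """Static temperature after an isentropic expansion to a given Mach number."""
    return t0 / (1.0 + 0.5 * (gamma - 1.0) * mach**2)

def static_pressure(p0, t_ratio, gamma=GAMMA):
    """Static pressure from the temperature ratio via the isentropic relation."""
    return p0 * t_ratio ** (gamma / (gamma - 1.0))

T = static_temperature(T0, M)
P = static_pressure(P0, T / T0)
print(f"T ~ {T - 273.15:.0f} degC, p ~ {P:.0f} bar")
```

The temperature comes out close to the -40ºC quoted for the Laval nozzle, while the ideal-gas pressure lands somewhat above the quoted 30 bar, as one would expect once condensation and real-gas effects are accounted for.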



The team has been developing the Twister Supersonic Separator, which is based on a unique combination of known physical processes, combining aero-dynamics, thermo-dynamics and fluid-dynamics to produce a revolutionary gas conditioning process. The route from a new Twister tube concept to marketable hardware via several production field trials has been a major undertaking. Reducing costs in the cycle of designing, testing and redesigning Twister prototypes for the challenging conditions involved in high-pressure sour natural gas processing is of great importance. The introduction of computational fluid dynamics in the Twister development four years ago resulted in a cost reduction of approximately 70% compared to the conventional route without CFD.<br />

Customized Version of CFX<br />

Twister BV and <strong>ANSYS</strong> CFX have jointly developed a customized version of CFX 5.6*, capable of modeling non-equilibrium phase transition in multi-component real gas mixtures. The consulting team at <strong>ANSYS</strong> was chosen to perform this work because of their understanding of the needs of the industry and the flexible nature of CFX-5, which made it suitable for implementing the specialized features required. The specific features of this customized two-phase CFD code are:<br />

• Full equations of state, including the effects of<br />

phase change<br />

• Multi-component gases with several<br />

condensable species<br />

• A homogeneous nucleation model to<br />

determine the droplet number density<br />

• A growth model, to allow for the change in size<br />

of the particles, through condensation and<br />

evaporation<br />

• Droplet coalescence models depending on<br />

droplet size, number density and turbulence<br />

intensities<br />

• Slip models to predict the separation of the<br />

droplets from the continuous phase<br />

• Accounts for turbulent dispersion<br />

• Aforementioned models are coupled via mass,<br />

momentum and energy equations<br />

• Energy is affected by release of latent heat<br />

during condensation/evaporation<br />
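For illustration only, the classical homogeneous nucleation formulas behind models of this kind can be sketched in a few lines of Python. The property values below are placeholders for a water-like condensable, not Twister's gas mixture or the actual CFX implementation:<br />

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K

def critical_radius(sigma, v_m, T, S):
    """Kelvin critical radius r* = 2*sigma*v_m / (k_B*T*ln S)."""
    return 2.0 * sigma * v_m / (K_B * T * math.log(S))

def nucleation_rate(sigma, v_m, T, S, J0=1e32):
    """Classical homogeneous nucleation rate J = J0 * exp(-dG*/(k_B*T)),
    with dG* = (16*pi/3) * sigma**3 * v_m**2 / (k_B*T*ln S)**2.
    J0 is an order-of-magnitude kinetic prefactor, a placeholder here."""
    dG = (16.0 * math.pi / 3.0) * sigma**3 * v_m**2 / (K_B * T * math.log(S))**2
    return J0 * math.exp(-dG / (K_B * T))

# Rough values for supercooled water at 233 K, supersaturation S = 5
sigma = 0.076           # surface tension, N/m
v_m = 3.0e-29           # molecular volume, m^3
T, S = 233.0, 5.0
r_star = critical_radius(sigma, v_m, T, S)
J = nucleation_rate(sigma, v_m, T, S)
```

At low supersaturation the exponential barrier suppresses nucleation almost completely, which is why the rapid expansion in the Laval nozzle is needed to trigger a sharp nucleation pulse.<br />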

The development and validation of the<br />

customized CFX code was of paramount importance<br />

in maturing the Twister separator for commercial<br />

application in the oil & gas industry.<br />

This custom version of CFX-5 includes all<br />

first-order effects useful for determining the<br />

performance of liquid/gas separators preceded by an<br />

expander or throttling valve.<br />

Twister and LTX separators<br />

[Figure labels: G + L dispersed → G + L stratified → G + L dispersed → L]<br />

For a process engineer, the quality of the gas coming over the<br />

top of the separator is determined by the phase equilibrium<br />

after an isenthalpic flash, presuming a certain liquid carry-over.<br />

The flow-path designer is concerned with the reduction of the<br />

carry-over by optimizing the flow variables of the separator,<br />

based on a feed with presumed droplet sizes.<br />


Using the customized two-phase code, the flow path designer can study the<br />

influence of the geometry of a choke valve on the resulting droplet size<br />

distribution and better assess the performance of the separator based thereon.<br />


[Figure: LTX separator fed by a choke valve, with Mach number contours (scale 0.0 to 1.4); stream conditions labeled P, T, flow, LGR, composition, droplet size and droplet number]<br />

Improving Facility Performance<br />

Essential for the optimization of the separation<br />

performance of Twister is the prediction of droplet<br />

sizes. The droplet size is determined by both the vapor<br />

diffusion rate toward the droplets and the mutual<br />

agglomeration of these droplets. The size distribution<br />

mainly depends on the time interval of the nucleation<br />

pulse. The droplet size distribution determines the drift<br />

velocity of the liquid phase and hence determines the<br />

separation efficiency. Appropriate models for this have<br />

been implemented.<br />
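The role of droplet size in the drift velocity can be seen from a simple Stokes-drag estimate. The sketch below uses illustrative fluid properties, not Twister design data:<br />

```python
# Stokes-regime drift velocity of a droplet in a strong swirl field:
#   v_drift = d^2 * (rho_l - rho_g) * a / (18 * mu)
# All numbers are illustrative placeholders.

def stokes_drift_velocity(d, rho_l, rho_g, mu, g_force):
    """Drift velocity (m/s) of a droplet of diameter d (m) under a
    centripetal acceleration of g_force times standard gravity."""
    a = g_force * 9.81                     # centripetal acceleration, m/s^2
    return d**2 * (rho_l - rho_g) * a / (18.0 * mu)

# 1-micron droplet in dense gas, at the ~300,000 g quoted for the cyclone section
v = stokes_drift_velocity(d=1e-6, rho_l=600.0, rho_g=30.0, mu=1.2e-5, g_force=3e5)
```

Because the drift velocity scales with the square of the droplet diameter, growing droplets from nanometers to microns before the cyclonic section is what makes separation feasible within the short residence time.<br />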

This customized CFX code also enables the<br />

process engineer to better understand the relationship<br />

between the performance of subsequent process<br />

steps, e.g., the operation of a Low Temperature<br />

Separator (LTS) fed by a choke valve.<br />

Twister BV and <strong>ANSYS</strong> CFX have completed a<br />

powerful CFD code validated for natural gas processes.<br />

This unique CFD capability enables process engineers<br />

to optimize engineering practices, while increasing the<br />

performance of gas processing facilities. ■<br />

*I.P. Jones et al., “The use of coupled solvers for multiphase and reacting flows”; 3rd International Conference of CSIRO, 10–12 December 2003, Melbourne, Australia.<br />



<strong>ANSYS</strong> for Virtual Surgery<br />

FEA is a valuable tool that aids doctors in orthopedic operations.<br />

By András Hajdu and Zoltán Zörgö<br />

Institute of Informatics<br />

<strong>University</strong> of Debrecen, Hungary<br />

Analysis, imaging and visualization technologies<br />

are being applied increasingly in medical applications,<br />

particularly in evaluating different approaches to<br />

surgery and determining the best ways to proceed in<br />

the operation. In this growing field, one of the primary<br />

focuses of our work applies finite element analysis to<br />

orthopedic surgery: specifically, the specialized area of<br />

osteotomy, where bones are surgically segmented<br />

and repositioned to correct various deformities. We<br />

chose <strong>ANSYS</strong> for this work because of the reliability<br />

and flexibility of the software in handling the irregular<br />

geometries and nonlinear properties inherent in these<br />

materials.<br />

Medical imaging technologies such as CT, MRI,<br />

PET or SPECT deliver slice or projection images<br />

of internal areas of the human body. These tools are<br />

generally used to visualize configurations of bones,<br />

organs and tissue, but they also have the ability<br />

to export image data and additional information in<br />

commonly known medical file formats like DICOM.<br />

These files then can be processed by third-party<br />

computer programs for assessing and diagnosing<br />

the condition of the patient and planning surgical<br />

intervention, that is, how the surgical procedure will be<br />

performed. Other very promising fields include<br />

telesurgery, virtual environments in medical school<br />

education and prototype modeling of artificial joints.<br />

The goal of the research is to develop computer<br />

applications in the field of orthopedic surgery,<br />

especially osteotomy intervention procedures based<br />

on CT images. The team at the Institute of Informatics<br />

uses this simulation technology to examine theories<br />

underlying new types of surgeries as well as to aid<br />

doctors in treating individual patients undergoing hip<br />

joint correction. These two approaches have many<br />

common tasks: extracting image data from diverse<br />

medical image exchange format files, enhancing<br />

images, choosing the appropriate segmentation<br />

techniques, CAD-oriented volume reconstruction,<br />

data exchange with FEM/FEA tools, and geometric<br />

description of virtual surgery.<br />



Building Orthopedic Models<br />

CT data files. The first step in building an<br />

orthopedic model is extracting an image file from<br />

medical data exchange formats. As CT images<br />

represent the X-ray absorption of a given cross-section,<br />

the intensity values of their pixels represent<br />

this 12-bit absorption rate, rather than common color<br />

ranges. Since the slice density is usually reduced to<br />

a minimum for in-vivo scanning, considerable<br />

information often is lost, especially in complex regions<br />

of the human body. For visualization purposes, this<br />

deficiency can be compensated with interpolation<br />

techniques, but no lost anatomical data can be<br />

recovered in this way. Using these files for FEA work<br />

thus often requires further enhancement.<br />
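A minimal sketch of such slice interpolation, assuming simple linear blending between two neighboring CT slices stored as NumPy arrays (a simplification of real visualization pipelines):<br />

```python
import numpy as np

def interpolate_slices(slice_a, slice_b, n_between):
    """Generate n_between synthetic slices between two CT slices by
    linear blending. Useful for visualization only: anatomy that fell
    between the scanned slices cannot be recovered this way."""
    out = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)
        out.append((1.0 - t) * slice_a + t * slice_b)
    return out

# Toy example: blend between an all-zero and an all-100 slice
a = np.zeros((4, 4))
b = np.full((4, 4), 100.0)
mid = interpolate_slices(a, b, 1)[0]    # midpoint slice, every pixel 50.0
```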

Image enhancement and segmentation. As<br />

given tissue structures have their own absorption rate<br />

intervals, a windowing technique might be sufficient<br />

for a simple visualization. However, because these<br />

intervals can overlap, other tissue parts that differ from<br />

our VOI (Volume Of Interest) remain in the image after<br />

applying the intensity window. Some conventional<br />

procedures like morphological or spectral-space<br />

filtering must be applied, as well as specific<br />

techniques for CT segmentation. We found that other<br />

methods, such as region growing and gradient-based<br />

segmentation, achieved excellent results for bone<br />

structures.<br />
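The windowing-plus-region-growing idea can be sketched as follows; the intensity window and toy image below are hypothetical, not clinical values:<br />

```python
import numpy as np
from collections import deque

def window(slice_hu, lo, hi):
    """Keep pixels whose absorption value falls in [lo, hi)."""
    return (slice_hu >= lo) & (slice_hu < hi)

def region_grow(mask, seed):
    """4-connected region growing from a seed inside the windowed mask."""
    grown = np.zeros_like(mask, dtype=bool)
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                and mask[y, x] and not grown[y, x]):
            grown[y, x] = True
            q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return grown

# Toy 2-D "slice": a bright bone-like blob plus an unrelated bright pixel
img = np.zeros((8, 8), dtype=np.int16)
img[2:5, 2:5] = 1800            # bone-range intensities
img[7, 7] = 1900                # overlapping-intensity tissue elsewhere
bone = region_grow(window(img, 1500, 2200), seed=(3, 3))
```

Region growing from a seed inside the bone discards the bright pixel elsewhere in the slice, which plain intensity windowing would have kept.<br />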

Volume reconstruction. The final goal of the<br />

project is to develop an application to be used in<br />

surgery planning on a routine basis by medical staff<br />

without experience in using CAD-related software. We<br />

wanted this application to be able to transfer structural<br />

data into a finite element modeling and analysis<br />

software. Thus, volumetric information must be<br />

represented in a geometrically appropriate way. There<br />

is a difference between simple surface rendering and<br />

geometrical volume reconstruction in CAD systems.<br />

Volumetric data has to be represented using solid<br />

modeling primitives, and reconstructed using related<br />

concepts: keypoints, parametric splines, line loops,<br />

ruled and planar surfaces, volumes and solids.<br />

When extracting contour points of ROIs (Regions<br />

Of Interest), we need to reduce the number of points<br />

to approximately 10-15% by keeping only points with<br />

rapidly changing surroundings. These points then can<br />

be interpolated with splines, splines assembled to<br />

surfaces and surfaces to solids. The major difficulty is<br />

that CAD-related systems are designed to work with<br />

regular-shaped objects, and bone structures are not<br />

like that. However, to be able to execute FEA, it is<br />

necessary to use this approach. Moreover, virtual<br />

surgery interventions have to be carried out on<br />

this representation, or in such a way that proper<br />

geometrical representation of the modified bone<br />

structure remains easy to regain. As is often the case,<br />

conversion problems may occur when exchanging<br />

data between CAD systems, so we perform the above<br />

volume reconstruction procedure directly in the FEM<br />

software using built-in tools provided in the package.<br />
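One simple way to keep only the contour points with "rapidly changing surroundings" is to rank points by their local turning angle. This sketch is an assumption about the kind of criterion involved, not the team's actual algorithm:<br />

```python
import numpy as np

def reduce_contour(points, keep_frac=0.12):
    """Keep roughly keep_frac of the points on a closed contour,
    preferring those where the local direction changes fastest.
    The retained points would then be interpolated with splines."""
    pts = np.asarray(points, dtype=float)
    prev = np.roll(pts, 1, axis=0) - pts          # edge to previous point
    nxt = np.roll(pts, -1, axis=0) - pts          # edge to next point
    cosang = np.sum(prev * nxt, axis=1) / (
        np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1))
    # Turning angle: 0 on a straight run, large at a sharp corner
    turning = np.pi - np.arccos(np.clip(cosang, -1.0, 1.0))
    n_keep = max(3, int(len(pts) * keep_frac))
    keep = np.sort(np.argsort(turning)[-n_keep:])  # preserve contour order
    return pts[keep]

# Example: 40 points around a square; only the 4 corners survive
square = ([(i, 0) for i in range(10)] + [(10, i) for i in range(10)]
          + [(10 - i, 10) for i in range(10)] + [(0, 10 - i) for i in range(10)])
corners = reduce_contour(square, keep_frac=0.1)
```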

After testing many FEM programs, we chose <strong>ANSYS</strong><br />

software for this task. Figure 1 illustrates how we<br />

reconstructed an 8-inch section of a femur (a pipe-like<br />

bone) in <strong>ANSYS</strong> using this procedure. The entire<br />

reconstruction procedure was implemented in a simple<br />

<strong>ANSYS</strong> script file.<br />

Figure 1. Steps of volume (bone) reconstruction in <strong>ANSYS</strong>.<br />

Figure 2. Part of theoretical path and planar<br />

intersection of the cutting tool.<br />

A natural extension of this method seems to be<br />

suitable also for bones containing more parts, holes,<br />

etc. In this case, Boolean operations between solids<br />

provided by <strong>ANSYS</strong> give us a powerful tool. Another<br />

challenging problem currently being investigated is the<br />

reconstruction of those parts of the bones where the<br />

CT slices contain varying topology (e.g., when<br />

reaching a junction in some special bones).<br />

Approaches to Virtual Surgery<br />

Planar approach. There are some cases when<br />

information from 2-D slices is sufficient for performing<br />

virtual surgery instead of 3-D solids. For example,<br />

the first subject of our project – human femur<br />

lengthening using helical incision – provided a good<br />

opportunity for experimenting with 3-D interventions<br />

performed in 2-D. By taking the intersection (dark<br />

section on Figure 2) of the theoretical cutting tool path<br />

(Figure 2 left) with the planes of the individual<br />

CT slices, we subtracted these profiles from the bone<br />

section profile (Figure 3).<br />
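The per-slice Boolean step can be sketched with binary masks standing in for the bone and tool profiles (a simplification of the contour-based geometry actually used):<br />

```python
import numpy as np

def subtract_tool(bone_mask, tool_mask):
    """Per-slice Boolean subtraction of the cutting-tool profile
    from the bone section profile."""
    return bone_mask & ~tool_mask

# Toy profiles: a filled bone cross-section and a straight cut through it
bone = np.ones((5, 5), dtype=bool)
tool = np.zeros((5, 5), dtype=bool)
tool[2, :] = True                       # the tool's planar intersection
remaining = subtract_tool(bone, tool)   # bone profile with the cut removed
```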

After the volume reconstruction using this<br />

technique, we obtained the modified bone structure<br />

without the need for further intervention. Another<br />

possibility is to use <strong>ANSYS</strong> to build up the geometric<br />

model of the bone and the cutting tool from their<br />

boundary lines, then to remove the solid defined by<br />

the path of the cutting tool. The team wrote an <strong>ANSYS</strong><br />

script to obtain fast and automatic model creation.<br />

In the case of the hip joint correction, some<br />

intervention also might be simulated in 2-D, but<br />

designating and registering ROIs on the slice set is<br />

more difficult. However, handling volumes as a set of<br />

unsorted 3-D points with additional attributes serves<br />

as an intermediate solution.<br />

Three-dimensional approach. In the first subject,<br />

the 3-D approach adopted by us was the combination<br />

of the volume reconstruction technique and<br />

conventional CAD modeling. We reconstructed the<br />

middle part (diaphysis) of the human femur, and, in the<br />

same coordinate system, using the axis of the actual<br />

bone, we constructed the solid object representing the<br />

path of the cutting tool. This was achieved by applying<br />

helical extrusion along this axis on a rectangle,<br />

using the parameters of the actual osteotomy. By<br />

subtracting these solids from each other, we<br />

Figure 3. Subtraction of the cutting tool from a<br />

bone section profile in 2-D, and the 3-D outcome.<br />

Figure 4. Subtracting a helix from the diaphysis.<br />



Figure 5. Tetrahedron mesh for GL visualization<br />

and FEA.<br />

Figure 6. Different bar hole types and variable<br />

helix paths to improve efficiency of lengthening.<br />

obtained the desired solid object (Figure 4). This<br />

Boolean subtraction was also executed by <strong>ANSYS</strong>.<br />
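The helical tool path itself is easy to parameterize. This sketch samples points along a helix about the bone axis; the radius, pitch and number of turns are illustrative, not actual osteotomy parameters:<br />

```python
import math

def helical_cut_path(radius, pitch, turns, n=200):
    """Sample n+1 points along a helix about the bone axis (z).
    pitch is the axial advance per full turn."""
    pts = []
    for i in range(n + 1):
        t = turns * 2.0 * math.pi * i / n          # angle swept so far
        pts.append((radius * math.cos(t),
                    radius * math.sin(t),
                    pitch * t / (2.0 * math.pi)))  # z advances with the angle
    return pts

# Illustrative osteotomy path: 12 mm radius, 30 mm pitch, 1.5 turns
path = helical_cut_path(radius=12.0, pitch=30.0, turns=1.5)
```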

As previously mentioned, we also work on<br />

pre-operative analysis and comparison of hip joint<br />

osteotomy. The 3-D reconstruction of this region<br />

is more difficult because of the information loss<br />

during the CT scanning procedure. There are many<br />

consecutive slice pairs with large differences. In this<br />

case, interpolation gives unsatisfactory results, and we<br />

are developing general methods to reduce the level of<br />

user interaction required.<br />

Our interface for virtual surgery is GLUT-based,<br />

containing I/O tools for importing existing meshes and<br />

exporting the model into a FEM/FEA environment.<br />

Besides using similar scripts for building up the<br />

geometry as described above, we also take advantage<br />

of the mesh generator and manager capabilities of<br />

<strong>ANSYS</strong> in data exchange. That way, we can import a<br />

tetrahedron mesh used in OpenGL technology into<br />

<strong>ANSYS</strong> for FEA, for example, and <strong>ANSYS</strong><br />

geometry also can be exported as a tetrahedron mesh<br />

for visualizing purposes. Figure 5 shows an example of<br />

a tetrahedron mesh visualization in OpenGL.<br />

FEM/FEA results. Using the volume<br />

reconstruction approach, we needed only to translate<br />

our internal representation to the scripting language.<br />

Material types and parameters also can be defined<br />

using scripts. The bone material model we used is a<br />

linear isotropic one. After applying constraints and<br />

forces on the nodes of the solids, we tested the<br />

stress and displacement of the bone structure. Using<br />

the obtained results, a comparison can be made for<br />

the known osteotomy interventions of a certain type.<br />

For femur lengthening, our experience indicated that<br />

the highest stress values occurred around the starting<br />

and ending boreholes of the cut, so we also<br />

considered the usability of different borehole types<br />

and helices with variable pitch, as shown in Figure 6. ■<br />

Dr. András Hajdu is an instructor with the Institute of<br />

Informatics at the <strong>University</strong> of Debrecen in Hungary and<br />

can be contacted at hajdua@inf.unideb.hu. His research is<br />

supported by OTKA grants T032361, F043090 and IKTA<br />

6/2001. Zoltán Zörgö (zorgoz@inf.unideb.hu) is a PhD student<br />

at the Institute.<br />

References and Resources for Further Reading<br />

H. Abé, K. Hayashi and M. Sato (Eds.): Data Book on<br />

Mechanical Properties of Living Cells, Tissues, and<br />

Organs, Springer-Verlag, Tokyo, 1996.<br />

Z. Csernátony, L. Kiss, S. Manó, L. Gáspár and K.<br />

Szepesi: Multilevel callus distraction. A novel idea to<br />

shorten the lengthening time, Medical Hypotheses,<br />

2002, accepted.<br />

R. C. Gonzalez and R. E. Woods: Digital image<br />

processing, Addison-Wesley, Reading,<br />

Massachusetts, 1992.<br />

A. L. Marsan: Solid model construction from 3-D<br />

images (PDF, PhD dissertation), The <strong>University</strong> of<br />

Michigan, 1999.<br />

K. Radermacher, C. V. Pichler, S. Fischer and G. Rau:<br />

3-D Visualization in Surgery, Helmholtz-Institute<br />

Aachen, 1998.<br />

L. A. Ritter, M. A. Liévin, R. B. Sader, H-F. B. Zeilhofer<br />

and E. A. Keeve: Fast Generation of 3-D Bone Models<br />

for Craniofacial Surgical Planning: An Interactive<br />

Approach, CARS/Springer, 2002.<br />

M. Sonka, V. Hlavac and R. Boyle: Image processing,<br />

analysis, and machine vision, Brooks/Cole Publishing<br />

Company, Pacific Grove, California, 1999.<br />

Tsai Ming-Dar, Shyan-Bin Jou and Ming-Shium Hsieh:<br />

An Orthopedic Virtual Reality Surgical Simulator<br />

(PDF), ICAT 2000.<br />

Zoltán Zörgö, András Hajdu, Sándor Manó, Zoltán<br />

Csernátony and Szabolcs Molnár: Analysis of a<br />

new femur-lengthening surgery, IEEE IASTED<br />

International Conference on Biomechanics (BioMech<br />

2003) (2003), Rhodes, Greece, Biomechanics/34-38.<br />

Web Links to More Information<br />

http://graphics.stanford.edu/data/3-Dscanrep/<br />

http://image.soongsil.ac.kr/software.html<br />

http://medical.nema.org<br />

http://www.ablesw.com/3-D-doctor/<br />

http://wwwr.kanazawa-it.ac.jp/ael/imaging/synapse<br />

http://www.materialise.com<br />

http://www.nist.gov/iges<br />



FEA in Micro-Robotics<br />

Researchers use <strong>ANSYS</strong> to develop micron-sized, self-powered<br />

mobile mechanisms.<br />

By Bruce Donald, Craig McGray, and Igor Paprotny of the Micro-Robotics Group, Computer Science Department,<br />

Dartmouth College; Daniela Rus, Department of Electrical Engineering and Computer Science, Massachusetts<br />

Institute of Technology; and Chris Levey, Dartmouth Thayer School of Engineering<br />

Mobile robots with dimensions in<br />

the millimeter to centimeter range<br />

have been developed, but the<br />

problem of constructing such<br />

systems at micron scales remains<br />

largely unsolved.<br />

The anticipated applications for mobile<br />

micro-robots are numerous: manipulation of biological<br />

cells in fighting cancer, for example, or stealth<br />

surveillance technology where clouds of flying<br />

micro-robots could monitor sites relatively undetected<br />

by sight or radar. Micrometer-sized robots could<br />

actively participate in the self-assembly of higher-order<br />

structures, linking to form complex assemblies<br />

analogous to biological systems. One could envision<br />

such self-assembly to take place inside a human<br />

body, growing prosthetic devices at their destination,<br />

for example, thus alleviating the need for intrusive<br />

surgery.<br />

Targeting these types of potential future<br />

micro-robotic applications, the Micro-Robotics Group<br />

at Dartmouth College has been developing a new<br />

class of untethered micro-actuators. Measuring less<br />

than 80 µm in length, these actuators are powered<br />

through a novel capacitive-coupled power delivery<br />

mechanism, allowing actuation without a physical<br />

connection to the power source. Finite element<br />

analysis using <strong>ANSYS</strong> allowed us to test the feasibility<br />

of the power delivery mechanism prior to actual<br />

fabrication of the device.<br />

The micro-actuators are designed to move in<br />

a stepwise manner utilizing the concept of scratch-drive<br />

actuation (SDA). The functionality of a scratch-drive<br />

Figure 1. Concept behind scratch-drive actuation, which<br />

moves the micro-actuators in a stepwise manner. An electrical<br />

potential applied between the back-plate (1) and an<br />

underlying substrate (2) causes the back-plate to bend<br />

down, storing strain energy, while the edge of a bushing<br />

(3) is pushed forward. When the potential is removed from<br />

the back-plate, the strain energy is released and the backplate<br />

snaps back to its original shape, causing the actuator<br />

to move forward.<br />

actuator is shown in Figure 1. The actuation cycle<br />

begins when an electrical potential is applied between<br />

the back-plate and an underlying substrate. The<br />

back-plate bends downward, storing strain energy,<br />

while the edge of a bushing is pushed forward. When<br />

the potential is removed, the strain energy is released<br />

and the back-plate snaps back to its original shape.<br />

The actuation cycle is now complete, and the<br />

actuator has taken a step forward.<br />

In contrast to traditional SDA power delivery<br />

schemes (such as using rails or spring tethers), our<br />

designs induce the potential onto the back-plate using<br />

www.ansys.com <strong>ANSYS</strong> <strong>Solutions</strong> | Summer 2004

a capacitive circuit formed between underlying<br />

interdigitated electrodes and the back-plate of the<br />

actuator. A circuit representation of the system as<br />

shown in Figure 2 indicated that the back-plate<br />

potential should be approximately midway between<br />

the potentials of the underlying electrodes. We<br />

validated the power delivery concept for the specific<br />

geometry of our design by modeling the system<br />

through electro-static analysis in <strong>ANSYS</strong>. Figure 3<br />

shows the volume model of the actuator and the<br />

electrode field.<br />
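The circuit-model estimate can be reproduced with a one-line charge-balance calculation. The capacitance values below are placeholders; only their ratio matters for a floating conductor:<br />

```python
def floating_plate_potential(caps_and_potentials):
    """Potential of an isolated conductor capacitively coupled to several
    electrodes. With zero net charge on the floating plate,
    V = sum(C_i * V_i) / sum(C_i), a capacitive voltage divider."""
    total_c = sum(c for c, _ in caps_and_potentials)
    return sum(c * v for c, v in caps_and_potentials) / total_c

# Equal overlap with the 0 V and 100 V interdigitated electrode combs
v_plate = floating_plate_potential([(1.0e-12, 0.0), (1.0e-12, 100.0)])
```

With equal coupling to the two electrode combs the plate floats to about 50 V, in line with the electrostatic FEA result.<br />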

The results of the analysis are shown in Figure 4,<br />

indicating the electrical potentials of the conductive<br />

elements in the model. Additionally, a cut through the<br />

air element shows the electrical potential from the field<br />

propagating through it. The potential of the electrodes<br />

in this example was set to 0 V (blue) and 100 V (red),<br />

which represented the model boundary conditions.<br />

The potential of the back-plate was computed to<br />

be approximately 50 V, validating the circuit-model<br />

approximation. We also discovered that the potential<br />

of the back-plate changes only slightly as a function of<br />

the orientation of the drive in relation to the electrode<br />

field. This indicates that the actuator can be powered<br />

regardless of its orientation, so long as the device<br />

remains inside the electrode field.<br />

Additionally, we used the <strong>ANSYS</strong> model to<br />

visualize the intensity of the electric field propagating<br />

through the bottom layer of the insulation material, as<br />

shown in Figure 5. We suspect charging of the device<br />

due to charge-migration in the direction of the field,<br />

and charges embedding in the insulating layer<br />

underneath the drive. We anticipate that these charges<br />

will cluster along the areas where the electric field is<br />

the strongest. In future experiments, attempts will be<br />

made to image this pattern using a scanning electron<br />

microscope.<br />

Following the finite element analysis, we have<br />

successfully fabricated and actuated an untethered<br />

scratch-drive actuator capable of motion at speeds of<br />

up to 1.5 mm/s, a good pace for such a tiny device.<br />

Our current work is focused on how to apply these<br />

actuators to create steerable autonomous<br />

micro-robotic systems. We anticipate further use of<br />

<strong>ANSYS</strong> to model the electrostatic and mechanical<br />

interaction of the system components to further<br />

shorten our development cycle. In particular, we plan<br />

to use the <strong>ANSYS</strong> coupled-physics solver to<br />

determine the snap-down and operational<br />

characteristics of our actuators. ■<br />

Figure 2. Simplified capacitive-circuit representation<br />

of the system.<br />

Figure 3. Volume model of the actuator and the electrode<br />

field, prior to solving the model in <strong>ANSYS</strong>.<br />

Figure 4. Results of the electrostatic analysis, indicating the<br />

calculated potentials of the different model components<br />

after applying the boundary conditions.<br />


Figure 5. Intensity of the electric field propagating through<br />

the bottom insulation layer of the actuator.<br />


Software Profile<br />


The New Face of<br />

<strong>ANSYS</strong> ICEM CFD<br />

V5.0 represents a significant redesign for<br />

the market leader in mesh generation.<br />

Judd Kaiser, <strong>Technical</strong> Solution Specialist<br />

The new user interface for <strong>ANSYS</strong> ICEM CFD brings<br />

important benefits to all users and has undergone<br />

extensive testing, with earlier releases of<br />

AI*Environment and <strong>ANSYS</strong> ICEM CFD 4.CFX utilizing<br />

essentially the same interface.<br />

The learning curve for new users can be<br />

dramatically shortened by way of an updated layout<br />

consisting of tabbed panels, a hierarchical model tree<br />

and intuitive icons.<br />

Existing users can look forward to enhanced<br />

meshing technology in a single<br />

unified environment for shell,<br />

tet, prism, hex and hybrid mesh<br />

generation. Performance improvement<br />

highlights for these users<br />

include hotkeys (which provide<br />

one-click access to the most commonly<br />

used functions), selection<br />

filters and support for the Spaceball<br />

3-D motion controller.<br />

Getting Geometry In<br />

<strong>ANSYS</strong> ICEM CFD is well-known<br />

for its ability to get geometry from<br />

virtually any source: native CAD<br />

packages, IGES, ACIS or other<br />

formats. The package continues to<br />

be unique among mesh generators<br />

in its ability to use geometry in both CAD and faceted<br />

representations. Faceted geometry is commonly used<br />

for rapid prototyping (stereo lithography, STL), reverse<br />

engineering (where the STL geometry comes from<br />

techniques such as digital photo scan) and biomedical<br />

applications (where the geometry can come<br />

from techniques such as magnetic resonance<br />

imaging [MRI]).<br />

One major development is that V5.0 is the first<br />

version of <strong>ANSYS</strong> ICEM CFD capable of running<br />

within the <strong>ANSYS</strong> Workbench Environment. As the<br />

common platform for all <strong>ANSYS</strong> products, Workbench<br />

provides a common desktop for a wide range of CAE<br />

applications.<br />

<strong>ANSYS</strong> ICEM CFD remains the clear<br />

choice for meshing complex geometry.<br />

Shown is a tet/prism mesh for a race<br />

car wheel and suspension.<br />

With <strong>ANSYS</strong> Workbench V8.1 and<br />

<strong>ANSYS</strong> ICEM CFD V5.0 installed, <strong>ANSYS</strong> ICEM CFD<br />

meshing is exposed as the Advanced Meshing tab.<br />

Geometry can be transferred seamlessly from<br />

DesignModeler to <strong>ANSYS</strong> ICEM CFD.<br />

Fault-Tolerant Meshing<br />

Having the geometry in hand doesn’t do you any good<br />

if you can’t create a mesh. Fault-tolerant meshing<br />

algorithms remain the heart of the<br />

<strong>ANSYS</strong> ICEM CFD meshing suite.<br />

Using an octree-based meshing<br />

algorithm, <strong>ANSYS</strong> ICEM CFD Tetra<br />

generates a volumetric mesh of<br />

tetrahedral elements that are projected<br />

to the underlying surface model. This<br />

methodology renders the mesh<br />

independent of the CAD surface patch<br />

structure. This makes the meshing<br />

algorithm highly fault-tolerant – sliver<br />

surfaces, small gaps and surface<br />

overlaps cause no problem. The mesh<br />

has the ability to walk over small<br />

details in the model. Control is in the<br />

hands of the user, who has the flexibility<br />

to define which geometric details are<br />

ignored and which are represented<br />

accurately by the mesh. Tetra’s computation speed<br />

has been improved with V5.0. As an example, a test<br />

model of 250,000 elements and moderate geometry<br />

complexity required 32% less CPU time during<br />

meshing when compared with the previous version.<br />
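The octree idea behind Tetra can be sketched in miniature: cells that intersect the geometry are split recursively, and only afterward are nodes projected to the surface. The refinement predicate below (a plane inside a unit cube) is purely illustrative:<br />

```python
def refine(cell, needs_refine, max_depth, depth=0):
    """Octree-style refinement: recursively split any cell the predicate
    flags, until max_depth. A cell is (x, y, z, size). Real octree
    meshers then project the resulting nodes onto the surface model."""
    x, y, z, s = cell
    if depth == max_depth or not needs_refine(cell):
        return [cell]
    h = s / 2.0
    children = [(x + dx * h, y + dy * h, z + dz * h, h)
                for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
    out = []
    for c in children:
        out.extend(refine(c, needs_refine, max_depth, depth + 1))
    return out

# Refine only cells touching the plane x = 0.3 inside the unit cube,
# giving a mesh graded toward the "geometry"
near_plane = lambda c: c[0] <= 0.3 <= c[0] + c[3]
cells = refine((0.0, 0.0, 0.0, 1.0), near_plane, max_depth=2)
```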

The Delaunay tet meshing algorithm was added<br />

to the meshing suite in the previous version and has<br />

undergone numerous improvements, including<br />

support for volumetric mesh density controls.<br />

For viscous CFD applications, tet meshes can be<br />

improved by adding a layer of prism elements for<br />

improved near-wall resolution for boundary layer<br />



Prism before<br />

Prism after<br />

Images showing a cut through a hybrid hex/tet mesh of a wind tunnel/missile configuration before and after adding a layer of prism elements<br />

on the wind tunnel walls. Note that the prism layer is included for both the hex and tet zones (new feature in V5.0).<br />

flows. <strong>ANSYS</strong> ICEM CFD Prism also has been<br />

improved for this release. Prism layers can now be<br />

grown from surface mesh without the need for an<br />

attached volume tet mesh. Perhaps more significant,<br />

prism layers can now be grown from both tri and quad<br />

elements. This means that it is now possible to grow a<br />

prism layer in a combined hybrid hex/tet mesh.<br />

Integrated Hex Meshing<br />

<strong>ANSYS</strong> ICEM CFD Hexa remains a leader in getting<br />

high-quality, all-hex element meshes on geometries<br />

for which most competitors wouldn’t even attempt a hex<br />

mesh. The key to the approach is a block structure<br />

that is generated independent of the underlying<br />

arrangement of CAD (or faceted) surfaces. Think of the<br />

block structure as an extremely coarse all-hex mesh<br />

that captures the basic shape of the domain. Each<br />

block is then a parametric space in which the mesh<br />

can be refined. For CFD meshes, the ability of this<br />

parametric space to be distorted to follow anisotropic<br />

physics makes it very efficient at capturing<br />

key features of the flow with the lowest possible<br />

element count.<br />

Dassault Systemes recognized the power and<br />

promise of this methodology, selecting <strong>ANSYS</strong> ICEM<br />

CFD technology as the only hex meshing solution to<br />

be offered integrated into CATIA V5. CAA V5-based<br />

<strong>ANSYS</strong> ICEM CFD Hexa offers hex meshing that<br />

maintains parametric associativity to the native CATIA<br />

Design Analysis Model.<br />

New in V5.0, Hexa has been fully integrated into<br />

the new user interface. Hex meshing functions are<br />

housed in the blocking tab, and block structure<br />

entities are organized on the blocking branch of the<br />

model tree. In addition to reworking the user interface,<br />

several operations have been significantly streamlined.<br />

New methods of creating grid blocks have been<br />

added. The process of grouping curves and defining<br />

edge-to-curve projections has been made more<br />

efficient. Most operations now take advantage of<br />

multi-selection methods, such as box and polygon<br />

select. The addition of blocking hotkeys is a real<br />

time-saver, giving the user single-keystroke access to<br />

the most frequently used operations.<br />

For shell meshing, V5.0 offers unstructured 2-D<br />

blocks, combining the best of <strong>ANSYS</strong> ICEM CFD<br />

Hexa and the patch-based mesher formerly known as<br />

Quad. The creation of blocks for 2-D shell meshing<br />

has been automated, so that blocks can be created<br />

automatically for all selected surfaces.<br />

Mesh Editing<br />

<strong>ANSYS</strong> ICEM CFD offers maximum flexibility in its<br />

mesh editing tools, whether it’s via global smoothing<br />

algorithms or techniques to repair or recreate individual<br />

problem elements. These tools provide one last<br />

place to work around any bottlenecks.<br />

Noteworthy are new unstructured hex mesh<br />

smoothing algorithms, which strive for mesh smoothness<br />

and near-wall orthogonality while preserving<br />

mesh spacing normal to the wall. Two new quality<br />

metrics have been added in order to help quantify<br />

mesh smoothness: adjacent cell volume ratio and<br />

opposite face area ratio.<br />
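The first of these metrics is simple to state: for each cell, the worst volume ratio against its face neighbors. A sketch of that idea in Python (the dictionary-based mesh representation is a hypothetical stand-in, not ICEM CFD’s data model):<br>

```python
def adjacent_cell_volume_ratio(volumes, neighbors):
    """For each cell, the worst volume ratio against its face neighbors.

    A value near 1.0 means smooth size transitions; large values flag
    abrupt jumps.  `volumes` maps cell id -> cell volume; `neighbors`
    maps cell id -> ids of cells sharing a face with it.
    """
    ratios = {}
    for cell, vol in volumes.items():
        worst = 1.0
        for n in neighbors.get(cell, []):
            v = volumes[n]
            worst = max(worst, vol / v, v / vol)
        ratios[cell] = worst
    return ratios
```

Cells whose ratio exceeds a chosen threshold are the natural targets for the smoothing algorithms described above.<br>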

Scripting Tools<br />

<strong>ANSYS</strong> ICEM CFD provides a powerful suite of tools<br />

for geometry creation, model diagnosis and repair,<br />

meshing and mesh editing. All of these tools<br />

are exposed at a command line level, providing a<br />

formidable toolbox for the development of vertical<br />

applications. Every operation performed can be stored<br />

in a script for replay on model variants. This power can<br />

be extended by using the Tcl/Tk scripting language,<br />

enabling the development of entire applications.<br />

These tools enable users to get around virtually<br />

any geometry or meshing bottleneck, getting the mesh<br />

you need using the geometry you have. ■<br />
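The replay workflow amounts to recording a meshing recipe once and re-running it per geometry variant. A hedged sketch of a batch driver (the `icemcfd -batch -script` command line and the file names are hypothetical placeholders, not the documented invocation):<br>

```python
REPLAY_SCRIPT = "mesh_recipe.rpl"  # meshing recipe recorded once, interactively

def batch_commands(cad_files):
    """Build one batch-mode command line per CAD variant.

    The command-line flags here are illustrative placeholders; the point
    is that a single recorded script replays across every variant.
    """
    return [
        ["icemcfd", "-batch", "-script", REPLAY_SCRIPT, str(f)]
        for f in cad_files
    ]
```

Each command list could then be handed to `subprocess.run` to mesh the variants unattended overnight.<br>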

www.ansys.com <strong>ANSYS</strong> <strong>Solutions</strong> | Summer 2004

CFD Update: What’s New in Computational Fluid Dynamics<br />


Blood Flow Analysis Improves Stent-Grafts<br />

Coupled <strong>ANSYS</strong> and CFX fluid structure simulations help<br />

researchers develop optimal surgical recommendations,<br />

improved stent designs and proper stent placement.<br />

By Dr. Clement Kleinstreuer, Professor and Director of<br />

the Computational Fluid-Particle Dynamics Laboratory<br />

and Zhonghua Li, Doctoral Student, Biomechanical<br />

Engineering Research Group, North Carolina State<br />

<strong>University</strong><br />

One of the more intriguing challenges in modern<br />

medicine is the repair of abdominal aortic aneurysms<br />

using stent-grafts: tubular wire mesh stents<br />

interwoven with a synthetic graft material. The device<br />

is guided into place through a small incision in the<br />

groin and then propped open in the aorta, thus<br />

reinforcing the damaged area of the artery. For<br />

reasons that were not well understood until recently,<br />

however, some stent-grafts move out of place. This<br />

migration may again expose the weakened aortic wall<br />

to relatively high blood pressure, potentially leading to<br />

sudden aneurysm rupture and death.<br />

Developing an understanding of stent-graft<br />

migration and finding suitable solutions is our current<br />

work at the Biomechanical Engineering Research<br />

Group (BERG) of North Carolina State <strong>University</strong> in<br />

Raleigh. We are using computational fluid dynamics<br>

(CFD) interactively coupled with computational<br>

structural analysis. Using coupled CFX<br>

and <strong>ANSYS</strong> Structural models in these fluid structure<br />

interactions (FSI), we are learning what goes on inside<br />

the aorta before and after a stent-graft is surgically<br />

inserted, and how the stent-graft might migrate or<br />

dislodge.<br />

Most studies assume that artery walls are stiff<br />

with regard to the pressure changes that come with<br />

each heartbeat, and that arterial wall thicknesses<br />

are constant both axially and circumferentially. Neither<br />

is usually true, especially for older patients with<br />

hypertension, a group that suffers most from<br />

aneurysms.<br />
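The sensitivity to those two assumptions can be seen from the textbook thin-walled (Laplace) estimate of circumferential wall stress, stress = pressure × radius / thickness: stress scales with pressure and inversely with local wall thickness, so a rigid, uniform-thickness model hides exactly the hot spots of interest. A back-of-the-envelope illustration (generic formula, not the authors’ FSI model):<br>

```python
def hoop_stress(p_mmhg, radius_mm, thickness_mm):
    """Thin-walled Laplace estimate of circumferential wall stress, in kPa."""
    p_kpa = p_mmhg * 0.133322  # 1 mmHg = 0.133322 kPa
    return p_kpa * radius_mm / thickness_mm

# Same aneurysm radius, but hypertension plus a locally thinned wall:
normal = hoop_stress(120, 25, 2.0)   # ~200 kPa
thinned = hoop_stress(160, 25, 1.0)  # pressure up, thickness halved
# thinned / normal = (160/120) * (2.0/1.0), roughly 2.7x the stress
```

The combination of elevated pressure and local thinning multiplies, which is why hypertensive patients with non-uniform walls are exactly the group the constant-thickness assumption serves worst.<br>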

LEFT: Representation of a cross-section of an abdominal aortic aneurysm<br />

(AAA) with a bifurcating stent-graft. RIGHT: Representation of an aortic artery<br />

aneurysm (bulge on left) between the renal artery (to the kidneys, top) and the<br />

iliac bifurcation (to the legs). Aside from the color shading chosen, this is<br />

what the surgeon would see before starting to implant the stent-graft.<br />

Studying Stent Migration<br />

The stent migration problem in abdominal aortic<br />

aneurysm (AAA) repairs is critical to the patient’s<br />

survival. When the stented graft slides out of place<br />

axially, the weakened or diseased artery wall is<br />

re-exposed to the high blood pressure of pulsating<br />

blood flow. That greatly increases the possibility of<br />

AAA-rupture, which is usually fatal. Easily overlooked,<br />

aortic aneurysms are the 13th leading cause of death<br />

in the United States.<br />

Wall displacements and pressure/stress levels for Re_steady = 1200, using CFX and <strong>ANSYS</strong>: (left) axisymmetric AAA,<br>

and (right) stented AAA, where the stent-graft clearly shields the weakened aneurysm wall from the blood flow<br />


Schematic representation<br />

of an axisymmetric AAA,<br />

including implanted stent-graft<br />

with relevant analytical data.<br />


Using five case histories, CFX and <strong>ANSYS</strong><br />

Structural were used to compute the incipient<br />

migration forces of a stented graft under different<br />

placement conditions. In the process, we modeled<br />

different artery neck configurations, variable arterial<br />

wall thicknesses, transient hemodynamics and<br />

multi-structure interactions.<br />

The actual stented AAA model in <strong>ANSYS</strong><br />

consisted of a lumen or bulge in the artery wall, an<br />

endovascular graft shell, a cavity of stagnant blood<br />

and the AAA wall.<br />

Using iterative fluid structure interaction was an<br />

intense computational problem as <strong>ANSYS</strong> Structural<br />

and CFX exchanged coupled variations in wall flex and<br />

geometry, requiring several new flow and structure<br />

results at each time step. The <strong>ANSYS</strong> Structural<br />

problem centered on nonlinear, large-deformation,<br>

contact and dynamic analyses.<br />

Insight into Physical Processes<br />

The CFX post-processor in conjunction with our<br />

programs gives us a great deal of insight into the<br />

physical processes. It helps us to spot critical areas<br />

where platelets or low-density lipoproteins (LDLs) may<br />

clump together, and, ultimately, it helps us with design<br />

optimization of stent-grafts and secure stent-graft<br />

placements.<br />

The coupled CFX and <strong>ANSYS</strong> results were<br />

validated with experimental data sets and with clinical<br />

observations.<br />

Surgeons and scientists know that forces<br />

triggering stented graft migration include blood<br />

momentum changes, blood pressure and artery wall<br />

shear stress, inappropriate configurations of the<br />

healthy aortic neck section, tissue problems in the<br />

aortic neck segment and biomechanical degradation<br />

of the prosthetic material.<br />

To set the model stent-graft into motion, an<br />

increasing pull force was applied with an APDL<br />

subroutine. Coulomb’s Law was used for each contact<br />

element’s friction coefficients, but the simulations<br />

revealed a nonlinear correlation in large displacements<br />

between the migration force needed to move the stent<br />

and the friction coefficients. The simulation also<br />

revealed that the risk of displacement rises sharply in<br />

patients with high blood pressure.<br />
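As a first-order illustration of that Coulomb criterion (a sliding-threshold check only, not the coupled FSI computation): the graft begins to migrate when the net downstream force from pressure and blood momentum exceeds the friction the contact can supply.<br>

```python
def migration_margin(pull_force_n, normal_force_n, mu):
    """Coulomb sliding check: a positive margin means the graft holds.

    pull_force_n   - net downstream force on the graft (pressure + momentum)
    normal_force_n - total radial contact force at the aortic neck
    mu             - friction coefficient of the stent/artery contact
    """
    return mu * normal_force_n - pull_force_n

# Hypothetical numbers, for illustration only:
ok = migration_margin(pull_force_n=4.0, normal_force_n=60.0, mu=0.1)     # +2.0 N
risky = migration_margin(pull_force_n=8.0, normal_force_n=60.0, mu=0.1)  # -2.0 N
```

A rise in blood pressure raises the pull force roughly in proportion, consistent with the sharply increased displacement risk the simulations showed for hypertensive patients.<br>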

Coupled <strong>ANSYS</strong> and CFX fluid structure<br />

simulations verified that a stent-graft can significantly<br />

reduce the risk of an aneurysm rupture even when<br />

high blood pressure is the fundamental cause. Clearly,<br />

these tools for blood-flow-stent-artery interactions are<br />

valid, predictive and powerful for optimal surgical<br />

recommendations, improved stent designs and proper<br />

stent placement. ■<br />

For this study, CFX-4 was linked to <strong>ANSYS</strong> with<br />

Fortran to perform fluid-structure interaction.<br />

Presently, generalized, fully representative stented<br />

abdominal aortic aneurysm configurations are being<br />

analyzed, employing <strong>ANSYS</strong> and CFX-5.<br />

Cardiac cycle (time level of<br />

interest: t/T=0.32, Re=550)<br />

Velocity distribution in non-stented<br />

axisymmetric AAA model<br />

Wall stress and velocity distribution in stented<br />

axisymmetric AAA model<br />




Simulation Helps Improve Oil<br />

Refinery Operations<br />

Analysis assists in reducing coke deposits while improving<br />

hydrocarbon stripping.<br />

By Dr. Peter Witt<br />

Research Scientist<br />

CSIRO Minerals<br />

During oil processing, heavier products are broken down by high<br />

temperatures into lighter products in cokers. This “cracking”<br />

process strips off lighter liquid hydrocarbon products such as<br />

naphtha and gas oils, leaving heavier coke behind. The challenge<br />

that CSIRO Minerals has been helping Syncrude resolve is how to<br />

best reduce coke deposits that build up in their fluid coker stripper<br>

while maintaining or improving hydrocarbon stripping.<br />

Syncrude Canada Ltd. is the world’s largest<br />

producer of crude oil from oil sands and the largest<br />

single-source producer in Canada. CSIRO (Australia’s<br />

Commonwealth Scientific and Industrial Research<br />

Organisation) is one of the world’s largest and most<br />

diverse scientific global research organizations.<br />

CSIRO Minerals is a long-time user of CFX and, in<br>

collaboration with the Clean Power from Lignite CRC,<br>

developed the fluidized bed model in CFX-4. Because<br>

of its robust multiphase capability and its ability to be<br />

extended into new application areas, CFX is used<br />

extensively by CSIRO Minerals in undertaking<br />

complex CFD modeling of multiphase, combustion<br />

and reacting processes in the mineral processing,<br />

chemical and petrochemical industries.<br />

In the past, physical modeling had been used to<br />

understand the flow of solids and gas in the stripper.<br />

This modeling is performed at ambient conditions, so<br />

scaling of both the physical size and materials is<br />

required to approximate the actual high temperature<br />

and pressure in the stripper. This scaling process can<br />

introduce some uncertainty in understanding the<br />

actual stripper operation.<br />

Maintenance work on a coker unit at Syncrude’s<br />

oil sands plant in Alberta, Canada.<br />



By using CFD modeling to complement the<br />

physical modeling programs, scaling is eliminated and<br />

the actual dimensions and operating conditions are<br />

used. Furthermore, CFX simulation provides much<br />

greater detail of the flows and forces in the stripper<br />

than can be obtained from physical models or from<br />

the plant. This is due to the difficulty in making<br />

measurements and visualizing the flow in complex<br />

multiphase systems.<br />

Syncrude senior research associates Dr. Larry<br />

Hackman and Mr. Craig McKnight explain that<br />

extensive cold flow modeling (but not CFD modeling)<br />

had previously been used to investigate the operation<br />

of the fluid bed coker stripper and the gas and solids<br />

behavior in the unit. McKnight notes this project with<br />

CSIRO Minerals resulted in detailed, high quality<br />

reports, which provide “a new understanding of the<br />

fluid coker stripper operation.” Hackman indicated,<br />

“By using CFX to gain a better understanding, it is<br />

anticipated that design changes will be identified to<br />

improve stripping efficiency, reduce shed fouling and<br />

optimize stripper operation.”<br />

To most efficiently perform the simulations and<br />

utilize the results, the two companies are leveraging<br />

the distance separating their facilities. When it is night<br />

in Edmonton, Alberta, Canada where Syncrude<br />

Research is located, CSIRO Minerals staff is hard at<br />

work in Australia performing analyses and posting<br />

results (including pictures and animations) on their<br />

extranet. The next morning, the group in Canada can<br />

view progress of the modeling work and provide<br />

feedback for a quick turnaround.<br />

In this way, CSIRO is utilizing CFX technology to<br />

assist Syncrude in determining how best to utilize their<br />

current plant to get maximum throughput and thus<br />

make the most of their capital investment. ■<br />

Three-dimensional fluidized bed model of the Syncrude fluid<br>

coker “stripper” at 0.0, 5.0, 9.0, 13.0, 16.5 and 20 seconds,<br>

shaded by gas volume fraction from 0.45 to 0.75. The model<br>

predicts the motion of bubbles (in purple) rising from injectors<br>

in the lower part of the bed and the complex flow behavior of<br>

coke particles. Flow simulations provide insights into the<br>

stripper operation, which are then used to improve the design.<br>




CFX-5.7 Brings Powerful Integrated<br />

Tools to Engineering Design<br />

Latest release enhances core CFD features and gives<br />

users greater access to <strong>ANSYS</strong> tools.<br />

By Michael Raw<br />

Vice President, Product Development<br />

<strong>ANSYS</strong> Fluids Business<br />

Released in April 2004, CFX-5.7 demonstrates<br />

the continuing development of core CFD technologies,<br />

plus leverages <strong>ANSYS</strong> technologies to provide<br />

an exciting new series of capabilities for CFX users.<br />

This latest version contains the most advanced<br />

CFD features available, representing a powerful<br />

combination of proven, leading-edge technologies<br />

that provide the accuracy, reliability, speed and<br />

flexibility companies trust in their demanding fluid<br />

simulation applications.<br />

<strong>ANSYS</strong> Integration<br>

CFX customers are now gaining access to state-of-the-art<br>

geometry modeling software with <strong>ANSYS</strong> DesignModeler,<br>

a Workbench-based product that is our new geometry<br>

creation tool providing bi-directional associative CAD<br>

interfaces to all major CAD packages. The CFX-5 mesher,<br>

called CFX-Mesh and based on the advancing front<br>

inflation tetra/prism meshing technology, has been<br>

implemented in Workbench as a native GUI application<br>

that is easy to use and closely integrated with<br>

DesignModeler.<br>

CFX data can be interpolated<br>

directly onto <strong>ANSYS</strong> CDB<br>

files, providing a flexible route<br>

to transfer CFX results to an<br>

existing <strong>ANSYS</strong> mesh.<br>

<strong>ANSYS</strong> ICEM CFD meshers, including the unique<br>

hexahedral element meshing tools, are also now<br>

available in Workbench. They provide meshes for the<br>

most demanding CFD applications and are well<br>

known for their robustness when applied to very large<br>

or complex industrial CAD models. The combination<br>

of <strong>ANSYS</strong> DesignModeler, CFX-Mesh and ICEM<br>


It is now possible to perform<br />

texture mapping in CFX-Post<br />

CFD meshing technology provides a comprehensive<br />

CAD-to-meshing solution for CFD applications. As this is<br />

often the most time-consuming stage of CFD simulation,<br />

this represents a genuine time-saving benefit to <strong>ANSYS</strong><br />

CFX users.<br />

The latest release introduces a fluid structure<br>

interaction (FSI) capability for cases where the interaction<br>

of a fluid with an adjacent solid is important, such as fluid-induced<br>

stresses and heat transfer. A simple-to-use, one-way<br>

transfer of data from a CFX solution to <strong>ANSYS</strong> provides<br />

for seamless passing of thermal and loads information<br />

from fluids to structural analysis. This approach<br />

automatically interpolates the data into the <strong>ANSYS</strong> CDB<br>

file format. For more complex FSI situations, such as<br />

large-scale solid deformation or motions in which the<br />

two-way influences are important, CFX-5 can dynamically<br />

interact with <strong>ANSYS</strong> stress analysis. <strong>ANSYS</strong> Inc. has the<br />

unique distinction of offering the industry’s only native<br />

connection between such components, which means<br />

ease-of-use, flexibility and reliability.<br />
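Conceptually, the one-way hand-off is an interpolation of wall loads from the CFD surface mesh onto non-matching structural nodes. A toy nearest-neighbour sketch of that idea (illustrative only; the actual CFX-to-<strong>ANSYS</strong> transfer uses proper profile interpolation, not this):<br>

```python
def map_loads(cfd_points, cfd_pressures, fem_points):
    """Map wall pressures from CFD surface points onto structural nodes
    by nearest neighbour -- the crudest form of non-matching-mesh
    interpolation, shown only to fix ideas."""
    def nearest(p):
        return min(
            range(len(cfd_points)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(cfd_points[i], p)),
        )
    return [cfd_pressures[nearest(p)] for p in fem_points]
```

Production FSI transfers additionally conserve integrated force and heat flow across the interface, which a pointwise lookup like this does not guarantee.<br>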

Core CFD Enhancements<br />

As our flagship CFD simulation product, CFX-5.7 has<br />

been significantly enhanced for this release in several<br />

modeling areas, including moving mesh capability,<br />

general grid interface support of conjugate heat transfer,<br />

advanced turbulence models, multiphase models, as well<br />

as pre- and post-processing improvements. These<br />

enhancements are briefly described in the sections that<br />

follow.<br />

When fluid flow simulations involve changing<br />

geometry (in devices such as valves, pistons or gear<br>

pumps, for example), CFX-5 moving mesh options can be<br>

used. Several mesh movement strategies are available:<br />

prescribed surface movement with automatic mesh<br />

morphing, explicit 3-D mesh movement via user functions<br />

or multiple mesh files, remeshing with topology change<br />

and combinations of these strategies. These strategies<br />

cover almost every conceivable mesh movement needed.<br />

The Generalized Grid Interface (GGI) is now<br />

supported for Conjugate Heat Transfer (CHT), allowing the<br />

solid and fluid or the two solids to be created and meshed<br />

separately — and set up to reflect the needs of the<br />

physics in each zone independently. The heat transfer<br />

and/or radiation between the two objects can then be<br />

analyzed by connecting the two domains using GGI.<br />

CFX-5.7 adds two innovative turbulence models to<br />

its already comprehensive suite for turbulence analysis.<br />

Turbulence is present in most industrial flows, and<br>

accounting for this phenomenon appropriately can<br />

make a great difference in the accuracy of a<br />

simulation. CFX-5.7 introduces the<br>

Transition Turbulence model, which helps<br>

to accurately predict the laminar-to-turbulent<br>

transition scenarios often key for<br>

heat transfer prediction, e.g. in<br />

turbine blades. In addition,<br />

the detached eddy (DES)<br />

transient turbulence<br />

model has been<br />

Reacting particles are a feature of this release, including<br />

a fully featured coal combustion model.<br />

completed. Unique to CFX-5, this model combines the<br />

efficiencies of a Reynolds Averaged Navier Stokes (RANS)<br />

simulation in attached boundary layer regions with the<br />

ability to compute the large eddy transient structures using<br />

LES.<br />
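The switch between the two modes is usually written as a blended length scale; in the generic DES formulation (after Spalart and co-workers, not necessarily CFX-5’s exact implementation) the model acts as RANS wherever the turbulence length scale is smaller than what the grid can resolve:<br>

```latex
\ell_{\mathrm{DES}} = \min\left(\ell_{\mathrm{RANS}},\; C_{\mathrm{DES}}\,\Delta\right),
\qquad \Delta = \max\left(\Delta x,\, \Delta y,\, \Delta z\right)
```

Near walls the RANS length scale is the smaller of the two, recovering the efficient treatment of attached boundary layers; away from walls, on a sufficiently fine grid, the grid-based scale takes over and the model resolves the large transient eddies in LES fashion.<br>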

Current multiphase capabilities have been extended,<br />

and an algebraic slip model (ASM) has been added. Using<br />

an algebraic approximation for the dispersed phase<br />

velocities, ASM is highly efficient even for a large number of<br />

dispersed phase size groups. A first implementation of the<br />

Multiple Size group model (MUSIG), accounting for a wide<br />

spectrum of particle sizes and shapes at every point in<br>

dispersed two-phase flows, has been added with access<br />

through the CFX Command Language (CCL).<br />
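The efficiency of ASM comes from replacing a full momentum equation for each dispersed size group with an algebraic slip estimate. In the Stokes-drag limit a commonly used form (a generic formulation in the spirit of Manninen et al., not necessarily CFX-5.7’s exact terms) is:<br>

```latex
\mathbf{u}_{\mathrm{slip}} = \frac{\left(\rho_d - \rho_c\right) d_d^{\,2}}{18\,\mu_c}\,\mathbf{a}
```

where the subscripts d and c denote the dispersed and continuous phases, d_d is the particle diameter, mu_c the continuous-phase viscosity and a the net acceleration (e.g. gravity) acting on the particles. Because slip is evaluated algebraically per size group, adding groups costs only extra fraction equations, not extra momentum equations.<br>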

The material properties editor in CFX-Pre, the CFX-5<br />

physics pre-processor, allows users to select materials<br />

from groups to ensure proper interactions with their<br />

physical model set-up. This helps to avoid errors in<br />

selecting or combining materials.<br />

A multi-component, customizable particle model is<br />

now available as part of the Lagrangian Particle Tracking<br />

capability. Reacting particles features include evaporating,<br />

boiling and oil droplet models, as well as a fully featured<br />

coal combustion model. These models are often key to<br />

accurate simulations in the power generation industry.<br />

Some of the improvements now in CFX-Post include<br />

the ability to easily compare results between different<br />

simulation results or time steps, CGNS data support,<br />

surface streamlines, text labels that update automatically,<br />

surfaces of revolution, display of particle subset in particle<br />

tracking and texture mapping.<br />

Development is now focused on future releases,<br />

which will provide for a closer interface with the <strong>ANSYS</strong><br />

Workbench Environment, including more enhancements<br />

to fluid-structure interaction capabilities as well as the<br />

continued investment in core CFD technology. ■<br />





How many holes do we need<br />

to dig? Construction costs<br />

can exceed $1 million for a<br />

new 22m diameter tank at a<br />

water treatment plant.<br />

Improving Water Treatment Systems<br />

Engineers design compact,<br />

more efficient secondary<br />

clarifiers with the aid of CFX.<br />

By David J. Burt<br />

Senior Engineer<br />

MMI Engineering<br />

A secondary clarifier is the final treatment stage of a traditional<br />

activated sludge sewage works. It separates solid precipitate<br />

material from effluent water prior to discharge. Because of recent<br />

changes in environmental legislation, many treatment works in<br />

the UK are required to carry increased throughput or meet more<br />

stringent effluent quality limits. This means that more clarification<br />

capacity is needed. But with land in urban areas scarce and<br />

construction costs high, there is an increasing need to maximize<br />

the performance of existing units rather than build new ones.<br />

The standard technique for designing a final clarifier is mass<br />

flux theory. However, this method uses a one-dimensional settling<br />

model and cannot account for the ‘density current’ flow typical in<br />

a clarifier. Even if the clarifier external design satisfies mass flux<br />

theory it may still fail, or perform badly in practice because of the<br />

internal flow features. Often designers are forced to allow<br>

a 20% factor of safety in tank surface area to compensate for the<br>

shortcomings of mass flux theory. With CFD modeling, it<br />

is possible to capture all of the flow processes to show<br />

short-circuiting, scouring of the sludge blanket and solids<br />

re-entrainment to effluent. This means it is possible to design<br />

more compact units or retrofit existing units with internal baffling<br />

to allow for higher loading.<br />
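The one-dimensional core of mass flux theory is compact: the gravity solids flux is G(C) = C·v_s(C), with a settling velocity that decays with concentration (a Vesilind-type exponential law is common), and the tank area is chosen so the applied flux stays below the limiting flux. A minimal sketch with illustrative parameters (not MMI’s calibrated settling algorithms):<br>

```python
import math

def settling_velocity(c, v0=7.0, k=0.45):
    """Vesilind settling law v_s = v0 * exp(-k * c) (m/h, c in kg/m^3).
    v0 and k are illustrative values; in practice they are fitted to
    sludge settleability tests."""
    return v0 * math.exp(-k * c)

def solids_flux(c, v0=7.0, k=0.45):
    """One-dimensional gravity solids flux G = C * v_s(C), kg/(m^2 h)."""
    return c * settling_velocity(c, v0, k)

# The flux curve has a single interior maximum at c = 1/k, which is
# what sets the limiting (rate-controlling) flux in the 1-D theory:
c_crit = 1.0 / 0.45
g_max = solids_flux(c_crit)
```

Everything the article lists as missing from this picture (density currents, short-circuiting, blanket scour) is multidimensional, which is precisely where the CFD model adds value beyond the 1-D answer.<br>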

By augmenting the standard drift flux models in CFX,<br />

engineers at MMI have established a set of validated and verified<br />

models for clarifier performance. These models include settling<br />

algorithms and rheological functions for activated sludge<br />

mixtures. The models have recently been used at a number of<br />

UK sites to optimize final effluent quality for increased load. ■<br />

Concentration profiles through a cross section of<br />

the clarifier approaching 8000 mg/l solids in the<br />

blanket. This tank features an energy dissipating<br>

inlet (EDI), optimized stilling well diameter and<br>

additional Stamford baffling below the effluent<br />

weir.<br />

A useful post-processing idea is to track streamlines<br>

for the solid phase velocity field. In this<br>

case they are colored with the G scalar to show where floc<br>

may experience the greatest shear.<br>

MMI Engineering is a wholly owned subsidiary of GeoSyntec Consultants<br />

and provides a range of environmental, geotechnical, hydrological<br />

and civil engineering services. Further details can be found at<br />

www.mmiengineering.com and www.geosyntec.com.<br />


Managing CAE Processes<br />

Upfront Analysis<br />

in the Global<br />

Enterprise<br />

Early simulation is especially important when engineers at<br />

dispersed locations must collaborate in product development.<br />


By Fereydoon Dadkhah<br />

Mechanical Analysis and Simulation<br />

Delphi Electronics and Safety<br />

Two important side effects of the continuing pressure<br />

to reduce product development time and development<br />

costs have been the increased use of analysis in the<br />

early stages of design and the development and<br />

manufacturing of many products at overseas sites.<br />

Upfront analysis has been identified by many<br />

companies as a critical stage of product development<br />

due to the many benefits it provides. Done properly,<br />

upfront analysis can shorten the design cycle of a<br />

product drastically by identifying problems early<br />

before substantial investment of time and material has<br />

been made in the product. In the earlier stages of<br />

design, engineers have more options at their disposal<br />

when changing a design to address problems<br />

uncovered by analysis. As a product’s design<br />

approaches completion, many design modification<br />

options are eliminated due to a variety of reasons<br />

such as manufacturability, cost, system integration,<br />

packaging, etc. Therefore, fixes for problems<br>

discovered later in the process are generally more<br>

expensive to implement. Once a problem is<br>

discovered using upfront analysis, all the viable design<br />

options can also be evaluated by employing the same<br />

analysis techniques. As a result, when a prototype is<br />

finally built and tested, it is much more likely to pass<br />

the tests than if upfront analysis had not been used.<br />

Another fact of today’s global economic<br />

environment is that many companies have moved<br />

beyond establishing manufacturing-only facilities<br />

overseas to performing some of their product<br />

development activities at the overseas locations<br />

as well. This global footprint can lead to situations<br>

where a product is conceived and its performance<br />

requirements specified in country A, then designed<br>

and tested in country B, and mass-produced in country<br>

C. Therefore, development centers have to be flexible<br />

enough to respond to the needs of their local market<br />

as well as be able to develop products for<br />

different, distant markets. Once again, the shortened<br />

design schedules make the use of CAE mandatory,<br>

especially in the early stages. Because of the<br />

distributed product development process, it is<br />

important that all the engineers and designers use the<br />

same processes<br />

and techniques.<br />

Using analysis as an<br />

integrated part of product<br />

development enables engineers<br />

from around the world to collaborate<br />

in unprecedented ways.<br />

Many of Delphi Corporation’s customers are<br />

global companies which market and sell their products<br />

around the world. It is therefore important for all of<br />

Delphi’s resources to be used to satisfy our customers’<br>

needs regardless of where the need arises. Recent<br />

programs at Delphi Electronics and Safety (formerly<br>

Delphi Delco Electronics Systems) have involved just<br />

such a scenario. Engineers from three different<br />

countries have been involved in the design process<br />

from the moment contracts are awarded. Even while<br />

some of the system features are being finalized,<br>

the resources of the company around the world are<br />

mobilized to analyze and evaluate the component<br />

designs. Finite element analysis is used extensively to<br />

evaluate component performance. In many cases<br />

the early analysis indicates that modifications<br />

are necessary. The modifications are made and<br />

assessed until all problems are eliminated. Engineers<br />

responsible for making design modifications can use<br />

the local resources as well as those abroad to ensure<br />

the viability of their design. For example, many<br />

engineers at Delphi Electronics and Safety’s design<br />

centers around the world have been trained to use<br />

first-order analysis tools. These engineers are usually<br />

able to use analysis to eliminate many design flaws.<br />

However, often they need help in completing the<br />

picture, either because of a shortage of time and other<br>

resources, or because they lack the specialty skills<br />

that are available at other sites.<br />

Finally, one of the most important reasons for<br />

performing upfront CAE is simply that many of<br />

our customers require it. In many cases, customers<br />

have developed extensive validation requirements<br />

that use simulation extensively in the concept<br />

approval phase. ■<br />


Simulation at Work<br />



Analysis of Artificial<br />

Knee Joints<br />

<strong>ANSYS</strong> provides fast, accurate feedback<br />

on new orthopedic implant designs.<br />

Founded in 1895, DePuy is the oldest manufacturer<br />

of orthopedic implants in the United States, with a<br />

reputation for innovation in new product development.<br />

The company has patented a wide range of<br />

replacement knee systems, the first of which was<br />

developed more than 20 years ago. One of these<br />

types incorporates a state-of-the-art mobile bearing,<br />

which offers a wide range of options to allow the<br />

surgeon to match the implant to the patient’s anatomy.<br />

Figure 1 illustrates a typical replacement knee.<br />

In one recent application, two sizes of a<br />

replacement knee design were analyzed at different<br />

angles of articulation using <strong>ANSYS</strong>. Initially, finite<br />

element results were compared with known<br />

experimental measurements obtained on one of the<br />

two sizes at three angles of articulation. Once<br />

correlation had been achieved, the same methodology<br />

was used to analyze the other size at various angles.<br />

Meshing Critical Components<br />

The replacement knee design is composed of two<br />

components: the femoral component and the bearing.<br />

Figure 2 shows the solid geometry of the design<br />

in <strong>ANSYS</strong> after importation of the CAD model in<br />

Parasolid format.<br />

Both the femoral component and the bearing<br />

were meshed with 3-D higher order tetrahedral<br />

elements. The meshing of the two parts was<br />

fully parameterized. The mesh on the underside of the<br />

femoral component was made sufficiently fine to<br />

ensure minimal loss of accuracy in the geometry of the<br />

curved contact surfaces.<br />

A coarser mesh was used in the interior and on<br />

the upper side of the femoral component, since<br />

its material was significantly stiffer than that of the<br />

bearing, and, consequently, very little structural<br />

deformation was expected. Another option was to<br />

mesh the contact surfaces of the femoral component<br />

with rigid target elements and apply the load to a pilot node.<br />

A similar approach was used for the bearing,<br />

as the size of the elements was more critical in the<br />

contact region than on the non-contacting surfaces.<br />

However, a mesh density even finer than that on the<br />

contact surfaces of the femoral component was<br />

desirable in the bearing to ensure a good resolution of<br />

the contact area and stresses.<br />

An indiscriminate refinement of the mesh on all<br />

the upper surfaces of the bearing proved to be<br />

computationally too expensive, and a new meshing<br />

procedure was developed and tested by IDAC, a finite<br />

element analysis and computer-aided engineering<br />

consulting firm and the leading UK provider of <strong>ANSYS</strong><br />

and DesignSpace software.<br />


Figure 1. One of DePuy’s knee implants<br />

incorporates a mobile bearing that offers a<br />

wide range of options to allow the surgeon<br />

to match the implant to a patient’s anatomy.<br />

Figure 2. Solid geometry of orthopedic<br />

knee design in <strong>ANSYS</strong> after importation<br />

of the CAD model in Parasolid format.<br />

Figure 3. Analysis shows stress distribution<br />

in contact area between the bearing and<br />

the femoral component.<br />

Running the Analysis<br />

A preliminary contact analysis was first run with the<br />

original mesh density prescribed to the bearing, then<br />

the elements that were in contact with the femoral<br />

component were further refined for the subsequent<br />

solution. An example of this mesh is depicted in Figure<br />

3. The image illustrates the stress distribution in the<br />

contact area between the bearing and the femoral<br />

component. These stress distribution plots can be<br />

created in the <strong>ANSYS</strong> program for any point in time<br />

during the nonlinear solution.<br />
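A refine-where-contact-occurs step like the one described above can be scripted with standard ANSYS commands. The following is a minimal sketch, not IDAC's actual macro; the refinement level is an assumption, and CONT,STAT is the element-table item that reports each contact element's status:<br />

```apdl
! After the preliminary solve: refine the bearing mesh where contact occurred
/POST1
ETABLE,cstat,CONT,STAT   ! contact status (2 = sliding, 3 = sticking)
ESEL,S,ETAB,cstat,2,3    ! keep only contact elements that closed
NSLE,S                   ! select the nodes attached to those elements
FINISH
/PREP7
NREFINE,ALL,,,1          ! refine elements at the selected nodes, level 1 (assumed)
ALLSEL
```

The subsequent solution then runs on the locally refined mesh, concentrating elements only where the preliminary pass found contact.<br />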

It was found that excessive geometric<br />

penetration at setup produced stress singularities and<br />

that, therefore, the contact pair should be checked<br />

prior to the solution. Localized peak contact stresses<br />

also could be produced by the discretization of<br />

the otherwise smooth contact surfaces. The mesh<br />

refinement level for the elements in the vicinity of<br />

contact after the preliminary contact analysis may be<br />

increased, but at the expense of a longer solution time.<br />

Apart from contact stresses, the total contact<br />

area was also an important aspect of the design being<br />

studied. The total contact area was obtained from<br />

summing the areas of all contact elements showing<br />

partial or full contact. This generally leads to an<br />

overestimation of the actual contact area (although it<br />

was considered insignificant given the high mesh<br />

density in the contact area).<br />
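The area summation described here can be sketched with element tables. This is an illustrative sketch only: the element name is assumed, and whether the VOLU item returns the face area for contact elements should be verified against the element documentation for the version in use:<br />

```apdl
! Sum the areas of contact elements showing partial or full contact
/POST1
ESEL,S,ENAME,,174        ! select the contact overlay (CONTA174 assumed)
ETABLE,cstat,CONT,STAT   ! status: 2 = sliding, 3 = sticking
ESEL,R,ETAB,cstat,2,3    ! keep elements in contact
ETABLE,earea,VOLU        ! element "volume" (area for surface elements - assumption)
SSUM                     ! print the summed contact area
ALLSEL
```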

All analysis work described in this project was<br />

performed on Intel-based personal computers running<br />

the <strong>ANSYS</strong> program. DePuy’s engineers are <strong>ANSYS</strong> users, and<br />

the parametric models created by IDAC have been<br />

supplied to DePuy so its engineers can perform<br />

further analyses and modifications in-house.<br />

Benefits: Speed and Accuracy<br />

“Following on from this study, and working with IDAC,<br />

a number of our own engineers have been able to do<br />

further comparisons of a new design against an<br />

existing product in various loading conditions,” says<br />

James Brooks, a senior mechanical design engineer at<br />

DePuy. “This has rapidly allowed us to get a good<br />

indication of the performance of the product before<br />

testing.”<br />

Fiona Haig, a mechanical designer at DePuy,<br />

reports additional benefits of the analysis solution.<br />

“IDAC’s macro allowed us to quickly and consistently<br />

replicate physical testing that would normally have<br />

taken weeks to undertake in our labs. In addition, it<br />

permitted us to gain detailed information on stress and<br />

deflection, which can be difficult to detect in physical<br />

tests. The macro has proved an invaluable tool in the<br />

comparison and validation of new implant designs as<br />

well as proving a highly effective learning aid for our<br />

core team of FEA users,” explains Haig. “The results<br />

achieved using IDAC’s analysis method closely<br />

correlated to the results of those physical tests<br />

previously undertaken in our labs. This validation<br />

has allowed us to extend the application of this<br />

methodology to the evaluation of a range of new<br />

implant designs, providing feedback accurately and in<br />

a short time frame.” ■<br />


More Design Insight,<br />

Faster...<br />

Quickly study the design impact of<br />

varying geometry, even without a<br />

parametric CAD model.<br />

By Pierre Thieffry, ParaMesh and Variational<br />

Technology <strong>Solutions</strong> Specialist and Raymond<br />

Browell, Product Manager, <strong>ANSYS</strong>, Inc.<br />

The combination of the <strong>ANSYS</strong> Workbench<br />

Environment and DesignXplorer VT provides <strong>ANSYS</strong><br />

users with powerful tools for gaining significant insight<br />

into designs when working with a CAD system.<br />

Bi-directional parametric associativity with the parent<br />

CAD package, made possible by the <strong>ANSYS</strong><br />

Workbench Environment, makes understanding<br />

the design impact of varying geometry easy and<br />

comprehensive.<br />

But what if this is an old design and the user<br />

cannot find the geometry files for the part? Or perhaps<br />

you have the geometry, but it is in a non-associative<br />

format such as IGES or Parasolid. Maybe the<br />

geometry is parametric and regenerates robustly, but<br />

the parameters created by the designer are not the<br />

ones that make sense for the analyst. For instance, the<br />

parameters’ definitions might be chained together so<br />

that it is impossible to vary one feature without<br />

changing others. Perhaps a consultant provided the<br />

user with only the FEA or “math model” and the<br />

original geometry used to create the model isn’t<br />

available, or it might take too much time to recreate.<br />

This is quite often the case with legacy models.<br />

Typically, this would be the end of the story. But with<br />

the combination of ParaMesh and DesignXplorer VT, it<br />

is just the beginning.<br />

Take an inside look at how these tools can be used<br />

to study a legacy model like the engine torsional<br />

damper model shown in Figure 1.<br />

To optimize this engine damper, perform the<br />

following six-step procedure:<br />

1. From an existing <strong>ANSYS</strong> database write a<br />

“.cdb” file.<br />

2. Import the .cdb file into ParaMesh.<br />

3. Create the mesh morphing parameters within<br />

<strong>ANSYS</strong> ParaMesh. (Note the name of the<br />

parameters and their order of creation. This<br />

information will be mandatory in the next<br />

steps.)<br />

4. Declare the mesh morphing parameters by<br />

editing the <strong>ANSYS</strong> input file.<br />

5. Perform the Parametric Solution using <strong>ANSYS</strong><br />

DesignXplorer VT.<br />

6. Post-process with Solution Viewer, the<br />

DesignXplorer VT post-processor within the<br />

<strong>ANSYS</strong> Environment and, if desired, optimize<br />

the results.<br />
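Step 1 amounts to a single archiving command from the existing database. A minimal sketch, assuming the legacy job is named damper:<br />

```apdl
! Write the legacy model to a .cdb archive for import into ANSYS ParaMesh
RESUME,damper,db       ! load the existing ANSYS database (job name assumed)
/PREP7
CDWRITE,DB,damper,cdb  ! archive nodes, elements and attributes to damper.cdb
FINISH
```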

Figure 1. Sample model of engine torsional damper.<br />

Figure 2. Parameter definitions used for mesh morphing.<br />


Creating an <strong>ANSYS</strong> .cdb file is easy for <strong>ANSYS</strong><br />

users, and importing the model into ParaMesh is no<br />

different than reading a file into any package, so skip<br />

to Step 3. Figure 2 shows the parameter definitions we<br />

will create in Step 3.<br />

The first parameter adjusts the inner diameter of<br />

the structure by moving the inner surface nodes and is<br />

named “inner_diam,” and it has a range of variation of<br />

–0.5 mm to +2 mm with an initial wall thickness of<br />

2 mm.<br />

The second parameter adjusts the width of the<br />

slotted hole and is named “hole_diam,” which has a<br />

range of –2 to +2 mm, with an initial value of 4 mm.<br />

The third parameter is the radial location of the<br />

slotted hole and is named “hole_position,” which has a<br />

range of –3 to +3 mm and has an initial value of 45 mm.<br />

Figure 3. Extremes in part geometry obtained by varying the mesh morphing<br />

parameters (minimum and maximum values of hole_position, angle_position,<br />

inner_diam and hole_diam).<br />

The fourth parameter is the location of the start of<br />

the bevel angle bend and is named “angle_position,”<br />

which has a range of –6 to +2 mm with an initial value<br />

of about 20 mm.<br />

Comparing the images in Figure 3 shows the<br />

extremes of the part. The applied boundary conditions<br />

are: symmetry on the planar faces, a clamped<br />

central hole and an inward radial pressure on the external surface.<br />

For Step 4, edit the <strong>ANSYS</strong> input file (see Figure<br />

4) and declare the ParaMesh mesh morphing<br />

parameters so that DesignXplorer VT will know to<br />

solve for them. As seen in the sample file, the<br />

parameters’ definitions are straightforward. To access<br />

the ParaMesh parameters from within the <strong>ANSYS</strong><br />

DesignXplorer VT solution, use the SXGEOM<br />

command.<br />


…<br />

/SX<br />

! Define the output results<br />
SXRSLT,disp,NODE,U,ALL,,<br />
SXRSLT,sigma,ELEM,S,ALL,,<br />

! Define the file where the parameters have been created<br />
SXRFIL,tor_spring,rsx<br />

! Declare the shape parameters<br />
SXGEOM,inner_diam<br />
SXGEOM,hole_diam<br />
SXGEOM,angle_position<br />
SXGEOM,hole_position<br />

FINISH<br />
/SOLU<br />

! Prepare for a DXVT solution<br />
STAOP,SX<br />

Starting with a sensitivity histogram as shown<br />

in Figure 5, the sensitivity of maximum stress with<br />

respect to each of the mesh morphing parameters is<br />

evident. These values are interpreted such that, for a<br />

change from the minimum value to the maximum<br />

value of the parameter hole_position, the maximum<br />

stress increases by 103 MPa. For the hole_diam<br />

parameter, the maximum stress decreases as the<br />

parameter increases.<br />

Figure 4. Sample section of the <strong>ANSYS</strong> input file.<br />

Figure 5. Sensitivity diagram.<br />

This brings us to Step 5, which is solving<br />

the model with the mesh morphing parameters using<br />

DesignXplorer VT.<br />

DesignXplorer VT uses a new and exclusive<br />

technique called Variational Technology. In a traditional<br />

finite-element analysis, each change of the value of<br />

any input variable requires a new finite element<br />

analysis. To perform a “what-if” study where several<br />

input variables are varied in a certain range, a<br />

considerable number of finite element analyses may<br />

be required to satisfactorily evaluate the finite element<br />

results over the range of the input variables. In other<br />

methods, it is important to remember that each design<br />

candidate requires a complete re-mesh and re-solve.<br />

The benefit of Variational Technology is that only<br />

one solution is required to make the same type of<br />

forecast that other methods provide. The “response<br />

surface” created by DesignXplorer VT is an explicit<br />

approximation function of the finite-element results<br />

expressed as a function of all selected input variables.<br />

Variational Technology provides more accurate results,<br />

faster.<br />
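Schematically, the response surface can be thought of as a high-order series expansion of the nodal solution about the nominal design. The notation below is ours, not the article’s, and is only a conceptual sketch:<br />

```latex
u(p_1,\dots,p_n) \;\approx\; u_0
  + \sum_{i} \frac{\partial u}{\partial p_i}\,\Delta p_i
  + \frac{1}{2}\sum_{i,j} \frac{\partial^2 u}{\partial p_i\,\partial p_j}\,
    \Delta p_i\,\Delta p_j + \cdots
```

Evaluating a new design candidate then means evaluating this explicit function rather than re-meshing and re-solving.<br />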

Now post-process the analysis using the Solution<br />

Viewer, the DesignXplorer VT post-processor within<br />

the <strong>ANSYS</strong> Environment.<br />

Figure 6. Histogram showing sensitivity of stress, displacement<br />

and mass with respect to morphing parameters.<br />

DesignXplorer VT’s Solution Viewer allows the<br />

user to view the sensitivity of multiple results to the<br />

input parameters. In the histogram shown in Figure 6,<br />

we see the sensitivity of the Maximum Von Mises<br />

Stress, Maximum Displacement, and the Model Mass<br />

with respect to all of the mesh morphing parameters.<br />

The above sensitivities are relative ones. The<br />

“angle_position” parameter has essentially no effect<br />

on the stress.<br />


With the initial structure, the maximum von<br />

Mises stress is 305 MPa, the maximum displacement is<br />

0.06 mm and the mass is 116 g. The goal is to keep the<br />

maximum stress under 265 MPa while keeping the mass<br />

as low as possible. The sensitivities above give some ideas<br />

about the changes to be made. The parameter “hole_position”<br />

has the most influence on the stress and should be lowered.<br />

Moreover, it does not affect the mass, so it is a critical<br />

parameter for stress reduction only. The same holds for<br />

the inner_diam parameter’s effect on the mass: it is the<br />

most influential on mass and has little effect on the<br />

stresses. To reach the objective, both parameters should be lowered.<br />

Note the complexity of the response of this<br />

torsional damper with respect to the input parameters.<br />

As seen in the design curves, all but one of the<br />

responses to the input parameters are nonlinear. The<br />

response of the stresses to hole_diam has a<br />

definite kink in it. The reason is that the<br />

maximum stress jumps from one location to another<br />

as the parameters change. Simple Design of<br />

Experiments (DOE) curve fitting of results to selected<br />

samples would typically not have discovered this.<br />

One of the unique features of DesignXplorer VT is<br />

the instant, real-time availability of the entire finite<br />

element solution anywhere in the parameter<br />

domain. Pick any combination of parameter values<br />

and see a contour display of the finite element results.<br />

Unlike DOE, DesignXplorer VT<br />

already has these results available for the user.<br />

Figure 9 shows a contour of von Mises stress<br />

from directly inside the Solution Viewer, for the<br />

parameter hole_position at –3 mm, 0 and +3 mm. The<br />

color scheme is the same for all meshes, so the evolution<br />

with the given parameter is clearly visible.<br />


Figure 7. Design curves created by Solution Viewer.<br />


Additionally, DesignXplorer VT’s Solution Viewer<br />

allows you to view your parametric response as<br />

either design curves such as those in Figure 7, or as<br />

response surfaces as shown in Figure 8.<br />


Figure 8. Response surface created by Solution Viewer.<br />

Figure 9. Stress plots created by Solution Viewer with respect to varying<br />

parameter hole_position (at –3 mm, 0 and +3 mm).<br />

Software Highlights<br />

DesignXplorer VT also includes powerful<br />

optimization and tolerance capabilities. Using the<br />

optimization capabilities built into the Solution Viewer,<br />

optimize the part. As stated before, minimize the mass<br />

of the part while keeping its maximum stress under<br />

265 MPa.<br />

The optimization in this case needs only 63<br />

iterations. These are achieved in 60 seconds – an<br />

amazingly short time considering the number of<br />

iterations. This is because, as mentioned earlier,<br />

DesignXplorer VT has the entire finite element solution<br />

for anywhere in the parameter domain. No additional<br />

solutions are required.<br />

The final maximum stress is 265 MPa, more than<br />

10% below the initial stress value. The final mass has<br />

also been lowered to 101 g, a saving of 13% – a better<br />

solution in terms of both stress and mass.<br />

The powerful combination of ParaMesh and<br />

DesignXplorer VT opens doors to analyses that never<br />

before existed. Previously, you had to guess, or<br />

optimize manually. Now parametric analysis is<br />

available, no matter what environment is being used:<br />

Workbench for those that have parameterized CAD<br />

models, and ParaMesh with DesignXplorer VT for<br />

those with models without parametric CAD.<br />

That is the value this powerful combination of<br />

<strong>ANSYS</strong> ParaMesh and <strong>ANSYS</strong> DesignXplorer<br />

provides: more design insight, faster...even for legacy<br />

models. ■<br />

Procedure Overview<br />

1. From an existing <strong>ANSYS</strong> model, write a “.cdb” file (CDWRITE).<br />

2. Import the .cdb file into <strong>ANSYS</strong> ParaMesh.<br />

3. Create the mesh morphing parameters within <strong>ANSYS</strong> ParaMesh; the parameters are saved to a file (.rsx or .van).<br />

4. Declare the mesh morphing parameters by editing the <strong>ANSYS</strong> input file (file.dat), adding commands such as SXMETH. (Be sure to have consistent names.)<br />

5. Perform the Parametric Solution by using <strong>ANSYS</strong> DesignXplorer VT.<br />

6. Post-process the parametric results (SXPOST) and optimize the part.<br />
Follow these six steps in using ParaMesh and DesignXplorer VT to optimize legacy models, even if you do not have geometry or if<br />

the geometry is non-associative. ParaMesh easily prepares these models so they can be studied with DesignXplorer VT to arrive at<br />

quick insight into the design impact of varying the geometry.<br />

Tech File<br />

Demystifying Contact Elements<br />

Part 1 of 2:<br />

What they are, how they work and when to use them.<br />

By John Crawford<br />

Consulting Analyst<br />

If you analyze enough problems,<br />

chances are good that sooner or<br />

later you’ll run across one that<br />

requires the use of contact<br />

elements. Contact elements are<br />

used to simulate how one or<br />

more surfaces interact with each<br />

other. For most analysts, the first exposure to contact<br />

elements can be a little confusing because of the<br />

variety of elements and the multitude of special<br />

features that are available.<br />

We have to determine which contact elements<br />

are appropriate for our problem, resolve any<br />

convergence problems that might arise during<br />

solution, and check the results for reasonable and<br />

accurate answers. Let’s see if we can clear up some of<br />

the mysteries that surround the use of contact<br />

elements. We’ll begin by talking about the elements<br />

themselves.<br />

Node-to-Node Elements<br />

In the early days of finite element analysis, there was<br />

one type of contact element: the node-to-node variety.<br />

The early versions of node-to-node contact elements<br />

were CONTAC12 (2-D) and CONTAC52 (3-D). More<br />

recently, CONTA178 (2-D and 3-D) was introduced to<br />

encompass the capabilities of both of these elements<br />

and also introduce some new features, such as<br />

additional contact algorithms. Node-to-node contact<br />

elements are simple and solve relatively quickly. Their<br />

basic function is to monitor the movement of one node<br />

with respect to another node. When the gap between<br />

these nodes closes, the contact element allows load<br />

to transfer from one node to the other. What does this<br />

really mean and how does <strong>ANSYS</strong> know when the<br />

nodes are touching?<br />

Remember that an analysis is made up of one or<br />

more load steps, and each load step has one or more<br />

substeps. Within each substep there can be several<br />

nested layers of equilibrium iterations. The precise<br />

number and manner in which they are nested is<br />

dependent on the solver, how many nonlinear features<br />

are being used and several other things. Contact<br />

analyses are nonlinear and therefore require their own<br />

equilibrium iteration loop. At the end of each contact<br />

equilibrium iteration, <strong>ANSYS</strong> checks to see if the<br />

status of each contact element has changed. It also<br />

calculates a convergence value (usually force<br />

equilibrium) and compares it to the convergence<br />

criteria. If the element status has not changed and the<br />

convergence criteria have been met, <strong>ANSYS</strong><br />

determines that the solution for this iteration has<br />

converged and moves on to the next outer iteration<br />

loop, the next substep or the next load step, or stops<br />

solving altogether if the analysis is now complete.<br />

If at this point you’re a little confused, don’t<br />

worry. The critical ideas to remember from this are<br />

the following:<br />

• Contact analyses are nonlinear in nature<br />

• <strong>ANSYS</strong> performs a special equilibrium<br />

iteration “loop” when doing a contact analysis<br />

• Contact elements have a “status” that<br />

indicates if they are open, closed, sliding, etc.<br />

• <strong>ANSYS</strong> checks the element status and the<br />

convergence criteria at the end of each<br />

contact equilibrium iteration to determine if<br />

equilibrium has been achieved<br />


These characteristics are true for all types of contact<br />

elements. While they may seem a little primitive when<br />

compared with the newer contact elements, node-to-node<br />

contact elements have a lot going for them.<br />

They’ve been around long enough to have had their<br />

bugs worked out many years ago, and their extensive<br />

use over several decades means that there is a vast<br />

experience base to draw upon when setting up and<br />

debugging an analysis. CONTAC12 and CONTAC52<br />

can have nodes that are either coincident or non-coincident.<br />

While the majority of applications involve using<br />

non-coincident nodes, coincident nodes can be useful<br />

for certain analyses. If coincident nodes are used,<br />

the orientation of the contact “surface” that exists<br />

between the two nodes must be defined. The initial<br />

condition gap or interference can be provided by the<br />

user as being either positive (gap) or negative (interference),<br />

or automatically calculated from the relative<br />

positions of the nodes.<br />

Node-to-node contact is also available in<br />

COMBIN40. COMBIN40 is a rather unique element<br />

because it also includes a spring-slider, a damper<br />

(which works in parallel with the spring-slider) and<br />

a mass at each node. Any of these features can be<br />

used alone or simultaneously with any or all of the<br />

other features.<br />

While node-to-node contact elements are very<br />

useful, there are some limitations that must be kept in<br />

mind when using them. One limitation is that the<br />

orientation of the gap is not updated when large<br />

deflection analyses are performed. Another limitation<br />

is that these elements do not account for moment<br />

equilibrium. This does not present a problem when a<br />

line drawn between the nodes is normal to the contact<br />

surface because in this instance the moments are<br />

zero, but care should be taken in each analysis to<br />

recognize whether this is the case or not. If not, it is<br />

important to consider what effect this might have on<br />

the results. It is the responsibility of the analyst to<br />

recognize whether this condition is present and<br />

whether it introduces an unacceptable error that<br />

invalidates the usefulness of the analysis.<br />

Node-to-node elements can always be generated<br />

manually, and, depending on the model, you can often<br />

use the EINTF command to make them as well.<br />
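Both routes can be sketched as follows. The element type and real constant IDs, node numbers, interface location and tolerance are illustrative assumptions, not values from the article:<br />

```apdl
! Node-to-node contact with CONTA178
ET,4,178                  ! define the contact element type
R,4                       ! real constant set (contact stiffness etc. may go here)
TYPE,4 $ REAL,4
! Manual generation: one element between two specific nodes (assumed numbers)
E,101,102
! Automatic generation: pair up nearly coincident selected nodes
NSEL,S,LOC,X,9.95,10.05   ! nodes near the interface (assumed location)
EINTF,0.05                ! create elements between selected nodes within tolerance
ALLSEL
```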

Node-to-Surface Elements<br />

The next evolution in contact elements was the<br />

introduction of node-to-surface contact elements,<br />

such as CONTAC26 (2-D), CONTAC48 (2-D),<br />

CONTAC49 (3-D), and the recent addition of<br />

CONTA175 (2-D and 3-D). The major enhancement<br />

offered by node-to-surface contact elements is that<br />

they allow a node to contact anywhere along an edge<br />

(in 2-D) or a surface (in 3-D). Rather than a node being<br />

confined to contacting a specific node, a node can<br />

contact the edge of a certain element. This has<br />

significant benefits when objects translate or rotate<br />

relative to each other. Node-to-surface contact<br />

elements are capable of simulating large relative<br />

movements with accuracy.<br />

Because CONTA175 includes all the capabilities<br />

of the other node-to-surface contact elements and<br />

has other features that these elements do not have,<br />

CONTA175 will replace the other node-to-surface<br />

elements in future versions of <strong>ANSYS</strong>. Beginning in<br />

<strong>ANSYS</strong> 8.1, CONTAC26, CONTAC48 and CONTAC49<br />

will be undocumented, and they will eventually be<br />

removed from <strong>ANSYS</strong>.<br />

There are several ways to generate node-to-surface<br />

contact elements. They can be made<br />

manually, but this becomes impractical when making<br />

more than a few elements. GCGEN and ESURF are<br />

commands that are frequently used to generate<br />

node-to-surface contact elements, with GCGEN being<br />

the easiest and quickest way to make CONTAC48 and<br />

CONTAC49 node-to-surface contact elements, while<br />

ESURF is used to make CONTA175 node-to-surface<br />

elements. To use GCGEN, you make two components,<br />

one that contains the nodes from one of the contact<br />

surfaces, and another that contains the elements from<br />

the other contact surface, and then use GCGEN to<br />

automatically generate node-to-surface contact elements<br />

between every node and every element that are<br />

in these components. To use ESURF, you select the<br />

elements that the CONTA175 elements will be<br />

attached to and their nodes that are on the surface<br />

you wish to place the contact elements onto, making<br />

sure that you have the proper element attributes active<br />

(TYPE, REAL and MAT), and then issue the ESURF<br />

command.<br />
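The ESURF recipe just described might look like the sketch below. The type, real constant and material IDs and the face location are assumptions for illustration:<br />

```apdl
! Overlay CONTA175 elements on a selected face
ET,3,175                  ! CONTA175 node-to-surface contact
ESEL,S,TYPE,,1            ! underlying solid elements on the contact side (assumed)
NSLE,S                    ! their nodes
NSEL,R,LOC,Y,0            ! keep only nodes on the contacting face (assumed at Y=0)
TYPE,3 $ REAL,3 $ MAT,1   ! activate the CONTA175 attributes
ESURF                     ! generate the contact elements on the selected nodes
ALLSEL
```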

Last but not least, the Contact Wizard can be<br />

used to generate node-to-surface contact elements<br />

and is usually the easiest and quickest way of making<br />

them.<br />


Surface-to-Surface Elements<br />

The latest evolution of contact element technology has<br />

been in the area of surface-to-surface contact. This<br />

allows contact to take place between one or more<br />

edges in 2-D, or one or more surfaces in 3-D. There<br />

are several important characteristics that make<br />

surface-to-surface contact elements very different<br />

from their less sophisticated ancestors.<br />

• Surface-to-surface contact is not defined by a<br />

single element, but by two types of elements<br />

called targets and contacts.<br />

• Any number of target and contact elements<br />

can be identified as being a set or group.<br />

Contact can take place between any contact<br />

elements and any target elements that are in<br />

this group.<br />

• <strong>ANSYS</strong> uses the real constant number to<br />

identify the target and contact elements that<br />

are in a group. All target and contact elements<br />

in this group have the same real constant<br />

number.<br />

Two-dimensional contact problems can be<br />

simulated using either CONTA171 or CONTA172 with<br />

TARGE169, while three-dimensional problems would<br />

use either CONTA173 or CONTA174 with TARGE170.<br />

CONTA171 and CONTA173 are appropriate for edges<br />

and surfaces made from linear (no midside nodes)<br />

elements while CONTA172 and CONTA174 can be<br />

used with edges and surfaces made from quadratic<br />

(having midside nodes) elements. Both CONTA172<br />

and CONTA174 can be used in a degenerate form on<br />

surfaces made from linear elements.<br />

The introduction of surface-to-surface contact<br />

elements has brought about big improvements in<br />

solution efficiency and has also broadened the types<br />

of contact problems that can be modeled. They offer<br />

many new and improved features, such as the ability<br />

to contact and then bond two surfaces together,<br />

automatic opening or closing of gaps to a uniform<br />

value, and a variety of contact algorithms, to name just<br />

a few.<br />

You can generate surface-to-surface contact<br />

elements by using series NSEL, ESEL and ESURF<br />

commands. The Contact Wizard automates these<br />

steps and makes the generation of surface-to-surface<br />

contact elements quick and easy in both 2-D and 3-D.<br />
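The NSEL/ESEL/ESURF sequence for a 3-D pair might be sketched as follows; the element type numbers and face locations are assumptions, and the shared real constant set is what ties the pair together:<br />

```apdl
! 3-D surface-to-surface contact pair: TARGE170 + CONTA174
ET,2,TARGE170
ET,3,CONTA174
R,5                       ! both sides reference real constant set 5
! Target side
ESEL,S,TYPE,,1            ! solids of the (stiffer) target body (assumed type)
NSLE,S $ NSEL,R,LOC,Z,0   ! nodes on its contacting face (assumed at Z=0)
TYPE,2 $ REAL,5
ESURF
! Contact side
ESEL,S,TYPE,,4            ! solids of the contact body (assumed type)
NSLE,S $ NSEL,R,LOC,Z,0
TYPE,3 $ REAL,5
ESURF
ALLSEL
```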

Now that we have been introduced to the contact<br />

elements that are at our disposal, we’ll follow up next<br />

time with some helpful hints on how to use them. ■<br />

Part two of this article, to appear in the next issue of <strong>ANSYS</strong><br />

<strong>Solutions</strong>, will discuss various aspects of using contact<br />

elements, including modeling tips and setting appropriate<br />

stiffness.<br />


Want to continue receiving<br />

<strong>ANSYS</strong> <strong>Solutions</strong>?<br />

Visit www.ansys.com/subscribe/<br />

to update your information.<br />

Plus, you’ll have the chance to sign<br />

up to receive CFX eNews and email<br />

alerts when the latest electronic<br />

version of <strong>ANSYS</strong> <strong>Solutions</strong><br />

becomes available!<br />

www.ansys.com <strong>ANSYS</strong> <strong>Solutions</strong> | Summer 2004

Tips and Techniques<br />


Contact<br />

Defaults in<br />

Workbench<br />

and <strong>ANSYS</strong><br />

Intelligent default<br />

settings solve<br />

common problems<br />

fast with minimal<br />

user intervention.<br />

By <strong>ANSYS</strong>, Inc. <strong>Technical</strong> Support<br />

As every experienced FEA analyst knows, no two<br />

contact problems are exactly alike, so there is no<br />

silver bullet combination of KEYOPT and real<br />

constant settings that will successfully work for all<br />

problems. That explains the many features<br />

available today within the contact elements. It also<br />

explains, in part, the rationale behind the different<br />

default settings sometimes found in the different<br />

environments. As migration between Workbench<br />

and <strong>ANSYS</strong> environments progresses, it is<br />

important for analysts to recognize that, although<br />

the contact technology used in both of these<br />

environments is exactly the same, some of the<br />

default KEYOPT and real constant settings are not.<br />

Tables 1 and 2 summarize all surface-to-surface<br />

contact element (CONTA171–174)<br />

KEYOPTs and real<br />

constant properties<br />

with their respective<br />

default settings in each environment. Those that<br />

have different defaults in the different environments<br />

are highlighted in bold italic.<br />

KEYOPT(1): Select Degrees of Freedom (DOF)<br />

This option gives you the freedom to assign the<br />

contact DOF set consistent with the physics of the<br />

underlying elements. <strong>ANSYS</strong> surface-to-surface<br />

contact technology offers an impressive<br />

combination of structural, thermal, electric and<br />

magnetic capabilities. When building pairs through<br />

the <strong>ANSYS</strong> environment with traditional <strong>ANSYS</strong><br />

Parametric Design Language (APDL), users must<br />

Table 1: 8.0 Default Contact KEYOPTs<br />
KEYOPT | Description | <strong>ANSYS</strong> APDL Default | Contact Wizard Default | Workbench Default Linear (bonded, no sep) | Workbench Default Nonlinear (standard, rough)<br />
1 | Selects DOF | manual | automatic | automatic | automatic<br />
2 | Contact algorithm | Aug Lagrange | Aug Lagrange | <strong><em>pure penalty</em></strong> | <strong><em>pure penalty</em></strong><br />
3 | Stress state when superelement is present | no super elem | no super elem | n/a | n/a<br />
4 | Location of contact detection point | Gauss | Gauss | Gauss | Gauss<br />
5 | CNOF/ICONT adjustment | no adjust | no adjust | no adjust | no adjust<br />
6 | (blank) | | | |<br />
7 | Element level time increment control | no control | no control | no control | no control<br />
8 | Asymmetric contact selection | no action | no action | no action | no action<br />
9 | Effect of initial penetration or gap | include all | include all | <strong><em>exclude all</em></strong> | <strong><em>include all/ramped</em></strong><br />
10 | Contact stiffness update | btwn loadsteps | <strong><em>btwn substps</em></strong> | btwn loadsteps | btwn loadsteps<br />
11 | Beam/shell thickness effect | exclude | exclude | exclude | exclude<br />
12 | Behavior of contact surface | standard | standard | <strong><em>bonded</em></strong> | <strong><em>n/a</em></strong><br />


Table 2: 8.0 Default Contact Real Constants<br />
No. | Name | Description | <strong>ANSYS</strong> APDL | Contact Wizard | Workbench<br />
1 | R1 | Target circle radius | 0 | n/a | n/a<br />
2 | R2 | Superelement thickness | 1 | 1 | n/a<br />
3 | FKN | Normal penalty stiffness factor | 1 | 1 | <strong><em>Note 1</em></strong><br />
4 | FTOLN | Penetration tolerance factor | 0.1 | 0.1 | 0.1<br />
5 | ICONT | Initial contact closure | 0 | 0 | 0<br />
6 | PINB | Pinball region | Note 2 | Note 2 | Note 2<br />
7 | PMAX | Upper limit of initial penetration | 0 | 0 | 0<br />
8 | PMIN | Lower limit of initial penetration | 0 | 0 | 0<br />
9 | TAUMAX | Maximum friction stress | 1.00E+20 | 1.00E+20 | 1.00E+20<br />
10 | CNOF | Contact surface offset | 0 | 0 | 0<br />
11 | FKOP | Contact opening stiffness | 1 | 1 | 1<br />
12 | FKT | Tangent penalty stiffness | 1 | 1 | 1<br />
13 | COHE | Contact cohesion | 0 | 0 | 0<br />
14 | TCC | Thermal contact conductance | 0 | 0 | <strong><em>Note 3</em></strong><br />
15 | FHTG | Frictional heating factor | 1 | 1 | 1<br />
16 | SBCT | Stefan-Boltzmann constant | 0 | 0 | n/a<br />
17 | RDVF | Radiation view factor | 1 | 1 | n/a<br />
18 | FWGT | Heat distribution weighting factor | 0.5 | 0.5 | 0.5<br />
19 | ECC | Electric contact conductance | 0 | 0 | n/a<br />
20 | FHEG | Joule dissipation weighting factor | 1 | 1 | n/a<br />
21 | FACT | Static/dynamic ratio | 1 | 1 | 1<br />
22 | DC | Exponential decay coefficient | 0 | 0 | 0<br />
23 | SLTO | Allowable elastic slip | 1% | 1% | 1%<br />
25 | TOLS | Target edge extension factor | Note 4 | Note 4 | Note 4<br />
26 | MCC | Magnetic contact permeance | 0 | 0 | n/a<br />

Notes:<br />

1. FKN = 10 if only linear contact is active (bonded, no sep). If any nonlinear contact is active, all regions will have FKN = 1 (including bonded, no sep).<br />

2. Depends on contact behavior, rigid vs. flex target, KEYOPT (9) and NLGEOM ON/OFF.<br />

3. Calculated as a function of highest conductivity and overall model size.<br />

4. 10% of target length for NLGEOM,OFF. 2% of target length for NLGEOM,ON.<br />


set this option manually. The default will always be<br />

KEYOPT(1) = 0 (for UX,UY). When building contact<br />

pairs in the <strong>ANSYS</strong> environment using the contact<br />

wizard, KEYOPT(1) is set automatically according<br />

to the DOF set of the underlying element. In<br />

Workbench, this option also is set automatically,<br />

depending on the underlying element DOF set.<br />
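For illustration only (the element type number is hypothetical, and the KEYOPT value should be confirmed against the element documentation for the DOF set in use), a coupled structural-thermal pair built through APDL might override the manual default like this:<br />

```apdl
! CONTA172 on structural-thermal (UX,UY,TEMP) underlying elements
KEYOPT,3,1,1    ! override the manual default of KEYOPT(1) = 0 (UX,UY only)
```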

KEYOPT(2): Contact Algorithm<br />

<strong>ANSYS</strong> contact technology offers many algorithms<br />

to control how the code enforces compatibility at a<br />

contacting interface.<br />

The penalty method (KEYOPT(2) = 1) is a<br />

traditional algorithm that enforces contact<br />

compatibility by using a contact “spring” to<br />

establish a relationship between the two surfaces.<br />

The spring stiffness is called the penalty parameter<br />

or, more commonly, the contact stiffness. The<br />

spring is inactive when the surfaces are apart (open<br />

status), and becomes active when the surfaces<br />

begin to interpenetrate.<br />

The augmented Lagrange method (KEYOPT(2)<br />

= 0) uses an iterative series of penalty methods to<br />

enforce contact compatibility. Contact tractions<br />

(pressure and friction stresses) are augmented<br />

during equilibrium iterations so that final<br />

penetration is smaller than the allowable tolerance.<br />

This offers better conditioning than the pure penalty<br />

method and is less sensitive to the magnitude of<br />

contact stiffness used, but may require more<br />

iterations than the penalty method.<br />

The Multi-Point Constraint (MPC) Method<br />

(KEYOPT(2) = 2) enforces contact compatibility by<br />

using internally generated constraint equations to<br />

establish a relationship between the two surfaces.<br />

The DOFs of the contact surface nodes are<br />

eliminated. No normal or tangential stiffness is<br />

required. For small deformation problems, no<br />


iterations are needed in solving system equations.<br />

Since there is no penetration or contact sliding<br />

within a tolerance, MPC represents “true linear<br />

contact” behavior. For large deformation problems,<br />

the MPC equations are updated during each<br />

iteration. This method applies to bonded surface<br />

behavior only. It is also useful for building surface<br />

constraint relationships similar to CERIG and RBE3.<br />

MPC is available as a standard option when<br />

modeling bonded contact in both <strong>ANSYS</strong> and<br />

Workbench environments.<br />

The Pure Lagrange multiplier method<br />

(KEYOPT(2) = 3) adds an extra degree of freedom<br />

(contact pressure) to satisfy contact compatibility.<br />

Pure Lagrange enforces near-zero penetration with<br />

pressure DOF. Unlike the penalty and augmented<br />

Lagrange algorithms, it does not require a normal<br />

contact stiffness. Pure Lagrange does require a<br />

direct solver, can be more computationally<br />

expensive and can have convergence difficulties<br />

related to overconstraining, but it is a very useful<br />

algorithm when zero penetration is critical. It also<br />

can be combined with the penalty algorithm in the<br />

tangential direction (KEYOPT(2) = 4), when zero<br />

penetration is critical, and friction is also present.<br />

The <strong>ANSYS</strong> environment uses the augmented<br />

Lagrange by default. The Workbench environment<br />

currently uses the penalty method, but the default<br />

can be changed via the Options Menu at 8.1. MPC<br />

is available as a standard alternative in both<br />

environments. The Pure Lagrange options are<br />

available in <strong>ANSYS</strong>; in Workbench they previously<br />

could be accessed only via the pre-processor<br />

command builder, but at version 8.1 they became<br />

available directly in the Workbench environment.<br />

Table 3 summarizes<br />

all the algorithms with pros and cons of each.<br />
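In APDL, the algorithm for a pair is selected through KEYOPT(2) on the contact element type. As a sketch (the element type number is hypothetical, and only the last value issued for a given option takes effect):<br />

```apdl
! Pick ONE of the following for contact element type 3:
KEYOPT,3,2,0    ! augmented Lagrange (ANSYS default)
KEYOPT,3,2,1    ! pure penalty (Workbench default)
KEYOPT,3,2,2    ! MPC (bonded/no-separation behavior only)
KEYOPT,3,2,3    ! pure Lagrange on normal and tangent
KEYOPT,3,2,4    ! pure Lagrange on normal, penalty on tangent
```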

KEYOPT(9): Effect of Initial Penetration or Gap<br />

Properly accounting for or controlling interferences<br />

and gaps can sometimes be the difference<br />

between success and failure in simulating a<br />

complicated contact relationship. There are several<br />

contact options available to control how the code<br />

accounts for initial interference or gap effects:<br />

(0) Include everything: Include an initial<br />

interference from the geometry and the<br />

specified offset (if any).<br />

(1) Exclude everything: Ignore all initial<br />

interference effects.<br />

(2) Include with ramped effects: Ramp the<br />

interference to enhance convergence.<br />

(3) Include offset only: Base initial interference<br />

on specified offset only.<br />

(4) Include offset only w/ ramp: Base initial<br />

interference on specified offset only, and<br />

ramp the interference effect to enhance<br />

convergence.<br />

Table 3: Contact Algorithms<br />
Algorithm | Pros | Cons | When to Use<br />
Pure Penalty | Offers easiest convergence in least number of iterations | Requires contact stiffness and allowance for some finite penetration | Helpful when contact convergence is a challenge and minimal penetration is acceptable (default in Workbench)<br />
Augmented Lagrange | Minimizes penetration; better conditioning than penalty; less sensitive to contact stiffness | Might require more iterations | The default for surf-to-surf and node-to-surf in <strong>ANSYS</strong>, as it has proven to produce the best quality results in the most common applications (default in <strong>ANSYS</strong>)<br />
Pure Lagrange | Offers near-zero penetration; zero elastic slip (no contact stiffness required) | Might require more iterations; might also require adjustment to chatter control parameters unique to this algorithm; can produce overconstraints in model | When zero penetration is critical<br />
Pure Lagrange on Normal; Penalty on Tangent | Same as Pure Lagrange, plus simulation of friction is handled most efficiently | Same as Pure Lagrange | When zero penetration is critical and friction is present<br />
Multipoint Constraint (MPC) | More efficient than traditional bonded contact; offers contact between mixed element types; offers CERIG/RBE3 type constraints and surface constraint applications | Can produce overconstraints in model | Recommended for large bonded contact models to enhance run time and for contact between mixed element types<br />

www.ansys.com <strong>ANSYS</strong> <strong>Solutions</strong> | Summer 2004

In <strong>ANSYS</strong>, the default KEYOPT(9) = 0 is to<br />

include everything. In Workbench, the default is to<br />

exclude everything (1) when linear contact (bonded,<br />

no separation) is defined and include with ramped<br />

effects (2) when nonlinear contact (frictional,<br />

frictionless, rough) is defined.<br />
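For example, to reproduce the ramped Workbench nonlinear behavior on an APDL-built press-fit pair (the element type number and substep counts here are hypothetical):<br />

```apdl
KEYOPT,3,9,2     ! include initial interference, ramped over the load step
NSUBST,10,20,5   ! enough substeps for the ramp to aid convergence
```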

KEYOPT(10): Contact Stiffness Update<br />

When using the penalty and/or augmented<br />

Lagrange method, contact stiffness has long been<br />

recognized as a critical property that influences<br />

both accuracy and convergence. Too high a<br />

stiffness will ultimately lead to convergence<br />

difficulty; too low a stiffness will result in<br />

over-penetration and an inaccurate assessment of<br />

surface pressures and stresses at the interface.<br />

In an effort to arrive at a good balance<br />

between these extremes, automatic stiffness<br />

updating between loadsteps (KEYOPT(10) = 0) and<br />

substeps (KEYOPT(10) = 1), or between iterations<br />

(KEYOPT(10) = 2) was introduced as an<br />

enhancement to traditional trial-and-error methods.<br />

In <strong>ANSYS</strong>, when contact is built via APDL, the<br />

default is to update stiffness between loadsteps.<br />

In <strong>ANSYS</strong>, when contact is built via the Wizard, the<br />

default has been changed to update between<br />

substeps. This is considered to produce the<br />

most robust contact simulation in most cases.<br />

In Workbench, the default behavior is still between<br />

loadsteps, but the default can be changed via the<br />

Options Menu at Version 8.1. These defaults may<br />

change in future releases as further enhancements<br />

are made.<br />
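To reproduce the Wizard behavior on an APDL-built pair, for instance (element type number hypothetical):<br />

```apdl
KEYOPT,3,10,1   ! update contact stiffness between substeps (Wizard default)
! KEYOPT,3,10,2 would update between equilibrium iterations instead
```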

KEYOPT(12): Behavior of Contact Surface<br />

<strong>ANSYS</strong> contact technology offers a rich library of<br />

surface behavior options to simulate every possible<br />

situation. These options are as follows:<br />

(0) Standard: (Referred to as “Frictionless” or<br />

“Frictional” in Workbench) normal contact<br />

closing and opening behavior, with normal<br />

sticking/sliding friction behavior when<br />

nonzero friction coefficient is defined.<br />

(1) Rough: Normal contact closing and<br />

opening behavior, but no sliding can occur<br />

(similar to having an infinite coefficient of<br />

friction).<br />

(2) No Separation: Target and contact<br />

surfaces are tied once contact is<br />

established (sliding is permitted). This is not<br />

available as a standard option in Workbench,<br />

but can be accessed via the<br />

pre-processor command builder.<br />

(3) Bonded: Target and contact surfaces are<br />

“glued” once contact is established.<br />

(4) No Separation (always): (Referred to<br />

simply as “No Separation” in Workbench)<br />

Any contact detection points initially inside<br />

the pinball region or that come into contact<br />

are tied in the normal direction (sliding is<br />

permitted).<br />

(5) Bonded Contact (always): (Referred to<br />

simply as “Bonded” in Workbench) Any<br />

contact detection points initially inside the<br />

pinball region or that come into contact are<br />

bonded. (DesignSpace default)<br />

(6) Bonded Contact (initial contact): Bonds<br />

surfaces ONLY in initial contact, initially<br />

open surfaces will remain open. This is<br />

not available as a standard option in<br />

Workbench, but can be accessed via the<br />

pre-processor command builder.<br />

The default surface behavior in <strong>ANSYS</strong> is<br />

nonlinear “standard” for simulating the most<br />

general normal contact closing and opening<br />

behavior, with normal sticking/sliding friction.<br />

In Workbench, where contact pairs are set up with<br />

automatic detection to simulate an assembly, the<br />

default behavior is linear Bonded Contact (Always);<br />

this default can be changed via the Options Menu at<br />

Version 8.1.<br />
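As an example of what might be entered in the pre-processor command builder to obtain one of the non-standard behaviors, such as No Separation (element type number hypothetical):<br />

```apdl
KEYOPT,3,12,2   ! no separation: tied in the normal direction, sliding permitted
```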

Real Constant(3): Normal Penalty Stiffness<br />

Factor (FKN)<br />

Users control the initial contact stiffness used by<br />

multiplying the calculated value by a factor, FKN.<br />

The default value for FKN used in <strong>ANSYS</strong> (APDL or<br />

Wizard) is 1.0. In Workbench, FKN = 10 if only linear<br />

contact is active (bonded or no separation). If any<br />

nonlinear contact is active, all regions will have FKN<br />

= 1 (including bonded and no separation).<br />
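In APDL, FKN is real constant number 3 of the pair's real constant set, so a softer stiffness for a bending-dominated part might be applied as follows (the set number and value are hypothetical):<br />

```apdl
RMODIF,5,3,0.1   ! FKN = 0.1: reduce normal stiffness to ease convergence
```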

Real Constant(14): Thermal Contact<br />

Conductance (TCC)<br />

This constant dictates the thermal resistance<br />

across the interface of contacting bodies in<br />

applications involving thermal analysis. The default<br />

value in <strong>ANSYS</strong> for TCC is zero (perfect insulator).<br />

In Workbench, the default is automatically<br />

calculated as a function of the highest thermal<br />

conductivity of the contacting parts and the<br />

overall model size, thus essentially modeling perfect<br />

thermal contact. ■<br />


Guest Commentary<br />


Quality<br />

Assurance<br />

Putting<br />

in Finite Element Analysis<br />

Part 2 of 2:<br />

Step-by-step ways to best implement a quality assurance program.<br />

By Vince Adams<br />

Director of Analysis Services<br />

IMPACT Engineering<br />

<strong>Solutions</strong>, Inc.<br />

The NAFEMS document<br />

“Management of Finite Element<br />

Analysis—Guidelines to Best<br />

Practice” states that a quality<br />

assurance program should be<br />

developed to serve an organization, not vice-versa. To<br />

address this concern and the barriers described in Part<br />

1 of this series, IMPACT Engineering <strong>Solutions</strong> has<br />

developed a suite of QA tools that can be customized<br />

and scaled to meet the needs of a wide range of<br />

product development teams and industries. This suite<br />

of tools for QA includes: process audits, management<br />

education, user skill-level assessment, user education/<br />

continuous improvement, pre- and post-analysis<br />

checklists, project documentation, data management,<br />

and analysis correlation guidelines.<br />

Process Audit<br />

The first step in establishing a QA program should be to<br />

document existing processes and company goals,<br />

including technical, organizational and competitive<br />

goals. Developing an understanding of how products<br />

are developed, what the historical issues and<br />

challenges have been, what interactions exist, and how<br />

simulation technologies can best impact a company’s<br />

bottom line should precede any recommendations.<br />

A process audit should evaluate not only the tools<br />

used by an engineering department but also identify<br />

additional state-of-the-art tools that can impact the<br />

design process or allow simulation activities to grow<br />

beyond current limitations. A process audit should help<br />

ensure that all groups involved in the design process<br />

are on the same page. Finally, the process audit should<br />

put some monetary values to typical tasks so that<br />

potential savings and opportunities for gains can be<br />

more readily identified. The report generated from the<br />

process audit should be a living document that<br />

allows periodic review of critical components and<br />

observations.<br />

Management Education<br />

In a recent survey, responding users indicated that<br />

management, for various reasons, was the greatest<br />

barrier to the success of FEA in product design. Helping<br />

managers set proper expectations regarding the<br />

capabilities and limitations of analysis is often the<br />

single most important step in improving the quality<br />

and value of simulation at a company. Management<br />

training should also include a discussion of the<br />

validity of assumptions and results, quality control<br />

concepts and an overview of the skills that they<br />

should expect their team members to possess to be<br />

productive with FEA.<br />

User Skill Level Assessment<br />

Skill level assessment may be the most difficult and<br />

controversial component of a QA program, and the<br />

one with the most far-reaching impact. Skill level<br />

assessment isn’t technically difficult as there are<br />

several areas of expertise that are fundamental to the<br />

successful use of analysis. The difficulty lies in the<br />

potential for perceived threat. Consequently, users<br />

must be shown that the program is not a test but a<br />

tool to help them better understand their skills and<br />

needs. The program we have developed is<br />

composed of four sections in which candidates:<br />

• Demonstrate a basic understanding of<br />

engineering mechanics (failure theory, stress<br />

concepts, material properties, etc.)<br />

• Show a working knowledge of finite element<br />

analysis (terminology, concepts, capabilities,<br />

meshing, boundary conditions, etc.)<br />

• Solve hands-on sample problems (using FEA<br />

tools that are to be part of their job)<br />

• Present a portfolio of past work they have<br />

performed (reports, screen shots, models,<br />

plots, etc.)<br />

The assessment report provided back to<br />

management should include not only performance<br />

results but also a plan for improvement for that<br />

individual (including courses and other support<br />

options, as well as special skills that might be shared<br />

with others in the organization). Finally, some sort of<br />

indication of a user’s level of competence and/or<br />

approved responsibility level should be noted,<br />

without negative connotations that could be<br />

misconstrued.<br />

User Education/Continuous Improvement<br />

A proactive and forward-thinking QA program should<br />

identify areas of growth and knowledge required to<br />


keep skills of users sharp. A company can’t be<br />

confident that users are state-of-the-art in their<br />

techniques and tools unless they are exposed to<br />

people and techniques outside of their familiar<br />

surroundings. The process audit conducted at the<br />

beginning of the program should identify critical<br />

skills and techniques that are needed to maximize the<br />

benefits of simulation, while the skills assessment<br />

should identify which users need work in those<br />

techniques. Employee growth should be planned, not<br />

expected to happen haphazardly. Knowledge and<br />

documentation of the next plateau for each user or<br />

group of users, with clear milestones, will help ensure<br />

that quality is maintained. It is also preferable to insist<br />

that all users at an organization go through a standard<br />

set of courses so that all are using the same language<br />

and have been exposed to the same data.<br />

Pre- and Post-Analysis Checklists<br />

NAFEMS has developed an excellent starting point<br />

for companies looking to implement checklists as a<br />

quality control tool. We suggest taking these a step<br />

further and customizing them for a particular<br />

company’s tools and analysis environment. As part of<br />

a total QA program, clients should be able to access<br />

these forms online via an Intranet or the Internet, and<br />

they should be made available as part of the project<br />

documentation as described below. We’ve found that<br />

when these simple checking tools are bypassed,<br />

minor errors in data entry and interpretation can cause<br />

major problems in the decisions based on FE data.<br />

Project Documentation<br />

Too few companies have standard report formats for<br />

analysis while many companies don’t mandate reports<br />

at all. Beyond the obvious loss of intellectual capital a<br />

company will experience when an analyst leaves the<br />

organization without documenting their work, a<br />

company loses one of the most important quality<br />

control tools in the analysis process when reports<br />

aren’t completed. A QA program for analysis must<br />

include a report format that transcends groups,<br />

specializations, or departments. Analysis data on<br />

seemingly unrelated components could still provide<br />

insight and prevent repetition of work. In addition to<br />

providing details of the recent work, a project report<br />

should include references to similar historical projects,<br />

test data and correlation criteria. A report should<br />

indicate the source of inputs and assumptions as well<br />

as comment on the validity of these assumptions.<br />

Additionally, a company would benefit from linking test<br />

and analysis reports, even to the point of using similar<br />

formats for the two related tasks.<br />

Data Management<br />

As companies begin to evaluate their PLM (product<br />

lifecycle management) structures, the organization of<br />

analysis or other product performance data must be<br />

included in the initial planning. D.H. Brown and<br />

Associates have investigated the needs of CAE data<br />

management and have found that structured PDM<br />

(product data management) systems may not be up to<br />

the task. PDM systems were typically developed to<br />

manage revisions and bill hierarchies, not the<br />

simplified geometries, results formats, and validation<br />

databases required for an analysis program. While<br />

every company must develop its own PLM and data<br />

management system that best fits within their<br />

organization, a QA program for analysis must tap into<br />

that system, formalize it if need be and provide means<br />

for policing the archiving of analysis data so that a<br />

company’s intellectual property and investment in<br />

simulation are secure.<br />

Analysis Correlation Guidelines<br />

Unfortunately, companies rarely correlate their finite<br />

element data with physical testing. And when testing<br />

is used, set-ups are often inappropriate, proper<br />

procedures are not followed and sufficient data points<br />

are not gathered. Therefore, thought should be given to<br />

multiple validation points to ensure that boundary<br />

conditions, material properties and geometry are all<br />

properly specified to provide consistent correlation.<br />

The analyst and the test technician should work<br />

closely together to devise a test intended to correlate<br />

the analysis modeling assumptions. Care should be<br />

taken to evaluate the validity of constraints in the<br />

model, especially fixed constraints, as these can lead<br />

to gross variations in stiffness when comparing test<br />

results to analysis results. A QA program for analysis<br />

should bridge the gap between test and analysis, and<br />

document procedures for correlating FE data. ■<br />

Part 1 of this article in the previous issue of <strong>ANSYS</strong> <strong>Solutions</strong><br />

discussed ways to overcome barriers to effective quality<br />

assurance in finite element analysis.<br />

Vince Adams is co-author of the book “Building Better<br />

Products with Finite Element Analysis” and the inaugural<br />

chairman for the NAFEMS North American Steering<br />

Committee. He currently serves as Director of Analysis<br />

Services at IMPACT Engineering <strong>Solutions</strong>, Inc.<br />

(www.impactengsol.com), a consulting firm providing design<br />

and analysis services and support to industrial clients in a wide<br />

range of industries around the world. Vince can be reached at<br />

vadams@impactengsol.com.<br />

Quality Doesn’t Happen by Accident<br />

Even if their product lines are similar, no two companies<br />

operate alike. Consequently, no QA program can be assumed<br />

valid for all companies without running the risk of forcing an<br />

engineering organization to cater to the needs of the system.<br />

In our training programs, we identify geometry, properties,<br />

mesh and boundary conditions as the key assumptions in any<br />

analysis and the most likely sources of error. If nothing else, a<br />

QA procedure for FEA must provide a crosscheck on these<br />

factors. Ideally, all users in an organization will possess all the<br />

skills required to competently perform analyses. However, as<br />

the technology proliferates further into the design process, as it<br />

should, the likelihood of that required ideal skill level becomes<br />

less and less. So management of engineering organizations<br />

need to foster a quality environment so that analysis can be<br />

used to its full potential. Remember, quality doesn’t happen by<br />

accident. Only with planning, standardization, education and<br />

diligent follow-through can a company truly feel confident that<br />

quality in FEA is assured.<br />


Unparalleled CAE performance and infrastructure<br />

The fastest CAE performance on the planet just got more versatile<br />

Choose between:<br />

■ Intel ® Itanium ® processor’s fastest floating point in the industry.<br />

■ The CAE software portfolio offered by HP-UX, Linux and<br />

Microsoft ® Windows ® .<br />

■ The cluster capability of HP Integrity servers.<br />

■ The value offered by HP ProLiant industry-standard servers.<br />

■ The pre/post processing power offered by HP Workstations.<br />

Whatever your choice, HP and our CAE partners deliver.<br />

www.hp.com/go/cae<br />

Screen images courtesy of (left to right): <strong>ANSYS</strong> ® ICEM CFD Cabin Modeler and<br />

<strong>ANSYS</strong> ® CFX ®<br />

Intel and Itanium are registered trademarks of the Intel Corporation<br />

in the United States and other countries.<br />

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.<br />

© 2004 Hewlett-Packard Development Company, L.P.
