Developments in Ceramic Materials Research
M. A. Sheik
exceeds 1 million in the HITCO Unit Cell model. Moreover, a realistic simulation of HITCO samples would involve the analysis of a full laminate, such as the one formed at the end in Figure 27. This would mean multiplying the element count of a single RVE Unit Cell by the total number of Unit Cells used to form the laminate, e.g., 18 in the example shown in Figure 27. It is emphasized here that even on a much more powerful PC (3 GHz, single processor, 1 GB physical memory), a thermal analysis with just two Unit Cells joined together has not been successful, clearly exhibiting the limitation of a single-processor PC when ABAQUS/CAE [18] conducts even a simple steady-state analysis on such a model.
The next option used was an SGI Onyx 300 machine with 32 SGI R14000 MIPS processors running at 600 MHz. Each processor has a peak speed of 1.2 Gflops, giving the machine a total peak speed of approximately 38 Gflops. The system is a shared-memory machine with 16 GB of physical memory, permitting both shared-memory and distributed-memory programming models for all users simultaneously. Although the analyses conducted on this machine have yielded better memory management than on the single-processor Windows PC, certain limitations of the ABAQUS thermal analysis code, noted in the software reference documentation, have left rather narrow scope for speed-up gains. Proof of this behaviour has been seen in initial runs conducted with an increasing number of parallel processors. Some trends in ABAQUS solver performance are clearly seen in Table 10. Here the comparison is made against tests run on the same Unit Cell under monotonic tensile loading. The larger speed-up indicates the advantage gained from a multiprocessor platform for the mechanical analysis of the HITCO Unit Cell model, so larger models can be expected to solve faster, but only in the regime of mechanical loadings. For thermal simulations, this study suggests that in order to benefit from parallelization, which is inevitable and therefore crucial for larger models, further investigation is necessary into improving solver performance. This has also led the present study towards another domain of parallelization, specially coded for finite element modelling, which is being pursued simultaneously.
Table 10. Speed-up (percentage) observed with parallel processing, comparing 1-D steady-state heat transfer analyses for thermal conductivity measurement against 1-D monotonic tensile loading for determining composite stiffness
Processors    Thermal Analysis (%)    Mechanical Analysis (%)
2             7                       12
4             9                       20
8             10                      24
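The narrow thermal speed-ups in Table 10 can be interpreted through Amdahl's law, S(n) = 1 / (s + (1 - s)/n), where s is the serial fraction of the run. The following sketch, which is an interpretation added here rather than part of the original analysis, inverts that relation for the observed speed-ups (a 7% gain on 2 processors corresponds to S = 1.07):

```python
# Estimate the serial fraction implied by Amdahl's law from the
# speed-ups reported in Table 10. A speed-up of 7% means S = 1.07.

def serial_fraction(n, speedup):
    """Invert Amdahl's law S(n) = 1/(s + (1-s)/n) for s."""
    return (n / speedup - 1.0) / (n - 1.0)

observed = {
    "thermal":    {2: 1.07, 4: 1.09, 8: 1.10},
    "mechanical": {2: 1.12, 4: 1.20, 8: 1.24},
}

for analysis, runs in observed.items():
    fractions = [serial_fraction(n, s) for n, s in runs.items()]
    avg = sum(fractions) / len(fractions)
    print(f"{analysis}: implied serial fraction ~ {avg:.2f}")
```

The fitted serial fraction comes out markedly higher for the thermal runs than for the mechanical ones, which is consistent with the text's observation that the thermal solver leaves little scope for parallel speed-up.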
5.1. Parallel Processing
Results are now presented for the thermal analysis of a benchmark heat flow problem, as defined in Figure 34, in a multi-processor environment. Figure 35 shows the analysis speed-up against the number of processors for increasing mesh sizes.
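The tendency for speed-up to improve with mesh size is the behaviour described by Gustafson's law: as the problem grows, the parallelizable solve dominates the fixed serial overhead. The sketch below contrasts the two scaling models for an illustrative serial fraction (the value 0.1 is an assumption for demonstration, not a measured quantity from this study):

```python
# Contrast fixed-size (Amdahl) and scaled (Gustafson) speed-up for
# an assumed serial fraction s. Larger meshes behave more like the
# scaled case, because the parallel solve dominates the serial part.

def amdahl(n, s):
    """Fixed problem size: speed-up saturates at 1/s."""
    return 1.0 / (s + (1.0 - s) / n)

def gustafson(n, s):
    """Problem size grows with n: speed-up stays nearly linear."""
    return n - s * (n - 1)

s = 0.1  # illustrative serial fraction (assumed, not measured)
for n in (2, 4, 8, 16):
    print(f"n={n:2d}  Amdahl={amdahl(n, s):5.2f}  "
          f"Gustafson={gustafson(n, s):5.2f}")
```

On this reading, the increasing slope of the speed-up curves in Figure 35 for finer meshes reflects a shrinking effective serial fraction rather than any change in the solver itself.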