5.8 MOTORCYCLE CRANKSHAFT, TETRAHEDRON NO. 16

Copy the sample file B11_G.COS to the Z88 input file Z88G.COS.
We want to compute a crankshaft for a single-cylinder motorcycle engine and apply a force of -5,000 N to the piston. The meshing will be done by Pro/ENGINEER.
The boundary conditions are a bit tricky for this example: Put a reference (or datum) point at the center of the face of the crankshaft. We'll need this point to fix the crankshaft in the Z direction, i.e. lengthwise.
The ball bearings, which always allow some angular movement and thus should be regarded as moment-free supports, are fastened to the larger shaft axes. The flange facings of the shaft axes are to be fixed in the X and Y directions. Because whole surfaces are fixed, don't allow one or more of these surfaces to be fixed in the Z direction, too. This would block the angular movement - try it if you don't believe it.
A total force of -5,000 N
will be put onto the peripheral surface of the crankshaft journal.
The mesh is automatically
generated by Pro/MECHANICA featuring parabolic tetrahedrons. After
storing the
COSMOS file, a Z88 session may start:
Copy B11_G.COS to
Z88G.COS, the COSMOS file for the converter Z88G
Start converting Z88G.COS
with Z88G
[Figure: COSMOS converter Z88G under Windows. Looks quite similar on UNIX machines.]
and proceed with the Cuthill-McKee algorithm Z88H, because we expect a very bad node-numbering for the parabolic tetrahedrons.
[Figure: Cuthill-McKee program Z88H under Windows. Looks quite similar on UNIX machines.]
The first line of Z88I1.TXT tells you the following values: MAXKOI must be at least 3,941 elements * 10 nodes per element = 39,410. Thus, Z88.DYN should look as follows:

MAXGS   any value when starting
MAXKOI  minimum 39410
MAXK    minimum 6826
MAXE    minimum 3941
MAXNFG  minimum 20478
MAXNEG  minimum 1
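The minima above follow directly from the mesh statistics. A quick sketch of the arithmetic (the variable names are invented for illustration; the values are the ones Z88I1.TXT reports):

```python
# Sketch of where the Z88.DYN minima come from.
n_elements = 3941         # MAXE: parabolic tetrahedrons (type 16)
n_nodes = 6826            # MAXK
nodes_per_element = 10    # a parabolic tetrahedron has 10 nodes
dof_per_node = 3          # x, y, z displacement per node

maxkoi = n_elements * nodes_per_element  # length of the coincidence vector
maxnfg = n_nodes * dof_per_node          # total number of degrees of freedom
print(maxkoi, maxnfg)                    # prints: 39410 20478
```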
Proceed with a look at the structure with Z88O. The computing time with Z88F is about 16 sec. on a PC (AMD Athlon 64 X2 3800+ processor, 4 GByte memory, Windows XP). Enter a value of about 11,400,000 for MAXGS.
See the deflected structure with Z88O. The angular deflection of the axes is quite amazing. Now you would read off the deflections of selected nodes, multiply them by the appropriate lever arms, and check with the bearing catalogue whether your ball bearings will allow this angular movement without problems.
[Figure: Computing deflections with Z88F under Windows. Looks quite similar on UNIX machines.]
[Figure: Plot program Z88O, undeflected structure.]
[Figure: Plot program Z88O, deflected structure.]
Now we'll launch the sparse matrix iteration solver, i.e. Z88I1 and Z88I2. To begin with, we'll try 1,000,000 for MAXIEZ in Z88.DYN:
COMMON START
  MAXGS   11500000   > has no meaning for Z88I1 !
  MAXKOI  40000      > must always be large enough !
  MAXK    7000       > read off from Z88I1.TXT
  MAXE    4000       > read off from Z88I1.TXT
  MAXNFG  21000      > read off from Z88I1.TXT
  MAXNEG  1          > read off from Z88I1.TXT
  MAXPR   1          > has no meaning for this example
  MAXRB   903        > read off from Z88I2.TXT
  MAXIEZ  1000000    > important for Z88I1
  MAXGP   500000     > used by Z88O for the Gauss points
COMMON END
[Figure: Part 1 of the sparse matrix solver, Z88I1, under Windows.]
Our entries did work properly (otherwise, you would have to increase MAXIEZ) and the sorting time was just a breeze.
Read off for MAXGS: 768,687, rounded up 770,000. This looks far better than the direct Cholesky solver Z88F with its need of 11,381,064 8-byte elements = 87 MByte. The second part of the iteration solver, i.e. Z88I2, will only need 768,687 8-byte elements = 6 MByte.
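The quoted memory needs are quickly verified (taking 1 MByte = 1024² bytes):

```python
# Number of 8-byte matrix elements, as reported by the solvers
cholesky_elems = 11_381_064   # direct Cholesky solver Z88F, after Z88H
iterative_elems = 768_687     # non-zero elements kept by Z88I1/Z88I2

cholesky_mb = cholesky_elems * 8 / 1024**2
iterative_mb = iterative_elems * 8 / 1024**2
print(round(cholesky_mb), round(iterative_mb))   # prints: 87 6
```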
Thus, we would adjust the
memory in Z88.DYN as
follows (feel free to enter even bigger values):
COMMON START
  MAXGS   770000     > important !
  MAXKOI  40000      > must always be large enough !
  MAXK    7000       > read off from Z88I1.TXT
  MAXE    4000       > read off from Z88I1.TXT
  MAXNFG  21000      > read off from Z88I1.TXT
  MAXNEG  1          > read off from Z88I1.TXT
  MAXPR   1          > has no meaning for this example
  MAXRB   903        > read off from Z88I2.TXT
  MAXIEZ  1000000    > not used by Z88I2
  MAXGP   500000     > used by Z88O for the Gauss points
COMMON END
If you adjust the iteration parameters in Z88I4.TXT (chapter 3.6) as follows:

10000 1e-7 0.0001
1. 1

i.e. a maximum of 10,000 iterations, EPS of 1E-7 and ALPHA of 0.0001, this results in a computing time of about 12 sec. on a PC (AMD Athlon 64 X2 3800+ processor, 4 GByte memory, Windows XP).
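The MAXIT and EPS entries map onto the usual loop of a conjugate gradient type solver: iterate until either the residual is small enough or the iteration budget is spent. Below is a minimal, unpreconditioned sketch for illustration only; the real Z88I2 applies a preconditioner (which is what the ALPHA entry tunes), and all names here are invented:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, max_it=10000, eps=1e-7):
    """Plain conjugate gradients: at most max_it iterations, stop as
    soon as the residual norm falls below eps."""
    x = [0.0] * len(b)
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    p = r[:]
    rs = dot(r, r)
    for it in range(1, max_it + 1):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < eps:
            return x, it
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, max_it

A = [[4.0, 1.0], [1.0, 3.0]]   # a tiny symmetric positive definite matrix
b = [1.0, 2.0]
x, n_it = cg(A, b)
```

In exact arithmetic, CG on a symmetric positive definite n × n matrix converges in at most n iterations; the EPS threshold simply stops it earlier once the residual is small enough, which is why a looser EPS means fewer iterations.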
In this case, both the iteration solver and the direct Cholesky solver need about the same time, but the iteration solver needs less than one tenth of the memory. For large structures, things get even worse for the Cholesky solver! But note that you can't really compare the computing times. Try other entries for EPS, for example 1E-5 (resulting in 296 iterations and 11 seconds) or 1E-10 (resulting in 329 iterations and 13 sec.), and see the different computing times.
[Figure: The sparse matrix iteration solver part 2, i.e. Z88I2, under Windows.]
However, a very nice experiment is this: Start from the very beginning and run Z88G, but not the Cuthill-McKee algorithm Z88H. Directly after Z88G, launch a test run with Z88F:
[Figure: The direct Cholesky solver in test mode, under Windows.]
Gee, see the faces falling: now we would need 184,122,663 8-byte elements = 1.4 GByte. Absolutely no need for this!
However, run the iteration solver part 1, i.e. Z88I1, again. This will again result in only 768,687 elements for the total stiffness matrix. Calculate, please:

184,122,663 : 768,687 = 240 : 1
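The same byte arithmetic as before confirms both figures (with 1 GByte = 1024³ bytes):

```python
cholesky_elems = 184_122_663   # Z88F test mode, without Cuthill-McKee
iterative_elems = 768_687      # unchanged for the iteration solver

print(round(cholesky_elems * 8 / 1024**3, 1))    # prints: 1.4  (GByte)
print(round(cholesky_elems / iterative_elems))   # prints: 240
```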
The second part of the iteration solver, i.e. Z88I2, now needs somewhat more iterations (415 in contrast to 350 with the same EPS of 1E-7): the matrix features the same number of non-zero elements, but its condition is worse because of the very bad node-numbering of Pro/MECHANICA. That means: when using the iteration solver, you don't need to run the Cuthill-McKee algorithm Z88H to reduce the storage needs of the iteration solver (in contrast to the direct Cholesky solver Z88F, which may depend heavily on Z88H for larger structures!). However, Z88H may improve the matrix condition anyway.
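Why does the node-numbering matter so much for the direct solver? Cuthill-McKee renumbers the nodes breadth-first so that connected nodes get nearby numbers, shrinking the matrix bandwidth. A toy sketch of the basic idea (an illustration only, not Z88H's actual implementation):

```python
from collections import deque

def bandwidth(adj, order):
    # order maps new index -> old node number
    pos = {node: i for i, node in enumerate(order)}
    return max((abs(pos[u] - pos[v]) for u in adj for v in adj[u]), default=0)

def cuthill_mckee(adj, start):
    # BFS from a start node, visiting neighbours by ascending degree
    visited = {start}
    order = [start]
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in sorted(adj[u], key=lambda n: len(adj[n])):
            if v not in visited:
                visited.add(v)
                order.append(v)
                queue.append(v)
    return order

# A small chain-shaped mesh graph with deliberately scattered numbering
adj = {0: [3], 3: [0, 1], 1: [3, 4], 4: [1, 2], 2: [4]}
natural = [0, 1, 2, 3, 4]
renum = cuthill_mckee(adj, 0)
print(bandwidth(adj, natural), bandwidth(adj, renum))   # prints: 3 1
```

Note that the renumbering leaves the number of non-zero elements unchanged, which is why the iteration solver's storage need stays at 768,687 elements either way; only the bandwidth-dependent profile of the direct solver collapses.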
Try now the direct Sparse Matrix Solver with Fill-In: the solver pair Z88I1 and Z88PAR runs much faster. Adjust the 5th entry in Z88I4.TXT to the number of CPUs. Keep in mind that Z88PAR makes heavy use of dynamic memory when running. This may cause serious trouble when computing very large structures. For this example the elapsed time was ~4 sec. with two CPUs.
[Figure: The direct Sparse Matrix Solver Z88PAR under Windows.]