Multi-body dynamics describes the physics of motion of an assembly of constrained or restrained bodies. As such it encompasses the behaviour of nearly every living or inanimate object in the universe. This volume collects the contributions. One case study illustrates that co-simulation techniques allow the convenient simulation and optimization of coupled systems.
The CPU times for these simulations on a 2. These CPU times were achieved without writing any output. That, however, is not the concern of this paper; the concern here is how to improve the performance by means of the formulation of the equations of motion and the software.
Discussions and conclusions
Using the transmission functions one can always formulate the equations of motion in an explicit second-order differential equation form, as represented by equation [31].
The geometric and kinematic constraints are taken into consideration implicitly; hence, no Lagrange multipliers are needed to define a given problem. The number of equations [31] is identical with k, the number of degrees of freedom, which is the minimum number of equations that can describe the dynamic behavior of the system.
It was verified that equations [31] can be integrated numerically by any explicit or implicit, one-step or multi-step numerical integration method, with good results whenever the process is convergent. In contrast with the index-three formulation, the integration method can use variable order and variable time step without deterioration of the results. The first- and second-order transmission functions used in this formulation depend only on the positions of the mechanism.
Hence all the functions F and D that depend on the transmission functions also depend only on the positions of the mechanism. This suggests that, if the range of motion of the generalized coordinates is known, the transmission functions and the generalized mass and moment of inertia functions can be generated before the actual integration takes place and tabulated in look-up tables to speed up the computation.
As far as the generation of the look-up tables is concerned, for a single-degree-of-freedom system the procedure is straightforward. For multiple degrees of freedom the problem is more complicated: for the systems of Figures 3 and 4, for each step of the first input a series of steps spanning the range of motion of the second input must be taken.
In this case the range of motion for both inputs is '. Thus the generation of the look-up tables for multi-degree-of-freedom systems requires large files. Although this formulation works with any method of numerical integration, without the generation of the look-up tables and the pre-computation of invariants it seems more involved than the original ADAMS.
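To illustrate the look-up-table idea, the following is a minimal sketch (not the authors' implementation): a first-order transmission function of a hypothetical single-degree-of-freedom mechanism is pre-tabulated over the expected range of the generalized coordinate and interpolated during the integration. The transmission function, grid resolution, and inertia expression are placeholder assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp

# Hypothetical first-order transmission function of a 1-DOF mechanism
# (placeholder kinematic relation, depending only on the position q).
def transmission(q):
    return 0.1 * np.cos(q) + 0.25

# Pre-tabulate over the known range of motion, before the integration starts.
q_grid = np.linspace(0.0, 2.0 * np.pi, 721)            # assumed range and resolution
T_table = interp1d(q_grid, transmission(q_grid), kind="cubic")

def rhs(t, y, m=1.0, torque=1.0):
    """Explicit second-order equation of motion written in first-order form;
    the generalized inertia is built from the tabulated transmission function."""
    q, qdot = y
    T = T_table(q % (2.0 * np.pi))     # table look-up instead of re-deriving the kinematics
    qddot = torque / (m * T**2)        # toy generalized inertia m*T^2
    return [qdot, qddot]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])
```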
Also, from a user's point of view it requires more background in engineering and mechanics.
Figure: The first-order transmission function for the five-bar mechanism of Figure 3.
A marionette is an interesting kind of multi-body system. A typical marionette is controlled by about 10 strings and has some degrees of redundant freedom.
In order to realize a simple computer control system for marionettes, it is necessary to develop easy and quick methods for analyzing their static and dynamic postures. This paper presents a study on the static posture analysis of a marionette having closed link loops using the particle swarm optimization algorithm (PSOA).
The authors investigate the applicability of the algorithm to the analysis of multi-body systems. This paper is the first report presenting a very easy and quick way of solving the static posture analysis of a marionette with redundant freedom using the algorithm. The study also suggests a wider applicability of the algorithm to various analyses in multi-body dynamics.
There are several kinds of marionettes: some are manipulated from the top, others from the bottom. In the former category, some are manipulated entirely with strings, while in others the arms and legs are manipulated while the body is held by metal rods. The type of marionette considered in this paper is manipulated by strings from the top, as shown in Fig. The final objective of this research project is to develop a computer-controlled marionette system that can easily be operated for any story of a play.
It has some degrees of redundant freedom. The posture of the marionette in Fig. is determined by static analysis. However, the analysis is not so easy due to the degrees of redundant freedom and the nonlinearity. Figure 2 shows a leg of the marionette as a basic two-dimensional example. The number of degrees of freedom is two. The control string is put through a hole, denoted by A, in the femur from the top and tied to the lower end, denoted by C, of the tibia.
The mechanical principle behind the solution needs little discussion: the static posture is the one with the minimum potential energy. So this is a nonlinear optimization problem: find the posture having the minimum potential energy among the infinite number of geometrically feasible postures. In general, Newton-type optimization methods work efficiently if they start from an initial point in a region of the design space where the objective function is convex around the optimum point.
However, they may have difficulty with strongly nonlinear problems such as the one shown in Fig. PSOA may be applied to such problems. It was proposed by R. Eberhart and J. Kennedy in [1]. According to reference [1] and others, PSOA was developed from long and careful investigations by zoologists of the social behaviors of some kinds of animals, such as bird flocks and fish schools.
The algorithm is as follows. First, suppose the following scenario. There is a group of animals, such as a bird flock or a fish school. The particles in the group are searching for food within an area. Each particle knows its position in a coordinate system; the position is considered as its design variables. The position of the food and the area are considered as the optimum position and the feasible design region, respectively, in PSOA.
No particle knows where the food is, but each can tell how far it is from the food at each time, where each time means each iteration of PSOA. An effective strategy to find the food is basically to follow the particle nearest to the food, together with using the recollected nearest position that each particle itself has ever reached. All particles are numbered; the position of particle No. i is denoted x_i, and the optimum position is the place of the food. The best ever position of the group is denoted g, and the best ever position of each particle No. i is denoted p_i. Then, particle No. i updates its velocity and position, in the standard PSOA form, as v_i <- w v_i + c1 r1 (p_i - x_i) + c2 r2 (g - x_i) and x_i <- x_i + v_i, where w is the inertia coefficient, c1 and c2 are the scaling factors (the latter often called the social scaling factor), and r1, r2 are random numbers in [0, 1].
Next, the posture analysis of a two-dimensional marionette model in the vertical plane is presented, as shown in Fig. The model consists of a head, an upper body, a lower body below the waist, an upper arm, a lower arm, a hand, an upper leg, and a lower leg, as shown in Fig. It has 9 degrees of freedom and is suspended by three strings.
Let us assume that the string for the head is fixed at the origin of the coordinate system and that its length is constant. The other two control strings can change their lengths, and their control points are assumed to lie on the Y-axis.
Then, the lengths and the coordinates of the two control strings determine the posture of the marionette model. The problem to solve is the posture of the model given the following parameters: the mass, mass center, size, and connecting points with neighboring parts for each model part, the lengths of the three strings, and the Y-coordinates of the two control points.
With respect to the connectivity from point O to point Y1 (0, y1), two algebraic equations are formulated, where the first equation expresses the connectivity in the X-coordinate and the second in the Y-coordinate. With respect to the connectivity from point O to point Y2 (0, y2), two further equations are formulated. Then, the number of variables needed to define the posture of the model is eleven, and the number of connectivity equations is four. This means that the number of degrees of redundant freedom is seven.
If only the connectivity is considered, there is an infinite number of feasible solutions but no unique one. However, the target is the solution that gives the marionette model the minimum potential energy among the feasible ones. Seven of the variables in Eq. are therefore treated as free variables; substituting random numbers for these seven variables in the four equations in Eq. , the remaining four variables can be determined.
A number of particles are then created. Each particle keeps its potential energy, given by Eq. , where m_i and x_G,i denote the mass and the mass-center position of part No. i. In this paper, PSOA starts with 50 particles. At first, PSOA is executed with the inertia coefficient set to values between 0. However, good and fast convergence of the particles cannot be obtained.
Therefore, we set smaller values such as 0. With these, convergence is obtained within fewer than 20 iterations with high probability. Figure 4 shows the initial postures of the 50 particles; they are created using random numbers in the above-mentioned way. The converged solution is the resultant posture of the marionette model.
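To make the iteration concrete, here is a minimal, generic PSOA sketch for this kind of energy minimization (a sketch only, not the authors' code; the toy potential-energy function, parameter values, and variable names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def potential_energy(x):
    """Toy stand-in for the marionette potential energy (assumed placeholder)."""
    return np.sum((x - 0.3) ** 2) + 0.5 * np.sum(np.sin(3.0 * x) ** 2)

n_particles, n_vars, n_iter = 50, 7, 20      # 50 particles, 7 free variables, as in the paper
w, c1, c2 = 0.4, 1.5, 1.5                    # inertia coefficient and scaling factors (assumed)

x = rng.uniform(-1.0, 1.0, (n_particles, n_vars))    # random initial postures
v = np.zeros_like(x)
p_best = x.copy()                                     # best ever position of each particle
p_val = np.array([potential_energy(xi) for xi in x])
g_best = p_best[np.argmin(p_val)]                     # best ever position of the group

for _ in range(n_iter):
    r1 = rng.uniform(size=x.shape)
    r2 = rng.uniform(size=x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = x + v
    val = np.array([potential_energy(xi) for xi in x])
    improved = val < p_val
    p_best[improved], p_val[improved] = x[improved], val[improved]
    g_best = p_best[np.argmin(p_val)]

print("minimum potential energy found:", p_val.min())
```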
It is found that appropriate values for the inertia coefficient are not always between 0. PSOA gives optimum solutions provided that appropriate values are set for the inertia coefficient and the social scaling factors in accordance with the characteristics of the problem to be solved. Then, the following remarks are obtained: (1) PSOA can make this sort of problem easy to tackle.
In addition, this study suggests the applicability of PSOA to the dynamic analysis of multi-body systems.
References: [1] J. Kennedy and R. Eberhart. [2] Groenwold and J.
The complete visualization process includes everything from data storage to image rendering, and what is needed for a meaningful user-to-data interaction. Normally the simulation output data has a large number of time steps, on the order of 10^3 to 10^6.
In order to handle this large amount of data, all possible bottlenecks need to be removed. This includes data storage, data processing, system modeling, and image rendering. An object-oriented approach is used for the multibody model, its basic simulation data structures, and the visualization system.
This gives well-structured models and supports both efficient computation and visualization without additional transformations. Simulation data can be classified into three classes: scalar-data, vector-data, and surface-data. This paper focuses on time-varying vector data. The huge amount of data and time steps requires data compression. Vectors are compressed using an algorithm specially designed for time-varying vector data. Selective data access is required during visualization.
A block-based streaming technique is created to achieve fast selective data access. These visualization techniques are used in a full-scale industrial system and have proven their usefulness. The reader should understand the complete visualization process, from data storage to image rendering, and what is needed for a meaningful user-to-data interaction. In addition, some background information on multibody simulation will be given.
The following topics are covered in this paper: Modeling of multibody systems. How to model the hierarchical structure of a multibody system for simulation and visualization? Requirements on the visualization system. What are the requirements regarding performance and usability of the visualization system?
Mechanical parts, or bodies, are defined by their boundary surfaces. How are surfaces represented during visualization? Classification of simulation data. There are different types of simulation data; the data used for this work needs to be classified. Data storage and access. A lot of time-varying data is produced by a dynamic multibody simulation program. Compression is needed to reduce the data size, and fast data access needs to be provided during visualization.
Visualization techniques for different types of simulation data. How to transform different types of simulation data into visual representations understood by engineers and scientists? Graphics and visualization libraries. Different 3D libraries and toolkits are available for 3D image rendering and visualization.
How useful are they for visualization of the simulation data used here? User interfaces for effective usage. What is needed for a meaningful and effective interaction with the simulation data?
Large-size data visualization is memory-consuming and computation-intensive. Is special graphics hardware needed for visualization? Some of these topics have well-defined solutions while others need more attention and work. This paper describes a complete and working visualization system for multibody simulation data, called Beauty, based on these topics.
This is especially true for their dynamic behavior. Multibody systems are used to model mechanical systems in which several bodies interact with each other. A rolling bearing is an example of such a system, see Figure 1. Dynamic multibody simulations are conducted to investigate the dynamic behavior of these systems. In general, such simulations produce a large amount of data and different visualization techniques are needed for the analysis.
Figure 1: A ball bearing is a typical example of a multibody system. It consists of several bodies, i.e. rings, balls, and a cage.
Two-dimensional plots are commonly used to investigate time-varying dynamic scalar data and vector components. However, they are not optimal for understanding the dynamic behavior of a complete three-dimensional multibody system. Thus a three-dimensional visualization system is needed, covering: 1. The mechanical system, 2. Movement of the bodies in the system, 3. Data from contacts between bodies.
The simulation data used for this work is provided by a multibody simulation program called BEAST [13][24]. BEAST is specialized in contact problems of rolling bearings. This paper describes a visualization system for BEAST data, and thus focuses on the visualization of rolling bearing data. However, many of the described techniques solve general visualization problems. There are two different types of data supplied by the simulation program: static model data and dynamic simulation data.
Static model data describes the initial state of the multibody system, i.e. the surface geometry and orientation of the bodies. Dynamic simulation data is time-varying data and describes the multibody system dynamics. This includes changes in surface geometry, movements of bodies, and contact-related data, e.g. contact forces.
Figure 2 describes the data-flow during visualization. Throughout the visualization process the data appears in different representations. Body surfaces, for example, are defined by a composition of different static and dynamic functions.
Dynamic simulation data is superimposed on the initial static surface geometry. Each surface is then transformed into a representation understood by the graphics renderer. The visualization process can be divided into two parts: visualization of the multibody system and visualization of dynamic simulation data. Fixed time steps would either miss the important contact information or generate an enormous amount of unnecessary information between impacts.
A compact data format is very important for this kind of data. The number of time steps and data variables depends on the structure of the system and the simulation. BEAST simulations run in parallel, typically on a Linux cluster, and take between 1 hour and 1 week of wall-clock time.
This results in MB to 8 GB of compressed data. The large number of time steps, data, and variables puts special demands on the visualization system. Scientists and engineers should be able to run the software in their office without the need for a high-end graphics workstation.
Based on this, the following requirements for the visualization system are defined: Effective data compression for time-varying scalar and vector data is needed to reduce the size of the data to a minimum. The effectiveness of a compression algorithm is very much data dependent. Fast data access is required for animation or visualization of time-varying data. Models do not need to move in real time, but reasonably fast; a minimum frame rate of 1 frame per second is acceptable for this work. Meaningful data interaction is required to give users the possibility to interact with the data and to understand it. Complex models, consisting of many bodies, cannot be investigated as a whole. Viewpoint adjustment and selection of model parts are required. Since the large amount of data created by the simulation program cannot be visualized all at once, data selection capabilities for variables and time steps are required. Fast data transformations are needed to create a visible representation of the data. Hardware requirements need to be defined.
What system should the visualization application run on? General visualization tools work at a high level of abstraction, meaning that you have to prepare your data in a way understood by the tool; alternatively, you can write your own import module or filter. Thus there is still a lot of work needed to prepare your data. This is especially true for large dynamic data sets spanning multiple domains. Such data can often be found in the field of multibody dynamics.
These requirements do not exclude the usage of a general visualization system, but the choice of an appropriate visualization library or tool for the final image rendering is just one part of the whole visualization process. Efficient access and storage of large data sets can be a problem for general visualization systems. When processing large data sets, some form of smart memory management is needed. Operating systems have built-in virtual memory management, and many applications rely on this.
Another technique is called out-of-core visualization [11] and is based on external memory algorithms [1]. These approaches implement their own memory management. Cox and Ellsworth [7], for example, propose a general framework for application-controlled virtual memory management. Streaming techniques are used to process large sets of time-varying data and allow data to be loaded in streams of segments or blocks. Visualization of time-dependent data sets is a typical application for data streaming.
Streaming techniques can be classified into two main classes: time-continuous media streaming and large-scale data streaming. The first is mainly used for audio and video streams [20], where continuous playback is often more important than quality. The latter is often used for large static data sets [2] [19] [17], which are processed in smaller blocks of data to reduce memory load or gain speed. In Section 6, a streaming technique for time-varying vector and scalar data is presented.
In contrast to media streaming techniques, which process each time-frame separately, a time-block-based approach is used. The output from these simulations is typically a 2D or 3D mesh [5] [25] of nodes containing simulation data, most often a 2D or 3D vector field.
They are therefore less suitable for the continuous surface geometry required by contact mechanics. These tools have predefined data formats, which might be a limitation for simulations with a large number of time steps and varying time-step lengths. ADAMS is the most popular and widely used multibody simulation system in industry. It is scalable, i. It also has simple post-processing capabilities to animate multibody dynamics. However, for advanced post-processing, i. Full 3D view of the complete model and animation of the system dynamics is supported. Multiple 3D animation views and 2D plots can be activated simultaneously. All views are synchronized during animation, and the current animation step is marked within the 2D plots. Force elements are visualized as animated vectors. Light sources, material colors, and the camera can be adjusted. Cameras can be set to follow any point in the model. Transparency is used to uncover hidden parts. Animations can be stored as AVI movies. In simulations with a focus on detailed contact analysis, 2D data is very important. Thus, an internal representation or model of the system is needed.
Object-oriented modeling [21] is often used to model physical systems. A bearing, for example, is a composition of bodies, i.e. rings, balls, and a cage. Each body is a composition of surface segments, see Figure 3. Even segment contacts can be modeled as objects. This design has many advantages for multibody simulations and visualization because the hierarchical structure of the mechanical system is reflected in the object model.
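As an illustration of this hierarchical object model (a sketch only; the class and attribute names are assumptions, not the Beauty/BEAST design), the composition might look like:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SurfaceSegment:
    """One parametric segment of a body's boundary surface."""
    name: str

@dataclass
class Body:
    """A mechanical part, described by its boundary surface segments."""
    name: str
    segments: List[SurfaceSegment] = field(default_factory=list)

@dataclass
class Contact:
    """A contact between two surface segments, modeled as an object itself."""
    segment_a: SurfaceSegment
    segment_b: SurfaceSegment

@dataclass
class MultibodySystem:
    """The whole mechanical system, e.g. a rolling bearing."""
    bodies: List[Body] = field(default_factory=list)
    contacts: List[Contact] = field(default_factory=list)

# A toy bearing: the same structure can be traversed by both the simulation
# and the visualization, so each object handles its own tasks.
inner_ring = Body("inner ring", [SurfaceSegment("raceway")])
ball = Body("ball", [SurfaceSegment("sphere")])
bearing = MultibodySystem(
    bodies=[inner_ring, ball],
    contacts=[Contact(inner_ring.segments[0], ball.segments[0])],
)
print(len(bearing.bodies), "bodies,", len(bearing.contacts), "contact")
```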
Thus many tasks are delegated directly to each object, which simplifies implementation and data handling. A model of the mechanical system is required for both simulation and visualization. A common base design is therefore used for both programs, see Figure 4. Object-oriented design has been used throughout the implementation and gives an integrated tool that supports a natural model structure, efficient computation, and interactive visualization. Since bodies are described by their surfaces, different surface representations need to be analyzed.
There are two main classes of surface representations: continuous and discrete surface representations. Among the continuous ones, the most useful are parametric equations. The parametric form of a surface has many advantages in the modeling of geometric shapes [15]. Splines [8] [15] are common in geometric modeling. A spline is a smooth piecewise polynomial function which is controlled by a set of spatially discrete points.
Splines are commonly based on parametric equations. B-splines, or basis splines, provide local shape control and independence between the number of control points and the degree of the polynomial function.
They are generalizations of Bézier curves [12]. Only rational functions can represent a conic curve, e.g. a circle. The discrete form of a surface is, for instance, used for fast graphics rendering, where the points are connected into polygons to build a piecewise linear surface. A piecewise linear surface is called a polygon mesh. A mesh [5] [25] is a discretization of a geometric domain into small simple shapes, e.g. triangles or quadrilaterals; a polygon mesh is the piecewise linear version of the mesh. Many graphics renderers, e.g. OpenGL, work with polygon meshes. There are two major groups of meshes: unstructured and structured meshes.
Unstructured meshes are commonly used in FEA [4] [14] [6] and surface reconstruction, most commonly arbitrary triangle meshes based on the Delaunay criterion [9].
Unstructured meshes adapt very well to different surface topologies and are therefore widely used. However, unstructured meshes require significantly more memory than structured meshes and are computationally intensive. Structured meshes [25], on the other hand, are simple and efficient to calculate and are less memory-consuming. The vector n is the normal of the base geometry at any surface point. The scalar Δs is the deviation from the base surface. All these functions are static functions and are calculated from user-defined input parameters. The scalar h_geom and the vector ΔF_flex are dynamic functions and are calculated by the simulation program: the scalar h_geom is the material removal function, and the vector ΔF_flex is the structural deformation of the body. The scalar function Δs is a composition of different parametric functions.
In order to create a visible representation of the surface, a quadrilateral polygon mesh is created and sent to the renderer.
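As a rough illustration of this construction (a sketch under assumptions: the base surface, deviation function, and grid resolution are placeholders, and the way the deviation is applied along the normal is a simplification, not the exact BEAST formula), a structured quadrilateral mesh can be built by sampling the parametric surface on a regular (u, v) grid:

```python
import numpy as np

def base_surface(u, v, radius=1.0):
    """Placeholder base geometry: a cylinder patch parameterized by (u, v)."""
    return np.stack([radius * np.cos(u), radius * np.sin(u), v], axis=-1)

def normal(u, v):
    """Outward normal of the cylinder patch at (u, v)."""
    return np.stack([np.cos(u), np.sin(u), np.zeros_like(u)], axis=-1)

def deviation(u, v):
    """Placeholder static deviation (e.g. waviness) from the base surface."""
    return 1e-3 * np.sin(8.0 * u)

# Structured grid in parametric space; the resolution could be made adaptive.
nu, nv = 64, 16
u, v = np.meshgrid(np.linspace(0.0, 2 * np.pi, nu), np.linspace(0.0, 0.5, nv), indexing="ij")

# Superimpose the scalar deviation along the normal of the base geometry.
points = base_surface(u, v) + deviation(u, v)[..., None] * normal(u, v)

# Quadrilateral connectivity: each cell of the structured grid is one quad.
quads = [
    (i * nv + j, (i + 1) * nv + j, (i + 1) * nv + j + 1, i * nv + j + 1)
    for i in range(nu - 1)
    for j in range(nv - 1)
]
vertices = points.reshape(-1, 3)
print(vertices.shape[0], "vertices,", len(quads), "quads")
```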
The surface mesh needs to be recalculated during animation due to flexibility and material removal. This is a crucial part of the visualization process because mesh calculation is time-consuming, especially for multibody systems with many bodies.
Surface meshing is done in parametric space because it simplifies the meshing process significantly. A structured quadrilateral mesh is used because structured meshes are faster to compute. Furthermore, adaptive mesh resolution is used, meaning that the mesh resolution depends on different factors. Figure 5 shows a contact between one ball and the inner ring. As a result of the contact, contact forces act on both bodies.
Many things happen within the contact, and a large amount of data is produced by the simulation program. The data used here can be classified into three groups: Contact data is data which belongs to the contact itself, e.g. the contact force.
It does not belong to one single body but to the resulting contact of two bodies. Body-related data is data which belongs to a certain body; it can, for instance, be the total force of all contacts acting on a body. As for contact data, there are two types of body-related data: scalar and vector data. Surface-related data results from contact areas between two bodies. Two bodies do not contact at a single point but over a contact area.
The contact area results in a pressure distribution and other distributed surface data. Surface data values are either scalars, e.g. pressure, or vectors. Thus another possible classification of the data is scalar-data, vector-data, and surface-data. Storage and visualization of scalar-data and vector-data are quite similar.
Surface-data, on the other hand, is much more complicated to store and visualize and is therefore presented in a separate paper [23]. The main focus here is on storage and visualization of vector-data. Typical simulations produce a large amount of dynamic data: for each time step the simulation program calculates thousands of values. These values are written to a file for later analysis. There are two main issues to overcome here, data size and data access: the size of the data should be reduced to a minimum.
Therefore an effective and fast compression algorithm is required. Fast selective data access is required during the visualization. Dynamic or time-varying data puts special demands on the file format: it has to support fast access to any variable at any time step. The efficiency of a compression algorithm depends very much on the structure of the data you want to compress.
Thus, it is useful to characterize the data first and try to find a compression algorithm which suits it best. A compression algorithm for high-volume numerical data has been designed by V. Engelson et al. This algorithm is specially designed for the time-varying vector data used here. It is based on delta compression, where values are approximated from earlier time steps. Table 1 gives three examples of short but representative simulations. The compression ratio varies for different simulations.
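To illustrate the general idea of delta compression for time-varying vectors (a generic sketch only, not the algorithm of Engelson et al.; the predictor and quantization step are assumptions), each frame can be predicted from the previous one and only the quantized difference stored:

```python
import numpy as np

def delta_encode(frames, tol=1e-4):
    """Encode a sequence of vectors as quantized differences to the previous frame.
    `tol` is the allowed absolute reconstruction error (assumed lossy setting)."""
    frames = np.asarray(frames, dtype=np.float64)
    encoded = [frames[0].copy()]                 # first frame stored as-is
    prev = frames[0].copy()
    for frame in frames[1:]:
        delta = np.round((frame - prev) / tol).astype(np.int32)  # small integers compress well
        encoded.append(delta)
        prev = prev + delta * tol                # reconstruct as the decoder would, avoiding drift
    return encoded

def delta_decode(encoded, tol=1e-4):
    out = [np.asarray(encoded[0], dtype=np.float64)]
    for delta in encoded[1:]:
        out.append(out[-1] + np.asarray(delta) * tol)
    return np.stack(out)

# Smoothly varying data (typical between impacts) yields mostly tiny integer deltas.
t = np.linspace(0.0, 1.0, 200)
frames = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), 0.1 * t], axis=1)
enc = delta_encode(frames)
rec = delta_decode(enc)
print("max reconstruction error:", np.abs(rec - frames).max())
```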
The high compression ratio for the grinding machine can be traced back to slow changes in the data, which are approximated very well and can therefore be compressed very effectively. The compressed data needs to be stored for future analyses. Fast selective data access is required.
Delta compression complicates selective access since it requires historical data. One approach is to divide the data into blocks and compress block by block. During visualization the selected block is read and decompressed.
The technique of processing data in several blocks rather than as a whole is called data streaming [20] [2] [19] [17]. A time-continuous streaming technique is used in this work to store vector data. A fixed-size memory block is allocated during the simulation. This block is continuously filled with data.
Full blocks are compressed and written to the file. A block header is attached to each block, see Figure 6. The memory block is cleared after writing and the compression algorithm is reinitialized. The block header keeps information about the vector offsets in the block and the time period. To allow fast arbitrary vector access, the visualization program reads all block headers on startup. During animation, block data is read sequentially and the blocks are decompressed one by one.
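A minimal sketch of such a block-based store (illustrative only; the header layout, field names, and use of zlib are assumptions, not the actual BEAST/Beauty file format):

```python
import io
import struct
import zlib

HEADER = struct.Struct("<qqdd")   # assumed header: data offset, byte length, t_start, t_end

def write_blocks(stream, blocks):
    """Each block is (t_start, t_end, payload_bytes); headers are written at the front."""
    headers_pos = stream.tell()
    stream.write(b"\0" * HEADER.size * len(blocks))      # reserve header space
    headers = []
    for t0, t1, payload in blocks:
        data = zlib.compress(payload)                    # stand-in for the vector codec
        headers.append((stream.tell(), len(data), t0, t1))
        stream.write(data)
    stream.seek(headers_pos)
    for h in headers:
        stream.write(HEADER.pack(*h))

def read_block_for_time(stream, n_blocks, t):
    """Read all headers once, then seek directly to the block covering time t."""
    stream.seek(0)
    headers = [HEADER.unpack(stream.read(HEADER.size)) for _ in range(n_blocks)]
    for offset, length, t0, t1 in headers:
        if t0 <= t <= t1:
            stream.seek(offset)
            return zlib.decompress(stream.read(length))
    return None

buf = io.BytesIO()
write_blocks(buf, [(0.0, 0.5, b"block-A vectors"), (0.5, 1.0, b"block-B vectors")])
print(read_block_for_time(buf, 2, 0.75))
```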
A large effort has been made to find proper ways to visualize data. Data visualization is based on transfer functions or transformation algorithms which describe how the data is transformed into a graphical representation. Some of the most common algorithms are color-mapping, contour-mapping, vector-fields, and glyphs [22].
This paper explains visualization of vector-data. Visualization of surface-data is covered in [23]. Vectors are generally visualized as directed lines or glyphs. In this work a combination of both is used, a directed line with different glyphs at one end.
Vector data can be divided into two categories: positional and non-positional vectors. Positional vectors can define the position and orientation of bodies and are used for transformations only. Non-positional vectors are represented by a directed line with either a single arrow at the end, e.g. for forces, or a double arrow, e.g. for moments. The essential part of vector visualization is the positioning and scaling of the vectors. Vectors can be of any unit. While positioning is fairly simple, being more a matter of choice, scaling needs more care.
The following is taken into account for vector scaling: A reference length in model coordinates is needed to align vectors to. For the comparison of vectors of the same unit, a common reference length is useful to scale these vectors. The vector length needs to be adjusted if the view-port size changes. Fast-changing values during animation might lead to jumping vectors.
This can be avoided by a slowly adjusting scaling algorithm. The application of this is discussed below. Many of the techniques described here have been integrated into the Beauty system as a base for further research and development. The main window allows fast access to all the common settings and actions. Visualization of the complete multibody system is the goal. A 3D scene with lights, coordinate systems, and the multibody system itself is created, see Figure 7.
Mouse-based rotation, translation, and scaling controls are used to interact with the model. An exploded view and transparency are used to view hidden parts. In addition, each part of the scene can be disabled (made invisible) or enabled. Default surface colors are defined for different material types; these can be adjusted by the user. A colorize mode allows recoloring of all bodies to distinguish them better. Surfaces of real-world objects are not perfect. Imperfect surface geometry can be described by the user with different parametrized functions.
High contact forces might lead to material removal, which destroys perfect surfaces as well. User-defined magnification is used to visualize surface imperfections, which are often on the micrometer level, see Figure 8 a. Figure 8: A ball bearing model, once with the imperfect inner-ring raceway magnified about times and once in colorize mode.
Bodies move, rotate, and get in contact with each other. Movements are often very small because of the minimal gap between bodies.
To be able to see small movements, they can be magnified, see Figure 9. Thus users can, for instance, see how a single ball moves within the cage pocket. To improve this even further, a locking mechanism for bodies is used: the user can lock the movement of any body in the system, and the rest of the multibody system then moves relative to this body. These vectors are three-dimensional and are visualized as a directed line with an arrow at one end.
Rotation and moment vectors are an exception; according to standard notation they are drawn with two arrows, see Figure . Vectors belonging to a certain body and contact vectors are both visualized. Different vectors have different colors. The vector legend indicates the value of the unit vector. Vectors are auto-scaled to fit into the view-port of the model, and old scale factors are weighted higher than new scale factors. To be able to compare vectors of the same type, a common scale factor is used.
Additionally, the user can set fixed scale factors or magnify the auto-scale factors. Vector values (length per unit) are displayed in a vector legend beside the model, see Figure . Different colors and line styles indicate different vectors.
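A minimal sketch of such a slowly adjusting auto-scale (an illustration only, not the actual Beauty algorithm; the weighting factor and reference length are assumptions):

```python
class SmoothedVectorScale:
    """Auto-scale factor for drawn vectors that weights old values higher than
    new ones, so fast-changing magnitudes do not make the arrows jump."""

    def __init__(self, reference_length=0.1, old_weight=0.9):
        self.reference_length = reference_length   # desired arrow length in model coordinates
        self.old_weight = old_weight               # how strongly the previous scale persists
        self.scale = None

    def update(self, max_magnitude):
        target = self.reference_length / max(max_magnitude, 1e-12)
        if self.scale is None:
            self.scale = target
        else:
            self.scale = self.old_weight * self.scale + (1.0 - self.old_weight) * target
        return self.scale

scaler = SmoothedVectorScale()
for force in [100.0, 5000.0, 120.0, 110.0]:        # spiky contact force magnitudes
    print(round(scaler.update(force) * force, 4))  # scale factor changes gradually, not per frame
```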
However, the results are often presented to customers and other interested parties. One way is to use the visualization system for the presentation. Some commercial visualization systems offer a free viewer for predefined animations.
But often it is more useful to create an animated film which can be included in a standard presentation. The system described here generates animated films by capturing all rendered images and writing them as an image sequence into a file. This file can be converted into different computer video formats. The system is Motif and X Windows based and has been tested on various platforms.
Multibody systems with many bodies are memory-consuming, and at least MB is needed to be able to work with large models. The current version uses OpenGL to render the 3D scene. Since the whole system is three-dimensional, a 3D graphics card with OpenGL support is needed. The complete visualization process includes everything from data storage to image rendering, and what is needed for a meaningful user-to-data interaction.
Normally the simulation output data has a large number of time steps, on the order of 10^3 to 10^6. In order to handle this large amount of data, all possible bottlenecks need to be removed; this includes data storage, data processing, system modeling, and image rendering.
Parametric surfaces are discretized during the visualization process. The final surface representation is a structured quadrilateral mesh. It gives fast re-meshing and interacts with surface data structures without additional transformations. The large amount of data and time steps require data compression. Vectors are stored as a stream of blocks each of which is compressed using an efficient algorithm specially designed for time-varying vector data.
The block structure allows fast access to a certain time step during visualization. The system supports natural interaction with simulation data. This is an important aspect of data visualization. The authors thank Henning Wittmeyer for his financial support and permission to publish this paper.
References: [1] J. Abello and J. External Memory Algorithms. American Mathematical Soc. Ahrens, K. Brislawn, K. Martin, B. Geveci, C. Law, and M. Finite Element Procedures. Prentice-Hall, Inc.
Mesh Generation and Optimal Triangulation. World Scientific. Mesh Generation. Cheung and M. Pitman Publishing Ltd. Cox and David Ellsworth. Application-controlled demand paging for out-of-core visualization. A Practical Guide to Splines. Springer-Verlag, New York. Sur la Sphère.
Engelson, D. Fritzson, and P. In Proc. Farias and C. Academic Press, Inc. Fritzson and P. Kardestuncer, D. Norrie, and F. Finite Element Handbook. McGraw-Hill Publ. Geometric Modeling. Surface Reconstruction - An Introduction. Pajarola and J. Compressed progressive meshes. Dissertation, University of Washington. Ramanujan, J.
Newhouse, M. Kaddoura, A. Ahamad, E. Chartier, and K. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, and W. Object-Oriented Modeling and Design. Schroeder, K. Martin, and B. The Visualization Toolkit. Prentice-Hall Inc. Siemers and D. To be published. Stacke, D. BEAST - a rolling bearing simulation tool. Instn Mech. Thompson, Z. Warsi, and C. Numerical Grid Generation: Foundations and Applications.
Elsevier, New York. Woo, J. Neider, and T. OpenGL Programming Guide. Addison-Wesley, 2nd edition.
The whole simulation process, which includes multibody dynamics, finite element analysis and durability analysis, is easily applicable to long simulation periods, even in the case of detailed finite element models and multi-body-system models, respectively. Moreover, compared to a transient finite element analysis, computation times are significantly reduced.
Most of these modes have no influence on the dynamic behaviour of the multi-body system and lead to high computation times. The unique feature of the new approach is that only modes contributing to the relevant dynamic effects have to be taken into account. In general these are a small number of eigenmodes combined with some frequency response modes that represent structural deformations due to the interface loads arising from applied forces, constraints and joints. The output of the multi-body simulation for a later stress calculation is a combination of interface loads and modal co-ordinates which describe the free oscillations of the structure. The small number of modes allows the use of efficient models in the multi-body software for dynamic load data generation. The new approach was applied to a spare wheel carrier of a truck. The computed stresses of the virtual test rig showed a very good correspondence to the measured values of the physical test.
Critical areas of the design could be identified and were improved based on the results of the new method. However, modal stress calculation based on a small number of eigenmode vectors u^h yields poor results [2][3][6]. Results can only be improved if the eigenmodes are extended by a set of particular modes u^p, in which each particular mode u_k^p represents the influence of a single unit attachment load p_k on the flexible body deformation.
As shown in [2], so-called inertia relief modes and frequency response modes are suitable to be used as particular modes in this context. The calculation of frequency response modes requires the stiffness matrix K and the mass matrix M of the finite element model, and the specification of an excitation frequency Ω by the user.
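As a small numerical illustration (a sketch only; the matrices are toy values and the exact definition used by the authors may differ), an undamped frequency response mode for a unit attachment load p_k at excitation frequency Ω can be obtained from (K - Ω²M) u = p_k:

```python
import numpy as np

# Toy 3-DOF finite element model (assumed values, not a real structure).
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]]) * 1.0e5      # stiffness matrix
M = np.diag([1.0, 1.0, 0.5])                    # mass matrix
omega = 2.0 * np.pi * 20.0                      # user-specified excitation frequency (rad/s)

# One frequency response mode per interface degree of freedom:
# a unit attachment load is applied at that DOF.
frequency_response_modes = []
for k in range(K.shape[0]):
    p_k = np.zeros(K.shape[0])
    p_k[k] = 1.0                                # unit attachment load p_k
    u_k = np.linalg.solve(K - omega**2 * M, p_k)
    frequency_response_modes.append(u_k)

print(np.column_stack(frequency_response_modes))
```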
Frequency response modes can be calculated for floating structures, which is the frequent case in the field of multi-body dynamics, whereas constraints preventing rigid body motion have to be applied in the case of static correction modes. Thus, static correction modes cannot properly represent the behaviour of free floating bodies.
Each connection can transmit forces or moments in certain directions, which are subsequently denoted as interface degrees of freedom. Precise modal stress calculation requires the consideration of one particular mode for each interface degree of freedom.
Since most of these modes have no influence on the dynamic behaviour of the multi-body-system, LOADS Durability uses a modified approach. However, the process requires some further considerations.
Since all eigenmodes are orthogonal with respect to each other, each residual particular mode u^p must be orthogonal to every significant eigenmode u^h. Further analysis shows that the vector r in equation 21 represents the modal co-ordinates of the eigenmodes q^h minus the impact of the static loading on the modal co-ordinates of the eigenmodes. In other words, vector r describes the free oscillations of the structure that is embedded within the multi-body-system.
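One standard way to enforce this orthogonality (a generic sketch, not necessarily the exact procedure of LOADS Durability; mass-normalized eigenmodes and toy matrices are assumed) is to subtract from each particular mode its projection onto the retained eigenmodes:

```python
import numpy as np
from scipy.linalg import eigh

def residualize(particular_modes, eigenmodes, M):
    """Make each particular mode M-orthogonal to the retained eigenmodes.
    Columns of `eigenmodes` are assumed mass-normalized: phi_i^T M phi_i = 1."""
    residual = particular_modes.copy()
    for i in range(eigenmodes.shape[1]):
        phi = eigenmodes[:, i:i + 1]
        # subtract the eigenmode content contained in each particular mode
        residual -= phi @ (phi.T @ M @ residual)
    return residual

# Tiny example with assumed toy matrices.
rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)                 # symmetric positive definite mass matrix
K = np.diag(np.arange(1.0, n + 1.0)) * 1e4  # stiffness matrix

# Mass-normalized eigenmodes of the generalized problem K phi = lambda M phi.
_, phi = eigh(K, M)                         # eigh returns M-orthonormal eigenvectors
u_p = rng.standard_normal((n, 2))           # two particular modes (e.g. unit-load responses)
u_res = residualize(u_p, phi[:, :3], M)

print(np.max(np.abs(phi[:, :3].T @ M @ u_res)))   # ~0: residual modes are M-orthogonal
```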
As mentioned above, this requires only those eigenmodes and frequency response modes which affect the dynamic behaviour of the multibody system. In the case of a very stiff component, the body may be modelled as a rigid body, and the LOADS Durability post-processing will be based exclusively on the attachment loads and the complete set of particular modes.
After the time integration LOADS Durability is started and the user is requested to select the time interval for the durability analysis. Furthermore, the user may select between frequency response modes or inertia relief modes to be used for stress calculation. Then, the residual transformation coefficients are calculated.
The modal stress calculation and the durability analysis are performed in the durability software. In order to obtain the stresses, the modal stress vectors must be assigned to their corresponding time series: each particular stress vector σ_k^p must be assigned to its corresponding attachment force p_k, and each eigenmode stress vector σ_i^h must be assigned to its corresponding component r_i of the resonance vector r. In the multi-body-system model the elastic behaviour of the spare wheel carrier was described by a minimum number of eigenmodes and frequency response modes which guaranteed accurate dynamic behaviour during the time integration.
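A small sketch of this superposition (illustrative only; the array shapes, names, and random toy numbers are assumptions): the stress time histories are recovered by combining the unit-load stress vectors with the attachment-force time series and the eigenmode stress vectors with the components of r.

```python
import numpy as np

rng = np.random.default_rng(2)

n_stress = 4        # stress components / locations recovered from the FE model
n_loads = 3         # interface (attachment) loads
n_modes = 2         # retained eigenmodes
n_time = 100        # time steps from the multi-body simulation

sigma_p = rng.standard_normal((n_stress, n_loads))   # particular stress vectors (per unit load)
sigma_h = rng.standard_normal((n_stress, n_modes))   # eigenmode stress vectors
p = rng.standard_normal((n_loads, n_time))           # attachment force time series p_k(t)
r = rng.standard_normal((n_modes, n_time))           # resonance vector components r_i(t)

# sigma(t) = sum_k sigma_p[:, k] * p_k(t) + sum_i sigma_h[:, i] * r_i(t)
sigma_t = sigma_p @ p + sigma_h @ r

print(sigma_t.shape)   # (n_stress, n_time): stress histories ready for durability analysis
```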
The measurement points were located at the spare wheel, see Fig. As mentioned above, the ANSYS input files, including the unit load cases for all interface degrees of freedom and information about the used eigenmodes, were automatically generated by LOADS Durability. Cumulative frequency distributions of the stresses at the two critical spots are shown in Fig.
The correlation of the computed and measured stresses has the same quality as the correlation of the computed and measured accelerations, compare Fig. The results of the next task are shown in Fig. This result also corresponds with the results obtained on the test rig.
However, the absolute lifetimes obtained by computation and the test rig were completely different. They were 27 hours for the test rig and 13 hours for the simulation. This enables the user to perform durability analysis over longer simulation periods even in the case of detailed finite element models and multi-body-system models, respectively.
Since the process can be performed in short computation times, it will allow optimisation of components of mechanical systems regarding fatigue life.