An approach to the response Analysis of Shafts – Numerical Examples

18 02 2010

3 Numerical examples

In the following examples, finite element simulations are used to demonstrate this concept. The simulations concern shaft loading with a torsional moment and axial compression, a common loading condition for shafts, for example in the rotating machinery used in power plants. The simulations were performed with the commercial finite element code Abaqus/Explicit 6, using double-precision arithmetic, and the POD processing was performed with custom software based on the LAPACK library [6]. In all cases, POD analysis is performed over at least 300 snapshots (taking care to always observe the lower bound of the Nyquist–Shannon sampling theorem), including all the degrees of freedom of all nodes in the models. The results presented hereafter include the extracted mode shapes with plots of the variable field distributions, the amplitude vs. time fluctuation of each mode, and the singular value percentage of each mode.

It is noted that in all cases multi-field POD is performed: the input snapshot vectors include three displacement variables for each node, and the resulting POMs include fields that correspond to each of the input variables. The amplitude vs. time curve for each mode represents the variation with time of the mode's participation in the time–space domain of the simulation. In vibration problems this is usually an oscillating curve, and a Fourier analysis is performed on this time variation so as to calculate the excited frequencies for each mode [6]. Finally, the relative percentage of the singular values provides an overall estimated participation factor for each mode. Since there are three fields in each POM, it is interesting to calculate the norm of each field so as to determine its participation in the POM. In the results presented, the sum of the squared field norms equals unity, and the field norms are given next to the singular value percentage so that the field dominating the POM can easily be determined.
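As a minimal sketch of the snapshot procedure described above (using NumPy, which calls LAPACK's SVD routines under the hood; the snapshot data and node count here are hypothetical placeholders, not the paper's models):

```python
import numpy as np

# Hypothetical snapshot matrix: each column stacks the three displacement
# components (u1, u2, u3) of every node at one sampling instant.
rng = np.random.default_rng(0)
n_nodes, n_snapshots = 100, 300          # at least 300 snapshots, as in the text
X = rng.standard_normal((3 * n_nodes, n_snapshots))

# POD via the singular value decomposition: columns of U are the POMs.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Singular value relative percentage: overall participation of each mode.
sv_percent = 100.0 * s / s.sum()

# Amplitude vs. time curves: row k is the participation history of POM k,
# suitable for a subsequent Fourier analysis in a vibration problem.
amplitudes = np.diag(s) @ Vt

# Field norms of the first POM: split the mode into its three displacement
# fields (assuming u1, u2, u3 are interleaved per node). Since each POM has
# unit norm, the sum of the squared field norms equals unity.
pom = U[:, 0].reshape(n_nodes, 3)
field_norms = np.linalg.norm(pom, axis=0)
```

The dominant field of a POM is then simply the one with the largest norm, which is how the field norms listed next to the singular value percentages are meant to be read.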



An approach to the response Analysis of Shafts – Discussion

16 02 2010

4 Discussion

The study of shaft behaviour under dynamic loading and rotation is of profound importance in predicting resonance, in system control and in system monitoring. The purpose of the work presented in this paper is to introduce the method of Proper Orthogonal Decomposition (POD) as a tool that can be used effectively to characterise the dynamics involved in the above tasks and to extract useful information on the real-time behaviour of the structure. The method is used as a form of meta-processing of finite element simulation results. It has been shown that the POD method reproduces modes and corresponding frequencies that are systematically correlated under variations of the initial conditions in dynamical problems of free vibration. Even when combined loading is applied, the POD method correctly discriminates and classifies these modes and frequencies. It has also been shown that the frequencies are affected by dynamical effects and pre-strain, a behaviour that is expected but often difficult to calculate.

The Proper Orthogonal Decomposition presents a considerable advantage: it is indifferent to the system that generates its input. Nevertheless, it succeeds in extracting the dominant modes from the time–space response of the structure and classifying them. Throughout this text, POMs have not been considered to coincide necessarily with natural modes of vibration. Rather, POMs are appropriate combinations of modes and therefore feasible configurations of a body. Being orthogonal, the POMs form a basis of the space in which the configurations of the body in the particular process lie. Moreover, they are classified in an eigenvalue sense. These two properties are very important: in combination, they identify the dominant POD modes in the response. This information can be interpreted in two ways, especially in cases where a unilateral behaviour is desired. The first interpretation is that a strongly dominant mode depicts a process that is consistent and “robust” with respect to that mode. The second is that a dispersion of the singular value percentage over more than one mode signifies a process that includes strong interference with the dominant mode.


Is optimisation so bad?

27 07 2009

Tools are not bad; the way we use them may be. Let’s first take a look at the question: what is optimisation?

If we seek a general definition of optimisation, it could be “to provide the best possible answer to a question according to a criterion.” What are the ingredients of this definition?

  1. Question
  2. Answer
  3. Criterion
  4. Optimisation method

The answer (2) depends on (1), (3) and (4) above. Changing (1), (3) or (4) may, but need not, change (2). Therefore optimisation per se is not a good or bad practice, but merely a way of answering questions. For it to provide a reasonable answer, you need to apply an appropriate criterion and method to a “well posed” question. “Well posed” is here simply a reminder to make sure the question you ask is the one you really care about, i.e. that it is sufficient to describe the problem you have at hand.

In a more mathematical manner, we could describe optimisation as follows:

“within a given range, find the extremum of a function, where the extremum satisfies some requirements”.  (a)

This implies that

  1. There is a function that describes our problem
  2. There is an extremum within the given range

For an analytical function, the extremum can be calculated exactly.

For a numerical function an approximation can be given using numerical methods.

Such a numerical optimisation needs additional input, namely the accuracy to which the extremum is to be sought. Depending on the method used, the accuracy can be treated in different ways.
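As a small illustration of statement (a) treated numerically, here is a golden-section search, one classical method for a unimodal function of one variable; the bracket, the tolerance and the test function are all arbitrary choices for the sketch:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Find the minimum of a unimodal function f on [a, b] to accuracy tol.

    tol is the extra input a numerical optimisation needs: the accuracy
    to which the extremum is to be sought.
    """
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# The parabola (x - 2)^2 + 1 has its extremum, analytically, at x = 2;
# the numerical method recovers it to within the requested tolerance.
x_star = golden_section_min(lambda x: (x - 2) ** 2 + 1, 0.0, 5.0)
```

Note that both assumptions listed above are built in: the function exists, and an extremum lies within the given range; if either fails, the method happily returns a meaningless number.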

Now, most problems can be translated to the statement (a). However, it is important to know what kind of problem one needs to solve and not confuse the definitions. Let’s take an example where equally interesting questions can yield different results that under some circumstances may be confused with each other.

Let’s suppose you’re standing on the edge of a cliff over a gorge and you want to get to the bottom of that gorge. This is a rather vague need and we might stipulate the following problems:

  1. Find the deepest spot in the gorge
  2. Find a sufficiently deep spot in the gorge
  3. Find the fastest/slowest/safest way to the deepest spot
  4. Find the fastest/slowest/safest way to a sufficiently deep spot
  5. If the spot you want to visit is known (you have seen it in a book) and you perhaps even know where it is, find the fastest/slowest/safest way to that spot.
  6. Find a spot which is deep enough, but not so deep that you cannot climb back up.
  7. Find a spot which is deep enough but on a bump where you can stand and take pictures, a bump that is not so steep that you totter.
  8. If a friend is where you want to be, and he can see you with his binoculars but you cannot see him, you might want him to come and get you rather than try to find him yourself.

In some of the situations above a “sufficiently deep spot” is sought. A sufficiently good solution is sometimes the best solution to a problem, as it provides a good balance between the result and the effort needed to reach it. For example, let’s say you’re using the Navier–Stokes equations to choose which shape has minimum drag and maximum lift among a circle, a square, a line, an airfoil and perturbations of these. Any airfoil is a sufficiently good solution.

The CAE case

Computer Aided Engineering means using a numerical system to simulate a real system and making decisions based on the outcome of tests performed on the numerical system. One can only hope that the numerical system simulates the actual system sufficiently well.

Assuming the numerical system can be a good enough approximation, one has to compose the right system for the problem at hand. Considering the case of the airfoil, a designer could stop upon finding that, among the attainable shapes, the airfoil is the one that provides the most lift for the least drag. But he could go further and optimise the shape of the airfoil for a particular speed range and angle (or other service requirements). The result might be a quite peculiar cross-section. Let’s say that we construct one such cross-section and test it, and let’s assume that the test results are identical to the simulation results. In this ideal case, the numerical or virtual system acts like a gauge for the real or physical system. But in order to achieve this result, a high-precision manufacturing process was needed, whose duration and cost might be impossible to justify.

The designer answered the question

“Which is the airfoil that performs best under the given service conditions”?

The assumption here is that if the airfoil must be constructed, there is potentially infinite time and funding for trial and error until the result conforms with the simulation, given that the numerical system accurately simulates reality.

If a series of airfoils is to be manufactured, though, a less time-consuming and costly method is required. This additional restriction results in a departure from the ideal case and a landing in the reality of

System + measurement = variation

In this situation the “optimum” might change, which is normal, because manufacturing is a different problem from designing.

Now the question is

“Which is the airfoil that performs best under the given service conditions and can be manufactured with the particular process”?

Obviously this is a different problem. The system is no longer the airfoil alone; the manufacturing process has been introduced as well. But this is still an optimisation problem.

So identifying the problem you actually want to solve is the first step towards getting it solved in a satisfactory manner. An obvious statement, but often ignored. Take another example: consider a metal forming process, i.e. forming a part from a coil, and look for the optimal thickness of the coil so as to minimise weight. The possible coil thicknesses are 0.5, 0.6 and 0.7 mm. Let’s say a designer performs a simulation for each case and no cracks are identified for any of them. Let’s also assume that an optimisation algorithm is used which identifies 0.500 mm as the optimal thickness, with 0.499 mm giving cracks. The designer would pick 0.5 mm as the optimum thickness. So far so good: for what we asked we got an answer, and there is no statement here that we actually want to produce this part. Now, if we go to production, we might find out that the manufacturer gives us a variation of 0.01 mm in the coil thickness, which will result in scrap. Assuming that the manufacturer gives the same variation for all coils, the production engineer would pick 0.6 mm as the optimum thickness that eliminates cracks and minimises weight. Is this the best solution? It is good enough for the question asked. If the coil supplier were asked to provide statistical data, in terms of a median and standard deviation for his coils, we might calculate a scrap rate of a% for the 0.5 mm thickness. Then we should see whether this scrap rate is acceptable and try to optimise the cost of production if we want to reduce it. That’s yet another problem.
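The design question and the production question can be written out side by side; a few lines make the difference plain (the crack limit and tolerance are the hypothetical numbers from the example above):

```python
# Hypothetical numbers from the example: cracks appear below 0.499 mm,
# candidate coil thicknesses are 0.5, 0.6 and 0.7 mm, and the supplier
# guarantees a thickness variation of +/-0.01 mm.
crack_limit = 0.499
candidates = [0.5, 0.6, 0.7]
variation = 0.01

# Design question: thinnest coil with no cracks in the simulation.
nominal = min(t for t in candidates if t > crack_limit)

# Production question: thinnest coil whose worst case (t - variation)
# still stays above the crack limit.
robust = min(t for t in candidates if t - variation > crack_limit)

print(nominal, robust)   # the designer's 0.5 mm vs. the production engineer's 0.6 mm
```

Same candidates, same criterion of minimum weight; only the constraint changed, and with it the “optimum”.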

So, all in all, optimisation is a method for seeking the best solution to a given problem. As with all tools, one has to know how to use it. It is not good or bad in itself; however, it is sometimes misused.