Sunday, 15 September 2013

If You Really Need to Optimize


Optimal solutions are fragile and should generally be avoided. This unpopular statement is supported by substantial practical and philosophical argument and now, thanks to complexity, the case can be made even more persuasively. This short note, however, is about making optimisation a little easier. If you really insist on pursuing optimality, there is an important point to keep in mind.

Let us examine the case illustrated below: the design of a composite conical structure in which the goal is to keep the mass under control, along with the fluxes and the axial and lateral frequencies. The System Map shown below reflects which design variables (red nodes, the ply thicknesses) influence the performance of the structure in the nominal (initial) configuration, prior to optimisation. The map also illustrates how the various outputs (blue nodes) relate to each other.

[Figure: System Map of the composite conical structure, with red input nodes (ply thicknesses) and blue output nodes (mass, fluxes, frequencies).]

In fact, one may conclude that, for example, the following relationships exist:
  • t_20013 is linked to the Weight
  • the Weight is linked to the Axial Frequency
  • the Min Flux is linked to the Max Flux
  • t_20013 controls the Lateral Frequency
  • t_20013 also controls the Axial Frequency
  • the Lateral Frequency and the Axial Frequency are related to each other
  • etc.
As one may conclude, the outputs are tightly coupled: if you change one, you cannot avoid changing the others. Let us first see how optimisation is typically handled when one faces multiple, often conflicting, objectives:

minimise y = COST(y_1, y_2, ..., y_n), where y_k stands for the k-th performance descriptor (e.g. mass, stiffness, etc.). In many cases, weights are introduced as follows:

minimise y = COST(w_1 * y_1, w_2 * y_2, ..., w_n * y_n).
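
To make the weighted formulation concrete, here is a minimal sketch in Python. Everything in it (the descriptor values, the weights, the function name) is an illustrative assumption, not something taken from the structure above:

    import numpy as np

    def weighted_cost(y, w):
        # Weighted-sum scalarisation of n performance descriptors:
        # y holds the descriptors (e.g. mass, flux, frequency),
        # w holds the weights expressing their relative importance.
        y = np.asarray(y, dtype=float)
        w = np.asarray(w, dtype=float)
        return float(np.sum(w * y))

    # Assumed values: mass [kg], max flux, axial frequency [Hz].
    # The negative weight rewards a higher frequency, while the
    # positive weights penalise mass and flux.
    y = [120.0, 0.85, 45.0]
    w = [1.0, 10.0, -0.5]
    print(weighted_cost(y, w))

An optimiser then drives the design variables so as to minimise this single number, which is precisely where the trouble described below begins.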

The fundamental problem with such a formulation (and with all similar MDO-type formulations) is that the various performance descriptors are often dependent, just as the example above indicates, and the analyst doesn't know it. The cost function written above is the mathematical statement of a conflict, in which the y's compete for prominence. This competition is driven by an optimisation algorithm which knows nothing of the structure of the corresponding System Map or of the relationships contained therein.

Imagine, for example, that you are trying to reduce one variable (e.g. mass) and, at the same time, increase another (e.g. frequency). Suppose also that you don't know that these two variables are strongly related; the relationship typically looks like this: f = SQRT(k/m). Here f and m, both outputs of the problem, are related: changing one modifies the other. This is inevitable. In a more intricate situation, involving hundreds of design variables and tens or hundreds of performance descriptors, the problem becomes numerically tough and the optimisation algorithm has a very hard time. What is the solution?
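
The hidden coupling is easy to reproduce in a few lines. The sketch below uses the relationship quoted above, f = SQRT(k/m), with an assumed stiffness k and a set of assumed masses; it is a single degree-of-freedom caricature of the real structure:

    import math

    def frequency(k, m):
        # Natural frequency of a single degree-of-freedom system,
        # f = sqrt(k/m), as quoted in the text.
        # k and m are assumed, illustrative values.
        return math.sqrt(k / m)

    k = 2.0e6                        # stiffness [N/m], assumed
    for m in (100.0, 80.0, 60.0):    # progressively reduced mass [kg]
        print(m, frequency(k, m))

Reducing the mass from 100 to 60 kg raises the frequency from roughly 141 to 183: the two descriptors cannot be steered independently, so placing separate weights on both in the cost function effectively double-counts a single underlying quantity, and the optimiser is left to fight a dependence it cannot see.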

If you cannot avoid optimisation, then we suggest the following approach:
  • Define your baseline design.
  • Run a Monte Carlo Simulation, in which you randomly perturb the design variables (inputs).
  • Process the results using OntoSpace, obtaining the System Map.
  • Find the INDEPENDENT outputs (performance descriptors) or, in case there aren't any, those outputs which have the lowest degree in the System Map. There are tools in OntoSpace that help to do this.
  • Build your cost function using only those variables, leaving the others out.
This approach "softens" the problem from a numerical point of view and reduces the aforementioned conflicts between output variables. Attempting to formulate a multi-disciplinary problem without knowing a priori how the various disciplines interact (i.e. without the System Map) is risky, to say the least.
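
A minimal sketch of this workflow is given below. It does not use OntoSpace; as a stand-in, it approximates the System Map by thresholding the correlation matrix of the Monte Carlo outputs and then ranks the outputs by node degree. The toy model, the 0.7 threshold, the perturbation size and the sample count are all assumptions made purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_model(x):
        # Hypothetical stand-in for the real solver: maps 5 design
        # variables (e.g. ply thicknesses) to 4 performance descriptors.
        mass = 10.0 + x.sum()
        axial_freq = np.sqrt(1.0e4 / mass)            # tightly coupled to mass
        lateral_freq = 0.8 * axial_freq + 0.1 * x[0]  # coupled to axial_freq
        max_flux = 1.0 + 0.5 * x[3] - 0.2 * x[4]      # only weakly coupled
        return np.array([mass, axial_freq, lateral_freq, max_flux])

    names = ["Mass", "AxialFreq", "LateralFreq", "MaxFlux"]

    # 1. Baseline design, then random perturbations (Monte Carlo).
    x0 = np.ones(5)
    X = x0 + rng.normal(scale=0.05, size=(2000, 5))
    Y = np.array([toy_model(x) for x in X])

    # 2. Approximate System Map: link two outputs when |correlation| > 0.7.
    C = np.corrcoef(Y, rowvar=False)
    adj = (np.abs(C) > 0.7) & ~np.eye(len(names), dtype=bool)

    # 3. Degree of each output node; the least-connected outputs come first.
    degree = adj.sum(axis=1)
    for name, d in sorted(zip(names, degree), key=lambda t: t[1]):
        print(name, "degree:", d)

    # 4. Build the cost function only from the lowest-degree descriptors,
    #    leaving the tightly coupled ones out.

With these assumed numbers, MaxFlux comes out as the least-connected output, while Mass and the two frequencies form a tightly coupled cluster; a cost function built on the weakly connected descriptors is the "softened" problem the text refers to.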






www.design4resilience.com

