Computationally Efficient Methods in Topology Optimization

University essay from Lunds universitet/Hållfasthetslära; Lunds universitet/Institutionen för byggvetenskaper

Abstract: In topology optimization, iterative, gradient-based methods are used to find the material distribution of a structure that maximizes some objective function, typically the structure's stiffness or, in some cases, its fundamental frequency. Finite element analysis is used to compute the structural response in each iteration, leading to large systems of equations. Several hundred iterations may be needed for the optimization problem to converge; however, the design changes between iterations may be very small, particularly towards the end of the optimization process. This raises the question of whether the systems need to be solved exactly, or whether information from previous iterations can be reused to reduce the computational effort. This is the fundamental idea of reanalysis, which Kirsch used to develop effective basis-generation methods for reduced-order models, known as combined approximation (CA). Kirsch's combined approximation has seen some use for static problems in topology optimization, and methods that take the approximation's inaccuracies into account for a consistent sensitivity analysis have been developed. Kirsch's CA has also been used for eigenvalue problems, and a consistent sensitivity analysis for optimization of a single eigenfrequency has been developed. We found that some of the basis-generation methods Kirsch proposes may be ill-suited when multiple eigenfrequencies are used to approximate the fundamental frequency, and we propose a simple remedy to these problems. The sensitivities of the eigenfrequencies and of the objective function are derived using the adjoint method and are compared to finite difference approximations. The simulations show that the basis-generation methods Kirsch proposes are inconsistent, but that the novel method is consistent with a full model. However, all reduced-order methods produced indiscernible results and saved a similar amount of computational effort.
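To illustrate the reanalysis idea behind CA, the sketch below shows one common form of Kirsch's basis-generation recurrence for a static system (K0 + dK)u = f: the factorization of the old stiffness matrix K0 is reused to build a few basis vectors, and the updated system is solved only in the small reduced space. This is a minimal dense NumPy sketch under stated assumptions, not the implementation used in the thesis; the function name, the QR orthonormalization step, and the default of three basis vectors are illustrative choices.

```python
import numpy as np

def combined_approximation(K0, dK, f, n_basis=3):
    """Minimal sketch of Kirsch's combined approximation (CA) for a static system.

    Approximates the solution of (K0 + dK) u = f by reusing a factorization
    of the old stiffness matrix K0 to build a small reduced basis, instead
    of refactorizing the updated matrix K0 + dK.
    """
    # Factorization of the *old* stiffness matrix; in an optimization loop
    # this would be computed once and reused over several design updates.
    L = np.linalg.cholesky(K0)
    solve_K0 = lambda b: np.linalg.solve(L.T, np.linalg.solve(L, b))

    # Basis generation: r1 = K0^{-1} f,  r_{i+1} = -K0^{-1} dK r_i
    r = solve_K0(f)
    basis = [r]
    for _ in range(n_basis - 1):
        r = -solve_K0(dK @ r)
        basis.append(r)
    R = np.column_stack(basis)
    R, _ = np.linalg.qr(R)          # orthonormalize the basis for stability

    # Reduced-order model: project the updated system onto the basis
    # and solve the small n_basis x n_basis system.
    K = K0 + dK
    y = np.linalg.solve(R.T @ (K @ R), R.T @ f)
    return R @ y                    # approximate displacement vector


# Quick consistency check against a direct solve on a small random SPD system
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
K0 = A @ A.T + 50 * np.eye(50)       # "old" stiffness matrix (SPD)
dK = 0.1 * np.diag(rng.random(50))   # small design change
f = rng.standard_normal(50)
u_ca = combined_approximation(K0, dK, f)
u_exact = np.linalg.solve(K0 + dK, f)
print(np.linalg.norm(u_ca - u_exact) / np.linalg.norm(u_exact))
```

In a topology-optimization loop, dK would be the stiffness change produced by the latest design update, and the small reduced solve replaces a full refactorization of the updated system; the eigenvalue variant discussed in the thesis follows the same pattern but builds the basis from approximate eigenvectors.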
