Background

From DFG-SPP 1253

Summary

Solving optimization problems subject to constraints involving distributed parameter systems is one of the most challenging problems in the context of industrial, medical and economic applications. In particular, in the design of aircraft, ’moving bed’ processes in chemical engineering, crystal growth, etc., the transition from model-based numerical simulation to model-based design and optimal control is crucial. All of these applications involve modelling based on partial differential equations (PDEs) and can be classified as form and topology optimization problems or optimal control problems in an infinite-dimensional setting.

For the treatment of such problems the interaction of optimization techniques and numerical simulation is crucial. After proper discretization, the number of optimization variables varies between 10^3 and 10^10. It is only very recently that the enormous advances in computing power have made it possible to attack problems of this size. However, in order to accomplish this task it is crucial to utilize and further explore the specific mathematical structure of prototype applications, and to develop new mathematical approaches concerning structure exploiting algorithms, adaptivity in optimization and the handling of control and state constraints.

The aim of this DFG-Priority Program (DFG-PP) is thus to combine numerical analysis of PDEs and optimization governed by prototype applications so that the most recent activities in the fields are merged and further explored, and new analytic and algorithmic paradigms will be developed, implemented, and, ultimately, validated in the context of real-world applications.

On the one hand, the proposed priority program requires basic research in individual projects; on the other hand, a synergetic networking of research groups is required in order to foster a synthesis of ideas and methods. Clustering of research projects around prototype applications is particularly important in order to achieve landmark results with respect to solving the underlying industrial, economic, medical and environmental problems.

The proposed DFG-Priority Program is interdisciplinary, with a major focus on Applied Mathematics. It receives its motivation and validation platform from very recent grand-scale applications. It is becoming more and more apparent that such applications clearly define a class of mathematical problems in numerical analysis and optimization that need to be solved in order to make a difference in future technology. It is, therefore, strongly believed that the research results will significantly benefit computational scientists from other disciplines and engineers in charge of grand-scale applications.

A critical mass of several senior and many young mathematicians has evolved in Germany in recent years, driven by fascinating future applications and the challenging mathematical problems that are to be solved.

The proposed Priority Program would trigger a new and exciting development in computationally oriented Applied Mathematics and Engineering that would lead to ’cutting edge’ research in these fields. While the first funding period is mainly devoted to the development of new computational technologies, in the second period the engineering applications shall be given more room.

Scientific Program

The mathematical analysis and numerical treatment of optimization problems involving distributed parameters, in particular PDEs, necessitate the improvement of existing and the development of new mathematical concepts in algorithms, analysis and discretization. This will only be possible if ideas and methods in optimization and numerical analysis of PDEs are joined and turned into new algorithmic paradigms.

In contrast to mere simulation, i.e. the evaluation of computer models for given parameters, the process of optimization requires their systematic variation with respect to a given objective. The size of the discretized problems ranges between 10^3 (stationary, 1-dimensional) and 10^10 (instationary, 3-dimensional) variables, with 10^7 representing the typical size in current industrial applications. The major goal is to develop algorithms for optimal control problems with PDE-constraints that satisfy the relation

\frac{\textrm{effort of optimization}}{\textrm{effort of simulation}} = \textrm{constant} \qquad (1)

with a constant of ’moderate’ size. General-purpose optimization procedures do not comply with this law. In fact, as a general rule, when applied to a discretized PDE, their convergence behavior often deteriorates as the discretization parameter gets smaller. Hence, the analysis of structure exploiting algorithms suitable for optimization problems with PDE-constraints together with state and control constraints should also be explored in the limiting infinite-dimensional setting. This formidable task can only be achieved by combining appropriate discrete concepts, adaptivity and optimization in a hierarchical, multi-level framework.
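
As a point of reference, a generic abstract formulation of this problem class (a sketch only; the concrete applications below add considerable structure) reads

\min_{y,u} \; J(y,u) \quad \textrm{subject to} \quad e(y,u) = 0, \qquad u \in U_{ad}, \qquad y \in Y_{ad},

where e(y,u) = 0 stands for the governing PDE linking the state y to the control or design variable u, and the sets U_{ad} and Y_{ad} encode control and state constraints. The numerator in (1) then refers to solving this problem, the denominator to a single solution of e(y,u) = 0 for fixed u.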

Back to top

Structural concept and scientific goals

Prototype applications

Moving bed process in chemical engineering

Many chemical reactions and purification steps are performed in fixed beds that contain a catalyst or an adsorbent (fixed bed reactors, adsorbers, chromatographic columns). The mixture of reactants or of substances that have to be separated moves relative to the fixed bed, so the intensity of the reaction/adsorption depends on the spatial coordinates, and the processes thus have to be modelled by PDEs, usually 1-d or 2-d models. Recent developments integrate adsorption and chemical reaction into one bed in order to enhance the reaction by the removal of products from the reaction zone.

The performance of this class of processes depends in a complex fashion on the design parameters and the operating parameters, so their efficient operation requires a model-based optimization. A special feature is that the operating parameters have to ensure the desired optimal operation at the cyclic steady state of the process. In a direct approach, in each iteration of the optimizer a discretized model is simulated until the cyclic steady state is reached. This requires the simulation of more than a hundred up to several thousand cycles of PDE models with steep fronts, leading to several hundred or several thousand state variables after discretization in the 1-d case, and many more if the radial inhomogeneity of the packing and the flows is considered. The resulting computation times are prohibitive even for the off-line design of the process and rule out an online application in an optimizing control framework. An alternative method is full discretization, but a straightforward application often does not yield relevant improvements.
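
As an illustration of the direct approach described above, the following minimal Python sketch iterates a hypothetical one-cycle simulator until the cyclic steady state is reached; simulate_one_cycle is a placeholder for a discretized column model and is not taken from any particular code.

    # Direct approach: fixed-point iteration over cycles until the state
    # repeats itself (cyclic steady state). In a real application each call
    # to simulate_one_cycle integrates a discretized PDE model over one cycle.
    import numpy as np

    def simulate_one_cycle(c, feed):
        # Placeholder for one cycle of the discretized column model; a
        # contractive toy map is used here only so that the sketch runs.
        return 0.9 * c + 0.1 * feed

    def cyclic_steady_state(c0, feed, tol=1e-8, max_cycles=5000):
        c = c0
        for k in range(1, max_cycles + 1):
            c_next = simulate_one_cycle(c, feed)
            if np.linalg.norm(c_next - c) <= tol * max(np.linalg.norm(c), 1.0):
                return c_next, k
            c = c_next
        return c, max_cycles

    n = 200                                  # grid points of the 1-d discretization
    feed = np.linspace(0.0, 1.0, n)          # operating parameter varied by the optimizer
    css, cycles = cyclic_steady_state(np.zeros(n), feed)
    print("cyclic steady state reached after", cycles, "cycles")

In an optimization loop this inner iteration has to be repeated for every trial set of operating parameters, which is exactly what makes the direct approach so expensive.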

Crystal growth

Crystal growth has been a focal point of intense applied research in recent years and significant progress has been made in many areas like growth techniques, process modelling and simulation, material characterization, defects and dislocations, process control and process design. However, until recently progress in the field has primarily been achieved by time-consuming and expensive experiments based on intuition and on trial and error. To compete in the global market it is important to reduce the cycle time for process development by combining engineering and mathematical efforts on growth experiments, process modelling, characterization, control and system design.

Crystal growth processes involve many different related physical mechanisms which interact on very different spatial and temporal scales. Mathematically, this is expressed by a hierarchy of weakly coupled models of PDEs. In model-based simulation this weak coupling of the different components in the models can be used to derive algorithms which relate microscopic crystal properties to macroscopic growth parameters. Due to the system complexity, model-based optimization of crystal growth obeying rule (1) necessitates the development of structure exploiting algorithms incorporating appropriate discrete concepts, adaptive methods and the efficient treatment of control and state constraints. In this spirit crystal growth serves as a prototypical test application for the development and specification of new mathematical and numerical approaches in the emerging field of optimization problems for systems of coupled nonlinear PDEs.

Aerodynamic shape design

The aerospace industry is increasingly relying on advanced numerical flow simulation tools in the early aircraft design phase. Today’s flow solvers based on the solution of the Euler and Navier-Stokes equations are able to predict the aerodynamic behaviour of aircraft components under different flow conditions quite well. Due to the high computational expense of flow simulations around realistic 3D configurations, computational fluid dynamics tools are used in industry rather for the analysis and assessment of given geometries than for shape design and optimization. However, within the next few years numerical shape optimization will play a strategic role in future aircraft design. It offers the possibility of designing or improving aircraft components with respect to a prespecified figure of merit subject to geometrical and physical constraints. Because of the huge amount of computing power required for the simulation tasks, a large research effort has to be devoted to developing highly efficient optimization strategies for industrial aerodynamic aircraft design which make it possible to solve the whole optimization task with a computational effort obeying rule (1). Here, mesh adaptivity with respect to optimization targets also plays a major role.

Furthermore, multidisciplinary analysis is necessary to reach optimum designs of practical relevance. For aerostructural shape optimization this means coupling two disciplines: aerodynamics and structural mechanics. Therefore, the sensitivity evaluation for aerodynamic shape optimization has to take into account the aeroelastic effects introduced by the variations in the aerodynamic forces, which are in turn generated by changes in the aerodynamic shape parameters. Ultimately, one is interested in true multidisciplinary optimization (MDO). Conceivable problems are, e.g., the drag reduction of an airplane while taking into account static wing deformations and the additional goals of low noise and low radar signature.

Parameter identification in combustion processes

The knowledge of the spatio-temporal distribution of oscillatory heat release in unsteady combustion is a prerequisite for advances in combustion technology, e.g. in the analysis and control of combustion instabilities (gas turbines, furnaces or rocket engines) or in pulsed combustion applications. Of particular interest is the relation of heat release rate fluctuations to the oscillatory pressure in a combustor. To develop a thorough understanding of these combustion-acoustic interactions, the quantification of various physical processes occurring in the combustion zone is needed, which would require measuring unsteady pressures, velocities, temperatures and species concentrations inside the combustor. In practice, it is very difficult to measure these unsteady quantities with the currently available techniques. However, with existing techniques it is fairly easy to measure the oscillatory pressure. Therefore, the possibility of recovering the oscillatory heat release distribution using acoustic pressure measurements alone needs to be considered.

This approach is referred to as the “inverse problem” of determining the oscillatory heat release distribution, and it belongs to a general class of inverse problems which have been a subject of extensive study in recent times. In such inverse or parameter identification problems, one tries to identify sources by measuring the effect produced by the sources. Other applications include digital tomography, detection of tumors using ultrasound techniques, geological/oceanographic analysis using seismography/sonar techniques, the inverse heat conduction problem, etc. A characteristic of most inverse problems is that they are likely to be stiff or even ill-posed, i.e. small measurement errors can produce large errors in the solution, here the identified source characteristics. Such inverse problems can in general be formulated as optimization problems.
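
To illustrate ill-posedness and its standard remedy, the following Python sketch applies Tikhonov regularization to a linear toy source problem min_q ||Aq - p||^2 + alpha ||q||^2; the forward operator A, the data p and the parameter alpha are purely illustrative assumptions, not part of any combustion model.

    # Tikhonov-regularized least squares for an ill-posed linear source problem:
    # recover a source q from smoothed, noisy observations p = A q + noise.
    import numpy as np

    def tikhonov_solve(A, p, alpha):
        n = A.shape[1]
        # Normal equations of the regularized least-squares problem
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ p)

    rng = np.random.default_rng(0)
    n = 100
    x = np.linspace(0.0, 1.0, n)
    A = np.exp(-50.0 * (x[:, None] - x[None, :]) ** 2) / n   # smoothing forward operator
    q_true = np.sin(2.0 * np.pi * x)
    p = A @ q_true + 1e-4 * rng.standard_normal(n)           # noisy "measurements"
    q_alpha = tikhonov_solve(A, p, alpha=1e-6)
    err = np.linalg.norm(q_alpha - q_true) / np.linalg.norm(q_true)
    print("relative reconstruction error:", err)

The choice of the regularization parameter alpha balances data fit against stability; without regularization the small singular values of A amplify the measurement noise.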

Structure of the scientific program and its scientific goals

The goal of the proposed research is to achieve a synthesis of state-of-the-art numerical algorithms for PDEs with state-of-the-art methods for nonlinear mathematical optimization problems, developed in the context of prototype applications, in order to achieve very significant advances both in the field of constrained optimization with PDEs and in engineering applications. Ultimately, software will be implemented and validated on the basis of real-world applications. The description of the prototype applications clearly reveals the scope of the research to be conducted within this priority program:

Structure exploiting algorithms

In order to comply with the fundamental rule (1), it is crucial to derive structure exploiting algorithms which make it possible to achieve mesh-independence and, consequently, uniform convergence behavior. The approach of sequential quadratic programming (SQP) as well as recent interior point methods in optimization combined with multigrid methods from PDEs bear the potential to yield highly efficient algorithms for the numerical solution of complex optimization problems including PDEs with state and control constraints.
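
To indicate where this structure enters, one SQP step for the abstract equality-constrained problem \min J(y,u) subject to e(y,u) = 0 solves a linear-quadratic subproblem of the form (a standard formulation, quoted here only for orientation)

\min_{(\delta y, \delta u)} \; \nabla J(y_k,u_k)^T (\delta y, \delta u) + \tfrac{1}{2} (\delta y, \delta u)^T H_k (\delta y, \delta u) \quad \textrm{s.t.} \quad e_y(y_k,u_k)\,\delta y + e_u(y_k,u_k)\,\delta u + e(y_k,u_k) = 0,

whose optimality conditions form a large, sparse KKT saddle point system; multigrid and related fast PDE solvers enter precisely in the solution of these linearized systems, and inequality constraints add linearized bounds to the subproblem.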

Adaptivity and optimization

In the context of algorithms for optimization problems with PDE-constraints, concepts of adaptivity in the discretization appear to be extremely important in order to keep the discretized problems manageable. This controlled ‘model reduction’ is to be based on computable a posteriori error estimates for the approximate state, control and adjoint variables taking into account the sensitivities of these quantities relevant for the optimization process.

Control and state constraints

In most industrial applications the control or design variables are constrained by bounds or by nonlinear inequality constraints. Moreover, quite often the state of the system is also subject to constraints. Hence, the efficient handling of state and control constraints is essential. Recently, encouraging progress has been made for control constrained semilinear elliptic and parabolic problems by using interior point and semi-smooth methods, while much less research effort has been focussed on constrained hyperbolic problems so far. Still, many important questions are open, on both the theoretical and the algorithmic side.

State-of-the-art and proposed research

Structure exploiting algorithms

Discretize first and then optimize? Or vice versa? These are classical questions which, in view of the new solution methods for optimization and PDEs, have to be revisited and readdressed. While presumably no strategy will turn out to be uniformly superior, it is a central task to develop criteria for choosing the right one for any particular given problem. If one discretizes the optimization problem first, then very well developed codes from finite dimensional optimization are available. Due to the large dimension of the discretized systems, only very few of these codes can be applied, and even these still require structural modification and appropriate preconditioning. The use of these codes predetermines, for example, the discretization of the adjoint equations, which in the case of state constraints can cause a substantial loss of efficiency since the Lagrange multipliers can be measure-valued. These and other effects are reasons why mesh independence of the methods cannot be guaranteed in general. If, on the other hand, one optimizes at the continuous level first and discretizes the optimality system (the so-called Karush-Kuhn-Tucker (KKT) system) appropriately, then mesh independence has been shown to hold in some instances. For this approach the convergence analysis in the unconstrained nonlinear case is well developed; for control constraints some preliminary results are available, whereas for state constrained problems the research area is fairly open. The goal is that mesh independence results ultimately lead to diagonalized schemes in which the grid is refined as the iteration progresses without affecting the convergence rate.
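
For orientation, consider the linear-quadratic model problem \min \tfrac{1}{2}\|y - y_d\|^2 + \tfrac{\alpha}{2}\|u\|^2 subject to A y = B u without inequality constraints. Its continuous optimality (KKT) system couples state, adjoint and gradient equations,

A y = B u, \qquad A^* \lambda = -(y - y_d), \qquad \alpha u - B^* \lambda = 0,

and the two strategies differ in whether this system is derived before or after discretization; in particular, discretizing first fixes the discretization of the adjoint variable \lambda implicitly, whereas optimizing first leaves this choice open.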

The influence of the discretization approach for the state equation has to be investigated systematically with regard to the behavior of the optimization algorithms. More refined aspects surface when one examines the relationship between the discretizations of the control, state and adjoint variables. Presently, approaches are being developed for problems with control constraints that yield discrete controls without discretizing the control space a priori. This, as well as time-dependent problems, demands intensive research which would immediately have an impact on applications, as industrial problems mostly incorporate control and state constraints.

Hierarchical methods and multigrid methods for KKT-systems

KKT systems in optimization with PDEs without inequality constraints have the structure of a coupled system of boundary value problems, which was exploited already two decades ago to apply multigrid methods in this context. At present, there is renewed interest in analyzing the use of fast methods like multigrid for optimization with PDEs, in particular under the inclusion of constraints on the state and control. Recent numerical results indicate that it seems possible to solve optimization problems with PDEs using multigrid methods in a way such that the performance measure (1) is independent of the grid and is achieved with a constant of moderate size. The analysis and application of multigrid methods in the case of state constrained optimization problems with PDEs is an open field of research. The challenge is to combine the numerical efficiency of PDE solvers like multigrid methods with sophisticated optimization algorithms such as interior point methods, which are especially well suited for high dimensional problems and problems with nonlinear constraints. In addition, multigrid methods can also be used as preconditioners in iterative solvers for the arising linear systems.

Preconditioning techniques

Most program packages in optimization use direct solvers for the solution of linear systems. This approach is not feasible for KKT systems arising from the discretization of PDEs, where one has to use iterative solvers. This places special requirements on appropriate preconditioners, since not all blocks of the KKT matrix are ill conditioned. This structure, which differs from the saddle point problems usually considered in the literature, has to be incorporated when designing preconditioners in order to achieve high numerical efficiency. Only recently have the requirements on preconditioners for optimization problems with PDEs been addressed. Specific research aspects from optimization in this context are the non-regularity of certain blocks, potential indefiniteness and loss of symmetry.
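
The following Python sketch illustrates the basic idea of block preconditioning for a symmetric KKT saddle point system, using scipy's MINRES with a block-diagonal preconditioner; the matrices are small stand-ins (a 1-d Laplacian and a random constraint block), and in practice the exact inverses used below would be replaced by multigrid cycles or other approximate solvers.

    # Block-diagonally preconditioned MINRES for K = [[A, B^T], [B, 0]].
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, m = 200, 20
    rng = np.random.default_rng(0)
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # SPD "PDE" block
    B = sp.csc_matrix(rng.standard_normal((m, n)) / np.sqrt(n))              # constraint block
    K = sp.bmat([[A, B.T], [B, None]], format="csr")                         # indefinite KKT matrix
    rhs = np.ones(n + m)

    # Preconditioner P = diag(A, S) with the Schur complement S = B A^{-1} B^T;
    # both blocks are inverted exactly here only because the example is tiny.
    A_solve = spla.factorized(A)
    S_inv = np.linalg.inv((B @ spla.inv(A) @ B.T).toarray())

    def apply_precond(r):
        return np.concatenate([A_solve(r[:n]), S_inv @ r[n:]])

    P = spla.LinearOperator((n + m, n + m), matvec=apply_precond)
    x, info = spla.minres(K, rhs, M=P)
    print("minres info:", info, " residual:", np.linalg.norm(K @ x - rhs))

With a good approximation of the two diagonal blocks, the number of MINRES iterations stays essentially independent of the mesh size, which is the behavior rule (1) asks for.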

Algorithmic differentiation

Algorithmic, or automatic, differentiation is a chain-rule based technique for evaluating derivatives of functions defined by evaluation procedures in a high-level computer language like C or Fortran. While early applications concentrated on straight-line programs of small to moderate size, tool developers are now targeting large-scale codes. The so-called forward mode has been successfully applied to codes with hundreds of thousands of lines, for example FLUENT, but the resulting sensitivity information may not always be useful when the original evaluation code contains iterative solvers or other adaptive elements. These difficulties mount in the alternative reverse mode, a discrete analog of adjoints in function space. It yields gradients at essentially the same cost as the underlying scalar-valued function and is therefore very attractive for large-scale optimization. Here, nondifferentiabilities, which frequently arise in advanced PDE solvers, may cause problems, and techniques need to be developed for avoiding the storage of full evaluation trajectories, especially for discretizations of instationary PDEs.
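
A minimal Python sketch of the forward mode, using dual numbers that propagate one directional derivative through the chain rule, may help to fix ideas; it is of course a toy compared to the source transformation and operator overloading tools alluded to above.

    # Forward-mode algorithmic differentiation with dual numbers: every value
    # carries its derivative with respect to one chosen input direction.
    import math

    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.dot + other.dot)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.dot * other.val + self.val * other.dot)
        __rmul__ = __mul__

    def sin(x):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

    def f(x, y):                            # an arbitrary smooth evaluation procedure
        return x * y + 3.0 * sin(x)

    x0, y0 = 1.2, 0.7
    out = f(Dual(x0, 1.0), Dual(y0, 0.0))   # seed dx/dx = 1, dy/dx = 0
    print("f =", out.val, " df/dx =", out.dot, " exact df/dx =", y0 + 3.0 * math.cos(x0))

The reverse mode records the same elementary operations but propagates adjoints backwards through them, which is what makes full gradients available at a cost comparable to one function evaluation.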

Adaptivity and optimization

Optimization problems with PDE-constraints pose high requirements on their numerical solution. After discretization the number of free variables varies between 10^3 (stationary problems in one dimension) and 10^10 (nonstationary problems in three dimensions). In order to achieve sufficient accuracy while keeping the computational work low, one needs to use adaptive discretizations based on a posteriori information about the error. In contrast to the “forward” simulation, an optimal control cycle requires special strategies of mesh adaptation, in particular with respect to the optimization. Such a problem-oriented mesh adaptation is to be combined with likewise adaptively controlled algebraic algorithms.

Duality-based error estimation

The traditional error analysis of finite element methods uses duality arguments, the so-called Aubin-Nitsche trick, for deriving a priori error estimates. In the a posteriori error analysis the use of duality was introduced in [1]. This has caused a change of paradigm in the a posteriori error control of discretization methods. The traditional way of refining the mesh where the local error is large was replaced by the more natural concept of refinement where the global error is created. In this approach the information needed about the global error sensitivity is obtained by solving a dual problem, which results in computable a posteriori error estimates. Recently, this idea has been developed further into a feedback process for error estimation and optimal mesh adaptation in computing quantities of physical interest, such as point values, line integrals or moments; see [1] for a survey of this technique and various model applications. Meanwhile, this concept of “goal-oriented adaptivity” has been adopted by several other research groups in Sweden, Great Britain and the USA. The applications considered range from simple elliptic and parabolic model problems over problems from structural mechanics (elasto-plastic deformation) and fluid mechanics (drag coefficient and heat flux) to reactive flows and radiative transport.
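
In simplified form, the resulting dual weighted residual estimate for the error in a goal functional J reads

J(u) - J(u_h) \approx \rho(u_h)(z - i_h z),

where u_h is the Galerkin approximation, \rho(u_h)(\cdot) its residual, z the solution of the associated dual (adjoint) problem and i_h z an interpolation of z into the discrete space; the cell-wise contributions of the right-hand side are the quantities used to steer the mesh refinement.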

Model reduction by adaptivity

The new concept of duality-based adaptivity may be considered in the context of model reduction. It offers great potential for solving even complex optimization problems with moderate effort. First steps in this direction are described in [1] and [1]. Here, the starting point is the first-order necessary condition, given by the KKT system. This system of strongly coupled PDEs is approximated by a Galerkin finite element method. The error of this discretization is estimated by a posteriori estimates in which primal and dual residuals and sensitivities are multiplied. This approach has been carried further to problems in fluid mechanics; see [1] for the relevant literature. A rather new development is the use of adaptive discretization in the context of parameter estimation or, more generally, in model adaptivity. For such problems the application of residual-based error estimates is reported in [1]. The step towards difficult, practice-relevant problems now has to be taken.

Though these first applications demonstrate the high potential of adaptivity in optimal control with PDEs, there are still many theoretical and practical questions to be answered. The efficient and reliable computation of dynamic dual solutions is a crucial problem in applying the method to nonstationary control problems. Its application to parameter estimation requires the computation of error sensitivities by solving an additional ’outer’ dual problem. The treatment of control and state constraints within an adaptive solution process is also a still unsolved problem. Finally, another open problem is the control of the error of inner iterations such as Newton, GMRES or multigrid methods, balanced against the discretization error.

Control and state constraints

Interior point and active set methods

Optimization problems with PDEs usually exhibit constraints on the controls and states which stem from the technical or economic application underlying the problem. These constraints are usually pointwise constraints in function spaces. The numerical solution of these problems requires the development of new optimization methods which handle the constraints in function spaces and lead to efficient algorithms for the discretized problems.

Interior point methods have proven to be an excellent tool for solving large-scale finite dimensional optimization problems. However, there are only very few rigorous analyses in function space which support this observation. Certain smoothing properties of PDEs can help to alleviate the resulting difficulties and need to be investigated further. In particular, a deep understanding of interior point methods in function space would form the basis for designing mesh-independent interior point algorithms for the discretized optimization problem. Moreover, the presence of control and state constraints leads to a more complicated structure of the linear systems that have to be solved in the optimization loop. It is essential and highly nontrivial to develop fast solvers for these extended systems by using existing fast solvers, e.g. multigrid solvers, for the underlying PDE.

In related approaches the optimality system is reformulated using operator equations which are possibly non-smooth. Extensions of Newton’s method are then applied to these systems, like semi-smooth Newton methods and primal-dual active-set strategies, affine scaling and smoothing methods. These algorithms need to be explored, analyzed and tested in the framework of state-constrained PDE optimization. In this context, mesh independence, local convergence rate estimates and numerical tests on application-driven models are also important research issues.
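
The following Python sketch shows a primal-dual active-set iteration for the bound-constrained quadratic model problem min 0.5*u'Au - f'u subject to u <= b, which mimics a control constrained problem after discretization; the matrix A and the data are illustrative assumptions only.

    # Primal-dual active-set (semi-smooth Newton) iteration for
    #   min 0.5*u'Au - f'u   subject to   u <= b,
    # with complementarity  lam >= 0,  u <= b,  lam*(b - u) = 0.
    import numpy as np

    def pdas(A, f, b, c=1.0, max_iter=50):
        n = len(f)
        u = np.minimum(np.linalg.solve(A, f), b)    # feasible initial guess
        lam = np.zeros(n)
        active_prev = None
        for k in range(1, max_iter + 1):
            active = lam + c * (u - b) > 0          # predicted active set
            inactive = ~active
            u_new, lam_new = np.empty(n), np.zeros(n)
            u_new[active] = b[active]               # enforce the bound on the active set
            if inactive.any():                      # reduced system on the inactive set
                rhs = f[inactive] - A[np.ix_(inactive, active)] @ b[active]
                u_new[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)], rhs)
            lam_new[active] = f[active] - (A @ u_new)[active]
            u, lam = u_new, lam_new
            if active_prev is not None and np.array_equal(active, active_prev):
                return u, lam, k                    # active set settled: solution found
            active_prev = active
        return u, lam, max_iter

    n = 50
    A = (np.diag(2.1 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))            # SPD model operator
    u, lam, its = pdas(A, np.ones(n), 0.5 * np.ones(n))
    print("active-set iterations:", its, " max(u) =", u.max(), " min(lam) =", lam.min())

The iteration typically terminates after a small, mesh-independent number of active-set updates, which is the behavior one would like to carry over to the function space setting.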

There are already promising results in the case of control constrained problems, which bring these problems, with regard to their computational complexity, into the range of unconstrained problems. The next important research issues for control constrained problems are mesh-independence results after discretization, the development of fast solvers and inexactness criteria for the arising linear systems, and the integration of mesh adaptivity.

State constrained problems are far more complex in their theoretical and numerical analysis, and recent results were only obtained for state constraints of mixed (bottleneck) type. For general state constraints the main difficulties are caused by the fact that the adjoint variable often exists only in the distributional sense, and therefore special numerical approximation techniques have to be employed. In particular, smoothing effects provided by interior point methods or regularization techniques may help to develop efficient optimization methods. The latter approach has recently led to encouraging results in the solution of contact problems and is also promising for handling state constraints. For state constrained problems the short term goals are a better theoretical understanding of the problem and the corresponding development and analysis of appropriate optimization algorithms. In the longer term, the development of mesh-independence results, the design of fast solvers for the arising linear systems and the integration of mesh adaptivity are planned.
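
One such regularization approach, a Moreau-Yosida type penalization stated here only schematically, replaces the pointwise state constraint y \le \psi by a penalty on its violation,

\min_{y,u} \; J(y,u) + \frac{\gamma}{2} \| \max(0, y - \psi) \|_{L^2}^2 \quad \textrm{subject to} \quad e(y,u) = 0,

which avoids the low regularity of the multiplier, makes the regularized problems accessible to semi-smooth Newton methods, and recovers the original constraint in the limit \gamma \to \infty.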

Most of the issues in the preceding paragraphs deal with local convergence results. However, if a good approximation of the optimal solution is not known or the nonlinearity in the Newton system is substantial, globalization techniques need to be incorporated. A very recent development are filter methods, which have been very successful in finite dimensional optimization but have yet to be explored in the context of PDE-constrained optimization.

Coupled and multi-level systems with constraints

In a large number of applications one also has to handle constraints in the couplings between subsystems, e.g. in optimization problems in mechatronics, fluid-structure-acoustic interactions and multi-phase models. In order to handle such (coarse-grain-level) coupled systems, so-called multidisciplinary design and optimization (MDO) strategies have been developed, which couple simulation solvers from various disciplines with an outer optimization loop. However, structure respecting algorithms are required to increase the efficiency of this approach. In this direction Domain Decomposition Methods (DDM) for PDE-constrained optimization have to be further developed. Very little is known about DDM with respect to constraints on design and state variables, in particular across interfaces. On the small scale (small-grain level) the models often exhibit oscillating coefficients, varying domains and objectives, and constraints on design and state variables, where homogenization has proved to be a very effective model reduction. While the general concept of homogenization for both PDEs and scalar (energy) functionals is well established in theory and for numerical methods, the homogenization of optimal control and design problems with PDE-constraints requires further research, in particular with respect to numerical procedures. In both DDM for coupled systems and homogenization, the interaction between discretization, homogenization and optimization is poorly understood so far.

References: