Program
| Time (Eastern Time) | Presenter | Title |
|---|---|---|
| 11:30 am - 12:00 pm | Registration | |
| 12:00 pm - 12:10 pm | Opening Remarks | |
| 12:10 pm - 1:00 pm | Plenary Talk: Stefan M. Wild (Argonne National Laboratory) | Optimization and Learning with Zeroth-Order Stochastic Oracles |
| 1:00 pm - 2:10 pm | Working Lunch and Poster Session** | |
| 2:10 pm - 3:25 pm | Program Officers Presentations and Panel — Air Force Office of Scientific Research: Warren P. Adams, Fariba Fahroo; National Science Foundation: Yuliya Gorb, Leland M. Jameson, Stacey Levine; Office of Naval Research: Reza Malek-Madani | NSF Comp Math Introductory Slides |
| 3:25 pm - 3:55 pm | Networking Break and Poster Session** | |
| 3:55 pm - 4:45 pm | Plenary Talk: Howard Elman (University of Maryland at College Park) | Reduced-Order Models for Parametrized PDE Models with Constraints |
| 4:45 pm - 5:00 pm | Closing Remarks | |
**Poster session is open to students and early career researchers.
Submission deadline: October 1, 2022
Plenary Speakers
Prof. Dr. Howard Elman is a Professor in the Department of Computer Science and the Institute for Advanced Computer Studies (UMIACS), Director of the Applied Mathematics and Statistics, and Scientific Computing (AMSC) Program, and an Affiliate Professor in the Department of Mathematics at the University of Maryland at College Park. He received his Ph.D. in 1982 from Yale University and his B.A. in 1975 from Columbia University. He is a SIAM Fellow, the Vice President for Publications of SIAM, and an Associate Editor for the journal Mathematics of Computation. His research interests are in numerical analysis, numerical linear algebra, computational fluid dynamics, and parallel computation.
Dr. Stefan M. Wild is a Senior Computational Mathematician and Deputy Division Director of the Mathematics and Computer Science Division at Argonne National Laboratory, and a Senior Fellow in the Northwestern-Argonne Institute for Science and Engineering at Northwestern University. He joined Argonne as a Director's Postdoctoral Fellow in September 2008. Prior to this, he obtained his Ph.D. in operations research from Cornell University and his M.S. and B.S. in applied mathematics from the University of Colorado. His primary research focus is developing model-based algorithms and software for challenging numerical optimization problems. He applies these techniques to data analysis, machine learning, and the solution of nonlinear inverse problems. At Argonne he leads a number of multidisciplinary computational science projects and shapes strategy for applied mathematics, numerical software, and statistics.
Abstracts
Reduced-Order Models for Parametrized PDE Models with Constraints
Prof. Dr. Howard Elman
Various phenomena simulated using partial differential equations (PDEs) give rise to constrained systems of equations. These include models of optimal control with constraints given by elliptic PDEs, as well as fundamental models of fluid dynamics such as the Stokes equations, where the constraints correspond to the incompressibility (divergence-free) condition. If these models also depend on random (parametrized) input data, then it is important to develop reduced-order models (ROMs) to reduce the computational costs associated with multiple solutions of the large-scale algebraic systems that arise from discretization. Several approaches have been developed to construct ROMs for constrained problems. These approaches supplement greedy search strategies with methods that augment the spaces obtained from searching in order to enforce inf-sup stability, which otherwise does not hold in the reduced spaces. In this work, we present two sets of results. The first is a comparison of the effectiveness of two such methods for augmentation, known as aggregation methods and supremizing methods. The second is an introduction of a new approach that avoids the difficulties caused by lack of inf-sup stability by forcing the reduced model to have a simpler structure, not of saddle-point form.
Joint work with:
Kayla D. Davie, Applied Mathematics Program, University of Maryland at College Park.
Optimization and Learning with Zeroth-Order Stochastic Oracles
Dr. Stefan M. Wild
An especially challenging regime in data-driven science and engineering is when one can only query a noisy oracle. From learning controls to designing systems to calibrating models, such problems arise in many domains and are underserved by approaches that presume complete availability of first-order information. We highlight optimization methods for such problems, including methods that employ randomization to increase scalability and methods that exploit other structure outside of the oracle.
Joint work with:
Raghu Bollapragada (University of Texas)
Tyler Chang (Argonne National Laboratory)
Kwassi Joseph Dzahini (Argonne National Laboratory)
Cem Karamanli (University of Texas)
Xiaoqian Liu (North Carolina State University)
Matt Menickelly (Argonne National Laboratory)
Organizing Committee
Harbir Antil (George Mason University)
Ratna Khatri (U.S. Naval Research Laboratory)
Andrey Rukhin