*Lecture: Autoregressive Time Series Analysis for Discovering Exoplanets*

**January 18, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Eric Feigelson, Penn State University

### Abstract

A major impediment to detecting the periodic signals of planets orbiting other stars is aperiodic variations intrinsic to the star, typically from magnetic activity similar to sunspots. The most common statistical procedures to remove stellar variations are nonparametric such as wavelet decomposition or Gaussian Processes regression. However, we have found that, providing the time series is (approximately) evenly spaced, parametric autoregressive models are very effective. The rich ARFIMA family is needed to treat short-memory (ARMA) and long-memory (F) behaviors along with nonstationarity (I). This talk presents progress in the Kepler AutoRegressive Planet Search (KARPS) project where ARFIMA modeling is followed by a matched filter periodogram as a sensitive method for detecting periodic planetary transits in NASA Kepler data. Further development of the method (e.g. CARFIMA) is needed for application to very irregularly spaced time series.
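As a rough illustration of the two-stage idea (not the KARPS project's actual code), the sketch below whitens a toy light curve with a short-memory AR(2) model fit by least squares — standing in for full ARFIMA maximum likelihood — and then runs a simple matched-filter period search on the residuals. The data, model order, and search grid are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# AR(1) "stellar activity" noise plus periodic box-shaped transit dips
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(scale=0.5)
period, depth, width = 100, 5.0, 3
transit = np.where(np.arange(n) % period < width, -depth, 0.0)
flux = noise + transit

# Step 1: whiten the autocorrelated stellar component with an AR(2) fit
X = np.column_stack([flux[1:-1], flux[:-2]])
y = flux[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
times = np.arange(2, n)

# Step 2: matched-filter search -- fold the residuals at trial periods and
# pick the period whose in-transit phases are most negative
trial_periods = np.arange(50, 151)
scores = [resid[times % P < width].mean() for P in trial_periods]
best_period = trial_periods[np.argmin(scores)]
```

With the autocorrelated component removed, the transit dips stand out sharply in the folded residuals; on raw flux the same search is much noisier.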

### References

No references provided at this time


*Lecture: Convex Optimization for Convex Regression*

**January 25, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Dirk Lorenz, Technische Universität Braunschweig, Germany

### Abstract

In this talk we consider the problem of finding a regression function for given data under the constraint that the function is convex. This problem can be formulated as an optimization problem that is itself convex. The simplest version leads to a convex quadratic program with a huge number of linear constraints. We use convex duality and first-order methods for saddle-point problems to solve the resulting problem, obtaining methods that scale well to large problem dimensions.
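The talk's contribution is scaling via duality and first-order saddle-point methods; as a baseline, the smallest one-dimensional instance can be set up directly. The sketch below (toy data, off-the-shelf solver rather than the methods of the talk) encodes convexity of the fitted values on an equispaced grid as nonnegative second differences and hands the quadratic program to SciPy.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(1)
n = 30
x = np.linspace(-2, 2, n)
y = x**2 + rng.normal(scale=0.3, size=n)   # noisy samples of a convex function

# Convexity of fitted values on an equispaced grid is the linear constraint
# theta[i] - 2*theta[i+1] + theta[i+2] >= 0 for all interior points
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0

obj = lambda theta: 0.5 * np.sum((theta - y) ** 2)
grad = lambda theta: theta - y
res = minimize(obj, y.copy(), jac=grad, method="trust-constr",
               constraints=[LinearConstraint(D, 0.0, np.inf)])
theta = res.x   # least-squares projection of y onto the convex cone
```

Note the constraint count already grows with n in one dimension; in higher dimensions the pairwise subgradient constraints number O(n²), which is precisely what motivates the dual first-order approach.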

### References

No references provided at this time

*Lecture: Genetic Covariance Functions and Heritability in Neuroimaging with an Application to Cortical Thickness Analysis*

**February 1, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Benjamin Risk

### Abstract

Twin studies can be used to disentangle the environmental and genetic contributions to brain structure and function. We develop a method for estimating the non-stationary covariance function of the genetic component in a functional structural equation model of heritability. Positive semi-definite functions are estimated using kernel regression with geodesic distance, and they capture both short- and long-range dependence. Simulation studies demonstrate large improvements over existing approaches, both with respect to covariance estimation and heritability estimates. Cortical thickness has been broadly associated with measures of intelligence, and cortical thinning has been associated with dementia. We provide an atlas of the genetic covariance patterns in cortical thickness, which can be used to guide future molecular genetic research.
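The core estimation idea — kernel smoothing of an empirical covariance in a distance metric, followed by a positive semi-definiteness safeguard — can be sketched on toy data. Everything below is invented for illustration: geodesic distance on the cortical surface is replaced by distance on a line, and the genetic/environmental decomposition is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy functional data: 40 subjects observed at 50 locations on a line
m, n_sub = 50, 40
s = np.linspace(0.0, 1.0, m)
true_cov = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.2)
L = np.linalg.cholesky(true_cov + 1e-8 * np.eye(m))
data = rng.normal(size=(n_sub, m)) @ L.T

# Noisy empirical covariance, then kernel smoothing in the distance metric
emp = np.cov(data, rowvar=False)
h = 0.08
K = np.exp(-0.5 * (np.abs(s[:, None] - s[None, :]) / h) ** 2)
W = K / K.sum(axis=1, keepdims=True)   # row-normalized kernel weights
smooth = W @ emp @ W.T                 # congruence keeps this symmetric PSD

# General safeguard: project onto the PSD cone by clipping eigenvalues
vals, vecs = np.linalg.eigh((smooth + smooth.T) / 2)
cov_psd = (vecs * np.clip(vals, 0.0, None)) @ vecs.T
```

Smoothing the empirical covariance by the congruence W·Σ·Wᵀ preserves positive semi-definiteness by construction, which is why the eigenvalue clipping here is only a numerical safeguard.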

### References

No references provided at this time

*Lecture: Composite Empirical Likelihood: A Derivation of Multiple Nonparametric Likelihood Objects*

**February 15, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Adam Jaeger

### Abstract

The likelihood function plays a pivotal role in statistical inference because it is easy to work with and the resultant estimators are known to have good properties. However, these results hinge on correct specification of the likelihood as the true data-generating mechanism. Many modern problems involve extremely complicated distribution functions, which may be difficult — if not impossible — to express explicitly. This is a serious barrier to the likelihood approach, which requires the exact specification of a model. We propose a new approach that combines multiple nonparametric likelihood-type objects to build a distribution-free approximation of the true likelihood function. We build on two alternative likelihood approaches, empirical and composite likelihood, taking advantage of the strengths of each. Specifically, from empirical likelihood we borrow the ability to avoid a parametric specification, and from composite likelihood we gain a decrease in computational load. In this talk, I define the general form of the composite empirical likelihood, derive some of the asymptotic properties of this new class, and explore two applications of this method.
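One of the ingredients being combined — the standard empirical likelihood for a mean, in Owen's formulation — can be computed in a few lines. This is a sketch of that single nonparametric likelihood object, not of the composite construction presented in the talk; the data and the Newton-free root-bracketing are choices made here for simplicity.

```python
import numpy as np
from scipy.optimize import brentq

def el_stat(x, mu):
    """-2 log empirical likelihood ratio for the mean of x, evaluated at mu."""
    z = np.asarray(x) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                     # mu outside the convex hull of the data
    # Profile out the weights: solve sum_i z_i / (1 + lam*z_i) = 0 for the
    # Lagrange multiplier lam; the optimal weights are w_i = 1/(n*(1+lam*z_i))
    g = lambda lam: np.sum(z / (1.0 + lam * z))
    lam = brentq(g, -1.0 / z.max() + 1e-10, -1.0 / z.min() - 1e-10)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(7)
x = rng.normal(loc=1.0, size=80)
stat_at_mean = el_stat(x, x.mean())       # ~0: the sample mean maximizes EL
stat_off = el_stat(x, x.mean() + 0.5)     # larger; ~chi-squared(1) under H0
```

No parametric family is specified anywhere — the Wilks-type chi-squared calibration of the statistic is exactly the property the composite construction seeks to retain at lower computational cost.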

### References

No references provided at this time

*Lecture: Reduced Order Models for One-Step Multispectral Quantitative Photoacoustic Tomography*

**February 22, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Sarah Vallelian

### Abstract

Photoacoustic tomography, a high contrast, high resolution imaging modality, is a well-posed nonlinear inverse coefficient problem for a coupled wave equation and diffusion equation pair. When data are available at multiple optical wavelengths, the stability and accuracy of the reconstruction increases. Standard inversion methods such as regularized quasi-Newton optimization involve a significant computational cost which increases linearly with the number of optical wavelengths. To accelerate the inversion, we use a POD-based reduced order model for the wavelength dependence. We demonstrate the computational gains on a synthetic problem motivated by neuroimaging.
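The POD idea — compress the family of solutions across the parameter (here, wavelength) sweep into a small basis, then solve cheap projected systems — can be shown on a generic parametrized linear model. The matrix, parameter range, and basis size below are all invented for the illustration; the actual application involves a coupled wave/diffusion pair.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy parametrized forward model: solve (A0 + w*I) u = b on a coarse grid
# for many "wavelengths" w, then compress the solution set with POD
n, n_train = 200, 25
main = 2.0 + rng.random(n)
A0 = np.diag(main) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.ones(n)
solve = lambda w: np.linalg.solve(A0 + w * np.eye(n), b)

# Snapshot matrix; POD basis = leading left singular vectors
snapshots = np.column_stack([solve(w) for w in np.linspace(0.5, 2.0, n_train)])
U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :10]                            # 10-dimensional reduced basis

# Reduced solve at a new wavelength: Galerkin projection V^T A V (10x10
# system instead of 200x200)
w_new = 1.234
A = A0 + w_new * np.eye(n)
u_rom = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)
u_full = solve(w_new)
rel_err = np.linalg.norm(u_rom - u_full) / np.linalg.norm(u_full)
```

The cost of each reduced solve is independent of the grid size, which is what breaks the linear scaling of the inversion cost in the number of optical wavelengths.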

### References

No references provided at this time


*Lecture: A TV-l2-like Model Based on the N-th Order Riesz Dyadic Wavelet*

**March 1, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Duy Thai

### Abstract

There are two parts in my talk, summarizing my current projects.

**Part I:** A TV-l2-like model based on the N-th order Riesz dyadic wavelet, which addresses the question: "which domain makes a signal sparse?" The proposed method is a hybrid model that benefits from two well-known techniques, harmonic analysis and regularization, in terms of removing fine-scale structure while preserving the edges and contrast of a smoothed signal/image.

**Part II:** An introduction to stochastic dynamical systems and dimension reduction, which addresses the problem of how to find a cost function with a (topologically conjugate) projection of a high-dimensional dynamical system onto a low-dimensional space.

### References

No references provided at this time

*Lecture: Clustering Sampling Smoothers for Non-Gaussian Four Dimensional Data Assimilation*

**March 8, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Ahmed Attia

### Abstract

In this work, we develop a set of fully non-Gaussian sampling algorithms for four-dimensional data assimilation (4D-DA). Unlike the well-known ensemble Kalman smoother (EnKS), which is optimal only in the linear Gaussian case, the proposed methodologies accommodate non-Gaussian errors and nonlinear forward operators. Moreover, unlike the four-dimensional variational method (4D-Var), which provides only the maximum a posteriori (MAP) estimate of the posterior distribution, the presented schemes inherently provide an estimate of the posterior uncertainty. The Gaussian-prior assumption is widely used in the context of 4D-DA, mainly because it is computationally tractable; however, it is restrictive and can be inaccurate. We propose a set of MCMC-based algorithms to sample from the non-Gaussian posterior, in which the Gaussian-prior assumption is relaxed. Specifically, a clustering step is introduced after the forecast phase of the proposed smoothers, and the prior density function is estimated by fitting a Gaussian mixture model (GMM) to the prior ensemble. Using the data likelihood function, the posterior density is then formulated as a mixture density and is sampled using a Hamiltonian Monte Carlo (HMC) approach (or any other scheme capable of sampling multimodal densities in high-dimensional subspaces). Because of the prohibitive computational cost of this approach, we also propose efficient algorithms that incorporate reduced-order models: specifically, we develop computationally efficient versions of the proposed sampling smoothers based on reduced-order approximations of the underlying model dynamics.
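The "mixture posterior, sampled by MCMC" step can be illustrated in one dimension. Everything here is a stand-in: the GMM prior is hand-specified rather than fitted to a forecast ensemble by clustering, a single scalar observation replaces the 4D observation window, and random-walk Metropolis replaces the HMC sampler used in the work.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hand-specified 1-D GMM prior, standing in for the clustering/GMM fit
# to the forecast ensemble
weights = np.array([0.6, 0.4])
means = np.array([-2.0, 3.0])
sigmas = np.array([1.0, 0.7])

def log_prior(x):
    comp = weights * np.exp(-0.5 * ((x - means) / sigmas) ** 2) / sigmas
    return np.log(comp.sum() + 1e-300)

obs, obs_sigma = 0.5, 2.0                  # one noisy observation
log_post = lambda x: log_prior(x) + (-0.5 * ((x - obs) / obs_sigma) ** 2)

# Random-walk Metropolis on the (multimodal) mixture posterior
x, chain = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(scale=2.5)
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)
samples = np.array(chain[5000:])           # drop burn-in
```

Both posterior modes receive mass, which a single-Gaussian prior would misrepresent; in high dimension the random-walk proposal mixes poorly across modes, motivating the HMC choice in the paper.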

### References

- Attia, Ahmed, Răzvan Ştefănescu, and Adrian Sandu. *"The reduced-order hybrid Monte Carlo sampling smoother."* International Journal for Numerical Methods in Fluids 83.1 (2017): 28–51.
- Attia, Ahmed, Vishwas Rao, and Adrian Sandu. *"A Hybrid Monte-Carlo sampling smoother for four-dimensional data assimilation."*
- Attia, Ahmed, Azam Moosavi, and Adrian Sandu. *"Cluster Sampling Filters for Non-Gaussian Data Assimilation."*

*Lecture: A Mixture of Gaussian and Student's t Errors for Robust and Accurate Inference*

**March 15, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Hyungsuk Tak

### Abstract

A Gaussian error assumption, i.e., an assumption that the data are observed up to Gaussian noise, can bias parameter estimation in the presence of outliers. A heavy-tailed error assumption based on Student's t-distribution helps reduce the bias, but it may be less efficient if the heavy-tail assumption is applied uniformly to data that are mostly normally observed. We propose a mixture error assumption that selectively converts Gaussian errors into Student's *t* errors according to latent outlier indicators, leveraging the best of the Gaussian and Student's *t* errors; parameter estimation becomes not only robust but also accurate. Using several examples, we demonstrate the potential of the proposed mixture error assumption to estimate parameters accurately in the presence of outliers.
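A minimal EM-flavored sketch of the latent-indicator idea, for a location parameter only: each observation gets a posterior probability of carrying a Student's t error, and t-flagged points are down-weighted via the usual normal-scale-mixture weights. The data, the fixed mixing probability, and the fixed scales are all invented here; the talk's model treats these within a full inferential framework.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Data: N(10, 1) observations contaminated by a few gross outliers
x = np.concatenate([rng.normal(10, 1, 95), [50, 60, 70, 80, 90]])

# EM-style iteration: z_i = P(obs i has a Student's t error), mixing
# Gaussian and t(nu=4) errors around a common location mu
nu, p_t = 4.0, 0.1
mu = np.median(x)
for _ in range(50):
    f_n = stats.norm.pdf(x, mu, 1.0)
    f_t = stats.t.pdf(x - mu, df=nu)
    z = p_t * f_t / (p_t * f_t + (1 - p_t) * f_n)
    # location update: t-flagged points get the classic t-EM weight
    # (nu+1)/(nu + r^2), so gross outliers barely move the estimate
    w = (1 - z) + z * (nu + 1) / (nu + (x - mu) ** 2)
    mu = np.sum(w * x) / np.sum(w)
```

The plain sample mean is dragged several units toward the outliers, while the mixture-weighted estimate stays near the true location without sacrificing the clean observations' efficiency.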

### References

No references provided at this time

*Lecture: Optimization over the Sums-of-Squares Cone Using Polynomial Interpolants*

**March 22, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Sercan Yildiz

### Abstract

In this follow-up talk, we discuss our recent progress on solution methods for sums-of-squares programs. We present the details of an algorithm for optimization over the sums-of-squares cone and describe the challenges involved in demonstrating its polynomial time complexity in the bit model of computation. The talk is based on joint work with David Papp.

### References

No references provided at this time

**NOTE: This seminar has been postponed until a later date – please check this page for updates on when this lecture will be presented.**

*Lecture: Distributionally-Robust Optimization for Energy and Environment*

**March 29, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Peter Diao

### Abstract

Stochastic programming optimizes an objective function that depends on a distribution of uncertainty, D. Distributionally-robust optimization tackles the challenging problem of potential misspecification of D. This talk will describe ongoing work in the Energy and Environment working group related to this topic. Your feedback and suggestions will be very welcome!
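A tiny worked example of the distinction (invented newsvendor data, not from the talk): instead of minimizing expected cost under a single nominal demand distribution, minimize the worst-case expected cost over a small ambiguity set of plausible distributions.

```python
import numpy as np

# Toy DRO: choose an order quantity q minimizing the worst-case expected
# newsvendor cost over a small ambiguity set of demand distributions
demand = np.arange(0, 11)                     # possible demand values
nominal = np.exp(-0.5 * ((demand - 5) / 2.0) ** 2)
nominal /= nominal.sum()                      # nominal pmf, roughly N(5, 2)

# Ambiguity set: the nominal pmf plus shifted variants (np.roll wraps a
# tiny amount of tail mass around; negligible here)
ambiguity = [np.roll(nominal, s) for s in (-1, 0, 1)]

hold, short = 1.0, 4.0                        # over-/under-stocking costs
def cost(q, d):
    return hold * np.maximum(q - d, 0) + short * np.maximum(d - q, 0)

# min over decisions q of the max over distributions in the ambiguity set
worst = [max(p @ cost(q, demand) for p in ambiguity) for q in demand]
q_dro = demand[np.argmin(worst)]
```

Stochastic programming would use only the middle element of the ambiguity set; the min-max formulation hedges against D having been misspecified by a shift in either direction.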

### References

No references provided at this time

*Lecture: No lecture title has been determined at this time*

**April 12, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** David Jones

### Abstract

No abstract available at this time for this lecture

### References

No references provided at this time

*Lecture: No lecture title has been determined at this time*

**April 19, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** David Stenning

### Abstract

No abstract available at this time for this lecture

### References

No references provided at this time

*Special Guest Lecture: The Rise of Multi-precision Computations*

**April 26, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Nicholas J. Higham, School of Mathematics, University of Manchester (UK)

### Abstract

Multi-precision arithmetic means floating-point arithmetic supporting multiple, possibly arbitrary, precisions. In recent years there has been a growing demand for and use of multi-precision arithmetic in order to deliver a result of the required accuracy at minimal cost. For a rapidly growing body of applications, double-precision arithmetic is insufficient to provide results of the required accuracy. These applications include supernova simulations, electromagnetic scattering theory, and computational number theory. On the other hand, it has been argued that for climate modelling and deep learning, half precision (about four significant decimal digits) may be sufficient. We discuss a number of topics involving multi-precision arithmetic, including:

(i) How to derive linear algebra algorithms that will run in any precision, as opposed to being optimized (as some key algorithms are) for double precision.

(ii) The need for, availability of, and ways to exploit, higher precision arithmetic (e.g., quadruple precision arithmetic).

(iii) What accuracy rounding error bounds can guarantee for large problems solved in low precision.

(iv) How a new form of preconditioned iterative refinement can be used to solve very ill conditioned sparse linear systems to high accuracy.
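Classical mixed-precision iterative refinement — the ancestor of the preconditioned variant in topic (iv) — is easy to demonstrate with float32 solves and float64 residuals. The matrix and condition number below are invented for the illustration, and for simplicity the float32 system is refactored on each pass (in practice the LU factors are computed once and reused).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
# Build a test matrix with condition number about 1e4
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = (U * np.logspace(0, -4, n)) @ V.T
x_true = rng.normal(size=n)
b = A @ x_true

# "Low precision" solve: done entirely in float32
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
err_single = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

# Iterative refinement: residual in float64, correction via the cheap
# float32 solve, repeated until the double-precision residual is absorbed
for _ in range(10):
    r = b - A @ x                          # residual in working (double) precision
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
err_refined = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The expensive factorization stays in the cheap low precision, while accuracy is recovered essentially for free from the high-precision residuals — provided the condition number times the low-precision unit roundoff stays safely below one.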

### References

No references provided at this time