*Lecture: Autoregressive Time Series Analysis for Discovering Exoplanets*

**January 18, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Eric Feigelson, Penn State University

### Abstract

A major impediment to detecting the periodic signals of planets orbiting other stars is aperiodic variation intrinsic to the star, typically from magnetic activity similar to sunspots. The most common statistical procedures for removing stellar variations are nonparametric, such as wavelet decomposition or Gaussian process regression. However, we have found that, provided the time series is (approximately) evenly spaced, parametric autoregressive models are very effective. The rich ARFIMA family is needed to treat short-memory (ARMA) and long-memory (F) behaviors along with nonstationarity (I). This talk presents progress in the Kepler AutoRegressive Planet Search (KARPS) project, where ARFIMA modeling is followed by a matched-filter periodogram as a sensitive method for detecting periodic planetary transits in NASA Kepler data. Further development of the method (e.g., CARFIMA) is needed for application to very irregularly spaced time series.
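A minimal NumPy sketch of the autoregressive idea, using a toy AR(2) model fit by least squares on a simulated series (illustrative only, not the ARFIMA/KARPS pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy "stellar activity" series: a stationary AR(2) process.
n, phi1, phi2 = 2000, 0.75, -0.2
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal()

# Fit AR(2) coefficients by least squares on lagged values.
X = np.column_stack([x[1:-1], x[:-2]])   # lags 1 and 2
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals: the "whitened" series in which a periodic transit
# signal would then be searched for with a matched filter.
resid = y - X @ coef
```

In the approach described above, the residual series, rather than the raw light curve, is what the matched-filter periodogram is applied to.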

### References

No references provided at this time

*Lecture: Convex Optimization for Convex Regression*

**January 25, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Dirk Lorenz, Technische Universität Braunschweig, Germany

### Abstract

In this talk we consider the problem of finding a regression function for given data under the constraint that the function is convex. This can be formulated as an optimization problem that is itself convex. The simplest version leads to a convex quadratic program with a huge number of linear constraints. We use convex duality and first-order methods for saddle-point problems to solve the resulting problem, obtaining methods that scale well to large problem dimensions.
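A small self-contained sketch of the dual first-order approach for one-dimensional convex regression on an evenly spaced grid (illustrative only; the talk's methods target much larger problem dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a convex function on an equispaced grid.
n = 20
t = np.linspace(-1.0, 1.0, n)
y = t**2 + 0.05 * rng.normal(size=n)

# Second-difference operator D: convexity of the fit f means D f >= 0.
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# Dual of  min_f 1/2 ||f - y||^2  s.t.  D f >= 0:
#   min_{lam >= 0} 1/2 lam' D D' lam + lam' D y,  with  f = y + D' lam.
# Solved here by projected gradient descent, a simple first-order method.
lam = np.zeros(n - 2)
step = 1.0 / np.linalg.norm(D @ D.T, 2)
Dy = D @ y
for _ in range(50000):
    lam = np.maximum(0.0, lam - step * (D @ (D.T @ lam) + Dy))

f = y + D.T @ lam   # the convex regression fit
```

The dual variable has one entry per linear constraint, which is why duality pays off when the number of constraints is huge: each iteration only needs matrix–vector products with the constraint operator.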

### References

No references provided at this time

*Lecture: Genetic Covariance Functions and Heritability in Neuroimaging with an Application to Cortical Thickness Analysis*

**February 1, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Benjamin Risk

### Abstract

Twin studies can be used to disentangle the environmental and genetic contributions to brain structure and function. We develop a method for estimating the non-stationary covariance function of the genetic component in a functional structural equation model of heritability. Positive semi-definite functions are estimated using kernel regression with geodesic distance, and they capture both short- and long-range dependence. Simulation studies demonstrate large improvements over existing approaches, with respect to both covariance estimation and heritability estimates. Cortical thickness has been broadly associated with measures of intelligence, and cortical thinning has been associated with dementia. We provide an atlas of the genetic covariance patterns in cortical thickness, which can be used to guide future molecular genetic research.
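As an illustration of one ingredient, here is kernel smoothing of a covariance estimate written in a form that preserves positive semi-definiteness. This is a toy one-dimensional sketch using Euclidean grid distance in place of the geodesic distance on the cortical surface, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy functional data: noisy curves on a grid (standing in for a trait
# measured along the cortical surface; the grid distance plays the role
# of geodesic distance).
n_subj, n_grid = 80, 50
t = np.linspace(0.0, 1.0, n_grid)
scores = rng.normal(size=(n_subj, 3))
basis = np.column_stack([np.ones(n_grid), np.sin(np.pi * t), np.cos(np.pi * t)])
curves = scores @ basis.T + 0.3 * rng.normal(size=(n_subj, n_grid))

# Raw (noisy) sample covariance on the grid.
C_raw = np.cov(curves, rowvar=False)

# Kernel smoothing with a row-normalized Gaussian smoother S.  Writing
# the estimate as S C S' keeps it positive semi-definite.
h = 0.08
S = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
S /= S.sum(axis=1, keepdims=True)
C_smooth = S @ C_raw @ S.T
```

The sandwich form `S @ C_raw @ S.T` is what guarantees the smoothed estimate remains a valid covariance, since congruence transforms preserve positive semi-definiteness.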

### References

No references provided at this time

*Lecture: Composite Empirical Likelihood: A Derivation of Multiple Nonparametric Likelihood Objects*

**February 15, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Adam Jaeger

### Abstract

The likelihood function plays a pivotal role in statistical inference because it is easy to work with and the resulting estimators are known to have good properties. However, these results hinge on the likelihood being a correct specification of the true data-generating mechanism. Many modern problems involve extremely complicated distribution functions, which may be difficult, if not impossible, to express explicitly. This is a serious barrier to the likelihood approach, which requires the exact specification of a model. We propose a new approach that combines multiple nonparametric likelihood-type objects to build a distribution-free approximation of the true likelihood function. We build on two alternative likelihood approaches, empirical and composite likelihood, taking advantage of the strengths of each. Specifically, from empirical likelihood we borrow the ability to avoid a parametric specification, and from composite likelihood we gain a decrease in computational load. In this talk, I define the general form of the composite empirical likelihood, derive some of the asymptotic properties of this new class, and explore two applications of this method.
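A sketch of plain empirical likelihood for a mean, one of the two building blocks combined in the talk (this is the standard textbook construction, not the composite empirical likelihood itself):

```python
import numpy as np

def el_logratio(x, mu):
    """Empirical log-likelihood ratio statistic for the mean.

    Maximizes sum(log(n * w_i)) over weights w_i >= 0 with sum(w_i) = 1
    and sum(w_i * (x_i - mu)) = 0; returns -2 log R(mu), which is
    asymptotically chi-squared with 1 degree of freedom.
    """
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                  # mu outside the data's convex hull
    n = len(x)
    # Valid Lagrange multipliers keep 1 + lam * z_i >= 1/n for every i.
    lo = (1.0 / n - 1.0) / z.max()
    hi = (1.0 / n - 1.0) / z.min()
    # h(lam) = sum z_i / (1 + lam * z_i) is decreasing; bisect for its root.
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        h = np.sum(z / (1.0 + lam * z))
        lo, hi = (lam, hi) if h > 0 else (lo, lam)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=2.0, size=200)
```

The statistic is zero at the sample mean and grows as the hypothesized mean moves away from it, without any parametric model for the data.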

### References

No references provided at this time

*Lecture: Reduced Order Models for One-Step Multispectral Quantitative Photoacoustic Tomography*

**February 22, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Sarah Vallelian

### Abstract

Photoacoustic tomography, a high-contrast, high-resolution imaging modality, gives rise to a well-posed nonlinear inverse coefficient problem for a coupled wave equation and diffusion equation pair. When data are available at multiple optical wavelengths, the stability and accuracy of the reconstruction increase. Standard inversion methods such as regularized quasi-Newton optimization incur a significant computational cost that increases linearly with the number of optical wavelengths. To accelerate the inversion, we use a POD-based reduced-order model for the wavelength dependence. We demonstrate the computational gains on a synthetic problem motivated by neuroimaging.
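A minimal sketch of proper orthogonal decomposition (POD) on synthetic snapshot data (illustrative only; the talk applies the idea to the wavelength dependence of the forward model):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)

# Snapshot matrix: columns are "full-order" states at several parameter
# values (smooth synthetic profiles standing in for model solutions).
params = np.linspace(0.5, 2.0, 25)
snapshots = np.column_stack(
    [np.exp(-p * x) + 0.1 * np.sin(2 * np.pi * p * x) for p in params])

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1.0 - 1e-6)) + 1   # retained modes
basis = U[:, :r]

# A state at a new parameter value is approximated in the span of the
# r POD modes, instead of the full 200-dimensional space.
new_state = np.exp(-1.3 * x) + 0.1 * np.sin(2 * np.pi * 1.3 * x)
approx = basis @ (basis.T @ new_state)
rel_err = np.linalg.norm(new_state - approx) / np.linalg.norm(new_state)
```

The cost saving comes from working with `r` coefficients rather than the full state dimension; in the multispectral setting each additional wavelength then reuses the same small basis.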

### References

No references provided at this time


*Lecture: TV-l2-like Model Based on the N-th Order Riesz Dyadic Wavelet*

**March 1, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Duy Thai

### Abstract

My talk has two parts, summarizing my current projects. **Part I:** A TV-l2-like model based on the N-th order Riesz dyadic wavelet, which addresses the question: "which domain makes a signal sparse?" The proposed model is a hybrid that benefits from two well-known techniques, harmonic analysis and regularization, in terms of removing fine-scale structure while preserving the edges and contrast of a smoothed signal/image.

**Part II:** An introduction to stochastic dynamical systems and dimension reduction, which addresses the problem of how to find a cost function with a (topologically conjugate) projection of a high-dimensional dynamical system onto a low-dimensional space.

### References

No references provided at this time

*Lecture: Clustering Sampling Smoothers for Non-Gaussian Four Dimensional Data Assimilation*

**March 8, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Ahmed Attia

### Abstract

In this work, we develop a set of fully non-Gaussian sampling algorithms for four-dimensional data assimilation (4D-DA). Unlike the well-known ensemble Kalman smoother (EnKS), which is optimal only in the linear Gaussian case, the proposed methodologies accommodate non-Gaussian errors and nonlinear forward operators. Moreover, unlike the four-dimensional variational method (4D-Var), which provides only the maximum a posteriori (MAP) estimate of the posterior distribution, the presented schemes inherently provide an estimate of the posterior uncertainty. The Gaussian-prior assumption is widely used in the context of 4D-DA, mainly because it is computationally tractable; however, it is restrictive and can be inaccurate. We propose a set of MCMC-based algorithms to sample from the non-Gaussian posterior in which the Gaussian-prior assumption is relaxed. Specifically, a clustering step is introduced after the forecast phase of the proposed smoothers, and the prior density function is estimated by fitting a Gaussian mixture model (GMM) to the prior ensemble. Using the data likelihood function, the posterior density is then formulated as a mixture density and sampled using a Hamiltonian Monte Carlo (HMC) approach (or any other scheme capable of sampling multimodal densities in high-dimensional spaces). Due to the prohibitive computational cost of this approach, we also propose efficient algorithms that incorporate reduced-order models; specifically, we develop computationally efficient versions of the proposed sampling smoother based on reduced-order approximations of the underlying model dynamics.
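A toy one-dimensional sketch of the main ingredients: fit a GMM to a forecast ensemble by EM, form the mixture posterior with a Gaussian likelihood, and sample it with random-walk Metropolis standing in for the HMC step (illustrative only, not the authors' smoothers):

```python
import numpy as np

rng = np.random.default_rng(4)

# Forecast ensemble drawn from a bimodal (non-Gaussian) prior.
ens = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(2.0, 0.7, 600)])

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Fit a two-component GMM to the ensemble with a few EM iterations
# (standing in for the clustering step after the forecast phase).
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sig = np.array([1.0, 1.0])
for _ in range(100):
    resp = w * norm_pdf(ens[:, None], mu, sig)      # E-step
    resp /= resp.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)                           # M-step
    w = nk / len(ens)
    mu = (resp * ens[:, None]).sum(axis=0) / nk
    sig = np.sqrt((resp * (ens[:, None] - mu) ** 2).sum(axis=0) / nk)

# Posterior = GMM prior x Gaussian likelihood for a single observation,
# sampled with random-walk Metropolis (a stand-in for HMC).
obs, obs_sig = 1.5, 1.0

def log_post(x):
    prior = np.sum(w * norm_pdf(x, mu, sig))
    return np.log(prior) - 0.5 * ((obs - x) / obs_sig) ** 2

x, lp, samples = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = x + rng.normal(0.0, 1.0)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    samples.append(x)
samples = np.array(samples[5000:])
```

The resulting sample set carries posterior uncertainty directly, which is the advantage over a MAP-only method like 4D-Var noted in the abstract.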

### References

- Attia, Ahmed, Răzvan Ştefănescu, and Adrian Sandu. *"The reduced-order hybrid Monte Carlo sampling smoother."* International Journal for Numerical Methods in Fluids 83.1 (2017): 28-51.
- Attia, Ahmed, Vishwas Rao, and Adrian Sandu. *"A Hybrid Monte-Carlo sampling smoother for four-dimensional data assimilation."*
- Attia, Ahmed, Azam Moosavi, and Adrian Sandu. *"Cluster Sampling Filters for Non-Gaussian Data Assimilation."*

*Lecture: A Mixture of Gaussian and Student's t Errors for Robust and Accurate Inference*

**March 15, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Hyungsuk Tak

### Abstract

A Gaussian error assumption, i.e., an assumption that the data are observed with Gaussian noise, can bias parameter estimation in the presence of outliers. A heavy-tailed error assumption based on Student's t-distribution helps reduce the bias, but it may be less efficient in estimating parameters if the heavy-tail assumption is uniformly applied to data that are mostly normally observed. We propose a mixture error assumption that selectively converts Gaussian errors into Student's *t* errors according to latent outlier indicators, leveraging the best of the Gaussian and Student's *t* errors; parameter estimation becomes not only robust but also accurate. Using several examples, we demonstrate the potential of the proposed mixture error assumption to estimate parameters accurately in the presence of outliers.
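A toy sketch of the idea: an EM-style estimate of a location parameter under a Gaussian/Student's t mixture error, with soft outlier indicators downweighting the flagged points. The talk's model is Bayesian with latent indicators; the degrees of freedom and outlier probability below are assumed values for illustration:

```python
import numpy as np
from math import gamma, sqrt, pi

rng = np.random.default_rng(5)

# A location parameter observed with Gaussian noise, plus gross outliers.
true_mu = 3.0
y = np.concatenate([true_mu + rng.normal(0.0, 1.0, 95),
                    true_mu + 10.0 * rng.standard_t(1, 5)])

nu = 4.0      # dof of the Student's t component (assumed)
p_out = 0.1   # prior probability an observation is an outlier (assumed)

def t_pdf(x, nu):
    c = gamma((nu + 1) / 2) / (sqrt(nu * pi) * gamma(nu / 2))
    return c * (1.0 + x**2 / nu) ** (-(nu + 1) / 2)

def norm_pdf(x):
    return np.exp(-0.5 * x**2) / sqrt(2 * pi)

# EM with latent outlier indicators: each point is softly assigned to the
# Gaussian or the Student's t error, and mu is re-estimated by a weighted
# mean in which t-flagged points get the usual t precision factor.
mu = np.median(y)
for _ in range(50):
    pg = (1.0 - p_out) * norm_pdf(y - mu)
    pt = p_out * t_pdf(y - mu, nu)
    resp = pg / (pg + pt)                    # P(not an outlier | data)
    w = resp + (1.0 - resp) * (nu + 1.0) / (nu + (y - mu) ** 2)
    mu = np.sum(w * y) / np.sum(w)
```

Points flagged as outliers contribute with small weights, so the estimate stays close to the bulk of the data; points assigned to the Gaussian component contribute with full weight, preserving efficiency.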

### References

No references provided at this time

*Lecture: Optimization over the Sums-of-Squares Cone Using Polynomial Interpolants*

**March 22, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Sercan Yildiz

### Abstract

In this follow-up talk, we discuss our recent progress on solution methods for sums-of-squares programs. We present the details of an algorithm for optimization over the sums-of-squares cone and describe the challenges involved in demonstrating its polynomial time complexity in the bit model of computation. The talk is based on joint work with David Papp.

### References

No references provided at this time

**NOTE: This seminar has been moved to April 19, 2017, 12:00pm – 1:00pm**

*Lecture: Distributionally-Robust Optimization for Energy and Environment*

**March 29, 2017, 1:15pm – 2:15pm**

Room 203

**Speaker:** Peter Diao

### Abstract

Stochastic programming optimizes an objective function that depends on a distribution of uncertainty D. Distributionally-robust optimization tackles the challenging problem of dealing with potential misspecification of D. This talk will explain ongoing work being carried out in the Energy and Environment working group related to this topic. Your feedback and suggestions will be very welcome!

### References

No references provided at this time

*Lecture: Structural Brain Connectome Analysis*

**April 12, 2017, 12:00pm – 1:00pm**

Room 150

**Speaker:** Zhengwu Zhang

### Abstract

The connectome in an individual brain plays a fundamental role in how the mind responds to everyday tasks and challenges. Modern imaging technology such as diffusion MRI (dMRI) makes it easy to peer into an individual’s brain and collect valuable data to infer connectivity. The main challenges for the statistical analysis of structural connectivity include (1) reliably extracting the connectome from dMRI data and (2) relating the connectome to covariates and traits of individuals. We aim to develop a mapping framework called the Population-based Structural Connectome (PSC) to estimate structural connectivity on a common space while accounting for individual variability. The PSC framework allows one to analyze the structural connectome on three different levels of increasing complexity: the binary network level, the weighted network level and the streamline level. The reliability of PSC is assessed using a test-retest dataset. We applied our methodology to the Human Connectome Project (HCP) dataset with more than 900 subjects and related our estimates of structural connectivity to behavioral measures. Structural connectivity is found to be significantly related to phenotypes such as fluid intelligence, memory, education, language comprehension and taste.
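A minimal sketch of the binary and weighted network levels, built from hypothetical streamline endpoints (illustrative only; not the PSC pipeline):

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical tractography output: each streamline connects a pair of
# parcels from an n_parcels-region parcellation.
n_parcels, n_streamlines = 10, 5000
ends = rng.integers(0, n_parcels, size=(n_streamlines, 2))

# Weighted network level: streamline counts between parcel pairs.
W = np.zeros((n_parcels, n_parcels))
for i, j in ends:
    if i != j:
        W[i, j] += 1
        W[j, i] += 1

# Binary network level: does any streamline connect the pair?
B = (W > 0).astype(int)
```

The streamline level retains the full curves rather than pairwise summaries, which is why the three levels form a hierarchy of increasing complexity.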

### References

No references provided at this time

*Lecture: Detecting Planets: Jointly Modeling Radial Velocity and Stellar Activity Time Series*

**April 12, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** David Jones

### Abstract

The radial velocity technique is one of the two main approaches for detecting planets outside our solar system, or exoplanets as they are known in astronomy. The method works by detecting the Doppler shift resulting from the motion of a host star caused by an orbiting planet. Unfortunately, this Doppler signal is typically contaminated by various “stellar activity” phenomena, such as dark spots on the star surface. A principled approach to recovering the Doppler signal was proposed by Rajpaul et al. (2015), and involves the use of dependent Gaussian processes to jointly model the corrupted Doppler signal and multiple proxies for the stellar activity.

We build on this work in two ways: (i) we propose using PCA or diffusion maps to construct more informative stellar activity proxies; and (ii) we extend the Rajpaul et al. (2015) model to a larger class of models and use a model comparison procedure to select the best model for the particular stellar activity proxies at hand. This framework also enables us to compare the performance of PCA- and diffusion-map-based proxies in terms of the resulting statistical power for planet detection.
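A small sketch of the PCA-proxy idea: extract leading principal components from a set of simulated activity-indicator time series (toy data; diffusion maps and the dependent Gaussian process model are not shown):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy activity indicators: several time series that all trace one latent
# activity signal plus independent noise (all names hypothetical).
n_times, n_ind = 300, 6
t = np.linspace(0.0, 10.0, n_times)
activity = np.sin(2.0 * np.pi * t / 3.0)          # latent activity signal
loadings = rng.uniform(0.5, 1.5, n_ind)
series = activity[:, None] * loadings + 0.2 * rng.normal(size=(n_times, n_ind))

# PCA via SVD of the centered indicator matrix; the leading principal
# components serve as stellar-activity proxies.
centered = series - series.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
proxies = U[:, :2] * s[:2]                        # first two PC time series
explained = s**2 / np.sum(s**2)
```

Because the indicators share the latent activity signal, the first principal component concentrates it while averaging down the independent noise, giving a cleaner proxy than any single indicator.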

### References

Rajpaul, V., Aigrain, S., Osborne, M. A., Reece, S., & Roberts, S. (2015). *A Gaussian process framework for modelling stellar activity signals in radial velocity data.* Monthly Notices of the Royal Astronomical Society, 452(3), 2269-2291. https://arxiv.org/abs/1506.07304


*Lecture: Optimization over the Sums-of-Squares Cone Using Polynomial Interpolants*

**April 19, 2017, 12:00pm – 1:00pm**

Room 150

**Speaker:** Sercan Yildiz

### Abstract

In this follow-up talk, we discuss our recent progress on solution methods for sums-of-squares programs. We present the details of an algorithm for optimization over the sums-of-squares cone and describe the challenges involved in demonstrating its polynomial time complexity in the bit model of computation. The talk is based on joint work with David Papp.

### References

No references provided at this time

*Lecture: Computer Model Calibration to Enable Disaggregation of Large Parameter Spaces, with Application to Mars Rover Data*

**April 19, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** David Stenning

### Abstract

We are developing a novel statistical method to address a fundamental scientific goal: disaggregation, or estimation of the composition of an unknown aggregate target. By combining forward (computer) models of the target of interest with measured data, our approach enables computer-model calibration techniques to directly solve the disaggregation problem. We are developing our method in the context of chemical spectra generated by laser-induced breakdown spectroscopy (LIBS), used by instruments such as ChemCam on the Mars Science Laboratory rover Curiosity. Because a single run of the LIBS computer model may take hours on parallel computing platforms, we build fast emulators for single-compound targets. Because the chemical spectra are high-dimensional, we follow the general framework described in Higdon et al. (2008). We will next build multi-compound emulators by combining the single-compound emulators in a hierarchical model. We expect our approach to yield the first statistical characterization of matrix effects, i.e. spectral peaks that are amplified or suppressed when compounds are combined in a target versus measured in isolation, and the first capability in uncertainty quantification (UQ) that addresses the unique challenges of chemical spectra.
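A toy sketch of the Higdon et al. (2008) emulation strategy: principal components of high-dimensional simulator output, with a small Gaussian process fit to each PC weight (the simulator below is a hypothetical stand-in, not the LIBS model):

```python
import numpy as np

wl = np.linspace(0.0, 1.0, 500)          # "wavelength" grid

# Hypothetical stand-in for an expensive simulator: a scalar parameter
# theta maps to a high-dimensional spectrum.
def simulate(theta):
    return (np.exp(-(wl - theta) ** 2 / 0.05)
            + 0.5 * np.exp(-(wl - 0.5 * theta) ** 2 / 0.08))

design = np.linspace(0.2, 0.8, 15)       # training "computer model runs"
Y = np.column_stack([simulate(th) for th in design])

# Higdon-style emulator: principal components of the simulation output,
# with an independent Gaussian process on each PC weight.
mean_y = Y.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Y - mean_y, full_matrices=False)
r = 8                                    # retained principal components
basis = U[:, :r] * s[:r]
weights = Vt[:r, :]                      # PC weights at the design points

def gp_predict(xtr, ytr, xte, ell=0.1, nugget=1e-6):
    """One-dimensional GP posterior mean with a squared-exponential kernel."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    K = k(xtr, xtr) + nugget * np.eye(len(xtr))
    return k(xte, xtr) @ np.linalg.solve(K, ytr)

theta_new = 0.37
w_new = np.array([gp_predict(design, weights[j], np.array([theta_new]))[0]
                  for j in range(r)])
emulated = mean_y[:, 0] + basis @ w_new
rel_err = (np.linalg.norm(emulated - simulate(theta_new))
           / np.linalg.norm(simulate(theta_new)))
```

Fitting GPs to a handful of PC weights instead of each of the 500 output coordinates is what makes the emulator cheap enough to replace the hours-long simulator runs inside a calibration loop.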

### References

Higdon, D., J. Gattiker, B. Williams, and M. Rightley (2008). *Computer Model Calibration Using High-Dimensional Output.* Journal of the American Statistical Association, 103(482).

*Special Guest Lecture: The Rise of Multi-precision Computations*

**April 26, 2017, 1:15pm – 2:15pm**

Room 150

**Speaker:** Nicholas J. Higham, School of Mathematics, University of Manchester (UK)

### Abstract

Multi-precision arithmetic means floating-point arithmetic supporting multiple, possibly arbitrary, precisions. In recent years there has been a growing demand for and use of multi-precision arithmetic in order to deliver a result of the required accuracy at minimal cost. For a rapidly growing body of applications, double precision arithmetic is insufficient to provide results of the required accuracy. These applications include supernova simulations, electromagnetic scattering theory, and computational number theory. On the other hand, it has been argued that for climate modelling and deep learning, half precision (about four significant decimal digits) is sufficient. We discuss a number of topics involving multi-precision arithmetic, including:

(i) How to derive linear algebra algorithms that will run in any precision, as opposed to being optimized (as some key algorithms are) for double precision.

(ii) The need for, availability of, and ways to exploit, higher precision arithmetic (e.g., quadruple precision arithmetic).

(iii) What accuracy rounding error bounds can guarantee for large problems solved in low precision.

(iv) How a new form of preconditioned iterative refinement can be used to solve very ill conditioned sparse linear systems to high accuracy.
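A minimal NumPy sketch of the basic mechanism behind item (iv) in its classical, unpreconditioned form: solve cheaply in single precision, then refine with residuals accumulated in double precision:

```python
import numpy as np

rng = np.random.default_rng(8)

# A well-conditioned test system, solved first in single precision.
n = 200
A = rng.normal(size=(n, n)) + n * np.eye(n)
x_true = rng.normal(size=n)
b = A @ x_true

# Low-precision solve.  (In practice one would compute the float32 LU
# factorization once and reuse it for every correction below.)
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
x32_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

# Iterative refinement: residuals in double precision, corrections
# from the cheap single-precision solve.
for _ in range(5):
    r = b - A @ x                        # double-precision residual
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

For well-conditioned systems the refined solution reaches double-precision accuracy while the expensive factorization work stays in the cheaper format; the preconditioned variant discussed in the talk extends the idea to very ill conditioned systems, where this classical scheme breaks down.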

### References

No references provided at this time