*Introductory Talks from SAMSI Postdoctoral Fellows*

**September 14, 2016, 1:15pm – 2:15pm**

Room 150

Speakers: Benjamin Risk, Sarah Vallelian, Zhengwu Zhang, Duy Thai, David Jones, Hyungsuk Tak, David Stenning, Ahmed Attia, Sercan Yildiz, Peter Diao

### Abstract

Each of our postdoctoral fellows will give a short introductory talk on their areas of academic interest and, possibly, an overview of their upcoming lectures throughout the fall.

*Lecture: Spurious Activation from Multiband Acquisition fMRI*

**September 21, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: Benjamin Risk

### Abstract

In fMRI, conventional pulse sequences collect two-dimensional data one slice at a time and consequently require up to six seconds to acquire the dozens of slices composing a single volume with whole-brain coverage. Multiband acquisition techniques collect multiple slices in a single shot and can be used to decrease the time between acquisitions of fMRI volumes, which can increase statistical power and better characterize the temporal dynamics of the blood-oxygen-level-dependent (BOLD) signal. The technique requires an additional processing step in which the slices are separated, or unaliased, to recover the whole-brain volume. However, this step may cause signal leakage between aliased locations and lead to spurious activation (false positives). We examine the Slice-GRAPPA algorithm for image reconstruction at different acceleration factors. In simulations, we found a high incidence of spurious activation. Perversely, more time points actually increase the detection of spurious activation, so that longer acquisition times can lead to poorer estimates of activation. We also show evidence of artifacts in fMRI data from the Human Connectome Project, which uses a higher acceleration factor than most studies and thus may be more susceptible to spurious activation.


*Lecture: Computationally efficient Markov chain Monte Carlo methods for hierarchical Bayesian inverse problems*

**September 28, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: Sarah Vallelian

### Abstract

In Bayesian inverse problems, the posterior distribution can be used to quantify uncertainty about the reconstructed solution. In practice, approximating the posterior requires Markov chain Monte Carlo (MCMC) algorithms, but these can be computationally expensive. We present a computationally efficient MCMC sampling scheme for ill-posed Bayesian inverse problems.
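
The general setup can be illustrated with a toy one-dimensional linear inverse problem. The sketch below uses plain random-walk Metropolis with invented parameter values, not the authors' specialized hierarchical sampler; it only shows how MCMC draws approximate the posterior that quantifies uncertainty in the reconstruction:

```python
import math
import random

random.seed(0)

# Toy linear inverse problem: y = a*x + noise, Gaussian prior on x
a, sigma, tau = 2.0, 0.5, 1.0        # forward map, noise sd, prior sd
x_true = 1.3
y = a * x_true + random.gauss(0.0, sigma)

def log_post(x):
    # Log posterior up to an additive constant: log-likelihood + log-prior
    return -0.5 * ((y - a * x) / sigma) ** 2 - 0.5 * (x / tau) ** 2

# Random-walk Metropolis
samples, x = [], 0.0
lp = log_post(x)
for _ in range(20000):
    prop = x + random.gauss(0.0, 0.4)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:   # accept/reject step
        x, lp = prop, lp_prop
    samples.append(x)

burned = samples[5000:]                 # discard burn-in
post_mean = sum(burned) / len(burned)   # posterior-mean reconstruction
```

For this conjugate Gaussian toy the posterior is available in closed form, which is exactly what makes the example useful as a correctness check; the expensive real-world case arises when the forward map is a large PDE solve and each `log_post` evaluation is costly.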

### References

1. *Computationally Efficient Markov Chain Monte Carlo Methods for Hierarchical Bayesian Inverse Problems:* D. Andrew Brown, Arvind K. Saibaba, Sarah Vallélian


*Lecture: Nonparametric Bayes Models of Fiber Curves Connecting Brain Regions*

**October 5, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: Zhengwu Zhang

### Abstract

In studying structural inter-connections in the human brain, it is common to first estimate fiber bundles connecting different regions of the brain relying on diffusion tensor imaging. These fiber bundles act as three-dimensional highways for neural activity and communication, snaking through the brain and connecting different regions. Current statistical methods for analyzing these fibers reduce the rich information into an adjacency matrix, with the elements containing a count of the number of connections between pairs of regions. The goal of this article is to avoid discarding the rich functional data on the shape, size and orientation of fibers, developing flexible models for characterizing the population distribution of fibers between brain regions of interest within and across different individuals.

We start by efficiently decomposing each fiber in each individual’s brain into a corresponding rotation matrix, shape and translation from a global reference curve. These components can then be viewed as data lying on a product space composed of different Euclidean spaces and manifolds. To non-parametrically model the distribution within and across individuals, we rely on a hierarchical mixture of product kernels specific to the component spaces. Taking a Bayesian approach to inference, we develop an efficient method for posterior sampling. The approach automatically produces clusters of fibers within and across individuals, and yields interesting new insight into variation in fiber tracks, while providing a useful starting point for more elaborate models relating fibers to covariates and neuropsychiatric traits.
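
The decomposition step can be sketched in two dimensions: subtract centroids for the translation, recover the rotation by Procrustes alignment to the reference curve, and keep the aligned residual as the shape component. This is an illustrative simplification (the paper works with three-dimensional fibers and rotation matrices, and every curve and number below is invented):

```python
import math

# Reference curve and a synthetic "fiber": the reference rotated by theta
# and shifted (all points invented for illustration)
ref = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5), (3.0, 1.5)]
theta, shift = 0.7, (2.0, -1.0)
fiber = [(math.cos(theta) * x - math.sin(theta) * y + shift[0],
          math.sin(theta) * x + math.cos(theta) * y + shift[1]) for x, y in ref]

def centroid(curve):
    n = len(curve)
    return (sum(p[0] for p in curve) / n, sum(p[1] for p in curve) / n)

def decompose(curve, reference):
    # Translation component: difference of centroids
    cc, rc = centroid(curve), centroid(reference)
    c = [(x - cc[0], y - cc[1]) for x, y in curve]
    r = [(x - rc[0], y - rc[1]) for x, y in reference]
    # Rotation component: optimal 2-D Procrustes angle aligning reference to curve
    num = sum(rx * cy - ry * cx for (rx, ry), (cx, cy) in zip(r, c))
    den = sum(rx * cx + ry * cy for (rx, ry), (cx, cy) in zip(r, c))
    angle = math.atan2(num, den)
    # Shape component: the curve with translation and rotation removed
    shape = [(math.cos(-angle) * x - math.sin(-angle) * y,
              math.sin(-angle) * x + math.cos(-angle) * y) for x, y in c]
    return (cc[0] - rc[0], cc[1] - rc[1]), angle, shape
```

Each fiber then contributes a point on a product space (translation in a Euclidean space, rotation on a circle here, and shape in a curve space), which is the representation the hierarchical mixture of product kernels is built on.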


*Lecture: TV-l2-like Model based on Riesz Wavelet Frame*

**October 12, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: Duy Thai

### Abstract

To address the question “which domain makes a signal sparse?”, two main families of techniques are available: harmonic analysis and regularization. Sampling theory in harmonic analysis (bi)orthogonally projects an image onto scaling and wavelet spaces whose coefficients capture energy (the lowpass signal) and singularities (edges), as in the wavelet transform. Decomposing with frames instead of bases is known to enhance the sparsity of an image, as in the curvelet transform. Although these projection methods can define scales of oscillating components such as texture or noise, they rely on linear filtering, i.e., convolution with a kernel in the spatial domain, which produces Gibbs effects. The total variation (TV)-l2 model, a regularization method in a Banach space, is more attractive for tackling these blurring artifacts because of the shrinkage operator induced by the l1-norm: the model smooths a noisy image while preserving edges. We observe that a solution of this model can be rewritten in the form of sampling theory, with scaling/wavelet functions (similar to a wavelet-like operator) and a shrinkage operator. However, the bandwidths involved are quite large and capture all frequency components of fine-scale structure, e.g., texture. Thus the model cannot sufficiently enhance sparsity under the l1-norm: although edges in a reconstructed image are preserved, the model cannot remove texture while preserving contrast, and it is known to produce staircase effects in image denoising.

Benefiting from both techniques, we build a hybrid model, called the TV-l2-like model based on the N-th order Riesz dyadic wavelet, to remove fine-scale structure while preserving edges and their contrast. The implementation is simple, and we compare results on natural images containing texture.
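
A minimal illustration of the TV-l2 (ROF-type) idea in one dimension: gradient descent on a least-squares data term plus a smoothed total-variation penalty. This is a textbook sketch with invented data, not the authors' wavelet-frame algorithm; the parameter `eps` smooths the non-differentiable l1-norm so plain gradient descent applies:

```python
import math

# Noisy 1-D step signal (invented data)
f = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]

def tv_denoise(f, lam=0.3, eps=1e-2, step=0.02, iters=5000):
    # Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
    u = list(f)
    n = len(u)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(n)]            # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = lam * d / math.sqrt(d * d + eps)       # smoothed-TV gradient
            g[i] -= w
            g[i + 1] += w
        u = [u[i] - step * g[i] for i in range(n)]
    return u

def total_variation(v):
    return sum(abs(v[i + 1] - v[i]) for i in range(len(v) - 1))

u = tv_denoise(f)
```

The denoised signal is flatter within each plateau while the large jump between the two levels survives, which is exactly the edge-preserving smoothing (and, on fine texture, the limitation) that the abstract attributes to the TV-l2 model.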

### References

1. D.H. Thai and L. Mentch. Multiphase Segmentation For Simultaneously Homogeneous and Textural Images: https://arxiv.org/pdf/1606.09281.pdf

2. D.H. Thai and D. Banks. Directional Mean Curvature for Textured Image Demixing (in preparation and provided upon request)

*Lecture: Detecting planets: jointly modeling radial velocity and stellar activity time series*

**October 19, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: David Jones

### Abstract

The radial velocity technique is one of the two main approaches for detecting planets outside our solar system, or exoplanets as they are known in astronomy. The method works by detecting the Doppler shift resulting from the motion of a host star caused by an orbiting planet. Unfortunately, this Doppler signal is typically contaminated by various “stellar activity” phenomena, such as dark spots on the star surface. A principled approach to recovering the Doppler signal was proposed by Rajpaul et al. (2015), and involves the use of dependent Gaussian processes to jointly model the corrupted Doppler signal and multiple proxies for the stellar activity.

During the SAMSI ASTRO program, we aim to extend this work by (i) proposing more informative stellar activity proxies, (ii) adapting the model to incorporate a wider variety of proxies, and (iii) utilizing the new model to optimally schedule telescope observations. In my talk, I will introduce the problem and present a more general model that allows, for example, a lag between the current stellar activity and subsequent effects on the Doppler time series. I will then discuss potential scheduling methods and conclude.
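
A minimal sketch of the lag idea: treat the activity proxy as a latent Gaussian process f(t) and the Doppler series as f(t − lag), so the two outputs share one squared-exponential kernel and their cross-covariance is that kernel evaluated at a shifted time difference. All numbers are invented, and the actual model of Rajpaul et al. is richer (it also involves derivatives of the latent process):

```python
import math

ell, lag = 1.0, 0.5   # kernel length-scale and activity-to-Doppler lag (invented)

def k(r):
    # Squared-exponential kernel of the shared latent process f
    return math.exp(-0.5 * (r / ell) ** 2)

def joint_cov(times):
    # Covariance of the stacked vector [activity(t_1..t_n), rv(t_1..t_n)]
    # where activity(t) = f(t) and rv(t) = f(t - lag)
    n = len(times)
    C = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i, t in enumerate(times):
        for j, s in enumerate(times):
            C[i][j] = k(t - s)                          # activity-activity
            C[n + i][n + j] = k(t - s)                  # rv-rv
            C[i][n + j] = C[n + j][i] = k(t - s + lag)  # cross-covariance
    return C

times = [0.0, 0.5, 1.0, 1.5]
C = joint_cov(times)
```

Inference then proceeds as in any multi-output GP by conditioning the stacked observation vector on this joint covariance; the cross-covariance block is what lets activity measurements at one time inform the Doppler signal at a later time.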

### References

1. Rajpaul, V., Aigrain, S., Osborne, M. A., Reece, S., & Roberts, S. (2015). A Gaussian process framework for modelling stellar activity signals in radial velocity data. Monthly Notices of the Royal Astronomical Society, 452(3), 2269-2291. https://arxiv.org/abs/1506.07304

2. Boyle, P., & Frean, M. (2004). Dependent Gaussian processes. In Advances in Neural Information Processing Systems, 217-224. http://papers.nips.cc/paper/2561-dependent-gaussian-processes.pdf


*Lecture: Signal Separation for Exoplanet Detection: A Diffusion Map Approach*

**October 26, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: David Stenning

### Abstract

An active area of research in modern astronomy is the detection of exoplanets—planets that orbit stars other than the Sun. A promising approach for discovering Earth-like exoplanets is the radial velocity method, which involves detecting the Doppler shift in a star’s spectral lines resulting from the gravitational effects of an orbiting planet. However, the signal caused by a Doppler shift is often obscured by those resulting from stellar activity, such as dark spots rotating across the star’s surface.

We aim to disentangle the Doppler shift signal present in stellar spectra from those of stellar activity, thereby improving the efficacy of the radial velocity method. We rely on synthetic spectra generated by the SOAP 2.0 code (Dumusque et al. 2014) to test various methodologies. In this talk, I will present some preliminary results on the use of diffusion maps (e.g. Coifman and Lafon, 2006; Lafon and Lee, 2006; Richards et al., 2009)—a nonlinear dimension reduction technique—as part of our overall procedure for distinguishing between Doppler shifts and stellar activity.
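
The core of a diffusion map is small enough to sketch directly: form a Gaussian affinity matrix on the data, normalize it into a diffusion (Markov) kernel, and use its leading nontrivial eigenvectors as new coordinates. Below is a toy version on two invented clusters of points, using the symmetric normalization (whose spectrum matches the random-walk matrix); it sketches the general technique, not our pipeline for stellar spectra:

```python
import math

# Two well-separated clusters on the line (invented data)
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
n, eps = len(points), 1.0

# Gaussian affinity matrix
K = [[math.exp(-((points[i] - points[j]) ** 2) / eps) for j in range(n)]
     for i in range(n)]
deg = [sum(row) for row in K]

# Symmetric normalization M = D^{-1/2} K D^{-1/2}; same eigenvalues as the
# random-walk (diffusion) matrix D^{-1} K
M = [[K[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]

# The leading eigenvector of M is known to be D^{1/2} 1; deflate it and use
# power iteration to find the second (first nontrivial) eigenvector
v1 = [math.sqrt(d) for d in deg]
s = math.sqrt(sum(x * x for x in v1))
v1 = [x / s for x in v1]

v = [1.0, 0.0, 0.0, 0.0, 0.0, -1.0]   # arbitrary start vector
for _ in range(500):
    dot = sum(a * b for a, b in zip(v, v1))
    v = [a - dot * b for a, b in zip(v, v1)]   # project out v1
    w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    s = math.sqrt(sum(x * x for x in w))
    v = [x / s for x in w]

psi = v  # second diffusion coordinate
```

Already in this one-coordinate embedding the sign of `psi` separates the two clusters, which is the behavior that makes diffusion coordinates useful for distinguishing spectra dominated by different physical effects.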

### References

1. Dumusque, X., Boisse, I., and Santos, N.C. (2014). SOAP 2.0: A Tool to Estimate the Photometric and Radial Velocity Variations Induced by Stellar Spots and Plages. The Astrophysical Journal, 796.

2. Coifman, R.R. and Lafon, S. (2006). Diffusion maps. Applied and Computational Harmonic Analysis, 21, 5-30.

3. Lafon, S. and Lee, A.B. (2006). Diffusion Maps and Coarse-Graining: A Unified Framework for Dimensionality Reduction, Graph Partitioning, and Data Set Parameterization. IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, pp. 1393-1403.

4. Richards, J.W., Freeman, P.E., Lee, A.B., and Schafer, C.M. (2009). Exploiting Low-Dimensional Structure in Astronomical Spectra. The Astrophysical Journal, 691, 32-42.

*Lecture: Strong Lens Time Delay Challenge 2*

**November 9, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: Hyungsuk Tak

### Abstract

The gravitational field of a galaxy can act as a lens and deflect the light emitted by a more distant object such as a quasar. If the galaxy is a strong gravitational lens, it can produce multiple images of the same quasar in the sky. Since the light in each gravitationally-lensed image traverses a different path length from the quasar to the Earth, fluctuations in the source brightness are observed in the several images at different times. The time delay between these fluctuations can be used to constrain cosmological parameters and can be inferred from the time series of brightness data, or light curves, of each image. The strong lens time delay challenge (TDC) is a blind competition with the aim of improving time delay estimation methods for application to realistic observational data sets in preparation for the era of the Large Synoptic Survey Telescope (LSST), the top-ranked ground-based telescope project in the 2010 Astrophysics Decadal Survey. Though the LSST will produce multi-band light curves (observed via multiple optical bands), the organizers of the first TDC simulated thousands of single-band data sets as a simplified starting point. The second TDC is about to begin with more realistic multi-band light curves. We introduce a couple of models developed from the first TDC model and demonstrate their potential to handle the multi-band light curves using several examples.
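
The estimation task itself can be sketched with a toy grid search: given one image's light curve and a delayed copy of it, slide the first curve over a grid of candidate delays and keep the one with the smallest mismatch. The signal here is an invented smooth function, noiseless, with a simple least-squares criterion; the methods in the talk instead model the light curves as stochastic processes within a Bayesian framework:

```python
import math

# A smooth "quasar brightness" fluctuation and a delayed copy of it
def flux(t):
    return math.sin(0.7 * t) + 0.5 * math.sin(1.9 * t)

true_delay = 3.0
times = [0.5 * i for i in range(60)]            # observation times
img_b = [flux(t - true_delay) for t in times]   # second lensed image

def estimate_delay(times, a_func, b_obs, grid):
    # Grid search: shift image A and pick the delay with the smallest
    # mean squared mismatch to image B
    best, best_err = None, float("inf")
    for delta in grid:
        err = sum((a_func(t - delta) - b) ** 2 for t, b in zip(times, b_obs))
        if err < best_err:
            best, best_err = delta, err
    return best

grid = [0.1 * i for i in range(80)]   # candidate delays 0 .. 7.9
est = estimate_delay(times, flux, img_b, grid)
```

Real light curves are irregularly sampled, noisy, and stochastic, which is why the TDC entries replace this matched-curve criterion with full probabilistic models of the underlying brightness process.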

### References

1. H. Tak, K. Mandel, D. A. van Dyk, V. L. Kashyap, X.-L. Meng, and A. Siemiginowska (in progress), “Bayesian Estimates of Astronomical Time Delays between Gravitationally Lensed Stochastic Light Curves,” tentatively accepted in Annals of Applied Statistics, arXiv:1602.01462.

2. Liao et al. (2015), “Strong Lens Time Delay Challenge: II. Results of TDC1,” The Astrophysical Journal, 800, 11.

3. T. Treu and P. Marshall (in progress), “Time Delay Cosmography,” arXiv:1605.05333.

*Lecture: Goal-Oriented Optimal Experimental Design*

**November 16, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: Ahmed Attia

### Abstract

Computer models play an essential role in forecasting complicated phenomena such as atmospheric and ocean dynamics and volcanic eruptions, among others. These models, however, are usually imperfect due to various sources of uncertainty. Measurements are snapshots of reality collected as an additional source of information. Parameter inversion and data assimilation are means of fusing information obtained from measurements, models, prior knowledge, and other available sources to produce a reliable and accurate description (the analysis) of the underlying physical system. The accuracy of the analysis is greatly influenced by the quality of the observational grid design used to collect measurements. Sensor placement can be viewed as an optimal experimental design (OED) problem, where the locations of the sensors define an experimental design. There are many criteria for choosing an optimal experimental design, one of which is the minimization of the uncertainty in the output (e.g., minimization of the trace of the posterior covariance). Including the end goal (predictions) in the experimental design leads to a goal-oriented OED approach that can be used in several applications. In this talk, we outline the idea of goal-oriented optimal design of experiments for PDE-based Bayesian linear inverse problems with infinite-dimensional parameters. If time permits, we will also introduce a problem in practical data assimilation where this idea can be of great practical use.
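
For the linear-Gaussian case, the trace criterion mentioned above (A-optimality) reduces to plain linear algebra: each candidate sensor contributes a row to the forward operator, and the design picks the rows that minimize the trace of the resulting posterior covariance. A toy two-parameter sketch, with all numbers invented and only a single sensor to place:

```python
# Toy A-optimal design: pick the single sensor (row of the forward map)
# that minimizes the trace of the 2x2 Gaussian posterior covariance.

sigma2 = 0.1                            # observation-noise variance
prior_prec = [[1.0, 0.0], [0.0, 1.0]]   # prior precision (identity)

candidates = {
    "s1": [1.0, 0.0],   # observes theta_1 only
    "s2": [0.0, 1.0],   # observes theta_2 only
    "s3": [1.0, 1.0],   # observes the sum theta_1 + theta_2
}

def posterior_trace(row):
    # Posterior precision = prior precision + row^T row / sigma2
    H = [[prior_prec[i][j] + row[i] * row[j] / sigma2 for j in range(2)]
         for i in range(2)]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    # Trace of the inverse of a 2x2 matrix
    return (H[0][0] + H[1][1]) / det

best = min(candidates, key=lambda s: posterior_trace(candidates[s]))
```

Here the sensor observing the sum wins because its row carries more signal for the same noise level. A goal-oriented design would replace the plain trace with the uncertainty of the prediction quantity of interest, and the infinite-dimensional PDE setting replaces these 2x2 matrices with discretized operators.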

### References

1. Alexanderian, Alen, et al. “A-Optimal Design of Experiments for Infinite-Dimensional Bayesian Linear Inverse Problems with Regularized l0-Sparsification.” SIAM Journal on Scientific Computing 36.5 (2014): A2122-A2148.

2. Chaloner, Kathryn, and Isabella Verdinelli. “Bayesian experimental design: A review.” Statistical Science (1995): 273-304.

3. Evensen, Geir. “The ensemble Kalman Filter: Theoretical formulation and practical implementation.” Ocean dynamics 53.4 (2003): 343-367.

4. Haber, Eldad, Lior Horesh, and Luis Tenorio. “Numerical methods for experimental design of large-scale linear ill-posed inverse problems.” Inverse Problems 24.5 (2008): 055012.

5. Kalnay, Eugenia. Atmospheric modeling, data assimilation and predictability. Cambridge university press, 2003.


*Lecture: Improved Computational Methods for Sum-of-Squares Optimization*

**November 30, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: Sercan Yildiz

### Abstract

Sum-of-squares relaxations are a hierarchy of relaxations for polynomial optimization that are parameterized with the degree of the polynomials in the sum-of-squares representation. Each fixed level of the hierarchy provides a lower bound on the true optimal value, which can be computed in polynomial time via semidefinite programming, and these lower bounds converge asymptotically to the optimal value under general conditions. However, solving the semidefinite programs arising from sum-of-squares relaxations poses practical challenges at higher levels of the hierarchy. First, the sizes of these semidefinite programs depend quadratically on the number of monomials in the sum-of-squares representations. Second, numerical problems are often encountered when solving these semidefinite programs. In this talk, we discuss a novel approach to solving sum-of-squares programs. Preliminary computational results indicate that our algorithm compares favorably against existing approaches in terms of time and memory requirements as well as numerical stability. This talk is based on joint research with David Papp, done as part of the SAMSI Working Group on Sum-of-Squares Optimization and Semidefinite Programming.
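
What a sum-of-squares certificate amounts to can be shown concretely: a polynomial p is SOS exactly when p(x) = z^T Q z for the monomial vector z and some positive semidefinite Gram matrix Q, and semidefinite programming searches over such Q. The toy example below only verifies a hand-built certificate for an invented univariate polynomial; it does not implement the solver discussed in the talk:

```python
import random

# p(x) = x^4 - 2x^3 + 2x^2 - 2x + 1 = (x^2 - x)^2 + (x - 1)^2
def p(x):
    return x ** 4 - 2 * x ** 3 + 2 * x ** 2 - 2 * x + 1

# Gram matrix certificate in the monomial basis z = (1, x, x^2):
# Q = a a^T + b b^T with a = (-1, 1, 0) and b = (0, -1, 1), hence PSD,
# and a.z = x - 1, b.z = x^2 - x recover the two squares above
Q = [[1.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]

def quad_form(Q, v):
    return sum(Q[i][j] * v[i] * v[j] for i in range(3) for j in range(3))

def p_from_gram(x):
    # z^T Q z reproduces p(x)
    return quad_form(Q, [1.0, x, x * x])
```

The size of Q is the number of monomials in z, which is why the semidefinite programs grow quickly with the degree of the relaxation, as noted in the abstract.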

### References

To Be Determined

*Lecture: In Search of New Structure: Limits of Discrete Structures, Alignment, and Category Theory*

**December 7, 2016, 1:15pm – 2:15pm**

Room 150

Speaker: Peter Diao

### Abstract

We will discuss new mathematical methods, both existing and in active development, for addressing data alignment problems and analyzing feature learning. Basic concepts and problems will be developed from first principles, using category theory and dense graph limits.

### References

To Be Determined