Maseeh Mathematics + Statistics Colloquium Series 2014-2015 Archive

October 10, 2014 
Liz Stanhope, Lewis & Clark College
You can't hear the shape of an orbifold  

If we can infer the chemical composition of a star from the colors of light it emits, can we determine the shape of a bell from the ringing that it makes? One way to address this question is to ask if the eigenvalues of the Laplace operator associated to a Riemannian manifold determine the manifold. A famous answer of "No" came in 1992 when Gordon, Webb and Wolpert exhibited nonisometric planar domains with exactly the same Laplace spectrum. After an introduction to the mildly singular spaces known as Riemannian orbifolds, I will discuss the degree to which the Laplace spectrum of an orbifold gives us information about the geometry and topology of the orbifold.
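
As a standard illustration of the spectra in question, the Dirichlet Laplacian on an $a \times b$ rectangle has eigenvalues

    \lambda_{m,n} = \pi^2 \left( \frac{m^2}{a^2} + \frac{n^2}{b^2} \right), \qquad m, n \ge 1,

and the inverse spectral problem asks how much of the geometry such a list of numbers determines.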

October 24, 2014 
Irene Fonseca, President of SIAM, Carnegie Mellon University
Variational methods in materials and image processing  

Several questions in applied analysis motivated by issues in computer vision, physics, materials sciences and other areas of engineering may be treated variationally, leading to higher-order problems and to models involving lower-dimensional density measures. Their study often requires state-of-the-art techniques, new ideas, and the introduction of innovative tools in partial differential equations, geometric measure theory, and the calculus of variations.
In this talk it will be shown how some of these questions may be reduced to well-understood first-order problems, while in others higher-order terms play a fundamental role.
Applications to quantum dots in epitaxy deposition, and decolorization and denoising in imaging science will be addressed.
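
A representative first-order variational model from the imaging direction mentioned above (a standard example, not necessarily the one treated in the talk) is the total-variation (Rudin-Osher-Fatemi) denoising functional: given a noisy image $f$ on a domain $\Omega$, one minimizes over candidate images $u$

    E(u) = \int_\Omega |\nabla u| \, dx + \frac{\lambda}{2} \int_\Omega (u - f)^2 \, dx,

with $\lambda > 0$ balancing fidelity to the data against smoothing.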

October 31, 2014 
Tanya Kostova Vassilevska, Lawrence Livermore National Laboratory
Model reduction with proper orthogonal decomposition for dynamical systems: Using snapshots from the time derivatives  

In many areas of science and technology, complex multi-physics time-dependent problems are modeled by large systems of differential equations. Their analysis often poses huge computational challenges, as it requires multiple simulations that are prohibitively expensive in time and memory. However, in many cases, the solutions of these systems lie in low-dimensional manifolds. In these cases, reduced order models (ROMs) exploiting that manifold structure can dramatically reduce the time and memory needed to execute the corresponding full-order model (FOM). One of the most studied approaches to building reduced order models is based on Proper Orthogonal Decomposition (POD). Normally, POD uses the solution of the FOM at selected time, space and parameter values, typically called "snapshots," to calculate the basis of the reduced space.  
Kunisch and Volkwein suggested using difference quotients (DQs) as snapshots in addition to the solution snapshots. Their motivation came from the derivation of their error bound, which, without the DQs, blows up as the distance between snapshots diminishes. Other authors have questioned whether using DQs brings any advantage for designing better POD ROMs, but this question has so far not been addressed.  
We have made some progress in this direction. I will describe our recent work on developing error bounds to compare two POD ROMs: the first using only solution snapshots, the second using, in addition, snapshots from the time derivatives. This work brings two main new results. The first is the form of the error bound itself, which involves the time moments at which the snapshots were taken. The second is that the bounds demonstrate for the first time that, asymptotically, the method with time derivative information can be more accurate. The new bounds give insight into behavior that we then test numerically. Specifically, we demonstrate that the behavior of the errors in numerical experiments with the discretized FitzHugh-Nagumo system (known from neurophysiology) is well predicted by the bounds.  
In addition, if time permits, I will talk about possible future applications in biology using POD ROM methods.
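
A minimal sketch, assuming NumPy, of the snapshot-based POD construction described above, with random data standing in for genuine FOM solutions; the optional difference-quotient snapshots are simply appended as extra columns:

    import numpy as np

    def pod_basis(U, dt=None, use_dq=False, r=10):
        """POD basis from a snapshot matrix U (columns = solutions u(t_k))."""
        snapshots = U
        if use_dq and dt is not None:
            # difference quotients (u(t_{k+1}) - u(t_k)) / dt as extra snapshots
            DQ = (U[:, 1:] - U[:, :-1]) / dt
            snapshots = np.hstack([U, DQ])
        # POD basis = leading left singular vectors of the snapshot matrix
        Phi, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
        return Phi[:, :r], svals

    U = np.random.rand(500, 40)            # 500 spatial dofs, 40 time snapshots
    Phi, svals = pod_basis(U, dt=0.01, use_dq=True, r=5)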

November 7, 2014 
Boris Mordukhovich, Wayne State University
Variational analysis: what is it?  

Variational analysis has been recognized as an active and rapidly growing area of mathematics and operations research motivated mainly by the study of constrained optimization and equilibrium problems, while also applying perturbation ideas and variational principles to a broad class of problems and situations that may not be of a variational nature. One of the most characteristic features of modern variational analysis is the intrinsic presence of nonsmoothness, which naturally enters not only through the initial data of the problems under consideration but largely via variational principles and perturbation techniques applied to a variety of problems with even smooth data. Nonlinear dynamics and variational systems in applied sciences also give rise to nonsmooth structures and motivate the development of new forms of analysis that rely on generalized differentiation.  
This lecture is devoted to discussing some basic constructions and results of variational analysis and its remarkable applications.

November 14, 2014 
Piotr Zwiernik, University of California–Berkeley
Understanding statistical models through their geometry  

Discrete and Gaussian statistical models have a rich geometric structure and can often be viewed as semi-algebraic sets. The geometric viewpoint provides not only good intuition about the behavior of statistical procedures but also tools for proving concrete statistical results. I want to discuss in more detail two examples of statistical models: latent graphical tree models and Gaussian linear covariance models. Although these models are very different, in both cases the corresponding likelihood function is multimodal and so its efficient optimization requires potentially fragile numerical procedures. In the first case, I will show how combinatorics and algebra are used to understand the structure of such a model, which also provides better insight into numerical procedures such as the EM algorithm. In the second case, using recent results in random matrix theory, I will show that optimization of the likelihood function is essentially (with high probability) a convex programming problem.
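
A minimal sketch, assuming NumPy and SciPy, of direct likelihood optimization for a Gaussian linear covariance model Sigma(theta) = theta_1 G_1 + ... + theta_k G_k; the basis matrices and data are illustrative stand-ins, and the talk's point concerns when such optimization behaves, with high probability, like a convex problem:

    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(theta, S, n, basis):
        # Sigma(theta) as a linear combination of fixed symmetric matrices
        Sigma = sum(t * G for t, G in zip(theta, basis))
        sign, logdet = np.linalg.slogdet(Sigma)
        if sign <= 0:                       # outside the positive-definite cone
            return np.inf
        return 0.5 * n * (logdet + np.trace(S @ np.linalg.inv(Sigma)))

    p = 4
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, p))           # illustrative data
    S = np.cov(X, rowvar=False)             # sample covariance
    basis = [np.eye(p), np.ones((p, p))]    # hypothetical model: identity + all-ones
    fit = minimize(neg_loglik, x0=np.array([1.0, 0.1]),
                   args=(S, X.shape[0], basis), method="Nelder-Mead")
    theta_hat = fit.x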

November 21, 2014 
Andrew Gillette, University of Arizona
Modern directions in finite element theory: polytope meshes and serendipity methods  

Finite element methods take a domain decomposed into a mesh of elements with simple geometry and produce an approximate solution to specified PDEs in terms of basis functions associated to each mesh element. In this talk, I will discuss two trends in the analysis of finite element methods for modern applications: (1) the use of polygonal or polyhedral mesh elements for domain decomposition and (2) the use of reduced "serendipity" basis sets for efficient solution approximation. Theoretical and numerical results will be presented, along with a view of how future research in these areas will be closely related. This talk will be accessible to a general mathematical audience.
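
A minimal sketch, assuming NumPy, of the simplest finite element setting alluded to above: piecewise-linear ("hat") elements for -u'' = f on the unit interval with homogeneous Dirichlet conditions, so none of the polytope or serendipity subtleties arise:

    import numpy as np

    def fem_1d(f, n_elems=50):
        """Solve -u'' = f on (0,1) with u(0) = u(1) = 0 using linear elements."""
        x = np.linspace(0.0, 1.0, n_elems + 1)
        h = np.diff(x)
        n = n_elems + 1
        A = np.zeros((n, n))
        b = np.zeros(n)
        for e in range(n_elems):
            k_loc = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # local stiffness
            f_loc = f(0.5 * (x[e] + x[e + 1])) * h[e] / 2 * np.ones(2)   # midpoint-rule load
            idx = [e, e + 1]
            A[np.ix_(idx, idx)] += k_loc
            b[idx] += f_loc
        u = np.zeros(n)
        interior = slice(1, -1)                 # impose the Dirichlet conditions
        u[interior] = np.linalg.solve(A[interior, interior], b[interior])
        return x, u

    # with f = pi^2 sin(pi x), the computed u should approximate sin(pi x)
    x, u = fem_1d(lambda s: np.pi**2 * np.sin(np.pi * s))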

December 5, 2014 
Alexis Dinno, Portland State University School of Community Health
Frequentist tests for equivalence, tests for relevance  

I motivate and introduce the frequentist test for equivalence using the Two One-Sided Tests approach, which originated in clinical epidemiology, but has application anywhere one wants to demonstrate evidence of the absence of an effect. I make a case that relevance tests—inference based on combining tests for difference and tests for equivalence—resolve several problems in frequentist hypothesis testing. I close with my own efforts to develop an equivalence test for the Kolmogorov-Smirnov test, and invite discussion.
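
A minimal sketch, assuming NumPy and SciPy, of the Two One-Sided Tests procedure for a single mean with equivalence margin delta; the data and margin are purely illustrative:

    import numpy as np
    from scipy import stats

    def tost_one_sample(x, delta, alpha=0.05):
        """Declare the mean equivalent to 0 within (-delta, +delta) if both
        one-sided tests reject at level alpha."""
        n = len(x)
        m = np.mean(x)
        se = np.std(x, ddof=1) / np.sqrt(n)
        t_lower = (m + delta) / se            # H0: mu <= -delta vs H1: mu > -delta
        t_upper = (m - delta) / se            # H0: mu >= +delta vs H1: mu < +delta
        p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
        p_upper = stats.t.cdf(t_upper, df=n - 1)
        p = max(p_lower, p_upper)             # overall TOST p-value
        return p, p < alpha

    rng = np.random.default_rng(1)
    p, equivalent = tost_one_sample(rng.normal(0.02, 1.0, size=200), delta=0.3)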

January 7, 2015
Bruno Jedynak, The Johns Hopkins University
The game of 20 questions: a delight of information theory, probability, control, and computer vision  

We will explore various instances of the game of 20 questions with special interest in situations where (1) the responses are noisy and (2) there are multiple targets. We will discuss adaptive as well as non-adaptive policies. We will study performance and optimality for an information theoretic cost function. Applications in fast face detection, micro-surgical tool tracking, and human vision will be briefly presented.
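
A minimal sketch, assuming NumPy, of a noisy-response bisection policy of the kind studied in this setting: repeatedly ask whether the target lies to the left of the posterior median and update the posterior with each (possibly erroneous) yes/no answer; the error rate and target below are illustrative:

    import numpy as np

    def probabilistic_bisection(answer, eps=0.1, n_grid=10_000, n_queries=30):
        """Locate a point in [0,1] from yes/no answers that are correct w.p. 1 - eps."""
        grid = np.linspace(0.0, 1.0, n_grid)
        post = np.full(n_grid, 1.0 / n_grid)            # uniform prior
        for _ in range(n_queries):
            q = grid[np.searchsorted(np.cumsum(post), 0.5)]   # posterior median
            says_left = answer(q)                       # noisy "is target <= q?" response
            left = grid <= q
            post = post * np.where(left == says_left, 1 - eps, eps)
            post /= post.sum()
        return grid[np.argmax(post)]

    rng = np.random.default_rng(2)
    target, eps = 0.347, 0.1
    noisy = lambda q: bool((target <= q) ^ (rng.random() < eps))
    estimate = probabilistic_bisection(noisy, eps=eps)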

January 9, 2015 
Leonid Chindelevitch, Harvard School of Public Health
Modeling tuberculosis, from genes to populations  

Tuberculosis (TB) continues to afflict millions of people and causes over a million deaths a year worldwide. Multi-drug resistance is also on the rise, causing concern among public-health experts. Mathematical and statistical modeling and the development of improved computational tools have an important role to play in supporting worldwide control of TB infections.  
This talk will give an overview of my work on modeling TB by leveraging population information together with molecular genetics data. I will start by presenting a joint model of the dynamics of TB and HIV, whose analysis in a Bayesian framework has helped inform policy decisions on TB control. I will go on to discuss an optimization-based methodology I developed for an accurate classification of complex TB infections as originating from mutation or mixed infection. I will finish by describing an approach for improving the assignment of lineages to TB strains by using a model of molecular evolution, and an ongoing project on differentiating acquired and transmitted resistance in a high TB burden setting.

January 12, 2015
Daniel Sewell, University of Illinois at Urbana-Champaign
Latent space models for dynamic networks  

Dynamic networks are used in a variety of fields to represent the structure and evolution of the relationships between entities. We present a model which embeds longitudinal network data as trajectories in a latent Euclidean space. A Markov chain Monte Carlo algorithm is proposed to estimate the model parameters and latent positions of the actors in the network. The model yields meaningful visualization of dynamic networks, giving the researcher insight into the evolution and the structure, both local and global, of the network. The model handles directed or undirected edges, easily handles missing edges, and lends itself well to predicting future edges. Further, a novel approach is given to detect and visualize an attracting influence between actors using only the edge information. We use the case-control likelihood approximation to speed up the estimation algorithm, modifying it slightly to account for missing data. We apply the latent space model to data collected from a Dutch classroom and to a cosponsorship network on members of the U.S. House of Representatives, illustrating the usefulness of the model through the insights it yields into these networks.
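
A minimal sketch, assuming NumPy, of the single-snapshot log-likelihood underlying such latent space models, with edge probabilities decreasing in latent distance; the dynamic model of the talk additionally places a Markov model on the latent trajectories and estimates everything by MCMC, which this sketch omits:

    import numpy as np

    def log_likelihood(Y, Z, alpha):
        """Y: n x n binary adjacency matrix (one time slice); Z: n x d latent
        positions; alpha: intercept. P(edge i->j) = logistic(alpha - |z_i - z_j|)."""
        n = Y.shape[0]
        ll = 0.0
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                eta = alpha - np.linalg.norm(Z[i] - Z[j])
                p = 1.0 / (1.0 + np.exp(-eta))
                ll += Y[i, j] * np.log(p) + (1 - Y[i, j]) * np.log(1 - p)
        return ll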

January 23, 2015 
Marina Meila, University of Washington
Geometrically faithful non-linear dimension reduction  

Manifold learning algorithms aim to recover the underlying low-dimensional parameters of the data using either local or global features. It is, however, widely recognized that the low-dimensional parametrizations will typically distort the geometric properties of the original data, like distances, angles, areas and so on. These distortions depend both on the data and on the algorithm used. Building on the Laplacian Eigenmap framework, we propose a paradigm that offers a guarantee, under reasonable assumptions, that *any* manifold learning algorithm will preserve the geometry of a data set. Our approach is based on augmenting the output of an algorithm with geometric information, embodied in the Riemannian metric of the manifold. This allows us to define geometric measurements that are independent of the algorithm used, and hence move seamlessly from one algorithm to another. In this work, we provide an algorithm for estimating the Riemannian metric from data and demonstrate the advantages of our approach in a variety of examples.  
As an application we develop a new, principled, unsupervised method for selecting the scale parameter in manifold learning.  
Joint work with Dominique Perrault-Joncas.
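
A minimal sketch, assuming NumPy and SciPy, of a Laplacian Eigenmap-style embedding (heat-kernel weights, graph Laplacian, bottom nontrivial generalized eigenvectors); estimating the Riemannian metric that quantifies the embedding's distortion, the subject of the talk, is not shown:

    import numpy as np
    from scipy.linalg import eigh

    def laplacian_eigenmap(X, dim=2, sigma=1.0):
        """Embed the rows of X into dim coordinates via the graph Laplacian."""
        sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        W = np.exp(-sq / (2 * sigma**2))          # heat-kernel affinities
        np.fill_diagonal(W, 0.0)
        D = np.diag(W.sum(axis=1))
        L = D - W
        # generalized eigenproblem L v = lambda D v; skip the constant eigenvector
        vals, vecs = eigh(L, D)
        return vecs[:, 1:dim + 1]

    X = np.random.rand(300, 3)                    # stand-in for high-dimensional data
    Y = laplacian_eigenmap(X, dim=2)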

February 20, 2015 
Reza Sarhangi, Towson University
Decorating regular polyhedra using historical interlocking star polygonal patterns  

This presentation reports on the application of some historical interlocking patterns for the embellishment of the regular polyhedra (Platonic and Kepler-Poinsot solids). Such patterning can be extended to cover surfaces of some other convex and non-convex solids. In this regard, two methods, the Shamseh (n/k star polygon) method and the radial grid method, will first be employed, and step-by-step geometric constructions will be demonstrated. Then, girih tile modularity is used to explore further patterning designs.
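
A minimal sketch, assuming NumPy, of the n/k star polygon construction mentioned above: place n vertices on a circle and connect every k-th one (a single closed path when n and k are coprime):

    import numpy as np

    def star_polygon(n, k):
        """Vertex sequence of the {n/k} star polygon, returned as a closed path."""
        angles = 2 * np.pi * np.arange(n) / n
        verts = np.column_stack([np.cos(angles), np.sin(angles)])
        order = (k * np.arange(n + 1)) % n       # visit every k-th vertex, then close
        return verts[order]

    path = star_polygon(8, 3)                    # the {8/3} star common in girih work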

February 27, 2015 
David Yanez, Oregon Health & Science University
Longitudinal Change in IMT & Risk of Stroke, MI, and CHD  

Carotid intima-media thickness (IMT) assessed by B-mode ultrasound is an important non-invasive modality for evaluating atherosclerotic disease burden and global cardiovascular (CV) disease risk. A number of studies have examined the relationship between carotid IMT and subsequent cardiovascular disease [O’Leary, 1999; Psaty, 1999; Kuller, 2006 ARIC], and have generally shown a strong relationship. The evidence for a relationship between carotid IMT and future cardiovascular events is strong, especially among younger individuals [Lorenz MW et al. (2006). Carotid intima-media thickening indicates a higher vascular risk across a wide age range: prospective data from the Carotid Atherosclerosis Progression Study (CAPS). Stroke 37: 87–92]. Carotid IMT has also been used as a measure of disease progression in clinical trials investigating the efficacy of new pharmacologic products tested for the ability to reduce cardiovascular disease burden [Bots ML et al. (2003)]. Change in IMT has been reported to be associated with several known cardiovascular risk factors [Chambless 2002]. The use of IMT as an imaging surrogate marker of sub-clinical atherosclerosis and cardiovascular events has several desirable features: it is easily measurable in all study participants, non-invasive, relatively inexpensive, and, of particular importance in clinical trials, does not require an extended duration of follow-up for cardiovascular events to occur [Demol P and Weihrauch TR (1998)]. Other studies in patients with more severe cardiovascular disease have shown disease regression with the use of statins, indicated by a reduction in carotid IMT. Likewise, multiple diabetes [CHICAGO (Carotid Intima-Media Thickness in Atherosclerosis Using Pioglitazone)] and hypertensive medications have been shown to slow the progression of carotid IMT.  
However, much more limited evidence is available regarding the association of carotid IMT progression and cardiovascular outcomes in longitudinal studies. A meta-analysis of several longitudinal studies has examined the relationship between IMT and future events, but different studies have used different measurement methods and studied different populations; therefore these data, although important, are difficult to interpret [Lorenz MW Circulation]. A high correlation between the surrogate and the ultimate outcome is desirable for an intermediate outcome measure to be valid. As IMT measurement is subject to a noteworthy amount of measurement error, the resulting error in IMT change can affect the prediction of cardiovascular events and introduce bias.  
The Cardiovascular Health Study provides an ideal setting to examine the relationship between cardiovascular events and changes in IMT among a group of relatively healthy participants 65 and older who had carotid IMT measures at baseline, year 5 and year 11 of the study. For this investigation, our primary research aim is to evaluate whether changes in the common carotid IMT and the internal carotid IMT are associated with subsequent clinical coronary heart disease, stroke, and myocardial infarction. We perform our analysis in this observational study by accounting for measurement error bias using risk-set regression calibration (RSRC) methods on both the time-independent and time-dependent IMT measurements. We also adjust for known baseline confounders in our analyses, such as age, sex, race, smoking status, height, weight, systolic blood pressure, HDL, LDL, LV mass, Factor VII, fibrinogen, insulin, and blood glucose. We also investigate the impact of the measurement error bias by comparing our results to those from standard (naïve) Cox proportional hazards models.
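
A minimal sketch, assuming pandas and the lifelines package, of the standard (naïve) Cox proportional hazards comparator mentioned above; the file and column names are hypothetical, and the risk-set regression calibration correction itself is not shown:

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("chs_imt.csv")               # hypothetical analysis file
    cols = ["time_to_event", "event", "imt_change", "age", "sex", "sbp", "ldl"]
    cph = CoxPHFitter()
    # naive fit that ignores measurement error in the IMT change covariate
    cph.fit(df[cols], duration_col="time_to_event", event_col="event")
    cph.print_summary()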

March 6, 2015 
David Burton, Lancaster University
A geometrical perspective on many-body theory and radiation reaction  

For over a century, despite extensive theoretical investigation, the dynamical behaviour of an electron interacting with its own electromagnetic field has remained a controversial subject. However, advancements in ultra-high intensity laser science will soon herald a new era in which the radiative self-force will be prominent in experiments. In anticipation, considerable theoretical effort is now being devoted to understanding the dynamics of a large number of electrons, each interacting with its own field as well as the fields of other charges. Most many-body theories used in this context are based on the Landau-Lifshitz (LL) equation of motion for a single radiating electron, and the LL equation is often obtained as an approximation to the Lorentz-Abraham-Dirac (LAD) equation. Although the LAD equation is more elegant than the LL equation, its third-order structure and concomitant pathological behaviour appear to have discouraged physicists from studying it in the many-body context. However, from the perspective of a mathematical physicist, the collisionless Boltzmann (Vlasov) equation can be understood as the invariance of a 1-particle distribution under Lie transport and this approach naturally induces a Vlasov equation from any system of first-order ODEs. Thus, a kinetic theory of radiating electrons based on the LAD equation can be obtained using such techniques. An introduction to the LAD equation and the LL equation will be given, and the geometrical approach to the Vlasov equation will be discussed. Some of the implications of combining radiation reaction with particle interactions will be explored. No prior knowledge of the LAD equation, LL equation or relativistic kinetic theory will be assumed.

April 10, 2015 
Thuan Nguyen, Oregon Health & Science University
The fence method and its applications  

The fence method (Jiang et al. 2008) is a recently proposed strategy for model selection. It was motivated by the limitations of the traditional information criteria in selecting parsimonious models in some nonconventional situations, such as mixed model selection. Jiang, Nguyen & Rao (2009) simplified the adaptive fence method of Jiang et al. (2008) to make it more suitable and convenient to use in a wide variety of problems. Still, the current modification encounters computational difficulties when applied to high-dimensional and complex problems. In the first part of my talk, I address the concern about high-dimensional model selection by proposing a restricted fence procedure that combines the idea of the fence with that of restricted maximum likelihood (REML). Furthermore, we propose a robust bootstrap procedure to choose adaptively the tuning parameter used in the restricted fence. We focus on problems from longitudinal studies and demonstrate the performance of the new procedure, comparing it with the information criteria in a simulation study. The method is further illustrated by a real-data analysis.  
The adaptive fence idea has also been used to solve other tuning-parameter selection problems in model selection. In the second part of my talk, I briefly discuss some recent advances, motivated by the fence idea, in choosing the regularization parameter in shrinkage selection/estimation methods such as the Lasso, adaptive Lasso and SCAD.
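
A minimal sketch, assuming NumPy, of the basic fence principle: keep every candidate model whose lack-of-fit measure is within a cutoff of the best-fitting model, then choose the most parsimonious model inside the fence; the adaptive and restricted variants discussed in the talk choose the cutoff and the search space far more carefully:

    import numpy as np

    def fence_select(models, Q, dim, c):
        """models: candidate models; Q: lack-of-fit measure; dim: model dimension;
        c: fence cutoff. Returns the most parsimonious model inside the fence."""
        q = np.array([Q(m) for m in models])
        in_fence = [m for m, qm in zip(models, q) if qm <= q.min() + c]
        return min(in_fence, key=dim)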

May 8, 2015
Boris Botvinnik, University of Oregon
Conformal geometry and topology of manifolds  

We will discuss several aspects of conformal geometry and its relationship to the topology of manifolds. In particular, we will describe how analytical tools, such as the conformal Laplace and Dirac operators, are related to topological ones, such as surgery on smooth manifolds. We will aim to explain some applications of those tools to recent results on the space of metrics with positive scalar curvature.

May 15, 2015 
Heather Johnson, University of Colorado–Denver
Investigating roots of covariational reasoning: leveraging a dynamic computer environment to foster students’ shifts from variational to covariational reasoning  

Although forming and interpreting relationships between changing quantities—covariational reasoning—is essential for secondary students, little is known about its development. Dynamic computer environments (DCEs) can provide students opportunities to investigate and represent change in related quantities, yet students interacting with DCEs may not be coordinating change in quantities. To investigate roots of students’ covariational reasoning, I examined (1) how students might shift from variational to covariational reasoning when interacting with a DCE and (2) what design aspects of mathematical tasks might foster students’ shifts from variational to covariational reasoning. Using design experiment methodology, I designed a DCE involving a turning Ferris wheel and developed a sequence of related instructional tasks to provide students opportunities to form and interpret relationships between nontemporal quantities of distance, height and width, represented in the Ferris wheel DCE. Reporting results from a study with five ninth grade students, in which I implemented the Ferris wheel DCE and related tasks, I document a student’s shift from variational to covariational reasoning. Results of the study suggest that students’ images of change as continuing play a central role as they shift from variational to covariational reasoning. Tasks involving DCEs should provide opportunities for students to: (1) Envision how nontemporal quantities from the same measure spaces may change; (2) Predict how individual quantities may change prior to predicting how quantities may change together; (3) Make predictions about changing quantities prior to viewing dynamic computer animations and/or graphs. This research has implications for instruction related to key, gatekeeping concepts of function and rate.

May 22, 2015
Ralph Showalter, Oregon State University
Variational problems and mixed formulations  

Many results of functional analysis were motivated by the formulation of boundary-value problems as equations in function space. We recall some of these developments, compare with corresponding "mixed formulations" as a system of equations in function spaces, and illustrate some of their advantages and recent extensions.
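
A standard example of the contrast being drawn: for the Poisson problem $-\Delta u = f$ in $\Omega$ with $u = 0$ on $\partial\Omega$, the mixed formulation introduces the flux $p = \nabla u$ as an independent unknown and seeks $(p, u) \in H(\mathrm{div}, \Omega) \times L^2(\Omega)$ such that

    \int_\Omega p \cdot q \, dx + \int_\Omega u \, (\nabla \cdot q) \, dx = 0 \quad \text{for all } q \in H(\mathrm{div}, \Omega),
    \int_\Omega (\nabla \cdot p) \, v \, dx = -\int_\Omega f v \, dx \quad \text{for all } v \in L^2(\Omega),

trading the single second-order equation for a first-order system in two function spaces.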

May 29, 2015 
Andrew Bridy, University of Rochester
Automatic sequences and curves over finite fields  

An amazing theorem of Christol states that a power series y with coefficients in a finite field is an algebraic function if and only if its coefficient sequence can be produced by a finite automaton, a limited model of a computer with no memory beyond its finite set of states. In this situation, it is possible to represent the automaton by a family of differential operators acting on a curve whose function field contains y. I study this connection in detail and show how it can be used to draw a precise link between the complexity of the automaton and algebraic invariants of the function y, such as its degree and height.
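
A classical worked example of Christol's correspondence: over $\mathbb{F}_2$ the series $y = \sum_{n \ge 0} x^{2^n}$ satisfies $y^2 + y + x = 0$ (since $y^2 = \sum_{n \ge 0} x^{2^{n+1}} = y + x$ in characteristic 2), so $y$ is algebraic, and its coefficient sequence, the indicator of the powers of 2, is produced by a finite automaton reading the base-2 digits of $n$.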

June 5, 2015
Zachary Scherr, University of Pennsylvania
Polynomial Pell identities  

Toward the end of the 18th century, Euler discovered several polynomial identities relating to the Pell equation. Since that time, the study of polynomial Pell equations has been intimately intertwined with the discovery of many deep theorems and ideas in algebraic and arithmetic geometry. More recently there has been a push to try to classify all integral polynomial Pell identities. In this talk I will survey background on Pell and polynomial Pell equations, present new results on the classification of integral Pell identities, and discuss a framework for thinking about these problems.
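
A small identity of the kind in question: taking $D(t) = t^2 + 1$, the polynomials $x(t) = 2t^2 + 1$ and $y(t) = 2t$ satisfy

    x(t)^2 - D(t)\,y(t)^2 = (4t^4 + 4t^2 + 1) - (t^2 + 1)(4t^2) = 1,

a polynomial analogue of the classical Pell equation $x^2 - D y^2 = 1$.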