Maseeh Mathematics + Statistics Colloquium Series 2013-2014 Archive

October 4, 2013 
Long Chen, University of California, Irvine
Optimal Delaunay Triangulations

Optimal Delaunay triangulations (ODTs) are optimal meshes minimizing the interpolation error of a given function in the Lp-norm. In this talk we shall present several applications of ODTs:
Mesh smoothing: Meshes with high quality are obtained by minimizing the interpolation error in a weighted L1-norm.
Anisotropic mesh adaptation: An optimal anisotropic interpolation error estimate is obtained by choosing anisotropic functions. The error estimate is used to produce anisotropic mesh adaptation for convection-dominated problems.
Sphere covering and convex polytope approximation: Asymptotically exact and sharp estimates of some constants in these two problems are obtained from ODTs.
Quantization: Optimization algorithms based on ODTs are applied to quantization to speed up the processing.
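For reference, the ODT energy can be written in a standard form (this is the usual formulation from the literature, not necessarily the exact notation used in the talk): for a triangulation \mathcal{T} of a domain \Omega with fixed vertices, and the nodal linear interpolant u_I of the quadratic function u(x) = |x|^2, the ODT minimizes

    E_p(\mathcal{T}) \;=\; \| u_I - u \|_{L^p(\Omega)}, \qquad u(x) = |x|^2,

over all triangulations of the given vertex set; mesh smoothing then corresponds to minimizing a weighted L^1 version of this error over the vertex positions as well.
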
October 18, 2013
Jeff Borggaard, Interdisciplinary Center for Applied Mathematics, Virginia Tech
Reduced-order models of fluids for fast simulation  
Reduced-order models based on the proper orthogonal decomposition (POD) of Navier-Stokes simulations and Galerkin projection are commonly used as surrogates in design, control, and analysis of fluid systems. However, this approach has a number of limitations. One is that the accuracy of the reduced basis may not be adequate when the model is applied at parameter values different from those used to generate the original simulation data. A second is that even mild turbulence can slow the decay of the singular values in the POD, so that a dramatic truncation of the basis is required to achieve a reasonable model size; the influence of the discarded modes on the remaining modes must then be treated with additional modeling.  
In this talk, we discuss procedures to overcome these limitations. Computing derivatives of the POD basis with respect to parameters, such as the Reynolds number, allows us to expand the range of flows that can be modeled. This yields at least an order-of-magnitude improvement in relative accuracy for nearby parameter variations, as well as more effective prediction of dynamical-system properties such as the Strouhal number. Additionally, we propose models motivated by modern large eddy simulation (LES) closure models (variational multiscale and dynamic subgrid-scale models), along with an efficient two-level implementation, to better represent mildly turbulent 3D flow past a cylinder at Reynolds number 1000.  
Finally, we will report on our progress in developing reduced-order models for the airflow in buildings. These models may lead to incorporating airflow considerations earlier in the building control and design cycle.  
This is joint work with Sunil Ahuja, Imran Akhtar, Gene Cliff, Serkan Gugercin, Alexander Hay, Traian Iliescu, Christopher Jarvis, Zhu Wang, and Lizette Zietsman.
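For orientation, the basic POD ingredient referenced above can be sketched in a few lines; the snapshot data, truncation rank, and variable names below are hypothetical stand-ins, not the authors' implementation.

    import numpy as np

    def pod_basis(snapshots, r):
        """snapshots: (n_dof, n_snapshots) array of flow-field snapshots.
        Returns the leading r POD modes and all singular values."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :r], s

    # Synthetic data standing in for Navier-Stokes snapshots.
    X = np.random.randn(10_000, 200)
    modes, sv = pod_basis(X, r=10)
    energy_captured = np.sum(sv[:10]**2) / np.sum(sv**2)

    # A Galerkin ROM then evolves coefficients a(t) in the expansion
    # u(x, t) ~ sum_k a_k(t) * modes[:, k], obtained by projecting the
    # Navier-Stokes equations onto the span of the retained modes.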

October 25, 2013
Cameron Gordon, University of Texas at Austin
Left-orderability of 3-manifold groups  

The fundamental group is essentially a complete invariant of a 3-manifold. We will discuss how the purely algebraic property of this group being left-orderable is related to the topology of the manifold, and present evidence for the conjecture that it is equivalent to a property that is ultimately analytic in nature.  
This is joint work with Steve Boyer and Liam Watson.
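For readers unfamiliar with the term: a group G is left-orderable if it admits a strict total order < on its elements that is invariant under left multiplication,

    a < b \;\implies\; ga < gb \qquad \text{for all } a, b, g \in G.

(This is the standard definition; the talk concerns when the fundamental group of a 3-manifold has this property.)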

November 15, 2013 
Peter Monk, University of Delaware
Time domain integral equations for computational electromagnetism  

Scattering problems for Maxwell's equations can be solved in the frequency or time domain. In the frequency domain both finite element and boundary integral methods are in common use, and their relative strengths and weaknesses are well understood. In contrast, in the time domain the principal technique is the finite difference time domain method. However, time domain integral equations have become much more popular in recent years, although they still represent a considerable coding challenge. This can be mitigated by using the convolution quadrature approach, together with a boundary Galerkin method in space and efficient integral equation software.  
I shall outline the CQ method applied to Maxwell's equations using the problem of computing waves scattered by a penetrable object as a model problem. After discussing some properties of the scheme, I shall present some numerical results.
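As a generic reminder of how convolution quadrature (CQ) works (the standard form, not the talk's specific discretization): a time convolution with kernel k whose Laplace transform is K(s) is approximated on the grid t_n = n\Delta t by

    \int_0^{t_n} k(t_n - s)\, g(s)\, ds \;\approx\; \sum_{j=0}^{n} \omega_{n-j}(\Delta t)\, g(t_j),

where the weights are generated from the transfer function and the underlying time-stepping scheme via

    K\!\left(\frac{\delta(\zeta)}{\Delta t}\right) \;=\; \sum_{n \ge 0} \omega_n(\Delta t)\, \zeta^n,

with \delta(\zeta) the characteristic quotient of the multistep method (e.g. backward Euler or BDF2). Only the Laplace-domain operator K is ever evaluated, which is what allows frequency-domain integral-equation software to be reused.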

November 22, 2013 
He Hao, University of California, Berkeley
Sailing the electric grid through wind and sunshine  

The North American power network is often described as the largest and most complex system designed by humankind, and it has been rated the top engineering achievement of the 20th century. Due to growing environmental concerns as well as economic and political requirements, the future power grid will rely increasingly on renewable energy from wind and solar. The proper functioning of an electric grid requires a continuous power balance between supply and demand. However, renewable energy resources have a high degree of uncertainty, which presents a daunting challenge for power system operators in maintaining that balance. Hence, grid reliability will require more flexibility from generation, as well as flexible consumption through demand response.  
The thermal storage potential in buildings is an enormous untapped resource for providing various services to the power grid. Moreover, buildings account for 70% of total electricity consumption in the United States. Buildings are, therefore, a natural candidate for demand-side flexibility. In this talk, we discuss how to extract the flexibility of residential and commercial building energy consumption to enable deep penetration of renewable energy.

January 10, 2014 
Mainak Patel, Duke University
The essential role of phase delayed inhibition in decoding synchronized oscillations within the brain  

The widespread presence of synchronized neuronal oscillations within the brain suggests that a mechanism must exist that is capable of decoding such activity. Two realistic designs for such a decoder include: 1) a read-out neuron with a high spike threshold, or 2) a phase-delayed inhibition network motif. Despite requiring a more elaborate network architecture, phase-delayed inhibition has been observed in multiple systems, suggesting that it may provide inherent advantages over simply imposing a high spike threshold. We use a computational and mathematical approach to investigate the efficacy of the phase-delayed inhibition motif in detecting synchronized oscillations. We show that phase-delayed inhibition is capable of creating a synchrony detector with sharp synchrony filtering properties that depend critically on the time course of inputs. Additionally, we show that phase-delayed inhibition creates a synchrony filter that detects synchrony far more robustly than that created by a high spike threshold. A high spike threshold detects a minimum number of synchronous input spikes (absolute synchrony), while phase-delayed inhibition requires a fixed fraction of incoming spikes to be synchronous (relative synchrony). Furthermore, we show that, in a system with noisy encoders where stimuli are encoded through synchrony, phase-delayed inhibition enables the creation of a decoder that can respond both reliably and specifically to a stimulus, while a high spike threshold does not.
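A deliberately crude caricature of the two detector designs, just to make the absolute-versus-relative distinction concrete (the parameters and thresholds are arbitrary and are not taken from the models in the talk):

    def high_threshold(n_sync, theta=20):
        # Fires when the absolute number of synchronous input spikes reaches theta.
        return n_sync >= theta

    def phase_delayed_inhibition(n_sync, n_total, g_exc=1.0, g_inh=0.6):
        # Excitation from the synchronous volley arrives first; inhibition arrives
        # one phase later and is driven by all inputs, so the net drive depends on
        # the fraction of inputs that were synchronous rather than their raw count.
        return g_exc * n_sync - g_inh * n_total > 0

    # Same synchronous count, different total input counts:
    print(high_threshold(25), high_threshold(25))                               # True True
    print(phase_delayed_inhibition(25, 40), phase_delayed_inhibition(25, 200))  # True False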

January 17, 2014 
Lisa Madsen, Oregon State University
Simulating dependent discrete data  

Statisticians use simulated data to assess and compare the performance of statistical procedures. Therefore, the ability to simulate realistic data is an important tool. I will present a method to simulate count-valued dependent random variables that mimic observed data sets. The method simulates a correlated normal random vector, then transforms it to the desired marginal distributions. The difficulty is in establishing the normal correlations that yield the desired dependence, and even in characterizing the desired dependence. I focus on two measures of dependence, Pearson's product-moment correlation and Spearman's rank correlation. I will show how to determine the normal correlation matrix that will lead to any specified feasible Pearson or Spearman correlation matrix. To illustrate the method, I'll simulate data to mimic two real data sets, one longitudinal and the other ecological.
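A minimal sketch of the simulate-then-transform idea described above; the Poisson marginals and the latent correlation matrix are hypothetical stand-ins for quantities that would be fitted to data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    R = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.6],
                  [0.3, 0.6, 1.0]])    # correlation of the latent normal vector
    means = np.array([2.0, 5.0, 1.5])   # Poisson means for the three margins

    z = rng.multivariate_normal(np.zeros(3), R, size=10_000)  # correlated normals
    u = stats.norm.cdf(z)                                      # uniform margins
    y = stats.poisson.ppf(u, mu=means).astype(int)             # desired count margins

    # Note: R is the correlation of the latent normals, not of y; finding the R
    # that produces a *target* Pearson or Spearman correlation for y is the hard
    # step the talk addresses.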

January 22, 2014 
Panayot S. Vassilevski, Lawrence Livermore National Laboratory
Numerical upscaling by multilevel methods

Multigrid (or MG) is becoming the method of choice for solving the large sparse systems of algebraic equations that typically arise from discretized partial differential equations (or PDEs). We give a brief motivation for why this is the case, namely the potential optimality of MG. We describe some necessary and sufficient conditions for an optimal MG iteration method. Next, we focus on the "algebraic" versions of MG (or AMG). This refers to the case when the hierarchy of spaces needed to construct an MG method is not given, and hence has to be constructed by the user, generally in some problem-dependent way. The construction of operator-dependent coarse spaces with some guaranteed approximation properties is also useful for discretization (upscaling) purposes. Based on our theory (the necessary conditions), it turns out that the coarse spaces constructed by a good AMG solver have at least some weak approximation properties. In practice, however, stronger approximation properties are needed, in particular when the spaces are meant as a discretization (upscaling) tool. We present our element-based AMG approach to constructing coarse spaces with guaranteed approximation properties, targeting general classes of finite elements suitable for deriving coarse (upscaled) discretizations. The performance of the method is illustrated with a number of examples.
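For orientation, the usual two-grid setting (standard notation, not specific to the talk): with smoother M and coarse space given by the range of an interpolation operator P, the error-propagation operator of one symmetrized two-grid cycle is

    E_{TG} \;=\; (I - M^{-T}A)\,\bigl(I - P\,(P^{T}AP)^{-1}P^{T}A\bigr)\,(I - M^{-1}A).

Optimality of the iteration amounts to bounding \|E_{TG}\|_{A} uniformly in the problem size, which is where the approximation properties of the coarse space Range(P) enter; in AMG, P itself must be built from the matrix A.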

January 24, 2014 
Shari Moskow, Drexel University  
Scattering and resonances of thin high contrast dielectric structures

We consider scattering of electromagnetic waves by a thin structure of high index of refraction. We first consider the Helmholtz equation, and show that if the squared refractive index scales as O(1/h), where h is the thickness of the scatterer, an approximate solution based on perturbation analysis can be obtained which involves solving a 2D integral equation for a 3D problem. We then discuss extensions to Maxwell's equations in this context while accounting for material jumps, and the calculation of resonant frequencies by this asymptotic approach.  
This work involves papers with coauthors D. Ambrose, J. Gopalakrishnan, F. Santosa, S. Rome, and J. Zhang.

January 27, 2014 
Blanca Ayuso de Dios, Center for Uncertainty Quantification in Computational Science & Engineering
Division of Computer, Electrical and Mathematical Sciences & Engineering (CEMSE)
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Discontinuous Galerkin approximation to the Vlasov-Poisson system  

One of the simplest model problems in the kinetic theory of plasma physics is the Vlasov-Poisson system with periodic boundary conditions. This system describes the evolution of a plasma of charged particles (electrons and ions) under the effects of transport and a self-consistent electric field. In this talk, we introduce a family of discontinuous Galerkin methods for the approximation of the Vlasov-Poisson system.  
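In one common normalization (a single electron species against a neutralizing ion background; signs depend on the charge convention), the one-dimensional periodic system reads

    \partial_t f + v\,\partial_x f - E(t,x)\,\partial_v f = 0, \qquad
    \partial_x E(t,x) = \int_{\mathbb{R}} f(t,x,v)\,dv - 1,

where f(t,x,v) is the particle distribution function and E the self-consistent electric field.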
We shall discuss the error and convergence analysis and the properties of the proposed methods. We also present numerical experiments in the one dimensional case that validate the theory.  
In the last part of the talk, we shall discuss the possibility of combining the proposed methods with some dimension reduction techniques, such as sparse grids.  
The talk is based on joint work with Saverio Castelanelli (Zurich), J.A. Carrillo (Imperial College, UK), Soheil Hajian (Univ. Geneva) and Chi-Wang Shu (Brown University, US).

January 29, 2014
Jessica F. Ellis, University of California San Diego & San Diego State University
Preparing future college instructors: the role of Graduate Student Teaching Assistants (GTAs) in successful college calculus programs

In this presentation I detail two studies that have shaped my research program, both coming out of the Characteristics of Successful Programs in College Calculus (CSPCC) project. The first investigates the profile and experiences of students who initially indicate intent to pursue Calculus II (a proxy for Science, Technology, Engineering, and Mathematics (STEM) intention) and then no longer do so after taking Calculus I. Results of this study indicate that students who switch are less engaged during class, even when in the same class as their persister counterparts. Additionally, a significant number of these students are taught by Graduate-student Teaching Assistants (GTAs). In the second study, I contrast GTAs' Calculus instruction with that of other instructor types, and examine the various professional development experiences GTAs have that successfully prepare them as future professors. This study is rooted in a sociocultural approach to learning how to teach and becoming part of a community. By drawing on both quantitative and qualitative methods, I connect components of GTA professional development programs to instructors' conceptions, their practices, and their students' success. I conclude by articulating future directions for my emerging program of research.

January 31, 2014
Terry Speed, University of California, Berkeley
Co-methylation  

CpG methylation is a mitotically heritable epigenetic mark on DNA which plays a key role in genomic imprinting, X-inactivation, transcriptional regulation, tissue specificity and carcinogenesis. What is co-methylation? Loosely, it is the persistence of the methylated (M) or unmethylated (U) state along a chromosome. Slightly more precisely, it is the association between the methylation state at nearby CpGs, as a function of their separation. "Co-" here is meant to bring to mind correlation. This talk will summarize some results concerning co-methylation we obtained by analysing publicly available sequence data on whole-genome bisulphite-treated DNA. Our immediate goal was to see whether we can simulate whole-genome methylation data that is indistinguishable from the real thing. I'll explain why we want to do this. It turns out to be quite hard (for us, Terry Speed and Peter Hickey of the Walter & Eliza Hall Institute of Medical Research, Australia).
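A small sketch of the quantity being described, assuming binary methylation calls (1 = M, 0 = U) at known CpG positions; the positions, calls, and bin width are illustrative, not the analysis in the talk.

    import numpy as np

    def comethylation_by_separation(positions, states, max_sep=1000, bin_width=100):
        """positions: sorted CpG coordinates; states: 0/1 methylation calls.
        Returns, for each separation bin, the correlation between calls at
        CpG pairs whose genomic distance falls in that bin."""
        n_bins = max_sep // bin_width
        pairs = [[] for _ in range(n_bins)]
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                sep = positions[j] - positions[i]
                if sep >= max_sep:
                    break
                pairs[sep // bin_width].append((states[i], states[j]))
        corr = {}
        for k, p in enumerate(pairs):
            a = np.asarray(p, dtype=float)
            if len(p) > 2 and a[:, 0].std() > 0 and a[:, 1].std() > 0:
                corr[(k * bin_width, (k + 1) * bin_width)] = np.corrcoef(a.T)[0, 1]
        return corr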

February 5, 2014 
Brittany A. Erickson, San Diego State University 
Provably stable computational methods for solving time-dependent PDEs with an application to earthquake cycle modeling

Mathematical models for earthquake nucleation and propagation must contend with highly varying temporal and spatial scales of elastic motion on faults, geometric irregularities in fault slip surfaces, nonlinear friction laws, heterogeneous material properties, and inelastic response of the solid Earth.  We use high-order accurate finite difference methods to incorporate these physical and geometrical complexities and weak enforcement of boundary conditions to develop provably stable numerical methods.  Many of the challenges that arise are in fact common in Earth Science: the equations governing motion in the Earth can be highly nonlinear, numerically stiff, involve problems of constrained optimization, and have complicated interface and boundary conditions that can be very challenging to incorporate into a computational procedure.  To illustrate how our methods address these difficulties, I will share results from earthquake cycle simulations within heterogeneous basins of sediment common to several major faults worldwide and conclude with a discussion of how these results give insight into some outstanding questions in earthquake science.
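As a generic illustration of weak boundary enforcement in a provably stable finite difference method (this is the standard SBP-SAT model problem, not the earthquake-cycle equations themselves): for the advection equation u_t + a u_x = 0 with a > 0 and boundary data u(0, t) = g(t), a summation-by-parts operator D = H^{-1}Q imposes the condition through a penalty (SAT) term,

    \frac{d\mathbf{u}}{dt} \;=\; -a\,D\mathbf{u} \;+\; \tau\, a\, H^{-1}\mathbf{e}_1\bigl(g(t) - u_1(t)\bigr),

and the discrete energy method shows the semi-discretization is stable for \tau \ge 1/2. The same energy-method template is what underlies the provably stable schemes described above.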

February 10, 2014 
Matt Elsey, New York University
Local structure analysis and defect characterization in atomic-resolution images

Recent advances in atomic-resolution imaging allow us to capture enormous data sets, but techniques for efficiently analyzing these images are lacking. To address this shortcoming, we propose a variational method which yields a tensor field describing the local crystal strain at each point. Local values of this field describe the crystal orientation and elastic distortion, while the curl of the field locates and characterizes crystal defects and grain boundaries. The proposed energy functional has a simple L2-L1 structure which permits minimization via a split Bregman iteration, and GPU parallelization results in short computing times.
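In generic form (the standard split Bregman template rather than the talk's exact functional), an "L2-L1" problem

    \min_{u}\; \|\Phi(u)\|_{1} + \frac{\mu}{2}\,\|Au - f\|_{2}^{2}

is handled by introducing an auxiliary variable d \approx \Phi(u) and iterating

    (u^{k+1}, d^{k+1}) = \arg\min_{u,\,d}\; \|d\|_{1} + \frac{\mu}{2}\|Au - f\|_{2}^{2} + \frac{\lambda}{2}\|d - \Phi(u) - b^{k}\|_{2}^{2},
    \qquad b^{k+1} = b^{k} + \Phi(u^{k+1}) - d^{k+1},

where the u-update is a linear (L2) solve, well suited to GPU parallelization, and the d-update is elementwise soft-thresholding.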

February 12, 2014
Yeonwoo Rho, University of Illinois at Urbana-Champaign
Inference for time series regression models with nonstationary errors

In time series regression problems, it is often assumed that the errors in regression models are stationary with weak dependence, and existing inferential procedures depend critically on this assumption. Recently, there has been a surge of awareness that the stationary-error assumption is too restrictive, and for many time series of macroeconomic and climate variables the errors exhibit strong nonstationarity. Thus there is a need to develop new inference methods that account for nonstationary errors. In this talk, we consider two problems in time series regression: inference on the parameter vector in deterministic trend models, and unit root testing in stochastic trend models. In both models, we allow for general forms of nonstationary errors, which can accommodate both smooth and abrupt changes in second-order properties. For commonly used statistics based on ordinary least squares estimators, we derive the limiting null distributions, which depend on the unknown nonstationarity of the errors in a nontrivial way. To perform the inference, we propose to use the wild bootstrap and one of its variants to approximate the nonpivotal limiting null distributions, and we rigorously justify the consistency of this approach. Numerical results will be presented to demonstrate the size or coverage accuracy achieved by our procedure in comparison with existing counterparts in the presence of nonstationary errors.
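A minimal sketch of the basic wild bootstrap recipe for a trend regression y = X beta + e; the design matrix, test statistic, and multiplier distribution are illustrative choices rather than those of the talk (one common variant, alluded to above, makes the multipliers serially dependent to cope with dependent errors).

    import numpy as np

    rng = np.random.default_rng(1)

    def ols(X, y):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    def wild_bootstrap(X, y, stat, B=999):
        """Return the observed statistic and B bootstrap draws of it.
        Residuals are multiplied by external Rademacher weights, which keeps
        their local scale (and hence nonstationarity in the variance) while
        approximating the null distribution of the statistic."""
        beta = ols(X, y)
        fitted, resid = X @ beta, y - X @ beta
        draws = np.empty(B)
        for b in range(B):
            w = rng.choice([-1.0, 1.0], size=len(y))
            draws[b] = stat(X, fitted + resid * w)
        return stat(X, y), draws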

February 19, 2014 
Ruriko Yoshida, University of Kentucky
Optimality of the Neighbor Joining Algorithm and Faces of the Balanced Minimum Evolution Polytope

Balanced minimum evolution (BME) is a statistically consistent distance-based method to reconstruct a phylogenetic tree from an alignment of molecular data. In 2008, Eickmeyer, Huggins, Pachter, and I developed the notion of the BME polytope, the convex hull of the BME vectors obtained from Pauplin's formula applied to all binary trees. We also showed that the BME can be formulated as a linear programming problem over the BME polytope. The BME is related to the Neighbor Joining (NJ) algorithm, now known to be a greedy optimization of the BME principle. Further, the NJ and BME algorithms have been studied previously to understand when the NJ algorithm returns a BME tree for small numbers of taxa. In this talk we aim to elucidate the structure of the BME polytope and strengthen knowledge of the connection between the BME method and the NJ algorithm. We first show that any subtree-prune-regraft move from one binary tree to another corresponds to an edge of the BME polytope. Moreover, we describe an entire family of faces parametrized by disjoint clades. We show that these clade-faces are smaller-dimensional BME polytopes themselves. Finally, we show that for any order of joining nodes to form a tree, there exists an associated distance matrix (i.e., dissimilarity map) for which the NJ algorithm returns the BME tree. More strongly, we show that the BME cone and every NJ cone associated to a tree T have an intersection of positive measure. We end this talk with current and future projects on phylogenomics with biologists at the University of Kentucky and Eastern Kentucky University. This work is supported by NIH.
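For reference, Pauplin's formula as it is standardly stated: for a binary tree topology T on the taxon set and input dissimilarities d_{ij}, the balanced length estimate is

    \hat{\ell}(T) \;=\; \sum_{i < j} 2^{\,1 - t_{ij}(T)}\, d_{ij},

where t_{ij}(T) is the number of edges on the path joining taxa i and j in T. The BME tree minimizes \hat{\ell}(T) over all binary topologies, and the weight vectors (2^{\,1 - t_{ij}(T)})_{i<j} are the BME vectors whose convex hull is the BME polytope.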

February 21, 2014 
Colin Starr, Willamette University
Prime Distance Graphs

A graph G is a prime distance graph (respectively, a 2-odd graph) if its vertices can be labeled with distinct integers such that for any two adjacent vertices, the difference between their labels is prime (respectively, either 2 or odd). We seek to characterize prime distance graphs as well as some generalizations of them. In this talk, I will make connections between this problem and several famous problems and theorems in number theory.
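A small checker for these definitions; the example graph (a 4-cycle) and its labeling are illustrative only.

    from sympy import isprime

    def is_prime_distance_labeling(edges, labels):
        # Every edge's label difference must be prime.
        return all(isprime(abs(labels[u] - labels[v])) for u, v in edges)

    def is_two_odd_labeling(edges, labels):
        # Every edge's label difference must be 2 or an odd number.
        return all(abs(labels[u] - labels[v]) == 2 or abs(labels[u] - labels[v]) % 2 == 1
                   for u, v in edges)

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle
    labels = {0: 0, 1: 2, 2: 5, 3: 7}          # edge differences: 2, 3, 2, 7
    print(is_prime_distance_labeling(edges, labels))   # True
    print(is_two_odd_labeling(edges, labels))          # True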

February 21, 2014 
Lizhen Lin, Duke University
Shape constrained regression using Gaussian process projections

Shape constrained regression analysis has applications in dose-response modeling, environmental risk assessment, disease screening, and many other areas. Incorporating the shape constraints can improve estimation efficiency and avoid implausible results. In this talk, I will discuss nonparametric methods for estimating shape constrained (mainly monotone constrained) regression functions. I will focus on a novel Bayesian method from our recent work for estimating monotone curves and surfaces using Gaussian process projections. Inference is based on projecting posterior samples from the Gaussian process. Theory is developed on continuity of the projection and rates of contraction. Our approach leads to simple computation with good performance in finite samples. The projection approach can be applied in other constrained function estimation problems, including in multivariate settings.
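A sketch of the project-the-posterior idea, with the projection onto monotone functions implemented here as isotonic regression and the posterior draws replaced by synthetic stand-ins (both are assumptions for illustration, not the talk's exact construction).

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 50)
    # Stand-ins for unconstrained Gaussian-process posterior draws of f(x):
    draws = np.cumsum(rng.normal(0.05, 0.2, size=(200, x.size)), axis=1)

    iso = IsotonicRegression(increasing=True)
    projected = np.array([iso.fit_transform(x, f) for f in draws])  # monotone versions

    # Pointwise summaries of the monotone-constrained posterior:
    post_mean = projected.mean(axis=0)
    lower, upper = np.percentile(projected, [2.5, 97.5], axis=0)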

February 24, 2014 
Alden Edson, Western Michigan University
Implications of highly interactive digital instructional materials for mathematics teacher education

Digital instructional resources are rapidly replacing print materials, offering a promising direction for education at all levels. Participants in this interactive session will experience pedagogical and tool features of a highly interactive digital instructional unit focusing on binomial distributions and statistical inference. How these features were enacted in high school classrooms during a design experiment with a focus on the roles of both teacher and students will be discussed. Collectively, we will consider implications of the increasing use of digital instructional materials for re-thinking the experiences needed in teacher preparation and professional development programs.

March 3, 2014
Steven J. Boyce, Virginia Tech
Modeling Students’ Units Coordinating Activity

Primarily via constructivist teaching experiment methodology (Steffe & Thompson, 2000), units coordination (Steffe, 1992) has emerged as a useful construct for modeling students’ psychological constructions pertaining to several mathematical domains. The number of levels and types of numerical units that students can assimilate and coordinate are used to distinguish particular counting sequences (Steffe & Cobb, 1988), whole number multiplicative conceptions (Hackenberg & Tillema, 2009), and fraction schemes (Steffe & Olive, 2010). Lack of units coordination has subsequently been implicated as a constraint in students’ reasoning relating to signed integer addition (Ulrich, 2012) and understanding of linear equations (Hackenberg & Lee, 2012).  In this talk, I will discuss how I am extending the teaching experiment methodology to model students’ units coordinating activity across contexts, using the construct of propensity to coordinate units.  I will demonstrate my approach using data from a teaching experiment with sixth-grade students that focused primarily on fractions. I will then discuss the viability and utility of the resulting model, including findings suggesting instructional sequencing that may support students’ constructions of more powerful fractions schemes.

March 5, 2014 
Manuel de León, Instituto de Ciencias Matemáticas (ICMAT)
A brief history of geometric mechanics

In this talk we will take a historical tour from the origins of mechanics to the present day. In particular, we will show how applying geometry to mechanics led to the discipline now known as geometric mechanics. The tour will highlight the prominent figures who have contributed to this discipline.

April 18, 2014 
Angélica Osorno, Reed College
Why do algebraic topologists care about categories?  

The study of category theory was started by Eilenberg and MacLane, in their effort to codify the axioms for homology. Category theory provides a language to express the different structures that we see in topology, and in most of mathematics. Categories also play another role in algebraic topology. Via the classifying space construction, topologists use categories to build spaces whose topology encodes the algebraic structure of the category. This construction is a fruitful way of producing important examples of spaces used in algebraic topology. In this talk we will describe how this process works, starting from classic examples and ending with some recent work.

April 25, 2014 
Luis Caffarelli, University of Texas at Austin
Nonlinear problems with nonlocal diffusions  

Diffusion processes, like the flow of heat, flow in porous media, the random evolution of a population, or the price of a good, have traditionally been described by continuous processes, but in many cases discontinuities (large jumps in a price, non-local information in the behavior of a population, and other factors) need to be treated in their mathematical description by non-local interactions. We will discuss a few examples and results.
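The prototypical nonlocal diffusion operator in this setting (stated for orientation; the talk covers a broader class of problems) is the fractional Laplacian,

    (-\Delta)^{s}u(x) \;=\; c_{n,s}\; \mathrm{P.V.}\!\int_{\mathbb{R}^{n}} \frac{u(x) - u(y)}{|x - y|^{\,n + 2s}}\, dy, \qquad 0 < s < 1,

which weights the values of u at all distances and thereby models diffusions driven by jump processes rather than Brownian motion.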

May 2, 2014 
Daniel Asimov, University of California, Berkeley
On periodic surface maps  

We discuss periodic automorphisms of closed surfaces, with a focus on rotation numbers at fixed points. Our goal is to represent such maps in a geometric manner that aids the intuition. This is a work in progress.

May 30, 2014 
Erin Glover, Portland State University, and Karen Marrongelle, Oregon University System
Scouring educational research literature on calculus: Getting the new National Council of Teachers of Mathematics Research Handbook off of the ground  

The National Council of Teachers of Mathematics is preparing the third Handbook of Research on Mathematics Teaching and Learning. This Handbook is considered to be the go-to resource for all mathematics education researchers. The development of the chapter on Calculus Learning and Teaching has begun with a comprehensive review of four decades of research literature related to calculus. The focus of the talk will be on ‘lessons learned’ in conducting this literature search. We will begin by describing the goals for the NCTM Handbook and the process of creating it. Then we will describe the process of conducting a comprehensive search of the literature. We will share techniques and tips that will be beneficial for any student or faculty member interested in educational research.

May 30, 2014 
Michael Neilan, University of Pittsburgh
Finite element methods for elliptic partial differential equations in non-divergence form  

The finite element method is a powerful and ubiquitous tool in scientific computing and numerical analysis for computing approximate solutions to partial differential equations (PDEs). A contributing factor of the method's success is that it naturally fits into the functional analysis framework of variational models. In this talk I will discuss finite element methods for PDEs problems that do not conform to the usual variational framework, namely, elliptic PDEs in non-divergence form. I will first present the derivation of the scheme and give a brief outline of the convergence analysis. Finally, I will present several challenging numerical examples showing the robustness of the method as well as verifying the theoretical results.