Seminar Abstracts - University of Houston

Seminar Abstracts

2015 Spring Semester.

  • 03/06/2015,  Professor Wenjiang Fu, Some Mysterious and Yet Open Problems in Statistics and Their Applications.

    In economics, marketing research and business management, it is important to estimate the temporal trend of sales of products or the market share of a business during a period of time. In public health, likewise, it is crucial to estimate the temporal trend of chronic disease (cancer, cardiovascular diseases, etc.) mortality rates. Often the sales of products vary with the age of consumers (e.g. sales of cosmetic products or life insurance policies), and the mortality rate varies with the age of patients (e.g. mortality rate of breast cancer). More importantly, they also vary largely with the cohort (or generation) effect, which shows large differences from older to younger generations. This presents a major challenge in statistical modeling: the identifiability problem, which still remains open. I will present this complicated problem using a simple graphical method. I will also present its sister problem, the selection of the reference population in direct age-standardization, which also remains open to date. I hope this will lead to major discussions among students and encourage them to work on the problems for their theses.


  • 02/20/2015,  Professor Jingmei Qiu, High order numerical methods for hyperbolic problems and applications.

    In this talk, I will introduce a family of high-order numerical methods for hyperbolic problems. I will also present their applications in many fields of science and engineering, including fluid dynamics, plasma physics, climate and traffic flow simulations.


  • 03/27/2015,  Professor Kresimir Josic, Mathematics as biology's new microscope

    The 21st century promises unprecedented advances in biology. We are now able to observe how thousands of cells interact to form actions and thoughts. On the other hand, synthetic biology offers the possibility of engineering new organisms. However, the complexity of biological systems means that intuition and quantitative descriptions will be insufficient to guide the way. Mathematics will be essential for interpreting the results of novel imaging methods, using them to understand how biological systems function, and translating this knowledge into tools that will advance medicine.

2014 Fall Semester.

  • 11/19/2014,  Professor Mark Tomforde, The Mad Veterinarian Puzzle.

There is a collection of problems which have come to be known as "Mad Veterinarian Puzzles". In these problems a Mad Veterinarian has various transmogrifying machines that can turn a single animal into collections of other animals (for example, one machine might turn a single cat into a collection of two dogs and three mice). These puzzles ask whether the Mad Veterinarian can turn certain collections of animals into other collections using the machines.

In this talk, I will discuss the Mad Veterinarian Puzzle and some interesting (and unexpected!) connections with groups, algebras, and C*-algebras. I'll assume very little background, and the talk should be accessible even to undergraduates who have a little knowledge of equivalence relations, linear algebra, and groups.

  • 10/24/2014,  Professor Hongkun Zhang, Chaotic Billiards and Their Applications.

I will briefly introduce billiard dynamical systems, including both integrable and chaotic billiards. Then we will go over the motivation from the ergodic and statistical points of view. The study of billiards has applications in many scientific fields, including fluid mechanics, physics, chemical mechanics, computer graphics and financial math.

  • 09/19/2014,  Professor Manos Papadakis, Neuroscience Imaging from the Perspective of an Analyst.

Unlike traditional optical microscopy, fluorescence microscopy is based on the principle of absorption and subsequent excitation of fluorophores emitting light with separable spectral properties. Through chemical conjugation, these fluorophores can be linked with a high degree of specificity to single molecules and used as probes to track sub-cellular localization and expression pattern of any protein of interest. Hence, fluorescence microscopy can be invaluable to unravel structure-function properties at the single cell level, in tissues, or in whole organisms. Furthermore, combined with automated image acquisition and robotic handling, it can be applied for the systematic study of morphological changes within large populations of cells under a variety of perturbations (e.g., drugs, compounds, gene silencing), so that automated fluorescence microscopy has become an essential technique for discovering new molecular pathways in diseases or new potential pharmacological treatments.

One of the several challenging problems is the analysis of images acquired with confocal microscopes, which are noisy and show a perplexing appearance of individual neuronal components, such as axons, somas and dendritic branches. We will showcase how mathematical analysis and probability theory provide effective tools to discriminate the neuronal components of individual neurons, sort out their associations with various cells, and automatically extract metrics assessing the association between individual neurons in small neuronal circuits. All images that we will show are of live neuronal cells organized in small circuits, thus enabling the monitoring of their responses and the understanding of the relation between structure and function at the cellular level.

  • 09/12/2014,  Professor Robert Azencott, Estimation of joint SDE models  for Stock Prices and  Volatilities:  Impact of modeling errors on Option Pricing.

Joint dynamics of stock prices and volatilities are often modeled by Heston stochastic differential equations, where coefficients have to be estimated from stock price data. With Yutheekha Ghadyan and Roland Glowinski, we have quantified how estimation inaccuracies impact option pricing.

Currently, with Peng Ren and Ilya Timofeyev, we have begun evaluating the impact of replacing the (unobservable) theoretical volatility by the computable "realized" volatility.
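For context, the Heston model referenced above couples the stock price and its instantaneous variance through two correlated SDEs; in standard textbook notation (the symbols below are the usual ones, not taken from the talk):

```latex
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad
dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^{(2)}, \qquad
d\langle W^{(1)}, W^{(2)}\rangle_t = \rho\,dt.
```

The coefficients $(\mu, \kappa, \theta, \xi, \rho)$ are exactly the quantities that must be estimated from stock price data.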

2014 Spring Semester.

  • 1/24/2014,  Professor Edward Kao, Financial Math Introduction and FMO Program at UH.

The purpose of this talk is to introduce the subject of financial math to those who have little knowledge about it and would like to get some ideas about it. We will go over a brief history of the evolution of mathematical finance as a serious scientific discipline. We will survey the areas in which research has been carried out and the employment prospects of quants. We will also give a briefing on the FMO program in the department.

  • 2/21/2014,  Professor Ilya Timofeyev, Traffic Modeling: From Microscopic to Effective PDE Models.

In this talk I will discuss the connection between stochastic multi-agent Cellular Automata models for car and pedestrian traffic and effective PDE models. Agents in stochastic models follow a set of probabilistic rules for motion on an equally spaced lattice, and effective macroscopic PDE models for the density can be derived. In particular, the famous Lighthill-Whitham-Richards model of car traffic can be derived using this approach.

The PDE models are much more computationally efficient and can be used to predict the averaged behavior of traffic systems.
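For reference, the Lighthill-Whitham-Richards model mentioned above is a scalar conservation law for the car density $\rho(x,t)$ (the flux form below is the standard textbook statement, given here as background):

```latex
\partial_t \rho + \partial_x\big(\rho\, v(\rho)\big) = 0,
```

where $v(\rho)$ is a prescribed velocity-density relation, e.g. the classical Greenshields choice $v(\rho) = v_{\max}\,(1 - \rho/\rho_{\max})$.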

2013 Fall Semester.

  • 11/01/2013,  Professor David Blecher, Noncommutative topology and prescribing behaviour of noncommutative functions on noncommutative subsets

This will be about noncommutative topology and noncommutative Urysohn lemmas. And more generally, finding noncommutative versions of functions (usually from a fixed algebra of operators) that have certain behaviours on certain noncommutative sets.

2013 Spring Semester.

  • 04/05/2013,  Professor Min Ru, It is as easy as abc

The usually quiet world of mathematics is abuzz with a recent claim that one of the most important problems in number theory has been solved. Mathematician Shinichi Mochizuki of Kyoto University in Japan released last year a 500-page proof of the abc conjecture (see the New York Times: "A Possible Breakthrough in Explaining a Mathematical Riddle", Sept 17, 2012).

If Shinichi Mochizuki's 500-page proof stands up to scrutiny, mathematicians say it will represent one of the most astounding achievements of mathematics of the twenty-first century.

The proof will also have ramifications all over mathematics, and even in the real-world field of data encryption.

The abc conjecture was proposed independently by David Masser and Joseph Oesterle in 1985. The abc conjecture, if proved true, at one stroke solves many famous Diophantine problems, including Fermat's Last Theorem.

In this talk, I'll discuss the abc conjecture. While (of course) I am unable to talk about Mochizuki's proof, I'll indicate that the statement of the conjecture is amazingly simple, compared to the deep questions in number theory, and that this strange conjecture turns out to be equivalent to many of the main problems.
It is also deeply related to (algebraic and differential) geometry and (complex) analysis, especially Nevanlinna theory. It's at the center of everything that's been going on.
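For readers who have not seen it, the statement really is short. Writing $\operatorname{rad}(n)$ for the product of the distinct primes dividing $n$, the conjecture asserts:

```latex
\forall\, \varepsilon > 0\ \exists\, K_\varepsilon > 0:\quad
c < K_\varepsilon \,\operatorname{rad}(abc)^{1+\varepsilon}
\quad \text{for all coprime positive integers } a + b = c.
```

For example, $\operatorname{rad}(2^{10}\cdot 3) = 6$, so triples like $a=1$, $b=3071$, $c=3072$ with small radical are exactly the rare exceptions the constant $K_\varepsilon$ must absorb.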

2012 Fall Semester.

  • 11/02/2012,  Professor Bernhard Bodmann, Phase recognition: From X-ray crystallography to factoring polynomials

This talk concerns a problem in non-linear signal acquisition which has a long history, unsolved problems and many modern applications: signal recovery from intensity measurements. A notorious example is X-ray crystallography, the determination of a function from the magnitude of its Fourier transform. After a brief overview of the history of this inverse problem, we study a toy model, determining a complex polynomial from its magnitudes on the unit circle. This simple problem already exhibits the main difficulties that need to be overcome in X-ray crystallography and points to methods that resolve the underdetermined nature of intensity measurements.
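The toy model can be stated in one line (a standard formulation of the problem, included here as background): given the modulus of a polynomial on the unit circle,

```latex
|p(e^{i\theta})|, \quad \theta \in [0, 2\pi), \qquad p(z) = \sum_{k=0}^{n} a_k z^k,
```

recover the coefficients $a_k$. The data determine $p$ only up to a unimodular constant and reflection of roots across the unit circle, which is precisely the underdetermined nature of intensity measurements referred to above.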

2012 Spring Semester.

  • 04/13/2012,  Professor Demetrio Labate, Computed Tomography and the Shearlet Representation.

The Radon transform, introduced by Johann Radon in 1917, is the underlying mathematical foundation for a number of methods employed to determine structural properties of objects by using projected information, such as computed tomography. In this talk, we show that the Shearlet representation can be applied to obtain a highly effective reconstruction algorithm which provides near-optimal rate of convergence in estimating a large class of images from noisy Radon data. This is achieved by introducing a Shearlet-based decomposition of the Radon operator and applying a thresholding scheme on the noisy Shearlet transform coefficients. We present numerical demonstrations showing that the Shearlet-based approach outperforms both traditional methods and similar strategies based on wavelets and curvelets.
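As background, the 2D Radon transform collects the line integrals of an image $f$; parametrizing lines by angle $\theta$ and signed distance $s$ from the origin, the standard definition is:

```latex
(Rf)(\theta, s) = \int_{\mathbb{R}} f\big(s\cos\theta - t\sin\theta,\ s\sin\theta + t\cos\theta\big)\,dt.
```

Computed tomography measures (noisy samples of) $Rf$ and must invert this map to recover $f$.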

  • 2/24/2012,  Professor Jingmei Qiu, Discontinuous Galerkin methods and their super convergence.

Various super convergence properties of the discontinuous Galerkin (DG) method for hyperbolic conservation laws have been investigated in the past. They include the super convergence in negative norm and for properly post-processed solutions, the super convergence at Radau and downwind points, the super convergence of the dispersion and dissipation error of physically relevant eigenvalues in Fourier analysis, and the super convergence toward a special projection of the exact solution, among many others. Due to these super convergence properties, the DG method has been known to provide good wave resolution properties, especially for long time integrations.

In this talk, via a Fourier approach, we observe that the error of the DG solution can be decomposed into two parts: (1) the dispersion and dissipation error of the physically relevant eigenvalue; this part of the error grows linearly in time and is of order $2k+1$; (2) the projection error: there exists a special projection of the exact solution such that the numerical solution is much closer to this special projection than to the exact solution itself; the magnitude of this part of the error does not grow in time. Based on this fact, we conclude that the error of the DG solution will not grow over a period of time that is on the order of $h^{-k}$, where $h$ is the spatial mesh size and $k$ is the degree of the polynomial space.
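Schematically, and assuming the usual $O(h^{k+1})$ size for the projection part (an illustrative assumption, not a statement from the talk), the two-part decomposition gives a bound of the form:

```latex
\|u_h(\cdot,t) - u(\cdot,t)\| \ \lesssim\ \underbrace{t\, h^{2k+1}}_{\text{dispersion/dissipation}} \ +\ \underbrace{h^{k+1}}_{\text{projection}},
```

so the growing term stays of size $O(h^{k+1})$ exactly as long as $t = O(h^{-k})$, which is the non-growth window quoted above.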

  • 02/17/2012,  Tom Mrowka (MIT), Instantons and their impact on low dimensional topology

In 1983 Donaldson shocked the mathematical community by using moduli spaces of instantons to show that certain topological four-manifolds admitted no smooth structure. The existence of these topological four-manifolds was only proved a year earlier by Michael Freedman. Thus began a lively interaction between low dimensional topology and physics which has led to a resolution of many old problems in low dimensional topology. This talk will survey some of the highlights of this story.

2011 Fall Semester.

  • 10/21/2011,  Annalisa Quaini, The Coanda Effect for Incompressible Fluids

The Coanda effect is the tendency of a fluid jet to be attracted to a nearby surface. The importance of this phenomenon in aircraft development has been widely recognized for over 70 years. Recently, the Coanda effect has gained popularity also among cardiologists: it is seen (thanks to echocardiography) in certain patients affected by a disorder called mitral regurgitation (MR). The correct assessment of MR through echocardiography is hampered by the Coanda effect, with negative consequences for the patients. Understanding the causes of the Coanda effect for an incompressible fluid (like blood in the heart) will lead to big improvements in MR diagnosis. This is our motivation to undertake a numerical study of the fluid dynamics properties associated with the Coanda effect. Results will be shown in 2D and 3D geometries.

  • 10/14/2011,  David Blecher, Matrices and the nature of mathematical discovery

This is adapted from a talk written for a very general audience of scientists and undergraduates (i.e. purposefully nontechnical). We begin by talking about matrices, and a generalization of matrices, their importance in mathematics and physics, and the basic matrix operations (adding and multiplying matrices). It is surprisingly easy to think up questions about matrices involving only adding and multiplying, which are open, or have very subtle answers. These answers can also lead into interesting research programs when one sees what happens when one asks the same questions in more general settings. I will talk about how matrices are generalizations of numbers, review some basic matrix theory and linear algebra, and mention some recent questions we have worked on of the type just described, some of which are solved and others not.

  • 09/09/2011,  Rebecca Chen, Spatio-temporal calcium smoothing in dendritic trees

Synaptic plasticity is the cell's ability to change the strength of its connections with other cells, and may be the cellular basis of learning and memory. We are interested in the mechanisms underlying such changes in synaptic connections. To study them, we have employed a fast, random access two photon microscope to image temporally evolving changes in calcium concentration across the dendritic tree. We propose two protocols for experimentation that test the Hebbian theory "Cells that fire together wire together", and have two statistical methods of inference to analyze the spatially sparse and noise-corrupted data.

2011 Spring Semester.

  • 04/08/2011,  Jose Manuel Lopez, Writing MEX files for MATLAB

MATLAB is a high level language for technical computing which provides many tools for computation, visualization and programming development.

In spite of these advantages, MATLAB suffers from the drawback of being computationally very expensive for some tasks. One way to overcome this pitfall is to be able to combine the user friendly benefits of this scripting language with the efficiency of C/Fortran/C++.

In this talk I will explain how to create and compile MEX files in order to call Fortran, C and C++ routines from MATLAB. This talk will be particularly useful for graduate students who are beginning their research and are faced with computational challenges.
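To give a flavor of what such a file looks like, here is a minimal MEX gateway in C that squares a vector element-wise. This is a sketch only: it must be compiled with MATLAB's mex tool against mex.h and cannot run standalone, and the file name sq.c is made up for the example.

```c
#include "mex.h"

/* MATLAB calls mexFunction instead of main(). */
void mexFunction(int nlhs, mxArray *plhs[],
                 int nrhs, const mxArray *prhs[])
{
    /* Expect exactly one double array as input. */
    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("Expected one double array input.");

    mwSize n = mxGetNumberOfElements(prhs[0]);
    plhs[0] = mxCreateDoubleMatrix(1, n, mxREAL);

    double *in  = mxGetPr(prhs[0]);
    double *out = mxGetPr(plhs[0]);
    for (mwSize i = 0; i < n; i++)
        out[i] = in[i] * in[i];   /* element-wise square */
}
```

From the MATLAB prompt one would build and call it with `mex sq.c` followed by `sq([1 2 3])`.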

  • 03/04/2011,  Ilya Timofeyev, Stochastic Modeling and Coarse-Grained Closures

In this talk I will give an overview of my research related to stochastic modeling of several processes, including
(i) stochastic modeling of small scales in fluid dynamics,
(ii) modeling of highway and pedestrian traffic, and
(iii) parameter estimation in stochastic systems from discrete observations.
I will present some of the background material and simple examples illustrating the key concepts in each area.

  • 02/25/2011,  Manos Papadakis, A tale of two problems

This talk is not intended to be a talk with heavy math. It is actually a talk intended for a wide audience, including senior undergraduates who major or minor in mathematics. We will present two problems: face recognition in 2D images, and detection of dendritic arbors and spines of live neurons in neuroscience. One of the main and unsolved challenges in face recognition is to identify point landmarks, that is, points of anatomical significance on a face, that can then be used in the recognition process. The difficulties, when images are acquired in realistic conditions that we assume are not poor, are lighting variation and shadows, as well as pose and occlusion of some of these landmarks. In the second problem we want to extract the geometric characteristics of the dendritic structure and of its numerous protrusions called spines, which are attached to the arbor and are suspected to be an integral part of the synaptic network. The input images are 3D data volumes of neurons acquired by confocal microscopes and are occasionally corrupted by noise. Both problems require processing of the data that enables the extraction of pertinent information to be carried out in the original dimensionality of each data set, and that is not influenced by variations of the input due to rigid motions and changes in orientation.

2010 Fall Semester.

  • 10/22/2010,  Min Ru, The geometry of doughnuts

A compact real surface looks like a doughnut with several holes (the number of holes is called the genus of the surface). The (total) geometry of the surface turns out to be deeply related to its topological properties (for example, the number of holes). A perfect example is the so-called "Gauss-Bonnet theorem". In this talk, I'll discuss the Gauss-Bonnet theorem for compact surfaces, as well as Prof. Chern Shiing-Shen's higher-dimensional generalization. I'll try to explain how Chern's intrinsic proof of the Gauss-Bonnet theorem led him to discover the famous "Chern characteristic classes".
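The theorem in question fits in one line: for a compact orientable surface $M$ of genus $g$ with Gaussian curvature $K$,

```latex
\int_M K \, dA = 2\pi\,\chi(M) = 2\pi\,(2 - 2g),
```

so the total curvature is determined entirely by the number of holes: $4\pi$ for a sphere, $0$ for a torus, negative for every surface with two or more holes.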

  • 10/01/2010,  Anando Sen, Tomographic reconstruction from highly collimated CT data

CT scans are used to detect tumors in various organs of the body. X-rays are projected onto the organ to obtain image data. The data is then inverted to reconstruct the image of the organ, which is checked for the presence of a tumor. A major disadvantage of the CT scan is that it is not local: to reconstruct a small region around the tumor one needs to scan the whole organ, hence increasing the exposure to harmful radiation. We develop and implement an algorithm to significantly improve local reconstruction while at the same time keeping the exposure to radiation to a minimum. The 2D problem is presented in detail, while certain aspects of the 3D case are also discussed.

  • 09/10/2010,  Robert Rosenbaum, Vasudha Sehgal, Helen Elwood and Rebecca Chen, Using the department's compute servers

Do you have large computing jobs that use too much memory or too much CPU time to run on your desktop computer? We will explain the technical know-how needed to use the math department's compute servers.


2010 Spring Semester.

  • 04/23/2010, Mikhail Perepelitsa, Hyperbolic regularization in conservation laws

We will discuss some qualitative properties of the solutions to conservation laws using the example of the simplest such equation, Burgers' equation. Unlike the linear transport, which only shifts the initial data without changing its properties, the evolution of the solution by the nonlinear transport makes solutions smoother, when considered in the appropriate functional setting. To get to this result we will introduce the class of weak (discontinuous) solutions to Burgers' equation. Such solutions contain shock and rarefaction waves. We will consider how these waves propagate relative to each other to get a view of the structure of the solutions. The analysis is geometric in nature and is easily followed with standard calculus tools.
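For reference, the equation discussed above, and the jump condition governing its shocks, are:

```latex
u_t + \Big(\tfrac{1}{2}\,u^2\Big)_x = 0, \qquad \sigma = \frac{u_L + u_R}{2},
```

where $\sigma$ is the Rankine-Hugoniot speed of a shock separating left and right states $u_L > u_R$; when instead $u_L < u_R$, the discontinuity opens into a rarefaction wave.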


  • 04/02/2010, Gordon Heier, Introduction to complex geometry

This talk will give an introduction to higher-dimensional complex geometry, with special emphasis on aspects of the classification of complex projective varieties and manifolds. Some open problems will be discussed. Previous familiarity with basic complex analysis and commutative algebra will be helpful, but is not required to follow the general line of thought.


2009 Fall Semester.

  • 11/20/2009, Robert Rosenbaum, Using simplified stochastic models to understand how correlations propagate in neuronal networks

One of the greatest frontiers in science today is understanding how the brain functions and how it processes information. Mathematical models play a central role in addressing these questions. I will give a brief overview of how mathematics is used to model networks of neurons and why simplified stochastic models are necessary. I will then discuss some recent work on the problem of how correlations between spike trains propagate in neuronal networks.


  • 11/06/2009, William Ott and Chinmaya Gupta, Dynamical Systems: A probabilistic viewpoint

The presence of instabilities in models of physical processes often makes it practically impossible to simulate individual orbits for long periods of time. It is therefore natural to view dynamical systems probabilistically, an idea that can be traced back to Boltzmann and the development of statistical mechanics. Although dynamical systems defined by flows or maps are deterministic, they can nevertheless give rise to time series that look as if they were generated by stochastic processes with strong independence properties. In this talk we will introduce the probabilistic viewpoint and discuss one particular manifestation: Borel-Cantelli lemmas for dynamical systems.

  • 10/23/2009, Dmitri Kuzmin, Introduction to Computational Fluid Dynamics

Fluid flows and transport phenomena, such as heat and mass transfer, play an important role in human life. Gases and liquids surround us, flow inside our bodies, and have a profound impact on the environment in which we live. Fluid flows produce winds, rains, floods, and hurricanes. They cause temperature fluctuations and transport of contaminants in air, water, or soil. The ability to understand, predict, and control fluid flows is essential for the design of cars, airplanes, furnaces, and chemical reactors. Heating, air conditioning, and weather forecast have become an integral part of our life. We take such things for granted and hardly ever think about the physics and mathematics behind them.

The traditional approach to investigation of a physical process is based on observations, experiments, and measurements. The amount of information that can be obtained in this way is usually very limited and the cost is rather high. Alternatively, an analytical or computational study can be performed on the basis of a suitable mathematical model. The most detailed models of fluid flow are derived from conservation laws for the mass, momentum, and energy. Differential equations that embody these physical principles are impossible to solve using paper and pencil but accurate approximate solutions can be obtained using numerical algorithms. The branch of applied mathematics that deals with the design of such methods for flow problems is known as Computational Fluid Dynamics (CFD).

The development of a CFD code involves modeling, discretization, and numerical solution of algebraic equations. A brief introduction to each step of the CFD design process will be given in this talk. Also, some upcoming research projects (simulation of flows in porous media, around wind turbines, and in blood vessels) will be presented.

  • 10/09/2009, Bernhard Bodmann, From Quantum Communications to Equiangular Tight Frames

We start with a messaging problem in ancient Babylon, and then proceed to its modern-day equivalent in quantum communications: Quantum states can be 'intercepted' by a classical environment, leading to information loss. Simply put, the state is represented by a matrix and the environment replaces some of its entries by zeros. We then examine how including repetitive information in the transmission helps preserve the quantum state and which transmissions are optimal for a minimally invasive environment. This optimization problem leads us to the design of equiangular tight frames. I will conclude with a few results obtained with Helen Elwood.


  • 09/25/2009, Sonia Sharma, Operator spaces with an ideal structure (view pdf file)

The notion of an ideal is an essential algebraic notion in ring theory and algebra. The theory of ideals has been generalized to the non-algebraic setting of Banach spaces in more than one way. One of the more successful and most studied notions of ideals is that of ``$M$-ideals''. A particular class of $M$-ideals that has been extensively studied in the Banach space literature is the class of spaces that are $M$-ideals in their second dual $X^{**}$. Recall that every Banach space $X$ can be thought of as a subspace of its second dual, $X^{**}$, via a canonical embedding $X\hookrightarrow X^{**}$.

We have developed a non-commutative generalization of the above class, namely the theory of operator spaces which are `ideals' in their second dual. We will talk about some of the interesting features and examples of these spaces.

  • 09/18/2009, Vern Paulsen, The Kadison-Singer Problem and Fourier Frames.

What is the Kadison-Singer problem and why should anyone care? I'll try to answer both of these questions, by first explaining the original version of the problem and then discussing a problem in Fourier analysis that would have a solution provided the Kadison-Singer problem has a positive answer. I'll then discuss the consequences of my own work on this problem in Fourier analysis.


2009 Spring Semester.

  • 04/17/2009, Labate, Harmonic Analysis and Geometric Image Representation.

Over the past several years, there has been rapidly increasing pressure to handle more efficiently the ever larger multidimensional data sets arising from a variety of applications. It is now widely recognized that taking advantage of geometrical structures in the data is crucial to improve the state of the art in signal and image processing. In this talk, I will present a number of ideas recently introduced to bridge the gap between geometric representations and harmonic analysis. This is a particularly exciting area of research, with challenging mathematical questions and profound implications for applications in image processing, computer vision, data compression and inverse problems.

  • 04/10/2009, Alexandre Caboussat, From Least Squares to Monge-Ampère.

Least Squares methods are 'textbook' methods used in statistics and approximation theory. The Monge-Ampère equation is the most important equation when studying fully nonlinear partial differential equations. Starting with the introduction of classical least-squares methods, we then address the numerical solution of the Dirichlet problem for the canonical, elliptic, Monge-Ampère equation, by using some less classical least-squares methods. We discuss the advantages and drawbacks of such methods and conclude with numerical experiments. This is joint work with Roland Glowinski.

  • 02/13/2009, Azencott, "Importance Sampling" Optimized by Large Deviations Theory.

For controlling random processes or analyzing biological evolution, specific rare events have high pragmatic impact. Evaluating the probability of such events by straight simulations is often unfeasible due to overwhelming computing costs. Large deviations algorithms enable optimal focus of simulations by forcing the process to follow key trajectories determined by solving a variational problem.

  • 05/01/2009, Emmanuel Candes (California Institute of Technology), L1-magic.

In many applications, one often has fewer equations than unknowns. While this seems hopeless, the premise that the object we wish to recover is sparse or nearly sparse radically changes the problem, making the search for solutions feasible. This lecture will introduce sparsity as a key modeling tool, and introduce a series of little miracles touching on many areas of data processing. These examples show that finding the solution to an underdetermined system of linear equations with minimum L1 norm often returns the ``right'' answer.
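In symbols, the recovery problem of the talk is the convex program (standard compressed-sensing notation, given here as background):

```latex
\min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{subject to} \quad Ax = b,
```

where $A$ is an $m \times n$ matrix with $m < n$; under suitable conditions on $A$ and sufficient sparsity of the true signal, the minimizer coincides with it.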
