Research and Think Pieces from College of Engineering Faculty
Dr. Akpan

Think Piece: The Mathematics We Don't Teach
Much of what makes mathematics extraordinary never appears in a textbook. We teach procedures, theorems, and carefully curated examples, but we rarely discuss the experience of doing mathematics—the uncertainty, the revision, and the quiet recalibration of intuition that happens each time a problem resists our expectations. Students often imagine mathematics as a rigid sequence of steps, yet mathematicians know it as a deeply human activity: exploratory, nonlinear, and filled with false starts. The mathematics we don’t teach is the mathematics of doubt—the stage where a promising idea fails, forcing a rethinking that becomes essential to understanding. In this way, genuine mathematical growth is less about accumulating tools and more about developing resilience in the face of conceptual friction.
Research:
My research lies at the intersection of nonlinear partial differential equations, wave phenomena, and computational mathematics. I focus particularly on soliton theory, analytical techniques for understanding complex dispersive systems, and the analysis of nonlinear lattice models, including long-range variants of the Fermi–Pasta–Ulam–Tsingou (FPUT) lattice. Much of my work examines coupled and generalized forms of the nonlinear Schrödinger equation (NLS) and the Korteweg–de Vries equation (KdV)—two foundational models that describe interactions between short and long waves in optics, plasma physics, fluid dynamics, and other applied sciences. A central aim of this work is developing sufficient conditions that allow for the construction of solitary wave solutions.
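For reference, the standard textbook forms of these two models are shown below; the specific coupled or generalized variants studied in this research may carry different coefficients, signs, or coupling terms. The final display is the classical one-soliton solution of the KdV equation, a prototype of the solitary waves mentioned above.

```latex
% Common textbook forms (signs and coefficients vary by convention and by the
% particular generalized or coupled variants under study).
% Cubic nonlinear Schrodinger equation for a complex envelope u(x,t):
\[
  i\,u_t + u_{xx} + 2\,|u|^2 u = 0
\]
% Korteweg-de Vries equation for a real wave profile v(x,t):
\[
  v_t + 6\,v\,v_x + v_{xxx} = 0
\]
% One-soliton (solitary wave) solution of this KdV form, with speed c > 0:
\[
  v(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\Big(\frac{\sqrt{c}}{2}\,(x - ct - x_0)\Big)
\]
```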
Dr. Moore

Think Piece:
Mathematics is entering a period where long-standing structures are finding new expression through machine learning, and nowhere is this more apparent than in the relationship between linear algebra and Graph Neural Networks (GNNs). I’ve recently been exploring how the fundamental operations of sparse linear algebra — relaxation, interpolation, restriction, even multigrid-style coarsening — can be reframed as learnable message-passing processes on graphs. This perspective doesn’t replace traditional numerical methods; instead, it reveals that many classical algorithms are already performing a form of structured information propagation that GNNs naturally generalize. What I find most compelling is that, by building this bridge between linear algebra and GNNs, we can now use neural architectures to learn nonlinear operators that stand in for classical linear ones. I hope this development will lead to new advances in computational science.
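To make this correspondence concrete, the following is a minimal sketch (an illustrative toy example, not code from this research program) showing that one weighted-Jacobi relaxation sweep on a sparse linear system is precisely a fixed-weight neighbor-aggregation step of the kind a message-passing GNN layer generalizes with learnable weights.

```python
import numpy as np
import scipy.sparse as sp

def weighted_jacobi_sweep(A, x, b, omega=2.0 / 3.0):
    """One weighted-Jacobi sweep: x <- x + omega * D^{-1} (b - A x).

    Viewed on the graph whose adjacency is the sparsity pattern of A,
    the product A @ x gathers messages x[j] from the neighbors j of each
    node i, weighted by A[i, j]; the diagonal scaling and the update are
    purely node-local. A GNN layer replaces these fixed weights with
    learned, possibly nonlinear, aggregation and update functions.
    """
    D = A.diagonal()                  # per-node (self-loop) weights
    residual = b - A @ x              # aggregate neighbor messages
    return x + omega * residual / D   # local node-wise update

# Toy example: 1-D Poisson matrix, a standard multigrid test problem.
n = 64
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = np.zeros(n)
for _ in range(100):
    x = weighted_jacobi_sweep(A, x, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```

The same reading applies to interpolation and restriction in a multigrid cycle: each is a sparse matrix acting on node values, in other words a one-hop message-passing operation with fixed coefficients that a learned operator can imitate or improve upon.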
For graduate students, this convergence is more than an intellectual curiosity: it signals a future in which mathematical insight and machine-learning intuition must develop hand in hand. Understanding graph structure, spectral behavior, or iterative methods is no longer solely the domain of numerical analysts — it becomes essential for designing adaptive, data-driven models with solid theoretical foundations. As these fields continue to merge, mathematicians have an opportunity to shape the foundations of next-generation learning systems and computational science, not merely apply them.
Research:
My research sits at the intersection of traditional scientific computing and modern machine learning. I study how classic numerical tools, such as linear algebra routines and multigrid solvers, can be improved by ideas from today's learning methods, including Graph Neural Networks and Recurrent Neural Networks. The goal is to create algorithms that are not only fast and reliable but also flexible enough to adapt to complex real-world problems. In practice, this means using machine learning to help numerical solvers make smarter decisions, such as how to simplify a problem or speed up an iteration. These hybrid methods can lead to quicker simulations, better predictions, and more efficient use of computational resources.
Dr. Song

Think Piece:
Hypergraphs, as a generalization of classical graphs, allow each edge to contain any number of vertices rather than just two. These broader structures appear naturally in many modern applications, ranging from collaborative networks to multi-agent decision processes. In machine learning and data mining, hypergraph-based models capture shared-feature relationships that ordinary graphs often distort or miss. Computational biology uses hypergraphs to represent multi-gene interactions, protein complexes, and complex biochemical reactions that cannot be modeled with pairwise edges. Their ability to represent group-level relationships makes them increasingly valuable in analyzing real-world systems characterized by collective behavior.
These generalized structures capture relationships that cannot be reduced to simple pairs, forcing classical concepts like connectivity, paths, and cycles to take on richer and less predictable forms. Investigating hypergraphs presents significant challenges, as many familiar graph-theoretic concepts do not generalize uniquely or cleanly. For example, connectivity, acyclicity, and matchings each admit several competing generalizations, and results can depend on which one is adopted. As a result, hypergraphs become a natural setting for exploring the boundaries between graph theory, topology, and extremal combinatorics. The interplay between the complex structure, the challenges of generalization, and the depth of the theoretical questions continues to make hypergraphs an exciting area of modern mathematical inquiry.
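As one concrete illustration of these competing definitions, the sketch below adopts a common convention for hypergraph connectivity: two vertices are connected when some sequence of hyperedges joins them, with consecutive hyperedges sharing at least one vertex. This is an illustrative example only; the hyperedge-list representation and the function name is_connected are choices made here, not a standard interface.

```python
from collections import defaultdict, deque

def is_connected(vertices, hyperedges):
    """Check connectivity of a hypergraph under one common convention:
    vertices u and v are linked if a sequence of hyperedges joins them,
    with consecutive hyperedges sharing at least one vertex."""
    # Index: which hyperedges contain each vertex.
    incident = defaultdict(list)
    for i, e in enumerate(hyperedges):
        for v in e:
            incident[v].append(i)

    if not vertices:
        return True
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for i in incident[v]:            # hyperedges containing v
            for u in hyperedges[i]:      # all co-members of those hyperedges
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
    return seen == set(vertices)

# A 3-uniform example: every hyperedge contains exactly three vertices.
V = {1, 2, 3, 4, 5, 6}
E = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]
print(is_connected(V, E))  # True: the hyperedges overlap in a chain
```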
Research:
My research centers on Hamiltonian problems in graph theory, a classical area with deep connections to combinatorics and modern network science. The Hamiltonian problem, one of the most well-known NP-complete problems, remains central to understanding structural properties of graphs. Much of my work investigates line graphs, a transformation that reveals hidden connectivity and cyclic properties of the original graph. In particular, I am motivated by, and study, a conjecture of Thomassen, which states that every 4-edge-connected line graph is Hamiltonian.
More recently, my research has extended to hypergraphs, whose structural complexity offers both challenges and opportunities. Preliminary results have characterized hypergraphs whose line graphs are Hamiltonian, including the class of supereulerian hypergraphs (Xiaofeng Gu 2022). Building on Catlin's reduction method, my work develops new approaches for understanding Hamiltonicity in both graphs and hypergraphs (S. S.-J. Wei Xiong 2021, Lan Lei 2024, Xiaofeng Gu 2022). Through this line of research, I aim to establish broader and more unified methodologies for analyzing Hamiltonian properties in complex combinatorial structures.
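As a concrete aside on the line graph transformation itself (an illustration only, not code from the cited work): each edge of G becomes a vertex of L(G), and two such vertices are adjacent exactly when the corresponding edges of G share an endpoint. A minimal sketch:

```python
from itertools import combinations

def line_graph(edges):
    """Return the line graph L(G) of a simple graph G given by its edge list.
    Vertices of L(G) are the edges of G (stored as frozensets); two of them
    are adjacent when the underlying edges of G share an endpoint."""
    verts = [frozenset(e) for e in edges]
    adj = {e: set() for e in verts}
    for e, f in combinations(verts, 2):
        if e & f:                 # the two edges of G share an endpoint
            adj[e].add(f)
            adj[f].add(e)
    return adj

# Example: the 4-cycle C4; its line graph is again a 4-cycle, hence Hamiltonian.
C4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
L = line_graph(C4)
for e, nbrs in L.items():
    print(sorted(e), "->", [sorted(f) for f in sorted(nbrs, key=sorted)])
```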
Dr. Wu

Research:
Dr. Qingquan (Harry) Wu is a mathematician whose research interests include number theory, abstract algebra, real analysis, and broader areas of mathematics. His recent work explores:
- The Chinese Remainder Theorem for non-coprime moduli (see the sketch following this list),
- The intriguing connection between group multiplication tables and Sudoku puzzles,
- An elementary analytic method for identifying the limit superior and limit inferior of certain bounded sequences, and
- The countable cardinalities of subsequential limits and limit points of bounded sequences.
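As a companion to the first item above, the following is a minimal sketch of the general Chinese Remainder Theorem for possibly non-coprime moduli: the system x ≡ a_i (mod n_i) is solvable precisely when a_i ≡ a_j (mod gcd(n_i, n_j)) for every pair, and the solution is then unique modulo the least common multiple of the moduli. This is the standard textbook statement and an illustrative implementation, not code drawn from Dr. Wu's work.

```python
from math import gcd

def crt_pair(a1, n1, a2, n2):
    """Solve x ≡ a1 (mod n1), x ≡ a2 (mod n2) for possibly non-coprime moduli.
    Returns (x, lcm(n1, n2)), or None when the congruences are inconsistent."""
    g = gcd(n1, n2)
    if (a2 - a1) % g != 0:
        return None                      # no solution: a1 and a2 disagree mod gcd
    l = n1 // g * n2                     # lcm(n1, n2)
    # Lift a1 by a multiple of n1 chosen so the result also satisfies the second
    # congruence; pow(..., -1, m) computes a modular inverse (Python 3.8+).
    step = ((a2 - a1) // g) * pow(n1 // g, -1, n2 // g) % (n2 // g)
    return (a1 + n1 * step) % l, l

def crt(residues, moduli):
    """Fold crt_pair over a whole system; None means the system is inconsistent."""
    x, n = residues[0] % moduli[0], moduli[0]
    for a, m in zip(residues[1:], moduli[1:]):
        sol = crt_pair(x, n, a, m)
        if sol is None:
            return None
        x, n = sol
    return x, n

# x ≡ 2 (mod 6) and x ≡ 8 (mod 10): gcd(6, 10) = 2 divides 8 - 2, so x ≡ 8 (mod 30).
print(crt([2, 8], [6, 10]))   # (8, 30)
# x ≡ 1 (mod 4) and x ≡ 2 (mod 6) is inconsistent (1 is odd, 2 is even).
print(crt([1, 2], [4, 6]))    # None
```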
Dr. Yang

Think Piece:
After many years in numerical analysis, I continue to approach the computation of derivatives with a measure of caution that surprises me. The underlying idea is disarmingly simple: approximate the derivative of a function by evaluating it at two nearby points and forming the difference quotient. In practice, however, the method is notoriously sensitive. If the perturbation is too large, the approximation suffers from excessive truncation error; if it is too small, subtractive cancellation and round-off contamination dominate. The optimal step size is often confined to an exceedingly narrow interval, and the presence of even modest noise (common in experimental data or large-scale simulations) can render the entire procedure unreliable. This delicate balance explains why numerical differentiation remains one of the most fragile routine operations in scientific computing.
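A small numerical illustration of this trade-off (a toy example with f(x) = sin x at x = 1, not tied to any particular application): as the step h shrinks, the forward-difference error first decreases with the O(h) truncation term and then grows again once round-off cancellation takes over, with the best step near the square root of machine epsilon for this first-order formula.

```python
import numpy as np

# Forward-difference approximation of f'(x) with step h:
#   (f(x + h) - f(x)) / h  =  f'(x) + O(h)  in exact arithmetic,
# but the subtraction loses roughly eps / h in floating point, so the total
# error behaves like  C1*h + C2*eps/h  and is minimized near h ~ sqrt(eps).
f, df = np.sin, np.cos
x = 1.0
for h in [10.0 ** (-k) for k in range(1, 16)]:
    approx = (f(x + h) - f(x)) / h
    print(f"h = {h:.0e}   error = {abs(approx - df(x)):.3e}")
```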
What I find particularly instructive about this topic is how clearly it reveals the gap between mathematical theory and computational reality. While calculus presents derivatives as precise, instantaneous rates of change, translating that concept into finite-precision arithmetic forces us to confront approximation, stability, and conditioning in their purest form. Teaching this topic allows me to show students that research-level challenges often hide inside the most elementary ideas, and it reminds all of us that rigor in computational mathematics demands not only theoretical insight but also a deep respect for the limitations of the machines we use every day.
Research:
My research centers on developing advanced computational methods to tackle complex problems in fluid dynamics and interdisciplinary applications, bridging traditional numerical techniques with emerging technologies. In the realm of high-order numerical schemes, I focus on creating stable and efficient algorithms for solving partial differential equations, particularly those arising in computational fluid dynamics (CFD). This work involves designing schemes that minimize numerical dissipation and dispersion while handling discontinuities and multi-scale phenomena in flows, such as turbulence or shock waves. By integrating these schemes into CFD simulations, I've explored their impact on predicting real-world scenarios like aerodynamic flows and hydrodynamic flows. More recently, I've delved into machine learning applications for CFD, where I employ neural networks to accelerate simulations, enhance turbulence modeling, and optimize parameter estimation in large-scale systems, efforts that have shown promise in reducing computational costs without sacrificing accuracy.
Beyond fluids, my interests extend to computer vision applications in agriculture, where I apply image processing and deep learning to analyze aerial and ground-based imagery for precision farming. This includes developing models for crop health monitoring, yield prediction, and pest detection, often using convolutional neural networks trained on multispectral data. These projects not only advance agricultural sustainability but also highlight the synergies between computational mathematics and practical domains. Overall, my work emphasizes the importance of robust, high-fidelity computations in addressing societal challenges, and I actively collaborate with interdisciplinary teams to translate theoretical advancements into deployable tools.
My research areas include:
- High-order numerical schemes for partial differential equations
- Computational fluid dynamics, including turbulence modeling, flow control, and multi-phase flows
- Machine learning integrations in CFD for simulation acceleration and data-driven predictions
- Computer vision techniques applied to agriculture, such as crop monitoring and precision farming