Applied Math Seminar
Please follow the link for speaker/topic information:
Title: The Krylov Subspace Methods for the Computation of Matrix Exponentials
Abstract: The problem of computing the matrix exponential e^{tA} arises in many theoretical and practical problems. Many methods have been developed to accurately and efficiently compute this matrix function or its product with a vector, i.e., e^{tA}v. In the past few decades, with the increasing need to perform such computations for large sparse matrices, iterative methods such as the Krylov subspace methods have proved to be a powerful class of methods for many linear algebra problems. Krylov subspace methods for computing matrix exponentials were introduced by Gallopoulos and Saad, and the corresponding error bounds that aim to explain their convergence properties have been extensively studied. Many of those bounds show that the speed of convergence depends on the norm of the matrix, while others emphasize the important role played by the spectral distribution for some special matrices. For example, a recent work by Ye shows that the speed of convergence is closely related to the condition number: convergence is fast for a well-conditioned matrix no matter how large the norm is.
In this dissertation, we derive new error bounds for computing e^{tA}v with non-symmetric A, using the spectral information of A. Our result is based on the assumption that the field of values of A lies entirely in the left half of the complex plane, so that the underlying dynamical system is stable. The new bounds show that the speed of convergence is related to the size and shape of the rectangle containing the field of values, and they agree with the existing results when A is nearly symmetric. Furthermore, we also derive a simpler error bound for the special case when A is skew-Hermitian. This bound explains an observed convergence behavior where the approximation error initially stagnates for a certain number of iterations before it starts to converge. In deriving our new error bounds, we use sharper estimates of the decay property of exponentials of Hessenberg matrices, obtained by constructing polynomial approximations of the exponential function in the region containing the field of values. The Jacobi elliptic functions are used to construct the conformal mappings and generate the Faber polynomials. We also present numerical tests to demonstrate the behavior of the new error bounds.
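As background for this talk, the basic Arnoldi approximation of e^{tA}v that the error bounds above analyze can be sketched in a few lines. The following is a minimal illustration, assuming NumPy/SciPy; the function name and breakdown tolerance are our own, and restarting and error estimation are omitted.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_v(A, v, t, m):
    """Approximate e^{tA} v with m steps of the Arnoldi process.

    Builds an orthonormal basis V of the Krylov subspace
    K_m(A, v) = span{v, Av, ..., A^{m-1} v} and the small upper
    Hessenberg projection H = V^T A V, then returns
    beta * V expm(t H) e_1 with beta = ||v||.
    """
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)
```

For a stable A (field of values in the left half-plane, as assumed above), a modest subspace dimension m often already gives good accuracy; quantifying how fast the error decays with m is exactly what the new bounds address.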
Please follow the link for speaker/topic information:
Title: On the perfect reconstruction of the topology of dynamic networks
Abstract: The network inference problem consists in reconstructing the topology or wiring diagram of a dynamic network from time-series data. Even though this problem has been studied in the past, there is no algorithm that guarantees perfect reconstruction of the topology of a dynamic network. In this talk I will present a framework and algorithm to solve the network inference problem for discrete-time networks that, given enough data, is guaranteed to reconstruct the topology of a dynamic network perfectly. The framework uses tools from algebraic geometry.
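To make the problem statement concrete, here is a toy consistency check for discrete-time networks, assuming deterministic updates and Python. It is not the algebraic-geometry framework of the talk, which unlike this toy comes with a perfect-reconstruction guarantee given enough data; it only illustrates what "reconstructing the wiring diagram from time-series data" means.

```python
def infer_edges(trajectory):
    """Toy consistency-based edge detection for a discrete-time network.

    trajectory: list of consecutive states, each a tuple of node values,
    with trajectory[k+1] = F(trajectory[k]) for an unknown update map F.
    Node j is reported as a parent of node i if two observed states that
    agree everywhere except at j lead to different next values of node i.
    """
    n = len(trajectory[0])
    pairs = list(zip(trajectory, trajectory[1:]))   # (state, next state)
    edges = set()
    for (s, ns) in pairs:
        for (t, nt) in pairs:
            diff = [j for j in range(n) if s[j] != t[j]]
            if len(diff) == 1:                      # states differ only at node j
                j = diff[0]
                for i in range(n):
                    if ns[i] != nt[i]:
                        edges.add((j, i))           # j must influence i
    return edges
```

For deterministic dynamics this check produces no false positives, but it only finds edges witnessed by the data; guaranteeing that all edges are eventually recovered is the harder question the talk addresses.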
Please follow the link for speaker/topic information:
http://www.ms.uky.edu/~rlca238/AppliedMathSeminar.html
Please follow the link for title and abstract information:
Please follow the link for speaker/topic information:
http://www.ms.uky.edu/~rlca238/AppliedMathSeminar.html
Title: Singular Value Computation and Subspace Clustering
Abstract: In this dissertation we discuss two problems. In the first part, we consider the problem of computing a few extreme eigenvalues of a symmetric definite generalized eigenvalue problem or a few extreme singular values of a large and sparse matrix C. Most existing numerical methods are based on reformulating the singular value problem as an equivalent symmetric eigenvalue problem. The standard method of choice for computing a few extreme eigenvalues of a large symmetric matrix is the Lanczos method or the implicitly restarted Lanczos method. These methods usually employ a shift-and-invert transformation to accelerate convergence, which is not practical for truly large problems. With this in mind, Golub and Ye proposed an inverse-free preconditioned Krylov subspace method, which uses preconditioning instead of shift-and-invert to accelerate convergence. The inverse-free Krylov subspace method focuses on the computation of one extreme eigenvalue, and a deflation technique is needed to compute additional eigenvalues. Wielandt deflation has been considered and can be used in a straightforward manner. However, Wielandt deflation alters the structure of the problem and may cause difficulties in certain applications, such as singular value computations. We therefore first propose a deflation-by-restriction method for the inverse-free Krylov subspace method, and we generalize the original convergence theory for the inverse-free preconditioned Krylov subspace method to justify this deflation scheme. We then extend the inverse-free Krylov subspace method with deflation by restriction to the singular value problem, and we consider preconditioning based on robust incomplete factorization to accelerate convergence. Numerical examples are provided to demonstrate the efficiency and robustness of the new algorithm.
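The inverse-free Krylov subspace method of Golub and Ye mentioned above admits a compact sketch. The following is a minimal, unpreconditioned version for the smallest eigenvalue of the symmetric definite pencil (A, B), assuming NumPy/SciPy; the preconditioned and deflated variants developed in the thesis are omitted, and all parameter names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def inverse_free_krylov(A, B, x0, m=5, iters=50):
    """Sketch of the inverse-free Krylov method (Golub and Ye) for the
    smallest eigenvalue of the symmetric definite pencil (A, B).

    Each outer iteration projects the pencil onto the Krylov subspace
    K_m(A - rho*B, x) of the *shifted* matrix (no inversion needed)
    and takes the smallest Ritz pair as the next approximation.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        rho = (x @ (A @ x)) / (x @ (B @ x))     # current Rayleigh quotient
        Z = [x]                                  # basis of K_m(A - rho B, x)
        for _ in range(m):
            w = A @ Z[-1] - rho * (B @ Z[-1])
            for z in Z:                          # Gram-Schmidt
                w -= (z @ w) * z
            nw = np.linalg.norm(w)
            if nw < 1e-14:
                break
            Z.append(w / nw)
        Z = np.array(Z).T
        Am, Bm = Z.T @ (A @ Z), Z.T @ (B @ Z)   # Rayleigh-Ritz projection
        vals, vecs = eigh(Am, Bm)
        x = Z @ vecs[:, 0]
        x /= np.linalg.norm(x)
    rho = (x @ (A @ x)) / (x @ (B @ x))
    return rho, x
```

For singular values, one of the reformulations mentioned in the abstract applies this machinery to a symmetric eigenvalue problem built from C, e.g. the pencil (C^T C, I); the point of the thesis is to do this with the structure-preserving deflation by restriction.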
In the second part of this thesis, we consider the so-called subspace clustering problem, which aims to extract a multi-subspace structure from a collection of points lying in a high-dimensional space. Recently, methods based on the self-expressive property (SEP), such as Sparse Subspace Clustering (SSC) and Low-Rank Representation (LRR), have been shown to outperform other methods. The self-expressive property means that the points can be expressed as linear combinations of one another. However, methods based on SEP may produce representations that are not amenable to clustering through graph partitioning. We propose a method in which the points are expressed in terms of an orthonormal basis, chosen optimally in the sense that the representation of all points is sparsest. Numerical results are given to illustrate the effectiveness and efficiency of this method.
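For readers unfamiliar with the SSC baseline referenced above, a minimal sketch follows, assuming NumPy and scikit-learn; it implements standard SSC (sparse self-representation, then spectral clustering of the affinity graph), not the orthonormal-basis method proposed in the thesis, and the regularization parameter alpha is illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, alpha=0.01):
    """Sketch of Sparse Subspace Clustering (SSC).

    X: d-by-n data matrix, one point per column.  Each point is written
    as a sparse combination of the other points (the self-expressive
    property); the coefficient magnitudes define an affinity graph,
    which is then partitioned by spectral clustering.
    """
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]       # exclude the point itself
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, idx], X[:, i])               # sparse self-representation
        C[idx, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                     # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)
```

The failure mode motivating the thesis is visible here: nothing forces the learned graph W to be well suited to the final partitioning step, which is why representing the points in a sparsest orthonormal basis is proposed instead.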
Title: Network Analysis with Matrix Functions
Abstract: Networks arise in many applications. It is often of interest to identify the most important nodes of a network or to determine the ease of traveling between nodes. We are interested in carrying out these tasks for large undirected and directed networks. Many quantities of interest can be determined by computing certain matrix functionals. We discuss how, for undirected and directed graphs, a few steps of the Lanczos method combined with Gauss-type quadrature rules can be applied to determine upper and lower bounds for these quantities.
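The Lanczos/Gauss quadrature connection behind this talk can be sketched briefly. The following minimal example, in the Golub-Meurant spirit, estimates u^T f(A) u for a symmetric matrix A with f = exp (so that u = e_i gives the subgraph centrality of node i), assuming NumPy/SciPy; the k-point Gauss value shown is a lower bound for f = exp, and the matching Gauss-Radau upper bounds discussed in the talk are omitted.

```python
import numpy as np
from scipy.linalg import expm

def gauss_estimate(A, u, k):
    """k-step Lanczos / k-point Gauss-quadrature estimate of u^T e^A u
    for symmetric A.

    Lanczos started from u produces a k-by-k tridiagonal matrix T; the
    Gauss rule value is ||u||^2 * [e^T]_{11}.  Since exp has positive
    derivatives of all orders, this is a lower bound on u^T e^A u.
    """
    n = len(u)
    alpha, beta = np.zeros(k), np.zeros(max(k - 1, 0))
    q_prev, b = np.zeros(n), 0.0
    q = u / np.linalg.norm(u)
    for j in range(k):
        w = A @ q - b * q_prev            # three-term Lanczos recurrence
        alpha[j] = q @ w
        w -= alpha[j] * q
        b = np.linalg.norm(w)
        if j == k - 1 or b < 1e-14:       # done, or invariant subspace found
            k = j + 1
            break
        beta[j] = b
        q_prev, q = q, w / b
    T = (np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1)
         + np.diag(beta[:k - 1], -1))
    return (u @ u) * expm(T)[0, 0]
```

Only matrix-vector products with A are needed, so the cost per node is a few sparse products plus a tiny dense exponential, which is what makes the approach attractive for large networks.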