ICMS 2014 Session: Software for Computational Topology



Aim and Scope

The interest in algorithms for topological problems and their implementation has grown rapidly during the last decade. One driving force is the emergence of "topological data analysis", which connects topological concepts such as Morse theory and homology to the investigation of real-world data. Another recent track of research substantially expands the realm of possibility for computational approaches in 3-manifold and knot theory. Common to these and other developments is the ability to handle large data collections through efficient algorithmic frameworks, as well as mature software implementations of these algorithms. The goal of this workshop session is to bring together active developers as well as current and potential future users of topological software to present the state of the art, report on applications of that software, and discuss open problems and further steps.



  1. Recent developments in Regina: Exact computation with triangulated 3-manifolds

    Benjamin Burton (University of Queensland)

    Abstract: Regina is a software package for studying triangulated 3-manifolds and knot complements. It provides powerful tools for exact computation, which makes it useful not just for exploration and experimentation, but also for rigorous computer-assisted proofs. The software is now 15 years old, with 31 public releases and over 275,000 lines of open-source code.
    Here we outline some recent developments in the software. These include new features, such as testing for essential surfaces, certifying that manifolds are non-hyperbolic, and analysing the Pachner graphs (or flip graphs) of 3-manifolds. They also include new optimisations to core routines, such as prime decomposition, unknot recognition, and the recognition of larger classes of manifolds. All of these features now play important computational roles in larger mathematical projects.
    The features listed above are surprisingly practical, and have been used effectively in bulk processing over millions (and in some cases billions) of triangulations. This is despite the underlying algorithms having exponential running time in some cases, and doubly exponential in others. We outline the algorithmic machinery that makes this possible, including recent techniques based on normal surfaces, linear and integer programming, graph searching, and Jaco-Rubinstein crushing. We also present comprehensive empirical data to measure how feasible these routines are for "real" topological calculations, and to identify where the performance bottlenecks lie.

  2. Heuristic manifold recognition, bistellar flips and discrete Morse theory

    Michael Joswig, Frank Lutz and Mimi Tsuruga (TU Berlin)

    Abstract: A classical result of Markov establishes that the homeomorphism type of a manifold in dimensions four or higher, given, e.g., in terms of a triangulation, cannot be decided (in the Turing model). A local modification of a triangulation, known as a bistellar flip, does not change the homeomorphism type, in fact, not even the PL type. This was exploited by Björner and Lutz to suggest a heuristic approach to recognizing manifolds.
    An alternative way to recognize spheres is provided by discrete Morse theory. If some given PL triangulation of a closed manifold admits a discrete Morse function with only two critical cells, then this manifold is PL homeomorphic to a standard sphere. Finding optimal discrete Morse functions is NP-hard already for 2-dimensional simplicial complexes, by work of Joswig and Pfetsch and of Lewiner, Lopes and Tavares. Yet, in many cases a random search for discrete Morse functions with few critical cells allows for fast recognition of spheres. By checking all face links, this can also be used to verify whether a given simplicial complex is a PL manifold.
    Here we report on related implementations in polymake and on experiments concerning candidates for exotic 4-spheres and other manifolds.
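
    The random search for small discrete Morse vectors can be illustrated by a minimal, self-contained Python sketch (this is an illustration in the spirit of the approach, not the polymake implementation): build the face poset from a list of facets, perform elementary collapses while a free face exists, and otherwise remove a random top-dimensional cell and record it as critical.

```python
import random
from itertools import combinations

def random_discrete_morse(facets, seed=0):
    """Toy sketch of the random search for small discrete Morse vectors:
    collapse free faces greedily; when stuck, delete a random top-dimensional
    cell as critical.  Returns {dimension: number of critical cells}."""
    rng = random.Random(seed)
    # build the full set of faces of the complex from its facets
    cells = set()
    for f in facets:
        for k in range(1, len(f) + 1):
            cells.update(frozenset(c) for c in combinations(sorted(f), k))
    critical = {}
    while cells:
        # a free face sigma is properly contained in exactly one other cell
        pair = None
        for sigma in sorted(cells, key=lambda s: (len(s), sorted(s))):
            cofaces = [t for t in cells if sigma < t]
            if len(cofaces) == 1:
                pair = (sigma, cofaces[0])
                break
        if pair:
            cells.difference_update(pair)          # elementary collapse
        else:
            top = max(len(c) for c in cells)
            cell = rng.choice(sorted((c for c in cells if len(c) == top),
                                     key=sorted))
            cells.discard(cell)                    # critical cell of dim top-1
            critical[top - 1] = critical.get(top - 1, 0) + 1
    return critical
```

    On the boundary of the tetrahedron (a triangulated 2-sphere), every run finds the optimal Morse vector: one critical vertex and one critical 2-cell.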

  3. Computing Persistence Modules on Commutative Ladder Quivers of Finite Type

    Yasuaki Hiraoka and Emerson Escolar (Kyushu University)

    Abstract: In this talk, we present a theory and an algorithm for computing persistence diagrams of persistence modules on commutative ladder quivers of finite type. A commutative ladder quiver is defined by two A_n-quivers, with vertices of the same index connected by arrows, together with commutativity relations. Persistence modules on these commutative ladder quivers arise naturally in topological data analysis applied to materials science and protein structural analysis. One of the most important applications is the detection of common and robust topological features between two materials, specifically by studying persistence modules on triple ladder quivers.
    We first show that all isomorphism classes of indecomposable modules, which are the counterparts to persistence intervals in the standard setting of persistent homology, can be derived for persistence modules on commutative ladder quivers of finite type. Furthermore, the concept of persistence diagrams is naturally generalized by functions defined on Auslander-Reiten quivers of commutative ladder quivers. Here the set of vertices in an Auslander-Reiten quiver is given by all isomorphism classes of indecomposable modules, whereas the set of arrows is defined by all irreducible morphisms among them. In this generalization of persistence intervals and persistence diagrams, the Auslander-Reiten theory plays an important role.
    Then, we present an algorithm to compute persistence diagrams by inductively applying echelon form reductions to a given persistence module. In particular, the flowchart of our algorithm closely follows the structure of the corresponding Auslander-Reiten quiver. We also show that discrete Morse reduction and coreduction can be applied to our problem to reduce computational time. Finally, based on these theoretical and computational tools, we also discuss several applications in materials science and protein structural analysis.

  4. CAPD::RedHom v2 - Homology software based on reduction algorithms

    Mateusz Juda, Marian Mrozek (Jagiellonian University)

    Abstract: CAPD::RedHom is a software package for efficient homology computations on cubical and simplicial complexes as well as some special cases of regular CW complexes. Originally, the software was designed for applications in rigorous numerics of topological dynamics. Such applications, based on interval arithmetic, lead in a natural way to cubical sets, which may be represented very efficiently as bitmaps. The cubical sets arising from the algorithms in dynamics are usually strongly inflated, in the sense that sets with a much smaller representation have the same topology or homotopy type. Such small representations may be found in linear time by various geometric reduction techniques. The algebraic invariants of topology, in particular homology, are then computed for the small representation. This leads to a very significant speed-up. In particular, the expensive linear algebra computations, such as Smith diagonalization, are performed on small data.
    We use several geometric reduction methods in CAPD::RedHom. In low dimensions (2 and 3), a very efficient geometric reduction is based on the acyclic subspace construction. A classical reduction method is the elementary collapse approach. The dual method, based on removing free coface pairs (so-called coreductions), turns out to be very efficient for real-data applications. We also use this method to build a discrete Morse function and the associated discrete gradient vector field. Such a vector field provides, via discrete Morse theory, another geometric reduction method.
    The original implementation of the software covered only cubical sets. However, in the last few years we have extended the range of applications to several other kinds of input. Using modern C++ template-based techniques, we are able to implement efficient algorithms independently of the data structures (input sets). An unwanted side effect is that this makes the code very hard to use as a library or a plug-in. For this reason, we have recently put a lot of effort into making the efficient C++ code accessible from external, commonly used systems. Presently, the code is available as a plug-in for Python and for the GAP system.
    The talk will start with a brief presentation of the theoretical background for the reductions. Then, we will compare the efficiency of our techniques with methods based on algebraic reductions of matrices. We will also show how to use the software with real code and data drawn from our recent applications. At the end we will discuss the advantages of these techniques for distributed computations and our recent applications, e.g. the fundamental group and persistence of maps.
    Joint work with Marian Mrozek, Piotr Brendel, Hubert Wagner, Grzegorz Jablonski, Pawel Pilarczyk, Natalia Zelazny and Pawel Dlotko.
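
    The coreduction idea mentioned above can be sketched in a few lines of Python (an illustrative toy, not the CAPD::RedHom code): repeatedly remove a pair (a, b) where a is the unique cell remaining in the boundary of b; when no such pair exists, remove a cell of lowest dimension (whose boundary in the current complex is then empty) and record it as critical. By the Morse inequalities, the critical-cell counts bound the Betti numbers from above.

```python
from itertools import combinations

def coreduce(facets):
    """Toy sketch of coreduction-based geometric reduction on a simplicial
    complex given by its facets.  Returns {dimension: number of critical cells},
    which bounds the Betti numbers from above."""
    cells = set()
    for f in facets:
        for k in range(1, len(f) + 1):
            cells.update(frozenset(c) for c in combinations(sorted(f), k))
    critical = {}
    while cells:
        pair = None
        for b in sorted(cells, key=lambda s: (len(s), sorted(s))):
            if len(b) == 1:
                continue   # vertices have empty boundary
            bd = [a for a in (frozenset(c)
                              for c in combinations(sorted(b), len(b) - 1))
                  if a in cells]
            if len(bd) == 1:           # coreduction pair found
                pair = (bd[0], b)
                break
        if pair:
            cells.difference_update(pair)
        else:
            # no pair available: the lowest-dimensional cell is free; it
            # becomes a critical cell of the induced discrete vector field
            low = min(len(c) for c in cells)
            cell = min((c for c in cells if len(c) == low), key=sorted)
            cells.discard(cell)
            critical[low - 1] = critical.get(low - 1, 0) + 1
    return critical
```

    On the hollow triangle (a circle) this yields one critical vertex and one critical edge, matching the Betti numbers (1, 1); on the filled triangle it yields a single critical vertex.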

  5. PHAT - Persistent Homology Algorithms Toolbox

    Ulrich Bauer (IST Austria), Michael Kerber (MPI Saarbrücken), Jan Reininghaus (IST Austria), Hubert Wagner (Jagiellonian University)

    Abstract: PHAT is a C++ library for the computation of persistent homology. This computation is usually split into two major steps: (1) building a boundary matrix representation of the given filtration and (2) bringing it into reduced form by elementary matrix operations. PHAT focuses entirely on the latter step. We aim for a simple generic design that allows for flexibility without sacrificing efficiency or user-friendliness. This makes PHAT a versatile platform for experimenting with novel algorithmic ideas and comparing them to state-of-the-art implementations.
    A major aspect of PHAT is to decouple the reduction strategy from the representation of the boundary matrix and the low-level operations used to query and manipulate it. We recap the reduction algorithms currently implemented in PHAT as well as the available representation types. In particular, we describe a novel approach that transforms a column of the matrix into an intermediate data structure that is more suitable for efficient manipulation. We show in experimental evaluations that the choice of a suitable representation has as important an effect on practical performance as the choice of reduction strategy.
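
    The column reduction at the heart of step (2) can be sketched as follows (a plain Z/2 version for illustration, not PHAT's optimized code): columns are processed in filtration order, and a column is repeatedly added to by the earlier column sharing its pivot ("lowest one") until its pivot is fresh or the column vanishes.

```python
def reduce_columns(columns):
    """Standard persistence reduction over Z/2.  `columns[j]` lists the row
    indices of the j-th boundary column, with cells ordered by filtration.
    Returns the (birth, death) index pairs; columns reduced to zero and never
    paired as a birth correspond to essential classes."""
    low_to_col = {}        # pivot row -> index of the column owning that pivot
    pairs = []
    for j, column in enumerate(columns):
        col = set(column)
        while col:
            low = max(col)                     # pivot: largest row index
            if low not in low_to_col:
                low_to_col[low] = j
                pairs.append((low, j))         # class born at `low` dies at j
                break
            col ^= set(columns[low_to_col[low]])   # Z/2 column addition
        columns[j] = sorted(col)               # store the reduced column
    return pairs
```

    For a filtered triangle (three vertices 0-2, then edges 3-5, then the 2-cell 6), the reduction pairs each of two vertices with an edge and the remaining edge with the triangle, leaving vertex 0 as the essential connected component.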

  6. The Gudhi Library: Simplicial Complexes and Persistent Homology

    Clement Maria, Jean-Daniel Boissonnat, Marc Glisse, Mariette Yvinec (INRIA)

    Abstract: We present the main algorithmic and design choices that have been made to implement the construction and manipulation of simplicial complexes and the computation of persistent homology in the Gudhi library. The Gudhi library (Geometric Understanding in Higher Dimensions) is a generic C++ library for computational topology. Its goal is to provide robust, efficient, flexible and easy-to-use implementations of state-of-the-art algorithms and data structures for computational topology.
    An application of interest is topological data analysis, where one wants to learn topological invariants of a shape sampled by a point cloud. A popular approach is to construct, at different scales, an approximation of the shape using simplicial complexes built on top of the points, and then compute the persistent homology of these complexes. The simplicial complex and persistent homology packages in Gudhi provide all software components for this approach. Specifically, simplicial complexes are implemented with a simplex tree data structure. The simplex tree is an efficient and flexible data structure for representing general (filtered) simplicial complexes. The persistent homology of a filtered simplicial complex is computed by means of the persistent cohomology algorithm, implemented with a compressed annotation matrix. Persistent cohomology is the dual of persistent homology, and provides the same topological invariant, represented by a persistence diagram. The persistent homology package also provides an implementation of the multi-field persistence framework, i.e. the computation of persistence with various field coefficients in a single pass.
    We present the different components of the software, their interaction and the user interface. We justify the algorithmic and design decisions made in Gudhi, and provide detailed benchmarks of the code.
    Joint work with Jean-Daniel Boissonnat, Marc Glisse and Mariette Yvinec.
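
    The core idea of the simplex tree is a trie whose root-to-node paths spell simplices as sorted vertex lists. The following is a minimal Python sketch of that idea (with simplified filtration semantics; it is not the Gudhi implementation or API):

```python
from itertools import combinations

class Node:
    """One trie node; the path from the root to this node is a simplex."""
    __slots__ = ("children", "filtration")
    def __init__(self, filtration):
        self.children = {}          # vertex label -> child Node
        self.filtration = filtration

class SimplexTree:
    def __init__(self):
        self.root = Node(None)

    def insert(self, simplex, filtration=0.0):
        """Insert `simplex` (an iterable of vertex labels) and all its faces."""
        verts = sorted(set(simplex))
        for k in range(1, len(verts) + 1):
            for face in combinations(verts, k):
                node = self.root
                for v in face:
                    if v not in node.children:
                        node.children[v] = Node(filtration)
                    node = node.children[v]
                # keep the smallest filtration value seen for this face
                node.filtration = min(node.filtration, filtration)

    def find(self, simplex):
        """Return the filtration value of `simplex`, or None if absent."""
        node = self.root
        for v in sorted(simplex):
            node = node.children.get(v)
            if node is None:
                return None
        return node.filtration

    def simplices(self):
        """Yield (simplex, filtration) pairs by depth-first traversal."""
        stack = [((), self.root)]
        while stack:
            path, node = stack.pop()
            if path:
                yield list(path), node.filtration
            for v, child in node.children.items():
                stack.append((path + (v,), child))
```

    Inserting a single triangle stores all seven of its faces, each reachable in time linear in its dimension; this locality is what makes the structure effective for filtered complexes.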

  7. Distributed Persistent Homology via Mayer-Vietoris

    Ryan H. Lewis and Gunnar Carlsson (Stanford University)

    Abstract: In this work we examine the problem of computing persistent homology in distributed memory via the Mayer-Vietoris principle. We design and implement a divide-and-conquer framework for computing persistent homology by covering a space with overlapping subspaces. We begin with a filtration of the input space and a cover by overlapping subspaces. We use the cover to build the Mayer-Vietoris blowup complex, a space which organizes the various pieces of the input space necessary for carrying out the Mayer-Vietoris principle. The input filtration, along with the nerve of the cover, is used to place a filtration on the blowup complex which has persistent homology identical to that of the input. The boundary matrix of the blowup complex is naturally partitioned into chunks, and relative homology computations may be carried out on each chunk in parallel. We show how the topology of the nerve connects these relative computations and controls the communication pattern between nodes in the computer cluster. We provide preliminary experimental results and compare them to existing algorithms for computing persistence in distributed memory.
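
    The nerve of the cover, which governs which chunks must exchange data, is a simple object to compute. A brute-force Python sketch (illustrative only, not the authors' implementation) over a cover given as point sets:

```python
from itertools import combinations

def nerve(cover):
    """Nerve of a cover: `cover` is a list of sets of points; indices
    i0 < ... < ik span a k-simplex of the nerve iff the corresponding
    cover sets have a common point."""
    simplices = []
    for k in range(1, len(cover) + 1):
        for idx in combinations(range(len(cover)), k):
            if set.intersection(*(cover[i] for i in idx)):
                simplices.append(idx)
    return simplices
```

    For three consecutive overlapping sets, the nerve is a path: two edges and no triangle, so only neighbouring chunks would need to communicate.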

  8. javaPlex - an extensible platform for persistence

    Henry Adams (IMA), Andrew Tausz (Stanford), Mikael Vejdemo-Johansson (KTH, Jozef Stefan Institute)

    Abstract: javaPlex is the newest member of the Stanford-based Plex family of persistent homology and cohomology software packages. Written in Java, it is optimized for easy interfacing with Matlab as well as anything running in the Java ecosystem - including Processing, Scala, Jython, and Mathematica.
    javaPlex was written to create a platform that is easier to extend and experiment with than its predecessor jPlex. In its latest release, the platform supports absolute and relative persistent (co)homology, as well as hom-complexes of simplicial complexes, cellular and simplicial complex constructions, and fast Vietoris-Rips and witness complex constructions.
    We will demonstrate the architecture, usage, development and extension features of javaPlex, focusing on interoperability with Matlab as an interface to the library.
    Joint work with Henry Adams and Andrew Tausz.