For example, the trajectory of a simple predator-prey system governed by the Lotka–Volterra equations[1] forms a closed loop in state space.
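As a quick illustration, here is a minimal sketch (the parameter values and initial populations are arbitrary choices, not taken from any particular study) that integrates the Lotka–Volterra equations with SciPy; sampling the resulting trajectory gives a point cloud whose single loop is exactly the kind of feature persistent homology detects.

```python
# Minimal sketch: integrate the Lotka-Volterra equations and sample the
# trajectory; for positive parameters the orbit is a closed loop.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma = 1.0, 0.4, 0.1, 0.4   # illustrative parameter values

def lotka_volterra(t, z):
    x, y = z                                     # prey, predator populations
    return [alpha * x - beta * x * y,
            delta * x * y - gamma * y]

sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 5.0],
                dense_output=True, rtol=1e-9, atol=1e-9)
trajectory = sol.sol(np.linspace(0.0, 50.0, 2000)).T   # points in state space
# The point cloud `trajectory` traces a closed loop, so its persistent
# homology has one prominent 1-dimensional (loop) feature.
```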
The first algorithm for persistent homology over all fields, in the setting of algebraic topology, was described by Barannikov[11] through reduction to canonical form by upper-triangular matrices. An algorithm for persistent homology over the field of two elements was given by Edelsbrunner et al.[8] Afra Zomorodian and Gunnar Carlsson then gave a practical algorithm to compute persistent homology over all fields.
A comparison between these tools is given by Otter et al.[24] Giotto-tda is a Python package dedicated to integrating TDA into the machine learning workflow by means of a scikit-learn[1] API.
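A minimal sketch of that workflow, assuming giotto-tda is installed and using its documented VietorisRipsPersistence estimator on a synthetic noisy circle (the data and parameter choices are purely illustrative):

```python
# Sketch of the scikit-learn style API: fit_transform on a batch of point
# clouds returns one persistence diagram per cloud.
import numpy as np
from gtda.homology import VietorisRipsPersistence

# One point cloud sampled from a noisy circle (made-up example data).
theta = np.random.uniform(0, 2 * np.pi, size=100)
cloud = np.stack([np.cos(theta), np.sin(theta)], axis=1)
cloud += 0.05 * np.random.randn(*cloud.shape)
X = cloud[None, :, :]                       # shape (n_samples, n_points, n_dims)

vr = VietorisRipsPersistence(homology_dimensions=(0, 1))
diagrams = vr.fit_transform(X)              # persistence diagrams in H0 and H1
```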
The R package TDA can compute recently introduced concepts such as the persistence landscape and the kernel distance estimator.[25] The Topology ToolKit is specialized for continuous data defined on manifolds of low dimension (1, 2 or 3), as typically found in scientific visualization.
Cubicle is optimized for large (gigabyte-scale) grayscale image data in dimension 1, 2 or 3 using cubical complexes and discrete Morse theory.
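Cubicle itself is a standalone tool, but the underlying idea of computing sublevel-set persistence of a grayscale image via cubical complexes can be sketched with giotto-tda's CubicalPersistence (an illustrative stand-in, not Cubicle's own interface, and the image here is random data):

```python
# Sketch: cubical persistence of pixel values in a grayscale image.
import numpy as np
from gtda.homology import CubicalPersistence

image = np.random.rand(1, 64, 64)           # one 64x64 grayscale image (synthetic)
cp = CubicalPersistence(homology_dimensions=(0, 1))
diagrams = cp.fit_transform(image)          # sublevel-set persistence diagrams
```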
Many methods have been invented to extract a low-dimensional structure from the data set, such as principal component analysis and multidimensional scaling.
Thus, the study of visualization of high-dimensional spaces is of central importance to TDA, although it does not necessarily involve the use of persistent homology.
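For concreteness, a small sketch of the two dimension-reduction methods mentioned above, using scikit-learn on synthetic data (the array shapes and component counts are arbitrary choices):

```python
# Sketch: principal component analysis and multidimensional scaling
# applied to the same high-dimensional point cloud.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

X = np.random.randn(200, 10)                # 200 points in 10 dimensions

X_pca = PCA(n_components=2).fit_transform(X)
X_mds = MDS(n_components=2).fit_transform(X)
# Both return 2-D coordinates: PCA keeps the directions of maximal
# variance, MDS tries to preserve pairwise distances.
```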
Because the topology of a finite point cloud is trivial, clustering methods (such as single linkage) are used to produce the analogue of connected sets in the preimage f⁻¹(U) of each cover set U when MAPPER is applied to actual data.
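A rough sketch of this clustering step as it might appear inside a MAPPER-style pipeline (the points and the distance cutoff are made-up assumptions):

```python
# Sketch: single-linkage clustering of the points falling in one cover set,
# playing the role of "connected components" of the preimage.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points_in_preimage = np.random.randn(50, 3)   # points mapped into one cover set
Z = linkage(points_in_preimage, method="single")
eps = 0.5                                     # illustrative distance threshold
labels = fcluster(Z, t=eps, criterion="distance")
# Each cluster label corresponds to one vertex of the MAPPER graph
# for this cover element.
```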
Ongoing work by Carlsson et al. aims to give a geometric interpretation of persistent homology, which might provide insights into how to combine machine learning theory with topological data analysis.
Circle-valued maps might be useful; as Dan Burghelea et al. comment, "persistence theory for circle-valued maps promises to play the role for some vector fields as does the standard persistence theory for scalar fields".[58] The main difference is that Jordan cells (very similar in format to the Jordan blocks of linear algebra), which are trivial in the real-valued case, can be nontrivial for circle-valued functions; combined with barcodes, they give the invariants of a tame map under moderate conditions.[60] More recent results can be found in D. Burghelea et al.[61] For example, the tameness requirement can be replaced by the much weaker condition of continuity.
The proof of the structure theorem relies on the base domain being a field, so few attempts have been made at persistent homology with torsion.[63] One advantage of category theory is its ability to lift concrete results to a higher level, showing relationships between seemingly unconnected objects.
A summary of this work can be found in Vin de Silva et al.[66] Many theorems can be proved much more easily in a more intuitive setting.[66] The language of category theory also helps cast results in terms recognizable to the broader mathematical community.
[13][16] In fact, the interleaving distance is the terminal object in a poset category of stable metrics on multidimensional persistence modules over a prime field.
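For reference, the interleaving distance between one-parameter persistence modules M and N is usually written as below; the multidimensional version replaces the scalar shift ε by a diagonal translation.

```latex
% M and N are \varepsilon-interleaved if there exist morphisms
% \varphi\colon M \to N(\varepsilon) and \psi\colon N \to M(\varepsilon)
% whose composites are the internal 2\varepsilon-shift maps of M and N.
\[
  d_I(M,N) \;=\; \inf\{\, \varepsilon \ge 0 \;:\; M \text{ and } N \text{ are } \varepsilon\text{-interleaved} \,\}.
\]
```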
The case of pointwise finite-dimensional persistence modules indexed by a locally finite subset of the real line has also been studied.
Statistical analysis gives us the ability to separate true features of the data from artifacts introduced by random noise.
Notable steps include work on null hypothesis significance testing,[80] confidence intervals,[81] and robust estimators.[82]
A third way is to consider the cohomology of probabilistic spaces or statistical systems directly, called information structures, basically consisting of the triple (
[85] Minima of mutual information, also called synergy, give rise to interesting independence configurations analogous to homotopical links.
These approaches were developed independently and are only indirectly related to persistence methods, but they may be roughly understood in the simplicial case using the Hu Kuo Ting theorem, which establishes a one-to-one correspondence between mutual information functions and finite measurable functions of a set with the intersection operator, allowing the construction of the Čech complex skeleton.
Information cohomology offers some direct interpretations and applications in neuroscience (neural assembly theory and qualitative cognition[87]), statistical physics, and deep neural networks, for which the structure and learning algorithm are imposed by the complex of random variables and the information chain rule.[88]
Persistence landscapes, introduced by Peter Bubenik, are a different way to represent barcodes that is more amenable to statistical analysis. Statistical quantities can be readily defined on them, and some problems in Y. Mileyko et al.'s work, such as the non-uniqueness of expectations,[79] can be overcome.
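A minimal sketch of the landscape construction from a finite barcode, using the equivalent "tent function" formulation: λ_k(t) is the k-th largest value of min(t − b, d − t)_+ over the bars (b, d). The example bars below are made up.

```python
# Sketch: persistence landscape functions lambda_1..lambda_k from a barcode.
import numpy as np

def landscape(barcode, ts, k_max=3):
    """Return an array of shape (k_max, len(ts)) with the landscape levels."""
    barcode = np.asarray(barcode, dtype=float)         # shape (n_bars, 2)
    b, d = barcode[:, 0][:, None], barcode[:, 1][:, None]
    # Tent function of each bar, evaluated on the grid ts.
    tents = np.maximum(np.minimum(ts[None, :] - b, d - ts[None, :]), 0.0)
    tents = -np.sort(-tents, axis=0)                   # sort each column descending
    out = np.zeros((k_max, len(ts)))
    k = min(k_max, tents.shape[0])
    out[:k] = tents[:k]                                # k-th largest tent value
    return out

ts = np.linspace(0.0, 3.0, 301)
lams = landscape([(0.0, 2.0), (0.5, 1.5)], ts)         # two bars, three levels
```

Because each landscape is a function in a Banach space, means, confidence bands and hypothesis tests can be defined pointwise, which is the sense in which the representation is "more amenable to statistical analysis".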
A very incomplete list of successful applications includes[92] data skeletonization,[93] shape study,[94] graph reconstruction,[95][96][97][98][99] image analysis,[100][101] materials science,[102][103] progression analysis of disease,[104][105] sensor networks,[67] signal analysis,[106] the cosmic web,[107] complex networks,[108][109][110][111] fractal geometry,[112] viral evolution,[113] propagation of contagions on networks,[114] bacteria classification using molecular spectroscopy,[115] super-resolution microscopy,[116] hyperspectral imaging in physical chemistry,[117] remote sensing,[118] feature selection,[119] and early warning signs of financial crashes.
A forgotten result of R. Deheuvels, long before the invention of persistent homology, extends Morse theory to all continuous functions.
[123] This is motivated by theoretical work in TDA, since the Reeb graph is related to Morse theory and MAPPER is derived from it.