HDC is motivated by the observation that the cerebellar cortex operates on high-dimensional data representations.
[1] In HDC, information is represented as a hyperdimensional (long) vector called a hypervector.
A hypervector can comprise thousands of numbers that represent a point in a space of thousands of dimensions.[2] Vector symbolic architectures is an older name for the same approach.[2]
This research extends into Artificial Immune Systems for evolutionary computation.
Data is mapped from the input space to a sparse HD space under an encoding function φ : X → H. HD representations are stored in data structures that are subject to corruption by noise or hardware failures.
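As an illustration (not from the cited sources), φ can be realized as a random projection followed by a sign nonlinearity. A minimal Python sketch, where make_encoder, hd_dim, and the choice of bipolar (+1/−1) hypervectors are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(input_dim: int, hd_dim: int = 10_000):
    """Return a simple encoding function phi: X -> H.

    Here phi is a random projection followed by a sign
    nonlinearity, yielding bipolar (+1/-1) hypervectors.
    This is one common choice of encoder, not the only one.
    """
    projection = rng.standard_normal((hd_dim, input_dim))

    def encode(x: np.ndarray) -> np.ndarray:
        return np.sign(projection @ x).astype(np.int8)

    return encode

encode = make_encoder(input_dim=64)
hv = encode(rng.standard_normal(64))  # a 10,000-dimensional hypervector
```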
H is typically restricted to range-limited integers (−v to v).[3] This is analogous to the learning process conducted by the fruit fly's olfactory system.
[3] Unlike artificial neural networks, HDC algebra makes explicit the logic of how and why a system reaches its decisions.
An in-memory hyperdimensional computing system can implement operations on two memristive crossbar engines together with peripheral digital CMOS circuits.
Experiments using 760,000 phase-change memory devices performing analog in-memory computing achieved accuracy comparable to software implementations.
[2] A simple example considers images containing black circles and white squares.
Hypervectors can represent SHAPE and COLOR variables and hold the corresponding values: CIRCLE, SQUARE, BLACK and WHITE.
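A minimal Python sketch of this example, assuming bipolar (+1/−1) hypervectors with element-wise multiplication as binding and addition as bundling; all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # dimensionality of the hypervectors

def random_hv():
    """A random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

# Roles (variables) and fillers (values), each a random hypervector.
SHAPE, COLOR = random_hv(), random_hv()
CIRCLE, SQUARE, BLACK, WHITE = (random_hv() for _ in range(4))

# Binding (element-wise multiplication) pairs a variable with a value;
# bundling (addition) superimposes the pairs into one representation.
black_circle = SHAPE * CIRCLE + COLOR * BLACK
```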
[2] HDC uses the concept of distributed representations, in which an object/observation is represented by a pattern of values across many dimensions rather than a single constant.
[4] All computational tasks are performed in high-dimensional space using simple operations like element-wise additions and dot products.
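For instance, the dot product measures how related two hypervectors are: independently drawn random hypervectors are quasi-orthogonal, so their dot product is near zero. A small self-contained sketch under the same bipolar-vector assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000

def random_hv():
    return rng.choice([-1, 1], size=D)

def similarity(a, b):
    """Normalized dot product: ~0 for unrelated random
    hypervectors, 1.0 for identical ones."""
    return float(a @ b) / D

a, b = random_hv(), random_hv()
print(similarity(a, a))  # 1.0
print(similarity(a, b))  # close to 0: random hypervectors are quasi-orthogonal
```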
Addition alone discards order; combining it with permutation preserves order, and the event sequence can be retrieved by reversing the operations.
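A hedged sketch of one common scheme (not necessarily the sources' exact construction): a fixed permutation ρ, applied once per time step before bundling, tags each event with its position. Here a cyclic shift (np.roll) stands in for ρ, and encode_sequence is an illustrative name:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 10_000

def random_hv():
    return rng.choice([-1, 1], size=D)

def encode_sequence(events):
    """Bundle events, permuting each one once per time step
    so that order is preserved (rho = cyclic shift by one)."""
    total = np.zeros(D, dtype=int)
    for t, e in enumerate(events):
        total += np.roll(e, t)  # rho applied t times
    return total

A, B = random_hv(), random_hv()
seq_ab = encode_sequence([A, B])

# Retrieve position 1 by undoing one permutation step.
probe = np.roll(seq_ab, -1)
print((probe @ B) / D)  # high: B was the second event
print((probe @ A) / D)  # near 0: A was not
```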
Early examples include holographic reduced representations, binary spatter codes, and matrix binding of additive terms.
A vector could contain information about all the objects in the image, including properties such as color, position, and size.
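Continuing the earlier sketch, such a composite vector can be queried by unbinding: because bipolar hypervectors are their own multiplicative inverses, multiplying the record by a role hypervector approximately recovers the bound value, which a similarity search over a codebook then cleans up. All names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
D = 10_000

def random_hv():
    return rng.choice([-1, 1], size=D)

# Codebook of known values, used to clean up noisy query results.
codebook = {name: random_hv()
            for name in ["CIRCLE", "SQUARE", "BLACK", "WHITE"]}
SHAPE, COLOR = random_hv(), random_hv()

# One record holding both properties of a black circle.
record = SHAPE * codebook["CIRCLE"] + COLOR * codebook["BLACK"]

# Query: which COLOR does the record hold? Unbind, then clean up.
noisy_value = record * COLOR  # approx. BLACK plus crosstalk noise
best = max(codebook, key=lambda k: noisy_value @ codebook[k])
print(best)  # "BLACK"
```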
[2] In 2023, Abbas Rahimi et al. used HDC with neural networks to solve Raven's progressive matrices.
[2] In 2023, Mike Heddes et al., under the supervision of Professors Givargis, Nicolau and Veidenbaum, created a hyperdimensional computing library[5] that is built on top of PyTorch.
HDC algorithms can replicate tasks long completed by deep neural networks, such as classifying images.
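As a hedged illustration of one standard HDC classification scheme (not a specific published system): each class is summarized by bundling its training hypervectors into a prototype, and a query is assigned to the most similar prototype. Toy data stands in for encoded images:

```python
import numpy as np

rng = np.random.default_rng(5)
D = 10_000

def train(encoded_by_class):
    """Bundle each class's training hypervectors into a prototype."""
    return {label: np.sum(hvs, axis=0)
            for label, hvs in encoded_by_class.items()}

def classify(prototypes, query):
    """Assign the query to the class with the most similar prototype."""
    def cosine(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(prototypes, key=lambda label: cosine(prototypes[label], query))

# Toy data: two classes of bipolar hypervectors around two random centers.
def noisy(center, flips=2000):
    hv = center.copy()
    idx = rng.choice(D, size=flips, replace=False)
    hv[idx] *= -1  # flip some components to simulate encoding noise
    return hv

centers = {0: rng.choice([-1, 1], size=D), 1: rng.choice([-1, 1], size=D)}
data = {label: [noisy(c) for _ in range(10)] for label, c in centers.items()}
prototypes = train(data)
print(classify(prototypes, noisy(centers[1])))  # 1
```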
[2] Another algorithm creates probability distributions for the number of objects in each image and their characteristics.
These distributions too are transformed into hypervectors; hypervector algebra then predicts the most likely candidate image to fill the slot.
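The cited coverage does not detail this step, but selecting a candidate presumably reduces to a similarity search: scoring each candidate's hypervector against the hypervector predicted for the empty slot and taking the best match. A hedged sketch, with all names illustrative:

```python
import numpy as np

def pick_candidate(predicted_hv, candidate_hvs):
    """Return the index of the candidate hypervector most similar
    to the prediction for the empty slot (cosine similarity)."""
    def cosine(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    scores = [cosine(predicted_hv, c) for c in candidate_hvs]
    return int(np.argmax(scores))
```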
[2] This approach achieved 88% accuracy on one problem set, beating neural network–only solutions that were 61% accurate.
For 3-by-3 grids, the system was 250x faster than a method that reasoned with symbolic logic, whose associated rulebook was prohibitively large.