Link-state algorithms are theoretically attractive because they find optimal routes, reducing waste of transmission capacity.
The inventors of HSLS claim[citation needed] that routing protocols fall into three fundamentally different schemes: proactive (such as OLSR), reactive (such as AODV), and algorithms that accept sub-optimal routes.
When their overhead is graphed, each scheme becomes less efficient the more purely it follows a single strategy, and the inefficiency grows as the network grows larger.
HSLS is said to optimally balance the features of proactive, reactive, and suboptimal routing approaches.
These strategies are blended by limiting link state updates in time and space.
The designers began tuning these limits by defining a measure of global network waste.
Their exact definition is "The total overhead is defined as the amount of bandwidth used in excess of the minimum amount of bandwidth required to forward packets over the shortest distance (in number of hops) by assuming that the nodes had instantaneous full-topology information."
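One way to write this definition symbolically (the symbols $B_{\text{used}}$ and $B_{\min}$ are shorthand introduced here, not notation from the original analysis):

$$\text{Total overhead} = B_{\text{used}} - B_{\min},$$

where $B_{\text{used}}$ is the bandwidth the network actually consumes (data forwarding plus all control traffic) and $B_{\min}$ is the bandwidth that would suffice to forward the same packets over minimum-hop paths computed from instantaneous full-topology information.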
They then made some reasonable assumptions and used mathematical optimization to find the optimal times at which to transmit link state updates, as well as how far, in hops, each update should propagate.
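Roughly, the resulting schedule sends an update at time i·t_e with a hop limit equal to the largest power of two dividing i, so nearby nodes hear about link changes often while distant nodes hear about them only occasionally. The following Python sketch illustrates that schedule; the constants T_E and MAX_HOPS and the helper name are illustrative choices, not values from the HSLS specification.

```python
T_E = 1.0        # base update period in seconds (illustrative value)
MAX_HOPS = 16    # scope treated as "network-wide" in this sketch (assumed cap)

def lsu_ttl(i):
    """Hop limit (TTL) of the periodic link-state update due at time i * T_E,
    or None when no periodic update is scheduled (odd multiples of T_E)."""
    if i <= 0 or i % 2 != 0:
        return None
    ttl = 1
    while i % 2 == 0:      # TTL is the largest power of two dividing i
        i //= 2
        ttl *= 2
    return min(ttl, MAX_HOPS)

# The first few update instants and the distance each update travels:
for i in range(1, 9):
    print(f"t = {i * T_E:4.1f} s  ->  TTL = {lsu_ttl(i)}")
```

The doubling pattern (TTL 2 every 2·t_e, TTL 4 every 4·t_e, TTL 8 every 8·t_e, and so on) is what limits updates "in time and space": stale information about distant nodes matters less, because routes are recomputed hop by hop as packets approach their destination.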
The algorithm has a few special features to cope with conditions common in radio networks, such as unidirectional links and looped transmission caused by out-of-date routing tables.
Both routing information and data transfer are decentralized, so the protocol should offer good reliability and performance with no local hot spots.
The system requires capable nodes with large amounts of memory to maintain routing tables.
Like almost all routing protocols, HSLS does not include mechanisms to protect data traffic. Such protection schemes are challenging in practice because, in the ad hoc environment, the reachability of public key infrastructure servers cannot be assured.