Memetic algorithm

An evolutionary algorithm (EA) is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately.

The term memetic algorithm (MA) is now widely used to denote the synergy of an evolutionary or any other population-based approach with separate individual learning or local improvement procedures for problem search.

Inspired by both Darwinian principles of natural evolution and Dawkins' notion of a meme, the term memetic algorithm (MA) was introduced by Pablo Moscato in his technical report[1] in 1989 where he viewed MA as being close to a form of population-based hybrid genetic algorithm (GA) coupled with an individual learning procedure capable of performing local refinements.
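The coupling described above — a population-based evolutionary loop in which each offspring is additionally refined by an individual learning step — can be sketched in code. The following is a minimal illustrative sketch, not any specific published MA: the operator choices (tournament selection, uniform crossover, Gaussian mutation, and a simple hill climber as the local refinement) are assumptions made for the example.

```python
import random

def memetic_algorithm(fitness, dim, pop_size=20, generations=50,
                      local_steps=10, step=0.1, seed=0):
    """Minimal memetic algorithm sketch: an EA whose offspring are each
    refined by a simple hill-climbing local search (individual learning)."""
    rng = random.Random(seed)

    def hill_climb(x):
        # Individual learning: greedy Gaussian perturbation of the solution.
        best, best_f = x, fitness(x)
        for _ in range(local_steps):
            cand = [xi + rng.gauss(0, step) for xi in best]
            f = fitness(cand)
            if f > best_f:
                best, best_f = cand, f
        return best

    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    pop = [hill_climb(ind) for ind in pop]

    def select():
        # Binary tournament selection.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            # Uniform crossover plus small Gaussian mutation.
            child = [rng.choice(pair) + rng.gauss(0, step / 10)
                     for pair in zip(p1, p2)]
            offspring.append(hill_climb(child))  # memetic refinement
        pop = offspring
    return max(pop, key=fitness)

# Toy use: maximize -sum(x^2); the optimum is the origin.
best = memetic_algorithm(lambda x: -sum(v * v for v in x), dim=3)
```

The only structural difference from a plain genetic algorithm is the `hill_climb` call applied to every individual; removing it recovers the conventional EA loop.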

In the context of complex optimization, many different instantiations of memetic algorithms have been reported across a wide range of application domains, in general, converging to high-quality solutions more efficiently than their conventional evolutionary counterparts.

Memetic computing (MC) extends the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations.

This insight leads directly to the recommendation to complement generally applicable metaheuristics with application-specific methods or heuristics,[7] which fits well with the concept of MAs.

Although this original definition of MA encompasses characteristics of cultural evolution (in the form of local refinement) in the search cycle, it may not qualify as a true evolving system according to universal Darwinism, since the core principles of inheritance/memetic transmission, variation, and selection are missing.[8]

Co-evolution[16] and self-generating MAs[17] may be regarded as third-generation MAs, in which all three principles satisfying the definition of a basic evolving system are considered.[15]

Clearly, more intense individual learning provides a greater chance of converging to local optima, but it limits the amount of evolution that can be carried out without incurring excessive computational cost.
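The trade-off is easiest to see under a fixed budget of fitness evaluations: every evaluation spent on local refinement is one not spent on generating and evaluating new offspring. A small back-of-the-envelope sketch (the cost model below is an illustrative assumption, not a standard formula):

```python
def generations_under_budget(total_evals, pop_size, local_steps):
    """Assumed cost model: each generation evaluates pop_size offspring,
    and each offspring additionally spends local_steps evaluations on
    individual learning."""
    evals_per_generation = pop_size * (1 + local_steps)
    return total_evals // evals_per_generation

# With 10,000 evaluations and a population of 20:
light = generations_under_budget(10_000, 20, local_steps=4)    # 100 generations
intense = generations_under_budget(10_000, 20, local_steps=49) # 10 generations
```

Under this model, a tenfold increase in local-search intensity cuts the number of evolutionary generations by the same factor.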

In the context of continuous optimization, individual learning exists in the form of local heuristics or conventional exact enumerative methods.

In combinatorial optimization, on the other hand, individual learning methods commonly exist in the form of heuristics (which can be deterministic or stochastic) that are tailored to a specific problem of interest.
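A classic example of such a problem-tailored, deterministic individual learning method is 2-opt local search for the travelling salesman problem, which repeatedly reverses a tour segment whenever doing so shortens the tour. The sketch below uses a hypothetical four-city instance for illustration:

```python
import math

def two_opt(tour, dist):
    """Deterministic 2-opt local search for the TSP: apply improving
    segment reversals until the tour is a 2-opt local optimum."""
    n = len(tour)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the whole tour changes nothing
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if length(cand) < length(tour):
                    tour, improved = cand, True
    return tour

# Hypothetical instance: the four corners of the unit square.
coords = [(0, 0), (1, 0), (0, 1), (1, 1)]
dist = [[math.hypot(a - c, b - d) for (c, d) in coords] for (a, b) in coords]
tour = two_opt([0, 1, 2, 3], dist)  # untangles the self-crossing start tour
```

In an MA for the TSP, a routine like `two_opt` would play the role of the individual learning step applied to each offspring tour.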

Bambha et al. introduced a simulated heating technique for systematically integrating parameterized individual learning into evolutionary algorithms to achieve maximum solution quality.
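The core idea — spend little on local refinement early in the run and progressively more as the search matures — can be sketched as a simple intensity schedule. This is only an illustration of that idea, not Bambha et al.'s actual simulated heating formulation; the linear ramp and step counts are assumptions:

```python
def heating_schedule(generation, total_generations, min_steps=1, max_steps=50):
    """Illustrative 'heating' schedule: the local-search step budget ramps
    linearly from min_steps (coarse, cheap) to max_steps (intensive)."""
    frac = generation / max(total_generations - 1, 1)
    return round(min_steps + frac * (max_steps - min_steps))

# Per-generation local-search budgets over a 10-generation run.
budget = [heating_schedule(g, 10) for g in range(10)]
```

Each generation of the MA would then pass the scheduled budget to its local improvement procedure in place of a fixed `local_steps` value.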

This question was already discussed controversially for EAs in the 1990s literature, with the finding that the specific use case plays a major role.

More recent applications include (but are not limited to) business analytics and data science,[2] training of artificial neural networks,[27] pattern recognition,[28] robotic motion planning,[29] beam orientation,[30] circuit design,[31] electric service restoration,[32] medical expert systems,[33] single machine scheduling,[34] automatic timetabling (notably, the timetable for the NHL),[35] manpower scheduling,[36] nurse rostering optimisation,[37] processor allocation,[38] maintenance scheduling (for example, of an electric distribution network),[39] scheduling of multiple workflows to constrained heterogeneous resources,[40] multidimensional knapsack problem,[41] VLSI design,[42] clustering of gene expression profiles,[43] feature/gene selection,[44][45] parameter determination for hardware fault injection,[46] and multi-class, multi-objective feature selection.