The most general hypotheses (i.e., the general boundary GB) cover the observed positive training examples, and also cover as much of the remaining feature space as possible without including any negative training examples.
If a new example is consistent with multiple hypotheses, a majority-vote rule can be applied.
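The boundary-set idea can be illustrated with a minimal sketch of candidate elimination for conjunctive hypotheses over discrete attributes, where `'?'` is a wildcard slot. The dataset and function names below are illustrative, not from a specific implementation, and the general boundary G is maintained in a simplified form:

```python
def matches(h, x):
    """A hypothesis h covers example x if every non-'?' slot agrees."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def candidate_elimination(examples, n_attrs):
    """Maintain the specific hypothesis S and a simplified general boundary G."""
    S = None                                   # most specific hypothesis so far
    G = [tuple('?' for _ in range(n_attrs))]   # most general boundary
    for x, label in examples:
        if label:                              # positive example
            if S is None:
                S = tuple(x)                   # initialise S to the first positive
            else:                              # minimally generalise S to cover x
                S = tuple(sv if sv == xv else '?' for sv, xv in zip(S, x))
            G = [g for g in G if matches(g, x)]  # drop members excluding x
        else:                                  # negative example
            new_G = []
            for g in G:
                if not matches(g, x):
                    new_G.append(g)
                    continue
                # minimally specialise g so it excludes x, guided by S
                for i in range(n_attrs):
                    if g[i] == '?' and S is not None and S[i] != x[i]:
                        spec = list(g)
                        spec[i] = S[i]
                        new_G.append(tuple(spec))
            G = new_G
    return S, G

# Toy data in the style of Mitchell's enjoy-sport example: (sky, temp) -> label.
data = [(('sunny', 'warm'), True),
        (('rainy', 'cold'), False),
        (('sunny', 'cold'), True)]
S, G = candidate_elimination(data, 2)
# Here S and G converge to the single hypothesis ('sunny', '?').
```

A new example would then be classified by checking it against the hypotheses bounded by S and G; when those hypotheses disagree, the majority-vote rule mentioned above can break the tie.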
The notion of version spaces was introduced by Mitchell in the early 1980s[2] as a framework for understanding the basic problem of supervised learning within the context of solution search.
Although the basic "candidate elimination" search method that accompanies the version space framework is not a popular learning algorithm, some practical implementations have been developed (e.g., Sverdlik & Reynolds 1992; Hong & Tsang 1997; Dubois & Quafafou 2002).
One solution to this problem was proposed by Dubois and Quafafou in the form of the Rough Version Space,[3] in which rough-set-based approximations are used to learn certain and possible hypotheses in the presence of inconsistent data.