Maximum disjoint set

Therefore, selecting an x that minimizes |MDS(N(x))| minimizes the loss from adding x to S. In particular, if we can guarantee that there is an x for which |MDS(N(x))| is bounded by a constant (say, M), then this greedy algorithm yields an M-factor approximation, as we can guarantee that |S| ≥ |MDS(C)| / M.

Such an upper bound M exists for several interesting cases: When C is a set of intervals on a line, M=1, and thus the greedy algorithm finds the exact MDS.
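A minimal sketch of the interval case, assuming the greedy rule picks the interval with the smallest right endpoint and that intervals are given as (left, right) pairs: every interval intersecting the picked one must contain its right endpoint, so those intervals pairwise intersect and |MDS(N(x))| = 1.

```python
def greedy_interval_mds(intervals):
    """Exact greedy MDS for closed intervals on a line.

    Repeatedly picking the interval with the smallest right endpoint works
    because every interval intersecting it contains that endpoint; such
    intervals pairwise intersect, so discarding them costs at most one
    interval of any optimal solution (M = 1).
    """
    chosen = []
    last_right = float("-inf")
    # Scanning in order of right endpoint is equivalent to repeatedly
    # selecting the remaining interval with the smallest right endpoint.
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_right:          # disjoint from all chosen intervals
            chosen.append((left, right))
            last_right = right
    return chosen

# The maximum disjoint set here has size 3, and the greedy rule finds it.
print(greedy_interval_mds([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (8, 10)]))
```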

When C is a set of unit disks, M = 3, because the leftmost disk (the disk whose center has the smallest x-coordinate) intersects at most 3 other pairwise-disjoint disks. Therefore the greedy algorithm yields a 3-approximation, i.e., it finds a disjoint set with a size of at least |MDS(C)|/3.

This may require discarding a small number of shapes that do not fit into any of the subsets, as explained in the following subsections.

Let C be a set of n axis-parallel rectangles in the plane, all with the same height H but with varying lengths.

There is an algorithm that finds a disjoint set with a size of at least |MDS(C)|/(1 + 1/k) in time O(n^{2k−1}), for every constant k > 1.[2]

The algorithm improves on the above-mentioned 2-approximation by combining dynamic programming with the shifting technique of Hochbaum and Maass.

For a long time, it was not known whether a constant-factor approximation exists for axis-parallel rectangles of different lengths and heights.

If a sequence of guillotine cuts exists in which a constant fraction of the rectangles are separated, then it can be used in a dynamic programming approach to find a constant-factor approximation to the MDS.

However, there are constant-factor approximation algorithms using non-guillotine cuts.

Let C be a set of n squares or circles of identical size.

Hochbaum and Maass[4] presented a polynomial-time approximation scheme for finding an MDS using a simple shifted-grid strategy.

Let C be a set of n fat objects, such as squares or circles, of arbitrary sizes.

There is a PTAS for finding an MDS based on multi-level grid alignment.

An algorithm of Erlebach, Jansen and Seidel[12] finds a disjoint set with a size of at least (1 − 1/k)^2 ⋅ |MDS(C)| in time n^{O(k^2)}, for every constant k > 1.

For every r, s between 0 and k, define D(r,s) as the subset of disks that are not intersected by any horizontal line whose index modulo k is r, nor by any vertical line whose index modulo k is s. By the pigeonhole principle, there is at least one pair (r,s) such that |MDS(D(r,s))| ≥ (1 − 2/k) ⋅ |MDS(C)|, i.e., we can find the MDS only in D(r,s) and miss only a small fraction of the disks in the optimal solution.

An algorithm of Chan[5] finds a disjoint set with a size of at least (1 − 2/k) ⋅ |MDS(C)| in time n^{O(k)}, for every constant k > 1.
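A sketch of how the subsets D(r,s) can be enumerated, assuming unit-diameter disks and grid lines at integer coordinates (neither assumption is specified above): the full algorithm would solve the MDS inside each D(r,s), where the problem decomposes into independent k-by-k blocks, and return the largest answer found.

```python
import math

def shifted_grid_subsets(disks, k):
    """Shifted-grid decomposition for unit-diameter disks (radius 1/2).

    `disks` is a list of (cx, cy) centers.  Grid lines are x = i and y = j for
    all integers i, j; a unit-diameter disk meets at most one line of each
    orientation.  For each residue pair (r, s) modulo k, D[(r, s)] contains
    the disks hit by no horizontal line with index = r (mod k) and no vertical
    line with index = s (mod k).  By the pigeonhole argument, some pair keeps
    at least a (1 - 2/k) fraction of any fixed disjoint set.
    """
    def hit(c):
        # Indices of the grid lines (at most two) within distance 1/2 of c.
        return {j for j in (math.floor(c), math.ceil(c)) if abs(c - j) <= 0.5}

    subsets = {}
    for r in range(k):
        for s in range(k):
            subsets[(r, s)] = [
                (cx, cy) for (cx, cy) in disks
                if all(j % k != s for j in hit(cx))     # survives vertical lines
                and all(j % k != r for j in hit(cy))    # survives horizontal lines
            ]
    return subsets

D = shifted_grid_subsets([(0.3, 0.2), (1.7, 2.4), (3.1, 0.9), (2.2, 3.3)], k=4)
print({rs: len(d) for rs, d in D.items()})
```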

The boundary of a cell of size R can be covered by 4k squares of size R/k; hence the number of disjoint fat objects intersecting the boundary of that cell is at most 4kc, where c is a constant measuring the fatness of the objects.

Therefore, if all objects are fat and k-aligned, it is possible to find the exact maximum disjoint set in time n^{O(kc)} using a divide-and-conquer algorithm.

The return value of the algorithm is the largest A(j); the approximation factor is (1 − 2/k), and the run time is n^{O(k)}.

Both versions can be generalized to d dimensions (with different approximation ratios) and to the weighted case.

Let C be a set of n fat objects, such as squares or circles, of arbitrary sizes.

Chan[5] also described an algorithm that finds a disjoint set with a size of at least (1 − O(1/√b)) ⋅ |MDS(C)| in time n^{O(b)}, for every constant b > 1.

If we could calculate MDS(C) exactly, we could make the constant a as low as 2/3 by a proper selection of the separator rectangle.

The worst case for the algorithm is when the split in each step is in the maximum possible ratio, which is a:(1 − a).

The algorithm[13] is based on a width-bounded geometric separator on the set Q of the centers of all disks in C. This separator theorem makes it possible to build an exact divide-and-conquer algorithm, whose run time satisfies a recurrence relation determined by the sizes of the two sides of the separator.

A pseudo-disks set is a set of objects in which the boundaries of every pair of objects intersect at most twice. (Note that this definition relates to a whole collection, and does not say anything about the shapes of the specific objects in the collection.)
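For example, any collection of circles is a pseudo-disks set, because the boundaries of two distinct circles meet in at most two points. A small illustrative check of that count (the function below is an illustration, not part of any cited algorithm):

```python
import math

def circle_boundary_intersections(c1, r1, c2, r2):
    """Number of points in which the boundaries of two circles meet."""
    d = math.dist(c1, c2)
    if d == 0:
        # Concentric: identical boundaries (degenerate) or no meeting point.
        return math.inf if r1 == r2 else 0
    if d > r1 + r2 or d < abs(r1 - r2):
        return 0        # far apart, or one circle strictly inside the other
    if d == r1 + r2 or d == abs(r1 - r2):
        return 1        # externally or internally tangent
    return 2            # the generic case: the boundaries cross twice

print(circle_boundary_intersections((0, 0), 1.0, (1.5, 0), 1.0))   # 2
print(circle_boundary_intersections((0, 0), 1.0, (3.0, 0), 1.0))   # 0
```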

A local search algorithm by Chan and Har-Peled[14] finds a disjoint set of size at least

Every exchange in the search step increases the size of S by at least 1, and thus can happen at most n times.

The algorithm is very simple; the difficult part is to prove the approximation ratio.
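A minimal sketch of such a local search, assuming only a generic intersection predicate (the predicate, the swap-size parameter t and the helper names are illustrative assumptions): repeatedly try to replace at most t objects of the current solution by t + 1 pairwise-disjoint objects.

```python
from itertools import combinations

def local_search_mds(objects, intersects, t=2):
    """Local-search sketch for the maximum disjoint set.

    `objects` is a list of shapes and `intersects(a, b)` a symmetric
    intersection predicate; both are assumptions of this sketch.  Starting
    from the empty set, repeatedly look for a subset X of the current
    solution S with |X| <= t and a subset Y of the remaining objects with
    |Y| = |X| + 1 such that removing X and adding Y keeps S pairwise
    disjoint.  Every such exchange grows S by one object.
    """
    def pairwise_disjoint(group):
        return all(not intersects(p, q) for p, q in combinations(group, 2))

    S = []
    improved = True
    while improved:
        improved = False
        rest = [o for o in objects if o not in S]
        for x_size in range(t + 1):
            for X in combinations(S, x_size):
                kept = [o for o in S if o not in X]
                for Y in combinations(rest, x_size + 1):
                    if pairwise_disjoint(kept + list(Y)):
                        S = kept + list(Y)
                        improved = True
                        break
                if improved:
                    break
            if improved:
                break
    return S

# Toy usage with axis-parallel rectangles given as (xmin, ymin, xmax, ymax).
def rect_intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

print(local_search_mds([(0, 0, 2, 2), (1, 1, 3, 3), (2.5, 0, 4, 2), (0, 2.5, 2, 4)],
                       rect_intersects))
```

Since each successful exchange grows the solution by one object, the outer loop runs at most n times, and each round enumerates only polynomially many candidate swaps for constant t.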

Using linear programming relaxation, it is possible to find a disjoint set of size at least

This is possible either with a randomized algorithm that has a high probability of success and run time
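A minimal sketch of one standard LP relaxation, using pairwise non-intersection constraints (the relaxation and the rounding step used in the results above may be stronger, and the rounding is not shown here):

```python
import numpy as np
from scipy.optimize import linprog

def mds_lp_relaxation(objects, intersects):
    """LP relaxation sketch for the maximum disjoint set.

    One fractional variable x_i per object, maximizing sum(x_i) subject to
    x_i + x_j <= 1 for every intersecting pair and 0 <= x_i <= 1.  The
    returned fractional optimum upper-bounds |MDS(C)|; a rounding step
    (not shown) would convert it into an actual disjoint set.
    """
    n = len(objects)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if intersects(objects[i], objects[j])]
    A, b = None, None
    if pairs:
        A = np.zeros((len(pairs), n))
        for row, (i, j) in enumerate(pairs):
            A[row, i] = A[row, j] = 1.0
        b = np.ones(len(pairs))
    res = linprog(c=-np.ones(n), A_ub=A, b_ub=b, bounds=[(0, 1)] * n)
    return res.x, -res.fun          # fractional solution and its value

# Toy usage with axis-parallel rectangles given as (xmin, ymin, xmax, ymax).
def rect_intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

x, value = mds_lp_relaxation([(0, 0, 2, 2), (1, 1, 3, 3), (2.5, 0, 4, 2)],
                             rect_intersects)
print(x, value)
```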

[Figure: a region quadtree with point data]