Heuristics: Local search 3



Theory:

Local search belongs to the heuristics for solving integer and combinatorial optimization problems. In contrast to linear optimization, combinatorial optimization also covers nonlinear problems. There are two ways to solve integer and combinatorial optimization problems. On the one hand, exact methods such as the decision tree method can be used. These exact methods have a disadvantage, however: they need a long runtime. To get around this problem, heuristics can be used. Heuristics only approximate the solution; they cannot guarantee to find an optimum. These heuristics can be divided into:

• an opening procedure, to identify a (first) admissible solution;

• local search and improvement procedures, to improve a given admissible solution;

• incomplete exact procedures, e.g. prematurely aborted branch-and-bound procedures;

• a combination of the first three procedures.


In the following, local search heuristics are elaborated:

Local search heuristics usually start with an admissible solution of the problem. This admissible solution can be found randomly or can be computed by one of the above-mentioned opening procedures. After that, one iterates from the considered solution x to a solution in its neighbourhood NB(x) by applying a transformation rule that has to be specified. There are three types of transformation rules:

• Modification of a solution in only one point

• Exchange of elements, e.g. inside of an order

• Moving of elements, e.g. inside of an order
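The three move types above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the source; the function names `flip`, `swap`, and `shift` are chosen here, and solutions are represented as plain lists (a 0/1 vector or an order).

```python
def flip(x, i):
    """Modification of a solution in only one point: toggle one 0/1 component."""
    y = list(x)
    y[i] = 1 - y[i]
    return y

def swap(x, i, j):
    """Exchange of elements, e.g. two positions inside an order."""
    y = list(x)
    y[i], y[j] = y[j], y[i]
    return y

def shift(x, i, j):
    """Moving of an element, e.g. to another position inside an order."""
    y = list(x)
    y.insert(j, y.pop(i))
    return y
```

Each function returns a new solution and leaves the input unchanged, so the original solution x stays available for comparing objective function values.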


The procedures can also be differentiated by the strategy used to analyse the neighbourhood:

• The order in which the solutions of the neighbourhood are analysed: randomly or systematically

• The strategy used to choose one of the neighbourhood’s solutions as the starting point of the next iteration:

- First-fit strategy: selection of the first solution found in the neighbourhood that is better than the previous solution.

- Best-fit strategy: analysis of the whole neighbourhood; selection of the best solution.
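The two selection strategies can be contrasted in a short sketch. This is an assumed implementation, not from the source: `neighbours` is taken to be a function yielding the neighbourhood NB(x), `f` is the objective function of a maximization problem, and both strategies return `None` when no neighbour improves on x.

```python
def first_fit(x, neighbours, f):
    """Take the first neighbour found that is better than x."""
    for y in neighbours(x):
        if f(y) > f(x):
            return y
    return None

def best_fit(x, neighbours, f):
    """Scan the whole neighbourhood, then take its best solution if it improves on x."""
    best = max(neighbours(x), key=f, default=None)
    if best is not None and f(best) > f(x):
        return best
    return None
```

First-fit can stop early and is cheaper per iteration; best-fit always pays for a full scan of NB(x) but makes the steepest possible improvement step.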


In general, there are three types of local search procedures:

• Deterministic approach: repeated applications to the same problem with the same starting conditions always identify the same solution.

• Stochastic (or random) approach: this approach contains a random component, so repeated applications to the same problem may identify different solutions. An example of a stochastic approach is the combination of a random ordering of the neighbourhood’s solutions with the first-fit strategy.

• Pure improvement method: the procedure terminates when no better solution can be found in the neighbourhood by iteration. The best solution found is a local optimum for the chosen neighbourhood. To find an even better solution, moves that lead to intermediate deteriorations of the objective function must be allowed. Heuristics which allow these kinds of moves are called local search procedures (in the narrow sense). These local search procedures belong to the so-called meta-heuristics. Examples of such meta-heuristics are simulated annealing and tabu search:

- Simulated annealing: a solution x’ in the neighbourhood is chosen at random. If x’ is better than x according to the objective function value, x is replaced by x’. If x’ is not better than x, the exchange takes place only with a certain probability. This probability depends on the size of the deterioration.

- Tabu search: every iteration analyses the neighbourhood NB(x). The non-prohibited neighbour with the best objective function value is chosen, even if this leads to a deterioration; that chosen neighbour is the starting point for the next iteration. Maximization problems use the principle of steepest ascent/mildest descent, minimization problems the principle of steepest descent/mildest ascent: one tries to realise the largest possible improvement or, failing that, the smallest possible deterioration of the objective function value. A worse solution is accepted only if no improvement can be reached. To prevent the return to already inspected solutions, the corresponding reverse moves are prohibited by setting them tabu.
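The acceptance rule of simulated annealing described above can be written out as a small sketch. This is a generic formulation for a maximization problem, not code from the source; the temperature parameter T and the optional pre-drawn uniform number `u` (used here only to make the rule testable) are assumptions of this sketch.

```python
import math
import random

def sa_accept(f_x, f_y, T, u=None):
    """Simulated-annealing acceptance rule for a maximization problem.

    An improvement is always accepted. A deterioration is accepted with
    probability exp(-(f_x - f_y) / T), so larger deteriorations (and lower
    temperatures T) make acceptance less likely. `u` allows injecting the
    uniform draw; by default a fresh random number is drawn.
    """
    if f_y >= f_x:
        return True
    if u is None:
        u = random.random()
    return u < math.exp(-(f_x - f_y) / T)
```

In a full simulated annealing run, T is lowered over the iterations, so deteriorating moves become rarer and the search gradually settles into a (hopefully good) local optimum.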

Example:

Example of a transformation rule: for a given knapsack problem, a move could for example be to remove a chosen item in x from the knapsack, or vice versa to add it. NB(x) then contains all solutions that differ from x in exactly one component.

Example of a knapsack problem solved with tabu search:

Max! f(x) = 3x1 + 4x2 + 2x3 + 3x4

s.t. 3x1 + 2x2 + 4x3 + x4 ≤ 9

xj ∈ {0,1} for j = 1,...,4

One admissible solution is x = (1,0,1,1).

For the neighbourhood NB(x) of x we only consider solutions obtained by changing exactly one position of the vector x. If we change x at position q from 0 to 1, we mark the move with q; if, vice versa, we change a 1 to 0 at position q, we mark it with q̄.

We always prohibit the reversal of the last two performed moves by writing the complementary moves into a tabu list. If we start with the solution x = (0,0,1,1), we get the following tabu table:


Iter. | Solution  | Tabu list | Objective function value
------|-----------|-----------|-------------------------
0     | (0,0,1,1) | –         | 5
1     | (0,1,1,1) | 2̄         | 9
2     | (0,1,0,1) | 2̄, 3      | 7
3     | (1,1,0,1) | 3, 1̄      | 10
4     | (1,1,0,0) | 1̄, 4      | 7
5     | (1,1,1,0) | 4, 3̄      | 9
6     | (0,1,1,0) | 3̄, 1      | 6


The solution of iteration 3 is the (only) optimal solution of the problem.

The next step would reach the solution of iteration 1 again, and the procedure would begin to cycle. The shorter the tabu list, the sooner the procedure starts cycling; a longer tabu list, however, carries the risk that no admissible neighbourhood solution can be found at all.
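The tabu table above can be reproduced with a short sketch of the procedure. This is an assumed implementation, not the book's code: moves are one-component flips, the best admissible non-tabu neighbour is chosen in each iteration (best-fit), and the tabu list always holds the positions whose reversal is prohibited for the last two performed moves.

```python
from collections import deque

C = [3, 4, 2, 3]   # objective function coefficients
W = [3, 2, 4, 1]   # constraint coefficients; weight limit is 9

def f(x):
    return sum(c * xi for c, xi in zip(C, x))

def weight(x):
    return sum(w * xi for w, xi in zip(W, x))

def tabu_search(x, iterations=6, tabu_len=2):
    # A flip at position j is its own reversal, so storing the last two
    # flipped positions prohibits exactly the complementary moves.
    tabu = deque(maxlen=tabu_len)
    trace = [(list(x), f(x))]
    for _ in range(iterations):
        best_j, best_y = None, None
        for j in range(len(x)):
            if j in tabu:
                continue
            y = list(x)
            y[j] = 1 - y[j]                  # one-component flip
            if weight(y) <= 9 and (best_y is None or f(y) > f(best_y)):
                best_j, best_y = j, y
        if best_j is None:                   # no admissible non-tabu neighbour
            break
        tabu.append(best_j)                  # set the reverse move tabu
        x = best_y
        trace.append((list(x), f(x)))
    return trace

trace = tabu_search([0, 0, 1, 1])
```

Running this from x = (0,0,1,1) visits the objective function values 5, 9, 7, 10, 7, 9, 6, i.e. exactly the rows of the table, including the deteriorating moves after the optimum (1,1,0,1) has been found in iteration 3.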


Source: Einführung in Operations Research. Wolfgang Domschke, Andreas Drexl. Springer, 6th edition, 2005.