Heuristics: Representation of Search Space and Neighborhoods

Current version as of 8 July 2013, 18:20

Search Problems in General

The primary difference between an optimization problem and a search problem is that a search problem concentrates on the variable values you have to fill in to reach the optimal function value, whereas an optimization problem focuses only on the optimal value itself.

Search spaces are defined by the following:

  • Search Space S: See below.
  • Set of valid transition operators: A function which describes the valid sub-solutions for the successive steps.
  • Initial state from set S: The starting point from which the problem begins to unfold.
  • Subset of goal states in S: Describes the boundaries of a valid solution.
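These four components can be sketched as a small, generic search-problem record; the following is a minimal illustration (the integer toy instance at the end is purely hypothetical):

```python
from dataclasses import dataclass
from typing import Callable, Generic, Iterable, TypeVar

State = TypeVar("State")

@dataclass(frozen=True)
class SearchProblem(Generic[State]):
    initial: State                                  # initial state from S
    successors: Callable[[State], Iterable[State]]  # valid transition operators
    is_goal: Callable[[State], bool]                # subset of goal states in S

def depth_first_search(problem):
    """Explore a finite search space; return the first goal state found."""
    stack, seen = [problem.initial], set()
    while stack:
        state = stack.pop()
        if problem.is_goal(state):
            return state
        if state not in seen:
            seen.add(state)
            stack.extend(problem.successors(state))
    return None

# Toy instance: states are integers, the operators add 1 or 2, the goal is 7.
toy = SearchProblem(initial=0,
                    successors=lambda s: [s + 1, s + 2] if s < 7 else [],
                    is_goal=lambda s: s == 7)
print(depth_first_search(toy))  # 7
```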

Search Spaces

There are multiple definitions of the Search Space. You can divide it into finite and infinite search spaces, and into solution and problem spaces.

Finite Search Spaces

The finite search space is characterized by a finite number of nodes and edges. You can use exact algorithms or heuristics to find the best solution, but for a finite yet large search space it is often more efficient to use heuristics due to their shorter runtime.

Infinite Search Spaces

The infinite search space is characterized by an infinite number of nodes and edges. Here only heuristics are practical: an algorithm that enumerates the whole search space would not terminate, as there are infinitely many solution possibilities. Heuristics have the advantage of not computing every last possibility but rather narrowing the search down to good solutions, so heuristics should be used for infinite search spaces.

Solution Spaces

In the solution space, the objective function defines an order of preference among all possible solutions. The nodes of the graph represent potential solutions (e.g. valid tours of a TSP). The edges between the nodes illustrate the transition from a given solution to a new (neighboring) solution.

As you can see below, the solution space can be portrayed as an undirected graph.

[[Datei:BSP_Solutionspace.jpg]]

Problem Spaces

In the problem space, the nodes depict the problems and sub-problems. The root of the tree is the initial problem, while the interim nodes are the sub-problems. The leaves represent the solutions, and each edge is the decomposition of a problem into a simpler sub-problem.

As you can see below, the problem space can be portrayed as a spanning tree.

[[Datei:BSP_Problemspace.jpg]]

Neighborhoods in General (Mathematically)

In directed Graphs

Let <math>G= \left ( V, R, \alpha, \omega \right )</math> be the directed graph with the functions <math>\alpha: R \rightarrow V</math> and <math>\omega: R \rightarrow V</math>, which assign to each arc its start node and end node.

Let <math>v \in V</math> be a node of <math>G</math>; then one can define:

  • <math>\delta_{G}^{+}(v) = \left \{ r \in R \mid \alpha(r) = v \right \}</math> the set of outgoing arcs of <math>v</math>
  • <math>\delta_{G}^{-}(v) = \left \{ r \in R \mid \omega(r) = v \right \}</math> the set of incoming arcs of <math>v</math>
  • <math>N_{G}^{+}(v) = \left \{ \omega(r)\mid r \in \delta_{G}^{+}(v) \right \}</math> the set of successors of <math>v</math>
  • <math>N_{G}^{-}(v) = \left \{ \alpha(r)\mid r \in \delta_{G}^{-}(v) \right \}</math> the set of predecessors of <math>v</math>

Two nodes <math>u,v \in V</math> are called adjacent, also known as neighbored, if an arc <math>r \in R</math> exists with <math>\alpha(r)=u</math> and <math>\omega(r)=v</math>, or <math>\alpha(r)=v</math> and <math>\omega(r)=u</math>. Therefore the neighborhood of <math>v</math> can be declared as <math>N(v) = N_{G}^{+}(v) \cup N_{G}^{-}(v)</math>.

In undirected Graphs

Let <math>G=(V,E,\gamma)</math> be the undirected graph with the function <math>\gamma : E \rightarrow V \times V</math>, which defines the two end nodes of each edge.

Two nodes <math>u,v \in V</math> are called adjacent, also known as neighbored, if an edge <math>e \in E</math> exists with <math>\gamma(e) = \left \{ u, v \right \}</math>. Therefore the neighborhood of <math>v</math> can be declared as <math>N(v) = \{ u \in V \mid \exists e \in E: \gamma(e) = \{ u, v \}\}</math>.

Conclusion

Two nodes are neighbored if an arc or an edge connects both nodes.

If you want the neighborhood of a set of nodes, you take the union of the neighborhoods of its individual nodes.
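The definitions above can be sketched directly in code; the small arc list below is a made-up example graph:

```python
# Hypothetical directed graph: each arc r is stored as a pair (alpha(r), omega(r)).
arcs = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")]

def successors(v):    # N_G^+(v) = { omega(r) | r with alpha(r) = v }
    return {w for (u, w) in arcs if u == v}

def predecessors(v):  # N_G^-(v) = { alpha(r) | r with omega(r) = v }
    return {u for (u, w) in arcs if w == v}

def neighborhood(v):  # N(v): union of successors and predecessors
    return successors(v) | predecessors(v)

def neighborhood_of_set(nodes):  # union of the single-node neighborhoods
    return set().union(*(neighborhood(v) for v in nodes))

print(sorted(neighborhood("A")))  # ['B', 'C']
print(sorted(neighborhood_of_set({"B", "C"})))  # ['A', 'B', 'C']
```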


Heuristics in reference to Search Space and Neighborhoods

Opening procedures

The opening procedures construct an initial feasible solution.

A few examples for opening procedures are:

  • Nearest Neighbor
  • Double-sided nearest Neighbor
  • Nearest Addition
  • Nearest Insert
  • Cheapest Insert
  • Farthest Insert

Improving procedures

The improving procedures take these feasible solutions and try to further improve them.

A few examples for improving procedures are:

  • 2-Opt
  • 3-Opt
  • R-Opt
  • 2-Nodes-Opt
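As an illustration of an improving procedure, here is a minimal 2-Opt sketch (the four-node square instance is invented for demonstration): it repeatedly reverses a segment of the tour whenever doing so shortens the tour.

```python
import itertools
import math

def tour_length(tour, dist):
    # Length of the closed tour (the last node connects back to the first).
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(tour)), 2):
            # Neighboring solution: the tour with the segment i..j reversed.
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate, dist) < tour_length(tour, dist):
                tour, improved = candidate, True
    return tour

# Four nodes on a unit square; the initial tour [0, 2, 1, 3] crosses itself.
points = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
dist = {u: {v: math.dist(points[u], points[v]) for v in points} for u in points}
print(two_opt([0, 2, 1, 3], dist))  # [0, 1, 2, 3]
```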

The Nearest-Neighbor-Algorithm

In General

The algorithm belongs to the category of "greedy algorithms" and has a runtime of <math>\mathcal O(n^2)</math>.

Let <math>G=(V,E,\gamma)</math> be the undirected and weighted graph and <math>T</math> the set of marked nodes.

  1. Set <math>T=\{\text{starting node}\}</math>
  2. Choose the edge <math>e=\{u,v\}</math> with the least weight from <math>v \in V \setminus T</math> to the previously chosen node <math>u \in T</math>
  3. Add <math>v</math> to <math>T</math>
  4. Repeat steps (2) and (3) until <math>T</math> contains all nodes
  5. Take the edge which connects the starting node with the previously chosen node
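The steps above can be sketched as follows; the three-node distance table is a made-up toy instance:

```python
def nearest_neighbor_tour(dist, start):
    # dist: dict of dicts with edge weights; start: the chosen starting node.
    tour, unvisited = [start], set(dist) - {start}
    while unvisited:                              # steps 2-4: always extend the
        u = tour[-1]                              # tour to the cheapest
        v = min(unvisited, key=lambda w: dist[u][w])  # unvisited neighbor
        tour.append(v)
        unvisited.remove(v)
    tour.append(start)                            # step 5: close the cycle
    return tour

# Toy instance with three nodes.
dist = {"A": {"B": 2, "C": 9}, "B": {"A": 2, "C": 6}, "C": {"A": 9, "B": 6}}
print(nearest_neighbor_tour(dist, "A"))  # ['A', 'B', 'C', 'A']
```

On a tie (the special case discussed in the example below), `min` simply keeps the first candidate it encounters, which matches the arbitrary choice the heuristic allows.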

Example

The Search Problem in reference to the example

  • Search Space S: All nodes {A,B,C,D,E,F} - therefore it is a finite problem space
  • Set of valid transition operators: Take the nearest not-yet-visited neighbor of the current node; once all nodes have been visited, go back to the starting node E
  • Initial state from set S: Node E
  • Subset of goal states in S: A cycle {E,C,A,B,D,F,E} through all nodes

Initial state

Here we have a classic (strongly simplified) Travelling Salesman Problem which we want to solve using the nearest-neighbor heuristic. As the nearest-neighbor heuristic is only an opening procedure, we will not necessarily obtain the optimal value of the objective function, and therefore not the best travelling route itself. However, the solution we find will be a reasonable starting point which can be improved further with improving procedures. The weighted edges between the nodes are not shown yet, although they exist.

[[Datei:Tsp_neighborhoods_initial_situation.png|mini|initial situation]]

Iteration steps

Here, we randomly pick one of the nodes (in this case node E) to be our starting node from which we will iterate further. The edges from E to every other node are now represented by the lines, with their weights as the numbers above them. As the weights differ from edge to edge and we want to find a route through all nodes that is as short as possible, we choose the edge with the smallest weight from E to another node. We are confronted with a special case of the nearest-neighbor heuristic: two edges share the smallest weight. As the algorithm does not look ahead, we can choose randomly between the two cheapest edges.

[[Datei:Tsp_neighborhoods_1.png|mini|after choosing the first node]]

We chose node C and are now confronted with new edges between C and all nodes we have not yet visited. The cheapest edge from C now leads to A.

[[Datei:Tsp_neighborhoods_2.png|mini|iteration 2]]

Repeat the last steps:

[[Datei:Tsp_neighborhoods_3.png|mini|iteration 3]] [[Datei:Tsp_neighborhoods_4.png|mini|iteration 4]]
[[Datei:Tsp_neighborhoods_5.png|mini|iteration 5]] [[Datei:Tsp_neighborhoods_6.png|mini|iteration 6]]

This goes on until we reach the final node that has not yet been visited, in this case node F. Now, as we have a Travelling Salesman Problem, we just need to close the cycle by connecting the last node with the starting node.

[[Datei:Tsp_neighborhoods_final_solution.png|mini|final solution]]

Summary

With this, the nearest-neighbor heuristic has terminated and we have found an opening travelling route from which we can further improve our cycle. The last step is to add up the weights of the travelled edges to obtain the value of our route.

In this case: 2+2+3+5+1+6 = 19

The solution is presented as: <math>E\rightarrow C \rightarrow A \rightarrow B \rightarrow D \rightarrow F\rightarrow E</math>
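The arithmetic above can be checked directly; the per-edge weights below are taken from the summands listed in the text (2+2+3+5+1+6) and are assumed to match the tour edges in order:

```python
tour = ["E", "C", "A", "B", "D", "F", "E"]
weights = {("E", "C"): 2, ("C", "A"): 2, ("A", "B"): 3,
           ("B", "D"): 5, ("D", "F"): 1, ("F", "E"): 6}
# Sum the weight of each consecutive edge along the closed tour.
length = sum(weights[(tour[i], tour[i + 1])] for i in range(len(tour) - 1))
print(length)  # 19
```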


Sources

Scripts:

  • Operations Research - Prof. Dr. Oliver Wendt (SS 2013)
  • Introduction to Network Optimization - Dr. Clemens Thielen (SS 2013)

Books:

  • Optimierung, Operations Research, Spieltheorie - Karl Heinz Borgwardt
  • Lineare Optimierung und Netzwerkoptimierung - Horst W. Hamacher und Kathrin Klamroth
  • Einführung in Operations Research - W. Domschke und A. Drexl

Internet:

  • [https://bisor.wiwi.uni-kl.de/orwiki/Repr%C3%A4sentation_des_Suchraums Wiki entry "Repräsentation des Suchraums"]
  • [http://de.wikipedia.org/wiki/Suchproblem Wikipedia entry "Suchproblem"]
  • [http://de.wikipedia.org/wiki/Suchraum Wikipedia entry "Suchraum"]
  • [http://de.wikipedia.org/wiki/Suchverfahren Wikipedia entry "Suchverfahren"]
  • [http://de.wikipedia.org/wiki/Nearest-Neighbor-Heuristik Wikipedia entry "Nearest-Neighbor-Heuristik"]
  • [http://de.wikipedia.org/wiki/Problem_des_Handlungsreisenden Wikipedia entry "TSP"]
About:

Brought to you by Anna Pfeiffer, Markus Kaiser and Viktor Barie



Comment by group 2

In your wiki entry you described the theoretical background of "search space" and "neighborhood" in great detail! You pointed out all aspects of this topic very well, and we couldn't find any mistakes in your description. We could have done this in a more detailed way, too. In contrast to our entry, you focused on the nearest-neighbor algorithm, which is the simplest of the search algorithms. We think that if we combine both of our entries (your theoretical part + nearest-neighbor algorithm + our VNS), we will cover the whole topic!