Johnson’s Algorithm

47th Friday Fun Session – 19th Jan 2018

We have seen why Dijkstra’s algorithm cannot work with negative edges, and why we cannot trivially add a constant to each of the edge weights to make them non-negative and proceed. This is where Johnson’s algorithm comes into play. It finds a special set of offset values with which it removes the negative edges (changes the negative edge weights to non-negative ones), and the transformed graph is then all set to work with Dijkstra’s algorithm.

How Does Johnson’s Algorithm Work?

Johnson’s algorithm starts with a graph having negative edge(s). Let’s go through it using an example as shown below.

[Figure 1: the example graph – vertices 1, 2, 3 with edges 1->2 (20), 1->3 (40) and 3->2 (-30)]

Add a New Node

It then adds a new vertex, let’s call it s, with an edge starting from it and ending at each of the vertices of the existing graph, each having a cost of 0, as we have done earlier.

[Figure 2: the graph with the new vertex s and zero-cost edges s->1, s->2 and s->3]

Apply Bellman-Ford

Then it applies Bellman-Ford, a Single Source Shortest Path (SSSP) algorithm that can work with a graph having negative edge(s). We use s as the source, and find the shortest paths from it to all other vertices.

We also need to check whether a negative cycle exists, something that Bellman-Ford can detect. If one exists, we cannot proceed further, as shortest paths cannot be found in a graph with a negative cycle. In our example graph, there is no negative cycle.

We find d[s, 1] = 0, d[s, 2] = -30, and d[s, 3] = 0, as shown below, where d[s, t] denotes the shortest path cost from s to t.

[Figure 3: shortest path costs from s – d[s, 1] = 0, d[s, 2] = -30, d[s, 3] = 0]

Adjust Original Edge Weights

Now, using these shortest path costs, the original edges are updated with the formula: w’[u, v] = w[u, v] + d[s, u] – d[s, v]. Applying it to the 3 edges of the original graph, we find:

w’[1, 2] = w[1, 2] + d[s, 1] – d[s, 2] = 20 + 0 – (-30) = 50

w’[1, 3] = w[1, 3] + d[s, 1] – d[s, 3] = 40 + 0 – 0 = 40

w’[3, 2] = w[3, 2] + d[s, 3] – d[s, 2] = (-30) + 0 – (-30) = 0
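As a quick sanity check, a few lines of Python (the variable names are mine, not from the post) reproduce these three adjusted weights:

d = {1: 0, 2: -30, 3: 0}                     # d[s, v], found by Bellman-Ford
w = {(1, 2): 20, (1, 3): 40, (3, 2): -30}    # original edge weights

w_adj = {(u, v): c + d[u] - d[v] for (u, v), c in w.items()}
print(w_adj)                                 # {(1, 2): 50, (1, 3): 40, (3, 2): 0}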

Now that we have adjusted the original edge costs, the new cost-adjusted graph (without s and its associated edges) no longer has any negative edge. Let’s see what the cost-adjusted graph looks like.

[Figure 4: the cost-adjusted graph – edges 1->2 (50), 1->3 (40) and 3->2 (0)]

Apply Dijkstra

With this non-negative edge graph we can proceed with Dijkstra’s algorithm. For each shortest path found in this graph from u to v, we have to adjust the cost back by subtracting d[s, u] – d[s, v] from it.
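To tie the steps together, here is a minimal, runnable sketch of the whole pipeline in Python. The graph representation, function and variable names are my own assumptions, not from this post: vertices is a list of node ids and edges is a list of (u, v, w) tuples.

import heapq

def johnson(vertices, edges):
    # Step 1: add a virtual source s with a 0-weight edge to every vertex.
    s = object()                      # cannot clash with real vertex ids
    aug = edges + [(s, v, 0) for v in vertices]

    # Step 2: Bellman-Ford from s gives the offsets h[v] = d[s, v].
    h = {v: float('inf') for v in vertices}
    h[s] = 0
    for _ in range(len(vertices)):    # |V|+1 vertices => |V| iterations
        for u, v, w in aug:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    if any(h[u] + w < h[v] for u, v, w in aug):
        return None                   # negative cycle: no valid answer

    # Step 3: reweight, w'[u, v] = w[u, v] + h[u] - h[v], all non-negative.
    adj = {v: [] for v in vertices}
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))

    # Step 4: Dijkstra from each vertex, then adjust the costs back.
    def dijkstra(src):
        dist = {v: float('inf') for v in vertices}
        dist[src] = 0
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue              # stale queue entry, skip
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    all_pairs = {}
    for u in vertices:
        du = dijkstra(u)
        all_pairs[u] = {v: du[v] - h[u] + h[v] for v in vertices}
    return all_pairs

Running it on the example graph of this post, johnson([1, 2, 3], [(1, 2, 20), (1, 3, 40), (3, 2, -30)])[1][2] gives 10, the cost of the path 1 -> 3 -> 2.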

Is the Shortest Path Still the Same?

We are adjusting edge costs to remove the negative edges. That way, we are changing the graph to some extent. However, while doing so we must preserve certain properties of it: what was the cheapest path in the original graph must still remain the cheapest path in the transformed graph. Let’s first verify whether that is indeed the case.

We will first look at the original graph (before edge cost adjustment). Let’s take a certain source-destination pair, (1, 2). There are two paths to reach from vertex 1 to vertex 2.

The first one (original):

d1[1, 2]

= from vertex 1 to vertex 2 directly using edge 1->2

= 20.

The second one (original):

d2[1, 2]

= from vertex 1 to 3 and then from 3 to 2

= 40 + (-30)

= 10.

Now let’s see how the costs of the same two paths change in the new cost adjusted graph.

The first one (cost adjusted):

d’1[1, 2]

= from vertex 1 to vertex 2 directly using edge 1->2

= 50.

The second one (cost adjusted):

d’2[1, 2]

= from vertex 1 to 3 and then from 3 to 2

= 40 + 0

= 40.

We see that both path costs have increased by 30, a constant. So the path that was the shortest from vertex 1 to vertex 2 in the original graph – the second path, using the two edges 1->3 and 3->2 – still remains the shortest path in the cost-adjusted graph.

So how did that happen? Let’s take a closer look at how the path cost changes.

The first one (cost adjusted):

d’1[1, 2]

= w’[1, 2]

= w[1, 2] + d[s, 1] – d[s, 2]

= d1[1, 2]  + d[s, 1] – d[s, 2]

The second one (cost adjusted):

d’2[1, 2]

= w’[1, 3] + w’[3, 2]

= w[1, 3] + d[s, 1] – d[s, 3] + w[3, 2] + d[s, 3] – d[s, 2]

= w[1, 3] + d[s, 1] + w[3, 2] – d[s, 2]

= w[1, 3] + w[3, 2] + d[s, 1] – d[s, 2]

= d2[1, 2] + d[s, 1] – d[s, 2]

So we see that both paths, for a certain source u and a certain destination v, have increased by a constant cost = d[s, u] – d[s, v], where s is the extra vertex that we added before applying the Bellman-Ford algorithm.

We can easily see that no matter how many paths are present between a certain source u and a certain destination v, and no matter how many edges each of those paths uses, each of them is adjusted by adding the same constant cost = d[s, u] – d[s, v] to it. Hence, the shortest path in the original graph remains the shortest path in the new cost-adjusted, non-negative edge graph.

Let’s consider a path that goes through 5 vertices: u, x1, x2, x3, and v.

In the cost-adjusted graph, the cost

d’[u, v]

= w’[u, x1] + w’[x1, x2] + w’[x2, x3] + w’[x3, v]

= w[u, x1] + d[s, u] – d[s, x1] + w[x1, x2] + d[s, x1] – d[s, x2] + w[x2, x3] + d[s, x2] – d[s, x3] + w[x3, v] + d[s, x3] – d[s, v]

= w[u, x1] + d[s, u] + w[x1, x2] + w[x2, x3] + w[x3, v] – d[s, v]

= w[u, x1] + w[x1, x2] + w[x2, x3] + w[x3, v] + d[s, u] – d[s, v]

= d[u, v] + d[s, u] – d[s, v]

By generalizing the above, we see that a constant cost d[s, u] – d[s, v] is getting added to all paths from u to v.

Are All Negative Edges Removed?

The second thing that we need to prove is that no negative edge remains in the adjusted graph. After applying Bellman-Ford, we computed the shortest paths from source s. Let d[s, u] and d[s, v] be the shortest path costs from s to any two vertices u and v, respectively. In that case, we can say:

d[s, v] <= d[s, u] + w[u, v]

=> 0 <= d[s, u] + w[u, v] – d[s, v]

=> 0 <= w[u, v] + d[s, u] – d[s, v]

=> 0 <= w’[u, v]

This proves that the new edge cost, w’[u, v], is always non-negative.

Why Would We Use Johnson’s Algorithm?

So with Johnson’s algorithm, we first use Bellman-Ford to get a set of offset values, with which we transform the graph with negative edges into a graph with all non-negative edges, so that we can apply Dijkstra’s algorithm on it.

But why would anyone want to do that? After all, both Bellman-Ford and Dijkstra are SSSP algorithms. What is the point of using one SSSP algorithm to transform a graph so that another SSSP algorithm can be used on the transformed graph?

Dijkstra’s Algorithm is Faster

Well, the reason is that the latter SSSP algorithm, namely Dijkstra’s, is much faster than Bellman-Ford. So, if we need to find shortest paths many times, it is better to first apply the more expensive SSSP algorithm – Bellman-Ford – to get the graph ready to work with Dijkstra’s algorithm, and then execute the much cheaper Dijkstra’s algorithm on this transformed graph as many times as we want – later.

Sparse Graph

But in such a situation, is it not better to run an All-Pairs Shortest Paths (APSP) algorithm like Floyd-Warshall? After all, Floyd-Warshall computes APSP at a cost of O(|V|³), while Bellman-Ford costs O(|V| * |E|), which can shoot up to O(|V|³) when |E| = |V|² for a dense graph.

Yes, that is correct. For a dense graph, Johnson’s algorithm is not likely to be useful. Johnson’s algorithm is preferable for a sparse graph, where Bellman-Ford is reasonably efficient to work with.


Dijkstra’s Problem with Negative Edge

46th Friday Fun Session – 12th Jan 2018

Dijkstra’s algorithm cannot work with negative edges. Also, we cannot trivially add a constant to each of the edge weights to make them non-negative and proceed further.

Why Does Dijkstra’s Algorithm Not Work with Negative Edges?

[Figure: source node 1 with edges 1->2 (20), 1->3 (40) and 3->2 (-30)]

In the above figure, we are trying to get shortest paths from source node 1 to all other nodes (node 2 and node 3). Since Dijkstra’s algorithm works by employing a greedy process, it outputs 20 as the shortest path cost to node 2.

As we can see, from node 1 we can go to two nodes – node 2 and node 3 – at costs of 20 and 40 respectively. Hence, going to node 2 is cheaper, and that is why it outputs 20 as the cheapest cost to reach node 2.

However, we know that the cheapest cost to reach node 2 is through node 3. And the associated cost is: 40 + (-30) = 10. So Dijkstra’s algorithm gets it wrong. It gets it wrong because it cannot foresee that later, a negative edge can bring down the total cost to below 20.

If we observe carefully, we see that the wrong calculation by Dijkstra’s algorithm happens due to the negative edge. Had the cost from node 3 to node 2 not been negative, it could never have brought the total cost below 20 after being added to 40.

Why Does Adding a Constant Cost to Each Edge Not Work?

Now that we realize Dijkstra’s algorithm fails due to the negative edge from node 3 to node 2, having the value -30, we might be tempted to add 30 to each of the edges. We might think that this way we can remove the negative edge, and that doing so would be fair; after all, we are adding the same value to each of the edges. Let’s do it and see what happens.

[Figure: the graph after adding 30 to every edge – 1->2 (50), 1->3 (70) and 3->2 (0)]

After updating the edge costs, the graph looks as shown above. So what is the cheapest path from node 1 to node 2 now?

Well, now the cheapest cost is 50, which uses the direct edge from node 1 to node 2. But this is not supposed to be the cheapest path, right? The cheapest path was node 1 -> node 3 -> node 2 before we adjusted the edge costs. Adjusting the edge costs must not change which path is the cheapest.

So why does that happen? If we observe, we find that the path node 1 -> node 3 -> node 2 uses two edges/segments – node 1 to node 3 and node 3 to node 2 – while the path node 1 -> node 2 uses just one edge/segment. The way we have updated the edge costs – adding a constant to each path segment – is not fair to a path using more path segments. For the path that uses two path segments, which was originally the cheapest path, we have added the constant 30 twice. For the path that uses just one path segment, we have added 30 only once. That way, we are unfair to the path using more path segments.

We must add a constant to each of the paths, not to each of the path segments.
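A tiny Python check (the names are mine, purely for illustration) makes the unfairness explicit:

paths = {"1->2": [20], "1->3->2": [40, -30]}     # segment costs of each path
for name, segments in paths.items():
    print(name, sum(segments), sum(c + 30 for c in segments))
# 1->2     20 50   <- becomes the "cheapest", wrongly
# 1->3->2  10 70   <- was the cheapest, penalized once per segment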

Solution

Johnson’s algorithm does exactly this – it adds a constant cost to each path from a certain source s to a certain target t. It does so by finding a special set of offset values that remove the negative edges from the graph. Once that is done, Dijkstra’s algorithm can work. But this works only in the absence of a negative cycle in the graph.


Dijkstra’s Algorithm

10th Friday Fun Session – 17th Mar 2017

Dijkstra’s algorithm is a Single-Source Shortest Path (SSSP) algorithm developed by Edsger Wybe Dijkstra. It uses a greedy process and yet finds the optimal solution. It looks similar to Breadth-first search.

Comparison with Bellman-Ford

It is asymptotically the fastest SSSP algorithm, at a cost of O(|E| + |V| log |V|), when the min-priority queue is implemented with a Fibonacci heap.

That is quite cheap, given Bellman-Ford’s complexity of O(|V||E|) to find the same, something that can become prohibitively expensive for a dense graph having |V|² edges.

However, while Bellman-Ford can work with negative edges and can detect a negative cycle, Dijkstra’s algorithm cannot work with negative edges. And since it cannot work with negative edges, there is no question of it detecting a negative cycle at all.

Standard Algorithm

dist[] //shortest path vector

p[] //predecessor vector, used to reconstruct the path

Q //vertex set

for each vertex v in Graph
  dist[v] = ∞
  p[v] = undefined
  add v to Q

dist[s] = 0

while Q is not empty
  u = vertex in Q with min dist[] value
  remove u from Q

  for each neighbor v of u
    alt = dist[u] + weight(u, v)
    if alt < dist[v]
      dist[v] = alt
      p[v] = u

return dist[], p[]

Given source vertex s, it finds the shortest distance from s to all other vertices. At first, it initializes the dist[] vector to infinity, to mean that it cannot reach any other vertex yet. And it sets dist[s] = 0, to mean that it can reach itself at a cost of 0, the cheapest. All vertices, including s itself, are added to the vertex set Q.

Then, it chooses the vertex with the minimum dist[] value. At first, s (set to u) would be chosen. Then, using each of the outgoing edges (u, v), it tries to minimize dist[v] by checking whether v can be reached more cheaply via u using edge (u, v). If yes, dist[v] is updated. Then it again retrieves the vertex u with the cheapest dist[u] value and repeats the same, until Q is empty. Whenever a vertex u is removed from Q, it means that the shortest distance from s to u has been found.

Since we are retrieving |V| vertices from Q, and for each vertex trying all its edges (at most |V|) to minimize the distances to other vertices, the cost can be O(|V|²).

So, here we see a greedy process where it is retrieving the vertex with min dist[] value.

Since retrieving a vertex u from Q means that we found the minimum distance from s to u, if we are solving shortest path from a single source s to a single destination d, then when u matches the destination d, we are done and can exit.

It can also be noted that from source s, we find the shortest distances to all other vertices, in the ascending order of their distances.

Finally, we see that the dist[] vector is continuously changing, and each time we retrieve a vertex u, we choose the one with the minimum dist[] value. That indicates that a min-priority queue might be the right choice of data structure for this algorithm.

Using Fibonacci Heap

dist[] //shortest path vector
p[] //predecessor vector, used to reconstruct the path
Q //priority queue, implemented by Fibonacci Heap

dist[s] = 0

for each vertex v
  if(s != v)
    dist[v] = ∞
    p[v] = undefined
  
  Q.insert_with_priority(v, dist[v]) // insert

while Q.is_empty() = false
  u = Q.pull_with_min_priority() // find min and delete min
  
  for each neighbor v of u
    alt = dist[u] + weight(u, v)
    if alt < dist[v]
      dist[v] = alt
      p[v] = u
      Q.decrease_priority(v, alt) //decrease key

return dist[], p[]

In the above algorithm, we have used a function called decrease_priority(), something that is absent in a standard priority queue but present in a Fibonacci heap. So the above algorithm is implemented using a Fibonacci heap.

A Fibonacci heap is a special implementation of a priority queue that supports the decrease key (decrease_priority()) operation – meaning we can decrease the value of a key while it is still inside the priority queue. It achieves constant amortized time for the insert, find min and decrease key operations, and O(log n) time for the delete min operation.

As for cost: since we call delete min once for each of the |V| vertices, and we treat each of the |E| edges once (with constant amortized decrease key), the cost here is O(|E| + |V| log |V|), as mentioned at the beginning of this post as the cost of Dijkstra’s algorithm.

Using Standard Priority Queue

A standard priority queue implementation takes O(log n) time for both the insert and delete operations and constant time for find min. But there is no way to change the value of a key (decrease key) while the item is still in the priority queue, something Dijkstra’s algorithm might need to do quite frequently, as we have already seen.

If a standard priority queue is used, one has to delete the item from the priority queue and then insert it again, costing O(log n) each time, or use an alternative to that effect. However, as long as a standard priority queue is used, it is going to be slower than a Fibonacci heap. With a standard priority queue, the algorithm would look like below:

dist[] //shortest path vector
p[] //predecessor vector, used to reconstruct the path
Q //standard priority queue

for each vertex v
  dist[v] = ∞
  p[v] = undefined

dist[s] = 0
Q.insert_with_priority(s, dist[s]) // insert

while Q.is_empty() = false
  u = Q.pull_with_min_priority() // find min and delete min
  
  for each neighbor v of u
    alt = dist[u] + weight(u, v)
    if alt < dist[v]
      dist[v] = alt
      p[v] = u
      Q.insert_with_priority(v, alt) // insert v even if it already exists
return dist[], p[]

There are two differences from the earlier algorithm:

First, we have not inserted all vertices into the standard priority queue at the beginning; rather, we inserted the source only.

Second, instead of decreasing the priority, which we cannot do using a standard priority queue, we keep on inserting vertex v whenever dist[v] decreases. That might mean inserting a vertex v again while it is already inside the queue with a higher priority/dist[v]. That is another way of pushing the old entry (same v but with higher priority) out of consideration for the algorithm. When the shortest distances from source s to all other vertices v are found, those pushed-aside entries will be pulled from the priority queue one by one and discarded. They will not affect the dist[] vector anymore. Thus the queue will be emptied and the algorithm will exit.
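For illustration, here is how this insert-again strategy might look in runnable Python, using the standard library’s heapq as the priority queue (the adjacency representation and the names are my assumptions, not from the post):

import heapq

def dijkstra(adj, s):
    # adj maps every vertex to a list of (neighbor, weight) pairs.
    dist = {v: float('inf') for v in adj}
    p = {v: None for v in adj}        # predecessor vector
    dist[s] = 0
    pq = [(0, s)]                     # only the source is inserted at first
    while pq:
        d, u = heapq.heappop(pq)      # find min and delete min
        if d > dist[u]:
            continue                  # a pushed-aside (stale) entry: discard
        for v, w in adj[u]:
            alt = d + w
            if alt < dist[v]:
                dist[v] = alt
                p[v] = u
                heapq.heappush(pq, (alt, v))  # insert v even if already there
    return dist, p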

Negative Edge

Please check Dijkstra’s Problem with Negative Edge for further details.


Floyd-Warshall Algorithm

35th Friday Fun Session – 29th Sep 2017

Floyd-Warshall, also known as Roy-Warshall, is an All-Pairs Shortest Path (APSP) algorithm developed by Robert Floyd, Bernard Roy, and Stephen Warshall. It is an example of dynamic programming that uses 3 nested loops. At a cost of O(|V|³), it is quite impressive, given that Bellman-Ford might encounter the same cost (O(|V||E|)) to find only Single Source Shortest Paths (SSSP) for a dense graph having |V|² edges. Floyd-Warshall can work with negative edges just like Bellman-Ford. After all, both are based on dynamic programming. As for detecting a negative cycle, once again, both can detect one. However, in the presence of a negative cycle, results from both are invalid.

Three Nested Loops

dist[][] //shortest path matrix
p[][] //predecessor matrix, used to reconstruct the path

dist[][] = ∞

for each vertex i
  dist[i][i] = 0

for each edge (i, j)
  dist[i][j] = weight(i, j)
  p[i][j] = j

for k = 1 to |V|
  for i = 1 to |V|
    for j = 1 to |V|
      if dist[i][j] > dist[i][k] + dist[k][j]
        dist[i][j] = dist[i][k] + dist[k][j]
        p[i][j] = p[i][k]

To compute the shortest path between any pair (i, j), we have considered each of the |V| vertices as an intermediate point k, and chosen the cheaper between i) the existing dist[i][j] and ii) the sum of i to k and then k to j, meaning i to j via k.

Short-circuiting an SSSP?

Does it mean that we can derive an SSSP solution for a source s at a cost of O(|V|²) – O(|V|) for each pair? To be precise, can we do the following for a given pair (i, j)?

for k = 1 to |V|
  if dist[i][j] > dist[i][k] + dist[k][j]
    dist[i][j] = dist[i][k] + dist[k][j]

After all, we have relaxed via all the intermediate nodes. Well, that will not work! Why?

Dynamic Programming

If we want to get the shortest path between (i, j) using intermediate nodes 1 to k, then we have to choose the cheaper between the below paths:

  1. Without using k: dist[i][j] using intermediate nodes 1 to k-1.
  2. Using k: dist[i][k] + dist[k][j], where both dist[i][k] and dist[k][j] should make use of intermediate nodes 1 to k-1.

At k = 0, dist[][] is initialized using the edge weights where edges exist, 0 for the diagonal (dist[v][v]), and infinity for the rest.

An Example

Suppose, we want to compute dist[2][3] when k = 5.

Then, dist[2][3] = min { dist[2][3], dist[2][5] + dist[5][3] }

Here, all three distances – dist[2][3], dist[2][5] and dist[5][3] – must already use intermediate nodes 1 to 4. Meaning, dist[2][5] is not the static cost set at k = 0 (the edge cost, 0 or infinity); rather, dist[2][5] has already been computed using k from 1 to 4. Similarly, dist[5][3] (and dist[2][3] as well) has also been computed using k from 1 to 4.

In other words, we cannot compute a certain dist[s][t] alone, using the intermediate nodes 1 to k. Rather for each intermediate node k, we need to compute dist[i][j] progressively, using the 3 nested loops, as shown earlier.

Obviously we could use recursion instead of the loops, but that would not save any work for us. In fact, while using recursion, if we are not reusing existing solutions to the sub-problems, we will repeat the computation – something very expensive.

Path Reconstruction

The predecessor matrix p keeps track of the shortest paths. If we have to find the best path from s to t, we know for sure that we start with s, so we print s. To know where we went from there, we look at p[s][t]. If that is t, we are done, as t is the destination. If not, we find another node r = p[s][t]: from s we went to the intermediate node r. So r becomes the new start s for the rest of the path, while the destination remains the same t. Again we look at p[s][t] and continue the same way until we reach t, printing each r (= p[s][t]) along the way.
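That walk is easy to code. Note that with the initialization p[i][j] = j above, p[i][j] actually stores the first hop from i toward j, which is exactly what the walk uses. A short Python sketch (the names are mine):

def reconstruct_path(p, s, t):
    # p[i][j] is the first hop on the best i -> j path; assumes t is reachable.
    path = [s]
    while s != t:
        s = p[s][t]     # move one hop closer to t
        path.append(s)
    return path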

Incremental Node Addition

Suppose, as of now, we have 4 nodes and their APSP is computed. At this point a 5th node arrives, along with some edges connecting it to the existing nodes. Instead of computing APSP from scratch, at a cost of O(|V|³) = O(125), we can use the already computed APSP and extend it to cover 5 nodes, at a cost of O(|V|²) = O(25).
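A sketch of how that extension might look (the post gives no code for this; the dict-of-dicts representation and all names here are my assumptions). Any shortest path uses the new vertex at most once, so one pass suffices:

def add_vertex(dist, new, in_edges, out_edges):
    # dist: existing APSP matrix as dict-of-dicts.
    # in_edges: {k: w} for each edge k -> new; out_edges: {k: w} for new -> k.
    old = list(dist)
    INF = float('inf')
    # Best paths into and out of the new vertex, via existing vertices only.
    for i in old:
        dist[i][new] = min((dist[i][k] + w for k, w in in_edges.items()),
                           default=INF)
    dist[new] = {j: min((w + dist[k][j] for k, w in out_edges.items()),
                        default=INF) for j in old}
    dist[new][new] = 0
    # Let existing pairs route through the new vertex, O(|V|^2).
    for i in old:
        for j in old:
            via = dist[i][new] + dist[new][j]
            if via < dist[i][j]:
                dist[i][j] = via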

Adjusting Edge Weight Changes

What if the weight of an edge changes (increases or decreases)? Do we need to re-compute APSP from scratch? Or can we adjust the existing results using some partial computations?


Bellman-Ford Algorithm

17th Friday Fun Session – 12th May 2017

We use Bellman-Ford Algorithm to find the shortest path from a single source node/vertex (red color) to all destination nodes/vertices.

Let’s use our intuition

[Figure: city-1 (source, red) -> city-2 at cost 5, city-2 -> city-3 at cost 6]

Given that the distance from city-1 (city-1 is node 1 here) to city-2 is 5, and from city-2 to city-3 is 6, if I have to travel from city-1 to city-2 and city-3 respectively, what would be the cheapest way to do so?

We can start from city-1 and reach city-2 at cost 5. Now that we have arrived at city-2 at cost 5, we can add cost 6 to it and reach city-3 via city-2. Thus, the shortest paths from city-1 to city-2 and city-3 are respectively 5 and 11.

[Figure: shortest paths from city-1 – city-2 at cost 5, city-3 at cost 11]

Distance

Since city-1 is the source, we can add a self-loop on it with cost 0. That means reaching city-1 from itself would cost 0. We also set the cost of reaching city-2 and city-3 to infinity; we do so because, as of now, we don’t know what it would cost to reach them, so we put the maximum possible cost. Let’s call this array distance. So we have distance [1] = 0, distance [2] = ∞ and distance [3] = ∞.

Predecessor

Let’s also maintain another array, called predecessor, to indicate the last node from which we arrived at each node. We set predecessor [1] = 0, predecessor [2] = 0, predecessor [3] = 0. Since we have not arrived at city-2 and city-3 yet, we set their predecessor values to something invalid (0). City-1 is the source, so it can be set to 0 as well.

Relaxation

Now we take each path/edge. We have two edges here: the first one from city-1 to city-2 with cost 5, and the second one from city-2 to city-3 with cost 6. Now let’s do what is called relaxation on each of the edges.

We see that using the first edge we can arrive at city-2 from city-1 at a cost of 5 (distance [1] + cost of the first edge). Since 5 is less than the existing distance of city-2, which is ∞, we update distance [2] to 5. We also note that we arrived here from city-1 and hence got this new distance, so we also update predecessor [2] = 1.

Now let’s do relaxation on edge 2. We see that distance [3], which is ∞ as of now, can be improved by using edge 2. We set distance [3] = distance [2] + cost of edge 2 = 5 + 6 = 11. Since we arrived here from city-2, we update predecessor [3] = 2.

Does order of edges for relaxation matter?

Now we are done with relaxation on all the edges once. First, we did the relaxation on the first edge, then on the second edge. What would happen if we changed the order – that is, did the relaxation on the second edge first and then on the first edge?

Let’s do it. Start the relaxation afresh with new edge order. As of now we have predecessor [1] = 0, predecessor [2] = 0, predecessor [3] = 0. Also distance [1] = 0, distance [2] = ∞ and distance [3] = ∞.

We do relaxation on the second edge (city-2 to city-3 at cost 6) first. We see that both distance [2] and distance [3] are ∞. Hence there is no chance to improve distance [3], since distance [2] + 6 = ∞ + 6 = ∞, which is no better than the existing distance [3], also ∞. Hence relaxing the second edge did not yield anything.

Let’s do relaxation on first edge. We know that would result in distance [2] = 5 and predecessor [2] = 1.

Well, at this point we see that we are done with relaxation on all the edges once and yet we have not found the shortest path to reach city-3.

So when shall we get the result?

Iteration

That brings us to the next concept, called iteration. Relaxation on all the edges once is called an iteration. So how many iterations do we need to get the shortest paths to all destination nodes? Let’s use our intuition on the example we are working on. We have 3 nodes, so if we have to reach from one end (say node 1) to the other end (say node 3), the maximum number of edges we might have to travel is 2 – the number of nodes minus 1. Even if we do the relaxation on the edges in the worst order – choosing first the edge furthest from the source (or closest to the destination; in this case, the second edge, going from city-2 to city-3 at cost 6) – we see that at each iteration we would still extend the settled path from the source by at least one edge. Hence within 2 iterations we will certainly reach all reachable destinations (I am saying reachable because not all destinations might be reachable).

Let’s continue our workout from where we left off. Let’s start iteration 2 for the case where we did the relaxation on the second edge first. In this 2nd iteration, we again start with the second edge. This time we can update distance [3] = distance [2] + 6 = 5 + 6 = 11, and predecessor [3] = 2.

We are done with 2 iterations, and we have found the results for reaching both city-2 and city-3 from city-1.

Are all nodes reachable?

Let’s consider the below example, constructed by adding one more node, node 4.

[Figure: the same graph with a new city-4 that has no incoming edge]

We will see that distance [4] will remain ∞ and predecessor [4] will remain 0 (invalid) after |V| - 1 = 4 - 1 = 3 iterations, where |V| is the number of nodes/vertices. This is because there is no incoming edge (path) to city-4. Hence city-4 is unreachable.

Did order of edges for relaxation really matter?

We have seen that intermediate results (distance and predecessor values) might vary based on the order of edges we chose for relaxation, but the final result after all the iterations will still be the same. Hence the order of edges on which we do relaxation does not really matter (as far as the final result is concerned).

So after |V| – 1 iterations we have got the correct result?

Unfortunately, not necessarily! Well, we did get the final result. But as of now we don’t know whether the result is valid or not. That sounds interesting: we have found the result and still we don’t know whether it is correct/valid. So what is the issue? Well, let’s consider the below case.

[Figure: a third edge added from city-3 to city-1 with cost -12, forming a negative cycle]

I have added a third edge, from city-3 to city-1, with cost -12. Negative cost? Why? We won’t answer the why question, but let’s answer the what question. Cost -12 means reaching city-1 from city-3 would cost -12.

Let’s continue our workout from where we had finished 2 iterations, relaxing the second edge first. Let’s assume we also considered the third edge, and that the third edge was considered first for relaxation in each iteration. distance [3] got a less-than-infinity value only in the 2nd iteration, after the 3rd edge had already been relaxed. So the third edge was never able to update any node’s distance through the 2 iterations. That means the values (distance and predecessor) we got last time would be the same even with the presence of the third edge, after 2 iterations.

Since we just added an extra edge (the 3rd edge) but no new nodes, the number of iterations we have to do still remains 2. That also means the result we found so far is still what the algorithm produces in this case. But is the result (distance [2] = 5 and distance [3] = 11) correct in this new situation?

Negative cycle

Now that we have arrived at city-3 at cost 11, we can go to city-1 at cost 11 + (-12) = -1, then to city-2 at -1 + 5 = 4 (cheaper than the 5 we found earlier), and so on. The more we travel around this cycle, the less cost we incur. Hence the shortest paths found after 2 iterations are not valid.

So how do we find out that the result is invalid? Well, after we are done with |V| - 1 iterations, we have to do one more iteration, the |V|th one (a cycle involving |V| nodes can be traversed with |V| edges). If that changes the distance value for any node, then there exists a negative cycle – a cycle whose edge costs sum to a negative value. When a negative cycle is present in a graph, the answer found is invalid.

Negative edge vs. negative cycle

Does a negative edge mean a negative cycle? Does the presence of a negative edge mean no answer can be found?

[Figure: a graph with a negative edge but no negative cycle]

A negative edge is fine with Bellman-Ford, as in the above example; a correct solution can still be found. A correct solution cannot be found when there is a negative cycle. But Bellman-Ford can detect a negative cycle, and in that case it can indicate that a correct solution was not found.

Are all iterations required?

Not really. When we did the relaxation on the first edge first, we had already found the shortest paths to both city-2 and city-3 after the first iteration. How do we know? Well, at iteration 2 we would have found that no distance got updated. If an iteration does not change any distance value, we can terminate the algorithm there and return a valid result, because in that case subsequent iterations are not going to change anything. It also means there is no negative cycle.

The shortest path sequence

We can use the predecessor array recursively to get the shortest path sequence. For example, earlier after 2 iterations we got the following result.

distance [1] = 0, distance [2] = 5, distance [3] = 11

predecessor [1] = 0, predecessor [2] = 1, predecessor [3] = 2

If we want to find the shortest path sequence to city-3, we can look up predecessor [3], which is 2; recursively we check predecessor [2], which is 1, and that equals the source node, city-1. So we stop, and the sequence is city-1 to city-2 to city-3.
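That recursive walk is a few lines of Python (a sketch; the names are mine):

def path_to(predecessor, source, t):
    # Walk the predecessor array back from t to the source, then reverse.
    path = [t]
    while t != source:
        t = predecessor[t]
        path.append(t)
    return path[::-1]

# path_to({1: 0, 2: 1, 3: 2}, 1, 3) returns [1, 2, 3].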

Distance for a particular node can be updated more than once in an iteration

[Figure: two nodes with two edges from node 1 to node 2, costs 5 and 2]

In the example above we have two nodes, so we have to do one iteration. We have two edges from node 1 to node 2: the first with cost 5, the second with cost 2. Source is node 1. If we relax the first edge, distance [2] will be 5. Subsequent relaxation on the second edge will update the distance of node 2 again, to 2 (because the existing distance 5 > 0 + 2). We see that the distance of node 2 got updated twice within the same iteration.

The algorithm

Now that we have done with the workout, let’s write down the algorithm.

Function BellmanFord()
{
  input = G {V, E};
  distance[] = ∞;
  predecessor[] = -1;
  distance[sourceNode] = 0;

  for i = 1 to |V|-1
  {
    valueChanged = false;
    for j = 1 to |E|
      valueChanged = Relax (E[j]) || valueChanged;

    if(!valueChanged)
      return Result();
  }

  for j = 1 to |E|
    if(Relax(E[j]))
    {
      print 'negative cycle detected, solution not possible';
      return;
    }

  return Result();
}

Function Relax (e)
{
  if(distance[e.to] > distance[e.from] + e.cost)
  {
    distance[e.to] = distance[e.from] + e.cost;
    predecessor[e.to] = e.from;
    return true;
  }

  return false;
}

Function Result()
{
  print 'success';
  print distance[];
  print predecessor[];
}

The complexity

For each of the iterations (the number of vertices minus 1), we are iterating over all the edges. That means the complexity is O(|V||E|).

GitHub: Manipulating Money Exchange
