
Visiting the Adjacencies of a List of Vertices

Traversing the adjacencies of a list of vertices is a simple graph kernel that is used in more complicated algorithms such as single source shortest path and breadth first search. In an adjacency list traversal, some operation is typically applied to each visited vertex. The operation is often simple but can also be more complex. We give a series of algorithms for adjacency list traversal to demonstrate how good performance and parallelization can be achieved while using generic constructs in the MTGL. All of the adjacency list traversal algorithms operate on a compressed sparse row (CSR) format graph. The code for the adjacency list traversal algorithms is located in test/test_static_graph6.cpp.
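
For readers unfamiliar with the format, the sketch below illustrates the CSR layout that the algorithms assume. The field names n, index, and end_points are the ones the code uses; the three-vertex example graph is ours, filled in by hand for illustration.

// Illustrative only: the CSR layout the algorithms below assume, filled in
// by hand for a directed graph with the edges 0->1, 0->2, and 1->2.
//   index        has n + 1 entries; the adjacencies of vertex i occupy
//                positions index[i] through index[i + 1] - 1 of end_points.
//   end_points   has m entries; end_points[j] is the target vertex of edge j.
size_type example_index[]      = { 0, 2, 3, 3 };
size_type example_end_points[] = { 1, 2, 2 };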

Consider Algorithm 1 shown below.

void count_adjacencies_higher_id(static_graph<directedS>* g, size_type* indeg)
{
  size_type i, j;
  size_type order = g->n;
  size_type* index = g->index;
  size_type* end_points = g->end_points;
  #pragma mta assert noalias *indeg
  for (i = 0; i < order; i++)
  {
    size_type begin = index[i];
    size_type end = index[i + 1];
    for (j = begin; j < end; j++)
    {
      if (i < end_points[j]) mt_incr(indeg[end_points[j]], 1);
    }
  }
}

The algorithm is a straightforward C implementation that traverses the adjacencies of every vertex in a graph and determines, for each vertex, the number of adjacent vertices with a higher id. It is a simple nested loop whose body is a single if statement.

Now look at the canal output for the nested loop.

           |   #pragma mta assert noalias *indeg
           |   for (i = 0; i < order; i++)
           |   {
           |     size_type begin = index[i];
           |     size_type end = index[i + 1];
           |     for (j = begin; j < end; j++)
           |     {
    8 PP:m |       if (i < end_points[j]) mt_incr(indeg[end_points[j]], 1);
++ function T1 mtgl::mt_incr<T1, T2> inlined
           |     }
           |   }

Note the PP:m marking to the left of the if statement. The capital P's mean the compiler automatically parallelized both levels of the loop nest. The m means the compiler automatically performed a manhattan loop collapse.

To see the real cost of this loop, we need to search the loop annotations for the loops (including compiler-generated ones) that belong to count_adjacencies_higher_id(). There are four pertinent loops: 3, 4, 5, and 8.

Loops 3, 4, and 5 represent a recurrence. Loop 3 is stage 1 of the recurrence. It performs 3 instructions including 1 load and 1 store. Loop 4 is the communication phase for stage 1 of the recurrence. It performs 13 instructions including 1 load, 1 store, 1 reload, and 3 branches. Loop 5 is stage 2 of the recurrence. It performs 2 instructions including 1 load and 1 store.

Loop   3 in count_adjacencies_higher_id(mtgl::static_graph<(int)1> *, unsigned long *) in loop 2
       Stage 1 of recurrence
       Compiler generated
       Loop summary: 1 loads, 1 stores, 0 floating point operations
		3 instructions, needs 44 streams for full utilization
		pipelined

Loop   4 in count_adjacencies_higher_id(mtgl::static_graph<(int)1> *, unsigned long *) in loop 2
       Stage 1 recurrence communication
       Loop not pipelined: not structurally ok
       Compiler generated
       Loop summary: 13 instructions, 0 floating point operations
		1 loads, 1 stores, 1 reloads, 0 spills, 3 branches, 0 calls

Loop   5 in count_adjacencies_higher_id(mtgl::static_graph<(int)1> *, unsigned long *) in loop 2
       Stage 2 of recurrence
       Compiler generated
       Loop summary: 1 loads, 1 stores, 0 floating point operations
		2 instructions, needs 90 streams for full utilization
		pipelined

Loop 8 is the main work performed by the body of the inner loop. It performs 6 instructions including 1 load, 1 store, and 2 branches.

Loop   8 in count_adjacencies_higher_id(mtgl::static_graph<(int)1> *, unsigned long *) at line 64 in loop 7
       Loop not pipelined: not structurally ok
       Parallel section of loop from level 2
       Loop summary: 6 instructions, 0 floating point operations
		1 loads, 1 stores, 0 reloads, 0 spills, 2 branches, 0 calls

But what happens if a user desires to do something more complicated? Consider Algorithm 2 shown below.

typedef bool (*pred_t)(size_type, size_type);

#pragma mta inline
inline bool test_pred_func(size_type i, size_type j)
{
  return (i < j);
}

void count_adjacencies_higher_id_func_ptr(static_graph<directedS>* g,
                                          size_type* indeg, pred_t my_func)
{
  size_type i, j;
  size_type order = g->n;
  size_type* index = g->index;
  size_type* end_points = g->end_points;
  #pragma mta assert noalias *indeg
  #pragma mta assert parallel
  for (i = 0; i < order; i++)
  {
    size_type begin = index[i];
    size_type end = index[i + 1];
    #pragma mta assert parallel
    for (j = begin; j < end; j++)
    {
      if (my_func(i, end_points[j])) mt_incr(indeg[end_points[j]], 1);
    }
  }
}

This algorithm does the same thing as Algorithm 1, but it performs the id test by calling a function through a function pointer. The function test_pred_func is passed to count_adjacencies_higher_id_func_ptr() later in the code. A function pointer is the natural C mechanism for letting a user perform a generic operation on each of the adjacencies.
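
For illustration, a call site for the two versions might look like the following. This is a hypothetical invocation, not the actual driver in test/test_static_graph6.cpp, and it assumes g and indeg have already been allocated and initialized.

// Hypothetical call sites; g and indeg are assumed to be set up already.
count_adjacencies_higher_id(g, indeg);                           // Algorithm 1
count_adjacencies_higher_id_func_ptr(g, indeg, test_pred_func);  // Algorithm 2

Passing test_pred_func through the pred_t pointer hides the body of the predicate from the compiler at the point of the call inside the inner loop.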

Looking at the canal output, we see that the compiler does not inline the function called through the function pointer, and we have to use assert parallel pragmas to get the compiler to parallelize the code. This is indicated by the pp:m marking to the left of the function call. The lowercase p's mean that the compiler could not automatically parallelize the loop but that the user forced parallelization with a pragma. The number of instructions and memory references increases significantly.

           |   #pragma mta assert noalias *indeg
           |   #pragma mta assert parallel
           |   for (i = 0; i < order; i++)
           |   {
           |     size_type begin = index[i];
           |     size_type end = index[i + 1];
           |     #pragma mta assert parallel
           |     for (j = begin; j < end; j++)
           |     {
   14 pp:m |       if (my_func(i, end_points[j])) mt_incr(indeg[end_points[j]], 1);
++ function T1 mtgl::mt_incr<T1, T2> inlined
           |     }
           |   }

Looking at the loop annotations, there are four pertinent loops: 11, 12, 13, and 14.

Loops 11, 12, and 13 represent a recurrence. Loop 11 is stage 1 of the recurrence. It performs 3 instructions including 1 load and 1 store. Loop 12 is the communication phase for stage 1 of the recurrence. It performs 13 instructions including 1 load, 1 store, 1 reload, and 3 branches. Loop 13 is stage 2 of the recurrence. It performs 2 instructions including 1 load and 1 store.

Loop  11 in count_adjacencies_higher_id_func_ptr(mtgl::static_graph<(int)1> *, unsigned long *,
                                                 bool (*)(unsigned long, unsigned long)) in loop 10
       Stage 1 of recurrence
       Compiler generated
       Loop summary: 1 loads, 1 stores, 0 floating point operations
		3 instructions, needs 44 streams for full utilization
		pipelined

Loop  12 in count_adjacencies_higher_id_func_ptr(mtgl::static_graph<(int)1> *, unsigned long *,
                                                 bool (*)(unsigned long, unsigned long)) in loop 10
       Stage 1 recurrence communication
       Loop not pipelined: not structurally ok
       Compiler generated
       Loop summary: 13 instructions, 0 floating point operations
		1 loads, 1 stores, 1 reloads, 0 spills, 3 branches, 0 calls

Loop  13 in count_adjacencies_higher_id_func_ptr(mtgl::static_graph<(int)1> *, unsigned long *,
                                                 bool (*)(unsigned long, unsigned long)) in loop 10
       Stage 2 of recurrence
       Compiler generated
       Loop summary: 1 loads, 1 stores, 0 floating point operations
		2 instructions, needs 90 streams for full utilization
		pipelined

Loop 14 is the main work performed by the body of the inner loop. It performs 25 instructions including 4 loads, 1 store, 12 reloads, and 2 branches. This is a significant increase in instructions and memory operations over the previous version!

Loop  14 in count_adjacencies_higher_id_func_ptr(mtgl::static_graph<(int)1> *, unsigned long *,
                                                 bool (*)(unsigned long, unsigned long)) at line 99 in region 9
       In parallel phase 3
       Interleave scheduled
       General loop collapse
       Loop not pipelined: not structurally ok
       Loop from level 2 combined with this loop
       Loop summary: 25 instructions, 0 floating point operations
		4 loads, 1 stores, 12 reloads, 0 spills, 2 branches, 2 calls

When programming in C++, a natural way to let a user perform a generic operation is a function object. Consider Algorithm 3 shown below. This algorithm is the same as the previous ones except that it performs the id test using a function object. The function object test_pred_func_obj is passed to count_adjacencies_higher_id_func_obj() later in the code.

class test_pred_func_obj {
public:
  inline bool operator()(size_type i, size_type j) { return (i < j); }
};

template <class pred>
void count_adjacencies_higher_id_func_obj(static_graph<directedS>* g,
                                          size_type* indeg, pred my_func)
{
  size_type order = g->n;
  size_type *index = g->index;
  size_type *end_points = g->end_points;
  #pragma mta assert noalias *indeg
  for (size_type i = 0; i < order; i++)
  {
    size_type begin = index[i];
    size_type end = index[i + 1];
    for (size_type j = begin; j < end; j++)
    {
      if (my_func(i, end_points[j])) mt_incr(indeg[end_points[j]], 1);
    }
  }
}
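
For illustration, a hypothetical call site for this version simply passes a temporary function object; again, this is not the actual test driver.

// Hypothetical call site for Algorithm 3.
count_adjacencies_higher_id_func_obj(g, indeg, test_pred_func_obj());

Because the predicate's type is a template parameter, the compiler sees the body of operator() at the call site and can inline it.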

Looking at the canal output, we see that once again the compiler was able to automatically parallelize the loop and perform a manhattan loop collapse.

           |   #pragma mta assert noalias *indeg
           |   for (size_type i = 0; i < order; i++)
           |   {
           |     size_type begin = index[i];
           |     size_type end = index[i + 1];
           |     for (size_type j = begin; j < end; j++)
           |     {
   90 PP:m |       if (my_func(i, end_points[j])) mt_incr(indeg[end_points[j]], 1);
++ function T1 mtgl::mt_incr<T1, T2> inlined
++ function test_pred_func_obj::operator  inlined
           |     }
           |   }

Looking at the loop annotations, there are four pertinent loops: 43, 53, 63, and 90.

Loops 43, 53, and 63 represent a recurrence. Loop 43 is stage 1 of the recurrence. It performs 3 instructions including 1 load and 1 store. Loop 53 is the communication phase for stage 1 of the recurrence. It performs 13 instructions including 1 load, 1 store, 1 reload, and 3 branches. Loop 63 is stage 2 of the recurrence. It performs 2 instructions including 1 load and 1 store.

Loop  43 in void count_adjacencies_higher_id_func_obj<T1>(mtgl::static_graph<(int)1> *, unsigned long *, T1)
[with T1=test_pred_func_obj] in loop 33
       Stage 1 of recurrence
       Compiler generated
       Loop summary: 1 loads, 1 stores, 0 floating point operations
		3 instructions, needs 44 streams for full utilization
		pipelined

Loop  53 in void count_adjacencies_higher_id_func_obj<T1>(mtgl::static_graph<(int)1> *, unsigned long *, T1)
[with T1=test_pred_func_obj] in loop 33
       Stage 1 recurrence communication
       Loop not pipelined: not structurally ok
       Compiler generated
       Loop summary: 13 instructions, 0 floating point operations
		1 loads, 1 stores, 1 reloads, 0 spills, 3 branches, 0 calls

Loop  63 in void count_adjacencies_higher_id_func_obj<T1>(mtgl::static_graph<(int)1> *, unsigned long *, T1)
[with T1=test_pred_func_obj] in loop 33
       Stage 2 of recurrence
       Compiler generated
       Loop summary: 1 loads, 1 stores, 0 floating point operations
		2 instructions, needs 90 streams for full utilization
		pipelined

Loop 90 is the main work performed by the body of the inner loop. It performs 6 instructions including 1 load, 1 store, and 2 branches. This is once again the same amount of work as the first version.

Loop  90 in void count_adjacencies_higher_id_func_obj<T1>(mtgl::static_graph<(int)1> *, unsigned long *, T1)
[with T1=test_pred_func_obj] at line 123 in loop 81
       Loop not pipelined: not structurally ok
       Parallel section of loop from level 2
       Loop summary: 6 instructions, 0 floating point operations
		1 loads, 1 stores, 0 reloads, 0 spills, 2 branches, 0 calls

Now, consider Algorithm 4 shown below. This is the same as Algorithm 3 (it also uses a function object), but it is written with the syntactic sugar of the MTGL graph API. The graph adapter used is the static graph adapter, the MTGL's implementation of a compressed sparse row graph.

template <class Graph, class predicate>
void count_adjacencies_higher_id_mtgl(
    Graph& g, typename graph_traits<Graph>::size_type* indeg, predicate f)
{
  typedef typename graph_traits<Graph>::size_type size_type;
  typedef typename graph_traits<Graph>::vertex_descriptor vertex_t;
  typedef typename graph_traits<Graph>::vertex_iterator vertex_iterator_t;
  typedef typename graph_traits<Graph>::adjacency_iterator adj_iter_t;

  vertex_id_map<Graph> vid_map = get(_vertex_id_map, g);
  size_type order = num_vertices(g);
  vertex_iterator_t verts = vertices(g);
  #pragma mta assert noalias *indeg
  for (size_type i = 0; i < order; i++)
  {
    vertex_t u = verts[i];
    size_type uid = get(vid_map, u);
    adj_iter_t adjs = adjacent_vertices(u, g);
    size_type end = out_degree(u, g);
    for (size_type j = 0; j < end; j++)
    {
      vertex_t v = adjs[j];
      size_type vid = get(vid_map, v);
      if (f(uid, vid)) mt_incr(indeg[vid], 1);
    }
  }
}

Looking at the canal output, we see that the compiler was still able to automatically parallelize the loop and perform a manhattan loop collapse.

           |   #pragma mta assert noalias *indeg
           |   for (size_type i = 0; i < order; i++)
           |   {
           |     vertex_t u = verts[i];
++ function T1 mtgl::static_vertex_iterator<T1, T2>::operator [] inlined
           |     size_type uid = get(vid_map, u);
++ function T2 mtgl::get<T1, T2, T3> inlined
   91 P:m  |     adj_iter_t adjs = adjacent_vertices(u, g);
++ function mtgl::static_graph_adapter<N1>::adjacency_iterator mtgl::adjacent_vertices<N1> inlined
   64 P:e  |     size_type end = out_degree(u, g);
++ function mtgl::static_graph_adapter<N1>::size_type mtgl::out_degree<N1> inlined
           |     for (size_type j = 0; j < end; j++)
           |     {
           |       vertex_t v = adjs[j];
++ function T1 mtgl::static_adjacency_iterator<T1, T2>::operator [] inlined
   91 PP:m |       size_type vid = get(vid_map, v);
++ function T2 mtgl::get<T1, T2, T3> inlined
   91 PP:m |       if (f(uid, vid)) mt_incr(indeg[vid], 1);
++ function T1 mtgl::mt_incr<T1, T2> inlined
++ function test_pred_func_obj::operator  inlined
           |     }
           |   }

Looking at the loop annotations, there are four pertinent loops: 44, 54, 64, and 91.

Loops 44, 54, and 64 represent a recurrence. Loop 44 is stage 1 of the recurrence. It performs 2 instructions including 1 load. Loop 54 is the communication phase for stage 1 of the recurrence. It performs 11 instructions including 1 load, 1 store, and 3 branches. Loop 64 is stage 2 of the recurrence. It performs 3 instructions including 1 load and 2 stores. Stage 1 of the recurrence requires fewer instructions and memory operations than in Algorithm 1, but stage 2 requires one more instruction and one more memory operation than in Algorithm 1.

Loop  44 in void count_adjacencies_higher_id_mtgl<T1, T2>(T1 &, mtgl::graph_traits<T1>::size_type *, T2)
[with T1=mtgl::static_graph_adapter<(int)1>, T2=test_pred_func_obj] at line 146 in region 20
       Stage 1 of recurrence
       Loop summary: 1 loads, 0 stores, 0 floating point operations
		2 instructions, needs 45 streams for full utilization
		pipelined

Loop  54 in void count_adjacencies_higher_id_mtgl<T1, T2>(T1 &, mtgl::graph_traits<T1>::size_type *, T2)
[with T1=mtgl::static_graph_adapter<(int)1>, T2=test_pred_func_obj] in region 20
       Stage 1 recurrence communication
       Loop not pipelined: not structurally ok
       Compiler generated
       Loop summary: 11 instructions, 0 floating point operations
		1 loads, 1 stores, 0 reloads, 0 spills, 3 branches, 0 calls

Loop  64 in void count_adjacencies_higher_id_mtgl<T1, T2>(T1 &, mtgl::graph_traits<T1>::size_type *, T2)
[with T1=mtgl::static_graph_adapter<(int)1>, T2=test_pred_func_obj] at line 146 in region 20
       Stage 2 of recurrence
       Loop summary: 1 loads, 2 stores, 0 floating point operations
		3 instructions, needs 60 streams for full utilization
		pipelined

Loop 91 is the main work performed by the body of the inner loop. It performs 6 instructions including 1 load, 1 store, and 2 branches. Thus, the MTGL version using this graph adapter performs the same number of instructions and memory references as directly accessing the CSR structure in Algorithm 1. The compiler was able to eliminate the extra operations seemingly implied by the syntactic sugar, and it automatically parallelized the loops involving the iterators. In this case we achieved our dual goals of generic code and good parallel performance.

Loop  91 in void count_adjacencies_higher_id_mtgl<T1, T2>(T1 &, mtgl::graph_traits<T1>::size_type *, T2)
[with T1=mtgl::static_graph_adapter<(int)1>, T2=test_pred_func_obj] at line 145 in loop 82
       Loop not pipelined: not structurally ok
       Parallel section of loop from level 2
       Loop summary: 6 instructions, 0 floating point operations
		1 loads, 1 stores, 0 reloads, 0 spills, 2 branches, 0 calls

The operation applied to each adjacency in the above algorithms is quite simple. What happens if the operation becomes more complex? A user may choose an operation that the compiler cannot parallelize well, or at all. In this case parallelism can still be achieved by manually dividing the adjacencies into equal-sized chunks and assigning the chunks to threads. Consider the diagram below, which represents Algorithm 5. The first row of boxes is the queue of vertices whose adjacencies we want to visit. The second row of boxes is the vertices array of the CSR graph. The third row of boxes is the adjacencies array of the CSR graph. Algorithm 5 performs the precomputation necessary to divide the adjacencies evenly into chunks and manually assigns those chunks to threads, as sketched below. This algorithm has overhead the others do not, but it allows parallelization in more circumstances.
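
The sketch below shows one way the chunking could be written, in the style of the earlier algorithms. The function name, the num_chunks parameter, and the serial prefix sum over the queued vertices' degrees are illustrative assumptions, not the actual implementation in test/test_static_graph6.cpp (which, for example, would compute the prefix sum in parallel).

// A minimal sketch of chunking the adjacencies of the vertices in a queue Q.
// Illustrative only; names and details are assumptions.
template <class predicate>
void visit_adjacencies_chunked(static_graph<directedS>* g, size_type* indeg,
                               size_type* Q, size_type q_size,
                               predicate f, size_type num_chunks)
{
  size_type* index = g->index;
  size_type* end_points = g->end_points;

  // prefix[k] is the number of adjacencies contributed by Q[0..k-1].
  size_type* prefix = new size_type[q_size + 1];
  prefix[0] = 0;
  for (size_type k = 0; k < q_size; k++)
  {
    prefix[k + 1] = prefix[k] + (index[Q[k] + 1] - index[Q[k]]);
  }

  size_type total_adj = prefix[q_size];
  size_type chunk_size = (total_adj + num_chunks - 1) / num_chunks;

  #pragma mta assert parallel
  for (size_type c = 0; c < num_chunks; c++)
  {
    size_type start = c * chunk_size;
    size_type end = start + chunk_size;
    if (end > total_adj) end = total_adj;

    // Binary search for the queue entry whose adjacency range contains start.
    size_type lo = 0, hi = q_size;
    while (lo + 1 < hi)
    {
      size_type mid = (lo + hi) / 2;
      if (prefix[mid] <= start) lo = mid; else hi = mid;
    }

    // Visit the adjacencies in [start, end), advancing to the next queue
    // entry whenever the current one's adjacencies are exhausted.
    for (size_type a = start; a < end; a++)
    {
      while (a >= prefix[lo + 1]) lo++;
      size_type u = Q[lo];
      size_type j = index[u] + (a - prefix[lo]);
      if (f(u, end_points[j])) mt_incr(indeg[end_points[j]], 1);
    }
  }

  delete [] prefix;
}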

Performance Results

The five algorithms presented for visiting the adjacencies of the vertices of a graph are tested on two graphs. The first is an Erdos-Renyi random graph with two billion edges. The second is a graph of realistic data with 0.5 billion edges. The Erdos-Renyi graph is an example of simple synthetic data, while the realistic data graph is an example of more complex data. The figures below show the time in seconds it took to perform the five algorithms on each of the graphs for varying numbers of processors. The results for the Erdos-Renyi graph are on the left, and the results for the realistic data graph are on the right. The algorithms performed similarly on both the simple and the realistic data. Algorithms 1, 3, and 4 performed roughly the same as each other, and Algorithm 5, the fully generic algorithm that manually chunks the adjacencies, ran about two to three times slower than the better algorithms. The algorithm that used the function pointer (Algorithm 2), however, performed horribly. All the algorithms except Algorithm 2 scaled up to 64 processors.

Breadth First Search

Breadth first search (BFS) is another common graph algorithm. It builds on the kernel of performing an operation on each of the adjacencies of a list of vertices. The best performing algorithm for BFS is the one developed by Petr Konecny in 2007. Consider the diagram below. Q is a circular queue that contains the search vertices. Threads read the current level of vertices from Q and also write the next level of vertices to Q. A is the virtual adjacency list for the current level of vertices in Q. Q is initialized with the source vertex. For each level of the search, the current level's vertices in Q are divided into equal-sized chunks. Each thread grabs the next unprocessed vertex chunk and the next output chunk in Q. Each thread visits the adjacencies of every vertex in its input chunk, writing the next level of vertices to its output chunk. Threads grab new output chunks as needed and fill the unused portion of their output chunks with a marker that indicates no vertex. When a thread encounters a no-vertex marker in its input chunk, it skips to the next entry.
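
The sketch below gives a rough rendering of one search level under this scheme, written in the style of the earlier algorithms. The function name, the color array used to mark visited vertices, the NO_VERTEX filler value, the chunk_size parameter, and the assumption that mt_incr returns the value held before the increment (fetch-and-add semantics) are illustrative choices of ours; Petr's actual implementation, including the circular-queue wraparound, is not reproduced here.

// Simplified sketch of expanding one BFS level with chunked input and output.
// Q[level_begin..level_end) holds the current level; next_end is the index
// just past the last entry written to Q so far (the caller sets it to
// level_end before the call).  Q must be large enough to absorb the filler
// entries.  color, NO_VERTEX, and the mt_incr return-value assumption are
// illustrative, not taken from the actual implementation.
const size_type NO_VERTEX = (size_type) -1;

void expand_bfs_level(static_graph<directedS>* g, size_type* Q,
                      size_type level_begin, size_type level_end,
                      size_type& next_end, size_type* color,
                      size_type chunk_size)
{
  size_type* index = g->index;
  size_type* end_points = g->end_points;
  size_type num_chunks =
    (level_end - level_begin + chunk_size - 1) / chunk_size;

  #pragma mta assert parallel
  for (size_type c = 0; c < num_chunks; c++)
  {
    // Take the next unprocessed input chunk of the current level ...
    size_type in_begin = level_begin + c * chunk_size;
    size_type in_end = in_begin + chunk_size;
    if (in_end > level_end) in_end = level_end;

    // ... and claim an output chunk by atomically advancing next_end.
    size_type out = mt_incr(next_end, chunk_size);
    size_type out_end = out + chunk_size;

    for (size_type i = in_begin; i < in_end; i++)
    {
      size_type u = Q[i];
      if (u == NO_VERTEX) continue;      // filler written by an earlier level

      for (size_type j = index[u]; j < index[u + 1]; j++)
      {
        size_type v = end_points[j];

        // Claim v for the next level if no other thread has visited it yet.
        if (color[v] == 0 && mt_incr(color[v], 1) == 0)
        {
          if (out == out_end)            // current output chunk is full
          {
            out = mt_incr(next_end, chunk_size);
            out_end = out + chunk_size;
          }
          Q[out++] = v;
        }
      }
    }

    // Fill the unused tail of the last output chunk with the marker.
    for ( ; out < out_end; out++) Q[out] = NO_VERTEX;
  }
}

After the call, the next level occupies Q[level_end..next_end), interspersed with NO_VERTEX fillers that the next level's threads skip over.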

Performance Results

We tested two versions of Petr's algorithm. The first is a C implementation, and the second is an MTGL-ized version. We also tested a breadth first search algorithm, called visit adj, that uses Algorithm 5 described above to divide the adjacencies among the threads at each search level. The three algorithms were tested on the Erdos-Renyi graph and the realistic data graph. The diagrams below show the time in seconds it took to perform the three algorithms on each of the graphs for varying numbers of processors. The results for the Erdos-Renyi graph are on the left, and the results for the realistic data graph are on the right.

For the simple data, the C and MTGL versions of Petr's algorithm perform roughly the same and scale up to 64 processors. The visit adj algorithm initially performs about two times slower than the other two algorithms, and it does not scale very well. For the realistic data, the MTGL version of Petr's algorithm achieves no speedup beyond one processor. The visit adj algorithm performs much better on this data set, and it scales up to 16 processors.

The realistic data tests reveal the one known issue with Petr's algorithm: it chunks only the vertices, not their adjacencies. When the degrees of the vertices do not vary too widely, or there are many vertices in a search level, this is not an issue; vertices of similar degree, or a large number of vertices, allow for good load balancing. If a graph has a few vertices of extremely high degree while most vertices have small degree, however, Petr's algorithm chokes on the high-degree vertices if they are discovered in early levels of the search, essentially serializing the level containing the high-degree vertex. The realistic data has this property, and the visit adj algorithm, which manually load balances the number of adjacencies assigned to each thread, performs much better.
