Assefaw Gebremedhin, Research, Parallel Computing

Graph (network) computations are critical kernels in many algorithms in data mining, data analysis, scientific computing, and computational science and engineering. In large-scale applications, these graph computations need to be performed in parallel. Parallelizing graph algorithms effectively -- with an emphasis on scalability and performance -- is challenging for a variety of reasons: in many graph algorithms, runtime is dominated by memory latency rather than processor speed; there is little computation with which to hide memory access costs; data locality is poor; and available concurrency is low.

Listed below in reverse chronological order are papers I have written with a number of collaborators, introducing a range of techniques for dealing with these challenges in the context of a variety of graph problems. Much of my earlier work in this direction focused on distributed-memory platforms. My more recent efforts target the emerging and rapidly growing multicore platforms as well as massively multithreaded platforms. The list also includes my recent work on combinatorial problems other than graph problems and on problems around matrix computations. More generally, I am interested in exploring the interplay between algorithms, architectures, and applications in developing scalable systems.


MPI-based implementations of some of the distributed-memory parallel algorithms for the distance-1 and distance-2 coloring problems that I developed together with colleagues are publicly available via the Zoltan software toolkit of Sandia National Laboratories.
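To make the problem concrete, here is a minimal sketch of sequential greedy distance-1 coloring -- assigning each vertex the smallest color not used by any of its neighbors. This is only an illustration of the underlying kernel, not the parallel MPI algorithms or the Zoltan implementation; the function and graph names are chosen for the example.

```python
def greedy_distance1_coloring(adj):
    """Greedy distance-1 coloring.

    adj: dict mapping each vertex to an iterable of its neighbors.
    Returns a dict mapping each vertex to a nonnegative integer color
    such that adjacent vertices receive different colors.
    """
    color = {}
    for v in adj:
        # Colors already taken by v's colored neighbors.
        forbidden = {color[u] for u in adj[v] if u in color}
        # Smallest nonnegative integer not in the forbidden set.
        c = 0
        while c in forbidden:
            c += 1
        color[v] = c
    return color

# Example: a triangle (vertices 0, 1, 2) with a pendant vertex 3.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(greedy_distance1_coloring(graph))  # e.g. {0: 0, 1: 1, 2: 2, 3: 0}
```

The parallel versions must resolve the conflicts that arise when vertices on different processors are colored concurrently; that coordination, not the greedy step itself, is the hard part at scale.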


Multi-core and multi-threaded architectures
Distributed-memory architectures
Shared-memory and coarse-grained multiprocessors
