Abstract:
In this paper we describe a new parallel algorithm called Fast Adaptive Sequencing Technique (FAST) for maximizing a monotone submodular function under a cardinality constraint k. The algorithm achieves an approximation guarantee arbitrarily close to the optimal 1 − 1/e and is orders of magnitude faster than the state-of-the-art in a variety of experiments on real-world data sets.
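For concreteness, the problem can be stated formally as follows; the ground set N of n elements and the notation below are the standard conventions for this setting rather than taken verbatim from this abstract:

\[
\max_{S \subseteq N,\; |S| \le k} f(S),
\]
where $f : 2^{N} \to \mathbb{R}_{\ge 0}$ is monotone ($f(S) \le f(T)$ for all $S \subseteq T$) and submodular ($f(S \cup \{a\}) - f(S) \ge f(T \cup \{a\}) - f(T)$ for all $S \subseteq T \subseteq N$ and $a \notin T$).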
In the past two years, following the work of Balkanski and Singer, there has been a great deal of work on algorithms whose theoretical parallel runtime is exponentially faster than that of the algorithms used for submodular maximization over the past 40 years. Although these algorithms are fast in terms of asymptotic worst-case guarantees, they are computationally infeasible to use in practice: the number of rounds and queries they require depends on very large constants and on high-degree polynomials in the precision and confidence parameters, making these algorithms impractical even on small data sets.
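As a point of reference for the discussion of parallel rounds above, the following is a minimal sketch of the classical greedy algorithm, the standard sequential approach for this problem over the past 40 years; the function signature and names are illustrative assumptions, not an interface defined in the paper:

```python
def greedy(f, ground_set, k):
    """Classical greedy for monotone submodular maximization under a
    cardinality constraint: k sequential rounds, each adding the element
    with the largest marginal gain. It achieves a 1 - 1/e approximation,
    but its k adaptive rounds are the sequential bottleneck that
    low-adaptivity algorithms aim to avoid. `f` is assumed to be a set
    function mapping a Python set to a float.
    """
    S = set()
    for _ in range(k):
        # One adaptive round: the marginal-gain queries below could be
        # issued in parallel, but the rounds themselves are sequential.
        best, best_gain = None, float("-inf")
        for a in ground_set:
            if a in S:
                continue
            gain = f(S | {a}) - f(S)
            if gain > best_gain:
                best, best_gain = a, gain
        if best is None:
            break
        S.add(best)
    return S
```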
The design principles behind the FAST algorithm we present here are a significant departure from those of the theoretically fast algorithms studied in the past two years. Rather than optimizing for theoretical guarantees alone, FAST introduces several new techniques that achieve remarkable practical and theoretical parallel runtimes. More specifically, the approximation guarantee obtained by FAST is arbitrarily close to 1 − 1/e, its theoretical parallel runtime (adaptivity) is O(log(n) log^2(log k)), and its total number of queries is O(n log log(k)). Through experiments on large data sets, we show that FAST is orders of magnitude faster than any algorithm for submodular maximization we are aware of, including hyper-optimized parallel versions of state-of-the-art serial algorithms.