09/07/2020

Adaptive Submodular Maximization under Stochastic Item Costs

Srinivasan Parthasarathy

Keywords: Combinatorial optimization, Approximation algorithms, Non-convex optimization, Stochastic optimization

Abstract: Constrained maximization of non-decreasing utility functions with submodularity-like properties is at the core of several AI and ML applications, including viral marketing, pool-based active learning, adaptive feature selection, and sensor deployment. In this work, we develop adaptive policies for maximizing such functions when both the utility function and the item costs may be stochastic.

First, we study maximization of an adaptive weak submodular function, which is submodular in an approximate and probabilistic sense, subject to a stochastic fractional knapsack constraint, which requires the total expected item cost to be within a given capacity. We present the $\beta$-\textsc{Greedy} policy for this problem; our approximation guarantee for it recovers many known greedy maximization guarantees as special cases. Next, we study maximization of an adaptive submodular function, which is submodular in a probabilistic sense, subject to a stochastic knapsack constraint, which requires the total item cost to be within a given capacity with probability one. We present the \textsc{Mix} policy for this problem; our approximation guarantee for it is the first known approximation guarantee for maximizing a non-linear utility function subject to a stochastic knapsack constraint. Using alternative parameterizations of \textsc{Mix}, we also derive the first known approximation guarantees for maximizing an adaptive submodular function subject to a deterministic knapsack constraint.

Our guarantees are powered by an innovative differential analysis technique, which models the $\beta$-\textsc{Greedy} policy using a continuous-time stochastic reward process of a particle whose reward is related to the optimal utility through a differential inequality. The solution to this inequality yields our $\beta$-\textsc{Greedy} guarantee. We combine differential analysis with a variety of other ideas to derive our \textsc{Mix} guarantees.
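
To make the cost-benefit greedy pattern behind such policies concrete, here is a minimal Python sketch: it repeatedly selects the item with the highest expected marginal gain per unit of expected cost until the expected-cost budget is exhausted. The oracles expected_gain and expected_cost, the discount parameter beta, and the stopping rule are illustrative assumptions, not the paper's exact $\beta$-\textsc{Greedy} policy.

def beta_greedy(items, expected_gain, expected_cost, capacity, beta=1.0):
    """Illustrative cost-benefit greedy sketch (hypothetical oracles):
    pick the item maximizing beta * expected marginal gain / expected cost,
    while the total expected cost stays within capacity."""
    selected, spent = [], 0.0
    remaining = set(items)
    while remaining:
        # Only consider items whose expected cost still fits the budget.
        feasible = [e for e in remaining
                    if spent + expected_cost(e) <= capacity]
        if not feasible:
            break
        # beta plays the role of the approximate (weak) submodularity
        # parameter mentioned in the abstract.
        best = max(feasible,
                   key=lambda e: beta * expected_gain(selected, e)
                                 / expected_cost(e))
        selected.append(best)
        spent += expected_cost(best)
        remaining.remove(best)
    return selected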

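To convey the flavor of the differential-analysis technique, here is the classic deterministic analogue of such an argument; the paper's actual inequality for the stochastic reward process is not stated in this abstract. If the reward $f(t)$ accrued by time $t$ satisfies the differential inequality below with budget $B$, integrating it yields the familiar $(1 - 1/e)$ guarantee:

\[
f'(t) \;\ge\; \frac{\mathrm{OPT} - f(t)}{B}, \quad f(0) = 0
\;\;\Longrightarrow\;\;
f(t) \;\ge\; \mathrm{OPT}\bigl(1 - e^{-t/B}\bigr)
\;\;\Longrightarrow\;\;
f(B) \;\ge\; \Bigl(1 - \tfrac{1}{e}\Bigr)\mathrm{OPT}.
\]
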

