Submodular set function

In mathematics, a submodular set function (also known as a submodular function) is a set function that, informally, exhibits diminishing returns: the marginal value gained by adding a single element to an input set decreases as the input set grows. This natural diminishing returns property makes submodular functions suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences), electrical networks, and, more recently, machine learning and artificial intelligence.

Definition

If \Omega is a set, a submodular function is a set function f:2^{\Omega}\rightarrow \mathbb{R}, where 2^\Omega denotes the power set of \Omega, which satisfies one of the following equivalent definitions.[1]

  1. For every X, Y \subseteq \Omega with X \subseteq Y and every x \in \Omega \setminus Y we have that f(X\cup \{x\})-f(X)\geq f(Y\cup \{x\})-f(Y).
  2. For every S, T \subseteq \Omega we have that f(S)+f(T)\geq f(S\cup T)+f(S\cap T).
  3. For every X\subseteq \Omega and distinct x_1,x_2\in \Omega\setminus X we have that f(X\cup \{x_1\})+f(X\cup \{x_2\})\geq f(X\cup \{x_1,x_2\})+f(X).
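
The definitions can be checked numerically on a tiny ground set. The sketch below (illustrative, not part of the standard treatment) verifies definition 2 for f(S) = sqrt(|S|), which is submodular because it is a concave function of the cardinality:

```python
from itertools import chain, combinations
from math import sqrt

Omega = frozenset(range(4))

def powerset(U):
    s = list(U)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def f(S):
    # sqrt(|S|): a concave function of the cardinality, hence submodular
    return sqrt(len(S))

# Definition 2: f(S) + f(T) >= f(S | T) + f(S & T) for all S, T.
subsets = powerset(Omega)
ok = all(f(S) + f(T) + 1e-12 >= f(S | T) + f(S & T)
         for S in subsets for T in subsets)
print(ok)  # True
```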

A nonnegative submodular function is also a subadditive function, but a subadditive function need not be submodular.

Types of submodular functions

Monotone

A submodular function f is monotone if for every T\subseteq S we have that f(T)\leq f(S). Examples of monotone submodular functions include:

Linear functions 
Any function of the form f(S)=\sum_{i\in S}w_i is called a linear function. Additionally if \forall i,w_i\geq 0 then f is monotone.
Budget-additive functions 
Any function of the form f(S)=\min(B,\sum_{i\in S}w_i) for each w_i\geq 0 and B\geq 0 is called budget additive.
Coverage functions 
Let \Omega=\{E_1,E_2,\ldots,E_n\} be a collection of subsets of some ground set \Omega'. The function f(S)=|\cup_{E_i\in S}E_i| for S\subseteq \Omega is called a coverage function. This can be generalized by adding non-negative weights to the elements.
Entropy 
Let \Omega=\{X_1,X_2,\ldots,X_n\} be a set of random variables. Then for any S\subseteq \Omega we have that H(S) is a submodular function, where H(S) is the joint entropy of the set of random variables S.
Matroid rank functions 
Let \Omega=\{e_1,e_2,\dots,e_n\} be the ground set on which a matroid is defined. Then the rank function of the matroid is a submodular function.[2]
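
A minimal sketch of a coverage function (the covering sets are made up for illustration), checked against the monotonicity and diminishing-returns definitions by brute force:

```python
from itertools import chain, combinations

# E_i -> the subset of the ground set Omega' that it covers (illustrative data)
ground = {0: {1, 2}, 1: {2, 3}, 2: {4}}
Omega = frozenset(ground)

def f(S):
    covered = set()
    for i in S:
        covered |= ground[i]
    return len(covered)

subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(Omega, r) for r in range(len(Omega) + 1))]
monotone = all(f(T) <= f(S) for T in subsets for S in subsets if T <= S)
submodular = all(f(X | {x}) - f(X) >= f(Y | {x}) - f(Y)
                 for X in subsets for Y in subsets if X <= Y
                 for x in Omega - Y)
print(monotone, submodular)  # True True
```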

Non-monotone

A submodular function which is not monotone is called non-monotone.

Symmetric

A non-monotone submodular function f is called symmetric if for every S\subseteq \Omega we have that f(S)=f(\Omega-S). Examples of symmetric non-monotone submodular functions include:

Graph cuts 
Let \Omega=\{v_1,v_2,\dots,v_n\} be the vertices of an undirected graph. For any set of vertices S\subseteq \Omega let f(S) denote the number of edges e=\{u,v\} with u\in S and v\in \Omega-S. This can be generalized by adding non-negative weights to the edges.
Mutual information 
Let \Omega=\{X_1,X_2,\ldots,X_n\} be a set of random variables. Then for any S\subseteq \Omega we have that f(S)=I(S;\Omega-S) is a submodular function, where I(S;\Omega-S) is the mutual information.
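
A sketch (the graph is illustrative) checking that the cut function of a small undirected graph is symmetric, and showing that it is non-monotone:

```python
from itertools import chain, combinations

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]   # a 4-cycle
Omega = frozenset(range(4))

def cut(S):
    # number of edges with exactly one endpoint in S
    return sum((u in S) != (v in S) for u, v in edges)

subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(Omega, r) for r in range(len(Omega) + 1))]
symmetric = all(cut(S) == cut(Omega - S) for S in subsets)
print(symmetric)               # True
print(cut({0}), cut(Omega))    # 2 0 -- adding elements can decrease f
```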

Asymmetric

A non-monotone submodular function which is not symmetric is called asymmetric.

Directed cuts 
Let \Omega=\{v_1,v_2,\dots,v_n\} be the vertices of a directed graph. For any set of vertices S\subseteq \Omega let f(S) denote the number of edges e=(u,v) such that u\in S and v\in \Omega-S. This can be generalized by adding non-negative weights to the directed edges.
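
A directed cut is submodular but, in general, neither monotone nor symmetric; a two-arc sketch (arcs chosen for illustration):

```python
arcs = [(0, 1), (1, 2)]                 # directed edges
Omega = frozenset(range(3))

def dicut(S):
    # arcs leaving S: tail in S, head outside S
    return sum(1 for u, v in arcs if u in S and v not in S)

print(dicut({0}), dicut(Omega - {0}))  # 1 0 -- f(S) != f(Omega - S)
```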

Continuous extensions

Lovász extension

This extension is named after mathematician László Lovász. Consider any vector \bold{x}=(x_1,x_2,\dots,x_n) such that each 0\leq x_i\leq 1. Then the Lovász extension is defined as f^L(\bold{x})=\mathbb{E}(f(\{i|x_i\geq \lambda\})) where the expectation is over \lambda chosen from the uniform distribution on the interval [0,1]. The Lovász extension is convex if and only if f is submodular.
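
The expectation can be computed exactly rather than by sampling: between consecutive sorted values of x the threshold set \{i|x_i\geq\lambda\} is constant, so the integral over \lambda is a finite sum. A sketch (function names are illustrative):

```python
def lovasz_extension(f, x):
    # Integrate f({i : x_i >= lam}) over lam uniform on [0, 1]; the
    # integrand is piecewise constant between the sorted values of x.
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])   # indices, x descending
    xs = [x[i] for i in order]
    value = (1.0 - xs[0]) * f(frozenset())          # lam in (x_max, 1]
    S = set()
    for k in range(n):
        S.add(order[k])
        lo = xs[k + 1] if k + 1 < n else 0.0
        value += (xs[k] - lo) * f(frozenset(S))     # lam in (lo, xs[k]]
    return value

# Example with the (submodular) cut function of the path graph 0-1-2.
edges = [(0, 1), (1, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
print(lovasz_extension(cut, [1.0, 0.0, 1.0]))  # 2.0, which equals cut({0, 2})
```

At 0/1 indicator vectors the extension agrees with f itself, as the example shows.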

Multilinear extension

Consider any vector \bold{x}=(x_1,x_2,\ldots,x_n) such that each 0\leq x_i\leq 1. Then the multilinear extension is defined as F(\bold{x})=\sum_{S\subseteq \Omega} f(S) \prod_{i\in S} x_i \prod_{i\notin S} (1-x_i).
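
Equivalently, F(\bold{x}) is the expected value of f(S) when each element i is included in S independently with probability x_i. A brute-force sketch (exponential in n, fine for tiny ground sets; the example function is illustrative):

```python
from itertools import chain, combinations

def multilinear_extension(f, x):
    n = len(x)
    total = 0.0
    for S in chain.from_iterable(combinations(range(n), r)
                                 for r in range(n + 1)):
        S = frozenset(S)
        p = 1.0
        for i in range(n):
            p *= x[i] if i in S else (1.0 - x[i])   # P[this exact S is drawn]
        total += p * f(S)
    return total

f = lambda S: min(len(S), 2)                 # a monotone submodular example
print(multilinear_extension(f, [0.5, 0.5]))  # 0.25*0 + 0.5*1 + 0.25*2 = 1.0
```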

Convex closure

Consider any vector \bold{x}=(x_1,x_2,\dots,x_n) such that each 0\leq x_i\leq 1. Then the convex closure is defined as f^-(\bold{x})=\min(\sum_S \alpha_S f(S):\sum_S \alpha_S 1_S=\bold{x},\sum_S \alpha_S=1,\alpha_S\geq 0). It can be shown that f^L(\bold{x})=f^-(\bold{x}).
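
One direction of the identity f^L(\bold{x})=f^-(\bold{x}) can be made concrete: the threshold sets \{i|x_i\geq\lambda\} used in the Lovász extension form a chain, and the lengths of the corresponding \lambda-intervals give a feasible \alpha for the convex-closure program whose objective value is exactly f^L(\bold{x}), so f^-(\bold{x})\leq f^L(\bold{x}). A sketch (the point \bold{x} is illustrative):

```python
from math import isclose

def chain_distribution(x):
    """Weights alpha supported on the chain of threshold sets {i : x_i >= lam}."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])    # indices, x descending
    xs = [x[i] for i in order]
    alpha = {frozenset(): 1.0 - xs[0]}               # lam in (x_max, 1]
    S = set()
    for k in range(n):
        S.add(order[k])
        lo = xs[k + 1] if k + 1 < n else 0.0
        alpha[frozenset(S)] = xs[k] - lo             # lam in (lo, xs[k]]
    return alpha

x = [0.7, 0.2, 0.5]
alpha = chain_distribution(x)
assert isclose(sum(alpha.values()), 1.0)             # a probability distribution
for i in range(len(x)):                              # marginals recover x
    assert isclose(sum(a for S, a in alpha.items() if i in S), x[i])
print(len(alpha))  # 4 sets in the chain, including the empty set
```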

Concave closure

Consider any vector \bold{x}=(x_1,x_2,\dots,x_n) such that each 0\leq x_i\leq 1. Then the concave closure is defined as f^+(\bold{x})=\max(\sum_S \alpha_S f(S):\sum_S \alpha_S 1_S=\bold{x},\sum_S \alpha_S=1,\alpha_S\geq 0).

Properties

  1. The class of submodular functions is closed under non-negative linear combinations. Consider any submodular functions f_1,f_2,\ldots,f_k and non-negative numbers \alpha_1,\alpha_2,\ldots,\alpha_k. Then the function g defined by g(S)=\sum_{i=1}^k \alpha_i f_i(S) is submodular. Furthermore, for any submodular function f, the function defined by g(S)=f(\Omega \setminus S) is submodular. The function g(S)=\min(f(S),c), where c is a real number, is submodular whenever f is monotone submodular.
  2. If f:2^\Omega\rightarrow \mathbb{R}_+ is a monotone submodular function and \phi is a non-decreasing concave function, then g:2^\Omega\rightarrow \mathbb{R}_+ defined as g(S)=\phi(f(S)) is also a monotone submodular function.
  3. Consider a random process where a set T is chosen by including each element of \Omega in T independently with probability p. Then the following inequality holds: \mathbb{E}[f(T)]\geq p f(\Omega)+(1-p) f(\varnothing), where \varnothing is the empty set. More generally, consider the random process where a set S is constructed as follows: for each 1\leq i\leq l, A_i\subseteq \Omega, construct S_i by including each element of A_i in S_i independently with probability p_i, and let S=\cup_{i=1}^l S_i. Then \mathbb{E}[f(S)]\geq \sum_{R\subseteq [l]} \prod_{i\in R}p_i \prod_{i\notin R}(1-p_i)f(\cup_{i\in R}A_i).
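
The first inequality in property 3 can be verified exactly on a small example by summing over all outcomes of the random set T (the graph is illustrative):

```python
from itertools import chain, combinations

edges = [(0, 1), (1, 2), (0, 2)]        # a triangle
Omega = frozenset(range(3))

def cut(S):
    return sum((u in S) != (v in S) for u, v in edges)

def expected_value(f, p):
    # exact expectation over all 2^n outcomes of the random set T
    total = 0.0
    for T in chain.from_iterable(combinations(Omega, r)
                                 for r in range(len(Omega) + 1)):
        T = frozenset(T)
        total += p ** len(T) * (1 - p) ** (len(Omega) - len(T)) * f(T)
    return total

p = 0.3
lhs = expected_value(cut, p)
rhs = p * cut(Omega) + (1 - p) * cut(frozenset())
print(lhs >= rhs)  # True (here rhs = 0, since both cut values vanish)
```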

Optimization problems

Submodular functions have properties very similar to those of convex and concave functions. For this reason, many optimization problems can be cast as maximizing or minimizing a submodular function subject to constraints, in analogy with maximizing a concave function or minimizing a convex one.

Submodular Minimization

The simplest minimization problem is to find a set S\subseteq \Omega which minimizes a submodular function, subject to no constraints. This problem is solvable in (strongly)[3][4] polynomial time.[5][6] Computing the minimum cut in a graph is a special case of this general minimization problem. However, even simple constraints such as a cardinality lower bound make the problem NP-hard, with polynomial lower bounds on the approximation factor.[7][8]
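
To illustrate the min-cut special case, the sketch below minimizes a graph cut function over all sets separating s from t by brute force (exponential enumeration; the algorithms cited above run in polynomial time, and the graph is illustrative):

```python
from itertools import chain, combinations

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # a triangle plus a pendant edge
V = frozenset(range(4))

def cut(S):
    return sum((u in S) != (v in S) for u, v in edges)

s, t = 0, 3
rest = V - {s, t}
# every candidate contains s and excludes t: an s-t cut
candidates = [frozenset(S) | {s} for S in chain.from_iterable(
    combinations(rest, r) for r in range(len(rest) + 1))]
best = min(candidates, key=cut)
print(sorted(best), cut(best))  # [0, 1, 2] 1
```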

Submodular Maximization

Unlike minimization, maximization of submodular functions is usually NP-hard. Many problems, such as max cut and the maximum coverage problem, can be cast as special cases of this general maximization problem under suitable constraints. Typically, the approximation algorithms for these problems are based on either greedy algorithms or local search algorithms. The problem of maximizing a symmetric non-monotone submodular function subject to no constraints admits a 1/2 approximation algorithm.[9] Computing the maximum cut of a graph is a special case of this problem. The more general problem of maximizing an arbitrary non-monotone submodular function subject to no constraints also admits a 1/2 approximation algorithm.[10] The problem of maximizing a monotone submodular function subject to a cardinality constraint admits a 1 - 1/e approximation algorithm.[11] The maximum coverage problem is a special case of this problem. The more general problem of maximizing a monotone submodular function subject to a matroid constraint also admits a 1 - 1/e approximation algorithm.[12][13] Many of these algorithms can be unified within a semi-differential based framework of algorithms.[8]
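
For the cardinality-constrained monotone case, the 1 - 1/e guarantee is achieved by the natural greedy algorithm, which repeatedly adds the element with the largest marginal gain. A sketch on a small coverage instance (the covering sets are illustrative):

```python
# item -> set of elements it covers (illustrative data)
ground = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 5}}

def coverage(S):
    covered = set()
    for i in S:
        covered |= ground[i]
    return len(covered)

def greedy(f, Omega, k):
    """Greedy maximization of a monotone submodular f under |S| <= k."""
    S = set()
    for _ in range(k):
        # pick the element with the largest marginal gain f(S + x) - f(S);
        # sorting makes tie-breaking deterministic
        best = max(sorted(Omega - S), key=lambda x: f(S | {x}) - f(S))
        S.add(best)
    return S

S = greedy(coverage, set(ground), 2)
print(sorted(S), coverage(S))  # [0, 1] 4
```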

Related Optimization Problems

Apart from submodular minimization and maximization, another natural problem is Difference of Submodular Optimization.[14][15] Unfortunately, this problem is not only NP-hard, but also inapproximable.[15] A related optimization problem is to minimize or maximize a submodular function subject to a submodular level set constraint (also known as submodular optimization subject to submodular cover or submodular knapsack constraints). This problem admits bounded approximation guarantees.[16] Another optimization problem involves partitioning data based on a submodular function, so as to maximize the average welfare. This problem is called the submodular welfare problem.[17]

Applications

Submodular functions naturally occur in several real-world applications in economics, game theory, machine learning, and computer vision. Owing to the diminishing returns property, submodular functions naturally model the cost of a set of items, since buying items in bulk often comes with a larger discount. When they appear in minimization problems, submodular functions model notions of complexity, similarity, and cooperation; in maximization problems, they model notions of diversity, information, and coverage. For more information on applications of submodularity, particularly in machine learning, see the references.[18][19][20]

Citations

  1. (Schrijver 2003,§44, p. 766)
  2. Fujishige (2005) p.22
  3. S. Iwata, L. Fleischer, and S. Fujishige, A combinatorial strongly polynomial algorithm for minimizing submodular functions, J. ACM 48 (2001), pp. 761–777.
  4. A. Schrijver, A combinatorial algorithm minimizing submodular functions in strongly polynomial time, J. Combin. Theory Ser. B 80 (2000), pp. 346–355.
  4. M. Grötschel, L. Lovász and A. Schrijver, The ellipsoid method and its consequences in combinatorial optimization, Combinatorica 1 (1981), pp. 169–197.
  5. W. H. Cunningham, On submodular function minimization, Combinatorica, 5 (1985), pp. 185–192.
  7. Z. Svitkina and L. Fleischer, Submodular approximation: Sampling-based algorithms and lower bounds, SIAM Journal of Computing (2011).
  8. R. Iyer, S. Jegelka and J. Bilmes, Fast Semidifferential based submodular function optimization, Proc. ICML (2013).
  9. U. Feige, V. Mirrokni and J. Vondrák, Maximizing non-monotone submodular functions, Proc. of 48th FOCS (2007), pp. 461–471.
  10. N. Buchbinder, M. Feldman, J. Naor and R. Schwartz, A tight linear time (1/2)-approximation for unconstrained submodular maximization, Proc. of 53rd FOCS (2012), pp. 649-658.
  11. G. L. Nemhauser, L. A. Wolsey and M. L. Fisher, An analysis of approximations for maximizing submodular set functions I, Mathematical Programming 14 (1978), 265–294.
  12. G. Calinescu, C. Chekuri, M. Pál and J. Vondrák, Maximizing a submodular set function subject to a matroid constraint, SIAM J. Comp. 40:6 (2011), 1740-1766.
  13. Y. Filmus, J. Ward, A tight combinatorial algorithm for submodular maximization subject to a matroid constraint, Proc. of 53rd FOCS (2012), pp. 659-668.
  14. M. Narasimhan and J. Bilmes, A submodular-supermodular procedure with applications to discriminative structure learning, In Proc. UAI (2005).
  15. R. Iyer and J. Bilmes, Algorithms for Approximate Minimization of the Difference between Submodular Functions, In Proc. UAI (2012).
  16. R. Iyer and J. Bilmes, Submodular Optimization Subject to Submodular Cover and Submodular Knapsack Constraints, In Advances of NIPS (2013).
  17. J. Vondrák, Optimal approximation for the submodular welfare problem in the value oracle model, Proc. of STOC (2008), pp. 461–471.
  18. http://submodularity.org/.
  19. A. Krause and C. Guestrin, Beyond Convexity: Submodularity in Machine Learning, Tutorial at ICML-2008
  20. J. Bilmes, Submodularity in Machine Learning Applications, Tutorial at AAAI-2015
