Anticoncentration versus the number of subset sums

Let $\vec{w} = (w_1,\dots, w_n) \in \mathbb{R}^{n}$. We show that for any $n^{-2}\le\epsilon\le 1$, if \[\#\{\vec{\xi} \in \{0,1\}^{n}: \langle \vec{\xi}, \vec{w} \rangle = \tau\} \ge 2^{-\epsilon n}\cdot 2^{n}\] for some $\tau \in \mathbb{R}$, then \[\#\{\langle \vec{\xi}, \vec{w} \rangle : \vec{\xi} \in \{0,1\}^{n}\} \le 2^{O(\sqrt{\epsilon}n)}.\] This exponentially improves the $\epsilon$ dependence in a recent result of Nederlof, Pawlewicz, Swennenhuis, and W\k{e}grzycki and leads to a similar improvement in the parameterized (by the number of bins) runtime of bin packing.

We prove this theorem in Section 2.
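As a concrete illustration of the trade-off in the theorem (a brute-force sketch of our own, not part of the paper; the function name is ours), constant weights maximize the largest level set while leaving few distinct subset sums, whereas generic weights do the opposite:

```python
from itertools import product
from collections import Counter

def subset_sum_stats(w):
    """Brute force over all xi in {0,1}^n: return (size of the largest
    level set of <xi, w>, number of distinct subset sums)."""
    sums = Counter(sum(wi for wi, xi in zip(w, x) if xi)
                   for x in product((0, 1), repeat=len(w)))
    return max(sums.values()), len(sums)

# Constant weights: a huge level set but very few distinct sums.
print(subset_sum_stats([1] * 10))                     # (252, 11): C(10, 5) and n + 1

# Generic weights: trivial level sets but all 2^n sums distinct.
print(subset_sum_stats([2 ** i for i in range(10)]))  # (1, 1024)
```

For $n = 10$ this already shows the two extremes the theorem interpolates between.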

Application to bin packing
The bin packing problem is a classic NP-complete problem whose decision version may be stated as follows: given $n$ items with weights $w_1, \dots, w_n \in [0,1]$ and $m$ bins, each of capacity $1$, is there a way to assign the items to the bins without violating the capacity constraints? Formally, is there a map $f\colon [n] \to [m]$ such that $\sum_{i \in f^{-1}(j)} w_i \le 1$ for every $j \in [m]$? Björklund, Husfeldt, and Koivisto [1] provided an algorithm for solving bin packing in time $\tilde{O}(2^n)$, where the tilde hides polynomial factors in $n$. It is natural to ask whether the base of the exponent may be improved at all, i.e., is there a (possibly randomized) algorithm to solve bin packing in time $\tilde{O}(2^{(1-\varepsilon)n})$ for some absolute constant $\varepsilon > 0$?
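For concreteness, the decision problem can be phrased as an exhaustive search over all $m^n$ assignment maps $f$. The following sketch (names and floating-point tolerance are ours) only pins down the semantics; it is exponentially slower than the $\tilde{O}(2^n)$ algorithm of [1]:

```python
from itertools import product

def bin_packing(weights, m, capacity=1.0):
    """Decide bin packing by exhaustive search over all m^n maps f from
    items to bins; feasible iff some assignment respects every capacity."""
    for f in product(range(m), repeat=len(weights)):
        loads = [0.0] * m
        for w_i, bin_idx in zip(weights, f):
            loads[bin_idx] += w_i
        if all(load <= capacity + 1e-9 for load in loads):
            return True
    return False

print(bin_packing([0.5, 0.5, 0.5, 0.5], m=2))  # True: pair the items up
print(bin_packing([0.6, 0.6, 0.6], m=2))       # False: some bin overflows
```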
In recent work, Nederlof, Pawlewicz, Swennenhuis, and Węgrzycki [2] showed that this is true provided that the number of bins $m$ is fixed. More precisely, they showed that there exists a function $\sigma\colon \mathbb{N} \to \mathbb{R}_{>0}$ and a randomized algorithm for solving bin packing which, on instances with $m$ bins, runs in time $\tilde{O}(2^{(1-\sigma(m))n})$, where $\tilde{O}$ hides polynomial factors in $n$ as well as exponential factors in $m$. Their analysis, which crucially relies on (1.1), gives only a very small value of $\sigma(m)$, satisfying (1.3). Using Theorem 1.2 instead of (1.1) in a black-box manner in the analysis of [2], we exponentially improve the bound on $\sigma(m)$.

Corollary 1.3. With notation as above, the randomized algorithm of [2] solves bin packing instances with $m$ bins in time $\tilde{O}(2^{(1-\sigma(m))n})$ with high probability, for a choice of $\sigma(m)$ which is exponentially larger than in (1.3); here $\tilde{\Omega}$ hides logarithmic factors in $m$.

Notation
We use big-$O$ notation to mean that an absolute multiplicative constant is being hidden. We use $\mathrm{Ber}(1/2)$ to denote the balanced $\{0,1\}$ Bernoulli distribution and $\mathrm{Bin}(k)$ to denote the binomial distribution on $k$ trials with parameter $1/2$. Recall that $\mathrm{Bin}(k)$ is the distribution of the sum of $k$ independent $\mathrm{Ber}(1/2)$ random variables. Given a distribution $\mu$, we let $\mu^{\otimes n}$ denote the distribution of a random vector with $n$ independent samples from $\mu$ as its coordinates. We also use the following standard additive combinatorics notation: $C + D = \{c + d : c \in C,\ d \in D\}$ is the sumset (if $C, D$ are subsets of the same abelian group), and for a positive integer $k$, we let $k \cdot C = C + \cdots + C$ ($k$ times) be the iterated sumset. Finally, in some cases we will use the notation $\sum \cdot$ or $\int \cdot$ to denote that the expression in the sum or integral is the same as in the previous line, in order to simplify the presentation of long expressions.
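The sumset notation can be sanity-checked numerically; a minimal sketch (function names ours), working with subsets of $\mathbb{Z}$:

```python
def sumset(C, D):
    """C + D = {c + d : c in C, d in D}, here for subsets of Z."""
    return {c + d for c in C for d in D}

def iterated_sumset(k, C):
    """k . C = C + ... + C (k times)."""
    result = C
    for _ in range(k - 1):
        result = sumset(result, C)
    return result

print(sumset({0, 1}, {0, 10}))     # {0, 1, 10, 11}
print(iterated_sumset(3, {0, 1}))  # {0, 1, 2, 3}, the support of Bin(3)
```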

Outline of the proof
As in [2], the starting point of our proof is the following observation: let $A$ denote a fixed (but otherwise arbitrary) set of unique preimages for points in $R(\vec{w})$ (hence, $|A| = |R(\vec{w})|$) and let $B$ denote the set of preimages of a value $\tau \in \mathbb{R}$ realising $\rho(\vec{w})$. Then (Lemma 2.2) for any $k \ge 1$, the map $A \times (k \cdot B) \to A + k \cdot B$ is a bijection. In particular, if $\vec{a}$ is sampled from the uniform distribution on $A$ and $\vec{b}_1, \dots, \vec{b}_k$ are independently sampled from the uniform distribution on $B$, then for every $\vec{x} \in A + k \cdot B$ there is a unique $a(\vec{x}) \in A$ with $\vec{x} - a(\vec{x}) \in k \cdot B$, and
\[\Pr[\vec{a} + \vec{b}_1 + \cdots + \vec{b}_k = \vec{x}] = |A|^{-1}\,\mu_k(\vec{x} - a(\vec{x})),\]
where $\mu_k$ denotes the measure on $k \cdot B$ induced by the product measure on $B \times \cdots \times B$ via the map $(\vec{b}_1, \dots, \vec{b}_k) \mapsto \vec{b}_1 + \cdots + \vec{b}_k$.

In [2], the largeness of $B$ is exploited by finding, for every $\vec{a} \in A$, a large subset of $B$ which is `balanced' (in a certain sense) with respect to $\vec{a}$. Instead, we exploit the largeness of $B$ directly by using the observation that the density of the uniform measure on $B$ with respect to the uniform measure on $\{0,1\}^n$ is at most $2^n/|B| \le 2^{\epsilon n}$. In particular, if we let $\mathrm{Bin}(k)^{\otimes n}$ denote the $n$-fold product of the $\mathrm{Binomial}(k, 1/2)$ distribution, then the density of $\mu_k$ with respect to $\mathrm{Bin}(k)^{\otimes n}$ is at most $2^{k\epsilon n}$. This allows us to replace the measure $\mu_k$ appearing in the above equation by $\mathrm{Bin}(k)^{\otimes n}$, at the cost of a factor of $2^{k\epsilon n}$. Thus, summing over $\vec{x}$,
\[1 = \sum_{\vec{x} \in A + k \cdot B} |A|^{-1}\,\mu_k(\vec{x} - a(\vec{x})) \le 2^{k\epsilon n} \sum_{\vec{x}} |A|^{-1}\,\mathrm{Bin}(k)^{\otimes n}(\vec{x} - a(\vec{x})).\]
The above expression is still complicated by the presence of the shift $a(\vec{x})$, about which we have no information except that it lies in the set $A$. The key technical lemma in the proof is Lemma 2.1, which essentially allows us to remove this shift after paying a factor which depends on $|A|$. Ultimately, this gives an upper bound on the sum in terms of $|A|$ and $k$, which amounts to an upper bound on $|A|$ in terms of $k$, $\epsilon$, and $n$. Optimizing the value of the free parameter $k$ now gives the desired conclusion.
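The bijection underlying this outline can be verified on a toy instance (a brute-force sketch with our own choices of $\vec{w}$, $\tau$, and $k$, not taken from the paper): since every $\vec{c} \in k \cdot B$ satisfies $\langle \vec{w}, \vec{c} \rangle = k\tau$ while distinct elements of $A$ have distinct inner products with $\vec{w}$, the addition map $A \times (k \cdot B) \to A + k \cdot B$ is injective:

```python
from itertools import product

n, k = 4, 2
w = (1, 1, 1, 1)  # toy weight vector; tau = 2 realizes the largest level set
tau = 2

def ip(x):
    return sum(wi * xi for wi, xi in zip(w, x))

cube = list(product((0, 1), repeat=n))
B = [x for x in cube if ip(x) == tau]   # preimages of tau

by_value = {}
for x in cube:                          # one arbitrary preimage per value
    by_value.setdefault(ip(x), x)
A = list(by_value.values())

def vadd(x, y):
    return tuple(a + b for a, b in zip(x, y))

kB = set(B)                             # iterated sumset k . B inside Z^n
for _ in range(k - 1):
    kB = {vadd(c, b) for c in kB for b in B}

sums = {vadd(a, c) for a in A for c in kB}   # image of A x (k . B)
print(len(sums) == len(A) * len(kB))         # True: the map is injective
```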

Proof of Theorem 1.2
We begin by recording the following key comparison bound, which will be proved at the end of this section.
We will make use of the simple, but crucial, observation from [2] that $A$ and $k \cdot B$ have a full sumset for all $k \ge 1$.

Lemma 2.2. For every $k \ge 1$, the map $A \times (k \cdot B) \to A + k \cdot B$ given by $(\vec{a}, \vec{c}) \mapsto \vec{a} + \vec{c}$ is a bijection.

Proof. Suppose that $\vec{a}_1 + \vec{b}_1 + \cdots + \vec{b}_k = \vec{a}_2 + \vec{b}'_1 + \cdots + \vec{b}'_k$ with $\vec{a}_1, \vec{a}_2 \in A$ and $\vec{b}_j, \vec{b}'_j \in B$. Then, taking the inner product of both sides with $\vec{w}$ and using $\langle \vec{w}, \vec{b} \rangle = \tau$ for all $\vec{b} \in B$, we see that $\langle \vec{w}, \vec{a}_1 \rangle = \langle \vec{w}, \vec{a}_2 \rangle$, which implies that $\vec{a}_1 = \vec{a}_2$ by the definition of $A$.
We are now ready to prove Theorem 1.2.
Proof of Theorem 1.2. Let $k \ge 2$ be a parameter which will be chosen later depending on $\epsilon$. We may assume $\epsilon \in (0, (2C_{2.1})^{-2})$, by adjusting $C_{1.2}$ appropriately at the end to make larger values trivial. By Lemma 2.2, for each $\vec{x} \in \{0, \dots, k+1\}^n$ for which there exist $\vec{a} \in A$ and $\vec{c} \in k \cdot B$ with $\vec{a} + \vec{c} = \vec{x}$, there exists a unique such choice $\vec{a} = a(\vec{x}) \in A$. (For $\vec{x} \notin A + k \cdot B$, we let $a(\vec{x})$ be an arbitrary element of $A$.) Now, let $\vec{a}$ be uniform on $A$, let $\vec{b}_1, \dots, \vec{b}_k$ be uniform on $B$, and let $\vec{v}_1, \dots, \vec{v}_k$ be uniform on $\{0,1\}^n$. Let $C_i \subseteq \{0, \dots, k+1\}^n$ be the set of vectors with $i$ coordinates equal to $k+1$. For $\vec{x} \in \{0, \dots, k+1\}^n$, we let $\vec{x}^* \in \{0, \dots, k\}^n$ denote the vector obtained by setting every occurrence of $k+1$ in $\vec{x}$ to $k$. We have
\[|A| \le 2^{k\epsilon n} \sum_{\vec{x} \in \{0, \dots, k+1\}^n} \Pr[a(\vec{x}) + \vec{v}_1 + \cdots + \vec{v}_k = \vec{x}].\]

ADVANCES IN COMBINATORICS, 2021:6, 10pp.
Let $A_S$ be the set of elements of $A \subseteq \{0,1\}^n$ whose support contains $S$. Recall that $|A| = \exp(\delta n)$. Abusing notation so that the supremum of an empty set is $0$, we can continue the above chain of inequalities, by Lemma 2.1 applied to $A_S$, as long as $n/2 \ge k \ge C_{2.1} \ge 20$. To deduce the last line, note that $\binom{n}{i} 2^{-ki} \le (2^{-k} e n / i)^i$, so that for $i \ge en/2^{k-1}$ the sum of weighted binomial coefficients is bounded by a geometric series. Additionally, for $1 \le i \le en/2^k$, if this interval is nonempty, the sum of binomial coefficients is certainly bounded by $\exp(O(k^{-1} n))$.
The proof of Lemma 2.1 relies on the following preliminary estimate.
Moreover, for any $-k \le \ell \le k$, a corresponding pointwise estimate holds; in particular, under this coupling of $y, z_1, \dots, z_k$, we obtain the analogous comparison. Let $z = z_1 + \cdots + z_k$, so that $z \sim N(0, k)$. Then, by the convexity of $f(y) = \exp\!\big(\tfrac{4sy}{k+2} + \tfrac{32sy}{k^2}\big)$ and using Jensen's inequality, we obtain the claimed estimate.

Finally, we can prove Lemma 2.1.

Proof of Lemma 2.1. We may assume that $\delta \ge 2000/k$, since the statement for $\delta < 2000/k$ follows from the statement for $\delta = 2000/k$. Also, note that we may assume that $\delta \le \log 2$. For any $t \in \mathbb{R}$, we have the corresponding exponential moment bound; in the last line, we have used an inequality which holds if $k \ge 3$. Therefore, by Lemma 2.3, we have $\cdots \ge e^{tn} \le |A| \inf_{s} (\cdots)$. Here, the second case follows by plugging in $s = k/(24\pi \log k)$ and simplifying (assuming $C_{2.1}$ is large enough so that $s \ge 2$), and the first case follows from plugging in $s = kt/(24\pi)$, which satisfies $2 \le s \le k/(10 \log k)$ by the restrictions on $t$ and $\delta$. Finally, since