Theory of Computing
-------------------
Title   : Near-Optimal NP-Hardness of Approximating Max $k$-CSP$_R$
Authors : Pasin Manurangsi, Preetum Nakkiran, and Luca Trevisan
Volume  : 18
Number  : 3
Pages   : 1-29
URL     : https://theoryofcomputing.org/articles/v018a003

Abstract
--------

We prove almost-optimal hardness for Max $k$-CSP$_R$. In Max $k$-CSP$_R$, we are given a set of constraints, each of which depends on at most $k$ variables. Each variable can take any value from $1, 2, \ldots, R$. The goal is to find an assignment to the variables that maximizes the number of satisfied constraints. We show that, for any $k \geq 2$ and $R \geq 16$, it is NP-hard to approximate Max $k$-CSP$_R$ to within a factor of $k^{O(k)}(\log R)^{k/2}/R^{k - 1}$. In the regime where $3 \leq k = o(\log R/\log \log R)$, this ratio improves upon Chan's $O(k/R^{k-2})$-factor NP-hardness of approximation of Max $k$-CSP$_R$ (J. ACM 2016). Moreover, when $k = 2$, our result matches the best known hardness result of Khot, Kindler, Mossel and O'Donnell (SIAM J. Comput. 2007). We remark here that NP-hardness of an approximation factor of $2^{O(k)}\log(kR)/R^{k - 1}$ is implicit in the (independent) work of Khot and Saket (ICALP 2015), which is better than our ratio for all $k \geq 3$.

In addition to the above hardness result, by extending an algorithm for Max $2$-CSP$_R$ by Kindler, Kolla and Trevisan (SODA 2016), we provide an $\Omega(\log R/R^{k - 1})$-approximation algorithm for Max $k$-CSP$_R$. Thanks to Khot and Saket's result, this algorithm is tight up to a factor of $O(k^2)$ when $k \leq R^{O(1)}$. In comparison, when $k \geq 3$ is a constant, the previously best known algorithm achieves an $O(k/R^{k - 1})$-approximation for the problem, which is a factor of $O(k \log R)$ away from the inapproximability ratio, in contrast to our gap of $O(k^2)$.

--------------------

A conference version of this paper appeared in the Proceedings of the 19th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX'16).
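To make the problem definition concrete, here is a minimal Python sketch (not from the paper; the instance, function names, and predicate encoding are illustrative assumptions) that evaluates a Max $k$-CSP$_R$ instance and finds the optimum by brute force on a tiny example:

```python
from itertools import product

def count_satisfied(constraints, assignment):
    """Count how many constraints the assignment satisfies.

    Each constraint is a pair (vs, pred): `vs` is a tuple of at most k
    variable indices, and `pred` maps their values, each drawn from
    {1, ..., R}, to True/False.
    """
    return sum(pred(tuple(assignment[v] for v in vs)) for vs, pred in constraints)

def brute_force_max(constraints, n, R):
    """Exact optimum by enumerating all R^n assignments (tiny instances only).

    Max k-CSP_R is NP-hard in general, so this is purely illustrative.
    """
    best = 0
    for values in product(range(1, R + 1), repeat=n):
        best = max(best, count_satisfied(constraints, values))
    return best

# Example: n = 3 variables over {1, 2} (so R = 2), with k = 2
# inequality constraints forming an odd cycle, so not all three
# constraints can be satisfied simultaneously.
constraints = [
    ((0, 1), lambda t: t[0] != t[1]),  # x0 != x1
    ((1, 2), lambda t: t[0] != t[1]),  # x1 != x2
    ((0, 2), lambda t: t[0] != t[1]),  # x0 != x2
]
print(brute_force_max(constraints, n=3, R=2))  # prints 2
```

For intuition on the approximation ratios discussed above: a uniformly random assignment satisfies each constraint that is satisfiable at all with probability at least $R^{-k}$, and the paper's results concern how much better than such trivial baselines an efficient algorithm can provably do.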