TY - JOUR
AB - We develop a framework for the rigorous analysis of focused stochastic local search algorithms. These algorithms search a state space by repeatedly selecting some constraint that is violated in the current state and moving to a random nearby state that addresses the violation, while (we hope) not introducing many new violations. An important class of focused local search algorithms with provable performance guarantees has recently arisen from algorithmizations of the Lovász local lemma (LLL), a nonconstructive tool for proving the existence of satisfying states by introducing a background measure on the state space. While powerful, the state transitions of algorithms in this class must be, in a precise sense, perfectly compatible with the background measure. In many applications this is a very restrictive requirement, and one needs to step outside the class. Here we introduce the notion of measure distortion and develop a framework for analyzing arbitrary focused stochastic local search algorithms, recovering LLL algorithmizations as the special case of no distortion. Our framework takes as input an arbitrary algorithm of such type and an arbitrary probability measure and shows how to use the measure as a yardstick of algorithmic progress, even for algorithms designed independently of the measure.
AU - Achlioptas, Dimitris
AU - Iliopoulos, Fotis
AU - Kolmogorov, Vladimir
ID - 7412
IS - 5
JF - SIAM Journal on Computing
SN - 0097-5397
TI - A local lemma for focused stochastic algorithms
VL - 48
ER -
TY - CONF
AB - We present a new proximal bundle method for Maximum-A-Posteriori (MAP) inference in structured energy minimization problems. The method optimizes a Lagrangean relaxation of the original energy minimization problem using a multi-plane block-coordinate Frank-Wolfe method that takes advantage of the specific structure of the Lagrangean decomposition. We show empirically that our method outperforms state-of-the-art Lagrangean decomposition based algorithms on some challenging Markov Random Field, multi-label discrete tomography and graph matching problems.
AU - Swoboda, Paul
AU - Kolmogorov, Vladimir
ID - 7468
SN - 1063-6919
T2 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
TI - MAP inference via block-coordinate Frank-Wolfe algorithm
VL - 2019-June
ER -
TY - CONF
AB - Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems. However, regularization is generally achieved by indirect means, largely due to the complex set of functions defined by a network and the difficulty in measuring function complexity. There exists no method in the literature for additive regularization based on a norm of the function, as is classically considered in statistical learning theory. In this work, we study the tractability of function norms for deep neural networks with ReLU activations. We provide, to the best of our knowledge, the first proof in the literature of the NP-hardness of computing function norms of DNNs of 3 or more layers. We also highlight a fundamental difference between shallow and deep networks. In light of these results, we propose a new regularization strategy based on approximate function norms, and show its efficiency on a segmentation task with a DNN.
AU - Rannen-Triki, Amal
AU - Berman, Maxim
AU - Kolmogorov, Vladimir
AU - Blaschko, Matthew B.
ID - 7639
SN - 9781728150239
T2 - Proceedings of the 2019 International Conference on Computer Vision Workshop
TI - Function norms for neural networks
ER -
TY - JOUR
AB - It is well known that many problems in image recovery, signal processing, and machine learning can be modeled as finding zeros of the sum of maximal monotone and Lipschitz continuous monotone operators. Many papers have studied forward-backward splitting methods for finding zeros of the sum of two monotone operators in Hilbert spaces. Most of the proposed splitting methods in the literature have been proposed for the sum of maximal monotone and inverse-strongly monotone operators in Hilbert spaces. In this paper, we consider splitting methods for finding zeros of the sum of maximal monotone operators and Lipschitz continuous monotone operators in Banach spaces. We obtain weak and strong convergence results for the zeros of the sum of maximal monotone and Lipschitz continuous monotone operators in Banach spaces. Many already studied problems in the literature can be considered as special cases of this paper.
AU - Shehu, Yekini
ID - 6596
IS - 4
JF - Results in Mathematics
SN - 1422-6383
TI - Convergence results of forward-backward algorithms for sum of monotone operators in Banach spaces
VL - 74
ER -
TY - CONF
AB - A Valued Constraint Satisfaction Problem (VCSP) provides a common framework that can express a wide range of discrete optimization problems. A VCSP instance is given by a finite set of variables, a finite domain of labels, and an objective function to be minimized. This function is represented as a sum of terms where each term depends on a subset of the variables. To obtain different classes of optimization problems, one can restrict all terms to come from a fixed set Γ of cost functions, called a language.
Recent breakthrough results have established a complete complexity classification of such classes with respect to language Γ: if all cost functions in Γ satisfy a certain algebraic condition then all Γ-instances can be solved in polynomial time, otherwise the problem is NP-hard. Unfortunately, testing this condition for a given language Γ is known to be NP-hard. We thus study exponential algorithms for this meta-problem. We show that the tractability condition of a finite-valued language Γ can be tested in O(√3^(3|D|) ⋅ poly(size(Γ))) time, where D is the domain of Γ and poly(⋅) is some fixed polynomial. We also obtain a matching lower bound under the Strong Exponential Time Hypothesis (SETH). More precisely, we prove that for any constant δ<1 there is no O(√3^(δ⋅3|D|)) algorithm, assuming that SETH holds.
AU - Kolmogorov, Vladimir
ID - 6725
SN - 1868-8969
T2 - 46th International Colloquium on Automata, Languages and Programming
TI - Testing the complexity of a valued CSP language
VL - 132
ER -
TY - JOUR
AB - The main contributions of this paper are the proposition and the convergence analysis of a class of inertial projection-type algorithms for solving variational inequality problems in real Hilbert spaces where the underlying operator is monotone and uniformly continuous. We carry out a unified analysis of the proposed method under very mild assumptions. In particular, weak convergence of the generated sequence is established and a nonasymptotic O(1/n) rate of convergence is obtained, where n denotes the iteration counter. We also present some experimental results to illustrate the profits gained by introducing the inertial extrapolation steps.
AU - Shehu, Yekini
AU - Iyiola, Olaniyi S.
AU - Li, Xiao-Huan
AU - Dong, Qiao-Li
ID - 7000
IS - 4
JF - Computational and Applied Mathematics
SN - 2238-3603
TI - Convergence analysis of projection method for variational inequalities
VL - 38
ER -
TY - CONF
AB - The accuracy of information retrieval systems is often measured using complex loss functions such as the average precision (AP) or the normalized discounted cumulative gain (NDCG). Given a set of positive and negative samples, the parameters of a retrieval system can be estimated by minimizing these loss functions. However, the non-differentiability and non-decomposability of these loss functions do not allow for simple gradient based optimization algorithms. This issue is generally circumvented by either optimizing a structured hinge-loss upper bound to the loss function or by using asymptotic methods like the direct-loss minimization framework. Yet, the high computational complexity of loss-augmented inference, which is necessary for both frameworks, prohibits its use in large training data sets. To alleviate this deficiency, we present a novel quicksort flavored algorithm for a large class of non-decomposable loss functions. We provide a complete characterization of the loss functions that are amenable to our algorithm, and show that it includes both AP and NDCG based loss functions. Furthermore, we prove that no comparison based algorithm can improve upon the computational complexity of our approach asymptotically. We demonstrate the effectiveness of our approach in the context of optimizing the structured hinge loss upper bound of AP and NDCG loss for learning models for a variety of vision tasks. We show that our approach provides significantly better results than simpler decomposable loss functions, while requiring a comparable training time.
AU - Mohapatra, Pritish
AU - Rolinek, Michal
AU - Jawahar, C V
AU - Kolmogorov, Vladimir
AU - Kumar, M Pawan
ID - 273
SN - 9781538664209
T2 - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
TI - Efficient optimization for rank-based loss functions
ER -
TY - JOUR
AB - An N-superconcentrator is a directed, acyclic graph with N input nodes and N output nodes such that every subset of the inputs and every subset of the outputs of same cardinality can be connected by node-disjoint paths. It is known that linear-size and bounded-degree superconcentrators exist. We prove the existence of such superconcentrators with asymptotic density 25.3 (where the density is the number of edges divided by N). The previously best known densities were 28 [12] and 27.4136 [17].
AU - Kolmogorov, Vladimir
AU - Rolinek, Michal
ID - 18
IS - 10
JF - Ars Combinatoria
SN - 0381-7032
TI - Superconcentrators of density 25.3
VL - 141
ER -
TY - CONF
AB - We show attacks on five data-independent memory-hard functions (iMHF) that were submitted to the password hashing competition (PHC). Informally, an MHF is a function which cannot be evaluated on dedicated hardware, like ASICs, at significantly lower hardware and/or energy cost than evaluating a single instance on a standard single-core architecture. Data-independent means the memory access pattern of the function is independent of the input; this makes iMHFs harder to construct than data-dependent ones, but the latter can be attacked by various side-channel attacks. Following [Alwen-Blocki'16], we capture the evaluation of an iMHF as a directed acyclic graph (DAG). The cumulative parallel pebbling complexity of this DAG is a measure for the hardware cost of evaluating the iMHF on an ASIC. Ideally, one would like the complexity of a DAG underlying an iMHF to be as close to quadratic in the number of nodes of the graph as possible. Instead, we show that (the DAGs underlying) the following iMHFs are far from this bound: Rig.v2, TwoCats and Gambit each having an exponent no more than 1.75. Moreover, we show that the complexity of the iMHF modes of the PHC finalists Pomelo and Lyra2 have exponents at most 1.83 and 1.67 respectively. To show this we investigate a combinatorial property of each underlying DAG (called its depth-robustness). By establishing upper bounds on this property we are then able to apply the general technique of [Alwen-Blocki'16] for analyzing the hardware costs of an iMHF.
AU - Alwen, Joel F
AU - Gazi, Peter
AU - Kamath Hosdurg, Chethan
AU - Klein, Karen
AU - Osang, Georg F
AU - Pietrzak, Krzysztof Z
AU - Reyzin, Leonid
AU - Rolinek, Michal
AU - Rybar, Michal
ID - 193
T2 - Proceedings of the 2018 on Asia Conference on Computer and Communication Security
TI - On the memory hardness of data independent password hashing functions
ER -
TY - DATA
AB - Graph matching problems for large displacement optical flow of RGB-D images.
AU - Alhaija, Hassan
AU - Sellent, Anita
AU - Kondermann, Daniel
AU - Rother, Carsten
ID - 5573
KW - graph matching
KW - quadratic assignment problem
TI - Graph matching problems for GraphFlow – 6D Large Displacement Scene Flow
ER -