
Online Budgeted Repeated Matching [article]

Ajil Jalal, Rahul Vaze, Umang Bhaskar
2015 arXiv   pre-print
A basic combinatorial online resource allocation problem is considered, where multiple servers have individual capacity constraints, and at each time slot a set of jobs arrives, each with potentially different weights for different servers. At each time slot, a one-to-one matching must be found between jobs and servers, subject to the individual capacity constraints, in an online manner. The objective is to maximize the aggregate weight of jobs allotted to servers, summed across time slots and servers, subject to individual capacity constraints. This problem generalizes the well-known adwords problem, and is also relevant for various other modern applications. A simple greedy algorithm is shown to be 3-competitive whenever the weight of any edge is at most half of the corresponding server capacity. Moreover, a randomized version of the greedy algorithm is shown to be 6-competitive for the unrestricted edge-weight case. For parallel servers with small-weight jobs, we show that a load-balancing algorithm is near-optimal.
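The greedy rule the abstract analyzes can be sketched for a single time slot as follows. This is a minimal illustration with hypothetical job/server names, not the paper's exact algorithm or its competitive analysis:

```python
def greedy_slot_matching(weights, capacity):
    """Greedy one-to-one matching for one time slot.

    weights: dict mapping (job, server) -> edge weight
    capacity: dict mapping server -> remaining capacity (mutated)
    Returns the list of (job, server) pairs chosen greedily by weight.
    """
    matched_jobs, matched_servers, chosen = set(), set(), []
    # Consider edges in decreasing order of weight (the greedy rule).
    for (job, server), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        if job in matched_jobs or server in matched_servers:
            continue  # enforce one-to-one matching within the slot
        if capacity[server] < w:
            continue  # server cannot absorb this job's weight
        chosen.append((job, server))
        matched_jobs.add(job)
        matched_servers.add(server)
        capacity[server] -= w  # capacity persists across slots
    return chosen
```

Running this once per arriving batch of jobs, with capacities carried over between slots, gives the online behavior the abstract describes.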
arXiv:1512.00153v1 fatcat:g7dbpdpi4zbc3nrqku7obxvcoa

Compressed Sensing using Generative Models [article]

Ashish Bora, Ajil Jalal, Eric Price, Alexandros G. Dimakis
2017 arXiv   pre-print
The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G: R^k → R^n. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an ℓ_2/ℓ_2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.
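The recovery procedure behind this guarantee is to minimize ‖A·G(z) − y‖² over the latent code z. The idea can be illustrated with a toy linear "generator" G(z) = Bz standing in for a trained network; all names and dimensions here are illustrative, and a real generator would need autodiff rather than this hand-written gradient:

```python
import numpy as np

def csgm_recover(A, y, B, steps=10000, lr=0.05):
    """CSGM-style recovery: gradient descent on the latent code z to
    minimize ||A G(z) - y||^2, with a toy linear generator G(z) = B z."""
    rng = np.random.default_rng(0)
    z = rng.standard_normal(B.shape[1])   # random latent initialization
    for _ in range(steps):
        r = A @ (B @ z) - y               # measurement residual
        z -= lr * 2 * B.T @ (A.T @ r)     # gradient step on z
    return B @ z                          # reconstruction G(z)
```

With more well-conditioned Gaussian measurements than latent dimensions (m ≥ k), this quadratic objective has a unique minimizer and gradient descent recovers the signal exactly in the noiseless case.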
arXiv:1703.03208v1 fatcat:i4fssc2rnrbp3pe4kwsln675fu

Instance-Optimal Compressed Sensing via Posterior Sampling [article]

Ajil Jalal and Sushrut Karmalkar and Alexandros G. Dimakis and Eric Price
2021 arXiv   pre-print
Acknowledgements Ajil Jalal and Alex Dimakis were supported by NSF Grants CCF 1934932, AF 1901292, 2008710, 2019844. Define c_y = c·h_1(y) / ((1−c)·h_0(y) + c·h_1(y)), and let z|y ∼ Bernoulli(c_y) be the posterior  ...  Ongie, G., Jalal, A., Metzler, C. A., Baraniuk, R. G., Dimakis, A. G., and Willett, R. Deep learning techniques for inverse problems in imaging. arXiv preprint arXiv:2005.06001, 2020.  ... 
arXiv:2106.11438v1 fatcat:n6tqdezb7zgrnal36iqznydy64

High Dimensional Channel Estimation Using Deep Generative Networks [article]

Eren Balevi, Akash Doshi, Ajil Jalal, Alexandros Dimakis, Jeffrey G. Andrews
2020 arXiv   pre-print
This paper presents a novel compressed sensing (CS) approach to high dimensional wireless channel estimation by optimizing the input to a deep generative network. Channel estimation using generative networks relies on the assumption that the reconstructed channel lies in the range of a generative model. Channel reconstruction using generative priors outperforms conventional CS techniques and requires fewer pilots. It also eliminates the need for a priori knowledge of the sparsifying basis, instead using the structure captured by the deep generative model as a prior. Using this prior, we also perform channel estimation from one-bit quantized pilot measurements, and propose a novel optimization objective function that attempts to maximize the correlation between the received signal and the generator's channel estimate while minimizing the rank of the channel estimate. Our approach significantly outperforms sparse signal recovery methods such as Orthogonal Matching Pursuit (OMP) and Approximate Message Passing (AMP) algorithms such as EM-GM-AMP for narrowband mmWave channel reconstruction, and its execution time is not noticeably affected by the increase in the number of received pilot symbols.
arXiv:2006.13494v1 fatcat:uoieantwpzf7dojwqe44v4dwbu

Robust compressed sensing using generative models

Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis
2020 Neural Information Processing Systems  
Plots use a PGGAN on CelebA-HQ. Acknowledgments: Ajil Jalal and Alex Dimakis have been supported by NSF Grants CCF 1763702, 1934932, AF 1901292, 2008710, 2019844, and research gifts by NVIDIA, Western Digital  ... 
dblp:conf/nips/JalalLDC20 fatcat:2yv25lzpfjazhenht55i7dmehu

Deep Learning Techniques for Inverse Problems in Imaging [article]

Gregory Ongie, Ajil Jalal, Christopher A. Metzler, Richard G. Baraniuk, Alexandros G. Dimakis, Rebecca Willett
2020 arXiv   pre-print
Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging. We explore the central prevailing themes of this emerging area and present a taxonomy that can be used to categorize different problems and reconstruction methods. Our taxonomy is organized along two central axes: (1) whether or not a forward model is known and to what extent it is used in training and testing, and (2) whether or not the learning is supervised or unsupervised, i.e., whether or not the training relies on access to matched ground truth image and measurement pairs. We also discuss the trade-offs associated with these different reconstruction approaches, caveats and common failure modes, plus open problems and avenues for future work.
arXiv:2005.06001v1 fatcat:z7w3vygugjf57fqbe6t62fvni4

Robust Compressed Sensing using Generative Models [article]

Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis
2021 arXiv   pre-print
[15] Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G Dimakis. Compressed sensing using generative models.  ...  [75] Gregory Ongie, Ajil Jalal, Christopher A Metzler, Richard G Baraniuk, Alexandros G Dimakis, and Rebecca Willett.  ... 
arXiv:2006.09461v3 fatcat:dqwmkk7ianhenbyowfv2dudete

Inverting Deep Generative models, One layer at a time [article]

Qi Lei, Ajil Jalal, Inderjit S. Dhillon, Alexandros G. Dimakis
2019 arXiv   pre-print
We study the problem of inverting a deep generative model with ReLU activations. Inversion corresponds to finding a latent code vector that explains the observed measurements as well as possible. In most prior work this is performed by attempting to solve a non-convex optimization problem involving the generator. In this paper we obtain several novel theoretical results for the inversion problem. We show that for the realizable case, single-layer inversion can be performed exactly in polynomial time, by solving a linear program. Further, we show that for multiple layers, inversion is NP-hard and the pre-image set can be non-convex. For generative models of arbitrary depth, we show that exact recovery is possible in polynomial time with high probability, if the layers are expanding and the weights are randomly selected. Very recent work analyzed the same problem for gradient descent inversion. Their analysis requires significantly higher expansion (logarithmic in the latent dimension) while our proposed algorithm can provably reconstruct even with constant-factor expansion. We also provide provable error bounds for different norms for reconstructing noisy observations. Our empirical validation demonstrates that we obtain better reconstructions when the latent dimension is large.
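For the realizable single-layer case y = ReLU(Wz), the positive outputs pin down equalities W_i z = y_i while the zero outputs only impose inequalities W_i z ≤ 0, which is what makes the problem a linear program. When enough outputs are active, this reduces to a least-squares solve plus a feasibility check; the sketch below shows that reduced case only (illustrative, not the paper's full LP):

```python
import numpy as np

def invert_relu_layer(W, y):
    """Invert y = relu(W z) in the realizable case: positive outputs
    give equalities W_i z = y_i, zero outputs give inequalities
    W_i z <= 0.  Solves the equalities, then checks the inequalities."""
    active = y > 0
    # Solve the equality system on the active rows by least squares.
    z, *_ = np.linalg.lstsq(W[active], y[active], rcond=None)
    # Verify the inactive rows are consistent with the recovered z.
    if not np.all(W[~active] @ z <= 1e-8):
        raise ValueError("inactive rows violated; solve the full LP instead")
    return z
```

If fewer active rows are available than latent dimensions, the pre-image is not unique and the full linear program over both constraint sets is needed.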
arXiv:1906.07437v2 fatcat:kk5ypmtr75a3xlv7mzc32yxhkq

Fairness for Image Generation with Uncertain Sensitive Attributes [article]

Ajil Jalal and Sushrut Karmalkar and Jessica Hoffmann and Alexandros G. Dimakis and Eric Price
2021 arXiv   pre-print
Acknowledgements Ajil Jalal and Alex Dimakis were supported by NSF Grants CCF 1934932, AF 1901292, 2008710, 2019844.  ...  Another line of work has shown that Posterior Sampling using approximate deep generative priors is instance-optimal for compressed sensing (Jalal et al., 2021).  ... 
arXiv:2106.12182v2 fatcat:ynd6tzfdejaexhzioajgck23oy

The Adwords Problem with Strict Capacity Constraints

Umang Bhaskar, Ajil Jalal, Rahul Vaze, Marc Herbstritt
2016 Foundations of Software Technology and Theoretical Computer Science  
Let E_1 = ∪_{t=1}^T E_1(t), E_2 = ∪_{t=1}^T E_2(t), E_3 = ∪_{t=1}^T E_3(t). Define E_1^S = {e = (i, j) ∈ E_1 | i ∈ S}, E_2^S = {e = (i, j) ∈ E_2 | i ∈ S}.  ... 
doi:10.4230/lipics.fsttcs.2016.30 dblp:conf/fsttcs/BhaskarJV16 fatcat:gytjqjwzqze65m4ftiji6zbrsa

Robust Compressed Sensing MRI with Deep Generative Priors [article]

Ajil Jalal and Marius Arvinte and Giannis Daras and Eric Price and Alexandros G. Dimakis and Jonathan I. Tamir
2021 arXiv   pre-print
The CSGM framework (Bora-Jalal-Price-Dimakis '17) has shown that deep generative priors can be powerful tools for solving inverse problems.  ...  Ajil Jalal, Giannis Daras and Alex Dimakis were supported by NSF Grants CCF 1934932, AF 1901292, 2008710, 2019844, the NSF IFML 2019844 award, and research gifts by Western Digital, WNCG and MLL, computing  ... 
arXiv:2108.01368v2 fatcat:7yhbliz5wjafdp6x2aypaogawy

Analytics, Machine Learning & NLP – use in BioSurveillance and Public Health practice

Mujitha B. K B, Ajil Jalal, Vishnuprasad V, Nishad K A
2015 Online Journal of Public Health Informatics  
doi:10.5210/ojphi.v7i1.5950 fatcat:pwksrwnxv5gebedugeewz4pcz4

Compressed Sensing with Deep Image Prior and Learned Regularization [article]

Dave Van Veen, Ajil Jalal, Mahdi Soltanolkotabi, Eric Price, Sriram Vishwanath, Alexandros G. Dimakis
2020 arXiv   pre-print
We propose a novel method for compressed sensing recovery using untrained deep generative models. Our method is based on the recently proposed Deep Image Prior (DIP), wherein the convolutional weights of the network are optimized to match the observed measurements. We show that this approach can be applied to solve any differentiable linear inverse problem, outperforming previous unlearned methods. Unlike various learned approaches based on generative models, our method does not require training over large datasets. We further introduce a novel learned regularization technique, which incorporates prior information on the network weights. This reduces reconstruction error, especially for noisy measurements. Finally, we prove that, using the DIP optimization approach, moderately overparameterized single-layer networks can perfectly fit any signal despite the non-convex nature of the fitting problem. This theoretical result provides justification for early stopping.
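The DIP idea, fitting the weights of an untrained network directly to the measurements with no training data, and the perfect-fit result for overparameterized single-layer networks can both be illustrated with a toy one-layer "network" x = Ws, where the seed s is fixed and only W is optimized. Names and dimensions here are illustrative, not from the paper:

```python
import numpy as np

def dip_recover(A, y, n, p=16, steps=3000, lr=0.05, seed=0):
    """DIP-style recovery with a toy one-layer 'network' x = W s:
    the untrained weights W are fitted to the measurements y = A x,
    the input s stays a fixed random seed, and no dataset is used."""
    rng = np.random.default_rng(seed)
    s = rng.standard_normal(p) / np.sqrt(p)  # fixed network input
    W = np.zeros((n, p))                     # weights, optimized below
    for _ in range(steps):
        r = A @ (W @ s) - y                  # measurement residual
        W -= lr * 2 * np.outer(A.T @ r, s)   # gradient step on W
    return W @ s                             # recovered signal
```

Because the fit is underdetermined (m < n), such an overparameterized model can match any measurement vector exactly, which is precisely why early stopping or learned regularization matters when the measurements are noisy.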
arXiv:1806.06438v4 fatcat:hqjraly4vrhwbnhpjvsci2fosa

The Robust Manifold Defense: Adversarial Training using Generative Models [article]

Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis
2019 arXiv   pre-print
We propose a new type of attack for finding adversarial examples for image classifiers. Our method exploits spanners, i.e. deep neural networks whose input space is low-dimensional and whose output range approximates the set of images of interest. Spanners may be generators of GANs or decoders of VAEs. The key idea in our attack is to search over latent code pairs to find ones that generate nearby images with different classifier outputs. We argue that our attack is stronger than searching over perturbations of real images. Moreover, we show that our stronger attack can be used to reduce the accuracy of Defense-GAN to 3%, resolving an open problem from the well-known paper by Athalye et al. We combine our attack with normal adversarial training to obtain the most robust known MNIST classifier, significantly improving the state of the art against PGD attacks. Our formulation involves solving a min-max problem, where the min player sets the parameters of the classifier and the max player runs our attack, searching for adversarial examples in the low-dimensional input space of the spanner. All code and models are available at
arXiv:1712.09196v5 fatcat:bqombwemozfq7glvwg7lpklq3a

Towards Designing and Exploiting Generative Networks for Neutrino Physics Experiments using Liquid Argon Time Projection Chambers [article]

Paul Lutkus, Taritree Wongjirad, Shuchin Aeron
2022 arXiv   pre-print
Ajil Jalal, Liu Liu, Alexandros G. Dimakis, and Constantine Caramanis. Robust compressed sensing using generative models. Advances in Neural Information Processing Systems, 33, 2020.  ...  Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative models.  ...  Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P. Kingma. PixelCNN++: Improving the  ... 
arXiv:2204.02496v1 fatcat:mpxzfmsjmbdhfhadjs3livlh5u