Machine Learning Latest Submitted Preprints | 2019-05-03
Machine Learning
A tutorial on recursive models for analyzing and predicting path choice behavior (1905.00883v1)
Maëlle Zimmermann, Emma Frejinger
2019-05-02
The problem at the heart of this tutorial is modeling the path choice behavior of network users. This problem has been studied extensively in transportation science and econometrics, where it is known as the route choice problem. In this literature, individuals' path choices are typically predicted with discrete choice models. The aim of this tutorial is to present the problem from the novel and more general perspective of inverse optimization, in order to describe the modeling approaches proposed in related research areas and thereby motivate the use of so-called recursive models. The latter have the advantage of predicting path choices without generating choice sets. In this paper, we contextualize discrete choice models as a probabilistic approach to an inverse shortest path problem with noisy data, highlighting that recursive discrete choice models in particular originate from viewing the inner shortest path problem as a parametric Markov Decision Process. We also illustrate through simple numerical examples that recursive models overcome issues associated with the path-based discrete choice models commonly found in the transportation literature.
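As background for how a recursive model assigns link-level choice probabilities without enumerating paths, here is a minimal sketch of the value-function fixed point behind recursive logit models on a toy network; the node names, utilities, and solver below are illustrative assumptions, not the tutorial's code.

```python
import numpy as np

# Toy network: node -> list of (next_node, deterministic_utility v(a|k)).
arcs = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],  # destination (absorbing state)
}
mu = 1.0                      # scale of the i.i.d. extreme-value noise
V = {k: 0.0 for k in arcs}    # V("D") stays 0 at the destination

# Solve V(k) = mu * log sum_a exp((v(a|k) + V(next)) / mu) by fixed-point iteration.
for _ in range(100):
    for k, out in arcs.items():
        if out:
            V[k] = mu * np.log(sum(np.exp((v + V[n]) / mu) for n, v in out))

# Link choice probabilities then take a logit form, so paths can be sampled
# link by link without ever generating a choice set.
probs = {
    (k, n): np.exp((v + V[n] - V[k]) / mu)
    for k, out in arcs.items() for n, v in out
}
print(V, probs)
```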
You Only Propagate Once: Painless Adversarial Training Using Maximal Principle (1905.00877v1)
Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong
2019-05-02
Deep learning achieves state-of-the-art results in many areas. However, recent works have shown that deep networks can be vulnerable to adversarial perturbations, which slightly change the input but lead to incorrect predictions. Adversarial training is an effective way of improving robustness to adversarial examples and is typically formulated as a robust optimization problem for network training. To solve it, previous works directly run gradient descent on the "adversarial loss", i.e. they replace the input data with the corresponding adversaries. A major drawback of this approach is the computational overhead of adversary generation, which is much larger than that of a network update and makes adversarial defense expensive. To address this issue, we fully exploit the structure of deep neural networks and propose a novel strategy that decouples the adversary update from the gradient back-propagation. To achieve this goal, we follow the line of research that treats the training of a deep neural network as an optimal control problem, and we formulate the robust optimization as a differential game. This allows us to derive the necessary conditions for optimality, so that we train the neural network by solving the conditions of Pontryagin's Maximum Principle (PMP). In the PMP, the adversary is coupled only with the weights of the first layer, which inspires us to split the adversary computation from the back-propagation of gradients. As a result, our proposed YOPO (You Only Propagate Once) avoids passing the data forward and backward too many times in one iteration and restricts the core descent-direction computation to the first layer of the network, thus speeding up every iteration significantly. For adversarial example defense, our experiments show that YOPO can achieve comparable defense accuracy using around 1/5 of the GPU time of the original projected gradient descent (PGD) training.
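For context, here is a minimal sketch of the standard PGD inner loop whose repeated full forward/backward passes YOPO is designed to avoid; the model handle, epsilon, and step sizes are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

def pgd_adversarial_step(model, x, y, eps=8/255, alpha=2/255, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Each PGD step needs a full forward AND backward pass through the
        # network; this per-step cost is what YOPO's first-layer decoupling cuts.
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

# One outer training iteration on the adversarial example:
# x_adv = pgd_adversarial_step(model, x, y)
# optimizer.zero_grad(); F.cross_entropy(model(x_adv), y).backward(); optimizer.step()
```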
Self-supervised Learning for Video Correspondence Flow (1905.00875v1)
Zihang Lai, Weidi Xie
2019-05-02
The objective of this paper is self-supervised learning of feature embeddings from videos, suitable for correspondence flow, i.e. matching correspondences between frames over the video. We leverage the natural spatial-temporal coherence of appearance in videos to create a "pointer" model that learns to reconstruct a target frame by copying colors from a reference frame. We make three contributions. First, we introduce a simple information bottleneck that forces the model to learn robust features for correspondence matching and prevents it from learning trivial solutions, e.g. matching based on low-level color information. Second, we propose to train the model over a long temporal window in videos. To make the model more robust to complex object deformation and occlusion, i.e. the problem of tracker drifting, we formulate a recursive model trained with scheduled sampling and cycle consistency. Third, we evaluate the approach by first training on the Kinetics dataset using self-supervised learning and then applying it directly to DAVIS video segmentation and JHMDB keypoint tracking. On both tasks our approach achieves state-of-the-art performance; on segmentation in particular, we outperform all previous methods by a significant margin.
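A minimal sketch of such a "pointer" mechanism, reconstructing the target frame's colors as an attention-weighted copy from a reference frame; the feature shapes and temperature are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn.functional as F

def copy_colors(feat_ref, feat_tgt, colors_ref, temperature=0.07):
    # feat_*: (B, C, H, W) learned embeddings; colors_ref: (B, 3, H, W)
    B, C, H, W = feat_ref.shape
    f_ref = F.normalize(feat_ref.flatten(2), dim=1)   # (B, C, HW)
    f_tgt = F.normalize(feat_tgt.flatten(2), dim=1)   # (B, C, HW)
    # Affinity between every target location and every reference location.
    affinity = torch.einsum("bct,bcr->btr", f_tgt, f_ref) / temperature
    attn = affinity.softmax(dim=-1)                   # rows sum to 1
    # Each target pixel "points at" reference pixels and copies their colors;
    # the reconstruction loss against the true target frame drives learning.
    out = torch.einsum("btr,bcr->bct", attn, colors_ref.flatten(2))
    return out.view(B, 3, H, W)
```

Because the loss only asks the model to copy colors, withholding color from the input features (the information bottleneck) is what prevents the trivial color-matching shortcut.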
Similarities between policy gradient methods (PGM) in Reinforcement learning (RL) and supervised learning (SL) (1904.06260v3)
Eric Benhamou
2019-04-12
Reinforcement learning (RL) is about sequential decision making and is traditionally contrasted with supervised learning (SL) and unsupervised learning (USL). In RL, given the current state, the agent makes a decision that may influence the next state, whereas in SL (and USL) the next state remains the same regardless of the decisions taken, whether in batch or online learning. Although this difference between SL and RL is fundamental, there are connections that have been overlooked. In particular, we prove in this paper that policy gradient methods can be cast as a supervised learning problem where true labels are replaced with discounted rewards. We provide a new proof of policy gradient methods (PGM) that emphasizes the tight link with cross entropy and supervised learning. We provide a simple experiment where we interchange labels and pseudo-rewards. We conclude that other relationships with SL could be established if the reward functions are modified wisely.
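The correspondence is easy to see in code: the REINFORCE loss is a cross-entropy in which each term is weighted by the discounted return. A minimal sketch, with shapes and the `returns` input as illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def policy_gradient_loss(logits, actions, returns):
    # logits: (T, num_actions); actions: (T,) taken actions; returns: (T,)
    # discounted rewards-to-go computed from the trajectory.
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    # Supervised cross-entropy would weight every term by 1, treating the
    # taken action as the true label; PGM weights each term by its return.
    return -(returns * chosen).mean()

def cross_entropy_loss(logits, actions):
    # Setting returns == 1 above recovers exactly this standard SL loss.
    return F.cross_entropy(logits, actions)
```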
Parity Models: A General Framework for Coding-Based Resilience in ML Inference (1905.00863v1)
Jack Kosaian, K. V. Rashmi, Shivaram Venkataraman
2019-05-02
Machine learning models are becoming the primary workhorses for many applications. Production services deploy models through prediction serving systems that take in queries and return predictions by performing inference on machine learning models. In order to scale to high query rates, prediction serving systems are run on many machines in cluster settings, and thus are prone to slowdowns and failures that inflate tail latency and cause violations of strict latency targets. Current approaches to reducing tail latency are inadequate for the latency targets of prediction serving, incur high resource overhead, or are inapplicable to the computations performed during inference. We present ParM, a novel, general framework that uses ideas from erasure coding and machine learning to achieve low-latency, resource-efficient resilience to slowdowns and failures in prediction serving systems. ParM encodes multiple queries together into a single parity query and performs inference on the parity query using a parity model. A decoder uses the output of the parity model to reconstruct approximations of unavailable predictions. ParM uses neural networks to learn parity models that enable simple, fast encoders and decoders to reconstruct unavailable predictions for a variety of inference tasks such as image classification, speech recognition, and object localization. We build ParM atop an open-source prediction serving system and through extensive evaluation show that ParM improves overall accuracy in the face of unavailability with low latency while using 2-4× less additional resources than replication-based approaches. ParM reduces the gap between 99.9th percentile and median latency by up to compared to approaches that use an equal amount of resources, while maintaining the same median.
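A minimal sketch of the coding idea with an additive code over k = 2 queries; the tiny linear "model" below makes decoding exact and is purely illustrative, whereas ParM instead learns a neural parity model so the same decoder works approximately for non-linear inference tasks.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))        # stand-in inference model: f(x) = W @ x
f = lambda x: W @ x

x1, x2 = rng.normal(size=8), rng.normal(size=8)
parity_query = x1 + x2             # encoder: one extra "parity" query

y1, y2 = f(x1), f(x2)              # normal predictions (either may be slow/lost)
y_parity = f(parity_query)         # parity model's prediction

# Decoder: if y2 is unavailable, approximate it from the parity output and
# the predictions that did arrive.
y2_reconstructed = y_parity - y1
print(np.allclose(y2_reconstructed, y2))  # exact here because f is linear
```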
Text Embeddings for Retrieval From a Large Knowledge Base (1810.10176v2)
Tolgahan Cakaloglu, Christian Szegedy, Xiaowei Xu
2018-10-24
Text embeddings representing natural language documents in a semantic vector space can be used for document retrieval via nearest neighbor lookup. In order to study the feasibility of neural models specialized for retrieval in a semantically meaningful way, we suggest the use of the Stanford Question Answering Dataset (SQuAD) in an open-domain question answering context, where the first task is to find paragraphs useful for answering a given question. First, we compare the retrieval performance of various text-embedding methods and give an extensive empirical comparison of various non-augmented base embeddings with and without IDF weighting. Our main result is that training deep residual neural models specifically for retrieval can yield significant gains when they are used to augment existing embeddings. We also establish that deeper models are superior for this task. Augmenting the best baseline embeddings with our learned neural approach improves the system's top-1 paragraph recall by 14%.
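A minimal sketch of the retrieval setup, IDF-weighted embedding followed by nearest neighbor lookup; the toy `token_vecs` and `idf` tables are illustrative placeholders, not the paper's trained models.

```python
import numpy as np

def embed(tokens, token_vecs, idf):
    # IDF-weighted average of token embeddings, L2-normalized.
    vecs = np.stack([idf.get(t, 1.0) * token_vecs[t]
                     for t in tokens if t in token_vecs])
    v = vecs.mean(axis=0)
    return v / np.linalg.norm(v)

def retrieve(question, paragraphs, token_vecs, idf, k=1):
    q = embed(question.split(), token_vecs, idf)
    P = np.stack([embed(p.split(), token_vecs, idf) for p in paragraphs])
    scores = P @ q                  # cosine similarity (vectors are unit norm)
    return np.argsort(-scores)[:k]  # indices of the top-k candidate paragraphs
```

The paper's learned residual models would replace the simple weighted average here, producing augmented embeddings that feed the same nearest-neighbor lookup.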
Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement in Multi-Talker Environments (1811.02480v3)
Giovanni Morrone, Luca Pasa, Vadim Tikhanoff, Sonia Bergamaschi, Luciano Fadiga, Leonardo Badino
2018-11-06
In this paper, we address the problem of enhancing the speech of a speaker of interest in a cocktail party scenario when visual information of the speaker of interest is available. Contrary to most previous studies, we do not learn visual features on the typically small audio-visual datasets, but use an already available face landmark detector (trained on a separate image dataset). The landmarks are used by LSTM-based models to generate time-frequency masks which are applied to the acoustic mixed-speech spectrogram. Results show that: (i) landmark motion features are very effective for this task; (ii) similarly to previous work, reconstruction of the target speaker's spectrogram mediated by masking is significantly more accurate than direct spectrogram reconstruction; and (iii) the best masks depend on both landmark motion features and the input mixed-speech spectrogram. To the best of our knowledge, our proposed models are the first trained and evaluated on the limited-size GRID and TCD-TIMIT datasets that achieve speaker-independent speech enhancement in a multi-talker setting.
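A minimal sketch of mask-based enhancement driven by landmark features, reflecting finding (iii) that the best masks condition on both the landmarks and the mixture; the dimensions, layer widths, and fusion scheme are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LandmarkMaskNet(nn.Module):
    def __init__(self, n_landmark_feats=136, n_freq=257, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_landmark_feats + n_freq, hidden,
                            batch_first=True, bidirectional=True)
        self.to_mask = nn.Linear(2 * hidden, n_freq)

    def forward(self, landmarks, mix_spec):
        # landmarks: (B, T, n_landmark_feats) motion features from a
        # pre-trained face landmark detector; mix_spec: (B, T, n_freq).
        h, _ = self.lstm(torch.cat([landmarks, mix_spec], dim=-1))
        mask = torch.sigmoid(self.to_mask(h))   # time-frequency mask in (0, 1)
        return mask * mix_spec                  # enhanced target spectrogram
```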
On the Smoothness of Nonlinear System Identification (1905.00820v1)
Antônio H. Ribeiro, Koen Tiels, Jack Umenberger, Thomas B. Schön, Luis A. Aguirre
2019-05-02
New light is shed onto optimization problems resulting from prediction error parameter estimation of linear and nonlinear systems. It is shown that the "smoothness" of the objective function depends both on the simulation length and on the decay rate of the prediction model. More precisely, for regions of the parameter space where the model is not contractive, the Lipschitz constant and β-smoothness of the objective function might blow up exponentially with the simulation length, making it hard to numerically find minima within those regions or even to escape from them. In addition to providing theoretical understanding of this problem, this paper also proposes the use of multiple shooting as a viable solution. The proposed method minimizes the error between a prediction model and observed values. Rather than running the prediction model over the entire dataset, as in the original prediction error formulation, multiple shooting splits the data into smaller subsets and runs the prediction model over each subdivision, making the simulation length a design parameter and making it possible to solve problems that would be infeasible using a standard approach. The equivalence with the original problem is obtained by including constraints in the optimization. The method is illustrated for the parameter estimation of nonlinear systems with chaotic or unstable behavior, as well as on neural network parameter estimation.
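A minimal sketch of a multiple-shooting prediction-error objective for a toy one-dimensional model x[t+1] = a·x[t] + b·u[t]; the segment length and the soft penalty standing in for the continuity constraints are illustrative choices, not the paper's formulation.

```python
import numpy as np

def multiple_shooting_cost(params, u, y, seg_len=20, rho=100.0):
    a, b = params[:2]
    x0s = params[2:]                 # one free initial state per segment
    cost = 0.0
    for i, x in enumerate(x0s):
        start = i * seg_len
        for t in range(start, min(start + seg_len, len(y))):
            cost += (y[t] - x) ** 2  # prediction error on this segment
            x = a * x + b * u[t]     # simulate only seg_len steps ahead
        # Continuity: segment i must end where segment i+1 starts, keeping
        # the split problem equivalent to the single-shooting one.
        if i + 1 < len(x0s):
            cost += rho * (x - x0s[i + 1]) ** 2
    return cost

# Minimized over (a, b, x0s) with any smooth optimizer, e.g.
# scipy.optimize.minimize; the simulation length is now seg_len, not len(y).
```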
Spectrum-enhanced Pairwise Learning to Rank (1905.00805v1)
Wenhui Yu, Zheng Qin
2019-05-02
To enhance the performance of the recommender system, side information is extensively explored with various features (e.g., visual features and textual features). However, side information has some drawbacks: (1) the extra data is not always available in all recommendation tasks; (2) it typically describes only items, and high-level features describing users are seldom available. To address these gaps, we introduce spectral features extracted from two hypergraph structures of the purchase records. Spectral features describe the similarity of users/items in the graph space, which is critical for recommendation. We leverage spectral features to model the users' preferences and items' properties by incorporating them into a Matrix Factorization (MF) model. In addition to modeling, we also use spectral features in optimization. Bayesian Personalized Ranking (BPR) is widely used to optimize models on implicit feedback data. However, in BPR, all missing values are treated equally as negative samples, although many of them are in fact unseen positive ones. We enrich the positive samples by calculating the similarity among users/items with the spectral features. The key ideas are: (1) similar users shall have similar preferences for the same item; (2) a user shall have similar perceptions of similar items. Extensive experiments on two real-world datasets demonstrate the usefulness of the spectral features and the effectiveness of our spectrum-enhanced pairwise optimization. Our models outperform several state-of-the-art models significantly.
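A minimal sketch of the BPR pairwise objective on a matrix-factorization model; enriching the positive set via spectral similarity, as the paper proposes, would slot in where noted. All shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bpr_loss(user_emb, item_emb, users, pos_items, neg_items):
    # users, pos_items, neg_items: (B,) index tensors. A "positive" item may
    # come from observed purchases or, per the paper's enrichment, from items
    # that the spectral features mark as similar to observed ones.
    u = user_emb[users]                        # (B, d)
    x_pos = (u * item_emb[pos_items]).sum(-1)  # predicted preference scores
    x_neg = (u * item_emb[neg_items]).sum(-1)
    # BPR: maximize the probability that positives outrank sampled negatives.
    return -F.logsigmoid(x_pos - x_neg).mean()
```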
Object Contour and Edge Detection with RefineContourNet (1904.13353v2)
Andre Peter Kelm, Vijesh Soorya Rao, Udo Zoelzer
2019-04-30
A ResNet-based multi-path refinement CNN is used for object contour detection. For this task, we prioritise the effective utilization of the high-level abstraction capability of a ResNet, which leads to state-of-the-art results for edge detection. Keeping this focus in mind, we fuse the high-, mid- and low-level features in that specific order, which differs from many other approaches. The network uses the tensor with the highest-level features as the starting point and combines it layer by layer with features of lower abstraction levels until it reaches the lowest level. We train this network on a modified PASCAL VOC 2012 dataset for object contour detection and evaluate it on a refined PASCAL-val dataset, reaching an excellent performance and an Optimal Dataset Scale (ODS) of 0.752. Furthermore, by fine-tuning on the BSDS500 dataset we reach state-of-the-art results for edge detection with an ODS of 0.824.
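A minimal sketch of the high-to-low refinement order: start from the highest-level ResNet feature map and merge in progressively lower-level maps. The channel widths and the 1x1/3x3 fusion blocks are illustrative assumptions, not the exact RefineContourNet blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownRefinement(nn.Module):
    def __init__(self, channels=(2048, 1024, 512, 256), width=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, width, 1) for c in channels)
        self.fuse = nn.ModuleList(nn.Conv2d(width, width, 3, padding=1)
                                  for _ in channels[1:])
        self.head = nn.Conv2d(width, 1, 1)      # contour/edge logits

    def forward(self, feats):
        # feats: high -> low level, e.g. ResNet stages [C5, C4, C3, C2].
        x = self.lateral[0](feats[0])           # start at the highest level
        for lat, fuse, f in zip(self.lateral[1:], self.fuse, feats[1:]):
            x = F.interpolate(x, size=f.shape[-2:], mode="bilinear",
                              align_corners=False)
            x = fuse(x + lat(f))                # layer-by-layer refinement
        return self.head(x)
```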