Dataset preview

Each record pairs the full text of an ICLR 2018 submission with a peer review. The dataset has three string columns:

| Column | Contents | String length |
|---|---|---|
| `id` | ICLR 2018 submission identifier, e.g. `iclr_2018_B1EVwkqTW` | 11–20 characters |
| `paper_text` | Title, abstract, and body text of the paper | 29–163k characters |
| `review` | Longest peer review available for the paper | 666–24.3k characters |
ReviewRobot Dataset
This dataset contains the raw data from the ReviewRobot work. It was curated by Wang et al. (2020) for the purpose of explainable peer-review generation for research papers.
Dataset Details
Dataset Description
The raw research paper text (extracted by the authors using Grobid) and the peer reviews are made available here. Each paper can have multiple reviews; only the longest review for each paper is kept.
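The longest-review selection described above can be reproduced in a few lines of Python. The following is a minimal sketch, not the authors' code; the file name `reviews_raw.jsonl` and its layout (one JSON object per line with `id` and `review` fields) are assumptions for illustration.

```python
# Minimal sketch: keep only the longest review (by character count) per paper.
# File name and record layout are assumptions; the actual files in the
# ReviewRobot repository may be organized differently.
import json
from collections import defaultdict

reviews_by_paper = defaultdict(list)

# Assume one JSON object per line with "id" and "review" fields.
with open("reviews_raw.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        reviews_by_paper[record["id"]].append(record["review"])

# For each paper, select the single longest review.
longest_review = {
    paper_id: max(reviews, key=len)
    for paper_id, reviews in reviews_by_paper.items()
}
```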
Dataset Sources
- Repository: https://github.com/EagleW/ReviewRobot/tree/master
- Paper: ReviewRobot: Explainable Paper Review Generation based on Knowledge Synthesis (https://aclanthology.org/2020.inlg-1.44)
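For quick inspection, the records can be loaded with pandas. This is a minimal sketch, assuming the raw files from the repository above have been exported to a JSON-lines file with the three columns listed earlier; the file name `reviewrobot.jsonl` is hypothetical.

```python
# Minimal sketch: load the records and check rough length statistics,
# mirroring the column summary in the preview above (paper_text up to
# ~163k characters, review up to ~24.3k characters).
import pandas as pd

df = pd.read_json("reviewrobot.jsonl", lines=True)

# Expected columns: id, paper_text, review.
print(df.columns.tolist())

print(df["paper_text"].str.len().describe())
print(df["review"].str.len().describe())
```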
Citation
BibTeX:
    @inproceedings{wang-etal-2020-reviewrobot,
        title = "{R}eview{R}obot: Explainable Paper Review Generation based on Knowledge Synthesis",
        author = "Wang, Qingyun and
          Zeng, Qi and
          Huang, Lifu and
          Knight, Kevin and
          Ji, Heng and
          Rajani, Nazneen Fatema",
        booktitle = "Proceedings of the 13th International Conference on Natural Language Generation",
        month = dec,
        year = "2020",
        address = "Dublin, Ireland",
        publisher = "Association for Computational Linguistics",
        url = "https://aclanthology.org/2020.inlg-1.44",
        pages = "384--397"
    }