Cornell University's repository of academic papers. Several new submissions arrive every day. You can subscribe to the list and receive updates from chosen categories.
Meet Capsule Networks from Geoffrey Hinton
Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges.
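To convey the core idea behind gradient tree boosting, here is a deliberately minimal sketch using depth-1 trees (stumps) on a single feature. It is not XGBoost itself, which adds regularized objectives, sparsity-aware split finding, and distributed execution; the function names and the toy setup are illustrative.

```python
# Minimal gradient boosting with decision stumps on one feature.
# Each round fits a stump to the current residuals and adds a
# shrunken copy of it to the ensemble.
def fit_stump(x, residuals):
    """Find the split on a 1-D feature minimizing squared error."""
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, threshold, lm, rm)
    return best[1], best[2], best[3]

def boost(x, y, rounds=50, lr=0.1):
    pred = [sum(y) / len(y)] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        t, lv, rv = fit_stump(x, residuals)
        stumps.append((t, lv, rv))
        pred = [p + lr * (lv if xi <= t else rv) for xi, p in zip(x, pred)]
    return sum(y) / len(y), lr, stumps

def predict(model, xi):
    base, lr, stumps = model
    return base + sum(lr * (lv if xi <= t else rv) for t, lv, rv in stumps)
```

With enough rounds the ensemble's predictions converge toward the training targets; the learning rate trades off convergence speed against overfitting.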
In many regression problems, the variable to be predicted depends not only on a sample-specific feature vector, but also on an unknown (latent) manifold that must satisfy known constraints. An example is house prices, which depend on the characteristics of the house and on the desirability of the neighborhood, which is not directly measurable. The proposed method comprises two trainable components. The first is a parametric model that predicts the "intrinsic" price of the house from its description. The second is a smooth, non-parametric model of the latent "desirability" manifold. The predicted price of a house is the product of its intrinsic price and its desirability.
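The multiplicative decomposition can be sketched with a toy version: a linear "intrinsic" model on house size times a per-neighborhood desirability scalar, fit jointly by alternating least squares. This is an illustrative simplification, not the paper's method, which uses a smooth non-parametric manifold rather than per-neighborhood constants; all names and the data layout are assumptions.

```python
# Toy fit of price = intrinsic(size) * desirability(neighborhood)
# by alternating least squares on (size, neighborhood, price) rows.
def fit(data, iters=100):
    w = 1.0                      # intrinsic price per unit size
    hoods = {n for _, n, _ in data}
    d = {n: 1.0 for n in hoods}  # desirability per neighborhood
    for _ in range(iters):
        # update w holding desirability fixed (closed-form least squares)
        num = sum(d[n] * s * p for s, n, p in data)
        den = sum((d[n] * s) ** 2 for s, n, p in data)
        w = num / den
        # update each neighborhood's desirability holding w fixed
        for n in hoods:
            rows = [(s, p) for s, nn, p in data if nn == n]
            num = sum(w * s * p for s, p in rows)
            den = sum((w * s) ** 2 for s, p in rows)
            d[n] = num / den
    return w, d
```

Note the model is identifiable only up to a global scale (doubling w while halving every desirability leaves predictions unchanged), which is why only the products w * d[n] are meaningful.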
We consider a class of well-known dynamic resource allocation models in loss network systems with advanced reservation.
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Through extensive systematic experiments, we show how conventional approaches fail to explain why large neural networks generalize well in practice.
Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations.
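The discrete bottleneck such models rely on can be illustrated by its simplest ingredient: replacing a continuous vector with the index of its nearest entry in a codebook. This sketch shows only the lookup step with a fixed codebook; in the actual model the codebook is learned jointly with an encoder and decoder.

```python
# Nearest-codebook quantization: map a continuous vector to the
# index of the closest codebook entry (squared Euclidean distance).
def quantize(vec, codebook):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist(vec, codebook[i]))
```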
This tutorial introduces the CMA Evolution Strategy (ES), where CMA stands for Covariance Matrix Adaptation.
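The sampling-and-selection loop that CMA-ES builds on can be conveyed with a much simpler relative: a (mu/mu, lambda) evolution strategy on the 2-D sphere function, adapting only a global step size. Full CMA-ES additionally adapts a covariance matrix and uses evolution paths; everything here (function names, population sizes, the step-size rule) is an illustrative assumption.

```python
import random

# Simplified (mu/mu, lambda) evolution strategy: sample lambda children
# around the mean, recombine the mu best into a new mean, and grow the
# step size on success / shrink it on failure.
def sphere(x):
    return sum(v * v for v in x)

def simple_es(dim=2, iters=200, seed=0):
    rng = random.Random(seed)
    mean = [5.0] * dim
    sigma = 1.0
    lam, mu = 10, 3
    for _ in range(iters):
        pop = []
        for _ in range(lam):
            child = [m + sigma * rng.gauss(0, 1) for m in mean]
            pop.append((sphere(child), child))
        pop.sort(key=lambda p: p[0])
        elite = [p[1] for p in pop[:mu]]
        new_mean = [sum(e[i] for e in elite) / mu for i in range(dim)]
        # crude step-size control in the spirit of the 1/5th success rule
        sigma *= 1.1 if sphere(new_mean) < sphere(mean) else 0.8
        mean = new_mean
    return mean
```

Starting far from the optimum, the step size grows while progress is easy and shrinks as the mean closes in, which is the behavior CMA-ES achieves in a far more principled, coordinate-free way.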
Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms.
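The kind of closed form involved can be illustrated on the word2vec case: skip-gram with negative sampling implicitly factorizes a shifted pointwise mutual information (PMI) matrix, log(#(w,c)·|D| / (#(w)·#(c))) − log k, where k is the number of negative samples. The sketch below only builds that matrix from a toy token sequence with a window of 1; the function name and data layout are assumptions, and the paper's network-embedding forms are more involved.

```python
import math
from collections import Counter

# Build the shifted PMI matrix that skip-gram with negative sampling
# implicitly factorizes, from (word, context) co-occurrence counts.
def shifted_pmi(tokens, k=5):
    pairs = Counter()
    for i, w in enumerate(tokens):
        for j in (i - 1, i + 1):          # context window of 1
            if 0 <= j < len(tokens):
                pairs[(w, tokens[j])] += 1
    words = Counter(w for w, _ in pairs.elements())
    ctxs = Counter(c for _, c in pairs.elements())
    total = sum(pairs.values())
    return {
        (w, c): math.log(n * total / (words[w] * ctxs[c])) - math.log(k)
        for (w, c), n in pairs.items()
    }
```

A low-rank factorization of this matrix (e.g. via truncated SVD) then yields embeddings comparable to those trained by the sampling-based objective.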
We develop an end-to-end training algorithm for whole-image breast cancer diagnosis based on mammograms.
Black-box risk scoring models permeate our lives, yet are typically proprietary and opaque. We propose a transparent model distillation approach to detect bias in such models.
We use decision trees to build a helpdesk agent reference network to facilitate the on-the-job advising of junior or less experienced staff on how to better address telecommunication customer fault reports.
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data.