A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is `application/pdf`.


### Revealing the Basis: Ordinal Embedding Through Geometry [article]

2018 · *arXiv* · pre-print

Ordinal embedding places n objects into R^d based on comparisons such as "a is closer to b than to c." Current optimization-based approaches suffer from scalability problems and an abundance of low-quality local optima. We instead consider a computational-geometric approach based on selecting comparisons to discover points close to nearly orthogonal "axes" and embed the whole set by their projections along each axis. We thus also estimate the dimensionality of the data. Our embeddings are of lower quality than the global optima of optimization-based approaches, but are more scalable computationally and more reliable than the local optima often found via optimization. Our method uses Θ(n d log n) comparisons and Θ(n² d²) total operations, and can also be viewed as selecting constraints for an optimizer which, if successful, will produce an almost-perfect embedding for sufficiently dense datasets.

arXiv:1805.07589v1 · fatcat:fcrorh3safhgddcpodvm2o3et4
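The axis-projection idea in the abstract above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the oracle, the object names, and the rank-as-coordinate shortcut are all assumptions, and a single "pole" object stands in for one discovered axis, whereas the paper selects comparisons adaptively to find near-orthogonal axes.

```python
import functools

# Hidden 1-D ground-truth positions; in the paper's setting only triple
# comparisons are observable, so this table exists purely to simulate them.
TRUE_POS = {"a": 0.0, "b": 1.0, "c": 2.5, "d": 4.0, "e": 5.0}

def closer(x, y, anchor):
    """Comparison oracle: is x closer to `anchor` than y is?"""
    return abs(TRUE_POS[x] - TRUE_POS[anchor]) < abs(TRUE_POS[y] - TRUE_POS[anchor])

def embed_axis(objects, pole):
    """Assign a 1-D coordinate along one axis by sorting objects by
    closeness to a 'pole' object: O(n log n) oracle calls per axis,
    matching the Theta(n d log n) comparison budget over d axes."""
    order = sorted(objects,
                   key=functools.cmp_to_key(
                       lambda x, y: -1 if closer(x, y, pole) else 1))
    # The rank serves as a crude coordinate; the paper projects onto
    # discovered near-orthogonal axes instead.
    return {obj: rank for rank, obj in enumerate(order)}

coords = embed_axis(list(TRUE_POS), pole="a")
```

Running this recovers the true left-to-right order along the simulated axis.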
### Measuring Human-perceived Similarity in Heterogeneous Collections [article]

2018 · *arXiv* · pre-print

We present a technique for estimating the similarity between objects such as movies or foods whose proper representation depends on human perception. Our technique combines a modest number of human similarity assessments to infer a pairwise similarity function between the objects. This similarity function captures some human notion of similarity which may be difficult or impossible to extract automatically, such as which movie from a collection would be a better substitute when the desired one is unavailable. In contrast to prior techniques, our method does not assume that all similarity questions on the collection can be answered or that all users perceive similarity in the same way. When combined with a user model, we find how each assessor's tastes vary, affecting their perception of similarity.

arXiv:1802.05929v1 · fatcat:iauektghivcrrcz6nzxqtchwka
### TREC 2013 Temporal Summarization

2013 · *Text Retrieval Conference*
### Adapting RNN Sequence Prediction Model to Multi-label Set Prediction [article]

2019 · *arXiv* · pre-print

We present an adaptation of RNN sequence models to the problem of multi-label classification for text, where the target is a set of labels, not a sequence. Previous such RNN models define probabilities for sequences but not for sets; attempts to obtain a set probability are afterthoughts of the network design, including pre-specifying the label order or relating the sequence probability to the set probability in ad hoc ways. Our formulation is derived from a principled notion of set probability, as the sum of probabilities of corresponding permutation sequences for the set. We provide a new training objective that maximizes this set probability, and a new prediction objective that finds the most probable set on a test document. These new objectives are theoretically appealing because they give the RNN model freedom to discover the best label order, which often is the natural one (but different among documents). We develop efficient procedures to tackle the computational difficulties involved in training and prediction. Experiments on benchmark datasets demonstrate that we outperform state-of-the-art methods for this task.

arXiv:1904.05829v1 · fatcat:7ga5knttf5anbfg4ze4gm3dog4
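The notion of set probability described above (the sum over permutation sequences) can be made concrete. The conditional-probability table below is a hypothetical stand-in for an RNN's softmax outputs; only the sum-over-permutations computation reflects the abstract, and the numbers are illustrative.

```python
import itertools

# Hypothetical conditionals a trained sequence model might assign
# (a stand-in for RNN softmax outputs; the values are illustrative).
COND = {
    (): {"A": 0.5, "B": 0.3, "STOP": 0.2},
    ("A",): {"B": 0.6, "STOP": 0.4},
    ("B",): {"A": 0.7, "STOP": 0.3},
    ("A", "B"): {"STOP": 1.0},
    ("B", "A"): {"STOP": 1.0},
}

def seq_prob(seq):
    """Chain rule: probability of emitting the labels of `seq` in order,
    followed by STOP."""
    p, prefix = 1.0, ()
    for label in seq:
        p *= COND.get(prefix, {}).get(label, 0.0)
        prefix += (label,)
    return p * COND.get(prefix, {}).get("STOP", 0.0)

def set_prob(labels):
    """Set probability = sum of probabilities of all label orderings."""
    return sum(seq_prob(perm) for perm in itertools.permutations(labels))

# P({A, B}) = P(A, B, STOP) + P(B, A, STOP) = 0.5*0.6 + 0.3*0.7 = 0.51
```

Maximizing `set_prob` during training leaves the model free to put most mass on whichever ordering suits a document, which is the freedom the objectives above exploit.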
### Adapting RNN Sequence Prediction Model to Multi-label Set Prediction

2019 · *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics*

We present an adaptation of RNN sequence models to the problem of multi-label classification for text, where the target is a set of labels, not a sequence. Previous such RNN models define probabilities for sequences but not for sets; attempts to obtain a set probability are afterthoughts of the network design, including pre-specifying the label order or relating the sequence probability to the set probability in ad hoc ways. Our formulation is derived from a principled notion of set probability, as the sum of probabilities of corresponding permutation sequences for the set. We provide a new training objective that maximizes this set probability, and a new prediction objective that finds the most probable set on a test document. These new objectives are theoretically appealing because they give the RNN model freedom to discover the best label order, which often is the natural one (but different among documents). We develop efficient procedures to tackle the computational difficulties involved in training and prediction. Experiments on benchmark datasets demonstrate that we outperform state-of-the-art methods for this task.

doi:10.18653/v1/n19-1321 · dblp:conf/naacl/QinLPA19 · fatcat:vnea377afbdy3pquzmuqcs4nsi
### Regularizing Model Complexity and Label Structure for Multi-Label Text Classification [article]

2017 · *arXiv* · pre-print

Multi-label text classification is a popular machine learning task where each document is assigned multiple relevant labels. This task is challenging due to high-dimensional features and correlated labels. Multi-label text classifiers need to be carefully regularized to prevent severe over-fitting in the high-dimensional space, and also need to take label dependencies into account in order to make accurate predictions under uncertainty. We demonstrate significant and practical improvement by carefully regularizing the model complexity during the training phase, and also regularizing the label search space during the prediction phase. Specifically, we regularize classifier training using an Elastic-net (L1+L2) penalty to reduce model complexity/size, and employ early stopping to prevent overfitting. At prediction time, we apply support inference to restrict the search space to label sets encountered in the training set, and the F-optimizer GFM to make optimal predictions for the F1 metric. We show that although support inference only provides density estimates on existing label combinations, when combined with the GFM predictor the algorithm can output unseen label combinations. Taken collectively, our experiments show state-of-the-art results on many benchmark datasets. Beyond the performance and practical contributions, we make some interesting observations. Contrary to the prior belief, which deems support inference purely an approximate inference procedure, we show that support inference acts as a strong regularizer on the label prediction structure. It allows the classifier to take label dependencies into account during prediction even if the classifier had not modeled any label dependencies during training.

arXiv:1705.00740v1 · fatcat:374ssbwjmvb25duau7wkwko5dy
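The Elastic-net (L1+L2) regularization step described above can be sketched with proximal gradient descent on a toy logistic regression. The solver, dataset, and hyperparameters are illustrative assumptions, and support inference/GFM are not shown; the L1 proximal step is what zeroes out the noise feature.

```python
import math

def soft_threshold(w, t):
    """Proximal operator of the L1 penalty (induces exact zeros)."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def fit_elastic_net_logreg(X, y, l1=0.1, l2=0.1, lr=0.1, epochs=200):
    """Logistic regression with an Elastic-net (L1+L2) penalty, trained by
    proximal gradient descent. A minimal sketch, not the paper's solver."""
    d = len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j]
        for j in range(d):
            # The L2 term enters the gradient; the L1 term acts via prox.
            step = w[j] - lr * (grad[j] / len(X) + l2 * w[j])
            w[j] = soft_threshold(step, lr * l1)
    return w

# Feature 0 is predictive; feature 1 is pure noise, which L1 zeroes out.
X = [(1.0, 0.3), (0.9, -0.2), (-1.0, 0.1), (-1.1, -0.4)]
y = [1, 1, 0, 0]
w = fit_elastic_net_logreg(X, y)
```

The sparsity from the L1 half is what "reduces model complexity/size" in the abstract; the L2 half keeps the surviving weights stable.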
### Variational Bayes for Modeling Score Distributions

2010 · *Information Retrieval (Boston)*

Empirical modeling of the score distributions associated with retrieved documents is an essential task for many retrieval applications. In this work, we propose modeling the relevant documents' scores by a mixture of Gaussians and the non-relevant scores by a Gamma distribution. Applying variational Bayes, we automatically trade off the goodness-of-fit against the complexity of the model. We test our model on traditional retrieval functions and actual search engines submitted to TREC. We demonstrate the utility of our model in inferring precision-recall curves. In all experiments our model outperforms the dominant exponential-Gaussian model.

doi:10.1007/s10791-010-9156-2 · fatcat:3sg2i3ggbbbkrpyjiigk2joagi
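The score-distribution model above (Gaussian scores for relevant documents, Gamma for non-relevant) can be sketched as a posterior-of-relevance computation. For brevity a single Gaussian stands in for the mixture, and all parameters are illustrative constants rather than values fit by variational Bayes.

```python
import math

def gaussian_pdf(s, mu, sigma):
    """Density of the (single-component) relevant-score model."""
    return math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gamma_pdf(s, k, theta):
    """Density of the Gamma non-relevant-score model (shape k, scale theta)."""
    return s ** (k - 1) * math.exp(-s / theta) / (math.gamma(k) * theta ** k)

def p_relevant(score, prior_rel=0.1, mu=5.0, sigma=1.0, k=2.0, theta=1.0):
    """Posterior probability of relevance given a retrieval score,
    by Bayes' rule over the two class-conditional densities."""
    rel = prior_rel * gaussian_pdf(score, mu, sigma)
    non = (1.0 - prior_rel) * gamma_pdf(score, k, theta)
    return rel / (rel + non)
```

Posteriors like this, computed down a ranked list, are what feed the inferred precision-recall curves mentioned in the abstract.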
### A Complex KBQA System using Multiple Reasoning Paths [article]

2020 · *arXiv* · pre-print

Multi-hop knowledge-based question answering (KBQA) is a complex task for natural language understanding. Many KBQA approaches have been proposed in recent years, and most of them are trained on labeled reasoning paths. This hinders system performance, as many correct reasoning paths are not labeled as ground truth and thus cannot be learned. In this paper, we introduce an end-to-end KBQA system which can leverage information from multiple reasoning paths and only requires the labeled answer as supervision. We conduct experiments on several benchmark datasets containing both single-hop simple questions and multi-hop complex questions, including WebQuestionSP (WQSP), ComplexWebQuestion-1.1 (CWQ), and PathQuestion-Large (PQL), and demonstrate strong performance.

arXiv:2005.10970v1 · fatcat:zcn4yk4w4jgy7oqu3vnfz2oaha
### Northeastern University Runs at the TREC13 Crowdsourcing Track

2013 · *Text Retrieval Conference*

The goal of the TREC 2013 Crowdsourcing Track was to evaluate approaches to crowdsourcing high-quality relevance judgments for web pages and search topics. This paper describes our submission to the Crowdsourcing Track. Participants of this track were required to assess documents on a six-point scale. Our approach is based on collecting a linear number of preference judgments and combining them into nominal grades using a modified version of the QuickSort algorithm.

dblp:conf/trec/BashirAPA13 · fatcat:ejfkuk4gqbaehisqaa7cpmr4ki
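The preference-to-grades pipeline in the TREC13 run above can be sketched with a plain QuickSort over a preference oracle. The submission used a modified QuickSort, so treat the comparator, the equal-width grade-cutting rule, and the toy `quality` table as assumptions.

```python
def quicksort_by_preference(docs, prefer):
    """Order documents with a pairwise preference oracle, using an expected
    O(n log n) number of preference judgments (plain QuickSort)."""
    if len(docs) <= 1:
        return list(docs)
    pivot, rest = docs[0], docs[1:]
    better = [d for d in rest if prefer(d, pivot)]      # preferred over pivot
    worse = [d for d in rest if not prefer(d, pivot)]
    return quicksort_by_preference(better, prefer) + [pivot] + \
        quicksort_by_preference(worse, prefer)

def to_grades(ranked, n_grades=6):
    """Cut the preference order into nominal grades on a six-point scale
    (grade 5 = best). Equal-width bands are an assumption."""
    return {doc: n_grades - 1 - (i * n_grades) // len(ranked)
            for i, doc in enumerate(ranked)}

# A hidden ground-truth quality stands in for a crowd worker's preferences.
quality = {"d1": 0.9, "d2": 0.1, "d3": 0.5, "d4": 0.7, "d5": 0.3, "d6": 0.95}
ranked = quicksort_by_preference(list(quality), lambda a, b: quality[a] > quality[b])
grades = to_grades(ranked)
```

Because only a linear-ish number of preferences is collected per pass, this scales far better than judging all O(n²) pairs.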
### If I Had a Million Queries [chapter]

2009 · *Lecture Notes in Computer Science*

As document collections grow larger, the information needs and relevance judgments in a test collection must be well chosen within a limited budget to give the most reliable and robust evaluation results. In this work we analyze a sample of queries categorized by length and corpus-appropriateness to determine the right proportion needed to distinguish between systems. We also analyze the appropriate division of labor between developing topics and making relevance judgments, and show that only a small, biased sample of queries with sparse judgments is needed to produce the same results as a much larger sample of queries.

doi:10.1007/978-3-642-00958-7_27 · fatcat:ikc7siy45jcj5a5y32fnnoiy5u
### TREC 2015 Temporal Summarization Track Overview

2015 · *Text Retrieval Conference*
### TREC 2014 Temporal Summarization Track Overview

2014 · *Text Retrieval Conference*
### Extended Expectation Maximization for Inferring Score Distributions [chapter]

2012 · *Lecture Notes in Computer Science*

Inferring the distributions of relevant and non-relevant documents over a ranked list of scored documents returned by a retrieval system has a broad range of applications, including information filtering, recall-oriented retrieval, metasearch, and distributed IR. Typically, the distribution of documents over scores is modeled by a mixture of two distributions, one for the relevant and one for the non-relevant documents, and expectation maximization (EM) is run to estimate the mixture parameters. A large volume of work has focused on selecting the appropriate form of the two distributions in the mixture. In this work we take the form of the distributions as given and focus on the inference algorithm. We extend the EM algorithm (a) by simultaneously considering the ranked lists of documents returned by multiple retrieval systems, and (b) by encoding in the algorithm the constraint that the same document retrieved by multiple systems should have the same, global, probability of relevance. We test the new inference algorithm on TREC data and demonstrate that it outperforms the regular EM algorithm: it is better calibrated in inferring the probability of a document's relevance, and it is more effective when applied to the task of metasearch.

doi:10.1007/978-3-642-28997-2_25 · fatcat:arugdpwpc5grnk6iwhzf3jzvuu
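A sketch of the baseline EM inference the chapter extends, for the common exponential (non-relevant) + Gaussian (relevant) score mixture. The multi-system, shared-relevance constraint of the extension is noted in the docstring but not implemented; the initialization and the synthetic scores are assumptions.

```python
import math

def em_exp_gauss(scores, iters=50):
    """Baseline EM for the exponential (non-relevant) + Gaussian (relevant)
    score mixture. The chapter's extension would additionally tie each
    document's relevance probability across systems; omitted here."""
    pi, lam, mu, sigma = 0.5, 1.0, max(scores), 1.0
    for _ in range(iters):
        # E-step: posterior probability that each score came from the
        # relevant (Gaussian) component.
        post = []
        for s in scores:
            rel = pi * math.exp(-0.5 * ((s - mu) / sigma) ** 2) \
                / (sigma * math.sqrt(2.0 * math.pi))
            non = (1.0 - pi) * lam * math.exp(-lam * s)
            post.append(rel / (rel + non))
        # M-step: re-estimate the mixture parameters from the posteriors.
        n_rel = sum(post)
        pi = n_rel / len(scores)
        mu = sum(p * s for p, s in zip(post, scores)) / n_rel
        var = sum(p * (s - mu) ** 2 for p, s in zip(post, scores)) / n_rel
        sigma = max(math.sqrt(var), 1e-3)
        lam = (len(scores) - n_rel) / sum((1.0 - p) * s for p, s in zip(post, scores))
    return pi, lam, mu, sigma

# Synthetic scores: a low-score exponential bulk plus a cluster near 5.
scores = [0.1, 0.3, 0.5, 0.2, 0.8, 4.5, 5.0, 5.5, 4.8, 5.2]
pi, lam, mu, sigma = em_exp_gauss(scores)
```

The extension would run this E-step jointly over several systems' ranked lists, constraining a document's posterior to agree wherever it appears.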
### Conditional Bernoulli Mixtures for Multi-label Classification

2016 · *International Conference on Machine Learning*

Multi-label classification is an important machine learning task wherein one assigns a subset of candidate labels to an object. In this paper, we propose a new multi-label classification method based on Conditional Bernoulli Mixtures. Our proposed method has several attractive properties: it captures label dependencies; it reduces the multi-label problem to several standard binary and multi-class problems; it subsumes the classic independent binary prediction and power-set subset prediction methods as special cases; and it exhibits accuracy and/or computational complexity advantages over existing approaches. We demonstrate two implementations of our method using logistic regressions and gradient boosted trees, together with a simple training procedure based on Expectation Maximization. We further derive an efficient prediction procedure based on dynamic programming, thus avoiding the cost of examining an exponential number of potential label subsets. Experimental results show the effectiveness of the proposed method against competitive alternatives on benchmark datasets.

dblp:conf/icml/LiWPA16 · fatcat:lrmr56w3z5eudfqyfvcnj5yo6q
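The Conditional Bernoulli Mixture factorization above can be illustrated directly: p(y|x) = Σ_k π_k(x) Π_j b_jk(y_j|x). For a fixed input x, the weights and per-label Bernoulli means below are illustrative constants, not the learned logistic regressions or gradient boosted trees of the paper.

```python
# Conditional Bernoulli Mixture for one fixed input x:
# p(y | x) = sum_k pi_k(x) * prod_j b_jk(y_j | x).
PI = [0.6, 0.4]                # mixture weights pi_k(x)
B = [[0.9, 0.8, 0.1],          # component 1: p(y_j = 1 | x, k = 1)
     [0.2, 0.1, 0.7]]          # component 2: p(y_j = 1 | x, k = 2)

def cbm_prob(y):
    """Probability of a full 0/1 label vector y under the mixture:
    within each component labels are independent Bernoullis, but the
    mixture over components induces label dependencies."""
    total = 0.0
    for pi_k, b_k in zip(PI, B):
        comp = pi_k
        for y_j, b_j in zip(y, b_k):
            comp *= b_j if y_j else (1.0 - b_j)
        total += comp
    return total

# Labels 1 and 2 co-occur under component 1 while label 3 belongs to
# component 2 -- a dependency one independent-Bernoulli model cannot express.
```

A single-component mixture recovers independent binary prediction, which is the "special case" relationship the abstract mentions.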
### Northeastern University Runs at the TREC12 Crowdsourcing Track

2012 · *Text Retrieval Conference*

The goal of the TREC 2012 Crowdsourcing Track was to evaluate approaches to crowdsourcing high-quality relevance judgments for images and text documents. This paper describes our submission to the Text Relevance Assessing Task. We explored three different approaches for obtaining relevance judgments. Our first two approaches are based on collecting a limited number of preference judgments from Amazon Mechanical Turk workers. These preferences are then extended to relevance judgments through the use of expectation maximization and the Elo rating system. Our third approach is based on our Nugget-based evaluation paradigm.

dblp:conf/trec/BashirAWEGPA12 · fatcat:v2ja3fo7wfgwxamffgyjc7nrsm
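The Elo component of the second approach above admits a short sketch. This is the generic Elo update applied to a single preference judgment, not necessarily the submission's exact configuration; the K-factor and starting ratings are assumptions.

```python
def elo_update(r_preferred, r_other, k=32.0):
    """One Elo update from a single preference judgment: the preferred
    document gains rating and the other loses the same amount."""
    expected = 1.0 / (1.0 + 10.0 ** ((r_other - r_preferred) / 400.0))
    delta = k * (1.0 - expected)
    return r_preferred + delta, r_other - delta

ratings = {"doc_a": 1000.0, "doc_b": 1000.0}
# A worker prefers doc_a over doc_b:
ratings["doc_a"], ratings["doc_b"] = elo_update(ratings["doc_a"], ratings["doc_b"])
```

Iterating this over all collected preferences yields a rating per document, which can then be thresholded into relevance grades.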

*Showing results 1 — 15 out of 56 results*