TREMA-UNH at TREC 2020
2020
Text Retrieval Conference
This notebook describes the submissions of team TREMA-UNH to the TREC Podcasts track. We participate in the summarization task of the track. ...
Introduction This year, team TREMA-UNH from the University of New Hampshire, USA, participated in the summarization task of the TREC Podcasts track. ...
As the training dataset, we use the benchmarkY1 train split of the TREC CAR year 1 dataset [5]. ...
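The benchmarkY1 train split mentioned above is distributed in the TREC CAR CBOR format and can be read with the trec-car-tools Python package (github.com/TREMA-UNH/trec-car-tools). A minimal sketch, assuming the package is installed (`pip install trec-car-tools`) and using an illustrative local file path:

```python
# Minimal sketch: iterate over TREC CAR paragraphs from a benchmarkY1-train file.
# The file path below is illustrative; the reader API comes from trec-car-tools.
from trec_car.read_data import iter_paragraphs

PARAGRAPHS = "benchmarkY1-train/train.pages.cbor-paragraphs.cbor"  # hypothetical path

with open(PARAGRAPHS, "rb") as f:
    for i, para in enumerate(iter_paragraphs(f)):
        # Each Paragraph carries a stable id (the id used in the qrels) and its text.
        print(para.para_id, para.get_text()[:80])
        if i == 4:  # only preview the first few paragraphs
            break
```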
dblp:conf/trec/KashyapiD20
fatcat:cny2v66pcvc4hmonclayhjy3eu
TREC CAsT 2019: The Conversational Assistance Track Overview
[article]
2020
arXiv
pre-print
The document corpus is 38,426,252 passages from the TREC Complex Answer Retrieval (CAR) and Microsoft MAchine Reading COmprehension (MARCO) datasets. ...
The Conversational Assistance Track (CAsT) is a new track for TREC 2019 to facilitate Conversational Information Seeking (CIS) research and to create a large-scale reusable test collection for conversational ...
Excerpt from the table of submitted runs:
  …           UNH-trema-ecn     Y   automatic
  uogTr       ug_1stprev3_sdm       automatic
  TREMA-UNH   UNH-trema-ent     Y   automatic
  uogTr       ug_cedr_rerank    Y   automatic
  TREMA-UNH   unh-trema-relco       automatic
  uogTr       ug_cont_lin       Y   ...
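Run names such as ug_1stprev3_sdm hint at the common CAsT strategy of expanding the current utterance with the first and a few previous turns before retrieval. A minimal sketch of that query-construction step, as an illustration of the general idea rather than the actual uogTr or TREMA-UNH implementation:

```python
# Sketch: build a context-expanded query from the first turn, the previous
# n_prev turns, and the current utterance of a conversation.
from typing import List

def expand_query(turns: List[str], current_idx: int, n_prev: int = 3) -> str:
    """Concatenate the first turn, up to n_prev preceding turns, and the
    current utterance into a single keyword query."""
    first = [turns[0]] if current_idx > 0 else []
    prev = turns[max(1, current_idx - n_prev):current_idx]
    return " ".join(first + prev + [turns[current_idx]])

turns = [
    "What is throat cancer?",
    "Is it treatable?",
    "Tell me about lung cancer.",
    "What are its symptoms?",
]
print(expand_query(turns, current_idx=3))
# -> "What is throat cancer? Is it treatable? Tell me about lung cancer. What are its symptoms?"
```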
arXiv:2003.13624v1
fatcat:ful7udqmmvfcfom65oxc76dq24
DUTh at TREC 2020 Conversational Assistance Track
2020
Text Retrieval Conference
This paper describes DUTh's participation in the TREC 2020 Conversational Assistance Track (CAsT). ...
/TREMA-UNH/trec-car-tools
www.tensorflow.org ...
Introduction This is an overview of the Democritus University of Thrace (DUTh) retrieval run submissions to the TREC 2020 Conversational Assistance Track (CAsT), which focuses on conversational question ...
dblp:conf/trec/FotiadisPSA20
fatcat:p4tgp2finbfrvpvlbt5daafplm
Overview of the TREC 2019 deep learning track
[article]
2020
arXiv
pre-print
The Deep Learning Track is a new track for TREC 2019, with the goal of studying ad hoc ranking in a large data regime. ...
It is the first track with large human-labeled training sets, introducing two sets corresponding to two tasks, each with rigorous TREC-style blind evaluation and reusable test sets. ...
Excerpt from the per-run results table:
  …                …           …         …     0.2402  0.7036  0.5058  0.7490  0.3013
  srchvrs_ps_run1  srchvrs     fullrank  trad  0.1902  0.5597  0.4990  0.7240  0.2972
  bm25tuned_p      BASELINE    fullrank  trad  0.2363  0.6850  0.4973  0.7472  0.2903
  UNH_bm25         TREMA-UNH   ...
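The per-run scores above are rank-based retrieval metrics such as reciprocal rank and NDCG@10. A minimal sketch of how two such metrics are computed for a single query from a ranked list and graded judgments; this is illustrative only, since official track scores come from trec_eval over the full run files:

```python
# Sketch: reciprocal rank and NDCG@10 for one query, given a ranked list of
# document ids and graded relevance judgments (qrels).
import math
from typing import Dict, List

def reciprocal_rank(ranking: List[str], qrels: Dict[str, int]) -> float:
    for i, doc in enumerate(ranking, start=1):
        if qrels.get(doc, 0) > 0:
            return 1.0 / i
    return 0.0

def ndcg_at_k(ranking: List[str], qrels: Dict[str, int], k: int = 10) -> float:
    dcg = sum((2 ** qrels.get(doc, 0) - 1) / math.log2(i + 1)
              for i, doc in enumerate(ranking[:k], start=1))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum((2 ** rel - 1) / math.log2(i + 1)
               for i, rel in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

qrels = {"d3": 3, "d7": 1}                 # hypothetical graded judgments
ranking = ["d1", "d3", "d2", "d7", "d5"]   # hypothetical system ranking
print(reciprocal_rank(ranking, qrels))     # 0.5
print(round(ndcg_at_k(ranking, qrels), 4))
```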
arXiv:2003.07820v2
fatcat:a4wghnw6fzbmfe4m24lpgpuwhy
BERT-ER
2022
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
We show that our entity ranking system using BERT-ER can increase precision at the top of the ranking by promoting relevant entities to the top. ...
https://www.cs.unh.edu/~dietz/eal-dataset-2020/
http://trec-car.cs.unh.edu
https://github.com/iai-group/DBpedia-Entity
https://github.com/TREMA-UNH/DBpediaV2-entity-CAR
https://www.cs.unh.edu ...
For example, BERT-LeadText places the relevant entity "Organic Consumers Association" at rank 57 whereas BERT-SupportPsg places it at rank 13 (see Figure 4). ...
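The snippet contrasts entity rankings produced from different textual representations of an entity (lead text vs. support passage). A minimal sketch of the general idea, scoring (query, entity text) pairs with an off-the-shelf cross-encoder; the model name, entities, and texts below are placeholders, not the authors' BERT-ER setup:

```python
# Sketch: rank candidate entities for a query by scoring (query, entity text)
# pairs with a pretrained cross-encoder. The entity text could be, e.g., the
# entity's lead text or a support passage.
from sentence_transformers import CrossEncoder

query = "health benefits of organic food"
# Hypothetical (entity, representative text) pairs.
candidates = {
    "Organic Consumers Association": "The Organic Consumers Association campaigns on food safety and organic agriculture ...",
    "Pesticide": "A pesticide is a substance used to control pests ...",
}

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = model.predict([(query, text) for text in candidates.values()])

# Higher score = more relevant; list entities best first.
for entity, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}\t{entity}")
```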
doi:10.1145/3477495.3531944
fatcat:7qwg5hir6bedfhcyqyttnomemu