Deep learning relevance: creating relevant information (as opposed to retrieving it)
Research output: Contribution to conference › Paper › Research › peer-review
Standard
Deep learning relevance: creating relevant information (as opposed to retrieving it). / Lioma, Christina; Larsen, Birger; Petersen, Casper; Simonsen, Jakob Grue.
2016. Paper presented at SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR), Pisa, Italy.
RIS
TY - CONF
T1 - Deep learning relevance
T2 - SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR)
AU - Lioma, Christina
AU - Larsen, Birger
AU - Petersen, Casper
AU - Simonsen, Jakob Grue
N1 - Conference code: 1
PY - 2016
Y1 - 2016
N2 - What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.
AB - What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.
M3 - Paper
Y2 - 21 July 2016 through 21 July 2016
ER -
ID: 171795008
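
The abstract describes a two-step pipeline: train an RNN language model on documents already judged relevant to a query, then sample from it to synthesise a new document. The sketch below is a minimal illustration of that idea, assuming a character-level LSTM in PyTorch; the paper does not specify its architecture, framework, or hyperparameters, so the class, the function names, the model size, and the sampling temperature here are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the pipeline described in the abstract. Everything here
# (character-level LSTM, hyperparameters, sampling temperature) is an
# assumption for illustration; the paper does not specify these details.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Character-level LSTM language model."""
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

def train_on_relevant_docs(docs, epochs=10, seq_len=100):
    """Fit the language model on documents already judged relevant to one query."""
    text = "\n".join(docs)
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text])
    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=2e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        # Next-character prediction over consecutive windows of the corpus.
        for i in range(0, len(data) - seq_len - 1, seq_len):
            x = data[i:i + seq_len].unsqueeze(0)
            y = data[i + 1:i + seq_len + 1].unsqueeze(0)
            logits, _ = model(x)
            loss = loss_fn(logits.view(-1, logits.size(-1)), y.view(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model, chars, stoi

def synthesise_document(model, chars, stoi, seed, length=500, temperature=0.8):
    """Sample a synthetic 'deep learned' document from the trained model.

    The seed must contain only characters seen during training.
    """
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed]])
    out = list(seed)
    with torch.no_grad():
        logits, state = model(idx)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1)
            out.append(chars[nxt.item()])
            logits, state = model(nxt.view(1, 1), state)
    return "".join(out)
```

Under these assumptions, one would call train_on_relevant_docs once per query on its pool of relevant documents, then synthesise_document with a short seed string drawn from the query; the crowdsourced word-cloud comparison described in the abstract would then judge the sampled document against the existing relevant ones.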