<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Apologies for cross-posting.</p>
<p><br>
</p>
<p>***********************************************************************************<br>
</p>
<p>CLEF 2018 Conference and Labs on the Evaluation Forum<br>
Information Access Evaluation meets Multilinguality, Multimodality
and Visualization<br>
10 - 14 September 2018, Avignon - France</p>
<p>***********************************************************************************</p>
<p><font color="#cc0000" size="+1">Call for Lab Participation - <b>Registration
closes: 27 April 2018</b></font></p>
<p><font color="#cc0000">Lab
participants must register for the Labs via the CLEF website: <font
color="#000000"><a class="moz-txt-link-freetext" href="http://clef2018-labs-registration.dei.unipd.it/">http://clef2018-labs-registration.dei.unipd.it/</a></font></font><br>
<br>
<br>
Conference website: <a class="moz-txt-link-freetext" href="http://clef2018.clef-initiative.eu/">http://clef2018.clef-initiative.eu/</a><br>
Labs flyer (pdf):
<a class="moz-txt-link-freetext" href="http://clef2018.clef-initiative.eu/resources/CLEF2018-labs-flyer.pdf">http://clef2018.clef-initiative.eu/resources/CLEF2018-labs-flyer.pdf</a><br>
<br>
CLEF 2018 is the 19th edition of CLEF, which since 2000 has
contributed to the systematic evaluation of information access
systems. It consists of a peer-reviewed conference (see the
separate call for papers) and a set of ten Labs designed to test
different aspects of multilingual and multimedia IR systems: <br>
1. CENTRE@CLEF 2018, CLEF/NTCIR/TREC Reproducibility<br>
2. CheckThat! Automatic Identification and Verification of
Political Claims<br>
3. CLEF eHealth<br>
4. DynSe, Dynamic Search for Complex Tasks<br>
5. eRISK, Early Risk Prediction on the Internet<br>
6. ImageCLEF, Multimedia Retrieval in CLEF<br>
7. LifeCLEF<br>
8. MC2, Multilingual Cultural Mining and Retrieval<br>
9. PAN, Lab on Digital Text Forensics<br>
10. PIR-CLEF, Evaluation of Personalised Information Retrieval<br>
<br>
*****************<br>
Important Dates<br>
*****************<br>
<br>
Lab Registration Opens: 8 November 2017<br>
Registration closes: 27 April 2018<br>
End of Evaluation Cycle: 11 May 2018<br>
Working Notes papers due: 31 May 2018<br>
Camera Ready Copy: 29 June 2018<br>
<br>
</p>
<p>*****************<br>
Organizers<br>
*****************<br>
<b><i>Conference Chairs</i></b><br>
Patrice Bellot, Aix-Marseille Université - CNRS LSIS, France<br>
Chiraz Trabelsi, University of Tunis El Manar, Tunisia<br>
<br>
<b><i>Program Chairs</i></b><br>
Josiane Mothe, SIG, IRIT, France<br>
Fionn Murtagh, University of Huddersfield, UK<br>
<br>
<b><i>Lab Chairs</i></b><br>
Jian Yun Nie, DIRO, Université de Montréal, Canada<br>
Laure Soulier, LIP6, UPMC, France<br>
<br>
<i><b>Proceedings Chairs</b></i><br>
Linda Cappellato, University of Padua, Italy<br>
Nicola Ferro, University of Padua, Italy<br>
<br>
</p>
<p><br>
<i><b>Publication</b></i><br>
Labs Working Notes will be published in the CEUR-WS Proceedings:<br>
<a class="moz-txt-link-freetext" href="http://ceur-ws.org/">http://ceur-ws.org/</a><br>
Lab Paper Submission via Easychair:
<a class="moz-txt-link-freetext" href="http://easychair.org/conferences/?conf=clef2018">http://easychair.org/conferences/?conf=clef2018</a><br>
</p>
<p><br>
</p>
<p>*****************<br>
LABS <br>
*****************<br>
<br>
<i><b>CENTRE@CLEF 2018 - CLEF/NTCIR/TREC Reproducibility</b></i><br>
The goal of CENTRE@CLEF 2018 is to run a joint CLEF/NTCIR/TREC
task challenging participants: 1) to reproduce the best results of
the best/most interesting systems from previous editions of
CLEF/NTCIR/TREC using standard open-source IR systems; 2) to
contribute back to the community the additional components and
resources developed to reproduce the results, in order to improve
existing open-source systems.<br>
- Task 1 - Replicability: replicability of selected methods on
the same experimental collections.<br>
- Task 2 - Reproducibility: reproducibility of selected
methods on different experimental collections.<br>
- Task 3 - Re-reproducibility: using the components developed in
Tasks 1 and 2 and made available by the other participants to
replicate/reproduce their results.<br>
<i>Lab Coordination:</i> Nicola Ferro (University of Padua),
Tetsuya Sakai (Waseda University), Ian Soboroff (NIST) <br>
<i>Lab website:</i> <a class="moz-txt-link-freetext" href="http://www.centre-eval.org/">http://www.centre-eval.org/</a> <br>
<i>Twitter:</i> @_centre_<br>
<br>
<b><i>LifeCLEF</i></b><br>
The LifeCLEF lab aims at boosting research on the identification of
living organisms and on the production of biodiversity data.
Through its biodiversity informatics related challenges, LifeCLEF
is intended to push the boundaries of the state-of-the-art in
several research directions at the frontier of multimedia
information retrieval, machine learning and knowledge engineering.
The lab is organized around three tasks:<br>
- Task 1 - GeoLifeCLEF: location-based species recommendation.<br>
- Task 2 - BirdCLEF: bird species identification from bird
calls and songs.<br>
- Task 3 - ExpertLifeCLEF: experts vs. machines identification
quality.<br>
<i>Lab Coordination:</i> Alexis Joly (INRIA, LIRMM), Henning
Müller (HES-SO), Pierre Bonnet (CIRAD, AMAP), Hervé Goëau (CIRAD,
AMAP), Hervé Glotin (University of Toulon, LSIS CNRS), Simone
Palazzo (University of Catania), Willem-Pier Vellinga (Xeno-Canto)<br>
<i>Lab website:</i> <a class="moz-txt-link-freetext" href="http://lifeclef.org/">http://lifeclef.org/</a><br>
<br>
<i><b>PAN - Lab on Digital Text Forensics</b></i><br>
PAN is a series of scientific events and shared tasks on digital
text forensics.<br>
- Task 1 - Author Identification: cross-domain authorship
attribution. More specifically, cases where the topic of texts
varies significantly will be examined. In addition, we will
continue the pilot task of style change detection, focusing on
finding switches of authors within documents based on an intrinsic
style analysis.<br>
- Task 2 - Author Obfuscation: while the goal of author
identification and author profiling is to model author style so as
to deanonymize authors, the goal of author obfuscation technology
is to prevent that by disguising the authors. We will study author
masking vs. authorship verification.<br>
- Task 3 - Author Profiling: the goal is to identify an
author's traits based on their writing style. The focus will be on
age and gender, whereas text and image will be used as information
sources, offering tweets in English, Spanish and Arabic.<br>
<i>Lab Coordination:</i> Martin Potthast (Leipzig University),
Paolo Rosso (Universitat Politècnica de València), Efstathios
Stamatatos (University of the Aegean), Benno Stein
(Bauhaus-Universität Weimar)<br>
<i>Lab website:</i> <a class="moz-txt-link-freetext" href="http://pan.webis.de/">http://pan.webis.de/</a><br>
<br>
<i><b>CLEF eHealth</b></i><br>
Medical content is available electronically in a variety of forms
ranging from patient records and medical dossiers, scientific
publications and health-related websites to medical-related topics
shared across social networks. This lab aims to support the
development of techniques to aid laypeople, clinicians and
policy-makers in easily retrieving and making sense of medical
content to support their decision making.<br>
- Task 1 - Multilingual Information Extraction: Participants
will be required to extract the causes of death from death
certificates, authored by physicians in European languages. This
can be seen as a named entity recognition, normalization, and/or
text classification task.<br>
- Task 2 - Technologically Assisted Reviews in Empirical
Medicine: Participants will be challenged to retrieve medical
studies relevant to conducting a systematic review on a given
topic. This can be seen as a total recall problem and is addressed
by both query generation and document ranking.<br>
- Task 3 - Patient-centred Information Retrieval: Participants
must retrieve web pages that fulfil a given patient’s personalised
information need. Retrieved pages must meet criteria of
information reliability, quality, and suitability. The task also
has a multilingual querying track.<br>
<i>Lab Coordination:</i> Leif Azzopardi (Univ. of Strathclyde),
Lorraine Goeuriot (Univ. J.Fourier), Evangelos Kanoulas (Univ. of
Amsterdam), Liadh Kelly (Maynooth University), Aurélie Névéol
(CNRS-LIMSI), Joao Palotti (Vienna Univ.), Aude Robert
(INSERM/CepiDC), Rene Spijker (Cochrane), Hanna Suominen
(Australian National Univ.), Guido Zuccon (Queensland Univ. of
Technology) <br>
<i>Lab Website:</i>
<a class="moz-txt-link-freetext" href="https://sites.google.com/view/clef-ehealth-2018/home">https://sites.google.com/view/clef-ehealth-2018/home</a> <br>
<i>Twitter :</i> @clefehealth<br>
<br>
<i><b>MC2 - Multilingual Cultural Mining and Retrieval</b></i><br>
The lab focuses on developing processing methods and resources to
mine the social media sphere surrounding cultural events such as
festivals. This requires dealing with almost all languages and
dialects, as well as informal expressions. There are three tasks:<br>
- Task 1 - Cross Language Cultural Retrieval over MicroBlogs:
a) Small Microblogs Multilingual Information Retrieval in Arabic,
English, French and Latin languages; b) Microblogs Bilingual
Information Retrieval for tuning systems running on language
pairs; c) Microblog Monolingual Information Retrieval based on
2017 language identification.<br>
- Task 2 - Mining Opinion Argumentation: a) Polarity detection
in microblogs; b) Automatic identification of argumentation
elements over Microblogs and WikiPedia; c) Classification and
summarization of arguments in texts.<br>
- Task 3 - Dialectal Focus Retrieval: a) Arabic dialects in
Blogs, MicroBlogs and Video News transcriptions; b) Spanish
language variations in Blogs, MicroBlogs and Journals.<br>
<i>Lab Coordination:</i>
Chiraz Latiri (University Tunis El Manar), Eric SanJuan (LIA,
Avignon University), Catherine Berrut (LIG, Grenoble Alpes
University), Lorraine Goeuriot (LIG, Grenoble Alpes University),
Julio Gonzalo (UNED)<br>
<i>Lab website:</i> <a class="moz-txt-link-freetext" href="https://mc2.talne.eu/">https://mc2.talne.eu/</a> <br>
<i>Twitter:</i> @talne_mc2<br>
<br>
<b><i>ImageCLEF - Multimedia Retrieval in CLEF</i></b><br>
The lab provides an evaluation forum for the language-independent
annotation and retrieval of images, a domain in which tools are
far less advanced than those for text analysis and retrieval.<br>
- Task 1 - ImageCLEFlifelog: An increasingly wide range of
personal devices that can capture pictures, videos, and audio
clips at every moment of our lives, such as smartphones, video
cameras, and wearable devices, is becoming available. The task
addresses the problems of lifelogging data understanding,
summarization and retrieval.<br>
- Task 2 - ImageCLEFcaption: Interpreting and summarizing the
insights gained from medical images such as radiology output is a
time-consuming task that involves highly trained experts and often
represents a bottleneck in clinical diagnosis pipelines. The task
addresses the problem of bio-medical image concept detection and
caption prediction from large amounts of training data.<br>
- Task 3 - ImageCLEFtuberculosis: The objective of this task
is to determine tuberculosis subtypes and drug resistances, as far
as possible automatically, from the volumetric image information
in computed tomography (CT) volumes (mainly texture analysis) and
based on clinical information (e.g., age, gender, etc).<br>
- Task 4 - VisualQuestionAnswering: With the ongoing drive for
improved patient engagement and access to the electronic medical
records via patient portals, patients can now review structured
and unstructured data from labs and images to text reports
associated with their healthcare utilization. Given a medical
image accompanied with a set of clinically relevant questions,
participating systems are tasked with answering the questions
based on the visual image content.<br>
<i>Lab Coordination:</i> Bogdan Ionescu (University Politehnica of
Bucharest), Mauricio Villegas (SearchInk), Henning Müller (HES-SO)<br>
<i>Lab website:</i> <a class="moz-txt-link-freetext" href="http://www.imageclef.org/2018/">http://www.imageclef.org/2018/</a> <br>
<i>Twitter:</i> @imageclef<br>
<br>
<i><b>PIR-CLEF - Evaluation of Personalised Information Retrieval</b></i><br>
The primary aims of the PIR-CLEF 2018 lab are: 1) to
facilitate comparative evaluation of PIR by offering participating
research groups a mechanism for evaluating their
personalisation algorithms; 2) to give the participating groups
the means to formally define and evaluate their own novel user
profiling approaches for PIR.<br>
- Task 1 - Personalized Search: we will provide a bag-of-words
profile gathered during the query sessions performed by real
searchers, the set of queries formulated by each user, together
with the corresponding document relevance, and the search logs
of each user. Task participants will be expected to compute search
results by applying their personalization algorithms to
these queries. The search will be carried out on the ClueWeb12
collection, using the API provided by DCU.<br>
- Task 2 - User Profile Models: participants will be required
to develop their own user profile models using the information
gathered about the real user during her interactions with the
system. The same information has been used for creating the
baseline (keyword-based user profiles), which is provided in the
benchmark.<br>
<i>Lab Coordination:</i> Gabriella Pasi (University of Milano
Bicocca), Gareth J. F. Jones (Dublin City University), Stefania
Marrara (Consorzio C2T), Debasis Ganguly (IBM Research Dublin),
Procheta Sen (Dublin City University), Camilla Sanvitto
(University of Milano Bicocca)<br>
<i>Lab website:</i> <a class="moz-txt-link-freetext" href="http://www.ir.disco.unimib.it/pir-clef2018/">http://www.ir.disco.unimib.it/pir-clef2018/</a> <br>
<i>Twitter:</i> @clef2018_pir<br>
<br>
<i><b>eRISK - Early Risk Prediction on the Internet</b></i><br>
eRisk explores the evaluation methodology, effectiveness metrics
and practical applications (particularly those related to health
and safety) of early risk detection on the Internet.<br>
- Task 1 - Early Detection of Signs of Depression: the
challenge consists of sequentially processing pieces of evidence
(Social Media entries) and detecting early traces of depression as
soon as possible.<br>
- Task 2 - Early Detection of Signs of Anorexia: the challenge
consists of sequentially processing pieces of evidence (Social
Media entries) and detecting early traces of anorexia as soon as
possible.<br>
Both tasks are mainly concerned with evaluating Text Mining
solutions and, thus, we concentrate on texts written in Social
Media. Texts should be processed in the order they were posted. In
this way, systems that effectively perform this task could be
applied to sequentially monitor user interactions in blogs, social
networks, or other types of online media.<br>
<i>Lab Coordination:</i> David E. Losada (University of Santiago
de Compostela), Fabio Crestani (University of Lugano), Javier
Parapar (University of A Coruña)<br>
<i>Lab website: </i><a class="moz-txt-link-freetext" href="http://early.irlab.org/">http://early.irlab.org/</a> <br>
<i>Twitter:</i> @earlyrisk<br>
<br>
<i><b>DynSe - Dynamic Search for Complex Tasks</b></i><br>
The primary aim of the CLEF Dynamic Search Lab is to develop
algorithms which interact dynamically with a user (or other
algorithms) to solve a task, and evaluation methodologies
to quantify their effectiveness. The lab is organized along two
tasks:<br>
- Task 1 - Query Suggestion: given a verbose topic description,
participants will generate and submit a sequence of queries and a
ranking of the collection for each query. Queries will be
evaluated on their effectiveness (query agent) and/or their
resemblance to user queries (user simulation). Query suggestion
will be performed iteratively. <br>
- Task 2 - Result Composition: given the results obtained from
the aforementioned queries, produce a single ranked list by merging
the individual rankings.<br>
<i>Lab Coordination:</i> Evangelos Kanoulas (University of
Amsterdam), Leif Azzopardi (University of Strathclyde) <br>
<i>Lab website: </i><a class="moz-txt-link-freetext" href="https://ekanou.github.io/dynamicsearch/">https://ekanou.github.io/dynamicsearch/</a> <br>
<i>Twitter:</i> @clef_dynamic<br>
<br>
<i><b>CheckThat! - Automatic Identification and Verification of
Political Claims</b></i><br>
CheckThat! aims to foster the development of technology capable of
both spotting and verifying check-worthy claims in political
debates in English and Arabic.<br>
- Task 1 - Check-Worthiness: Given a political debate, which
is segmented into sentences with speakers annotated, identify
which statements (claims) should be prioritized for fact-checking.
This will be a ranking problem, and systems will be asked to
produce a score, according to which the ranking will be performed.<br>
- Task 2 - Factuality: Given a list of already-extracted
claims, classify them with factuality labels (e.g., true,
half-true, false). This task will be run in an open mode. We will
not provide any pre-selected set of documents to support the
veracity labels. Participants will be free to use whatever
resources they have and the Web in general, with the exception of
the websites used by the organizers to collect the data.<br>
<i>Lab Coordination:</i> Preslav Nakov, Lluís Màrquez, Alberto
Barrón-Cedeño (Qatar Computing Research Institute), Wajdi
Zaghouani (Carnegie Mellon University Qatar), Tamer Elsayed, Reem
Suwaileh (Qatar University), Pepa Gencheva (Sofia University)<br>
<i>Lab website:</i> <a class="moz-txt-link-freetext" href="http://alt.qcri.org/clef2018-factcheck/">http://alt.qcri.org/clef2018-factcheck/</a> <br>
<i>Twitter:</i> @_checkthat_<br>
<br>
</p>
<pre class="moz-signature" cols="72">--
Laure Soulier
Maître de Conférences
LIP6 - Université Pierre et Marie Curie
4 place Jussieu, 75252 Paris, France
Couloir 26-00 - Bureau 515
tel: (+ 33) 1 44 27 74 91</pre>
</body>
</html>