Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims
Citations
SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles
Overview of the CLEF-2019 CheckThat! Lab: Automatic Identification and Verification of Claims
Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media
CheckThat! at CLEF 2019: Automatic Identification and Verification of Claims
References
Information credibility on Twitter
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Detecting rumors from microblogs with recurrent neural networks
SemEval-2016 Task 4: Sentiment Analysis in Twitter
Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking
Frequently Asked Questions
Q2. What have the authors stated as future work in "Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims"?
The corpora and evaluation metrics the authors have released as a result of this lab should enable further research in check-worthiness estimation and in automatic claim verification.
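For concreteness, below is a minimal sketch of mean average precision (MAP), the ranking measure used to evaluate check-worthiness estimation in the lab. This is an illustrative reimplementation, not the lab's released scorer, and the toy relevance labels are assumptions.

```python
# Minimal MAP sketch (assumption: illustrative, not the official scorer).
# Each query is one debate/speech; labels are 1 for check-worthy sentences,
# 0 otherwise, listed in the system's ranked order.

def average_precision(ranked_labels):
    """Average precision over one ranked list of 0/1 labels."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            score += hits / rank
    return score / hits if hits else 0.0

def mean_average_precision(queries):
    """Mean of the per-query average precisions."""
    return sum(average_precision(q) for q in queries) / len(queries)

# Example with two debates: AP = 0.8333 and 0.5, so MAP = 0.6667.
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 0, 0]]))
```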
Q3. How many claims were created by modifying a selection of sentences from Wikipedia?
FEVER consists of 185,445 claims created by modifying a selection of sentences from Wikipedia, which were later verified without knowledge of the sentence they were derived from.
Q4. How many sentences were used in the task?
They used presidential debates (3 presidential and 1 vice-presidential) and primary debates (7 Republican and 8 Democratic), for a total of 21,700 sentences.
Q5. What was the purpose of the task?
The purpose was to produce a dataset closer to the speeches included in the test set. [10] used a text distortion model [11] to try to remove irrelevant content.
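To illustrate the general idea of text distortion, here is a hedged sketch of one common variant in which every word outside the k most frequent ones is masked, suppressing rare, task-irrelevant vocabulary. The tokenizer, the value of k, and the '*' masking symbol are assumptions, not the configuration used by [10].

```python
# Hedged sketch of a frequency-based text-distortion filter.
# Assumptions: whitespace tokenization, lowercased counts, '*' masks.
from collections import Counter

def build_vocab(corpus, k=5000):
    """Keep the k most frequent tokens across the corpus."""
    counts = Counter(tok for doc in corpus for tok in doc.lower().split())
    return {word for word, _ in counts.most_common(k)}

def distort(text, vocab):
    """Mask every token outside the frequent-word vocabulary."""
    return " ".join(tok if tok.lower() in vocab else "*" * len(tok)
                    for tok in text.split())

docs = ["The president said taxes will fall", "Taxes rose last year"]
vocab = build_vocab(docs, k=4)
print(distort("The president said taxes rose sharply", vocab))
```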
Q6. What was the first work to target check-worthiness?
The first work to target check-worthiness was the ClaimBuster system. It was trained on data that was manually annotated by students, professors, and journalists, where each sentence was annotated as non-factual, unimportant factual, or check-worthy factual.
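A minimal sketch of a classifier in that spirit appears below, assuming TF-IDF features and a linear SVM. The toy sentences, labels, and hyperparameters are illustrative assumptions, not ClaimBuster's actual feature set or training data.

```python
# Hedged sketch of a three-way check-worthiness classifier
# (assumption: TF-IDF unigrams/bigrams + linear SVM; toy data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = [
    "Thank you all for coming tonight.",                     # non-factual
    "I visited three states last week.",                     # unimportant factual
    "Unemployment has doubled under this administration.",   # check-worthy factual
]
labels = ["non-factual", "unimportant-factual", "check-worthy-factual"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(sentences, labels)
print(model.predict(["Crime fell by 40 percent since 2010."]))
```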