Gianni Barlacchi
Researcher at University of Trento
Publications - 32
Citations - 875
Gianni Barlacchi is an academic researcher from the University of Trento. He has contributed to research on topics including computer science and deep learning. He has an h-index of 10 and has co-authored 26 publications receiving 572 citations. Previous affiliations of Gianni Barlacchi include the Kessler Foundation and Telecom Italia.
Papers
Posted Content
Modeling Taxi Drivers' Behaviour for the Next Destination Prediction
TL;DR: A Recurrent Neural Network (RNN) approach is presented that models the taxi drivers’ behavior and encodes the semantics of visited locations by using geographical information from Location-Based Social Networks (LBSNs).
Posted Content
scikit-mobility: An open-source Python library for human mobility analysis and simulation
TL;DR: scikit-mobility is a Python library for human mobility analysis and simulation that allows the user to: (1) analyze mobility data using the main measures characterizing human mobility patterns (e.g., radius of gyration, daily motifs, mobility entropy); and (2) simulate individual and collective mobility by executing the most common human mobility models.
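As an illustration of one of the measures listed above, the sketch below computes the radius of gyration (the root-mean-square distance of an individual's visited locations from their center of mass). This is a minimal flat-plane approximation for intuition only, not scikit-mobility's actual API, which operates on trajectory dataframes with latitude/longitude coordinates.

```python
import math

def radius_of_gyration(points):
    """Radius of gyration of a set of 2D visit locations:
    RMS distance from the centroid of the points."""
    n = len(points)
    cx = sum(x for x, _ in points) / n  # centroid x
    cy = sum(y for _, y in points) / n  # centroid y
    return math.sqrt(
        sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n
    )

# Example: four visits at the corners of a 2x2 square;
# every point is sqrt(2) away from the centroid (1, 1)
visits = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
print(radius_of_gyration(visits))  # → 1.4142135623730951
```

A small radius of gyration indicates a person whose visits cluster near home, while a large one indicates wide-ranging travel.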
Book Chapter
Learning to Rank Aggregated Answers for Crossword Puzzles
TL;DR: This paper defines a new, effective approach: querying the database with a search engine for clues similar to the target one, then reranking the obtained clue list using state-of-the-art methods.
Proceedings Article
LiMoSiNe pipeline: Multilingual UIMA-based NLP platform
Olga Uryupina, Barbara Plank, Gianni Barlacchi, Francisco Valverge-Albacete, Manos Tsagkias, Alessandro Moschitti +7 more
TL;DR: A robust, efficient, and parallelizable multilingual UIMA-based platform for automatically annotating textual inputs with different layers of linguistic description, ranging from surface-level phenomena down to deep discourse-level information.
Journal Article
From Rewriting to Remembering: Common Ground for Conversational QA Models
TL;DR: It is shown that common ground (CG) offers a more efficient and human-like way to exploit conversational information than existing approaches, leading to improvements on open-domain conversational QA.