Author

Simon Maskell

Bio: Simon Maskell is an academic researcher at the University of Liverpool. He has contributed to research on topics including the particle filter and the Kalman filter, has an h-index of 27, and has co-authored 128 publications receiving 14,367 citations. Previous affiliations of Simon Maskell include QinetiQ and the University of Cambridge.


Papers
Proceedings ArticleDOI
Paul R. Horridge, Simon Maskell
10 Jul 2006
TL;DR: This work shows the feasibility of using an exact JPDAF implementation to track 400 targets, and generalizes a previous approach to process the objects in a tree structure, which exploits conditional independence between subsets of the objects.
Abstract: An assignment problem is considered with the constraint that the same hypothesis cannot be applied to more than one object. We desire efficiency without approximation. Multiple target tracking methods such as the Joint Probabilistic Data Association Filter (JPDAF) motivate us. Methods of solving this assignment problem that involve enumerating all possible joint assignments are infeasible except for small problems. A recent approach circumvents this combinatorial explosion by representing the structure of the target hypotheses in a 'net', which exploits redundancy in an ordered list of objects used to describe the problem. Here, we generalize this approach to process the objects in a tree structure; this exploits conditional independence between subsets of the objects. This gives a substantial computational saving and allows us to consider scenarios which were previously impractical. In particular, we show the feasibility of using an exact JPDAF implementation to track 400 targets.
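The combinatorial explosion the paper avoids can be made concrete with a brute-force sketch. This is not the authors' tree-structured method (which is the point of the paper); it is the naive enumeration baseline, computing JPDAF-style association marginals from a hypothetical per-target likelihood table:

```python
from itertools import product

def joint_assignment_probs(L):
    """Brute-force JPDAF marginals. L[t][m] is the likelihood of target t
    being associated with hypothesis m; column 0 is the 'missed detection'
    hypothesis, which any number of targets may share. Real measurements
    (columns 1..M) may each be claimed by at most one target.
    Cost is O(H^T), which is why exact enumeration fails beyond small problems."""
    T, H = len(L), len(L[0])
    marginals = [[0.0] * H for _ in range(T)]
    total = 0.0
    for joint in product(range(H), repeat=T):   # every joint hypothesis
        used = [m for m in joint if m != 0]
        if len(used) != len(set(used)):         # a measurement claimed twice: invalid
            continue
        w = 1.0
        for t, m in enumerate(joint):
            w *= L[t][m]
        total += w
        for t, m in enumerate(joint):
            marginals[t][m] += w
    return [[p / total for p in row] for row in marginals]
```

With two targets and one shared measurement, the invalid double-assignment hypothesis is pruned and the remaining weight is renormalized; for 400 targets this enumeration is hopeless, hence the net and tree representations.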

41 citations

Proceedings ArticleDOI
07 Aug 2002
TL;DR: In this paper, a scheme for efficiently tracking multiple targets using particle filters is described, where the tracking of the individual targets is made efficient through the use of Rao-Blackwellisation.
Abstract: For many dynamic estimation problems involving nonlinear and/or non-Gaussian models, particle filtering offers improved performance at the expense of computational effort. This paper describes a scheme for efficiently tracking multiple targets using particle filters. The tracking of the individual targets is made efficient through the use of Rao-Blackwellisation. The tracking of multiple targets is made practicable using Quasi-Monte Carlo integration. The efficiency of the approach is illustrated on synthetic data.
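For readers unfamiliar with the underlying machinery, a minimal bootstrap particle filter for a scalar random-walk model is sketched below. This is the plain filter the paper improves upon, not the Rao-Blackwellised scheme itself (which would additionally run a Kalman filter inside each particle for the linear-Gaussian part of the state); all model parameters here are illustrative:

```python
import math
import random

def bootstrap_pf(ys, n=500, q=1.0, r=1.0, seed=0):
    """Minimal bootstrap particle filter for the model
    x_k = x_{k-1} + N(0, q), observed as y_k = x_k + N(0, r).
    Returns the posterior-mean estimate of x_k at each step."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # propagate each particle through the dynamics (the proposal)
        parts = [x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight by the Gaussian observation likelihood
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        s = sum(w) or 1.0
        w = [wi / s for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # multinomial resampling to avoid weight degeneracy
        parts = rng.choices(parts, weights=w, k=n)
    return means
```

Rao-Blackwellisation reduces the variance of such estimates by marginalizing any conditionally linear-Gaussian substate analytically, so particles are spent only on the genuinely nonlinear dimensions.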

39 citations

Journal ArticleDOI
TL;DR: The results clearly suggest that broad-ranging statistical signal detection in Twitter and Facebook, using currently available methods for adverse event recognition, performs poorly and cannot be recommended at the expense of other pharmacovigilance activities.
Abstract: Social media has been proposed as a possibly useful data source for pharmacovigilance signal detection. This study primarily aimed to evaluate the performance of established statistical signal detection algorithms in Twitter/Facebook for a broad range of drugs and adverse events. Performance was assessed using a reference set by Harpaz et al., consisting of 62 US Food and Drug Administration labelling changes, and an internal WEB-RADR reference set consisting of 200 validated safety signals. In total, 75 drugs were studied. Twitter/Facebook posts were retrieved for the period March 2012 to March 2015, and drugs/events were extracted from the posts. We retrieved 4.3 million and 2.0 million posts for the WEB-RADR and Harpaz drugs, respectively. Individual case reports were extracted from VigiBase for the same period. Disproportionality algorithms based on the Information Component or the Proportional Reporting Ratio and crude post/report counting were applied in Twitter/Facebook and VigiBase. Receiver operating characteristic curves were generated, and the relative timing of alerting was analysed. Across all algorithms, the area under the receiver operating characteristic curve for Twitter/Facebook varied between 0.47 and 0.53 for the WEB-RADR reference set and between 0.48 and 0.53 for the Harpaz reference set. For VigiBase, the ranges were 0.64–0.69 and 0.55–0.67, respectively. In Twitter/Facebook, at best, 31 (16%) and four (6%) positive controls were detected prior to their index dates in the WEB-RADR and Harpaz references, respectively. In VigiBase, the corresponding numbers were 66 (33%) and 17 (27%). Our results clearly suggest that broad-ranging statistical signal detection in Twitter and Facebook, using currently available methods for adverse event recognition, performs poorly and cannot be recommended at the expense of other pharmacovigilance activities.
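The disproportionality statistics named in the abstract are simple functions of report counts. The sketch below is a simplification (in particular, the Information Component used in practice applies Bayesian shrinkage to the observed count, omitted here), with illustrative argument names:

```python
import math

def prr(a, b, c, d):
    """Proportional Reporting Ratio from a 2x2 contingency table:
    a = reports mentioning both drug and event, b = drug without the event,
    c = event without the drug, d = neither."""
    return (a / (a + b)) / (c / (c + d))

def information_component(a, n_drug, n_event, n_total):
    """IC = log2(observed / expected) co-occurrence, without the
    shrinkage term used in production signal detection systems."""
    expected = n_drug * n_event / n_total
    return math.log2(a / expected)
```

Scoring each drug-event pair this way and sweeping a threshold over the scores yields the receiver operating characteristic curves against which the reference sets were evaluated.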

39 citations

Journal ArticleDOI
TL;DR: Recommendations relating to the use of mobile apps for PV are summarised in this paper and supporting considerations, rationales and caveats are presented as well as suggested areas for further research.
Abstract: Over a period of 3 years, the European Union’s Innovative Medicines Initiative WEB-RADR (Recognising Adverse Drug Reactions; https://web-radr.eu/ ) project explored the value of two digital tools for pharmacovigilance (PV): mobile applications (apps) for reporting the adverse effects of drugs and social media data for its contribution to safety signalling. The ultimate intent of WEB-RADR was to provide policy, technical and ethical recommendations on how to develop and implement such digital tools to enhance patient safety. Recommendations relating to the use of mobile apps for PV are summarised in this paper. There is a presumption amongst at least some patients and healthcare professionals that information ought to be accessed and reported from any setting, including mobile apps. WEB-RADR has focused on the use of such technology for reporting suspected adverse drug reactions and for broadcasting safety information to its users, i.e. two-way risk communication. Three apps were developed and publicly launched within Europe as part of the WEB-RADR project and subsequently assessed by a range of stakeholders to determine their value as effective tools for improving patient safety; a fourth generic app was later piloted in two African countries. The recommendations from the development and evaluation of the European apps are presented here with supporting considerations, rationales and caveats as well as suggested areas for further research.

38 citations

Proceedings ArticleDOI
10 Jul 2006
TL;DR: There is a great deal of flexibility built into the continuous transferable belief model and in the comparison with a Bayesian classifier, it is shown that the novel approach offers a more robust classification output that is less influenced by noise.
Abstract: This paper describes the integration of a particle filter and a continuous version of the transferable belief model. The output from the particle filter is used as input to the transferable belief model. The transferable belief model's continuous nature allows for the prior knowledge over the classification space to be incorporated within the system. Classification of objects is demonstrated within the paper and compared to the more classical Bayesian classification routine. This is the first time that such an approach has been taken to jointly classify and track targets. We show that there is a great deal of flexibility built into the continuous transferable belief model and in our comparison with a Bayesian classifier, we show that our novel approach offers a more robust classification output that is less influenced by noise.
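The "classical Bayesian classification routine" used as the baseline reduces to normalizing likelihood times prior over the candidate classes; a minimal sketch (the transferable belief model itself, which assigns mass to sets of classes rather than single classes, is not reproduced here):

```python
def bayes_classify(likelihoods, priors):
    """Posterior over classes: p(c | z) proportional to p(z | c) * p(c).
    likelihoods[i] is the likelihood of the observation under class i,
    priors[i] the prior probability of class i."""
    post = [l * p for l, p in zip(likelihoods, priors)]
    s = sum(post)
    return [x / s for x in post]
```

In the joint tracking-and-classification setting, the likelihoods would come from the particle filter's state estimate; the claimed robustness advantage of the belief-model approach is in how it handles conflict and ignorance rather than in this normalization step.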

36 citations


Cited by
Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, into planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations

Journal ArticleDOI
TL;DR: A new approach toward target representation and localization, the central component in visual tracking of nonrigid objects, is proposed, which employs a metric derived from the Bhattacharyya coefficient as similarity measure, and uses the mean shift procedure to perform the optimization.
Abstract: A new approach toward target representation and localization, the central component in visual tracking of nonrigid objects, is proposed. The feature histogram-based target representations are regularized by spatial masking with an isotropic kernel. The masking induces spatially-smooth similarity functions suitable for gradient-based optimization, hence, the target localization problem can be formulated using the basin of attraction of the local maxima. We employ a metric derived from the Bhattacharyya coefficient as similarity measure, and use the mean shift procedure to perform the optimization. In the presented tracking examples, the new method successfully coped with camera motion, partial occlusions, clutter, and target scale variations. Integration with motion filters and data association techniques is also discussed. We describe only a few of the potential applications: exploitation of background information, Kalman tracking using motion models, and face tracking.
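The similarity measure at the core of the method is compact enough to state directly; a sketch assuming plain normalized histograms (the paper additionally weights histogram contributions by an isotropic spatial kernel before comparison):

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, approaching 0.0 for disjoint support.
    Mean shift tracking seeks the candidate location whose histogram
    maximizes this coefficient against the target model."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
```

Because the kernel-weighted coefficient is smooth in the candidate location, its local maximum can be found by the mean shift iterations rather than by exhaustive search.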

4,996 citations

Posted Content
TL;DR: In this paper, the authors provide a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter for modeling economic and social time series, and address the special problems which the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
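The Kalman filter's role in this framework is easiest to see on the simplest structural form, the local level model; a minimal sketch with illustrative parameter names:

```python
def local_level_kalman(ys, q, r, m0=0.0, p0=1e6):
    """Kalman filter for the local level structural model:
    level_k = level_{k-1} + N(0, q), observed as y_k = level_k + N(0, r).
    A large p0 acts as a diffuse prior on the initial level.
    Returns the filtered level estimates."""
    m, p = m0, p0
    out = []
    for y in ys:
        p = p + q                 # predict: level variance grows by q
        k = p / (p + r)           # Kalman gain
        m = m + k * (y - m)       # update with the innovation y - m
        p = (1 - k) * p
        out.append(m)
    return out
```

Richer structural models (trend, seasonal, explanatory variables) extend the state vector but are filtered by exactly the same predict-update recursion in matrix form.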

4,252 citations

Proceedings ArticleDOI
26 Apr 2010
TL;DR: This paper investigates the real-time interaction of events such as earthquakes in Twitter and proposes an algorithm to monitor tweets and to detect a target event and produces a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location.
Abstract: Twitter, a popular microblogging service, has received much attention recently. An important characteristic of Twitter is its real-time nature. For example, when an earthquake occurs, people make many Twitter posts (tweets) related to the earthquake, which enables detection of earthquake occurrence promptly, simply by observing the tweets. As described in this paper, we investigate the real-time interaction of events such as earthquakes in Twitter and propose an algorithm to monitor tweets and to detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location. We consider each Twitter user as a sensor and apply Kalman filtering and particle filtering, which are widely used for location estimation in ubiquitous/pervasive computing. The particle filter works better than other comparable methods for estimating the centers of earthquakes and the trajectories of typhoons. As an application, we construct an earthquake reporting system in Japan. Because of the numerous earthquakes and the large number of Twitter users throughout the country, we can detect an earthquake with high probability (96% of earthquakes of Japan Meteorological Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our system detects earthquakes promptly and sends e-mails to registered users. Notification is delivered much faster than the announcements that are broadcast by the JMA.
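The users-as-sensors idea admits a one-line sketch: assuming, as a simplification, that when no event has occurred every positive tweet is an independent false alarm with a fixed rate (the parameter name here is illustrative), the event probability grows quickly with the number of detections:

```python
def event_probability(n_positive, false_positive_rate):
    """Probability that an event occurred given n_positive independent
    sensor (user) detections, if each detection is a false alarm with
    probability false_positive_rate when no event has occurred."""
    return 1.0 - false_positive_rate ** n_positive
```

Even with quite unreliable individual "sensors", a burst of a few dozen positive tweets drives this probability close to one, which is what makes detection from tweet volume alone viable.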

3,976 citations