Author

Sophia Bano

Bio: Sophia Bano is an academic researcher from University College London. The author has contributed to research in topics: Computer science & Medicine. The author has an h-index of 7 and has co-authored 33 publications receiving 188 citations. Previous affiliations of Sophia Bano include University of Dundee & Queen Mary University of London.

Papers published on a yearly basis

Papers
Posted Content
TL;DR: Building on the 2017 robotic instrument segmentation dataset, the 2018 challenge added a set of anatomical objects and medical devices to the segmented classes, while continuing with porcine data, which is dramatically simpler than human tissue due to the lack of fatty tissue occluding many organs.
Abstract: In 2015 we began a sub-challenge at the EndoVis workshop at MICCAI in Munich using endoscope images of ex-vivo tissue with automatically generated annotations from robot forward kinematics and instrument CAD models. However, the limited background variation and simple motion rendered the dataset uninformative in learning about which techniques would be suitable for segmentation in real surgery. In 2017, at the same workshop in Quebec we introduced the robotic instrument segmentation dataset with 10 teams participating in the challenge to perform binary, articulating parts and type segmentation of da Vinci instruments. This challenge included realistic instrument motion and more complex porcine tissue as background and was widely addressed with modifications on U-Nets and other popular CNN architectures. In 2018 we added to the complexity by introducing a set of anatomical objects and medical devices to the segmented classes. To avoid over-complicating the challenge, we continued with porcine data which is dramatically simpler than human tissue due to the lack of fatty tissue occluding many organs.
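The abstract notes that most challenge entries were modifications of U-Nets and other popular CNN architectures. As a rough illustration of that family of approaches (not any specific entry), here is a minimal sketch of binary instrument-vs-background segmentation, assuming PyTorch; the layer sizes and training details are assumptions:

```python
# Minimal U-Net-style binary segmentation sketch (PyTorch).
# Illustrative only: layer sizes and loss are assumed, not taken
# from any participating team's architecture.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # one logit per pixel: instrument vs background

    def forward(self, x):
        e1 = self.enc1(x)                  # full-resolution features
        e2 = self.enc2(self.pool(e1))      # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

model = TinyUNet()
frames = torch.randn(2, 3, 256, 256)       # dummy endoscope frames
masks = torch.randint(0, 2, (2, 1, 256, 256)).float()
loss = nn.BCEWithLogitsLoss()(model(frames), masks)
loss.backward()
```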

61 citations

Journal ArticleDOI
TL;DR: The Endoscopy Computer Vision Challenge (EndoCV) as discussed by the authors is a crowd-sourcing initiative to address eminent problems in developing reliable computer aided detection and diagnosis endoscopy systems and suggest a pathway for clinical translation of technologies.

57 citations

Journal ArticleDOI
TL;DR: It is shown that the shin and waist are the best on-body sensor placements, a finding that could help other researchers collect higher-quality activity data, and an effective classifier is developed that can accurately predict the performed human activity.
Abstract: Automatic recognition of human activities using wearable sensors remains a challenging problem due to high variability in inter-person gait and movements. Moreover, finding the best on-body location for a wearable sensor is also critical, as it provides valuable context information that can be used for accurate recognition. This article addresses the problem of classifying motion signals generated by multiple wearable sensors for the recognition of human activity and localisation of the wearable sensors. Unlike existing methods that use the raw accelerometer and gyroscope signals for extracting time- and frequency-based features for activity inference, we propose to create frequency images from the raw signals and show this representation to be more robust. The frequency image sequences are generated from the accelerometer and gyroscope signals from seven different body parts. These frequency images serve as the input to our proposed two-stream Convolutional Neural Networks (CNN) for predicting the human activity and the location of the sensor generating the activity signal. We show that the complementary information collected by both accelerometer and gyroscope sensors can be leveraged to develop an effective classifier that can accurately predict the performed human activity. We evaluate the performance of the proposed method using the cross-subjects approach and show that it achieves an impressive F1-score of 0.90 on a publicly available real-world human activity dataset. This performance is superior to that reported by another state-of-the-art method on the same dataset. Moreover, we also experimented with the datasets from different body locations to predict the best position for the underlying task. We show that the shin and waist are the best places on the body for placing sensors, and this could help other researchers to collect higher-quality activity data. We plan to publicly release the generated frequency images from all sensor positions and activities, together with our implementation code, upon publication.
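A minimal sketch of the two-stream idea described above, assuming PyTorch and illustrative layer sizes: one CNN branch per sensor's frequency images, fused before the activity classifier. Late fusion by feature concatenation is just one option; the paper's exact architecture and fusion strategy may differ.

```python
# Two-stream CNN sketch over accelerometer and gyroscope "frequency
# images". Input shapes, layer sizes and late fusion are assumptions
# for illustration, not the paper's exact design.
import torch
import torch.nn as nn

class Stream(nn.Module):
    # One CNN branch per sensor modality.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, 32 * 4 * 4)

class TwoStreamHAR(nn.Module):
    def __init__(self, n_activities=6):  # assumed number of activity classes
        super().__init__()
        self.acc_stream = Stream()
        self.gyro_stream = Stream()
        self.classifier = nn.Linear(2 * 32 * 4 * 4, n_activities)

    def forward(self, acc_img, gyro_img):
        # Late fusion: concatenate per-stream features, then classify.
        fused = torch.cat([self.acc_stream(acc_img), self.gyro_stream(gyro_img)], dim=1)
        return self.classifier(fused)

model = TwoStreamHAR()
acc = torch.randn(8, 1, 64, 64)   # dummy accelerometer frequency images
gyro = torch.randn(8, 1, 64, 64)  # dummy gyroscope frequency images
logits = model(acc, gyro)         # (8, 6) activity scores
```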

34 citations

Journal ArticleDOI
06 May 2020
TL;DR: This study presents a framework for developing automated EGD reports using deep learning, demonstrating that the method can feasibly address EGD image classification and lead towards improved performance, with qualitative results on the authors' dataset.
Abstract: Upper gastrointestinal (GI) endoscopic image documentation has provided an efficient, low-cost solution to address quality control for endoscopic reporting. The problem is, however, challenging for computer-assisted techniques, because different sites have similar appearances. Additionally, across different patients, site appearance variation may be large and inconsistent. Therefore, following the British and modified Japanese guidelines, we propose a set of oesophagogastroduodenoscopy (EGD) images to be routinely captured and evaluate its efficiency for deep learning-based classification methods. A novel EGD image dataset standardising upper GI endoscopy into several steps is established following the landmarks proposed in the guidelines and annotated by an expert clinician. To demonstrate that the proposed landmarks are discriminable enough to enable the generation of an automated endoscopic report, we train several deep learning-based classification models on the well-annotated images. We report results for a clinical dataset composed of 211 patients (comprising a total of 3704 EGD images) acquired during routine upper GI endoscopic examinations. We find close agreement between the labels predicted by our method and the ground truth labelled by human experts, and we observe the limitations of the current static image classification scheme for EGD images. Our study presents a framework for developing automated EGD reports using deep learning. We demonstrate that our method can feasibly address EGD image classification and lead towards improved performance, and we additionally demonstrate its performance qualitatively on our dataset.
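As a hedged illustration of the classification setup described above (not the paper's exact models), fine-tuning a pretrained backbone on landmark-labelled EGD frames might look like the following; the backbone choice, class count and data are assumptions:

```python
# Sketch: fine-tuning a pretrained CNN to classify EGD frames into
# anatomical landmark categories. Backbone, class count and data
# layout are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import models

NUM_LANDMARKS = 10  # hypothetical number of guideline landmark classes

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_LANDMARKS)  # replace the head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)           # dummy EGD frames
labels = torch.randint(0, NUM_LANDMARKS, (4,)) # dummy landmark labels

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```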

26 citations

Proceedings ArticleDOI
05 Jun 2019
TL;DR: The authors' method achieves recognition accuracy (F1 score) of 0.87 on a publicly available real-world human activity dataset, superior to that reported by another state-of-the-art method on the same dataset.
Abstract: This paper addresses the problem of classifying motion signals acquired via wearable sensors for the recognition of human activity. Automatic and accurate classification of motion signals is important in facilitating the development of an effective automated health monitoring system for the elderly. Thus, we gathered hip motion signals from two different waist-mounted sensors and, for each individual sensor, converted the motion signal into a spectral image sequence. We use these images as inputs to independently train two Convolutional Neural Networks (CNN), one for each of the generated image sequences from the two sensors. The outputs of the trained CNNs are then fused together to predict the final class of the human activity. We evaluate the performance of the proposed method using the cross-subjects testing approach. Our method achieves a recognition accuracy (F1 score) of 0.87 on a publicly available real-world human activity dataset. This performance is superior to that reported by another state-of-the-art method on the same dataset.
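A small sketch of the signal-to-spectral-image step that both activity-recognition papers rely on, using SciPy's short-time Fourier spectrogram; the sampling rate, window length and log compression are assumed values, not the papers' exact settings:

```python
# Turning a raw motion signal into a spectral image for a CNN.
# Sampling rate and window settings are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 50                       # assumed sensor sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)  # 10 s of signal
# Dummy periodic "gait" signal plus noise standing in for sensor data.
signal = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)

# Short-time Fourier magnitudes: rows = frequency bins, cols = time frames.
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=64, noverlap=32)
spectral_image = np.log1p(Sxx)  # log-compress for a CNN-friendly dynamic range
print(spectral_image.shape)     # e.g. (33, n_frames)
```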

24 citations


Cited by
01 Jan 2006

3,012 citations

Journal ArticleDOI
18 Oct 2017

243 citations

Book Chapter
01 Dec 2001
TL;DR: In this chapter, a summary is presented of the issues discussed during the one-day workshop on SVM Theory and Applications organized as part of the Advanced Course on Artificial Intelligence (ACAI '99) in Chania, Greece.
Abstract: This chapter presents a summary of the issues discussed during the one day workshop on “Support Vector Machines (SVM) Theory and Applications” organized as part of the Advanced Course on Artificial Intelligence (ACAI ’99) in Chania, Greece [19]. The goal of the chapter is twofold: to present an overview of the background theory and current understanding of SVM, and to discuss the papers presented as well as the issues that arose during the workshop.
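To make the overview concrete, here is a minimal SVM usage example with scikit-learn; the dataset and hyperparameters are illustrative defaults, unrelated to the workshop itself:

```python
# Minimal SVM classification example (scikit-learn); purely illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = SVC(kernel="rbf", C=1.0)  # RBF-kernel SVM, a common default choice
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```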

170 citations

Journal ArticleDOI
TL;DR: In this article, the authors focus on the critical role of machine learning in developing HAR applications based on inertial sensors in conjunction with physiological and environmental sensors; HAR is considered one of the most promising assistive technology tools to support the elderly's daily life by monitoring their cognitive and physical function through daily activities.
Abstract: In the last decade, Human Activity Recognition (HAR) has become a vibrant research area, especially due to the spread of electronic devices such as smartphones, smartwatches and video cameras present in our daily lives. In addition, the advance of deep learning and other machine learning algorithms has allowed researchers to use HAR in various domains including sports, health and well-being applications. For example, HAR is considered one of the most promising assistive technology tools to support the elderly's daily life by monitoring their cognitive and physical function through daily activities. This survey focuses on the critical role of machine learning in developing HAR applications based on inertial sensors in conjunction with physiological and environmental sensors.

168 citations