Author

Javier Pérez de Frutos

Bio: Javier Pérez de Frutos is an academic researcher from SINTEF. The author has contributed to research on topics including deep learning and fiducial markers. The author has an h-index of 3 and has co-authored 6 publications receiving 57 citations. Previous affiliations of Javier Pérez de Frutos include the Norwegian University of Science and Technology.

Papers
Journal ArticleDOI
TL;DR: This article presents a review on trends in modular reconfigurable robots, comparing the evolution of the features of the most significant robots over the years and focusing on the latest designs.
Abstract: This article presents a review on trends in modular reconfigurable robots, comparing the evolution of the features of the most significant robots over the years and focusing on the latest designs. These features are reconfiguration, docking, degrees of freedom, locomotion, control, communications, size, and powering. For each feature, some of the most relevant designs are presented and the current trends in the design are discussed.

79 citations

Journal ArticleDOI
TL;DR: AR showed an increase in both TRE and FRE throughout the experimental studies, proving that AR is not robust to the sampling accuracy of the targets used to compute image-to-patient registration.
Abstract: Purpose This study aims to evaluate the accuracy of point-based registration (PBR) when used for augmented reality (AR) in laparoscopic liver resection surgery. Material and methods The study was conducted in three different scenarios in which the accuracy of sampling targets for PBR decreases: using an assessment phantom with machined divot holes, a patient-specific liver phantom with markers visible in computed tomography (CT) scans, and in vivo, relying on the surgeon's anatomical understanding to perform annotations. Target registration error (TRE) and fiducial registration error (FRE) were computed using five randomly selected positions for image-to-patient registration. Results AR with intra-operative CT scanning showed a mean TRE of 6.9 mm for the machined phantom, 7.9 mm for the patient-specific phantom and 13.4 mm in the in vivo study. Conclusions AR showed an increase in both TRE and FRE throughout the experimental studies, showing that AR is not robust to the sampling accuracy of the targets used to compute image-to-patient registration. Moreover, an influence of the size of the volume to be registered was observed. Hence, it is advisable to reduce both the errors due to annotations and the size of the registration volumes, which can cause large errors in AR systems.
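The abstract distinguishes fiducial registration error (FRE, the residual on the points used for the fit) from target registration error (TRE, the error at independent targets). A minimal sketch of how these can be computed with a least-squares rigid registration (Kabsch algorithm) is shown below; all function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def rms_error(R, t, pts, refs):
    """Root-mean-square distance after applying (R, t) to pts."""
    mapped = pts @ R.T + t
    return float(np.sqrt(np.mean(np.sum((mapped - refs) ** 2, axis=1))))

# Synthetic example: fiducials drive the registration; FRE is the residual on
# those same points, TRE is measured at targets not used for the fit.
rng = np.random.default_rng(0)
fiducials_img = rng.random((5, 3)) * 100.0
targets_img = rng.random((3, 3)) * 100.0
t_true = np.array([10.0, -5.0, 2.0])          # ground-truth translation
fiducials_pat = fiducials_img + t_true
targets_pat = targets_img + t_true

R, t = rigid_register(fiducials_img, fiducials_pat)
fre = rms_error(R, t, fiducials_img, fiducials_pat)
tre = rms_error(R, t, targets_img, targets_pat)
```

With noise-free points both errors are near zero; adding sampling noise to the fiducials makes FRE and TRE grow, which is the effect the study measures across its three scenarios.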

11 citations

Journal ArticleDOI
TL;DR: FastPathology as mentioned in this paper is a C++-based platform for reading and processing whole-slide microscopy images (WSIs) in a single application, including inference of CNNs with real-time display of the results.
Abstract: Deep convolutional neural networks (CNNs) are the current state-of-the-art for digital analysis of histopathological images. The large size of whole-slide microscopy images (WSIs) requires advanced memory handling to read, display and process these images. There are several open-source platforms for working with WSIs, but few support deployment of CNN models. These applications use third-party solutions for inference, making them less user-friendly and unsuitable for high-performance image analysis. To make deployment of CNNs user-friendly and feasible on low-end machines, we have developed a new platform, FastPathology, using the FAST framework and C++. It minimizes memory usage for reading and processing WSIs, deployment of CNN models, and real-time interactive visualization of results. Runtime experiments were conducted on four different use cases, using different architectures, inference engines, hardware configurations and operating systems. Memory usage for reading, visualizing, zooming and panning a WSI were measured, using FastPathology and three existing platforms. FastPathology performed similarly in terms of memory to the other C++-based application, while using considerably less than the two Java-based platforms. The choice of neural network model, inference engine, hardware and processors influenced runtime considerably. Thus, FastPathology includes all steps needed for efficient visualization and processing of WSIs in a single application, including inference of CNNs with real-time display of the results. Source code, binary releases, video demonstrations and test data can be found online on GitHub at https://github.com/SINTEFMedtek/FAST-Pathology/ .

10 citations

Posted Content
TL;DR: This work has developed a new platform, FastPathology, which minimizes memory usage for reading and processing WSIs, deployment of CNN models, and real-time interactive visualization of results, and includes all steps needed for efficient visualization and processing of WSIs in a single application.
Abstract: Deep convolutional neural networks (CNNs) are the current state-of-the-art for digital analysis of histopathological images. The large size of whole-slide microscopy images (WSIs) requires advanced memory handling to read, display and process these images. There are several open-source platforms for working with WSIs, but few support deployment of CNN models. These applications use third-party solutions for inference, making them less user-friendly and unsuitable for high-performance image analysis. To make deployment of CNNs user-friendly and feasible on low-end machines, we have developed a new platform, FastPathology, using the FAST framework and C++. It minimizes memory usage for reading and processing WSIs, deployment of CNN models, and real-time interactive visualization of results. Runtime experiments were conducted on four different use cases, using different architectures, inference engines, hardware configurations and operating systems. Memory usage for reading, visualizing, zooming and panning a WSI were measured, using FastPathology and three existing platforms. FastPathology performed similarly in terms of memory to the other C++ based application, while using considerably less than the two Java-based platforms. The choice of neural network model, inference engine, hardware and processors influenced runtime considerably. Thus, FastPathology includes all steps needed for efficient visualization and processing of WSIs in a single application, including inference of CNNs with real-time display of the results. Source code, binary releases and test data can be found online on GitHub at this https URL.

7 citations

Journal ArticleDOI
TL;DR: The proposed Single Landmark registration method allows the clinician to accurately register lesions intraoperatively by clicking on these in the ultrasound image provided by the ultrasound transducer, suitable for being integrated in a laparoscopic workflow.
Abstract: Test the feasibility of the novel Single Landmark image-to-patient registration method for use in the operating room for future clinical trials. The algorithm is implemented in the open-source platform CustusX, a computer-aided intervention research platform dedicated to intraoperative navigation and ultrasound, with an interface for laparoscopic ultrasound probes. The Single Landmark method is compared to fiducial landmark registration (FLRM) on an IOUSFAN (Kyoto Kagaku Co., Ltd., Japan) soft tissue abdominal phantom and T2 magnetic resonance scans of it. The experiments show that the error of the Single Landmark registration is small close to the registered point and increases with distance from it (12.4 mm error at 60 mm from the registered point). At the registered point, the registration accuracy is mainly dominated by the accuracy of the user when clicking on the ultrasound image. In the presented set-up, the time required to perform the Single Landmark registration is 40% less than for the FLRM. The Single Landmark registration is suitable for integration into a laparoscopic workflow. The statistical analysis shows robustness against translational displacements of the patient and improvements in terms of time. The proposed method allows the clinician to accurately register lesions intraoperatively by clicking on them in the image provided by the ultrasound transducer. The Single Landmark registration method can be further combined with other, more accurate registration approaches, improving the registration at relevant points defined by the clinicians.
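A single clicked landmark can only determine a translation, which is consistent with the abstract's observation that error grows with distance from the registered point (a single point cannot correct rotation). The sketch below illustrates this translation-only registration; the names are illustrative assumptions and not taken from CustusX.

```python
import numpy as np

def single_landmark_transform(point_image, point_patient):
    """4x4 homogeneous translation mapping the image-space landmark onto
    the corresponding patient-space landmark."""
    T = np.eye(4)
    T[:3, 3] = point_patient - point_image
    return T

def apply_transform(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (homog @ T.T)[:, :3]

landmark_img = np.array([12.0, 30.0, -4.0])   # point clicked in the ultrasound image
landmark_pat = np.array([15.0, 28.0, 1.0])    # same anatomical point in patient space
T = single_landmark_transform(landmark_img, landmark_pat)

# The clicked landmark itself maps exactly; points farther away inherit any
# uncorrected rotational misalignment, so accuracy degrades with distance.
mapped = apply_transform(T, landmark_img[None, :])[0]
```

This also makes the reported trade-off concrete: the method is fast (one click) and exact at the clicked point, at the cost of growing error away from it.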

6 citations


Cited by
Journal ArticleDOI
TL;DR: The world of mobile robots is explored including the new trends led by artificial intelligence, autonomous driving, network communication, cooperative work, nanorobotics, friendly human–robot interfaces, safe human-robot interaction, and emotion expression and perception.
Abstract: Humanoid robots, unmanned rovers, entertainment pets, drones, and so on are great examples of mobile robots. They can be distinguished from other robots by their ability to move autonomously, with ...

287 citations

Posted Content
TL;DR: A generalized deep learning-based framework for histopathology tissue analysis is proposed that has state-of-the-art performance across all these tasks and is ranked within the top 5 currently for the challenges based on these datasets.
Abstract: Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Given the large size of these images and the increase in the number of potential cancer cases, an automated solution as an aid to histopathologists is highly desirable. In the recent past, deep learning-based techniques have provided state of the art results in a wide variety of image analysis tasks, including analysis of digitized slides. However, the size of images and variability in histopathology tasks makes it a challenge to develop an integrated framework for histopathology image analysis. We propose a deep learning-based framework for histopathology tissue analysis. We demonstrate the generalizability of our framework, including training and inference, on several open-source datasets, which include CAMELYON (breast cancer metastases), DigestPath (colon cancer), and PAIP (liver cancer) datasets. We discuss multiple types of uncertainties pertaining to data and model, namely aleatoric and epistemic, respectively. Simultaneously, we demonstrate our model generalization across different data distribution by evaluating some samples on TCGA data. On CAMELYON16 test data (n=139) for the task of lesion detection, the FROC score achieved was 0.86 and in the CAMELYON17 test-data (n=500) for the task of pN-staging the Cohen's kappa score achieved was 0.9090 (third in the open leaderboard). On DigestPath test data (n=212) for the task of tumor segmentation, a Dice score of 0.782 was achieved (fourth in the challenge). On PAIP test data (n=40) for the task of viable tumor segmentation, a Jaccard Index of 0.75 (third in the challenge) was achieved, and for viable tumor burden, a score of 0.633 was achieved (second in the challenge). Our entire framework and related documentation are freely available at GitHub and PyPi.

54 citations

Journal ArticleDOI
TL;DR: In this paper, a generalized deep learning-based framework for histopathology tissue analysis is proposed, which is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and generalizability of the analysis.
Abstract: Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Whole-slide imaging (WSI), i.e., the scanning and digitization of entire histology slides, are now being adopted across the world in pathology labs. Trained histopathologists can provide an accurate diagnosis of biopsy specimens based on WSI data. Given the dimensionality of WSIs and the increase in the number of potential cancer cases, analyzing these images is a time-consuming process. Automated segmentation of tumorous tissue helps in elevating the precision, speed, and reproducibility of research. In the recent past, deep learning-based techniques have provided state-of-the-art results in a wide variety of image analysis tasks, including the analysis of digitized slides. However, deep learning-based solutions pose many technical challenges, including the large size of WSI data, heterogeneity in images, and complexity of features. In this study, we propose a generalized deep learning-based framework for histopathology tissue analysis to address these challenges. Our framework is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and the generalizability of the analysis. The combination of techniques we have introduced includes an ensemble segmentation model, division of the WSI into smaller overlapping patches while addressing class imbalances, efficient techniques for inference, and an efficient, patch-based uncertainty estimation framework. Our ensemble consists of DenseNet-121, Inception-ResNet-V2, and DeeplabV3Plus, where all the networks were trained end to end for every task. We demonstrate the efficacy and improved generalizability of our framework by evaluating it on a variety of histopathology tasks including breast cancer metastases (CAMELYON), colon cancer (DigestPath), and liver cancer (PAIP). 
Our proposed framework has state-of-the-art performance across all these tasks and is ranked within the top 5 currently for the challenges based on these datasets. The entire framework along with the trained models and the related documentation are made freely available at GitHub and PyPi. Our framework is expected to aid histopathologists in accurate and efficient initial diagnosis. Moreover, the estimated uncertainty maps will help clinicians to make informed decisions and further treatment planning or analysis.
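One of the pipeline steps the abstract lists is dividing the WSI into smaller overlapping patches. A minimal sketch of generating overlapping patch coordinates while still covering the slide's right and bottom borders is shown below; the patch and stride sizes are illustrative, not the paper's settings.

```python
def tile_coords(height, width, patch, stride):
    """Top-left (y, x) coordinates of overlapping patches covering an image."""
    ys = list(range(0, max(height - patch, 0) + 1, stride))
    xs = list(range(0, max(width - patch, 0) + 1, stride))
    # Make sure the bottom and right borders are fully covered.
    if ys[-1] != height - patch:
        ys.append(height - patch)
    if xs[-1] != width - patch:
        xs.append(width - patch)
    return [(y, x) for y in ys for x in xs]

# 256-px patches with a 192-px stride give 64 px of overlap between neighbors.
coords = tile_coords(1024, 1024, patch=256, stride=192)
```

Each coordinate pair can then be used to read one patch for model inference, and per-patch predictions (or uncertainty estimates) are stitched back into a slide-level map.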

49 citations

Journal ArticleDOI
01 Jun 2021
TL;DR: Multi-agent systems are not currently ready for deployment in search and rescue applications; however, progress is being made in a number of critical domains.
Abstract: The goal of this review is to evaluate the current status of multi-robot systems in the context of search and rescue. This includes an investigation of their current use in the field, what major technical challenge areas currently preclude more widespread use, and which key topics will drive future development and adoption. Work blending machine learning with classical control techniques is driving progress in perception-driven autonomy, decentralized multi-robot coordination, and human–robot interaction, among others. Ad hoc mesh networking has achieved reliability suitable for safety-critical applications and may be a partial solution for communication. New modular and multimodal platforms may overcome mobility limitations without significantly increasing cost. Multi-agent systems are not currently ready for deployment in search and rescue applications; however, progress is being made in a number of critical domains. As the field matures, research should focus on realistic evaluations of constituent technologies, and on confronting the challenges of simulation-to-reality transfer, algorithmic bias in autonomous agents that rely on machine learning, and novelty-versus-reliability incentive mismatch.

46 citations

Journal ArticleDOI
19 Jun 2019
TL;DR: This work presents the set of interconnectable modules (IMPROV), which programs and verifies the safety of assembled robots themselves, and shows a reduction of robot idle time by 36% without compromising on safety using the self-verification concept compared with current safety standards.
Abstract: Industrial robots cannot be reconfigured to optimally fulfill a given task and often have to be caged to guarantee human safety. Consequently, production processes are meticulously planned so that they last for long periods to make automation affordable. However, the ongoing trend toward mass customization and small-scale manufacturing requires purchasing new robots on a regular basis to cope with frequently changing production. Modular robots are a natural answer: Robots composed of standardized modules can be easily reassembled for new tasks, can be quickly repaired by exchanging broken modules, and are cost-effective by mass-producing standard modules usable for a large variety of robot types. Despite these advantages, modular robots have not yet left research laboratories because an expert must reprogram each new robot after assembly, rendering reassembly impractical. This work presents our set of interconnectable modules (IMPROV), which programs and verifies the safety of assembled robots themselves. Experiments show that IMPROV robots retained the same control performance as nonmodular robots, despite their reconfigurability. With respect to human-robot coexistence, our user study shows a reduction of robot idle time by 36% without compromising on safety using our self-verification concept compared with current safety standards. We believe that by using self-programming and self-verification, modular robots can transform current automation practices.

39 citations