
Showing papers in "Lecture Notes in Computer Science in 2019"


Book ChapterDOI
TL;DR: The team has extended a long-standing community file format (OME-TIFF) for use in digital pathology by making use of the core TIFF specification to store multi-resolution representations of a single slide in a flexible, performant manner.
Abstract: Faced with the need to support a growing number of whole slide imaging (WSI) file formats, our team has extended a long-standing community file format (OME-TIFF) for use in digital pathology. The format makes use of the core TIFF specification to store multi-resolution (or "pyramidal") representations of a single slide in a flexible, performant manner. Here we describe the structure of this format, its performance characteristics, as well as open-source library support for reading and writing pyramidal OME-TIFFs.
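
The pyramidal layout can be consumed with standard TIFF tooling. As a minimal sketch (assuming the open-source tifffile Python package and a hypothetical file name, not the specific reader library described by the authors), each resolution level of the first image series can be enumerated and read independently:

```python
# Sketch: enumerating pyramid levels of a pyramidal OME-TIFF with tifffile.
# "slide.ome.tiff" is a hypothetical file; this is not the authors' own API.
import tifffile

with tifffile.TiffFile("slide.ome.tiff") as tif:
    series = tif.series[0]                      # first image series of the slide
    for i, level in enumerate(series.levels):   # pyramid levels, full resolution first
        print(f"level {i}: shape={level.shape}, dtype={level.dtype}")
    thumbnail = series.levels[-1].asarray()     # read only the lowest-resolution level
```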

34 citations


Book ChapterDOI
TL;DR: A cascaded deep network framework is proposed, in which the whole tumor is segmented first and the internal tumor substructures are then segmented further, and a loss weighted sampling scheme is presented to eliminate the issue of imbalanced data during network training.
Abstract: This paper proposes a novel cascaded U-Net for brain tumor segmentation. Inspired by the distinct hierarchical structure of brain tumors, we design a cascaded deep network framework in which the whole tumor is segmented first and the internal tumor substructures are then segmented further. Considering that the increase in network depth brought by cascade structures leads to a loss of accurate localization information in deeper layers, we construct between-net connections to link features at the same resolution and transmit detailed information from shallow layers to the deeper layers. We then present a loss weighted sampling (LWS) scheme to eliminate the issue of imbalanced data. Experimental results on the BraTS 2017 dataset show that our framework outperforms state-of-the-art segmentation algorithms, especially in terms of segmentation sensitivity.
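
For illustration, the sketch below shows inverse-frequency weighting of voxel labels in numpy, the general idea behind weighted sampling for imbalanced tumor classes; the exact LWS scheme of the paper may differ, and all sizes are toy values:

```python
# Minimal sketch of class-frequency-based weighting for imbalanced voxel labels.
# Illustrative only; the paper's LWS scheme may differ in detail.
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Weight each class by the inverse of its voxel frequency."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    freqs = counts / counts.sum()
    weights = 1.0 / np.maximum(freqs, 1e-8)
    return weights / weights.sum()               # normalized per-class weights

# Example: sample patch-center voxels so rare tumor classes are drawn more often.
labels = np.random.randint(0, 4, size=(64, 64, 64))   # toy segmentation volume
w = inverse_frequency_weights(labels, n_classes=4)
voxel_probs = w[labels].ravel()
voxel_probs /= voxel_probs.sum()
centers = np.random.choice(labels.size, size=8, p=voxel_probs, replace=False)
```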

33 citations


Book ChapterDOI
TL;DR: In this article, the authors consider optimization methods for convex minimization problems under inexact information on the objective function, which as particular cases includes the inexact oracle and the relative smoothness condition.
Abstract: We consider optimization methods for convex minimization problems under inexact information on the objective function. We introduce an inexact model of the objective which, as particular cases, includes the inexact oracle [16] and the relative smoothness condition [36]. We analyze a gradient method which uses this inexact model and obtain convergence rates for convex and strongly convex problems. To show potential applications of our general framework we consider three particular problems. The first one is clustering by the electoral model introduced in [41]. The second one is approximating the optimal transport distance, for which we propose a Proximal Sinkhorn algorithm. The third one is devoted to approximating the optimal transport barycenter, for which we propose a Proximal Iterative Bregman Projections algorithm. We also illustrate the practical performance of our algorithms by numerical experiments.
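
For context, the Proximal Sinkhorn algorithm builds on the standard Sinkhorn iteration for entropically regularized optimal transport. A minimal numpy sketch of that plain iteration (not the paper's proximal variant) looks as follows:

```python
# Plain Sinkhorn iteration for entropy-regularized optimal transport.
# The paper's Proximal Sinkhorn wraps such steps in an outer proximal
# scheme, which is not reproduced here.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Approximate the OT plan between histograms a, b for cost matrix C."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # scale columns to match marginal b
        u = a / (K @ v)                   # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]       # transport plan
    return P, np.sum(P * C)               # plan and approximate OT cost

n = 50
a = np.ones(n) / n
b = np.ones(n) / n
C = (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2 / n**2
P, cost = sinkhorn(a, b, C)
```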

33 citations


Book ChapterDOI
TL;DR: A vision for supporting model-based DevOps practices is introduced, and the corresponding research roadmap for the modeling community to address this vision is inferred by discussing a CPS demonstrator.
Abstract: The emerging field of Cyber-Physical Systems (CPS) calls for new scenarios of model use. In particular, CPS require support both for the integration of physical and cyber parts in innovative complex systems or production chains, and for the management of the data gathered from the environment to drive dynamic reconfiguration at runtime or to find improved designs. In such a context, the engineering of CPS must rely on models to uniformly reason about various heterogeneous concerns all along the system life cycle. In the last decades, the use of models has been intensively investigated both at design time for driving the development of complex systems, and at runtime as a reasoning layer to support deployment, monitoring and runtime adaptations. However, these approaches remain mostly independent. With the advent of DevOps principles, the engineering of CPS would benefit from supporting a smooth continuum of models from design to runtime, and vice versa. In this vision paper, we introduce a vision for supporting model-based DevOps practices, and we infer the corresponding research roadmap for the modeling community to address this vision by discussing a CPS demonstrator.

31 citations


Book ChapterDOI
TL;DR: Some directions for future research on how black-box model learning can be enhanced using white-box information extraction methods are explored, with the aim of maintaining the benefits of dynamic black-box methods while making effective use of information that can be obtained through white-box techniques.
Abstract: Model learning is a black-box technique for constructing state machine models of software and hardware components, which has been successfully used in areas such as telecommunication, banking cards, network protocols, and control software. The underlying theoretic framework (active automata learning) was first introduced in a landmark paper by Dana Angluin in 1987 for finite state machines. In order to make model learning more widely applicable, it must be further developed to scale better to large models and to generate richer classes of models. Recently, various techniques have been employed to extend automata learning to extended automata models, which combine control flow with guards and assignments to data variables. Such techniques infer guards over data parameters and assignments from observations of test output. In the black-box model of active automata learning this can be costly and require many tests, while in many application scenarios source code is available for analysis. In this paper, we explore some directions for future research on how black-box model learning can be enhanced using white-box information extraction methods, with the aim to maintain the benefits of dynamic black-box methods while making effective use of information that can be obtained through white-box techniques.

25 citations


Book ChapterDOI
TL;DR: The manifold facets of this field of research are discussed by surveying the basic ingredients of MDP model checking, the verification of various MDP extensions, rich classes of properties, and their applications.
Abstract: This paper presents a retrospective view on probabilistic model checking. We focus on Markov decision processes (MDPs, for short). We survey the basic ingredients of MDP model checking and discuss its enormous developments since the seminal works by Courcoubetis and Yannakakis in the early 1990s. We discuss in particular the manifold facets of this field of research by surveying the verification of various MDP extensions, rich classes of properties, and their applications.
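
One of the basic ingredients referred to above is the computation of maximal reachability probabilities, typically by value iteration over the Bellman operator. A toy, hedged sketch follows (an illustrative example, not any particular model checker's implementation):

```python
# Value iteration for maximal reachability probabilities in a tiny MDP,
# one of the basic ingredients of MDP model checking (toy example only).
# transitions[state][action] = list of (next_state, probability)
transitions = {
    0: {"a": [(1, 0.5), (2, 0.5)], "b": [(0, 1.0)]},
    1: {"a": [(2, 1.0)]},
    2: {"a": [(2, 1.0)]},                 # state 2 is the (absorbing) goal
}
goal = {2}

def max_reachability(transitions, goal, n_iter=100):
    p = {s: (1.0 if s in goal else 0.0) for s in transitions}
    for _ in range(n_iter):
        for s in transitions:
            if s in goal:
                continue
            p[s] = max(sum(pr * p[t] for t, pr in succ)
                       for succ in transitions[s].values())
    return p

print(max_reachability(transitions, goal))   # Pmax of eventually reaching the goal
```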

23 citations


Book ChapterDOI
TL;DR: This paper evaluates the proposed deep learning-based method for brain tumor classification using the patient dataset from the Computational Precision Medicine: Radiology-Pathology Challenge (CPM: Rad-Path) on Brain Tumor Classification 2019 with mixed results.
Abstract: In this paper, we propose a deep learning-based method for brain tumor classification. It is composed of two parts. The first part is brain tumor segmentation on multimodal magnetic resonance images (mMRI), and the second part performs tumor classification using the tumor segmentation results. A 3D deep neural network is implemented to differentiate tumor from normal tissues; subsequently, a second 3D deep neural network is developed for tumor classification. We evaluate the proposed method using the patient dataset from the Computational Precision Medicine: Radiology-Pathology Challenge (CPM: Rad-Path) on Brain Tumor Classification 2019. The method achieves a Dice score of 0.749 and an F1 score of 0.764 on validation data, and a Dice score of 0.596 and an F1 score of 0.603 on testing data. Our team was ranked second in the CPM: Rad-Path challenge on Brain Tumor Classification 2019 based on overall testing performance.

22 citations


Book ChapterDOI
TL;DR: A novel unsupervised domain adaptation (DA) framework for semantic segmentation is presented which uses self-ensembling and adversarial training methods to effectively tackle domain shift between MR images.
Abstract: Advances in deep learning techniques have led to compelling achievements in medical image analysis. However, the performance of neural network models degrades drastically if the test data come from a domain different from the training data. In this paper, we present and evaluate a novel unsupervised domain adaptation (DA) framework for semantic segmentation which uses self-ensembling and adversarial training methods to effectively tackle domain shift between MR images. We evaluate our method on two publicly available MRI datasets to address two different types of domain shift: on the BraTS dataset [11] to mitigate domain shift between high grade and low grade gliomas, and on the SCGM dataset [13] to tackle cross-institutional domain shift. Through extensive evaluation, we show that our method achieves favorable results on both datasets.
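
Self-ensembling approaches commonly maintain a teacher network as an exponential moving average (EMA) of the student's weights. A minimal PyTorch sketch of that update is shown below (illustrative only; the adversarial training component of the framework is not shown):

```python
# EMA teacher update used in self-ensembling (mean-teacher style) methods.
# Illustrative sketch; not the paper's full framework.
import torch

@torch.no_grad()
def update_teacher(teacher, student, alpha=0.99):
    """teacher = alpha * teacher + (1 - alpha) * student, parameter-wise."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(alpha).add_(s_param.data, alpha=1.0 - alpha)

# Toy usage: after each student optimization step on source + target batches,
# update the teacher and penalize student/teacher disagreement on target images.
student = torch.nn.Conv2d(1, 2, kernel_size=3)
teacher = torch.nn.Conv2d(1, 2, kernel_size=3)
teacher.load_state_dict(student.state_dict())
update_teacher(teacher, student)
```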

20 citations


Book ChapterDOI
TL;DR: The numerical optimization of rainwater harvesting systems will improve the knowledge from previous studies in the field, and provide an additional tool to identify the optimal rainwater reuse in order to save water and reduce the surface runoff discharged into the sewer system.
Abstract: Rainwater harvesting systems represent sustainable solutions that meet the challenges of water saving and surface runoff mitigation. The collected rainwater can be re-used for several purposes such as irrigation of green roofs and gardens, flushing toilets, etc. Optimizing the water usage in each such use is a significant goal. To achieve this goal, we have considered TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and the Rough Set method as Multi-Objective Optimization approaches by analyzing different case studies. TOPSIS was used to compare algorithms and evaluate the performance of alternatives, while the Rough Set method was applied as a machine learning method to optimize rainwater-harvesting systems. Results from the Rough Set method provided a baseline for decision-making, and the minimal decision algorithm was obtained as six rules. In addition, the TOPSIS method ranked all case studies, and because we used several correlated attributes, the findings are more accurate than those of other simple ranking methods. Therefore, the numerical optimization of rainwater harvesting systems will improve the knowledge from previous studies in the field, and provide an additional tool to identify the optimal rainwater reuse in order to save water and reduce the surface runoff discharged into the sewer system.
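
For reference, the standard TOPSIS procedure can be sketched in a few lines of numpy; the decision matrix, weights, and criteria below are hypothetical placeholders, not the case-study data:

```python
# Standard TOPSIS ranking procedure (illustrative data, not the paper's cases).
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution.

    benefit[j] is True if criterion j is to be maximized, False if minimized.
    """
    R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = R * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness coefficient in [0, 1]

X = np.array([[0.7, 120.0, 3.0],                 # e.g. water saved, cost, runoff
              [0.5,  80.0, 2.0],
              [0.9, 150.0, 4.0]])
scores = topsis(X, weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, False, False]))
ranking = np.argsort(-scores)                    # best alternative first
```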

20 citations


Book ChapterDOI
TL;DR: This chapter provides a map of the current knowledge about the boundaries of Moving and Computing in Continuous Spaces, describing the “models” under which the results known so far have been obtained.
Abstract: This chapter provides a map of the current knowledge about the boundaries of Moving and Computing in Continuous Spaces, describing the “models” under which the results known so far have been obtained.

19 citations


Book ChapterDOI
TL;DR: This work presents the first method for detecting vertebral fractures in CT using automatically learned 3D feature maps, and trains a voxel-classification 3D Convolutional Neural Network with a training database of 90 cases that has been semi-automatically generated using radiologist readings that are readily available in clinical practice.
Abstract: Osteoporosis-induced fractures occur worldwide about every 3 seconds. Vertebral compression fractures are early signs of the disease and are considered risk predictors for secondary osteoporotic fractures. We present a detection method to opportunistically screen spine-containing CT images for the presence of these vertebral fractures. Inspired by radiology practice, existing methods are based on 2D and 2.5D features, but we present, to the best of our knowledge, the first method for detecting vertebral fractures in CT using automatically learned 3D feature maps. The presented method explicitly localizes these fractures, allowing radiologists to interpret its results. We train a voxel-classification 3D Convolutional Neural Network (CNN) with a training database of 90 cases that has been semi-automatically generated using radiologist readings that are readily available in clinical practice. Our 3D method produces an Area Under the Curve (AUC) of 95% for patient-level fracture detection and an AUC of 93% for vertebra-level fracture detection in a five-fold cross-validation experiment.

Journal Article
TL;DR: In this article, a neural network model for joint extraction of named entities and relations between them, without any hand-crafted features, is proposed, which extends a BiLSTM-CRF-based entity recognition model.
Abstract: We propose a neural network model for joint extraction of named entities and relations between them, without any hand-crafted features. The key contribution of our model is to extend a BiLSTM-CRF-based entity recognition model with a deep biaffine attention layer to model second-order interactions between latent features for relation classification, specifically attending to the role of an entity in a directional relationship. On the benchmark “relation and entity recognition” dataset CoNLL04, experimental results show that our model outperforms previous models, producing new state-of-the-art performance.
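
A biaffine attention layer of the general form score(h_i, h_j) = h_i^T U h_j + W [h_i; h_j] + b can be sketched in PyTorch as follows (dimensions and label count are hypothetical; this is the generic layer, not the authors' exact architecture):

```python
# Generic biaffine scoring layer for pairs of entity representations.
import torch
import torch.nn as nn

class Biaffine(nn.Module):
    def __init__(self, dim, n_labels):
        super().__init__()
        self.U = nn.Parameter(torch.randn(n_labels, dim, dim) * 0.01)
        self.W = nn.Linear(2 * dim, n_labels)     # linear term + bias

    def forward(self, head, dep):
        # head, dep: (batch, dim) representations of the two entities
        bilinear = torch.einsum("bd,lde,be->bl", head, self.U, dep)
        linear = self.W(torch.cat([head, dep], dim=-1))
        return bilinear + linear                  # (batch, n_labels) relation scores

scorer = Biaffine(dim=128, n_labels=5)
scores = scorer(torch.randn(4, 128), torch.randn(4, 128))
```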


Book ChapterDOI
TL;DR: A new Simulink-based solution for the Infinity Computer, a new type of supercomputer allowing one to work numerically with finite, infinite, and infinitesimal numbers in one general framework, is introduced.
Abstract: This paper is dedicated to the Infinity Computer – a new type of supercomputer allowing one to work numerically with finite, infinite, and infinitesimal numbers in one general framework. The existing software simulators of the Infinity Computer are already used for solving important real-world problems in applied mathematics. However, they are not efficient for solving difficult problems in control theory and dynamics, where visual programming tools like Simulink are used frequently. For this reason, the main aim of this paper is to introduce a new Simulink-based solution for the Infinity Computer.

Book ChapterDOI
TL;DR: In a distributed locally-checkable proof, a prover is a computationally-unbounded oracle that aims at convincing the network that its state is legal, by providing the nodes with certificates that form a distributed proof of legality.
Abstract: In a distributed locally-checkable proof, we are interested in checking the legality of a given network configuration with respect to some Boolean predicate. To do so, the network enlists the help of a prover—a computationally-unbounded oracle that aims at convincing the network that its state is legal, by providing the nodes with certificates that form a distributed proof of legality. The nodes then verify the proof by examining their certificate, their local neighborhood and the certificates of their neighbors.

Journal Article
TL;DR: CentroidNet as mentioned in this paper is a Fully Convolutional Neural Network (FCNN) architecture specifically designed for object localization and counting in precision agriculture, where a field of vectors pointing to the nearest object centroid is trained and combined with a learned segmentation map to produce accurate object centroids by majority voting.
Abstract: In precision agriculture, counting and precise localization of crops is important for optimizing crop yield. In this paper, CentroidNet is introduced, a Fully Convolutional Neural Network (FCNN) architecture specifically designed for object localization and counting. A field of vectors pointing to the nearest object centroid is trained and combined with a learned segmentation map to produce accurate object centroids by majority voting. This is tested on a crop dataset made using a UAV (drone) and on a cell-nuclei dataset which was provided by a Kaggle challenge. We define the mean Average F1 score (mAF1) for measuring the trade-off between precision and recall. CentroidNet is compared to the state-of-the-art networks YOLOv2 and RetinaNet, which share similar properties. The results show that CentroidNet obtains the best F1 score. We also explicitly show that CentroidNet can seamlessly switch between patches of images and full-resolution images without the need for retraining.
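
The majority-voting step can be illustrated with a small numpy sketch in which every foreground pixel casts a vote at the location its predicted vector points to; this is a simplified stand-in for the full CentroidNet decoding:

```python
# Simplified centroid voting: each foreground pixel votes at the location its
# predicted offset vector points to; vote-map peaks approximate object centroids.
import numpy as np

def vote_centroids(vectors, mask):
    """vectors: (H, W, 2) predicted offsets (dy, dx) to the nearest centroid.
    mask: (H, W) boolean foreground segmentation."""
    H, W, _ = vectors.shape
    votes = np.zeros((H, W), dtype=int)
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        ty = int(round(y + vectors[y, x, 0]))
        tx = int(round(x + vectors[y, x, 1]))
        if 0 <= ty < H and 0 <= tx < W:
            votes[ty, tx] += 1                    # majority-voting accumulator
    return votes

# Toy example: every foreground pixel points at (8, 8).
ys, xs = np.mgrid[0:16, 0:16]
vectors = np.dstack([8 - ys, 8 - xs]).astype(float)
mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True
votes = vote_centroids(vectors, mask)
print(np.unravel_index(np.argmax(votes), votes.shape))   # -> (8, 8)
```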

Book ChapterDOI
TL;DR: This work uses and enhances an existing online serious game to train employees to use defence mechanisms of social psychology to prevent social engineering and discusses the resulting game with practitioners in the field of security awareness to gather some qualitative feedback.
Abstract: Social engineering is the clever manipulation of human trust. While most security protection focuses on technical aspects, organisations remain vulnerable to social engineers. Approaches employed in social engineering do not differ significantly from the ones used in common fraud. This implies that defence mechanisms against fraud are useful to prevent social engineering as well. We tackle this problem by using and enhancing an existing online serious game to train employees to use defence mechanisms of social psychology. The game has shown promising tendencies towards raising awareness for social engineering in an entertaining way. Training is highly effective when it is adapted to the player’s context. Our contribution focuses on enhancing the game with highly configurable game settings and content to allow adaptation to the player’s context as well as integration into training platforms. We discuss the resulting game with practitioners in the field of security awareness to gather qualitative feedback.

Book ChapterDOI
TL;DR: Big data problems are often not easily amenable to efficient and effective use of High Performance Computing (HPC) facilities and technologies, and M&S communities typically lack the detailed expertise required to exploit the full potential of HPC solutions.
Abstract: Modelling and Simulation (M&S) offer adequate abstractions to manage the complexity of analysing big data in scientific and engineering domains. Unfortunately, big data problems are often not easily amenable to efficient and effective use of High Performance Computing (HPC) facilities and technologies. Furthermore, M&S communities typically lack the detailed expertise required to exploit the full potential of HPC solutions while HPC specialists may not be fully aware of specific modelling and simulation requirements and applications.

Book ChapterDOI
TL;DR: It is found that the compile_ultra option reduces the success rate significantly, from 5 key candidates with a correctness of between 75 and 90% down to 3 key candidates with a maximum success rate of 72%, compared to the simple compile option.
Abstract: In this paper we analyse the impact of different compile options on the success rate of side channel analysis attacks. We run horizontal differential side channel attacks against simulated power traces for the same kP design synthesized using two different compile options after synthesis and after layout. As we are interested in the effect on the produced ASIC we also run the same attack against measured power traces after manufacturing the ASIC. We found that the compile_ultra option reduces the success rate significantly from 5 key candidates with a correctness of between 75 and 90% down to 3 key candidates with a maximum success rate of 72% compared to the simple compile option. Also the success rate after layout shows a very high correlation with the one obtained attacking the measured power and electromagnetic traces, i.e. the simulations are a good indicator of the resistance of the ASIC.

Book ChapterDOI
TL;DR: NSSA realizes behavior identification, intention understanding and impact assessment of various activities in the network to support reasonable security response decisions and provides an important basis for management decision-making.
Abstract: With the increasing importance of cyberspace security, more attention is being paid to the research and application of network security situation awareness (NSSA). NSSA realizes behavior identification, intention understanding and impact assessment of various activities in the network to support reasonable security response decisions. It is a means of quantitative analysis of network security. With the help of a network security management system, administrators can grasp the security situation of the whole network and analyze the intentions of attackers, which provides an important basis for management decision-making. This paper then summarizes NSSA from three aspects: extraction of network security situation elements, evaluation of the network security situation, and prediction of the network security situation, and reviews the research status and development trends of situational awareness.

Journal Article
TL;DR: In this paper, the authors describe a large collection of benchmarks, publicly available through the wiki automata.cs.ru.nl, of different types of state machine models: DFAs, Moore machines, Mealy machines, interface automata and register automata, including both randomly generated state machines and models of real protocols and embedded software/hardware systems.
Abstract: We describe a large collection of benchmarks, publicly available through the wiki automata.cs.ru.nl, of different types of state machine models: DFAs, Moore machines, Mealy machines, interface automata and register automata. Our repository includes both randomly generated state machines and models of real protocols and embedded software/hardware systems. These benchmarks will allow researchers to evaluate the performance of new algorithms and tools for active automata learning and conformance testing.

Book ChapterDOI
TL;DR: An IoT information security protection scheme based on blockchain technology is proposed that utilizes the security features of the blockchain combined with the AES encryption algorithm to encrypt the original IoT information; distributed storage of the ciphertext can effectively solve the IoT data storage problem.
Abstract: The Internet of Things (IoT) is an important area of next-generation information technology, and its value and significance are widely recognized. While providing development opportunities, the IoT also presents major challenges. Security and privacy have become severe issues that cannot be ignored in the development of the IoT, so in this paper we propose an IoT information security protection scheme based on blockchain technology. The scheme utilizes the security features of the blockchain combined with the AES encryption algorithm to encrypt the original IoT information, and distributed storage of the ciphertext can effectively solve the IoT data storage problem. Experiments show that this scheme can reduce the operation and credit costs of a centralized network. At the same time, the blockchain-based IoT information security protection scheme, combined with cryptographic techniques, can effectively address the big data management, trust, security, and privacy issues faced in the development of the IoT.
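
As an illustration of the encryption step only (assuming the widely used Python cryptography package and AES in GCM mode; the key management and blockchain layers of the scheme are not shown), a device record could be encrypted before distributed storage like this:

```python
# Sketch: AES-GCM encryption of a hypothetical IoT record prior to distributed
# storage; key handling and the blockchain anchoring of the paper are not shown.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)         # in practice, managed per device
aesgcm = AESGCM(key)
nonce = os.urandom(12)                            # must be unique per message

record = b'{"device":"sensor-01","temp":22.5}'    # hypothetical IoT payload
ciphertext = aesgcm.encrypt(nonce, record, b"sensor-01")

# The (nonce, ciphertext) pair can then be distributed to storage nodes, with a
# hash anchored on the blockchain for integrity; decryption reverses the call.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"sensor-01")
assert plaintext == record
```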

Book ChapterDOI
TL;DR: This paper aims to identify the controls provisioned in ISO/IEC 27001:2013 and ISO/IEC 27002:2013 that need to be extended in order to adequately meet, if/where possible, the data protection requirements that the GDPR imposes.
Abstract: With the enforcement of the General Data Protection Regulation (GDPR) in EU, organisations must make adjustments in their business processes and apply appropriate technical and organisational measures to ensure the protection of the personal data they process. Further, organisations need to demonstrate compliance with GDPR. Organisational compliance demands a lot of effort both from a technical and from an organisational perspective. Nonetheless, organisations that have already applied ISO27k standards and employ an Information Security Management System and respective security controls need considerably less effort to comply with GDPR requirements. To this end, this paper aims to identify the controls provisioned in ISO/IEC 27001:2013 and ISO/IEC 27002:2013 that need to be extended in order to adequately meet, if/where possible, the data protection requirements that the GDPR imposes. Thus, an organisation that already follows ISO/IEC 27001:2013, can use this work as a basis for compliance with the GDPR.

Book ChapterDOI
TL;DR: This work identified how frailty is associated with clinical phenotypes that most reliably characterize the group of older patients from the authors' local environment, the general practice attenders, and performed cluster analysis using a set of anthropometric and laboratory health indicators routinely collected in electronic health records.
Abstract: Many problems in clinical medicine are characterized by high complexity and non-linearity. This is particularly the case with aging diseases: chronic medical conditions are known to tend to accumulate in the same person, a phenomenon known as multimorbidity. In addition to the number of chronic diseases, the presence of integrated geriatric conditions and functional deficits, such as walking difficulties, and of frailty (a general weakness associated with weight and muscle loss and low functioning) are important for the prediction of negative health outcomes of older people, such as hospitalization, dependency on others or pre-term mortality. In this work, we identified how frailty is associated with clinical phenotypes that most reliably characterize the group of older patients from our local environment: the general practice attenders. We performed cluster analysis using a set of anthropometric and laboratory health indicators routinely collected in electronic health records. Differences found among clusters in the proportions of prefrail and frail versus non-frail patients have been explained by differences in the central values of the parameters used for clustering. Distribution patterns of chronic diseases and other geriatric conditions, found by the assessment of differences, were very useful in determining the clinical phenotypes derived from the clusters. Once more, this study demonstrates the most important aspect of any machine learning task: the quality of the data!

Book ChapterDOI
TL;DR: This paper introduces a new type of gait controller where complexity can be set by a single parameter, using a dynamic genotype-phenotype mapping, and shows that having variable complexity allows for more sophistication and high performance for demanding tasks, at the cost of optimization effort.
Abstract: The complexity of a legged robot’s environment or task can inform how specialised its gait must be to ensure success. Evolving specialised robotic gaits demands many evaluations—acceptable for computer simulations, but not for physical robots. For some tasks, a more general gait, with lower optimization costs, could be satisfactory. In this paper, we introduce a new type of gait controller where complexity can be set by a single parameter, using a dynamic genotype-phenotype mapping. Low controller complexity leads to conservative gaits, while higher complexity allows more sophistication and high performance for demanding tasks, at the cost of optimization effort. We investigate the new controller on a virtual robot in simulations and do preliminary testing on a real-world robot. We show that having variable complexity allows us to adapt to different optimization budgets. With a high evaluation budget in simulation, a complex controller performs best. Moreover, real-world evolution with a limited evaluation budget indicates that a lower gait complexity is preferable for a relatively simple environment.

Book ChapterDOI
TL;DR: A fetal MRI neuroimaging analysis pipeline for fetuses with SB is presented, including automated fetal ventricle segmentation and deformation-based morphometry, and its applicability is demonstrated with an analysis of ventricle enlargement in fetuses with SB.
Abstract: Open spina bifida (SB) is one of the most common congenital defects and can lead to impaired brain development. Emerging fetal surgery methods have shown considerable success in the treatment of patients with this severe anomaly. Afterwards, alterations in the brain development of these fetuses have been observed. Currently no longitudinal studies exist to show the effect of fetal surgery on brain development. In this work, we present a fetal MRI neuroimaging analysis pipeline for fetuses with SB, including automated fetal ventricle segmentation and deformation-based morphometry, and demonstrate its applicability with an analysis of ventricle enlargement in fetuses with SB. Using a robust super-resolution algorithm, we reconstructed fetal brains at both pre-operative and post-operative time points and trained a U-Net CNN to automatically segment the ventricles. We investigated the change of ventricle shape post-operatively, and the impact of lesion size, lesion type, and gestational age (GA) at operation on the change in ventricle shape; no impact was found. Prenatal ventricle volume growth was also investigated. Our method allows for the quantification of longitudinal morphological changes to fully quantify the impact of prenatal SB repair and could be applied to predict postnatal outcomes.

Book ChapterDOI
TL;DR: The main objective of this study is to predict the intra-tumor breast cancer response to neoadjuvant chemotherapy (NAC), in order to avoid unnecessary treatment sessions for non-responding patients.
Abstract: Purpose: Not all breast cancer patients receiving chemotherapy achieve a positive response. The main objective of this study is to predict the intra-tumor breast cancer response to neoadjuvant chemotherapy (NAC). This aims to provide an early prediction to avoid unnecessary treatment sessions for non-responding patients.


Book ChapterDOI
TL;DR: This paper combines massive location data coming from smartphones with mixed data sampling (MIDAS) techniques to predict the current unemployment rate in Japan, and finds that GPS data are very useful for predicting the status of labor markets.
Abstract: The unemployment rate is one of the most important macroeconomic indicators. Central governments and market participants heavily rely on the index to assess economies. However, official statistics of the unemployment rate are released infrequently and with substantial delay. Prediction of official labor market statistics will be helpful for these authorities as well as for private companies and even workers. In this paper, we combine massive location data coming from smartphones with mixed data sampling (MIDAS) techniques to predict the current unemployment rate in Japan. We found that GPS data are very useful for predicting the status of labor markets.
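
MIDAS regressions typically aggregate high-frequency observations into a low-frequency regressor with a parametric lag polynomial such as the exponential Almon weights. A hedged numpy sketch of that aggregation follows (the parameter values and the daily mobility series are hypothetical):

```python
# Exponential Almon lag weights as used in MIDAS-style aggregation of a
# high-frequency indicator (e.g. a daily GPS mobility index) into one
# low-frequency regressor; all values below are toy placeholders.
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()                            # weights sum to one

def midas_aggregate(high_freq_series, theta1=0.05, theta2=-0.01):
    """Collapse the last n_lags high-frequency observations into one regressor."""
    w = exp_almon_weights(len(high_freq_series), theta1, theta2)
    return float(np.dot(w, high_freq_series[::-1]))  # most recent lag gets weight w_1

daily_mobility = np.random.rand(30)               # toy daily GPS mobility index
x_t = midas_aggregate(daily_mobility)             # monthly regressor for unemployment
```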

Book ChapterDOI
TL;DR: A parametric design method to improve smart manufacturing in the 4.0 jewelry industry is proposed; by using constrained collections of schemata, so-called Directed Acyclic Graphs (DAGs), and additive manufacturing technologies, a process is created by which customers are able to modify 3D virtual models and visualize them according to their preferences.
Abstract: The industrial and technological revolution and the use of innovative software have made it possible to build a virtual world from which we can control the physical one. In particular, this development has provided relevant benefits in the field of the jewelry manufacturing industry using parametric modeling systems. This paper proposes a parametric design method to improve smart manufacturing in the 4.0 jewelry industry. By using constrained collections of schemata, so-called Directed Acyclic Graphs (DAGs), and additive manufacturing technologies, we created a process by which customers are able to modify 3D virtual models and to visualize them according to their preferences. In fact, by using the software packages Mathematica and Grasshopper, we exploited both the huge quantity of mathematical patterns (such as curves and knots) and the parametric space of these structures. A generic DAG, grouped into a unit called a User Object, is a design tool shifting the focus from the final shape to the digital process. For this reason, it is capable of returning a huge number of unique combinations of the starting configurations, according to customer preferences. The configurations chosen by the designer or by the customers are 3D printed in wax-based resins and are then ready to be merged, according to artisan jewelry handcraft. Two case studies are presented to show empirical evidence of the designed process transforming abstract mathematical equations into real physical forms.