Q2. What are the future works mentioned in the paper "High-performance modelling and simulation for big data applications" ?
In future work, more specific implementations and experimental results could be presented, based on the guidelines, outlines, and integration possibilities presented in this chapter.
Q3. What is the advantage of Apache Storm?
Apache Storm [34] is a scalable, fast, fault-tolerant platform for distributed computing; its advantage is handling real-time processing of data streamed from both synchronous and asynchronous systems.
Q4. How can the authors unify all methods for computing total derivatives?
By using the method of modular analysis and unified derivatives (MAUD), the authors can unify all methods for computing total derivatives using a single equation with associated distributed-memory, sparse data-passing schemes.
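As a sketch of the idea (the notation below follows the published unified derivatives equation of the MAUD approach; the symbols are our assumption, not reproduced from this chapter): with residuals R and unknowns u, the methods for computing total derivatives arise as special cases of a single matrix equation,

```latex
\frac{\partial R}{\partial u}\,\frac{\mathrm{d}u}{\mathrm{d}r}
  \;=\; \mathcal{I}
  \;=\; \left(\frac{\partial R}{\partial u}\right)^{\!T}
        \left(\frac{\mathrm{d}u}{\mathrm{d}r}\right)^{\!T}
```

Solving the left-hand form corresponds to the direct (forward) method, while the transposed right-hand form corresponds to the adjoint (reverse) method; chain-rule and coupled-system variants emerge from the block structure of ∂R/∂u.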
Q5. What is the main reason for the complexity of the medical approach to comorbidities?
The medical approach to comorbidities represents an impressive computational challenge, mainly because of data synergies requiring the integration of heterogeneous sources of information, the definition of deep phenotyping and marker re-modulation, and the establishment of clinical decision support systems.
Q6. Why can't the authors use a genome fragment of limited size?
Due to the limitations of current biotechnologies, sequencing (that is, taking in vitro DNA as input and producing an in silico text file as output) can only be performed on genome fragments of limited size.
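The practical consequence is that a genome must be "shredded" into many short overlapping reads, which are sequenced individually and later reassembled in silico. A minimal sketch (the function and parameter names are our illustration, not from the paper):

```python
# Illustrative sketch: sequencing machines read only short fragments, so a
# long genome is sampled as many overlapping "reads" of limited size.
import random

def shred(genome: str, read_len: int, n_reads: int, seed: int = 0) -> list[str]:
    """Sample short fragments at random positions, mimicking limited-size sequencing."""
    rng = random.Random(seed)
    starts = [rng.randrange(0, len(genome) - read_len + 1) for _ in range(n_reads)]
    return [genome[s:s + read_len] for s in starts]

genome = "ACGTACGGATCCTTAGCAAGT" * 3
reads = shred(genome, read_len=8, n_reads=30)
# Many short reads, none of which spans the whole genome:
print(len(reads), all(len(r) == 8 for r in reads))  # → 30 True
```

Reconstructing the original sequence from such reads is the assembly problem, which is where the computational cost of genomics largely comes from.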
Q7. What was the main side effect of cHiPSet COST?
In the case of the EU project RIVR (Upgrading National Research Structures in Slovenia), supported by the European Regional Development Fund (ERDF), an important side effect of the cHiPSet COST action was leveraging its experts' inclusiveness to gain capacity recognition at a national ministry for co-financing HPC equipment.
Q8. What is the expensive process for complex disease management?
In particular, complex disease management is mostly based on electronic health records collection and analysis, which are expensive processes.
Q9. What are the main reasons why most of these applications are not HPC-enabled?
Since most of these applications belong to domains within the life, social and physical sciences, their mainstream approaches are rooted in non-computational abstractions and they are typically not HPC-enabled.
Q10. What are the two types of applications that are used to make decisions?
Classical HPC applications, where the authors build a large-scale complex model and simulate it in order to produce data as a basis for decisions; and Big Data applications, where the starting point is a data set that is processed and analysed to learn the behaviour of a system, to find relevant features, and to make predictions or decisions.
Q11. What could be done to trace cancer clones?
For instance, by using Next Generation Sequencing approaches, cancer clones, subtypes, and metastases could be appropriately traced.
Q12. What is the common way to validate data in biomedical studies?
For instance, in bio-medical studies, wet-lab validation typically involves additional resource-intensive work that has to be geared towards a statistically significant distilled fragment of the computational results, suitable to confirm the bio-medical hypotheses and compatible with the available resources.
Q13. What is the way to make the data the drivers of paths to cures for many complex diseases?
Their chances of making the data the drivers of paths to cures for many complex diseases depend largely on extracting evidence from large-scale comparison of electronic records and on models of disease trajectories.
Q14. What is the popular open source framework for modeling and simulation of cloud computing infrastructures and services?
CloudSim [54] is one of the most popular open-source frameworks for modelling and simulation of cloud computing infrastructures and services.
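CloudSim itself is a Java toolkit; as a language-neutral illustration of what such simulators model (hosts, virtual machines, and submitted tasks, which CloudSim calls "cloudlets"), here is a toy analogue. All class and function names below are our illustration, not CloudSim's actual API:

```python
# Toy analogue of what cloud simulators model: VMs with a processing capacity,
# tasks placed on them, and predicted completion times. Names are illustrative,
# not CloudSim's Java API.
from dataclasses import dataclass, field

@dataclass
class Vm:
    mips: int                               # capacity (million instructions/s)
    queue: list = field(default_factory=list)

@dataclass
class Cloudlet:
    length: int                             # workload (million instructions)

def schedule(vms, cloudlets):
    """Round-robin placement; completion time per VM = total work / capacity."""
    for i, c in enumerate(cloudlets):
        vms[i % len(vms)].queue.append(c)
    return [sum(c.length for c in vm.queue) / vm.mips for vm in vms]

vms = [Vm(mips=1000), Vm(mips=2000)]
times = schedule(vms, [Cloudlet(length=4000)] * 4)
print(times)  # → [8.0, 4.0]
```

Real frameworks like CloudSim extend this picture with datacenter topologies, energy models, and pluggable scheduling policies, which is what makes them useful for evaluating provisioning strategies without deploying real infrastructure.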
Q15. What is the main reason for the growth of the e-healthcare field?
The growth is driven by three main factors; the first is that biomedicine is heavily interdisciplinary, and e-Healthcare requires physicians, bioinformaticians, computer scientists, and engineers to team up.
Q16. What is the framework for modelling and simulating a particular use-case?
The optimum framework for modelling and simulating a particular use-case depends on the availability, structure and size of data [126].
Q17. How many articles were retained for final review?
In the SMS, the initial literature search resulted in 420 articles; 152 articles were retained for final review after the evaluation of the initial search results by domain experts.
Q18. What are some of the tools that are developed specifically for the visualisation of mapped read alignment?
Other tools, such as BamView [40] have been developed specifically to visualise mapped read alignment data in the context of the reference sequence.
Q19. What are some of the approaches that have been successful?
Some approaches have been successful, leading to potential industrial impact and supporting experiments that generate petabytes of data, like those performed at CERN for instance.
Q20. What are the main limitations of the computational analysis of complex biological systems?
The computational analysis of complex biological systems can be hindered by three main factors; among them (2): when the system is composed of hundreds or thousands of reactions and chemical species, classic CPU-based simulators may not be adequate to efficiently derive the behaviour of the system.
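To see why CPU-based simulators struggle at scale, consider a minimal sketch of the standard stochastic simulation algorithm (Gillespie's SSA); the code below is our illustration, not from the paper. Every simulated event requires recomputing the propensity of every reaction, so per-step cost grows with the number of reactions, and large models fire enormous numbers of events:

```python
# Minimal Gillespie SSA sketch: each step recomputes all reaction propensities
# (O(#reactions) work per event), which is one reason classic CPU-based
# simulators become inadequate for models with thousands of reactions.
import random

def ssa_step(state, reactions, rng):
    """One SSA step; reactions = list of (rate, reactant_idx, product_idx)."""
    props = [rate * state[r] for rate, r, _ in reactions]  # recomputed every event
    total = sum(props)
    if total == 0:
        return None                                        # no reaction can fire
    tau = rng.expovariate(total)                           # time to next event
    pick, acc = rng.uniform(0, total), 0.0
    for (rate, r, p), a in zip(reactions, props):
        acc += a
        if pick <= acc:
            state[r] -= 1                                  # consume one reactant
            state[p] += 1                                  # produce one product
            return tau
    return tau

rng = random.Random(1)
state = [100, 0]                  # copy numbers of species A and B
reactions = [(0.5, 0, 1)]         # A -> B with rate constant 0.5
t = 0.0
while (dt := ssa_step(state, reactions, rng)) is not None:
    t += dt
print(state)  # → [0, 100]  (all A converted to B, one event at a time)
```

Even this one-reaction toy fires 100 events; a genome-scale model with thousands of species multiplies both the number of events and the cost of each, which motivates the GPU-accelerated simulators discussed in the chapter.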