Interactive Visual Analysis of Heterogeneous Cohort-Study Data
References
An overview of data warehousing and OLAP technology
Polaris: a system for query, analysis, and visualization of multidimensional relational databases
Practice-related effects demonstrate complementary roles of anterior cingulate and prefrontal cortices in attentional control
Age-related decline in white matter tract integrity and cognitive performance: A DTI tractography and structural equation modeling study
Frequently Asked Questions (16)
Q2. What are the future works mentioned in the paper "Interactive visual analysis of heterogeneous cohort study data" ?
In the future, the authors plan to continue in this research direction and extend the capabilities of the tool. They also plan to import genotype data for the subjects, which was not readily available at the time, and to integrate 2D/3D graph views for representing the brain connectivity information. Finally, they plan to include the retrieval and visualization of patient-specific image data, to assess whether outliers originate from the image data or whether they result from an erroneous derivation process.
Q3. What are the key aspects that were regarded as useful in generating new hypotheses?
The key aspects that were regarded as most useful in generating new hypotheses are: having the whole data at hand in one tool, the ease of use, and being able to fire queries in the tool.
Q4. Why do analysts have to limit their attention to subsets of the data?
Due to the complexity posed by the data heterogeneity, analysts often have to limit their attention to subsets of the data, causing the analysis to lose the overall relations across the different modalities.
Q5. How fast can OLAP cubes perform on relational data?
It has been reported that executing complex queries using OLAP cubes can be about a hundred times faster than executing the same queries on relational data [4].
Q6. What is the typical workflow approach to analyze the data coming from such studies?
The typical workflow approach to analyze the data coming from such studies is to manually extract the pieces of data to analyze from the dataset (e.g., using custom scripts or programs for each analysis), and then process them using mathematical and statistical packages.
Q7. What can be done to restrict the data to specific subsets?
To enable a more focused analysis, selection techniques can be used in order to restrict the processed or visualized data to specific subsets under investigation.
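A minimal sketch of such a selection step, expressed as a boolean predicate over a cohort table. The field names (`age`, `group`) are purely illustrative, not taken from the paper's dataset:

```python
# Hypothetical sketch: restricting a cohort table to a subset of interest
# via a selection predicate. Column names are illustrative only.

subjects = [
    {"id": 1, "age": 34, "group": "control"},
    {"id": 2, "age": 61, "group": "patient"},
    {"id": 3, "age": 58, "group": "patient"},
]

# Selection: restrict further processing to patients older than 50.
selection = [s for s in subjects if s["group"] == "patient" and s["age"] > 50]
print([s["id"] for s in selection])  # -> [2, 3]
```

Downstream processing or visualization would then operate on `selection` rather than the full dataset.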
Q8. What are the methods to evaluate specific hypotheses based on such cohort study data?
There are means to evaluate specific hypotheses, based on such cohort study data, often involving accordingly designed data extraction, transformation, and fusion approaches.
Q9. What are the dimensions and measures used to access?
The dimensions and measures can be thought of as independent and dependent variables, and dimension coordinates are used to access the measures.
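A minimal sketch of this indexing idea, modeling a cube as a mapping from dimension coordinates to measure values. The dimensions and numbers are illustrative assumptions, not the study's data:

```python
# Sketch of the dimension/measure idea: dimension coordinates
# (here: age group, sex) act as independent variables that index
# a dependent measure (here: a mean score). Values are illustrative.

cube = {
    ("40-49", "F"): 27.1,
    ("40-49", "M"): 26.4,
    ("50-59", "F"): 25.8,
}

# Accessing a measure via its dimension coordinates:
value = cube[("50-59", "F")]
print(value)  # -> 25.8
```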
Q10. What is the standard way to organize the data into a single cube?
The standard way to organize these data into a single data-cube would be to build a denormalized cube characterized by all the dimensions in the dataset, which would contain all the data.
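A small sketch of why such a denormalized cube contains all the data: it allocates one cell for every combination of values across all dimensions. The dimensions shown are hypothetical:

```python
from itertools import product

# Sketch of a denormalized cube: one cell per combination of ALL
# dimension values. Dimension names and domains are illustrative.
dimensions = {
    "age_group": ["40-49", "50-59"],
    "sex": ["F", "M"],
    "timepoint": ["wave1", "wave2"],
}

# Every coordinate combination gets a cell (to be filled with measures).
cells = {coord: None for coord in product(*dimensions.values())}
print(len(cells))  # 2 * 2 * 2 = 8 cells
```

The cell count grows multiplicatively with each added dimension, which is why such a single cube covers the whole dataset.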
Q11. What is the way to automate the analysis?
In some cases, however, the analysis can benefit from automating certain steps, like repeating selected tests or analyses using a scheme defined by the user on different data, or with varying parameters or methods.
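A minimal sketch of that kind of automation: a user-defined analysis re-applied over a grid of parameter values. The analysis function (a trimmed mean) and the data are illustrative assumptions:

```python
# Sketch of automating repeated analyses: a user-defined scheme
# (here: a trimmed mean with varying trim counts) is re-applied
# automatically for each parameter value. All values are illustrative.

def trimmed_mean(values, trim):
    """Mean after dropping the `trim` smallest and `trim` largest values."""
    kept = sorted(values)[trim: len(values) - trim]
    return sum(kept) / len(kept)

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # contains one outlier

# Repeat the same analysis over a parameter grid.
results = {trim: trimmed_mean(data, trim) for trim in (0, 1)}
print(results)  # -> {0: 22.0, 1: 3.0}
```

With `trim=1` the outlier no longer dominates the result, illustrating how varying a parameter across automated runs can reveal its effect.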
Q12. What is the common anatomical data that is often used to explain the anatomy?
While analyses are often performed on the quantitative measures, it also becomes necessary to occasionally fetch and inspect the related anatomical data, to explain, for example, data outliers, or to see what effects certain conditions have on the anatomy.
Q13. How do the authors encode the values of the fibers?
The aggregated values are then encoded, upon normalization, by modifying the color saturation of each fiber segment (high values resulting in high saturation).
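A minimal sketch of this encoding, assuming a simple min-max normalization to [0, 1] and a fixed hue/value; the aggregated values and the chosen hue are illustrative, not the paper's:

```python
import colorsys

# Sketch of the described encoding: normalize aggregated per-segment
# values to [0, 1] and use the result as the colour's saturation, so
# high values yield highly saturated fibre segments. Hue and value
# are held fixed here; all numbers are illustrative.

values = [0.2, 1.4, 3.0]  # hypothetical aggregated fibre-segment values
vmin, vmax = min(values), max(values)
saturations = [(v - vmin) / (vmax - vmin) for v in values]

HUE = 0.6  # fixed blue-ish hue, chosen arbitrarily for the sketch
colors = [colorsys.hsv_to_rgb(HUE, s, 1.0) for s in saturations]
print(saturations)  # smallest value maps to 0.0, largest to 1.0
```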
Q14. How do you generate plots of the results?
Plots of the results are generated either using custom scripts, or by importing the results into applications that can plot the data.
Q15. How do the authors present statistics in physical space?
To present selections and statistical information in physical space, the authors employ a brain atlas onto which aggregated statistics can be mapped.
Q16. What is the correlation between the Stroop task scores and the cortical thickness?
At this point the authors also wonder whether any relation between the Stroop task scores and the cortical thickness is present in the data, as thickness is another measure that has been shown to correlate with the level of cognitive functioning [3].
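The kind of relation asked about here can be quantified with a Pearson correlation coefficient between the two per-subject measures. The sketch below uses made-up numbers, not the study's Stroop or thickness data:

```python
from math import sqrt

# Sketch: Pearson correlation between two per-subject measures.
# All data below are hypothetical, for illustration only.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

stroop = [40, 42, 45, 50, 55]           # hypothetical task scores
thickness = [2.9, 2.8, 2.7, 2.5, 2.4]   # hypothetical thickness (mm)
r = pearson(stroop, thickness)
print(round(r, 3))  # strongly negative for these made-up values
```

A value of r near -1 or +1 would suggest a relation worth examining further (with appropriate significance testing), while values near 0 would not.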