
Showing papers in "IEEE Computer Graphics and Applications in 2020"


Journal ArticleDOI
TL;DR: The significance and technical challenges of applying FL in vehicular IoT, and future research directions are discussed, and a brief survey of existing studies on FL and its use in wireless IoT is conducted.
Abstract: Federated learning (FL) is a distributed machine learning approach that enables collaborative learning from large amounts of data belonging to different parties without sharing the raw data among the data owners. FL can fully utilize the computing capabilities of multiple learning agents to improve learning efficiency while providing a better privacy solution for the data owners. FL has attracted tremendous interest from many industries due to growing privacy concerns. Future vehicular Internet of Things (IoT) systems, such as cooperative autonomous driving and intelligent transport systems (ITS), feature a large number of devices and privacy-sensitive data, and their communication, computing, and storage resources must be efficiently utilized. FL could be a promising approach to address these challenges. In this paper, we first conduct a brief survey of existing studies on FL and its use in wireless IoT. Then, we discuss the significance and technical challenges of applying FL in vehicular IoT and point out future research directions.

194 citations
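As an aside to the survey above: the federated averaging idea at the heart of most FL systems can be sketched in a few lines. The snippet below is a generic, minimal FedAvg round in Python/NumPy under assumed toy data and a linear model; it is not code from, or specific to, the surveyed paper.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model,
    using only that client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One FedAvg round: each client trains locally; the server averages the
    returned weights, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy example with three "vehicles", each holding its own local data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print("estimated weights:", w)
```

Each round, only model weights travel between the clients and the server; the raw data stays local, which is the privacy property the abstract emphasizes.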


Journal ArticleDOI
TL;DR: In this paper, the authors advocate a personalized federated learning framework in a cloud-edge architecture for intelligent IoT applications, which is able to mitigate the negative effects caused by heterogeneities in different aspects.
Abstract: The Internet of Things (IoT) has widely penetrated different aspects of modern life, and many intelligent IoT services and applications are emerging. Recently, federated learning has been proposed to train a globally shared model by exploiting the massive amount of user-generated data samples on IoT devices while preventing data leakage. However, the device, statistical, and model heterogeneities inherent in complex IoT environments pose great challenges to traditional federated learning, making it unsuitable for direct deployment. In this paper, we advocate a personalized federated learning framework in a cloud-edge architecture for intelligent IoT applications. To cope with the heterogeneity issues in IoT environments, we investigate emerging personalized federated learning methods that are able to mitigate the negative effects caused by heterogeneities in different aspects. With the power of edge computing, the requirements for fast processing and low latency in intelligent IoT applications can also be met. We finally provide a case study of IoT-based human activity recognition to demonstrate the effectiveness of personalized federated learning for intelligent IoT applications.

158 citations
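One common way to realize the personalization described above, offered here as an assumed illustration rather than the authors' specific method, is to fine-tune the shared global model locally on each device after federated training. A minimal, self-contained sketch with a hypothetical linear model and two devices:

```python
import numpy as np

def sgd_steps(w, X, y, lr=0.1, epochs=5):
    """A few gradient steps on a linear regression model (squared loss)."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def personalize(global_w, client_data, lr=0.05, epochs=10):
    """Per-device personalization: start from the shared global model and
    fine-tune locally, so each device keeps a model adapted to its own
    (statistically heterogeneous) data distribution."""
    return {cid: sgd_steps(global_w, X, y, lr, epochs)
            for cid, (X, y) in client_data.items()}

# Hypothetical example: two devices whose data follow different weights.
rng = np.random.default_rng(1)
data = {}
for cid, true_w in {"phone": np.array([1.0, 2.0]),
                    "watch": np.array([1.5, 0.5])}.items():
    X = rng.normal(size=(40, 2))
    data[cid] = (X, X @ true_w + 0.1 * rng.normal(size=40))

global_w = np.array([1.25, 1.25])          # e.g., result of federated training
personal = personalize(global_w, data)
for cid, w in personal.items():
    print(cid, "personalized weights:", np.round(w, 2))
```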


Journal ArticleDOI
TL;DR: Four research topics are discussed that, individually and collaboratively, would help the visualization research community reach broader audiences with data visualization, making data more accessible.
Abstract: The visualization research community can and should reach broader audiences beyond data-savvy groups of people, because these audiences could also greatly benefit from visual access to data. In this article, we discuss four research topics—personal data visualization, data visualization on mobile devices, inclusive data visualization, and multimodal interaction for data visualization—that, individually and collaboratively, would help us reach broader audiences with data visualization, making data more accessible.

52 citations


Journal ArticleDOI
TL;DR: HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether published or not, coming from teaching and research institutions in France or abroad, or from public or private research centers.
Abstract: HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Article: “Data Physicalization,” by Trevor Hogan, Uta Hinrichs, Samuel Huron, Jason Alexander, and Yvonne Jansen.

28 citations


Journal ArticleDOI
TL;DR: A study compares the effectiveness of virtual and physical training for teaching a bimanual assembly task and, in a novel approach, introduces task complexity (T$_{\mathrm{CXB}}$) as an indicator of assembly errors during final assembly.
Abstract: Virtual immersive training (VIT) systems based on gamification of tasks are increasingly employed to train assembly workers. In this article, we present a study that compares the effectiveness of virtual and physical training for teaching a bimanual assembly task and, in a novel approach, we introduce task complexity (T$_{\mathrm{CXB}}$) as an indicator of assembly errors during final assembly. In a between-subjects experiment, 100 participants were trained to assemble a 3-D cube in one of four conditions (physical or virtual training, each with or without the trainer's verbal assistance). The results demonstrate that the best-performing conditions, both in terms of successful assemblies and time performance, are those in which physical objects are included in the training, whereas no significant difference is found between the presence and absence of the trainer's verbal assistance during training. Additionally, we address the validity of a practical T$_{\mathrm{CXB}}$ list as a tool for supporting the design of VIT systems.

21 citations


Journal ArticleDOI
TL;DR: Key findings of the NASEM report and of a panel discussion at IEEE VIS 2019 are summarized, and a set of recommendations for the visualization community is outlined.
Abstract: The American National Academies of Sciences, Engineering, and Medicine (NASEM) has recently released the report “Reproducibility and Replicability in Science.” The report has prompted discussions within many disciplines about the extent of the current adoption of reproducibility and replicability, the challenges involved in publishing reproducible results, as well as strategies for improvement. We organized a panel at the IEEE VIS 2019 conference to start a discussion on the reproducibility challenges faced by the visualization community and how those challenges might be addressed. In this viewpoint, we summarize key findings of the NASEM report and the panel discussion, and outline a set of recommendations for the visualization community.

19 citations


Journal ArticleDOI
TL;DR: Challenges and strategies for evaluating natural language interfaces for data visualization are discussed, including the development of new evaluation metrics and consideration of the type of natural language input (spoken versus typed), among others.
Abstract: In this article, we discuss challenges and strategies for evaluating natural language interfaces (NLIs) for data visualization. Through an examination of prior studies and reflection on our own experiences in evaluating visualization NLIs, we highlight benefits and considerations of three task framing strategies: Jeopardy-style facts, open-ended tasks, and target replication tasks. We hope the discussions in this article can guide future researchers working on visualization NLIs and help them avoid common challenges and pitfalls when evaluating these systems. Finally, to motivate future research, we highlight topics that call for further investigation, including the development of new evaluation metrics and consideration of the type of natural language input (spoken versus typed), among others.

16 citations


Journal ArticleDOI
TL;DR: This work builds an introduction to visualization for the digital humanities that rests on a data-driven stance adopted by the authors, and analyzes paper keywords to identify significant themes and research opportunities in the field.
Abstract: The newly rediscovered frontier between data visualization and the digital humanities has proven to be an exciting field of experimentation for scholars from both disciplines. This fruitful collaboration is attracting researchers from other areas of science who may be willing to create visual analysis tools that promote humanities research in its many forms. However, as the collaboration grows in complexity, it may become intimidating for these scholars to get engaged in the discipline. To facilitate this task, we have built an introduction to visualization for the digital humanities that rests on a data-driven stance adopted by the authors. In order to construct a dataset representative of the discipline, we analyze citations from a core corpus of 300 publications in visualization for the humanities obtained from recent editions of the InfoVis Vis4DH workshop, the ADHO Digital Humanities Conference, and the specialized digital humanities journal Digital Humanities Quarterly. From here, we extract referenced works and analyze more than 1900 publications in search of citation patterns, prominent authors in the field, and other interesting insights. Finally, following the path set by other researchers in the visualization and Human–Computer Interaction (HCI) communities, we analyze paper keywords to identify significant themes and research opportunities in the field.

16 citations
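The keyword-analysis step mentioned in the abstract can be illustrated with a small, generic sketch; the corpus records and field names below are made up for the example and are not the authors' dataset or pipeline.

```python
from collections import Counter
from itertools import combinations

# Hypothetical corpus: each record lists the author keywords of one publication.
corpus = [
    {"title": "Paper A", "keywords": ["digital humanities", "text visualization"]},
    {"title": "Paper B", "keywords": ["digital humanities", "network analysis"]},
    {"title": "Paper C", "keywords": ["text visualization", "close reading"]},
]

keyword_counts = Counter()
cooccurrence = Counter()
for record in corpus:
    kws = sorted(set(k.lower() for k in record["keywords"]))
    keyword_counts.update(kws)
    cooccurrence.update(combinations(kws, 2))   # keyword pairs appearing together

print(keyword_counts.most_common(3))            # dominant themes
print(cooccurrence.most_common(3))              # frequently combined themes
```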


Journal ArticleDOI
Junhua Lu, Jie Wang, Hui Ye, Yuhui Gu, Zhiyu Ding, Mingliang Xu, Wei Chen
TL;DR: This article presents an approach that extracts and visualizes important changes in a time series, lets users explore and modify these changes and apply visual effects to them, and demonstrates the effectiveness and usability of the approach through case studies and user feedback.
Abstract: Understanding the changes in time-series data is a common task in many application domains. Converting time-series data into videos helps an audience with little or no background knowledge gain insights and deep impressions. It essentially integrates data visualizations and animations to present the evolution of data expressively. However, it remains challenging to create this kind of data video. First, it is difficult to efficiently detect important changes and include them in the video sequence; existing methods require much manual effort to explore the data and find changes. Second, how these changes are emphasized in the videos is also worth studying: a video without emphasis will hinder an audience from noticing those important changes. This article presents an approach that extracts and visualizes important changes in a time series. Users can explore and modify these changes and apply visual effects to them. Case studies and user feedback demonstrate the effectiveness and usability of our approach.

14 citations
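As a rough, assumed illustration of the "detect important changes" step (not the algorithm used in the article above), a sliding-window comparison can flag points where a time series shifts noticeably:

```python
import numpy as np

def detect_changes(series, window=10, z_thresh=3.0):
    """Flag indices where the mean of the next window differs from the mean
    of the previous window by more than z_thresh times the local spread.
    A crude stand-in for the change detection a data-video tool might use."""
    changes = []
    for i in range(window, len(series) - window):
        before = series[i - window:i]
        after = series[i:i + window]
        spread = np.std(before) + 1e-9
        if abs(np.mean(after) - np.mean(before)) > z_thresh * spread:
            changes.append(i)
    return changes

# Toy series: flat, then a jump at index 50.
series = np.concatenate([np.random.normal(0, 0.2, 50),
                         np.random.normal(3, 0.2, 50)])
print(detect_changes(series))   # indices near the jump get flagged
```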


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method that automatically generates a target typographic font containing thousands of characters from a small subset of character images in the target font by generating characters other than the subset so that a complete font is obtained.
Abstract: The automated generation of fonts containing a large number of characters is in high demand. For example, a typical Japanese font requires over 1000 characters. Unfortunately, professional typographers create the majority of fonts, resulting in significant financial and time investments for font generation. The main contribution of this article is the development of a method that automatically generates a target typographic font containing thousands of characters, from a small subset of character images in the target font. We generate characters other than the subset so that a complete font is obtained. We propose a novel font generation method with the capability to deal with various fonts, including a font composed of distinctive strokes, which are difficult for existing methods to handle. We demonstrated the proposed method by generating 2965 characters in 47 fonts. Moreover, objective and subjective evaluations verified that the generated characters are similar to the original characters.

13 citations


Journal ArticleDOI
TL;DR: The Marching Cubes paper by Bill Lorensen and Harvey Cline, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm,” was published at SIGGRAPH 1987 and is the most highly cited paper in computer graphics.
Abstract: The Marching Cubes paper by Bill Lorensen and Harvey Cline, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm,” was published at SIGGRAPH 1987. According to Google Scholar, their paper has 15,667 citations (as of January 17, 2020), making it the most highly cited paper in computer graphics. Sadly, while writing this article Bill Lorensen passed away on December 12, 2019. Origins Department Editor Chris Johnson contributed the text in italics.

Journal ArticleDOI
TL;DR: The challenges and research gaps identified in an IEEE VIS workshop on the evaluation of IVML systems are described.
Abstract: In interactive visual machine learning (IVML), humans and machine learning algorithms collaborate to achieve tasks mediated by interactive visual interfaces. This human-in-the-loop approach to machine learning brings forth not only numerous intelligibility, trust, and usability issues, but also many open questions with respect to the evaluation of the IVML system, both as separate components, and as a holistic entity that includes both human and machine intelligence. This article describes the challenges and research gaps identified in an IEEE VIS workshop on the evaluation of IVML systems.

Journal ArticleDOI
TL;DR: An immersive virtual reality system for exploring spatial cultural heritage is presented, which utilizes digitized data from the Dunhuang Research Academy to represent the virtual environment of the cave.
Abstract: The Dunhuang Caves are home to the largest Buddhist art sites in the world and are listed as a UNESCO World Heritage Site. Over time, the murals have been damaged by both humans and nature. In this article, we present an immersive virtual reality system for exploring spatial cultural heritage, which utilizes the digitized data from the Dunhuang Research Academy to represent the virtual environment of the cave. In this system, the interaction techniques that allow users to flexibly experience any of the artifacts or displays contribute to their understanding of the cultural heritage. Additionally, we evaluated the system by conducting a user study to examine the extent of user acquaintance after the entire experience. Our results show what participants learn from the spatial context and augmented information in VR, which can serve as design considerations for developing other spatial heritage experiences.

Journal ArticleDOI
TL;DR: The hidden abilities of sound in interactive environments are discussed, including the emotional, subconscious, and subliminal impact and how different sounds can be combined with visual information to help improve interactive conditions and stimulate the imagination.
Abstract: How important is sound in an interactive environment? For example, what happens when we play a video game without sound? Does the game still have the same impact? Even if sight is the primary sense in interactive environments, sound is also important and should not be overlooked during the development process. The necessity of sound for perceptive quality enrichment in virtual environments should not be underestimated. However, how designers should integrate and leverage the benefits of sound design effectively in an interactive environment can be challenging. This short article discusses a variety of important and intriguing psychological concepts and immersive sound techniques used in interactive environments, such as video games, to improve engagement and enhance the experience (from passive background music to active and procedural sounds). Computer graphics has proven itself in many fields of entertainment and computing as a means for communicating and engaging users (visually). This article discusses the hidden abilities of sound in interactive environments (e.g., the emotional, subconscious, and subliminal impact). We explain how different sounds can be combined with visual information to help improve interactive conditions and stimulate the imagination, not to mention control (or steer) the user's emotions and attention.

Journal ArticleDOI
TL;DR: PixelClipper as discussed by the authors is a tool built for facilitating data engagement events through annotation and commenting capabilities, which allows viewers to quickly and expressively extract visual clippings from visualizations and add comments to them.
Abstract: In this article, we present PixelClipper, a tool built for facilitating data engagement events. PixelClipper supports conversations around visualizations in public settings through annotation and commenting capabilities. It is recognized that understanding data is important for an informed society. However, even when visualizations are available on the web, open data is not yet reaching all audiences. Public facilitated events centered around data visualizations may help bridge this gap. PixelClipper is designed to promote discussion and engagement with visualizations in public settings. It allows viewers to quickly and expressively extract visual clippings from visualizations and add comments to them. Ambient and facilitator displays attract attention by showing clippings. They function as entry points to the full visualizations while supporting deeper conversations about the visualizations and data. We describe the design goals of PixelClipper, share our experiences from deploying it, and discuss its future potential in supporting data visualization engagement events.

Journal ArticleDOI
TL;DR: This Viewpoint posits both what personal information could be taken into account, and how, to design more effective visual analytic systems, a valuable and under-explored direction.
Abstract: Interactive data exploration and analysis is an inherently personal process. One's background, experience, interests, cognitive style, personality, and other sociotechnical factors often shape such a process, as well as the provenance of exploring, analyzing, and interpreting data. This Viewpoint posits both what personal information could be taken into account, and how, to design more effective visual analytic systems, a valuable and under-explored direction.

Journal ArticleDOI
TL;DR: A deep learning model for motion primitive classification is introduced that combines a convolutional processing layer with a bidirectional analysis module and is less sensitive to the skeleton errors that occur with low-cost sensors such as Kinect, while simultaneously handling the high amount of detail when using RGB visual information.
Abstract: Serious games are receiving increasing attention in the field of cultural heritage (CH) applications. A special field of CH and education is intangible cultural heritage, and particularly dance. Machine learning (ML) tools are necessary elements for the success of a serious game platform since they introduce intelligence into the processing and analysis of users' interactivity. ML provides intelligent scoring and monitoring capabilities for the user's progress in a serious game platform. In this article, we introduce a deep learning model for motion primitive classification. The model combines a convolutional processing layer with a bidirectional analysis module. This way, RGB information is efficiently handled by the hierarchies of convolutions, while the bidirectional properties of a long short-term memory (LSTM) model are retained. The resulting convolutionally enhanced bidirectional LSTM (CEBi-LSTM) architecture is less sensitive to the skeleton errors that occur with low-cost sensors, such as Kinect, while simultaneously handling the high amount of detail when using RGB visual information.
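A minimal PyTorch sketch of the general "convolutional features followed by a bidirectional LSTM" pattern the abstract describes; the layer sizes, input shapes, and classifier head are illustrative assumptions, not the CEBi-LSTM specification.

```python
import torch
import torch.nn as nn

class ConvBiLSTMClassifier(nn.Module):
    """Per-frame convolutional features fed into a bidirectional LSTM,
    followed by a linear classifier over motion-primitive classes."""
    def __init__(self, n_classes=10, feat_dim=64, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(              # tiny per-frame CNN (assumed)
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, clips):                   # clips: (B, T, 3, H, W)
        B, T = clips.shape[:2]
        feats = self.conv(clips.flatten(0, 1)).view(B, T, -1)
        seq, _ = self.lstm(feats)               # (B, T, 2*hidden)
        return self.head(seq[:, -1])            # classify from last timestep

model = ConvBiLSTMClassifier()
dummy = torch.randn(2, 16, 3, 64, 64)           # 2 clips of 16 RGB frames each
print(model(dummy).shape)                       # torch.Size([2, 10])
```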

Journal ArticleDOI
TL;DR: Global illumination refers to a complete shading model that simulates real lighting and reflection as accurately as possible to match the appearance of the real world.
Abstract: Global illumination refers to a complete shading model that simulates real lighting and reflection as accurately as possible. Whether used for product prototyping or special effects for entertainment, the goal is to match the appearance of the real world. The origins of global illumination come at the intersection of a steady progression of shading models with the ancient simulation technique of ray tracing.

Journal ArticleDOI
TL;DR: A point-based rendering approach visualizes scan data as a dense splat cloud, and a virtual texturing system that dynamically loads required image data is presented, paired with a single-pass page prediction method that minimizes visible texturing artifacts.
Abstract: Digitalization of three-dimensional (3-D) objects and scenes using modern depth sensors and high-resolution RGB cameras enables the preservation of human cultural artifacts at an unprecedented level of detail. Interactive visualization of these large datasets, however, is challenging without degradation in visual fidelity. A common solution is to fit the dataset into available video memory by downsampling and compression. The achievable reproduction accuracy is thereby limited for interactive scenarios, such as immersive exploration in virtual reality (VR). This degradation in visual realism ultimately hinders the effective communication of human cultural knowledge. This article presents a method to render 3-D scan datasets with minimal loss of visual fidelity. A point-based rendering approach visualizes scan data as a dense splat cloud. For improved surface approximation of thin and sparsely sampled objects, we propose oriented 3-D ellipsoids as rendering primitives. To render massive texture datasets, we present a virtual texturing system that dynamically loads required image data. It is paired with a single-pass page prediction method that minimizes visible texturing artifacts. Our system renders a challenging dataset in the order of 70 million points and a texture size of 1.2 TB consistently at 90 frames per second in stereoscopic VR.

Journal ArticleDOI
TL;DR: This work collected a dataset of 5000 streets across the cities of Paris, Vienna, London, and New York, and built their cultural maps grounded on cartographic storytelling techniques to demonstrate how cultural maps engage users and allow them to discover distinct patterns in the ways these cities are gender-biased, celebrate various professions, and embrace foreign cultures.
Abstract: Throughout history, maps have been used as a tool to explore cities. They visualize a city's urban fabric through its streets, buildings, and points of interest. Besides purely navigational purposes, street names also reflect a city's culture through its commemorative practices. Therefore, cultural maps that unveil socio-cultural characteristics encoded in street names could potentially raise citizens' historical awareness. But designing effective cultural maps is challenging, not only due to data scarcity but also due to the lack of effective approaches to engage citizens with data exploration. To address these challenges, we collected a dataset of 5000 streets across the cities of Paris, Vienna, London, and New York, and built their cultural maps grounded in cartographic storytelling techniques. Through data exploration scenarios, we demonstrated how cultural maps engage users and allow them to discover distinct patterns in the ways these cities are gender-biased, celebrate various professions, and embrace foreign cultures.

Journal ArticleDOI
TL;DR: A neural network model that segments sketched symbols into stroke-level components and surpasses existing methodologies on an available small state-of-the-art dataset is proposed.
Abstract: Hand-drawn objects usually consist of multiple semantically meaningful parts. In this article, we propose a neural network model that segments sketched symbols into stroke-level components. Our segmentation framework has two main elements: a fixed feature extractor and a multilayer perceptron (MLP) network that identifies a component based on the feature. As the feature extractor we utilize an encoder of a stroke-rnn, which is our newly proposed generative variational auto-encoder (VAE) model that reconstructs symbols on a stroke-by-stroke basis. Experiments show that a single encoder could be reused for segmenting multiple categories of sketched symbols with negligible effects on segmentation accuracies. Our segmentation scores surpass existing methodologies on an available small state-of-the-art dataset. Moreover, extensive evaluations on our newly annotated big dataset demonstrate that our framework obtains significantly better accuracies as compared to baseline models. We release the dataset to the community.
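The "fixed feature extractor plus MLP" setup described above follows a familiar pattern: freeze a pretrained encoder and train only a small classifier head on its features. Below is a PyTorch sketch under assumed dimensions; the stand-in GRU encoder is hypothetical and not the paper's stroke-rnn VAE.

```python
import torch
import torch.nn as nn

class StrokeEncoder(nn.Module):
    """Stand-in for a pretrained stroke-sequence encoder; here just a GRU
    over (dx, dy, pen) points, returning a fixed-size feature per stroke."""
    def __init__(self, latent=128):
        super().__init__()
        self.rnn = nn.GRU(3, latent, batch_first=True)

    def forward(self, strokes):                 # (B, n_points, 3)
        _, h = self.rnn(strokes)
        return h.squeeze(0)                     # (B, latent)

encoder = StrokeEncoder()
encoder.requires_grad_(False)                   # fixed feature extractor

mlp_head = nn.Sequential(                       # trained for the segmentation task
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 5),                           # 5 hypothetical component labels
)

strokes = torch.randn(4, 30, 3)                 # 4 strokes, 30 points each
with torch.no_grad():
    feats = encoder(strokes)
logits = mlp_head(feats)                        # component prediction per stroke
print(logits.shape)                             # torch.Size([4, 5])
```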

Journal ArticleDOI
TL;DR: The relatively underexplored potential for physicalizations to materialize qualitative data related to human experiences and knowledge domains is explored, with detailed attention to the characteristics of materials and how they are combined, in the context of interdisciplinary exchange.
Abstract: This article explores the relatively underexplored potential for physicalizations to materialize qualitative data related to human experiences and knowledge domains. Our reading of “data” in this context extends from imperceptible systems and infrastructures to mental models and the phenomenological dimensions of experiences themselves. Physical objects can be regarded as a form of knowledge with which to inquire about human life, bring about improved conditions, and imagine alternative realities. Objects are made of materials, which are manipulated materials into various configurations. The materials used in the process of externalization have a profound influence on the resulting forms, and through them on how knowledge is constructed and internalized. We pay detailed attention to the characteristics of materials and how they are combined, in the context of interdisciplinary exchange. We are motivated by the need for a shared understanding of what work materials can do in the making of physicalisations. We suggest this work is useful in the analysis of physicalizations, specifically where they seek to articulate the phenomena of lived experience.

Journal ArticleDOI
TL;DR: A multiregional convolutional neural network is presented to classify the rubbings of oracle bones, distinguishing tortoise shell from animal bone.
Abstract: The classification of oracle bone materials is one of the most basic aspects of oracle bone morphology. However, classification that depends on experts' experience requires long-term learning and accumulation of professional knowledge. This article presents a multiregional convolutional neural network to classify the rubbings of oracle bones. First, we detect the “shield grain” and “tooth grain” on the oracle bone rubbings and then divide an oracle bone image into multiple regions. Second, a convolutional neural network is used to extract features from each region, and the multiple local features are fused. Finally, the image is classified as tortoise shell or animal bone. Using oracle bone images provided by experts, we conducted an experiment; the results show that our method achieves better classification accuracy. This contributes to progress in the study of oracle bone morphology.
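A rough sketch of the multi-region idea: crop several regions of the rubbing, run each through a shared CNN, and fuse the local features before classification. The region choices, crop sizes, and layer widths below are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiRegionCNN(nn.Module):
    """Shared CNN applied to several image regions; local features are
    concatenated (fused) and classified as tortoise shell vs. animal bone."""
    def __init__(self, n_regions=3, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(n_regions * feat_dim, 2)

    def forward(self, regions):                 # list of (B, 1, H, W) crops
        fused = torch.cat([self.cnn(r) for r in regions], dim=1)
        return self.classifier(fused)

model = MultiRegionCNN()
# Hypothetical crops: whole rubbing plus two detected grain regions.
crops = [torch.randn(8, 1, 96, 96) for _ in range(3)]
print(model(crops).shape)                       # torch.Size([8, 2])
```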

Journal ArticleDOI
TL;DR: Regular Grid Global Illumination (ReGGI), a distributed rendering pipeline that eliminates response lag and provides cloud-based dynamic GI for low-powered devices such as smartphones and the class of devices typically used in untethered VR headsets, is presented.
Abstract: Real-time high-fidelity rendering requires the use of expensive high-end hardware, even when rendering moderately complex scenes. Interactive streaming services and cloud gaming have somewhat mitigated the problem at the cost of response lag. In this article, we present Regular Grid Global Illumination (ReGGI), a distributed rendering pipeline that eliminates response lag and provides cloud-based dynamic GI for low-powered devices such as smartphones and the class of devices typically used in untethered VR headsets. Results show that ReGGI is scalable, has low bandwidth requirements, and produces images of comparable quality to instant radiosity.

Journal ArticleDOI
TL;DR: A survey on the role of digital human-like characters in virtual worlds, both as counterparts of real human users and as embodied agents driven by artificial intelligence is presented.
Abstract: As human beings, we are so used to interacting with each other that any world without humans would feel alien to us, including digital ones. In this article, we present a survey on the role of digital human-like characters in virtual worlds, both as counterparts of real human users and as embodied agents driven by artificial intelligence. The main issues related to 3-D graphics, physics, animation, and behavioral modeling are introduced, suggesting wherever available different alternatives and related development pipelines. A sizeable list of examples illustrating the use of virtual humans in different application sectors is then presented, focusing in particular on four domains: environmental design, training, cultural heritage, and healthcare.

Journal ArticleDOI
TL;DR: This work proposes an interactive video completion method that enables the user to complete a video by drawing holes, immediately removing objects from the video, and that successfully interpolates sparse modifications initialized by the designer.
Abstract: We propose an interactive video completion method aiming for practical use in a digital production workplace. The results of earlier automatic solutions often require a considerable amount of manual modification to make them usable in practice. To reduce such laborious work, our method offers an efficient editing tool. Our iterative algorithm estimates the flow fields and colors in space-time holes in the video. As in earlier approaches, our algorithm uses an $L^1$ data term to estimate flow fields. However, we employ a novel $L^2$ data term to estimate temporally coherent color transitions. Our graphics processing unit implementation enables the user to interactively complete a video by drawing holes and immediately remove objects from the video. In addition, our method successfully interpolates sparse modifications initialized by the designer. According to our subjective evaluation, the videos completed with our method look significantly better than those produced with other state-of-the-art approaches.
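To make the two data terms concrete, a generic form of such an objective over a spatiotemporal hole $\Omega$ might look as follows; this is a schematic reconstruction under assumed notation, not the exact energy from the paper.

$$
E_{\mathrm{flow}}(u) = \sum_{p \in \Omega} \left\lVert u(p) - \hat{u}(p) \right\rVert_1, \qquad
E_{\mathrm{color}}(c) = \sum_{p \in \Omega} \left\lVert c(p) - \hat{c}\bigl(p + u(p)\bigr) \right\rVert_2^2,
$$

where $\hat{u}$ and $\hat{c}$ denote flow and color values propagated from the known regions; the robust $L^1$ term tolerates flow outliers, while the $L^2$ term favors smooth, temporally coherent color transitions along the estimated flow.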

Journal ArticleDOI
TL;DR: In this overview, considerations of time compression, parameter-driven simulation, modeling fidelity, instancing and duplication, interaction, artistic license, integrating feedback, and ongoing collaborative processes are highlighted.
Abstract: Virtual Angkor is a comprehensive, interactive, visual representation of life in the premodern Cambodian city of Angkor. It is the cumulative result of extensive computer graphics research underpinned by a broad investigation in the sources and oriented towards a clear set of educational goals. Virtual Angkor represents the potential for symbiotic platforms that draw on and merge artistic, historical, technical, and scientific expertise. In this overview, considerations of time compression, parameter-driven simulation, modeling fidelity, instancing and duplication, interaction, artistic license, integrating feedback, and ongoing collaborative processes are highlighted.

Journal ArticleDOI
TL;DR: The research and proposed solution in this paper calculate cloud service provider (CSP) trustworthiness levels and predict cloud service and cloud service level agreement (SLA) availability performance.
Abstract: Analytics firm Cyence estimated Amazon's four-hour cloud computing outage in 2017 “cost S&P 500 companies at least $150 million,” and traffic monitoring firm Apica claimed “54 of the top 100 online retailers saw site performance slump by at least 20 percent.” According to Ponemon, 2015 data center outages cost Fortune 1000 companies between $1.25 and $2.5 billion. Despite the potential risks, the cloud computing industry continues to grow. For example, the Internet of Things, which is projected to grow 266% between 2013 and 2020, will drive increased demand for cloud computing as data across multiple industries are collected and sent back to cloud data centers for processing. RightScale estimates enterprises will continue to increase cloud demand, with 85% having multi-cloud strategies. This growth and dependency will influence risk exposure and potential for impact (e.g., availability, performance, security, and financial impact). The research and proposed solution in this paper calculate cloud service provider (CSP) trustworthiness levels and predict cloud service and cloud service level agreement (SLA) availability performance. Evolving industry standards (e.g., NIST, ISO/IEC) for cloud SLAs and existing work regarding CSP trustworthiness will be leveraged as regression-based predictive models are constructed to analyze CSP cloud computing services, SLA performance, and CSP trustworthiness.
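A toy sketch of a regression-based predictive model in the spirit of the abstract above, using scikit-learn on made-up SLA records; the feature names, figures, and model choice are illustrative assumptions, not the paper's data or methodology.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly records per CSP: [promised SLA uptime %, past outages,
# years in operation] -> observed availability % in the following month.
X = np.array([
    [99.95, 2, 8],
    [99.99, 0, 12],
    [99.90, 5, 3],
    [99.95, 1, 10],
    [99.99, 3, 6],
])
y = np.array([99.93, 99.99, 99.80, 99.96, 99.95])

model = LinearRegression().fit(X, y)

# Predict availability for a new provider; a trustworthiness score could then
# be derived from the gap between promised and predicted availability.
candidate = np.array([[99.95, 4, 2]])
predicted = model.predict(candidate)[0]
print(f"predicted availability: {predicted:.3f}%")
print(f"gap vs. SLA promise:    {99.95 - predicted:+.3f} pp")
```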

Journal ArticleDOI
TL;DR: The Aquarius crew successfully executed a complex procedure taking place across four different task areas by using an AR application called ProtoSpace developed at the Jet Propulsion Laboratory.
Abstract: Augmented reality (AR) has the potential to help astronauts execute procedures in a quicker, more intuitive, and safer way. A key part of realizing these benefits has been the use of an undersea research facility—the Aquarius—that acts as an analog to the International Space Station to a certain extent. In a June 2019 mission, the Aquarius crew successfully executed a complex procedure taking place across four different task areas by using an AR application called ProtoSpace developed at the Jet Propulsion Laboratory. In this article, we share the detailed results of the study, lessons learned, and future work needed to further enable the enhancement of procedure execution through augmented reality.

Journal ArticleDOI
TL;DR: An interactive visualization exhibit is presented, and the authors reflect on how its immersive design and features contribute as a tool for science communication in light of the emerging paradigm of exploranation, offering some forward-looking perspectives about what this notion has to offer the domain.
Abstract: Public understanding of contemporary scientific issues is critical for the future of society. Public spaces, such as science centers, can impact the communication of science by providing active knowledge-building experiences of scientific phenomena. In contributing to this vision, we have previously developed an interactive visualization as part of a public exhibition about nano. We reflect on how the immersive design and features of the exhibit contribute as a tool for science communication in light of the emerging paradigm of exploranation, and offer some forward-looking perspectives about what this notion has to offer the domain.