Author

Amy Banic

Other affiliations: Idaho National Laboratory
Bio: Amy Banic is an academic researcher from the University of Wyoming. The author has contributed to research in topics: Virtual reality & Haptic technology. The author has an h-index of 4 and has co-authored 33 publications receiving 85 citations. Previous affiliations of Amy Banic include Idaho National Laboratory.

Papers
Proceedings ArticleDOI
23 Mar 2015
TL;DR: 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and works universally across various 3D platforms; the paper proposes a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch.
Abstract: 3D applications appear in every corner of life in the current technology era. There is a need for a ubiquitous 3D input device that works with many different platforms, from head-mounted displays (HMDs) to mobile touch devices, 3DTVs, and even Cave Automatic Virtual Environments. We present 3DTouch, a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. We envision that modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on. With 3DTouch, we attempt to bring 3D applications a step closer to users.
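The relative-positioning idea in the abstract — a 2D optical displacement on a touched surface, lifted into 3D using the IMU's orientation — can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the function names and the simple quaternion-rotation scheme are hypothetical.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    # Standard rotation identity: v' = v + 2 u x (u x v + w v)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def integrate_position(pos, q_imu, dx, dy):
    """Accumulate fingertip position: the optical sensor reports a 2D
    displacement (dx, dy) in the plane of the touched surface; the IMU
    orientation q_imu re-expresses it in world coordinates."""
    local = np.array([dx, dy, 0.0])    # displacement in the sensor plane
    world = quat_rotate(q_imu, local)  # lift into the world frame
    return pos + world

# With the identity orientation the sensor plane coincides with world XY,
# so a (0.5, 0.25) surface displacement moves the cursor by the same amount.
p = integrate_position(np.zeros(3), (1.0, 0.0, 0.0, 0.0), 0.5, 0.25)
```

A real device would additionally filter IMU drift and re-anchor the relative position on each touch-down; the sketch only shows the core frame transform.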

28 citations

Proceedings ArticleDOI
23 Mar 2019
TL;DR: The design of a high school summer course which uses the Visual Design Problem-based Learning Pedagogy using Virtual Environments as a strategy to teach computer science may have a positive impact on computer science education by increasing engagement, knowledge acquisition, and self-directed learning.
Abstract: In this paper, we present our design of a high school summer course which uses our Visual Design Problem-based Learning Pedagogy using Virtual Environments as a strategy to teach computer science. Students solved visual design problems by creating 3D sculptures in an online virtual environment. These creations were further explored and refined in immersive display systems, fostering embodied learning and remote peer presence and support. To achieve the desired design, students used programming and computing concepts, such as loops, to solve visual-design-centered problems (e.g., solving for composition, positive/negative space, balance), as opposed to computational problems first (e.g., create a loop, a fractal, randomized lines). We present results from a study conducted on three high school summer courses. We compared the use of our Visual Design Problem-based teaching strategy (students wrote code to solve challenges based on art and design principles) to a traditional strategy (students wrote code to demonstrate comprehension of computer science concepts). Our results showed that test scores were higher for students in our Visual Design Problem-based courses. This work may have a positive impact on computer science education by increasing engagement, knowledge acquisition, and self-directed learning.

9 citations

Book ChapterDOI
Amy Banic
22 Jun 2014
TL;DR: Preliminary data collection on natural user actions for volume selection in 3D data is presented, along with a research agenda and challenges for designing components for direct selection of volumes of data points.
Abstract: Visualization enables scientists to transform data in its raw form to a visual form that will facilitate discoveries and insights. Although there are advantages to displaying inherently 3-dimensional (3D) data in immersive environments, those advantages are hampered by the challenges involved in selecting volumes of that data for exploration or analysis. Selection involves the user identifying a set of points for a specific task. This paper presents preliminary data collection on natural user actions for volume selection. This paper also presents a research agenda outlining an extension for volume selection classification, as well as challenges in designing components for direct selection of volumes of data points.

8 citations

Proceedings ArticleDOI
22 Jul 2018
TL;DR: The results on task completion time, task performance, user experience, and feedback among multiple geographically distributed collaborators using different platforms for collaboration are presented.
Abstract: Collaboration among research scientists across multiple types of visualizations and platforms is useful to enhance scientific workflow and can lead to unique analysis and discovery. However, current analytic tools and visualization infrastructure lack sufficient capabilities to fully support collaboration across multiple types of visualizations, display/interactive systems, and geographically distributed researchers. We have combined, adapted, and enhanced several emerging immersive and visualization technologies into a novel collaboration system that provides scientists with the ability to connect with other scientists to work together across multiple visualization platforms (i.e. stereoscopic versus monoscopic), multiple datasets (i.e. 3-Dimensional versus 2-Dimensional data), and multiple visualization techniques (i.e. volumetric rendering versus 2D plots). We have demonstrated several use cases of this system in materials science, manufacturing, planning, and others. In one such use case, our collaboration system imports materials science data (i.e., a graphite billet) and enables multiple scientists to analyze and explore the density change of graphite across immersive and non-immersive systems, which helps in understanding potential structural problems in the material. We recruited scientists who work with the datasets we demonstrate in three use-case scenarios and conducted an experimental user study to evaluate our novel collaboration system on scientific visualization workflow effectiveness. In this paper, we present the results on task completion time, task performance, user experience, and feedback among multiple geographically distributed collaborators using different platforms for collaboration.

7 citations

Posted Content
TL;DR: This paper presents an off-the-shelf, low-cost prototype that leverages Augmented Reality technology to deliver a novel and interactive way of operating nearby office network devices using a mobile device.
Abstract: With the evolution of mobile devices, and smartphones in particular, comes the ability to create new experiences that enhance the way we see, interact with, and manipulate objects within the world that surrounds us. Using Augmented Reality technology, it is now possible to blend data from our senses and our devices in numerous ways that simply were not possible before. In the near future, when all office devices as well as your personal electronic gadgets are on a common wireless network, operating them using a universal remote controller would be possible. This paper presents an off-the-shelf, low-cost prototype that leverages Augmented Reality technology to deliver a novel and interactive way of operating nearby office network devices using a mobile device. We believe this type of system may provide benefits for controlling multiple integrated devices, visualizing interconnectivity, or utilizing visual elements to pass information from one device to another, and may be especially beneficial for controlling devices when interacting with them physically is difficult or poses danger or harm.

6 citations


Cited by
Patent
16 Mar 2016

454 citations

Proceedings ArticleDOI
16 Oct 2016
TL;DR: FaceTouch, a novel interaction concept for mobile Virtual Reality (VR) head-mounted displays (HMDs) that leverages the backside as a touch-sensitive surface, is presented, along with interaction techniques and three example applications that explore the FaceTouch design space.
Abstract: We present FaceTouch, a novel interaction concept for mobile Virtual Reality (VR) head-mounted displays (HMDs) that leverages the backside as a touch-sensitive surface. With FaceTouch, the user can point at and select virtual content inside their field of view by touching the corresponding location at the backside of the HMD, utilizing their sense of proprioception. This allows for rich interaction (e.g. gestures) in mobile and nomadic scenarios without having to carry additional accessories (e.g. a gamepad). We built a prototype of FaceTouch and conducted two user studies. In the first study, we measured the precision of FaceTouch in a display-fixed target selection task using three different selection techniques, showing a low error rate of 2% that indicates its viability for everyday usage. To assess the impact of different mounting positions on user performance, we conducted a second study. We compared three mounting positions of the touchpad (face, hand, and side), showing that mounting the touchpad at the back of the HMD resulted in a significantly lower error rate, lower selection time, and higher usability. Finally, we present interaction techniques and three example applications that explore the FaceTouch design space.
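The core mapping the abstract describes — a touch on the HMD's backside selecting the corresponding point in the user's field of view — can be sketched as a simple coordinate transform. This is a minimal illustrative sketch, not the FaceTouch implementation; the function name, pad dimensions, and the assumption that only the horizontal axis mirrors are hypothetical.

```python
def back_touch_to_display(tx, ty, pad_w, pad_h, disp_w, disp_h):
    """Map a touch at (tx, ty) on the backside touchpad to a point in
    the displayed field of view. The horizontal axis is mirrored because
    the user reaches around the HMD from behind; the vertical axis maps
    directly (top of the pad corresponds to top of the display)."""
    x = (1.0 - tx / pad_w) * disp_w  # mirror left/right
    y = (ty / pad_h) * disp_h        # top stays top
    return x, y

# Touching the pad's top-left corner selects the display's top-right corner.
pt = back_touch_to_display(0.0, 0.0, 100.0, 60.0, 1920.0, 1080.0)
```

The mirrored x-axis is the design-relevant detail: proprioception lets users hit targets behind their head, but only if the mapping matches their mental model of reaching "through" the display.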

93 citations

Journal Article
TL;DR: These techniques leverage the unique features of volumetric displays, including a 360° viewing volume that enables manipulation from any viewpoint around the display, as well as natural and accurate perception of true depth information in the displayed 3D scene.
Abstract: Volumetric displays provide interesting opportunities and challenges for 3D interaction and visualization, particularly when used in a highly interactive manner. We explore this area through the design and implementation of techniques for interactive direct manipulation of objects with a 3D volumetric display. Motion tracking of the user's fingers provides for direct gestural interaction with the virtual objects, through manipulations on and around the display's hemispheric enclosure. Our techniques leverage the unique features of volumetric displays, including a 360° viewing volume that enables manipulation from any viewpoint around the display, as well as natural and accurate perception of true depth information in the displayed 3D scene. We demonstrate our techniques within a prototype 3D geometric model building application.

85 citations

Journal ArticleDOI
TL;DR: Evidence that the perceived height and width of an American-football field goal post relates to the perceiver's kicking performance is presented and it is demonstrated that performance is a factor in size perception.
Abstract: Perception relates not only to the optical information from the environment but also to the perceiver's performance on a given task. We present evidence that the perceived height and width of an American-football field goal post relates to the perceiver's kicking performance. Participants who made more successful kicks perceived the field goal posts to be farther apart and perceived the crossbar to be closer to the ground compared with participants who made fewer kicks. Interestingly, the current results show perceptual effects related to performance only after kicking the football but not before kicking. We also found that the types of performance errors influenced specific aspects of perception. The more kicks that were missed left or right of the target, the narrower the field goal posts looked. The more kicks that were missed short of the target, the taller the field goal crossbar looked. These results demonstrate that performance is a factor in size perception.

72 citations

Proceedings ArticleDOI
02 May 2019
TL;DR: RotoSwype enables one-handed text-input without encumbering the hand with a device, a desirable quality in many scenarios, including virtual or augmented reality.
Abstract: We propose RotoSwype, a technique for word-gesture typing using the orientation of a ring worn on the index finger. RotoSwype enables one-handed text-input without encumbering the hand with a device, a desirable quality in many scenarios, including virtual or augmented reality. The method is evaluated using two arm positions: with the hand raised up with the palm parallel to the ground; and with the hand resting at the side with the palm facing the body. A five-day study finds both hand positions achieved speeds of at least 14 words-per-minute (WPM) with uncorrected error rates near 1%, outperforming previous comparable techniques.
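Word-gesture typing from ring orientation, as described above, reduces to mapping the ring's yaw and pitch (relative to a calibrated rest pose) onto a cursor over a virtual keyboard. The sketch below is an assumed mapping for illustration only, not the RotoSwype implementation; the gain value, tangent mapping, and keyboard dimensions are hypothetical.

```python
import math

def orientation_to_cursor(yaw, pitch, width=800.0, height=300.0, gain=600.0):
    """Map ring yaw/pitch (radians, relative to a calibrated rest pose)
    onto a virtual keyboard plane. A tangent mapping treats the ring as
    a pointer casting a ray onto the plane; the cursor is clamped to
    the keyboard bounds."""
    x = width / 2 + gain * math.tan(yaw)
    y = height / 2 - gain * math.tan(pitch)  # pitching up moves the cursor up
    x = min(max(x, 0.0), width)
    y = min(max(y, 0.0), height)
    return x, y

# The calibrated rest pose maps to the centre of the keyboard.
cx, cy = orientation_to_cursor(0.0, 0.0)
```

A gesture-typing decoder would then match the traced cursor path against word shapes; the sketch covers only the orientation-to-cursor stage.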

67 citations