
Showing papers on "User interface" published in 2012


Journal ArticleDOI
TL;DR: Glotaran is introduced as a Java-based graphical user interface to the R package TIMP, a problem-solving environment for fitting superposition models to multi-dimensional data; the interface features interactive and dynamic data inspection and interactive viewing of results.
Abstract: In this work the software application called Glotaran is introduced as a Java-based graphical user interface to the R package TIMP, a problem solving environment for fitting superposition models to multi-dimensional data. TIMP uses a command-line user interface for the interaction with data, the specification of models and viewing of analysis results. Instead, Glotaran provides a graphical user interface which features interactive and dynamic data inspection, easier -- assisted by the user interface -- model specification and interactive viewing of results. The interactivity component is especially helpful when working with large, multi-dimensional datasets as often result from time-resolved spectroscopy measurements, allowing the user to easily pre-select and manipulate data before analysis and to quickly zoom in to regions of interest in the analysis results. Glotaran has been developed on top of the NetBeans rich client platform and communicates with R through the Java-to-R interface Rserve. The background and the functionality of the application are described here. In addition, the design, development and implementation process of Glotaran is documented in a generic way.
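Glotaran's link to R goes through Rserve, a pattern that can be sketched independently of the application itself: a client pushes data into a running R session, evaluates an R expression remotely, and reads the result back. The sketch below uses the pyRserve Python client purely for illustration (Glotaran's actual bridge is Java); the variable names and the R expression are invented for the example.

```python
# Minimal sketch of the Rserve bridge pattern, assuming an R session is already
# running Rserve on localhost:6311 (e.g. started with: R -e 'library(Rserve); Rserve()').
# Illustrative only; Glotaran's actual bridge is written in Java.
import pyRserve

conn = pyRserve.connect(host="localhost", port=6311)
try:
    # Push a (hypothetical) data vector from the client into the R session.
    conn.r.trace = [0.1, 0.4, 0.9, 0.7, 0.3]
    # Evaluate an R expression remotely and pull the result back to the client.
    summary = conn.eval("c(mean(trace), sd(trace))")
    print("mean and sd computed in R:", summary)
finally:
    conn.close()
```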

994 citations


Patent
23 Oct 2012
TL;DR: In this article, a medical instrument is disclosed which includes a housing, at least one electrical contact, and a radio frequency (RF) generation circuit coupled to and operated by a battery and operable to generate an RF drive signal and to provide the RF drive signal to the at least one electrical contact.
Abstract: A medical instrument is disclosed. The medical instrument includes a housing, at least one electrical contact, a radio frequency (RF) generation circuit coupled to and operated by a battery and operable to generate an RF drive signal and to provide the RF drive signal to the at least one electrical contact, and a user interface supported by the housing. The user interface includes visual and audible feedback elements, wherein the state of the instrument can be determined by the state of the visual and audible feedback elements.

653 citations


Patent
21 Nov 2012
TL;DR: In this article, the authors present a system for building and validating an application (including e.g., various software versions and revisions, programming languages, code segments, among other examples) without any scripting required by a system user.
Abstract: Provided is a system for building and validating an application (including e.g., various software versions and revisions, programming languages, code segments, among other examples) without any scripting required by a system user. In one embodiment, an SDLC system is configured to construct a build and test environment by automatically analyzing a submitted project. The build environment is configured to assemble existing user code, for example, to generate an application to test. Code building can include any one or more of code compilation, assembly, and code interpretation. The system can include a user interface provided to clients, users, and/or customer environments to facilitate user interaction and control of build and test validation. The system can accept user specification of configurations that control the way the system runs the user's tests. The system can also provide flexible billing models for different customers.

449 citations


Journal ArticleDOI
TL;DR: Evaluations of AR experiences in an educational setting provide insights into how this technology can enhance traditional learning models and what obstacles stand in the way of its broader use.
Abstract: Evaluations of AR experiences in an educational setting provide insights into how this technology can enhance traditional learning models and what obstacles stand in the way of its broader use. A related video can be seen here: http://youtu.be/ndUjLwcBIOw. It shows examples of augmented reality experiences in an educational setting.

440 citations


Book
06 Dec 2012
TL;DR: Visualizing Argumentation is written by practitioners and researchers for colleagues working in collaborative knowledge media, educational technology and organizational sense-making, with particular emphasis on the usability and effectiveness of tools in different contexts.
Abstract: About the book: Computer Supported Argument Visualization is attracting attention across education, science, public policy and business. More than ever, we need sense-making tools to help negotiate understanding in the face of multi-stakeholder, ill-structured problems. In order to be effective, these tools must support human cognitive and discursive processes, and provide suitable representations, services and user interfaces. Visualizing Argumentation is written by practitioners and researchers for colleagues working in collaborative knowledge media, educational technology and organizational sense-making. It will also be of interest to theorists interested in software tools which embody different argumentation models. Particular emphasis is placed on the usability and effectiveness of tools in different contexts.

436 citations


Patent
04 Oct 2012
TL;DR: In this article, a universal card with a short range communication transceiver communicates with a mobile device, whose user interface and e-wallet application are used to program the card so that it emulates the function of a traditional card.
Abstract: Universal cards are used in place of all the other traditional cards which a person may want to carry. The universal card can include a short range communications transceiver to communicate with a mobile device. The mobile device can include a user interface and an e-wallet application so that the user can interface with the e-wallet application for programming the universal card via the short range communication link. Once programmed, the universal card emulates a function of a traditional card.

425 citations


Proceedings ArticleDOI
05 May 2012
TL;DR: A sample of existing work on shape-changing interfaces is reviewed, identifying eight types of shape that are transformed in various ways to serve both functional and hedonic design purposes.
Abstract: Shape change is increasingly used in physical user interfaces, both as input and output. Yet, the progress made and the key research questions for shape-changing interfaces are rarely analyzed systematically. We review a sample of existing work on shape-changing interfaces to address these shortcomings. We identify eight types of shape that are transformed in various ways to serve both functional and hedonic design purposes. Interaction with shape-changing interfaces is simple and rarely merges input and output. Three questions are discussed based on the review: (a) which design purposes may shape-changing interfaces be used for, (b) which parts of the design space are not well understood, and (c) why studying user experience with shape-changing interfaces is important.

387 citations


Proceedings ArticleDOI
10 Dec 2012
TL;DR: This work analyzes about 35 million check-ins made by Foursquare users in over 5 million venues across the globe, and proposes a set of features that aim to capture the factors that may drive users' movements, finding that the supervised methodology based on the combination of multiple features offers the highest levels of prediction accuracy.
Abstract: Mobile location-based services are thriving, providing an unprecedented opportunity to collect fine grained spatio-temporal data about the places users visit. This multi-dimensional source of data offers new possibilities to tackle established research problems on human mobility, but it also opens avenues for the development of novel mobile applications and services. In this work we study the problem of predicting the next venue a mobile user will visit, by exploring the predictive power offered by different facets of user behavior. We first analyze about 35 million check-ins made by about 1 million Foursquare users in over 5 million venues across the globe, spanning a period of five months. We then propose a set of features that aim to capture the factors that may drive users' movements. Our features exploit information on transitions between types of places, mobility flows between venues, and spatio-temporal characteristics of user check-in patterns. We further extend our study combining all individual features in two supervised learning models, based on linear regression and M5 model trees, resulting in a higher overall prediction accuracy. We find that the supervised methodology based on the combination of multiple features offers the highest levels of prediction accuracy: M5 model trees are able to rank in the top fifty venues one in two user check-ins, amongst thousands of candidate items in the prediction list.
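To make the ranking setup concrete, the sketch below scores candidate venues as a weighted combination of the kinds of features the paper describes (place-type transitions, mobility flows, spatio-temporal fit) and checks whether the true next venue lands in the top fifty. The feature values, weights, and linear combination are placeholders; the paper's strongest model is an M5 model tree, not this hand-weighted sum.

```python
# Illustrative next-venue ranking (placeholder features, not the paper's M5 model trees).
from dataclasses import dataclass

@dataclass
class Venue:
    venue_id: str
    category_transition_score: float  # e.g. P(next place type | current place type)
    global_flow_score: float          # how often users move current venue -> this venue
    spatio_temporal_score: float      # fit to the user's typical place/time patterns

def rank_candidates(candidates, weights=(0.4, 0.4, 0.2)):
    """Score each candidate venue as a weighted sum of its features and sort descending."""
    w_cat, w_flow, w_st = weights
    scored = [
        (w_cat * v.category_transition_score
         + w_flow * v.global_flow_score
         + w_st * v.spatio_temporal_score, v.venue_id)
        for v in candidates
    ]
    return [venue_id for _, venue_id in sorted(scored, reverse=True)]

def hit_at_k(ranking, true_next_venue, k=50):
    """Accuracy@k: did the venue the user actually visited appear in the top k?"""
    return true_next_venue in ranking[:k]
```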

341 citations


Journal ArticleDOI
TL;DR: Brain-computer interaction has already moved from assistive care to applications such as gaming, but improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.
Abstract: Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.

332 citations


Patent
07 Jan 2012
TL;DR: Systems and methods, including user interfaces, are described for implementing searches using contextual information associated with a Web page (or other document) that a user is viewing when a query is entered.
Abstract: Systems and methods, including user interfaces, are provided for implementing searches using contextual information associated with a Web page (or other document) that a user is viewing when a query is entered. The page includes a contextual search interface that has an associated context vector representing content of the page. When the user submits a search query via the contextual search interface, the query and the context vector are both provided to the query processor and used in responding to the query.
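One plausible reading of "context vector" is a term-weight vector for the page being viewed that gets blended with the query when scoring results. The sketch below illustrates that idea with a simple bag-of-words cosine score; the blending weight and vector representation are assumptions for illustration, not the patent's actual scoring method.

```python
# Illustrative blending of a search query with a page context vector (not the patented method).
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def score(doc_terms, query_terms, context_terms, alpha=0.7):
    """Blend query relevance with similarity to the page the user was viewing."""
    doc = Counter(doc_terms)
    return alpha * cosine(doc, Counter(query_terms)) + (1 - alpha) * cosine(doc, Counter(context_terms))

# Example: the context vector nudges an ambiguous query toward the page's topic.
docs = {"jaguar-car": "jaguar car engine speed".split(),
        "jaguar-cat": "jaguar cat habitat prey".split()}
query = ["jaguar"]
context = "car review engine horsepower".split()
print(max(docs, key=lambda d: score(docs[d], query, context)))  # -> "jaguar-car"
```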

331 citations


Patent
03 Aug 2012
TL;DR: In this article, a system, method, and computer program product are provided for a touch- or pressure-signal-based interface, and for modifying one or more objects in one or more memory devices, including a non-volatile memory.
Abstract: A system, method, and computer program product are provided for a touch or pressure signal-based interface. In operation, a touch or pressure signal is received in association with a touch interface of a device, and a user experience is altered utilizing the signal. A system, method, and computer program product are also provided for modifying one or more objects in one or more memory devices. In one embodiment, an apparatus is provided comprising one or more memory devices including a non-volatile memory. Additionally, the apparatus comprises circuitry including a first communication path for communicating with at least one processor, and a second communication path for communicating with at least one storage sub-system which operates slower than the one or more memory devices. Further, the circuitry is operable to modify one or more objects in the one or more memory devices.

Patent
Stephen C. Moore
03 May 2012
TL;DR: In this article, a user interface is described in which the user selects one type of user-interface action by "lightly" touching the screen and another type by exerting more pressure, with the input optionally compared against a stored gesture profile.
Abstract: Disclosed is a user interface that responds to differences in pressure detected by a touch-sensitive screen. The user selects one type of user-interface action by “lightly” touching the screen and selects another type of action by exerting more pressure. Embodiments can respond to single touches, to gestural touches that extend across the face of the touch-sensitive screen, and to touches in which the user-exerted pressure varies during the course of the touch. Some embodiments respond to how quickly the user changes the amount of pressure applied. In some embodiments, the location and pressure of the user's input are compared against a stored gesture profile. Action is taken only if the input matches “closely enough” to the stored gesture profile. In some embodiments, a notification is sent to the user when the pressure exceeds a threshold between a light and a heavy press.
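A rough sketch of the two mechanisms described, the light/heavy pressure threshold and the "closely enough" comparison against a stored gesture profile, is given below. The threshold, tolerance, and point-by-point comparison are assumptions for illustration, not the patent's actual implementation.

```python
# Illustrative light/heavy press classification and profile matching (not the patent's implementation).
LIGHT_HEAVY_THRESHOLD = 0.5   # assumed normalized pressure threshold

def classify_press(pressure: float) -> str:
    """Map a normalized pressure reading to a light or heavy press."""
    return "heavy" if pressure >= LIGHT_HEAVY_THRESHOLD else "light"

def matches_profile(samples, profile, tolerance=0.15):
    """Compare (x, y, pressure) samples against a stored gesture profile point by point.

    Returns True only if every sample is 'close enough' to its counterpart.
    """
    if len(samples) != len(profile):
        return False
    for (x, y, p), (px, py, pp) in zip(samples, profile):
        if abs(x - px) > tolerance or abs(y - py) > tolerance or abs(p - pp) > tolerance:
            return False
    return True
```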

Journal ArticleDOI
TL;DR: An evaluation of CQPweb against criteria earlier laid down for a future web-based corpus analysis tool suggests that it fulfils many, but not all, of the requirements foreseen for such a piece of software.
Abstract: CQPweb is a new web-based corpus analysis system, intended to address the conflicting requirements for usability and power in corpus analysis software. To do this, its user interface emulates the BNCweb system. Like BNCweb, CQPweb is built on two separate query technologies: the IMS Open Corpus Workbench and the MySQL relational database. CQPweb’s main innovative feature is its flexibility; its more generalised data model makes it compatible with any corpus. The analysis options available in CQPweb include: concordancing; collocations; distribution tables and charts; frequency lists; and keywords or key tags. An evaluation of CQPweb against criteria earlier laid down for a future web-based corpus analysis tool suggests that it fulfils many, but not all, of the requirements foreseen for such a piece of software. Despite some limitations, in making a sophisticated query system accessible to untrained users, CQPweb combines ease of use, power and flexibility to a very high degree.
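Two of the analysis options listed, frequency lists and keywords, are easy to illustrate in miniature. The sketch below builds a frequency list and ranks keywords with the standard log-likelihood statistic over in-memory token lists; CQPweb itself computes these against its CWB/MySQL back end, so this is only a toy illustration of the outputs, not of the system.

```python
# Toy frequency list and keyword scoring (log-likelihood), not CQPweb's CWB/MySQL back end.
import math
from collections import Counter

def frequency_list(tokens):
    """Word frequency list, most frequent first."""
    return Counter(tokens).most_common()

def log_likelihood(freq_target, size_target, freq_ref, size_ref):
    """Dunning's log-likelihood for a word's frequency in a target vs. a reference corpus."""
    expected_t = size_target * (freq_target + freq_ref) / (size_target + size_ref)
    expected_r = size_ref * (freq_target + freq_ref) / (size_target + size_ref)
    ll = 0.0
    if freq_target:
        ll += freq_target * math.log(freq_target / expected_t)
    if freq_ref:
        ll += freq_ref * math.log(freq_ref / expected_r)
    return 2 * ll

def keywords(target_tokens, reference_tokens, top=10):
    """Rank words that are unusually frequent in the target corpus."""
    t, r = Counter(target_tokens), Counter(reference_tokens)
    nt, nr = len(target_tokens), len(reference_tokens)
    scored = {w: log_likelihood(t[w], nt, r[w], nr) for w in t}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top]
```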

Proceedings ArticleDOI
09 Sep 2012
TL;DR: An evaluation of an interactive hybrid recommendation system that generates item predictions from multiple social and semantic web resources indicates that explanation and interaction with a visual representation of the hybrid system increase user satisfaction and relevance of predicted content.
Abstract: This paper presents an interactive hybrid recommendation system that generates item predictions from multiple social and semantic web resources, such as Wikipedia, Facebook, and Twitter. The system employs hybrid techniques from traditional recommender system literature, in addition to a novel interactive interface which serves to explain the recommendation process and elicit preferences from the end user. We present an evaluation that compares different interactive and non-interactive hybrid strategies for computing recommendations across diverse social and semantic web APIs. Results of the study indicate that explanation and interaction with a visual representation of the hybrid system increase user satisfaction and relevance of predicted content.
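The hybridization step, combining per-source predictions into one ranked list, can be sketched as a weighted sum with user-adjustable source weights, which is one common strategy in the recommender-systems literature and the kind of control an interactive interface could expose. The sources, weights, and scores below are invented for the example; this is not the paper's actual system.

```python
# Illustrative weighted hybrid recommendation (not the paper's actual system).
def hybrid_scores(per_source_scores, weights):
    """Combine item scores from several sources (e.g. Wikipedia, Facebook, Twitter)
    into one ranking using user-adjustable weights."""
    combined = {}
    for source, scores in per_source_scores.items():
        w = weights.get(source, 0.0)
        for item, score in scores.items():
            combined[item] = combined.get(item, 0.0) + w * score
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Example: an interactive UI could let the user raise one source's weight
# and immediately re-rank the recommendations.
sources = {
    "wikipedia": {"item_a": 0.9, "item_b": 0.4},
    "twitter":   {"item_b": 0.8, "item_c": 0.6},
}
print(hybrid_scores(sources, {"wikipedia": 0.5, "twitter": 0.5}))
```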

Proceedings ArticleDOI
07 Oct 2012
TL;DR: This work enables jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing using: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing.
Abstract: Malleable and organic user interfaces have the potential to enable radically new forms of interactions and expressiveness through flexible, free-form and computationally controlled shapes and displays. This work, specifically focuses on particle jamming as a simple, effective method for flexible, shape-changing user interfaces where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic-based technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing using: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming's potential in HCI, including applications for tabletops, tablets and for portable shape-changing mobile devices.

Patent
30 Dec 2012
TL;DR: A system is described for supervised automatic code generation and tuning for natural language interaction applications, comprising a build environment (with a developer user interface and automated coding, testing, and optimization tools) and an analytics framework software module.
Abstract: A system for supervised automatic code generation and tuning for natural language interaction applications, comprising a build environment comprising a developer user interface, automated coding tools, automated testing tools, and automated optimization tools, and an analytics framework software module. Text samples are imported into the build environment and automated clustering is performed to assign them to a plurality of input groups, each input group comprising a plurality of semantically related inputs. Language recognition rules are generated by automated coding tools. Automated testing tools carry out automated testing of language recognition rules and generate recommendations for tuning language recognition rules. The analytics framework performs analysis of interaction log files to identify problems in a candidate natural language interaction application. Optimizations to the candidate natural language interaction application are carried out and an optimized natural language interaction application is deployed into production and stored in the solution data repository.
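The automated clustering step, assigning imported text samples to input groups of semantically related inputs, can be prototyped with standard tools. Below is a minimal sketch assuming TF-IDF features and k-means via scikit-learn, an illustrative stand-in rather than the patented system's actual method.

```python
# Minimal clustering of text samples into input groups (TF-IDF + k-means);
# an assumption for illustration, not the patented system's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

samples = [
    "what is my account balance",
    "show me my balance please",
    "I forgot my password",
    "how do I reset my password",
]

vectors = TfidfVectorizer().fit_transform(samples)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for group, text in zip(labels, samples):
    print(group, text)  # samples with the same label form one input group
```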

Patent
04 May 2012
TL;DR: In this paper, various methods and apparatus are described for enabling one or more users to interface with virtual or augmented reality environments, including a computing network having computer servers interconnected through high bandwidth interfaces to gateways for processing data and/or for enabling communication of data between the servers and local user interface devices.
Abstract: Various methods and apparatus are described herein for enabling one or more users to interface with virtual or augmented reality environments. An example system includes a computing network having computer servers interconnected through high bandwidth interfaces to gateways for processing data and/or for enabling communication of data between the servers and one or more local user interface devices. The servers include memory, processing circuitry, and software for designing and/or controlling virtual worlds, as well as for storing and processing user data and data provided by other components of the system. One or more virtual worlds may be presented to a user through a user device for the user to experience and interact. A large number of users may each use a device to simultaneously interface with one or more digital worlds by using the device to observe and interact with each other and with objects produced within the digital worlds.

Patent
22 Feb 2012
TL;DR: In this article, the authors present a virtual channel environment where user interfaces accessible through the virtual channel present various functional options, including the selection or exploration of content having similarity or prescribed relationships to other content and the ability to order purchasable content.
Abstract: Network content delivery apparatus and methods based on content compiled from various sources and particularly selected for a given user. In one embodiment, the network comprises a cable television network, and the content sources include DVR, broadcast, nPVR, and VOD. The user-targeted content is assembled into a playlist, and displayed as a continuous stream on a virtual channel particular to that user. User interfaces accessible through the virtual channel present various functional options, including the selection or exploration of content having similarity or prescribed relationships to other content, and the ability to order purchasable content. An improved electronic program guide is also disclosed which allows a user to start over, record, view, receive information on, “catch up”, and rate content. Apparatus for remote access and configuration of the playlist and virtual channel functions, as well as a business rules “engine” implementing operational or business goals, are also disclosed.

Patent
17 Dec 2012
TL;DR: In this article, a method and system for assisting a user when interacting with a graphical user interface combines gaze-based input with gesture-based user commands, which can be used to achieve a touchscreen-like environment.
Abstract: A method and system for assisting a user when interacting with a graphical user interface combines gaze based input with gesture based user commands. A user of a computer system without a traditional touch-screen can interact with graphical user interfaces in a touch-screen like manner using a combination of gaze based input and gesture based user commands. A solution for touch-screen like interaction uses gaze input and gesture based input as a complement or an alternative to touch-screen interactions with a computer device having a touch-screen. Combined gaze and gesture based interaction with graphical user interfaces can be used to achieve a touchscreen like environment in computer systems without a traditional touchscreen or in computer systems having a touchscreen arranged ergonomically unfavorable for the user or a touchscreen arranged such that it is more comfortable for the user to use gesture and gaze for the interaction than the touchscreen.
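The combination described amounts to letting gaze pick the target and a gesture pick the action. A minimal sketch of that dispatch logic, with hypothetical widget and gesture names, is shown below; it illustrates the idea, not the patented system.

```python
# Illustrative gaze-plus-gesture dispatch (hypothetical names, not the patented system).
from dataclasses import dataclass

@dataclass
class Widget:
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def dispatch(widgets, gaze_point, gesture):
    """Route a gesture (e.g. 'tap', 'swipe_down') to whichever widget the user is looking at."""
    gx, gy = gaze_point
    target = next((w for w in widgets if w.contains(gx, gy)), None)
    return f"{gesture} on {target.name}" if target else None

widgets = [Widget("scrollbar", 900, 0, 24, 600), Widget("ok_button", 400, 500, 80, 30)]
print(dispatch(widgets, (410, 510), "tap"))  # touch-screen-like 'tap' without touching
```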

Journal Article
TL;DR: In this study, a small-scale aerial drone was used as a tool for exploring potential benefits to safety managers on the construction jobsite, leading to recommendations for the required features of an ideal safety inspection drone.
Abstract: The construction industry lags behind many others in the rate of adoption of cutting-edge technologies, and in the area of safety management this is even more so. Many advances in information technology could provide great benefits to this important aspect of construction operations, and innovative use of these tools could result in safer jobsites. This paper discusses an initial application of drone technology in the construction industry. In this study, a small-scale aerial drone was used as a tool for exploring potential benefits to safety managers within the construction jobsite. This drone is an aerial quadricopter that can be piloted remotely using a smart phone, tablet device or a computer. Since the drone is equipped with video cameras, it can provide safety managers with fast access to images as well as real-time video from a range of locations around the jobsite. An expert analysis (heuristic evaluation) as well as a user participation analysis were performed on the quadricopter to determine the features of an ideal safety inspection drone. The heuristic evaluation uncovered some of the user interface problems of the drone interface in the context of safety inspection. The user participation evaluation followed a simulated task of counting the number of hardhats viewed through the display of a mobile device in the controlled environment of the lab. Considering the task and the controlled variables, this experimental approach revealed that using the drone together with a large-size interface (e.g. an iPad) would be as accurate as having the safety manager in plain view of the jobsite. The results of these two evaluations, together with the authors' previous experience in safety inspection and drone technology, led to recommendations for the required features of an ideal safety inspection drone. Autonomous navigation, vocal interaction, high-resolution cameras, and a collaborative user-interface environment are some examples of those features. This innovative application of the aerial drone has the potential to improve construction practices and, in this case, to facilitate jobsite safety inspections.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: $P, a new member of the $-family, remedies the limitations of $N-Protractor by considering gestures as clouds of points; it delivers >99% accuracy in user-dependent testing with 5+ training samples per gesture type and stays above 99% in user-independent tests when using data from 10 participants.
Abstract: Rapid prototyping of gesture interaction for emerging touch platforms requires that developers have access to fast, simple, and accurate gesture recognition approaches. The $-family of recognizers ($1, $N) addresses this need, but the current most advanced of these, $N-Protractor, has significant memory and execution costs due to its combinatoric gesture representation approach. We present $P, a new member of the $-family, that remedies this limitation by considering gestures as clouds of points. $P performs similarly to $1 on unistrokes and is superior to $N on multistrokes. Specifically, $P delivers >99% accuracy in user-dependent testing with 5+ training samples per gesture type and stays above 99% for user-independent tests when using data from 10 participants. We provide a pseudocode listing of $P to assist developers in porting it to their specific platform and a "cheat sheet" to aid developers in selecting the best member of the $-family for their specific application needs.
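The point-cloud idea at the heart of $P can be sketched in a few lines: after the gesture and each template have been resampled to the same number of points and normalized for scale and position, each gesture point is greedily matched to its nearest unmatched template point and the weighted distances are summed. The sketch below shows only that core matching step (the full $P also repeats the match from several start indices and in both directions); it follows the published idea but is not the authors' pseudocode listing.

```python
# Core greedy point-cloud matching in the spirit of $P. Assumes the gesture and each
# template have already been resampled to the same number of points and normalized
# for scale and translation; the full recognizer also varies the matching start index.
import math

def greedy_cloud_match(points, template):
    """Match each gesture point to its nearest unmatched template point and
    return the sum of distances, weighting earlier matches more heavily."""
    n = len(points)
    matched = [False] * len(template)
    total = 0.0
    for i, p in enumerate(points):
        best_j, best_d = -1, float("inf")
        for j, t in enumerate(template):
            if not matched[j]:
                d = math.dist(p, t)
                if d < best_d:
                    best_j, best_d = j, d
        matched[best_j] = True
        total += (1 - i / n) * best_d   # earlier (more confident) matches count more
    return total

def recognize(points, templates):
    """Return the name of the stored template cloud with the lowest matching cost."""
    return min(templates, key=lambda name: greedy_cloud_match(points, templates[name]))
```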

Book ChapterDOI
01 Jan 2012
TL;DR: This chapter discusses the implementation of some key features of DOLFIN in detail and reviews the functionality of the C++/Python library.
Abstract: DOLFIN is a C++/Python library that functions as the main user interface of FEniCS. In this chapter, we review the functionality of DOLFIN. We also discuss the implementation of some key features of DOLFIN in detail.
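To give a flavour of the user interface the chapter describes, the sketch below solves a Poisson problem with the legacy dolfin Python module. It is a generic minimal example of the kind shown throughout the FEniCS book, not an excerpt from this chapter.

```python
# Flavour of the legacy DOLFIN Python interface (Poisson with homogeneous Dirichlet BCs);
# illustrative, not an excerpt from the chapter.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    Constant, DirichletBC, Function, dot, grad, dx, solve)

mesh = UnitSquareMesh(16, 16)                      # unit square, 16x16 cells
V = FunctionSpace(mesh, "Lagrange", 1)             # piecewise-linear elements

u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)
a = dot(grad(u), grad(v)) * dx                     # bilinear form
L = f * v * dx                                     # linear form
bc = DirichletBC(V, Constant(0.0), "on_boundary")  # u = 0 on the boundary

u_h = Function(V)
solve(a == L, u_h, bc)                             # assemble and solve
```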

Book ChapterDOI
01 Jan 2012
TL;DR: Due to the ever-wider spread of the Internet and increasing globalization, competition has become more international.
Abstract: In recent years, much has changed in the development of software products. Due to the ever-wider spread of the Internet and increasing globalization, competition has become more international.

Patent
04 Apr 2012
TL;DR: As discussed by the authors, a user interface can enable a user to control the display of content in a way that is natural for the user and requires little physical interaction; the gaze direction and/or viewing location of the user can be determined using any of a variety of imaging or other such technologies.
Abstract: A user interface can enable a user to control the display of content in a way that is natural for the user and requires little physical interaction. The gaze direction and/or viewing location of a user can be determined using any of a variety of imaging or other such technologies. By determining the location at which the user is gazing, an electronic device can control aspects such as the scroll rate or page turns of displayed content. In many cases, a device utilizes the natural reading or viewing style of a user to determine appropriate aspects for that user, and can update automatically as conditions change based at least in part upon the change in gaze location and/or viewing patterns.

Patent
13 Jul 2012
TL;DR: In this article, a first client device performs a handoff operation to a second client device by transmitting application information, associated with a first application, to the second device when the first device is positioned within a predefined proximity of the second client device.
Abstract: A first client device performs a handoff operation to a second client device by transmitting application information, associated with a first application, to the second client device when the first client device is positioned within a predefined proximity of the second client device. The first application has a first client device user interface state when the handoff operation is performed. In response to receiving the application information from the first client device or system, the second client device or system executes a second application corresponding to the first application with an initial user interface state corresponding to the first client device user interface state.

Patent
24 Feb 2012
TL;DR: In this paper, a method and portable electronic device are provided that present a user interface based upon detected input, such as a touch contact with the touch-sensitive display of the portable electronic device, resulting from displacement of a covering apparatus to uncover a portion of the display while the display is in a low power condition such as a sleep state.
Abstract: A method and portable electronic device are provided that presents a user interface based upon detected input, such as a touch contact with a touch-sensitive display of the portable electronic device, from displacement of a covering apparatus to uncover a portion of the display while the display is in a low power condition such as a sleep state. The information in the user interface that is displayed is determined at least in part by the extent of the displacement of the covering apparatus. The user of the device can peek at the user interface of the device and not have to completely uncover or remove the device from the covering apparatus to view particular types of information.

Patent
07 Sep 2012
TL;DR: In this article, a conversation user interface enables patients to better understand their healthcare by integrating diagnosis, treatment, medication management, and payment, through a system that uses a virtual assistant to engage in conversation with the patient.
Abstract: A conversation user interface enables patients to better understand their healthcare by integrating diagnosis, treatment, medication management, and payment, through a system that uses a virtual assistant to engage in conversation with the patient. The conversation user interface conveys a visual representation of a conversation between the virtual assistant and the patient. An identity of the patient, including preferences and medical records, is maintained throughout all interactions so that each aspect of this integrated system has access to the same information. The conversation user interface allows the patient to interact with the virtual assistant using natural language commands to receive information and complete tasks related to his or her healthcare.

Patent
06 Apr 2012
TL;DR: In this article, the authors describe methods, systems and apparatuses for distributing content over a communication network, including managing, by at least one content distribution server, a plurality of content, assisting in preloading at least a portion of the content to a storage element associated with a wireless device.
Abstract: Embodiments of methods, systems and apparatuses for distributing content over a communication network are disclosed. One method includes managing, by at least one content distribution server, a plurality of content, assisting in preloading at least a portion of the content to a storage element associated with a wireless device, identifying a portion of a user interface of the wireless device, and sending configuration information to the wireless device, the configuration information configured to assist the wireless device in placing, in the identified portion of the user interface, a service launch object that launches the content.

Book
12 Aug 2012
TL;DR: The inspiration to create user descriptions includes character-driven narratives, and the film Thelma & Louise is analyzed in order to understand how the development process can also be an engaging story in various professional contexts.
Abstract: People relate to other people, not to simplified types or segments. This is the concept that underpins this book. Personas, a user-centered design methodology, covers topics from interaction design within IT through to issues surrounding product design, communication, and marketing. Project developers need to understand how users approach their products from the product's infancy, regardless of what the product might be. Developers should be able to describe the user of the product via vivid depictions, as if the users, with their different attitudes, desires and habits, were already using the product. In doing so they can more clearly formulate how to turn the product's potential into reality. With contributions from professionals from Australia, Brazil, Finland, Japan, Russia, and the UK presenting real-world examples of the persona method, this book provides readers with valuable insights into this exciting research area. The inspiration to create user descriptions includes character-driven narratives, and the film Thelma & Louise is analyzed in order to understand how the development process can also be an engaging story in various professional contexts. With a solid foundation in her own research at the IT University of Copenhagen and more than five years of experience in solving problems for businesses, Lene Nielsen is Denmark's leading expert in the persona method. She has a PhD in personas and scenarios, and through her research and practical experience she has developed her own approach to the method, 10 Steps to Personas. Personas - User Focused Design presents a step-by-step persona methodology that will be of interest to developers of IT, communications solutions and innovative products.

Proceedings ArticleDOI
05 Mar 2012
TL;DR: This work implemented and analyzed four different strategies for performing grasping tasks, ranging from direct, real-time operator control of the end-effector pose, to autonomous motion and grasp planning that is simply adjusted or confirmed by the operator.
Abstract: Human-in-the-loop robotic systems have the potential to handle complex tasks in unstructured environments by combining the cognitive skills of a human operator with autonomous tools and behaviors. Along these lines, we present a system for remote human-in-the-loop grasp execution. An operator uses a computer interface to visualize a physical robot and its surroundings, and a point-and-click mouse interface to command the robot. We implemented and analyzed four different strategies for performing grasping tasks, ranging from direct, real-time operator control of the end-effector pose, to autonomous motion and grasp planning that is simply adjusted or confirmed by the operator. Our controlled experiment (N=48) results indicate that people were able to successfully grasp more objects and caused fewer unwanted collisions when using the strategies with more autonomous assistance. We used an untethered robot over wireless communications, making our strategies applicable for remote, human-in-the-loop robotic applications.