
Showing papers on "User interface" published in 2004


Book
23 Jul 2004
TL;DR: This book discusses 3D user interfaces, the history and roadmap of 3D UIs, and strategies for designing and developing 3D user interfaces.
Abstract: From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.

1,806 citations


Journal ArticleDOI
TL;DR: OsiriX was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies and ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program.
Abstract: A multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard widely used for computer games, optimized for taking advantage of any hardware graphic accelerator boards available. In the design of the software special attention was given to adapt the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device widely used in the video and movie industry was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program’s toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

1,741 citations


Book ChapterDOI
29 Mar 2004
TL;DR: The tool supports almost all ANSI-C language features, including pointer constructs, dynamic memory allocation, recursion, and the float and double data types, and is integrated into a graphical user interface.
Abstract: We present a tool for the formal verification of ANSI-C programs using Bounded Model Checking (BMC). The emphasis is on usability: the tool supports almost all ANSI-C language features, including pointer constructs, dynamic memory allocation, recursion, and the float and double data types. From the perspective of the user, the verification is highly automated: the only input required is the BMC bound. The tool is integrated into a graphical user interface. This is essential for presenting long counterexample traces: the tool allows stepping through the trace in the same way a debugger allows stepping through a program.
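The core idea of BMC, exploring all executions up to a fixed bound and reporting a counterexample trace, can be sketched over explicit states (a toy illustration only; the actual tool works symbolically via SAT on unrolled C programs, and all names here are illustrative):

```python
def bounded_check(init_states, step, bad, bound):
    """Explore every execution of at most `bound` steps and return a
    counterexample trace that reaches a `bad` state, or None."""
    frontier = [[s] for s in init_states]
    for _ in range(bound):
        next_frontier = []
        for trace in frontier:
            if bad(trace[-1]):
                return trace                  # counterexample found
            for successor in step(trace[-1]):
                next_frontier.append(trace + [successor])
        frontier = next_frontier
    for trace in frontier:                    # check states reached at the bound
        if bad(trace[-1]):
            return trace
    return None                               # no violation within the bound
```

The returned trace is the kind of object the graphical front end described above would let the user step through, the way a debugger steps through a program.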

1,425 citations


Patent
16 Apr 2004
TL;DR: In this article, a free-form grid layout is provided that allows an application provider to create a desired number of placeholders, each of a desired size, by positioning objects at desired locations on the free-form grid.
Abstract: Embodiments of the present invention provide methods and apparatuses for quickly and easily configuring an application user interface using a flexible generic layout file. For one embodiment, a free-form grid layout is provided that allows an application provider to create a desired number of placeholders, each of a desired size, by positioning objects at desired locations on the free-form grid. In this way the application provider configures the application user interface. For one embodiment, the placeholders are created by dragging selected objects, from a provided set of objects, onto the grid layout. For such an embodiment, a set of parameters that describe the objects on the grid layout (e.g., indicating number, size, and location) is stored to a database. At run-time, the parameters are used to dynamically generate HTML code, which when executed presents the application user interface.

1,142 citations


Journal ArticleDOI
TL;DR: CCP4mg is a program designed to meet needs for model building and analysis in a way that is closely integrated with the ongoing development of CCP4 as a program suite suitable for both low- and high-intervention computational structural biology.
Abstract: Progress towards structure determination that is both high-throughput and high-value is dependent on the development of integrated and automatic tools for electron-density map interpretation and for the analysis of the resulting atomic models. Advances in map-interpretation algorithms are extending the resolution regime in which fully automatic tools can work reliably, but at present human intervention is required to interpret poor regions of macromolecular electron density, particularly where crystallographic data is only available to modest resolution [for example, I/σ(I) < 2.0 for minimum resolution 2.5 Å]. In such cases, a set of manual and semi-manual model-building molecular-graphics tools is needed. At the same time, converting the knowledge encapsulated in a molecular structure into understanding is dependent upon visualization tools, which must be able to communicate that understanding to others by means of both static and dynamic representations. CCP4mg is a program designed to meet these needs in a way that is closely integrated with the ongoing development of CCP4 as a program suite suitable for both low- and high-intervention computational structural biology. As well as providing a carefully designed user interface to advanced algorithms of model building and analysis, CCP4mg is intended to present a graphical toolkit to developers of novel algorithms in these fields.

578 citations


Journal ArticleDOI
TL;DR: The components and modus operandi of haptic interfaces are described, followed by a list of current and prospective applications and a discussion of a cross‐section of current device designs.
Abstract: Haptic interfaces enable person‐machine communication through touch, and most commonly, in response to user movements. We comment on a distinct property of haptic interfaces, that of providing for simultaneous information exchange between a user and a machine. We also comment on the fact that, like other kinds of displays, they can take advantage of both the strengths and the limitations of human perception. The paper then proceeds with a description of the components and the modus operandi of haptic interfaces, followed by a list of current and prospective applications and a discussion of a cross‐section of current device designs.

577 citations


Proceedings Article
01 Jan 2004
TL;DR: Some of the goals of the 3D Slicer project are discussed and how the architecture helps support those goals and some of the practical issues which arise from this approach are pointed out.
Abstract: To be applied to practical clinical research problems, medical image computing software requires infrastructure including routines to read and write various file formats, manipulate 2D and 3D coordinate systems, and present a consistent user interface paradigm and visualization metaphor. At the same time, research software needs to be flexible to facilitate implementation of new ideas. 3D Slicer is a project that aims to provide a platform for a variety of applications through a community-development model. The resulting system has been used for research in both basic biomedical and clinically applied settings. 3D Slicer is built on a set of powerful and widely used software components (Tcl/Tk, VTK, ITK) to which is added an application layer that makes the system usable by non-programmer end-users. Using this approach, advanced applications including image guided surgery, robotics, brain mapping, and virtual colonoscopy have been implemented as 3D Slicer modules. In this paper we discuss some of the goals of the 3D Slicer project and how the architecture helps support those goals. We also point out some of the practical issues which arise from this approach.

543 citations


Book
01 Jan 2004
TL;DR: The article describes in detail the methods that have been adopted in some well-known dialogue systems, explores different system architectures, considers issues of specification, design, and evaluation, reviews some currently available dialogue development toolkits, and outlines prospects for future development.
Abstract: Spoken dialogue systems allow users to interact with computer-based applications such as databases and expert systems by using natural spoken language. The origins of spoken dialogue systems can be traced back to Artificial Intelligence research in the 1950s concerned with developing conversational interfaces. However, it is only within the last decade or so, with major advances in speech technology, that large-scale working systems have been developed and, in some cases, introduced into commercial environments. As a result many major telecommunications and software companies have become aware of the potential for spoken dialogue technology to provide solutions in newly developing areas such as computer-telephony integration. Voice portals, which provide a speech-based interface between a telephone user and Web-based services, are the most recent application of spoken dialogue technology. This article describes the main components of the technology---speech recognition, language understanding, dialogue management, communication with an external source such as a database, language generation, speech synthesis---and shows how these component technologies can be integrated into a spoken dialogue system. The article describes in detail the methods that have been adopted in some well-known dialogue systems, explores different system architectures, considers issues of specification, design, and evaluation, reviews some currently available dialogue development toolkits, and outlines prospects for future development.

542 citations


Patent
21 Oct 2004
TL;DR: In this paper, the authors propose an innovative security solution which separates a client into a Protected Context, comprising the real files and resources of the client, and an Isolated Context, a restricted execution environment that makes use of virtualized resources to execute applications and modify content without allowing explicit access to the resources in the Protected Context.

Abstract: An innovative security solution which separates a client into a Protected Context, which contains the real files and resources of the client, and an Isolated Context, a restricted execution environment that makes use of virtualized resources to execute applications and modify content in the Isolated Context, without allowing explicit access to the resources in the Protected Context. The solution further consolidates user interfaces to allow users to seamlessly work with content in both contexts, and provides a visual indication of which display windows are rendered from content executed in the Isolated Context.

525 citations


Patent
05 Mar 2004
TL;DR: In this paper, the authors propose an eventing model that synchronizes the state table of a controlled device across all user control devices upon any change to the device's operational state.
Abstract: Controlled devices according to a device control model maintain a state table representative of their operational state. Devices providing a user control point interface for the controlled device obtain the state table of the controlled device, and may also obtain presentation data defining a remoted user interface of the controlled device and device control protocol data defining commands and data messaging protocol to effect control of the controlled device. These user control devices also subscribe to notifications of state table changes, which are distributed from the controlled device according to an eventing model. Accordingly, upon any change to the controlled device's operational state, the eventing model synchronizes the device's state as represented in the state table across all user control devices.
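The state-table eventing idea can be sketched in a few lines (hypothetical Python classes for illustration, not the patent's actual protocol): the controlled device pushes every state change to all subscribed control points, each of which keeps a synchronized mirror of the table.

```python
class ControlledDevice:
    """A device that owns a state table and notifies subscribers of changes."""
    def __init__(self, **initial_state):
        self.state = dict(initial_state)
        self._subscribers = []

    def subscribe(self, control_point):
        self._subscribers.append(control_point)
        control_point.on_event(dict(self.state))   # initial snapshot on subscribe

    def set_state(self, key, value):
        self.state[key] = value
        for cp in self._subscribers:               # push the change to every control point
            cp.on_event({key: value})

class ControlPoint:
    """A user control device mirroring the controlled device's state table."""
    def __init__(self):
        self.mirror = {}
    def on_event(self, changes):
        self.mirror.update(changes)
```

Because every control point receives every change, a remoted user interface rendered from the mirror never shows stale device state.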

499 citations


Journal ArticleDOI
TL;DR: A novel technique to learn user profiles from users' search histories is proposed, which are then used to improve retrieval effectiveness in Web search.
Abstract: Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.
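The profile-combination step can be illustrated with a toy sketch (the linear mixing scheme and all names are assumptions for illustration, not the paper's actual algorithm): each profile maps categories to term weights, the two scores are mixed, and the top-scoring categories become the query's context.

```python
def map_query_to_categories(query, user_profile, general_profile,
                            alpha=0.5, top_k=2):
    """Score each category against the query terms under both profiles,
    mix the scores linearly, and return the top-k categories."""
    terms = query.lower().split()
    scores = {}
    for cat in set(user_profile) | set(general_profile):
        u = sum(user_profile.get(cat, {}).get(t, 0.0) for t in terms)
        g = sum(general_profile.get(cat, {}).get(t, 0.0) for t in terms)
        scores[cat] = alpha * u + (1 - alpha) * g
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

For an ambiguous query like "python", a user whose history favors programming would see the programming category ranked above biology, disambiguating the retrieval.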


Proceedings ArticleDOI
25 Apr 2004
TL;DR: This paper identifies the fundamental functionality that tabletop user interfaces should embody, then presents the toolkit's architecture and API, and discusses insights on tabletop interaction issues the authors have observed from a set of applications built with DiamondSpin.
Abstract: DiamondSpin is a toolkit for the efficient prototyping of and experimentation with multi-person, concurrent interfaces for interactive shared displays. In this paper, we identify the fundamental functionality that tabletop user interfaces should embody, then present the toolkit's architecture and API. DiamondSpin provides a novel real-time polar to Cartesian transformation engine that has enabled new, around-the-table interaction metaphors to be implemented. DiamondSpin allows arbitrary document positioning and orientation on a tabletop surface. Polygonal tabletop layouts such as rectangular, octagonal, and circular tabletops can easily be constructed. DiamondSpin also supports multiple work areas within the same digital tabletop. Multi-user operations are offered through multi-threaded input event streams, multiple active objects, and multiple concurrent menus. We also discuss insights on tabletop interaction issues we have observed from a set of applications built with DiamondSpin.
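The polar placement idea behind around-the-table layouts can be sketched as follows (illustrative only, not the DiamondSpin API): a document at radius r and angle theta from the table centre gets Cartesian coordinates plus a rotation that keeps it upright for a user seated at that edge.

```python
import math

def polar_to_tabletop(r, theta, center=(0.0, 0.0)):
    """Convert a document's polar position (r, theta) relative to the
    table centre into Cartesian coordinates and an orientation facing
    the nearest table edge."""
    cx, cy = center
    x = cx + r * math.cos(theta)
    y = cy + r * math.sin(theta)
    rotation = theta - math.pi / 2   # upright for a user seated at angle theta
    return x, y, rotation
```

Coupling position and orientation this way is what lets documents dragged around the rim of the table automatically reorient toward the person nearest them.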

Proceedings ArticleDOI
13 Jan 2004
TL;DR: This paper proposes a novel solution based on treating interface adaptation as an optimization problem that minimizes the estimated effort for the user's expected interface actions.
Abstract: In order to give people ubiquitous access to software applications, device controllers, and Internet services, it will be necessary to automatically adapt user interfaces to the computational devices at hand (e.g., cell phones, PDAs, touch panels, etc.). While previous researchers have proposed solutions to this problem, each has limitations. This paper proposes a novel solution based on treating interface adaptation as an optimization problem. When asked to render an interface on a specific device, our SUPPLE system searches for the rendition that meets the device's constraints and minimizes the estimated effort for the user's expected interface actions. We make several contributions: 1) precisely defining the interface rendition problem, 2) demonstrating how user traces can be used to customize interface rendering to a particular user's usage pattern, 3) presenting an efficient interface rendering algorithm, and 4) performing experiments that demonstrate the utility of our approach.
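The optimization framing can be illustrated with a brute-force toy (the widget names and cost model are assumptions; the actual system uses a smarter search over a richer model of user effort): each interface element can be rendered by several widgets, each with a size and an estimated effort, and the rendition chosen is the cheapest one that fits the device.

```python
from itertools import product

def render_interface(elements, widget_options, screen_budget):
    """Pick one widget per element so the total size fits the screen
    budget and the total estimated user effort is minimal."""
    best, best_cost = None, float("inf")
    for choice in product(*(widget_options[e] for e in elements)):
        size = sum(w["size"] for w in choice)
        cost = sum(w["effort"] for w in choice)
        if size <= screen_budget and cost < best_cost:
            best, best_cost = choice, cost
    if best is None:
        return None                       # no rendition satisfies the constraints
    return {e: w["name"] for e, w in zip(elements, best)}
```

On a roomy display the low-effort slider wins; on a cramped one the same element falls back to a compact spinner, which is exactly the device-dependent adaptation the paper describes.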

Proceedings ArticleDOI
26 Sep 2004
TL;DR: A study of beginning programmers learning Visual Basic.NET identified six types of barriers, inspiring a new metaphor of computation that provides a more learner-centric view of programming system design.

Abstract: As programming skills increase in demand and utility, the learnability of end-user programming systems is of utmost importance. However, research on learning barriers in programming systems has primarily focused on languages, overlooking potential barriers in the environment and accompanying libraries. To address this, a study of beginning programmers learning Visual Basic.NET was performed. It identified six types of barriers: design, selection, coordination, use, understanding, and information. These barriers inspire a new metaphor of computation, which provides a more learner-centric view of programming system design.

Patent
19 Apr 2004
TL;DR: A control system for controlling an infusion pump, including interface components for allowing a user to receive and provide information, a processor connected to the user interface components and adapted to provide instructions to the infusion pump, is described in this paper.
Abstract: A control system for controlling an infusion pump, including interface components for allowing a user to receive and provide information, a processor connected to the user interface components and adapted to provide instructions to the infusion pump, and a computer program having setup instructions that cause the processor to enter a setup mode upon the control system first being turned on. In the setup mode, the processor prompts the user, in a sequential manner, through the user interface components to input basic information for use by the processor in controlling the infusion pump, and allows the user to operate the infusion pump only after the user has completed the setup mode.

Patent
03 Jul 2004
TL;DR: In this paper, a lightweight, battery operated, portable, personal electronic device capable of faxing, scanning, printing and copying media as a standalone device or in cooperation with other electronic devices including PCs, mobile telephones, PDAs, etc. is provided.
Abstract: A lightweight, battery operated, portable, personal electronic device capable of faxing, scanning, printing and copying media as a standalone device or in cooperation with other electronic devices including PCs, mobile telephones, PDAs, etc. is provided. The device automatically detects the presence of fax-capable devices and reconfigures the software for compatibility with the fax-capable device, eliminating the need for user programming. The device's ergonomic design, intrinsic physical stability, and same side paper feeds and user interface provide use in work areas having limited space. The device includes unidirectional, independent pathways for original and recording media such that paper jams are minimized. Portability is maximized through innovative power management software and hardware.

Journal ArticleDOI
Kenneth P. Fishkin1
01 Sep 2004
TL;DR: A spectrum-based taxonomy is presented, which unifies previous categorizations and definitions, integrates the notion of “calm computing,” reveals a previously unnoticed trend in the field, and suggests design principles appropriate for different areas of the spectrum.

Abstract: There have been many research efforts devoted to tangible user interfaces (TUIs), but it has proven difficult to create a definition or taxonomy that allows us to compare and contrast disparate research efforts, integrate TUIs with conventional interfaces, or suggest design principles for future efforts. To address this problem, we present a taxonomy, which uses metaphor and embodiment as its two axes. This 2D space treats tangibility as a spectrum rather than a binary quantity. The further from the origin, the more “tangible” a system is. We show that this spectrum-based taxonomy offers multiple advantages. It unifies previous categorizations and definitions, integrates the notion of “calm computing,” reveals a previously unnoticed trend in the field, and suggests design principles appropriate for different areas of the spectrum.

Book ChapterDOI
11 Jul 2004
TL;DR: Model-to-model transformation, the cornerstone of Model-Driven Architecture, can be supported in multiple configurations based on the composition of three basic transformation types: abstraction, reification, and translation.

Abstract: USer Interface eXtensible Markup Language (UsiXML) is a User Interface Description Language (UIDL) that allows designers to apply multi-path development of user interfaces. In this development paradigm, a user interface can be specified and produced at and from different, and possibly multiple, levels of abstraction while maintaining the mappings between these levels if required. Thus, the development process can be initiated from any level of abstraction and proceed towards obtaining one or many final user interfaces for various contexts of use at other levels of abstraction. In this way, model-to-model transformation, which is the cornerstone of Model-Driven Architecture (MDA), can be supported in multiple configurations, based on the composition of three basic transformation types: abstraction, reification, and translation.
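The three basic transformation types can be sketched as operations on a level/context pair (the level names follow the usual task/abstract/concrete/final hierarchy; the functions are illustrative, not UsiXML's actual semantics):

```python
# Ordered levels of abstraction, from task model down to final user interface.
LEVELS = ["task", "abstract_ui", "concrete_ui", "final_ui"]

def reify(model):
    """Reification: move one level closer to the final UI."""
    i = LEVELS.index(model["level"])
    return {**model, "level": LEVELS[i + 1]}

def abstract(model):
    """Abstraction: move one level away from the final UI."""
    i = LEVELS.index(model["level"])
    return {**model, "level": LEVELS[i - 1]}

def translate(model, context):
    """Translation: stay at the same level, retarget the context of use."""
    return {**model, "context": context}
```

Composing translate with reify, for example, retargets an abstract UI to a new context of use and then moves it one step toward a final interface, which is the multi-path development the abstract describes.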

Proceedings ArticleDOI
14 Mar 2004
TL;DR: The Drishti system uses a precise position measurement system, a wireless connection, a wearable computer, and a vocal communication interface to guide blind users and help them travel in familiar and unfamiliar environments independently and safely.
Abstract: There are many navigation systems for visually impaired people but few can provide dynamic interactions and adaptability to changes. None of these systems work seamlessly both indoors and outdoors. Drishti uses a precise position measurement system, a wireless connection, a wearable computer, and a vocal communication interface to guide blind users and help them travel in familiar and unfamiliar environments independently and safely. Outdoors, it uses DGPS as its location system to keep the user as close as possible to the central line of sidewalks of campus and downtown areas; it provides the user with an optimal route by means of its dynamic routing and rerouting ability. The user can switch the system from an outdoor to an indoor environment with a simple vocal command. An OEM ultrasound positioning system is used to provide precise indoor location measurements. Experiments show an indoor accuracy of 22 cm. The user can get vocal prompts to avoid possible obstacles and step-by-step walking guidance to move about in an indoor environment. This paper describes the Drishti system and focuses on the indoor navigation design and lessons learned in integrating the indoor with the outdoor system.

Patent
08 Apr 2004
TL;DR: A user interface as a whole is contextually sensitive in that the appearance of user interface elements (e.g., color, size, font, contrast, order, grouping, arrangement, etc.) and/or the behavior of user interfaces are varied in a manner that is dependent on the context of the control unit.
Abstract: A user interface having a plurality of user interface elements including: background, passive elements such as frames and borders, information display elements that present information from application software operating on the control unit, and control elements that cause application software operating on the control unit to initiate programmed behaviors. The user interface as a whole is contextually sensitive in that the appearance of user interface elements (e.g., color, size, font, contrast, order, grouping, arrangement, etc.) and/or the behavior of user interface elements are varied in a manner that is dependent on the context of the control unit. The context of the control unit is represented by state information known to the control unit, which includes context-specific state information known to a particular control unit as well as global context information known to multiple or all control units in a system.

Journal ArticleDOI
TL;DR: It is argued that early involvement of cognitive engineering methods in the system design process may be of great help in designing systems that fully support health care professionals' work practices.

Journal ArticleDOI
TL;DR: A solution is presented, based on the use of three levels of abstraction, that allows designers to focus on the relevant logical aspects and avoid dealing with a plethora of low-level details in the development of nomadic applications.
Abstract: The increasing availability of new types of interaction platforms raises a number of issues for designers and developers. There is a need for new methods and tools to support development of nomadic applications, which can be accessed through a variety of devices. We present a solution, based on the use of three levels of abstractions, that allows designers to focus on the relevant logical aspects and avoid dealing with a plethora of low-level details. We have defined a number of transformations able to obtain user interfaces from such abstractions, taking into account the available platforms and their interaction modalities while preserving usability. The transformations are supported by an authoring tool, TERESA, which provides designers and developers with various levels of automatic support and several possibilities for tailoring such transformations to their needs.

Journal ArticleDOI
TL;DR: A new approach to enhance presence technologies that will adapt to user affect dynamically in the current context, thus providing enhanced social presence and introducing the prototype multimodal affective user interface.
Abstract: In this article we describe a new approach to enhance presence technologies. First, we discuss the strong relationship between cognitive processes and emotions and how human physiology is uniquely affected when experiencing each emotion. Secondly, we introduce our prototype multimodal affective user interface. In the remainder of the paper we describe the emotion elicitation experiment we designed and conducted and the algorithms we implemented to analyse the physiological signals associated with emotions. These algorithms can then be used to recognise the affective states of users from physiological data collected via non-invasive technologies. The affective intelligent user interfaces we plan to create will adapt to user affect dynamically in the current context, thus providing enhanced social presence.

Book ChapterDOI
08 Nov 2004
TL;DR: This paper proposes an adaptable and extensible context ontology for creating context-aware computing infrastructures, ranging from small embedded devices to high-end service platforms.
Abstract: To realise an Ambient Intelligence environment, it is paramount that applications can dispose of information about the context in which they operate, preferably in a very general manner. For this purpose various types of information should be assembled to form a representation of the context of the device on which aforementioned applications run. To allow interoperability in an Ambient Intelligence environment, it is necessary that the context terminology is commonly understood by all participating devices. In this paper we propose an adaptable and extensible context ontology for creating context-aware computing infrastructures, ranging from small embedded devices to high-end service platforms. The ontology has been designed to solve several key challenges in Ambient Intelligence, such as application adaptation, automatic code generation and code mobility, and generation of device specific user interfaces.

Book ChapterDOI
05 Oct 2004
TL;DR: This paper presents the most common difficulties encountered by newcomers to the language, as observed during the course of more than a dozen workshops, tutorials and modules about OWL-DL and its predecessor languages.

Abstract: Understanding the logical meaning of any description logic or similar formalism is difficult for most people, and OWL-DL is no exception. This paper presents the most common difficulties encountered by newcomers to the language, as observed during the course of more than a dozen workshops, tutorials and modules about OWL-DL and its predecessor languages. It emphasises understanding the exact meaning of OWL expressions – proving that understanding by paraphrasing them in pedantic but explicit language. It addresses, specifically, the confusion which OWL's open world assumption presents to users accustomed to closed world systems such as databases, logic programming and frame languages. Our experience has had a major influence in formulating the requirements for a new set of user interfaces for OWL, the first of which are now available as prototypes. A summary of the guidelines and paraphrases and examples of the new interface are provided. The example ontologies are available online.
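The closed-world versus open-world contrast the paper addresses can be made concrete in a few lines (a toy sketch; the triples are hypothetical): under the closed-world reading of databases and logic programming, anything not known to be true is false, while under OWL's open-world reading an unprovable fact is simply unknown.

```python
def closed_world(fact, known_facts):
    """Closed-world reading: absence of a fact means it is false."""
    return fact in known_facts

def open_world(fact, known_facts, known_negations):
    """Open-world reading: a fact not provable either way is unknown."""
    if fact in known_facts:
        return True
    if fact in known_negations:
        return False
    return "unknown"
```

This is precisely the surprise newcomers hit: failing to assert a property in an OWL ontology does not make it false, the way an absent row in a database table would.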

Journal ArticleDOI
TL;DR: It is concluded that a polylithic approach is most suitable for toolkit builders, visual design software where code is automatically generated, and application builders where there is much customization of the toolkit.
Abstract: Here, we analyze toolkit designs for building graphical applications with rich user interfaces, comparing polylithic and monolithic toolkit-based solutions. Polylithic toolkits encourage extension by composition and follow a design philosophy similar to 3D scene graphs supported by toolkits including Java3D and Open Inventor. Monolithic toolkits, on the other hand, encourage extension by inheritance, and are more akin to 2D graphical user interface toolkits such as Swing or MFC. We describe Jazz (a polylithic toolkit) and Piccolo (a monolithic toolkit), each of which we built to support interactive 2D structured graphics applications in general, and zoomable user interface applications in particular. We examine the trade-offs of each approach in terms of performance, memory requirements, and programmability. We conclude that a polylithic approach is most suitable for toolkit builders, visual design software where code is automatically generated, and application builders where there is much customization of the toolkit. Correspondingly, we find that monolithic approaches appear to be best for application builders where there is not much customization of the toolkit.
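The two extension styles can be contrasted with a minimal sketch (class names are illustrative, not the Jazz or Piccolo APIs): the polylithic style adds zooming by wrapping a node in a composing parent, while the monolithic style adds it by subclassing the node itself.

```python
class Node:                       # scene-graph node; children compose
    def __init__(self, *children):
        self.children = list(children)
    def render(self):
        return "".join(c.render() for c in self.children)

class Text(Node):                 # a leaf node drawing a string
    def __init__(self, s):
        super().__init__()
        self.s = s
    def render(self):
        return self.s

class ZoomNode(Node):             # polylithic: behaviour added by wrapping children
    def __init__(self, factor, *children):
        super().__init__(*children)
        self.factor = factor
    def render(self):
        return f"[zoom x{self.factor}]{super().render()}"

class ZoomableText(Text):         # monolithic: behaviour added by inheritance
    def __init__(self, s, factor):
        super().__init__(s)
        self.factor = factor
    def render(self):
        return f"[zoom x{self.factor}]{self.s}"
```

The wrapper node can zoom any subtree without touching its classes, at the cost of a deeper graph; the subclass keeps the graph flat but ties the behaviour to one type, mirroring the trade-offs the abstract summarizes.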

Patent
18 Aug 2004
TL;DR: In this article, a graphical user interface and method for creating a mapping between a source object and a destination or target object is presented, where the user interface includes a source screen region which displays a graphical representation of the source object, a target screen region, and a mapping screen region that allows a user to create a mapping using graphical mapping indicia.
Abstract: A graphical user interface and method for creating a mapping between a source object and a destination or target object are provided. The user interface includes a source screen region which displays a graphical representation of a source object, a target screen region which displays a graphical representation of a target object, and a mapping screen region which allows a user to create a mapping between the graphical representation of the source object and the graphical representation of the target object using graphical mapping indicia. The methodology includes displaying a graphical representation of a source object in a source screen region, displaying a graphical representation of a target object in a target screen region, creating a mapping between the graphical representation of the source object and the graphical representation of the target object in a mapping screen region using graphical mapping indicia, and displaying the mapping in the mapping screen region. The source and target objects may be schemas, spreadsheets, documents, databases, or other information sources, and the graphical mapping indicia may include link indicia and/or function objects linking nodes in the target object with nodes in the source object. The mapping may be compiled into code used by a runtime engine to translate source documents into target documents.

Journal ArticleDOI
TL;DR: Two demonstrator platforms for a robotic home assistant—called Care-O-bot—were designed and implemented at Fraunhofer IPA, Stuttgart and a new method for sensor based manipulation using a tilting laser scanner and camera integrated in the head of the robot has been implemented.
Abstract: Technical aids allow elderly and handicapped people to live independently and supported in their private homes for a longer time. As a contribution to such technological solutions, two demonstrator platforms for a robotic home assistant—called Care-O-bot—were designed and implemented at Fraunhofer IPA, Stuttgart. Whereas Care-O-bot I is only a mobile platform with a touch screen, Care-O-bot II is additionally equipped with adjustable walking supporters and a manipulator arm. It has the capability to navigate autonomously in indoor environments, be used as an intelligent walking support, and execute manipulation tasks. The control software of Care-O-bot II runs on two industrial PCs and a hand-held control panel. The walking aid module is based on sensors in the walking aid handles and on a dynamic model of conventional walking aids. In “direct mode”, the user can move along freely with the robot while obstacles are detected and avoided. In “planned mode”, the user can specify a target and be led there by the robotic assistant. Autonomous planning and execution of complex manipulation tasks is based on a symbolic planner and environmental information provided in a database. The user input (graphical and speech input) is transferred to the task planner, and adequate actions to solve the task (a sequence of motion and manipulation commands) are created. A new method for sensor-based manipulation using a tilting laser scanner and camera integrated in the head of the robot has been implemented. Additional sensors in the robot hand increase the grasping capabilities. The walking aid has been tested with elderly users from an assisted living facility and a nursing home. Furthermore, the execution of fetch and carry tasks has been implemented and tested in a sample home environment.
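The planning pipeline described above, where user input plus an environment database is expanded by a symbolic planner into a sequence of motion and manipulation commands, can be sketched at a very high level. This is an assumed simplification for illustration, not the Care-O-bot control software; all names and command primitives are invented.

```python
# Symbolic world model: object -> known location (stand-in for the
# environment database mentioned in the abstract).
ENVIRONMENT_DB = {
    "cup": "kitchen_table",
    "bottle": "fridge",
}

def plan_fetch(obj, user_location):
    """Expand a 'fetch <obj>' request into primitive robot commands."""
    if obj not in ENVIRONMENT_DB:
        raise ValueError(f"unknown object: {obj}")
    source = ENVIRONMENT_DB[obj]
    return [
        ("move_to", source),        # autonomous indoor navigation
        ("grasp", obj),             # sensor-based manipulation
        ("move_to", user_location),
        ("hand_over", obj),
    ]

plan = plan_fetch("cup", "living_room")
```

In the real system each symbolic step would be refined further, e.g. the grasp step would invoke the tilting laser scanner and camera to localize the object before closing the hand.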

Proceedings ArticleDOI
25 May 2004
TL;DR: It is argued that the only way to significantly improve user interfaces is to shift the research focus from designing interfaces to designing interaction, which requires powerful interaction models, a better understanding of both the sensory-motor details of interaction and a broader view of interaction in the context of use.
Abstract: Although the power of personal computers has increased 1000-fold over the past 20 years, user interfaces remain essentially the same. Innovations in HCI research, particularly novel interaction techniques, are rarely incorporated into products. In this paper I argue that the only way to significantly improve user interfaces is to shift the research focus from designing interfaces to designing interaction. This requires powerful interaction models, a better understanding of the sensory-motor details of interaction, and a broader view of interaction in the context of use. It also requires novel interaction architectures that address reinterpretability, resilience and scalability.