Showing papers in "International Journal of Human-computer Studies / International Journal of Man-machine Studies in 1990"


Journal ArticleDOI
TL;DR: A verbal protocol study describes how designers heavily rely on problem domain scenario simulations throughout solution development and describes how they exploit powerful heuristics and personalized evaluation criteria to constrain the design process and select a satisfactory solution.
Abstract: High-level software design is characterized by incompletely specified requirements, no predetermined solution path, and by the integration of multiple domains of knowledge at various levels of abstraction. The application of data-driven knowledge rules characterizes expertise. A verbal protocol study describes these domains of knowledge and how experts exploit their rich knowledge during design. It documents how designers rely heavily on problem domain scenario simulations throughout solution development. These simulations trigger the inference of new requirements and complete the requirement specification. Designers recognize partial solutions at various levels of abstraction in the design decomposition through the application of data-driven rules. Designers also rely heavily on simulations of their design solutions, but these are shallow, that is, limited to one level of abstraction in the solution. The findings also illustrate how designers capitalize on design methods, notations, and specialized software design schemas. Finally, the study describes how designers exploit powerful heuristics and personalized evaluation criteria to constrain the design process and select a satisfactory solution. Studies such as this one help map the road to understanding expertise in complex tasks.

234 citations


Journal ArticleDOI
TL;DR: This paper analyses when and how alternative-to-the-plan actions come up, the degree of plan deviation, the design components and the definitional aspects which are most affected by these deviations, and the deviation patterns.
Abstract: An observational study was conducted on a mechanical engineer throughout his task of defining the functional specifications for the machining operations of a factory automation cell. The engineer described his activity as following a hierarchically structured plan. The actual activity is in fact opportunistically organized. The engineer follows his plan as long as it is cognitively cost-effective. As soon as other actions are more interesting, he abandons his plan to proceed to these actions. This paper analyses when and how these alternative-to-the-plan actions come up. Quantitative results are presented with regard to the degree of plan deviation, the design components and the definitional aspects which are most affected by these deviations, and the deviation patterns. Qualitative results concern their nature. An explanatory framework for plan deviation is proposed in the context of a blackboard model. Plan deviation is supposed to occur if the control, according to certain selection criteria, selects an alternative-to-the-planned-action proposal rather than the planned action proposal. Implications of these results for assistance tools are discussed briefly.

185 citations


Journal ArticleDOI
TL;DR: The prospects and implications of automatic filtering of information, and the results suggest that the prediction of preferences can be straightforward when general categories for news articles are used; however, prediction for specific news reports is much more difficult.
Abstract: While the technology of new information services is rapidly advancing, it is not clear how this technology can be best adapted to people's needs and interests. One possibility is that user models may select and filter information sources for readers. This paper examines the prospects and implications of automatic filtering of information, and focuses on predicting preferences for news articles presented electronically. The results suggest that the prediction of preferences can be straightforward when general categories for news articles are used; however, prediction for specific news reports is much more difficult. In addition, an effort is made to establish a systematic study of the effectiveness of information interfaces and user models. Fundamental issues are raised such as techniques for evaluating user models, their essential components, their relationship to information retrieval models, and the limits of using them to predict user behavior at various levels of granularity. For instance, prediction and evaluation methodology may be adopted from personality psychology. Finally, several directions for research are discussed such as treating news as hypertext and integration of news with other information sources.
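
The category-level finding lends itself to a concrete illustration. Below is a minimal sketch of the simplest user model of this kind: score each general news category by the reader's past ratings (the class, names, and rating scheme are invented for illustration, not taken from the paper).

```python
# Minimal sketch of category-level preference prediction (hypothetical
# names; not the paper's actual model). The user model is simply the
# mean rating the reader has given each general news category.
from collections import defaultdict

class CategoryUserModel:
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def record(self, category: str, rating: float) -> None:
        """Store one observed rating (e.g. 0 = skipped, 1 = read)."""
        self.totals[category] += rating
        self.counts[category] += 1

    def predict(self, category: str, default: float = 0.5) -> float:
        """Predicted preference for a new article in this category."""
        n = self.counts[category]
        return self.totals[category] / n if n else default

model = CategoryUserModel()
model.record("sports", 0.0)
model.record("sports", 0.0)
model.record("technology", 1.0)
print(model.predict("technology"))  # 1.0 -> likely to be read
print(model.predict("sports"))      # 0.0 -> likely to be skipped
```

A per-category mean captures the paper's easy case; predicting interest in a specific article would require far richer features, which is exactly where the study reports prediction becoming difficult.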

171 citations


Journal ArticleDOI
TL;DR: Results indicated that men and women in managerial positions do not differ in the level of computer anxiety reported, and are very similar in their attitudes toward microcomputers, however, gender differences were found in the pattern of relationships of demographic and personality variables with computer anxiety and microcomputer attitudes.
Abstract: The study examined the determinants of computer anxiety and attitudes toward microcomputers among 166 managers employed in a variety of organizations. Results indicated that men and women in managerial positions do not differ in the level of computer anxiety reported, and are very similar in their attitudes toward microcomputers. However, gender differences were found in the pattern of relationships of demographic and personality variables with computer anxiety and microcomputer attitudes. For men, education and intuition-sensing were negatively related to computer anxiety, while age, external locus of control, and math anxiety were associated with heightened computer anxiety. In contrast, demographic and personality variables were unrelated to computer anxiety among women. Computer anxiety was the strongest predictor of attitudes toward microcomputers among both men and women. Among women, however, the feeling-thinking dimension of cognitive style, and math anxiety were additional determinants of microcomputer attitudes.

155 citations


Journal ArticleDOI
TL;DR: The review combines two separate foci in recent research: the diffusion and use of computer-mediated communication systems in organizations, and the conceptualization of communication as a process of interaction and convergence, as represented by the network paradigm.
Abstract: The review combines two separate foci in recent research: (1) the diffusion and use of computer-mediated communication (CMC) systems in organizations, and (2) the conceptualization of communication as a process of interaction and convergence, as represented by the network paradigm. The article discusses (1) rationales for this combined focus based upon the characteristics of CMC systems, (2) application of the network paradigm to study CMC systems, (3) the collection of samples, usage data, network flows, and content by CMC systems, and (4) some theoretical issues that may be illuminated through analyses of data collected by CMC systems. The article concludes by discussing issues of reliability, validity and ethics.

117 citations


Journal ArticleDOI
TL;DR: This experiment was performed to evaluate the effectiveness and efficiency of navigating with an automobile moving-map display relative to navigating with a conventional paper map and along a memorized route, which served as a baseline for comparison.
Abstract: This experiment was performed to evaluate the effectiveness and efficiency of navigating with an automobile moving-map display relative to navigating with a conventional paper map and along a memorized route, which served as a baseline for comparison. Results indicated that there were no differences in the quality of routes selected when using either the paper map or the moving map to navigate. However, the moving map significantly drew the driver's gaze away from the driving task relative to the norm established in the memorized route condition, as well as in comparison to the paper map. These findings are discussed in the context of the different navigation strategies evoked by use of the paper and moving-map methods of navigation.

98 citations


Journal ArticleDOI
TL;DR: The difficulty of expressing database queries was examined as a function of the language used. Two distinctly different query methods were investigated: one using a standard database query language, SQL, requiring users to express an English query using a formal syntax and appropriate combinations of boolean operators, and the other using a newly designed Truth-table Exemplar-Based Interface (TEBI), which only required subjects to choose exemplars from a system-generated table representing a sample database.
Abstract: The difficulty of expressing database queries was examined as a function of the language used. Two distinctly different query methods were investigated. One used a standard database query language, SQL, requiring users to express an English query using a formal syntax and appropriate combinations of boolean operators. The second used a newly designed Truth-table Exemplar-Based Interface (TEBI), which only required subjects to be able to choose exemplars from a system-generated table representing a sample database. Through users' choices of critical exemplars, the system could distinguish between interpretations of an otherwise ambiguous English query. Performance was measured by number correct, time to complete queries, and confidence in query correctness. Individual difference analyses were done to examine the relationship between subjects' characteristics and ability to express database queries. Subjects' performance was observed to be both better, and more resistant to variability in age and levels of cognitive skills, when using TEBI than when using SQL to specify queries. Possible reasons for these differences are discussed.
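
The kind of ambiguity TEBI resolves through exemplars is easy to reproduce with boolean operators in SQL. A hedged sketch (the schema, data, and queries are invented for illustration, not taken from the study): the English request "employees in sales and marketing" maps to two different boolean forms.

```python
# Two boolean readings of the English query "employees in sales and
# marketing" (schema and data are invented for illustration).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, dept TEXT)")
con.executemany("INSERT INTO employee VALUES (?, ?)",
                [("Ada", "sales"), ("Bob", "marketing")])

# Literal reading: dept = 'sales' AND dept = 'marketing' -- no single
# row can satisfy both conditions, so this returns nothing.
literal = con.execute(
    "SELECT name FROM employee WHERE dept = 'sales' AND dept = 'marketing'"
).fetchall()

# Intended reading: membership in either department (boolean OR).
intended = con.execute(
    "SELECT name FROM employee WHERE dept = 'sales' OR dept = 'marketing'"
).fetchall()

print(literal)   # []
print(intended)  # [('Ada',), ('Bob',)]
```

Choosing the exemplar rows that should appear in the answer, as TEBI has users do, pins down which of the two readings was meant without requiring any boolean syntax.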

92 citations


Journal ArticleDOI
TL;DR: The authors identify three types of understanding failures the subject may experience and the additional strategies invoked in those cases and develop an operational description of these strategies and discuss the control structure of program understanding in the framework of schema theory.
Abstract: Various models of program understanding have been developed from Schema Theory. To date, the authors have sought to identify the knowledge that programmers have and use in understanding programs, i.e. Programming Plans and Rules of Discourse. However, knowledge is only one aspect of program understanding. The other aspect is the cognitive mechanisms that use knowledge. The contribution of this study is the identification of different mechanisms involved in program understanding by experts, specifically the mechanisms which cope with novelty. An experiment was conducted to identify and describe the experts' strategies involved in understanding usual (plan-like) and unusual (unplan-like) programs. While performing a fill-in-the-blank task, subjects were asked to talk aloud. The analysis of verbal protocols allowed the identification of four different strategies of understanding. Under “normal” conditions the strategy of symbolic simulation is involved. But when failures occur, additional strategies are required. The authors identified three types of understanding failures the subject may experience (no expectation, expectation clashes, insufficient expectations) and the additional strategies invoked in those cases: (1) reasoning according to rules of discourse and principles of the task domain; (2) reasoning with plan constraints; (3) concrete simulation. The authors develop an operational description of these strategies and discuss the control structure of program understanding in the framework of schema theory.

82 citations


Journal ArticleDOI
TL;DR: Investigation of six researchers' perceptions of texts in terms of their use, content and structure indicates that individuals construe texts in Terms of three broad attributes: why read them, what type of information they contain, and how they are read.
Abstract: The advent of hypertext brings with it associated problems of how best to present nonlinear texts. As yet, knowledge of readers' models of texts and their uses is limited. Repertory grid analysis offers an insightful method of examining these issues and gaining an understanding of the type of texts that exist in the readers' worlds. The present study investigates six researchers' perceptions of texts in terms of their use, content and structure. Results indicate that individuals construe texts in terms of three broad attributes: why read them, what type of information they contain, and how they are read. When applied to a variety of texts these attributes facilitate a classificatory system incorporating both individual and task differences and provide guidance on how their electronic versions could be designed.

65 citations


Journal ArticleDOI
TL;DR: A formal model of “display-based competence” is outlined by extending the Task-Action Grammar notation by embedding two extensions within the organizing framework of TAG's feature-grammar to develop descriptions of interfaces which highlight aspects of (display) design that are outside the scope of other formal user models.
Abstract: This paper discusses the critical role played by aspects of the display in the use of many computer systems, especially those driven by menus. We outline a formal model of “display-based competence” by extending the Task-Action Grammar notation (Payne & Green, 1986). The model, D-TAG (for display-oriented task-action grammar), is illustrated with examples from the well-known Macintosh desk-top interface, and from a more deeply-nested menu interface to a device used for the remote testing of telephone lines (RATES). D-TAG exploits two extensions of TAG to address important aspects of interface consistency. The most important extension uses a featural description of the display to capture the role of the display in structuring task-action mappings; the second describes the “side-effects” of a task, i.e. those effects not described by the semantic attributes of a task. By embedding these extensions within the organizing framework of TAG's feature-grammar, we are able to develop descriptions of interfaces which highlight aspects of (display) design that are outside the scope of other formal user models.

64 citations


Journal ArticleDOI
J. T. Nosek, I. Roth
TL;DR: An experiment was conducted to test the effectiveness of two popular knowledge representation schemes as communication vehicles between the human expert and the knowledge engineer; the results demonstrate some of the strengths of Semantic Networks as communication tools during the validation process.
Abstract: An experiment was conducted to test the effectiveness of two popular knowledge representation schemes as communication vehicles between the human expert and the knowledge engineer. Validation by the human expert of the knowledge encapsulated depends upon how well the expert understands and interprets a representation scheme. A between-group experiment was conducted. Each group received two treatments of the same representation technique, with the second treatment slightly more complex than the first. All the scores for the Semantic Network representations were higher than those obtained for the Predicate Logic representations, and the Semantic Network representations were clearly better for comprehension and conceptualization tasks. The results demonstrate some of the weaknesses of Predicate Logic and some of the strengths of Semantic Networks as communication tools during the validation process.

Journal ArticleDOI
TL;DR: A taxonomy is developed that characterizes a range of misconceptions users have when performing subject-based search in an online catalog system, and hypotheses about the causes of the misconceptions are suggested.
Abstract: We report results of an investigation where thirty subjects were observed performing subject-based search in an online catalog system. The observations have revealed a range of misconceptions users have when performing subject-based search. We have developed a taxonomy that characterizes these misconceptions and a knowledge representation which explains these misconceptions. Directions for improving search performance are also suggested.

Journal ArticleDOI
TL;DR: A number of design issues associated with hypermedia systems have been identified, and a number of the more pre-eminent hypermedia systems are discussed within the context of the issues they address.
Abstract: This article is intended to be a general introduction to the new information representation technology: hypermedia. There is a lack of consensus as to a specific definition of hypermedia. Several general characteristics and specific terms are, however, emerging and are presented here as an introduction to the area. A number of design issues associated with hypermedia systems have been identified. This survey presents these issues and discusses a number of the more pre-eminent hypermedia systems within the context of the issues they address. From one perspective, hypermedia is not an application. It is, instead, a technology which can be used to develop and enhance many application areas. Hypermedia's potential contribution to some of these areas is discussed and some general conclusions presented.


Journal ArticleDOI
TL;DR: The value of incorporating decision analysis insights and techniques into the knowledge acquisition and decision making process is illustrated and the first step toward a full integration of insights from the two disciplines and their respective repertory grid and influence diagram representations are seen.
Abstract: The field of decision analysis is concerned with the application of formal theories of probability and utility to the guidance of action. Decision analysis has been used for many years as a way to gain insight regarding decisions that involve significant amounts of uncertain information and complex preference issues, but it has been largely overlooked by knowledge-based system researchers. This paper illustrates the value of incorporating decision analysis insights and techniques into the knowledge acquisition and decision making process. This approach is being implemented within Aquinas, a personal construct-based knowledge acquisition tool, and Axotl, a knowledge-based decision analysis tool. The need for explicit preference models in knowledge-based systems will be shown. The modeling of problems will be viewed from the perspectives of decision analysis and personal construct theory. We will outline the approach of Aquinas and then present an example that illustrates how preferences can be used to guide the knowledge acquisition process and the selection of alternatives in decision making. Techniques for combining supervised and unsupervised inductive learning from data with expert judgment, and integration of knowledge and inference methods at varying levels of precision, will be presented. Personal construct theory and decision theory are shown to be complementary: the former provides a plausible account of the dynamics of model formulation and revision, while the latter provides a consistent framework for model evaluation. Applied personal construct theory (in the form of tools such as Aquinas) and applied decision theory (in the form of tools such as Axotl) are moving along convergent paths. We see the approach in this paper as the first step toward a full integration of insights from the two disciplines and their respective repertory grid and influence diagram representations.
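
The decision-analytic core referred to here is compact enough to show directly: alternatives are ranked by expected utility over uncertain outcomes. A minimal sketch (the alternatives, probabilities, and utilities are invented for illustration, not drawn from Aquinas or Axotl):

```python
# Minimal expected-utility ranking, the formal core of decision
# analysis referenced above (alternatives and numbers are invented).
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one alternative."""
    return sum(p * u for p, u in outcomes)

alternatives = {
    "repair":  [(0.7, 100.0), (0.3, -50.0)],  # likely success, costly failure
    "replace": [(1.0, 40.0)],                 # certain, modest payoff
}
best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
print({a: expected_utility(o) for a, o in alternatives.items()})
# {'repair': 55.0, 'replace': 40.0}
print(best)  # 'repair'
```

The explicit probability and utility numbers are precisely the "explicit preference models" the paper argues knowledge-based systems need; the repertory-grid side supplies plausible structure for eliciting them.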

Journal ArticleDOI
TL;DR: It is argued that programming plans cannot be considered solely to be natural strategies that evolve independently of teaching nor as mere artifacts or static properties of a particular programming language, rather, such plans can be seen to be related to the expression of design-related skills.
Abstract: The notion of the programming plan as a description of one of the main types of strategy employed in the comprehension of programs is now widely accepted to form an adequate basis for an account of programming knowledge. Such plans are thought to be used universally in all programming languages by expert programmers. Recent work, however, has questioned the psychological reality of such plans and has suggested that they may be artifacts of the particular programming language used and the structure that it imposes on the programmer via the constraints of certain features of its notation. This paper considers the results of two experimental studies that suggest that the development and use of programming plans is strongly tied to the particular learning experience of the programmer. It is argued that programming plans cannot be considered solely to be natural strategies that evolve independently of teaching nor as mere artifacts or static properties of a particular programming language. Rather, such plans can be seen to be related to the expression of design-related skills. This has a number of important implications for our understanding of the nature and development of programming plans, and in particular, it appears that the notion of the programming plan provides too limited a view to adequately and straightforwardly explain the differences between the novice's and the expert's programming performance.

Journal ArticleDOI
TL;DR: An Intelligent Fuzzy Temporal Relational Database (IFTReD) is described, an intelligent system-independent SR which allows for almost any degree of individualization the designer wishes to incorporate and is anticipated that this IFTReD will provide a significant improvement over standard AI storage techniques for the SR.
Abstract: The student record (SR) is a major source of input for any decision making done by an Intelligent Tutoring System (ITS) and is a basis of the individualization in such systems. However, most ITSs still have “generalized” student models which represent a type of student rather than a particular one. Until the SR becomes truly representative of each individual student, the goal of providing individualized tutoring cannot be attained. In this paper we describe an Intelligent Fuzzy Temporal Relational Database (IFTReD), an intelligent system-independent SR which allows for almost any degree of individualization the designer wishes to incorporate. It is anticipated that this IFTReD will provide a significant improvement over standard AI storage techniques for the SR. These improvements will be realized in terms of: (1) intelligence; (2) greater storage efficiency; (3) greater speed in retrieval and query; (4) ability to handle linguistic codes, ranges, fuzzy possibilities, and incomplete data in student models; (5) friendliness of query language; (6) availability of temporal knowledge to give a history of past performance; and (7) a more holistic view of the student, permitting greater individualization of the tutor.

Journal ArticleDOI
TL;DR: Rough classification of patients after highly selective vagotomy for duodenal ulcer is analysed from the viewpoint of sensitivity of previously obtained results to minor changes in the norms of attributes, leading to the general conclusion that original norms following from medical experience were well defined.
Abstract: Rough classification of patients after highly selective vagotomy (HSV) for duodenal ulcer is analysed from the viewpoint of sensitivity of previously obtained results to minor changes in the norms of attributes. The norms translate exact values of pre-operating quantitative attributes into some qualitative terms, e.g. “low”, “medium” and “high”. An extensive computational experiment leads to the general conclusion that original norms following from medical experience were well defined, and that the results of analysis of the considered information system using rough sets theory are robust in the sense of low sensitivity to minor changes in the norms of attributes.
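
A "norm" in this sense is a set of cut-points translating a quantitative attribute into qualitative terms, and the sensitivity question is whether shifting those cut-points changes the resulting labels. A minimal sketch (the thresholds and readings are invented, not the clinical norms from the study):

```python
# Sketch of a "norm": cut-points translating a quantitative attribute
# into qualitative terms, plus a naive sensitivity check (thresholds
# and data invented; not the paper's clinical norms).
def apply_norm(value, low_cut, high_cut):
    if value < low_cut:
        return "low"
    if value < high_cut:
        return "medium"
    return "high"

readings = [3.1, 4.9, 5.1, 8.4]           # e.g. a pre-operative measurement
original  = [apply_norm(v, 4.0, 7.0) for v in readings]
perturbed = [apply_norm(v, 4.2, 6.8) for v in readings]  # minor norm change

# Robustness in the paper's sense: minor changes to the norms should
# leave most qualitative labels (and hence the rough classification) intact.
unchanged = sum(a == b for a, b in zip(original, perturbed))
print(original, perturbed, f"{unchanged}/{len(readings)} labels unchanged")
```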

Journal ArticleDOI
TL;DR: A realistic example is presented to show how EMCUD enriches the knowledge base constructed by a repertory grid-oriented method and hence eases the refinement process.
Abstract: In this paper, we propose a knowledge acquisition method, EMCUD, which can elicit embedded meanings of the initial knowledge provided by domain experts. EMCUD also helps experts to decide the uncertainty of the embedded meanings according to the relationships between the embedded meanings and the initial knowledge. The strategy of EMCUD could easily be added to repertory grid-oriented methods or systems to enhance the knowledge in the prototype. We present a realistic example to show how EMCUD enriches the knowledge base constructed by a repertory grid-oriented method and hence eases the refinement process.

Journal ArticleDOI
TL;DR: A system which processes technical text semi-automatically and incrementally builds a conceptual model of the domain, which includes a parser with broad syntactic coverage, and a matcher retrieving subnetworks relevant to the current text fragment.
Abstract: We present a system which processes technical text semi-automatically and incrementally builds a conceptual model of the domain. Starting from an initial general model, knowledge-based text understanding is turned into knowledge acquisition. Incompletely understood text fragments may contain new information which should be integrated into the model under the control of an operator. The text is assumed to describe the domain fully. Typical problems in this domain are assumed to be solvable by indicating activities which manipulate objects. Activities, objects and their properties enter relationships that form a conceptual network. To test our representation, we have created a large hierarchy of concepts for PowerHouse Quiz. The system relies in its operation on the text and the growing network; it includes a parser with broad syntactic coverage, and a matcher retrieving subnetworks relevant to the current text fragment. The frequency of the operator's necessary interventions depends on the initial network's size, which will be determined experimentally. We discuss the status of the system and outline further work.

Journal ArticleDOI
TL;DR: Results show that natural language is an efficient and powerful means for expressing requests and is simple enough to support casual users with a general knowledge of the database contents; and it is flexible enough to assist problem-solving behaviour.
Abstract: Although there is much controversy about the merits of natural language interfaces, little empirical research has been conducted on the use of natural language interfaces for database access, especially for casual users. In this work casual users were observed while interacting with a real-life database using a natural language interface, Intellect. Results show that natural language is an efficient and powerful means for expressing requests. This is especially true for users with a good knowledge of the database contents regardless of training or previous experience with computers. Users generally have a positive attitude towards natural language. The majority of errors users make are directly related to restrictions in the vocabulary. However, feedback helps users understand the language limitations and learn how to avoid or recover from errors. Natural language processing technology is developed enough to handle the limited domain of discourse associated with a database; it is simple enough to support casual users with a general knowledge of the database contents; and it is flexible enough to assist problem-solving behaviour.

Journal ArticleDOI
TL;DR: The ILTS system is based on a very complete and “objective” grammar knowledge base, and students can at any moment during an exercise ask the system questions about the grammar, and they are immediately answered without losing the exercise context.
Abstract: In this paper, we present the theoretical background and describe the design and implementation of an intelligent language tutoring system (ILTS). The most important properties of our system are: (1) The system is based on a very complete and “objective” grammar knowledge base; (2) Students can at any moment during an exercise ask the system questions about the grammar, and they are immediately answered without losing the exercise context. Thus the normal behaviour of a tutor is better simulated, which contributes to a user-friendly interface; and (3) It allows for individual correction of errors and reaction to errors. This is due to the fact that the system is firmly based on a linguistically well-founded analysis. The sentences formulated by the students are parsed and analysed. They are not simply matched against predefined answers as is still the case with many other more classically oriented systems.

Journal ArticleDOI
TL;DR: This paper studies models of natural language from three different, but related, viewpoints, studying theoretical models of language, including simple random generative models of letters and words whose output, like genuine natural language, obeys Zipf's law.
Abstract: A model of a natural language text is a collection of information that approximates the statistics and structure of the text being modeled. The purpose of the model may be to give insight into rules which govern how language is generated, or to predict properties of future samples of it. This paper studies models of natural language from three different, but related, viewpoints. First, we examine the statistical regularities that are found empirically, based on the natural units of words and letters. Second, we study theoretical models of language, including simple random generative models of letters and words whose output, like genuine natural language, obeys Zipf's law. Innovation in text is also considered by modeling the appearance of previously unseen words as a Poisson process. Finally, we review experiments that estimate the information content inherent in natural text.
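
Zipf's law, the regularity mentioned above, states that the frequency of the r-th most frequent word is roughly proportional to 1/r, so rank times frequency should stay roughly constant. A quick empirical check (a generic sketch, not the paper's code; the input file path is supplied by the user):

```python
# Quick empirical check of Zipf's law: the r-th most frequent word
# should have frequency roughly proportional to 1/r, so rank * frequency
# stays roughly constant. Pass the path of any long plain-text file.
import sys
from collections import Counter

def zipf_table(text: str, top: int = 10):
    counts = Counter(text.lower().split())
    return [(r, w, f, r * f)
            for r, (w, f) in enumerate(counts.most_common(top), start=1)]

if __name__ == "__main__":
    text = open(sys.argv[1], encoding="utf-8").read()
    for rank, word, freq, product in zipf_table(text):
        print(f"{rank:>4}  {word:<15} {freq:>7}  rank*freq = {product}")
    # On genuine natural language the final column is roughly flat.
```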

Journal ArticleDOI
TL;DR: A clear dissociation was noted between users' procedural knowledge of a task, reflected in their performance ability; and their metaknowledge of the task, i.e. their awareness of what procedural knowledge would be required in order to complete the task.
Abstract: Many people teach themselves how to use word-processing systems, but how successful are they in their endeavor? This study investigates a number of theoretical and practical issues associated with self-directed learning. Users of differing experience were asked to perform a simple task, using an unfamiliar word-processing system. However, they were given no information about the new system, prior to task commencement, save information they explicitly requested. An analysis of users' questions revealed that only the most experienced had a suitable mental task description available to them. Others relied upon visible components of the task to cue their questioning strategy in a manner which suggested reliance upon a recognition, rather than a recall, strategy. A clear dissociation was noted between users' procedural knowledge of a task, reflected in their performance ability, and their metaknowledge of the task, i.e. their awareness of what procedural knowledge would be required in order to complete the task. The implications of these findings for the design of user support systems, and for user modelling, are discussed.

Journal ArticleDOI
TL;DR: It is suggested that gestural commands tend to be terse, common, unambiguous, iconic, and similar to the spontaneous hand gestures that accompany speech.
Abstract: A distinction is drawn between conventional lexical commands and gestural commands (e.g. circles, arrows, X's, etc.). The distinction is discussed in the context of a central metaphor that likens computer use to communication between programmer and user. A number of limitations and benefits unique to gestural interfaces are described. It is suggested that gestural commands tend to be terse, common, unambiguous, iconic, and similar to the spontaneous hand gestures that accompany speech. The potential effects of these five qualities are outlined by summarizing selected research from cognitive and social psychology. Some potential applications are also described.

Journal ArticleDOI
TL;DR: This work proposes to develop and evaluate a knowledge-acquisition tool that helps with extending a knowledge base through interaction with a knowledge engineer and demonstrates on a complex extension to a large knowledge base.
Abstract: Knowledge integration is the task of incorporating new information into existing knowledge. The task is difficult because the consequences of an addition to an extensive knowledge base can be numerous and subtle. Current methods for automated knowledge acquisition largely ignore this task, although it is increasingly important with the move toward large scale, multifunctional knowledge bases. To study knowledge integration, we propose to develop and evaluate a knowledge-acquisition tool that helps with extending a knowledge base through interaction with a knowledge engineer. An initial prototype of this tool has been implemented and demonstrated on a complex extension to a large knowledge base.

Journal ArticleDOI
TL;DR: The aim of the study was to investigate the relationship between mental models required by a database system, clues provided by the system to these models, and users' behaviour in operating the system.
Abstract: The aim of the study was to investigate the relationship between mental models required by a database system, clues provided by the system to these models, and users' behaviour in operating the system. For this purpose a fulltext database system containing news articles about telecommunication and information technology was used. Ten non-professional computer users participated in the study. The subjects' tasks were to retrieve and display certain articles and pieces of articles on the screen. By analysing the knowledge needed to carry out these tasks, the required mental models could be identified. Then, through analysing the specific system clues to the required mental models, difficulties subjects would run into were predicted. The fulltext database system employed for the study operated on three different levels of display. That is, it operated as if it had three different modes, each level corresponding to a different mode. The clarity of clues to adequate mental models differed on the three levels. Most salient were clues concerning the order in which articles and pieces of articles were presented. These clues were least clear on the second level of display. As predicted, this was also the level of display on which subjects' performance was worst.

Journal ArticleDOI
TL;DR: A model of program design is proposed to explain program variability, and is experimentally supported; variability is shown to be the result of different decisions made by programmers during three stages in the design process.
Abstract: A model of program design is proposed to explain program variability, and is experimentally supported. Variability is shown to be the result of different decisions made by programmers during three stages in the design process. In the first stage, a solution is created based on a particular design approach. In the second stage, actions in the solution are organized by features they share. The actions may then be merged together to define a more concise solution in program code, the third stage of design. Different programs will be created depending on the approach taken to design, the features selected to group actions in a solution, and the features used to merge actions to form program code. Each of the variants observed in the study was traced to the use of a specific piece of information by a programmer at one of the three stages of program design. Many different programs were created as the process of design interacted with the knowledge of the programmer.

Journal ArticleDOI
TL;DR: An algorithm to segment handwritten postal codes is presented and results with handwritten Canadian postal codes show the algorithm to be effective, robust, and general enough that it can be suitably adapted for the postal codes of many other countries.
Abstract: Postal codes, although known by different names, are used in many countries to uniquely identify a location. In automated mail sorting, these postal codes are recognized by optical character readers. One problem faced for the recognition of handwritten postal codes is that of segmentation; each character in the postal code should ideally be assigned a different segment. Difficulties arise in segmentation because of the writing habits of people: characters may be broken, may touch one another, or may overlap one another. In this paper we present an algorithm to segment handwritten postal codes. The heuristic components of the algorithm were developed after analysing the writing habits of the people. Experimental results with handwritten Canadian postal codes show the algorithm to be effective, robust, and general enough that it can be suitably adapted for the postal codes of many other countries.
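
The paper's own heuristics are not reproduced in the abstract, but the naive baseline they improve upon is easy to sketch: split a binarized image at columns containing no ink, which is precisely what fails when characters touch or overlap. The sketch below is that generic baseline, not the paper's algorithm.

```python
# Baseline vertical-projection segmentation of a binarized image
# (rows of 0/1 pixels). This is the naive approach whose failure modes
# the abstract describes -- it splits at empty columns, so touching or
# overlapping characters end up in one segment, and broken characters
# in several (not the paper's own heuristic algorithm).
def segment_columns(image):
    """Return (start, end) column ranges of connected ink spans."""
    n_cols = len(image[0])
    ink = [any(row[c] for row in image) for c in range(n_cols)]
    segments, start = [], None
    for c, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = c
        elif not has_ink and start is not None:
            segments.append((start, c - 1))
            start = None
    if start is not None:
        segments.append((start, n_cols - 1))
    return segments

# Two well-separated strokes -> two segments; touching strokes would
# merge into one, which is where heuristic splitting must take over.
img = [
    [1, 1, 0, 0, 1, 1],
    [1, 1, 0, 0, 1, 1],
]
print(segment_columns(img))  # [(0, 1), (4, 5)]
```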

Journal ArticleDOI
TL;DR: This paper explains why certain elements of direct manipulation can invoke the second form of feedback, and demonstrates that feedback resulting from direct manipulation is more effective and time-efficient than the distinct form of feedback in conditions of high task complexity.
Abstract: Correctly designed feedback can play a pivotal role in improving performance. Human-computer interaction can generate various forms of feedback, and it is important to examine the effectiveness of the different options. This work compares: (1) feedback that is presented as information which is distinct from the user's action, with (2) feedback that is generated by direct manipulation and is embedded in the same information which facilitates the user's action. The former is the traditional form of feedback in which the user acts and receives feedback information from a distinct source. The latter is information generated during the user's action, and it becomes effective feedback only when the individual uses this information as feedback. This paper explains why certain elements of direct manipulation can invoke the second form of feedback. An experiment demonstrates that feedback resulting from direct manipulation is more effective and time-efficient than the distinct form of feedback in conditions of high task complexity. However, direct manipulation has its limits and must be complemented with traditional forms of feedback for complex cognitive tasks.