
Showing papers on "Task (computing) published in 1997"


Patent
06 Oct 1997
TL;DR: In this paper, an intelligent agent executes tasks by using intelligent agent learning modules which store information necessary to execute the tasks, such as a command to execute a task or a data which causes a task request to be generated.
Abstract: An intelligent agent executes tasks by using intelligent agent learning modules which store information necessary to execute the tasks. A computer receives a command to execute a task or receives data which causes a task request to be generated. The computer accesses appropriate information in the learning modules to execute the task, and outputs instructions for output devices to execute the tasks. The tasks may be executed at a future time and on a periodic basis. The learning modules build up a database of information from previously executed tasks, and the database is used to assist in executing future tasks. The tasks include physical commercial transactions. Portions of the intelligent agent may be remotely located and interconnected via remote communication devices.

541 citations


Journal ArticleDOI
TL;DR: It is argued that for many common machine learning problems, although in general the authors do not know the true (objective) prior for the problem, they do have some idea of a set of possible priors to which the true prior belongs.
Abstract: A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks.

496 citations


Patent
19 Dec 1997
TL;DR: In this article, the authors present a method for using a single abstract virtual machine execution stack with multiple independent stacks in order to improve the efficiency of distinguishing memory pointers from non-pointers.
Abstract: The invention is a method for use in executing portable virtual machine computer programs under real-time constraints. The invention includes a method for implementing a single abstract virtual machine execution stack with multiple independent stacks in order to improve the efficiency of distinguishing memory pointers from non-pointers. Further, the invention includes a method for rewriting certain of the virtual machine instructions into a new instruction set that more efficiently manipulates the multiple stacks. Additionally, using the multiple-stack technique to identify pointers on the run-time stack, the invention includes a method for performing efficient defragmenting real-time garbage collection using a mostly stationary technique. The invention also includes a method for efficiently mixing a combination of byte-code, native, and JIT-translated methods in the implementation of a particular task, where byte-code methods are represented in the instruction set of the virtual machine, native methods are written in a language like C and represented by native machine code, and JIT-translated methods result from automatic translation of byte-code methods into the native machine code of the host machine. Also included in the invention is a method to implement a real-time task dispatcher that supports arbitrary numbers of real-time task priorities given an underlying real-time operating system that supports at least three task priority levels. Finally, the invention includes a method to analyze and preconfigure virtual machine programs so that they can be stored in ROM prior to program execution.
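The multiple-stack idea can be illustrated with a toy sketch (my own simplified Python rendering of the general technique, not the patented mechanism; names like `DualStack` and `gc_roots` are invented for illustration):

```python
class DualStack:
    """Toy sketch of the multiple-stack idea: references and raw values
    are pushed onto separate stacks, so a garbage collector can treat
    everything on the reference stack as a root and never has to guess
    whether a given stack word is a pointer."""

    def __init__(self):
        self.refs = []   # object references only (scanned by the GC)
        self.data = []   # non-pointer values only (ints, floats, ...)

    def push_ref(self, obj):
        self.refs.append(obj)

    def push_val(self, value):
        self.data.append(value)

    def gc_roots(self):
        # No tag bits or stack maps needed: every entry here is a pointer.
        return list(self.refs)

stack = DualStack()
stack.push_val(42)          # a plain integer goes to the data stack
stack.push_ref(["buffer"])  # a heap object goes to the reference stack
```

With a single mixed stack, the collector would need tag bits or compiler-generated stack maps to tell the 42 apart from a heap address; splitting the stacks makes root scanning trivial.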

267 citations


Patent
28 Aug 1997
TL;DR: Workflow systems interact with each other as peers using this mechanism by sending workflow execution requests, workflow script templates, and workflow execution environments to each other as discussed by the authors, where Task Request and Task Response messages are used to standardize the communication between Source Agents and Performer Agents, along with other messages for controlling and queuing Tasks.
Abstract: A mechanism for heterogeneous, peer-to-peer, and disconnected workflow execution across a network infrastructure. Performer Agent entities provide a homogeneous view of humans, applications, and heterogeneous workflow systems and components that act as Performers on the network by executing Tasks. Source Agent entities provide a homogeneous view of heterogeneous service requesters such as workflow scripts executing on different workflow systems, which generate Activities that need to execute on Performers as Tasks. Task Request and Task Response messages are used to standardize the communication between Source Agents and Performer Agents, along with other messages for controlling and queuing Tasks. Workflow systems interact with each other as peers using this mechanism by sending workflow execution requests, workflow script templates, and workflow execution environments to each other. Disconnected operation is handled by ensuring the continuous availability of Source Agents and Performer Agents on the network and providing a mechanism for Sources to disconnect from Source Agents and Performers to disconnect from Performer Agents.

253 citations


Patent
18 Apr 1997
TL;DR: In this paper, a modified "best-first" search technique that combines optimization, artificial intelligence, and constraint processing to arrive at near-optimal assignment and scheduling solutions is presented.
Abstract: A system and method for assigning and scheduling resource requests to resource providers use a modified "best-first" search technique that combines optimization, artificial intelligence, and constraint processing to arrive at near-optimal assignment and scheduling solutions. In response to changes in a dynamic resource environment, potential changes to an existing assignment set are evaluated in a search for a better solution. New calls are assigned and scheduled as they are received, and the assignment set is readjusted as the field service environment changes, resulting in global optimization. Each search operation is in response either to an incremental change to the assignment set, such as adding a new resource request, removing a pending resource request, or reassigning a pending resource request, or to a request for further evaluation. Thus, the search technique assumes that the existing assignment set is already optimized and limits the task to evaluating only the effects of the incremental change. In addition, each search operation produces a complete assignment and scheduling solution. Consequently, the search can be terminated at any point to accept the best solution generated so far, making the technique an "anytime" search.
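The "anytime" property rests on always holding a complete solution while the search refines it. A minimal best-first loop with that property can be sketched as follows (my own illustration in Python, not the patented algorithm; the function names are invented):

```python
import heapq

def anytime_search(initial, neighbors, cost, budget):
    """Best-first search over complete solutions: expand the most
    promising state each round, but always remember the best complete
    solution found so far, so stopping early still yields an answer."""
    best, best_cost = initial, cost(initial)
    frontier = [(best_cost, 0, initial)]  # (cost, tie-breaker, state)
    tie = 1
    seen = {initial}
    for _ in range(budget):              # stop any time; 'best' stays valid
        if not frontier:
            break
        _, _, current = heapq.heappop(frontier)
        for nxt in neighbors(current):
            if nxt in seen:
                continue
            seen.add(nxt)
            c = cost(nxt)
            if c < best_cost:
                best, best_cost = nxt, c
            heapq.heappush(frontier, (c, tie, nxt))
            tie += 1
    return best, best_cost

# Toy problem: minimize (x - 7)^2 over integers by local moves.
best, best_cost = anytime_search(
    0, lambda x: [x - 1, x + 1], lambda x: (x - 7) ** 2, budget=50)
```

Because every state in the loop is a full assignment rather than a partial one, truncating the budget merely trades solution quality for time, which is the essence of an anytime search.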

175 citations


Book
01 Oct 1997
TL;DR: In this article, the authors describe the evolution of the design process in a factory environment, including the use of technology to accelerate information flow and reduce delays in the process of designing a product.
Abstract: INTRODUCTION: Revolution in the Factory / Into the Witch Doctor's Tent / There Are No Best Practices / Where Ideas Come From / The Organization of This Book
PART ONE: THE DESIGN FACTORY
1. INTO THE DESIGN FACTORY: Our Goals Are Economic / Products vs. Designs / Design-in-Process Inventory / Rising Cost of Change / Late-Breaking News / One-Time Processes / Expanding Work / Summary
PART TWO: THINKING TOOLS
2. MAKING PROFITS NOT PRODUCTS: Project Models / Application Models / Models of Process Economics / Tactical vs. Strategic Decisions / Some Practical Tips / Summary
3. ENTERING THE LAND OF QUEUES: An Introduction to Queueing Theory / The Economics of Queues / Depicting Queues / Implications of Queueing Theory / Dealing with Queues (Increasing Capacity; Managing Demand; Reducing Variability; Using Control Systems) / The Location of Batch Queues / Little's Law / Typical Queues / Summary
4. IT'S ALL ABOUT INFORMATION: Information Theory / Efficient Generation of Information / Maximizing Information: The Magic Number 50 Percent / Information Differs in Value (Timing: Earlier Is Better; Batch Size Affects Timing; Iterations Generate Early Information; The Potential Profit Impact) / Do It Right the First Time? / Communicating Failures / Protecting Against Failure / Task Sequencing / Monitoring / Summary
5. JUST ADD FEEDBACK: Systems Theory / Systems with Feedback / Properties of Systems with Feedback (Difficulty in Troubleshooting; Instability and Chaos; Accuracy and Feedback; Variability Within a System) / More Complex Control Systems / Summary
PART THREE: ACTION TOOLS
6. CHOOSE THE RIGHT ORGANIZATION: The Organization as a System / Assessing Organizational Forms / Efficiency: The Functional Organization / Speed: The Autonomous Team / Performance and Cost: Hybrid Organizations / Dividing Responsibilities / Communications (Old Communications Tools; New Communications Technologies) / Colocation / Summary
7. DESIGN THE DESIGN PROCESS: Combining Structure and Freedom (One-Time Processes; Modular Processes; A Pattern Language) / Designing Process Stages (Input Subprocesses; Technology vs. Product Development; Controlling Queues; Subprocess Design; Output Processes) / Key Design Principles (Sequential vs. Concurrent Processes; Managing Information Profiles; Decentralizing Control and Feedback; Location of Batch Queues) / Specific Process Implementations / Evolving the Process / Summary
8. PRODUCT ARCHITECTURE: THE INVISIBLE DESIGN: Underlying Principles / Modularity / Segregating Variability / Interface Management / Specific Architectural Implementations (Low-Expense Architectures; Low-Cost Architectures; High-Performance Architectures; Fast-Development Architectures) / Who Does It? / Summary
9. GET THE PRODUCT SPECIFICATION RIGHT: It Starts with Strategy / Selecting the Customer / Understanding the Customer (Customer Interviews; Meticulous Observation; Focus Groups) / Creating a Good Specification (The Minimalist Specification; A Product Mission; The Specification Process) / Using the Specification / Specific Implementations / Summary
10. USE THE RIGHT TOOLS: The Use of Technology (Accelerated Information Flow; Improved Productivity; Reduced Delays) / Implementation Principles (Technology Changes Process; Pay Attention to Economics) / Technologies (Design Automation; Prototyping and Testing; Communications; Information Storage and Retrieval) / Summary
11. MEASURE THE RIGHT THINGS: General Principles (Drive Metrics from Economics; The Control Triangle; Decentralizing Control; Selecting Metrics) / Project-Level Controls (Expense-Focused Controls; Cost-Focused Controls; Performance-Focused Controls; Speed-Focused Controls) / Business-Level Controls (Expense-Focused Controls; Cost-Focused Controls; Performance-Focused Controls; Speed-Focused Controls) / Summary
12. MANAGE UNCERTAINTY AND RISK: Market and Technical Risk / Managing Market Risk (Use a Substitute Product; Simulate the Risky Attribute; Make the Design Flexible; Move Fast) / Managing Technical Risk (Controlling Subsystem Risk; Controlling System Integration Risk; Back-up Plans) / World-Class Testing (Cheap Testing; Low Unit Cost Impact; Maximizing Performance; Fast Testing; Continuous Improvement) / Summary
PART FOUR: NEXT STEPS
13. NOW WHAT DO I DO?: Do Your Math / Use Decision Rules / Pay Attention to Capacity Utilization / Pay Attention to Batch Size / Respect Variability / Think Clearly About Risk / Think Systems / Respect the People / Design the Process Thoughtfully / Pay Attention to Architecture / Deeply Understand the Customer / Eliminate Useless Controls / Get to the Front Lines / Avoid Slogans
Selected Bibliography / Index / About the Author

172 citations


Proceedings ArticleDOI
27 Mar 1997
TL;DR: A study comparing user performance with Elastic Windows and traditional window management techniques for 2, 6, and 12 window situations suggests promising possibilities for multiple window operations and hierarchical nesting, which can be applied to the next generation of tiled as well as overlapped window managers.
Abstract: Most windowing systems follow the independent overlapping windows approach, which emerged as an answer to the needs of the 1980s' technology. Due to advances in computers and display technology, and increased information needs, modern users demand more functionality from window management systems. We proposed Elastic Windows with improved spatial layout and rapid multi-window operations as an alternative to current window management strategies for efficient personal role management [12]. In this approach, multi-window operations are achieved by issuing operations on window groups hierarchically organized in a space-filling tiled layout. This paper describes the Elastic Windows interface briefly and then presents a study comparing user performance with Elastic Windows and traditional window management techniques for 2, 6, and 12 window situations. Elastic Windows users had statistically significantly faster performance for all 6 and 12 window situations, for task environment setup, task environment switching, and task execution. For some tasks there was a ten-fold speed-up in performance. These results suggest promising possibilities for multiple window operations and hierarchical nesting, which can be applied to the next generation of tiled as well as overlapped window managers.

145 citations


Journal ArticleDOI
TL;DR: The results indicate that two concurrent tasks interfere, with a resulting increase in reaction time, if they require activation of overlapping parts of the cortex.

112 citations


Patent
02 Jun 1997
TL;DR: In this paper, a method of operating a multiprocessor system having a predefined number of processing units for processing data, includes obtaining load information representing a loading of each of a number of randomly selected ones of the processing units.
Abstract: A method of operating a multiprocessor system having a predefined number of processing units for processing data, includes obtaining load information representing a loading of each of a number of randomly selected ones of the processing units. The number of randomly selected processing units is greater than 1 and substantially less than the predefined number of processing units. A least loaded of the randomly selected processing units is identified from the obtained load information. The data is directed to the identified least loaded randomly selected processing unit for processing.
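Sampling a few random units and picking the least loaded is the well-known "power of d choices" load-balancing idea. A minimal sketch (my own Python illustration of the general technique, with invented names like `assign`):

```python
import random

def assign(loads, d=2, rng=random):
    """Sample d distinct processing units at random and return the index
    of the least loaded one, using only those d load readings."""
    candidates = rng.sample(range(len(loads)), d)
    return min(candidates, key=lambda i: loads[i])

# Dispatch 10,000 tasks to 100 units, probing just 2 units per task.
loads = [0] * 100
for _ in range(10_000):
    loads[assign(loads, d=2)] += 1
```

Probing only two random units per task is known to keep the load nearly balanced at a small fraction of the cost of polling every unit, which is why d can be "substantially less" than the total number of units.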

95 citations


Patent
17 Jun 1997
TL;DR: In this paper, a method for decoding an encoded MPEG video stream in an efficient manner making optimal use of available system memory and computational resources is presented. But the method is limited to the decoding of MPEG-video streams.
Abstract: A novel apparatus and method are disclosed to decode an encoded MPEG video stream in an efficient manner, making optimal use of available system memory and computational resources. The present invention partitions the MPEG video decode task into software tasks, which are executed by a CPU, and hardware tasks, which are implemented in dedicated video hardware. Software tasks represent those tasks which do not require extensive memory or computational resources. On the other hand, tasks implemented in dedicated video hardware represent those tasks which involve computation- and memory-intensive operations. Synchronization between software tasks executed by the CPU and hardware tasks implemented in dedicated video hardware is achieved by means of various data structures, control structures, and device drivers.

86 citations


Patent
29 Apr 1997
TL;DR: In this paper, the authors define a method for emulating an iterated process represented by a series of related tasks and a control mechanism that monitors and enables the iterative execution of those tasks until data associated with the process converges to predetermined goals or objectives.
Abstract: The present invention defines a method for emulating an iterated process represented by a series of related tasks and a control mechanism that monitors and enables the iterative execution of those tasks until data associated with the process converges to predetermined goals or objectives. The invention defines a method in which fuzzy neural networks and discrete algorithms are applied to perform the process tasks and in which configurable, reloadable finite state machines are applied to control the execution of those tasks. In particular, the present invention provides a method for emulating the process of designing integrated circuit (IC) applications and printed circuit board (PCB) applications for the purpose of simulating, emulating, analyzing, optimizing and predicting the behavioral and physical characteristics of the application at the earliest possible stage of the process. The invention applies fuzzy neural networks and configurable, reloadable finite state machines to emulate the IC or PCB design process, enabling the invention to emulate the computer-aided design (CAD) tools used to perform the design process tasks as well as the individuals using those tools. By emulating the combination of man and machine performances, the invention can more accurately predict the results of a given task than tools that consider only the machine element. The invention also provides a means to adapt the performance and behavior of any element of the invention using historical data compiled from previous design or manufacturing experiences, allowing the invention to incorporate the knowledge gained from previous designs into current designs.

Patent
18 Sep 1997
TL;DR: In this paper, the authors propose an information processing apparatus that calculates a total consumption power of devices used by each task, and assigns higher execution priority to a task which uses a device with the largest consumption power.
Abstract: An information processing apparatus, which operates in a multi-task mode, calculates the total consumption power of the devices used by each task, and assigns higher execution priority to the task which uses the device with the largest consumption power, thereby shortening the execution time of the device with the largest consumption power and suppressing the total consumption power of the apparatus. When a device is started upon switching of tasks, if restarting the device would cause the total consumption power to exceed the allowable power of the apparatus, the task is set in a waiting state until the operations of other devices are completed and the consumption power is lowered, at which point the device is ready for use by the task.
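The priority rule can be sketched in a few lines (a hedged illustration of the idea only, not the patented apparatus; the device names, wattages, and function names below are hypothetical):

```python
def prioritize(tasks, device_power):
    """Order task names so that the task whose devices draw the most
    total power gets the highest execution priority (runs first),
    shortening the time the most power-hungry devices stay active."""
    def total_power(name):
        return sum(device_power[d] for d in tasks[name])
    return sorted(tasks, key=total_power, reverse=True)

# Hypothetical devices with per-device power draw in watts.
power = {"disk": 5.0, "display": 3.5, "modem": 1.2}
tasks = {
    "backup":  {"disk"},               # 5.0 W total
    "browse":  {"display", "modem"},   # 4.7 W total
    "monitor": set(),                  # uses no powered device
}
order = prioritize(tasks, power)
```

Running the heaviest consumer first shortens the window during which its device must stay powered, which is the mechanism the abstract describes for suppressing total consumption.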

Patent
23 Jan 1997
TL;DR: In this article, a data processing system has multiple independent paths for communication between a host and a plurality of storage devices where each path has its own queue for servicing requests generated by the host for accessing the storage devices.
Abstract: A data processing system having multiple independent paths for communication between a host and a plurality of storage devices where each path has its own queue for servicing requests generated by the host for accessing the storage devices. Each request is assigned a unique sequential ID before it is stored, along with its unique ID, in all the queues. Each storage device has a "mailbox" register where the ID and the status of the latest request being carried out is stored. Queues are serviced and their status updated based on the content of the mailbox in each storage device. The combination of assigning a unique task ID to each request and a "mailbox" register in each storage device allows the queue in each path to be completely out of sync with each of the queues in the other paths without causing data integrity problems, duplication of requests at the device level, or a need for complex locking schemes to keep the queues in sync with each other.
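The ID-plus-mailbox bookkeeping can be sketched with a toy model (my own simplified Python illustration, not the patented design; `MultiPathIO` and its single shared mailbox are invented for clarity, whereas the patent keeps one mailbox per storage device):

```python
import itertools
from collections import deque

class MultiPathIO:
    """Toy sketch: every request gets a global sequential ID and is
    placed in every path's queue; a 'mailbox' records the ID of the
    last request the device completed, so each path can discard
    already-serviced entries without coordinating with other paths."""

    def __init__(self, n_paths):
        self._ids = itertools.count(1)
        self.queues = [deque() for _ in range(n_paths)]
        self.mailbox = 0   # ID of the latest request carried out

    def submit(self, request):
        rid = next(self._ids)
        for q in self.queues:          # stored in ALL queues
            q.append((rid, request))
        return rid

    def service(self, path):
        """Advance one path, skipping entries the device already did."""
        q = self.queues[path]
        while q and q[0][0] <= self.mailbox:
            q.popleft()                # duplicate of a completed request
        if q:
            rid, _req = q.popleft()
            self.mailbox = rid         # device records what it just did
            return rid
        return None
```

Because each path checks the mailbox before dispatching, the queues can drift arbitrarily out of sync without duplicating work at the device, which is the data-integrity property the abstract claims.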

Patent
24 Sep 1997
TL;DR: In this article, the authors present a structure and method for implementing a configurable and scalable A/V system that enables a user to perform processes across one or more audio/video processing devices coupled together via a network.
Abstract: The system and method of the present invention provide a structure and method for implementing a configurable and scalable A/V system that enables a user to perform processes across one or more A/V processing devices coupled together via a network. In one embodiment, a plurality of configurable A/V systems are coupled via a network. At least two of the A/V systems include digital signal processors (DSPs) that are programmable. The A/V systems also include other resources, such as data storage, synchronizers, analog-to-digital converters, digital-to-analog converters, etc., to support the variety of audio/video processing to be performed. In one embodiment, the user inputs at least one task to be performed. The task is broken down into basic processing components or primitives. These primitives are defined in a processor descriptor block maintained by the system. The processor descriptor block indicates the processing requirements and distributability of the process across the network. For example, in one embodiment, the processor descriptor block identifies the number of cycles necessary to perform the process, any resource dependencies, and whether the process can be performed across multiple networked systems. A control process therefore references the processor descriptor block and determines the bandwidth and resource requirements. The bandwidth and resource requirements are then compared to the device and system configurations and allocations to determine if the primitive can be performed using available bandwidth and resources in the device and devices coupled via the network.

Journal ArticleDOI
TL;DR: Comprehensive computer simulation reveals that the average allocation time and waiting delay are much smaller than earlier schemes of comparable performances, irrespective of the size of meshes and distribution of the shape of the incoming tasks.
Abstract: Efficient allocation of processors to incoming tasks in parallel computer systems is very important for achieving the desired high performance. It requires recognizing the free available processors with minimum overhead. In this paper, we present an efficient task allocation scheme for 2D mesh architectures. By employing a new approach for searching the mesh, our scheme can find the available submesh without scanning the entire mesh, unlike earlier designs. Comprehensive computer simulation reveals that the average allocation time and waiting delay are much smaller than earlier schemes of comparable performances, irrespective of the size of meshes and distribution of the shape of the incoming tasks.
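For contrast, the naive baseline the paper improves on — scanning the entire mesh for a free submesh — takes only a few lines. This sketch is my own Python illustration of that exhaustive search, not the paper's scheme, which is designed to avoid exactly this full scan:

```python
def find_submesh(busy, p, q):
    """Brute-force first fit: return the top-left corner (r, c) of a
    free p x q submesh in a 2D mesh, where busy[r][c] is truthy when
    processor (r, c) is allocated; return None if no submesh fits."""
    rows, cols = len(busy), len(busy[0])
    for r in range(rows - p + 1):
        for c in range(cols - q + 1):
            if all(not busy[i][j]
                   for i in range(r, r + p)
                   for j in range(c, c + q)):
                return (r, c)
    return None
```

Every allocation here costs O(rows x cols x p x q) in the worst case, which is why reducing the scanned area directly lowers the average allocation time reported in the simulations.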

Journal ArticleDOI
TL;DR: A two-stage heuristic procedure, based on an integer programming formulation of the problem, is provided: a simplex-like procedure which, while attempting to minimize the cycle time, also smooths out the workload among the workstations.

Patent
31 Jan 1997
TL;DR: In this article, a method and an apparatus for providing enhanced pay per view in a video server is described, in which a group of non-preemptible tasks corresponding to videos are scheduled on a predetermined number of processors, each task is defined by a computation time and a period.
Abstract: A method and an apparatus are disclosed for providing enhanced pay-per-view in a video server. Specifically, the present invention periodically schedules a group of non-preemptible tasks corresponding to videos in a video server having a predetermined number of processors, wherein each task is defined by a computation time and a period. To schedule the group of tasks, the present invention divides the tasks into two groups according to whether they may be scheduled on less than one processor, and schedules each group separately. For the group of tasks schedulable on less than one processor, the present invention conducts a first determination of schedulability. If the first determination deems the group of tasks not schedulable, then the present invention conducts a second determination of schedulability. If the second determination also deems the group of tasks not schedulable, then the present invention recursively partitions the group of tasks into subsets and re-performs the second determination of schedulability. Recursive partitioning continues until the group of tasks is deemed schedulable or is no longer partitionable. In the latter case, the group of tasks is deemed not schedulable.

Journal Article
TL;DR: The key bimanual instrument tasks involved in laparoscopic surgery have been abstracted for use in a virtual reality surgical skills evaluator and trainer and represent a significant advance over the subjective assessment of training performances with existing "plastic box" basic trainers.
Abstract: The key bimanual instrument tasks involved in laparoscopic surgery have been abstracted for use in a virtual reality surgical skills evaluator and trainer. The trainer uses two laparoscopic instruments mounted on a frame with position sensors which provide instrument movement data that is translated into interactive real time graphics on a PC (P133, 16 Mb RAM, graphics acceleration card). An accurately scaled operating volume of 10 cm3 is represented by a 3D cube on the computer screen. "Camera" position and size of target objects can be varied for different skill levels. Targets appear randomly within the operating volume according to the skill task and can be grasped and manipulated with the instruments. Accuracy and errors during the tasks and time to completion are logged. Mist VR has tutorial, training, examination, analysis and configuration modes. Six tasks have been selected and include combinations of instrument approach, target acquisition, target manipulation and placement, transfer between instruments, target contact with optional diathermy, and controlled instrument withdrawal/replacement. Tasks can be configured for varying degrees of difficulty and the configurations saved to a library for reuse. Specific task configurations can be assigned to individual students. In the examination mode the supervisor can select the tasks, repetitions and order and save to a specific file for that trainee. Progress can be assessed and there is the option for playback of the training session or examination. Data analyses permit overall, including task, and right or left hand performances to be quantified. Mist VR represents a significant advance over the subjective assessment of training performances with existing "plastic box" basic trainers.

Proceedings ArticleDOI
Shugen Ma1, M. Konno1
20 Apr 1997
TL;DR: This work proposes a novel obstacle avoidance technique for the hyper redundant manipulator to perform a payload location task from point to point while avoiding existing static obstacles in the environment.
Abstract: A hyper redundant manipulator has a very large or infinite degree of kinematic redundancy, and thus possesses unconventional features such as the ability to enter a narrow space while avoiding obstacles. We propose a novel obstacle avoidance technique for the hyper redundant manipulator to perform a point-to-point payload location task while avoiding static obstacles in the environment. The scheme is based on analysis in a defined posture space, where three parameters are used to determine the hyper redundant manipulator configurations. The scheme is verified by computer simulation using the model of the developed Hyper-R Arm, which shows that our method works well and that the obstacles are avoided globally.

Patent
15 Jul 1997
TL;DR: In this article, a LED matrix display with a variety of enhanced functionalities including a scheduler which schedules tasks based upon commands correlated with real time, a tracking and accounting procedure which provides a database file which accounts for tasks as a function of the time a task is performed, sports display presentations of various sports information associated with different sports based upon corresponding sports IDs, and an improved error reporting capability in which detected errors are displayed using clear descriptive text information.
Abstract: A LED matrix display having a variety of enhanced functionalities including: a scheduler which schedules tasks based upon commands correlated with real time; a tracking and accounting procedure which provides a database file which accounts for tasks as a function of the time a task is performed; sports display presentations of various sports information associated with different sports based upon corresponding sports IDs; the use of virtual display images or windows which may vary over time and which overlay template or faceplate images; and an improved error reporting capability in which detected errors are displayed using clear descriptive text information.

Journal ArticleDOI
TL;DR: The present results are interpreted to show that actual performance of actions at study provides more information than does only the intention to perform actions at test, pointing to encoding processes as the critical variable.
Abstract: Memory for subject-performed tasks—that is, for simple actions such as lifting a pen, which subjects perform overtly—is better than memory for verbal tasks—that is, when subjects only listen to the action phrases. Here I investigated whether this effect depends on actual performance or whether it also shows up when there is only an intention to perform the task. Koriat, Ben-Zur, and Nussbaum (1990) found that the intention to perform items at test enhanced free recall more than did verbal tasks. Brooks and Gardiner (1994), however, were not able to replicate this finding. In four experiments, I attempted to reconcile this discrepancy by comparing subject-performed tasks, to-be-performed tasks, and verbal tasks under different conditions. The outcome depended on whether a within-subjects design or a between-subjects design was used. In the between-subjects design, memory for subject-performed tasks was better than memory for to-be-performed tasks, and both of these led to better recall performance than did verbal tasks. In a within-subjects design, in contrast, memory for to-be-performed tasks was no different from memory for verbal tasks. These results were independent of whether the test mode was congruent or incongruent. Thus, the discrepant findings of Koriat et al. and of Brooks and Gardiner seem to be due to the design used, pointing to encoding processes as the critical variable. The present results are interpreted to show that actual performance of actions at study provides more information than does only the intention to perform actions at test.

Patent
Minh Hoang1
21 Mar 1997
TL;DR: In this article, a software implementation of a modem, particularly designed to execute on a general purpose host processor, controlled by a non-real-time, multi-tasking operating system (OS), such as the Windows 95 OS, is presented.
Abstract: A software implementation of a modem, particularly designed to execute on a general-purpose host processor controlled by a non-real-time, multi-tasking operating system (OS), such as the Windows 95 OS. The software modem is scaleable and portable. In this fashion, communication protocols (particularly datapumps) may be easily added to, or removed from, the system, and the modem may be easily adapted for use on other types of processors and operating systems. The controller and datapump portions execute as a plurality of interacting subsystems, each of which can execute at at least one of several priority levels. An HRT-level routine is responsible for handling an ASIC that buffers transmit and receive samples destined for and received from the phone lines. An SRT-level task includes logic that needs time functionality but which is not time-critical like the HRT logic. BRT routines execute on an event-driven basis and are used for many controller functions.

Journal ArticleDOI
Yan Alexander Li1, John K. Antonio1, Howard Jay Siegel1, Min Tan1, Daniel W. Watson1 
TL;DR: The methodology uses a block-based approach to transform the program into a flow analysis tree and computes the execution time distribution for the program, given the execution modes for each node in the flow analysis tree and appropriate probabilistic models for control- and data-conditional constructs.
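The paper's own algorithm is not reproduced here, but the kind of probabilistic composition a block-based execution-time analysis relies on can be sketched under simple assumptions: a sequence of blocks sums their times (a convolution of distributions), and a conditional construct mixes the two branch distributions by the branch probability. All function names and the toy distributions are illustrative.

```python
from collections import defaultdict

# A distribution is a dict {execution_time: probability}.

def sequence(d1, d2):
    """Sequential blocks: the sum's distribution is the convolution."""
    out = defaultdict(float)
    for t1, p1 in d1.items():
        for t2, p2 in d2.items():
            out[t1 + t2] += p1 * p2
    return dict(out)

def conditional(p_true, d_then, d_else):
    """A conditional construct: probability-weighted mixture of branches."""
    out = defaultdict(float)
    for t, p in d_then.items():
        out[t] += p_true * p
    for t, p in d_else.items():
        out[t] += (1 - p_true) * p
    return dict(out)

block_a = {2: 1.0}                            # always takes 2 time units
branch = conditional(0.5, {1: 1.0}, {3: 1.0}) # fast or slow branch
program = sequence(block_a, branch)
```

Applying these two combinators bottom-up over a flow analysis tree yields an execution-time distribution for the whole program.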

Patent
22 Apr 1997
TL;DR: In this paper, the onboard computer communicates with the pilot through a display and data input console (MCDU), which displays a list of tasks to be executed in the form of a series of selectable and activatable main zones (40, 50).
Abstract: The invention relates to air navigation assistance methods and devices, used in flight management systems on board aircraft. To facilitate the work of the pilot, the onboard computer (FMS) communicates with the pilot through a display and data input console (MCDU). This console displays a list of tasks to be executed in the form of a series of selectable and activatable main zones (40). Each zone corresponds to a task to be executed through the console, and when the task has been executed (validation key actuated by the pilot), the task list is redisplayed, with the zone for which the task has been executed and validated appearing in a different colour from the other zones. The zones preferably appear in duplicate (40, 50) when there is a co-pilot, so that it can clearly be seen which tasks have been executed and by which pilot, and which tasks remain to be executed.
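The redisplay behaviour can be sketched as follows. This is a hypothetical sketch, not the patented console logic: the class, colour choices, and task names are all illustrative.

```python
# Colour per validation state: unvalidated zones in one colour, validated
# zones in a colour that identifies which pilot actuated the validation key.
ZONE_COLOURS = {None: "cyan", "pilot": "green", "copilot": "amber"}

class TaskListDisplay:
    """Hypothetical task-list model: each zone remembers who validated it."""

    def __init__(self, tasks):
        self._validated_by = {task: None for task in tasks}

    def validate(self, task, pilot):
        """Called when the validation key is actuated for a task."""
        self._validated_by[task] = pilot

    def redisplay(self):
        """Return (task, colour) pairs for redrawing the zones."""
        return [(task, ZONE_COLOURS[pilot])
                for task, pilot in self._validated_by.items()]

console = TaskListDisplay(["enter flight plan", "check fuel"])
console.validate("check fuel", "pilot")
```

After validation, redisplaying shows the executed task in a distinct colour while the remaining task keeps the default one.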

Journal ArticleDOI
TL;DR: The authors are reimplementing the Honeywell batch scheduler as production-quality software, describing the process of building the scheduler and the lessons learned along the way.
Abstract: Batch manufacturing poses unique challenges to schedulers. The manufacturing processes are unpredictable, the environment is dynamic, and the required task and resource models are complicated. The Honeywell batch scheduler uses constraint envelope scheduling to address these needs, offering support for both schedule modifications and rescheduling. The authors are reimplementing the scheduler as production-quality software. They describe the process of building the scheduler and the lessons learned along the way.

Book ChapterDOI
01 Jan 1997
TL;DR: In this article, the authors present a framework for design of work support systems for a modern, dynamic work environment in which stable work procedures are replaced with discretionary tasks and the request of continuous learning and adaptation to change.
Abstract: Publisher Summary This chapter presents a framework for the design of work support systems for a modern, dynamic work environment in which stable work procedures are replaced by discretionary tasks and the requirement of continuous learning and adaptation to change. In this situation, classic task analysis is less effective, and the chapter therefore presents a framework for work analysis that separates a representation of the work domain—its means and ends, its relational structure, and the effective task strategies among which the user may choose—from a representation of the users' general background, resources, cognitive style, and subjective preferences. The aim is to design work support systems that leave users free to choose a work strategy that suits them in the particular situation. An important feature of this ecological approach to system design, for support of effective learning under change in a dynamic environment, is therefore a human-work interface directed toward a transparent presentation of the action possibilities and the functional/intentional boundaries and constraints of the work domain relevant for typical task situations and user categories.

Patent
13 Jan 1997
TL;DR: A method and means for enhancing an embedded system includes means for and steps of executing a boot routine, activating a ROM loader routine, initializing an I/O subsystem, activating an embedded OS, creating a dynamically linked embedded system loader task, and having the embedded OS map the Global Coerced Memory (GCM) and the Global Shared Memory (GSM) into its address space so that it can access shared libraries.
Abstract: A method and means for enhancing an embedded system includes means for and steps of: executing a boot routine; activating a ROM loader routine; initializing an I/O subsystem; activating an embedded OS; creating a dynamically linked embedded system loader task and having the embedded OS map the Global Coerced Memory (GCM) and the Global Shared Memory (GSM) into its address space so that it can access shared libraries; and loading each of a plurality of executable programs and mapping the GCM and the GSM into each executable program's address space so that it can access shared libraries.
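The bring-up sequence above can be sketched as an ordered pipeline. This is a toy model only: the step names follow the abstract, but the function, its bodies, and the program names are placeholders, not the patented mechanism.

```python
def bring_up(shared_regions, executables):
    """Hypothetical sketch: run the boot steps in order, then map the shared
    memory regions (GCM and GSM) into the loader task's address space and
    into each executable's, so all of them can resolve shared libraries."""
    steps = [
        "boot routine",
        "ROM loader routine",
        "I/O subsystem init",
        "embedded OS",
        "embedded system loader task",
    ]
    # Each address space gets its own view of the shared regions.
    address_spaces = {"loader-task": set(shared_regions)}
    for prog in executables:
        address_spaces[prog] = set(shared_regions)
    return steps, address_spaces

steps, spaces = bring_up({"GCM", "GSM"}, ["app1", "app2"])
```

The essential point the sketch captures is that the mapping happens once for the loader task and once per loaded executable, after the OS is up.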

Patent
Shin Ohtake1
30 Jan 1997
TL;DR: In this paper, a job development task performs image processing on image data of input jobs, and an output section control task causes the processed image data to be output from a print section, a FAX transmission/reception section, or the like.
Abstract: A job development task performs image processing on image data of input jobs. An output section control task causes the processed image data to be output from a print section, a FAX transmission/reception section, or the like. A job control task recognizes the execution state of a specified job. A UI control task determines alterable process items relating to the specified job in accordance with the recognized execution state, and causes those process items to be displayed on a display device of an operation section. As a result, in altering a certain process item of the job, the process items that can be set are displayed in accordance with the job state of the specified job, which allows the user to alter the process item correctly and quickly.
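The UI control task's rule—show only what is still alterable in the job's current state—can be sketched as a simple state-to-items table. The states and process items below are hypothetical examples, not taken from the patent.

```python
# Which process items remain alterable in each execution state (illustrative).
ALTERABLE = {
    "queued":     {"copies", "paper-size", "output-device"},
    "processing": {"copies", "output-device"},
    "printing":   {"output-device"},
    "done":       set(),
}

def alterable_items(job_state):
    """Return the process items the UI should offer for the given state."""
    return sorted(ALTERABLE.get(job_state, set()))
```

Once a job has advanced to printing, for example, only the output device can still be changed, so only that item is displayed.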

Patent
14 Jul 1997
TL;DR: In this paper, the authors present a system and method for establishing a communication connection between two programs, each running on multiple processors of a distributed or shared memory parallel computer, or on multiple computers in a cluster of workstations or a set of network connected computers, which includes protocols that require one of the two programs that wish to communicate to actively initiate the communication session, while the other program passively accepts such direct communication session initiations.
Abstract: The invention is a system and method for establishing a communication connection between two programs, each running on multiple processors of a distributed or shared-memory parallel computer, or on multiple computers in a cluster of workstations or a set of network-connected workstations. The invention includes all protocols that require one of the two programs that wish to communicate to actively initiate the communication session, while the other program passively accepts such direct communication session initiations. No task of the active program will attempt to communicate with tasks of the passive program until it has been notified that all passive-program tasks are prepared to receive messages, that all other active-program tasks are prepared to receive messages from the passive program's tasks, and vice versa. Further, the tasks of the passive program are free-running during establishment of the connection, while the active-program tasks are free to run provided that they do not attempt to communicate with the passive program. Another aspect of the invention provides a secondary, indirect communication channel mediated by a resource manager separate from the active and passive programs.
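The readiness rule—no active-program task sends anything until every task of both programs has reported it is prepared to receive—can be sketched with a simple gate. This is an illustrative sketch, not the patented protocol; the class and method names are invented.

```python
import threading

class ReadinessGate:
    """Hypothetical gate: active tasks block until all tasks have reported."""

    def __init__(self, total_tasks):
        self._remaining = total_tasks
        self._lock = threading.Lock()
        self._all_ready = threading.Event()

    def report_ready(self):
        """Called exactly once per task (active or passive) when it is
        prepared to receive messages."""
        with self._lock:
            self._remaining -= 1
            if self._remaining == 0:
                self._all_ready.set()

    def wait_until_all_ready(self):
        """Active tasks call this before their first send; returns True
        once every task has reported."""
        return self._all_ready.wait()

gate = ReadinessGate(total_tasks=4)
for _ in range(4):
    gate.report_ready()
```

A resource manager mediating the connection could own such a gate and deliver the "all ready" notification to the active program's tasks, while the passive program's tasks keep running freely throughout.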

Patent
27 Jun 1997
TL;DR: In this article, the aggregate task is divided into sub-tasks, and each sub-task is sized according to the established granularity, and the task execution units are independently operated on-demand to sequentially self-allocate and execute subtasks of the aggregate tasks.
Abstract: In a multiprocessing system, multiple concurrently operating task execution units perform an aggregate task by using incremental, on-demand sub-task allocation. A command is received to perform a machine-executed task divisible into multiple sub-tasks, i.e., an “aggregate task”. A granularity is then established for dividing the aggregate task into sub-tasks. Preferably, the granularity is not so large as to risk uneven sub-task allocation, nor so small as to incur excessive overhead in allocating sub-tasks. Having established the granularity, multiple task execution units are independently operated on-demand to sequentially self-allocate and execute sub-tasks of the aggregate task, each sub-task sized according to the established granularity. Operating “on-demand”, each task execution unit allocates and executes one sub-task at a time before proceeding to the next unexecuted sub-task. Thus, the multiprocessing system operates like multiple people drinking water from a common glass through individual straws: although each drinker works independently, all finish at about the same time, completing the aggregate task as expeditiously as possible.
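On-demand self-allocation is naturally expressed with a shared counter: each execution unit atomically claims the next sub-task index, works on it, and comes back for more, so faster units simply claim more sub-tasks. The sketch below is illustrative, assuming Python threads as the "task execution units"; it is not the patented mechanism.

```python
import threading

def run_aggregate_task(num_subtasks, num_workers, do_subtask):
    """Run num_subtasks sub-tasks across num_workers threads, each thread
    self-allocating one sub-task at a time from a shared counter."""
    state = {"next": 0}
    alloc_lock = threading.Lock()
    results = [None] * num_subtasks   # distinct slots, so no write races

    def worker():
        while True:
            with alloc_lock:          # atomically claim the next sub-task
                i = state["next"]
                state["next"] += 1
            if i >= num_subtasks:
                return                # no unexecuted sub-tasks remain
            results[i] = do_subtask(i)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

With a reasonable granularity (many more sub-tasks than workers), all workers finish within roughly one sub-task's duration of each other, which is the load-balancing property the straw analogy describes.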