Author

Toshiyuki Inagaki

Other affiliations: University UCINF
Bio: Toshiyuki Inagaki is an academic researcher at the University of Tsukuba. The author has contributed to research in the topics Driving simulator and Automation, has an h-index of 23, and has co-authored 112 publications receiving 2,118 citations. Previous affiliations of Toshiyuki Inagaki include University UCINF.


Papers
Journal ArticleDOI
TL;DR: In this paper, an experiment on adaptive automation is described, where reliability of automated fault diagnosis, mode of fault management (manual vs automated), and fault dynamics affect variables including root mean square error, avoidance of accidents and false shutdowns, subjective trust in the system, and operator self-confidence.
Abstract: An experiment on adaptive automation is described. Reliability of automated fault diagnosis, mode of fault management (manual vs automated), and fault dynamics affect variables including root mean square error, avoidance of accidents and false shutdowns, subjective trust in the system, and operator self-confidence. Results are discussed in relation to levels of automation, models of trust and self-confidence, and theories of human-machine function allocation. Trust in automation, but not self-confidence, was strongly affected by automation reliability. Operators controlled a continuous process with difficulty only while performing fault management but could prevent unnecessary shutdowns. Final authority for decisions and action must be allocated to automation in time-critical situations. Automation is any sensing, detection, information processing, decision-making, or control action that could be performed by humans but is actually performed by machine. In supervisory control, an automated control system is monitored by human operators, who intervene only if they believe that the system is faulty or want to improve …

265 citations
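The trust / self-confidence comparison reported above lends itself to a small illustration. The sketch below encodes the commonly cited rule of thumb that an operator relies on automation when trust exceeds self-confidence, with trust drifting up after reliable automation behaviour and down after faults; the update rule, rate constant, and threshold comparison are illustrative assumptions, not the model used in the cited experiment.

```python
# Illustrative sketch only: a simple trust / self-confidence reliance rule.
# The linear update dynamics and the rate constant are assumptions for
# illustration, not the model used in the experiment described above.

def update_trust(trust: float, automation_succeeded: bool, rate: float = 0.1) -> float:
    """Move trust toward 1.0 after successful automation, toward 0.0 after a fault."""
    target = 1.0 if automation_succeeded else 0.0
    return trust + rate * (target - trust)

def choose_mode(trust: float, self_confidence: float) -> str:
    """Rely on automation when trust exceeds self-confidence, else control manually."""
    return "automated" if trust > self_confidence else "manual"

if __name__ == "__main__":
    trust, self_confidence = 0.5, 0.6
    for outcome in [True, True, False, True, True, True]:
        trust = update_trust(trust, outcome)
        print(f"trust={trust:.2f}  mode={choose_mode(trust, self_confidence)}")
```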

Journal ArticleDOI
01 Jan 2001
TL;DR: This chapter clarifies why “who does what and when” considerations are necessary, and it explains the concept of adaptive automation in which the control of functions shifts between humans and machines dynamically, depending on environmental factors, operator workload, and performance.
Abstract: Function allocation is the design decision to determine which functions are to be performed by humans and which are to be performed by machines to achieve the required system goals, and it is closely related to the issue of automation. Some of the traditional strategies of function allocation include (a) assigning each function to the most capable agent (either human or machine), (b) allocating to machine every function that can be automated, and (c) finding an allocation scheme that ensures economical efficiency. However, such “who does what” decisions are not always appropriate from human factors viewpoints. This chapter clarifies why “who does what and when” considerations are necessary, and it explains the concept of adaptive automation in which the control of functions shifts between humans and machines dynamically, depending on environmental factors, operator workload, and performance. Who decides when the control of function must be shifted? That is one of the most crucial issues in adaptive automation. Letting the computer be in authority may conflict with the principle of human-centered automation which claims that the human must be maintained as the final authority over the automation. Qualitative discussions cannot solve the authority problem. This chapter proves the need for quantitative investigations with mathematical models, simulations, and experiments for a better understanding of the authority issue. Starting with the concept of function allocation, this chapter describes how the concept of adaptive automation was invented. The concept of levels of automation is used to explain interactions between humans and machines. Sharing and trading are distinguished to clarify the types of human-automation collaboration. Algorithms for implementing adaptive automation are categorized into three groups, and comparisons are made among them. Benefits and costs of adaptive automation, in relation to decision authority, trust-related issues, and human-interface design, are discussed with some examples.

237 citations
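As a concrete reading of the "who does what and when" idea, the sketch below shows one way an adaptive allocator might pick a level of automation from a critical-event trigger plus measured operator workload and performance, echoing the algorithm groupings mentioned in the abstract. The 1-10 scale, the thresholds, and the returned levels are illustrative assumptions rather than the chapter's algorithms.

```python
# Illustrative sketch: dynamic ("who does what and when") function allocation.
# The triggers below (critical events, measured workload and performance) mirror
# the grouping described in the abstract, but the thresholds and the 1-10 level
# scale are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class OperatorState:
    workload: float     # 0.0 (idle) .. 1.0 (saturated), e.g. from physiological measures
    performance: float  # 0.0 (poor) .. 1.0 (good), e.g. derived from tracking error

def level_of_automation(critical_event: bool, op: OperatorState) -> int:
    """Return a level of automation on a 1 (fully manual) .. 10 (fully automatic) scale."""
    if critical_event:
        return 10                      # critical-event logic: trade control to the machine
    if op.workload > 0.8 or op.performance < 0.3:
        return 7                       # measurement-based: automate decision selection
    if op.workload < 0.3 and op.performance > 0.7:
        return 3                       # keep the human in the loop when capacity allows
    return 5                           # shared control in between

print(level_of_automation(False, OperatorState(workload=0.9, performance=0.5)))  # -> 7
```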

Journal ArticleDOI
TL;DR: In this paper, a unified combination rule for fusing information on plant states given by independent knowledge sources such as sensors or human operators is developed, and it is shown that the best choice of combination rule depends on whether the safety-control policy is fault-warning or safety-preservation.
Abstract: The Dempster-Shafer (D-S) theory has been gaining popularity in fields where incomplete knowledge is a factor. The author explores the application of the D-S theory in system reliability and safety. Inappropriate application of the D-S theory to safety-control policies can degrade plant safety. This is proven in two phases: (1) a unified combination rule for fusing information on plant states given by independent knowledge sources such as sensors or human operators is developed; and (2) combination rules cannot be chosen in an arbitrary manner, i.e., the best choice of combination rules depends on whether the safety-control policy is fault-warning or safety-preservation.

185 citations
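For readers unfamiliar with the D-S machinery referred to above, the following minimal sketch implements Dempster's rule of combination for two independent sources reporting on plant state (e.g. a sensor and an operator). The frame of discernment and the mass values are illustrative assumptions; only the combination rule itself is standard.

```python
# Minimal sketch of Dempster's rule of combination for two independent
# knowledge sources (e.g. a sensor and a human operator) reporting on plant
# state.  The frame {normal, faulty} and the mass values are illustrative.

from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments (keys are frozensets of states)."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                      # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

NORMAL, FAULTY = frozenset({"normal"}), frozenset({"faulty"})
THETA = NORMAL | FAULTY                             # "don't know"

sensor = {FAULTY: 0.6, THETA: 0.4}                  # sensor suspects a fault
operator = {NORMAL: 0.3, THETA: 0.7}                # operator is less sure
print(combine(sensor, operator))
```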

Journal ArticleDOI
TL;DR: The existence of complacency cannot be proven unless optimal behaviour is specified as a benchmark, and previous research has not done so; so-called complacent behaviour may rather be the fault of poor systems design.
Abstract: The problem of complacency is analysed, and it is shown that previous research claiming to show its existence is defective, because the existence of complacency cannot be proved unless optimal behaviour is specified as a benchmark. Using gedanken experiments, it is further shown that, in general, not even with optimal monitoring can all signals be detected. Complacency is concerned with attention (monitoring, sampling), not with detection, and there is little evidence for complacent behaviour. To claim that behaviour is complacent is to blame the operator for failure to detect signals. This is undesirable, since so-called complacent behaviour may rather be the fault of poor systems design.

116 citations
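The point that not even optimal monitoring can detect all signals can be illustrated with a small gedanken-experiment-style simulation: an operator who samples a display as often as is feasible still misses signals that begin and end between samples. The sampling interval, signal durations, and trial count below are illustrative assumptions, not figures from the paper.

```python
# Gedanken-experiment-style simulation in the spirit of the argument above:
# even an observer who samples a display at the highest feasible rate (here,
# once per second) misses signals that appear and disappear between samples.
# All numbers are illustrative assumptions.

import random

random.seed(0)
SAMPLE_INTERVAL = 1.0          # seconds between looks at the display (best achievable)
SIM_TIME = 10_000.0

missed = total = 0
for _ in range(1000):
    onset = random.uniform(0.0, SIM_TIME)
    duration = random.uniform(0.1, 2.0)          # some signals are shorter than the interval
    # The signal is seen only if a sampling instant k*SAMPLE_INTERVAL falls inside it.
    first_sample_after_onset = (onset // SAMPLE_INTERVAL + 1) * SAMPLE_INTERVAL
    total += 1
    if first_sample_after_onset > onset + duration:
        missed += 1

print(f"missed {missed} of {total} signals despite sampling every {SAMPLE_INTERVAL}s")
```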

Journal ArticleDOI
TL;DR: It is argued that human-centered automation must be multi-layered, by taking into account not only enhancement of situation awareness but also trading of authority between humans and machines.
Abstract: This paper argues that human-centered automation for traffic safety can vary depending on the transportation mode. Quality of human operators and time-criticality are factors characterizing this domain dependence. The questions asked in this paper are: (1) Does the statement that “The human must be in command” have to hold at all times, on every occasion, and in every transportation mode? and (2) What may the automation do when it detects inappropriate behavior or performance while monitoring the human? Is it allowed only to give warnings, or may it act autonomously to resolve the detected problem? This paper also argues that human-centered automation must be multi-layered, taking into account not only enhancement of situation awareness but also trading of authority between humans and machines.

82 citations
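One way to picture the multi-layered trading of authority discussed above is a time-criticality ladder: the machine informs when time is ample, warns as margins shrink, and intervenes autonomously only when the human can no longer respond in time. The driving-style example and thresholds below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: the machine's authority depends on time-criticality.
# With ample time it only informs or warns; only when the situation becomes too
# time-critical for the human does it act autonomously.  The thresholds and the
# driving example are assumptions for illustration.

def machine_action(time_to_collision_s: float, driver_is_responding: bool) -> str:
    if driver_is_responding:
        return "support"                 # human in command; machine assists only
    if time_to_collision_s > 4.0:
        return "inform"                  # enhance situation awareness
    if time_to_collision_s > 1.5:
        return "warn"                    # urge the human to act
    return "intervene"                   # trade authority: autonomous emergency action

print(machine_action(0.9, driver_is_responding=False))   # -> "intervene"
```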


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
01 May 2000
TL;DR: A model for types and levels of automation is outlined that can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation.
Abstract: We outline a model for types and levels of automation that provides a framework and an objective basis for deciding which system functions should be automated and to what extent. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.

3,246 citations
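The four-stage model summarized above can be captured compactly as a per-stage level profile, which is how it is often applied in design discussions. The sketch below is an assumed illustration: the 1-10 scale and the example profile values are not taken from the paper.

```python
# Compact sketch of the four-stage / levels framework summarized above: a system
# is described by a level of automation (here 1 = fully manual .. 10 = fully
# automatic) for each of the four functional stages.  The example profile values
# are illustrative assumptions.

from enum import Enum

class Stage(Enum):
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_SELECTION = 3
    ACTION_IMPLEMENTATION = 4

# Example: an alerting system that gathers and analyses data at a high level of
# automation but leaves decision and action largely to the human operator.
profile = {
    Stage.INFORMATION_ACQUISITION: 8,
    Stage.INFORMATION_ANALYSIS: 7,
    Stage.DECISION_SELECTION: 3,
    Stage.ACTION_IMPLEMENTATION: 2,
}

for stage, level in profile.items():
    print(f"{stage.name:<26} level {level}/10")
```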

Journal ArticleDOI
TL;DR: This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives, and considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust.
Abstract: Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.

3,105 citations

Book ChapterDOI
01 Jan 2001
TL;DR: A wide variety of media can be used in learning, including distance learning, such as print, lectures, conference sections, tutors, pictures, video, sound, and computers.
Abstract: A wide variety of media can be used in learning, including distance learning, such as print, lectures, conference sections, tutors, pictures, video, sound, and computers. Any one instance of distance learning will make choices among these media, perhaps using several.

2,940 citations