Author

Homayoun Najjaran

Bio: Homayoun Najjaran is an academic researcher from the University of British Columbia. The author has contributed to research in topics: Computer science & Mobile robot. The author has an h-index of 26, has co-authored 167 publications, and has received 2,416 citations. Previous affiliations of Homayoun Najjaran include the University of Waterloo & the National Research Council.


Papers
Journal Article (DOI)
TL;DR: A comprehensive review of the state-of-the-art AV perception technology available today, which highlights future research areas and draws conclusions about the most effective methods for AV perception and its effect on localization and mapping.
Abstract: Perception system design is a vital step in the development of an autonomous vehicle (AV). With the vast selection of available off-the-shelf schemes and seemingly endless options of sensor systems implemented in research and commercial vehicles, it can be difficult to identify the optimal system for one’s AV application. This article presents a comprehensive review of the state-of-the-art AV perception technology available today. It provides up-to-date information about the advantages, disadvantages, limits, and ideal applications of specific AV sensors; the most prevalent sensors in current research and commercial AVs; autonomous features currently on the market; and localization and mapping methods currently implemented in AV research. This information is useful for newcomers to the AV field to gain a greater understanding of the current AV solution landscape and to guide experienced researchers towards research areas requiring further development. Furthermore, this paper highlights future research areas and draws conclusions about the most effective methods for AV perception and its effect on localization and mapping. Topics discussed in the Perception and Automotive Sensors section focus on the sensors themselves, whereas topics discussed in the Localization and Mapping section focus on how the vehicle perceives where it is on the road, providing context for the use of the automotive sensors. By improving on current state-of-the-art perception systems, AVs will become more robust, reliable, safe, and accessible, ultimately providing greater efficiency, mobility, and safety benefits to the public.

486 citations

Journal Article (DOI)
01 Dec 2015 - Sensors
TL;DR: The integration of microfluidic and biosensor technologies provides the ability to merge chemical and biological components into a single platform and offers new opportunities for future biosensing applications including portability, disposability, real-time detection, unprecedented accuracies, and simultaneous analysis of different analytes in a single device.
Abstract: A biosensor can be defined as a compact analytical device or unit incorporating a biological or biologically derived sensitive recognition element immobilized on a physicochemical transducer to measure one or more analytes. Microfluidic systems, on the other hand, provide high-throughput processing, enhance transport for controlling the flow conditions, increase the mixing rate of different reagents, reduce sample and reagent volumes (down to nanoliters), increase sensitivity of detection, and utilize the same platform for both sample preparation and detection. In view of these advantages, the integration of microfluidic and biosensor technologies provides the ability to merge chemical and biological components into a single platform and offers new opportunities for future biosensing applications including portability, disposability, real-time detection, unprecedented accuracies, and simultaneous analysis of different analytes in a single device. This review aims at presenting advances and achievements in the field of microfluidic-based biosensing. The review also presents examples extracted from the literature to demonstrate the advantages of merging microfluidic and biosensing technologies and to illustrate the versatility that such integration promises for future biosensing in emerging areas of biological engineering, biomedical studies, point-of-care diagnostics, environmental monitoring, and precision agriculture.

319 citations

Journal Article (DOI)
29 Jul 2020 - Sensors
TL;DR: This article provides a comprehensive review of the state-of-the-art methods utilized to improve the performance of AV systems in short-range or local vehicle environments and focuses on recent studies that use deep learning sensor fusion algorithms for perception, localization, and mapping.
Abstract: Autonomous vehicles (AV) are expected to improve, reshape, and revolutionize the future of ground transportation. It is anticipated that ordinary vehicles will one day be replaced with smart vehicles that are able to make decisions and perform driving tasks on their own. In order to achieve this objective, self-driving vehicles are equipped with sensors that are used to sense and perceive both their surroundings and the faraway environment, using further advances in communication technologies, such as 5G. In the meantime, local perception, as with human beings, will continue to be an effective means for controlling the vehicle at short range. On the other hand, extended perception allows for anticipation of distant events and produces smarter behavior to guide the vehicle to its destination while respecting a set of criteria (safety, energy management, traffic optimization, comfort). In spite of the remarkable advancements of sensor technologies in terms of their effectiveness and applicability for AV systems in recent years, sensors can still fail because of noise, ambient conditions, or manufacturing defects, among other factors; hence, it is not advisable to rely on a single sensor for any of the autonomous driving tasks. The practical solution is to incorporate multiple competitive and complementary sensors that work synergistically to overcome their individual shortcomings. This article provides a comprehensive review of the state-of-the-art methods utilized to improve the performance of AV systems in short-range or local vehicle environments. Specifically, it focuses on recent studies that use deep learning sensor fusion algorithms for perception, localization, and mapping. The article concludes by highlighting some of the current trends and possible future research directions.
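A minimal sketch of the complementary-sensor idea described above (illustrative only, not a method taken from the paper): two noisy range measurements of the same obstacle are combined by inverse-variance weighting, the building block behind many Kalman-filter-style fusion schemes. The sensor names and noise levels are assumptions made for illustration.

```python
import numpy as np

def fuse(z_a, var_a, z_b, var_b):
    """Combine two independent Gaussian measurements of one quantity."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    z_fused = (w_a * z_a + w_b * z_b) / (w_a + w_b)
    var_fused = 1.0 / (w_a + w_b)          # fused estimate is less uncertain
    return z_fused, var_fused

rng = np.random.default_rng(0)
true_range = 25.0                           # metres to the obstacle (assumed)
z_lidar = true_range + rng.normal(0.0, 0.1) # LiDAR-like sensor: low noise
z_radar = true_range + rng.normal(0.0, 0.5) # radar-like sensor: higher noise
z, var = fuse(z_lidar, 0.1**2, z_radar, 0.5**2)
print(f"lidar={z_lidar:.2f} m  radar={z_radar:.2f} m  fused={z:.2f} m (var={var:.4f})")
```

The fused estimate leans toward the more precise sensor while still using both, which is the sense in which complementary sensors compensate for each other's shortcomings.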

181 citations

Journal Article (DOI)
TL;DR: This review concentrates on three main challenges of controlling a quadrotor unmanned aerial vehicle (UAV) as an underactuated mechanical system (UMS) and discusses solutions presented in the recent literature, mostly concerning the application of different control strategies.
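A toy illustration of the underactuation mentioned in the TL;DR (my own sketch, not taken from the review): a planar quadrotor has only two control inputs, total thrust and pitching torque, yet three degrees of freedom (x, z, theta), so horizontal motion can only be produced indirectly by tilting. The physical parameters and PD gains below are assumed for illustration.

```python
import numpy as np

m, g, I, dt = 1.0, 9.81, 0.01, 0.002       # mass, gravity, inertia, time step (assumed)

def pd_hover(z, vz, th, om, z_ref=1.0):
    """PD laws for altitude and attitude; x has no direct actuator."""
    thrust = m * (g + 4.0 * (z_ref - z) - 3.0 * vz) / np.cos(th)
    torque = I * (-40.0 * th - 8.0 * om)    # drive the pitch angle back to level
    return thrust, torque

state = np.zeros(6)                         # [x, z, theta, vx, vz, omega]
state[2] = 0.3                              # start tilted by 0.3 rad
for _ in range(5000):                       # simulate 10 s with Euler steps
    T, tau = pd_hover(state[1], state[4], state[2], state[5])
    ax = -T * np.sin(state[2]) / m          # tilt couples thrust into horizontal motion
    az = T * np.cos(state[2]) / m - g
    state += dt * np.array([state[3], state[4], state[5], ax, az, tau / I])
# Altitude and pitch converge, but x drifts: nothing actuates it directly.
print(f"final: x={state[0]:.2f} m, z={state[1]:.2f} m, theta={state[2]:.3f} rad")
```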

120 citations

Journal Article (DOI)
TL;DR: In this paper, the authors provide a thorough review of the advances in sensor technology for measurement of common water quality parameters (pH, turbidity, free chlorine, dissolved oxygen, and conductivity) in drinking water distribution systems.
Abstract: Online drinking water quality monitoring technologies have made significant progress for source water surveillance and water treatment plant operation. The use of these technologies in the distribution system has not been favorable due to the high costs associated with installation, maintenance, and calibration of a large distributed array of monitoring sensors. This has led to a search for newer technologies that can be economically deployed on a large scale. This paper includes a brief description of important parameters for drinking water and current available technologies used in the field. The paper also provides a thorough review of the advances in sensor technology for measurement of common water quality parameters (pH, turbidity, free chlorine, dissolved oxygen, and conductivity) in drinking water distribution systems.

104 citations


Cited by
Journal Article (DOI)
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
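As a concrete companion to the fourth category above (per-user customization), the following toy sketch, which is not part of the cited text, learns a mail filter from a handful of messages the user has already labelled. The messages, labels, and the choice of a bag-of-words naive Bayes model are assumptions made purely for illustration.

```python
# Learn a per-user mail filter from labelled examples instead of hand-coded rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",           # rejected by this user
    "limited offer claim your cash",  # rejected
    "meeting moved to 3pm",           # kept
    "draft report attached",          # kept
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features + naive Bayes learn the user's filtering rule automatically.
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["claim your free cash prize",
                            "agenda for tomorrow's meeting"]))
# expected output: ['spam' 'ham']
```

Retraining on newly labelled messages keeps the filter up to date without any programmer intervention, which is the point the abstract makes.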

13,246 citations

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and also offers sufficient detail to be able to build useful applications. Users learn techniques that have proven to be useful by first-hand experience and a wide range of mathematical methods. A CD-ROM with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis, and at addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain lots of training data while many classes contain only a small amount of training data. Therefore, how to use frequent classes to help learn rare classes, for which it is harder to collect training data, is an open question. Learning with Shared Information is an emerging topic in machine learning, computer vision, and multimedia analysis. There are different levels of components that can be shared during the concept modeling and machine learning stages, such as sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters, and sharing training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as using different strategies to share information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering the recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters, and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.
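To make the notion of "sharing" concrete, here is a hedged sketch (not drawn from the special issue itself) of one common strategy: a network trunk whose parameters are shared by two task-specific heads trained jointly. The layer sizes, losses, and random data are assumptions for illustration only.

```python
import torch
import torch.nn as nn

shared = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # parameters shared by both tasks
head_a = nn.Linear(64, 10)                              # task A: 10-class classification
head_b = nn.Linear(64, 1)                               # task B: scalar regression

opt = torch.optim.Adam(list(shared.parameters()) +
                       list(head_a.parameters()) +
                       list(head_b.parameters()), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

for _ in range(100):
    x = torch.randn(16, 32)                             # one mini-batch used by both tasks
    y_a = torch.randint(0, 10, (16,))                   # labels for task A (random toy data)
    y_b = torch.randn(16, 1)                            # targets for task B (random toy data)
    z = shared(x)                                       # shared representation
    loss = ce(head_a(z), y_a) + mse(head_b(z), y_b)     # joint multi-task loss
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final joint loss: {loss.item():.3f}")
```

Because both losses backpropagate through the same trunk, information learned for one task shapes the representation used by the other, which is the essence of multi-task and transfer learning described above.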

1,758 citations

Journal Article (DOI)
TL;DR: A reference to the book Computer and Robot Vision, Vol. 1 by Haralick and Shapiro.
Abstract: Computer and Robot Vision, Vol. 1, by R.M. Haralick and Linda G. Shapiro, Addison-Wesley, 1992, ISBN 0-201-10887-1.

1,426 citations