scispace - formally typeset
Author

Van Huan Nguyen

Other affiliations: Inha University
Bio: Van Huan Nguyen is an academic researcher from Ton Duc Thang University. The author has contributed to research in topics: Blind deconvolution & Pixel. The author has an h-index of 7 and has co-authored 28 publications receiving 150 citations. Previous affiliations of Van Huan Nguyen include Inha University.

Papers
Journal ArticleDOI
01 Dec 2020
TL;DR: This paper investigates various scenarios and use cases of machine learning, machine vision, and deep learning from a global perspective through the lens of sustainability, and discusses the possibility of using Fourth Industrial Revolution (4IR) technologies such as deep learning and computer-vision robotics as a key to sustainable food production.
Abstract: Emerging technologies such as computer vision and Artificial Intelligence (AI) are expected to leverage the accessibility of big data for training and for yielding operational, real-time smart machines and predictive models. This application of vision and learning methods to improve the food industry is termed the computer-vision- and AI-driven food industry. This review provides an insight into state-of-the-art AI and computer vision technologies that can assist farmers in agriculture and food processing. The paper investigates various scenarios and use cases of machine learning, machine vision, and deep learning from a global perspective through the lens of sustainability. It explains the increasing demand in the AgTech industry for computer vision and AI, which might be a path towards sustainable food production to feed the future. The review also raises implications regarding the challenges of, and recommendations for, including these technologies in real-time farming, together with substantial global policies and investments. Finally, the paper discusses the possibility of using Fourth Industrial Revolution (4IR) technologies such as deep learning and computer-vision robotics as a key to sustainable food production.

142 citations

Journal ArticleDOI
TL;DR: The experimental results indicated that the use of SNA-based collaboration criteria to evaluate the collaborative process enhances the completeness and consistency of collaborative annotation.
Abstract: Highlights: (i) a proposed algorithm for semantic video annotation using a consensus-based social network; (ii) SNA used to maximize the opportunities for collaborative annotation; (iii) consensus used to resolve conflicts between annotated versions and obtain the best annotation. Social TV represents a new form of shopping that enables consumers to view, select and buy products. This highlights the need for a collaborative video annotation technique. This paper proposes a collaborative algorithm for semantic video annotation using consensus-based social network analysis (SNA). The collaborative video annotation process is organized based on social networks: the media content is shared with friends of friends, who collaboratively annotate it. This study used an ontology-based approach to semantically describe the media content and allow sharing between users. A consensus-based method was used to reconcile conflicts between participants' annotations. The experimental results indicated that using SNA-based collaboration criteria to evaluate the collaborative process enhances the completeness and consistency of collaborative annotation. The more collaboration criteria are satisfied by the collaborative group, the faster the group reaches a consensus. In addition, the consensus-based method is an effective approach for resolving conflicts in collaborative annotation.

22 citations

Journal ArticleDOI
Shengzhe Li, Van Huan Nguyen, Mingjie Ma, Cheng-Bin Jin, Trung Dung Do, Hakil Kim
TL;DR: In this paper, a simple camera calibration method for estimating human height in video surveillance is presented, which uses a nonlinear regression model from the observed head and foot points of a walking human instead of estimating the vanishing line and point in the image.
Abstract: This paper presents a simple camera calibration method for estimating human height in video surveillance. Given that most cameras for video surveillance are installed in high positions at a slightly tilted angle, it is possible to retain only three calibration parameters in the original camera model, namely the focal length, the tilting angle and the camera height. These parameters can be directly estimated using a nonlinear regression model from the observed head and foot points of a walking human instead of estimating the vanishing line and point in the image, which is extremely sensitive to noise in practice. With only three unknown parameters, the nonlinear regression model can fit data efficiently. The experimental results show that the proposed method can predict the human height with a mean absolute error of only about 1.39 cm from ground truth data.
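The three-parameter model described above can be sketched numerically. The code below is a hypothetical reconstruction for illustration, not the authors' implementation: it assumes a pinhole camera at height h, tilted down by angle theta, with focal length f in pixels, observing a walking person of known height H. Each frame yields the image rows of the foot (v_foot) and head (v_head); the unknown per-frame ground distance is eliminated through the foot observation, leaving a nonlinear regression in (f, theta, h) solved with SciPy.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 3-parameter model (f: focal length in px, theta: tilt in rad,
# h: camera height in m). A ray at angle `a` below the horizon projects to
# image row v = f * tan(a - theta), measured from the principal point.
def project(f, theta, h, H, d):
    """Image rows of foot and head for a person of height H at distance d."""
    v_foot = f * np.tan(np.arctan2(h, d) - theta)
    v_head = f * np.tan(np.arctan2(h - H, d) - theta)
    return v_foot, v_head

def residuals(params, v_foot, v_head, H):
    """Predict v_head from v_foot; the ground distance d is eliminated."""
    f, theta, h = params
    d = h / np.tan(theta + np.arctan(v_foot / f))   # back-project the foot point
    v_head_pred = f * np.tan(np.arctan((h - H) / d) - theta)
    return v_head - v_head_pred

# Synthetic "walking person" observations from assumed ground-truth parameters.
f_true, theta_true, h_true, H = 1200.0, 0.35, 6.0, 1.70
d = np.linspace(4.0, 20.0, 30)
v_foot, v_head = project(f_true, theta_true, h_true, H, d)

fit = least_squares(residuals, x0=[1000.0, 0.3, 5.0], args=(v_foot, v_head, H))
f_est, theta_est, h_est = fit.x

# Height estimation for a new observation, as in the paper's application:
# back-project the foot to get d, then intersect the head ray at that distance.
d_new = h_est / np.tan(theta_est + np.arctan(v_foot[0] / f_est))
H_est = h_est - d_new * np.tan(theta_est + np.arctan(v_head[0] / f_est))
```

With noiseless synthetic data the regression recovers the assumed parameters; with only three unknowns the fit stays well conditioned even from a rough initial guess.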

20 citations

Journal ArticleDOI
TL;DR: In this article, a viewpoint model is created for automatically determining the viewing angle during the testing phase, and a distance signal model is constructed to remove any areas with an attachment from a silhouette to reduce the interference in the resulting classification.
Abstract: It is common in real applications to see people walking in arbitrary directions, holding items, or wearing heavy coats. These factors are challenges for gait-based methods, because they significantly change a person's appearance. This paper proposes a novel method for classifying human gender in real time using gait information. The use of an average gait image, rather than a gait energy image, makes the method computationally efficient and robust against view changes. A viewpoint model is created for automatically determining the viewing angle during the testing phase. A distance signal model is constructed to remove any areas with an attachment (carried items, worn coats) from a silhouette, to reduce interference in the resulting classification. Finally, the human gender is classified using multiple view-dependent classifiers trained with a support vector machine. Experimental results confirm that the proposed method achieves a high accuracy of 98.8% on CASIA Dataset B and outperforms recent state-of-the-art methods.

17 citations

Journal ArticleDOI
31 Jul 2019-Sensors
TL;DR: For a better performance validation of the proposed self-calibration method on a real-time ADAS platform, a pragmatic qualitative analysis was conducted by streamlining high-end vision-based tasks such as object detection, localization and mapping, and auto-parking on undistorted frames.
Abstract: This paper proposes a self-calibration method that can be applied to multiple large field-of-view (FOV) camera models on an advanced driver-assistance system (ADAS). First, the proposed method performs a series of pre-processing steps, such as edge detection, length thresholding, and edge grouping, to segregate robust line candidates from the pool of initial distorted line segments. A novel straightness cost constraint with a cross-entropy loss is imposed on the selected line candidates, and this loss is exploited to optimize the lens-distortion parameters using the Levenberg-Marquardt (LM) approach. The best-fit distortion parameters are used to undistort an image frame, enabling various high-end vision-based tasks on the distortion-rectified frame. The study also investigates experimental approaches such as parameter sharing between multiple camera systems and a model-specific empirical γ-residual rectification factor. Quantitative comparisons were carried out between the proposed method, the traditional OpenCV method, and contemporary state-of-the-art self-calibration techniques on the KITTI dataset with synthetically generated distortion ranges. Standard image-consistency metrics, such as the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and the position error in salient-point estimation, were employed for the performance evaluations. Finally, for a better performance validation of the proposed system on a real-time ADAS platform, a pragmatic qualitative analysis was conducted by streamlining high-end vision-based tasks such as object detection, localization and mapping, and auto-parking on undistorted frames.
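The straightness-driven optimization can be illustrated in miniature. The sketch below is a simplified stand-in for the paper's pipeline: it assumes a one-parameter division model for radial distortion in normalized coordinates, synthesizes distorted observations of known straight lines, and recovers the distortion parameter with SciPy's Levenberg-Marquardt solver. The paper's cross-entropy straightness cost is replaced here by plain perpendicular distances of the undistorted candidate points from their best-fit lines.

```python
import numpy as np
from scipy.optimize import least_squares

def undistort(pts, lam):
    """One-parameter division model: p_u = p_d / (1 + lam * r_d^2)."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts / (1.0 + lam * r2)

def distort(pts, lam):
    """Invert the division model (root of lam*r_u*r_d^2 - r_d + r_u = 0)."""
    r_u = np.linalg.norm(pts, axis=1, keepdims=True)
    r_d = (1.0 - np.sqrt(1.0 - 4.0 * lam * r_u**2)) / (2.0 * lam * r_u)
    return pts * (r_d / r_u)

def straightness_residuals(lam, lines):
    """Perpendicular distances of undistorted points from their best-fit lines."""
    res = []
    for pts in lines:
        p = undistort(pts, lam[0])
        p = p - p.mean(axis=0)
        _, _, vt = np.linalg.svd(p, full_matrices=False)
        res.append(p @ vt[-1])           # projection onto the minor axis
    return np.concatenate(res)

# Synthesize distorted observations of three straight lines (barrel distortion).
lam_true = -0.2
x = np.linspace(-0.9, 0.9, 25)
lines = [distort(np.column_stack([x, a * x + b]), lam_true)
         for a, b in [(0.3, 0.4), (-0.5, -0.2), (0.1, -0.6)]]

# LM optimization of the distortion parameter, starting from "no distortion".
fit = least_squares(straightness_residuals, x0=[0.0], args=(lines,), method="lm")
lam_est = fit.x[0]
```

With noiseless lines the residuals vanish at the true parameter, so the recovered value matches the synthetic ground truth; the real method applies the same idea to line candidates extracted from edges.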

15 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

01 Jan 2012

3,692 citations

Journal ArticleDOI
TL;DR: This work proposes a new influence-guided GDM model based on the following assumptions: experts influence each other, and the more an expert trusts another expert, the more his opinion is influenced by that expert.
Abstract: A promising research area in the field of group decision making (GDM) is the study of interpersonal influence and its impact on the evolution of experts' opinions. In conventional GDM models, a group of experts express their individual preferences on a finite set of alternatives; the preferences are then aggregated, and the best alternative, satisfying the majority of experts, is selected. Nevertheless, in real situations, experts form their opinions in a complex interpersonal environment where preferences are liable to change due to social influence. In order to take into account the effects of social influence during the GDM process, we propose a new influence-guided GDM model based on the following assumptions: experts influence each other, and the more an expert trusts another expert, the more his opinion is influenced by that expert. The effects of social influence are especially relevant in cases when, due to domain complexity, limited expertise, or pressure to make a decision, an expert is unable to express preferences on some alternatives, i.e., in the presence of incomplete information. The proposed model adopts fuzzy rankings to collect both experts' preferences on available alternatives and trust statements on other experts. Starting from the collected information, possibly incomplete, the configuration and the strengths of interpersonal influences are evaluated and represented through a social influence network (SIN). The SIN, in turn, is used to estimate missing preferences and to evolve them by simulating the effects of experts' interpersonal influence before aggregating them for the selection of the best alternative. The proposed model was evaluated on synthetic data to demonstrate the influence-driven evolution of opinions and its convergence properties.
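The opinion-evolution idea can be illustrated with a classic DeGroot-style update, used here as a simplified stand-in for the paper's SIN-based model: each expert's opinion vector over the alternatives is repeatedly replaced by a trust-weighted average of all opinions, with the trust weights normalized row-wise. Under mild conditions (a strongly connected, aperiodic trust network), the opinions converge to a consensus. The trust matrix and initial opinions below are illustrative numbers, not data from the paper.

```python
import numpy as np

# Hypothetical trust statements: trust[i, j] = how much expert i trusts expert j
# (including self-trust on the diagonal). Rows are normalized to sum to 1.
trust = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])
W = trust / trust.sum(axis=1, keepdims=True)

# Initial preference scores of 3 experts over 4 alternatives (rows = experts).
opinions = np.array([[0.9, 0.1, 0.5, 0.3],
                     [0.2, 0.8, 0.4, 0.6],
                     [0.5, 0.5, 0.9, 0.1]])

# DeGroot iteration: x_{t+1} = W x_t, until the experts (numerically) agree.
x = opinions.copy()
for _ in range(200):
    x = W @ x

consensus = x[0]                 # all rows are (numerically) identical now
best = int(np.argmax(consensus)) # alternative selected by the converged group
```

The fixed point weights each expert's initial opinion by the stationary distribution of the trust matrix, so more-trusted experts pull the consensus toward their view.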

261 citations

Journal ArticleDOI
TL;DR: This approach incorporates the strength of social ties and the social influence calculated by social network analysis methods into the decision-making process, to address a ranking problem with incomplete additive preference relations (IAPRs).
Abstract: With the rapid growth of Web 2.0 technology, a new paradigm has been developed that allows many users to participate in decision-making processes within online social networks. The social information (i.e., social ties and social influence) of the members that is stored in online social networks provides a new perspective for investigating group decision-making (GDM) problems. In this paper, a new interactive GDM approach, based on online social networks, is proposed to address a ranking problem with incomplete additive preference relations (IAPRs). This approach incorporates the strength of social ties and the social influence calculated by social network analysis methods into the decision-making process. After decision makers (DMs) provide IAPRs, a searching algorithm is developed to identify the optimal preference-information transfer path from DMs to the decision supporters who can provide the corresponding preference information. Next, a linear programming model is constructed to complete the missing preference values of the IAPRs. The main features of the linear programming model include its ability to account for other DMs' preference information and to maintain consistency. To help the group reach an agreement on the ranking of alternatives, a consensus reaching process is proposed. The strength of social ties and social influence are used to calculate the acceptable adjustment coefficients for DMs in the feedback mechanism. Finally, an illustrative example and further discussion demonstrate the validity of the proposed approach.
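The completion step can be illustrated with additive consistency, the standard property underlying such models: for an additive preference relation, p_ik = p_ij + p_jk − 0.5 for every intermediate alternative j. The sketch below is a simplified stand-in for the paper's linear-programming model: a missing entry is estimated by averaging this identity over all known intermediate alternatives.

```python
import numpy as np

def complete_apr(P):
    """Fill missing entries (np.nan) of an additive preference relation
    using additive consistency: p_ik ~ mean over j of (p_ij + p_jk - 0.5)."""
    P = P.copy()
    n = P.shape[0]
    for i in range(n):
        for k in range(n):
            if np.isnan(P[i, k]):
                ests = [P[i, j] + P[j, k] - 0.5
                        for j in range(n)
                        if j not in (i, k)
                        and not np.isnan(P[i, j]) and not np.isnan(P[j, k])]
                if ests:
                    P[i, k] = float(np.mean(ests))
    return P

# A fully consistent relation built from illustrative utility values u,
# via p_ij = 0.5 + (u_i - u_j) / 2; one entry is removed and reconstructed.
u = np.array([0.2, 0.6, 0.4, 0.8])
P_full = 0.5 + (u[:, None] - u[None, :]) / 2.0
P_miss = P_full.copy()
P_miss[0, 3] = np.nan
P_hat = complete_apr(P_miss)
```

Because the test matrix is exactly consistent, every intermediate alternative gives the same estimate and the missing entry is recovered exactly; the paper's LP formulation generalizes this by trading off consistency against the preference information of other DMs.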

112 citations

Journal ArticleDOI
TL;DR: The experimental results show that the proposed model is very efficient at recognizing six basic emotions, while ensuring a significant increase in average classification accuracy over radial basis function and multi-layer perceptron networks.

111 citations