Other affiliations: Edge Hill University, International Seismological Centre, University of Bradford
Bio: Hui Fang is an academic researcher from Loughborough University. The author has contributed to research in the topics of computer science and artificial intelligence, has an h-index of 12, and has co-authored 66 publications receiving 531 citations. Previous affiliations of Hui Fang include Edge Hill University and the International Seismological Centre.
TL;DR: An improved food supply chain in the post-COVID-19 pandemic economy is presented as a case study to demonstrate an effective use of blockchain technology.
Abstract: Blockchain technology is increasingly attracting significant attention in various agricultural applications. These applications can satisfy diverse needs in the ecosystem of agricultural products, e.g., increasing the transparency of food safety, IoT-based food quality control, provenance traceability, improvement of contract exchanges, and transaction efficiency. As multiple untrusted parties, including small-scale farmers, food processors, logistics companies, distributors, and retailers, are involved in the complex farm-to-fork pipeline, it becomes vital to achieve an optimal trade-off between the efficiency and the integrity of agricultural management systems as each context requires. In this paper, we provide a survey of both the techniques and the applications of blockchain technology in the agricultural sector. First, the technical elements, including data structures, cryptographic methods, and consensus mechanisms, are explained in detail. Second, the existing agricultural blockchain applications are categorized and reviewed to demonstrate the use of blockchain techniques. In addition, the popular platforms and smart contracts are described to show how practitioners use them to develop these agricultural applications. Third, we identify the key challenges in many prospective agricultural systems and discuss the efforts and potential solutions to tackle these problems. Finally, an improved food supply chain in the post-COVID-19 pandemic economy is presented as a case study to demonstrate an effective use of blockchain technology.
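The hash-linked data structure underlying such traceability systems can be illustrated with a minimal sketch. The field names and supply-chain events below are hypothetical, not taken from the paper; this shows only the generic tamper-evidence property, not any production blockchain platform:

```python
import hashlib
import json

def hash_block(block):
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a block that stores the hash of its predecessor."""
    prev_hash = hash_block(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev_hash})
    return chain

def verify_chain(chain):
    """Recompute each predecessor hash; any edit breaks the link."""
    return all(chain[i]["prev_hash"] == hash_block(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"event": "harvested", "farm": "F-001"})
append_block(chain, {"event": "processed", "plant": "P-17"})
append_block(chain, {"event": "shipped", "carrier": "C-9"})
assert verify_chain(chain)

chain[1]["data"]["plant"] = "P-99"  # tamper with a middle record
assert not verify_chain(chain)
```

Because every block commits to the hash of the one before it, retroactively editing a farm-to-fork record invalidates all later links, which is the integrity guarantee the survey discusses.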
TL;DR: A fuzzy logic approach integrates hybrid features for detecting shot boundaries inside general videos; on the publicly available test data set from Carleton University, the proposed algorithm outperforms representative existing algorithms in terms of precision and recall rates.
Abstract: Video temporal segmentation is normally the first and an important step for content-based video applications. Many features, including pixel difference, colour histogram, motion, and edge information, have been widely used and reported in the literature to detect shot cuts inside videos. Although existing research on shot cut detection is active and extensive, it remains a challenge to achieve accurate detection of all types of shot boundaries with a single algorithm. In this paper, we propose a fuzzy logic approach to integrate hybrid features for detecting shot boundaries inside general videos. The fuzzy logic approach contains two processing modes: one is dedicated to the detection of abrupt shot cuts, including short dissolved shots, and the other to the detection of gradual shot cuts. These two modes are unified by a mode selector that decides in which mode the scheme should work in order to achieve the best possible detection performance. Extensive experiments were carried out on the publicly available test data set from Carleton University, and the results illustrate that the proposed algorithm outperforms representative existing algorithms in terms of precision and recall rates.
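As a rough illustration of how fuzzy logic can integrate hybrid features for cut detection, the sketch below combines two normalized frame-difference features with a min-based fuzzy AND. The membership ranges and the single rule are illustrative assumptions, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def shot_cut_score(hist_diff, pixel_diff):
    """Fuzzy AND (min) of 'high histogram difference' and
    'high pixel difference'; both inputs normalized to [0, 1].
    Membership ranges below are hypothetical."""
    high_hist = tri(hist_diff, 0.3, 0.8, 1.3)
    high_pix = tri(pixel_diff, 0.3, 0.8, 1.3)
    return min(high_hist, high_pix)

# An abrupt cut drives both differences high; ordinary frames stay near 0.
assert shot_cut_score(0.8, 0.8) == 1.0
assert shot_cut_score(0.05, 0.1) == 0.0
```

A real system would add more features (motion, edges), more rules, and a defuzzification step, with a separate rule set for the gradual-transition mode.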
TL;DR: Experimental results demonstrate that the proposed lossless scheme not only has remarkable imperceptibility and sufficient robustness but also provides reliable authentication, tamper detection, localization, and recovery functions, which outperforms existing schemes for protecting medical images.
Abstract: It is of great importance in telemedicine to protect the authenticity and integrity of medical images. These are mainly addressed by two technologies: region of interest (ROI) lossless watermarking and reversible watermarking. However, the former biases diagnosis by distorting the region of non-interest (RONI) and introduces security risks by segmenting the image spatially for watermark embedding, while the latter fails to provide a reliable recovery function for tampered areas when protecting image integrity. To address these issues, a novel robust reversible watermarking scheme is proposed in this paper. In our scheme, a reversible watermarking method is designed based on recursive dither modulation (RDM) to avoid biases on diagnosis. In addition, RDM is combined with the Slantlet transform and singular value decomposition to provide a reliable solution for protecting image authenticity. Moreover, the ROI and RONI are divided for watermark generation to design an effective recovery function under limited embedding capacity. Finally, watermarks are embedded into whole medical images to avoid the risks caused by segmenting the image spatially. Experimental results demonstrate that our proposed lossless scheme not only has remarkable imperceptibility and sufficient robustness but also provides reliable authentication, tamper detection, localization, and recovery functions, outperforming existing schemes for protecting medical images.
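Dither modulation is a form of quantization index modulation (QIM): each bit selects one of two interleaved quantization lattices for a transform coefficient. The sketch below shows the basic (non-recursive) operation with an illustrative quantization step and made-up coefficient values; it does not reproduce the paper's recursive variant or its Slantlet/SVD pipeline:

```python
def qim_embed(value, bit, step=8.0):
    """Embed one bit by quantizing the coefficient onto one of two
    lattices offset by half a quantization step."""
    dither = 0.0 if bit == 0 else step / 2.0
    return round((value - dither) / step) * step + dither

def qim_extract(value, step=8.0):
    """Recover the bit by finding which lattice is nearer."""
    d0 = abs(value - round(value / step) * step)
    d1 = abs(value - (round((value - step / 2) / step) * step + step / 2))
    return 0 if d0 <= d1 else 1

coeffs = [12.3, 45.1, 78.9, 101.2]   # hypothetical transform coefficients
bits = [1, 0, 1, 1]                  # watermark payload
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
assert [qim_extract(m) for m in marked] == bits
```

The step size trades imperceptibility against robustness: larger steps survive stronger distortion but perturb the coefficients more, which is exactly the balance a medical-image scheme must manage.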
TL;DR: A lightweight single image super-resolution network with an expectation-maximization attention mechanism (EMASRN) is proposed to better balance performance and applicability; experimental results demonstrate the superiority of EMASRN over state-of-the-art lightweight SISR methods in terms of both quantitative metrics and visual quality.
Abstract: In recent years, with the rapid development of deep learning, super-resolution methods based on convolutional neural networks (CNNs) have made great progress. However, the parameter counts and computing-resource requirements of these methods have also increased, to the point that they are difficult to implement on devices with low computing power. To address this issue, we propose a lightweight single image super-resolution network with an expectation-maximization attention mechanism (EMASRN) to better balance performance and applicability. Specifically, a progressive multi-scale feature extraction block (PMSFE) is proposed to extract feature maps of different sizes. Furthermore, we propose an HR-size expectation-maximization attention block (HREMAB) that directly captures the long-range dependencies of HR-size feature maps. We also utilize a feedback network to feed the high-level features of each generation into the next generation's shallow network. Compared with existing lightweight single image super-resolution (SISR) methods, our EMASRN reduces the number of parameters by almost one-third. The experimental results demonstrate the superiority of our EMASRN over state-of-the-art lightweight SISR methods in terms of both quantitative metrics and visual quality. The source code can be downloaded at https://github.com/xyzhu1/EMASRN.
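A building block common to lightweight SISR networks is sub-pixel (pixel-shuffle) upsampling, which trades channels for spatial resolution so that most computation stays at low resolution. The NumPy sketch below illustrates the generic rearrangement only; it is not EMASRN's specific architecture:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r),
    the standard sub-pixel upsampling used in SISR networks."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # reorder to (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16, dtype=float).reshape(4, 2, 2)  # 4 channels of 2x2
y = pixel_shuffle(x, 2)                          # 1 channel of 4x4
assert y.shape == (1, 4, 4)
assert y[0, 0, 0] == 0.0 and y[0, 0, 1] == 4.0   # channels interleave spatially
```

Because the expensive convolutions operate on the small H×W grid and only the final rearrangement produces the HR image, this keeps parameter and compute budgets low on constrained devices.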
TL;DR: A novel framework is proposed for automatic facial expression analysis which extracts salient information from video sequences but does not rely on any subjective preprocessing or additional user-supplied information to select frames with peak expressions and outperforms static expression recognition systems in terms of recognition rate.
Abstract: Automatic facial expression analysis aims to analyse human facial expressions and classify them into discrete categories. Existing methods rely on extracting information from video sequences and employ either some form of subjective thresholding of dynamic information or an attempt to identify the particular individual frames in which the expected behaviour occurs. These methods are inefficient: they require additional subjective information or tedious manual work, or they fail to take advantage of the information contained in the dynamic signature of facial movements for the task of expression recognition. In this paper, a novel framework is proposed for automatic facial expression analysis which extracts salient information from video sequences but does not rely on any subjective preprocessing or additional user-supplied information to select frames with peak expressions. The experimental framework demonstrates that the proposed method outperforms static expression recognition systems in terms of recognition rate. The approach does not rely on action units (AUs) and therefore eliminates errors that would otherwise be propagated to the final result by incorrect initial identification of AUs. The proposed framework explores a parametric space of over 300 dimensions and is tested with six state-of-the-art machine learning techniques. Such robust and extensive experimentation provides an important foundation for assessing the performance of future work. A further contribution of the paper is a user study, conducted to investigate the correlation between human cognitive systems and the proposed framework for the understanding of human emotion classification and the reliability of public databases.
Highlights:
- Extraction of dynamic signals via a parametric space to improve the automatic facial expression recognition rate.
- An objective comparison with systems utilizing static apex expression recognition.
- The use of a visualisation technique for the analysis and initial understanding of facial feature data.
- An intuitive user study to investigate the correlation between human perception and machine vision.
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book covers essential topics that either reflect practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience, along with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book covers essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.
01 Jan 1999
TL;DR: The International Parkinson and Movement Disorder Society (MDS) Clinical Diagnostic Criteria for Parkinson's disease are presented; as discussed by the authors, the criteria are intended for use in clinical research but may also be used to guide clinical diagnosis.
Abstract: Objective To present the International Parkinson and Movement Disorder Society (MDS) Clinical Diagnostic Criteria for Parkinson's disease. Background Although several diagnostic criteria for Parkinson's disease have been proposed, none has been adopted by an official Parkinson society. Moreover, the most commonly used criteria, those of the UK Brain Bank, were created more than 25 years ago. In recognition of the lack of standard criteria, the MDS initiated a task force to design new diagnostic criteria for clinical Parkinson's disease. Methods/Results The MDS-PD Criteria are intended for use in clinical research but may also be used to guide clinical diagnosis. The benchmark is expert clinical diagnosis; the criteria aim to systematize the diagnostic process, to make it reproducible across centers, and to make it applicable by clinicians with less expertise. Although motor abnormalities remain central, there is increasing recognition of non-motor manifestations; these are incorporated into both the current criteria and, particularly, into separate criteria for prodromal PD. Similar to previous criteria, the MDS-PD Criteria retain motor parkinsonism as the core disease feature, defined as bradykinesia plus rest tremor and/or rigidity. Explicit instructions for defining these cardinal features are included. After documentation of parkinsonism, determination of PD as the cause of the parkinsonism relies upon three categories of diagnostic features: absolute exclusion criteria (which rule out PD), red flags (which must be counterbalanced by additional supportive criteria to allow a diagnosis of PD), and supportive criteria (positive features that increase confidence in the PD diagnosis). Two levels of certainty are delineated: clinically established PD (maximizing specificity at the expense of reduced sensitivity) and probable PD (which balances sensitivity and specificity).
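The interaction of the three feature categories with the two certainty levels can be sketched as a simple decision function. This is a counts-only simplification for illustration: the actual criteria enumerate specific items, require motor parkinsonism first, and allow at most two red flags:

```python
def mds_pd_certainty(absolute_exclusions, red_flags, supportive):
    """Simplified count-based reading of the MDS-PD certainty levels.
    Assumes motor parkinsonism has already been documented."""
    if absolute_exclusions > 0:
        return "not PD"                      # absolute exclusions rule out PD
    if red_flags == 0 and supportive >= 2:
        return "clinically established PD"   # maximizes specificity
    if red_flags <= 2 and supportive >= red_flags:
        return "probable PD"                 # red flags counterbalanced
    return "criteria not met"                # e.g., more than 2 red flags
```

For example, a patient with parkinsonism, no exclusions, one red flag, and one supportive criterion would fall into the probable-PD tier, while the same patient with no red flags and two supportive criteria would reach clinically established PD.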
Conclusion The MDS criteria retain elements proven valuable in previous criteria and omit aspects that are no longer justified, thereby encapsulating diagnosis according to current knowledge. As understanding of PD expands, the criteria will need continuous revision to accommodate these advances. Disclosure: Dr. Postuma has received personal compensation for activities with Roche Diagnostics Corporation and Biotie Therapies. Dr. Berg has received research support from the Michael J. Fox Foundation, the Bundesministerium für Bildung und Forschung (BMBF), the German Parkinson Association, and Novartis GmbH.