
Showing papers in "Applied Sciences in 2020"


Journal ArticleDOI
TL;DR: A novel deep learning framework for the detection of pneumonia using the concept of transfer learning, where features from images are extracted using different neural network models pretrained on ImageNet, which then are fed into a classifier for prediction.
Abstract: Pneumonia is among the diseases that cause the most deaths worldwide. Viruses, bacteria, and fungi can all cause pneumonia. However, it is difficult to diagnose pneumonia just by looking at chest X-rays. The aim of this study is to simplify the pneumonia detection process for experts as well as for novices. We suggest a novel deep learning framework for the detection of pneumonia using the concept of transfer learning. In this approach, features from images are extracted using different neural network models pretrained on ImageNet, which are then fed into a classifier for prediction. We prepared five different models and analyzed their performance. Thereafter, we proposed an ensemble model that combines the outputs of all pretrained models, which outperformed the individual models, reaching state-of-the-art performance in pneumonia recognition. Our ensemble model reached an accuracy of 96.4% with a recall of 99.62% on unseen data from the Guangzhou Women and Children’s Medical Center dataset.
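The ensemble step described in the abstract can be sketched with plain soft voting: average the class probabilities from several independently trained models and threshold the mean. The per-model probabilities below are illustrative stand-ins, not outputs of the paper's actual pretrained networks.

```python
import numpy as np

# Hypothetical P(pneumonia) from three models for four test images
probs = np.array([
    [0.92, 0.85, 0.30, 0.70],   # model 1
    [0.88, 0.60, 0.20, 0.75],   # model 2
    [0.95, 0.70, 0.45, 0.55],   # model 3
])
ensemble = probs.mean(axis=0)          # soft-voting ensemble probability
pred = (ensemble >= 0.5).astype(int)   # final label per image
print(pred.tolist())                   # [1, 1, 0, 1]
```

Image 3 illustrates the point of ensembling: one model votes "pneumonia" (0.70) but the averaged probability stays below the threshold.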

417 citations


Journal ArticleDOI
TL;DR: This work proposes a new layered IoT model, generic and stretched, with privacy and security components identified at each layer, and implements security certificates to allow data transfer between the layers of the proposed cloud/edge-enabled IoT model.
Abstract: Privacy and security are among the significant challenges of the Internet of Things (IoT). Improper device updates, lack of efficient and robust security protocols, user unawareness, and active device monitoring are among the challenges that IoT is facing. In this work, we explore the background of IoT systems and security measures, identifying (a) different security and privacy issues, (b) approaches used to secure the components of IoT-based environments and systems, (c) existing security solutions, and (d) the best privacy models necessary and suitable for different layers of IoT-driven applications. We propose a new layered IoT model, generic and stretched, with privacy and security components identified at each layer. The proposed cloud/edge-supported IoT system is implemented and evaluated. The lower layer, represented by the IoT nodes, was generated from Amazon Web Services (AWS) as virtual machines. The middle layer (edge) was implemented as a Raspberry Pi 4 hardware kit with support of the Greengrass edge environment in AWS. We used the cloud-enabled IoT environment in AWS to implement the top layer (the cloud). Security protocols and key-management sessions were established between each of these layers to ensure the privacy of the users’ information. We implemented security certificates to allow data transfer between the layers of the proposed cloud/edge-enabled IoT model. Not only does the proposed system model eliminate possible security vulnerabilities, it can also be used along with the best security techniques to counter the cybersecurity threats facing each of the layers: cloud, edge, and IoT.

247 citations


Journal ArticleDOI
TL;DR: A new CNN architecture for brain tumor classification of three tumor types is presented, simpler than already-existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images and two databases.
Abstract: The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. Improvements in technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for the classification of three brain tumor types. The developed network is simpler than already-existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested by using an augmented image database. The best result for the 10-fold cross-validation method was obtained for record-wise cross-validation on the augmented data set, in which case the accuracy was 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
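The subject-wise versus record-wise distinction matters because images from the same patient are correlated: record-wise folds can leak a patient across train and test. A minimal sketch of a subject-wise split (patient IDs and fold count are hypothetical, not from the paper's data):

```python
import numpy as np

# Two MRI slices per patient; subject-wise CV must keep all slices of a
# patient in the same fold, unlike a naive record-wise shuffle.
subjects = np.array([0, 0, 1, 1, 2, 2])

def subject_wise_folds(subjects, n_folds):
    """Assign each whole subject (not each record) to one fold."""
    uniq = np.unique(subjects)
    fold_of_subject = {s: i % n_folds for i, s in enumerate(uniq)}
    return np.array([fold_of_subject[s] for s in subjects])

folds = subject_wise_folds(subjects, 3)
print(folds.tolist())   # [0, 0, 1, 1, 2, 2] -- records of a subject share a fold
```

In practice a library routine such as a grouped k-fold splitter does the same thing; the point is that generalization estimates from record-wise folds are optimistic.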

220 citations


Journal ArticleDOI
TL;DR: This review summarizes the most updated findings on the impact of drought stress on plant morphological, biochemical and physiological features and highlights plant mechanisms of tolerance which could be exploited to increase the plant capability to survive under limited water availability.
Abstract: Plants are often exposed to unfavorable environmental conditions, for instance abiotic stresses, which dramatically alter the distribution of plant species among ecological niches and limit the yields of crop species. Among these, drought stress is one of the most impactful factors, seriously altering plant physiology and ultimately leading to a decline in crop productivity. Drought stress causes a set of morpho-anatomical, physiological, and biochemical changes in plants, mainly aimed at limiting water loss by transpiration in an attempt to increase the plant's water use efficiency. Stomatal closure, one of the first consistent reactions observed under drought, results in a series of consequent physiological/biochemical adjustments aimed at balancing the photosynthetic process as well as at enhancing the plant defense barriers against drought-promoted stress (e.g., stimulation of antioxidant systems, accumulation of osmolytes, and stimulation of aquaporin synthesis), all representing an attempt by the plant to overcome the unfavorable period of limited water availability. In view of the severe changes in water availability imposed by climate change factors and considering the increasing human population, it is therefore of utmost importance to highlight: (i) how plants react to drought; (ii) the mechanisms of tolerance exhibited by some species/cultivars; and (iii) the techniques aimed at increasing the tolerance of crop species against limited water availability. All these aspects are necessary to respond to the continuously increasing demand for food, which unfortunately parallels the loss of arable land due to changes in rainfall dynamics and prolonged periods of drought provoked by climate change factors.
This review summarizes the most updated findings on the impact of drought stress on plant morphological, biochemical, and physiological features and highlights plant mechanisms of tolerance which could be exploited to increase the plant's capability to survive under limited water availability. In addition, possible applicative strategies to help the plant in counteracting unfavorable drought periods are also discussed.

219 citations


Journal ArticleDOI
TL;DR: The proposed study can help radiologists diagnose pneumonia faster and can help in the fast airport screening of pneumonia patients.
Abstract: Pneumonia is a life-threatening disease of the lungs caused by either bacterial or viral infection. It can be life-endangering if not acted upon at the right time, and thus the early diagnosis of pneumonia is vital. This paper aims to automatically detect bacterial and viral pneumonia using digital X-ray images. It provides a detailed report on advances in the accurate detection of pneumonia and then presents the methodology adopted by the authors. Four different pre-trained deep Convolutional Neural Networks (CNNs): AlexNet, ResNet18, DenseNet201, and SqueezeNet, were used for transfer learning. A total of 5247 chest X-ray images consisting of bacterial, viral, and normal chest X-rays were preprocessed and used for the transfer learning-based classification task. The authors report three classification schemes: normal vs. pneumonia, bacterial vs. viral pneumonia, and normal vs. bacterial vs. viral pneumonia. The classification accuracies for these schemes were 98%, 95%, and 93.3%, respectively; these are the highest accuracies reported in the literature for each scheme. Therefore, the proposed study can be useful in more quickly diagnosing pneumonia by the radiologist and can help in the fast airport screening of pneumonia patients.

214 citations


Journal ArticleDOI
TL;DR: The authors anticipate that climate change will exacerbate recent trends in the suitability of regions for wine production, while wine typicity may also be threatened in most cases.
Abstract: Viticulture and winemaking are important socioeconomic sectors in many European regions. Climate plays a vital role in the terroir of a given wine region, as it strongly controls canopy microclimate, vine growth, vine physiology, yield, and berry composition, which together determine wine attributes and typicity. New challenges are, however, predicted to arise from climate change, as grapevine cultivation is deeply dependent on weather and climate conditions. Changes in viticultural suitability over the last decades, for viticulture in general or the use of specific varieties, have already been reported for many wine regions. Despite spatially heterogeneous impacts, climate change is anticipated to exacerbate these recent trends in suitability for wine production. These shifts may reshape the geographical distribution of wine regions, while wine typicity may also be threatened in most cases. Changing climates will thereby urge the implementation of timely, suitable, and cost-effective adaptation strategies, which should also be thoroughly planned and tuned to local conditions for effective risk reduction. Although the potential of the different adaptation options is not yet fully investigated and deserves further research, their adoption will be of the utmost relevance to maintain the socioeconomic and environmental sustainability of the highly valued viticulture and winemaking sector in Europe.

210 citations


Journal ArticleDOI
TL;DR: FTIR analyses of a series of contaminants, such as various solvents, chemicals, enzymes, and possibly formed degradation by-products in the biomass conversion process along with poplar biomass are reported to prevent misunderstanding the FTIR analysis results of the processed biomass.
Abstract: With rapidly increased interest in biomass, diverse chemical and biological processes have been applied for biomass utilization. Fourier transform infrared (FTIR) analysis has been used for characterizing different types of biomass and their products, including natural and processed biomass. During biomass treatments, some solvents and/or catalysts can be retained and contaminate the biomass. In addition, contaminants can be generated by the decomposition of biomass components. Herein, we report FTIR analyses of a series of contaminants, such as various solvents, chemicals, enzymes, and possibly formed degradation by-products in the biomass conversion process, along with poplar biomass. This information helps to prevent misunderstanding the FTIR analysis results of processed biomass.

202 citations


Journal ArticleDOI
TL;DR: The Value Sensitive Design (VSD) approach as mentioned in this paper is proposed as a principled framework to illustrate how technologies enabling human-machine symbiosis in the Factory of the Future can be designed to embody elicited human values and to illustrate actionable steps that engineers and designers can take in their design projects.
Abstract: Although manufacturing companies are currently situated at a transition point in what has been called Industry 4.0, a new revolutionary wave, Industry 5.0, is emerging as an 'Age of Augmentation', when the human and the machine reconcile and work in perfect symbiosis with one another. Recent years have indeed drawn attention to the human-centric design of Cyber-Physical Production Systems (CPPS) and to the genesis of the 'Operator 4.0', two novel concepts that raise significant ethical questions regarding the impact of technology on workers and society at large. This paper argues that value-oriented and ethical technology engineering in Industry 5.0 is an urgent and sensitive topic, as demonstrated by a survey administered to industry leaders from different companies. The Value Sensitive Design (VSD) approach is proposed as a principled framework to illustrate how technologies enabling human–machine symbiosis in the Factory of the Future can be designed to embody elicited human values, and to illustrate actionable steps that engineers and designers can take in their design projects. Use cases based on real solutions and prototypes discuss how a design-for-values approach aids in the investigation and mitigation of ethical issues emerging from the implementation of technological solutions and, hence, supports the migration to a symbiotic Factory of the Future.

177 citations


Journal ArticleDOI
TL;DR: Experimental results on two public facial expression databases show that a convolutional neural network based on the improved activation function performs better than networks using state-of-the-art activation functions.
Abstract: The convolutional neural network (CNN) has been widely used in the image recognition field due to its good performance. This paper proposes a facial expression recognition method based on the CNN model. Within the complex hierarchic structure of the CNN model, the activation function is the core, because the nonlinearity of the activation function is what gives a deep neural network its representational power. Among common activation functions, ReLU is one of the best, but it also has shortcomings. Since the derivative of the ReLU function is always zero when the input value is negative, affected neurons can stop updating, a phenomenon known as neuronal necrosis (dying neurons). To solve this problem, the influence of the activation function in the CNN model is studied in this paper. According to the design principles of activation functions in CNN models, a new piecewise activation function is proposed. Five common activation functions (sigmoid, tanh, ReLU, leaky ReLU, and softplus-ReLU), plus the new activation function, have been analysed and compared in facial expression recognition tasks based on the Keras framework. Experimental results on two public facial expression databases (JAFFE and FER2013) show that the convolutional neural network based on the improved activation function performs better than networks using state-of-the-art activation functions.
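The abstract does not give the paper's exact piecewise function, but the dying-ReLU problem it targets, and the standard leaky-ReLU remedy of a small nonzero slope for negative inputs, can be sketched as:

```python
import numpy as np

def relu(x):
    # Zero for x < 0: gradient there is exactly 0, so a neuron stuck in the
    # negative region receives no updates (the "dying ReLU" problem).
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small nonzero slope alpha for x < 0 keeps the gradient alive.
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x).tolist())        # [0.0, 0.0, 0.0, 1.5]
print(leaky_relu(x).tolist())  # [-0.02, -0.005, 0.0, 1.5]
```

The paper's proposed function is a different piecewise construction, but any comparison like the one it reports hinges on this negative-region gradient behavior.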

174 citations


Journal ArticleDOI
TL;DR: In this article, zinc oxide nanoparticles (ZnO NPs) were prepared using S. ebulus leaf extract and their physicochemical properties were investigated; X-ray diffraction (XRD) results revealed that the prepared NPs are highly crystalline, with a wurtzite crystal structure.
Abstract: Plants are one of the best sources to obtain a variety of natural surfactants in the field of green synthesizing material. Sambucus ebulus, which has unique natural properties, has been considered a promising material in traditional Asian medicine. In this context, zinc oxide nanoparticles (ZnO NPs) were prepared using S. ebulus leaf extract, and their physicochemical properties were investigated. X-ray diffraction (XRD) results revealed that the prepared ZnO NPs are highly crystalline, having a wurtzite crystal structure. The average crystallite size of prepared NPs was around 17 nm. Green synthesized NPs showed excellent absorption in the UV region as well as strong yellow-orange emission at room temperature. Prepared nanoparticles exhibited good antibacterial activity against various organisms and a passable photocatalytic degradation of methylene blue dye pollutants. The obtained results demonstrated that the biosynthesized ZnO NPs reveal interesting characteristics for various potential applications in the future.
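Crystallite sizes on the order of the reported ~17 nm are conventionally estimated from XRD peak broadening with the Scherrer equation. The sketch below uses hypothetical peak values typical of a ZnO (101) reflection under Cu K-alpha radiation, not the paper's measured data.

```python
import math

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Scherrer equation: D = K * lambda / (beta * cos(theta))."""
    beta = math.radians(fwhm_deg)            # peak FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha wavelength 0.15406 nm; hypothetical FWHM and 2-theta values
size = scherrer_size_nm(0.15406, 0.50, 36.25)
print(round(size, 1))   # 16.7
```

A half-degree FWHM at this angle lands near the reported size, which is the usual regime where Scherrer broadening dominates.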

165 citations


Journal ArticleDOI
TL;DR: An attention-based Bi-LSTM+CNN hybrid model is proposed that capitalizes on the advantages of LSTM and CNN with an additional attention mechanism; it produces more accurate classification results, as well as higher recall and F1 scores, than individual multi-layer perceptron (MLP), CNN, or LSTM models, and than hybrid models without attention.
Abstract: There is a need to extract meaningful information from big data, classify it into different categories, and predict end-user behavior or emotions. Large amounts of data are generated from various sources such as social media and websites. Text classification is a representative research topic in the field of natural-language processing that categorizes unstructured text data into meaningful categorical classes. The long short-term memory (LSTM) model and the convolutional neural network for sentence classification produce accurate results and have been recently used in various natural-language processing (NLP) tasks. Convolutional neural network (CNN) models use convolutional layers and maximum pooling or max-overtime pooling layers to extract higher-level features, while LSTM models can capture long-term dependencies between word sequences and hence are well suited to text classification. However, even with the hybrid approach that leverages the powers of these two deep-learning models, the number of features to remember for classification remains huge, which hinders the training process. In this study, we propose an attention-based Bi-LSTM+CNN hybrid model that capitalizes on the advantages of LSTM and CNN with an additional attention mechanism. We trained the model on the Internet Movie Database (IMDB) movie review data to evaluate the performance of the proposed model, and the test results showed that the proposed hybrid attention Bi-LSTM+CNN model produces more accurate classification results, as well as higher recall and F1 scores, than individual multi-layer perceptron (MLP), CNN, or LSTM models, as well as than the hybrid models.
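The attention step layered on the Bi-LSTM outputs can be sketched in a few lines of numpy: score each timestep's hidden state, softmax the scores, and form a weighted context vector. The hidden states and weight vector are random stand-ins for a trained model's values, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
H = rng.normal(size=(5, 8))   # 5 timesteps, hidden size 8 (Bi-LSTM outputs)
w = rng.normal(size=8)        # learned attention weight vector (stand-in)

scores = H @ w                        # one relevance score per timestep
alpha = np.exp(scores - scores.max()) # numerically stable softmax
alpha /= alpha.sum()                  # attention weights, sum to 1
context = alpha @ H                   # weighted sum -> fixed-size vector
print(context.shape)                  # (8,)
```

The fixed-size context vector is what lets the downstream CNN/dense layers focus on the informative timesteps instead of remembering every feature, which is the bottleneck the abstract describes.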

Journal ArticleDOI
TL;DR: The purpose of this review is to assess how the novel nanomaterials fabricated by Au NPs can impact biomedical applications such as drug delivery and cancer therapy.
Abstract: Nanomaterials are popularly used in drug delivery, disease diagnosis and therapy. Among a number of functionalized nanomaterials such as carbon nanotubes, peptide nanostructures, liposomes and polymers, gold nanoparticles (Au NPs) make excellent drug and anticancer agent carriers in biomedical and cancer therapy application. Recent advances of synthetic technique improved the surface coating of Au NPs with accurate control of particle size, shape and surface chemistry. These make the gold nanomaterials a much easier and safer cancer agent and drug to be applied to the patient’s tumor. Although many studies on Au NPs have been published, more results are in the pipeline due to the rapid development of nanotechnology. The purpose of this review is to assess how the novel nanomaterials fabricated by Au NPs can impact biomedical applications such as drug delivery and cancer therapy. Moreover, this review explores the viability, property and cytotoxicity of various Au NPs.

Journal ArticleDOI
TL;DR: From the results of the review, it can be concluded that ANN models are capable of dealing with different modeling problems in rivers, lakes, reservoirs, wastewater treatment plants, groundwater, ponds, and streams.
Abstract: Water quality prediction plays an important role in environmental monitoring, ecosystem sustainability, and aquaculture. Traditional prediction methods cannot capture the nonlinearity and non-stationarity of water quality well. In recent years, the rapid development of artificial neural networks (ANNs) has made them a hotspot in water quality prediction. We have conducted an extensive investigation and analysis of ANN-based water quality prediction from three aspects, namely feedforward, recurrent, and hybrid architectures. Based on 151 papers published from 2008 to 2019, 23 types of water quality variables were highlighted. The variables were primarily collected by sensors, followed by specialist experimental equipment, such as UV-visible photometers, where no mature sensor is yet available. Five different output strategies, namely Univariate-Input-Itself-Output, Univariate-Input-Other-Output, Multivariate-Input-Other(multi)-Output, Multivariate-Input-Itself-Other-Output, and Multivariate-Input-Itself-Other(multi)-Output, are summarized. From the results of the review, it can be concluded that ANN models are capable of dealing with different modeling problems in rivers, lakes, reservoirs, wastewater treatment plants (WWTPs), groundwater, ponds, and streams. The results of many of the reviewed articles are useful to researchers in prediction and similar fields. Several architectures presented in the study, such as recurrent and hybrid structures, are able to improve modeling quality and point toward future development.

Journal ArticleDOI
TL;DR: This article presents a review of the use of CNNs applied to different automatic processing tasks of fruit images: classification, quality control, and detection. It observes that in the last two years (2019–2020), the use of CNNs for fruit recognition has greatly increased, obtaining excellent results.
Abstract: Agriculture has always been an important economic and social sector for humans. Fruit production is especially essential, with great demand from all households. Therefore, the use of innovative technologies is of vital importance for the agri-food sector. Currently, artificial intelligence is a very important technological tool, widely used in modern society. In particular, Deep Learning (DL) has several applications due to its ability to learn robust representations from images. The Convolutional Neural Network (CNN) is the main DL architecture for image classification. Based on the great attention that CNNs have received in recent years, we present a review of the use of CNNs applied to different automatic processing tasks of fruit images: classification, quality control, and detection. We observe that in the last two years (2019–2020), the use of CNNs for fruit recognition has greatly increased, obtaining excellent results, either by using new models or with pre-trained networks for transfer learning. It is worth noting that different types of images are used in datasets according to the task performed. Besides, this article presents the fundamentals, tools, and two examples of the use of CNNs for fruit sorting and quality control.

Journal ArticleDOI
TL;DR: In this paper, the current status of stainless steel wire arc additive manufacturing (WAAM) was reviewed, covering the microstructure, mechanical properties, and defects related to different stainless steels and process parameters.
Abstract: Wire arc additive manufacturing (WAAM) has been considered a promising technology for the production of large metallic structures with high deposition rates and low cost. Stainless steels are widely applied due to their good mechanical properties and excellent corrosion resistance. This paper reviews the current status of stainless steel WAAM, covering the microstructure, mechanical properties, and defects related to different stainless steels and process parameters. Residual stress and distortion of the WAAM-manufactured components are discussed. Specific WAAM techniques, material compositions, process parameters, shielding gas composition, post heat treatments, microstructure, and defects can significantly influence the mechanical properties of WAAM stainless steels. To achieve high-quality WAAM stainless steel parts, there is still a strong need to further study the underlying physical metallurgy mechanisms of the WAAM process and post heat treatments, in order to optimize the WAAM and heat treatment parameters and thus control the microstructure. WAAM samples often show considerable anisotropy in both microstructure and mechanical properties. The new in-situ rolling + WAAM process is very effective in reducing this anisotropy, and it can also reduce residual stress and distortion. For future industrial applications, the fatigue properties and corrosion behavior of WAAM stainless steels need to be studied in depth. Additionally, further efforts should be made to improve the WAAM process to achieve faster deposition rates and better quality control.

Journal ArticleDOI
TL;DR: Biochar has been widely used as an additive/support media during anaerobic digestion and as filter media for the removal of suspended matter, heavy metals, and pathogens as mentioned in this paper.
Abstract: Biochar, as a stable carbon-rich material, shows incredible potential to handle water/wastewater contaminants. Its application is gaining increasing interest due to the availability of feedstock, the simplicity of the preparation methods, and its enhanced physico-chemical properties. The efficacy of biochar in removing organic and inorganic pollutants depends on its surface area, pore size distribution, surface functional groups, and the size of the molecules to be removed, while the physical architecture and surface properties of biochar depend on the nature of the feedstock and the preparation method/conditions. For instance, pyrolysis at high temperatures generally produces hydrophobic biochars with higher surface area and micropore volume, making them more suitable for organic contaminant sorption, whereas biochars produced at low temperatures have smaller pore sizes, lower surface area, and more oxygen-containing functional groups and are more suitable for removing inorganic contaminants. In the field of water/wastewater treatment, biochar can have extensive application prospects. Biochar has been widely used as an additive/support medium during anaerobic digestion and as a filter medium for the removal of suspended matter, heavy metals, and pathogens. Biochar was also tested for its efficiency as a support-based catalyst for the degradation of dyes and recalcitrant contaminants. The current review discusses the different methods for biochar production and provides an overview of current applications of biochar in wastewater treatment.

Journal ArticleDOI
TL;DR: An overview of the application of blockchain technologies for enabling traceability in the agri-food domain is provided and an extensive literature review on the integration of blockchain into traceability systems is conducted.
Abstract: Food holds a major role in human beings' lives and in human societies in general across the planet. The food and agriculture sector is considered a major employer at a worldwide level. The large number and heterogeneity of the stakeholders involved from different sectors, such as farmers, distributors, retailers, consumers, etc., renders agricultural supply chain management one of the most complex and challenging tasks. It is this same vast complexity of the agri-product supply chain that limits the development of global and efficient transparency and traceability solutions. The present paper provides an overview of the application of blockchain technologies for enabling traceability in the agri-food domain. Initially, the paper presents definitions, levels of adoption, tools, and advantages of traceability, accompanied by a brief overview of the functionality and advantages of blockchain technology. It then conducts an extensive literature review on the integration of blockchain into traceability systems. It proceeds to discuss relevant existing commercial applications, highlighting the relevant challenges and future prospects of the application of blockchain technologies in the agri-food supply chain.
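The core traceability property blockchain offers, that tampering with one record invalidates every later one, comes from hash chaining. A toy sketch of that mechanism (event names are hypothetical, and real systems add signatures and consensus on top):

```python
import hashlib
import json

def add_block(chain, event):
    """Append an event linked to the previous block by its hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps({"event": block["event"], "prev": prev},
                             sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True

chain = []
for step in ["harvested", "shipped", "received"]:
    add_block(chain, step)
print(verify(chain))             # True
chain[1]["event"] = "tampered"   # retroactively edit one record
print(verify(chain))             # False
```

Distributing copies of such a chain among farmers, distributors, and retailers is what removes the need to trust any single party's database.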

Journal ArticleDOI
TL;DR: In this article, the hidden peaks in the amide I band region of infrared and Raman spectra of four globular proteins in aqueous solution as well as hydrated zein and gluten proteins were identified and quantified using the Voigt function.
Abstract: FTIR and Raman spectroscopy are often used to investigate the secondary structure of proteins. Focus is then often laid on the different features that can be distinguished in the Amide I band (1600–1700 cm−1) and, to a lesser extent, the Amide II band (1510–1580 cm−1), signature regions for C=O stretching/N-H bending and N-H bending/C-N stretching vibrations, respectively. Proper investigation of all hidden and overlapping features/peaks is a necessary step to achieve reliable analysis of FTIR and FT-Raman spectra of proteins. This paper discusses a method to identify, separate, and quantify the hidden peaks in the amide I band region of infrared and Raman spectra of four globular proteins in aqueous solution as well as hydrated zein and gluten proteins. The globular proteins studied, which differ widely in terms of their secondary structures, include immunoglobulin G, concanavalin A, lysozyme, and trypsin. Peak finding was done by analysis of the second derivative of the original spectra. Peak separation and quantification were achieved by curve fitting using the Voigt function. Structural data derived from the FT-Raman and FTIR analyses were compared to literature reports on protein structure. This manuscript proposes an accurate method to analyze protein secondary structure based on the amide I band in vibrational spectra.
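The second-derivative peak-finding step can be illustrated on a synthetic pair of overlapping Gaussian bands (centers and widths below are hypothetical, not the paper's data): the merged envelope shows one broad feature, but the second derivative has a distinct minimum at each hidden component.

```python
import numpy as np

# Two overlapping synthetic bands in the amide I region merge into one
# broad envelope; their centers reappear as minima of the 2nd derivative.
x = np.linspace(1600.0, 1700.0, 2001)          # wavenumber axis, cm-1
band = lambda c, w: np.exp(-((x - c) / w) ** 2)
spectrum = band(1635.0, 8.0) + 0.8 * band(1655.0, 8.0)

d2 = np.gradient(np.gradient(spectrum, x), x)  # numerical 2nd derivative
# local negative minima of d2 mark (possibly hidden) peak centers
interior = (d2[1:-1] < d2[:-2]) & (d2[1:-1] < d2[2:]) & (d2[1:-1] < 0)
idx = np.where(interior)[0] + 1
print([round(v) for v in x[idx]])              # two centers, near 1635 and 1655
```

In the paper's workflow these second-derivative minima seed the Voigt curve fit, which then separates and quantifies the components.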

Journal ArticleDOI
TL;DR: Lattice structures have many outstanding properties compared with foams and honeycombs, such as light weight, high strength, energy absorption, and vibration reduction, and have therefore been extensively studied, as reviewed in this paper.
Abstract: Cellular structures consist of foams, honeycombs, and lattices. Lattices have many outstanding properties compared with foams and honeycombs, such as light weight, high strength, energy absorption, and vibration reduction, and have been extensively studied. Because of these excellent properties, lattice structures have been widely used in aviation, bio-engineering, automation, and other industrial fields. In particular, the application of additive manufacturing (AM) technology to fabricating lattice structures has pushed the design of lattice structures to a new stage and enabled breakthrough progress. Based on an extensive survey of the research literature, this paper reviews lattice structures. First, building on the introductions of lattices in the literature, the definition and classification of lattice structures are summarized; lattice structures are divided into two general categories in this paper: uniform and non-uniform. Second, the performance and applications of lattice structures are introduced in detail, and the fabrication methods for lattice structures, i.e., traditional processing and additive manufacturing, are evaluated. Third, for uniform lattice structures, the main design concern is developing highly functional unit cells, which this paper summarizes as three different methods: geometric unit cell based, mathematical algorithm generated, and topology optimization. Fourth, non-uniform lattice structures are reviewed from the two aspects of gradient and topology optimization. These methods include Voronoi tessellation, the size gradient method (SGM), size matching and scaling (SMS), and homogenization, optimization, and construction (HOC). Finally, the future development of lattice structures is discussed from different aspects.

Journal ArticleDOI
TL;DR: This paper aims to provide a complete and critical review of recent applications of AI techniques, focusing particularly on machine learning (ML), deep learning (DL), and hybrid methods, as these branches of AI are becoming increasingly attractive.
Abstract: Forecasting is a crucial task for successfully integrating photovoltaic (PV) output power into the grid. The design of accurate photovoltaic output forecasters remains a challenging issue, particularly for multistep-ahead prediction. Accurate PV output power forecasting is critical in a number of applications, such as micro-grids (MGs), energy optimization and management, PV integrated in smart buildings, and electric vehicle charging. Over the last decade, a vast literature has been produced on this topic, investigating numerical and probabilistic methods, physical models, and artificial intelligence (AI) techniques. This paper aims to provide a complete and critical review of recent applications of AI techniques; we focus particularly on machine learning (ML), deep learning (DL), and hybrid methods, as these branches of AI are becoming increasingly attractive. Special attention is paid to recent developments in the application of DL, as well as to future trends in this topic.
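One of the simplest ML baselines for the multistep-ahead prediction problem described above is a recursive autoregression: fit a linear model on lagged values, then feed each prediction back in as input for the next step. The sketch below uses a synthetic clipped-sine "PV" profile, purely illustrative, not any method from the reviewed literature.

```python
import numpy as np

# Minimal sketch of a machine-learning PV power forecaster: an autoregressive
# linear model fit by ordinary least squares, applied recursively for
# multistep-ahead prediction.

def fit_ar(series, order):
    """Fit y[t] = w . [y[t-order], ..., y[t-1]] by least squares."""
    X = np.array([series[t - order:t] for t in range(order, len(series))])
    y = np.array(series[order:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def forecast(series, w, steps):
    """Recursive multistep-ahead forecast: each prediction feeds the next."""
    hist = list(series)
    order = len(w)
    out = []
    for _ in range(steps):
        nxt = float(np.dot(w, hist[-order:]))
        out.append(nxt)
        hist.append(nxt)
    return out

# Illustrative hourly PV profile: a clipped sine repeated over four days.
t = np.arange(0, 4 * 24)
pv = np.clip(np.sin(2 * np.pi * t / 24), 0, None)
w = fit_ar(list(pv), order=24)
preds = forecast(list(pv), w, steps=24)  # one day ahead, hour by hour
```

Because the toy series is exactly periodic with the chosen lag order, the recursive forecast reproduces the next day's profile; real PV data would of course require richer models and exogenous inputs (irradiance, temperature), which is precisely where the ML and DL methods surveyed above come in.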

Journal ArticleDOI
TL;DR: A review of the literature on AUVs, teams of AUVs designed to work within a common mission, and collaborative AUV teams and missions is presented, with the aim of analyzing their applicability, advantages, and limitations.
Abstract: The development of Autonomous Underwater Vehicles (AUVs) has permitted the automation of many tasks originally performed by manned vehicles in underwater environments. Teams of AUVs designed to work within a common mission are opening the possibilities for new and more complex applications. In underwater environments, communication, localization, and navigation of AUVs are considered challenges due to the impossibility of relying on radio communications and global positioning systems. For a long time, acoustic systems have been the main approach for solving these challenges. However, they present their own shortcomings, which are more relevant for AUV teams. As a result, researchers have explored different alternatives. To summarize and analyze these alternatives, a review of the literature is presented in this paper. Finally, a summary of collaborative AUV teams and missions is also included, with the aim of analyzing their applicability, advantages, and limitations.

Journal ArticleDOI
TL;DR: Building on a 2018 systematic review, this paper finds that consumers mostly identified animal- and environment-related benefits of cultured meat, but there is plenty of potential to highlight personal benefits such as health and food safety.
Abstract: Cultured meat is one of a number of alternative proteins which can help to reduce the demand for meat from animals in the future. As cultured meat nears commercialization, research on consumers’ perceptions of the technology has proliferated. We build on our 2018 systematic review to identify 26 empirical studies on consumer acceptance of cultured meat published in peer-reviewed journals since then. We find support for many of the findings of our previous review, as well as novel insights into the market for cultured meat. We find evidence of a substantial market for cultured meat in many countries, as well as markets and demographics which are particularly open to the concept. Consumers mostly identified animal- and environment-related benefits, but there is plenty of potential to highlight personal benefits such as health and food safety. The safety of cultured meat and its nutritional qualities are intuitively seen as risks by some consumers, although some recognize potential benefits in these areas. Evidence suggests that acceptance can be increased with positive information, as well as frames which invoke more positive associations. We conclude by arguing that cultured meat will form one part of a varied landscape of future protein sources, each appealing to different groups of consumers to achieve an overall reduction in conventional meat consumption. We acknowledge a range of pro-cultured meat messaging strategies, and suggest that framing cultured meat as a solution to existing food safety problems may be an effective approach to increase acceptance. In the long-term, objections based in neophobia and norm violation will decrease, and widespread acceptance will depend in large part on the price and taste.

Journal ArticleDOI
TL;DR: This paper proposes a deep learning-based approach for detecting fake images using a contrastive loss and demonstrates that the proposed method significantly outperforms other state-of-the-art fake image detectors.
Abstract: Generative adversarial networks (GANs) can be used to generate photo-realistic images from low-dimensional random noise. Such synthesized (fake) images with inappropriate content can be used on social media networks, which can cause severe problems. To successfully detect fake images, an effective and efficient image forgery detector is necessary. However, conventional image forgery detectors fail to recognize fake images generated by a GAN-based generator, since these images are synthesized rather than manipulated from a source image. Therefore, in this paper, we propose a deep learning-based approach for detecting fake images using a contrastive loss. First, several state-of-the-art GANs are employed to generate fake–real image pairs. Next, a reduced DenseNet is extended into a two-stream network structure to accept pairwise information as input. Then, the proposed common fake feature network is trained using pairwise learning to distinguish the features of fake images from those of real images. Finally, a classification layer is concatenated to the proposed common fake feature network to detect whether the input image is fake or real. The experimental results demonstrate that the proposed method significantly outperforms other state-of-the-art fake image detectors.
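The contrastive loss driving the pairwise learning described above has a standard form: pairs from the same class are pulled together in feature space, while fake–real pairs are pushed at least a margin apart. The sketch below shows that loss on illustrative stand-in feature vectors, not the paper's two-stream network outputs.

```python
import numpy as np

# Minimal sketch of the pairwise contrastive loss: same-class pairs incur a
# cost proportional to their squared distance (pulling them together), while
# different-class pairs incur a cost only when closer than `margin`
# (pushing them apart).

def contrastive_loss(f1, f2, same_pair, margin=1.0):
    """same_pair=1 for pairs from the same class (e.g., real-real),
    same_pair=0 for fake-real pairs."""
    d = np.linalg.norm(np.asarray(f1) - np.asarray(f2))
    if same_pair:
        return d ** 2                      # pull similar pairs together
    return max(0.0, margin - d) ** 2       # push dissimilar pairs apart

close_same = contrastive_loss([0.0, 0.0], [0.1, 0.0], same_pair=1)  # small
far_diff   = contrastive_loss([0.0, 0.0], [2.0, 0.0], same_pair=0)  # zero
near_diff  = contrastive_loss([0.0, 0.0], [0.5, 0.0], same_pair=0)  # penalized
```

Minimizing this loss over many pairs is what shapes the "common fake feature" space: fake images cluster away from real ones, after which a simple classification layer can separate them.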

Journal ArticleDOI
TL;DR: This work focused on fine-tuning, based on a comparison of the state-of-the-art architectures AlexNet, GoogleNet, Inception V3, Residual Network (ResNet) 18, and ResNet 50, and concluded that its high success rate makes the GoogleNet model a useful tool for helping farmers identify and protect tomatoes from the diseases mentioned.
Abstract: Tomato plants are highly affected by diverse diseases. A timely and accurate diagnosis plays an important role in preserving the quality of crops. Recently, deep learning (DL), and specifically convolutional neural networks (CNNs), have achieved extraordinary results in many applications, including the classification of plant diseases. This work focused on fine-tuning, based on a comparison of the state-of-the-art architectures AlexNet, GoogleNet, Inception V3, Residual Network (ResNet) 18, and ResNet 50. An evaluation of the comparison was then performed. The dataset used for the experiments comprises nine different classes of tomato diseases and a healthy class from PlantVillage. The models were evaluated through a multiclass statistical analysis based on accuracy, precision, sensitivity, specificity, F-score, area under the curve (AUC), and the receiver operating characteristic (ROC) curve. The results show significant values obtained by the GoogleNet technique, with 99.72% AUC and 99.12% sensitivity. It is possible to conclude that this high success rate makes the GoogleNet model a useful tool for helping farmers identify and protect tomatoes from the diseases mentioned.
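The multiclass metrics used in the comparison above (precision, sensitivity, specificity, F-score) can all be derived one-vs-rest from a confusion matrix. The sketch below uses a small hypothetical 3-class matrix, not the paper's tomato-disease results.

```python
import numpy as np

# Minimal sketch of per-class (one-vs-rest) evaluation metrics computed from
# a confusion matrix: rows are true classes, columns are predicted classes.

def per_class_metrics(cm, cls):
    cm = np.asarray(cm, dtype=float)
    tp = cm[cls, cls]                      # correctly predicted as `cls`
    fn = cm[cls].sum() - tp                # `cls` predicted as something else
    fp = cm[:, cls].sum() - tp             # others predicted as `cls`
    tn = cm.sum() - tp - fn - fp           # everything else
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)           # recall
    specificity = tn / (tn + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, specificity, f_score

# Hypothetical 3-class confusion matrix (e.g., two diseases + healthy).
cm = [[8, 1, 1],
      [0, 9, 1],
      [1, 0, 9]]
p, sens, spec, f1 = per_class_metrics(cm, cls=0)
```

Averaging these per-class values (macro or weighted) yields the multiclass summary figures of the kind reported in the comparison, and the sensitivity/specificity pair is what traces out each class's ROC curve.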

Journal ArticleDOI
TL;DR: Results show a high sensitivity in the identification of COVID-19, around 100%, and with a high degree of specificity, which indicates that it can be used as a screening test.
Abstract: The spread of the SARS-CoV-2 virus has made the COVID-19 disease a worldwide epidemic. The most common tests to identify COVID-19 are invasive, time consuming, and limited in resources. Imaging is a non-invasive technique to identify whether individuals have symptoms of disease in their lungs. However, the diagnosis by this method needs to be made by a specialist doctor, which limits the mass diagnosis of the population. Image processing tools to support diagnosis reduce the load by ruling out negative cases. Advanced artificial intelligence techniques such as Deep Learning have shown high effectiveness in identifying patterns such as those that can be found in diseased tissue. This study analyzes the effectiveness of a VGG16-based Deep Learning model for the identification of pneumonia and COVID-19 using torso radiographs. Results show a high sensitivity in the identification of COVID-19, around 100%, with a high degree of specificity, which indicates that the model can be used as a screening test. AUCs on ROC curves are greater than 0.9 for all classes considered.

Journal ArticleDOI
TL;DR: A deep convolutional neural network (DCNN) model is designed that integrates three ideas, traditional and parallel convolutional layers, residual connections, and global average pooling; transfer learning from the same domain as the target dataset can significantly improve performance, even with a reduced number of training images.
Abstract: One of the main challenges of employing deep learning models in the field of medicine is a lack of training data, due to the difficulty of collecting and labeling data, which must be done by experts. To overcome this drawback, transfer learning (TL) has been utilized to solve several medical imaging tasks using state-of-the-art models pre-trained on the ImageNet dataset. However, there are primary divergences in data features, sizes, and task characteristics between natural image classification and the targeted medical imaging tasks. Therefore, TL can only slightly improve performance if the source domain is completely different from the target domain. In this paper, we explore the benefit of TL from the same domain as, and from different domains than, the target task. To do so, we designed a deep convolutional neural network (DCNN) model that integrates three ideas: traditional and parallel convolutional layers, and residual connections, along with global average pooling. We trained the proposed model under several scenarios, utilizing same- and different-domain TL with the diabetic foot ulcer (DFU) classification task and with an animal classification task. We have empirically shown that TL from the same domain as the target dataset can significantly improve performance, even with a reduced number of images. The proposed model achieved an F1-score of 86.6% on the DFU dataset when trained from scratch, 89.4% with TL from a different domain than the target dataset, and 97.6% with TL from the same domain as the target dataset.
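The core mechanic of transfer learning discussed above, reusing a frozen feature extractor trained elsewhere and fitting only a new classification head on the target task, can be sketched minimally. Here a fixed random projection stands in for a pretrained network, and a toy two-cluster dataset stands in for the target task; neither is the paper's DCNN or DFU data.

```python
import numpy as np

# Minimal sketch of transfer learning: a frozen "pretrained" feature
# extractor (fixed random projection + ReLU, a stand-in for a network trained
# on the source domain) followed by a small trainable classification head.

rng = np.random.default_rng(0)

def frozen_extractor(x, W):
    """Frozen feature extractor: the weights W are never updated."""
    return np.maximum(x @ W, 0.0)  # ReLU features

def train_head(feats, labels, lr=0.1, epochs=500):
    """Train only the classification head (logistic regression) by
    gradient descent; the extractor stays fixed throughout."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-feats @ w))
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

# Toy target task: two well-separated clusters (stand-in for two classes).
x0 = rng.normal(-1.0, 0.2, size=(20, 4))
x1 = rng.normal(+1.0, 0.2, size=(20, 4))
X = np.vstack([x0, x1])
y = np.array([0] * 20 + [1] * 20)

W_frozen = rng.normal(size=(4, 8))     # "transferred" weights, kept fixed
feats = frozen_extractor(X, W_frozen)
w_head = train_head(feats, y)          # only the head is fit
acc = np.mean((1 / (1 + np.exp(-feats @ w_head)) > 0.5) == y)
```

The paper's point about domain similarity maps onto this sketch directly: the closer the frozen extractor's training distribution is to the target data, the more informative `feats` is, and the less labeled target data the head needs.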

Journal ArticleDOI
TL;DR: This review surveys intensive research to derive a comprehensive framework for mixed reality (MR) applications, and introduces MR development steps and analytical models, a simulation toolkit, system types, and architecture types, in addition to practical issues for stakeholders, such as considering different MR domains.
Abstract: Currently, new technologies have enabled the design of smart applications that are used as decision-making tools in the problems of daily life. The key issue in designing such an application is the increasing level of user interaction. Mixed reality (MR) is an emerging technology that deals with maximum user interaction in the real world compared to other similar technologies. Developing an MR application is complicated, and depends on the different components that have been addressed in previous literature. In addition to the extraction of such components, a comprehensive study that presents a generic framework comprising all components required to develop MR applications needs to be performed. This review studies intensive research to obtain a comprehensive framework for MR applications. The suggested framework comprises five layers: the first layer considers system components; the second and third layers focus on architectural issues for component integration; the fourth layer is the application layer that executes the architecture; and the fifth layer is the user interface layer that enables user interaction. The merits of this study are as follows: this review can act as a proper resource for MR basic concepts, and it introduces MR development steps and analytical models, a simulation toolkit, system types, and architecture types, in addition to practical issues for stakeholders such as considering MR different domains.

Journal ArticleDOI
TL;DR: A decade of research work conducted between 2010 and November 2020 was surveyed to present a fundamental understanding of the intelligent techniques used for the prediction of student performance, where academic success is strictly measured using student learning outcomes as discussed by the authors.
Abstract: The prediction of student academic performance has drawn considerable attention in education. However, although the learning outcomes are believed to improve learning and teaching, prognosticating the attainment of student outcomes remains underexplored. A decade of research work conducted between 2010 and November 2020 was surveyed to present a fundamental understanding of the intelligent techniques used for the prediction of student performance, where academic success is strictly measured using student learning outcomes. The electronic bibliographic databases searched include ACM, IEEE Xplore, Google Scholar, Science Direct, Scopus, Springer, and Web of Science. Eventually, we synthesized and analyzed a total of 62 relevant papers with a focus on three perspectives, (1) the forms in which the learning outcomes are predicted, (2) the predictive analytics models developed to forecast student learning, and (3) the dominant factors impacting student outcomes. The best practices for conducting systematic literature reviews, e.g., PICO and PRISMA, were applied to synthesize and report the main results. The attainment of learning outcomes was measured mainly as performance class standings (i.e., ranks) and achievement scores (i.e., grades). Regression and supervised machine learning models were frequently employed to classify student performance. Finally, student online learning activities, term assessment grades, and student academic emotions were the most evident predictors of learning outcomes. We conclude the survey by highlighting some major research challenges and suggesting a summary of significant recommendations to motivate future works in this field.

Journal ArticleDOI
TL;DR: Almost 70 papers were analyzed to show different modern techniques widely applied for predicting students’ performance, together with the objectives they must reach in this field of Artificial Intelligence.
Abstract: Predicting students’ performance is one of the most important topics for learning contexts such as schools and universities, since it helps to design effective mechanisms that improve academic results and avoid dropout, among other things. These are benefited by the automation of many processes involved in usual students’ activities which handle massive volumes of data collected from software tools for technology-enhanced learning. Thus, analyzing and processing these data carefully can give us useful information about the students’ knowledge and the relationship between them and the academic tasks. This information is the source that feeds promising algorithms and methods able to predict students’ performance. In this study, almost 70 papers were analyzed to show different modern techniques widely applied for predicting students’ performance, together with the objectives they must reach in this field. These techniques and methods, which pertain to the area of Artificial Intelligence, are mainly Machine Learning, Collaborative Filtering, Recommender Systems, and Artificial Neural Networks, among others.
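Of the techniques listed above, a nearest-neighbour classifier is among the simplest to illustrate for student-performance prediction. The sketch below predicts a pass/fail class standing from two illustrative features (online activity hours and a mid-term grade); the data are synthetic, purely to show the technique, not results from the surveyed papers.

```python
# Minimal sketch of supervised student-performance prediction with
# k-nearest neighbours: a query student is classified by majority vote
# among the k most similar students in the training data.

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points
    (squared Euclidean distance over the feature tuples)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train, labels)
    )
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical features: (online activity hours, mid-term grade).
students = [(2, 40), (3, 45), (1, 35), (8, 70), (9, 80), (7, 75)]
outcomes = ["fail", "fail", "fail", "pass", "pass", "pass"]

prediction = knn_predict(students, outcomes, query=(8, 72))
```

In practice the surveyed systems use far richer features (activity logs, assessment grades, emotions) and stronger models, but the workflow, featurize student data, train a supervised model, predict the outcome class, is the same.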

Journal ArticleDOI
TL;DR: Investigating the accuracy of a variety of time series modeling approaches for coronavirus outbreak detection in ten different countries with the highest number of confirmed cases demonstrates that machine learning time series methods can learn and scale to accurately estimate the percentage of the total population that will become affected in the future.
Abstract: The ongoing COVID-19 pandemic has caused worldwide socioeconomic unrest, forcing governments to introduce extreme measures to reduce its spread. Being able to accurately forecast when the outbreak will hit its peak would significantly diminish the impact of the disease, as it would allow governments to alter their policy accordingly and plan ahead for the preventive steps needed such as public health messaging, raising awareness of citizens and increasing the capacity of the health system. This study investigated the accuracy of a variety of time series modeling approaches for coronavirus outbreak detection in ten different countries with the highest number of confirmed cases as of 4 May 2020. For each of these countries, six different time series approaches were developed and compared using two publicly available datasets regarding the progression of the virus in each country and the population of each country, respectively. The results demonstrate that, given data produced using actual testing for a small portion of the population, machine learning time series methods can learn and scale to accurately estimate the percentage of the total population that will become affected in the future.
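The abstract does not specify which six time-series approaches were compared, but one common simple choice for this kind of trending cumulative series is Holt's linear-trend exponential smoothing, sketched below. The case counts and population are synthetic, not data from any country.

```python
# Minimal sketch of one simple time-series approach for forecasting the
# cumulative share of a population affected: Holt's linear-trend (double)
# exponential smoothing, which smooths both the level and the trend of the
# series and then extrapolates the trend forward.

def holt_forecast(series, steps, alpha=0.8, beta=0.5):
    """Double exponential smoothing: update level and trend over the
    observed series, then project `steps` values ahead."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(steps)]

population = 1_000_000                     # hypothetical country population
cases = [100, 220, 360, 520, 700, 900]     # synthetic cumulative cases
ahead = holt_forecast(cases, steps=3)
pct_affected = [100 * c / population for c in ahead]
```

Dividing the forecast counts by the population gives the quantity the study targets: the estimated percentage of the total population that will become affected. The ML time-series methods the study found to scale well replace this hand-rolled smoother with learned models, but consume the same two inputs, case progression and population.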