scispace - formally typeset

Showing papers on "Cloud computing published in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.
Abstract: Autonomic computing investigates how systems can achieve (user) specified control outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed and open-loop systems. In practice, complex systems may exhibit a number of concurrent and inter-dependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to a resource ensemble (e.g., multiple resources within a data center), research into integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale continues to be a fundamental challenge. The integration of AI/ML to achieve such autonomic and self-management of systems can be achieved at different levels of granularity, from full to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing join to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next generation computing for emerging computing paradigms, including cloud, fog, edge, serverless and quantum computing environments.

161 citations


Journal ArticleDOI
16 Mar 2022-iMeta
TL;DR: In this paper, the authors present a platform consisting of three modules: pre-configured bioinformatic pipelines, cloud toolsets, and online omics courses. The pipelines combine analytic tools for metagenomics, genomics, transcriptomics, proteomics, and metabolomics.
Abstract: The platform consists of three modules: pre-configured bioinformatic pipelines, cloud toolsets, and online omics courses. The pre-configured bioinformatic pipelines not only combine analytic tools for metagenomics, genomics, transcriptomics, proteomics, and metabolomics, but also provide users with powerful and convenient interactive analysis reports, which allow them to analyze and mine data independently. As a useful supplement to the bioinformatics pipelines, a wide range of cloud toolsets can further meet users' needs for daily biological data processing, statistics, and visualization. The rich online multi-omics courses also offer researchers a state-of-the-art platform for interactive communication and knowledge sharing.

121 citations


Journal ArticleDOI
TL;DR: An optimal approach to anonymization using small data is proposed in this study; the suggested method is shown to always finish ahead of the existing method, using the least amount of time while ensuring the greatest level of security.
Abstract: An optimal approach to anonymization using small data is proposed in this study. MapReduce is a big data processing framework used across distributed applications. Before entering the MapReduce framework, data are distributed and clustered using a hybrid clustering algorithm that combines the k-means and MFCM clustering algorithms to group similar records. The clustered data are then fed into the MapReduce framework. To guarantee privacy, an optimal k-anonymization method is recommended. K-anonymity can be enforced through generalisation or randomization, depending on the type of the quasi-identifier attribute. Our method replaces the standard k-anonymization process with an optimization algorithm, the Modified Grey Wolf Optimization (MGWO) algorithm, that dynamically determines the optimal k value. Memory, execution time, accuracy, and error value are used to assess the recommended method's performance. Experiments show that the suggested method always finishes ahead of the existing method, using the least time while ensuring the greatest level of security, and achieves the maximum accuracy compared to the existing technique. The solution is implemented in Java with Hadoop MapReduce, and it is tested and deployed in the cloud on Google Cloud Platform.
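As a toy illustration of the k-anonymity property that the proposed anonymization optimizes (the records, quasi-identifiers, and generalized values below are hypothetical, not drawn from the paper's dataset), a minimal sketch in Python:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Check whether every combination of quasi-identifier values
    appears at least k times in the dataset (the k-anonymity property)."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

# Toy records with generalized quasi-identifiers (age range, zip prefix).
records = [
    {"age": "20-30", "zip": "560**", "diagnosis": "flu"},
    {"age": "20-30", "zip": "560**", "diagnosis": "cold"},
    {"age": "30-40", "zip": "561**", "diagnosis": "flu"},
    {"age": "30-40", "zip": "561**", "diagnosis": "asthma"},
]
print(is_k_anonymous(records, ["age", "zip"], k=2))  # True
```

Choosing a larger k strengthens privacy but forces coarser generalization, which is the trade-off the MGWO search tunes.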

110 citations


Journal ArticleDOI
01 Jan 2023
TL;DR: In this paper, an innovation in the development of mobile radio models for dual-band transceivers in wireless cellular communication is proposed, based on packet voice data transmission called push-to-talk.
Abstract: A modern telephone can only be used as a dual-band transceiver if it is Internet-capable; Internet access is an indispensable condition. Modern cell phones are no longer limited to their intended purpose of making calls: because an operating system comes preinstalled on these devices, their list of capabilities can be expanded almost indefinitely, so a cell phone can even be turned into a full-fledged dual-band transceiver. In this paper, an innovation in the development of mobile radio models for dual-band transceivers in wireless cellular communication is proposed. For the dual-band transceiver in the phone to work, an Internet connection is needed. Progress in mobile network technologies does not stand still, and each new standard and technology opens up new opportunities for end subscribers. The approach is based on packet voice data transmission called push-to-talk.

108 citations


Journal ArticleDOI
TL;DR: A service offloading (SOL) method based on deep reinforcement learning is proposed for DT-empowered IoV in edge computing. SOL leverages a deep Q-network (DQN), which combines the value-function approximation of deep learning with reinforcement learning.
Abstract: With the potential of implementing computing-intensive applications, edge computing is combined with digital twinning (DT)-empowered Internet of vehicles (IoV) to enhance intelligent transportation capabilities. By updating digital twins of vehicles and offloading services to edge computing devices (ECDs), the insufficiency in vehicles’ computational resources can be complemented. However, owing to the computational intensity of DT-empowered IoV, ECD would overload under excessive service requests, which deteriorates the quality of service (QoS). To address this problem, in this article, a multiuser offloading system is analyzed, where the QoS is reflected through the response time of services. Then, a service offloading (SOL) method with deep reinforcement learning, is proposed for DT-empowered IoV in edge computing. To obtain optimized offloading decisions, SOL leverages deep Q-network (DQN), which combines the value function approximation of deep learning and reinforcement learning. Eventually, experiments with comparative methods indicate that SOL is effective and adaptable in diverse environments.
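The DQN decision step the abstract describes ultimately reduces to epsilon-greedy action selection over estimated Q-values; a minimal sketch, assuming a toy action space where action 0 is local execution and the rest are candidate ECDs (the values and epsilon are illustrative, not from the paper):

```python
import random

def choose_offload_target(q_values, epsilon=0.1, rng=random.Random(0)):
    """Epsilon-greedy action selection as used in DQN:
    explore with probability epsilon, otherwise pick argmax Q."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Q-values for actions: 0 = local execution, 1..3 = offload to ECD 1..3.
q = [0.2, 0.8, 0.5, 0.1]
print(choose_offload_target(q, epsilon=0.0))  # 1 (greedy argmax)
```

In the full method the Q-values come from a trained neural network evaluated on the system state (for example ECD load and service size) rather than a fixed list.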

107 citations



Journal ArticleDOI
TL;DR: An exhaustive survey about utilizing AI in edge service optimization in IoV is conducted and a number of open issues in optimizing edge services with AI are discussed.

103 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine the relationship between ICT, education, and environmental quality, controlling for the roles of globalization, income, and financial development, for developing countries over the period 1996-2019.

95 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed edge-computing-based video pre-processing to eliminate redundant frames, migrating part or all of the video processing task to the edge, thereby reducing the computing, storage, and network bandwidth requirements of the cloud center and enhancing the effectiveness of video analysis.

91 citations


Journal ArticleDOI
TL;DR: In this article, the concept and design of cloud-based smart BMSs are reviewed, and some perspectives on their functionality and usability, as well as their benefits for future battery applications, are discussed.
Abstract: Energy storage plays an important role in the adoption of renewable energy to help solve climate change problems. Lithium-ion batteries (LIBs) are an excellent solution for energy storage due to their properties. In order to ensure the safety and efficient operation of LIB systems, battery management systems (BMSs) are required. The current design and functionality of BMSs suffer from a few critical drawbacks including low computational capability and limited data storage. Recently, there has been some effort in researching and developing smart BMSs utilizing the cloud platform. A cloud-based BMS would be able to solve the problems of computational capability and data storage in the current BMSs. It would also lead to more accurate and reliable battery algorithms and allow the development of other complex BMS functions. This study reviews the concept and design of cloud-based smart BMSs and provides some perspectives on their functionality and usability as well as their benefits for future battery applications. The potential division between the local and cloud functions of smart BMSs is also discussed. Cloud-based smart BMSs are expected to improve the reliability and overall performance of LIB systems, contributing to the mass adoption of renewable energy.

91 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a blockchain-empowered security and privacy protection scheme with traceable and direct revocation for COVID-19 medical records, which uses the blockchain for uniform identity authentication and stores all public keys, revocation lists, etc. on a blockchain.
Abstract: COVID-19 is currently a major global public health challenge. In the battle against the outbreak of COVID-19, how to manage and share COVID-19 Electronic Medical Records (CEMRs) safely and effectively around the world, prevent malicious users from tampering with CEMRs, and protect the privacy of patients are very worthy of attention. In particular, the semi-trusted medical cloud platform has become the primary means of hospital medical data management and information services, so its security and privacy issues are especially prominent and should be addressed with priority. To address these issues, on the basis of ciphertext-policy attribute-based encryption, we propose a blockchain-empowered security and privacy protection scheme with traceable and direct revocation for COVID-19 medical records. In this scheme, we use the blockchain for uniform identity authentication, and all public keys, revocation lists, etc., are stored on a blockchain. The system manager server is responsible for generating the system parameters and publishes the private keys for the COVID-19 medical practitioners and users. The cloud service provider (CSP) stores the CEMRs and generates the intermediate decryption parameters using policy matching. A user can calculate the decryption key if the user holds the private keys and intermediate decryption parameters. Only when the user's attributes satisfy the access policy and the user's identity is not on the revocation list can the user obtain the intermediate parameters from the CSP. Malicious users can be traced via the tracking list and directly revoked. The security analysis demonstrates that the proposed scheme is secure under the Decisional Bilinear Diffie-Hellman (DBDH) assumption and can resist many attacks. The simulation experiments demonstrate that the communication and storage overhead is lower than that of other schemes in the public-private key generation, CEMR encryption, and decryption stages. Besides, we also verify that the proposed scheme works well on the blockchain in terms of both throughput and delay.
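The scheme's decryption gate, that attributes must satisfy the access policy and the identity must not be revoked, can be caricatured without any cryptography as a simple predicate; a hedged sketch with hypothetical attributes and identifiers (the real scheme enforces this with CP-ABE and bilinear pairings, not a set check):

```python
def can_decrypt(user_attrs, policy, user_id, revocation_list):
    """Gate that mirrors the scheme's two conditions: the user's
    attribute set must satisfy the (AND-only, for simplicity) access
    policy, and the identity must not be on the revocation list."""
    return policy.issubset(user_attrs) and user_id not in revocation_list

attrs = {"doctor", "covid-ward"}
policy = {"doctor", "covid-ward"}
print(can_decrypt(attrs, policy, "u42", revocation_list={"u13"}))  # True
print(can_decrypt(attrs, policy, "u13", revocation_list={"u13"}))  # False
```

Real CP-ABE policies also support OR and threshold gates, and the check is performed cryptographically so the CSP never learns the plaintext.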

Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic study of modern blockchain-based solutions for securing medical data with or without cloud computing, implementing and evaluating the different blockchain-based methods.
Abstract: Since the last decade, cloud-based electronic health records (EHRs) have gained significant attention for enabling remote patient monitoring. The recent development of Healthcare 4.0, which uses Internet of Things (IoT) components and cloud computing to access medical operations remotely, has gained researchers' attention from a smart city perspective. Healthcare 4.0 mainly consists of periodic medical data sensing, aggregation, data transmission, data sharing, and data storage. The sensitive and personal data of patients lead to several challenges in protecting it from hackers. Therefore, storing, accessing, and sharing patient medical information on the cloud needs security attention so that data are not compromised by any component of the E-healthcare system. To achieve secure medical data storage, sharing, and access via a cloud service provider, several cryptography algorithms have been designed. However, such conventional solutions fail to achieve a trade-off among the requirements of EHR security solutions such as computational efficiency, server-side verification, user-side verification, independence from a trusted third party, and strong security. Blockchain-based security solutions have gained significant attention in the recent past due to their ability to provide strong security for data storage and sharing with minimum computational effort. Blockchain first drew researchers' focus through Bitcoin technology, and utilizing blockchain to secure healthcare records management has been of recent interest. This paper presents a systematic study of modern blockchain-based solutions for securing medical data with or without cloud computing, and we implement and evaluate the different blockchain-based methods. The research gaps, challenges, and future roadmap identified in this study are outcomes that can boost the emerging Healthcare 4.0 technology.

Journal ArticleDOI
TL;DR: In this paper, a novel energy-efficient offloading strategy based on a Self-Adaptive Particle Swarm Optimization algorithm using the Genetic Algorithm operators (SPSO-GA) is proposed.
Abstract: Deep Neural Networks (DNNs) have become an essential and important supporting technology for smart Internet-of-Things (IoT) systems. Due to the high computational costs of large-scale DNNs, it might be infeasible to directly deploy them in energy-constrained IoT devices. Through offloading computation-intensive tasks to the cloud or edges, the computation offloading technology offers a feasible solution to execute DNNs. However, energy-efficient offloading for DNN based smart IoT systems with deadline constraints in the cloud-edge environments is still an open challenge. To address this challenge, we first design a new system energy consumption model, which takes into account the runtime, switching, and computing energy consumption of all participating servers (from both the cloud and edge) and IoT devices. Next, a novel energy-efficient offloading strategy based on a Self-adaptive Particle Swarm Optimization algorithm using the Genetic Algorithm operators (SPSO-GA) is proposed. This new strategy can efficiently make offloading decisions for DNN layers with layer partition operations, which can lessen the encoding dimension and improve the execution time of SPSO-GA. Simulation results demonstrate that the proposed strategy can significantly reduce energy consumption compared to other classic methods.
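The kind of system energy model the paper builds, runtime plus transmission costs across a device/server split of DNN layers, can be sketched in simplified form; the layer times, power figures, and single-server placement below are illustrative assumptions, not the paper's actual model:

```python
def total_energy(layers, placement, device_power, server_power,
                 tx_energy_per_mb):
    """Toy version of an offloading energy model: each DNN layer runs
    either on the IoT device (0) or a server (1); crossing the boundary
    between consecutive layers costs transmission energy for the
    intermediate feature map."""
    energy = 0.0
    for i, layer in enumerate(layers):
        power = device_power if placement[i] == 0 else server_power
        energy += power * layer["time_s"]
        if i > 0 and placement[i] != placement[i - 1]:
            energy += tx_energy_per_mb * layers[i - 1]["out_mb"]
    return energy

layers = [{"time_s": 0.2, "out_mb": 4.0}, {"time_s": 0.5, "out_mb": 1.0}]
print(total_energy(layers, [0, 1], device_power=2.0, server_power=5.0,
                   tx_energy_per_mb=0.3))  # 0.4 + 2.5 + 1.2 = 4.1
```

A search such as SPSO-GA then explores the space of placement vectors, subject to the deadline constraint, to minimize this kind of objective.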

Journal ArticleDOI
TL;DR: In this article, a model-free deep reinforcement learning-based distributed algorithm is proposed to minimize the expected long-term cost of task offloading in mobile edge computing systems, with each mobile device determining its offloading decision in a decentralized manner.
Abstract: In mobile edge computing systems, an edge node may have a high load when a large number of mobile devices offload their tasks to it. Those offloaded tasks may experience large processing delay or even be dropped when their deadlines expire. Due to the uncertain load dynamics at the edge nodes, it is challenging for each device to determine its offloading decision (i.e., whether to offload or not, and which edge node it should offload its task to) in a decentralized manner. In this work, we consider non-divisible and delay-sensitive tasks as well as edge load dynamics, and formulate a task offloading problem to minimize the expected long-term cost. We propose a model-free deep reinforcement learning-based distributed algorithm, where each device can determine its offloading decision without knowing the task models and offloading decision of other devices. To improve the estimation of the long-term cost in the algorithm, we incorporate the long short-term memory (LSTM), dueling deep Q-network (DQN), and double-DQN techniques. Simulation results show that our proposed algorithm can better exploit the processing capacities of the edge nodes and significantly reduce the ratio of dropped tasks and average delay when compared with several existing algorithms.
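One of the techniques the authors incorporate, double-DQN, changes only how the bootstrap target is computed: the online network selects the next action and the target network evaluates it. A minimal sketch with illustrative Q-values (the reward and discount below are hypothetical):

```python
def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99,
                      done=False):
    """Double-DQN bootstrap target: the online network selects the next
    action, the target network evaluates it, which reduces the
    overestimation bias of vanilla DQN."""
    if done:
        return reward
    best_a = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[best_a]

print(double_dqn_target(1.0, [0.1, 0.9], [0.5, 0.4], gamma=0.9))
```

Vanilla DQN would instead take max(next_q_target) directly, letting the same network both select and evaluate the action, which tends to overestimate values.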

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive review of big data analytics (BDA) for intelligent manufacturing systems, discussing associated topics such as the concept of big data, model-driven and data-driven methodologies, and the framework, development, key technologies, and applications of BDA.

Journal ArticleDOI
TL;DR: This qualitative phenomenological study explored IT professionals' perceptions regarding the integration of AI and Supervised-machine (S-machine) learning into cloud service platforms to enhance the cloud ERP system.
Abstract: Enterprise Resource Planning (ERP) systems are necessary to improve an enterprise's management performance. However, the perceptions of information technology (IT) professionals about the integration of artificial intelligence (AI) and machine learning with ERP cloud service platforms are unknown. Few studies have examined how leaders can implement AI for strategic management, and no study has qualitatively explored AI's integration in the cloud ERP system. This qualitative phenomenological study explored IT professionals' perceptions regarding the integration of AI and Supervised-machine (S-machine) learning into cloud service platforms to enhance the cloud ERP system. Two research questions were developed for this study: 1) What are the perceptions of IT professionals regarding the use of an AI model to integrate SaaS and ERP? and 2) What are the perceptions of IT professionals regarding how AI can be integrated in order to enhance the security of using an ERP cloud-based system? Through a hermeneutical lens and a focus on integrating the Application Programming Interface (API), purposive sampling was used to interview five AI experts, three Machine Learning (ML) experts, five Cybersecurity experts, and two Cloud Service Providers, who shared their lived experiences with AI and S-machine learning. Five main themes emerged: 1) use of an AI model to integrate SaaS and ERP helped perform work efficiently, 2) challenges for integrating AI into cloud service ERP and SaaS, 3) resources needed to fully implement AI into cloud-service ERP or SaaS, 4) best practices for developing and implementing an AI model for ERP and SaaS, and 5) how the security of an ERP cloud-based system is optimized by integrating AI. The culmination of these findings has positive implications for individuals and organizations seeking to improve management performance. While this study does not propose a new theory, it extends the current literature on the application of theories related to technology integration.

Journal ArticleDOI
TL;DR: The review primarily focuses on the recently used wireless data acquisition system and execution of AI resources for data prediction and data diagnosis in RCC buildings and bridges and indicates the lag in real-world execution of structural health monitoring technologies despite advances in academia.

Journal ArticleDOI
TL;DR: A prior-dependent graph (PDG) construction method is proposed that achieves substantial performance and can be deployed in an edge computing module to provide efficient solutions for massive data management and applications in AIoT.

Journal ArticleDOI
TL;DR: In this article, the authors propose a definition of the Metaverse in Medicine as the medical Internet of Things (MIoT) facilitated by AR and/or VR glasses. They argue that it is feasible to implement the three basic functions of the MIoT, namely comprehensive perception, reliable transmission, and intelligent processing, by applying a metaverse platform composed of AR and VR glasses and the medical IoT system, integrated with the technologies of holographic construction, holographic emulation, virtuality-reality integration, and virtual-reality interconnection.

Journal ArticleDOI
TL;DR: In this paper, the authors present an overview of the literature on the issues, difficulties, and potential applications of big data and cloud computing, covering benefits such as improved data processing capabilities, increased scalability, and cost reduction.
Abstract: Big Data and cloud computing integration has become a formidable strategy for businesses to unlock the potential of enormous and complicated data sets. With the scalability, flexibility, and cost-effectiveness that this combination provides, businesses are able to handle and analyse massive amounts of data in a distributed, as-needed way. But there are also issues and restrictions that need to be resolved with this integration. This overview of the literature focuses on the issues, difficulties, and potential applications of big data and cloud computing. It offers information on the advantages of this integration, such as improved data processing capabilities, increased scalability, and cost reduction. The difficulties with data migration, security, privacy, data governance, talent needs, vendor lock-in, and compliance are all discussed. Future research areas are also highlighted, such as enhanced analytics methods, edge computing integration, privacy-preserving data analysis, hybrid cloud architectures, and data governance.

Journal ArticleDOI
TL;DR: In this article, the authors introduce the cloud supply chain, a business model based on cloud-enabled networking of third-party physical and digital assets to design and manage a supply chain network.
Abstract: In this paper, we introduce cloud supply chain which is a business model based on cloud-enabled networking of some third-party physical and digital assets to design and manage a supply chain network. Cloud supply chain integrates concepts and technology of Industry 4.0 and digital platforms emerging in the “supply chain-as-a-service” paradigm. This paper conceptualizes the cloud supply chain as a new and distinct research area. Through analysis of practical cases, we deduce some generalized characteristics of the cloud supply chain. In the generalised model, we formalize supply chain multi-structural dynamics and dynamic service composition. Our results show that the main generalized characteristics of the cloud supply chain are related to (i) multi-structural dynamics; (ii) platforms, digital supply chains, ecosystems, and visibility, (iii) dynamic service composition with dynamically changing buyer/supplier roles, (iv) resilience and viability, and (v) intertwined supply networks and circular economy. We close by discussing future research directions including novel context of Industry 5.0.

Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: A systematic survey of the literature on the implementation of FL in EC environments with a taxonomy to identify advanced solutions and other open problems is provided to help researchers better understand the connection between FL and EC enabling technologies and concepts.
Abstract: Edge Computing (EC) is a new architecture that extends Cloud Computing (CC) services closer to data sources. EC combined with Deep Learning (DL) is a promising technology and is widely used in several applications. However, in conventional DL architectures with EC enabled, data producers must frequently send and share data with third parties, edge or cloud servers, to train their models. This architecture is often impractical due to the high bandwidth requirements, legalization, and privacy vulnerabilities. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating the problems of unwanted bandwidth loss, data privacy, and legalization. FL can co-train models across distributed clients, such as mobile phones, automobiles, hospitals, and more, through a centralized server, while maintaining data localization. FL can therefore be viewed as a stimulating factor in the EC paradigm as it enables collaborative learning and model optimization. Although the existing surveys have taken into account applications of FL in EC environments, there has not been any systematic survey discussing FL implementation and challenges in the EC paradigm. This paper aims to provide a systematic survey of the literature on the implementation of FL in EC environments with a taxonomy to identify advanced solutions and other open problems. In this survey, we review the fundamentals of EC and FL, then we review the existing related works in FL in EC. Furthermore, we describe the protocols, architecture, framework, and hardware requirements for FL implementation in the EC environment. Moreover, we discuss the applications, challenges, and related existing solutions in the edge FL. Finally, we detail two relevant case studies of applying FL in EC, and we identify open issues and potential directions for future research. We believe this survey will help researchers better understand the connection between FL and EC enabling technologies and concepts.
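The core FL aggregation step that the survey builds on is federated averaging (FedAvg): the server combines client model updates as a dataset-size-weighted mean without ever seeing raw data. A minimal sketch over flat weight vectors (the two-client setup and numbers are illustrative, not from the survey):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server aggregates client model
    parameters as a weighted mean (by local dataset size) without
    accessing any client's raw data."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

# Two edge clients with different amounts of local data.
w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
print(w)  # [2.5, 3.5]
```

In an edge deployment, this aggregation can run on an edge server rather than a distant cloud, which is exactly the synergy between FL and EC the survey examines.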

Journal ArticleDOI
TL;DR: In this article, the authors propose an effective confidential management solution on the cloud, whose basic idea is to deploy a trusted local server between the untrusted cloud and each trusted client of a medical information management system, responsible for running an EMR cloud hierarchical storage model and an EMR cloud segmentation query model.

Journal ArticleDOI
TL;DR: In this article, the authors present the key design requirements for enabling 6G through the use of a digital twin, along with architectural components and trends such as edge-based twins, cloud-based twins, and edge-cloud-based twins.
Abstract: Internet of Everything (IoE) applications such as haptics, human-computer interaction, and extended reality, using the sixth generation (6G) of wireless systems, have diverse requirements in terms of latency, reliability, data rate, and user-defined performance metrics. Therefore, enabling IoE applications over 6G requires a new framework that can be used to manage, operate, and optimize the 6G wireless system and its underlying IoE services. Such a new framework for 6G can be based on digital twins. Digital twins use a virtual representation of the 6G physical system along with the associated algorithms (e.g., machine learning, optimization), communication technologies (e.g., millimeter-wave and terahertz communication), computing systems (e.g., edge computing and cloud computing), as well as privacy and security-related technologies (e.g., blockchain). First, we present the key design requirements for enabling 6G through the use of a digital twin. Next, the architectural components and trends such as edge-based twins, cloud-based twins, and edge-cloud-based twins are presented. Furthermore, we provide a comparative description of various twins. Finally, we outline and recommend guidelines for several future research directions.

Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic literature review of 98 research papers on various digital supply chain twin dimensions with sustainable performance objectives and present a sustainable digital twin implementation framework for supply chains.

Journal ArticleDOI
TL;DR: In this article, a two-level resource allocation and incentive mechanism design problem is considered in the Hierarchical Federated Learning (HFL) framework, where cluster heads are designated to support the data owners through intermediate model aggregation.
Abstract: To enable the large scale and efficient deployment of Artificial Intelligence (AI), the confluence of AI and Edge Computing has given rise to Edge Intelligence, which leverages the computation and communication capabilities of end devices and edge servers to process data closer to where it is produced. One of the enabling technologies of Edge Intelligence is the privacy preserving machine learning paradigm known as Federated Learning (FL), which enables data owners to conduct model training without having to transmit their raw data to third-party servers. However, the FL network is envisioned to involve thousands of heterogeneous distributed devices. As a result, communication inefficiency remains a key bottleneck. To reduce node failures and device dropouts, the Hierarchical Federated Learning (HFL) framework has been proposed whereby cluster heads are designated to support the data owners through intermediate model aggregation. This decentralized learning approach reduces the reliance on a central controller, e.g., the model owner. However, the issues of resource allocation and incentive design are not well-studied in the HFL framework. In this article, we consider a two-level resource allocation and incentive mechanism design problem. In the lower level, the cluster heads offer rewards in exchange for the data owners' participation, and the data owners are free to choose which cluster to join. Specifically, we apply the evolutionary game theory to model the dynamics of the cluster selection process. In the upper level, each cluster head can choose to serve a model owner, whereas the model owners have to compete amongst each other for the services of the cluster heads. As such, we propose a deep learning based auction mechanism to derive the valuation of each cluster head's services.
The performance evaluation shows the uniqueness and stability of our proposed evolutionary game, as well as the revenue maximizing properties of the deep learning based auction.
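The evolutionary-game model of cluster selection can be sketched with discrete replicator dynamics, where the share of data owners joining a cluster grows when its payoff beats the population average; the two-cluster payoffs and step size below are illustrative assumptions, not the paper's utility functions:

```python
def replicator_step(shares, payoffs, step=0.1):
    """One discrete step of replicator dynamics: the population share
    in each cluster grows in proportion to how much the cluster's
    payoff exceeds the population-average payoff."""
    avg = sum(s * p for s, p in zip(shares, payoffs))
    new = [s + step * s * (p - avg) for s, p in zip(shares, payoffs)]
    total = sum(new)
    return [s / total for s in new]  # re-normalize to a distribution

# Two clusters; cluster 1 pays twice as much as cluster 0.
shares = [0.5, 0.5]
for _ in range(50):
    shares = replicator_step(shares, payoffs=[1.0, 2.0])
print(round(shares[1], 2))  # close to 1.0: owners migrate to cluster 1
```

With state-dependent payoffs (e.g., rewards diluted as a cluster fills up), the same iteration converges to the evolutionary equilibrium whose uniqueness and stability the paper analyzes.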

Journal ArticleDOI
TL;DR: A novel deep convolutional neural network based human activity recognition classifier is presented to enhance identification accuracy in electrocardiogram (ECG) patterns monitoring during daily activity.
Abstract: In the next-generation network architecture, the Cybertwin drives the sixth-generation (6G) cellular network to play an active role in many applications, such as healthcare and computer vision. Although the previous fifth-generation (5G) network provides the concepts of edge cloud and core cloud, the internal communication mechanism has not been explained with a specific application. This article introduces a possible Cybertwin-based multimodal network (beyond 5G) for electrocardiogram (ECG) pattern monitoring during daily activity. This network paradigm consists of a cloud-centric network and several Cybertwin communication ends. The Cybertwin nodes combine support for locator/identifier identification, data caching, behavior logging, and communication assistance in the edge cloud. The application focuses on monitoring ECG patterns during daily activity because few studies analyze them under different motions. We present a novel deep convolutional neural network-based human activity recognition classifier to enhance identification accuracy. The Cybertwin-based network provides healthcare monitoring value and potential clinical utility for observing ECG patterns.
