
Showing papers by "Xidian University" published in 2020


Journal ArticleDOI
TL;DR: It is inferred that combining the NIR-I/II spectral windows and suitable fluorescence probes might improve image-guided surgery in the clinic and help the fluorescence-guided surgical resection of liver tumours in patients.
Abstract: The second near-infrared wavelength window (NIR-II, 1,000–1,700 nm) enables fluorescence imaging of tissue with enhanced contrast at depths of millimetres and at micrometre-scale resolution. However, the lack of clinically viable NIR-II equipment has hindered the clinical translation of NIR-II imaging. Here, we describe an optical-imaging instrument that integrates a visible multispectral imaging system with the detection of NIR-II and NIR-I (700–900 nm in wavelength) fluorescence (by using the dye indocyanine green) for aiding the fluorescence-guided surgical resection of primary and metastatic liver tumours in 23 patients. We found that, compared with NIR-I imaging, intraoperative NIR-II imaging provided a higher tumour-detection sensitivity (100% versus 90.6%; with 95% confidence intervals of 89.1%–100% and 75.0%–98.0%, respectively), a higher tumour-to-normal-liver-tissue signal ratio (5.33 versus 1.45) and an enhanced tumour-detection rate (56.41% versus 46.15%). We infer that combining the NIR-I/II spectral windows and suitable fluorescence probes might improve image-guided surgery in the clinic. An optical-imaging instrument that integrates a visible multispectral imaging system with the detection of near-infrared fluorescence in the first and second windows aids the fluorescence-guided surgical resection of liver tumours in patients.

475 citations


Journal ArticleDOI
TL;DR: There is still significant room for development in macrophage-mediated immune modulation and macrophage-mediated drug delivery, which will further enhance current tumor therapies against various malignant solid tumors, including drug-resistant tumors and metastatic tumors.
Abstract: Macrophages play an important role in cancer development and metastasis. Proinflammatory M1 macrophages can phagocytose tumor cells, while anti-inflammatory M2 macrophages such as tumor-associated macrophages (TAMs) promote tumor growth and invasion. Modulating the tumor immune microenvironment through engineering macrophages is efficacious in tumor therapy. M1 macrophages target cancerous cells and, therefore, can be used as drug carriers for tumor therapy. Herein, the strategies to engineer macrophages for cancer immunotherapy, such as inhibition of macrophage recruitment, depletion of TAMs, reprograming of TAMs, and blocking of the CD47-SIRPα pathway, are discussed. Further, the recent advances in drug delivery using M1 macrophages, macrophage-derived exosomes, and macrophage-membrane-coated nanoparticles are elaborated. Overall, there is still significant room for development in macrophage-mediated immune modulation and macrophage-mediated drug delivery, which will further enhance current tumor therapies against various malignant solid tumors, including drug-resistant tumors and metastatic tumors.

330 citations


Journal ArticleDOI
TL;DR: This paper offers a detailed introduction to the background of data fusion and machine learning in terms of definitions, applications, architectures, processes, and typical techniques, and proposes a number of requirements to review and evaluate the performance of existing fusion methods based on machine learning.

309 citations


Journal ArticleDOI
TL;DR: This letter presents a power-efficient scheme to design the secure transmit power allocation and the surface reflecting phase shifts, and proposes an alternating optimization algorithm with semidefinite programming (SDP) relaxation to deal with the resulting non-convex problem.
Abstract: In this letter, we propose intelligent reflecting surface (IRS)-aided multi-antenna physical-layer security. We present a power-efficient scheme to design the secure transmit power allocation and the surface reflecting phase shifts. It aims to minimize the transmit power subject to the secrecy rate constraint at the legitimate user. Due to the non-convex nature of the formulated problem, we propose an alternating optimization algorithm with semidefinite programming (SDP) relaxation to deal with this issue. Also, the closed-form expression of the optimal secure beamformer is derived. Finally, simulation results are presented to validate the proposed algorithm and highlight the performance gains of the IRS in improving the secure transmission.
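To make the optimized quantities concrete, the following minimal NumPy sketch computes the effective IRS-assisted channels and the secrecy rate, and tunes the reflecting phases by simple coordinate descent with a maximum-ratio beamformer. It is an illustrative stand-in under assumed Rayleigh channels and unit noise power, not the SDP/alternating-optimization algorithm of the letter; all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 16          # transmit antennas, IRS elements
P = 1.0               # transmit power budget (noise power normalized to 1)

# Illustrative Rayleigh-fading channels (assumptions, not the letter's model)
h_bu = rng.normal(size=M) + 1j * rng.normal(size=M)            # BS -> legitimate user (direct)
h_be = rng.normal(size=M) + 1j * rng.normal(size=M)            # BS -> eavesdropper (direct)
G    = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))  # BS -> IRS
h_ru = rng.normal(size=N) + 1j * rng.normal(size=N)            # IRS -> user
h_re = rng.normal(size=N) + 1j * rng.normal(size=N)            # IRS -> eavesdropper

def secrecy_rate(theta):
    """Secrecy rate for given IRS phase shifts, with an MRT beamformer
    matched to the legitimate user's effective channel."""
    Phi = np.diag(np.exp(1j * theta))
    hu = h_bu + G.conj().T @ Phi @ h_ru       # effective user channel
    he = h_be + G.conj().T @ Phi @ h_re       # effective eavesdropper channel
    w = np.sqrt(P) * hu / np.linalg.norm(hu)  # maximum-ratio transmission
    r_u = np.log2(1 + np.abs(hu.conj() @ w) ** 2)
    r_e = np.log2(1 + np.abs(he.conj() @ w) ** 2)
    return max(r_u - r_e, 0.0)

# Coordinate descent over phases (a crude stand-in for the SDP/AO solution)
theta = np.zeros(N)
grid = np.linspace(0, 2 * np.pi, 32, endpoint=False)
for _ in range(5):
    for n in range(N):
        rates = [secrecy_rate(np.concatenate([theta[:n], [p], theta[n + 1:]])) for p in grid]
        theta[n] = grid[int(np.argmax(rates))]

print(f"secrecy rate after phase tuning: {secrecy_rate(theta):.3f} bit/s/Hz")
```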

257 citations


Proceedings ArticleDOI
01 Jul 2020
TL;DR: FLAT, as discussed by the authors, converts the lattice structure into a flat structure consisting of spans; each span corresponds to a character or latent word and its position in the original lattice, giving the model excellent parallelism.
Abstract: Recently, the character-word lattice structure has been proved to be effective for Chinese named entity recognition (NER) by incorporating the word information. However, since the lattice structure is complex and dynamic, the lattice-based models are hard to fully utilize the parallel computation of GPUs and usually have a low inference speed. In this paper, we propose FLAT: Flat-LAttice Transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans. Each span corresponds to a character or latent word and its position in the original lattice. With the power of Transformer and well-designed position encoding, FLAT can fully leverage the lattice information and has an excellent parallel ability. Experiments on four datasets show FLAT outperforms other lexicon-based models in performance and efficiency.
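The following minimal Python sketch illustrates the flat-lattice construction the abstract describes: characters become unit spans and lexicon-matched words become spans that keep their head/tail positions in the character sequence. It is illustrative only (the toy lexicon is hypothetical), and the real model additionally feeds these spans, with learned relative position encodings, into a Transformer.

```python
# Minimal sketch of FLAT-style flat-lattice construction (illustrative only).
# Each token is a (text, head, tail) span; characters are unit spans and
# lexicon-matched words keep the positions they occupy in the character sequence.

def build_flat_lattice(sentence, lexicon, max_word_len=4):
    spans = [(ch, i, i) for i, ch in enumerate(sentence)]        # character spans
    for i in range(len(sentence)):
        for j in range(i + 1, min(i + max_word_len, len(sentence)) + 1):
            word = sentence[i:j]
            if len(word) > 1 and word in lexicon:
                spans.append((word, i, j - 1))                   # latent-word span
    return spans

# Hypothetical example sentence with a toy lexicon
lexicon = {"重庆", "重庆人", "人和药店", "药店"}
for text, head, tail in build_flat_lattice("重庆人和药店", lexicon):
    print(f"{text}\thead={head}\ttail={tail}")
```

In the full model, the relative distances between span heads and tails drive the position encoding, so self-attention can recover the lattice topology while all spans are processed in parallel.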

253 citations


Journal ArticleDOI
TL;DR: This article develops an asynchronous advantage actor-critic-based cooperative computation offloading and resource allocation algorithm to solve the MDP problem and designs a multiobjective function to maximize the computation rate of MEC systems and the transaction throughput of blockchain systems.
Abstract: Mobile-edge computing (MEC) is a promising paradigm to improve the quality of computation experience of mobile devices because it allows mobile devices to offload computing tasks to MEC servers, benefiting from the powerful computing resources of MEC servers. However, the existing computation-offloading works still have some open issues: 1) security and privacy issues; 2) cooperative computation offloading; and 3) dynamic optimization. To address the security and privacy issues, we employ the blockchain technology that ensures the reliability and irreversibility of data in MEC systems. Meanwhile, we jointly design and optimize the performance of blockchain and MEC. In this article, we develop a cooperative computation offloading and resource allocation framework for blockchain-enabled MEC systems. In the framework, we design a multiobjective function to maximize the computation rate of MEC systems and the transaction throughput of blockchain systems by jointly optimizing the offloading decision, power allocation, block size, and block interval. Due to the dynamic characteristics of the wireless fading channel and the processing queues at MEC servers, the joint optimization is formulated as a Markov decision process (MDP). To tackle the dynamics and complexity of the blockchain-enabled MEC system, we develop an asynchronous advantage actor-critic-based cooperative computation offloading and resource allocation algorithm to solve the MDP problem. In the algorithm, deep neural networks are optimized by utilizing asynchronous gradient descent and eliminating the correlation of data. The simulation results show that the proposed algorithm converges fast and achieves significant performance improvements over existing schemes in terms of total reward.
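As a rough illustration of the actor-critic machinery, here is a compact synchronous advantage actor-critic update in PyTorch (a single worker is shown for brevity; the article's algorithm runs asynchronous workers against a global network). The state, action, and reward definitions are placeholders, not the paper's exact offloading formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 8, 4   # placeholders for (channel, queue) state and offloading actions

class ActorCritic(nn.Module):
    """Shared trunk with a policy head (offloading decision) and a value head."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
        self.policy = nn.Linear(64, N_ACTIONS)
        self.value = nn.Linear(64, 1)

    def forward(self, s):
        h = self.trunk(s)
        return F.log_softmax(self.policy(h), dim=-1), self.value(h).squeeze(-1)

net = ActorCritic()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def a2c_update(states, actions, returns):
    """One advantage actor-critic step on a rollout.
    states: [T, STATE_DIM], actions: [T], returns: [T] discounted rewards
    (the reward would combine MEC computation rate and blockchain throughput)."""
    log_probs, values = net(states)
    advantage = returns - values.detach()
    policy_loss = -(log_probs.gather(1, actions.unsqueeze(1)).squeeze(1) * advantage).mean()
    value_loss = F.mse_loss(values, returns)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    loss = policy_loss + 0.5 * value_loss - 0.01 * entropy
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy rollout to show the call signature (hypothetical data)
T = 16
loss = a2c_update(torch.randn(T, STATE_DIM),
                  torch.randint(0, N_ACTIONS, (T,)),
                  torch.randn(T))
```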

241 citations


Journal ArticleDOI
TL;DR: A software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs is proposed, where SDN is introduced to provide support for centralized network and vehicle information management.
Abstract: Recently, the rapid advance of vehicular networks has led to the emergence of diverse delay-sensitive vehicular applications such as automatic driving and auto navigation. Note that existing resource-constrained vehicles cannot adequately meet these demands for low/ultra-low latency. By offloading parts of the vehicles’ compute-intensive tasks to the edge servers in proximity, mobile edge computing is envisioned as a promising paradigm, giving rise to vehicular edge computing networks (VECNs). However, most existing works on task offloading in VECNs did not take the load balancing of the computation resources at the edge servers into account. To address these issues, and given the high dynamics of vehicular networks, we introduce fiber-wireless (FiWi) technology to enhance VECNs, owing to its advantages in centralized network management and support for multiple communication techniques. Aiming to minimize the processing delay of the vehicles’ computation tasks, we propose a software-defined networking (SDN) based load-balancing task offloading scheme in FiWi enhanced VECNs, where SDN is introduced to provide support for centralized network and vehicle information management. Extensive analysis and numerical results corroborate that our proposed load-balancing scheme achieves superior performance in processing delay reduction by utilizing the edge servers’ computation resources more efficiently.
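A toy sketch of the load-balancing idea (not the paper's FiWi/SDN scheme): each task is assigned to the edge server that currently yields the smallest estimated completion delay, accounting for transmission delay and queued work. All numbers and field names are illustrative assumptions.

```python
# Toy load-balancing offloading: assign each task to the edge server with the
# smallest estimated completion delay (transmission + queued backlog + execution).

def assign_tasks(tasks, servers):
    """tasks: list of dicts with 'cycles' (CPU cycles) and 'bits' (upload size).
    servers: list of dicts with 'cpu' (cycles/s), 'rate' (bit/s), 'queue' (s of backlog)."""
    plan = []
    for task in tasks:
        delays = [s["queue"] + task["bits"] / s["rate"] + task["cycles"] / s["cpu"]
                  for s in servers]
        k = min(range(len(servers)), key=lambda i: delays[i])
        servers[k]["queue"] += task["cycles"] / servers[k]["cpu"]   # update backlog
        plan.append((task, k, delays[k]))
    return plan

servers = [{"cpu": 5e9, "rate": 50e6, "queue": 0.0},
           {"cpu": 8e9, "rate": 30e6, "queue": 0.0}]
tasks = [{"cycles": 2e9, "bits": 4e6}, {"cycles": 1e9, "bits": 1e6}, {"cycles": 3e9, "bits": 2e6}]
for task, server_id, delay in assign_tasks(tasks, servers):
    print(f"task -> server {server_id}, est. delay {delay:.3f} s")
```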

239 citations


Journal ArticleDOI
TL;DR: This review is organized around the generic sensing mechanisms rather than simply dividing the sensors by the type of material combination, a perspective that is unique and valuable for understanding and developing novel hybrid materials in the future.
Abstract: Chemi-resistive sensors based on hybrid functional materials are promising candidates for gas sensing with high responsivity, good selectivity, fast response/recovery, great stability/repeatability, room-temperature operation, low cost, and ease of fabrication, for versatile applications. This progress report reviews the advantages and advances of these sensing structures compared with the single constituent, according to five main sensing forms: manipulating/constructing heterojunctions, catalytic reaction, charge transfer, charge carrier transport, molecular binding/sieving, and their combinations. Promises and challenges of the advances of each form are presented and discussed. Critical thinking and ideas regarding the future direction of development of hybrid-material-based gas sensors are also discussed.

237 citations


Journal ArticleDOI
TL;DR: A multitask deep-learning framework that simultaneously predicts the node flow and edge flow throughout a spatio-temporal network based on fully convolutional networks is proposed.
Abstract: Predicting flows (e.g., the traffic of vehicles, crowds, and bikes), consisting of the in-out traffic at a node and transitions between different nodes, in a spatio-temporal network plays an important role in transportation systems. However, this is a very challenging problem, affected by multiple complex factors, such as the spatial correlation between different locations, temporal correlation among different time intervals, and external factors (like events and weather). In addition, the flow at a node (called node flow) and transitions between nodes (edge flow) mutually influence each other. To address these issues, we propose a multitask deep-learning framework that simultaneously predicts the node flow and edge flow throughout a spatio-temporal network. Based on fully convolutional networks, our approach designs two sophisticated models for predicting node flow and edge flow, respectively. These two models are connected by coupling the latent representations of their middle layers, and are trained together. External factors are also integrated into the framework through a gating fusion mechanism. In the edge flow prediction model, we employ an embedding component to deal with the sparse transitions between nodes. We evaluate our method on taxicab data from Beijing and New York City. Experimental results show the advantages of our method over 11 baselines, such as ConvLSTM, CNN, and Markov Random Field.

236 citations


Journal ArticleDOI
TL;DR: A channel-wise and spatial feature modulation (CSFM) network in which a series of feature modulation memory (FMM) modules are cascaded with a densely connected structure to transform shallow features to high informative features and maintain long-term information for image super-resolution.
Abstract: The performance of single image super-resolution has achieved significant improvement by utilizing deep convolutional neural networks (CNNs). The features in a deep CNN contain different types of information which make different contributions to image reconstruction. However, most CNN-based models lack discriminative ability for different types of information and deal with them equally, which limits the representational capacity of the models. On the other hand, as the depth of the neural network grows, the long-term information coming from preceding layers is easily weakened or lost at later layers, which is adverse to super-resolving images. To capture more informative features and maintain long-term information for image super-resolution, we propose a channel-wise and spatial feature modulation (CSFM) network in which a series of feature modulation memory (FMM) modules are cascaded with a densely connected structure to transform shallow features into highly informative features. In each FMM module, we construct a set of channel-wise and spatial attention residual (CSAR) blocks and stack them in a chain structure to dynamically modulate the multi-level features in global and local manners. This feature modulation strategy enables valuable information to be enhanced and redundant information to be suppressed. Meanwhile, for long-term information persistence, a gated fusion (GF) node is attached at the end of the FMM module to adaptively fuse hierarchical features and distill more effective information via the dense skip connections and the gating mechanism. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over the state-of-the-art methods.
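A minimal PyTorch sketch of a channel-wise and spatial attention residual (CSAR) block in the spirit the abstract describes; the exact layer sizes, reduction ratio, and the gated fusion node are guesses, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CSARBlock(nn.Module):
    """Residual block with channel-wise and spatial attention (illustrative sketch)."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        # Channel attention: global pooling -> bottleneck MLP -> per-channel gate
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        # Spatial attention: 1x1 convolutions -> per-pixel gate
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.body(x)
        feat = feat * self.channel_att(feat) * self.spatial_att(feat)
        return x + feat          # residual connection keeps shallow information flowing

x = torch.randn(1, 64, 48, 48)
print(CSARBlock()(x).shape)      # torch.Size([1, 64, 48, 48])
```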

228 citations


Posted ContentDOI
23 Mar 2020-medRxiv
TL;DR: An AI system that automatically analyzes CT images to detect COVID-19 pneumonia features; the team was able to overcome a series of challenges in this particular situation and deploy the system in four weeks.
Abstract: The sudden outbreak of novel coronavirus 2019 (COVID-19) increased the diagnostic burden of radiologists. In the time of an epidemic crisis, we hoped artificial intelligence (AI) would help reduce physician workload in regions with the outbreak, and improve diagnostic accuracy for physicians before they could acquire enough experience with the new disease. Here, we present our experience in building and deploying an AI system that automatically analyzes CT images to detect COVID-19 pneumonia features. Different from conventional medical AI, we were dealing with an epidemic crisis. Working in an interdisciplinary team of over 30 people with medical and/or AI backgrounds, geographically distributed in Beijing and Wuhan, we were able to overcome a series of challenges in this particular situation and deploy the system in four weeks. Using 1,136 training cases (723 positives for COVID-19) from five hospitals, we were able to achieve a sensitivity of 0.974 and specificity of 0.922 on the test dataset, which included a variety of pulmonary diseases. In addition, the system automatically highlighted all lesion regions for faster examination. As of today, we have deployed the system in 16 hospitals, and it is performing over 1,300 screenings per day.

Journal ArticleDOI
TL;DR: In this article, a metasurface-based decoupling method was proposed to reduce the mutual couplings at two independent bands of two coupled multiple-input-multiple-output (MIMO) antennas.
Abstract: In this communication, a metasurface-based decoupling method (MDM) is proposed to reduce the mutual couplings at two independent bands of two coupled multiple-input multiple-output (MIMO) antennas. The metasurface superstrate is composed of pairs of non-uniform cut wires with two different lengths. It is compact in size and effective in decoupling two nearby dual-band patch antennas that are strongly coupled in the H-plane with an edge-to-edge spacing of only 0.008 wavelength at the low-frequency band (LB). The antennas are fabricated and measured, and the results show that the isolation between the two dual-band antennas can be improved to more than 25 dB at both the 2.5–2.7 GHz and 3.4–3.6 GHz bands, while their reflection coefficients remain below −10 dB after the metasurface superstrate is introduced. Moreover, the total efficiency is improved by about 15% in the low band, and the envelope correlation coefficient (ECC) between the two antennas is reduced from 0.46 to 0.08 at 2.6 GHz and from 0.08 to 0.01 at 3.5 GHz. The proposed method can find plenty of applications in dual-band MIMO and 5G communication systems.

Posted Content
TL;DR: A new holistic attention network (HAN) is proposed, which consists of a layer attention module (LAM) and a channel-spatial attention Module (CSAM) to model the holistic interdependencies among layers, channels, and positions.
Abstract: Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
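A compact sketch of the layer-attention idea: stack the feature maps produced by L intermediate layers, compute pairwise correlations between layers, and reweight each layer's features by the resulting attention. Dimensions, scaling, and the residual parameterization are illustrative guesses, not the authors' exact LAM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerAttention(nn.Module):
    """Reweight a stack of per-layer feature maps by their pairwise correlations."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))    # learnable residual scale

    def forward(self, feats):                        # feats: [B, L, C, H, W]
        b, l, c, h, w = feats.shape
        flat = feats.view(b, l, -1)                  # [B, L, C*H*W]
        att = F.softmax(torch.bmm(flat, flat.transpose(1, 2)) / flat.shape[-1] ** 0.5, dim=-1)
        out = torch.bmm(att, flat).view(b, l, c, h, w)
        return feats + self.scale * out              # residual, starts as identity

feats = torch.randn(2, 8, 64, 24, 24)                # 8 intermediate layers (hypothetical)
print(LayerAttention()(feats).shape)                 # torch.Size([2, 8, 64, 24, 24])
```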

Posted Content
TL;DR: A comprehensive discussion of 6G is given based on a review of 5G developments, covering visions and requirements, technology trends and challenges, and aiming at tackling the challenges of coverage, capacity, user data rate, and movement speed in mobile communication systems.
Abstract: Since 5G new radio comes with non-standalone (NSA) and standalone (SA) versions in 3GPP, research on 6G has been put on the agenda by academia and industry. Though 6G is supposed to have much higher capabilities than 5G, there is not yet a clear description of what 6G is. In this article, a comprehensive discussion of 6G is given based on a review of 5G developments, covering visions and requirements, technology trends, and challenges, aiming at tackling the challenges of coverage, capacity, user data rate, and movement speed in mobile communication systems. The vision of 6G is to fully support the development of a Ubiquitous Intelligent Mobile Society with intelligent life and industries. Finally, a roadmap for the 6G standard is suggested.

Journal ArticleDOI
TL;DR: A comprehensive survey on the use of ML in MEC systems is provided, offering an insight into the current progress of this research area; helpful guidance is supplied by pointing out which MEC challenges can be solved by ML solutions, what the current trending algorithms in frontier ML research are, and how they could be used in MEC.
Abstract: Mobile Edge Computing (MEC) is considered an essential future service for the implementation of 5G networks and the Internet of Things, as it is the best method of delivering computation and communication resources to mobile devices. It is based on the connection of the users to servers located on the edge of the network, which is especially relevant for real-time applications that demand minimal latency. In order to guarantee a resource-efficient MEC (which, for example, could mean improved Quality of Service for users or lower costs for service providers), it is important to consider certain aspects of the service model, such as where to offload the tasks generated by the devices, how many resources to allocate to each user (especially in the wired or wireless device-server communication) and how to handle inter-server communication. However, in MEC scenarios with many and varied users, servers and applications, these problems are characterized by parameters with exceedingly high levels of dimensionality, resulting in too much data to be processed and complicating the task of finding efficient configurations. This will be particularly troublesome when 5G networks and the Internet of Things roll out, with their massive numbers of devices. To address this concern, the best solution is to utilize Machine Learning (ML) algorithms, which enable the computer to draw conclusions and make predictions based on existing data without human supervision, leading to quick near-optimal solutions even in problems with high dimensionality. Indeed, in scenarios with too much data and too many parameters, ML algorithms are often the only feasible alternative. In this paper, a comprehensive survey on the use of ML in MEC systems is provided, offering an insight into the current progress of this research area. Furthermore, helpful guidance is supplied by pointing out which MEC challenges can be solved by ML solutions, what the current trending algorithms in frontier ML research are, and how they could be used in MEC. These pieces of information should prove fundamental in encouraging future research that combines ML and MEC.

Journal ArticleDOI
TL;DR: Performance evaluation results validate that the proposed scheme is indeed capable of reducing the latency as well as improving the reliability of the EC-SDIoV.
Abstract: The Internet of Vehicles (IoV) has drawn great interest in recent years. Various IoV applications have emerged for improving safety, efficiency, and comfort on the road. Cloud computing constitutes a popular technique for supporting delay-tolerant entertainment applications. However, for advanced latency-sensitive applications (e.g., auto/assisted driving and emergency failure management), cloud computing may result in excessive delay. Edge computing, which extends computing and storage capabilities to the edge of the network, emerges as an attractive technology. Therefore, to support these computationally intensive and latency-sensitive applications in IoVs, in this article, we integrate mobile edge computing nodes (i.e., mobile vehicles) and fixed edge computing nodes (i.e., fixed road infrastructures) to provide low-latency computing services cooperatively. For better exploiting these heterogeneous edge computing resources, the concept of software-defined networking (SDN) and edge-computing-aided IoV (EC-SDIoV) is conceived. Moreover, in a complex and dynamic IoV environment, the outage of both processing nodes and communication links becomes inevitable, which may have life-threatening consequences. In order to ensure the completion with high reliability of latency-sensitive IoV services, we introduce both partial computation offloading and reliable task allocation with a reprocessing mechanism to EC-SDIoV. Since the optimization problem is nonconvex and NP-hard, a heuristic fault-tolerant particle swarm optimization algorithm for maximizing reliability (FPSO-MR) under latency constraints is designed. Performance evaluation results validate that the proposed scheme is indeed capable of reducing the latency as well as improving the reliability of EC-SDIoV.
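For readers unfamiliar with the underlying search procedure, here is a generic particle swarm optimization skeleton for a maximize-reliability objective; the fault-tolerance and reprocessing logic of FPSO-MR is abstracted into a placeholder fitness function, so treat this as a sketch of plain PSO, not of the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    """Placeholder objective: higher is better. In FPSO-MR this would score the
    task-allocation vector x by its reliability subject to latency constraints."""
    return -np.sum((x - 0.3) ** 2)

def pso(dim=10, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(0, 1, (n_particles, dim))        # particle positions (allocation vectors)
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

best, best_val = pso()
print(best_val)
```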

Journal ArticleDOI
TL;DR: The authors develop a method to test a large area of graphene and show that even with edge defects it displays near-ideal mechanical performance, as well as resilience and mechanical robustness that allows for flexible electronics and mechatronics applications.
Abstract: The sp2 nature of graphene endows the hexagonal lattice with very high theoretical stiffness, strength and resilience, all well-documented. However, the ultimate stretchability of graphene has not yet been demonstrated due to the difficulties in experimental design. Here, directly performing in situ tensile tests in a scanning electron microscope after developing a protocol for sample transfer, shaping and straining, we report the elastic properties and stretchability of free-standing single-crystalline monolayer graphene grown by chemical vapor deposition. The measured Young’s modulus is close to 1 TPa, aligning well with the theoretical value, while the representative engineering tensile strength reaches ~50-60 GPa with sample-wide elastic strain up to ~6%. Our findings demonstrate that single-crystalline monolayer graphene can indeed display near ideal mechanical performance, even in a large area with edge defects, as well as resilience and mechanical robustness that allows for flexible electronics and mechatronics applications. The extraordinary mechanical properties of graphene are usually measured on very small or supported samples. Here, the authors develop a method to test a large area of graphene and show that even with edge defects it displays near-ideal mechanical performance.
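As a rough consistency check (assuming linear elasticity up to failure, which the measured response only approximately obeys), the reported modulus and elastic strain imply a stress of the same order as the measured strength:

```latex
\sigma \;\approx\; E\,\varepsilon \;\approx\; 1\,\mathrm{TPa} \times 0.06 \;=\; 60\,\mathrm{GPa},
```

which agrees with the reported engineering tensile strength of ~50-60 GPa at ~6% sample-wide elastic strain.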

Journal ArticleDOI
TL;DR: A novel blockchain-enabled federated learning (FL-Block) scheme that enables autonomous machine learning without any centralized authority to maintain the global model, coordinating instead via a Proof-of-Work consensus mechanism of the blockchain.
Abstract: As the extension of cloud computing and a foundation of IoT, fog computing is experiencing fast prosperity because of its potential to mitigate some troublesome issues, such as network congestion, latency, and local autonomy. However, privacy issues and the subsequent inefficiency are dragging down the performance of fog computing. The majority of existing works hardly consider a reasonable balance between them while suffering from poisoning attacks. To address the aforementioned issues, we propose a novel blockchain-enabled federated learning (FL-Block) scheme to close the gap. FL-Block allows end devices' local learning updates to be exchanged with a blockchain-based global learning model, which is verified by miners. Built upon this, FL-Block enables autonomous machine learning without any centralized authority to maintain the global model, coordinating instead via the blockchain's Proof-of-Work consensus mechanism. Furthermore, we analyze the latency performance of FL-Block and further derive the optimal block generation rate by taking communication, consensus delays, and computation cost into consideration. Extensive evaluation results show the superior performance of FL-Block from the aspects of privacy protection, efficiency, and resistance to poisoning attacks.
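A minimal sketch of the two ingredients the scheme combines: federated averaging of local updates and recording of the aggregated global model in a hash-chained block (a stand-in for the miner verification and Proof-of-Work described in the abstract, which are omitted here). All structures and field names are illustrative.

```python
import hashlib
import json
import numpy as np

def fed_avg(local_weights, sample_counts):
    """Federated averaging: weight each device's update by its local sample count."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

def make_block(prev_hash, global_weights, round_id):
    """Record the aggregated model in a hash-chained block (PoW mining omitted)."""
    payload = {"round": round_id,
               "prev_hash": prev_hash,
               "model_digest": hashlib.sha256(global_weights.tobytes()).hexdigest()}
    block_hash = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "hash": block_hash}

# Hypothetical round with three end devices holding small 1-D model parameter vectors
locals_ = [np.random.default_rng(i).normal(size=4) for i in range(3)]
counts = [120, 80, 200]
global_w = fed_avg(locals_, counts)
block = make_block(prev_hash="0" * 64, global_weights=global_w, round_id=1)
print(global_w, block["hash"][:16])
```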

Journal ArticleDOI
01 Feb 2020
TL;DR: An overview of SDN/NFV-enabled IoV is provided, in which software-defined networking and network function virtualization technologies are leveraged to enhance the performance of IoV and enable diverse IoV scenarios and applications.
Abstract: The Internet-of-Vehicles (IoV) connects vehicles, sensors, pedestrians, mobile devices, and the Internet with advanced communication and networking technologies, which can enhance road safety, improve road traffic management, and support immersive user experiences. However, the increasing number of vehicles and other IoV devices, high vehicle mobility, and diverse service requirements render the operation and management of IoV intractable. Software-defined networking (SDN) and network function virtualization (NFV) technologies offer potential solutions to achieve flexible and automated network management, global network optimization, and efficient network resource orchestration with cost-effectiveness, and are envisioned as a key enabler of the future IoV. In this article, we provide an overview of SDN/NFV-enabled IoV, in which SDN/NFV technologies are leveraged to enhance the performance of IoV and enable diverse IoV scenarios and applications. In particular, the IoV and SDN/NFV technologies are first introduced. Then, the state-of-the-art research works are surveyed comprehensively and categorized into topics according to the role that the SDN/NFV technologies play in IoV, i.e., enhancing the performance of data communication, computing, and caching, respectively. Some open research issues are discussed for future directions.

Journal ArticleDOI
TL;DR: A new matrix factorization (MF) model with deep feature learning, which integrates a convolutional neural network (CNN) and is named Joint CNN-MF (JCM), is capable of using the learned deep latent features of neighbors to infer the features of a user or a service.
Abstract: Along with the popularity of intelligent services and mobile services, service recommendation has become a key task, especially the task based on quality-of-service (QoS) in edge computing environments. Most existing service recommendation methods have some serious defects and cannot be directly adopted in an edge computing environment. For example, most existing methods cannot learn deep features of users or services, but in the edge computing environment, there are a variety of devices with different configurations and different functions, and it is necessary to learn the deep features behind those complex devices. In order to fully utilize hidden features, this paper proposes a new matrix factorization (MF) model with deep feature learning, which integrates a convolutional neural network (CNN). The proposed model is named Joint CNN-MF (JCM). JCM is capable of using the learned deep latent features of neighbors to infer the features of a user or a service. Meanwhile, to improve the accuracy of neighbor selection, the proposed model contains a novel similarity computation method. The CNN learns the neighbors' features, forms a feature matrix, and infers the features of the target user or target service. We conducted experiments on a real-world service dataset under a range of data densities, to reflect the complex invocation cases in edge computing environments. The experimental results verify that, compared to counterpart methods, our method can consistently achieve higher QoS prediction accuracy.

Journal ArticleDOI
TL;DR: A deep learning-based radiomic nomogram, built from multi-phase computed tomography images for preoperatively determining the number of lymph node metastases in locally advanced gastric cancer (LAGC), had good predictive value for LNM in LAGC.

Journal ArticleDOI
TL;DR: A holistic framework is developed to attack QoS prediction in the IoT environment, based on neural collaborative filtering (NCF) and fuzzy clustering, together with a new combined similarity computation method.
Abstract: With the prevalent application of the Internet of Things (IoT) in the real world, services have become a widely used means of providing configurable resources. As the number of services is large and increasing fast, determining the suitability of a service to a user is an inevitable mission. Two typical tasks are needed, namely service recommendation and service selection. The prediction of Quality of Service (QoS) is an important way to accomplish the two tasks, and a series of methods have been proposed to predict QoS values. However, few methods have studied QoS prediction in IoT environments, where contextual information is vital. In this article, we develop a holistic framework to attack the QoS prediction problem in the IoT environment, which is based on neural collaborative filtering (NCF) and fuzzy clustering. We design a fuzzy clustering algorithm that is capable of clustering contextual information and then propose a new combined similarity computation method. Next, a new NCF model is designed that can leverage local and global features. Extensive experiments are conducted on two real-world data sets, and the experimental results verify the effectiveness of the proposed framework.
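For the contextual clustering step, a compact fuzzy c-means sketch is shown below; the membership and center updates are the standard FCM rules, while the paper's combined similarity measure and NCF components are not reproduced. Data and parameters are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: soft memberships U (n x c) and cluster centers V (c x d)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)                    # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0, keepdims=True).T     # fuzzily weighted centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, V

# Hypothetical IoT context vectors (e.g., location / device-type features)
X = np.vstack([np.random.default_rng(1).normal(loc, 0.3, size=(30, 2)) for loc in (0, 3, 6)])
U, V = fuzzy_c_means(X, c=3)
print(V)                       # cluster centers
print(U[:3].round(2))          # soft memberships of the first three contexts
```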

Journal ArticleDOI
TL;DR: This work investigates three mainstream consensus mechanisms in the blockchain, namely Proof of Work (PoW), Proof of Stake (PoS), and Directed Acyclic Graph (DAG), and quantifies their performance in terms of the average time to generate a new block, the confirmation delay, the transactions per second (TPS), and the confirmation failure probability.

Proceedings ArticleDOI
14 Jun 2020
TL;DR: Zhang et al., as mentioned in this paper, proposed a no-reference IQA metric based on deep meta-learning, which learns the meta-knowledge shared by humans when evaluating the quality of images with various distortions and can then be adapted to unknown distortions easily.
Abstract: Recently, increasing interest has been drawn to exploiting deep convolutional neural networks (DCNNs) for no-reference image quality assessment (NR-IQA). Despite the notable success achieved, there is a broad consensus that training DCNNs heavily relies on massive annotated data. Unfortunately, IQA is a typical small-sample problem. Therefore, most of the existing DCNN-based IQA metrics operate based on pre-trained networks. However, these pre-trained networks are not designed for the IQA task, leading to generalization problems when evaluating different types of distortions. With this motivation, this paper presents a no-reference IQA metric based on deep meta-learning. The underlying idea is to learn the meta-knowledge shared by humans when evaluating the quality of images with various distortions, which can then be adapted to unknown distortions easily. Specifically, we first collect a number of NR-IQA tasks for different distortions. Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions. Finally, the quality prior model is fine-tuned on a target NR-IQA task to quickly obtain the quality model. Extensive experiments demonstrate that the proposed metric outperforms the state-of-the-art methods by a large margin. Furthermore, the meta-model learned from synthetic distortions can also be easily generalized to authentic distortions, which is highly desired in real-world applications of IQA metrics.
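To illustrate the "learn a quality prior over distortion-specific tasks, then fine-tune on the target distortion" idea, here is a simplified first-order meta-learning loop (Reptile-style) over placeholder per-distortion NR-IQA tasks. This is not the authors' exact bi-level optimization; the quality regressor and the synthetic data are assumptions.

```python
import copy
import torch
import torch.nn as nn

def make_task(seed, n=32, dim=64):
    """Placeholder NR-IQA task: features of distorted images -> quality scores."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    w = torch.randn(dim, generator=g)
    return x, x @ w / dim ** 0.5          # synthetic "subjective scores"

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
meta_lr, inner_lr, inner_steps = 0.1, 1e-2, 5
tasks = [make_task(s) for s in range(8)]  # one task per distortion type (hypothetical)

for meta_iter in range(100):
    task_x, task_y = tasks[meta_iter % len(tasks)]
    learner = copy.deepcopy(model)        # adapt a copy on one distortion
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        loss = nn.functional.mse_loss(learner(task_x).squeeze(-1), task_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Reptile-style meta-update: move the prior toward the adapted weights
    with torch.no_grad():
        for p, q in zip(model.parameters(), learner.parameters()):
            p += meta_lr * (q - p)

# The learned prior `model` would then be fine-tuned on the target (e.g., authentic) distortions.
```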

Journal ArticleDOI
Getao Du1, Xu Cao1, Jimin Liang1, Xueli Chen1, Yonghua Zhan1 
TL;DR: The method of combining the original U-net architecture with deep learning and a method for improving the U-net network are introduced, which can not only accurately segment the desired feature targets and effectively process and objectively evaluate medical images but also improve diagnostic accuracy from medical images.
Abstract: Medical image analysis is performed by analyzing images obtained by medical imaging systems to solve clinical problems. The purpose is to extract effective information and improve the level of clinical diagnosis. In recent years, automatic segmentation based on deep learning (DL) methods has been widely used, where a neural network can automatically learn image features, which is in sharp contrast with the traditional manual learning method. U-net is one of the most important semantic segmentation frameworks based on convolutional neural networks (CNNs). It is widely used in the medical image analysis domain for lesion segmentation, anatomical segmentation, and classification. The advantage of this network framework is that it can not only accurately segment the desired feature target and effectively process and objectively evaluate medical images but also help to improve the accuracy of diagnosis from medical images. Therefore, this article presents a literature review of medical image segmentation based on U-net, focusing on the successful segmentation experience of U-net for different lesion regions in six medical imaging systems. Along with the latest advances in DL, this article introduces the method of combining the original U-net architecture with deep learning and a method for improving the U-net network.
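For orientation, a minimal U-net-style encoder-decoder in PyTorch is sketched below, showing the characteristic skip connections between mirrored resolution levels; the depth and channel counts are reduced for brevity and are not tied to any specific configuration discussed in the review.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level U-net sketch: encoder, bottleneck, decoder with skip connections."""
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                                  # per-pixel class logits

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)   # torch.Size([1, 2, 64, 64])
```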

Journal ArticleDOI
TL;DR: A new vision of Digital Twin Edge Networks (DITEN) is presented, in which digital twins of edge servers estimate the servers' states and a digital twin of the entire MEC system provides training data for offloading decisions; the proposed scheme effectively diminishes the average offloading latency, the offloading failure rate, and the service migration rate, while saving the system cost with DT assistance.
Abstract: 6G is envisioned to empower wireless communication and computation through the digitalization and connectivity of everything, by establishing a digital representation of the real network environment. Mobile edge computing (MEC), as one of the key enabling factors, meets unprecedented challenges during mobile offloading due to the extremely complicated and unpredictable network environment in 6G. Existing works on offloading in MEC mainly ignore the effects of user mobility and the unpredictable MEC environment. In this paper, we present a new vision of Digital Twin Edge Networks (DITEN), where digital twins (DTs) of edge servers estimate the edge servers' states and a DT of the entire MEC system provides training data for offloading decisions. A mobile offloading scheme is proposed in DITEN to minimize the offloading latency under a constraint on the accumulated service migration cost incurred during user mobility. The Lyapunov optimization method is leveraged to simplify the long-term migration cost constraint to a multi-objective dynamic optimization problem, which is then solved by Actor-Critic deep reinforcement learning. Simulation results show that our proposed scheme effectively diminishes the average offloading latency, the offloading failure rate, and the service migration rate, as compared with benchmark schemes, while saving the system cost with DT assistance.
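For reference, the drift-plus-penalty construction the abstract alludes to typically introduces a virtual queue for the accumulated migration cost and minimizes a weighted sum in each slot; a generic form (with symbols assumed here, not taken from the paper) is:

```latex
Q(t+1) = \max\bigl\{\,Q(t) + c_{\mathrm{mig}}(t) - \bar{c},\; 0 \bigr\}, \qquad
\min_{\text{offloading decision}} \; V\,T_{\mathrm{off}}(t) + Q(t)\,c_{\mathrm{mig}}(t),
```

where $c_{\mathrm{mig}}(t)$ is the per-slot migration cost, $\bar{c}$ its long-term budget, $T_{\mathrm{off}}(t)$ the offloading latency, and $V$ the latency/cost trade-off weight; each slot's resulting subproblem can then be handled by the Actor-Critic agent.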

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors examined how prepared enterprises are for green innovation in terms of technology readiness, organization readiness, and environment readiness, hypothesizing that the necessary and sufficient conditions along each dimension enable and facilitate green innovation, leading to competitive advantage through the mediation of environmental performance and firm performance.

Journal ArticleDOI
TL;DR: An overview of the network architecture and security functionality of the 3GPP 5G networks is presented, and the new features and techniques including the support of massive Internet of Things (IoT) devices, Device to Device (D2D) communication, Vehicle to Everything (V2X), and network slice are focused on.
Abstract: With the continuous development of mobile communication technologies, the Third Generation Partnership Project (3GPP) has proposed related standards for the fifth generation mobile communication technology (5G), which marks the official start of the evolution from the current Long Term Evolution (LTE) system to the next generation mobile communication system (5GS). This paper makes a large number of contributions to the security aspects of 3GPP 5G networks. Firstly, we present an overview of the network architecture and security functionality of 3GPP 5G networks. Subsequently, we focus on the new features and techniques, including the support of massive Internet of Things (IoT) devices, Device-to-Device (D2D) communication, Vehicle-to-Everything (V2X) communication, and network slicing, which pose huge challenges for the security of 3GPP 5G networks. Finally, we discuss in detail the security features, security requirements or security vulnerabilities, existing security solutions, and some open research issues concerning these new features and techniques in 3GPP 5G networks.

Journal ArticleDOI
TL;DR: A cloud-centric three-factor authentication and key agreement protocol integrating passwords, biometrics and smart cards to ensure secure access to both cloud and AVs is proposed, whose findings demonstrate that the protocol achieves high security strength with reasonable computation and communication costs.
Abstract: Autonomous vehicles (AVs) are increasingly common, although there remain a number of limitations that need to be addressed in order for their deployment to be more widespread. For example, to mitigate the failure of self-driving functions in AVs, introducing a remote control capability (which allows a human driver to operate the vehicle remotely in certain circumstances) is one of several countermeasures proposed. However, the remote control capability breaks the isolation of onboard driving systems and can potentially be exploited by malicious actors to take over control of the AVs, thus risking the safety of the passengers and pedestrians (e.g., AVs remotely taken over by terrorist groups to carry out coordinated attacks in places of mass gatherings). Therefore, security is a key, mandatory feature in the design of AVs. In this paper, we propose a cloud-centric three-factor authentication and key agreement protocol (CT-AKA) integrating passwords, biometrics, and smart cards to ensure secure access to both the cloud and AVs. Three typical biometric encryption approaches, including fuzzy vault, fuzzy commitment, and fuzzy extractor, are unified to achieve three-factor authentication without leaking the biometric privacy of users. Moreover, two session keys are negotiated in our protocol: one between the user and the AV to support secure remote control of the AV, and the other between the mobile device and the cloud, which introduces resilience to the compromise of ephemeral security parameters and ensures cloud data access security with a high security guarantee. Finally, we formally verify the security properties and evaluate the efficiency of CT-AKA, and the findings demonstrate that the protocol achieves high security strength with reasonable computation and communication costs.

Journal ArticleDOI
TL;DR: An overview of the Underwater Internet of Things with emphasis on current advances, future system architecture, applications, challenges, and open issues, and a five-layer system architecture for the future UIoT, which consists of a sensing, communication, networking, fusion, and application layer.
Abstract: The development of the smart ocean requires that various features of the ocean be explored and understood. The Underwater Internet of Things (UIoT), an extension of the Internet of Things (IoT) to the underwater environment, constitutes powerful technology for achieving the smart ocean. This article provides an overview of the UIoT with emphasis on current advances, future system architecture, applications, challenges, and open issues. The UIoT is enabled by the most recent developments in autonomous underwater vehicles, smart sensors, underwater communication technologies, and underwater routing protocols. In the coming years, the UIoT is expected to bridge diverse technologies for sensing the ocean, allowing it to become a smart network of interconnected underwater objects that has self-learning and intelligent computing capabilities. This article first provides a horizontal overview of the UIoT. Then, we present a five-layer system architecture for the future UIoT, which consists of a sensing, communication, networking, fusion, and application layer. Finally, we suggest the current challenges and the future UIoT research trends, in which cloud computing, fog computing, and artificial intelligence are combined.