Institution
Beijing University of Posts and Telecommunications
About: Beijing University of Posts and Telecommunications is an education organization based in Beijing, China. It is known for its research contributions in the topics of MIMO and quality of service. The organization has 39576 authors who have published 41525 publications, receiving 403759 citations. The organization is also known as: BUPT.
Papers published on a yearly basis
Papers
TL;DR: The experimental results demonstrate that the multi-CNN fusion model achieves high classification accuracy with low complexity on the NSL-KDD dataset, outperforming both traditional machine learning methods and recent deep learning approaches in binary and multiclass classification.
160 citations
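The fusion step described above can be illustrated as score-level (soft-voting) fusion of several classifiers. This is a minimal sketch, assuming the model combines per-class probability outputs of its member CNNs by averaging; the probability arrays below are hypothetical stand-ins for real CNN outputs.

```python
import numpy as np

def fuse_predictions(prob_list):
    """Average the class-probability outputs of several models and
    return the fused class prediction for each sample."""
    stacked = np.stack(prob_list)   # shape: (n_models, n_samples, n_classes)
    fused = stacked.mean(axis=0)    # average the per-class scores
    return fused.argmax(axis=1)     # fused class label per sample

# Three hypothetical CNNs scoring 2 samples over 3 classes (e.g. traffic types).
cnn_a = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
cnn_b = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
cnn_c = np.array([[0.5, 0.4, 0.1], [0.3, 0.3, 0.4]])

print(fuse_predictions([cnn_a, cnn_b, cnn_c]))  # one fused label per sample
```

Averaging scores rather than hard votes lets a confident model outvote two uncertain ones, which is one common reason soft voting tends to beat majority voting in fusion ensembles.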
26 Feb 2010
TL;DR: A new handover algorithm based on the UE's speed and QoS is proposed; compared with the traditional handover algorithm, it reduces both unnecessary handovers and the total number of handovers.
Abstract: Femtocell networks that use the Home eNodeB and existing networks as backhaul connectivity can meet the upcoming demand for high data rates in wireless communication systems and can also extend coverage. It is also of interest to minimize operational effort by introducing self-optimizing mechanisms, and optimizing handovers involving the Home eNodeB is an important goal of LTE-Advanced. Because the network architecture and functionality of the Home eNodeB differ from those of the LTE eNodeB, the handover procedure between the femtocell and the macrocell must be modified in the LTE network. In this paper, a modified handover signaling procedure is presented for the Home eNodeB gateway-based femtocell network architecture, and a new handover algorithm based on the UE's speed and QoS is proposed. A comparison with the traditional handover algorithm shows that the proposed algorithm performs better in reducing both unnecessary handovers and the total number of handovers.
160 citations
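A speed- and QoS-aware handover decision of the kind described above can be sketched as a simple rule: fast-moving UEs skip femtocell handovers (they would leave the small cell almost immediately and cause ping-pong), while slow UEs hand over only when the femtocell signal clearly beats the macrocell. All thresholds, field names, and the exact decision logic here are illustrative assumptions, not the paper's algorithm.

```python
SPEED_LIMIT_KMH = 30   # hypothetical: above this, a femtocell stay is too short
RSRP_MARGIN_DB = 3     # hypothetical: femtocell must beat macrocell by this much

def should_handover_to_femto(ue_speed_kmh, femto_rsrp_db, macro_rsrp_db,
                             needs_high_data_rate):
    """Return True if the UE should hand over from the macrocell to the femtocell."""
    if ue_speed_kmh > SPEED_LIMIT_KMH:
        return False                # fast UE: avoid ping-pong handovers
    if femto_rsrp_db < macro_rsrp_db + RSRP_MARGIN_DB:
        return False                # femtocell not clearly stronger
    # Slow UE with a clearly stronger femtocell: admit high-data-rate QoS
    # classes immediately; require a larger margin for best-effort traffic.
    if needs_high_data_rate:
        return True
    return femto_rsrp_db >= macro_rsrp_db + 2 * RSRP_MARGIN_DB

print(should_handover_to_femto(80, -70, -80, True))   # fast UE: no handover
print(should_handover_to_femto(5, -70, -80, True))    # slow UE, strong femto: handover
```

Gating on speed first reflects the trade-off the paper targets: for a fast UE even a stronger femtocell signal is not worth the pair of handovers it would trigger.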
TL;DR: Li et al. as discussed by the authors achieved the synthesis of ultrathin Li3VO4 nanoribbons through a layer-by-layer assembly method; the material shows not only a high specific reversible capacity (up to 452.5 mA h g−1 after 200 cycles) but also excellent cycling performance.
160 citations
TL;DR: An optimal double-layer PBFT is proposed, and it is proved that the communication complexity is minimized when the nodes are evenly distributed among the sub-groups in the second layer; the security threshold is analyzed under the faulty probability determined (FPD) and faulty number determined (FND) models, respectively.
Abstract: Practical Byzantine Fault Tolerance (PBFT) consensus mechanism shows a great potential to break the performance bottleneck of the Proof-of-Work (PoW)-based blockchain systems, which typically support only dozens of transactions per second and require minutes to hours for transaction confirmation. However, due to frequent inter-node communications, PBFT mechanism has a poor node scalability and thus it is typically adopted in small networks. To enable PBFT in large systems such as massive Internet of Things (IoT) ecosystems and blockchain, in this article, a scalable multi-layer PBFT-based consensus mechanism is proposed by hierarchically grouping nodes into different layers and limiting the communication within the group. We first propose an optimal double-layer PBFT and show that the communication complexity is significantly reduced. Specifically, we prove that when the nodes are evenly distributed within the sub-groups in the second layer, the communication complexity is minimized. The security threshold is analyzed based on faulty probability determined (FPD) and faulty number determined (FND) models, respectively. We also provide a practical protocol for the proposed double-layer PBFT system. Finally, the results are extended to arbitrary-layer PBFT systems with communication complexity and security analysis. Simulation results verify the effectiveness of the analytical results.
160 citations
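The layering argument above can be made concrete with a back-of-envelope cost model. Assume, as a simplification, that one PBFT round among x nodes costs on the order of x**2 messages; a double layer then costs roughly m**2 for the m sub-group leaders plus one intra-group round per sub-group. The quadratic cost model and the group sizes below are illustrative assumptions, not the paper's exact complexity expressions, but they reproduce the two qualitative claims: layering slashes the total cost, and (since x**2 is convex) an even split across sub-groups minimizes it.

```python
def double_layer_cost(group_sizes):
    """Approximate message cost of double-layer PBFT: one round among the
    group leaders plus one round inside each sub-group."""
    m = len(group_sizes)                      # number of second-layer sub-groups
    return m ** 2 + sum(k ** 2 for k in group_sizes)

n = 100                                       # total nodes (hypothetical)
even = [10] * 10                              # 10 sub-groups of 10 nodes each
uneven = [5] * 5 + [15] * 5                   # same n, skewed sub-group sizes

print(n ** 2)                                 # single-layer PBFT: 10000
print(double_layer_cost(even))                # 100 + 10 * 100   = 1100
print(double_layer_cost(uneven))              # 100 + 125 + 1125 = 1350
```

Under this toy model the double layer cuts the cost by roughly a factor of n over flat PBFT, and any deviation from equal sub-group sizes strictly increases the sum of squares, matching the even-distribution optimality result.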
TL;DR: This paper introduces two types of camouflages based on recent empirical studies, i.e., the feature camouflage and the relation camouflage and proposes a new model named CAmouflage-REsistant GNN (CARE-GNN), to enhance the GNN aggregation process with three unique modules against camouflages.
Abstract: Graph Neural Networks (GNNs) have been widely applied to fraud detection problems in recent years, revealing the suspiciousness of nodes by aggregating their neighborhood information via different relations. However, few prior works have noticed the camouflage behavior of fraudsters, which could hamper the performance of GNN-based fraud detectors during the aggregation process. In this paper, we introduce two types of camouflages based on recent empirical studies, i.e., the feature camouflage and the relation camouflage. Existing GNNs have not addressed these two camouflages, which results in their poor performance in fraud detection problems. Alternatively, we propose a new model named CAmouflage-REsistant GNN (CARE-GNN), to enhance the GNN aggregation process with three unique modules against camouflages. Concretely, we first devise a label-aware similarity measure to find informative neighboring nodes. Then, we leverage reinforcement learning (RL) to find the optimal amounts of neighbors to be selected. Finally, the selected neighbors across different relations are aggregated together. Comprehensive experiments on two real-world fraud datasets demonstrate the effectiveness of the RL algorithm. The proposed CARE-GNN also outperforms state-of-the-art GNNs and GNN-based fraud detectors. We integrate all GNN-based fraud detectors as an open-source toolbox: this https URL The CARE-GNN code and datasets are available at this https URL
160 citations
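The neighbor-filtering idea behind the label-aware similarity module can be sketched as follows: given a per-node score from a small label-aware classifier (e.g. a fraud probability), keep only the fraction p of neighbors whose scores are closest to the centre node's, then mean-pool their features. The scores, features, and fixed p below are hypothetical; in the actual CARE-GNN, p is learned per relation with reinforcement learning.

```python
import numpy as np

def filter_and_aggregate(center_score, nb_scores, nb_feats, p):
    """Keep the top-p fraction of most label-similar neighbors and mean-pool
    their feature vectors."""
    sims = -np.abs(nb_scores - center_score)   # closer score = higher similarity
    keep = int(np.ceil(p * len(nb_scores)))    # how many neighbors survive
    idx = np.argsort(sims)[::-1][:keep]        # indices of the kept neighbors
    return idx, nb_feats[idx].mean(axis=0)

# A benign centre node (score 0.15) with two benign-looking neighbors and two
# camouflaged fraudsters whose scores give them away (hypothetical values).
nb_scores = np.array([0.10, 0.85, 0.20, 0.90])
nb_feats = np.array([[1.0, 0.0], [9.0, 9.0], [2.0, 0.0], [8.0, 8.0]])
idx, agg = filter_and_aggregate(0.15, nb_scores, nb_feats, p=0.5)
print(sorted(idx.tolist()))   # dissimilar neighbors 1 and 3 are filtered out
```

Filtering by label-aware score rather than raw feature distance is what counters feature camouflage: a fraudster can imitate benign features, but the classifier score used for similarity is trained against labels.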
Authors
Showing all 39925 results
Name | H-index | Papers | Citations |
---|---|---|---|
Jie Zhang | 178 | 4857 | 221720 |
Jian Li | 133 | 2863 | 87131 |
Ming Li | 103 | 1669 | 62672 |
Kang G. Shin | 98 | 885 | 38572 |
Lei Liu | 98 | 2041 | 51163 |
Muhammad Shoaib | 97 | 1333 | 47617 |
Stan Z. Li | 97 | 532 | 41793 |
Qi Tian | 96 | 1030 | 41010 |
Xiaodong Xu | 94 | 1122 | 50817 |
Qi-Kun Xue | 84 | 589 | 30908 |
Long Wang | 84 | 835 | 30926 |
Jing Zhou | 84 | 533 | 37101 |
Hao Yu | 81 | 981 | 27765 |
Mohsen Guizani | 79 | 1110 | 31282 |
Muhammad Iqbal | 77 | 961 | 23821 |