Author

Yu-Wei Lu

Bio: Yu-Wei Lu is an academic researcher from National Cheng Kung University. The author has contributed to research in the topics of Telecommunications link and Throughput. The author has an h-index of 1 and has co-authored 1 publication receiving 52 citations.

Papers
Journal ArticleDOI
TL;DR: Numerical results show that the proposed algorithm outperforms the traditional greedy algorithm in throughput maximization while satisfying QoS requirements, and that its performance is close to that of the optimal design.
Abstract: Providing diverse and strict quality-of-service (QoS) guarantees is one of the most important requirements in machine-to-machine (M2M) communications, which particularly calls for appropriate resource allocation across a large number of M2M devices. To efficiently allocate resource blocks (RBs) to M2M devices while satisfying QoS requirements, we propose group-based M2M communications, in which M2M devices are clustered based on their wireless transmission protocols, their QoS characteristics, and their requirements. To perform joint RB and power allocation in SC-FDMA-based LTE-A networks, we formulate a sum-throughput maximization problem that respects all the constraints associated with the SC-FDMA scheme as well as the QoS requirements of M2M devices. The constraints of the uplink SC-FDMA air interface in LTE-A networks complicate the resource allocation problem. We solve it by first transforming it into a binary integer programming problem and then formulating a dual problem using Lagrange duality theory. Numerical results show that the proposed algorithm outperforms the traditional greedy algorithm in throughput maximization while satisfying QoS requirements, and that its performance is close to that of the optimal design.
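A minimal sketch of the dual-decomposition idea outlined in this abstract, not the authors' actual algorithm: per-device minimum-rate (QoS) constraints are relaxed with Lagrange multipliers updated by a projected subgradient step, while each RB is greedily assigned for fixed multipliers. The rate matrix, rate targets, and step size below are invented for illustration, and the SC-FDMA contiguity and power-allocation constraints are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_devices, n_rbs = 4, 12
rate = rng.uniform(0.2, 1.0, size=(n_devices, n_rbs))   # achievable rate of device i on RB k
r_min = np.full(n_devices, 1.5)                          # per-device minimum-rate (QoS) targets

lam = np.zeros(n_devices)                                # multipliers for the relaxed QoS constraints
for it in range(200):
    # For fixed multipliers, each RB goes to the device with the largest weighted rate.
    winner = ((1.0 + lam)[:, None] * rate).argmax(axis=0)
    achieved = np.array([rate[i, winner == i].sum() for i in range(n_devices)])
    # Projected subgradient step on the QoS constraints (achieved_i >= r_min_i).
    lam = np.maximum(0.0, lam + (1.0 / (it + 1)) * (r_min - achieved))

print("per-device throughput:", np.round(achieved, 2))
print("sum throughput:", round(float(achieved.sum()), 2))
```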

63 citations

Journal ArticleDOI
01 May 2023
TL;DR: In this article, a time-sharing TDC (ts-TDC) architecture for a global-shutter CSA is proposed that merges the interfacing, readout, and sampling circuits within each pixel and dynamically adjusts the electrode size according to the characteristics of the biosample.
Abstract: Capacitive sensor arrays (CSAs) play an important role in life-science applications such as cell monitoring, DNA detection, and drug screening. This brief presents a 480 × 960 time-sharing TDC (ts-TDC) architecture for a global-shutter CSA. It merges the interfacing, readout, and sampling circuits within each pixel and dynamically adjusts the electrode size according to the characteristics of the biosample. Each pixel fits within 2.25 × 2.25 µm². The sensitivity of the proposed CSA is 6.896 codes/fF, and a frame rate of 43 fps supports real-time monitoring. Experimental results show that noise canceling significantly improves the sensing performance. Moreover, the sensitivity can be optimized by a given fusion-pixel pattern, making the proposal flexible and suitable for cell-biology applications and personalized medicine.
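Back-of-the-envelope arithmetic implied by the figures quoted in the abstract (6.896 codes/fF and 43 fps); the conversion helper below is purely illustrative and not part of the authors' design flow.

```python
SENSITIVITY_CODES_PER_FF = 6.896   # reported sensitivity of the CSA
FRAME_RATE_FPS = 43                # reported frame rate

def capacitance_to_code(delta_c_ff: float) -> int:
    """Expected digital-code step for a capacitance change given in fF."""
    return round(delta_c_ff * SENSITIVITY_CODES_PER_FF)

frame_period_ms = 1000.0 / FRAME_RATE_FPS       # time budget per global-shutter frame
print(capacitance_to_code(10.0))                # ~69 codes for a 10 fF change
print(f"{frame_period_ms:.1f} ms per frame")    # ~23.3 ms available per frame
```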

Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors present a framework for the performance analysis of transmission scheduling with QoS support, discuss the issues involved in short-data-packet transmission in the mMTC scenario, and provide a detailed overview of existing and emerging solutions to the RAN congestion problem.
Abstract: The ever-increasing number of resource-constrained machine-type communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable and low-latency communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include quality-of-service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead, and radio access network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions, and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. After highlighting the inefficiency of the legacy random access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely LTE-M and narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short-data-packet transmission. Next, we provide a detailed overview of the existing and emerging solutions toward addressing the RAN congestion problem, and then identify potential advantages, challenges, and use cases for the application of emerging machine learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of a low-complexity Q-learning approach in the mMTC scenario, along with recent advances toward enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
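The survey's Q-learning discussion can be illustrated with a toy random-access example that is not taken from the paper: each MTC device keeps a single-state Q-table over RA slots and learns, epsilon-greedily, to pick an uncontested slot. The device count, slot count, learning rate, and reward values are all invented for this sketch.

```python
import random

n_devices, n_slots, episodes = 8, 10, 2000
alpha, epsilon = 0.1, 0.1
q = [[0.0] * n_slots for _ in range(n_devices)]          # one Q-value per (device, slot)

for _ in range(episodes):
    # Each device picks a random-access slot epsilon-greedily from its own Q-table.
    choices = [random.randrange(n_slots) if random.random() < epsilon
               else max(range(n_slots), key=lambda s: q[d][s])
               for d in range(n_devices)]
    for d, s in enumerate(choices):
        reward = 1.0 if choices.count(s) == 1 else -1.0  # success only if the slot is uncontested
        q[d][s] += alpha * (reward - q[d][s])

success = sum(choices.count(s) == 1 for s in choices) / n_devices
print(f"final-round success ratio: {success:.2f}")
```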

290 citations

Journal ArticleDOI
TL;DR: This paper identifies different ways to implement M2M communications in UDNs from a layered-architecture perspective, covering the physical, media access control, network, and application layers, and also addresses security and network virtualization issues.
Abstract: To achieve a 1000-fold capacity increase in 5G wireless communications, the ultradense network (UDN) is believed to be one of the key enabling technologies. Most previous research activities on UDNs were based largely on human-to-human communications. However, to provide ubiquitous Internet of Things services, machine-to-machine (M2M) communications will play a critical role in 5G systems. As the number of machine-oriented connections increases, supporting M2M communications is expected to be an essential requirement in all future UDNs. In this paper, we aim to bridge the gaps between M2M communications and UDNs, which have commonly been treated as two separate issues in the literature. The paper begins with a brief introduction to M2M communications and UDNs, and then discusses the roles of M2M communications in future UDNs. We identify different ways to implement M2M communications in UDNs from a layered-architecture perspective, covering the physical, media access control, network, and application layers. Two other important issues, namely security and network virtualization, are also addressed. The paper concludes with a summary of identified research topics for future studies.

116 citations

Journal ArticleDOI
TL;DR: A contract-based incentive mechanism is proposed to motivate delay-tolerant machine-type communication devices to postpone their access demands in exchange for higher access opportunities, together with a long-term cross-layer online resource allocation approach based on Lyapunov optimization.
Abstract: Machine-to-machine communication with autonomous data acquisition and exchange plays a key role in realizing "control"-oriented tactile Internet applications such as industrial automation. In this paper, we develop a two-stage access control and resource allocation algorithm. In the first stage, we propose a contract-based incentive mechanism to motivate delay-tolerant machine-type communication devices to postpone their access demands in exchange for higher access opportunities. In the second stage, a long-term cross-layer online resource allocation approach is proposed based on Lyapunov optimization, which jointly optimizes rate control, power allocation, and channel selection without prior knowledge of channel states. In particular, the joint power allocation and channel selection problem is formulated as a two-dimensional matching problem and solved by a pricing-based stable matching approach. Finally, the performance of the proposed algorithm is verified under various simulation scenarios.
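A minimal drift-plus-penalty sketch in the spirit of the Lyapunov-based second stage described above, purely for illustration: each slot admits data to maximize V*utility - Q*arrivals (closed form for a log utility), then serves the queue with whatever a random channel allows. The channel model, parameter values, and bounds are assumptions; the paper's pricing and matching steps are not reproduced.

```python
import math, random

V, slots = 20.0, 5000
Q, total_util = 0.0, 0.0
for _ in range(slots):
    # Rate control: admit a in [0, 5] maximizing V*log(1+a) - Q*a (closed form).
    a = min(5.0, max(0.0, V / Q - 1.0)) if Q > 0 else 5.0
    total_util += math.log(1.0 + a)
    service = random.uniform(0.0, 4.0)      # stand-in for a channel-dependent service rate
    served = min(Q + a, service)
    Q = Q + a - served                      # queue (backlog) update

print(f"avg utility/slot: {total_util / slots:.2f}, final backlog: {Q:.2f}")
```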

112 citations

Journal ArticleDOI
TL;DR: In this article, the authors propose a system for out-patient (OP) centric Long Term Evolution-Advanced (LTE-A) network optimization, in which big data harvested from the OPs' medical records, along with current readings from their body-connected medical IoT sensors, is processed and analyzed to predict the likelihood of a life-threatening medical condition, for instance, an imminent stroke.
Abstract: Big data analytics is one of the state-of-the-art tools to optimize networks and transform them from merely being a blind tube that conveys data into a cognitive, conscious, and self-optimizing entity that can intelligently adapt to the needs of its users. This can be regarded as one of the highest forthcoming priorities of future networks. In this paper, we propose a system for Out-Patient (OP) centric Long Term Evolution-Advanced (LTE-A) network optimization. Big data harvested from the OPs' medical records, along with current readings from their body-connected medical IoT sensors, is processed and analyzed to predict the likelihood of a life-threatening medical condition, for instance, an imminent stroke. This prediction is used to ensure that the OP is assigned optimal LTE-A Physical Resource Blocks (PRBs) to transmit their critical data to their healthcare provider with minimal delay. To the best of our knowledge, this is the first time big data analytics has been utilized to optimize a cellular network in an OP-conscious manner. The PRB assignment is optimized using Mixed Integer Linear Programming (MILP) and a real-time heuristic. Two approaches are proposed, the Weighted Sum Rate Maximization (WSRMax) approach and the Proportional Fairness (PF) approach. The approaches increased the OPs' average SINR by 26.6% and 40.5%, respectively. The WSRMax approach increased the system's total SINR to a level higher than that of the PF approach; however, the PF approach reported higher SINRs for the OPs, better fairness, and a lower margin of error.
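An illustrative contrast between the two scheduling criteria named in the abstract (weighted sum-rate maximization versus proportional fairness) on a toy PRB assignment; the priorities, channel values, and greedy per-PRB rule below are invented for the example and do not reproduce the paper's MILP formulation or heuristic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_prbs = 5, 20
rate = np.log2(1.0 + rng.exponential(4.0, size=(n_users, n_prbs)))  # per-PRB achievable rates
priority = np.array([3.0, 1.0, 1.0, 1.0, 1.0])    # e.g. user 0 is the critical out-patient

# WSRMax-style: each PRB goes to the user with the largest priority-weighted rate.
wsr_totals = np.zeros(n_users)
for k in range(n_prbs):
    u = int(np.argmax(priority * rate[:, k]))
    wsr_totals[u] += rate[u, k]

# PF-style: weight each user's rate by the inverse of its running average rate.
pf_totals, avg = np.zeros(n_users), np.full(n_users, 1e-3)
for k in range(n_prbs):
    u = int(np.argmax(rate[:, k] / avg))
    pf_totals[u] += rate[u, k]
    avg *= 0.9                                    # all running averages decay ...
    avg[u] += 0.1 * rate[u, k]                    # ... and the served user's grows

print("WSRMax per-user rates:", np.round(wsr_totals, 1))
print("PF per-user rates:    ", np.round(pf_totals, 1))
```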

89 citations

Journal ArticleDOI
TL;DR: A two-stage 3-D matching algorithm is proposed that approaches the optimal performance with low complexity, maximizing the energy efficiency of M2M-TXs via the joint optimization of channel selection, peer discovery, power control, and time allocation.
Abstract: Energy harvesting-based cognitive machine-to-machine (EH-CM2M) communication has been proposed to overcome the problems of spectrum scarcity and limited battery capacity by enabling M2M transmitters (M2M-TXs) to harvest energy from ambient radio-frequency signals and to opportunistically reuse the resource blocks (RBs) allocated to cellular users (CUs). However, the complex interference scenarios and the stringent quality-of-service (QoS) requirements pose new challenges for resource allocation optimization. In this paper, we consider how to maximize the energy efficiency of M2M-TXs via the joint optimization of channel selection, peer discovery, power control, and time allocation. We propose a two-stage 3-D matching algorithm. In the first stage, M2M-TXs, M2M receivers (M2M-RXs), and RBs are temporarily matched together, and the joint power control and time allocation problem is then solved by combining alternating optimization (AO), nonlinear fractional programming, and linear programming to construct the preference lists. In the second stage, the joint channel selection and peer discovery problem is solved by the proposed pricing-based matching algorithm based on the established preference lists. Simulation results confirm that the proposed algorithm approaches the optimal performance with low complexity.
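A standalone Dinkelbach-style sketch of the fractional-programming step used for energy-efficiency maximization, reduced to a single link with a fixed channel; the link parameters below are assumed, and the paper's joint channel selection, peer discovery, and matching stages are not reproduced. The inner problem, maximize rate(p) - q*(p + circuit power), is solved in closed form and the efficiency ratio q is updated until it converges.

```python
import math

B, g, N0, p_circuit, p_max = 1.0, 0.8, 0.1, 0.2, 2.0   # assumed single-link parameters

def rate(p):
    return B * math.log2(1.0 + g * p / N0)

q = 0.0                                                # current energy-efficiency estimate
for _ in range(30):
    # Inner problem: argmax_p rate(p) - q*(p + p_circuit), solved in closed form.
    p = B / (q * math.log(2)) - N0 / g if q > 0 else p_max
    p = min(max(p, 0.0), p_max)
    q_new = rate(p) / (p + p_circuit)                  # update the efficiency ratio
    if abs(q_new - q) < 1e-9:
        break
    q = q_new

print(f"power ~ {p:.3f}, energy efficiency ~ {q:.3f} (normalized bits/J)")
```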

82 citations