
Showing papers on "Performance metric published in 2021"


Proceedings Article
12 Jan 2021
TL;DR: This work provides a benchmark with inference tasks and suitable performance metrics for ‘likelihood-free’ inference algorithms, with an initial selection of algorithms including recent approaches employing neural networks and classical Approximate Bayesian Computation methods.
Abstract: Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods. However, a public benchmark with appropriate performance metrics for such 'likelihood-free' algorithms has been lacking. This has made it difficult to compare algorithms and identify their strengths and weaknesses. We set out to fill this gap: We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms including recent approaches employing neural networks and classical Approximate Bayesian Computation methods. We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency. Neural network-based approaches generally exhibit better performance, but there is no uniformly best algorithm. We provide practical advice and highlight the potential of the benchmark to diagnose problems and improve algorithms. The results can be explored interactively on a companion website. All code is open source, making it possible to contribute further benchmark tasks and inference algorithms.
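As an illustrative sketch of the classical Approximate Bayesian Computation family included in the benchmark (a minimal rejection-ABC toy, not the benchmark's actual tasks or code; all names here are assumptions):

```python
import numpy as np

def abc_rejection(simulator, prior_sample, observed, n_draws=20000, eps=0.2):
    """Classical rejection ABC: draw a parameter from the prior, simulate
    data with it (no likelihood evaluation needed), and keep the parameter
    only if the simulated summary lands within eps of the observation."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulator(theta) - observed) < eps:
            accepted.append(theta)
    return np.array(accepted)      # samples from the approximate posterior

# Toy task: infer the mean of a unit-variance Gaussian from 5 samples.
rng = np.random.default_rng(0)
posterior = abc_rejection(
    simulator=lambda t: rng.normal(t, 1.0, size=5).mean(),
    prior_sample=lambda: rng.uniform(-5.0, 5.0),
    observed=1.0,
)
```

The accepted draws concentrate near the true mean; shrinking `eps` tightens the approximation at the price of more rejected simulations, which is exactly the sample-efficiency trade-off the benchmark measures.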

69 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an energy-efficient resource allocation (RA) problem in NOMA-backscatter communication networks with QoS guarantee, where the transmit power of the base station and the reflection coefficient of the backscatter device are jointly optimized.
Abstract: Energy efficiency (EE) is an important performance metric in communication systems. However, to the best of our knowledge, the energy-efficient resource allocation (RA) problem in non-orthogonal multiple access enabled backscatter communication networks (NOMA-BackComNet) comprehensively considering the user’s quality of service (QoS) has not been investigated. In this letter, we present the first attempt to solve the EE-based RA problem for NOMA-BackComNet with QoS guarantee. The objective is to maximize the EE of users subject to the QoS requirements of users, the decoding order of successive interference cancellation and the reflection coefficient (RC) constraint, where the transmit power of the base station and the RC of the backscatter device are jointly optimized. To solve this non-convex problem, we develop a novel iteration algorithm by using Dinkelbach’s method and the quadratic transformation approach. Simulation results verify the effectiveness of the proposed scheme in improving the EE by comparing it with the other schemes.
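Dinkelbach's method, the core tool of the iteration algorithm above, turns a ratio maximization into a sequence of easier parametric problems. A minimal sketch on a toy single-link energy-efficiency problem (the grid-search inner solver and the rate/power models are illustrative assumptions, not the letter's actual formulation):

```python
import numpy as np

def dinkelbach(rate, cost, p_grid, tol=1e-8, max_iter=100):
    """Dinkelbach's method for the fractional program max_p rate(p)/cost(p):
    repeatedly solve the parametric problem max_p rate(p) - lam * cost(p)
    (here by grid search) and update lam with the achieved ratio."""
    lam, p_star = 0.0, p_grid[0]
    for _ in range(max_iter):
        p_star = p_grid[np.argmax(rate(p_grid) - lam * cost(p_grid))]
        f, g = rate(p_star), cost(p_star)
        if f - lam * g < tol:          # parametric optimum near 0 => converged
            break
        lam = f / g
    return p_star, lam                 # EE-optimal power and the achieved EE

# Toy single-link energy efficiency: Shannon rate over total consumed
# power (transmit power plus a fixed circuit power of 1 W).
p_grid = np.linspace(1e-3, 10.0, 4001)
p_opt, ee = dinkelbach(lambda p: np.log2(1.0 + 5.0 * p),
                       lambda p: p + 1.0, p_grid)
```

The sequence of `lam` values increases monotonically to the optimal energy efficiency, which is why the method is a standard workhorse for EE-based resource allocation.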

68 citations


Journal ArticleDOI
TL;DR: The key mechanism of the proposed JTARO strategy is to employ the optimization technique to jointly optimize the target-to-radar assignment, revisit time control, bandwidth, and dwell time allocation subject to several resource constraints, while achieving better tracking accuracies of multiple targets and low probability of intercept (LPI) performance of phased array radar network.
Abstract: In this article, a joint target assignment and resource optimization (JTARO) strategy is proposed for the application of multitarget tracking in phased array radar network system. The key mechanism of our proposed JTARO strategy is to employ the optimization technique to jointly optimize the target-to-radar assignment, revisit time control, bandwidth, and dwell time allocation subject to several resource constraints, while achieving better tracking accuracies of multiple targets and low probability of intercept (LPI) performance of phased array radar network. The analytical expression for Bayesian Cramer–Rao lower bound with the aforementioned adaptable parameters is calculated and subsequently adopted as the performance metric for multitarget tracking. After problem partition and reformulation, an efficient three-stage solution methodology is developed to resolve the underlying mixed-integer, nonlinear, and nonconvex optimization problem. To be specific, in Step 1, the revisit time for each target is determined. In Step 2, we implement the joint signal bandwidth and dwell time allocation for fixed target-to-radar assignments, which combine the cyclic minimization algorithm and interior point method. In Step 3, the optimal target-to-radar assignments are obtained, which results in the minimization of both the tracking accuracy for multiple targets and the total dwell time consumption of the network system. Simulation results are provided to demonstrate the advantages of the presented JTARO strategy, in terms of the achievable multitarget tracking accuracy and LPI performance of phased array radar network.

67 citations


Journal ArticleDOI
TL;DR: This article first analyzes access control, packet collisions, and packet errors in mMTC, derives the closed-form expression of the average age of information for all MTCDs as the performance metric, and then proposes a joint access control, frame division, and subchannel allocation scheme to improve the overall status update performance.
Abstract: In this article, we investigate the performance of massive machine type communications (mMTC) in status update systems, where massive machine type communication devices (MTCDs) send status packets to the BS for system monitoring. However, massive MTCDs sending status packets to the BS will cause severe packet collisions, which will have a negative impact on status update performance. In this case, it is necessary to carry out reasonable access control and resource allocation scheme to improve the status update performance for mMTC. In this article, taking the features of mMTC into consideration, we first analyze access control, packet collisions and packet errors in mMTC respectively, and derive the closed-form expression of the average age of information for all MTCDs as the performance metric, and then propose a joint access control, frame division and subchannel allocation scheme to improve the overall status update performance. Simulation and numerical results verify the correctness of theoretical results and show that our proposed scheme can achieve almost the same performance as the exhaustive search method and outperforms benchmark schemes.

48 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived the stability region of a WNCS, where a controller transmits quantized and encoded control codewords to a remote actuator through a wireless channel, and adopted a detailed model of the wireless communication system, which jointly considered the interrelated communication parameters.
Abstract: Wireless networked control systems (WNCSs) provide a key enabling technique for Industrial Internet of Things (IIoT). However, in the literature of WNCSs, most of the research focuses on the control perspective and has considered oversimplified models of wireless communications that do not capture the key parameters of a practical wireless communication system, such as latency, data rate, and reliability. In this article, we focus on a WNCS, where a controller transmits quantized and encoded control codewords to a remote actuator through a wireless channel, and adopt a detailed model of the wireless communication system, which jointly considers the interrelated communication parameters. We derive the stability region of the WNCS. If and only if the tuple of the communication parameters lies in the region, the average cost function, i.e., a performance metric of the WNCS, is bounded. We further obtain a necessary and sufficient condition under which the stability region is $n$ -bounded, where $n$ is the control codeword blocklength. We also analyze the average cost function of the WNCS. Such analysis is nontrivial because the finite-bit control-signal quantizer introduces a nonlinear and discontinuous quantization function that makes the performance analysis very difficult. We derive tight upper and lower bounds on the average cost function in terms of latency, data rate, and reliability. Our analytical results provide important insights into the design of the optimal parameters to minimize the average cost within the stability region.

39 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide the first study of the AoI of a scheme in this family, namely irregular repetition slotted ALOHA (IRSA), by means of a Markovian analysis, and derive a compact closed form expression for its stationary distribution.
Abstract: Age of information (AoI) is gaining attention as a valuable performance metric for many IoT systems, in which a large number of devices report time-stamped updates to a central gateway. This is the case, for instance, of remote sensing, monitoring, or tracking, with broad applications in the industrial, vehicular, and environmental domain. In these settings, AoI provides insights that are complementary to those offered by throughput or latency, capturing the ability of the system to maintain an up-to-date view of the status of each transmitting device. From this standpoint, while a good understanding of the metric has been reached for point-to-point links, relatively little attention has been devoted to the impact that link layer solutions employed in IoT systems may have on AoI. In particular, no result is available for modern random access protocols, which have recently emerged as promising solutions to support massive machine-type communications. To start addressing this gap we provide in this paper the first study of the AoI of a scheme in this family, namely irregular repetition slotted ALOHA (IRSA). By means of a Markovian analysis, we track the AoI evolution at the gateway, prove that the process is ergodic, and derive a compact closed form expression for its stationary distribution. Leaning on this, we compute exact formulations for the average AoI and the age violation probability. The study reveals non-trivial design trade-offs for IRSA and highlights the key role played by the protocol operating frame size. Moreover, a comparison with the performance of a simpler slotted ALOHA strategy highlights a remarkable potential for modern random access schemes in terms of information freshness.
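The age evolution tracked in the Markovian analysis can be illustrated with a much simpler memoryless link (an assumed toy model, not IRSA itself): the age at the gateway grows by one each slot and resets to 1 when a fresh update is delivered.

```python
import numpy as np

def simulate_aoi(p_success, n_slots=200_000, seed=0):
    """Slotted status-update link: in each slot a fresh update from the
    device is delivered with probability p_success, resetting the age at
    the gateway to 1; otherwise the age grows by one slot."""
    rng = np.random.default_rng(seed)
    age = 0
    ages = np.empty(n_slots, dtype=np.int64)
    for t in range(n_slots):
        age = 1 if rng.random() < p_success else age + 1
        ages[t] = age
    return ages.mean()

# With per-slot delivery probability p, the stationary average AoI of
# this memoryless link is 1/p slots; here p = 0.2 gives roughly 5.
mean_aoi = simulate_aoi(p_success=0.2)
```

For IRSA the delivery process depends on the frame size and load, which is what makes the frame length a key design knob in the paper's analysis.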

38 citations


Proceedings ArticleDOI
14 Jun 2021
TL;DR: TENET as mentioned in this paper is a framework that models hardware dataflow of tensor applications using relation-centric notation, which is more expressive than the compute-centric and data-centric notations by using more sophisticated affine transformations.
Abstract: Accelerating tensor applications on spatial architectures provides high performance and energy-efficiency, but requires accurate performance models for evaluating various dataflow alternatives. Such modeling relies on the notation of tensor dataflow and the formulation of performance metrics. Recently proposed compute-centric and data-centric notations describe the dataflow using imperative directives. However, these two notations are less expressive and thus lead to limited optimization opportunities and inaccurate performance models. In this paper, we propose TENET, a framework that models the hardware dataflow of tensor applications. We start by introducing a relation-centric notation, which formally describes the hardware dataflow for tensor computation. The relation-centric notation specifies the hardware dataflow, PE interconnection, and data assignment in a uniform manner using relations. The relation-centric notation is more expressive than the compute-centric and data-centric notations by using more sophisticated affine transformations. Another advantage of the relation-centric notation is that it inherently supports accurate metrics estimation, including data reuse, bandwidth, latency, and energy. TENET computes each performance metric by counting the relations using integer set structures and operators. Overall, TENET achieves 37.4% and 51.4% latency reduction for CONV and GEMM kernels, respectively, compared with the state-of-the-art data-centric notation, by identifying more sophisticated hardware dataflows.
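The idea of deriving a metric by counting elements of a relation can be shown with a toy example (a deliberate simplification: TENET uses symbolic integer-set libraries rather than explicit enumeration, and the mapping below is assumed, not taken from the paper):

```python
import itertools

# A GEMM C[i,j] += A[i,k] * B[k,j] of size I x J x K, mapped so that the
# PE at (i, j) iterates over k. Enumerating the (iteration -> A-element)
# relation and counting total vs. distinct tuples yields the reuse factor.
I, J, K = 4, 4, 4
reads_of_A = [(i, k) for i, j, k in itertools.product(range(I), range(J), range(K))]
total_reads = len(reads_of_A)                # one read per PE per k-step: I*J*K
unique_elems = len(set(reads_of_A))          # distinct A elements touched: I*K
reuse_factor = total_reads // unique_elems   # each A[i,k] is read by J PEs
```

For large tensors, explicit enumeration is infeasible; counting lattice points of the affine relation symbolically gives the same numbers in closed form, which is the estimation machinery the paper builds on.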

38 citations


Journal ArticleDOI
TL;DR: This work proposes a new aggregation-based iterative algorithm to calculate the performance metrics of a multi-machine serial line by representing it as a group of virtual two-machine lines obtained through a throughput-equivalent aggregation procedure.
Abstract: Performance metric calculation is one of the most important problems in production system research. In this paper, we consider serial production lines with finite buffers and machines following the...
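The quantity such aggregation methods approximate can be measured directly with a small slot-based simulation (an illustrative Bernoulli-machine model with assumed parameters, not the paper's aggregation algorithm):

```python
import numpy as np

def serial_line_throughput(p, buf_cap, n_slots=100_000, seed=0):
    """Slot-based simulation of a Bernoulli serial line: machine i is up
    with probability p[i] in a slot, is starved when its input buffer is
    empty, and is blocked when its output buffer is full."""
    rng = np.random.default_rng(seed)
    m = len(p)
    buffers = [0] * (m - 1)            # finite buffer after each machine
    produced = 0
    for _ in range(n_slots):
        up = rng.random(m) < np.asarray(p)
        for i in reversed(range(m)):   # downstream first: freed space is seen
            starved = i > 0 and buffers[i - 1] == 0
            blocked = i < m - 1 and buffers[i] >= buf_cap
            if up[i] and not starved and not blocked:
                if i > 0:
                    buffers[i - 1] -= 1
                if i < m - 1:
                    buffers[i] += 1
                else:
                    produced += 1      # last machine releases a finished part
    return produced / n_slots

throughput = serial_line_throughput(p=[0.9, 0.85, 0.9], buf_cap=3)
```

Starvation and blocking pull the throughput below the slowest machine's isolated rate, which is exactly the loss that analytic two-machine building blocks are designed to capture without simulation.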

29 citations


Journal ArticleDOI
TL;DR: This article analyzes the joint optimization of various unmanned aerial vehicle (UAV) system parameters, including the UAV’s position, height, and beamwidth, as well as the resource allocation for uplink communications between ground Internet-of-Things (IoT) devices and a UAV employing short ultra-reliable and low-latency communication (URLLC) data packets.
Abstract: Efficient resource allocation can maximize power efficiency, which is an important performance metric in future fifth-generation (5G) communications. The minimization of sum uplink power in order to enable green communications while concurrently fulfilling the strict demands of ultrareliability for short packets is an essential and central challenge that needs to be addressed in the design of 5G and subsequent wireless communication systems. To address this challenge, this article analyzes the joint optimization of various unmanned aerial vehicle (UAV) system parameters, including the UAV’s position, height, beamwidth, and the resource allocation for uplink communications between ground Internet-of-Things (IoT) devices and a UAV employing short ultrareliable and low-latency (URLLC) data packets. Toward achieving this task, we propose a perturbation-based iterative optimization scheme to minimize the sum uplink power in order to determine the optimal position for the UAV, its height, the beamwidth of its antenna, and the blocklength allocated for each IoT device. It is shown that the proposed algorithm has lower time complexity, yields better performance than other benchmark algorithms, and achieves similar performance to exhaustive search. Moreover, the results also demonstrate that Shannon’s formula is not an optimum choice for modeling sum power for short packets as it can significantly underestimate the sum power, where our calculations show that there is an average difference of 47.51% for the given parameters between our proposed approach and Shannon’s formula. Finally, our results confirm that the proposed algorithm allows ultrahigh reliability for all the users and converges rapidly.

28 citations


Posted ContentDOI
17 May 2021-bioRxiv
TL;DR: In this article, the effects of age range, sample size, and age-bias correction on the model performance metrics r, R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE) were assessed.
Abstract: Estimating age based on neuroimaging-derived data has become a popular approach to developing markers for brain integrity and health. While a variety of machine-learning algorithms can provide accurate predictions of age based on brain characteristics, there is significant variation in model accuracy reported across studies. We predicted age based on neuroimaging data in two population-based datasets, and assessed the effects of age range, sample size, and age-bias correction on the model performance metrics r, R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). The results showed that these metrics vary considerably depending on cohort age range; r and R2 values are lower when measured in samples with a narrower age range. RMSE and MAE are also lower in samples with a narrower age range due to smaller errors/brain age delta values when predictions are closer to the mean age of the group. Across subsets with different age ranges, performance metrics improve with increasing sample size. Performance metrics further vary depending on prediction variance as well as mean age difference between training and test sets, and age-bias corrected metrics indicate high accuracy - also for models showing poor initial performance. In conclusion, performance metrics used for evaluating age prediction models depend on cohort and study-specific data characteristics, and cannot be directly compared across different studies. Since age-bias corrected metrics in general indicate high accuracy, even for poorly performing models, inspection of uncorrected model results provides important information about underlying model attributes such as prediction variance.
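The study's central observation, that r and R2 shrink with the cohort's age range even when the raw prediction error is unchanged, is easy to reproduce with synthetic data (cohort sizes and the 5-year error are assumptions for illustration):

```python
import numpy as np

def age_prediction_metrics(age, pred):
    """The four model-performance metrics examined in the study."""
    resid = pred - age
    r = np.corrcoef(age, pred)[0, 1]
    r2 = 1.0 - np.sum(resid**2) / np.sum((age - age.mean())**2)
    return {"r": r, "R2": r2,
            "RMSE": np.sqrt(np.mean(resid**2)),
            "MAE": np.mean(np.abs(resid))}

# Two cohorts sharing the exact same prediction error, but with a wide
# (20-80) versus narrow (45-55) age range.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 5.0, 2000)
wide = rng.uniform(20.0, 80.0, 2000)
narrow = rng.uniform(45.0, 55.0, 2000)
m_wide = age_prediction_metrics(wide, wide + noise)
m_narrow = age_prediction_metrics(narrow, narrow + noise)
# r and R2 drop sharply in the narrow cohort, although RMSE and MAE,
# which track the raw error, are identical by construction.
```

This is why the paper argues the metrics cannot be compared directly across cohorts with different age distributions.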

26 citations


Journal ArticleDOI
TL;DR: This paper investigates how multiple IRSs affect the performance of multi-user full-duplex communication systems under hardware impairment at each node, wherein the base station (BS) and the uplink users are subject to maximum transmission power constraints.
Abstract: Smart and reconfigurable wireless communication environments can be established by exploiting well-designed intelligent reflecting surfaces (IRSs) to shape the communication channels. In this paper, we investigate how multiple IRSs affect the performance of multi-user full-duplex communication systems under hardware impairment at each node, wherein the base station (BS) and the uplink users are subject to maximum transmission power constraints. Firstly, the uplink-downlink system weighted sum-rate (SWSR) is derived as a system performance metric. Then, we formulate the resource allocation design to maximize the SWSR as an optimization problem which jointly optimizes the beamforming and the combining vectors at the BS, the transmit powers of the uplink users, and the phase shifts of multiple IRSs. Since the SWSR optimization problem is non-convex, an efficient iterative alternating approach is proposed to obtain a suboptimal solution for the design problem. In particular, we first reformulate the main problem into an equivalent weighted minimum mean-square-error form and then transform it into several convex sub-problems which can be analytically solved for given phase shifts. Then, the IRSs phases are optimized via a gradient ascent-based algorithm. Finally, numerical results are presented to clarify how multiple IRSs enhance the performance metric under hardware impairment.

Journal ArticleDOI
TL;DR: This study aims to present an open queuing network (OQN) model and a software-based tool that can promptly estimate some important performance measures from a pre-defined SBS/RS warehouse design.

Proceedings ArticleDOI
14 Aug 2021
TL;DR: In this article, Deep Neural Auctions (DNAs) are proposed to enable end-to-end auction learning by proposing a differentiable model to relax the discrete sorting operation, a key component in auctions.
Abstract: In e-commerce advertising, it is crucial to jointly consider various performance metrics, e.g., user experience, advertiser utility, and platform revenue. Traditional auction mechanisms, such as GSP and VCG auctions, can be suboptimal due to their fixed allocation rules to optimize a single performance metric (e.g., revenue or social welfare). Recently, data-driven auctions, learned directly from auction outcomes to optimize multiple performance metrics, have attracted increasing research interest. However, the procedure of auction mechanisms involves various discrete calculation operations, making it challenging to remain compatible with continuous optimization pipelines in machine learning. In this paper, we design Deep Neural Auctions (DNAs) to enable end-to-end auction learning by proposing a differentiable model to relax the discrete sorting operation, a key component in auctions. We optimize the performance metrics by developing deep models to efficiently extract contexts from auctions, providing rich features for auction design. We further integrate the game theoretical conditions within the model design, to guarantee the stability of the auctions. DNAs have been successfully deployed in the e-commerce advertising system at Taobao. Experimental evaluation results on both a large-scale dataset and an online A/B test demonstrated that DNAs significantly outperformed other mechanisms widely adopted in industry.
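One standard way to relax the discrete sorting operation is the NeuralSort construction of Grover et al. (2019), shown below as a sketch; this is a representative relaxation, not necessarily the exact differentiable model used in DNAs, and the bid values are made up:

```python
import numpy as np

def soft_sort_matrix(s, tau=1.0):
    """NeuralSort-style continuous relaxation of the sorting operator:
    returns a row-stochastic matrix that tends to the descending-sort
    permutation matrix as the temperature tau -> 0."""
    s = np.asarray(s, dtype=float)
    n = s.size
    A = np.abs(s[:, None] - s[None, :])          # pairwise |s_i - s_j|
    B = A @ np.ones(n)                           # row sums of A
    scaling = n + 1 - 2 * np.arange(1, n + 1)    # n-1, n-3, ..., 1-n
    logits = (scaling[:, None] * s[None, :] - B[None, :]) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # softmax per row

bids = np.array([0.3, 2.0, 1.1, 0.7])
P = soft_sort_matrix(bids, tau=0.05)
soft_sorted = P @ bids    # approximately bids sorted in descending order
```

Because every entry of `P` is a smooth function of the scores, gradients flow through the "sort", which is what makes end-to-end auction learning possible.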

Journal ArticleDOI
Linnan Wang1, Saining Xie2, Teng Li2, Rodrigo Fonseca1, Yuandong Tian2 
TL;DR: In this paper, the authors proposed Latent Action Neural Architecture Search (LaNAS), which learns actions to recursively partition the search space into good or bad regions that contain networks with similar performance metrics.
Abstract: Neural Architecture Search (NAS) has emerged as a promising technique for automatic neural network design. However, existing MCTS-based NAS approaches often utilize a manually designed action space, which is not directly related to the performance metric to be optimized (e.g., accuracy), leading to sample-inefficient explorations of architectures. To improve the sample efficiency, this paper proposes Latent Action Neural Architecture Search (LaNAS), which learns actions to recursively partition the search space into good or bad regions that contain networks with similar performance metrics. During the search phase, as different action sequences lead to regions with different performance, the search efficiency can be significantly improved by biasing towards the good regions. On three NAS tasks, empirical results demonstrate that LaNAS is at least an order of magnitude more sample-efficient than baseline methods including evolutionary algorithms, Bayesian optimization, and random search. When applied in practice, both one-shot and regular LaNAS consistently outperform existing results. Particularly, LaNAS achieves 99.0% accuracy on CIFAR-10 and 80.8% top-1 accuracy at 600 MFLOPS on ImageNet in only 800 samples, significantly outperforming AmoebaNet with 33x fewer samples. Our code is publicly available at https://github.com/facebookresearch/LaMCTS.

Journal ArticleDOI
TL;DR: This paper designs RS precoders for an overloaded multicarrier multigroup multicast downlink system, and analyses the error performance, showing that the RS precoder outperforms its counterparts in terms of the fairness rate, with Gaussian signalling.
Abstract: Employing multi-antenna rate-splitting (RS) at the transmitter and successive interference cancellation (SIC) at the receivers, has emerged as a powerful transceiver strategy for multi-antenna networks. In this paper, we design RS precoders for an overloaded multicarrier multigroup multicast downlink system, and analyse the error performance. RS splits each group message into degraded and designated parts. The degraded parts are combined and encoded into a degraded stream, while the designated parts are encoded in designated streams. All streams are precoded and superimposed in a non-orthogonal fashion before being transmitted over the same time-frequency resource. We first derive the optimized RS-based precoder, where the design philosophy is to achieve a fair user group rate for the considered scenario by solving a joint max-min fairness and sum subcarrier rate optimization problem. Comparing with other precoding schemes including the state-of-the-art multicast transmission scheme, we show that the RS precoder outperforms its counterparts in terms of the fairness rate, with Gaussian signalling, i.e., idealistic assumptions. Then we integrate the optimized RS precoder into a practical transceiver design for link-level simulations (LLS), with realistic assumptions such as finite alphabet inputs and finite code block length. The performance metric becomes the coded bit error rate (BER). In the system under study, low-density parity-check (LDPC) encoding is applied at the transmitter, and iterative soft-input soft-output detection and decoding are employed at the successive interference cancellation based receiver, which completes the LLS processing chain and helps to generate the coded error performance results which validate the effectiveness of the proposed RS precoding scheme compared with benchmark schemes, in terms of the error performance. 
More importantly, we unveil the corresponding relations between the achievable rate in the idealistic case and coded BER in the realistic case, e.g., with finite alphabet input, for the RS precoded multicarrier multigroup multicast scenario.

Journal ArticleDOI
TL;DR: In this article, a general analytical framework is proposed to approximate the outage probability and effective throughput for short-packet communications to provide both reliability and security guarantees simultaneously with an eavesdropper.
Abstract: Exploiting short packets for communications is one of the key technologies for realizing emerging application scenarios such as massive machine type communications (mMTC) and ultra-reliable low-latency communications (uRLLC). In this paper, we investigate short-packet communications to provide both reliability and security guarantees simultaneously with an eavesdropper. In particular, an outage probability considering both reliability and secrecy is defined according to the characteristics of short-packet transmission, while the effective throughput in the sense of outage is established as the performance metric. Specifically, a general analytical framework is proposed to approximate the outage probability and effective throughput. Furthermore, closed-form expressions for these quantities are derived for the high signal-to-noise ratio (SNR) regime. Both effective throughput obtained via a general analytical framework and a high-SNR approximation are maximized under an outage-probability constraint by searching for the optimal blocklength. Numerical results verify the feasibility and accuracy of the proposed analytical framework, and illustrate the influence of the main system parameters on the blocklength and system performance under the outage-probability constraint.
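The search for an optimal blocklength under an outage constraint can be sketched with the standard normal approximation for finite-blocklength coding over an AWGN channel (an assumption for illustration; the paper's secrecy-aware outage definition and analytical framework are more involved, and the SNR/packet-size values below are made up):

```python
import math

def packet_error_prob(snr, n, k):
    """Normal approximation (Polyanskiy et al.) to the error probability
    of a k-bit packet sent over n uses of an AWGN channel:
    eps ~ Q((n*C - k + 0.5*log2(n)) / sqrt(n*V))."""
    C = math.log2(1.0 + snr)                                  # capacity, bits/use
    V = (1.0 - (1.0 + snr) ** -2) * math.log2(math.e) ** 2    # channel dispersion
    arg = (n * C - k + 0.5 * math.log2(n)) / math.sqrt(n * V)
    return 0.5 * math.erfc(arg / math.sqrt(2.0))              # Gaussian Q-function

def best_blocklength(snr, k, eps_max, n_range):
    """Pick the blocklength maximizing effective throughput (k/n)(1 - eps)
    under the outage-probability constraint eps <= eps_max."""
    best = None
    for n in n_range:
        eps = packet_error_prob(snr, n, k)
        if eps <= eps_max:
            tput = (k / n) * (1.0 - eps)
            if best is None or tput > best[1]:
                best = (n, tput, eps)
    return best

n_opt, tput, eps = best_blocklength(snr=2.0, k=100, eps_max=1e-9,
                                    n_range=range(80, 1000))
```

Short blocklengths violate the outage constraint while long ones waste channel uses, so the throughput-optimal blocklength sits near the boundary where the constraint becomes feasible.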

Journal ArticleDOI
24 Jun 2021
TL;DR: In this paper, the authors investigate the policies that minimize the average AoI, formulating a Markov decision process (MDP) to choose the optimal actions of either updating from one of the sources or remaining idle, based on the current energy level and the AoI at the monitoring node.
Abstract: Age of information (AoI) is a key performance metric for the Internet of things (IoT). Timely status updates are essential for many IoT applications; however, they often suffer from harsh energy constraints and the unreliability of underlying information sources. To overcome these unpredictabilities, one can employ multiple sources that track the same process of interest, but with different energy costs and reliabilities. We consider an energy-harvesting (EH) monitoring node equipped with a finite-size battery and collecting status updates from multiple heterogeneous information sources. We investigate the policies that minimize the average AoI, formulating a Markov decision process (MDP) to choose the optimal actions of either updating from one of the sources or remaining idle, based on the current energy level and the AoI at the monitoring node. We analyze the structure of the optimal solution for different cost/AoI distribution combinations, and compare its performance with an aggressive policy that transmits whenever possible.
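The structure of such a policy can be explored with a toy MDP (all parameters below are invented for illustration, and discounted value iteration is used as a simplification of the average-cost formulation analysed in the paper):

```python
import itertools
import numpy as np

# Hypothetical instance: battery levels 0..B, ages 1..A_MAX, and two
# heterogeneous sources given as (energy cost, delivery probability).
B, A_MAX = 5, 20
SOURCES = [(1, 0.5), (2, 0.9)]
P_HARVEST = 0.4                        # chance of harvesting 1 unit per slot
GAMMA = 0.95                           # discount factor

def q_value(V, b, a, action):
    """One-step cost (the current AoI) plus discounted expected cost-to-go."""
    def nxt(b2, a2):                   # average over the harvesting event
        a2 = min(a2, A_MAX)
        return ((1 - P_HARVEST) * V[b2, a2]
                + P_HARVEST * V[min(b2 + 1, B), a2])
    if action is None:                 # remain idle
        return a + GAMMA * nxt(b, a + 1)
    cost, p = action
    if cost > b:                       # infeasible: not enough energy
        return np.inf
    return a + GAMMA * (p * nxt(b - cost, 1) + (1 - p) * nxt(b - cost, a + 1))

V = np.zeros((B + 1, A_MAX + 1))
for _ in range(400):                   # value iteration to convergence
    V_new = np.zeros_like(V)
    for b, a in itertools.product(range(B + 1), range(1, A_MAX + 1)):
        V_new[b, a] = min(q_value(V, b, a, act) for act in [None] + SOURCES)
    V = V_new

def best_action(b, a):
    return min([None] + SOURCES, key=lambda act: q_value(V, b, a, act))
```

The resulting policy is threshold-like: with an empty battery the node must idle, while with a full battery and a stale view it pays the energy cost to update, mirroring the structure discussed in the paper.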

Journal ArticleDOI
TL;DR: In this paper, the authors formulate a joint optimization problem in MEC-assisted V2I networks and present a multi-objective optimization scheme that solves it by adjusting the minimum contention window under the IEEE 802.11 DCF mode according to the velocities of vehicles.
Abstract: Platooning strategy is an important part of autonomous driving technology. Due to the limited resources of autonomous vehicles in platoons, mobile edge computing (MEC) is usually used to assist vehicles in platoons in obtaining useful information, increasing their safety. Specifically, vehicles usually adopt the IEEE 802.11 distributed coordination function (DCF) mechanism to transmit a large amount of data to the base station (BS) through vehicle-to-infrastructure (V2I) communications, where the useful information can be extracted by the edge server connected to the BS and then sent back to the vehicles to make correct decisions in time. However, vehicles may be moving on different lanes with different velocities, which incurs unfair access due to the characteristics of platoons, i.e., vehicles on different lanes transmit different amounts of data to the BS when they pass through the coverage of the BS, which also results in different amounts of useful information received by various vehicles. Moreover, age of information (AoI) is an important performance metric to measure the freshness of the data. A large average age of data implies that useful information is not received in time. It is necessary to design an access scheme that jointly optimizes fairness and data freshness. In this paper, we formulate a joint optimization problem in MEC-assisted V2I networks and present a multi-objective optimization scheme to solve the problem by adjusting the minimum contention window under the IEEE 802.11 DCF mode according to the velocities of vehicles. The effectiveness of the scheme has been demonstrated by simulation.
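The fairness intuition behind velocity-dependent contention windows can be sketched in a few lines (the parameter values and the inverse-velocity scaling rule are assumptions for illustration, not the paper's optimized scheme):

```python
# A vehicle's dwell time inside the BS coverage scales as 1/velocity, so
# scaling the minimum contention window the same way roughly equalizes the
# expected number of access opportunities per pass across lanes.
BASE_CW, BASE_SPEED = 64, 20.0     # CW_min tuned for the slowest lane (m/s)

def cw_min(velocity):
    """Smaller CW_min for faster lanes, floored at 8 slots."""
    return max(8, round(BASE_CW * BASE_SPEED / velocity))

lane_speeds = [20.0, 25.0, 33.0]   # m/s, slow to fast
windows = [cw_min(v) for v in lane_speeds]   # -> [64, 51, 39]
```

A smaller window lets fast-lane vehicles win the channel more often during their shorter pass, trading per-slot fairness for per-pass fairness.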

Journal ArticleDOI
TL;DR: This study presents a novel approach to identify the most suitable configuration in an RMS, addressing a major challenge faced by the industry.
Abstract: This paper proposes a new metric for product flow configuration selection for reconfigurable manufacturing system (RMS) that considers nine industrially relevant important factors. The metrics for ...

Journal ArticleDOI
TL;DR: The pyUPMASK algorithm, as presented in this paper, is an unsupervised clustering method for stellar clusters that builds upon the original UPMASK package; it is written entirely in Python and is made available through a public repository.
Abstract: Aims. We present pyUPMASK, an unsupervised clustering method for stellar clusters that builds upon the original UPMASK package. Its general approach makes it applicable to analyses that deal with binary classes of any kind, as long as the fundamental hypotheses are met. The code is written entirely in Python and is made available through a public repository. Methods. The core of the algorithm follows the method developed in UPMASK but introduces several key enhancements. These enhancements not only make pyUPMASK more general, they also improve its performance considerably. Results. We thoroughly tested the performance of pyUPMASK on 600 synthetic clusters, affected by varying degrees of contamination by field stars. To assess the performance, we employed six different statistical metrics that measure the accuracy of probabilistic classification. Conclusions. Our results show that pyUPMASK outperforms UPMASK on every statistical performance metric, while still managing to be many times faster.

Journal ArticleDOI
01 Sep 2021
TL;DR: A sensor node redeployment-based shrewd mechanism (NRSM) is proposed, in which new intended positions for sensor nodes are identified in the coverage area and the moving distance between the initial and intended node positions is shrewdly reduced.
Abstract: Despite numerous advantages, the challenges of wireless sensor communication remain open, and a continuous effort is being applied to tackle the unavoidable conditions affecting wireless network coverage. The haphazard deployment of sensor nodes makes the tribulation queue longer day by day, which eventually has a great impact on the sensor coverage range. To address the issues related to network coverage and needless energy wastage, a sensor node redeployment-based shrewd mechanism (NRSM) is proposed, in which new intended positions for sensor nodes are identified in the coverage area. The proposed algorithm operates in two phases: in the first phase, it locates the intended node positions through a Dissimilitude Enhancement Scheme (DES) and moves the nodes to their new positions; the second phase, called Depuration, shrewdly reduces the moving distance between the initial and intended node positions. Further, different variation factors of NRSM, such as loudness, pulse emission rate, maximum frequency, and sensing radius, have been explored and the related optimized parameters identified. The performance has been meticulously analyzed through simulation rounds in Matlab and compared with state-of-the-art algorithms such as the Fruit Fly Optimization Algorithm (FOA), the Jenga-inspired optimization algorithm (JOA) and the Bacterial Foraging Algorithm (BFA) in terms of mean coverage range, computation time, standard deviation and network energy diminution. The performance metrics vouch for the effectiveness of the proposed algorithm as compared to FOA, JOA and BFA.

Journal ArticleDOI
28 Sep 2021-Energies
TL;DR: A technique for controlling the distribution network, based on the factoring-in of the type of damage during an emergency in real time, as well as a technique for arranging the measuring devices and the creation of an information and communication network are proposed.
Abstract: At present, the entire world is moving towards digitalization, including in the electric power industry. Digitalization is in its heyday, and many articles and reports are devoted to the topic. At the same time, the least digitalized of the electrical networks are distribution networks, which account for a very large share of electric power systems. The article proposes a methodology for creating a flexible distribution network based on the use of digital technology. Additionally, we elaborate a methodology for identifying and collecting the information necessary to create digital networks, develop ways to adapt the required equipment, and suggest methods for recognizing certain short circuits. Furthermore, we address the reliability of the information obtained from digital devices and develop a technique for arranging the devices to cover the entire network, as required to improve the power system protection of electrical power distribution networks. The above measures make it possible to ensure the flexibility of the active distribution network, as well as to adjust the actuation parameters of the power system protection depending on changes in external conditions and in the event of emergencies. We propose a technique for controlling the distribution network based on factoring in the type of damage during an emergency in real time, as well as a technique for arranging the measuring devices and creating an information and communication network. We provide recommendations for the design and operation of electric power distribution networks with digital network control technology.

Journal ArticleDOI
TL;DR: pyUPMASK, as discussed by the authors, is an unsupervised clustering method for stellar clusters that builds upon the original UPMASK package; it is written entirely in Python and made available through a public repository.
Abstract: Aims. We present pyUPMASK, an unsupervised clustering method for stellar clusters that builds upon the original UPMASK package. Its general approach makes it applicable to analyses dealing with binary classes of any kind, as long as the fundamental hypotheses are met. The code is written entirely in Python and is made available through a public repository. Methods. The core of the algorithm follows the method developed in UPMASK but introduces several key enhancements. These enhancements not only make pyUPMASK more general, they also improve its performance considerably. Results. We thoroughly tested the performance of pyUPMASK on 600 synthetic clusters affected by varying degrees of contamination by field stars. To assess the performance we employed six different statistical metrics that measure the accuracy of probabilistic classification. Conclusions. Our results show that pyUPMASK performs better than UPMASK on every statistical performance metric, while still managing to be many times faster.
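The abstract does not list the six statistical metrics used. Two standard metrics for probabilistic classification of this kind are the Brier score and the log loss, sketched here with made-up membership probabilities for illustration:

```python
import math

def brier_score(probs, labels):
    """Mean squared difference between predicted membership probability and true label."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

def log_loss(probs, labels, eps=1e-15):
    """Negative mean log-likelihood of the true labels under the predicted probabilities."""
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(probs)

probs = [0.9, 0.8, 0.3, 0.1]   # predicted membership probabilities
labels = [1, 1, 0, 0]          # 1 = cluster member, 0 = field star
bs = brier_score(probs, labels)
ll = log_loss(probs, labels)
```

Lower is better for both; a perfect classifier scores 0 on each, which makes such metrics natural for comparing pyUPMASK and UPMASK across contamination levels.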

Journal ArticleDOI
TL;DR: The findings show that 1) the BO algorithm can explore different network architectures using the proposed encoding schemes and successfully design well-performing architectures, and 2) the optimization time is significantly reduced by using MRS, without compromising the performance as compared to the architectures obtained from the actual training procedure.

Journal ArticleDOI
TL;DR: In this article, a multi-relay assisted computation offloading framework for MEC-EH systems is proposed, where a computation task can be executed by offloading to the MEC server with the help of multiple relay nodes, such as the neighboring nodes.
Abstract: In multi-access edge computing systems with energy harvesting (MEC-EH), the mobile devices are empowered with unstable energy harvested from renewable energy sources. To prolong the life of mobile devices, as many computation-intensive tasks as possible should be offloaded to the MEC server. However, when the system states of mobile device and MEC server are unstable, e.g. poor communication channel conditions, a great number of tasks will be executed locally, leading to a long execution time. Even worse, some tasks may be dropped due to low energy levels. To address this problem, in this paper, we propose a multi-relay assisted computation offloading framework for MEC-EH systems. In this framework, a computation task can be executed by offloading to the MEC server with the help of multiple relay nodes, such as the neighboring nodes. We introduce execution cost as a performance metric to incorporate both the task execution time and task failure. We then develop a low-complexity online algorithm, namely MRACO algorithm, to minimize the average execution cost. MRACO algorithm can select the optimal execution strategy for each task from (1) executing the task locally, (2) offloading it to the MEC server directly, (3) offloading it to the MEC server with the help of the most suitable neighboring nodes, and (4) simply dropping it. Moreover, we also develop an algorithm for selecting the suitable neighboring devices to act as relays and determining the optimal task splitting ratio between them. Finally, performance evaluation shows that the proposed MRACO algorithm greatly outperforms the benchmarks in terms of both average execution time and task drop rate.
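The execution-cost metric and the four-way strategy choice can be sketched as a simple argmin. The additive cost model and the drop penalty below are illustrative assumptions, not the paper's formulation:

```python
def execution_cost(exec_time, dropped, drop_penalty=10.0):
    """Execution cost combining completion time and task failure (illustrative weighting)."""
    return drop_penalty if dropped else exec_time

def choose_strategy(costs):
    """Pick the execution strategy with the lowest estimated cost."""
    return min(costs, key=costs.get)

# Hypothetical per-task estimates for the four strategies named in the abstract
costs = {
    "local":  execution_cost(8.0, dropped=False),   # execute on the device
    "direct": execution_cost(3.5, dropped=False),   # offload straight to MEC server
    "relay":  execution_cost(2.0, dropped=False),   # offload via neighboring relays
    "drop":   execution_cost(0.0, dropped=True),    # give up on the task
}
best = choose_strategy(costs)
```

In the paper, the per-strategy costs would depend on channel conditions and harvested energy levels, and MRACO selects among them online rather than from fixed numbers.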

Proceedings ArticleDOI
19 May 2021
TL;DR: In this article, the minimum average inter-sample time (MAIST) generated by periodic event-triggered control (PETC) of linear systems is estimated using a bisimulation refinement algorithm.
Abstract: In the context of networked control systems, event-triggered control (ETC) has emerged as a major topic due to its alleged resource usage reduction capabilities. However, this is mainly supported by numerical simulations, and very little is formally known about the traffic generated by ETC. This work devises a method to estimate, and in some cases to determine exactly, the minimum average inter-sample time (MAIST) generated by periodic event-triggered control (PETC) of linear systems. The method involves abstracting the traffic model using a bisimulation refinement algorithm and finding the cycle of minimum average length in the graph associated to it. This always gives a lower bound to the actual MAIST. Moreover, if this cycle turns out to be related to a periodic solution of the closed-loop PETC system, the performance metric is exact.
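Finding the cycle of minimum average length in the abstracted traffic graph is the classical minimum mean cycle problem. A sketch using Karp's algorithm (the abstract does not state which algorithm the authors use; the edge weights here stand in for inter-sample times):

```python
def min_mean_cycle(n, edges):
    """Karp's algorithm: minimum mean edge weight over all directed cycles.

    n     -- number of nodes, labeled 0..n-1
    edges -- list of (u, v, w) directed, weighted edges
    Returns the minimum cycle mean, or None if the graph is acyclic.
    """
    INF = float("inf")
    # d[k][v] = minimum weight of a walk with exactly k edges ending at v
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] < INF and d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        if best is None or worst < best:
            best = worst
    return best

# Two cycles: 0->1->2->0 (mean 3.0) and 0->1->0 (mean 1.5)
mmc = min_mean_cycle(3, [(0, 1, 2.0), (1, 2, 4.0), (2, 0, 3.0), (1, 0, 1.0)])
```

As the abstract notes, the value obtained this way is always a lower bound on the MAIST, and is exact when the minimizing cycle corresponds to a periodic solution of the closed-loop PETC system.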

Journal ArticleDOI
TL;DR: This paper proposes ParSecureML, a GPU-based framework that improves the performance of secure machine learning algorithms based on two-party computation, achieving an average 33.8× speedup over the state-of-the-art framework.
Abstract: Machine learning is widely used in our daily lives. Large amounts of data have been continuously produced and transmitted to the cloud for model training and data processing, which raises a problem: how to preserve the security of the data. Recently, a secure machine learning system named SecureML has been proposed to solve this issue using two-party computation. However, due to the excessive computation expenses of two-party computation, the secure machine learning is about 2× slower than the original machine learning methods. Previous work on secure machine learning mostly focused on novel protocols or improving accuracy, while the performance metric has been ignored. In this article, we propose a GPU-based framework ParSecureML to improve the performance of secure machine learning algorithms based on two-party computation. The main challenges of developing ParSecureML lie in the complex computation patterns, frequent intra-node data transmission between CPU and GPU, and complicated inter-node data dependence. To handle these challenges, we propose a series of novel solutions, including profiling-guided adaptive GPU utilization, fine-grained double pipeline for intra-node CPU-GPU cooperation, and compressed transmission for inter-node communication. Moreover, we integrate architecture specific optimizations, such as Tensor Cores, into ParSecureML. As far as we know, this is the first GPU-based secure machine learning framework. Compared to the state-of-the-art framework, ParSecureML achieves an average of 33.8× speedup. ParSecureML can also be applied to inferences, which achieves 31.7× speedup on average.
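The two-party computation that SecureML builds on rests on additive secret sharing, in which each party holds one share of every value and neither share alone reveals anything. A minimal sketch of that primitive only, not of the ParSecureML framework or its GPU pipeline:

```python
import random

MOD = 2 ** 32  # ring size; an illustrative choice

def share(x, modulus=MOD):
    """Split x into two additive shares; neither share alone reveals x."""
    r = random.randrange(modulus)
    return r, (x - r) % modulus

def reconstruct(s0, s1, modulus=MOD):
    """Recombine the two shares to recover the original value."""
    return (s0 + s1) % modulus

# Each party adds its shares locally, yielding shares of the sum
a0, a1 = share(17)
b0, b1 = share(25)
c0, c1 = (a0 + b0) % MOD, (a1 + b1) % MOD
total = reconstruct(c0, c1)  # 42
```

Addition is "free" in this scheme; secure multiplication needs extra protocol rounds (e.g., Beaver triples), and it is this heavier arithmetic that ParSecureML accelerates on GPUs.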

Journal ArticleDOI
TL;DR: In this article, the authors characterize various aspects of stochastic behavior of intervehicular interference by modeling location of road vehicles as a spatial Poisson point process and provide an analytical framework to access the performance for both vehicular-radio frequency (V-RF) communication and vehicular visible light communication (VLC) for dense, medium, and sparse traffic scenarios.
Abstract: In this article, we characterize various aspects of the stochastic behavior of intervehicular interference by modeling the locations of road vehicles as a spatial Poisson point process. We make use of various analytical tools of stochastic geometry to provide an analytical framework to assess the performance of both vehicular-radio frequency (V-RF) communication and vehicular-visible light communication (V-VLC) for dense, medium, and sparse traffic scenarios. The developed framework is also precise in terms of capturing the impact of reducing the field-of-view (FOV) of the receiver on the level of interference experienced from interferers for V-VLC. The performance has been evaluated and compared under normal atmospheric conditions as well as different environmental deterrents, viz. light fog, dense fog, and dry snow, in terms of the probability of successful transmission as a performance metric. Irrespective of the traffic scenario, the performance of V-VLC communication under normal atmospheric conditions always outperforms V-RF communication. However, the performance of V-RF communication is comparatively better than V-VLC under the various environmental deterrents. The proposed results motivate the benefit of employing RF-based or VLC-based vehicle-to-vehicle (V2V) communication that takes into account different environmental conditions as well as meets the diverse application requirements of future intelligent transportation systems.
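The probability-of-successful-transmission metric under a Poisson point process of interferers can also be approximated by Monte Carlo simulation. A hedged 1-D sketch with illustrative path-loss and threshold parameters; the paper's framework is analytical (stochastic geometry), not simulation-based:

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's method for sampling a Poisson-distributed count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def success_probability(density, road_len=1000.0, d_link=20.0, alpha=3.0,
                        threshold=1.0, trials=2000, seed=1):
    """Monte Carlo estimate of P(signal/interference > threshold), with
    interferers placed by a 1-D Poisson point process along the road."""
    rng = random.Random(seed)
    signal = d_link ** (-alpha)  # power-law path loss from the intended transmitter
    hits = 0
    for _ in range(trials):
        n = sample_poisson(rng, density * road_len)
        interference = sum(
            max(abs(rng.uniform(-road_len / 2, road_len / 2)), 1e-9) ** (-alpha)
            for _ in range(n))
        if interference == 0.0 or signal / interference > threshold:
            hits += 1
    return hits / trials

p_sparse = success_probability(density=0.001)  # sparse traffic
p_dense = success_probability(density=0.05)    # dense traffic
```

As the abstract's traffic-scenario comparison suggests, denser traffic means more interferers and a lower success probability; atmospheric effects and the receiver FOV would enter as additional attenuation and angular filtering terms.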

Journal ArticleDOI
Tatsuaki Kimura1
TL;DR: This study theoretically analyze the performance of cellular-relay (CR) V2V communications, in which a vehicle first transmits a message to its nearest BS via uplink, and then, the BS forwards the message to a destination vehicle via downlink.
Abstract: Vehicle-to-vehicle (V2V) communication-based cooperative vehicular networks are a key technology for future smarter transportation systems as they provide various applications for addressing road safety and traffic congestion. To alleviate the performance limitations of existing dedicated short-range communication (DSRC) protocols, cellular-assisted V2V communications have been proposed recently, wherein cellular base stations (BSs) relay the transmission from one vehicle to another. Such cellular-assisted communications have shown promise for more reliable V2V communications with a wider capacity and longer transmission distances. In this study, we theoretically analyze the performance of cellular-relay (CR) V2V communications, in which a vehicle first transmits a message to its nearest BS via uplink, and then, the BS forwards the message to a destination vehicle via downlink. We model the road segments through a Poisson line process and the positions of vehicles through a Poisson point process on the roads; subsequently, we derive a theoretical expression for the probability of a successful CR transmission and the mean local delay in the CR transmission. Furthermore, we propose a performance metric to evaluate the difference between the performances of CR and direct V2V communications, and it can be applied to automatic transmission mode selection (i.e., CR or direct) for V2V communications. We evaluate the analytical results using numerical examples and demonstrate the impacts of various system parameters on the performances of CR and direct V2V communications.

Journal ArticleDOI
TL;DR: AoI in multi-hop wireless networks is studied for the very first time and its potential relationships with throughput are explored, particularly focusing on the impacts of flexible routes on the two metrics, i.e., AoI and throughput.
Abstract: While considerable work has addressed the optimal AoI under different circumstances in single-hop networks, the exploration of AoI in multi-hop wireless networks is rarely attempted. More importantly, the inherent relationships between AoI and throughput are yet to be explored, especially in multi-hop networks. This paper studies AoI in multi-hop wireless networks and explores its potential relationships with throughput for the very first time, particularly focusing on the impacts of flexible routes on the two metrics, i.e., AoI and throughput. By developing a rigorous mathematical model with interference, channel allocation, link scheduling, and routing path selection taken into consideration, we build the interrelation between AoI and throughput in multi-hop networks. A multi-criteria optimization problem is formulated with the goal of simultaneously minimizing AoI and maximizing network throughput. By qualitatively analyzing their relationships, we exhibit that the two metrics may conflict with each other, implying the optimal solutions for the multi-criteria problem will include a set of Pareto-optimal points rather than a single point existing in the traditional optimization problem. We resort to a novel approach by transforming the multi-criteria problem into a single objective one so as to find the weakly Pareto-optimal points iteratively, thereby allowing us to screen all Pareto-optimal points for the solution. Through formal proof, our solution is demonstrated to be able to identify all Pareto-optimal points and terminate in a finite number of iterations. We conduct the simulation evaluation to identify the optimal tradeoff points of AoI and throughput, demonstrating that one performance metric may improve at the expense of degrading the other, with the routing path found as one of the key factors in determining such a tradeoff.
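The set of Pareto-optimal (AoI, throughput) operating points described above can be illustrated by filtering out dominated candidates, with AoI to be minimized and throughput to be maximized. A sketch with made-up operating points, not the paper's iterative weakly-Pareto search:

```python
def pareto_points(candidates):
    """Keep the (aoi, throughput) pairs not dominated by any other candidate.

    A point dominates another if its AoI is no larger and its throughput
    no smaller, with at least one of the two strictly better.
    """
    front = []
    for aoi, thr in candidates:
        dominated = any(a <= aoi and t >= thr and (a < aoi or t > thr)
                        for a, t in candidates)
        if not dominated:
            front.append((aoi, thr))
    return sorted(front)

# Hypothetical (AoI, throughput) outcomes of different routing choices
candidates = [(2.0, 5.0), (3.0, 8.0), (4.0, 8.0), (2.5, 4.0), (5.0, 9.0)]
front = pareto_points(candidates)
```

Along the resulting front, lower AoI comes only at the cost of lower throughput, which is exactly the tradeoff the abstract identifies, with the routing path as a key factor in where a network operates on that front.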