
Showing papers on "Sampling (signal processing)" published in 2022


Journal ArticleDOI
06 Oct 2022
TL;DR: The authors propose a novel framework for clustered inference on average treatment effects that incorporates a design component accounting for the variability induced on the estimator by the treatment assignment mechanism.
Abstract: Clustered standard errors, with clusters defined by factors such as geography, are widespread in empirical research in economics and many other disciplines. Formally, clustered standard errors adjust for the correlations induced by sampling the outcome variable from a data-generating process with unobserved cluster-level components. However, the standard econometric framework for clustering leaves important questions unanswered: (i) Why do we adjust standard errors for clustering in some ways but not others, for example, by state but not by gender, and in observational studies but not in completely randomized experiments? (ii) Is the clustered variance estimator valid if we observe a large fraction of the clusters in the population? (iii) In what settings does the choice of whether and how to cluster make a difference? We address these and other questions using a novel framework for clustered inference on average treatment effects. In addition to the common sampling component, the new framework incorporates a design component that accounts for the variability induced on the estimator by the treatment assignment mechanism. We show that, when the number of clusters in the sample is a nonnegligible fraction of the number of clusters in the population, conventional clustered standard errors can be severely inflated, and propose new variance estimators that correct for this bias.
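
To make the sampling-based adjustment concrete, here is a minimal sketch of the conventional cluster-robust (sandwich) variance estimator that the paper argues can be severely inflated; the simulated data and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def clustered_se(X, y, clusters):
    """Conventional cluster-robust (sandwich) standard errors for OLS.
    This is the textbook sampling-based estimator; the paper shows it can
    overstate variance when many of the population's clusters are sampled."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                                # residuals
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        sg = X[clusters == g].T @ u[clusters == g]  # per-cluster score
        meat += np.outer(sg, sg)
    V = XtX_inv @ meat @ XtX_inv                    # sandwich variance
    return beta, np.sqrt(np.diag(V))

# Simulated data with unobserved cluster-level shocks (illustrative only)
rng = np.random.default_rng(0)
n, G = 200, 20
clusters = rng.integers(0, G, n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=G)[clusters] + rng.normal(size=n)
beta, se = clustered_se(X, y, clusters)
print(beta, se)
```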

71 citations


Journal ArticleDOI
TL;DR: Zuchongzhi 2.1, as discussed by the authors, is a superconducting quantum computing system with 66 qubits arranged in a two-dimensional array with a tunable coupler architecture; it performs random circuit sampling at a scale of up to 60 qubits and 24 cycles, with an XEB fidelity of (3.66 ± 0.345) × 10⁻⁴.

70 citations


Journal ArticleDOI
01 Dec 2022
TL;DR: In this article, an adaptive reference vector reinforcement learning (RVRL) approach is proposed for decomposition-based algorithms applied to industrial copper burdening optimization; the RL operation treats reference vector adaptation as an RL task in which each reference vector learns from environmental feedback and selects optimal actions to gradually fit the problem characteristics.
Abstract: The performance of decomposition-based algorithms is sensitive to the Pareto front shapes since their reference vectors preset in advance are not always adaptable to various problem characteristics with no a priori knowledge. For this issue, this article proposes an adaptive reference vector reinforcement learning (RVRL) approach to decomposition-based algorithms for industrial copper burdening optimization. The proposed approach involves two main operations, that is: 1) a reinforcement learning (RL) operation and 2) a reference point sampling operation. Given that the states of reference vectors frequently interact with the landscape environment, the RL operation treats the reference vector adaptation process as an RL task, where each reference vector learns from the environmental feedback and selects optimal actions for gradually fitting the problem characteristics. Accordingly, the reference point sampling operation uses estimation-of-distribution learning models to sample new reference points. Finally, the resultant algorithm is applied to handle the proposed industrial copper burdening problem. For this problem, an adaptive penalty function and a soft constraint-based relaxing approach are used to handle complex constraints. Experimental results on both benchmark problems and real-world instances verify the competitiveness and effectiveness of the proposed algorithm.
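
The abstract does not spell out the RL operation, so the sketch below is only one plausible reading: each reference vector acts as an epsilon-greedy bandit agent that picks an adjustment action from an assumed action set and updates its value estimates from environmental feedback (stubbed here with random rewards). Neither the action set nor the names come from the paper.

```python
import numpy as np

ACTIONS = ["keep", "shift_toward_sparse", "shift_toward_crowded"]  # assumed set

class ReferenceVectorAgent:
    """One epsilon-greedy bandit per reference vector (illustrative only)."""
    def __init__(self, n_actions=len(ACTIONS), eps=0.1):
        self.q = np.zeros(n_actions)   # action-value estimates
        self.n = np.zeros(n_actions)   # action visit counts
        self.eps = eps

    def select(self, rng):
        if rng.random() < self.eps:        # explore
            return int(rng.integers(len(self.q)))
        return int(np.argmax(self.q))      # exploit

    def update(self, action, reward):
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]

rng = np.random.default_rng(1)
agents = [ReferenceVectorAgent() for _ in range(10)]  # 10 reference vectors
for step in range(100):
    for agent in agents:
        a = agent.select(rng)
        reward = rng.normal()  # stand-in for feedback, e.g. hypervolume change
        agent.update(a, reward)
print([int(np.argmax(agent.q)) for agent in agents])  # learned preferred actions
```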

53 citations


Journal ArticleDOI
TL;DR: Hénin et al. present a comprehensive review of enhanced sampling methods for molecular dynamics simulations [Article v1.0].
Abstract: Enhanced sampling methods for molecular dynamics simulations [Article v1.0]. Jérôme Hénin (Laboratoire de Biochimie Théorique UPR 9080, CNRS, Paris, France; Institut de Biologie Physico-Chimique–Fondation Edmond de Rothschild, Paris, France), Tony Lelièvre (CERMICS, Ecole des Ponts, INRIA, Marne-la-Vallée, France), Michael R. Shirts (Department of Chemical and Biological Engineering, University of Colorado Boulder, Boulder, CO 80309, USA), Omar Valsson (Department of Chemistry, University of North Texas, Denton, TX, USA; Max Planck Institute for Polymer Research, Mainz, Germany), Lucie Delemotte (KTH Royal Institute of Technology, Science for Life Laboratory, Stockholm, Sweden).

53 citations


Journal ArticleDOI
TL;DR: A systematic review of the literature and official documents, supplemented with a formal World Health Organization country consultation, was conducted to summarize the current use of self-sampling in cervical cancer screening.

47 citations


Journal ArticleDOI
01 Feb 2022-Chest
TL;DR: In this article, the authors provide comprehensive evidence regarding the usefulness and diagnostic yield of shape-sensing robotic-assisted bronchoscopy (ssRAB) in the sampling of pulmonary parenchymal lesions.

45 citations


Journal ArticleDOI
TL;DR: In this paper, adaptive sampling is used for enrichment of rarer species within metagenomic samples by creating a synthetic mock community and constructing sequencing libraries with a range of mean read lengths.
Abstract: Adaptive sampling is a method of software-controlled enrichment unique to nanopore sequencing platforms. To test its potential for enrichment of rarer species within metagenomic samples, we create a synthetic mock community and construct sequencing libraries with a range of mean read lengths. Enrichment is up to 13.87-fold for the least abundant species in the longest read length library; factoring in the reduced yields caused by rejecting molecules, the calculated enrichment efficiency is 4.93-fold. Finally, we introduce a mathematical model of enrichment based on molecule length and relative abundance, whose predictions correlate strongly with mock and complex real-world microbial communities.
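
A toy model along the lines the abstract describes (enrichment driven by molecule length and relative abundance): assume off-target molecules are rejected after a fixed decision length while on-target molecules are sequenced in full. This is an illustrative simplification, not the authors' fitted model, but it reproduces the qualitative finding that longer read lengths raise the enrichment ceiling.

```python
def adaptive_sampling_enrichment(read_len, decision_len, abundance):
    """Toy enrichment model: on-target molecules are sequenced to full
    length, off-target molecules are rejected after `decision_len` bases.
    Returns the fold change in the target's share of sequenced bases."""
    target_bases = abundance * read_len
    off_target_bases = (1 - abundance) * decision_len
    return (target_bases / (target_bases + off_target_bases)) / abundance

# Longer libraries -> higher enrichment ceiling, as the abstract observes
for read_len in (1_000, 5_000, 20_000):
    fold = adaptive_sampling_enrichment(read_len, decision_len=500, abundance=0.01)
    print(read_len, round(fold, 2))
```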

43 citations


Journal ArticleDOI
TL;DR: Aminoarylbenzosuberene (AAB) molecules were chosen for in silico analysis to develop more effective and competent inhibitors of the 11β-hydroxysteroid dehydrogenase (11β-HSD1) protein.

43 citations


Journal ArticleDOI
TL;DR: A comprehensive survey of sampling methods for efficient training of GCNs is provided in this paper; the authors categorize sampling methods based on their sampling mechanisms and provide a detailed comparison within each category.
Abstract: Graph convolutional networks (GCNs) have received significant attention from various research fields due to the excellent performance in learning graph representations. Although GCN performs well compared with other methods, it still faces challenges. Training a GCN model for large-scale graphs in a conventional way requires high computation and storage costs. Therefore, motivated by an urgent need in terms of efficiency and scalability in training GCN, sampling methods have been proposed and achieved a significant effect. In this paper, we categorize sampling methods based on the sampling mechanisms and provide a comprehensive survey of sampling methods for efficient training of GCN. To highlight the characteristics and differences of sampling methods, we present a detailed comparison within each category and further give an overall comparative analysis for the sampling methods in all categories. Finally, we discuss some challenges and future research directions of the sampling methods.
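
As a concrete instance of one family the survey categorizes (node-wise neighbor sampling in the style of GraphSAGE), a minimal sketch; the adjacency format and fanouts are illustrative:

```python
import random

def sample_neighbors(adj, batch, fanouts, seed=None):
    """Node-wise (GraphSAGE-style) neighbor sampling.
    adj: dict node -> list of neighbors; fanouts: per-layer sample sizes."""
    rng = random.Random(seed)
    layers, frontier = [], list(batch)
    for fanout in fanouts:
        sampled = {}
        for v in frontier:
            nbrs = adj.get(v, [])
            sampled[v] = rng.sample(nbrs, min(fanout, len(nbrs)))
        layers.append(sampled)
        frontier = sorted({u for ns in sampled.values() for u in ns})
    return layers  # per-layer subgraphs, far smaller than full neighborhoods

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(sample_neighbors(adj, batch=[0], fanouts=[2, 2], seed=42))
```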

40 citations


Journal ArticleDOI
TL;DR: In this paper, a cooperative resilient control method for dc microgrids (MGs) is proposed to dispel the adverse influences of both communication delays and denial-of-service (DoS) attacks.
Abstract: In this article, a cooperative resilient control method for dc microgrids (MGs) is proposed to dispel the adverse influences of both communication delays and denial-of-service (DoS) attacks. To prevent the sampling period from being captured by intelligent attackers, a new time-varying sampling period and an improved communication mechanism are first introduced under the sampling control framework. Based on the designed sampling period and communication mechanism, a resilient secondary controller is designed. It is theoretically shown that the developed method can achieve the goals of bus voltage restoration and current sharing even in the presence of both DoS attacks and heterogeneous communication delays. Finally, a dc MG test system is built in a controller-hardware-in-the-loop testing platform to illustrate and verify the effectiveness of the developed method against both communication delays and DoS attacks.
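
The key idea of a sampling period that attackers cannot learn can be sketched as a bounded random period; the bounds and distribution below are illustrative assumptions, since in the paper they would be fixed by the stability analysis:

```python
import random

def sampling_instants(t0, n, t_min=0.01, t_max=0.05, seed=None):
    """n sampling instants with a bounded, randomly time-varying period,
    so a DoS attacker cannot learn and jam a fixed sampling schedule.
    Bounds are illustrative; admissible ones come from stability analysis."""
    rng = random.Random(seed)
    times, t = [], t0
    for _ in range(n):
        t += rng.uniform(t_min, t_max)  # unpredictable but bounded period
        times.append(t)
    return times

print(sampling_instants(0.0, 5, seed=7))
```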

Journal ArticleDOI
TL;DR: In this paper, the authors present a structured process for sample development, outline eight key sampling considerations, and extend discussions surrounding knowledge construction, standards of reporting, and design research impact.

Journal ArticleDOI
TL;DR: A programmable quantum computer based on fiber optics outperforms classical computers with a high level of confidence, a state-of-the-art demonstration of quantum computational advantage.
Abstract: A programmable quantum computer based on fiber optics outperforms classical computers with a high level of confidence.

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that DNA from terrestrial animals can be extracted from air samples collected in natural settings and used to identify species and their ecological interactions, representing a powerful tool for terrestrial ecology.

Journal ArticleDOI
TL;DR: In this paper, the authors present the mathematical foundation of the importance sampling technique, describe two general classes of methods for constructing the importance sampling density (or probability measure) for reliability analysis, and explore the performance of the two classes through several benchmark numerical examples.
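
The construction both classes of methods target can be stated compactly: rewrite P_f = E_p[1{g(X) <= 0}] as an expectation under a proposal density q and reweight by the likelihood ratio p/q. A minimal sketch with an illustrative limit-state function and proposal (not taken from the paper):

```python
import numpy as np
from scipy import stats

def failure_prob_is(g, p, q, n=100_000, seed=0):
    """Importance-sampling estimate of P_f = E_p[ 1{g(X) <= 0} ].
    Samples from the proposal q and reweights by p/q, so rare failure
    events are hit far more often than under crude Monte Carlo."""
    rng = np.random.default_rng(seed)
    x = q.rvs(size=n, random_state=rng)
    w = p.pdf(x) / q.pdf(x)               # likelihood ratio
    fail = (g(x) <= 0) * w
    return fail.mean(), fail.std(ddof=1) / np.sqrt(n)

# Illustrative limit state g(x) = 4 - x under a standard normal input
p = stats.norm(0, 1)
q = stats.norm(4, 1)   # proposal centered near the assumed design point
est, se = failure_prob_is(lambda x: 4 - x, p, q)
print(est, se)         # exact value is 1 - Phi(4) ~ 3.17e-5
```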

Journal ArticleDOI
TL;DR: In this paper, the heterogeneity of rock joint surface roughness is characterized based on a statistical analysis of all samples extracted from different locations of a given rock joint; the results show that the expected value obtained from conventional methods fails to accurately represent the overall roughness.
Abstract: Rock joint surface roughness is usually characterized by heterogeneity, but the determination of a required number of samples for achieving a reasonable heterogeneity assessment remains a challenge. In this paper, a novel method, the global search method, was proposed to investigate the heterogeneity of rock joint roughness. In this method, the roughness heterogeneity was characterized based on a statistical analysis of the roughness of all samples extracted from different locations of a given rock joint. Analyses of the effective sample number were conducted, which showed that sampling bias was caused by an inadequate number of samples. To overcome this drawback, a large natural slate joint sample (1000 mm × 1000 mm in size) was digitized in a laboratory using a high-accuracy laser scanner. The roughness heterogeneities of both two-dimensional (2D) profiles and three-dimensional (3D) surface topographies were systematically investigated. The results show that the expected value obtained from conventional methods failed to accurately represent the overall roughness. The relative errors between the population parameter and the expected value varied not only from sample to sample but also with the scale. The roughness heterogeneity characteristics of joint samples of various sizes can be obtained using the global search method. This new method could facilitate the determination of the most representative samples and their positions.
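
Read literally, the global search method amounts to sliding a sampling window over every location of the digitized surface and studying the resulting distribution of a roughness statistic. A schematic version, using the standard deviation of heights as a stand-in metric (a real study would use a roughness coefficient such as Z2 or JRC):

```python
import numpy as np

def roughness_distribution(surface, win, step=1):
    """Slide a win x win window over every position of a digitized joint
    surface and collect a roughness statistic from each sample location.
    Height std is a stand-in metric; a real study would use e.g. Z2/JRC."""
    rows, cols = surface.shape
    values = []
    for i in range(0, rows - win + 1, step):
        for j in range(0, cols - win + 1, step):
            values.append(surface[i:i + win, j:j + win].std())
    return np.asarray(values)

rng = np.random.default_rng(3)
surface = rng.normal(size=(200, 200)).cumsum(axis=0)  # toy correlated surface
r = roughness_distribution(surface, win=50, step=10)
print(r.mean(), r.std(), r.min(), r.max())  # spread reflects heterogeneity
```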

Journal ArticleDOI
TL;DR: The Global Ecosystem Dynamics Investigation (GEDI) was designed to retrieve vegetation structure within a novel, theoretical sampling design that explicitly quantifies biomass and its uncertainty across a variety of spatial scales.
Abstract: Accurate estimation of aboveground forest biomass stocks is required to assess the impacts of land use changes such as deforestation and subsequent regrowth on concentrations of atmospheric CO2. The Global Ecosystem Dynamics Investigation (GEDI) is a lidar mission launched by NASA to the International Space Station in 2018. GEDI was specifically designed to retrieve vegetation structure within a novel, theoretical sampling design that explicitly quantifies biomass and its uncertainty across a variety of spatial scales. In this paper we provide the estimates of pan-tropical and temperate biomass derived from two years of GEDI observations. We present estimates of mean biomass densities at 1 km resolution, as well as estimates aggregated to the national level for every country GEDI observes, and at the sub-national level for the United States. For all estimates we provide the standard error of the mean biomass. These data serve as a baseline for current biomass stocks and their future changes, and the mission’s integrated use of formal statistical inference points the way towards the possibility of a new generation of powerful monitoring tools from space.

Journal ArticleDOI
TL;DR: In this article, a critical review of the state of sampling in recent, high-quality software engineering research is presented, concluding that sampling, representativeness, and randomness often appear misunderstood.
Abstract: Representative sampling appears rare in empirical software engineering research. Not all studies need representative samples, but a general lack of representative sampling undermines a scientific field. This article therefore reports a critical review of the state of sampling in recent, high-quality software engineering research. The key findings are: (1) random sampling is rare; (2) sophisticated sampling strategies are very rare; (3) sampling, representativeness and randomness often appear misunderstood. These findings suggest that software engineering research has a generalizability crisis. To address these problems, this paper synthesizes existing knowledge of sampling into a succinct primer and proposes extensive guidelines for improving the conduct, presentation and evaluation of sampling in software engineering research. It is further recommended that while researchers should strive for more representative samples, disparaging non-probability sampling is generally capricious and particularly misguided for predominately qualitative research.

Journal ArticleDOI
TL;DR: In this article, a two-stage reconstruction method is proposed for continuous monitoring of cardiovascular diseases: the first stage gives a tentative recovered signal, on which a peak detection technique identifies whether there is a peak in the current segment and, if so, its location.
Abstract: For continuous monitoring of cardiovascular diseases, this article presents a novel framework for heart sound acquisition. The proposed approach uses compressed sensing for signal sampling, and a two-stage method is developed for reconstruction. The first stage aims to give a tentative recovered signal, on which a peak detection technique is developed to identify whether there is a peak in the current segment and, if so, its location. With such information, an adaptive dictionary is selected for the second-round reconstruction. Because the selected dictionary is adaptive to the morphology of the current frame, the signal reconstruction performance is consequently promoted. Experimental results indicate that a satisfactory performance can be obtained when the frame length is 256 and the signal morphology is divided into 16 categories. Furthermore, the proposed algorithm is compared with a series of counterparts, and the results well demonstrate the advantages of our proposal, especially at high compression ratios.
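
The paper's adaptive, morphology-specific dictionaries are not reproduced here, but the generic compressed-sensing pipeline the two-stage method builds on (random measurement followed by sparse recovery, here via orthogonal matching pursuit with a random orthogonal dictionary standing in for the learned ones) looks like this:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse code x with y ~ A x."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))  # best new atom
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

n, m, k = 256, 64, 5                            # frame length 256, as in the paper
rng = np.random.default_rng(0)
Psi = np.linalg.qr(rng.normal(size=(n, n)))[0]  # stand-in (random) dictionary
code = np.zeros(n)
code[rng.choice(n, k, replace=False)] = rng.normal(size=k)
signal = Psi @ code                             # synthetic sparse "frame"
Phi = rng.normal(size=(m, n)) / np.sqrt(m)      # CS measurement matrix
y = Phi @ signal                                # compressed samples (4:1 ratio)
recovered = Psi @ omp(Phi @ Psi, y, k)
print(np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```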

Journal ArticleDOI
TL;DR: In this article, an adaptive Kriging-based method is proposed for the estimation of failure probability with high accuracy, where a small initial design of experiments (DoE) is constructed and iteratively refined by adding judiciously selected sample points to the DoE.

Proceedings ArticleDOI
01 Jun 2022
TL;DR: Deng et al. propose a novel approach that regulates point sampling and radiance field learning on 2D manifolds, embodied as a set of learned implicit surfaces in the 3D volume.
Abstract: 3D-aware image generative modeling aims to generate 3D-consistent images with explicitly controllable camera poses. Recent works have shown promising results by training neural radiance field (NeRF) generators on unstructured 2D images, but still cannot generate highly-realistic images with fine details. A critical reason is that the high memory and computation cost of volumetric representation learning greatly restricts the number of point samples for radiance integration during training. Deficient sampling not only limits the expressive power of the generator to handle fine details but also impedes effective GAN training due to the noise caused by unstable Monte Carlo sampling. We propose a novel approach that regulates point sampling and radiance field learning on 2D manifolds, embodied as a set of learned implicit surfaces in the 3D volume. For each viewing ray, we calculate ray-surface intersections and accumulate their radiance generated by the network. By training and rendering such radiance manifolds, our generator can produce high quality images with realistic fine details and strong visual 3D consistency. Project page: https://yudeng.github.io/GRAM/
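
The sampling change the abstract describes, evaluating radiance only at ray-surface intersections instead of at dense volumetric samples, can be sketched with fixed depth planes standing in for the learned implicit surfaces; the field function and compositing details are illustrative, not the paper's renderer:

```python
import numpy as np

def render_ray(origin, direction, surface_depths, field):
    """Composite radiance only at ray-surface intersections rather than at
    dense volumetric samples. Fixed depth planes stand in for the learned
    implicit surfaces; `field` maps a 3D point to (rgb, alpha)."""
    color, transmittance = np.zeros(3), 1.0
    for t in sorted(surface_depths):          # front-to-back intersections
        rgb, alpha = field(origin + t * direction)
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= 1.0 - alpha
    return color

dummy_field = lambda p: ([0.5, 0.2, 0.1], 0.3)  # constant toy radiance field
print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                 surface_depths=[1.0, 1.5, 2.0], field=dummy_field))
```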

Journal ArticleDOI
TL;DR: In this paper, a new fuzzy aperiodic intermittent sampled-data control strategy is proposed for distributed parameter systems (DPSs) with stochastic disturbances and multiple time-varying delays.
Abstract: In this article, the extended dissipative performance of distributed parameter systems (DPSs) with stochastic disturbances and multiple time-varying delays is studied by using a new fuzzy aperiodic intermittent sampled-data control strategy. Different from previous fuzzy sampled-data control results, the state sampling of the proposed sampled-data controller occurs only in space and is intermittent rather than continuous in the time domain. By introducing a novel multi-time-delay-dependent switched Lyapunov functional to explore the dynamic characteristics of the controlled system, and by means of Jensen's inequality with the reciprocally convex approach and Wirtinger's inequality, a criterion for the system's mean-square stabilization is established based on the LMI technique, which quantitatively reveals the relationship between the control period, the control length, and the upper bound of the control sampling interval. In particular, the optimal control gain is obtained by designing an optimization algorithm, which greatly reduces the cost. Finally, two numerical examples are presented to demonstrate the effectiveness and superiority of the proposed approach.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the spatial patterns of SARS-CoV-2 in sewage through a spatial sampling strategy across neighborhood-scale sewershed catchments and characterized the correlations between the sub-catchments over the sampling period.

Journal ArticleDOI
TL;DR: In this article, a 2 × 2 framework based on sampling goal and methodology is proposed for screening and evaluating the quality of online samples; screeners can be categorized as direct, which screen individual responses, or statistical, which provide quantitative signals of low quality.

Journal ArticleDOI
TL;DR: In this article, the authors discuss recent advances in large-scale quantum mechanical (QM) modeling of biochemical systems that have reduced the cost of high-accuracy models; tradeoffs between sampling and accuracy have motivated combining QM with molecular mechanics in multiscale QM/MM or iterative approaches.

Proceedings ArticleDOI
28 Mar 2022
TL;DR: GNNLab adopts a factored design for multiple GPUs, dedicating each GPU to either graph sampling or model training, and proposes a new pre-sampling-based caching policy that takes both the sampling algorithm and the GNN dataset into account, achieving efficient and robust caching performance.
Abstract: We propose GNNLab, a sample-based GNN training system in a single-machine multi-GPU setup. GNNLab adopts a factored design for multiple GPUs, where each GPU is dedicated to the task of graph sampling or model training. It accelerates both tasks by eliminating GPU memory contention. To balance GPU workloads, GNNLab applies a global queue to bridge GPUs asynchronously and adopts a simple yet effective method to adaptively allocate GPUs for different tasks. GNNLab further leverages temporary switching to avoid idle waiting on GPUs. Furthermore, GNNLab proposes a new pre-sampling-based caching policy that takes both sampling algorithms and GNN datasets into account, and shows efficient and robust caching performance. Evaluations on three representative GNN models and four real-life graphs show that GNNLab outperforms the state-of-the-art GNN systems DGL and PyG by up to 9.1× (from 2.4×) and 74.3× (from 10.2×), respectively. In addition, our pre-sampling-based caching policy achieves 90%–99% of the optimal cache hit rate in all experiments.
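
A simple illustrative reading of the pre-sampling-based caching policy: run the actual sampling algorithm for a few warm-up batches, count how often each node's features are fetched, and cache the hottest nodes. This is a sketch of the idea, not GNNLab's implementation:

```python
import random
from collections import Counter

def presampling_cache(sample_batches, cache_size):
    """Choose which nodes' features to pin in GPU memory by counting
    accesses over warm-up batches produced by the real sampler."""
    counts = Counter()
    for batch in sample_batches:
        counts.update(batch)
    return {node for node, _ in counts.most_common(cache_size)}

# Toy sampler whose batches always touch hub nodes 0 and 1
rng = random.Random(0)
batches = [[0, 1] + [rng.randrange(2, 100) for _ in range(8)]
           for _ in range(50)]
print(presampling_cache(batches, cache_size=4))  # hubs 0 and 1 get cached
```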

Journal ArticleDOI
TL;DR: In this paper , a 9-degree-of-freedom (DOF) rigid-flexible coupling (RFC) robot was developed to assist the COVID-19 OP-swab sampling.
Abstract: The outbreak of novel coronavirus pneumonia (COVID-19) has caused mortality and morbidity worldwide. Oropharyngeal-swab (OP-swab) sampling is widely used for the diagnosis of COVID-19 around the world. To protect clinical staff from being infected by the virus, we developed a 9-degree-of-freedom (DOF) rigid-flexible coupling (RFC) robot to assist with COVID-19 OP-swab sampling. This robot is composed of a visual system, a UR5 robot arm, a micro-pneumatic actuator, and a force-sensing system. The robot is expected to reduce risk and free clinical staff from long-term repetitive sampling work. Compared with a rigid sampling robot, the developed force-sensing RFC robot can facilitate OP-swab sampling procedures in a safer and softer way. In addition, a varying-parameter zeroing neural network-based optimization method is also proposed for motion planning of the 9-DOF redundant manipulator. The developed robot system is validated by OP-swab sampling on both oral cavity phantoms and volunteers.

Journal ArticleDOI
TL;DR: In this paper, a kernel extreme learning machine (KELM)-based response surface model (RSM) is proposed for parameter inverse analysis of concrete dams, where the KELM-based RSM is used to explore the relationship between material parameters and the displacement response of dam-foundation systems.

Journal ArticleDOI
TL;DR: Available information on microplastics (MPs) in the China Sea is reviewed in this article, including studies on seawater, sediment, and biota; the status and limitations of MP sampling methods, such as sampling tools, sampling volume, and sampling depth, are summarized.