
Showing papers on "Point (geometry) published in 2022"


Journal ArticleDOI
01 Sep 2022
TL;DR: The author discusses two important limitations of the common practice of testing for preexisting differences in trends (pre-trends) when using difference-in-differences and related methods.
Abstract: This paper discusses two important limitations of the common practice of testing for preexisting differences in trends (“pre-trends”) when using difference-in-differences and related methods. First, conventional pre-trends tests may have low power. Second, conditioning the analysis on the result of a pretest can distort estimation and inference, potentially exacerbating the bias of point estimates and under-coverage of confidence intervals. I analyze these issues both in theory and in simulations calibrated to a survey of recent papers in leading economics journals, which suggest that these limitations are important in practice. I conclude with practical recommendations for mitigating these issues. (JEL A14, C23, C51)
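The low-power point can be illustrated with a small Monte Carlo sketch: simulate a real differential pre-trend and count how often a conventional 5%-level test on the estimated pre-period slope detects it. All parameters below are invented for illustration; this is not the paper's calibration.

```python
import math
import random

def pretrend_rejection_rate(delta=0.1, sigma=1.0, n_per_group=25,
                            pre_periods=5, sims=2000, seed=0):
    """Share of simulations in which a 5%-level test on the estimated
    pre-period slope rejects. `delta` is the true differential trend per
    period between treated and control groups; low output = low power."""
    rng = random.Random(seed)
    ts = list(range(pre_periods))
    t_bar = sum(ts) / pre_periods
    sxx = sum((t - t_bar) ** 2 for t in ts)
    se_diff = sigma * math.sqrt(2 / n_per_group)   # SE of one group-mean gap
    se_slope = se_diff / math.sqrt(sxx)            # known-variance slope SE
    rejections = 0
    for _ in range(sims):
        # Treated-minus-control mean at each pre-period: true trend + noise.
        diffs = [delta * t + rng.gauss(0, se_diff) for t in ts]
        d_bar = sum(diffs) / pre_periods
        slope = sum((t - t_bar) * (d - d_bar) for t, d in zip(ts, diffs)) / sxx
        if abs(slope / se_slope) > 1.96:
            rejections += 1
    return rejections / sims
```

With these invented numbers the test rejects only a minority of the time even though a real differential trend of 0.1 per period is present, while staying near the nominal 5% rate when `delta=0` — the low-power problem the paper analyzes.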

129 citations


Journal ArticleDOI
TL;DR: The M5 Accuracy Challenge, as discussed by the authors, was the first of two parallel challenges in the latest M competition, with the aim of advancing the theory and practice of forecasting; its main objective was to accurately predict 42,840 time series representing the hierarchical unit sales of the largest retail company in the world by revenue, Walmart.

129 citations


Journal ArticleDOI
TL;DR: This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years, lays out a long-term vision for NLG evaluation, and proposes concrete steps for researchers to improve their evaluation processes.
Abstract: Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural generation models have improved to the point where their outputs can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for evaluation research and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 generation papers from recent NLP conferences in how well they already follow these suggestions and identify which areas require more drastic changes to the status quo.

71 citations


Proceedings ArticleDOI
13 May 2022
TL;DR: It is found that the rate at which shown suggestions are accepted, rather than more specific metrics regarding the persistence of completions in the code over time, drives developers’ perception of productivity.
Abstract: Neural code synthesis has reached a point where snippet generation is accurate enough to be considered for integration into human software development workflows. Commercial products aim to increase programmers’ productivity, without being able to measure it directly. In this case study, we asked users of GitHub Copilot about its impact on their productivity, and sought to find a reflection of their perception in directly measurable user data. We find that the rate with which shown suggestions are accepted, rather than more specific metrics regarding the persistence of completions in the code over time, drives developers’ perception of productivity.
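The contrast between the two kinds of telemetry metrics can be made concrete with a small sketch. The event schema and function names below are hypothetical, invented for illustration — they are not GitHub Copilot's actual telemetry.

```python
def acceptance_rate(events):
    """Fraction of shown completions that the developer accepted.
    `events` is a list of dicts with a hypothetical "type" field."""
    shown = sum(1 for e in events if e["type"] == "shown")
    accepted = sum(1 for e in events if e["type"] == "accepted")
    return accepted / shown if shown else 0.0

def snippet_persistence(accepted_snippets, final_source):
    """Share of accepted snippets still present verbatim in the code some
    time later -- a crude stand-in for persistence-style metrics."""
    if not accepted_snippets:
        return 0.0
    kept = sum(1 for s in accepted_snippets if s in final_source)
    return kept / len(accepted_snippets)
```

The study's finding is that the first, simpler kind of metric tracks perceived productivity better than the second.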

52 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose MetaControl and DeControl in metaverses for control intelligence and knowledge automation in cyber-physical-social systems, based on circular causality.
Abstract: Briefing: An investigation and outline of MetaControl and DeControl in Metaverses for control intelligence and knowledge automation are presented. Prescriptive control with prescriptive knowledge and parallel philosophy is proposed as the starting point for the new control philosophy and technology, especially for computational control of metasystems in cyber-physical-social systems. We argue that circular causality, the generalized feedback mechanism for complex and purposive systems, should be adapted as the fundamental principle for control and management of metasystems with metacomplexity in metaverses. Particularly, an interdisciplinary approach is suggested for MetaControl and DeControl as a new form of intelligent control based on five control metaverses: Meta Verses, MultiVerses, InterVerses, TransVerse, and Deep Verses.

49 citations


Journal ArticleDOI
TL;DR: In this article, a new YOLOv5-B model was constructed by applying the InvolutionBottleneck module to the CNN network structure and improving the loss function, in order to improve the accuracy and speed of small-target detection.

45 citations


Journal ArticleDOI
TL;DR: Point Instance Network (PINet) as discussed by the authors is a traffic line detection method based on the key points estimation and instance segmentation approach, which includes several hourglass models that are trained simultaneously with the same loss function.
Abstract: Perception techniques for autonomous driving should be adaptive to various environments. In essential perception modules for traffic line detection, many conditions should be considered, such as the number of traffic lines and the computing power of the target system. To address these problems, in this paper, we propose a traffic line detection method called Point Instance Network (PINet); the method is based on the key points estimation and instance segmentation approach. The PINet includes several hourglass models that are trained simultaneously with the same loss function. Therefore, the size of the trained models can be chosen according to the target environment’s computing power. We cast a clustering problem of the predicted key points as an instance segmentation problem; the PINet can be trained regardless of the number of traffic lines. The PINet achieves competitive accuracy and false-positive rates on CULane and TuSimple, popular public datasets for lane detection. Our code is available at https://github.com/koyeongmin/PINet_new

43 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a very important point of view on the mathematical models and algorithms used for decision-making in our society; the main aim of the book is to break the myths about mathematical models and their results.
Abstract: The book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy provides a very important point of view on the mathematical models and algorithms used for decision-making in our society. As the title suggests, the author sees the possible influences of algorithms on specific groups in society as huge and disastrous. The book describes the mechanism and functioning of mathematical models and algorithms in a comprehensible manner. It uses practical examples of various mechanisms commonly used in today's society. The main aim of the book is to break the myths about mathematical models and their results, mainly the widespread beliefs that algorithms are fair, objective, and unbiased.

42 citations



Journal ArticleDOI
TL;DR: In this paper , the authors used an Artificial Neural Network (ANN) to predict bridge behavior using acceleration measurements from multiple passes of a traversing vehicle, and demonstrated that using the CP-response in the ANN provided improved performance over the traditionally used axle-response.

37 citations


Journal ArticleDOI
TL;DR: The basic anatomy and physiology of important body cavities are introduced to understand their characteristics, as this is a pivotal foundation for the successful development of in-mouth devices.
Abstract: Cavitas sensors and point-of-need sensors capable of providing physical and biochemical information from the oral cavity and saliva have attracted great attention because they offer remarkable advantages for noninvasive sensing systems. Herein, we introduce the basic anatomy and physiology of important body cavities to understand their characteristics as it is a pivotal foundation for the successful development of in-mouth devices. Next, the advanced development in lab-in-a-mouth sensors and point-of-need sensors for analyzing saliva are explained. In addition, we discuss the integrations of artificial intelligence and electronic technologies in smart sensing networks for healthcare systems. This review ends with a discussion of the challenges, future research trends, and opportunities in relevant disciplines. Mouthguard-based sensors and conventional salivary sensing devices will continue to be significant for the progress in the next-generation sensing technologies and smart healthcare systems.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a new detection method, which applies the 3D-BoNet instance-segmentation model with a multi-scale grouping structure and transfer learning to a 3D point-cloud tunnel dataset.

Journal ArticleDOI
TL;DR: In this article, the authors compare and contrast low-code and model-driven approaches, identifying their differences and commonalities, analysing their strong and weak points, and proposing directions for cross-pollination.
Abstract: Abstract The last few years have witnessed a significant growth of so-called low-code development platforms (LCDPs) both in gaining traction on the market and attracting interest from academia. LCDPs are advertised as visual development platforms, typically running on the cloud, reducing the need for manual coding and also targeting non-professional programmers. Since LCDPs share many of the goals and features of model-driven engineering approaches, it is a common point of debate whether low-code is just a new buzzword for model-driven technologies, or whether the two terms refer to genuinely distinct approaches. To contribute to this discussion, in this expert-voice paper, we compare and contrast low-code and model-driven approaches, identifying their differences and commonalities, analysing their strong and weak points, and proposing directions for cross-pollination.

Journal ArticleDOI
TL;DR: In this paper , the authors introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km.
Abstract: With the recent availability and affordability of commercial depth sensors and 3D scanners, an increasing number of 3D (i.e., RGBD, point cloud) datasets have been publicized to facilitate research in 3D computer vision. However, existing datasets either cover relatively small areas or have limited semantic annotations. Fine-grained understanding of urban-scale 3D scenes is still in its infancy. In this paper, we introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km². Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset that is three times the size of the largest previously existing photogrammetric point cloud dataset. In addition to the more commonly encountered categories such as road and vegetation, urban-level categories including rail, bridge, and river are also included in our dataset. Based on this dataset, we further build a benchmark to evaluate the performance of state-of-the-art segmentation algorithms. In particular, we provide a comprehensive analysis and identify several key challenges limiting urban-scale point cloud understanding. The dataset is available at http://point-cloud-analysis.cs.ox.ac.uk/.

OtherDOI
21 Jan 2022
TL;DR: Ergonomics is an applied science that draws on many interrelated disciplines, such as engineering, psychology, occupational hygiene and physiology, as mentioned in this paper, focusing on understanding the relationship between people and their work environment (both physical and organisational).
Abstract: Ergonomics is derived from the Greek words ergon (work) and nomos (laws), indicating the science of work. Ergonomics is an applied science that draws on many interrelated disciplines, such as engineering, psychology, occupational hygiene and physiology. It is the study of humans as components of complex systems, focusing on the interaction between people and technology. Ergonomics is about understanding the relationship between people and their work environment (both physical and organisational) and aims to ensure compatibility between the operator's capabilities and limitations and the demands of the task, job and/or work environment.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a cross self-attention network (CSANet) to solve raw point cloud classification and segmentation tasks, which has permutation invariance and can learn the coordinates and features of point cloud at the same time.
Abstract: It is a challenge to design a deep neural network for raw point cloud, which is disordered and unstructured data. In this paper, we introduce a cross self-attention network (CSANet) to solve raw point cloud classification and segmentation tasks. It has permutation invariance and can learn the coordinates and features of point cloud at the same time. To better capture features of different scales, a multi-scale fusion (MF) module is proposed, which can adaptively consider the information of different scales and establish a fast descent branch to bring richer gradient information. Extensive experiments on ModelNet40, ShapeNetPart, and S3DIS demonstrate that the proposed method can achieve competitive results.

Journal ArticleDOI
TL;DR: In this article, a comparison between two types of methodologies, those based on lexicons and those based on machine and deep learning techniques, was made for the Italian market, exploiting an ad hoc dataset.
Abstract: Recent evolutions in the e-commerce market have led to an increasing importance attributed by consumers to product reviews made by third parties before proceeding to purchase. The industry, in order to improve the offer intercepting the discontent of consumers, has placed increasing attention towards systems able to identify the sentiment expressed by buyers, whether positive or negative. From a technological point of view, the literature in recent years has seen the development of two types of methodologies: those based on lexicons and those based on machine and deep learning techniques. This study proposes a comparison between these technologies in the Italian market, one of the largest in the world, exploiting an ad hoc dataset: scientific evidence generally shows the superiority of language models such as BERT built on deep neural networks, but it opens several considerations on the effectiveness and improvement of these solutions when compared to those based on lexicons in the presence of datasets of reduced size such as the one under study, a common condition for languages other than English or Chinese.
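As a concrete contrast to the model-based classifiers compared in the study, a lexicon-based scorer needs no training data at all. The tiny Italian polarity lexicon and negation rule below are toy examples for illustration only, not an actual sentiment resource:

```python
# Toy polarity lexicon -- illustrative values, not a real Italian resource.
LEXICON = {"ottimo": 1.0, "eccellente": 1.0, "buono": 0.5,
           "scarso": -0.5, "pessimo": -1.0, "terribile": -1.0}
NEGATORS = {"non"}  # crude negation handling

def lexicon_sentiment(text):
    """Sum word polarities; a negator flips the next word's polarity.
    Positive score = positive review, negative = negative review."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    score, negate = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
            continue
        pol = LEXICON.get(tok)
        if pol is not None:
            score += -pol if negate else pol
        negate = False
    return score
```

The study's point is that transformer models such as BERT usually beat this kind of approach, but the gap narrows on small datasets, a common condition for languages other than English or Chinese.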

Journal ArticleDOI
TL;DR: In this paper, a conical-like cavity was formed near the water source, and the preferential passage of water flow appeared below the cavity; a two-dimensional seepage diffusion calculation model was established and preliminarily verified by example calculations.
Abstract:
• A conical-like cavity was formed near the water source, and the preferential passage of water flow appeared below the cavity.
• The wetting body shape gradually transformed from oblate ellipsoid to long ellipsoid after the formation of the passage.
• The seepage process can be defined as four periods: rapid infiltration, stable infiltration, slow infiltration, and preferential infiltration.
• A two-dimensional seepage diffusion calculation model was established and preliminarily verified by example calculations.

Journal ArticleDOI
TL;DR: In this article, an innovative geometric error compensation method is proposed for machining with non-rotary cutters, based on two main points: 1) the cutter rotation angle is considered in both the modeling and compensation of the geometric errors; 2) the instantaneous ideal contact point, calculated according to the generation process of the worm grinding of face gears, is chosen as the reference point for the compensation algorithm, rather than the cutter tip point used in the previous method.

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper analyzed and established the definition of non-IID data problems, and put forward a series of challenges that this problem may bring to federated learning.

Journal ArticleDOI
TL;DR: Rough set theory and belief function theory, two popular mathematical frameworks for uncertainty representation, have been widely applied in different settings and contexts, as mentioned in this paper, and the most relevant contributions studying the links between these two uncertainty representation formalisms have been reviewed.

Journal ArticleDOI
TL;DR: In this article, a survey of point cloud datasets that were automatically processed with machine learning methods in the construction and infrastructure domains is presented, providing a reference for researchers to acknowledge the state-of-the-art applications of automatically processed point cloud models.

Journal ArticleDOI
TL;DR: Simulation results showed that the accuracy of the algorithm could meet the requirements of painting technology, and it has promising engineering practicability.
Abstract: An algorithm for automatically planning trajectories designed for painting large objects is proposed in this paper to eliminate the difficulty of painting large objects and ensure their surface quality. The algorithm was divided into three phases, comprising the target point acquisition phase, the trajectory planning phase, and the UR5 robot inverse solution acquisition phase. In the target point acquisition phase, the standard triangle language (STL) file, algorithm of principal component analyses (PCA), and k-dimensional tree (k-d tree) were employed to obtain the point cloud model of the car roof to be painted. Simultaneously, the point cloud data were compressed as per the requirements of the painting process. In the trajectory planning phase, combined with the maximum operating space of the UR5 robot, the painting trajectory of the target points was converted into multiple traveling salesman problem (TSP) models, and each TSP model was created with a genetic algorithm (GA). In the last phase, in conformity with the singularities of the UR5 robot’s motion space, the painting trajectory was divided into a recommended area trajectory and a non-recommended area trajectory and created by the analytical method and sequential quadratic programming (SQP). Finally, the proposed algorithm for painting large objects was deployed in a simulation experiment. Simulation results showed that the accuracy of the algorithm could meet the requirements of painting technology, and it has promising engineering practicability.
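The trajectory planning phase above orders the compressed target points by solving TSP models with a genetic algorithm. As a much simpler stand-in that shows only the ordering step, here is a greedy nearest-neighbour tour (the GA, the k-d tree preprocessing and the robot-specific constraints are all omitted):

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Order painting target points by repeatedly hopping to the closest
    unvisited point -- a cheap heuristic for the TSP ordering step."""
    unvisited = set(range(len(points)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

A genetic algorithm, as used in the paper, typically finds shorter tours than this greedy heuristic at the cost of more computation.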

Journal ArticleDOI
TL;DR: In this article, the authors discuss the maximum torque per ampere (MTPA) control of synchronous motors, which has become an indispensable part of highly efficient motor drives.
Abstract: This article discusses the maximum torque per ampere (MTPA) control of synchronous motors, which have become an indispensable part of highly efficient motor drives. It explains the nature of torque produced by synchronous motors, ways to find its maximum and algorithms to operate at this point, despite changes of loads and motor parameter variations. The authors propose a classification of the MTPA methods, based on the features of each algorithm or group of similar methods. They demonstrate the conventional control scheme and discuss the modifications necessary for the implementation of each method. This article reviews existing MTPA control algorithms, discusses their pros and cons, and suggests possible areas of usage for each group of methods. The authors of the article share their vast experience in the industry and research aspects, which were obtained by developing industrial, commercial, traction, and military drives, and report on their views on the perspective of each method taken into consideration.
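For an interior PM machine the MTPA operating point even has a closed form, which many of the surveyed online algorithms approximate under parameter uncertainty. The sketch below uses the textbook dq-frame torque equation with invented parameter values; a real drive must additionally respect voltage and current limits.

```python
import math

def mtpa_currents(I, psi, Ld, Lq):
    """Closed-form MTPA current split for an IPMSM (Lq > Ld).
    Maximises psi*iq + (Ld - Lq)*id*iq on the circle id^2 + iq^2 = I^2;
    the stationarity condition 2*dL*id^2 - psi*id - dL*I^2 = 0 gives id."""
    dL = Lq - Ld
    if dL == 0:                      # surface PM machine: no reluctance torque
        return 0.0, I
    i_d = (psi - math.sqrt(psi**2 + 8 * dL**2 * I**2)) / (4 * dL)
    i_q = math.sqrt(I**2 - i_d**2)
    return i_d, i_q

def torque(psi, Ld, Lq, i_d, i_q, p=4):
    """Electromagnetic torque of a synchronous machine in the dq frame."""
    return 1.5 * p * (psi * i_q + (Ld - Lq) * i_d * i_q)
```

Note the negative d-axis current: with Lq > Ld, injecting some negative id adds reluctance torque on top of the magnet torque, which is exactly what MTPA exploits.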

Journal ArticleDOI
TL;DR: In this paper, the authors proposed two kinds of time-inconsistent preferences (i.e., time-flow inconsistency and time-point inconsistency) to advance research in this field.
Abstract: Abstract Considering that the assumption of time consistency does not adequately reveal the mechanisms of exit decisions of venture capital (VC), this study proposes two kinds of time-inconsistent preferences (i.e., time-flow inconsistency and time-point inconsistency) to advance research in this field. Time-flow inconsistency is in line with the previous time inconsistency literature, while time-point inconsistency is rooted in the VC fund’s finite lifespan. Based on the assumption about the strategies guiding future behaviors, we consider four types of venture capitalists: time-consistent, time-point-inconsistent, naïve, and sophisticated venture capitalists, of which the latter three are time-inconsistent. We derive and compare the exit thresholds of these four types of venture capitalists. The main results include: (1) time-inconsistent preferences accelerate the exits of venture capitalists; (2) the closer the VC funds expiry dates are, the more likely time-inconsistent venture capitalists are to accelerate their exits; and (3) future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. Our study provides a behavioral explanation for the empirical fact of young VCs’ grandstanding.

Journal ArticleDOI
Yingchun Fan, Qichi Zhang, Yuliang Tang, Shaofeng Liu, Hong Han
TL;DR: Fan et al. as discussed by the authors proposed a semantic SLAM system named Blitz-SLAM to remove the noise blocks in the local point cloud by combining the advantages of semantic and geometric information of mask, RGB and depth images.

Proceedings ArticleDOI
01 Jun 2022
TL;DR: Point-BERT as mentioned in this paper proposes a Masked Point Modeling (MPM) task to pre-train point cloud Transformers and achieves state-of-the-art performance on few-shot point cloud classification task.
Abstract: We present Point-BERT, a new paradigm for learning Transformers to generalize the concept of BERT [8] to 3D point cloud. Inspired by BERT, we devise a Masked Point Modeling (MPM) task to pre-train point cloud Transformers. Specifically, we first divide a point cloud into several local point patches, and a point cloud Tokenizer with a discrete Variational AutoEncoder (dVAE) is designed to generate discrete point tokens containing meaningful local information. Then, we randomly mask out some patches of input point clouds and feed them into the backbone Transformers. The pre-training objective is to recover the original point tokens at the masked locations under the supervision of point tokens obtained by the Tokenizer. Extensive experiments demonstrate that the proposed BERT-style pre-training strategy significantly improves the performance of standard point cloud Transformers. Equipped with our pre-training strategy, we show that a pure Transformer architecture attains 93.8% accuracy on ModelNet40 and 83.1% accuracy on the hardest setting of ScanObjectNN, surpassing carefully designed point cloud models with much fewer hand-made designs. We also demonstrate that the representations learned by Point-BERT transfer well to new tasks and domains, where our models largely advance the state-of-the-art of few-shot point cloud classification task. The code and pre-trained models are available at https://github.com/lulutang0608/Point-BERT.
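The patch-building step that precedes the Tokenizer and Transformer (both omitted here) can be sketched as farthest point sampling of patch centres plus random masking. This is an illustrative reimplementation of the general idea in pure Python, not the authors' code:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_point_sample(points, k, seed=0):
    """Pick k well-spread patch centres from a point cloud: start from a
    random point, then repeatedly add the point farthest from all chosen."""
    rng = random.Random(seed)
    centres = [rng.randrange(len(points))]
    d2 = [dist2(p, points[centres[0]]) for p in points]
    while len(centres) < k:
        nxt = max(range(len(points)), key=d2.__getitem__)
        centres.append(nxt)
        d2 = [min(d2[i], dist2(points[i], points[nxt])) for i in range(len(points))]
    return centres

def mask_patches(num_patches, mask_ratio=0.6, seed=0):
    """Randomly choose which local patches the pre-training objective
    must recover (the masked positions of the MPM task)."""
    rng = random.Random(seed)
    return set(rng.sample(range(num_patches), int(num_patches * mask_ratio)))
```

In Point-BERT each patch is then mapped to a discrete token by the dVAE Tokenizer, and the Transformer is trained to predict the tokens at the masked positions.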

Journal ArticleDOI
TL;DR: In this paper , a framework for forecasting multivariate data that follow known linear constraints is developed and used to construct densities and draw samples from a reconciled probabilistic forecast.


Journal ArticleDOI
TL;DR: The Accuracy and Uncertainty Forecasting Competition (M5) as mentioned in this paper focused on a retail sales forecasting application with the objective to produce the most accurate point forecasts for 42,840 time series that represent the hierarchical unit sales of the largest retail company in the world, Walmart.