
Showing papers on "Field (mathematics)" published in 2020


Posted Content
TL;DR: This article provides a taxonomy of GNN-based recommendation models according to the types of information used and the recommendation tasks, and systematically analyzes the challenges of applying GNNs to different types of data.
Abstract: Owing to the superiority of GNNs in learning on graph data and their efficacy in capturing collaborative signals and sequential patterns, utilizing GNN techniques in recommender systems has gained increasing interest in academia and industry. In this survey, we provide a comprehensive review of the most recent works on GNN-based recommender systems. We propose a classification scheme for organizing existing works. For each category, we briefly clarify the main issues and detail the corresponding strategies adopted by the representative models. We also discuss the advantages and limitations of the existing strategies. Furthermore, we suggest several promising directions for future research. We hope this survey can provide readers with a general understanding of the recent progress in this field and shed some light on future developments.
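As loose context for how GNNs capture collaborative signals on a user-item interaction graph, here is a minimal, self-contained sketch of one round of symmetrically normalized neighborhood aggregation (in the spirit of LightGCN-style propagation); the toy matrix, embedding size, and scoring step are illustrative assumptions, not taken from any surveyed model.

```python
import numpy as np

# Toy user-item interaction matrix (3 users x 3 items); 1 = observed interaction.
R = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)

d_u = R.sum(axis=1, keepdims=True)        # user degrees |N(u)|
d_i = R.sum(axis=0, keepdims=True)        # item degrees |N(i)|
norm = R / np.sqrt(d_u * d_i)             # symmetric normalization 1/sqrt(|N(u)||N(i)|)

rng = np.random.default_rng(0)
item_emb = rng.normal(size=(3, 8))        # initial item embeddings
user_emb = norm @ item_emb                # users aggregate signals from interacted items
item_emb = norm.T @ user_emb              # items aggregate back from users

scores = user_emb @ item_emb.T            # inner-product scores used for ranking
print(scores.round(2))
```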

314 citations


Journal ArticleDOI
07 Mar 2020
TL;DR: The conclusion is that IoT and UAV are two of the most important technologies that transform traditional cultivation practices into a new perspective of intelligence in precision agriculture.
Abstract: Internet of Things (IoT) and Unmanned Aerial Vehicles (UAVs) are two hot technologies utilized in cultivation fields, which transform traditional farming practices into a new era of precision agriculture. In this paper, we survey the latest research on IoT and UAV technology applied in agriculture. We describe the main principles of IoT technology, including intelligent sensors, IoT sensor types, networks and protocols used in agriculture, as well as IoT applications and solutions in smart farming. Moreover, we present the role of UAV technology in smart agriculture by analyzing the applications of UAVs in various scenarios, including irrigation, fertilization, use of pesticides, weed management, plant growth monitoring, crop disease management, and field-level phenotyping. Furthermore, the utilization of UAV systems in complex agricultural environments is also analyzed. Our conclusion is that IoT and UAV are two of the most important technologies that transform traditional cultivation practices into a new perspective of intelligence in precision agriculture.

301 citations


Journal ArticleDOI
TL;DR: This survey extensively summarizes existing works in this field by categorizing them with respect to application types, control models and studied algorithms and discusses the challenges and open questions regarding deep RL-based transportation applications.
Abstract: Recent technological improvements have increased the quality of transportation. New data-driven approaches open up a new research direction for all control-based systems, e.g., in transportation, robotics, IoT, and power systems. Combining data-driven applications with transportation systems plays a key role in recent transportation applications. In this paper, the latest deep reinforcement learning (RL) based traffic control applications are surveyed. Specifically, traffic signal control (TSC) applications based on (deep) RL, which have been studied extensively in the literature, are discussed in detail. Different problem formulations, RL parameters, and simulation environments for TSC are discussed comprehensively. In the literature, there are also several autonomous driving applications studied with deep RL models. Our survey extensively summarizes existing works in this field by categorizing them with respect to application types, control models, and studied algorithms. In the end, we discuss the challenges and open questions regarding deep RL-based transportation applications.

234 citations


Posted Content
TL;DR: A "field guide" to the space of explainable deep learning for those in the AI/ML field who are uninitiated, designed as a starting point for those embarking on this research area.
Abstract: The deep neural network (DNN) is an indispensable machine learning tool for achieving human-level performance on many learning tasks. Yet, due to its black-box nature, it is inherently difficult to understand which aspects of the input data drive the decisions of the network. There are various real-world scenarios in which humans need to make actionable decisions based on the output of DNNs. Such decision support systems can be found in critical domains, such as legislation, law enforcement, etc. It is important that the humans making high-level decisions can be sure that the DNN decisions are driven by combinations of data features that are appropriate in the context of the deployment of the decision support system, and that the decisions made are legally or ethically defensible. Due to the incredible pace at which DNN technology is being developed, the development of new methods and studies on explaining the decision-making process of DNNs has blossomed into an active research field. A practitioner beginning to study explainable deep learning may be intimidated by the plethora of orthogonal directions the field is taking. This complexity is further exacerbated by the general confusion that exists in defining what it means to be able to explain the actions of a deep learning system and to evaluate a system's "ability to explain". To alleviate this problem, this article offers a "field guide" to deep learning explainability for those uninitiated in the field. The field guide: (i) discusses the traits of a deep learning system that researchers enhance in explainability research, (ii) places explainability in the context of other related deep learning research areas, and (iii) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning. The guide is designed as an easy-to-digest starting point for those just embarking in the field.

229 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide insight into the hierarchical motion planning problem and describe the basics of Deep Reinforcement Learning (DRL), and present state-of-the-art solutions systematized by different tasks and levels of autonomous driving, such as car-following, lane-keeping, trajectory following, merging, or driving in dense traffic.
Abstract: Academic research in the field of autonomous vehicles has reached high popularity in recent years, related to several topics such as sensor technologies, V2X communications, safety, security, decision making, control, and even legal and standardization rules. Besides classic control design approaches, Artificial Intelligence and Machine Learning methods are present in almost all of these fields. Another part of the research focuses on different layers of Motion Planning, such as strategic decisions, trajectory planning, and control. A wide range of techniques in Machine Learning itself have been developed, and this article describes one of these fields, Deep Reinforcement Learning (DRL). The paper provides insight into the hierarchical motion planning problem and describes the basics of DRL. The main elements of designing such a system are the modeling of the environment, the modeling abstractions, the description of the state and the perception models, the appropriate rewarding, and the realization of the underlying neural network. The paper describes vehicle models, simulation possibilities and computational requirements. Strategic decisions on different layers and the observation models, e.g., continuous and discrete state representations, grid-based, and camera-based solutions are presented. The paper surveys the state-of-the-art solutions systematized by the different tasks and levels of autonomous driving, such as car-following, lane-keeping, trajectory following, merging, or driving in dense traffic. Finally, open questions and future challenges are discussed.

208 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a review on the deep learning methods for prediction in video sequences, as well as mandatory background concepts and the most used datasets, and carefully analyze existing video prediction models organized according to a proposed taxonomy.
Abstract: The ability to predict, anticipate and reason about future outcomes is a key component of intelligent decision-making systems. In light of the success of deep learning in computer vision, deep-learning-based video prediction has emerged as a promising research direction. Defined as a self-supervised learning task, video prediction represents a suitable framework for representation learning, as it has demonstrated potential capabilities for extracting meaningful representations of the underlying patterns in natural videos. Motivated by the increasing interest in this task, we provide a review of the deep learning methods for prediction in video sequences. We first define the video prediction fundamentals, as well as mandatory background concepts and the most used datasets. Next, we carefully analyze existing video prediction models organized according to a proposed taxonomy, highlighting their contributions and their significance in the field. The summary of the datasets and methods is accompanied by experimental results that facilitate the assessment of the state of the art on a quantitative basis. The paper closes by drawing some general conclusions, identifying open research challenges, and pointing out future research directions.

141 citations


Posted Content
TL;DR: By mapping the challenges found to the steps of the machine learning deployment workflow, it is shown that practitioners face issues at each stage of the deployment process.
Abstract: In recent years, machine learning has received increased interest both as an academic research field and as a solution for real-world business problems. However, the deployment of machine learning models in production systems can present a number of issues and concerns. This survey reviews published reports of deploying machine learning solutions in a variety of use cases, industries and applications, and extracts practical considerations corresponding to stages of the machine learning deployment workflow. Our survey shows that practitioners face challenges at each stage of the deployment. The goal of this paper is to lay out a research agenda to explore approaches addressing these challenges.

139 citations


Journal ArticleDOI
TL;DR: This article points out the shortcomings and under-explored, yet key aspects of this field that are necessary to attain true sentiment understanding, and attempts to chart a possible course for this field that covers many overlooked and unanswered questions.
Abstract: Sentiment analysis as a field has come a long way since it was first introduced as a task nearly 20 years ago. It has widespread commercial applications in various domains like marketing, risk management, market research, and politics, to name a few. Given its saturation in specific subtasks -- such as sentiment polarity classification -- and datasets, there is an underlying perception that this field has reached its maturity. In this article, we discuss this perception by pointing out the shortcomings and under-explored, yet key aspects of this field that are necessary to attain true sentiment understanding. We analyze the significant leaps responsible for its current relevance. Further, we attempt to chart a possible course for this field that covers many overlooked and unanswered questions.

119 citations


Journal ArticleDOI
TL;DR: The issues and challenges that are related to extraction of different aspects and their relevant sentiments, relational mapping between aspects, interactions, dependencies and contextual-semantic relationships between different data objects for improved sentiment accuracy, and prediction of sentiment evolution dynamicity are emphasized.
Abstract: The domain of Aspect-based Sentiment Analysis, in which aspects are extracted, their sentiments are analyzed, and sentiments are evolved over time, is getting much attention with the increasing feedback of the public and customers on social media. The immense advancements in the field have urged researchers to devise new techniques and approaches, each addressing a different research question, to cope with upcoming issues and complex scenarios of Aspect-based Sentiment Analysis. Therefore, this survey emphasizes the issues and challenges that are related to the extraction of different aspects and their relevant sentiments, relational mapping between aspects, interactions, dependencies and contextual-semantic relationships between different data objects for improved sentiment accuracy, and prediction of sentiment evolution dynamicity. A rigorous overview of the recent progress is summarized based on whether the works contribute towards highlighting and mitigating the issue of Aspect Extraction, Aspect Sentiment Analysis or Sentiment Evolution. The reported performance for each scrutinized study of Aspect Extraction and Aspect Sentiment Analysis is also given, showing the quantitative evaluation of the proposed approach. Future research directions are proposed and discussed, by critically analysing the presented recent solutions, which will be helpful for researchers and beneficial for improving sentiment classification at aspect level.

89 citations


Proceedings Article
Lin Wang1, Kuk-Jin Yoon1
13 Jun 2020
TL;DR: In this article, a comprehensive survey on the recent progress of KD methods together with S-T frameworks typically used for vision tasks is provided, and the authors systematically analyze the research status of KD in vision applications.
Abstract: Deep neural models have, in recent years, been successful in almost every field. However, these models are huge, demanding heavy computation power. Besides, the performance boost is highly dependent on redundant labeled data. To achieve faster speeds and to handle the problems caused by the lack of labeled data, knowledge distillation (KD) has been proposed to transfer information learned from one model to another. KD is often characterized by the so-called ‘Student-Teacher’ (S-T) learning framework and has been broadly applied in model compression and knowledge transfer. This paper is about KD and S-T learning, which have been actively studied in recent years. First, we aim to provide explanations of what KD is and how/why it works. Then, we provide a comprehensive survey on the recent progress of KD methods together with S-T frameworks typically used for vision tasks. In general, we investigate some fundamental questions that have been driving this research area and thoroughly generalize the research progress and technical details. Additionally, we systematically analyze the research status of KD in vision applications. Finally, we discuss the potentials and open challenges of existing methods and prospect the future directions of KD and S-T learning.
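As background for readers unfamiliar with the S-T setup, the following is a minimal sketch of the classical soft-target distillation loss in the style of Hinton et al.; the temperature, weighting, and tensor shapes are illustrative defaults, not values taken from this survey.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target KD sketch: mix (i) the KL divergence between the student's and
    the teacher's temperature-softened predictions with (ii) the usual
    cross-entropy against the hard labels."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage with random logits for a batch of 8 samples and 10 classes.
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
loss = distillation_loss(s, t, y)
loss.backward()
```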

84 citations


Journal ArticleDOI
TL;DR: This paper provides a broad and systematic review of research in the field of text summarization published from 2008 to 2019, and describes the techniques and methods that researchers often use as points of comparison and as a basis for developing new methods.

Journal ArticleDOI
Benoit Vicedo1
TL;DR: The notion of a classical dihedral affine Gaudin model, associated with an untwisted affine Kac–Moody algebra equipped with a dihedral group action through (anti-)linear automorphisms, is introduced, and a broad family of classical integrable field theories is shown to be recast as examples of such models.
Abstract: We introduce the notion of a classical dihedral affine Gaudin model, associated with an untwisted affine Kac–Moody algebra $\widetilde{\mathfrak{g}}$ equipped with an action of the dihedral group $D_{2T}$, $T \geq 1$, through (anti-)linear automorphisms. We show that a very broad family of classical integrable field theories can be recast as examples of such classical dihedral affine Gaudin models. Among these are the principal chiral model on an arbitrary real Lie group $G_0$ and the $\mathbb{Z}_T$-graded coset $\sigma$-model on any coset of $G_0$ defined in terms of an order $T$ automorphism of its complexification. Most of the multi-parameter integrable deformations of these $\sigma$-models recently constructed in the literature provide further examples. The common feature shared by all these integrable field theories, which makes it possible to reformulate them as classical dihedral affine Gaudin models, is the fact that they are non-ultralocal. In particular, we also obtain affine Toda field theory in its lesser-known non-ultralocal formulation as another example of this construction. We propose that the interpretation of a given classical non-ultralocal integrable field theory as a classical dihedral affine Gaudin model provides a natural setting within which to address its quantisation. At the same time, it may also furnish a general framework for understanding the massive ordinary differential equations (ODE)/integrals of motion (IM) correspondence, since the known examples of integrable field theories for which such a correspondence has been formulated can all be viewed as dihedral affine Gaudin models.

Posted Content
TL;DR: This work proposes linear filtration as an intuitive, computationally efficient sanitization method for class-wide deletion requests for classification models (e.g. deep neural networks).
Abstract: Recently enacted legislation grants individuals certain rights to decide in what fashion their personal data may be used, and in particular a "right to be forgotten". This poses a challenge to machine learning: how to proceed when an individual retracts permission to use data which has been part of the training process of a model? From this question emerges the field of machine unlearning, which could be broadly described as the investigation of how to "delete training data from models". Our work complements this direction of research for the specific setting of class-wide deletion requests for classification models (e.g. deep neural networks). As a first step, we propose linear filtration as an intuitive, computationally efficient sanitization method. Our experiments demonstrate benefits in an adversarial setting over naive deletion schemes.
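To make the idea of a linear, post-hoc sanitization concrete, here is a deliberately simple sketch in which a fixed matrix moves the deleted class's probability mass uniformly onto the remaining classes; the filtration operator actually proposed in the paper may be constructed differently, so treat this purely as an illustration of the general idea.

```python
import numpy as np

def linear_deletion_filter(probs, deleted_class):
    """Hypothetical sketch: filter a class out of classifier predictions with a
    single fixed linear map. The matrix A keeps every other class and spreads
    the deleted class's probability mass uniformly over the remaining classes,
    so each row still sums to one."""
    k = probs.shape[1]
    A = np.eye(k)
    A[:, deleted_class] = 1.0 / (k - 1)        # where the deleted mass goes
    A[deleted_class, deleted_class] = 0.0      # nothing stays on the deleted class
    return probs @ A.T

p = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3]])
print(linear_deletion_filter(p, deleted_class=1))
# e.g. first row -> [0.75, 0.0, 0.25]
```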

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the vertex operator algebra of superconformal field theories and showed that they have a completely uniform description, parameterized by the dual Coxeter number of the corresponding global symmetry group.
Abstract: We analyze the $\mathcal{N}=2$ superconformal field theories that arise when a pair of D3-branes probe an F-theory singularity from the perspective of the associated vertex operator algebra. We identify these vertex operator algebras for all cases; we find that they have a completely uniform description, parameterized by the dual Coxeter number of the corresponding global symmetry group. We further present free field realizations for these algebras in the style of recent work by three of the authors. These realizations transparently reflect the algebraic structure of the Higgs branches of these theories. We find fourth-order linear modular differential equations for the vacuum characters/Schur indices of these theories, which are again uniform across the full family of theories and parameterized by the dual Coxeter number. We comment briefly on expectations for the still higher-rank cases.

Posted Content
TL;DR: This work explores the various DT features and current approaches, the shortcomings and reasons behind the delay in the implementation and adoption of digital twin, and identifies novel research questions that will help to better understand and advance the theory and practice of digital twins.
Abstract: Digital Twin was introduced over a decade ago, as an innovative all-encompassing tool, with perceived benefits including real-time monitoring, simulation and forecasting. However, the theoretical framework and practical implementations of digital twins (DT) are still far from this vision. Although successful implementations exist, sufficient implementation details are not publicly available, therefore it is difficult to assess their effectiveness, draw comparisons and jointly advance the DT methodology. This work explores the various DT features and current approaches, the shortcomings and reasons behind the delay in the implementation and adoption of digital twin. Advancements in machine learning, internet of things and big data have contributed hugely to the improvements in DT with regards to its real-time monitoring and forecasting properties. Despite this progress and individual company-based efforts, certain research gaps exist in the field, which have caused delay in the widespread adoption of this concept. We reviewed relevant works and identified that the major reasons for this delay are the lack of a universal reference framework, domain dependence, security concerns of shared data, reliance of digital twin on other technologies, and lack of quantitative metrics. We define the necessary components of a digital twin required for a universal reference framework, which also validate its uniqueness as a concept compared to similar concepts like simulation, autonomous systems, etc. This work further assesses the digital twin applications in different domains and the current state of machine learning and big data in it. It thus answers and identifies novel research questions, both of which will help to better understand and advance the theory and practice of digital twins.

Posted Content
TL;DR: The authors summarize the rapidly growing literature of contest theory on affirmative action and other policies that level the playing field, outline research on player and contest designer behavior under a multitude of policy mechanisms, and discuss the theoretical, experimental, and empirical results in relation to some of the common debates surrounding AA.
Abstract: The heterogeneous abilities of the players in various competitive contexts often lead to undesirable outcomes such as low effort provision, lack of diversity, and inequality. A range of policies are implemented to mitigate such issues by enforcing competitive balance, i.e., leveling the playing field. While a number of such policies are aimed at increasing competition, affirmative action (AA) policies have historically been practiced as an ethical response to discrimination against particular social groups among winners. This survey summarizes the rapidly growing literature of contest theory on AA and other policies that level the playing field. Using a general theoretical structure, we outline research on player and contest designer behavior under a multitude of policy mechanisms, and discuss the theoretical, experimental, and empirical results in relation to some of the common debates surrounding AA.

Journal ArticleDOI
Linheng Li1, Jing Gan1, Xinkai Ji1, Xu Qu1, Bin Ran1 
TL;DR: A driving risk potential field-based car-following model that fully considers the dynamic effect of the vehicle’s acceleration and steering angle and can reasonably explain the influencing factors between driver types and lane-changing safety conditions in practice is proposed.
Abstract: This paper proposes a new dynamic driving risk potential field model under the connected and automated vehicles (CAVs) environment that fully considers the dynamic effect of the vehicle's acceleration and steering angle. The statistical analysis of the model's parameters reveals that acceleration and steering angle will directly affect the distribution of the driving risk potential field and that this strong correlation should not be ignored if one is interested in the vehicle's microscopic motion behavior. We further develop a driving risk potential field-based car-following model (DRPFM) to remedy the lack of acceleration consideration under the conventional environment, whose parameters are calibrated by filtered I-80 NGSIM data with frequent traffic oscillations. Simulation results indicate that our proposed DRPFM model provides a good description of car-following behavior and outperforms two classical car-following models (the Optimal Velocity Model and the Intelligent Driver Model) in frequent oscillation phases, due to our consideration of potential real-time acceleration data acquisition under the CAVs environment. In addition, this DRPFM model is applied to deduce the safety conditions for vehicle lane-changing. The analysis results show that this model can reasonably explain the influencing factors between driver types and lane-changing safety conditions in practice.
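For context, one of the two classical baselines mentioned above, the Intelligent Driver Model, computes acceleration in its standard textbook form as (recalled here for orientation; the symbols and parameter values are not taken from this paper):

$$\dot{v} = a_{\max}\left[1-\left(\frac{v}{v_0}\right)^{\delta}-\left(\frac{s^{*}(v,\Delta v)}{s}\right)^{2}\right], \qquad s^{*}(v,\Delta v)=s_0+vT+\frac{v\,\Delta v}{2\sqrt{a_{\max}\,b}},$$

where $v$ is the speed, $\Delta v$ the approach rate to the leader, $s$ the gap, $v_0$ the desired speed, $T$ the desired time headway, $s_0$ the minimum gap, $b$ the comfortable deceleration, and $\delta$ is typically set to 4.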

Posted Content
TL;DR: This survey, the first of its kind, systematically overviews the recent deep learning based MDS models, proposes a novel taxonomy to summarize the design strategies of neural networks, and conducts a comprehensive summary of the state of the art.
Abstract: Multi-document summarization (MDS) is an effective tool for information aggregation which generates an informative and concise summary from a cluster of topic-related documents. Our survey, the first of its kind, structurally overviews the recent deep learning based multi-document summarization models via a proposed taxonomy. In particular, we propose a novel mechanism to summarize the design strategies of neural networks and conduct a comprehensive summary of the state of the art. We highlight the differences among various objective functions, which are rarely discussed in the existing literature. Finally, we propose several future directions pertaining to this new and exciting development of the field.

Posted Content
TL;DR: The aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys, which focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies.
Abstract: Deep learning has had a remarkable impact on different scientific disciplines during the last years. This was demonstrated in numerous tasks, where deep learning algorithms were able to outperform the cutting-edge methods, like in image processing and analysis. Moreover, deep learning delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even contexts where deep learning outperformed humans, like object recognition and gaming. Another field in which this development is showing a huge potential is the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data is not only collected in clinical centers, like hospitals, but also by general practitioners, mobile healthcare apps, or online websites, just to name a few. This trend has resulted in new, massive research efforts during the last years. In Q2/2020, the search engine PubMed already returned over 11,000 results for the search term 'deep learning', and around 90% of these publications are from the last three years. Hence, a complete overview of the field of 'medical deep learning' is almost impossible to obtain, and getting a full overview of medical sub-fields becomes increasingly more difficult. Nevertheless, several review and survey articles about medical deep learning have been presented within the last years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this contribution is to provide a very first high-level, systematic meta-review of medical deep learning surveys.

Journal ArticleDOI
TL;DR: This paper solves the distributivity equations between 2-uninorms and overlap (grouping) functions, two classes of special aggregation functions found to possess wide applications in image processing, decision making, classification and fuzzy detection problems.
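For reference, distributivity equations of this kind are usually stated in the following generic form (a standard definition recalled here; which operation distributes over which, and on which side, follows the paper's own setup): for all $x, y, z \in [0,1]$,

$$F\bigl(x, G(y, z)\bigr) = G\bigl(F(x, y), F(x, z)\bigr),$$

where $F$ and $G$ stand for the two aggregation functions under consideration, here a 2-uninorm and an overlap or grouping function.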

Journal ArticleDOI
TL;DR: A novel gated stacked target-related autoencoder (GSTAE) is proposed for improving modeling performance in view of the above two issues by adding prediction errors of target values into the loss function when executing a layerwise pretraining procedure.
Abstract: These days, data-driven soft sensors have been widely applied to estimate the difficult-to-measure quality variables in industrial processes. How to extract effective feature representations from complex process data remains a difficult and active topic in the soft sensing application field. Deep learning (DL), which has made great progress in many fields recently, has been used for process monitoring and quality prediction purposes for its outstanding nonlinear modeling and feature extraction abilities. In this work, the deep stacked autoencoder (SAE) is introduced to construct a soft sensor model. Nevertheless, conventional SAE-based methods do not take into account information related to target values in the pretraining stage and just use the feature representations in the last hidden layer for the final prediction. To this end, a novel gated stacked target-related autoencoder (GSTAE) is proposed for improving modeling performance in view of the above two issues. By adding prediction errors of target values into the loss function when executing a layerwise pretraining procedure, the target-related information is used to guide the feature learning process. Besides, gated neurons are utilized to control the information flow from different layers to the final output neuron, taking full advantage of different levels of abstraction and quantifying their contributions. Finally, the effectiveness and feasibility of the proposed approach are verified in two real industrial cases.
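The following is a minimal sketch of the target-related pretraining idea described above: one autoencoder layer whose layerwise loss mixes input reconstruction with the prediction error on the quality target. Layer sizes, the weighting factor beta, and all module and function names are illustrative assumptions, and the gating mechanism across multiple layers is omitted.

```python
import torch
import torch.nn as nn

class TargetRelatedAELayer(nn.Module):
    """One layer of a stacked target-related autoencoder (hypothetical sketch):
    the hidden code is trained to both reconstruct the input and predict the
    difficult-to-measure quality variable, keeping features target-related."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, in_dim)     # reconstruct the input
        self.target_head = nn.Linear(hidden_dim, 1)      # predict the quality variable

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h), self.target_head(h)

def pretrain_loss(x, y, layer, beta=0.5):
    h, x_hat, y_hat = layer(x)
    recon = nn.functional.mse_loss(x_hat, x)
    target = nn.functional.mse_loss(y_hat, y)
    return recon + beta * target                         # target-related pretraining loss

x = torch.randn(32, 20)          # 32 samples of 20 process variables
y = torch.randn(32, 1)           # difficult-to-measure quality variable
layer = TargetRelatedAELayer(20, 10)
loss = pretrain_loss(x, y, layer)
loss.backward()
```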

Posted Content
TL;DR: The necessary logical notation and the main ILP learning settings are introduced, the main building blocks of an ILP system are described, and several ILP systems are compared along several dimensions.
Abstract: Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises given training examples. In contrast to most forms of machine learning, ILP can learn human-readable hypotheses from small amounts of data. As ILP approaches 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main ILP learning settings. We describe the main building blocks of an ILP system. We compare several ILP systems on several dimensions. We describe in detail four systems (Aleph, TILDE, ASPAL, and Metagol). We document some of the main application areas of ILP. Finally, we summarise the current limitations and outline promising directions for future research.
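As a toy illustration of the setting described above (an invented example, not taken from the article), an ILP system given background facts about parent/2 and a positive example of grandparent/2 might induce the following hypothesis:

$$\begin{aligned}
\text{Background: } & \textit{parent}(\textit{ann}, \textit{bob}),\ \textit{parent}(\textit{bob}, \textit{carol}) \\
\text{Positive example: } & \textit{grandparent}(\textit{ann}, \textit{carol}) \\
\text{Induced hypothesis: } & \textit{grandparent}(X, Y) \leftarrow \textit{parent}(X, Z) \wedge \textit{parent}(Z, Y)
\end{aligned}$$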

Journal ArticleDOI
TL;DR: In this article, a proof is given of the exact formula, conjectured by Fyodorov and Bouchaud in 2008, for the density of the total mass of (subcritical) Gaussian multiplicative chaos (GMC) associated to the Gaussian free field (GFF) on the unit circle.
Abstract: In a remarkable paper in 2008, Fyodorov and Bouchaud conjectured an exact formula for the density of the total mass of (subcritical) Gaussian multiplicative chaos (GMC) associated to the Gaussian free field (GFF) on the unit circle. In this paper we will give a proof of this formula. In the mathematical literature this is the first occurrence of an explicit probability density for the total mass of a GMC measure. The key observation of our proof is that the negative moments of the total mass of GMC determine its law and are equal to one-point correlation functions of Liouville conformal field theory in the disk recently defined by Huang, Rhodes, and Vargas. The rest of the proof then consists in implementing rigorously the framework of conformal field theory (Belavin–Polyakov–Zamolodchikov equations for degenerate field insertions) in a probabilistic setting to compute the negative moments. Finally, we will discuss applications to random matrix theory, asymptotics of the maximum of the GFF, and tail expansions of GMC.
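For orientation, the Fyodorov–Bouchaud prediction is commonly quoted in the following form (recalled here from the standard statement of the result, so the normalisation conventions may differ slightly from the paper's): for $\gamma \in (0, \sqrt{2})$, the total mass $Y_\gamma$ of the GMC measure on the unit circle satisfies

$$Y_\gamma \;\overset{d}{=}\; \frac{\mathcal{E}^{-\gamma^2/2}}{\Gamma\!\left(1-\tfrac{\gamma^2}{2}\right)}, \qquad \mathbb{E}\bigl[Y_\gamma^{\,q}\bigr] = \frac{\Gamma\!\left(1-q\tfrac{\gamma^2}{2}\right)}{\Gamma\!\left(1-\tfrac{\gamma^2}{2}\right)^{q}} \quad \text{for } q < \tfrac{2}{\gamma^2},$$

where $\mathcal{E}$ is a standard exponential random variable.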

Journal ArticleDOI
TL;DR: In this paper, an expression for the spectral density of two-dimensional unitary conformal field theories, with no extended chiral algebra and c > 1, is proposed in terms of the light spectrum of the theory.
Abstract: A classical result from analytic number theory by Rademacher gives an exact formula for the Fourier coefficients of modular forms of non-positive weight. We apply similar techniques to study the spectrum of two-dimensional unitary conformal field theories, with no extended chiral algebra and c > 1. By exploiting the full modular constraints of the partition function we propose an expression for the spectral density in terms of the light spectrum of the theory. The expression is given in terms of a Rademacher expansion, which converges for spin j ≠ 0. For a finite number of light operators the expression agrees with a variant of the Poincare construction developed by Maloney, Witten and Keller. With this framework we study the presence of negative density of states in the partition function dual to pure gravity, and propose a scenario to cure this negativity.

Posted Content
TL;DR: A Reinforcement Learning (RL) algorithm to solve infinite horizon asymptotic Mean Field Game (MFG) and Mean Field Control (MFC) problems is presented, described as a unified two-timescale Mean Field Q-learning.
Abstract: We present a Reinforcement Learning (RL) algorithm to solve infinite horizon asymptotic Mean Field Game (MFG) and Mean Field Control (MFC) problems. Our approach can be described as a unified two-timescale Mean Field Q-learning: the same algorithm can learn either the MFG or the MFC solution by simply tuning a parameter. The algorithm is in discrete time and space, where the agent not only provides an action to the environment but also a distribution of the state in order to take into account the mean field feature of the problem. Importantly, we assume that the agent cannot observe the population's distribution and needs to estimate it in a model-free manner. The asymptotic MFG and MFC problems are presented in continuous time and space, and compared with classical (non-asymptotic or stationary) MFG and MFC problems. They lead to explicit solutions in the linear-quadratic (LQ) case that are used as benchmarks for the results of our algorithm.
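A rough, self-contained sketch of the two-timescale idea follows: a tabular Q-function and an estimate of the population distribution are updated with two different learning rates, and (per the abstract) the relative speed of the two updates is the knob that selects the MFG or MFC solution. The toy environment, reward, and rate values below are placeholders, not the paper's benchmark.

```python
import numpy as np

n_states, n_actions = 5, 3
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
mu = np.full(n_states, 1.0 / n_states)   # model-free estimate of the population distribution

gamma = 0.95
rho_Q, rho_mu = 0.1, 0.01                # the two timescales; their ratio is the tuning knob

def reward(s, a, mu):                    # placeholder mean-field reward
    return -abs(s - n_states * mu[s]) - 0.1 * a

def step(s, a):                          # placeholder dynamics
    return (s + a - 1) % n_states

s = 0
for t in range(10_000):
    a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next = step(s, a)
    r = reward(s, a, mu)
    # Q update (faster timescale)
    Q[s, a] += rho_Q * (r + gamma * Q[s_next].max() - Q[s, a])
    # distribution update (slower timescale), estimated from visited states
    e = np.zeros(n_states)
    e[s_next] = 1.0
    mu = (1 - rho_mu) * mu + rho_mu * e
    s = s_next

print(Q.round(2))
```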

Posted Content
Yao Jiang1, Tao Zhou, Ge-Peng Ji1, Keren Fu1, Qijun Zhao, Deng-Ping Fan 
TL;DR: This paper provides the first comprehensive review and benchmark for light field SOD, which has long been lacking in the saliency community; benchmarking results are publicly available at https://github.com/kerenfu/LFSOD-Survey.
Abstract: Salient object detection (SOD) is a long-standing research topic in computer vision and has drawn an increasing amount of research interest in the past decade. This paper provides the first comprehensive review and benchmark for light field SOD, which has long been lacking in the saliency community. Firstly, we introduce preliminary knowledge on light fields, including theory and data forms, and then review existing studies on light field SOD, covering ten traditional models, seven deep learning-based models, one comparative study, and one brief review. Existing datasets for light field SOD are also summarized with detailed information and statistical analyses. Secondly, we benchmark seven representative light field SOD models together with several cutting-edge RGB-D SOD models on four widely used light field datasets, from which insightful discussions and analyses, including a comparison between light field SOD and RGB-D SOD models, are achieved. Besides, due to the inconsistency of datasets in their current forms, we further generate complete data and supplement focal stacks, depth maps and multi-view images for the inconsistent datasets, making them consistent and unified. Our supplemental data makes a universal benchmark possible. Lastly, because light field SOD is quite a special problem attributed to its diverse data representations and high dependency on acquisition hardware, making it differ greatly from other saliency detection tasks, we provide nine hints into the challenges and future directions, and outline several open issues. We hope our review and benchmarking could serve as a catalyst to advance research in this field. All the materials including collected models, datasets, benchmarking results, and supplemented light field datasets will be publicly available on our project site this https URL.

Journal ArticleDOI
TL;DR: In this paper, the authors give a short overview of the field of Computational Paralinguistics, its history and exemplary use cases, as well as (de-)anonymization and peculiarities of speech and text data, and propose rules for good practice in the field, such as choosing the right performance measure and accounting for representativity and interpretability.
Abstract: With the advent of ‘heavy Artificial Intelligence’ - big data, deep learning, and ubiquitous use of the internet, ethical considerations are widely dealt with in public discussions and governmental bodies. Within Computational Paralinguistics with its manifold topics and possible applications (modelling of long-term, medium-term, and short-term traits and states such as personality, emotion, or speech pathology), we have not yet seen that many contributions. In this article, we try to set the scene by (1) giving a short overview of ethics and privacy, (2) describing the field of Computational Paralinguistics, its history and exemplary use cases, as well as (de-)anonymisation and peculiarities of speech and text data, and (3) proposing rules for good practice in the field, such as choosing the right performance measure, and accounting for representativity and interpretability.

Journal ArticleDOI
TL;DR: The paper introduces prediction methods and divides them into three main categories: memory-based methods, model-based methods, and Collaborative Filtering methods combined with other methods.
Abstract: Nowadays, there are many Web services with similar functionality on the Internet. Users consider the Quality of Service (QoS) of the services to select the best one from among them. The prediction of QoS values of Web services and the recommendation of the best service to users based on these values is one of the major challenges in the Web service area. Major studies in this field use collaborative filtering based methods for prediction. The paper introduces prediction methods and divides them into three main categories: memory-based methods, model-based methods, and Collaborative Filtering (CF) methods combined with other methods. In each category, some of the most prominent studies are introduced, and then the problems and benefits of each category are reviewed. Finally, we discuss these methods and propose suggestions for future work.
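To illustrate the first of the three categories, here is a minimal sketch of a generic memory-based (user-based) CF predictor that fills in a missing QoS value from Pearson-similar users; the toy matrix and the choice of similarity are textbook defaults, not the method of any particular study surveyed.

```python
import numpy as np

# Rows: users, columns: services (e.g. observed response times); np.nan = missing.
qos = np.array([[0.9, 1.2, np.nan],
                [1.0, 1.1, 2.0],
                [2.5, np.nan, 2.2]])

def predict(qos, user, service, k=2):
    """Predict qos[user, service] from the k most Pearson-similar users."""
    mask_u = ~np.isnan(qos[user])
    sims, neighbours = [], []
    for v in range(qos.shape[0]):
        if v == user or np.isnan(qos[v, service]):
            continue
        common = mask_u & ~np.isnan(qos[v])          # services rated by both users
        if common.sum() < 2:
            continue
        sims.append(np.corrcoef(qos[user, common], qos[v, common])[0, 1])
        neighbours.append(v)
    order = np.argsort(sims)[::-1][:k]               # top-k most similar neighbours
    num = sum(sims[i] * qos[neighbours[i], service] for i in order)
    den = sum(abs(sims[i]) for i in order)
    return num / den if den > 0 else np.nanmean(qos[:, service])

print(predict(qos, user=0, service=2))
```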

Posted Content
TL;DR: This is the first systematic study on the basic features used in BCSA by leveraging interpretable feature engineering on a large-scale benchmark and shows that a simple interpretable model with a few basic features can achieve a comparable result to that of recent deep learning-based approaches.
Abstract: Binary code similarity analysis (BCSA) is widely used for diverse security applications such as plagiarism detection, software license violation detection, and vulnerability discovery. Despite the surging research interest in BCSA, it is significantly challenging to perform new research in this field for several reasons. First, most existing approaches focus only on the end results, namely, increasing the success rate of BCSA, by adopting uninterpretable machine learning. Moreover, they utilize their own benchmark sharing neither the source code nor the entire dataset. Finally, researchers often use different terminologies or even use the same technique without citing the previous literature properly, which makes it difficult to reproduce or extend previous work. To address these problems, we take a step back from the mainstream and contemplate fundamental research questions for BCSA. Why does a certain technique or a feature show better results than the others? Specifically, we conduct the first systematic study on the basic features used in BCSA by leveraging interpretable feature engineering on a large-scale benchmark. Our study reveals various useful insights on BCSA. For example, we show that a simple interpretable model with a few basic features can achieve a comparable result to that of recent deep learning-based approaches. Furthermore, we show that the way we compile binaries or the correctness of underlying binary analysis tools can significantly affect the performance of BCSA. Lastly, we make all our source code and benchmark public and suggest future directions in this field to help further research.

Journal ArticleDOI
TL;DR: Many kinds of barriers and biases revealed in these studies could potentially be ameliorated through changes to the OSS software environments and tools.
Abstract: Previous research has revealed that newcomer women are disproportionately affected by gender-biased barriers in open source software (OSS) projects. However, this research has focused mainly on social/cultural factors, neglecting the software tools and infrastructure. To shed light on how OSS tools and infrastructure might factor into OSS barriers to entry, we conducted two studies: (1) a field study with five teams of software professionals, who worked through five use cases to analyze the tools and infrastructure used in their OSS projects; and (2) a diary study with 22 newcomers (9 women and 13 men) to investigate whether the barriers matched the ones identified by the software professionals. The field study produced a bleak result: software professionals found gender biases in 73% of all the newcomer barriers they identified. Further, the diary study confirmed these results: women newcomers encountered gender biases in 63% of the barriers they faced. Fortunately, many of the kinds of barriers and biases revealed in these studies could potentially be ameliorated through changes to the OSS software environments and tools.