Institution
Telecom SudParis
About: Telecom SudParis is an institution based in France. It is known for its research contributions in the topics of Cloud computing and Context (language use). The organization has 805 authors who have published 2111 publications receiving 24734 citations. The organization is also known as: Telecom sud paris & TSP.
Papers published on a yearly basis
Papers
Affiliations: Eindhoven University of Technology, Queensland University of Technology, Capgemini, University of Rome Tor Vergata, Humboldt University of Berlin, Software AG, University of Padua, Polytechnic University of Catalonia, Hewlett-Packard, Ghent University, New Mexico State University, IBM, University of Milan, University of Tartu, University of Vienna, Technical University of Lisbon, Telecom SudParis, Rabobank, Infosys, University of Calabria, Fujitsu, Pennsylvania State University, University of Bari, University of Bologna, Vienna University of Economics and Business, Free University of Bozen-Bolzano, Stevens Institute of Technology, Indian Council of Agricultural Research, Pontifical Catholic University of Chile, University of Haifa, Ulsan National Institute of Science and Technology, Cranfield University, Katholieke Universiteit Leuven, Deloitte, Tsinghua University, University of Innsbruck, Hasso Plattner Institute
TL;DR: This manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users to increase the maturity of process mining as a new tool to improve the design, control, and support of operational business processes.
Abstract: Process mining techniques are able to extract knowledge from event logs commonly available in today’s information systems. These techniques provide new means to discover, monitor, and improve processes in a variety of application domains. There are two main drivers for the growing interest in process mining. On the one hand, more and more events are being recorded, thus, providing detailed information about the history of processes. On the other hand, there is a need to improve and support business processes in competitive and rapidly changing environments. This manifesto is created by the IEEE Task Force on Process Mining and aims to promote the topic of process mining. Moreover, by defining a set of guiding principles and listing important challenges, this manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users. The goal is to increase the maturity of process mining as a new tool to improve the (re)design, control, and support of operational business processes.
1,135 citations
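As a rough illustration of the kind of discovery step the manifesto describes, the sketch below builds a directly-follows graph from an event log, one of the simplest building blocks of process discovery. It is a minimal sketch in plain Python; the toy event log and activity names are hypothetical and not taken from the manifesto.

```python
from collections import Counter

# Hypothetical toy event log: each trace is the ordered list of activities
# recorded for one case (e.g. one purchase order). Names are illustrative only.
event_log = [
    ["register", "check stock", "ship", "invoice"],
    ["register", "check stock", "invoice", "ship"],
    ["register", "reject"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

dfg = directly_follows(event_log)
for (a, b), n in sorted(dfg.items()):
    print(f"{a} -> {b}: {n}")
```

Real process-mining tools go well beyond this and derive full process models, conformance checks, and performance diagnostics from such relations.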
TL;DR: It is shown empirically that, in addition to improving generalization, label smoothing improves model calibration, which can significantly improve beam search, and that if a teacher network is trained with label smoothing, knowledge distillation into a student network is much less effective.
Abstract: The generalization and learning speed of a multi-class neural network can often be significantly improved by using soft targets that are a weighted average of the hard targets and the uniform distribution over labels. Smoothing the labels in this way prevents the network from becoming over-confident and label smoothing has been used in many state-of-the-art models, including image classification, language translation and speech recognition. Despite its widespread use, label smoothing is still poorly understood. Here we show empirically that in addition to improving generalization, label smoothing improves model calibration which can significantly improve beam-search. However, we also observe that if a teacher network is trained with label smoothing, knowledge distillation into a student network is much less effective. To explain these observations, we visualize how label smoothing changes the representations learned by the penultimate layer of the network. We show that label smoothing encourages the representations of training examples from the same class to group in tight clusters. This results in loss of information in the logits about resemblances between instances of different classes, which is necessary for distillation, but does not hurt generalization or calibration of the model's predictions.
971 citations
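The soft targets described in the abstract, a weighted average of the hard targets and the uniform distribution over labels, are easy to write down explicitly. The NumPy sketch below is a minimal illustration; the function name and the smoothing weight alpha=0.1 are illustrative choices, not taken from the paper.

```python
import numpy as np

def smooth_labels(hard_targets, num_classes, alpha=0.1):
    """Mix one-hot targets with the uniform distribution over labels.

    hard_targets: array of integer class indices, shape (batch,)
    alpha: smoothing weight; alpha=0 recovers the usual one-hot targets.
    """
    one_hot = np.eye(num_classes)[hard_targets]
    uniform = np.full((len(hard_targets), num_classes), 1.0 / num_classes)
    return (1.0 - alpha) * one_hot + alpha * uniform

# Example: three classes, smoothing weight 0.1
targets = np.array([0, 2])
print(smooth_labels(targets, num_classes=3, alpha=0.1))
# the row for class 0 becomes roughly [0.933, 0.033, 0.033]
```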
27 Mar 2014
TL;DR: Integrating IoT with cloud computing is not that simple and raises some key issues; those key issues, along with their respective potential solutions, are highlighted in this paper.
Abstract: With the trend going on in ubiquitous computing, everything is going to be connected to the Internet and its data will be used for various progressive purposes, creating not only information from it, but also knowledge and even wisdom. The Internet of Things (IoT) is becoming so pervasive that it is important to integrate it with cloud computing, both because of the amount of data IoT devices can generate and their need for virtual resources and storage capacity, and to make it possible to create more value from the data they generate and to develop smart applications for users. This integration of IoT and cloud computing is referred to in this paper as the Cloud of Things. Integrating IoT with cloud computing is not that simple and raises some key issues; those key issues, along with their respective potential solutions, are highlighted in this paper.
394 citations
TL;DR: An adaptive algorithm is proposed that iteratively updates both the weights and component parameters of a mixture importance sampling density so as to optimise the performance of importance sampling, as measured by an entropy criterion.
Abstract: In this paper, we propose an adaptive algorithm that iteratively updates both the weights and component parameters of a mixture importance sampling density so as to optimise the performance of importance sampling, as measured by an entropy criterion. The method, called M-PMC, is shown to be applicable to a wide class of importance sampling densities, which includes in particular mixtures of multivariate Student t distributions. The performance of the proposed scheme is studied on both artificial and real examples, highlighting in particular the benefit of a novel Rao-Blackwellisation device which can be easily incorporated in the updating scheme.
302 citations
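The sketch below illustrates the general adaptive mixture importance sampling idea in one dimension with Gaussian components: sample from the current mixture, compute self-normalised importance weights, and update the mixture weights and means using Rao-Blackwellised component responsibilities. It is a simplified illustration under assumptions of my own (1D, Gaussian rather than multivariate Student t components, an illustrative bimodal target, means-only updates), not the paper's M-PMC algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_logpdf(x):
    # Illustrative unnormalised bimodal target: exp(-(x+3)^2/2) + exp(-(x-3)^2/2)
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Initial Gaussian mixture proposal: weights, means, standard deviations
weights = np.array([0.5, 0.5])
means = np.array([-1.0, 1.0])
sigmas = np.array([2.0, 2.0])
n_samples = 5000

for iteration in range(10):
    # 1. Sample from the current mixture proposal
    comp = rng.choice(len(weights), size=n_samples, p=weights)
    x = rng.normal(means[comp], sigmas[comp])

    # 2. Self-normalised importance weights
    proposal_pdf = sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))
    iw = np.exp(target_logpdf(x)) / proposal_pdf
    iw /= iw.sum()

    # 3. Component responsibilities (Rao-Blackwellised: use the component
    #    densities rather than the sampled component labels)
    resp = np.stack([w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas)])
    resp /= resp.sum(axis=0)

    # 4. Update mixture weights and means from the importance-weighted samples
    weights = resp @ iw
    means = (resp * x) @ iw / weights
    weights /= weights.sum()

print("adapted weights:", weights)
print("adapted means:", means)
```

Run on this toy target, the component means drift toward the two modes near -3 and 3, which is the qualitative behaviour such adaptive schemes aim for.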
TL;DR: In this article, the convergence of a class of distributed constrained non-convex optimization algorithms in multi-agent systems is studied, and it is proved that consensus is asymptotically achieved in the network and that the algorithm converges to the set of Karush-Kuhn-Tucker points.
Abstract: We introduce a new framework for the convergence analysis of a class of distributed constrained non-convex optimization algorithms in multi-agent systems. The aim is to search for local minimizers of a non-convex objective function which is supposed to be a sum of local utility functions of the agents. The algorithm under study consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. Under the assumption of decreasing stepsize, it is proved that consensus is asymptotically achieved in the network and that the algorithm converges to the set of Karush-Kuhn-Tucker points. As an important feature, the algorithm does not require the double-stochasticity of the gossip matrices. It is in particular suitable for use in a natural broadcast scenario for which no feedback messages between agents are required. It is proved that our results also hold if the number of communications in the network per unit of time vanishes at moderate speed as time increases, allowing potential savings of the network's energy. Applications to power allocation in wireless ad-hoc networks are discussed. Finally, we provide numerical results which sustain our claims.
294 citations
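The two-step structure described in the abstract, a local stochastic gradient step followed by a gossip step, can be illustrated on a toy one-dimensional problem. The sketch below uses quadratic local objectives and a fixed ring-shaped gossip matrix chosen for simplicity; the paper's setting is more general and, notably, does not require doubly stochastic gossip matrices. All names and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_agents = 5
# Illustrative local objectives: f_i(x) = 0.5 * (x - c_i)^2, so the minimiser
# of the network-wide sum of the f_i is the average of the c_i.
c = rng.normal(size=n_agents)

def local_stochastic_grad(i, x):
    # Gradient of f_i plus illustrative noise (the "stochastic" part)
    return (x - c[i]) + 0.1 * rng.normal()

# Simple fixed gossip matrix over a ring: each agent averages with its neighbours
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = rng.normal(size=n_agents)           # one local estimate per agent
for t in range(1, 2001):
    step = 1.0 / t                       # decreasing stepsize, as in the paper
    # 1. Local stochastic gradient descent at each agent
    x = x - step * np.array([local_stochastic_grad(i, x[i]) for i in range(n_agents)])
    # 2. Gossip step driving the agents toward consensus
    x = W @ x

print("local estimates:", x)
print("target (average of c):", c.mean())
```

With the decreasing stepsize, the local estimates agree with one another and approach the average of the c_i, mirroring the consensus-plus-convergence behaviour the abstract states in a much more general non-convex, constrained setting.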
Authors
Name | H-index | Papers | Citations |
---|---|---|---|
Daqing Zhang | 67 | 331 | 16675 |
Imran Khan | 56 | 361 | 27722 |
Zhiwen Yu | 52 | 538 | 11573 |
Bin Guo | 40 | 294 | 6737 |
Hervé Debar | 36 | 148 | 6717 |
Daniel E. Clark | 35 | 150 | 4042 |
Bernadette Dorizzi | 34 | 175 | 4068 |
Noel Crespi | 31 | 360 | 4696 |
Randal Douc | 31 | 96 | 4901 |
Djamal Zeghlache | 30 | 235 | 3986 |
Gaspar Delso | 29 | 128 | 4239 |
Pierre-Yves Brillet | 29 | 136 | 2786 |
Bin Li | 29 | 151 | 3477 |
Gérard Chollet | 29 | 224 | 3088 |
Chao Chen | 28 | 139 | 3540 |