Author

Hong-Linh Truong

Bio: Hong-Linh Truong is an academic researcher from Aalto University. The author has contributed to research in the topics of cloud computing and web services. The author has an h-index of 34 and has co-authored 225 publications receiving 4,614 citations. Previous affiliations of Hong-Linh Truong include the University of Vienna and the University of Innsbruck.


Papers
Journal ArticleDOI
TL;DR: It is hard to find a truly context‐aware web service‐based system that is interoperable and secure, and operates on multi‐organizational environments.
Abstract: Purpose – This survey aims to study and analyze current techniques and methods for context-aware web service systems, to discuss future trends, and to propose further steps for making web service systems context-aware. Design/methodology/approach – The paper analyzes and compares existing context-aware web service-based systems based on the techniques they support, such as context information modeling, context sensing, distribution, security and privacy, and adaptation techniques. Existing systems are also examined in terms of application domains, system type, mobility support, multi-organization support, and level of web services implementation. Findings – Support for context-aware web service-based systems is increasing. It is hard to find a truly context-aware web service-based system that is interoperable and secure, and operates in multi-organizational environments. Various issues, such as distributed context management, context-aware service modeling and engineering, context reasoning and quality of context, se...

226 citations
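To make the surveyed idea of context information modeling and adaptation concrete, here is a minimal Python sketch (not taken from any surveyed system; the Context fields and the weather_service function are hypothetical): a service inspects a context object describing the caller's situation and adapts its response accordingly.

```python
# A minimal sketch of context modeling and adaptation, in the spirit of the
# techniques the survey analyzes. All names and fields here are hypothetical
# illustrations, not an API from any surveyed system.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    """Context information attached to a web service request."""
    location: str        # e.g. "Vienna"
    device: str          # e.g. "mobile" or "desktop"
    timestamp: datetime  # when the context was sensed

def weather_service(context: Context) -> dict:
    """Adapt the payload to the sensed context: mobile clients get a
    compact response, other clients a verbose one."""
    report = {"location": context.location, "forecast": "sunny"}
    if context.device == "mobile":
        return {"loc": report["location"], "fc": report["forecast"]}
    return report

if __name__ == "__main__":
    ctx = Context("Vienna", "mobile", datetime.now())
    print(weather_service(ctx))
```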

Journal ArticleDOI
TL;DR: The overall architecture of the ASKALON tool set is described and the basic functionality of the four constituent tools is outlined, enabling tool interoperability and demonstrating the usefulness and effectiveness of ASKALON by applying the tools to real-world applications.
Abstract: Performance engineering of parallel and distributed applications is a complex task that iterates through various phases, ranging from modeling and prediction, to performance measurement, experiment ...

202 citations

Proceedings ArticleDOI
13 Nov 2005
TL;DR: The ASKALON environment is centered around a set of high-level services for transparent and effective Grid access, including a Scheduler for optimized mapping of workflows onto the Grid, an Enactment Engine for reliable application execution, and a Resource Manager covering both computers and application components.
Abstract: We present the ASKALON environment whose goal is to simplify the development and execution of workflow applications on the Grid. ASKALON is centered around a set of high-level services for transparent and effective Grid access, including a Scheduler for optimized mapping of workflows onto the Grid, an Enactment Engine for reliable application execution, a Resource Manager covering both computers and application components, and a Performance Prediction service based on a training phase and statistical methods. A sophisticated XML-based programming interface that shields the user from the Grid middleware details allows the high-level composition of workflow applications. ASKALON is used to develop and port scientific applications as workflows in the Austrian Grid project. We present experimental results using two real-world scientific applications to demonstrate the effectiveness of our approach.

198 citations
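The abstract's division of labor between high-level workflow composition, the Scheduler, and the Enactment Engine can be illustrated with a toy sketch. This is not ASKALON's actual interface; all names and the greedy round-robin scheduling policy are assumptions made for illustration.

```python
# A toy sketch (not ASKALON's API) of the pattern the paper describes:
# a workflow of dependent tasks is composed at a high level, a scheduler
# maps each task onto a Grid resource, and an enactment engine runs the
# resulting plan. All names below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    deps: list = field(default_factory=list)  # names of upstream tasks

def schedule(tasks, resources):
    """Greedy stand-in for a workflow scheduler: order tasks so that
    dependencies come first, then round-robin them onto resources."""
    order, placed = [], set()
    while len(order) < len(tasks):
        for t in tasks:
            if t.name not in placed and all(d in placed for d in t.deps):
                order.append(t)
                placed.add(t.name)
    return {t.name: resources[i % len(resources)] for i, t in enumerate(order)}

def enact(plan):
    """Stand-in for an enactment engine: 'execute' tasks in plan order."""
    for task, resource in plan.items():
        print(f"running {task} on {resource}")

if __name__ == "__main__":
    wf = [Task("ingest"), Task("simulate", ["ingest"]), Task("render", ["simulate"])]
    enact(schedule(wf, ["grid-node-a", "grid-node-b"]))
```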

Journal ArticleDOI
TL;DR: The requirements for process data analysis pipelines are characterized and recommendations are offered for each phase of the pipeline, showing a stronger focus on the storage and analysis phases of pipelines than on the ingestion, communication, and visualization stages.
Abstract: Smart manufacturing is strongly correlated with the digitization of all manufacturing activities. This increases the amount of data available to drive productivity and profit through data-driven decision making programs. The goal of this article is to assist data engineers in designing big data analysis pipelines for manufacturing process data. Thus, this paper characterizes the requirements for process data analysis pipelines and surveys existing platforms from academic literature. The results demonstrate a stronger focus on the storage and analysis phases of pipelines than on the ingestion, communication, and visualization stages. Results also show a tendency towards custom tools for ingestion and visualization, and relational data tools for storage and analysis. Tools for handling heterogeneous data are generally well-represented throughout the pipeline. Finally, batch processing tools are more widely adopted than real-time stream processing frameworks, and most pipelines opt for a common script-based data processing approach. Based on these results, recommendations are offered for each phase of the pipeline.

177 citations
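As a rough illustration of the five pipeline phases the survey uses to organize its comparison (ingestion, communication, storage, analysis, visualization), here is a skeleton in the spirit of the "common script-based data processing approach" the results mention. The sensor readings, threshold, and function names are invented for the example.

```python
# An illustrative skeleton of a manufacturing process data pipeline,
# organized by the five phases the survey characterizes. Stubs stand in
# for real components; nothing here is a platform from the paper.
import statistics

def ingest():
    """Ingestion: pull raw process data from the shop floor (stubbed)."""
    return [{"machine": "press-1", "temp_c": t} for t in (70, 72, 95, 71)]

def communicate(records):
    """Communication: forward records downstream (a generator stands in
    for a message broker here)."""
    yield from records

def store(stream):
    """Storage: persist records (an in-memory list stands in for a DB)."""
    return list(stream)

def analyze(table):
    """Analysis: flag readings far from the mean, script-style."""
    mean = statistics.mean(r["temp_c"] for r in table)
    return [r for r in table if abs(r["temp_c"] - mean) > 10]

def visualize(anomalies):
    """Visualization: a plain-text report stands in for a dashboard."""
    for r in anomalies:
        print(f"ALERT {r['machine']}: {r['temp_c']} °C")

if __name__ == "__main__":
    visualize(analyze(store(communicate(ingest()))))
```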

Journal ArticleDOI
TL;DR: In this article, the authors introduce elastic processes, which are based on explicitly modeling resources, cost, and quality, and show how they improve on the state of the art.
Abstract: Cloud computing's success has made on-demand computing with a pay-as-you-go pricing model popular. However, cloud computing's focus on resources and costs limits progress in realizing more flexible, adaptive processes. The authors introduce elastic processes, which are based on explicitly modeling resources, cost, and quality, and show how they improve on the state of the art.

170 citations
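A minimal sketch of the elasticity idea, under the assumption that each process configuration is annotated with the three dimensions the article names (resources, cost, quality): an elasticity controller picks the cheapest configuration that still meets a quality target. The numbers and class names are illustrative, not from the article.

```python
# A minimal sketch of an elastic process decision: configurations are
# modeled explicitly by resources, cost, and quality, and a controller
# trades them off. Values and names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Configuration:
    vms: int               # resource dimension
    cost_per_hour: float   # cost dimension
    quality: float         # quality dimension, e.g. fraction of deadline hits

def elastic_choice(configs, min_quality):
    """Pick the cheapest configuration meeting the quality constraint."""
    feasible = [c for c in configs if c.quality >= min_quality]
    return min(feasible, key=lambda c: c.cost_per_hour) if feasible else None

if __name__ == "__main__":
    options = [
        Configuration(vms=1, cost_per_hour=0.10, quality=0.80),
        Configuration(vms=2, cost_per_hour=0.22, quality=0.95),
        Configuration(vms=4, cost_per_hour=0.48, quality=0.99),
    ]
    print(elastic_choice(options, min_quality=0.95))
```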


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
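The fourth category above (per-user customization, illustrated with mail filtering) lends itself to a tiny worked example. The following Naive-Bayes-style sketch is an illustration of the idea, not code from the article; the training data and function names are made up.

```python
# A bare-bones sketch of a mail filter that learns from which messages the
# user rejects, instead of hand-written rules. Naive-Bayes-style scoring
# with add-one smoothing; all data and names are invented for illustration.
from collections import Counter
import math

def train(messages):
    """messages: list of (text, rejected) pairs. Count word frequencies
    per class so new mail can be scored against the user's history."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, rejected in messages:
        for word in text.lower().split():
            counts[rejected][word] += 1
            totals[rejected] += 1
    return counts, totals

def is_unwanted(text, counts, totals):
    """Score both classes with smoothed log-likelihoods and flag the
    message if the 'rejected' class wins."""
    score = {True: 0.0, False: 0.0}
    for label in (True, False):
        for word in text.lower().split():
            p = (counts[label][word] + 1) / (totals[label] + len(counts[label]))
            score[label] += math.log(p)
    return score[True] > score[False]

if __name__ == "__main__":
    history = [("win a free prize now", True),
               ("meeting agenda for monday", False),
               ("free prize claim today", True),
               ("project status update", False)]
    model = train(history)
    print(is_unwanted("claim your free prize", *model))  # likely True
```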

Journal ArticleDOI
TL;DR: This survey makes an exhaustive review of wireless evolution toward 5G networks, including the new architectural changes associated with the radio access network (RAN) design, including air interfaces, smart antennas, cloud and heterogeneous RAN, and underlying novel mm-wave physical layer technologies.
Abstract: The vision of next generation 5G wireless communications lies in providing very high data rates (typically of Gbps order), extremely low latency, manifold increase in base station capacity, and significant improvement in users’ perceived quality of service (QoS), compared to current 4G LTE networks. The ever-increasing proliferation of smart devices and the introduction of new emerging multimedia applications, together with an exponential rise in wireless data (multimedia) demand and usage, are already creating a significant burden on existing cellular networks. 5G wireless systems, with improved data rates, capacity, latency, and QoS are expected to be the panacea for most of the current cellular networks’ problems. In this survey, we make an exhaustive review of wireless evolution toward 5G networks. We first discuss the new architectural changes associated with the radio access network (RAN) design, including air interfaces, smart antennas, cloud and heterogeneous RAN. Subsequently, we make an in-depth survey of underlying novel mm-wave physical layer technologies, encompassing new channel model estimation, directional antenna design, beamforming algorithms, and massive MIMO technologies. Next, the details of MAC layer protocols and multiplexing schemes needed to efficiently support this new physical layer are discussed. We also look into the killer applications, considered as the major driving force behind 5G. In order to understand the improved user experience, we provide highlights of new QoS, QoE, and SON features associated with the 5G evolution. For alleviating the increased network energy consumption and operating expenditure, we make a detailed review of energy awareness and cost efficiency. As understanding the current status of 5G implementation is important for its eventual commercialization, we also discuss relevant field trials, drive tests, and simulation experiments. Finally, we point out major existing research issues and identify possible future research directions.

2,624 citations
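As a back-of-the-envelope check on the "Gbps order" data rates the survey opens with, the Shannon capacity C = B log2(1 + SNR) over a wide mm-wave channel already lands in that range. The bandwidth and post-beamforming SNR below are assumed values for illustration, not figures from the survey.

```python
# A rough capacity calculation (not from the survey) showing why the wide
# mm-wave channels and beamforming gains it reviews enable Gbps-order rates.
# Bandwidth and SNR are assumptions chosen for the example.
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Upper bound on error-free rate for an AWGN channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

if __name__ == "__main__":
    bandwidth = 400e6   # 400 MHz mm-wave channel (assumed)
    snr_db = 15         # post-beamforming SNR (assumed)
    snr = 10 ** (snr_db / 10)
    c = shannon_capacity_bps(bandwidth, snr)
    print(f"{c / 1e9:.2f} Gbps")  # about 2 Gbps, i.e. Gbps order
```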

Journal Article
TL;DR: A framework for model driven engineering is set out, which proposes an organisation of the modelling 'space' and how to locate models in that space, and identifies the need for defining families of languages and transformations, and for developing techniques for generating/configuring tools from such definitions.
Abstract: The Object Management Group's (OMG) Model Driven Architecture (MDA) strategy envisages a world where models play a more direct role in software production, being amenable to manipulation and transformation by machine. Model Driven Engineering (MDE) is wider in scope than MDA. MDE combines process and analysis with architecture. This article sets out a framework for model driven engineering, which can be used as a point of reference for activity in this area. It proposes an organisation of the modelling 'space' and how to locate models in that space. It discusses different kinds of mappings between models. It explains why process and architecture are tightly connected. It discusses the importance and nature of tools. It identifies the need for defining families of languages and transformations, and for developing techniques for generating/configuring tools from such definitions. It concludes with a call to align metamodelling with formal language engineering techniques.

1,476 citations
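The claim that MDE makes models "amenable to manipulation and transformation by machine" can be illustrated with a toy model-to-model transformation: a platform-independent class model mapped to a platform-specific SQL schema. The minimal metamodel and mapping rules here are hypothetical, not from the article.

```python
# A toy model-to-model transformation in the MDE spirit: each class in a
# tiny platform-independent model becomes a table, each attribute a column.
# The metamodel and the type-mapping rule are deliberately minimal and
# hypothetical, invented for illustration.
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    type: str  # "int" or "str" in this toy metamodel

@dataclass
class ClassModel:
    name: str
    attributes: list

def to_sql_schema(model: ClassModel) -> str:
    """A mapping between models: class -> table, attribute -> column,
    with types translated by a fixed rule."""
    type_map = {"int": "INTEGER", "str": "VARCHAR(255)"}
    cols = ",\n  ".join(f"{a.name} {type_map[a.type]}" for a in model.attributes)
    return f"CREATE TABLE {model.name} (\n  {cols}\n);"

if __name__ == "__main__":
    customer = ClassModel("customer", [Attribute("id", "int"),
                                       Attribute("email", "str")])
    print(to_sql_schema(customer))
```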