
Showing papers by "Ron S. Kenett published in 2021"



Journal Article
TL;DR: In this article, the authors review the fundamental challenges with respect to modeling and analysis in cybermanufacturing and introduce the existing efforts in computation pipeline recommendation, which aims at identifying an optimal sequence of method options for data analytics/machine learning without time-consuming trial-and-error.
Abstract: In Industry 4.0, smart manufacturing is facing its next stage, cybermanufacturing, founded upon advanced communication, computation, and control infrastructure. Cybermanufacturing will unleash the potential of multi-modal manufacturing data and provide a new perspective called computation service, as part of a service-oriented architecture (SOA), where on-demand computation requests throughout manufacturing operations are seamlessly satisfied by data analytics and machine learning. However, the complexity of information technology infrastructure leads to fundamental challenges in modeling and analysis under cybermanufacturing, ranging from information-poor datasets to a lack of reproducibility of analytical studies. Yet existing reviews have focused on the overall architecture of cybermanufacturing/SOA or its technical components (e.g., communication protocols), rather than the potential bottleneck of computation service with respect to modeling and analysis. In this paper, we review the fundamental challenges of modeling and analysis in cybermanufacturing. We then introduce existing efforts in computation pipeline recommendation, which aims at identifying an optimal sequence of method options for data analytics/machine learning without time-consuming trial-and-error. We envision computation pipeline recommendation as a promising research field for addressing the fundamental challenges in cybermanufacturing, and as a driving force toward flexible and resilient manufacturing operations in the post-COVID-19 industry.
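The abstract describes pipeline recommendation only at a high level. As a minimal illustration of the underlying idea, choosing among competing method options for each pipeline stage by systematic search rather than manual trial-and-error, the following Python sketch uses scikit-learn's Pipeline and GridSearchCV on a toy dataset. This is our hypothetical illustration, not the authors' implementation.

```python
# Minimal sketch of computation pipeline recommendation: enumerate
# alternative method options per stage and let cross-validation pick
# the best pipeline, instead of manual trial-and-error.
# scikit-learn and the digits dataset are illustrative assumptions;
# the paper's setting is cybermanufacturing data.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),   # placeholder; swapped out by the grid
    ("reduce", PCA()),
    ("model", LogisticRegression(max_iter=1000)),
])

# Each stage has competing method options; the grid spans whole pipelines.
param_grid = [
    {"scale": [StandardScaler(), MinMaxScaler()],
     "reduce__n_components": [16, 32],
     "model": [LogisticRegression(max_iter=1000)]},
    {"scale": [StandardScaler(), MinMaxScaler()],
     "reduce__n_components": [16, 32],
     "model": [SVC(kernel="rbf")]},
]

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print("recommended pipeline:", search.best_params_)
print("cross-validated score:", round(search.best_score_, 3))
```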

8 citations


Journal Article
TL;DR: A recent extension to the logratio approach to compositional data analysis is described which allows absolute information about the total of the compositional parts (body mass) to be considered alongside relative information about body composition.
Abstract: Human body composition is made up of mutually exclusive and exhaustive parts (e.g. %truncal fat, %non-truncal fat and %fat-free mass) which are constrained to sum to the same total (100%). In statistical analyses, individual parts of body composition (e.g. %truncal fat or %fat-free mass) have traditionally been used as proxies for body composition, and have been linked with a range of health outcomes. But analysis of individual parts omits information about the other parts, which are intrinsically co-dependent because of the constant sum constraint of 100%. Further, body mass may be associated with health outcomes. We describe a statistical approach for body composition based on compositional data analysis. The body composition data are expressed as logratios to allow relative information about all the compositional parts to be explored simultaneously in relation to health outcomes. We describe a recent extension to the logratio approach to compositional data analysis which allows absolute information about the total of the compositional parts (body mass) to be considered alongside relative information about body composition. The statistical approach is illustrated by an example that explores the relationships between adults' body composition, body mass and bone strength.
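The logratio machinery can be shown compactly in code. Below is a minimal Python sketch, with made-up numbers and numpy/pandas as assumed tooling, of the two standard logratio transformations and of how log body mass enters as the absolute term. It illustrates the approach, not the authors' analysis.

```python
# Minimal sketch of the logratio approach to compositional data:
# relative information comes from logratios of the parts, while
# absolute information enters as log(body mass).
import numpy as np
import pandas as pd

# Hypothetical data: the three parts sum to 100% for each subject.
df = pd.DataFrame({
    "truncal_fat":  [18.0, 25.0, 30.0],   # %
    "other_fat":    [12.0, 15.0, 10.0],   # %
    "fat_free":     [70.0, 60.0, 60.0],   # %
    "body_mass_kg": [62.0, 78.0, 91.0],
})

parts = df[["truncal_fat", "other_fat", "fat_free"]].to_numpy()

# Additive logratios (alr) with fat-free mass as the reference part.
alr = np.log(parts[:, :2] / parts[:, [2]])

# Centered logratios (clr): each part over the geometric mean of the
# parts, so all components are treated symmetrically.
gmean = np.exp(np.log(parts).mean(axis=1, keepdims=True))
clr = np.log(parts / gmean)

# Log total mass sits alongside the logratios, e.g. as covariates in a
# regression on a health outcome such as bone strength.
design = np.column_stack([clr, np.log(df["body_mass_kg"])])
print(pd.DataFrame(design,
                   columns=["clr_truncal", "clr_other", "clr_ffm", "log_mass"]))
```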

8 citations




Journal Article
TL;DR: In this article, the authors introduce a "border of meaning", abbreviated BOM, as a mode of representation of research findings that supplements statistical tests, and demonstrate using examples from clinical research and translational medicine.
Abstract: Research aims at generating research claims. The paper introduces a "border of meaning", abbreviated BOM, as a mode of representation of research findings that supplements statistical tests. The suggested approach was originally developed in a pedagogical context of promoting conceptual understanding in education. Here we aim to help better understand research claims stated in a scientific paper. Considering new approaches to the presentation of findings has an impact on the reproducibility of research. The BOM approach is demonstrated using examples from clinical research and translational medicine. Specifically, we map research findings into a list that delineates a demarcation line between alternative representations of the research claims, some with meaning equivalence and some with surface similarity. Such a mapping can be statistically evaluated by sign type error tests. Our main message is that findings should be presented and generalized with a BOM.
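The "sign type error tests" mentioned here presumably refer to errors in the estimated sign of an effect (Type S errors in Gelman and Carlin's terminology); that reading is our assumption. On that assumption, the Python simulation below illustrates the generic idea: with a small true effect and a noisy study, a statistically significant estimate can have the wrong sign surprisingly often.

```python
# Minimal simulation sketch (our assumptions, not the paper's analysis)
# of a sign-type (Type S) error: the probability that a "significant"
# estimate has the wrong sign, given a small true effect.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.1        # small true difference (assumed)
se = 0.5                 # standard error of the estimate (assumed)
n_sim = 100_000

est = rng.normal(true_effect, se, n_sim)
significant = np.abs(est) > 1.96 * se          # conventional 5% test
wrong_sign = significant & (np.sign(est) != np.sign(true_effect))

type_s = wrong_sign.sum() / significant.sum()  # rate among significant results
print(f"P(significant)             = {significant.mean():.3f}")
print(f"Type S error | significant = {type_s:.3f}")
```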

4 citations


Journal Article
TL;DR: These CoDa methods show the relationships between itemsets and the strength and direction of their dependencies; the visualizations and statistical tests are used to investigate the association of negative mood emotions with various types of headache/migraine events.
Abstract: This research has been supported by the Spanish Ministry of Economy, Industry and Competitiveness under the project CODAMET (Ref: RTI2018-095518-B-C21).

3 citations



Book Chapter
01 Jan 2021
TL;DR: Information quality (InfoQ) has been proposed by Kenett and Shmueli as a framework for assessing the quality of information generated by empirical studies by using specific empirical methods such as regression models, analysis of variance or predictive analytics.
Abstract: The challenge in educational technology (EdTech) is to apply modern analytics to educational data in order to derive information. Information quality (InfoQ) has been proposed by Kenett and Shmueli as a framework for assessing the quality of information generated by empirical studies that use specific empirical methods such as regression models, analysis of variance or predictive analytics. InfoQ is determined by eight dimensions: 1) Data Resolution, 2) Data Structure, 3) Data Integration, 4) Temporal Relevance, 5) Chronology of Data and Goal, 6) Generalizability, 7) Operationalization and 8) Communication.

1 citation


Journal Article
TL;DR: In this paper, the authors consider different perspectives on indicators produced by official statistics agencies, with an emphasis on technical aspects, and discuss statistical methods, impact, scope and action operationalisation of official statistics indicators.
Abstract: This paper considers different perspectives on indicators produced by official statistics agencies, with an emphasis on technical aspects. We discuss statistical methods, impact, scope and action operationalisation of official statistics indicators. The focus is on multivariate aspects of analysing and communicating such indicators. To illustrate the points made in the paper, we use examples from well-being indicators, the UN Sustainable Development Goals and the Eurobarometer. The overall objective is to enhance the added value of official statistics indicators as they are communicated, and thus strengthen evidence-based policy-making.

1 citation


Journal Article
TL;DR: The paper provides an introduction to conceptual thinking and meaning equivalence, describes the application of Meaning Equivalence Reusable Learning Objects (MERLO) in the classroom, and shows how MERLO and concept science can be applied in the domain of Statistics and Data Science.
Abstract: Computer-age statistics, machine learning, data science and, in general, data analytics are having a ubiquitous impact on industry, business and services. Deploying a data transformation strategy requires a workforce that is up to the job in terms of knowledge, experience and capabilities. The application of analytics needs to address organizational needs, invoke proper methods, build on adequate infrastructures and ensure availability of the right skills to the right people. Such upskilling requires a focus on conceptual understanding, affecting both the pedagogical approach and the complementary learning assessment tools. This paper is about the application of advanced educational concepts to the teaching and evaluation of statistics and data science related concepts. Two educational elements are included in the discussion: i) the use of simulations to facilitate problem-based experiential learning and ii) an emphasis on information quality as the overall objective of statistics and data science activity. We begin with an introduction to conceptual thinking and meaning equivalence and the application of Meaning Equivalence Reusable Learning Objects (MERLO) in the classroom. We then describe how MERLO and concept science can be applied in the domain of Statistics and Data Science. Section 3 is about the use of simulations in statistical education. In Section 4 we discuss practical aspects of an education program focused on conceptual understanding. Section 5 is on the conceptual mapping of information quality as a MERLO infrastructure. The paper concludes with a discussion.
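As a concrete instance of element (i), the use of simulation for experiential learning, here is a short classroom-style Python sketch of the kind of experiment the paper advocates: students watch the sampling distribution of the mean tighten and normalize as the sample size grows. The example and its numbers are ours, not taken from the paper.

```python
# Classroom-style simulation sketch (our illustration): sample means of
# a skewed population become approximately normal as n grows (CLT).
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=100_000)  # skewed population

for n in (2, 10, 50):
    # Draw 5,000 samples of size n and record each sample mean.
    means = rng.choice(population, size=(5_000, n)).mean(axis=1)
    print(f"n={n:>2}: mean of sample means={means.mean():.3f}, "
          f"sd={means.std(ddof=1):.3f} "
          f"(theory {population.std() / np.sqrt(n):.3f})")
```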

Posted Content
10 Sep 2021
TL;DR: In this article, the authors examined the effect of mobility restriction measures in Italy and Israel and compared the association between health and population mobility data to assess the impact of pandemic management and mitigation policies on pandemic spread and population activity.
Abstract: The response to the COVID-19 pandemic has been highly variable, both across nations and within the same nation at different waves. In this context, governments applied different mitigation policy responses, with varying impact on social and economic measures over time. This article examines the effect of mobility restriction measures in Italy and Israel and compares the association between health and population mobility data. Facing the pandemic, Israel and Italy implemented different policy measures and experienced different public activity patterns. Our analysis is a staged approach using Bayesian Networks and Structural Equation Models to investigate these patterns. The goal is to assess the impact of pandemic management and mitigation policies on pandemic spread and population activity. We propose a methodology that first models data from health registries and Google mobility data and then shows how decision makers can conduct scenario analysis to help support pandemic management policies.
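The Bayesian Network stage of this methodology can be illustrated with a toy model. The sketch below uses pgmpy (our choice of library) with an invented three-node structure and made-up probabilities; it shows the mechanics of the scenario analysis the authors describe, querying the case-load distribution under alternative policy settings, and is not their fitted network.

```python
# Toy Bayesian Network sketch of policy scenario analysis (structure and
# probabilities are illustrative assumptions, not the paper's model).
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianNetwork

# Policy stringency -> population mobility -> case load
model = BayesianNetwork([("Policy", "Mobility"), ("Mobility", "Cases")])

model.add_cpds(
    TabularCPD("Policy", 2, [[0.5], [0.5]]),        # 0 = lenient, 1 = strict
    TabularCPD("Mobility", 2,
               [[0.3, 0.8],                         # P(low mobility | policy)
                [0.7, 0.2]],                        # P(high mobility | policy)
               evidence=["Policy"], evidence_card=[2]),
    TabularCPD("Cases", 2,
               [[0.7, 0.3],                         # P(low cases | mobility)
                [0.3, 0.7]],                        # P(high cases | mobility)
               evidence=["Mobility"], evidence_card=[2]),
)
model.check_model()

# Scenario analysis: compare the case-load distribution under each policy.
infer = VariableElimination(model)
for policy in (0, 1):
    print(f"Policy={policy}:")
    print(infer.query(["Cases"], evidence={"Policy": policy}))
```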