
Showing papers on "Naturalness" published in 2021


Journal ArticleDOI
TL;DR: In this article, the author discusses the scope and naturalness of the proton mass decomposition (or sum rule) published in Phys. Rev. Lett. 74, 1071 (1995) and answers a few criticisms that appeared recently in the literature, focusing particularly on its interpretation and the quantum anomalous energy contribution.
Abstract: I discuss the scope and naturalness of the proton mass decomposition (or sum rule) published in Phys. Rev. Lett. 74, 1071 (1995) and answer a few criticisms that appeared recently in the literature, focusing particularly on its interpretation and the quantum anomalous energy contribution. I comment on the so-called frame-independent or invariant-mass decomposition from the trace of the energy-momentum tensor. I stress the importance of measuring the quantum anomalous energy through experiments. Finally, I point out a large discrepancy in the scalar radius of the nucleon extracted from vector-meson productions and lattice QCD calculations.
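
For readers who want the formula behind the "invariant-mass decomposition from the trace of the energy-momentum tensor" mentioned above, the standard rest-frame trace relation and QCD trace anomaly can be written as follows; this is a schematic reminder in a common convention, not an equation quoted from the article.

M = \frac{\langle P |\, T^{\mu}{}_{\mu}\, | P \rangle}{2M},
\qquad
T^{\mu}{}_{\mu} = \frac{\beta(g)}{2g}\, F^{a\,\mu\nu} F^{a}_{\mu\nu} + \sum_q \bigl(1+\gamma_m(g)\bigr)\, m_q\, \bar{q} q ,

with relativistic state normalization; the quantum anomalous energy discussed in the abstract is tied to the anomalous (beta-function and gamma_m) pieces of this trace.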

44 citations


Journal ArticleDOI
TL;DR: This model is the first no-reference quality assessment method for 360-degree images that combines multifrequency information and image naturalness and outperforms state-of-the-art full-reference (FR) and NR approaches.
Abstract: 360-degree/omnidirectional images (OIs) have received remarkable attention due to the increasing applications of virtual reality (VR). Compared to conventional 2D images, OIs provide more immersive experiences to consumers, benefiting from higher resolution and plentiful fields of view (FoVs). Moreover, OIs are usually viewed in a head-mounted display (HMD) without references. Therefore, an efficient blind quality assessment method specifically designed for 360-degree images is urgently needed. In this paper, motivated by the characteristics of the human visual system (HVS) and the viewing process of VR visual content, we propose a novel and effective no-reference omnidirectional image quality assessment (NR OIQA) algorithm based on MultiFrequency Information and Local-Global Naturalness (MFILGN). Specifically, inspired by the frequency-dependent property of the visual cortex, we first decompose the equirectangular projection (ERP) maps into wavelet subbands using the discrete Haar wavelet transform (DHWT). Then, the entropy intensities of the low-frequency and high-frequency subbands are exploited to measure the multifrequency information of OIs. In addition to considering the global naturalness of ERP maps, and owing to the browsed FoVs, we extract natural scene statistics (NSS) features from each viewport image as a measure of local naturalness. With the proposed multifrequency information and local-global naturalness measurements, we use support vector regression (SVR) as the final image quality regressor to train the quality evaluation model from visual quality-related features to human ratings. To our knowledge, the proposed model is the first no-reference quality assessment method for 360-degree images that combines multifrequency information and image naturalness. Experimental results on two publicly available OIQA databases demonstrate that our proposed MFILGN outperforms state-of-the-art full-reference (FR) and NR approaches.
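
As a rough illustration of the feature pipeline this abstract describes, the sketch below extracts Haar-subband entropies and simple stand-in naturalness statistics, then trains an SVR regressor. It is not the authors' code: the wavelet level, the entropy binning, the simplified NSS-style statistics, and the SVR settings are all illustrative assumptions.

import numpy as np
import pywt
from sklearn.svm import SVR

def subband_entropy(coeffs, bins=256):
    # Shannon entropy of a subband's coefficient histogram
    hist, _ = np.histogram(coeffs.ravel(), bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def multifrequency_features(erp_gray, level=3):
    # Entropies of low- and high-frequency Haar wavelet subbands of an ERP map
    coeffs = pywt.wavedec2(erp_gray, 'haar', level=level)
    feats = [subband_entropy(coeffs[0])]                  # low-frequency approximation
    for cH, cV, cD in coeffs[1:]:                         # high-frequency detail subbands
        feats += [subband_entropy(cH), subband_entropy(cV), subband_entropy(cD)]
    return feats

def local_naturalness_features(viewport_gray):
    # Simplified stand-in for NSS features of one viewport (not the paper's exact model)
    mu, sigma = viewport_gray.mean(), viewport_gray.std() + 1e-6
    z = (viewport_gray - mu) / sigma
    return [float(np.abs(z).mean()), float((z ** 3).mean()), float((z ** 4).mean())]

def train_quality_model(feature_vectors, mos_scores):
    # Regress human ratings (MOS) from the concatenated features
    model = SVR(kernel='rbf', C=10.0, gamma='scale')
    model.fit(np.asarray(feature_vectors), np.asarray(mos_scores))
    return model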

39 citations


Proceedings ArticleDOI
Isaac Elias, Heiga Zen, Jonathan Shen, Yu Zhang, Ye Jia, Ron Weiss, Yonghui Wu
06 Jun 2021
TL;DR: Parallel Tacotron, a non-autoregressive text-to-speech model augmented with a variational autoencoder-based residual encoder, is highly parallelizable during both training and inference.
Abstract: Although neural end-to-end text-to-speech models can synthesize highly natural speech, there is still room for improvement in their efficiency and naturalness. This paper proposes a non-autoregressive neural text-to-speech model augmented with a variational autoencoder-based residual encoder. This model, called Parallel Tacotron, is highly parallelizable during both training and inference, allowing efficient synthesis on modern parallel hardware. The use of the variational autoencoder relaxes the one-to-many mapping nature of the text-to-speech problem and improves naturalness. To further improve the naturalness, we use lightweight convolutions, which can efficiently capture local contexts, and introduce an iterative spectrogram loss inspired by iterative refinement. Experimental results show that Parallel Tacotron matches a strong autoregressive baseline in subjective evaluations with significantly decreased inference time.
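
The "iterative spectrogram loss inspired by iterative refinement" can be pictured as summing a reconstruction loss over every intermediate decoder prediction. A minimal PyTorch sketch follows; it is illustrative only, and the paper's exact loss terms and weighting are not reproduced here.

import torch.nn.functional as F

def iterative_spectrogram_loss(iteration_predictions, target_mel):
    # iteration_predictions: list of [batch, frames, mel_bins] tensors, one per refinement step
    # target_mel:            [batch, frames, mel_bins]
    return sum(F.l1_loss(pred, target_mel) for pred in iteration_predictions)

# usage sketch: loss = iterative_spectrogram_loss([mel_iter1, mel_iter2, mel_iter3], mel_target)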

35 citations


Journal ArticleDOI
TL;DR: This article describes self-organised localisation, whereby the fundamental parameters of a theory are probabilistically localised around a critical value and the Universe finds itself at the edge of a phase transition.
Abstract: We describe a new phenomenon in quantum cosmology: self-organised localisation. When the fundamental parameters of a theory are functions of a scalar field subject to large fluctuations during inflation, quantum phase transitions can act as dynamical attractors. As a result, the theory parameters are probabilistically localised around the critical value and the Universe finds itself at the edge of a phase transition. We illustrate how self-organised localisation could account for the observed near-criticality of the Higgs self-coupling, the naturalness of the Higgs mass, or the smallness of the cosmological constant.

27 citations


Journal ArticleDOI
01 Jul 2021-Synthese
TL;DR: It is explained why the necessity to define a probability distribution renders arguments from naturalness internally contradictory, and why it is conceptually questionable to single out assumptions about dimensionless parameters from among a host of other assumptions.
Abstract: We critically analyze the rationale of arguments from fine-tuning and naturalness in particle physics and cosmology, notably the small values of the mass of the Higgs boson and the cosmological constant. We identify several new reasons why these arguments are not scientifically relevant. Besides laying out why the necessity to define a probability distribution renders arguments from naturalness internally contradictory, it is also explained why it is conceptually questionable to single out assumptions about dimensionless parameters from among a host of other assumptions. Some other numerological coincidences and their problems are also discussed.

26 citations


Journal ArticleDOI
05 Nov 2021-Appetite
TL;DR: In this paper, the authors explored Norwegian and French consumers' attitudes, barriers and opportunities for increasing the likelihood of a shift in diet and found a wide gap between respondents' desired behaviour (balancing nutrition, eating less meat) and their actual behaviour: meat is very important, and the menu is often organized around it.

23 citations


Journal ArticleDOI
TL;DR: In this article, a predictive model for explaining the apparent deviation of the muon anomalous magnetic moment from the Standard Model expectation is proposed, and the model provides a calculable example violating the Wilsonian notion of naturalness.
Abstract: We study a predictive model for explaining the apparent deviation of the muon anomalous magnetic moment from the Standard Model expectation. There are no new scalars and hence no new hierarchy puzzles beyond those associated with the Higgs; the only new particles at the TeV scale are vector-like singlet and doublet leptons. Interestingly, this simple model provides a calculable example violating the Wilsonian notion of naturalness: despite the absence of any symmetries prohibiting its generation, the coefficient of the naively leading dimension-six operator for (g − 2) vanishes at one-loop. While effective field theorists interpret this either as a surprising UV cancellation of power divergences, or as a delicate cancellation between matching UV and calculable IR corrections to (g − 2) from parametrically separated scales, there is a simple explanation in the full theory: the loop integrand is a total derivative of a function vanishing in both the deep UV and IR. The leading contribution to (g − 2) arises from dimension-eight operators, and thus the required masses of new fermions are lower than naively expected, with a sizeable portion of parameter space already covered by direct searches at the LHC. The viable parameter space free of fine-tuning for the muon mass will be fully covered by future direct LHC searches, and all of the parameter space can be probed by precision measurements at planned future lepton colliders.
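
For orientation, the "naively leading dimension-six operator" for (g − 2) is the muon dipole operator; in a common effective-field-theory convention it and its contribution scale as follows (schematic bookkeeping, with order-one factors and signs convention-dependent, not taken from the paper):

\mathcal{L} \supset \frac{C_{e\gamma}}{\Lambda^{2}}\,\frac{v}{\sqrt{2}}\;\bar{\mu}_{L}\,\sigma^{\mu\nu}\mu_{R}\,F_{\mu\nu} + \mathrm{h.c.},
\qquad
\Delta a_{\mu} \sim \frac{4\, m_{\mu}\, v}{\sqrt{2}\, e\, \Lambda^{2}}\;\mathrm{Re}\,C_{e\gamma} .

If the one-loop C_{eγ} vanishes, as in the model above, the leading effect is pushed to dimension-eight operators, which carry an extra suppression of order v²/Λ² and therefore point to lighter new fermions, consistent with the abstract.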

21 citations


Journal ArticleDOI
TL;DR: This paper found that product naturalness depends more on whether the processing technique is deemed traditional (old) or new than on whether processing produced chemical or physical transformations. Produce type and production scale have largely additive effects, and consumers do not necessarily conflate these two attributes.

19 citations


Journal ArticleDOI
TL;DR: In this article, the degree of naturalness of images was manipulated by rotating their color gamut rigidly in the color space CIELAB, which changed just the hue composition, but preserved saturation and lightness.
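
The manipulation described here amounts to a rigid rotation of the chroma plane in CIELAB, which shifts hue while leaving lightness (L*) and chroma untouched. A minimal sketch with scikit-image follows; the rotation angle and input handling are illustrative assumptions.

import numpy as np
from skimage import color

def rotate_hue_cielab(rgb, angle_deg):
    # rgb: float image in [0, 1], shape (H, W, 3)
    lab = color.rgb2lab(rgb)
    theta = np.deg2rad(angle_deg)
    a, b = lab[..., 1].copy(), lab[..., 2].copy()
    lab[..., 1] = a * np.cos(theta) - b * np.sin(theta)   # rotate the (a*, b*) plane rigidly
    lab[..., 2] = a * np.sin(theta) + b * np.cos(theta)   # hue changes; L* and chroma are preserved
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)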

14 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose that the social acceptance approach to renewable energy technology needs to include the concept of naturalness in order to understand the social rejection of biogas technology. Because naturalness concerns are strongly associated not only with the physical emotions of disgust and fear but also with disgust as a moral emotion, experienced as an indignity to the community, they have the potential to prevent energy projects from succeeding.
Abstract: This paper proposes that the approach of social acceptance of renewable energy technology needs to include the concept of naturalness to understand the social rejection of biogas technology. Because naturalness concerns are not only strongly associated with the physical emotions of disgust and fear but also with disgust as a moral emotion, which is experienced as an indignity to the community, they have the potential to prevent energy projects from succeeding. Results from a survey and a case study conducted in South Africa demonstrate that relative to other renewable energy technologies, biogas technology elicited stronger naturalness concerns and the emotions of disgust and fear (Study 1: N = 452) and that indignity experiences of community members of an informal settlement were sufficient to reject a small scale biogas technology project (Study 2: N = 155). The implications of our findings are discussed and solutions are provided to address the naturalness concerns about biogas technology.

13 citations


Journal ArticleDOI
04 May 2021
TL;DR: In this article, an adaptive interval type-2 fuzzy filter is proposed for preserving the naturalness of nonuniformly illuminated images; it handles grayness uncertainties in homogeneous regions and spatial ambiguities at edges, with applications including image dehazing.
Abstract: Artificial intelligence (AI) offers fuzzy set theory (FST) as one of the popular AI agents and decision-making tools for digital image processing, increasing the robustness of vision-based applications. FST is capable of handling uncertainties while enhancing image quality and preserving its naturalness. This article proposes a novel Adaptive Interval Type-2 Fuzzy Filter (AIT2FF) as an AI agent for preserving the naturalness of nonuniformly illuminated images. The proposed AIT2FF estimates the coarse illumination of the input image, which is further processed to obtain the reflectance and a refined coarse illumination used to compose the enhanced image. The estimated coarse illumination preserves naturalness by eliminating uncertainties due to grayness ambiguities in homogeneous regions and spatial ambiguities at edges. The effectiveness of the proposed filter is demonstrated quantitatively in terms of lightness order error (LOE) and the naturalness image quality evaluator (NIQE) score on images from the high dynamic range dataset and images captured with commercial digital cameras. The qualitative comparison shows that the enhanced images obtained using the proposed filter maintain visual realism and naturalness. Applications of the proposed filter are also presented for image dehazing, vehicle tracking, and gradient estimation. Moreover, detection results for dehazed images are compared with state-of-the-art methods on the real-world task-driven testing set from the REalistic Single Image DEhazing-β (RESIDE-β) dataset. The comparisons demonstrate that the enhanced images obtained using the proposed AIT2FF approach outperform others in terms of LOE and NIQE measures. Impact Statement: The objective of vision-based applications is to automate human visual perception. This requires a preprocessed input image with fine details. The image details needed to achieve human-level vision suffer from uncertainties in homogeneous regions and at edges. This results in an unnatural preprocessed image and degrades performance; for example, noisy images are not good for training deep learning models. AI offers one of the best tools, fuzzy set theory, which is capable of handling uncertainties. This article proposes an adaptive interval Type-2 fuzzy filter for the elimination of uncertainties. The proposed filter enhances image details and maintains the image's natural appearance. The elimination of uncertainties using the proposed filter drops the error value from 2.573 to 1.929 for nonuniformly illuminated images. The proposed approach is able to boost the performance of various vision-based applications such as object detection, vehicle tracking, security, and medical diagnostics.
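
As a very rough stand-in for the illumination/reflectance split this abstract describes, the flow can be sketched with a generic Retinex-style decomposition that uses a wide Gaussian as the coarse illumination estimate; this is an assumption for illustration, not the paper's interval type-2 fuzzy estimator, and the gamma refinement is likewise a placeholder.

import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_illumination(gray, sigma=15.0, eps=1e-6):
    # gray: float image in (0, 1]; returns (coarse illumination, reflectance)
    illumination = gaussian_filter(gray, sigma=sigma)
    reflectance = gray / (illumination + eps)
    return illumination, reflectance

def enhance_nonuniform(gray, gamma=0.6):
    # Compose an enhanced image from reflectance and a refined (here gamma-adjusted) illumination
    illumination, reflectance = decompose_illumination(gray)
    refined = np.power(illumination, gamma)               # lifts dark, nonuniformly lit regions
    return np.clip(reflectance * refined, 0.0, 1.0)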

Posted Content
TL;DR: In this article, a cosmological solution to the electroweak hierarchy problem is presented, which can be decoupled from the strong-CP problem and discuss its possible implementations and phenomenology.
Abstract: We present a cosmological solution to the electroweak hierarchy problem. After discussing general features of cosmological approaches to naturalness, we extend the Standard Model with two light scalars very weakly coupled to the Higgs and present the mechanism, which we recently introduced in a companion paper to explain jointly the electroweak hierarchy and the strong-CP problem. In this work we show that this solution can be decoupled from the strong-CP problem and discuss its possible implementations and phenomenology. The mechanism works with any standard inflationary sector, it does not require weak-scale inflation or a large number of e-folds, and does not introduce ambiguities related to eternal inflation. The cutoff of the theory can be as large as the Planck scale, both for the Cosmological Constant and for the Higgs sector. Reproducing the observed dark matter relic density fixes the couplings of the two new scalars to the Standard Model, offering a target to future axion or fifth force searches. Depending on the specific interaction of the scalars with the Standard Model, the mechanism either yields rich phenomenology at colliders or provides a novel joint solution to the strong-CP problem. We highlight what predictions are common to most realizations of cosmological selection of the weak scale and will allow to test this general framework in the near future.

Posted Content
TL;DR: In this paper, a fine-grained style control on the transformer-based text-to-speech synthesis (TransformerTTS) system is proposed by extracting a time sequence of local style tokens (LST) from the reference speech.
Abstract: In this paper, we present a novel architecture to realize fine-grained style control on the transformer-based text-to-speech synthesis (TransformerTTS). Specifically, we model the speaking style by extracting a time sequence of local style tokens (LST) from the reference speech. The existing content encoder in TransformerTTS is then replaced by our designed cross-attention blocks for fusion and alignment between content and style. As the fusion is performed along with the skip connection, our cross-attention block provides a good inductive bias to gradually infuse the phoneme representation with a given style. Additionally, we prevent the style embedding from encoding linguistic content by randomly truncating LST during training and using wav2vec 2.0 features. Experiments show that with fine-grained style control, our system performs better in terms of naturalness, intelligibility, and style transferability. Our code and samples are publicly available.
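
The cross-attention fusion described above, with phoneme (content) representations attending to a sequence of local style tokens through a skip connection, can be sketched as follows; the module sizes and the surrounding TransformerTTS blocks are assumptions, not the authors' released code.

import torch.nn as nn

class StyleCrossAttention(nn.Module):
    # Fuse phoneme representations with local style tokens (LST) via cross-attention plus a skip connection
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, phonemes, style_tokens):
        # phonemes: [batch, n_phonemes, d_model]; style_tokens: [batch, n_lst, d_model]
        styled, _ = self.attn(query=phonemes, key=style_tokens, value=style_tokens)
        return self.norm(phonemes + styled)               # skip connection gradually infuses style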

Journal ArticleDOI
TL;DR: Whereas many scholars have argued that the distinction between the natural and the unnatural has no moral relevance, either because the distinction does not make sense or because, even if it does, it does not make any moral sense, this article argues that the semantic distinction can be maintained and distinguishes three types of unnaturalness along the natural-unnatural spectrum.
Abstract: Many scholars have argued that the distinction between the natural and the unnatural does not have any moral relevance, either because the distinction does not make sense or because, even if it does make sense, it does not make any moral sense. Before we can decide on the latter, we must therefore determine first whether a semantic distinction can be made. In this article, I argue that the distinction can be maintained. In spite of the fact that the categories of the natural and the unnatural are blurred as no unnatural things are completely unnatural, I argue that we can meaningfully distinguish between different types of unnaturalness along the natural-unnatural spectrum. To my knowledge, this article is the first publication to distinguish between three types of unnaturalness.

Journal ArticleDOI
TL;DR: This article found that products made by smaller firms are perceived to be more natural, whether they are directly experienced or seen in ads, and that the association of firm size and naturalness is held non-consciously (study 3) and also consciously (study 2).
Abstract: Firms of varying size can produce the same product. Do consumers make inferences about products based on firm size? We focus on perceptions of product naturalness and show, in four studies, that products made by smaller firms are perceived to be more natural, whether they are directly experienced (study 1) or seen in ads (study 2). Additionally, we show that the association of firm size and naturalness is held non-consciously (study 3) and also consciously (study 2), and that it impacts purchase intention (studies 2 and 4). Our research has many implications for firms conveying product naturalness. Importantly, it highlights the need to explore possible associations between firm characteristics and product perceptions.

Journal ArticleDOI
TL;DR: This article found that calls to look natural maintain the value of attractiveness while adding the consumer concern that others will discount their attractiveness if overt effort is present; consumers may therefore engage in a self-presentational strategy wherein they construct an appearance of naturalness to signal low effort to others, thereby augmenting their attractiveness.
Abstract: Consumers seek naturalness across many domains, including physical appearance. It seems that the desire for natural beauty would discourage artificial appearance-enhancement consumption, such as cosmetic use. However, across an analysis of the “no-makeup movement” on Twitter and Nielsen cosmetic sales (Study 1a), an image analysis of #nomakeup selfies using machine learning approaches (Study 1b), and three experiments (Studies 2–4), we find that calls to look natural can be associated with increased artificial beauty practices. Drawing from attribution theory, we theorize that calls to look natural maintain the value of attractiveness while adding the consumer concern that others will discount their attractiveness if overt effort is present. Thus, rather than investing less effort, consumers may engage in a self-presentational strategy wherein they construct an appearance of naturalness to signal low effort to others, thereby augmenting their attractiveness. This work contributes to attribution and self-presentation theory and offers practical implications for naturalness consumption.

Journal ArticleDOI
01 Jun 2021-Synthese
TL;DR: This paper proposes two desiderata for a theory of natural kinds, discusses one example of a ‘general’ epistemology-only theory, proposed by Marc Ereshefsky and Thomas Reydon, and argues that theories like theirs fail to provide adequate criteria of natural kinds.
Abstract: Several philosophers have recently tried to define natural kinds in epistemic terms only. Given the persistent problems with finding a successful metaphysical theory, these philosophers argue that we would do better to describe natural kinds solely in terms of their epistemic usefulness, such as their role in supporting inductive inferences. In this paper, I argue against these epistemology-only theories of natural kinds and in favor of, at least partly, metaphysical theories. I do so in three steps. In the first section of the paper, I propose two desiderata for a theory of natural kinds. In the second section, I discuss one example of a ‘general’ epistemology-only theory, proposed by Marc Ereshefsky and Thomas Reydon, and argue that theories like theirs fail to provide adequate criteria of natural kinds. In the third section, I focus on one example of a ‘specific’ epistemology-only theory, proposed by P. D. Magnus, and use it to show why such theories cannot justify the claim that the proposed epistemic criteria account for the naturalness of kinds.

Journal ArticleDOI
27 Aug 2021
TL;DR: This study provides three different vision descriptions for image recoloring methods and shows that the supervised learning method outperforms conventional methods on performance measures such as the naturalness index and the feature similarity index (FSIM).
Abstract: Recent research has discovered new applications for object tracking and identification by simulating the colour distribution of a homogeneous region. The colour distribution of an object is resilient to partial occlusion, scaling, and distortion, and may remain relatively stable when the object is rotated in depth. A challenging task in image recoloring is the identification of dichromatic color appearance, which remains a significant requirement in many recoloring imaging sectors. This study provides three different vision descriptions for image recoloring methods. Descriptions of protanopia, deuteranopia, and tritanopia may be incorporated and evaluated using parametric, machine learning, and reinforcement learning techniques, among others. Across the different image recoloring techniques, the supervised learning method is shown to outperform other conventional methods on performance measures such as the naturalness index and the feature similarity index (FSIM).

Proceedings ArticleDOI
19 Sep 2021
TL;DR: This paper addresses the problem of blind stereoscopic image quality assessment (NR-SIQA) using a new multi-task deep learning-based method that computes naturalness-based features using a Natural Scene Statistics (NSS) model in the complex wavelet domain.
Abstract: This paper addresses the problem of blind stereoscopic image quality assessment (NR-SIQA) using a new multi-task deep learning-based method. In stereoscopic vision, the information is fairly distributed between the left and right views, together with the binocular phenomenon. In this work, we propose to integrate these characteristics to estimate the quality of stereoscopic images without reference through a convolutional neural network. Our method is based on two main tasks: the first task predicts naturalness-analysis-based features adapted to stereo images, while the second task predicts the quality of such images. The former, so-called auxiliary task, aims to find more robust and relevant features to improve the quality prediction. To do this, we compute naturalness-based features using a Natural Scene Statistics (NSS) model in the complex wavelet domain, which allows us to capture the statistical dependency between the pairs of stereoscopic images. Experiments are conducted on the well-known LIVE PHASE I and LIVE PHASE II databases. The results show the relevance of our method when compared with the state-of-the-art. Our code is available online on this https URL.
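
The two-task setup described above, an auxiliary head regressing NSS/naturalness features next to the main quality head over a shared backbone, can be sketched as below. The backbone layers, channel counts, and feature dimensions are placeholders, not the authors' architecture.

import torch.nn as nn

class MultiTaskSIQA(nn.Module):
    # Shared CNN backbone with a quality-score head and an auxiliary naturalness-feature head
    def __init__(self, n_nss_features=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),   # 6 channels: stacked left + right views
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.quality_head = nn.Linear(64, 1)              # main task: quality score
        self.nss_head = nn.Linear(64, n_nss_features)     # auxiliary task: naturalness-based features

    def forward(self, stereo_pair):
        feats = self.backbone(stereo_pair)
        return self.quality_head(feats), self.nss_head(feats)

# training sketch: loss = mse(quality_pred, mos) + aux_weight * mse(nss_pred, nss_target)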

Journal ArticleDOI
TL;DR: A fuzzy c-means clustering-based method for image enhancement is proposed which enhances perceptually invisible images and improves contrast while preserving their color and naturalness, without introducing any artifacts.
Abstract: Image enhancement is a basic requirement for any computer vision application before further processing of an image. A common limitation of most existing methods, when applied to nearly invisible images, is the loss of color detail during the enhancement process. A fuzzy c-means clustering-based method for image enhancement is therefore proposed which enhances the perceptually invisible image while preserving its color and naturalness. In this method, the image pixels are grouped into different clusters and are assigned membership values to those clusters. Based on these membership values, each pixel's intensity level is modified in the spatial domain. Modifying the gray levels in proportion to the membership values stretches the image histogram while keeping its shape similar to the original histogram. The process results in a very small shift in the mean intensity, which preserves the color- and brightness-related information of the image. The method enhances the image contrast and maintains naturalness without introducing any artifacts. Simulation results on standard datasets show that the proposed algorithm is superior to many state-of-the-art and traditional methods for perceptually invisible images.
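
For readers unfamiliar with the clustering step, a minimal NumPy fuzzy c-means over pixel intensities is sketched below; the subsequent membership-proportional modification of gray levels is specific to the paper and is not reproduced, and the cluster count and fuzzifier value are illustrative.

import numpy as np

def fuzzy_c_means_1d(values, n_clusters=3, m=2.0, n_iter=50, seed=0):
    # values: flat array of pixel intensities; returns (cluster centers, memberships of shape (N, n_clusters))
    rng = np.random.default_rng(seed)
    u = rng.random((values.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)                     # random initial memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ values) / um.sum(axis=0)        # membership-weighted cluster centers
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = dist ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                 # standard FCM membership update
    return centers, u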

Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, the authors discuss some approaches to increasing the naturalness and flexibility of human-robot interaction, with examples from the WikiTalk dialogue system, and discuss the need for a Wikipedia-based listening capability to enable robots to follow the changing topics in human conversation.
Abstract: The chapter discusses some approaches to increasing the naturalness and flexibility of human-robot interaction, with examples from the WikiTalk dialogue system. WikiTalk enables robots to talk fluently about thousands of topics using Wikipedia-based talking. However, there are three challenging areas that need to be addressed to make the system more natural: speech interaction, face recognition, interaction history. We address these challenges and describe more context-aware approaches taking the individual partner into account when generating responses. Finally, we discuss the need for a Wikipedia-based listening capability to enable robots to follow the changing topics in human conversation. This would allow robots to join in the conversation using Wikipedia-based talking to make new topically relevant dialogue contributions.

Journal ArticleDOI
01 Jan 2021
TL;DR: The crushing industry is called upon to modify its processing methods in response to a rising demand for vegetable proteins while at the same time increasing transparency and naturality; changing its processes without taking this demand into account would mean risking rejection and failure.
Abstract: The crushing industry is called upon to modify its processing methods in response to a rising demand for vegetable proteins, while at the same time increasing transparency and naturality. Changing the processes without taking this request into account would mean risking rejection and failure. The social sciences have shown that the collective unconscious inextricably links the notion of naturalness to healthy eating, respect for the environment, and social honesty. However, this notion goes beyond what’s rational and proves difficult to pin down when it comes to evaluating products. France does not recognize the ISO data sheet that defines what a “natural” ingredient is. Yet we do need a standard, if only to make informed choices between different possible technological paths. This standard of reference could be inspired by available norms in related fields, or it could be based on the best available technologies within a framework that takes into account both societal aspirations and the technical and economic possibilities of the industrial world. To achieve this, the sector’s representative bodies, the State, and consumer advocate groups should engage in a collective approach.

Posted Content
TL;DR: In this article, the authors compare discrete and soft speech units as input features and find that discrete representations effectively remove speaker information but discard some linguistic content, leading to mispronunciations.
Abstract: The goal of voice conversion is to transform source speech into a target voice, keeping the content unchanged. In this paper, we focus on self-supervised representation learning for voice conversion. Specifically, we compare discrete and soft speech units as input features. We find that discrete representations effectively remove speaker information but discard some linguistic content - leading to mispronunciations. As a solution, we propose soft speech units. To learn soft units, we predict a distribution over discrete speech units. By modeling uncertainty, soft units capture more content information, improving the intelligibility and naturalness of converted speech. Samples available at https://ubisoft-laforge.github.io/speech/soft-vc/
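
The soft-unit idea, predicting a distribution over K discrete units and taking the expected unit embedding, can be sketched like this; the feature dimension, number of units, and module names are illustrative assumptions rather than the authors' release.

import torch
import torch.nn as nn

class SoftUnitPredictor(nn.Module):
    # Map content-encoder features to soft speech units: a softmax over K discrete units mixes their embeddings
    def __init__(self, feature_dim=768, n_units=100, unit_dim=256):
        super().__init__()
        self.to_logits = nn.Linear(feature_dim, n_units)
        self.unit_embeddings = nn.Embedding(n_units, unit_dim)

    def forward(self, features):
        # features: [batch, frames, feature_dim]
        probs = torch.softmax(self.to_logits(features), dim=-1)   # distribution over discrete units
        return probs @ self.unit_embeddings.weight                # expected embedding retains more content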

Journal ArticleDOI
TL;DR: In this article, an objectively defined Food Naturalness Index (FNI) predicted perceived naturalness with a high degree of accuracy in a sample of 179 participants who ranked 28 snacks from least to most natural.

Journal ArticleDOI
TL;DR: Qualitative, quantitative, and subjective evaluation experiments show that the proposed Optimized Dichromacy Projection, a novel high-speed image recolouring technique for dichromacy compensation, can generate images of quality competitive with state-of-the-art methods while drastically improving computation time.


Journal ArticleDOI
11 Jun 2021
TL;DR: In this paper, the concept of naturalness is reconsidered with respect to a particular data collection practice: when the researcher themselves is a participant in the recorded data. It is argued that analysis may be guided by how the researcher-participant is treated by others in the data, and that researchers may be considered as any other participant if treated as making activity-adequate (rather than research-adequate) contributions.
Abstract: Conversation analysis strives to use naturalistic data in its research, but the definition of “natural” is often unclear (Speer, 2002) and can be at odds with both ethnomethodological understandings of data (Lynch, 2002) and practices of data collection (e.g., Stevanovic et al., 2017; Goodwin, 2018). In this paper, I reconsider the concept of naturalness with respect to a particular data collection practice: When the researcher themselves is a participant in the recorded data. I argue that analysis may be guided by how the researcher-participant is treated by others in the data, and that researchers may be considered as any other participant if treated as making activity-adequate (rather than research-adequate) contributions. Furthermore, researcher presence can demonstrate unique adequacy and provides opportunities to experiment with situated practices that otherwise are atypical or hard to access. This version of “natural” respecifies naturalness as a members’ concern in recorded interaction.

Journal ArticleDOI
TL;DR: In this article, a novel approach combining a naturalness assessment model with a species richness relationship is proposed to evaluate the impact of forestry on ecosystem quality in life cycle assessment (LCA). It is applied to a case study evaluating forest management strategies that combine silvicultural scenarios (plantation only, careful logging only, or the current mix of both) with an increasing share of protected area for wood production in a Quebec black spruce forest.
Abstract: A novel approach is proposed to evaluate the impact of forestry on ecosystem quality in life cycle assessment (LCA), combining a naturalness assessment model with a species richness relationship. The approach is applied to a case study evaluating different forest management strategies that concomitantly combine silvicultural scenarios (plantation only, careful logging only, or the current mix of both) with an increasing share of protected area for wood production in a Quebec black spruce forest. The naturalness index is useful for comparing forest management scenarios and can help evaluate conservation needs considering the type of management foreseen for wood production. The results indicate that it is preferable to intensify forest management over a small proportion of the forest territory while ensuring strict protection over the remaining portion, rather than practising extensive forest management over most of the forested area. To explore the introduction of naturalness into LCA, a provisional curve relating the naturalness index (NI) to the potentially disappeared fraction of species (PDF) was developed using species richness data from the literature. LCA impact scores in PDF for producing 1 m³ of wood can yield results consistent with the naturalness index, but the uncertainty is high and the window leading to consistent results is narrow.