
Showing papers by "Brunel University London published in 2020"


Journal ArticleDOI
TL;DR: A review and roadmap are presented that systematically cover the development of intelligent fault diagnosis (IFD) following the progress of machine learning theories and offer a future perspective.

1,173 citations


Journal ArticleDOI
TL;DR: It is indicated that mean temperature has a positive linear relationship with the number of COVID-19 cases, with a threshold of 3 °C, and that there is no evidence that case counts of COVID-19 decline when the weather becomes warmer, which provides useful implications for policymakers and the public.

667 citations


Journal ArticleDOI
01 Feb 2020
TL;DR: An Access Control Policy Algorithm for improving data accessibility between healthcare providers is proposed, assisting in the simulation of environments to implement the Hyperledger-based electronic healthcare record (EHR) sharing system that uses the concept of a chaincode.
Abstract: Modern healthcare systems are characterized as being highly complex and costly. However, this can be reduced through improved health record management, utilization of insurance agencies, and blockchain technology. Blockchain was first introduced to provide distributed records of money-related exchanges that were not dependent on centralized authorities or financial institutions. Breakthroughs in blockchain technology have led to improved transactions involving medical records, insurance billing, and smart contracts, enabling permanent access to and security of data, as well as providing a distributed database of transactions. One significant advantage of using blockchain technology in the healthcare industry is that it can reform the interoperability of healthcare databases, providing increased access to patient medical records, device tracking, prescription databases, and hospital assets, including the complete life cycle of a device within the blockchain infrastructure. Access to patients’ medical histories is essential to correctly prescribe medication, with blockchain being able to dramatically enhance the healthcare services framework. In this paper, several solutions for improving current limitations in healthcare systems using blockchain technology are explored, including frameworks and tools to measure the performance of such systems, e.g., Hyperledger Fabric, Composer, Docker Container, Hyperledger Caliper, and the Wireshark capture engine. Further, this paper proposes an Access Control Policy Algorithm for improving data accessibility between healthcare providers, assisting in the simulation of environments to implement the Hyperledger-based electronic healthcare record (EHR) sharing system that uses the concept of a chaincode. Performance metrics in blockchain networks, such as latency, throughput, and Round Trip Time (RTT), have also been optimized to achieve enhanced results. Compared to traditional EHR systems, which use a client-server architecture, the proposed system uses blockchain to improve efficiency and security.
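The paper's Access Control Policy Algorithm is only summarized above. As a rough, hypothetical sketch of the idea (role names, record fields, and consent rules below are invented for illustration, not the paper's actual chaincode logic), a chaincode-style policy check between providers could look like:

```python
# Hypothetical sketch of a chaincode-style access-control policy check
# between healthcare providers. Roles, fields, and rules are invented;
# the paper's actual algorithm runs as Hyperledger Fabric chaincode.

POLICY = {
    "doctor":  {"read", "write"},   # full access to assigned records
    "nurse":   {"read"},            # read-only access
    "insurer": set(),               # access only via explicit consent
}

def can_access(role, org, action, record):
    """Return True if a requester (role, org) may perform `action` on `record`."""
    if action in POLICY.get(role, set()):
        return True
    # Fallback: organisations the patient explicitly consented to may read.
    return action == "read" and org in record.get("consented_orgs", set())

ehr = {"patient_id": "p-001", "consented_orgs": {"insurerX"}}
assert can_access("doctor", "hospitalA", "write", ehr)
assert not can_access("nurse", "hospitalA", "write", ehr)
assert can_access("insurer", "insurerX", "read", ehr)
assert not can_access("insurer", "insurerY", "read", ehr)
```

In a real deployment both the policy and the EHR metadata would live on the ledger, and a check of this kind would run inside the chaincode before any record data is returned.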

493 citations


Journal ArticleDOI
TL;DR: This Expert Consensus Statement reflects on how these ten KCs can be used to identify, organize and utilize mechanistic data when evaluating chemicals as EDCs, and uses diethylstilbestrol, bisphenol A and perchlorate as examples to illustrate this approach.
Abstract: Endocrine-disrupting chemicals (EDCs) are exogenous chemicals that interfere with hormone action, thereby increasing the risk of adverse health outcomes, including cancer, reproductive impairment, cognitive deficits and obesity. A complex literature of mechanistic studies provides evidence on the hazards of EDC exposure, yet there is no widely accepted systematic method to integrate these data to help identify EDC hazards. Inspired by work to improve hazard identification of carcinogens using key characteristics (KCs), we have developed ten KCs of EDCs based on our knowledge of hormone actions and EDC effects. In this Expert Consensus Statement, we describe the logic by which these KCs are identified and the assays that could be used to assess several of these KCs. We reflect on how these ten KCs can be used to identify, organize and utilize mechanistic data when evaluating chemicals as EDCs, and we use diethylstilbestrol, bisphenol A and perchlorate as examples to illustrate this approach.

390 citations


Journal ArticleDOI
TL;DR: Results show that media richness negatively predicts citizen engagement through government social media, but dialogic loop facilitates engagement, and all relationships were contingent upon the emotional valence of each Weibo post.

372 citations


Posted Content
TL;DR: This paper investigates how drop-weights-based Bayesian Convolutional Neural Networks (BCNNs) can estimate uncertainty in a Deep Learning solution to improve the diagnostic performance of the human-machine team using a publicly available COVID-19 chest X-ray dataset, and shows that the uncertainty in prediction is highly correlated with the accuracy of prediction.

Abstract: Deep Learning has achieved state-of-the-art performance in medical imaging. However, these methods for disease detection focus exclusively on improving the accuracy of classification or prediction without quantifying the uncertainty in a decision. Knowing how much confidence there is in a computer-based medical diagnosis is essential for gaining clinicians' trust in the technology and thereby improving treatment. Today, 2019 coronavirus (SARS-CoV-2) infections are a major healthcare challenge around the world. Detecting COVID-19 in X-ray images is crucial for diagnosis, assessment and treatment. However, diagnostic uncertainty in the report is a challenging yet inevitable task for radiologists. In this paper, we investigate how drop-weights-based Bayesian Convolutional Neural Networks (BCNNs) can estimate uncertainty in a Deep Learning solution to improve the diagnostic performance of the human-machine team using a publicly available COVID-19 chest X-ray dataset, and show that the uncertainty in prediction is highly correlated with the accuracy of prediction. We believe that the availability of uncertainty-aware deep learning solutions will enable wider adoption of Artificial Intelligence (AI) in clinical settings.
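The drop-weights idea approximates Bayesian inference by keeping weights randomly zeroed at test time and reading predictive uncertainty off the spread of repeated stochastic forward passes. A minimal sketch of that mechanism, with a tiny linear "model" standing in for the paper's convolutional network (the weights, input, and drop rate below are invented):

```python
import random
import statistics

# Toy illustration of drop-weights / Monte Carlo dropout uncertainty:
# weights stay randomly dropped at *test* time, and the spread of many
# stochastic predictions serves as the uncertainty estimate.

random.seed(0)
WEIGHTS = [0.4, -0.2, 0.7]           # pretend these were learned
P_DROP = 0.5                         # drop probability

def stochastic_predict(x):
    # Each forward pass randomly zeroes weights (rescaled to keep the mean).
    return sum(
        (w / (1 - P_DROP)) * xi if random.random() > P_DROP else 0.0
        for w, xi in zip(WEIGHTS, x)
    )

def predict_with_uncertainty(x, n_samples=500):
    samples = [stochastic_predict(x) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = predict_with_uncertainty([1.0, 2.0, 3.0])
# `mean` is the prediction; `std` is the model's uncertainty for this input.
```

In the paper's setting, inputs whose repeated passes disagree strongly (large spread) are exactly the cases that should be referred back to the radiologist.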

295 citations


Journal ArticleDOI
TL;DR: In this article, an overview of superhydrophobic surfaces (SHS) is provided and their fabrication methods are discussed; the corrosion resistance, chemical stability, and mechanical stability of SHS fabricated by various methods are then reviewed.

288 citations


Journal ArticleDOI
TL;DR: An optimization algorithm is developed to minimize the trace of the estimated ellipsoid set, and the effect from the adopted event-triggered threshold is thoroughly discussed as well.
Abstract: This paper is concerned with the distributed set-membership filtering problem for a class of general discrete-time nonlinear systems under event-triggered communication protocols over sensor networks. To mitigate the communication burden, each intelligent sensing node broadcasts its measurement to the neighboring nodes only when a predetermined event-based media-access condition is satisfied. According to the interval mathematics theory, a recursive distributed set-membership scheme is designed to obtain an ellipsoid set containing the target states of interest via adequately fusing the measurements from neighboring nodes, where both the accurate estimate on Lagrange remainder and the event-based media-access condition are skillfully utilized to improve the filter performance. Furthermore, such a scheme is only dependent on neighbor information and local adjacency weights, thereby fulfilling the scalability requirement of sensor networks. In addition, an optimization algorithm is developed to minimize the trace of the estimated ellipsoid set, and the effect from the adopted event-triggered threshold is thoroughly discussed as well. Finally, a simulation example is utilized to illustrate the usefulness of the proposed distributed set-membership filtering scheme.

271 citations


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi +2,248 more (155 institutions)
TL;DR: For the first time, predictions from pythia8 obtained with tunes based on NLO or NNLO PDFs are shown to reliably describe minimum-bias and underlying-event data with a similar level of agreement to predictions from tunes using LO PDF sets.
Abstract: New sets of CMS underlying-event parameters (“tunes”) are presented for the pythia8 event generator. These tunes use the NNPDF3.1 parton distribution functions (PDFs) at leading (LO), next-to-leading (NLO), or next-to-next-to-leading (NNLO) orders in perturbative quantum chromodynamics, and the strong coupling evolution at LO or NLO. Measurements of charged-particle multiplicity and transverse momentum densities at various hadron collision energies are fit simultaneously to determine the parameters of the tunes. Comparisons of the predictions of the new tunes are provided for observables sensitive to the event shapes at LEP, global underlying event, soft multiparton interactions, and double-parton scattering contributions. In addition, comparisons are made for observables measured in various specific processes, such as multijet, Drell–Yan, and top quark-antiquark pair production including jet substructure observables. The simulation of the underlying event provided by the new tunes is interfaced to a higher-order matrix-element calculation. For the first time, predictions from pythia8 obtained with tunes based on NLO or NNLO PDFs are shown to reliably describe minimum-bias and underlying-event data with a similar level of agreement to predictions from tunes using LO PDF sets.

265 citations


Journal ArticleDOI
17 Jul 2020
TL;DR: In this article, different methods of thermal energy storage are presented, including sensible heat storage, latent heat storage and thermochemical energy storage, focusing mainly on phase change materials (PCMs) as a suitable solution for filling the gap between energy demand and supply and improving the energy efficiency of a system.
Abstract: The achievement of the European climate and energy objectives contained in the European Union's (EU) “20-20-20” targets and in the European Commission's (EC) Energy Roadmap 2050 is possible, among other things, through the use of energy storage technologies. The use of thermal energy storage (TES) in the energy system allows energy to be conserved and increases the overall efficiency of systems by eliminating differences between energy supply and demand. The article presents different methods of thermal energy storage, including sensible heat storage, latent heat storage and thermochemical energy storage, focusing mainly on phase change materials (PCMs) as a suitable solution for filling the gap between demand and supply and improving the energy efficiency of a system. PCMs allow the storage of latent thermal energy during phase change at an almost constant temperature. The article presents a classification of PCMs according to their chemical nature as organic, inorganic and eutectic, and by phase transition, with their advantages and disadvantages. In addition, different methods of improving the effectiveness of PCMs, such as employing cascaded latent heat thermal energy storage systems, encapsulation of PCMs and shape-stabilisation, are presented. Furthermore, the use of PCMs in buildings, power generation, the food industry and automotive applications is presented, and modelling tools for analysing the functionality of PCMs are compared and classified.

223 citations


Journal ArticleDOI
TL;DR: In this article, a mixed-methods approach was adopted to investigate current practices of C&DW management and awareness of the circular construction concept (reuse, recycling and recovery of materials) in the UK.

Journal ArticleDOI
TL;DR: This work examines the known and potential impacts of ocean pollution on human health, identifies gaps in knowledge, projects future trends, and proposes priorities for interventions to control and prevent pollution of the seas and safeguard human health.
Abstract: Background: Pollution – unwanted waste released to air, water, and land by human activity – is the largest environmental cause of disease in the world today. It is responsible for an estimated nine million premature deaths per year, enormous economic losses, erosion of human capital, and degradation of ecosystems. Ocean pollution is an important, but insufficiently recognized and inadequately controlled component of global pollution. It poses serious threats to human health and well-being. The nature and magnitude of these impacts are only beginning to be understood. Goals: (1) Broadly examine the known and potential impacts of ocean pollution on human health. (2) Inform policy makers, government leaders, international organizations, civil society, and the global public of these threats. (3) Propose priorities for interventions to control and prevent pollution of the seas and safeguard human health. Methods: Topic-focused reviews that examine the effects of ocean pollution on human health, identify gaps in knowledge, project future trends, and offer evidence-based guidance for effective intervention. Environmental Findings: Pollution of the oceans is widespread, worsening, and in most countries poorly controlled. It is a complex mixture of toxic metals, plastics, manufactured chemicals, petroleum, urban and industrial wastes, pesticides, fertilizers, pharmaceutical chemicals, agricultural runoff, and sewage. More than 80% arises from land-based sources. It reaches the oceans through rivers, runoff, atmospheric deposition and direct discharges. It is often heaviest near the coasts and most highly concentrated along the coasts of low- and middle-income countries. Plastic is a rapidly increasing and highly visible component of ocean pollution, and an estimated 10 million metric tons of plastic waste enter the seas each year. 
Mercury is the metal pollutant of greatest concern in the oceans; it is released from two main sources – coal combustion and small-scale gold mining. Global spread of industrialized agriculture with increasing use of chemical fertilizer leads to extension of Harmful Algal Blooms (HABs) to previously unaffected regions. Chemical pollutants are ubiquitous and contaminate seas and marine organisms from the high Arctic to the abyssal depths. Ecosystem Findings: Ocean pollution has multiple negative impacts on marine ecosystems, and these impacts are exacerbated by global climate change. Petroleum-based pollutants reduce photosynthesis in marine microorganisms that generate oxygen. Increasing absorption of carbon dioxide into the seas causes ocean acidification, which destroys coral reefs, impairs shellfish development, dissolves calcium-containing microorganisms at the base of the marine food web, and increases the toxicity of some pollutants. Plastic pollution threatens marine mammals, fish, and seabirds and accumulates in large mid-ocean gyres. It breaks down into microplastic and nanoplastic particles containing multiple manufactured chemicals that can enter the tissues of marine organisms, including species consumed by humans. Industrial releases, runoff, and sewage increase frequency and severity of HABs, bacterial pollution, and anti-microbial resistance. Pollution and sea surface warming are triggering poleward migration of dangerous pathogens such as the Vibrio species. Industrial discharges, pharmaceutical wastes, pesticides, and sewage contribute to global declines in fish stocks. Human Health Findings: Methylmercury and PCBs are the ocean pollutants whose human health effects are best understood. Exposures of infants in utero to these pollutants through maternal consumption of contaminated seafood can damage developing brains, reduce IQ and increase children’s risks for autism, ADHD and learning disorders. 
Adult exposures to methylmercury increase risks for cardiovascular disease and dementia. Manufactured chemicals – phthalates, bisphenol A, flame retardants, and perfluorinated chemicals, many of them released into the seas from plastic waste – can disrupt endocrine signaling, reduce male fertility, damage the nervous system, and increase risk of cancer. HABs produce potent toxins that accumulate in fish and shellfish. When ingested, these toxins can cause severe neurological impairment and rapid death. HAB toxins can also become airborne and cause respiratory disease. Pathogenic marine bacteria cause gastrointestinal diseases and deep wound infections. With climate change and increasing pollution, risk is high that Vibrio infections, including cholera, will increase in frequency and extend to new areas. All of the health impacts of ocean pollution fall disproportionately on vulnerable populations in the Global South – environmental injustice on a planetary scale. Conclusions: Ocean pollution is a global problem. It arises from multiple sources and crosses national boundaries. It is the consequence of reckless, shortsighted, and unsustainable exploitation of the earth’s resources. It endangers marine ecosystems. It impedes the production of atmospheric oxygen. Its threats to human health are great and growing, but still incompletely understood. Its economic costs are only beginning to be counted. Ocean pollution can be prevented. Like all forms of pollution, ocean pollution can be controlled by deploying data-driven strategies based on law, policy, technology, and enforcement that target priority pollution sources. Many countries have used these tools to control air and water pollution and are now applying them to ocean pollution. Successes achieved to date demonstrate that broader control is feasible. Heavily polluted harbors have been cleaned, estuaries rejuvenated, and coral reefs restored. Prevention of ocean pollution creates many benefits. 
It boosts economies, increases tourism, helps restore fisheries, and improves human health and well-being. It advances the Sustainable Development Goals (SDG). These benefits will last for centuries. Recommendations: World leaders who recognize the gravity of ocean pollution, acknowledge its growing dangers, engage civil society and the global public, and take bold, evidence-based action to stop pollution at source will be critical to preventing ocean pollution and safeguarding human health. Prevention of pollution from land-based sources is key. Eliminating coal combustion and banning all uses of mercury will reduce mercury pollution. Bans on single-use plastic and better management of plastic waste reduce plastic pollution. Bans on persistent organic pollutants (POPs) have reduced pollution by PCBs and DDT. Control of industrial discharges, treatment of sewage, and reduced applications of fertilizers have mitigated coastal pollution and are reducing frequency of HABs. National, regional and international marine pollution control programs that are adequately funded and backed by strong enforcement have been shown to be effective. Robust monitoring is essential to track progress. Further interventions that hold great promise include wide-scale transition to renewable fuels; transition to a circular economy that creates little waste and focuses on equity rather than on endless growth; embracing the principles of green chemistry; and building scientific capacity in all countries. Designation of Marine Protected Areas (MPAs) will safeguard critical ecosystems, protect vulnerable fish stocks, and enhance human health and well-being. Creation of MPAs is an important manifestation of national and international commitment to protecting the health of the seas.

Journal ArticleDOI
TL;DR: A data-driven method based on a neural network (NN) and the Q-learning algorithm is developed, which achieves superior performance on cost-effective schedules for the HEM system; the test results demonstrate the effectiveness of the newly developed framework.
Abstract: This paper proposes a novel framework for home energy management (HEM) based on reinforcement learning in achieving efficient home-based demand response (DR). The concerned hour-ahead energy consumption scheduling problem is duly formulated as a finite Markov decision process (FMDP) with discrete time steps. To tackle this problem, a data-driven method based on a neural network (NN) and the Q-learning algorithm is developed, which achieves superior performance on cost-effective schedules for the HEM system. Specifically, real data of electricity price and solar photovoltaic (PV) generation are timely processed for uncertainty prediction by extreme learning machine (ELM) in the rolling time windows. The scheduling decisions of the household appliances and electric vehicles (EVs) can be subsequently obtained through the newly developed framework, of which the objective is dual, i.e., to minimize the electricity bill as well as the DR-induced dissatisfaction. Simulations are performed on a residential house level with multiple home appliances, an EV and several PV panels. The test results demonstrate the effectiveness of the proposed data-driven HEM framework.
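To give a flavour of the scheduling idea, here is a deliberately tiny, bandit-style Q-learning sketch that learns in which hour to run one deferrable appliance so the bill is minimised. The prices and hyperparameters are invented; the paper's full framework adds an NN, ELM-based price/PV forecasts, and a dissatisfaction term:

```python
import random

# Minimal tabular Q-learning toy: one deferrable appliance, one decision
# (which hour to run it), reward = negative electricity cost.

random.seed(1)
PRICES = [0.30, 0.12, 0.25, 0.18]   # toy hourly prices (currency/kWh)
ALPHA, EPS = 0.1, 0.2               # learning rate, exploration rate
q = [0.0] * len(PRICES)             # Q-value of "run in hour a"

for episode in range(2000):
    # epsilon-greedy action selection over the four candidate hours
    a = (random.randrange(len(PRICES)) if random.random() < EPS
         else max(range(len(PRICES)), key=q.__getitem__))
    reward = -PRICES[a]              # cheaper hour => higher reward
    q[a] += ALPHA * (reward - q[a])  # one-step (bandit) Q-update

best_hour = max(range(len(PRICES)), key=q.__getitem__)
# best_hour converges to the cheapest hour (hour 1 in this toy setting).
```

The paper's FMDP generalises this: states carry time and forecast information, actions cover multiple appliances and EV charging, and the reward trades off cost against user dissatisfaction.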

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors carefully investigated the evolution process of a particle swarm optimization algorithm, and then proposed to incorporate more dynamic information into it to avoid accuracy loss caused by premature convergence without extra computation burden.
Abstract: High-dimensional and sparse (HiDS) matrices are frequently found in various industrial applications. A latent factor analysis (LFA) model is commonly adopted to extract useful knowledge from an HiDS matrix, whose parameter training mostly relies on a stochastic gradient descent (SGD) algorithm. However, an SGD-based LFA model's learning rate is hard to tune in real applications, making it vital to implement its self-adaptation. To address this critical issue, this study first carefully investigates the evolution process of a particle swarm optimization algorithm, and then proposes to incorporate more dynamic information into it to avoid accuracy loss caused by premature convergence without extra computation burden, thereby achieving a novel position-transitional particle swarm optimization (P2SO) algorithm. It is subsequently adopted to implement a P2SO-based LFA (PLFA) model that builds a learning rate swarm applied to the same group of LFs. Thus, a PLFA model implements highly efficient learning rate adaptation as well as represents an HiDS matrix precisely. Experimental results on four HiDS matrices emerging from real applications demonstrate that, compared with an SGD-based LFA model, a PLFA model no longer suffers from a tedious and expensive tuning process of its learning rate to achieve higher prediction accuracy for missing data.
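For readers unfamiliar with the base algorithm the paper extends: a bare-bones particle swarm optimisation (PSO) loop looks like the sketch below, here minimising a toy function f(x) = (x - 3)^2. The inertia and acceleration coefficients are standard textbook choices, not the paper's P2SO dynamics:

```python
import random

# Bare-bones 1-D particle swarm optimisation (PSO). P2SO modifies the
# velocity update with position-transitional dynamics; this sketch shows
# only the classic algorithm it builds on.

random.seed(0)
f = lambda x: (x - 3.0) ** 2         # toy objective to minimise
W, C1, C2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights

pos = [random.uniform(-10, 10) for _ in range(10)]  # particle positions
vel = [0.0] * 10                                    # particle velocities
pbest = pos[:]                        # personal best positions
gbest = min(pbest, key=f)             # global best position

for _ in range(100):
    for i in range(10):
        r1, r2 = random.random(), random.random()
        vel[i] = (W * vel[i] + C1 * r1 * (pbest[i] - pos[i])
                  + C2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=f)
# gbest converges towards the minimiser x = 3.
```

In the PLFA model, each particle instead encodes a candidate learning rate, and the "objective" is the LFA model's prediction error on the HiDS matrix.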

Journal ArticleDOI
TL;DR: This article tackles the recursive filtering problem for a class of stochastic nonlinear time-varying complex networks suffering from both state saturations and deception attacks, and designs a state-saturated recursive filter such that a certain upper bound is guaranteed on the filtering error covariance and is then minimized at each time instant.
Abstract: This article tackles the recursive filtering problem for a class of stochastic nonlinear time-varying complex networks (CNs) suffering from both state saturations and deception attacks. The nonlinear inner coupling and the state saturations are taken into account to characterize the nonlinear nature of CNs. From the defender’s perspective, the randomly occurring deception attack is governed by a Bernoulli-distributed white sequence with a given probability. The objective of the addressed problem is to design a state-saturated recursive filter such that, in the simultaneous presence of state saturations and randomly occurring deception attacks, a certain upper bound is guaranteed on the filtering error covariance, and such an upper bound is then minimized at each time instant. By employing the induction method, an upper bound on the filtering error variance is first constructed in terms of the solutions to a set of matrix difference equations. Subsequently, the filter parameters are appropriately designed to minimize such an upper bound. Finally, a numerical simulation example is provided to demonstrate the feasibility and usefulness of the proposed filtering scheme.
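The full state-saturated, attack-resilient filter is beyond a short sketch, but its backbone is the familiar recursion: predict the error variance forward, then choose the gain that minimises it at each instant. A scalar Kalman-style version of that predict/minimise rhythm (toy system values, without the paper's saturation and attack terms) is:

```python
# Scalar recursive (Kalman-style) filter: predict, then correct with the
# gain that minimises the posterior error variance. The paper follows the
# same rhythm but minimises an *upper bound* on the covariance under
# state saturations and deception attacks. System values are toy choices.

A, C = 0.9, 1.0          # state transition and measurement maps
Q, R = 0.05, 0.2         # process and measurement noise variances

def kalman_step(x_est, p, y):
    """One recursion: returns the updated estimate and error variance."""
    x_pred = A * x_est                       # state prediction
    p_pred = A * p * A + Q                   # predicted error variance
    k = p_pred * C / (C * p_pred * C + R)    # variance-minimising gain
    x_new = x_pred + k * (y - C * x_pred)    # measurement correction
    p_new = (1 - k * C) * p_pred             # updated (minimised) variance
    return x_new, p_new

x_est, p = 0.0, 1.0                          # initial estimate and variance
for y in [1.2, 0.8, 1.0, 0.9, 1.1]:          # toy measurements
    x_est, p = kalman_step(x_est, p, y)
# p shrinks toward a steady-state value as measurements arrive.
```

The induction argument in the paper plays the role that the exact Riccati recursion plays here: it produces a computable bound on p that the filter gain then minimises.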

Journal ArticleDOI
05 Jan 2020
TL;DR: In this article, the entire production process of aluminium from ore to the finished metallic alloy product is discussed, with a focus on potential applications within the industry for waste heat recovery technologies.
Abstract: Aluminium is becoming more frequently used across industries due to its beneficial properties, generally in alloyed form. This paper outlines the entire production process of aluminium from ore to the finished metallic alloy product. In addition, the article looks at the state-of-the-art technologies used in each discrete process step. Particular interest is directed towards casting technologies and secondary recycling, as the relative proportion of recycled aluminium is increasing dramatically and aluminium is much more energy efficient to recycle than to produce through primary methods. Future developments within the industry are discussed, in particular inert anode technology. Aluminium production is responsible for a large environmental impact, and its gaseous emissions and solid residue by-products are discussed. In addition to the environmental impact, the industry is highly energy intensive and releases a large proportion of energy to the atmosphere in the form of waste heat. One method of reducing energy consumption and decreasing the environmental impact of emissions is to install waste heat recovery technology. Applied methods to reduce energy consumption are examined, concluding with a focus on potential applications within the industry for waste heat recovery technologies.

Journal ArticleDOI
TL;DR: A novel MHE strategy is developed to cope with the underlying NLS with unknown inputs by dedicatedly introducing certain temporary estimates of unknown inputs, where the desired estimator parameters are designed to decouple the estimation error dynamics from the unknown inputs.
Abstract: This article is concerned with the moving horizon estimation (MHE) problem for networked linear systems (NLSs) with unknown inputs under dynamic quantization effects. For NLSs with unknown input signals, the conventional MHE strategy is incapable of guaranteeing satisfactory performance, as the estimation error depends on the external disturbances. In this work, a novel MHE strategy is developed to cope with the underlying NLS with unknown inputs by dedicatedly introducing certain temporary estimates of the unknown inputs, where the desired estimator parameters are designed to decouple the estimation error dynamics from the unknown inputs. A two-step design strategy (namely, a decoupling step and a convergence step) is proposed to obtain the estimator parameters. In the decoupling step, the decoupling parameter of the moving horizon estimator is designed based on certain assumptions on the system parameters and quantization parameters. In the convergence step, by employing a special observability decomposition scheme, the convergence parameters of the moving horizon estimator are obtained such that the estimation error dynamics is ultimately bounded. Moreover, the developed MHE strategy is extended to the scenario with direct feedthrough of unknown inputs. Two simulation examples are given to demonstrate the correctness and effectiveness of the proposed MHE strategies.

Journal ArticleDOI
TL;DR: The objective of the addressed variance-constrained estimation problem is to construct a recursive state estimator such that, in the simultaneous presence of event-based transmission strategy, randomly switching topologies as well as multiple missing measurements, a locally optimal upper bound is guaranteed on the estimation error covariance by properly determining the estimator gain.

Journal ArticleDOI
TL;DR: The main purpose of the addressed filtering problem is to design a set of distributed filters such that, in the simultaneous presence of the RR transmission protocol, the multirate mechanism, and the bounded noises, there exists a certain ellipsoid that includes all possible error states at each time instant.
Abstract: In this paper, the distributed set-membership filtering problem is dealt with for a class of time-varying multirate systems in sensor networks under a communication protocol. To relieve the communication burden, the Round-Robin (RR) protocol is exploited to orchestrate the transmission order, under which each sensor node only broadcasts partial information to both the corresponding local filter and its neighboring nodes. In order to meet practical transmission requirements as well as reduce communication cost, a multirate strategy is proposed to govern the sampling/update rates of the plant, the sensors, and the filters. By means of the lifting technique, the augmented filtering error system is established with a unified sampling rate. The main purpose of the addressed filtering problem is to design a set of distributed filters such that, in the simultaneous presence of the RR transmission protocol, the multirate mechanism, and the bounded noises, there exists a certain ellipsoid that includes all possible error states at each time instant. Then, the desired distributed filter gains are obtained by minimizing such an ellipsoid in the sense of the minimum trace of the weighted matrix. The proposed resource-efficient filtering algorithm is of a recursive form, thereby facilitating the online implementation. A numerical simulation example is given to demonstrate the effectiveness of the proposed protocol-based distributed filter design method.
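The Round-Robin protocol itself is simple: exactly one measurement component gets the channel at each instant, in a fixed cyclic order. A toy schedule generator (component count and horizon are arbitrary illustrative values) makes this concrete:

```python
from itertools import cycle

# Toy Round-Robin (RR) transmission schedule of the kind used to
# orchestrate which measurement component a sensor broadcasts at each
# time instant: one component per step, in fixed cyclic order.

def rr_schedule(n_components, n_steps):
    """Return the component index transmitted at each of n_steps instants."""
    order = cycle(range(n_components))
    return [next(order) for _ in range(n_steps)]

# With 3 measurement components over 7 time steps:
sched = rr_schedule(3, 7)
# -> [0, 1, 2, 0, 1, 2, 0]; only component sched[k] is sent at instant k.
```

The filter design then has to account for the fact that, at instant k, only component sched[k] is fresh while the others are held at their last transmitted values.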

Journal ArticleDOI
TL;DR: A new framework by which to map the components of an AI solution and to identify and manage the value-destruction potential of AI and ML for businesses is proposed.

Journal ArticleDOI
TL;DR: This paper proposes a time-varying MH estimation method for linear systems with non-uniform sampling under a component-based dynamic event-triggered transmission (DETT) scheme, where the desired estimator parameter is calculated by solving a set of linear matrix inequalities.

Journal ArticleDOI
TL;DR: The most pressing need is to research the negative biopsychosocial impacts of the COVID‐19 pandemic to facilitate immediate and longer‐term recovery.
Abstract: The severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) that has caused the coronavirus disease 2019 (COVID-19) pandemic represents the greatest international biopsychosocial emergency the world has faced for a century, and psychological science has an integral role to offer in helping societies recover. The aim of this paper is to set out the shorter- and longer-term priorities for research in psychological science that will (a) frame the breadth and scope of potential contributions from across the discipline; (b) enable researchers to focus their resources on gaps in knowledge; and (c) help funders and policymakers make informed decisions about future research priorities in order to best meet the needs of societies as they emerge from the acute phase of the pandemic. The research priorities were informed by an expert panel convened by the British Psychological Society that reflects the breadth of the discipline; a wider advisory panel with international input; and a survey of 539 psychological scientists conducted early in May 2020. The most pressing need is to research the negative biopsychosocial impacts of the COVID-19 pandemic to facilitate immediate and longer-term recovery, not only in relation to mental health, but also in relation to behaviour change and adherence, work, education, children and families, physical health and the brain, and social cohesion and connectedness. We call on psychological scientists to work collaboratively with other scientists and stakeholders, establish consortia, and develop innovative research methods while maintaining high-quality, open, and rigorous research standards.

Journal ArticleDOI
TL;DR: The prevention of food loss throughout the supply chain, including manufacturers, has become a major challenge for a number of organizations as discussed by the authors, and consumers are also increasingly interested in food loss prevention.
Abstract: The prevention of food loss throughout the supply chain, including manufacturers, has become a major challenge for a number of organisations. In addition, consumers are also increasingly interested...

Journal ArticleDOI
TL;DR: Overall, results supported the use of music listening across a range of physical activities to promote more positive affective valence, enhance physical performance, reduce perceived exertion, and improve physiological efficiency.
Abstract: Regular physical activity has multifarious benefits for physical and mental health, and music has been found to exert positive effects on physical activity. Summative literature reviews and conceptual models have hypothesized potential benefits and salient mechanisms associated with music listening in exercise and sport contexts, although no large-scale objective summary of the literature has been conducted. A multilevel meta-analysis of 139 studies was used to quantify the effects of music listening in exercise and sport domains. In total, 598 effect sizes from four categories of potential benefits (i.e., psychological responses, physiological responses, psychophysical responses, and performance outcomes) were calculated based on 3,599 participants. Music was associated with significant beneficial effects on affective valence (g = 0.48, CI [0.39, 0.56]), physical performance (g = 0.31, CI [0.25, 0.36]), perceived exertion (g = 0.22, CI [0.14, 0.30]), and oxygen consumption (g = 0.15, CI [0.02, 0.27]). No significant benefit of music was found for heart rate (g = 0.07, CI [-0.03, 0.16]). Performance effects were moderated by study domain (exercise > sport) and music tempo (fast > slow-to-medium). Overall, results supported the use of music listening across a range of physical activities to promote more positive affective valence, enhance physical performance (i.e., ergogenic effect), reduce perceived exertion, and improve physiological efficiency.
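The effect sizes above are reported as Hedges' g, a standardized mean difference with a small-sample bias correction. As a minimal illustration of how a single such effect size is computed from two-group summary statistics (a generic sketch with made-up numbers, not the multilevel meta-analytic model used in the paper):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: Cohen's d scaled by a small-sample correction factor."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled      # Cohen's d
    j = 1 - 3 / (4 * df - 1)      # Hedges' bias-correction factor
    return j * d

# Illustrative music vs. no-music comparison (hypothetical values)
g = hedges_g(m1=10.0, s1=4.0, n1=20, m2=8.0, s2=4.0, n2=20)
print(f"{g:.3f}")  # ≈ 0.490
```

Positive g here means the music condition outperformed the comparison condition, as in the valence and performance effects reported above.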

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the impact of social media influencer endorsements on purchase intention, more specifically, the impact advertising disclosure and source credibility have in this process, and they found that advertising disclosure has a significant impact on source credibility subdimensions of attractiveness, trustworthiness and expertise.
Abstract: This paper investigates the impact of social media influencer endorsements on purchase intention, more specifically, the impact advertising disclosure and source credibility have in this process. The proposed framework argues that advertising disclosure has a significant impact on source credibility subdimensions of attractiveness, trustworthiness and expertise; subdimensions that positively influence consumer purchase intention. Empirical findings based on 306 German Instagram users between 18 and 34 years of age reveal that source attractiveness, source trustworthiness and source expertise significantly increase consumer purchase intention; whilst advertising disclosure indirectly influences consumer purchase intention by influencing source attractiveness. Furthermore, the results reveal that the number of followers positively influences source attractiveness, source trustworthiness as well as purchase intention. All in all, this paper makes a unique contribution to product endorsement literature, with evidence highlighting how social media influencers and advertising disclosure may be used on Instagram to effectively increase consumer purchase intention.

Journal ArticleDOI
TL;DR: Existing studies that have applied deep learning to prevalent construction challenges like structural health monitoring, construction site safety, building occupancy modelling and energy demand prediction are reviewed.
Abstract: The construction industry is known to be overwhelmed with resource planning, risk management and logistic challenges which often result in design defects, project delivery delays, cost overruns and contractual disputes. These challenges have instigated research in the application of advanced machine learning algorithms such as deep learning to help with diagnostic and prescriptive analysis of causes and preventive measures. However, the publicity that tech firms such as Google, Facebook and Amazon have generated around Artificial Intelligence focuses largely on applications to unstructured data and does not reflect the full scope of the field. Many applications of deep learning, particularly within the construction sector in areas such as site planning and management, health and safety, and construction cost prediction, are yet to be explored. The overall aim of this article was to review existing studies that have applied deep learning to prevalent construction challenges like structural health monitoring, construction site safety, building occupancy modelling and energy demand prediction. To the best of our knowledge, there is currently no extensive survey of the applications of deep learning techniques within the construction industry. This review would inspire future research into how best to apply image processing, computer vision, natural language processing techniques of deep learning to numerous challenges in the industry. Limitations of deep learning such as the black box challenge, ethics and GDPR, cybersecurity and cost, that can be expected by construction researchers and practitioners when adopting some of these techniques were also discussed.

Journal ArticleDOI
TL;DR: The aim is to design a distributed filter for each sensor node such that an upper bound on the filtering error variance is guaranteed and subsequently minimized at each iteration under the dynamic event-triggered transmission protocol.

Journal ArticleDOI
TL;DR: In this paper, an exact analytical solution for RMSE calculation based on the Lambert W function is proposed. The results show that most methods presented in the literature did not calculate RMSE values correctly, because the exact expression for the cell output current was not used.
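The "exact expression for the cell output current" refers to the explicit Lambert-W solution of the single-diode photovoltaic model, which is otherwise implicit in the current I. A minimal sketch of that closed form is below; the parameter values are illustrative (chosen to be of similar magnitude to common solar-cell benchmarks), not taken from the paper, and the Lambert W evaluation is a simple Newton iteration rather than a library routine.

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W0(x) for x >= 0, via Newton's method."""
    w = math.log1p(x)  # reasonable starting guess on [0, inf)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def single_diode_current(V, Iph, I0, Rs, Rsh, a):
    """Exact output current I of the single-diode model
        I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh,
    solved explicitly with the Lambert W function
    (a = n*Vt is the modified thermal voltage)."""
    theta = (Rs * I0 * Rsh) / (a * (Rs + Rsh)) * math.exp(
        Rsh * (Rs * Iph + Rs * I0 + V) / (a * (Rs + Rsh)))
    return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (a / Rs) * lambert_w(theta)

# Illustrative parameters; the returned I satisfies the implicit
# single-diode equation to near machine precision.
V = 0.5
I = single_diode_current(V, Iph=0.76, I0=3.2e-7, Rs=0.036, Rsh=53.7, a=0.0388)
```

In practice, `scipy.special.lambertw` would typically replace the hand-rolled Newton solver; using this exact current (rather than an approximate one) is what makes the resulting RMSE between measured and modelled currents exact.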

Journal ArticleDOI
TL;DR: The Lancaster Sensorimotor Norms are unique and innovative in a number of respects: they represent the largest-ever set of semantic norms for English, at 40,000 words × 11 dimensions, they extend perceptual strength norming to the new modality of interoception, and they include the first norming of action strength across separate bodily effectors.
Abstract: Sensorimotor information plays a fundamental role in cognition. However, the existing materials that measure the sensorimotor basis of word meanings and concepts have been restricted in terms of their sample size and breadth of sensorimotor experience. Here we present norms of sensorimotor strength for 39,707 concepts across six perceptual modalities (touch, hearing, smell, taste, vision, and interoception) and five action effectors (mouth/throat, hand/arm, foot/leg, head excluding mouth/throat, and torso), gathered from a total of 3,500 individual participants using Amazon’s Mechanical Turk platform. The Lancaster Sensorimotor Norms are unique and innovative in a number of respects: They represent the largest-ever set of semantic norms for English, at 40,000 words × 11 dimensions (plus several informative cross-dimensional variables), they extend perceptual strength norming to the new modality of interoception, and they include the first norming of action strength across separate bodily effectors. In the first study, we describe the data collection procedures, provide summary descriptives of the dataset, and interpret the relations observed between sensorimotor dimensions. We then report two further studies, in which we (1) extracted an optimal single-variable composite of the 11-dimension sensorimotor profile (Minkowski 3 strength) and (2) demonstrated the utility of both perceptual and action strength in facilitating lexical decision times and accuracy in two separate datasets. These norms provide a valuable resource to researchers in diverse areas, including psycholinguistics, grounded cognition, cognitive semantics, knowledge representation, machine learning, and big-data approaches to the analysis of language and conceptual representations. The data are accessible via the Open Science Framework (http://osf.io/7emr6/) and an interactive web application (https://www.lancaster.ac.uk/psychology/lsnorms/).
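The single-variable composite described above is "Minkowski 3 strength". A plausible reading, assumed here, is the Minkowski distance of order 3 of the 11-dimensional strength profile (six perceptual modalities plus five action effectors) from the origin; a sketch under that assumption:

```python
def minkowski3_strength(profile):
    """Composite sensorimotor strength as the order-3 Minkowski distance
    of an 11-dimension profile (6 perceptual + 5 action strengths,
    each rated on a 0-5 scale) from the origin."""
    if len(profile) != 11:
        raise ValueError("expected 11 sensorimotor dimensions")
    return sum(x ** 3 for x in profile) ** (1.0 / 3.0)

# A hypothetical word's ratings: touch, hearing, smell, taste, vision,
# interoception, mouth/throat, hand/arm, foot/leg, head, torso
profile = [4.2, 1.0, 0.3, 0.1, 4.8, 0.7, 2.5, 3.9, 0.4, 1.1, 0.6]
strength = minkowski3_strength(profile)
```

Relative to a simple sum or mean, an order-3 norm weights the strongest dimensions more heavily, so a word that is intensely experienced in one modality scores higher than one weakly experienced in many.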

Journal ArticleDOI
TL;DR: Recent genomic advances, the role of innate and adaptive immune mechanisms, and the presence of an established immunosuppressive GBM microenvironment that suppresses and/or prevents the anti-tumor host response are discussed.
Abstract: Glioblastoma (GBM) is the most aggressive primary brain tumor in adults, with a poor prognosis, despite surgical resection combined with radio- and chemotherapy. The major clinical obstacles contributing to poor GBM prognosis are late diagnosis, diffuse infiltration, pseudo-palisading necrosis, microvascular proliferation, and resistance to conventional therapy. These challenges are further compounded by extensive inter- and intra-tumor heterogeneity and the dynamic plasticity of GBM cells. The complex heterogeneous nature of GBM cells is facilitated by the local inflammatory tumor microenvironment, which mostly induces tumor aggressiveness and drug resistance. An immunosuppressive tumor microenvironment of GBM provides multiple pathways for tumor immune evasion. Infiltrating immune cells, mostly tumor-associated macrophages, comprise much of the non-neoplastic population in GBM. Further understanding of the immune microenvironment of GBM is essential to make advances in the development of immunotherapeutics. Recently, whole-genome sequencing, epigenomics and transcriptional profiling have significantly helped improve the prognostic and therapeutic outcomes of GBM patients. Here, we discuss recent genomic advances, the role of innate and adaptive immune mechanisms, and the presence of an established immunosuppressive GBM microenvironment that suppresses and/or prevents the anti-tumor host response.