
Showing papers by "Open University published in 2017"


Journal ArticleDOI
Shadab Alam1, Metin Ata2, Stephen Bailey3, Florian Beutler3, Dmitry Bizyaev4, Dmitry Bizyaev5, Jonathan Blazek6, Adam S. Bolton7, Joel R. Brownstein7, Angela Burden8, Chia-Hsun Chuang9, Chia-Hsun Chuang2, Johan Comparat9, Antonio J. Cuesta10, Kyle S. Dawson7, Daniel J. Eisenstein11, Stephanie Escoffier12, Héctor Gil-Marín13, Héctor Gil-Marín14, Jan Niklas Grieb15, Nick Hand16, Shirley Ho1, Karen Kinemuchi4, D. Kirkby17, Francisco S. Kitaura16, Francisco S. Kitaura2, Francisco S. Kitaura3, Elena Malanushenko4, Viktor Malanushenko4, Claudia Maraston18, Cameron K. McBride11, Robert C. Nichol18, Matthew D. Olmstead19, Daniel Oravetz4, Nikhil Padmanabhan8, Nathalie Palanque-Delabrouille, Kaike Pan4, Marcos Pellejero-Ibanez20, Marcos Pellejero-Ibanez21, Will J. Percival18, Patrick Petitjean22, Francisco Prada21, Francisco Prada9, Adrian M. Price-Whelan23, Beth Reid3, Beth Reid16, Sergio Rodríguez-Torres9, Sergio Rodríguez-Torres21, Natalie A. Roe3, Ashley J. Ross6, Ashley J. Ross18, Nicholas P. Ross24, Graziano Rossi25, Jose Alberto Rubino-Martin20, Jose Alberto Rubino-Martin21, Shun Saito15, Salvador Salazar-Albornoz15, Lado Samushia26, Ariel G. Sánchez15, Siddharth Satpathy1, David J. Schlegel3, Donald P. Schneider27, Claudia G. Scóccola28, Claudia G. Scóccola9, Claudia G. Scóccola29, Hee-Jong Seo30, Erin Sheldon31, Audrey Simmons4, Anže Slosar31, Michael A. Strauss23, Molly E. C. Swanson11, Daniel Thomas18, Jeremy L. Tinker32, Rita Tojeiro33, Mariana Vargas Magaña34, Mariana Vargas Magaña1, Jose Alberto Vazquez31, Licia Verde, David A. Wake35, David A. Wake36, Yuting Wang37, Yuting Wang18, David H. Weinberg6, Martin White16, Martin White3, W. Michael Wood-Vasey38, Christophe Yèche, Idit Zehavi39, Zhongxu Zhai33, Gong-Bo Zhao37, Gong-Bo Zhao18 
TL;DR: In this article, the authors present cosmological results from the final galaxy clustering data set of the Baryon Oscillation Spectroscopic Survey, part of the Sloan Digital Sky Survey III.
Abstract: We present cosmological results from the final galaxy clustering data set of the Baryon Oscillation Spectroscopic Survey, part of the Sloan Digital Sky Survey III. Our combined galaxy sample comprises 1.2 million massive galaxies over an effective area of 9329 deg^2 and volume of 18.7 Gpc^3, divided into three partially overlapping redshift slices centred at effective redshifts 0.38, 0.51 and 0.61. We measure the angular diameter distance D_M and Hubble parameter H from the baryon acoustic oscillation (BAO) method, in combination with a cosmic microwave background prior on the sound horizon scale, after applying reconstruction to reduce non-linear effects on the BAO feature. Using the anisotropic clustering of the pre-reconstruction density field, we measure the product D_M H from the Alcock–Paczynski (AP) effect and the growth of structure, quantified by fσ_8(z), from redshift-space distortions (RSD). We combine individual measurements presented in seven companion papers into a set of consensus values and likelihoods, obtaining constraints that are tighter and more robust than those from any one method; in particular, the AP measurement from sub-BAO scales sharpens constraints from post-reconstruction BAOs by breaking the degeneracy between D_M and H. Combined with Planck 2016 cosmic microwave background measurements, our distance scale measurements simultaneously imply curvature Ω_K = 0.0003 ± 0.0026 and a dark energy equation-of-state parameter w = −1.01 ± 0.06, in strong affirmation of the spatially flat cold dark matter (CDM) model with a cosmological constant (ΛCDM). Our RSD measurements of fσ_8, at 6 per cent precision, are similarly consistent with this model. When combined with Type Ia supernova data, we find H_0 = 67.3 ± 1.0 km s^−1 Mpc^−1 even for our most general dark energy model, in tension with some direct measurements. Adding extra relativistic species as a degree of freedom loosens the constraint only slightly, to H_0 = 67.8 ± 1.2 km s^−1 Mpc^−1.
Assuming flat ΛCDM, we find Ω_m = 0.310 ± 0.005 and H_0 = 67.6 ± 0.5 km s^−1 Mpc^−1, and we find a 95 per cent upper limit of 0.16 eV c^−2 on the neutrino mass sum.
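As a rough numerical illustration of the distance scale these flat ΛCDM parameters imply, the following sketch integrates the Friedmann equation for the comoving distance at the survey's three effective redshifts. This is a toy calculation (simple trapezoidal integration, with Ω_m = 0.310 and H_0 = 67.6 km s^−1 Mpc^−1 taken from the abstract), not the paper's analysis pipeline:

```python
import math

# Flat LambdaCDM parameters quoted in the abstract (assumed here as inputs)
Om, H0 = 0.310, 67.6          # H0 in km/s/Mpc
c = 299792.458                # speed of light, km/s

def E(z):
    """Dimensionless Hubble rate H(z)/H0 in flat LambdaCDM."""
    return math.sqrt(Om * (1 + z)**3 + (1 - Om))

def D_M(z, steps=10000):
    """Comoving (transverse) distance in Mpc via trapezoidal integration."""
    h = z / steps
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * h) for i in range(1, steps))
    return (c / H0) * h * s

# Effective redshifts of the three BOSS redshift slices
for z in (0.38, 0.51, 0.61):
    print(f"z={z}: D_M ~ {D_M(z):.0f} Mpc, D_A ~ {D_M(z)/(1+z):.0f} Mpc")
```

The resulting comoving distances (roughly 1.5 to 2.3 Gpc across z = 0.38 to 0.61) set the scale of the D_M measurements whose BAO calibration drives the constraints quoted above.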

2,413 citations


Journal ArticleDOI
TL;DR: SDSS-IV as mentioned in this paper is a project encompassing three major spectroscopic programs: the Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2), the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, and the Extended Baryon Oscillation Spectroscopic Survey (eBOSS), with the Time Domain Spectroscopic Survey (TDSS) as a subprogram of eBOSS.
Abstract: We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median $z\sim 0.03$). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between $z\sim 0.6$ and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.

1,200 citations


Journal ArticleDOI
TL;DR: The 2017 Plasma Roadmap as mentioned in this paper is the first in a planned series of periodic updates of the Plasma Roadmap, which was first published by the Journal of Physics D: Applied Physics in 2012.
Abstract: Journal of Physics D: Applied Physics published the first Plasma Roadmap in 2012 consisting of the individual perspectives of 16 leading experts in the various sub-fields of low temperature plasma science and technology. The 2017 Plasma Roadmap is the first update of a planned series of periodic updates of the Plasma Roadmap. The continuously growing interdisciplinary nature of the low temperature plasma field and its equally broad range of applications are making it increasingly difficult to identify major challenges that encompass all of the many sub-fields and applications. This intellectual diversity is ultimately a strength of the field. The current state of the art for the 19 sub-fields addressed in this roadmap demonstrates the enviable track record of the low temperature plasma field in the development of plasmas as an enabling technology for a vast range of technologies that underpin our modern society. At the same time, the many important scientific and technological challenges shared in this roadmap show that the path forward is not only scientifically rich but has the potential to make wide and far reaching contributions to many societal challenges.

677 citations


Journal ArticleDOI
TL;DR: A suite of methods for extracting microplastics ingested by biota, including dissection, depuration, digestion and density separation, is evaluated, and the urgent need for standardisation of protocols to promote consistency in data collection and analysis is discussed.
Abstract: Microplastic debris (<5 mm) is a prolific environmental pollutant, found worldwide in marine, freshwater and terrestrial ecosystems. Interactions between biota and microplastics are prevalent, and there is growing evidence that microplastics can incite significant health effects in exposed organisms. To date, the methods used to quantify such interactions have varied greatly between studies. Here, we critically review methods for sampling, isolating and identifying microplastics ingested by environmentally and laboratory exposed fish and invertebrates. We aim to draw attention to the strengths and weaknesses of the suite of published microplastic extraction and enumeration techniques. Firstly, we highlight the risk of microplastic losses and accumulation during biotic sampling and storage, and suggest protocols for mitigating contamination in the field and laboratory. We evaluate a suite of methods for extracting microplastics ingested by biota, including dissection, depuration, digestion and density separation. Lastly, we consider the applicability of visual identification and chemical analyses in categorising microplastics. We discuss the urgent need for the standardisation of protocols to promote consistency in data collection and analysis. Harmonized methods will allow for more accurate assessment of the impacts and risks microplastics pose to biota and increase comparability between studies.

669 citations


Journal ArticleDOI
TL;DR: It is concluded here that tyre wear and tear is a stealthy source of microplastics in the environment, which can only be addressed effectively if awareness increases, knowledge gaps on quantities and effects are closed, and creative technical solutions are sought.
Abstract: Wear and tear from tyres significantly contributes to the flow of (micro-)plastics into the environment. This paper compiles the fragmented knowledge on tyre wear and tear characteristics, amounts of particles emitted, pathways in the environment, and the possible effects on humans. The estimated per capita emission ranges from 0.23 to 4.7 kg/year, with a global average of 0.81 kg/year. The emissions from car tyres (100%) are substantially higher than those of other sources of microplastics, e.g., airplane tyres (2%), artificial turf (12–50%), brake wear (8%) and road markings (5%). Emissions and pathways depend on local factors like road type or sewage systems. The relative contribution of tyre wear and tear to the total global amount of plastics ending up in our oceans is estimated to be 5–10%. In air, 3–7% of the particulate matter (PM2.5) is estimated to consist of tyre wear and tear, indicating that it may contribute to the global health burden of air pollution which has been projected by the World Health Organization (WHO) at 3 million deaths in 2012. The wear and tear also enters our food chain, but further research is needed to assess human health risks. It is concluded here that tyre wear and tear is a stealthy source of microplastics in our environment, which can only be addressed effectively if awareness increases, knowledge gaps on quantities and effects are being closed, and creative technical solutions are being sought. This requires a global effort from all stakeholders; consumers, regulators, industry and researchers alike.

628 citations


Journal ArticleDOI
TL;DR: Data Release 13 (DR13) as discussed by the authors is the first data release from SDSS-IV, whose core programs are the Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2), Mapping Nearby Galaxies at APO (MaNGA), and the Extended Baryon Oscillation Spectroscopic Survey (eBOSS); it makes publicly available the first 1390 spatially resolved integral field unit observations of nearby galaxies from MaNGA.
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) began observations in 2014 July. It pursues three core programs: the Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2), Mapping Nearby Galaxies at APO (MaNGA), and the Extended Baryon Oscillation Spectroscopic Survey (eBOSS). As well as its core program, eBOSS contains two major subprograms: the Time Domain Spectroscopic Survey (TDSS) and the SPectroscopic IDentification of ERosita Sources (SPIDERS). This paper describes the first data release from SDSS-IV, Data Release 13 (DR13). DR13 makes publicly available the first 1390 spatially resolved integral field unit observations of nearby galaxies from MaNGA. It includes new observations from eBOSS, completing the Sloan Extended QUasar, Emission-line galaxy, Luminous red galaxy Survey (SEQUELS), which also targeted variability-selected objects and X-ray-selected objects. DR13 includes new reductions of the SDSS-III BOSS data, improving the spectrophotometric calibration and redshift classification, and new reductions of the SDSS-III APOGEE-1 data, improving stellar parameters for dwarf stars and cooler stars. DR13 provides more robust and precise photometric calibrations. Value-added target catalogs relevant for eBOSS, TDSS, and SPIDERS and an updated red-clump catalog for APOGEE are also available. This paper describes the location and format of the data and provides references to important technical papers. The SDSS web site, http://www.sdss.org, provides links to the data, tutorials, examples of data access, and extensive documentation of the reduction and analysis procedures. DR13 is the first of a scheduled set that will contain new data and analyses from the planned ∼6 yr operations of SDSS-IV.

532 citations


Journal ArticleDOI
Timothy W. Shimwell1, Huub Röttgering1, Philip Best2, Wendy L. Williams3, T. J. Dijkema4, F. de Gasperin1, Martin J. Hardcastle3, George Heald5, D. N. Hoang1, A. Horneffer6, Huib Intema1, Elizabeth K. Mahony4, Elizabeth K. Mahony7, Subhash C. Mandal1, A. P. Mechev1, Leah K. Morabito1, J. B. R. Oonk1, J. B. R. Oonk4, D. A. Rafferty8, E. Retana-Montenegro1, J. Sabater2, Cyril Tasse9, Cyril Tasse10, R. J. van Weeren11, Marcus Brüggen8, Gianfranco Brunetti12, Krzysztof T. Chyzy13, John Conway14, Marijke Haverkorn15, Neal Jackson16, Matt J. Jarvis17, Matt J. Jarvis18, John McKean4, George K. Miley1, Raffaella Morganti4, Raffaella Morganti19, Glenn J. White20, Glenn J. White21, Michael W. Wise22, Michael W. Wise4, I. van Bemmel23, Rainer Beck6, Marisa Brienza4, Annalisa Bonafede8, G. Calistro Rivera1, Rossella Cassano12, A. O. Clarke16, D. Cseh15, Adam Deller4, A. Drabent, W. van Driel24, W. van Driel9, D. Engels8, Heino Falcke4, Heino Falcke15, Chiara Ferrari25, S. Fröhlich26, M. A. Garrett4, Jeremy J. Harwood4, Volker Heesen27, Matthias Hoeft23, Cathy Horellou14, Frank P. Israel1, Anna D. Kapińska28, Anna D. Kapińska29, Magdalena Kunert-Bajraszewska, D. J. McKay21, D. J. McKay30, N. R. Mohan31, Emanuela Orru4, R. Pizzo19, R. Pizzo4, Isabella Prandoni12, Dominik J. Schwarz32, Aleksandar Shulevski4, M. Sipior4, Daniel J. Smith3, S. S. Sridhar4, S. S. Sridhar19, Matthias Steinmetz33, Andra Stroe34, Eskil Varenius14, P. van der Werf1, J. A. Zensus6, Jonathan T. L. Zwart18, Jonathan T. L. Zwart35 
TL;DR: The LOFAR Two-metre Sky Survey (LoTSS) as mentioned in this paper is a deep 120-168 MHz imaging survey that will eventually cover the entire northern sky, where each of the 3170 pointings will be observed for 8 h, which, at most declinations, is sufficient to produce ~5″ resolution images with a sensitivity of ~100 μJy/beam and accomplish the main scientific aims of the survey, which are to explore the formation and evolution of massive black holes, galaxies, clusters of galaxies and large-scale structure.
Abstract: The LOFAR Two-metre Sky Survey (LoTSS) is a deep 120-168 MHz imaging survey that will eventually cover the entire northern sky. Each of the 3170 pointings will be observed for 8 h, which, at most declinations, is sufficient to produce ~5″ resolution images with a sensitivity of ~100 μJy/beam and accomplish the main scientific aims of the survey, which are to explore the formation and evolution of massive black holes, galaxies, clusters of galaxies and large-scale structure. Owing to the compact core and long baselines of LOFAR, the images provide excellent sensitivity to both highly extended and compact emission. For legacy value, the data are archived at high spectral and time resolution to facilitate subarcsecond imaging and spectral line studies. In this paper we provide an overview of the LoTSS. We outline the survey strategy, the observational status, the current calibration techniques, a preliminary data release, and the anticipated scientific impact. The preliminary images that we have released were created using a fully automated but direction-independent calibration strategy and are significantly more sensitive than those produced by any existing large-area low-frequency survey. In excess of 44 000 sources are detected in the images that have a resolution of 25″, typical noise levels of less than 0.5 mJy/beam, and cover an area of over 350 square degrees in the region of the HETDEX Spring Field (right ascension 10h45m00s to 15h30m00s and declination 45°00′00″ to 57°00′00″).

447 citations


Journal ArticleDOI
TL;DR: In this paper, a typology of coproduction in public administration is presented, which includes three levels (individual, group, collective) and four phases (commissioning, design, delivery, assessment).
Abstract: Despite an international resurgence of interest in coproduction, confusion about the concept remains. This article attempts to make sense of the disparate literature and clarify the concept of coproduction in public administration. Based on some definitional distinctions and considerations about who is involved in coproduction, when in the service cycle it occurs, and what is generated in the process, the article offers and develops a typology of coproduction that includes three levels (individual, group, collective) and four phases (commissioning, design, delivery, assessment). The levels, phases, and typology as a whole are illustrated with several examples. The article concludes with a discussion of implications for research and practice.

390 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present scientific evidence that there is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital.

387 citations


Journal ArticleDOI
TL;DR: This scoping review highlights six major categories of tested applied games for mental health and demonstrates that it is feasible to translate traditional evidence-based interventions into computer gaming formats and to exploit features of computer games for therapeutic change.
Abstract: Computer games are ubiquitous and can be utilized for serious purposes such as health and education. "Applied games" including serious games (in brief, computerized games for serious purposes) and gamification (gaming elements used outside of games) have the potential to increase the impact of mental health internet interventions via three processes. First, by extending the reach of online programs to those who might not otherwise use them. Second, by improving engagement through both game-based and "serious" motivational dynamics. Third, by utilizing varied mechanisms for change, including therapeutic processes and gaming features. In this scoping review, we aim to advance the field by exploring the potential and opportunities available in this area. We review engagement factors which may be exploited and demonstrate that there is promising evidence of effectiveness for serious games for depression from contemporary systematic reviews. We illustrate six major categories of tested applied games for mental health (exergames, virtual reality, cognitive behavior therapy-based games, entertainment games, biofeedback, and cognitive training games) and demonstrate that it is feasible to translate traditional evidence-based interventions into computer gaming formats and to exploit features of computer games for therapeutic change. Applied games have considerable potential for increasing the impact of online interventions for mental health. However, there are few independent trials, and direct comparisons of game-based and non-game-based interventions are lacking. Further research, faster iterations, rapid testing, non-traditional collaborations, and user-centered approaches are needed to respond to diverse user needs and preferences in rapidly changing environments.

361 citations


Proceedings Article
01 Oct 2017
TL;DR: The authors develop DailyDialog, a high-quality multi-turn dialog dataset that is intriguing in several aspects: the language is human-written and less noisy, and the dialogues reflect everyday communication and cover various topics of daily life.
Abstract: We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our everyday way of communicating and cover various topics of daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on the DailyDialog dataset and hope it benefits research on dialog systems. The dataset is available at http://yanran.li/dailydialog

Journal ArticleDOI
TL;DR: Several inspiring use case scenarios of Fog computing are described; several major functionalities that ideal Fog computing platforms should support, and a number of open challenges toward implementing them, are identified to shed light on future research directions for realizing Fog computing for building sustainable smart cities.
Abstract: The Internet of Things (IoT) aims to connect billions of smart objects to the Internet, which can bring a promising future to smart cities. These objects are expected to generate large amounts of data and send the data to the cloud for further processing, especially for knowledge discovery, so that appropriate actions can be taken. However, in reality, sensing all possible data items captured by a smart object and then sending the complete captured data to the cloud is of limited use. Further, such an approach would also lead to resource wastage (e.g., network, storage, etc.). The Fog (Edge) computing paradigm has been proposed to counter this weakness by pushing knowledge discovery using data analytics to the edge. However, edge devices have limited computational capabilities. Owing to their inherent strengths and weaknesses, neither the Cloud computing nor the Fog computing paradigm addresses these challenges alone. Therefore, both paradigms need to work together in order to build a sustainable IoT infrastructure for smart cities. In this article, we review existing approaches that have been proposed to tackle the challenges in the Fog computing domain. Specifically, we describe several inspiring use case scenarios of Fog computing, identify ten key characteristics and common features of Fog computing, and compare more than 30 existing research efforts in this domain. Based on our review, we further identify several major functionalities that ideal Fog computing platforms should support, and a number of open challenges toward implementing them, to shed light on future research directions on realizing Fog computing for building sustainable smart cities.

Journal ArticleDOI
TL;DR: In this article, the mediating effects of innovative work behavior on the relationship between organizational climate for innovation and organizational performance are investigated, based on a survey of 202 managers working in Malaysian companies.

Journal ArticleDOI
TL;DR: In this paper, the authors presented a catalogue of similar to 3000 submillimetre sources detected at 850 mu m over similar to 5 deg(2) surveyed as part of the SCUBA-2 Cosmology Legacy Survey (S2CLS).
Abstract: We present a catalogue of ~3000 submillimetre sources detected (≥3.5σ) at 850 μm over ~5 deg² surveyed as part of the James Clerk Maxwell Telescope (JCMT) SCUBA-2 Cosmology Legacy Survey (S2CLS). This is the largest survey of its kind at 850 μm, increasing the sample size of 850 μm-selected submillimetre galaxies by an order of magnitude. The wide 850 μm survey component of S2CLS covers the extragalactic fields: UKIDSS-UDS, COSMOS, Akari-NEP, Extended Groth Strip, Lockman Hole North, SSA22 and GOODS-North. The average 1σ depth of S2CLS is 1.2 mJy beam⁻¹, approaching the SCUBA-2 850 μm confusion limit, which we determine to be σ_c ≈ 0.8 mJy beam⁻¹. We measure the 850 μm number counts, reducing the Poisson errors on the differential counts to approximately 4 per cent at S_850 ≈ 3 mJy. With several independent fields, we investigate field-to-field variance, finding that the number counts on 0.5°-1° scales are generally within 50 per cent of the S2CLS mean for S_850 > 3 mJy, with scatter consistent with the Poisson and estimated cosmic variance uncertainties, although there is a marginal (2σ) density enhancement in GOODS-North. The observed counts are in reasonable agreement with recent phenomenological and semi-analytic models, although determining the shape of the faint-end slope (S_850 < 3 mJy) remains challenging. At S_850 > 10 mJy there are approximately 10 sources per square degree, and we detect the distinctive up-turn in the number counts indicative of the detection of local sources of 850 μm emission.
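The quoted ~4 per cent Poisson errors on the differential counts pin down the underlying bin populations, since the fractional Poisson uncertainty on a count of N sources is 1/√N. A minimal sketch of that arithmetic (the 4 per cent figure is from the abstract; the implied bin count is our own back-of-envelope inference, not an S2CLS number):

```python
import math

def frac_poisson_err(n):
    """Fractional Poisson uncertainty on a count of n sources: 1/sqrt(n)."""
    return 1.0 / math.sqrt(n)

# A ~4% fractional error implies of order N ~ (1/0.04)^2 = 625 sources per bin
n_implied = round((1 / 0.04) ** 2)
print(n_implied)
print(f"{frac_poisson_err(n_implied):.3f}")
```

This is why an order-of-magnitude jump in sample size translates directly into roughly a threefold tightening of the per-bin count uncertainties.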

Journal ArticleDOI
TL;DR: It is argued that this energy consumption currently lies in the range of 100–500 MW; alternative schemes that are less energy demanding are outlined, and for these, too, energy consumption is not a primary concern.

Journal ArticleDOI
TL;DR: In this paper, the authors suggest changes to the theory of public value and, in particular, the strategic triangle framework, in order to adapt it to an emerging world where policy makers and managers in the public, private, voluntary and informal community sectors have to somehow separately and jointly create public value.
Abstract: This essay suggests changes to the theory of public value and, in particular, the strategic triangle framework, in order to adapt it to an emerging world where policy makers and managers in the public, private, voluntary and informal community sectors have to somehow separately and jointly create public value. One set of possible changes concerns what might be in the centre of the strategic triangle besides the public manager. Additional suggestions are made concerning how multiple actors, levels, arenas and/or spheres of action, and logics might be accommodated. Finally, possibilities are outlined for how the strategic triangle might be adapted to complex policy fields in which there are multiple, often conflicting organizations, interests and agendas. In other words, how might politics be more explicitly accommodated. The essay concludes with a number of research suggestions.

Journal ArticleDOI
TL;DR: An evidence-informed plea to teachers, administrators and researchers to stop propagating the learning styles myth is delivered.
Abstract: We all differ from each other in a multitude of ways, and as such we also prefer many different things whether it is music, food or learning. Because of this, many students, parents, teachers, administrators and even researchers feel that it is intuitively correct to say that since different people prefer to learn visually, auditively, kinesthetically or whatever other way one can think of, we should also tailor teaching, learning situations and learning materials to those preferences. Is this a problem? The answer is a resounding: Yes! Broadly speaking, there are a number of major problems with the notion of learning styles. First, there is quite a difference between the way that someone prefers to learn and that which actually leads to effective and efficient learning. Second, a preference for how one studies is not a learning style. Most so-called learning styles are based on types; they classify people into distinct groups. The assumption that people cluster into distinct groups, however, receives very little support from objective studies. Finally, nearly all studies that report evidence for learning styles fail to satisfy just about all of the key criteria for scientific validity. This article delivers an evidence-informed plea to teachers, administrators and researchers to stop propagating the learning styles myth.

Journal ArticleDOI
01 Mar 2017-Geology
TL;DR: The authors reconstruct the rise of a segment of the southern flank of the Himalaya-Tibet orogen, to the south of the Lhasa terrane, using a paleoaltimeter based on paleoenthalpy encoded in fossil leaves from two new assemblages in southern Tibet (Liuqu and Qiabulin) and four previously known floras from the foreland basin.
Abstract: We reconstruct the rise of a segment of the southern flank of the Himalaya-Tibet orogen, to the south of the Lhasa terrane, using a paleoaltimeter based on paleoenthalpy encoded in fossil leaves from two new assemblages in southern Tibet (Liuqu and Qiabulin) and four previously known floras from the Himalaya foreland basin. U-Pb dating of zircons constrains the Liuqu flora to the latest Paleocene (ca. 56 Ma) and the Qiabulin flora to the earliest Miocene (21–19 Ma). The proto-Himalaya grew slowly against a high (~4 km) proto-Tibetan Plateau, from ~1 km in the late Paleocene to ~2.3 km at the beginning of the Miocene, and achieved at least ~5.5 km by ca. 15 Ma. Contrasting precipitation patterns between the Himalaya-Tibet edifice and the Himalaya foreland basin for the past ~56 m.y. show progressive drying across southern Tibet, seemingly linked to the uplift of the Himalaya orogen.

Journal ArticleDOI
TL;DR: Findings suggest that the proposed framework and procedure for creating a composite search index, adopted in a generalized dynamic factor model, yield better forecast accuracy than two benchmark models: a traditional time series model and a model with an index created by principal component analysis.

Journal ArticleDOI
31 Aug 2017-Nature
TL;DR: Boron isotope data are presented that show that the ocean surface pH was persistently low during the PETM, and enhanced burial of organic matter seems to have been important in eventually sequestering the released carbon and accelerating the recovery of the Earth system.
Abstract: The Palaeocene–Eocene Thermal Maximum (PETM) was a global warming event that occurred about 56 million years ago, and is commonly thought to have been driven primarily by the destabilization of carbon from surface sedimentary reservoirs such as methane hydrates. However, it remains controversial whether such reservoirs were indeed the source of the carbon that drove the warming. Resolving this issue is key to understanding the proximal cause of the warming, and to quantifying the roles of triggers versus feedbacks. Here we present boron isotope data—a proxy for seawater pH—that show that the ocean surface pH was persistently low during the PETM. We combine our pH data with a paired carbon isotope record in an Earth system model in order to reconstruct the unfolding carbon-cycle dynamics during the event. We find strong evidence for a much larger (more than 10,000 petagrams)—and, on average, isotopically heavier—carbon source than considered previously. This leads us to identify volcanism associated with the North Atlantic Igneous Province, rather than carbon from a surface reservoir, as the main driver of the PETM. This finding implies that climate-driven amplification of organic carbon feedbacks probably played only a minor part in driving the event. However, we find that enhanced burial of organic matter seems to have been important in eventually sequestering the released carbon and accelerating the recovery of the Earth system.

Journal ArticleDOI
TL;DR: In this paper, the sources and effects of marine litter are discussed, along with the effects of policies and other actions taken worldwide, providing a good basis for effective action; the search for appropriate responses could be based on possible interventions and a profound understanding of the context-specific factors for success.

Journal ArticleDOI
TL;DR: The sample design for the SDSS-IV MaNGA survey is described, along with the final properties of the main samples and important considerations for using them for science; the target selection was developed while simultaneously optimizing the size distribution of the integral field units (IFUs), the IFU allocation strategy, and the target density to maximize S/N, spatial resolution, and sample size.
Abstract: We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing S/N, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on i-band absolute magnitude ($M_i$), or, for a small subset of our sample, $M_i$ and color (NUV-i). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to $M_i$ and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (Re), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range $5\times10^8 \leq M_* \leq 3\times10^{11} M_{\odot}$ and are sampled at median physical resolutions of 1.37 kpc and 2.5 kpc for the Primary and Secondary samples respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume limited sample.
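The selection-function weights described above can be applied as per-galaxy weights when building any binned statistic from the sample. The sketch below is illustrative only: the array names and weight values are hypothetical, not the actual MaNGA catalogue schema, and stand in for the published 1/Vmax-style correction weights.

```python
import numpy as np

# Toy sketch of correcting a luminosity-dependent selection: each galaxy
# carries a volume weight, and weighted histograms recover a volume-limited
# estimate (values and column names here are illustrative, not MaNGA's).
mass = np.array([9.2, 9.8, 10.4, 11.0])   # log10(M*/Msun)
weight = np.array([5.0, 3.0, 1.5, 1.0])   # per-galaxy volume weights

bins = np.array([9.0, 10.0, 11.0, 12.0])
raw, _ = np.histogram(mass, bins=bins)                    # selection-biased counts
corrected, _ = np.histogram(mass, bins=bins, weights=weight)  # volume-corrected
print("raw counts:     ", raw)
print("weighted counts:", corrected)
```

Low-mass galaxies, which are visible in a smaller volume, receive larger weights, so the weighted histogram up-weights them relative to the raw counts.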

Journal ArticleDOI
TL;DR: A dataset containing demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE) is presented, enabling the analysis of student behaviour as represented by their actions.
Abstract: Learning Analytics focuses on the collection and analysis of learners' data to improve their learning experience by providing informed guidance and to optimise learning materials. To support the research in this area we have developed a dataset, containing data from courses presented at the Open University (OU). What makes the dataset unique is the fact that it contains demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE). This enables the analysis of student behaviour, represented by their actions. The dataset contains the information about 22 courses, 32,593 students, their assessment results, and logs of their interactions with the VLE represented by daily summaries of student clicks (10,655,280 entries). The dataset is freely available at https://analyse.kmi.open.ac.uk/open_dataset under a CC-BY 4.0 license.
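A typical first step with this dataset is joining the demographic table to the aggregated VLE clicks. The sketch below assumes CSV files and column names along the lines of the published release (e.g. `studentInfo.csv` with `id_student` and `final_result`, `studentVle.csv` with `id_student` and `sum_click`); verify against the files actually downloaded.

```python
import pandas as pd

def clicks_per_student(student_info, student_vle):
    """Total VLE clicks per student, joined onto their demographic record."""
    totals = student_vle.groupby("id_student", as_index=False)["sum_click"].sum()
    merged = student_info.merge(totals, on="id_student", how="left")
    return merged.fillna({"sum_click": 0})  # students with no logged clicks

# In practice the frames would be read from the released CSVs, e.g.:
#   student_info = pd.read_csv("studentInfo.csv")
#   student_vle  = pd.read_csv("studentVle.csv")
info = pd.DataFrame({"id_student": [1, 2], "final_result": ["Pass", "Withdrawn"]})
vle = pd.DataFrame({"id_student": [1, 1, 2], "sum_click": [5, 3, 7]})
print(clicks_per_student(info, vle))
```

The left join keeps every registered student, so disengaged students (zero clicks) remain in the analysis rather than silently dropping out.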

Journal ArticleDOI
TL;DR: This paper sets out to address the problem of the imbalance between the number of quantitative and qualitative articles published in highly ranked research journals, by providing guidelines for the design, implementation and reporting of qualitative research.
Abstract: This paper sets out to address the problem of the imbalance between the number of quantitative and qualitative articles published in highly ranked research journals, by providing guidelines for the design, implementation and reporting of qualitative research. Clarification is provided of key terms (such as quantitative and qualitative) and the interrelationships between them. The relative risks and benefits of using guidelines for qualitative research are considered, and the importance of using any such guidelines flexibly is highlighted. The proposed guidelines are based on a synthesis of existing guidelines and syntheses of guidelines from a range of fields.

Journal ArticleDOI
TL;DR: A broad consensus has been reached in the astrochemistry community on how to suitably treat gas-phase processes in models, and also how to present the necessary reaction data in databases; however, no such consensus has yet been reached for grain-surface processes.
Abstract: The cross-disciplinary field of astrochemistry exists to understand the formation, destruction, and survival of molecules in astrophysical environments. Molecules in space are synthesized via a large variety of gas-phase reactions, and reactions on dust-grain surfaces, where the surface acts as a catalyst. A broad consensus has been reached in the astrochemistry community on how to suitably treat gas-phase processes in models, and also on how to present the necessary reaction data in databases; however, no such consensus has yet been reached for grain-surface processes. A team of ${\sim}25$ experts covering observational, laboratory and theoretical (astro)chemistry met in summer of 2014 at the Lorentz Center in Leiden with the aim to provide solutions for this problem and to review the current state-of-the-art of grain surface models, both in terms of technical implementation into models as well as the most up-to-date information available from experiments and chemical computations. This review builds on the results of this workshop and gives an outlook for future directions.

Journal ArticleDOI
TL;DR: By comparing with observational datasets, it is shown that these models produce a good representation of many aspects of the climate system, including the land and sea surface temperatures, precipitation, ocean circulation, and vegetation, which motivates continued development and scientific use of the HadCM3B family of coupled climate models.
Abstract: Understanding natural and anthropogenic climate change processes involves using computational models that represent the main components of the Earth system: the atmosphere, ocean, sea ice, and land surface. These models have become increasingly computationally expensive as resolution is increased and more complex process representations are included. However, to gain robust insight into how climate may respond to a given forcing, and to meaningfully quantify the associated uncertainty, it is often necessary to use ensemble approaches, very long integrations, or both. For this reason, more computationally efficient models can be very valuable tools. Here we provide a comprehensive overview of the suite of climate models based around the HadCM3 coupled general circulation model. This model was developed at the UK Met Office and has been heavily used during the last 15 years for a range of future (and past) climate change studies, but has now been largely superseded for many scientific studies by more recently developed models. However, it continues to be extensively used by various institutions, including the BRIDGE (Bristol Research Initiative for the Dynamic Global Environment) research group at the University of Bristol, who have made modest adaptations to the base HadCM3 model over time. These adaptations mean that the original documentation is not entirely representative, and several other relatively undocumented configurations are in use. We therefore describe the key features of a number of configurations of the HadCM3 climate model family, which together make up HadCM3@Bristol version 1.0. In order to differentiate variants that have undergone development at BRIDGE, we have introduced the letter B into the model nomenclature. We include descriptions of the atmosphere-only model (HadAM3B), the coupled model with a low-resolution ocean (HadCM3BL), the high-resolution atmosphere-only model (HadAM3BH), and the regional model (HadRM3B). These also include three versions of the land surface scheme. By comparing with observational datasets, we show that these models produce a good representation of many aspects of the climate system, including the land and sea surface temperatures, precipitation, ocean circulation, and vegetation. This evaluation, combined with the relatively fast computational speed (up to 1000 times faster than some CMIP6 models), motivates continued development and scientific use of the HadCM3B family of coupled climate models, predominantly for quantifying uncertainty and for long multi-millennial-scale simulations.

Journal ArticleDOI
14 Dec 2017-Nature
TL;DR: Close agreement is found between the 'top-down' and combined 'bottom-up' estimates, indicating that large CH4 emissions from trees adapted to permanent or seasonal inundation can account for the emission source that is required to close the Amazon CH4 budget.
Abstract: Wetlands are the largest global source of atmospheric methane (CH4), a potent greenhouse gas. However, methane emission inventories from the Amazon floodplain, the largest natural geographic source of CH4 in the tropics, consistently underestimate the atmospheric burden of CH4 determined via remote sensing and inversion modelling, pointing to a major gap in our understanding of the contribution of these ecosystems to CH4 emissions. Here we report CH4 fluxes from the stems of 2,357 individual Amazonian floodplain trees from 13 locations across the central Amazon basin. We find that escape of soil gas through wetland trees is the dominant source of regional CH4 emissions. Methane fluxes from Amazon tree stems were up to 200 times larger than emissions reported for temperate wet forests and tropical peat swamp forests, representing the largest non-ebullitive wetland fluxes observed. Emissions from trees had an average stable carbon isotope value (δ13C) of -66.2 ± 6.4 per mil, consistent with a soil biogenic origin. We estimate that floodplain trees emit 15.1 ± 1.8 to 21.2 ± 2.5 teragrams of CH4 a year, in addition to the 20.5 ± 5.3 teragrams a year emitted regionally from other sources. Furthermore, we provide a 'top-down' regional estimate of CH4 emissions of 42.7 ± 5.6 teragrams of CH4 a year for the Amazon basin, based on regular vertical lower-troposphere CH4 profiles covering the period 2010-2013. We find close agreement between our 'top-down' and combined 'bottom-up' estimates, indicating that large CH4 emissions from trees adapted to permanent or seasonal inundation can account for the emission source that is required to close the Amazon CH4 budget. Our findings demonstrate the importance of tree stem surfaces in mediating approximately half of all wetland CH4 emissions in the Amazon floodplain, a region that represents up to one-third of the global wetland CH4 source when trees are combined with other emission sources.
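The "close agreement" claim follows from simple budget arithmetic on the figures quoted above. The sketch below combines the tree-stem and other-source estimates, assuming independent 1-sigma uncertainties added in quadrature (the abstract does not state its error treatment, so that combination rule is an assumption here).

```python
import math

def combine(a, da, b, db):
    """Sum two estimates, adding independent 1-sigma errors in quadrature."""
    return a + b, math.hypot(da, db)

tree_low = (15.1, 1.8)    # floodplain trees, lower estimate (Tg CH4/yr)
tree_high = (21.2, 2.5)   # floodplain trees, upper estimate
other = (20.5, 5.3)       # other regional sources
top_down = (42.7, 5.6)    # lower-troposphere profile inversion, 2010-2013

for label, tree in [("low", tree_low), ("high", tree_high)]:
    total, err = combine(*tree, *other)
    print(f"bottom-up ({label}): {total:.1f} +/- {err:.1f} Tg/yr")
print(f"top-down:         {top_down[0]} +/- {top_down[1]} Tg/yr")
```

Both combined bottom-up totals overlap the top-down estimate of 42.7 ± 5.6 Tg/yr within one standard deviation, which is the sense in which the budget closes.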

Journal ArticleDOI
TL;DR: A definition of work–nonwork balance is proposed, drawing from theory, empirical evidence from the review, and normative information about how balance should be defined, in order to remedy concerns raised by the review.
Abstract: We review research on work-nonwork balance to examine the presence of the jingle fallacy-attributing different meanings to a single construct label-and the jangle fallacy-using different labels for a single construct. In 290 papers, we found 233 conceptual definitions that clustered into 5 distinct, interpretable types, suggesting evidence of the jingle fallacy. We calculated Euclidean distances to quantify the extent of the jingle fallacy and found high divergence in definitions across time and publication outlet. One exception was more agreement recently in better journals to conceptualize balance as unidimensional, psychological, and distinct from conflict and enrichment. Yet, over time many authors have committed the jangle fallacy by labeling measures of conflict and/or enrichment as balance, and disagreement persists even in better journals about the meanings attributed to balance (e.g., effectiveness, satisfaction). To examine the empirical implications of the jingle and jangle fallacies, we conducted meta-analyses of distinct operational definitions of balance with job, life, and family satisfaction. Effect sizes for conflict and enrichment measures were typically smaller than effects for balance measures, providing evidence of a unique balance construct that is not interchangeable with conflict and enrichment. To begin to remedy concerns raised by our review, we propose a definition of work-nonwork balance drawing from theory, empirical evidence from our review, and normative information about how balance should be defined. We conclude with a theory-based agenda for future research.
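The Euclidean-distance approach mentioned above can be illustrated with a toy coding of definitions as feature vectors. The attribute names and codings below are hypothetical, invented for illustration; the review's actual coding scheme differs.

```python
import math

# Hypothetical binary coding of three published "balance" definitions over
# five attributes (illustrative only, not the authors' actual scheme).
ATTRS = ["unidimensional", "psychological", "involves_conflict",
         "involves_enrichment", "satisfaction_based"]
defs = {
    "A": [1, 1, 0, 0, 1],
    "B": [0, 0, 1, 1, 0],
    "C": [1, 1, 0, 0, 0],
}

def euclidean(u, v):
    """Euclidean distance between two attribute vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Large distances between definitions that share the label "balance"
# quantify the jingle fallacy: one label, divergent meanings.
for x, y in [("A", "B"), ("A", "C"), ("B", "C")]:
    print(x, y, round(euclidean(defs[x], defs[y]), 3))
```

Here definitions A and C are near-identical (distance 1.0) while A and B diverge on most attributes, mirroring how the review quantifies definitional divergence across papers.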

Journal ArticleDOI
TL;DR: In this paper, a large-scale experiment on a mature, phosphorus-limited eucalypt forest showed that aboveground productivity was not significantly stimulated by elevated CO2, despite a sustained 19% increase in leaf photosynthesis.
Abstract: Experimental evidence from a mature, phosphorus-limited, eucalypt forest finds that aboveground productivity was not significantly stimulated by elevated CO2. Findings suggest that this effect may be limited across much of the tropics. Rising atmospheric CO2 stimulates photosynthesis and productivity of forests, offsetting CO2 emissions1,2. Elevated CO2 experiments in temperate planted forests yielded ∼23% increases in productivity3 over the initial years. Whether similar CO2 stimulation occurs in mature evergreen broadleaved forests on low-phosphorus (P) soils is unknown, largely due to lack of experimental evidence4. This knowledge gap creates major uncertainties in future climate projections5,6 as a large part of the tropics is P-limited. Here, we increased atmospheric CO2 concentration in a mature broadleaved evergreen eucalypt forest for three years, in the first large-scale experiment on a P-limited site. We show that tree growth and other aboveground productivity components did not significantly increase in response to elevated CO2 in three years, despite a sustained 19% increase in leaf photosynthesis. Moreover, tree growth in ambient CO2 was strongly P-limited and increased by ∼35% with added phosphorus. The findings suggest that P availability may potentially constrain CO2-enhanced productivity in P-limited forests; hence, future atmospheric CO2 trajectories may be higher than predicted by some models. As a result, coupled climate–carbon models should incorporate both nitrogen and phosphorus limitations to vegetation productivity7 in estimating future carbon sinks.

Journal ArticleDOI
TL;DR: The sequestration of Ca2+ by mitochondria during physiological signalling appears necessary to maintain cellular bio-energetics, thereby suppressing AMPK-dependent autophagy.