Showing papers from the University of Zagreb published in 2016
••
James Bentham1, Mariachiara Di Cesare1, Mariachiara Di Cesare2, Gretchen A Stevens3 +787 more•Institutions (246)
TL;DR: The height differential between the tallest and shortest populations was 19–20 cm a century ago; a century later it has remained the same for women and increased for men, despite substantial changes in the ranking of countries.
Abstract: Being taller is associated with enhanced longevity, higher education and higher earnings. We reanalysed 1472 population-based studies, with height measured on more than 18.6 million participants, to estimate mean height for people born between 1896 and 1996 in 200 countries. The largest gain in adult height over the past century has occurred in South Korean women and Iranian men, who became 20.2 cm (95% credible interval 17.5–22.7) and 16.5 cm (13.3–19.7) taller, respectively. In contrast, there was little change in adult height in some sub-Saharan African countries and in South Asia over the century of analysis. The tallest people over these 100 years are men born in the Netherlands in the last quarter of the 20th century, whose average heights surpassed 182.5 cm, and the shortest were women born in Guatemala in 1896 (140.3 cm; 135.8–144.8). The height differential between the tallest and shortest populations was 19–20 cm a century ago, and a century later it has remained the same for women and increased for men, despite substantial changes in the ranking of countries.
1,348 citations
••
Hampton University1, Thomas Jefferson National Accelerator Facility2, University of Paris-Sud3, University of Santiago, Chile4, Brookhaven National Laboratory5, University of Pavia6, University of Groningen7, Federico Santa María Technical University8, Shandong University9, Goethe University Frankfurt10, Stony Brook University11, Baruch College12, Duke University13, Argonne National Laboratory14, The Catholic University of America15, Old Dominion University16, Lawrence Berkeley National Laboratory17, Ohio State University18, University of Zagreb19, University of Jyväskylä20, Tel Aviv University21, CERN22, Temple University23, Massachusetts Institute of Technology24, Columbia University25, Ruhr University Bochum26, California Institute of Technology27, University of Massachusetts Amherst28, University of Buenos Aires29, University of the Basque Country30, University of Connecticut31, University of Tübingen32, Pennsylvania State University33, Stanford University34, Dalhousie University35, Central China Normal University36
TL;DR: In this article, the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community, is presented.
Abstract: This White Paper presents the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community. It was commissioned by the managements of Brookhaven National Laboratory (BNL) and Thomas Jefferson National Accelerator Facility (JLab) with the objective of presenting a summary of scientific opportunities and goals of the EIC as a follow-up to the 2007 NSAC Long Range Plan. This document is the culmination of a community-wide effort in nuclear science following a series of workshops on EIC physics over the past decades and, in particular, the focused ten-week program on "Gluons and the quark sea at high energies" at the Institute for Nuclear Theory in Fall 2010. It contains a brief description of a few golden physics measurements along with the accelerator and detector concepts required to achieve them. It has benefited profoundly from input by the user communities of BNL and JLab. This White Paper promises to propel the QCD science program in the US, established with the CEBAF accelerator at JLab and the RHIC collider at BNL, to the next QCD frontier.
1,022 citations
••
University of Paris1, Centre national de la recherche scientifique2, Academia Sinica3, California Institute of Technology4, Institute for the Physics and Mathematics of the Universe5, University of Cambridge6, University of Geneva7, Valparaiso University8, Smithsonian Astrophysical Observatory9, University of Edinburgh10, Niels Bohr Institute11, University of Rochester12, Space Telescope Science Institute13, ETH Zurich14, University of Bologna15, Max Planck Society16, University of Zagreb17, Kindai University18, University of the Western Cape19
TL;DR: The COSMOS2015 catalog as mentioned in this paper contains precise photometric redshifts and stellar masses for more than half a million objects over the 2 deg^2 COSMOS field and is highly optimized for the study of galaxy evolution and environments in the early universe.
Abstract: We present the COSMOS2015 catalog, which contains precise photometric redshifts and stellar masses for more than half a million objects over the 2 deg^2 COSMOS field. Including new YJHKs images from the UltraVISTA-DR2 survey, Y-band images from Subaru/Hyper-Suprime-Cam, and infrared data from the Spitzer Large Area Survey with the Hyper-Suprime-Cam Spitzer legacy program, this near-infrared-selected catalog is highly optimized for the study of galaxy evolution and environments in the early universe. To maximize catalog completeness for bluer objects and at higher redshifts, objects have been detected on a χ² sum of the YJHKs and z++ images. The catalog contains ~6 × 10^5 objects in the 1.5 deg^2 UltraVISTA-DR2 region, and ~1.5 × 10^5 objects are detected in the "ultra-deep stripes" (0.62 deg^2) at K_s ≤ 24.7 (3σ, 3″, AB magnitude). Through a comparison with the zCOSMOS-bright spectroscopic redshifts, we measure a photometric redshift precision of σ(Δz/(1+z_s)) = 0.007 and a catastrophic failure fraction of η = 0.5%. At 3 < z < 6, using the unique database of spectroscopic redshifts in COSMOS, we find σ(Δz/(1+z_s)) = 0.021 and η = 13.2%. The deepest regions reach a 90% completeness limit of 10^10 M_⊙ to z = 4. Detailed comparisons of the color distributions, number counts, and clustering show excellent agreement with the literature in the same mass ranges. COSMOS2015 represents a unique, publicly available, valuable resource with which to investigate the evolution of galaxies within their environment back to the earliest stages of the history of the universe. The COSMOS2015 catalog is distributed via anonymous ftp and through the usual astronomical archive systems (CDS, ESO Phase 3, IRSA).
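The precision and outlier statistics quoted above (σ and η) can be reproduced from any matched pair of photometric and spectroscopic redshift lists. A minimal sketch, assuming the NMAD estimator and the conventional |Δz|/(1+z_s) > 0.15 outlier cut (both are common photo-z conventions, not values prescribed by this listing):

```python
import statistics

def photoz_stats(z_phot, z_spec, outlier_cut=0.15):
    """Precision and catastrophic-failure fraction for photo-z vs. spec-z.

    dz = (z_phot - z_spec) / (1 + z_spec); sigma is the normalized median
    absolute deviation (NMAD) of dz, and eta is the fraction of objects
    with |dz| > outlier_cut.  NMAD and the 0.15 cut are assumptions of
    this sketch, chosen because they are standard in photo-z comparisons.
    """
    dz = [(zp - zs) / (1.0 + zs) for zp, zs in zip(z_phot, z_spec)]
    med = statistics.median(dz)
    sigma = 1.4826 * statistics.median(abs(d - med) for d in dz)
    eta = sum(abs(d) > outlier_cut for d in dz) / len(dz)
    return sigma, eta
```

On real catalogs the same two numbers would be computed over the full spectroscopic cross-match rather than a toy list.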
1,002 citations
••
TL;DR: The SIFT 4G algorithm, a faster version of SIFT that enables practical computations on reference genomes, is described, and the scope of genomic predictions is expanded, with predictions available for more than 200 organisms.
Abstract: The SIFT (sorting intolerant from tolerant) algorithm helps bridge the gap between mutations and phenotypic variations by predicting whether an amino acid substitution is deleterious. SIFT has been used in disease, mutation and genetic studies, and a protocol for its use has been previously published with Nature Protocols. This updated protocol describes SIFT 4G (SIFT for genomes), which is a faster version of SIFT that enables practical computations on reference genomes. Users can get predictions for single-nucleotide variants from their organism of interest using the SIFT 4G annotator with SIFT 4G's precomputed databases. The scope of genomic predictions is expanded, with predictions available for more than 200 organisms. Users can also run the SIFT 4G algorithm themselves. SIFT predictions can be retrieved for 6.7 million variants in 4 min once the database has been downloaded. If precomputed predictions are not available, the SIFT 4G algorithm can compute predictions at a rate of 2.6 s per protein sequence. SIFT 4G is available from http://sift-dna.org/sift4g.
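As a concrete illustration of how such predictions are consumed downstream, here is a tiny helper based on SIFT's documented convention that scores below 0.05 are called deleterious; the function and its name are illustrative, not part of the SIFT 4G distribution:

```python
def classify_sift(score, cutoff=0.05):
    """Map a SIFT score in [0, 1] to SIFT's qualitative call.

    SIFT's documented convention is that substitutions scoring below
    0.05 are predicted deleterious and the rest tolerated.  This helper
    is a hypothetical wrapper for illustration only.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("SIFT scores lie in [0, 1]")
    return "deleterious" if score < cutoff else "tolerated"
```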
921 citations
••
National Institutes of Health1, European Society of Cardiology2, Ghent University3, Karolinska Institutet4, Lille University of Science and Technology5, Charles University in Prague6, Hospital Universitario La Paz7, University of Sarajevo8, Shupyk National Medical Academy of Postgraduate Education9, University of Latvia10, Ljubljana University Medical Centre11, University of Ioannina12, University of Würzburg13, Vilnius University14, University Hospital Centre Zagreb15, Nicosia General Hospital16, Jagiellonian University Medical College17, University of Zagreb18, Valve Corporation19, Hacettepe University20, University of Banja Luka21
TL;DR: A large majority of coronary patients do not achieve the guideline standards for secondary prevention: persistent smoking, unhealthy diets and physical inactivity remain highly prevalent, and consequently most patients are overweight or obese, with a high prevalence of diabetes.
Abstract: Aims: To determine whether the Joint European Societies guidelines on cardiovascular prevention are being followed in everyday clinical practice of secondary prevention and to describe the lifestyle,...
833 citations
••
TL;DR: A CRISPR-Cas9-based tool for specific DNA methylation is developed, consisting of the deactivated Cas9 (dCas9) nuclease and the catalytic domain of the DNA methyltransferase DNMT3A, targeted by co-expression of a guide RNA to any 20 bp DNA sequence followed by the NGG trinucleotide.
Abstract: Epigenetic studies have so far relied on correlations between epigenetic marks and gene expression patterns. Technologies developed for epigenome editing now enable direct study of the functional relevance of precise epigenetic modifications for gene regulation. The reversible nature of epigenetic modifications, including DNA methylation, has already been exploited in cancer therapy for remodeling the aberrant epigenetic landscape. However, this was achieved non-selectively using epigenetic inhibitors. Epigenetic editing at specific loci represents a novel approach that might selectively and heritably alter gene expression. Here, we developed a CRISPR-Cas9-based tool for specific DNA methylation consisting of the deactivated Cas9 (dCas9) nuclease and the catalytic domain of the DNA methyltransferase DNMT3A, targeted by co-expression of a guide RNA to any 20 bp DNA sequence followed by the NGG trinucleotide. We demonstrated targeted CpG methylation in a ∼35 bp wide region by the fusion protein. We also showed that multiple guide RNAs could target the dCas9-DNMT3A construct to multiple adjacent sites, which enabled methylation of a larger part of the promoter. DNA methylation activity was specific for the targeted region and heritable across mitotic divisions. Finally, we demonstrated that directed DNA methylation of a wider promoter region of the target loci IL6ST and BACH2 decreased their expression.
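The targeting rule stated in the abstract (any 20 bp protospacer immediately followed by an NGG PAM) is simple to sketch. The scanner below is illustrative only: it covers a single strand and applies no efficacy or off-target filtering.

```python
def spcas9_target_sites(seq):
    """Enumerate candidate dCas9 target sites on one strand.

    Per the abstract, a guide RNA can direct the dCas9-DNMT3A fusion to
    any 20 bp protospacer immediately followed by an NGG PAM.  Returns
    (start, protospacer, pam) tuples.  Reverse-strand scanning and any
    guide-quality scoring are deliberately omitted from this sketch.
    """
    sites = []
    for i in range(len(seq) - 22):          # need 20 bp + 3 bp PAM
        pam = seq[i + 20:i + 23]
        if pam[1:] == "GG":                 # N-G-G: first base is free
            sites.append((i, seq[i:i + 20], pam))
    return sites
```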
612 citations
••
06 Jan 2016
TL;DR: It is concluded that answering key questions on the relationship between Aβ and tau pathology should lead to a better understanding of the nature of secondary tauopathies, especially AD, and open new therapeutic targets and strategies.
Abstract: Abnormal deposition of misprocessed and aggregated proteins is a common final pathway of most neurodegenerative diseases, including Alzheimer's disease (AD). AD is characterized by the extraneuronal deposition of the amyloid β (Aβ) protein in the form of plaques and the intraneuronal aggregation of the microtubule-associated protein tau in the form of filaments. Based on the biochemically diverse range of pathological tau proteins, a number of approaches have been proposed to develop new potential therapeutics. Here we discuss some of the most promising ones: inhibition of tau phosphorylation, proteolysis and aggregation, promotion of intra- and extracellular tau clearance, and stabilization of microtubules. We also emphasize the need to achieve a full understanding of the biological roles and post-translational modifications of normal tau, as well as the molecular events responsible for selective neuronal vulnerability to tau pathology and its propagation. It is concluded that answering key questions on the relationship between Aβ and tau pathology should lead to a better understanding of the nature of secondary tauopathies, especially AD, and open new therapeutic targets and strategies.
492 citations
••
Yale University1, Harvard University2, University of Bologna3, INAF4, Instituto Politécnico Nacional5, University of Hawaii6, Durham University7, University of Helsinki8, Max Planck Society9, California Institute of Technology10, University of Hawaii at Hilo11, Rochester Institute of Technology12, University of Bonn13, University of California, San Diego14, National Autonomous University of Mexico15, University of Sussex16, ETH Zurich17, National Radio Astronomy Observatory18, Institute for the Physics and Mathematics of the Universe19, University of Zagreb20, University of Copenhagen21, University of Concepción22
TL;DR: The COSMOS-Legacy survey as discussed by the authors is a 4.6 Ms Chandra program that has imaged 2.2 deg^2 of the COSMOS field with an effective exposure of ≃160 ks over the central 1.5 deg^2 and ≃80 ks in the remaining area.
Abstract: The COSMOS-Legacy survey is a 4.6 Ms Chandra program that has imaged 2.2 deg^2 of the COSMOS field with an effective exposure of ≃160 ks over the central 1.5 deg^2 and of ≃80 ks in the remaining area. The survey is the combination of 56 new observations obtained as an X-ray Visionary Project with the previous C-COSMOS survey. We describe the reduction and analysis of the new observations and the properties of 2273 point sources detected above a spurious probability of 2 × 10^−5. We also present the updated properties of the C-COSMOS sources detected in the new data. The whole survey includes 4016 point sources (3814, 2920, and 2440 in the full, soft, and hard bands). The limiting depths are 2.2 × 10^−16, 1.5 × 10^−15, and 8.9 × 10^−16 erg cm^−2 s^−1 in the 0.5–2, 2–10, and 0.5–10 keV bands, respectively. The observed fraction of obscured active galactic nuclei with a column density >10^22 cm^−2 from the hardness ratio (HR) is 50^(+17)_(−16)%. Given the large sample, we compute source number counts in the hard and soft bands, significantly reducing the uncertainties (5%–10%). For the first time we compute number counts for obscured (HR > −0.2) and unobscured (HR < −0.2) sources and find significant differences between the two populations in the soft band. Thanks to its unprecedentedly large exposure, the COSMOS-Legacy area is three times larger than that of surveys at similar depths, and its depth is three times fainter than that of surveys covering similar areas. The area–flux region occupied by COSMOS-Legacy is likely to remain unsurpassed for years to come.
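The obscured/unobscured split above rests on the hardness ratio. A minimal sketch, assuming the standard X-ray definition HR = (H − S)/(H + S) together with the HR > −0.2 obscuration cut quoted in the abstract; the helper functions themselves are illustrative:

```python
def hardness_ratio(hard, soft):
    """Standard X-ray hardness ratio HR = (H - S) / (H + S).

    hard and soft are counts (or count rates) in the hard and soft
    bands.  The formula is the usual astronomical convention; it is an
    assumption of this sketch, not code from the survey pipeline.
    """
    if hard + soft == 0:
        raise ValueError("no counts in either band")
    return (hard - soft) / (hard + soft)

def is_obscured(hard, soft, cut=-0.2):
    """Apply the HR > -0.2 obscuration threshold quoted in the abstract."""
    return hardness_ratio(hard, soft) > cut
```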
424 citations
••
TL;DR: The interaction between the charge carriers flowing inside graphene and the plasmons enables a highly efficient two-dimensional Čerenkov emission, giving a versatile, tunable and ultrafast conversion mechanism from electrical signal to plasmonic excitation.
Abstract: Graphene plasmons have been found to be an exciting plasmonic platform, thanks to their high field confinement and low phase velocity, motivating contemporary research to revisit established concepts in light–matter interaction. In a conceptual breakthrough over 80 years old, Čerenkov showed how charged particles emit shockwaves of light when moving faster than the phase velocity of light in a medium. To modern eyes, the Čerenkov effect offers a direct and ultrafast energy conversion scheme from charged particles to photons. The requirement for relativistic particles, however, makes Čerenkov emission inaccessible to most nanoscale electronic and photonic devices. Here we show that graphene plasmons provide the means to overcome this limitation through their low phase velocity and high field confinement. The interaction between the charge carriers flowing inside graphene and the plasmons enables a highly efficient two-dimensional Čerenkov emission, giving a versatile, tunable and ultrafast conversion mechanism from electrical signal to plasmonic excitation.
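The threshold the paper works around is the classical Čerenkov condition: emission occurs only when the particle speed v exceeds the phase velocity v_p of the emitted mode, with the shockwave cone angle fixed by their ratio:

```latex
v > v_p, \qquad \cos\theta_c = \frac{v_p}{v}, \qquad v_p = \frac{c}{n}\ \text{for photons in a medium of refractive index } n.
```

Because graphene plasmons have a phase velocity far below c, drifting charge carriers can satisfy v > v_p without being relativistic, which is exactly the limitation the authors lift.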
344 citations
••
TL;DR: Ultrasound-assisted extraction (UAE) proved to be the best extraction method, with extraction efficiency superior to both microwave-assisted extraction (MAE) and the conventional method, resulting in more effective extraction of grape skin phenolic compounds than conventional solvents.
342 citations
••
TL;DR: GraphMap is a mapping algorithm designed to analyse nanopore sequencing reads; it progressively refines candidate alignments to robustly handle potentially high error rates and uses fast graph traversal to align long reads with speed and high precision.
Abstract: Realizing the democratic promise of nanopore sequencing requires the development of new bioinformatics approaches to deal with its specific error characteristics. Here we present GraphMap, a mapping algorithm designed to analyse nanopore sequencing reads, which progressively refines candidate alignments to robustly handle potentially high error rates and uses fast graph traversal to align long reads with speed and high precision (>95%). Evaluation on MinION sequencing data sets against short- and long-read mappers indicates that GraphMap increases mapping sensitivity by 10–80% and maps >95% of bases. GraphMap alignments enabled single-nucleotide variant calling on the human genome with increased sensitivity (15%) over the next best mapper, precise detection of structural variants from 100 bp to 4 kbp in length, and species- and strain-specific identification of pathogens using MinION reads. GraphMap is available open source under the MIT license at https://github.com/isovic/graphmap.
••
DSM1, University of Bonn2, Centre national de la recherche scientifique3, Istanbul University4, University of Nice Sophia Antipolis5, University of Zagreb6, University of Bristol7, University of Bologna8, University of Western Australia9, Max Planck Society10, Paris Diderot University11, University of Geneva12, European Southern Observatory13, University of Birmingham14, Ludwig Maximilian University of Munich15, University of Oxford16, University of Paris17, University of Michigan18, University of Liège19, Aryabhatta Research Institute of Observational Sciences20, Durham University21, University of KwaZulu-Natal22, Université Paris-Saclay23, Chalmers University of Technology24, INAF25, University of Victoria26, Liverpool John Moores University27, Australian Astronomical Observatory28, University of Chicago29, Taras Shevchenko National University of Kyiv30, University of Illinois at Urbana–Champaign31, National Institute of Astrophysics, Optics and Electronics32, Aristotle University of Thessaloniki33, University of Copenhagen34, Presidency University, Kolkata35, Leiden University36, Stanford University37, Goddard Space Flight Center38, Princeton University39, University of California, Davis40
TL;DR: The XXL-XMM survey as discussed by the authors aims to provide constraints on the dark energy equation of state from the space-time distribution of clusters of galaxies and to serve as a pathfinder for future, wide-area X-ray missions.
Abstract: Context. The quest for the cosmological parameters that describe our universe continues to motivate the scientific community to undertake very large survey initiatives across the electromagnetic spectrum. Over the past two decades, the Chandra and XMM-Newton observatories have supported numerous studies of X-ray-selected clusters of galaxies, active galactic nuclei (AGNs), and the X-ray background. The present paper is the first in a series reporting results of the XXL-XMM survey; it comes at a time when the Planck mission results are being finalised. Aims. We present the XXL Survey, the largest XMM programme totaling some 6.9 Ms to date and involving an international consortium of roughly 100 members. The XXL Survey covers two extragalactic areas of 25 deg^2 each at a point-source sensitivity of ~5 × 10^−15 erg s^−1 cm^−2 in the [0.5–2] keV band (completeness limit). The survey's main goals are to provide constraints on the dark energy equation of state from the space-time distribution of clusters of galaxies and to serve as a pathfinder for future, wide-area X-ray missions. We review science objectives, including cluster studies, AGN evolution, and large-scale structure, that are being conducted with the support of approximately 30 follow-up programmes. Methods. We describe the 542 XMM observations along with the associated multi-λ and numerical simulation programmes. We give a detailed account of the X-ray processing steps and describe innovative tools being developed for the cosmological analysis. Results. The paper provides a thorough evaluation of the X-ray data, including quality controls, photon statistics, exposure and background maps, and sky coverage. Source catalogue construction and multi-λ associations are briefly described. This material will be the basis for the calculation of the cluster and AGN selection functions, critical elements of the cosmological and science analyses. Conclusions. The XXL multi-λ data set will have a unique lasting legacy value for cosmological and extragalactic studies and will serve as a calibration resource for future dark energy studies with clusters and other X-ray selected sources. With the present article, we release the XMM XXL photon and smoothed images along with the corresponding exposure maps.
••
Masaryk University1, Wageningen University and Research Centre2, University of Bayreuth3, University of Greifswald4, University of Belgrade5, Düzce University6, Bulgarian Academy of Sciences7, University of Graz8, University of Göttingen9, University of the Basque Country10, Slovenian Academy of Sciences and Arts11, University of Pécs12, Research Institute for Nature and Forest13, University of Patras14, Aarhus University15, Russian Academy of Sciences16, Carlos III Health Institute17, University of Barcelona18, Complutense University of Madrid19, University of Palermo20, Ministry of Interior (Bahrain)21, Transilvania University of Brașov22, Celal Bayar University23, Martin Luther University of Halle-Wittenberg24, University of Wrocław25, Forest Research Institute26, Taras Shevchenko National University of Kyiv27, University of Novi Sad28, University of Zagreb29, University of Picardie Jules Verne30, National Research Council31, Kazan Federal University32, Babeș-Bolyai University33, University of Latvia34, Slovak Academy of Sciences35, Aristotle University of Thessaloniki36, University of Perugia37, University of Oulu38
TL;DR: The European Vegetation Archive (EVA) as mentioned in this paper is a centralized database of European vegetation plots developed by the IAVS Working Group European Vegetation Survey; in development since 2012, it was first made available for use in research projects in 2014.
Abstract: The European Vegetation Archive (EVA) is a centralized database of European vegetation plots developed by the IAVS Working Group European Vegetation Survey. It has been in development since 2012 and was first made available for use in research projects in 2014. It stores copies of national and regional vegetation-plot databases on a single software platform. Data storage in EVA does not affect on-going independent development of the contributing databases, which remain the property of the data contributors. EVA uses a prototype of the database management software TURBOVEG 3 developed for joint management of multiple databases that use different species lists. This is facilitated by the SynBioSys Taxon Database, a system of taxon names and concepts used in the individual European databases and their corresponding names on a unified list of European flora. TURBOVEG 3 also includes procedures for handling data requests, selections and provisions according to the approved EVA Data Property and Governance Rules. By 30 June 2015, 61 databases from all European regions had joined EVA, contributing in total 1 027 376 vegetation plots, 82% of them with geographic coordinates, from 57 countries. EVA provides a unique data source for large-scale analyses of European vegetation diversity both for fundamental research and nature conservation applications. Updated information on EVA is available online at http://euroveg.org/eva-database.
••
TL;DR: In this paper, the authors review the current status of cleaner cement manufacturing, the cement industry's shift to alternative raw materials and alternative energy sources, and the modelling of the thermo-chemical processes inside cement combustion units.
••
University of Bologna1, Harvard University2, Yale University3, Max Planck Society4, National Autonomous University of Mexico5, University of California, San Diego6, University of Concepción7, University of Helsinki8, Boston University9, University of Bonn10, Centre national de la recherche scientifique11, Lund University12, ETH Zurich13, National Radio Astronomy Observatory14, University of Tokyo15, University of Zagreb16
TL;DR: The catalog of optical and infrared counterparts of the Chandra COSMOS-Legacy Survey, a 4.6 Ms Chandra program on the 2.2 deg^2 of the COSMOS field, was presented in this article.
Abstract: We present the catalog of optical and infrared counterparts of the Chandra COSMOS-Legacy Survey, a 4.6 Ms Chandra program on the 2.2 deg^2 of the COSMOS field, a combination of 56 new overlapping observations obtained in Cycle 14 with the previous C-COSMOS survey. In this paper we report the i, K, and 3.6 μm identifications of the 2273 X-ray point sources detected in the new Cycle 14 observations. We use the likelihood ratio technique to derive the association of optical/infrared (IR) counterparts for 97% of the X-ray sources. We also update the information for the 1743 sources detected in C-COSMOS, using new K and 3.6 μm information not available when the C-COSMOS analysis was performed. The final catalog contains 4016 X-ray sources, 97% of which have an optical/IR counterpart and a photometric redshift, while ~54% of the sources have a spectroscopic redshift. The full catalog, including spectroscopic and photometric redshifts and optical and X-ray properties described here in detail, is available online. We study several X-ray to optical (X/O) properties: with our large statistics we put better constraints on the X/O flux ratio locus, finding a shift toward faint optical magnitudes in both the soft and hard X-ray bands. We confirm the existence of a correlation between X/O and the 2–10 keV luminosity for Type 2 sources. We extend to low luminosities the analysis of the correlation between the fraction of obscured AGNs and the hard-band luminosity, finding a different behavior between the optically and X-ray classified obscured fractions.
••
TL;DR: In this article, the authors provide a comprehensive overview of the current research on ESS allocation (ESS sizing and siting), giving a unique insight into issues and challenges of integrating ESS into distribution networks and thus giving framework guidelines for future ESS research.
Abstract: Changes in the electricity business environment, dictated mostly by the increasing integration of renewable energy sources characterised by variable and uncertain generation, create new challenges, especially in the liberalised market environment. The role of energy storage systems (ESS) is recognised as a means to provide additional system security, reliability and flexibility to respond to changes that are still difficult to accurately forecast. However, there are still open questions about the benefits these units bring to the generation side, system operators and consumers. This study provides a comprehensive overview of the current research on ESS allocation (ESS sizing and siting), giving a unique insight into issues and challenges of integrating ESS into distribution networks and thus giving framework guidelines for future ESS research.
••
TL;DR: In this paper, phenolic grape skin extracts were prepared using five choline chloride-based natural deep eutectic solvents (NADESs) containing glucose, fructose, xylose, glycerol or malic acid, and were valorised by testing their biological activity in vitro using two human tumour cell lines (HeLa and MCF-7).
Abstract: In the present study, phenolic grape skin extracts were prepared using five choline chloride-based natural deep eutectic solvents (NADESs) containing glucose, fructose, xylose, glycerol or malic acid, and were valorised by testing their biological activity in vitro using two human tumour cell lines (HeLa and MCF-7). Initially, the NADESs used were investigated with regard to their toxicity, and low cytotoxicity toward HeLa and MCF-7 cells was observed (EC50 values > 2000 mg/L). Among the choline chloride-based NADESs used, the one containing malic acid showed the best performance in terms of extraction efficiency (total phenolic and total anthocyanin contents were 91 and 24 mg/g dw), as well as antioxidant activity (ORAC value of 371 μmol TE/g dw) and antiproliferative activity (cell viability of about 20%). Herein, it was shown for the first time that NADES components can be chosen not only to fine-tune solvent physicochemical characteristics but also to enhance the biological activity of extracts prepared in NADESs. Our research therefore confirms that NADESs are an excellent and promising choice of solvents for sustainable and green extraction, which will lead to novel applications in the food and pharmaceutical industries.
••
TL;DR: European evidence-based (S3) guideline for the treatment of acne – update 2016 – short version.
Abstract: European evidence-based (S3) guideline for the treatment of acne – update 2016 – short version. A. Nast,* B. Dréno, V. Bettoli, Z. Bukvić Mokos, K. Degitz, C. Dressler, A.Y. Finlay, M. Haedersdal, J. Lambert, A. Layton, H.B. Lomholt, J.L. López-Estebaranz, F. Ochsendorf, C. Oprica, S. Rosumeck, T. Simonart, R.N. Werner, H. Gollnick. Division of Evidence-Based Medicine, Klinik für Dermatologie, Charité – Universitätsmedizin Berlin, Berlin, Germany; Department of Dermatocancerology, Nantes University Hospital, Hôtel-Dieu, Nantes, France; Department of Clinical and Experimental Medicine, Section of Dermatology, University of Ferrara, Ferrara, Italy; Department of Dermatology, School of Medicine, University of Zagreb, Zagreb, Croatia; Private practice, Munich, Germany; Department of Dermatology and Wound Healing, Cardiff University School of Medicine, Cardiff, UK; Department of Dermatology, Bispebjerg Hospital, University of Copenhagen, Copenhagen, Denmark; University Hospital of Antwerp, University of Antwerp, Antwerp, Belgium; Department of Dermatology, Harrogate and District Foundation Trust, Harrogate, North Yorkshire, UK; Aarhus Universitet, Aarhus, Denmark; Dermatology Department, Alcorcón University Hospital Foundation, Alcorcón, Madrid, Spain; Department of Dermatology and Venereology, University of Frankfurt, Frankfurt, Germany; Department of Laboratory Medicine, Karolinska Institutet, Karolinska University Hospital Huddinge and Diagnostiskt Centrum Hud, Stockholm, Sweden; Private practice, Anderlecht, Belgium; Department of Dermatology and Venereology, University of Magdeburg, Magdeburg, Germany. *Correspondence: A. Nast. E-mail: alexander.nast@charite.de
••
Jaroslav Adam1, Dagmar Adamová2, Madan M. Aggarwal3, G. Aglieri Rinella4 +976 more•Institutions (100)
TL;DR: In this article, direct photon spectra down to pT ≈ 1 GeV/c were extracted for the 0−20% and 20−40% centrality classes.
••
TL;DR: The obtained results showed that UAE could be used as an efficient technique for the extraction of pectin from tomato waste and by-products, with a strong emphasis on an environmentally friendly processing approach.
••
TL;DR: In this article, a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh is proposed, called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells.
Abstract: We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source.
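The flux-integration step described in the abstract reduces to a simple calculation: if the submerged part of a face's area is modelled as varying linearly over a time step while the face-normal velocity stays constant, the transported volume is a trapezoidal time integral. A minimal Python sketch of that idea (a toy illustration, not the actual OpenFOAM implementation; all names are illustrative):

```python
def transported_volume(u_n, a_start, a_end, dt):
    """Volume of fluid carried across one mesh face in one time step.

    Toy sketch of the isoAdvector flux idea: the submerged (wetted)
    face area is assumed to evolve linearly from a_start to a_end
    over the step, and the face-normal velocity u_n is held constant,
    so the time integral of u_n * A(t) reduces to a trapezoid.
    """
    return u_n * dt * 0.5 * (a_start + a_end)

# Face fully dry at the start of the step, fully wetted (1 m^2) at the
# end, with 2 m/s face-normal velocity over a 0.1 s step:
vol = transported_volume(2.0, 0.0, 1.0, 0.1)
print(vol)  # 0.1 m^3, half of what using the final area throughout would give
```

Tracking the submerged area in time, rather than using a single snapshot of the volume fraction, is what gives the method its sharpness and boundedness properties on the test cases mentioned above.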
•
01 Dec 2016
TL;DR: High-order character n-grams were the most successful feature, and the best classification approaches included traditional supervised learning methods such as SVM, logistic regression, and language models, while deep learning approaches did not perform very well.
Abstract: We present the results of the third edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the VarDial’2016 workshop at COLING’2016. The challenge offered two subtasks: subtask 1 focused on the identification of very similar languages and language varieties in newswire texts, whereas subtask 2 dealt with Arabic dialect identification in speech transcripts. A total of 37 teams registered to participate in the task, 24 teams submitted test results, and 20 teams also wrote system description papers. High-order character n-grams were the most successful feature, and the best classification approaches included traditional supervised learning methods such as SVM, logistic regression, and language models, while deep learning approaches did not perform very well.
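The winning recipe reported above, high-order character n-grams fed to a linear classifier, can be caricatured with nothing but the standard library. The sketch below substitutes a crude nearest-profile scorer for the SVM/logistic-regression systems, so it illustrates the feature, not any submitted system; the labels and texts are invented examples:

```python
from collections import Counter

def char_ngrams(text, n_min=2, n_max=4):
    """All character n-grams of length n_min..n_max (high-order char
    n-grams were the strongest feature in the shared task)."""
    grams = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(text) - n + 1):
            grams[text[i:i + n]] += 1
    return grams

def train(labelled_texts):
    """Build one aggregate n-gram profile per language label."""
    profiles = {}
    for label, text in labelled_texts:
        profiles.setdefault(label, Counter()).update(char_ngrams(text))
    return profiles

def classify(text, profiles):
    """Assign the label whose profile shares the most n-gram mass with
    the input (a crude stand-in for the SVM/LR systems)."""
    grams = char_ngrams(text)
    def overlap(profile):
        total = sum(profile.values()) or 1
        return sum(min(c, profile[g]) / total for g, c in grams.items())
    return max(profiles, key=lambda lab: overlap(profiles[lab]))

profiles = train([("hr", "jedan dva tri cetiri pet"),
                  ("en", "one two three four five")])
print(classify("two three", profiles))  # en
```

Discriminating *similar* languages is harder than this toy suggests precisely because their n-gram profiles overlap heavily, which is why the task needed dedicated systems at all.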
••
TL;DR: In this article, the authors used a fully self-consistent covariant density functional theory (CDFT) framework to evaluate the sensitivity of heavy element nucleosynthesis to weak interaction reaction rates.
Abstract: Background: r-process nucleosynthesis models rely, by necessity, on nuclear structure models for input. Particularly important are β-decay half-lives of neutron-rich nuclei. At present only a single systematic calculation exists that provides values for all relevant nuclei, making it difficult to test the sensitivity of nucleosynthesis models to this input. Additionally, even though there are indications that their contribution may be significant, the impact of first-forbidden transitions on decay rates has not been systematically studied within a consistent model. Purpose: Our goal is to provide a table of β-decay half-lives and β-delayed neutron emission probabilities, including first-forbidden transitions, calculated within a fully self-consistent microscopic theoretical framework. The results are used in an r-process nucleosynthesis calculation to assess the sensitivity of heavy element nucleosynthesis to weak interaction reaction rates. Method: We use a fully self-consistent covariant density functional theory (CDFT) framework. The ground state of all nuclei is calculated with the relativistic Hartree-Bogoliubov (RHB) model, and excited states are obtained within the proton-neutron relativistic quasiparticle random phase approximation (pn-RQRPA). Results: The β-decay half-lives, β-delayed neutron emission probabilities, and the average number of emitted neutrons have been calculated for 5409 nuclei in the neutron-rich region of the nuclear chart. We observe a significant contribution of the first-forbidden transitions to the total decay rate in nuclei far from the valley of stability. The experimental half-lives are in general well reproduced for even-even, odd-A, and odd-odd nuclei, in particular for short-lived nuclei. The resulting data table is included with the article as Supplemental Material. Conclusions: In certain regions of the nuclear chart, first-forbidden transitions constitute a large fraction of the total decay rate and must be taken into account consistently in modern evaluations of half-lives. Both the β-decay half-lives and β-delayed neutron emission probabilities have a noticeable impact on the results of heavy element nucleosynthesis models.
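The claim that first-forbidden transitions must enter half-life evaluations consistently boils down to adding partial decay rates before inverting to a half-life. A small illustrative sketch (the rate values below are made-up placeholders, not results from the paper):

```python
import math

def half_life(rate_gt, rate_ff):
    """Beta-decay half-life when allowed (Gamow-Teller) and first-forbidden
    partial rates both contribute: lambda_tot = lambda_GT + lambda_FF,
    T_1/2 = ln 2 / lambda_tot.  Rates are in 1/s; values are illustrative."""
    return math.log(2.0) / (rate_gt + rate_ff)

def ff_fraction(rate_gt, rate_ff):
    """Fraction of all decays proceeding through first-forbidden transitions."""
    return rate_ff / (rate_gt + rate_ff)

# Ignoring a first-forbidden rate equal to half the allowed rate would
# overestimate the half-life by 50%:
print(half_life(1.0, 0.0) / half_life(1.0, 0.5))  # 1.5
```

This is why a systematically shortened half-life far from stability (where the first-forbidden fraction is large) feeds directly into the speed of the r-process flow in the nucleosynthesis models discussed above.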
••
TL;DR: It was shown that plasma treatment had a positive influence on anthocyanin stability and colour change in cloudy pomegranate juice.
••
Jaroslav Adam1, Dagmar Adamová2, Madan M. Aggarwal3, G. Aglieri Rinella4 +986 more•Institutions (95)
TL;DR: The pseudorapidity density of charged particles, dNch/dη, at midrapidity in Pb-Pb collisions has been measured at a center-of-mass energy per nucleon pair of √sNN=5.02 TeV as discussed by the authors.
Abstract: The pseudorapidity density of charged particles, dNch/dη, at midrapidity in Pb-Pb collisions has been measured at a center-of-mass energy per nucleon pair of √sNN=5.02 TeV. For the 5% most central collisions, we measure a value of 1943 ± 54. The rise in dNch/dη as a function of √sNN is steeper than that observed in proton-proton collisions and follows the trend established by measurements at lower energy. The increase of dNch/dη as a function of the average number of participant nucleons, ⟨Npart⟩, calculated in a Glauber model, is compared with the previous measurement at √sNN=2.76 TeV. A constant factor of about 1.2 describes the increase in dNch/dη from √sNN=2.76 to 5.02 TeV for all centrality classes, within the measured range of 0%–80% centrality. The results are also compared to models based on different mechanisms for particle production in nuclear collisions.
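The quoted factor of about 1.2 is what a simple power-law energy dependence predicts. As a back-of-the-envelope check (the exponent 0.155 is an assumed value from the heavy-ion literature, not a number stated in this abstract):

```python
# dN_ch/deta in central A-A collisions roughly follows (s_NN)**alpha;
# alpha = 0.155 is an assumption here, not taken from this abstract.
alpha = 0.155
s_ratio = (5.02 / 2.76) ** 2   # ratio of s_NN (the quoted energies are sqrt(s_NN) in TeV)
factor = s_ratio ** alpha
print(f"{factor:.2f}")  # 1.20, consistent with the quoted constant factor of ~1.2
```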
••
TL;DR: Site-specific DNA methylation changes in IBD relate to the underlying genotype and associate with cell-specific alterations in gene expression; separated-cell data show that IBD-associated hypermethylation within the TXK promoter region negatively correlates with gene expression in whole blood and CD8+ T cells, but not in other cell types.
Abstract: Epigenetic alterations may provide important insights into gene-environment interaction in inflammatory bowel disease (IBD). Here we observe epigenome-wide DNA methylation differences in 240 newly- ...
••
TL;DR: Glycans are involved in virtually all physiological processes; inter-individual variation in glycome composition is large, and these differences associate with disease risk, disease course and the response to therapy.
••
TL;DR: These findings show that trees prioritize the investment of assimilates below ground, probably to regain root functions after drought; the authors propose that root restoration plays a key role in ecosystem resilience to drought, in that the increased sink activity controls the recovery of carbon balances.
Abstract: Climate projections predict higher precipitation variability with more frequent dry extremes(1). CO2 assimilation of forests decreases during drought, either by stomatal closure(2) or by direct environmental control of sink tissue activities(3). Ultimately, drought effects on forests depend on the ability of forests to recover, but the mechanisms controlling ecosystem resilience are uncertain(4). Here, we have investigated the effects of drought and drought release on the carbon balances in beech trees by combining CO2 flux measurements, metabolomics and ¹³CO₂ pulse labelling. During drought, net photosynthesis (AN), soil respiration (RS) and the allocation of recent assimilates below ground were reduced. Carbohydrates accumulated in metabolically resting roots but not in leaves, indicating sink control of the tree carbon balance. After drought release, RS recovered faster than AN and CO2 fluxes exceeded those in continuously watered trees for months. This stimulation was related to greater assimilate allocation to and metabolization in the rhizosphere. These findings show that trees prioritize the investment of assimilates below ground, probably to regain root functions after drought. We propose that root restoration plays a key role in ecosystem resilience to drought, in that the increased sink activity controls the recovery of carbon balances.