
Showing papers by "Ben-Gurion University of the Negev published in 2016"


Journal ArticleDOI
Daniel J. Klionsky, Kotb Abdelmohsen, Akihisa Abe, Joynal Abedin, and 2,519 more authors (695 institutions)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: The discovery that rumen microbiome components are tightly linked to cows' ability to extract energy from their feed, termed feed efficiency, is reported.
Abstract: Ruminants have the remarkable ability to convert human-indigestible plant biomass into human-digestible food products, due to a complex microbiome residing in the rumen compartment of their upper digestive tract. Here we report the discovery that rumen microbiome components are tightly linked to cows' ability to extract energy from their feed, termed feed efficiency. Feed efficiency was measured in 146 milking cows, and analyses of the taxonomic composition, gene content, microbial activity and metabolomic composition were performed on the rumen microbiomes from the 78 most extreme animals. Lower richness of microbiome gene content and taxa was tightly linked to higher feed efficiency. Microbiome genes and species accurately predicted the animals' feed efficiency phenotype. Specific enrichment of microbes and metabolic pathways in each of these microbiome groups resulted in better energy and carbon channeling to the animal, while lowering methane emissions to the atmosphere. This ecological and mechanistic understanding of the rumen microbiome could lead to an increase in available food resources and environmentally friendly livestock agriculture.

448 citations


Book ChapterDOI
08 Oct 2016
TL;DR: This work presents a low-cost and fast method to recover high-quality hyperspectral images directly from RGB; a novel, larger-than-ever database of hyperspectral images serves as the hyperspectral prior.
Abstract: Hyperspectral imaging is an important visual modality with growing interest and range of applications. The latter, however, is hindered by the fact that existing devices are limited in either spatial, spectral, and/or temporal resolution, while yet being both complicated and expensive. We present a low cost and fast method to recover high quality hyperspectral images directly from RGB. Our approach first leverages hyperspectral prior in order to create a sparse dictionary of hyperspectral signatures and their corresponding RGB projections. Describing novel RGB images via the latter then facilitates reconstruction of the hyperspectral image via the former. A novel, larger-than-ever database of hyperspectral images serves as a hyperspectral prior. This database further allows for evaluation of our methodology at an unprecedented scale, and is provided for the benefit of the research community. Our approach is fast, accurate, and provides high resolution hyperspectral cubes despite using RGB-only input.
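To make the dictionary-based pipeline above concrete, here is a minimal numerical sketch in Python: a hyperspectral dictionary is projected through an assumed RGB camera response, an RGB pixel is sparse-coded against the projected dictionary with a simple orthogonal matching pursuit, and the full spectrum is reconstructed from the recovered code. This is not the authors' code or data; the band count, dictionary size, sparsity, and camera response matrix are invented for illustration, and the real method relies on a large dictionary learned from the new hyperspectral database.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: find a k-sparse w with D @ w ~= y."""
    norms = np.linalg.norm(D, axis=0)
    residual = y.astype(float).copy()
    support, w = [], np.zeros(D.shape[1])
    for _ in range(k):
        corr = np.abs(D.T @ residual) / norms   # scale-invariant atom selection
        corr[support] = -np.inf                 # do not re-select chosen atoms
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    w[support] = coeffs
    return w

rng = np.random.default_rng(0)
n_bands, n_atoms, sparsity = 31, 6, 1            # hypothetical, deliberately tiny sizes
D_hs = rng.standard_normal((n_bands, n_atoms))   # hyperspectral dictionary (from the prior)
D_hs /= np.linalg.norm(D_hs, axis=0)             # unit-norm atoms
R = rng.random((3, n_bands))                     # assumed RGB camera response matrix
D_rgb = R @ D_hs                                 # the same dictionary projected to RGB

# Simulate one hyperspectral pixel that is sparse in the dictionary, and its RGB value.
w_true = np.zeros(n_atoms)
w_true[rng.choice(n_atoms, sparsity, replace=False)] = 1.0 + rng.random(sparsity)
pixel_hs = D_hs @ w_true
pixel_rgb = R @ pixel_hs

# Sparse-code the 3-value RGB observation, then reconstruct the 31-band spectrum.
w_hat = omp(D_rgb, pixel_rgb, sparsity)
spectrum_hat = D_hs @ w_hat
print("relative reconstruction error:",
      np.linalg.norm(spectrum_hat - pixel_hs) / np.linalg.norm(pixel_hs))
```

With only three measurements per pixel, recovery at realistic dictionary sizes hinges on the overcomplete natural-spectra prior; the toy sizes here are chosen only so the sketch succeeds deterministically.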

422 citations


Journal ArticleDOI
TL;DR: Oxybenzone poses a hazard to coral reef conservation and threatens the resiliency of coral reefs to climate change, and is a skeletal endocrine disruptor to corals.
Abstract: Benzophenone-3 (BP-3; oxybenzone) is an ingredient in sunscreen lotions and personal-care products that protects against the damaging effects of ultraviolet light. Oxybenzone is an emerging contaminant of concern in marine environments—produced by swimmers and municipal, residential, and boat/ship wastewater discharges. We examined the effects of oxybenzone on the larval form (planula) of the coral Stylophora pistillata, as well as its toxicity in vitro to coral cells from this and six other coral species. Oxybenzone is a photo-toxicant; adverse effects are exacerbated in the light. Whether in darkness or light, oxybenzone transformed planulae from a motile state to a deformed, sessile condition. Planulae exhibited an increasing rate of coral bleaching in response to increasing concentrations of oxybenzone. Oxybenzone is a genotoxicant to corals, exhibiting a positive relationship between DNA-AP lesions and increasing oxybenzone concentrations. Oxybenzone is a skeletal endocrine disruptor; it induced ossification of the planula, encasing the entire planula in its own skeleton. The LC50s of planulae exposed to oxybenzone in the light for 8- and 24-h exposures were 3.1 mg/L and 139 µg/L, respectively. The LC50s for oxybenzone in darkness for the same time points were 16.8 mg/L and 779 µg/L. Deformity EC20 levels (24 h) of planulae exposed to oxybenzone were 6.5 µg/L in the light and 10 µg/L in darkness. Coral cell LC50s (4 h, in the light) for 7 different coral species range from 8 to 340 µg/L, whereas LC20s (4 h, in the light) for the same species range from 0.062 to 8 µg/L. Oxybenzone contamination of coral reefs in the U.S. Virgin Islands ranged from 75 µg/L to 1.4 mg/L, whereas Hawaiian sites were contaminated between 0.8 and 19.2 µg/L. Oxybenzone poses a hazard to coral reef conservation and threatens the resiliency of coral reefs to climate change.

366 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate and discuss serious limitations of the fourth generation (4G) cellular networks and corresponding new features of 5G networks, and present a comparative study of the proposed architectures that can be categorized on the basis of energy-efficiency, network hierarchy, and network types.

363 citations


Journal ArticleDOI
TL;DR: A computational strategy was developed and used to design an AChE variant bearing 51 mutations that improved core packing, surface polarity, and backbone rigidity, yielding enhanced stability and/or higher yields of soluble and active protein in E. coli.

322 citations



Journal ArticleDOI
TL;DR: An overview of the potential, recent advances, and challenges of optical security and encryption using free space optics is presented, highlighting the need for more specialized hardware and image processing algorithms.
Abstract: Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free space optics. The roadmap on optical security is comprised of six categories that together include 16 short sections written by authors who have made relevant contributions in this field. The first category of this roadmap describes novel encryption approaches, including secure optical sensing which summarizes double random phase encryption applications and flaws [Yamaguchi], the digital holographic encryption in free space optical technique which describes encryption using multidimensional digital holography [Nomura], simultaneous encryption of multiple signals [Perez-Cabre], asymmetric methods based on information truncation [Nishchal], and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption. In their respective contributions, Alfalou and Stern propose similar goals involving compressed data and compressive sensing encryption. The very important area of cryptanalysis is the topic of the third category with two sections: Sheridan reviews phase retrieval algorithms to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category with two contributions reports how encryption could be implemented at the nano- or micro-scale. Naruse discusses the use of nanostructures in security applications and Carnicer proposes encoding information in a tightly focused beam. In the fifth category, encryption based on ghost imaging using single-pixel detectors is also considered. In particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their corresponding papers how quantum imaging can benefit optical encryption systems. Sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.

317 citations


Journal ArticleDOI
TL;DR: A new systematic review and meta-analysis comprising 5865 men shows that cigarette smoking is associated with reduced sperm count and motility, and that deterioration of semen quality is more pronounced in moderate and heavy smokers.

300 citations


Journal ArticleDOI
TL;DR: A broad overview of the wide array of metrics currently in use in academia and research is provided, including traditional metrics and article-level metrics, some of which are applied to researchers for a greater understanding of a particular concept.
Abstract: Traditionally, the success of a researcher is assessed by the number of publications he or she publishes in peer-reviewed, indexed, high impact journals. This essential yardstick, often referred to as the impact of a specific researcher, is assessed through the use of various metrics. While researchers may be acquainted with such metrics, many do not know how to use them to enhance their careers. In addition to these metrics, a number of other factors should be taken into consideration to objectively evaluate a scientist's profile as a researcher and academician. Moreover, each metric has its own limitations that need to be considered when selecting an appropriate metric for evaluation. This paper provides a broad overview of the wide array of metrics currently in use in academia and research. Popular metrics are discussed and defined, including traditional metrics and article-level metrics, some of which are applied to researchers for a greater understanding of a particular concept, including varicocele, which is the thematic area of this Special Issue of Asian Journal of Andrology. We recommend the combined use of quantitative and qualitative evaluation using judiciously selected metrics for a more objective assessment of scholarly output and research impact.
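For readers who want a concrete feel for the "traditional metrics" category mentioned above, the snippet below computes the familiar h-index from a list of per-paper citation counts. The h-index is offered here only as a standard example of such a metric, not one singled out by the paper, and the citation counts are invented.

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical publication record: each number is one paper's citation count.
print(h_index([42, 17, 9, 6, 5, 3, 1, 0]))  # -> 5 (five papers with >= 5 citations)
```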

284 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the randomized complexity of four fundamental symmetry-breaking problems on graphs: computing maximal independent sets, maximal matchings, vertex colorings, and ruling sets.
Abstract: Symmetry-breaking problems are among the most well studied in the field of distributed computing and yet the most fundamental questions about their complexity remain open. In this article we work in the LOCAL model (where the input graph and underlying distributed network are identical) and study the randomized complexity of four fundamental symmetry-breaking problems on graphs: computing MISs (maximal independent sets), maximal matchings, vertex colorings, and ruling sets. A small sample of our results includes the following: —An MIS algorithm running in O(log^2 Δ + 2^(O(√(log log n)))) time, where Δ is the maximum degree. This is the first MIS algorithm to improve on the 1986 algorithms of Luby and Alon, Babai, and Itai when log n ≪ Δ ≪ 2^(√(log n)), and comes close to the Ω(log Δ / log log Δ) lower bound of Kuhn, Moscibroda, and Wattenhofer. —A maximal matching algorithm running in O(log Δ + log^4 log n) time. This is the first significant improvement to the 1986 algorithm of Israeli and Itai. Moreover, its dependence on Δ is nearly optimal. —A (Δ + 1)-coloring algorithm requiring O(log Δ + 2^(O(√(log log n)))) time, improving on an O(log Δ + √(log n))-time algorithm of Schneider and Wattenhofer. —A method for reducing symmetry-breaking problems in low arboricity/degeneracy graphs to low-degree graphs. (Roughly speaking, the arboricity or degeneracy of a graph bounds the density of any subgraph.) Corollaries of this reduction include an O(√(log n))-time maximal matching algorithm for graphs with arboricity up to 2^(√(log n)) and an O(log^(2/3) n)-time MIS algorithm for graphs with arboricity up to 2^((log n)^(1/3)). Each of our algorithms is based on a simple but powerful technique for reducing a randomized symmetry-breaking task to a corresponding deterministic one on a poly(log n)-size graph.
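As background for the results above, the sketch below simulates the classic 1986 Luby-style randomized MIS algorithm that the paper improves upon (each round, active vertices draw random priorities and local minima join the MIS). It is a plain sequential simulation of the synchronous rounds, not the paper's new algorithms, and the example graph is made up.

```python
import random

def luby_mis(adj, seed=0):
    """Simulate synchronous rounds of Luby-style randomized MIS.
    adj: dict mapping each vertex to the set of its neighbours."""
    rng = random.Random(seed)
    active = set(adj)                      # vertices still undecided
    mis = set()
    while active:
        # Each active vertex picks a random priority; local minima join the MIS.
        priority = {v: rng.random() for v in active}
        winners = {v for v in active
                   if all(priority[v] < priority[u] for u in adj[v] if u in active)}
        mis |= winners
        # Winners and their neighbours drop out before the next round.
        removed = winners | {u for v in winners for u in adj[v]}
        active -= removed
    return mis

# Toy graph: a 5-cycle.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(sorted(luby_mis(adj)))
```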

Journal ArticleDOI
TL;DR: An introduction to the subject is provided by explaining how a decision forest can be created and when it is most valuable; some popular methods for generating the forest, fusing the individual trees' outputs, and thinning large decision forests are also reviewed.

Journal ArticleDOI
TL;DR: This review summarizes the vast range of review and research articles that have reported on the anti-inflammatory effects of extracts and/or pure compounds derived from natural products, and pinpoints some interesting traditionally used medicinal plants that have not yet been investigated.
Abstract: This article presents highlights of the published literature regarding the anti-inflammatory activities of natural products. Many review articles were published in this regard, however, most of them have presented this important issue from a regional, limited perspective. This paper summarizes the vast range of review and research articles that have reported on the anti-inflammatory effects of extracts and/or pure compounds derived from natural products. Moreover, this review pinpoints some interesting traditionally used medicinal plants that were not investigated yet.

Journal ArticleDOI
TL;DR: In this article, lattice doping with zirconium is presented as a strategy to improve the structural stability and mitigate the voltage fade on prolonged cycling of LiNi0.6Co0.2Mn0.2O2 (NCM-622).
Abstract: Ni-rich layered lithiated transition metal oxides Li[NixCoyMnz]O2 (x + y + z = 1) are the most promising materials for positive electrodes for advanced Li-ion batteries. However, one of the drawbacks of these materials is their low intrinsic stability during prolonged cycling. In this work, we present lattice doping as a strategy to improve the structural stability and voltage fade on prolonged cycling of LiNi0.6Co0.2Mn0.2O2 (NCM-622) doped with zirconium (+4). It was found that LiNi0.56Zr0.04Co0.2Mn0.2O2 is stable upon galvanostatic cycling, in contrast to the undoped material, which undergoes partial structural layered-to-spinel transformation during cycling. The current study provides sub-nanoscale insight into the role of Zr4+ doping on such a transformation in Ni-rich Li[NixCoyMnz]O2 materials by adopting a combined experimental and first-principles theory approach. A possible mechanism for a Ni-mediated layered-to-spinel transformation in Ni-rich NCMs is also proposed.

Proceedings ArticleDOI
24 Oct 2016
TL;DR: This work proposes abstract models that capture secure outsourced storage systems in sufficient generality, identifies two basic sources of leakage, namely access pattern and communication volume, and develops generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked.
Abstract: Recently, various protocols have been proposed for securely outsourcing database storage to a third party server, ranging from systems with "full-fledged" security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged that show that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and communication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources. We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about $N^4$ queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way. Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.
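A deliberately simplified sketch of the counting idea behind such access-pattern attacks is shown below; it is not the paper's algorithm, and the domain size and record values are invented. If the server eventually observes every range query over a domain of size N, a record with value v is seen to match exactly v·(N−v+1) of the N(N+1)/2 possible ranges, so its value can be recovered up to the unavoidable reflection v ↔ N+1−v.

```python
import math

N = 20                                    # attribute domain {1, ..., N} (toy size)
secret = [3, 7, 11, 16, 20]               # hidden record values, never seen in the clear

# Access-pattern leakage: for every range query [a, b] the server learns WHICH records
# matched.  Here we take the long-run limit in which every possible range has been
# issued once, so the server knows each record's total match count.
all_ranges = [(a, b) for a in range(1, N + 1) for b in range(a, N + 1)]
match_count = [sum(1 for a, b in all_ranges if a <= v <= b) for v in secret]

def invert(count, N):
    """A record with value v matches exactly v*(N - v + 1) ranges, so the count can be
    inverted up to the reflection v <-> N + 1 - v, which counts alone cannot resolve."""
    disc = (N + 1) ** 2 - 4 * count
    v = ((N + 1) - math.sqrt(disc)) / 2
    return round(v), N + 1 - round(v)     # the two candidate values

print([invert(c, N) for c in match_count])  # each pair contains the true hidden value
```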

Journal ArticleDOI
TL;DR: While anatomical abnormalities may be present in distinct subgroups of ASD individuals, the current findings show that many previously reported anatomical measures are likely to be of low clinical and scientific significance for understanding ASD neuropathology as a whole in individuals 6-35 years old.
Abstract: Substantial controversy exists regarding the presence and significance of anatomical abnormalities in autism spectrum disorders (ASD). The release of the Autism Brain Imaging Data Exchange (∼1000 participants, age 6–65 years) offers an unprecedented opportunity to conduct large-scale comparisons of anatomical MRI scans across groups and to resolve many of the outstanding questions. Comprehensive univariate analyses using volumetric, thickness, and surface area measures of over 180 anatomically defined brain areas revealed significantly larger ventricular volumes, smaller corpus callosum volume (central segment only), and several cortical areas with increased thickness in the ASD group. Previously reported anatomical abnormalities in ASD, including larger intracranial volumes, smaller cerebellar volumes, and larger amygdala volumes, were not substantiated by the current study. In addition, multivariate classification analyses yielded modest decoding accuracies of individuals' group identity (<60%), suggesting that the examined anatomical measures are of limited diagnostic utility for ASD. While anatomical abnormalities may be present in distinct subgroups of ASD individuals, the current findings show that many previously reported anatomical measures are likely to be of low clinical and scientific significance for understanding ASD neuropathology as a whole in individuals 6–35 years old.

Journal ArticleDOI
TL;DR: Biomarker research faces several challenges; however, biomarkers could substantially improve the management of people with epilepsy and could lead to prevention in the right person at the right time, rather than just symptomatic treatment.
Abstract: Over 50 million people worldwide have epilepsy. In nearly 30% of these cases, epilepsy remains unsatisfactorily controlled despite the availability of over 20 antiepileptic drugs. Moreover, no treatments exist to prevent the development of epilepsy in those at risk, despite an increasing understanding of the underlying molecular and cellular pathways. One of the major factors that have impeded rapid progress in these areas is the complex and multifactorial nature of epilepsy, and its heterogeneity. Therefore, the vision of developing targeted treatments for epilepsy relies upon the development of biomarkers that allow individually tailored treatment. Biomarkers for epilepsy typically fall into two broad categories: diagnostic biomarkers, which provide information on the clinical status of, and potentially the sensitivity to, specific treatments, and prognostic biomarkers, which allow prediction of future clinical features, such as the speed of progression, severity of epilepsy, development of comorbidities, or prediction of remission or cure. Prognostic biomarkers are of particular importance because they could be used to identify which patients will develop epilepsy and which might benefit from preventive treatments. Biomarker research faces several challenges; however, biomarkers could substantially improve the management of people with epilepsy and could lead to prevention in the right person at the right time, rather than just symptomatic treatment.

Journal ArticleDOI
TL;DR: This study reveals that grapevine berries respond to drought by modulating several secondary metabolic pathways, and particularly, by stimulating the production of phenylpropanoids, the carotenoid zeaxanthin, and of volatile organic compounds such as monoterpenes, with potential effects on grape and wine antioxidant potential, composition, and sensory features.
Abstract: Secondary metabolism contributes to the adaptation of a plant to its environment. In wine grapes, fruit secondary metabolism largely determines wine quality. Climate change is predicted to exacerbate drought events in several viticultural areas, potentially affecting the wine quality. In red grapes, water deficit modulates flavonoid accumulation, leading to major quantitative and compositional changes in the profile of the anthocyanin pigments; in white grapes, the effect of water deficit on secondary metabolism is still largely unknown. In this study we investigated the impact of water deficit on the secondary metabolism of white grapes using a large scale metabolite and transcript profiling approach in a season characterized by prolonged drought. Irrigated grapevines were compared to non-irrigated grapevines that suffered from water deficit from early stages of berry development to harvest. A large effect of water deficit on fruit secondary metabolism was observed. Increased concentrations of phenylpropanoids, monoterpenes, and tocopherols were detected, while carotenoid and flavonoid accumulations were differentially modulated by water deficit according to the berry developmental stage. The RNA-sequencing analysis carried out on berries collected at three developmental stages—before, at the onset, and at late ripening—indicated that water deficit affected the expression of 4,889 genes. The Gene Ontology category secondary metabolic process was overrepresented within up-regulated genes at all the stages of fruit development considered, and within down-regulated genes before ripening. Eighteen phenylpropanoid, 16 flavonoid, 9 carotenoid, and 16 terpenoid structural genes were modulated by water deficit, indicating the transcriptional regulation of these metabolic pathways in fruit exposed to water deficit. An integrated network and promoter analyses identified a transcriptional regulatory module that encompasses terpenoid genes, transcription factors, and enriched drought-responsive elements in the promoter regions of those genes as part of the grapes response to drought. Our study reveals that grapevine berries respond to drought by modulating several secondary metabolic pathways, and particularly, by stimulating the production of phenylpropanoids, the carotenoid zeaxanthin, and of volatile organic compounds such as monoterpenes, with potential effects on grape and wine antioxidant potential, composition, and sensory features.

Journal ArticleDOI
TL;DR: A comprehensive review of available tools for modelling outdoor human comfort and thermal stress is presented; it explains the physical equations that drive these models and shows their applicability based on climate and the findings of previous research.
Abstract: Outdoor human comfort is an essential parameter to assess the quality of the urban microclimate, and to provide guidelines for sustainable urban development. This paper presents a comprehensive review of available tools for modelling outdoor human comfort and thermal stress, explains the physical equations that drive these models, and shows their applicability based on climate and the findings of previous research. The existing procedures are subdivided into three main categories: Thermal indices, Empirical indices and indices based on Linear Equations; for each approach, case studies are presented and subdivided according to Koeppen Climatic Classification (Polar, Cold, Temperate, Arid and Tropical). International regulations and software available to quantify outdoor human comfort and microclimate are presented, as well as a graphic thermal scale to compare the ability of each procedure to respond to the 11-point thermal sensation scale (from Sweltering to Extremely Cold). Finally, the models are presented as function of their ability to analyse climate, microclimate and human-related characteristics of the selected built environment. This paper aims at bringing a comprehensive introduction to the topic of the outdoor human comfort, helping the reader to understand the existing procedures and guiding the choice of the suitable options according to specific research needs.

Journal ArticleDOI
TL;DR: The therapeutic potential of injectable acellular alginate implants to inhibit the damaging processes after MI, leading to myocardial repair and tissue reconstruction is shown.

Journal ArticleDOI
TL;DR: In this article, an as-cast AlCoCrFeNi alloy was examined after various heat treatments using XRD, SEM, micro-hardness and compression tests, and it was found that the alloy solidified dendritically with an Al and Ni-rich dendrite core and inter-dendritic regions rich in Co, Cr, and Fe.

Proceedings ArticleDOI
24 Oct 2016
TL;DR: In this article, a tensoring operation is introduced to obtain a conceptually simpler derivation of previous constructions and to present new constructions of m-party FSS schemes, which are useful for applications that involve privately reading from or writing to distributed databases while minimizing the amount of communication.
Abstract: Function Secret Sharing (FSS), introduced by Boyle et al. (Eurocrypt 2015), provides a way for additively secret-sharing a function from a given function family F. More concretely, an m-party FSS scheme splits a function f : {0,1}^n -> G, for some abelian group G, into functions f1,...,fm, described by keys k1,...,km, such that f = f1 + ... + fm and every strict subset of the keys hides f. A Distributed Point Function (DPF) is a special case where F is the family of point functions, namely functions f_{a,b} that evaluate to b on the input a and to 0 on all other inputs. FSS schemes are useful for applications that involve privately reading from or writing to distributed databases while minimizing the amount of communication. These include different flavors of private information retrieval (PIR), as well as a recent application of DPF for large-scale anonymous messaging. We improve and extend previous results in several ways: * Simplified FSS constructions. We introduce a tensoring operation for FSS which is used to obtain a conceptually simpler derivation of previous constructions and present our new constructions. * Improved 2-party DPF. We reduce the key size of the PRG-based DPF scheme of Boyle et al. roughly by a factor of 4 and optimize its computational cost. The optimized DPF significantly improves the concrete costs of 2-server PIR and related primitives. * FSS for new function families. We present an efficient PRG-based 2-party FSS scheme for the family of decision trees, leaking only the topology of the tree and the internal node labels. We apply this towards FSS for multi-dimensional intervals. We also present a general technique for extending FSS schemes by increasing the number of parties. * Verifiable FSS. We present efficient protocols for verifying that keys (k*_1,...,k*_m), obtained from a potentially malicious user, are consistent with some f in F. Such a verification may be critical for applications that involve private writing or voting by many users.
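The correctness and privacy conditions (f = f1 + ... + fm, with every strict subset of keys hiding f) are easy to see in a naive, truth-table-based FSS for point functions, sketched below. Unlike the PRG-based schemes in the paper, each key here is a full 2^n-entry table, so this is only an illustration of the definition; the group, parameters, and function names are invented.

```python
import secrets

Q = 2**31 - 1                    # toy abelian group: integers modulo a prime

def share_point_function(a, b, n, m):
    """Naive m-party FSS for the point function f_{a,b} on {0,1}^n:
    each key is a full 2^n truth table, and the tables sum (mod Q) to f_{a,b}.
    Any m-1 keys are jointly uniform, hence hide (a, b)."""
    size = 2 ** n
    keys = [[secrets.randbelow(Q) for _ in range(size)] for _ in range(m - 1)]
    last = [0] * size
    for x in range(size):
        target = b if x == a else 0
        last[x] = (target - sum(k[x] for k in keys)) % Q
    keys.append(last)
    return keys

def eval_share(key, x):
    return key[x]

# 3-party sharing of the point function that maps input 5 to 42 (n = 4 input bits).
keys = share_point_function(a=5, b=42, n=4, m=3)
for x in (5, 9):
    total = sum(eval_share(k, x) for k in keys) % Q
    print(x, total)              # 5 -> 42, any other input -> 0
```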

Journal ArticleDOI
TL;DR: This paper investigated residential parcel and neighborhood scale variations in urban land surface temperature, land cover, and residents' perceptions of landscapes and heat illnesses in the subtropical desert city of Phoenix, AZ USA.
Abstract: With rapidly expanding urban regions, the effects of land cover changes on urban surface temperatures and the consequences of these changes for human health are becoming progressively larger problems. We investigated residential parcel and neighborhood scale variations in urban land surface temperature, land cover, and residents’ perceptions of landscapes and heat illnesses in the subtropical desert city of Phoenix, AZ USA. We conducted an airborne imaging campaign that acquired high resolution urban land surface temperature data (7 m/pixel) during the day and night. We performed a geographic overlay of these data with high resolution land cover maps, parcel boundaries, neighborhood boundaries, and a household survey. Land cover composition, including percentages of vegetated, building, and road areas, and values for NDVI, and albedo, was correlated with residential parcel surface temperatures and the effects differed between day and night. Vegetation was more effective at cooling hotter neighborhoods. We found consistencies between heat risk factors in neighborhood environments and residents’ perceptions of these factors. Symptoms of heat-related illness were correlated with parcel scale surface temperature patterns during the daytime but no corresponding relationship was observed with nighttime surface temperatures. Residents’ experiences of heat vulnerability were related to the daytime land surface thermal environment, which is influenced by micro-scale variation in land cover composition. These results provide a first look at parcel-scale causes and consequences of urban surface temperature variation and provide a critically needed perspective on heat vulnerability assessment studies conducted at much coarser scales.

Journal ArticleDOI
12 Feb 2016-Science
TL;DR: It is found that the DUB module deubiquitinates H2B both in the context of the nucleosome and in H2A/H2B dimers complexed with the histone chaperone FACT, suggesting that SAGA could target H2B at multiple stages of nucleosome disassembly and reassembly during transcription.
Abstract: Monoubiquitinated histone H2B plays multiple roles in transcription activation. H2B is deubiquitinated by the Spt-Ada-Gcn5 acetyltransferase (SAGA) coactivator, which contains a four-protein subcomplex known as the deubiquitinating (DUB) module. The crystal structure of the Ubp8/Sgf11/Sus1/Sgf73 DUB module bound to a ubiquitinated nucleosome reveals that the DUB module primarily contacts H2A/H2B, with an arginine cluster on the Sgf11 zinc finger domain docking on the conserved H2A/H2B acidic patch. The Ubp8 catalytic domain mediates additional contacts with H2B, as well as with the conjugated ubiquitin. We find that the DUB module deubiquitinates H2B both in the context of the nucleosome and in H2A/H2B dimers complexed with the histone chaperone, FACT, suggesting that SAGA could target H2B at multiple stages of nucleosome disassembly and reassembly during transcription.

Proceedings ArticleDOI
19 Jun 2016
TL;DR: The first upper bounds on the number of samples required to answer more general families of queries, including arbitrary low-sensitivity queries and an important class of optimization queries (alternatively, risk minimization queries), are proved.
Abstract: Adaptivity is an important feature of data analysis - the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014) initiated a general formal study of this problem, and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution P and a set of n independent samples x is drawn from P. We seek an algorithm that, given x as input, accurately answers a sequence of adaptively chosen ``queries'' about the unknown distribution P. How many samples n must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions towards resolving this question: We give upper bounds on the number of samples n that are needed to answer statistical queries. The bounds improve and simplify the work of Dwork et al. (STOC, 2015), and have been applied in subsequent work by those authors (Science, 2015; NIPS, 2015). We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and an important class of optimization queries (alternatively, risk minimization queries). As in Dwork et al., our algorithms are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that the stability notion guaranteed by differential privacy implies low generalization error. We also show that weaker stability guarantees such as bounded KL divergence and total variation distance lead to correspondingly weaker generalization guarantees.
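The differential-privacy connection mentioned in the abstract can be illustrated with the textbook Laplace mechanism for a single statistical query: answer the empirical mean with noise calibrated to its sensitivity 1/n. This generic sketch is not the paper's transfer-theorem machinery; the dataset, query, and epsilon are invented.

```python
import math
import random

def laplace_sample(scale, rng):
    """Draw from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_statistical_query(data, query, epsilon, rng):
    """Answer a statistical query q : X -> [0, 1] with epsilon-differential privacy.
    The empirical mean has sensitivity 1/len(data) under changing one record, so
    Laplace noise of scale 1/(len(data) * epsilon) suffices."""
    empirical = sum(query(x) for x in data) / len(data)
    return empirical + laplace_sample(1.0 / (len(data) * epsilon), rng)

rng = random.Random(0)
data = [rng.random() for _ in range(10_000)]           # hypothetical samples from P
answer = private_statistical_query(data, lambda x: float(x > 0.5), epsilon=0.1, rng=rng)
print(round(answer, 3))                                 # close to Pr[x > 0.5] = 0.5
```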

Journal ArticleDOI
TL;DR: Increased PM2.5 exposure in specific prenatal windows may be associated with poorer function across memory and attention domains with variable associations based on sex, and refined determination of time window- and sex-specific associations may enhance insight into underlying mechanisms and identification of vulnerable subgroups.

Journal ArticleDOI
TL;DR: This notice is a published correction to the article with DOI 10.1186/s13054-016-1208-6.
Abstract: [This corrects the article DOI: 10.1186/s13054-016-1208-6.].

Journal ArticleDOI
TL;DR: Since C. reinhardtii was traditionally used as a model organism for the development of transformation systems and their subsequent improvement, similar technologies can be adapted for other microalgae that may have higher biotechnological value.
Abstract: Microalgae comprise a biodiverse group of photosynthetic organisms that reside in water sources and sediments. The green microalgae Chlamydomonas reinhardtii was adopted as a useful model organism for studying various physiological systems. Its ability to grow under both photosynthetic and heterotrophic conditions allows efficient growth of non-photosynthetic mutants, making Chlamydomonas a useful genetic tool to study photosynthesis. In addition, this green alga can grow as haploid or diploid cells, similar to yeast, providing a powerful genetic system. As a result, easy and efficient transformation systems have been developed for Chlamydomonas, targeting both the chloroplast and nuclear genomes. Since microalgae comprise a rich repertoire of species that offer variable advantages for biotech and biomed industries, gene transfer technologies were further developed for many microalgae to allow for the expression of foreign proteins of interest. Expressing foreign genes in the chloroplast enables the targeting of foreign DNA to specific sites by homologous recombination. Chloroplast transformation also allows for the introduction of genes encoding several enzymes from a complex pathway, possibly as an operon. Expressing foreign proteins in the chloroplast can also be achieved by introducing the target gene into the nuclear genome, with the protein product bearing a targeting signal that directs import of the transgene-product into the chloroplast, like other endogenous chloroplast proteins. Integration of foreign genes into the nuclear genome is mostly random, resulting in large variability between different clones, such that extensive screening is required. The use of different selection modalities is also described, with special emphasis on the use of herbicides and metabolic markers which are considered to be friendly to the environment, as compared to drug-resistance genes that are commonly used. Finally, despite the development of a wide range of transformation tools and approaches, expression of foreign genes in microalgae suffers from low efficiency. Thus, novel tools have appeared in recent years to deal with this problem. Finally, while Chlamydomonas reinhardtii was traditionally used as a model organism for the development of transformation systems and their subsequent improvement, similar technologies can be adapted for other microalgae that may have higher biotechnological value.

Journal ArticleDOI
TL;DR: The theory that shame evolved as a defense against being devalued by others is tested, and a close match between shame intensities and audience devaluation is indicated, which suggests that shame is an adaptation.
Abstract: We test the theory that shame evolved as a defense against being devalued by others. By hypothesis, shame is a neurocomputational program tailored by selection to orchestrate cognition, motivation, physiology, and behavior in the service of: (i) deterring the individual from making choices where the prospective costs of devaluation exceed the benefits, (ii) preventing negative information about the self from reaching others, and (iii) minimizing the adverse effects of devaluation when it occurs. Because the unnecessary activation of a defense is costly, the shame system should estimate the magnitude of the devaluative threat and use those estimates to cost-effectively calibrate its activation: Traits or actions that elicit more negative evaluations from others should elicit more shame. As predicted, shame closely tracks the threat of devaluation in the United States (r = .69), India (r = .79), and Israel (r = .67). Moreover, shame in each country strongly tracks devaluation in the others, suggesting that shame and devaluation are informed by a common species-wide logic of social valuation. The shame–devaluation link is also specific: Sadness and anxiety—emotions that coactivate with shame—fail to track devaluation. To our knowledge, this constitutes the first empirical demonstration of a close, specific match between shame and devaluation within and across cultures.

Journal ArticleDOI
TL;DR: The S66x8 dataset for noncovalent interactions of biochemical relevance has been re-examined by means of MP2-F12 and CCSD(F12*)(T) methods and an improved, parameter-free scaling for the (T) contribution is proposed.
Abstract: The S66x8 dataset for noncovalent interactions of biochemical relevance has been re-examined by means of MP2-F12 and CCSD(F12*)(T) methods. We deem our revised benchmark data to be reliable to about 0.05 kcal mol^-1 RMS. Most levels of DFT perform quite poorly in the absence of dispersion corrections: somewhat surprisingly, that is even the case for the double hybrids and for dRPA75. Analysis of optimized D3BJ parameters reveals that the main benefit of dRPA75 and DSD double hybrids alike is the treatment of midrange dispersion. dRPA75-D3BJ is the best performer overall at RMSD = 0.10 kcal mol^-1. The nonlocal VV10 dispersion functional is especially beneficial for the double hybrids, particularly in DSD-PBEP86-NL (RMSD = 0.12 kcal mol^-1). Other recommended dispersion-corrected functionals with favorable price/performance ratios are ωB97X-V, and, surprisingly, B3LYP-D3BJ and BLYP-D3BJ (RMSDs of 0.23, 0.20 and 0.23 kcal mol^-1, respectively). Without dispersion correction (but parametrized for midrange interactions) M06-2X has the lead (RMSD = 0.45 kcal mol^-1). A collection of three energy-based diagnostics yields similar information to an SAPT analysis about the nature of the noncovalent interaction. Two of those are the percentages of Hartree–Fock and of post-MP2 correlation effects in the interaction energy; the third, CSPI = [IE^(2)_ss − IE^(2)_ab]/[IE^(2)_ss + IE^(2)_ab], or its derived quantity DEBC = CSPI/(1 + CSPI^2)^(1/2), describes the character of the MP2 correlation contribution, ranging from 0 (purely dispersion) to 1 (purely other effects). In addition, we propose an improved, parameter-free scaling for the (T) contribution based on the E_corr[CCSD-F12b]/E_corr[CCSD] and E_corr[CCSD(F12*)]/E_corr[CCSD] ratios. For Hartree–Fock and conventional DFT calculations, full counterpoise generally yields the fastest basis set convergence, while for double hybrids, half-counterpoise yields faster convergence, as previously established for correlated ab initio methods.
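The CSPI and DEBC diagnostics quoted above are simple arithmetic on the MP2 pair-correlation contributions to the interaction energy. The toy calculation below assumes "ss" and "ab" denote the same-spin and opposite-spin (alpha-beta) contributions, and the numerical inputs are placeholders, not values from the paper.

```python
import math

def cspi(ie2_ss, ie2_ab):
    """CSPI = [IE^(2)_ss - IE^(2)_ab] / [IE^(2)_ss + IE^(2)_ab]."""
    return (ie2_ss - ie2_ab) / (ie2_ss + ie2_ab)

def debc(c):
    """DEBC = CSPI / (1 + CSPI^2)^(1/2); per the abstract, values near 0 indicate a
    purely dispersion-like MP2 contribution and values near 1 purely other effects."""
    return c / math.sqrt(1.0 + c ** 2)

# Placeholder same-spin and opposite-spin MP2 pair contributions (kcal/mol).
ie2_ss, ie2_ab = -2.4, -0.8
c = cspi(ie2_ss, ie2_ab)
print(f"CSPI = {c:.3f}, DEBC = {debc(c):.3f}")   # CSPI = 0.500, DEBC = 0.447
```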