scispace - formally typeset
Author

Gang Chen

Bio: Gang Chen is an academic researcher from Harbin Institute of Technology. The author has contributed to research in topics: Medicine & Thermal conductivity. The author has an h-index of 167 and has co-authored 3372 publications receiving 149,819 citations. Previous affiliations of Gang Chen include Beijing Institute of Technology & University of Electronic Science and Technology of China.


Papers
Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations

Journal ArticleDOI
02 May 2008-Science
TL;DR: Electrical transport measurements, coupled with microstructure studies and modeling, show that the ZT improvement is the result of low thermal conductivity caused by the increased phonon scattering by grain boundaries and defects, which makes these materials useful for cooling and power generation.
Abstract: The dimensionless thermoelectric figure of merit (ZT) in bismuth antimony telluride (BiSbTe) bulk alloys has remained around 1 for more than 50 years. We show that a peak ZT of 1.4 at 100°C can be achieved in a p-type nanocrystalline BiSbTe bulk alloy. These nanocrystalline bulk materials were made by hot pressing nanopowders that were ball-milled from crystalline ingots under inert conditions. Electrical transport measurements, coupled with microstructure studies and modeling, show that the ZT improvement is the result of low thermal conductivity caused by the increased phonon scattering by grain boundaries and defects. More importantly, ZT is about 1.2 at room temperature and 0.8 at 250°C, which makes these materials useful for cooling and power generation. Cooling devices that use these materials have produced high-temperature differences of 86°, 106°, and 119°C with hot-side temperatures set at 50°, 100°, and 150°C, respectively. This discovery sets the stage for use of a new nanocomposite approach in developing high-performance low-cost bulk thermoelectric materials.
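For reference, the dimensionless figure of merit discussed in this abstract has the standard textbook definition (not spelled out in the abstract itself):

```latex
ZT = \frac{S^2 \sigma}{\kappa} T
```

where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $\kappa$ the total (lattice plus electronic) thermal conductivity, and $T$ the absolute temperature; the numerator $S^2\sigma$ is the power factor. This makes explicit why suppressing $\kappa$ through phonon scattering at grain boundaries raises ZT even when the electronic properties are unchanged.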

4,695 citations

Journal ArticleDOI
TL;DR: In this article, the ability to achieve a simultaneous increase in the power factor and a decrease in the thermal conductivity of the same nanocomposite sample and for transport in the same direction is discussed.
Abstract: Many of the recent advances in enhancing the thermoelectric figure of merit are linked to nanoscale phenomena found both in bulk samples containing nanoscale constituents and in nanoscale samples themselves. Prior theoretical and experimental proof-of-principle studies on quantum-well superlattice and quantum-wire samples have now evolved into studies on bulk samples containing nanostructured constituents prepared by chemical or physical approaches. In this Review, nanostructural composites are shown to exhibit nanostructures and properties that show promise for thermoelectric applications, thus bringing together low-dimensional and bulk materials for thermoelectric applications. Particular emphasis is given in this Review to the ability to achieve 1) a simultaneous increase in the power factor and a decrease in the thermal conductivity in the same nanocomposite sample and for transport in the same direction and 2) lower values of the thermal conductivity in these nanocomposites as compared to alloy samples of the same chemical composition. The outlook for future research directions for nanocomposite thermoelectric materials is also discussed.

3,562 citations

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the principles and present status of bulk nanostructured materials, then describe some of the unanswered questions about carrier transport and how current research is addressing these questions.
Abstract: Thermoelectrics have long been recognized as a potentially transformative energy conversion technology due to their ability to convert heat directly into electricity. Despite this potential, thermoelectric devices are not in common use because of their low efficiency, and today they are only used in niche markets where reliability and simplicity are more important than performance. However, the ability to create nanostructured thermoelectric materials has led to remarkable progress in enhancing thermoelectric properties, making it plausible that thermoelectrics could start being used in new settings in the near future. Of the various types of nanostructured materials, bulk nanostructured materials have shown the most promise for commercial use because, unlike many other nanostructured materials, they can be fabricated in large quantities and in a form that is compatible with existing thermoelectric device configurations. The first generation of these materials is currently being developed for commercialization, but creating the second generation will require a fundamental understanding of carrier transport in these complex materials which is presently lacking. In this review we introduce the principles and present status of bulk nanostructured materials, then describe some of the unanswered questions about carrier transport and how current research is addressing these questions. Finally, we discuss several research directions which could lead to the next generation of bulk nanostructured materials.
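The device efficiency that this review says is "low" relates to ZT through the standard maximum-efficiency expression for a thermoelectric generator (a textbook relation, not stated in the abstract):

```latex
\eta_{\max} = \frac{T_h - T_c}{T_h}\,
\frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c / T_h}
```

where $T_h$ and $T_c$ are the hot- and cold-side temperatures and $\bar{T}$ is their mean. The first factor is the Carnot limit, so raising ZT pushes the second factor toward 1 and the device toward Carnot efficiency.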

1,742 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is something ethereal about i, the square root of minus one: an odd beast hovering on the edge of reality, whose surreal nature only intensifies with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models that can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
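The parallel efficiencies quoted in the abstract follow the standard definitions of speedup and efficiency; a minimal sketch (the timing numbers below are illustrative, not taken from the paper):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup of a parallel run relative to the serial baseline."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Fraction of ideal linear scaling actually achieved on n_procs processors."""
    return speedup(t_serial, t_parallel) / n_procs

# Illustrative numbers: a timestep costing 1000 s serially
# and 0.62 s on 1840 processors.
eff = parallel_efficiency(1000.0, 0.62, 1840)
print(f"{eff:.1%}")  # prints 87.7%
```

An efficiency near 90%, as reported for the spatial-decomposition algorithm, means almost all of the ideal 1840x speedup is realized despite communication overhead.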

29,323 citations

28 Jul 2005
TL;DR: PfPMP1 interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfPMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion. Each haploid genome carries a var gene family encoding roughly 60 members, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations