Author

Lei Wang

Bio: Lei Wang is an academic researcher from Shenzhen University. The author has contributed to research in topics: Medicine & Computer science. The author has an h-index of 73, has co-authored 1,283 publications, and has received 26,333 citations. Previous affiliations of Lei Wang include the New York State Department of Health and Zhejiang University.


Papers
Journal ArticleDOI
TL;DR: In this paper, the energy of a large number of oxidation reactions of $3d$ transition metal oxides is computed using the generalized gradient approximation (GGA) and GGA+U methods.
Abstract: The energy of a large number of oxidation reactions of $3d$ transition metal oxides is computed using the generalized gradient approximation (GGA) and $\mathrm{GGA}+\mathrm{U}$ methods. Two substantial contributions to the error in GGA oxidation energies are identified. The first contribution originates from the overbinding of GGA in the ${\mathrm{O}}_{2}$ molecule and only occurs when the oxidant is ${\mathrm{O}}_{2}$. The second error occurs in all oxidation reactions and is related to the correlation error in $3d$ orbitals in GGA. Strong self-interaction in GGA systematically penalizes a reduced state (with more $d$ electrons) over an oxidized state, resulting in an overestimation of oxidation energies. The constant error in the oxidation energy from the ${\mathrm{O}}_{2}$ binding error can be corrected by fitting the formation enthalpy of simple non-transition-metal oxides. Removal of the ${\mathrm{O}}_{2}$ binding error makes it possible to address the correlation effects in $3d$ transition metal oxides with the $\mathrm{GGA}+\mathrm{U}$ method. Calculated oxidation energies agree well with experimental data for reasonable and consistent values of U.
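As a compact sketch of the correction described above (the notation here is mine, not quoted from the paper): for an oxidation reaction such as $2\mathrm{MO} + \tfrac{1}{2}\mathrm{O}_2 \rightarrow \mathrm{M_2O_3}$, the reaction energy is evaluated as

$$\Delta E_{\mathrm{ox}} = E(\mathrm{M_2O_3}) - 2\,E(\mathrm{MO}) - \tfrac{1}{2}\left[E^{\mathrm{GGA}}(\mathrm{O_2}) + \Delta E_{\mathrm{O_2}}\right],$$

where the constant shift $\Delta E_{\mathrm{O_2}}$ compensates the GGA overbinding of the $\mathrm{O_2}$ molecule and is obtained by fitting computed formation enthalpies of simple non-transition-metal oxides to experiment; the remaining $3d$ correlation error is then handled by the $+U$ term.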

2,013 citations

Journal ArticleDOI
TL;DR: In this paper, the ground-state energies of all known compounds in the quaternary Li−Fe−P−O2 system were calculated using the generalized gradient approximation (GGA) to density functional theory (DFT) and the DFT+U extension to it.
Abstract: We present an efficient way to calculate the phase diagram of the quaternary Li−Fe−P−O2 system using ab initio methods. The ground-state energies of all known compounds in the Li−Fe−P−O2 system were calculated using the generalized gradient approximation (GGA) to density functional theory (DFT) and the DFT+U extension to it. Considering only the entropy of gaseous phases, the phase diagram was constructed as a function of oxidation conditions, with the oxygen chemical potential, μO2, capturing both temperature and oxygen partial pressure dependence. A modified Ellingham diagram was also developed by incorporating the experimental entropy data of gaseous phases. The phase diagram shows LiFePO4 to be stable over a wide range of oxidation environments, being the first Fe2+-containing phase to appear upon reduction at μO2 = −11.52 eV and the last of the Fe-containing phosphates to be reduced at μO2 = −16.74 eV. Lower μO2 represents more reducing conditions, which generally correspond to higher t...
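The open-system construction sketched in the abstract can be summarized as follows (a standard formulation; the symbols are mine, not quoted from the paper). At fixed oxygen chemical potential, each phase is compared through a grand potential

$$\phi = E - \mu_{\mathrm{O_2}}\, N_{\mathrm{O_2}}, \qquad \mu_{\mathrm{O_2}}(T, p_{\mathrm{O_2}}) \approx \mu^{0}_{\mathrm{O_2}}(T) + k_B T \ln\frac{p_{\mathrm{O_2}}}{p^{0}},$$

and the stable phases at a given $\mu_{\mathrm{O_2}}$ are those on the convex hull of $\phi$ over the Li–Fe–P composition space, so lowering $\mu_{\mathrm{O_2}}$ sweeps the system through progressively more reduced phase assemblages.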

606 citations

Journal ArticleDOI
TL;DR: In this article, the effects of rare earth (RE) elements on the microstructure, mechanical properties, and wetting behavior of certain Pb-free solder alloys are summarized. However, the effects of RE elements on ICs are not considered.
Abstract: Due to the inherent toxicity of lead (Pb), environmental regulations around the world have been targeted to eliminate the usage of Pb-bearing solders in electronic assemblies. This has prompted the development of “Pb-free” solders, and has enhanced the research activities in this field. In order to become a successful solder material, Pb-free alloys need to be reliable over long term use. Although many of these alloys possess higher strength than the traditional Sn–Pb ones, there still exist reliability problems such as electromigration and creep. Also, the solderability of many Pb-free alloys is inferior to that of Sn–Pb and any improvement or replacement will be welcomed by industry. In order to develop new Pb-free solders with better properties, trace amounts of rare earth (RE) elements were selected by some researchers as alloying additions into Sn-based solders. These solder alloys are mainly Sn–Ag, Sn–Cu, Sn–Zn and Sn–Ag–Cu. In general, the resulting RE-doped solders are found to have better performances than their original ones. The improvements include better wettability, creep strength and tensile strength. In particular, the increase in creep resistance in some RE-doped alloys raises the creep rupture time by over four times for Sn–Ag and seven times for Sn–Cu and Sn–Ag–Cu. Like other Sn-based alloys, their creep rates are controlled by dislocation pipe diffusion in the Sn matrix. Also, it was found that the creep rate of these Sn-based alloys can be represented by a single empirical equation. With the addition of RE elements, solders for bonding on difficult substrates such as on semiconductors, diamond, and optical materials have also been developed. This report summarizes the effects of RE elements on the microstructure, mechanical properties, and wetting behavior of certain Pb-free solder alloys. As an illustration of the advantage of RE doping, interfacial studies were carried out for electronic interconnections with RE-doped Pb-free alloys. It was found that the intermetallic compound (IMC) layer thickness and the amount of interfacial reaction were reduced in a Ball Grid Array (BGA) package. These results indicate that RE elements would play an important role in providing better electronic interconnections.
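The abstract states that the creep rate of these Sn-based alloys can be represented by a single empirical equation but does not reproduce it; a generic power-law (Dorn-type) creep expression of the kind typically used for such fits, given here purely as an illustrative assumption, is

$$\dot{\varepsilon} = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right),$$

where $\sigma$ is the applied stress, $n$ the stress exponent, $Q$ an activation energy (here associated with dislocation pipe diffusion in the Sn matrix), $R$ the gas constant, $T$ the absolute temperature, and $A$ a fitted constant.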

558 citations

Proceedings ArticleDOI
06 Nov 2011
TL;DR: A simple modification to localize the soft-assignment coding is proposed, which surprisingly achieves comparable or even better performance than existing sparse or local coding schemes while maintaining its computational advantage.
Abstract: In object recognition, soft-assignment coding enjoys computational efficiency and conceptual simplicity. However, its classification performance is inferior to the newly developed sparse or local coding schemes. It would be highly desirable if its classification performance could become comparable to the state-of-the-art, leading to a coding scheme which perfectly combines computational efficiency and classification performance. To achieve this, we revisit soft-assignment coding from two key aspects: classification performance and probabilistic interpretation. For the first aspect, we argue that the inferiority of soft-assignment coding is due to its neglect of the underlying manifold structure of local features. To remedy this, we propose a simple modification to localize the soft-assignment coding, which surprisingly achieves comparable or even better performance than existing sparse or local coding schemes while maintaining its computational advantage. For the second aspect, based on our probabilistic interpretation of the soft-assignment coding, we give a probabilistic explanation of the magic max-pooling operation, which has been successfully used by sparse or local coding schemes but is still poorly understood. This probabilistic explanation motivates us to develop a new mix-order max-pooling operation which further improves the classification performance of the proposed coding scheme. As experimentally demonstrated, the localized soft-assignment coding achieves the state-of-the-art classification performance with the highest computational efficiency among the existing coding schemes.
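A minimal NumPy sketch of the localized soft-assignment idea described above (function and parameter names such as k and beta are mine, and this is an illustrative reimplementation, not the authors' code): each local descriptor is softly assigned only to its k nearest codewords, and an image-level signature is obtained by max-pooling.

import numpy as np

def localized_soft_assignment(features, codebook, k=5, beta=10.0):
    # features: (N, D) local descriptors of one image
    # codebook: (M, D) visual words learned beforehand (e.g., by k-means)
    # Returns an (N, M) code matrix; each descriptor is softly assigned only
    # to its k nearest codewords, all other entries are zero.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, M) squared distances
    codes = np.zeros_like(d2)
    nn = np.argsort(d2, axis=1)[:, :k]               # k nearest codewords per descriptor
    rows = np.arange(features.shape[0])[:, None]
    w = np.exp(-beta * d2[rows, nn])                 # soft weights restricted to the k-NN
    codes[rows, nn] = w / w.sum(axis=1, keepdims=True)
    return codes

def max_pool(codes):
    # Image-level signature: maximum response over local descriptors per codeword.
    return codes.max(axis=0)

The pooled vectors would then typically be fed to a linear classifier; the mix-order max-pooling variant mentioned in the abstract generalizes the plain maximum used here.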

474 citations

Proceedings ArticleDOI
15 Jun 2019
TL;DR: This work proposes a Deep Nearest Neighbor Neural Network (DN4), a simple, effective, and computationally efficient framework for few-shot learning that not only learns the optimal deep local descriptors for the image-to-class measure, but also utilizes the higher efficiency of such a measure in the case of example scarcity.
Abstract: Few-shot learning in image classification aims to learn a classifier to classify images when only few training examples are available for each class. Recent work has achieved promising classification performance, where an image-level feature based measure is usually used. In this paper, we argue that a measure at such a level may not be effective enough in light of the scarcity of examples in few-shot learning. Instead, we think a local descriptor based image-to-class measure should be taken, inspired by its surprising success in the heydays of local invariant features. Specifically, building upon the recent episodic training mechanism, we propose a Deep Nearest Neighbor Neural Network (DN4 in short) and train it in an end-to-end manner. Its key difference from the literature is the replacement of the image-level feature based measure in the final layer by a local descriptor based image-to-class measure. This measure is conducted online via a k-nearest neighbor search over the deep local descriptors of convolutional feature maps. The proposed DN4 not only learns the optimal deep local descriptors for the image-to-class measure, but also utilizes the higher efficiency of such a measure in the case of example scarcity, thanks to the exchangeability of visual patterns across the images in the same class. Our work leads to a simple, effective, and computationally efficient framework for few-shot learning. Experimental study on benchmark datasets consistently shows its superiority over the related state-of-the-art, with the largest absolute improvement of 17% over the next best. The source code is available at https://github.com/WenbinLee/DN4.git.
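A rough NumPy sketch of the image-to-class measure described above (an illustrative reimplementation, not the released code; the use of cosine similarity and the value of k are assumptions of this sketch): each query descriptor is matched to its k most similar descriptors within a class, and the similarities are summed.

import numpy as np

def image_to_class_score(query_desc, class_desc, k=3):
    # query_desc: (Nq, D) deep local descriptors of the query image
    # class_desc: (Nc, D) descriptors pooled from all support images of one class
    # Returns a scalar image-to-class similarity.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    c = class_desc / np.linalg.norm(class_desc, axis=1, keepdims=True)
    sim = q @ c.T                          # (Nq, Nc) cosine similarities
    topk = np.sort(sim, axis=1)[:, -k:]    # k nearest class descriptors per query descriptor
    return topk.sum()

def classify(query_desc, support_sets, k=3):
    # support_sets: dict mapping class label -> (Nc, D) pooled support descriptors
    scores = {label: image_to_class_score(query_desc, descs, k)
              for label, descs in support_sets.items()}
    return max(scores, key=scores.get)

In DN4 this search runs on descriptors produced by a convolutional embedding network and is trained end-to-end; the sketch above only illustrates the measure itself.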

428 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception, as proposed in this paper, is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer deep network, the quality of which is assessed in the context of classification and detection.
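A minimal PyTorch sketch of one Inception module in the spirit of the architecture described above (not the authors' implementation; the channel sizes in the example are illustrative, modeled on a typical early GoogLeNet block):

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    # Parallel 1x1, 3x3, and 5x5 convolutions plus a pooling branch,
    # with 1x1 convolutions used to reduce channel counts before the
    # larger filters; outputs are concatenated along the channel axis.
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU(inplace=True))
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the four branch outputs along the channel dimension.
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# Example with illustrative sizes: 192 input channels -> 256 output channels.
block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
out = block(torch.randn(1, 192, 28, 28))   # shape (1, 256, 28, 28)

Stacking such modules, with occasional pooling layers in between, is how the depth and width can grow while the 1x1 reductions keep the computational budget in check.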

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at the time: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently—those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
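As an illustration of the first (atom-decomposition) strategy in modern terms, here is a hedged NumPy/mpi4py sketch; it mirrors the idea of each processor owning a fixed subset of atoms, and is not the paper's implementation. Positions are replicated on every rank, each rank computes Lennard-Jones forces for its slice of atoms, and the full force array is reassembled with a gather.

import numpy as np
from mpi4py import MPI

def lj_forces_atom_decomposition(pos, eps=1.0, sigma=1.0, rcut=2.5):
    # pos: full (N, 3) position array, replicated on every MPI rank.
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n = pos.shape[0]
    lo, hi = rank * n // size, (rank + 1) * n // size   # this rank's fixed subset of atoms

    local_f = np.zeros((hi - lo, 3))
    for i in range(lo, hi):
        rij = pos[i] - pos                   # vectors from all atoms to atom i
        r2 = (rij ** 2).sum(axis=1)
        r2[i] = np.inf                       # exclude self-interaction
        mask = r2 < rcut ** 2                # short-range cutoff
        sr6 = (sigma ** 2 / r2[mask]) ** 3
        coef = 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2[mask]
        local_f[i - lo] = (coef[:, None] * rij[mask]).sum(axis=0)

    # Assemble the full force array on every rank.
    return np.concatenate(comm.allgather(local_f), axis=0)

The paper's other two strategies would instead distribute force pairs or spatial subdomains across processors, which avoids replicating all positions and scales better for very large systems.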

29,323 citations

Journal ArticleDOI
TL;DR: Preface to the Princeton Landmarks in Biology Edition vii Preface xi Symbols used xiii 1.
Abstract: Preface to the Princeton Landmarks in Biology Edition vii Preface xi Symbols Used xiii 1. The Importance of Islands 3 2. Area and Number of Species 8 3. Further Explanations of the Area-Diversity Pattern 19 4. The Strategy of Colonization 68 5. Invasibility and the Variable Niche 94 6. Stepping Stones and Biotic Exchange 123 7. Evolutionary Changes Following Colonization 145 8. Prospect 181 Glossary 185 References 193 Index 201

14,171 citations