Institution

Chinese Academy of Sciences

Government organization · Beijing, China
About: Chinese Academy of Sciences is a government organization based in Beijing, China. It is known for its research contributions in the topics of Catalysis and Population. The organization has 421,602 authors who have published 634,849 publications receiving 14,894,293 citations. The organization is also known as: CAS.
Topics: Catalysis, Population, Laser, Adsorption, Graphene


Papers
Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper proposes a dynamic Siamese network with a fast transformation learning model that enables effective online learning of target appearance variation and background suppression from previous frames, and presents elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features.
Abstract: How to effectively learn the temporal variation of target appearance and exclude the interference of cluttered background, while maintaining real-time response, is an essential problem of visual object tracking. Recently, Siamese networks have shown the great potential of matching-based trackers in achieving balanced accuracy and beyond-real-time speed. However, they still lag well behind classification-and-updating-based trackers in tolerating temporal changes of objects and imaging conditions. In this paper, we propose a dynamic Siamese network with a fast transformation learning model that enables effective online learning of target appearance variation and background suppression from previous frames. We then present elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features. Unlike state-of-the-art trackers, our approach allows the use of any feasible generally or particularly trained features, such as SiamFC and VGG. More importantly, the proposed dynamic Siamese network can be jointly trained as a whole directly on labeled video sequences, and thus can take full advantage of the rich spatial-temporal information of moving objects. As a result, our approach achieves state-of-the-art performance on the OTB-2013 and VOT-2015 benchmarks, while exhibiting superiorly balanced accuracy and real-time response compared with state-of-the-art competitors.
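The matching step at the heart of Siamese trackers can be illustrated with a small sketch: an exemplar (target) feature map is cross-correlated over a search-region feature map to produce a response map, and per-layer response maps are fused elementwise. This is a hedged illustration only, not the paper's implementation; `xcorr` and `fuse` are hypothetical helpers, the feature maps are random stand-ins for deep features, and the fusion weights are arbitrary.

```python
import numpy as np

def xcorr(search, exemplar):
    """Dense cross-correlation of an exemplar feature map over a
    larger search feature map; returns a 2-D response map."""
    C, H, W = search.shape
    c, h, w = exemplar.shape
    assert c == C and h <= H and w <= W
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[:, i:i + h, j:j + w] * exemplar)
    return out

def fuse(responses, weights):
    """Elementwise weighted fusion of per-layer response maps
    (a stand-in for the paper's multi-layer fusion)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize fusion weights
    return sum(wi * r for wi, r in zip(w, responses))

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 6, 6))       # exemplar (target) features
x = rng.standard_normal((8, 22, 22))     # search-region features
r1 = xcorr(x, z)                         # response from one feature layer
r2 = xcorr(x, z * 0.5)                   # response from another layer
r = fuse([r1, r2], [0.7, 0.3])
print(r.shape)  # (17, 17)
```

The peak of the fused response map would mark the predicted target location within the search region.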

772 citations

Journal ArticleDOI
Feng Zhang, Wenbin Zhang, Zhun Shi, Dong Wang, Jian Jin, Lei Jiang
TL;DR: A novel all-inorganic Cu(OH)2 nanowire-haired membrane with superhydrophilicity and underwater ultralow adhesive superoleophobicity is fabricated by a facile surface oxidation of copper mesh that allows effective separation of both immiscible oil/water mixtures and oil-in-water emulsions solely driven by gravity, with extremely high separation efficiency.
Abstract: A novel all-inorganic Cu(OH)2 nanowire-haired membrane with superhydrophilicity and underwater ultralow adhesive superoleophobicity is fabricated by a facile surface oxidation of copper mesh that allows effective separation of both immiscible oil/water mixtures and oil-in-water emulsions solely driven by gravity, with extremely high separation efficiency. The all-inorganic membrane exhibits superior solvent and alkaline resistance and antifouling property compared to organic-based membranes.

772 citations

Proceedings ArticleDOI
15 Apr 2018
TL;DR: The Speech-Transformer is presented: a no-recurrence sequence-to-sequence model that relies entirely on attention mechanisms to learn positional dependencies and can be trained faster and more efficiently, along with a 2D-Attention mechanism that jointly attends to the time and frequency axes of the 2-dimensional speech inputs, thus providing more expressive representations for the Speech-Transformer.
Abstract: Recurrent sequence-to-sequence models using the encoder-decoder architecture have made great progress in the speech recognition task. However, they suffer from slow training speed because their internal recurrence limits training parallelization. In this paper, we present the Speech-Transformer, a no-recurrence sequence-to-sequence model that relies entirely on attention mechanisms to learn the positional dependencies and can be trained faster and more efficiently. We also propose a 2D-Attention mechanism, which can jointly attend to the time and frequency axes of the 2-dimensional speech inputs, thus providing more expressive representations for the Speech-Transformer. Evaluated on the Wall Street Journal (WSJ) speech recognition dataset, our best model achieves a competitive word error rate (WER) of 10.9%, while the whole training process takes only 1.2 days on 1 GPU, significantly faster than the published results of recurrent sequence-to-sequence models.
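The building block such a no-recurrence model stacks in place of recurrence is scaled dot-product attention. A minimal numpy sketch, with the caveat that the function names and toy dimensions are assumptions, and the paper's 2D-Attention additionally applies attention along both the time and frequency axes rather than only over time:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every query position mixes
    information from all key/value positions in one parallel step,
    which is what removes the training-speed bottleneck of recurrence."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(1)
T, d = 10, 16                       # toy: 10 time frames, model dim 16
X = rng.standard_normal((T, d))     # stand-in for speech feature frames
out = attention(X, X, X)            # self-attention over the time axis
print(out.shape)  # (10, 16)
```

Because every output frame is computed from all input frames simultaneously, the whole sequence is processed in parallel, unlike an RNN that must step through frames one at a time.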

771 citations

Journal ArticleDOI
TL;DR: A one-pot, large-scale protocol for the controlled synthesis of new one-dimensional bamboo-like carbon nanotube/Fe(3)C nanoparticle hybrid nanoelectrocatalysts, which are directly prepared by annealing a mixture of PEG-PPG-PEG Pluronic P123, melamine, and Fe(NO(3))( 3) at 800 °C in N(2).
Abstract: The design of a new class of non-noble-metal catalysts with oxygen reduction reaction (ORR) activity superior to that of Pt is extremely important for future fuel cell devices. Here we demonstrate a one-pot, large-scale protocol for the controlled synthesis of new one-dimensional bamboo-like carbon nanotube/Fe3C nanoparticle hybrid nanoelectrocatalysts, which are directly prepared by annealing a mixture of PEG-PPG-PEG Pluronic P123, melamine, and Fe(NO3)3 at 800 °C in N2. The resulting hybrid electrocatalysts show very high ORR activity with a half-wave potential of 0.861 V (vs reversible hydrogen electrode) in 0.10 M KOH solution, 49 mV more positive than that of 20 wt% Pt/C catalyst. Furthermore, they exhibit good ORR activity in acidic media, with an onset potential comparable to that of the Pt/C catalyst. Most importantly, they show much higher stability and better methanol tolerance, with almost no ORR polarization curve shift and no change of the oxygen reduction peak in the cyclic voltammogram in t...

770 citations

Journal ArticleDOI
TL;DR: In this article, the authors measured the mass of the MSP J0740+6620 to be 2.14 (+0.10, −0.09) M⊙ (68.3% credibility interval).
Abstract: Despite its importance to our understanding of physics at supranuclear densities, the equation of state (EoS) of matter deep within neutron stars remains poorly understood. Millisecond pulsars (MSPs) are among the most useful astrophysical objects in the Universe for testing fundamental physics, and place some of the most stringent constraints on this high-density EoS. Pulsar timing—the process of accounting for every rotation of a pulsar over long time periods—can precisely measure a wide variety of physical phenomena, including those that allow the measurement of the masses of the components of a pulsar binary system1. One of these, called relativistic Shapiro delay2, can yield precise masses for both an MSP and its companion; however, it is only easily observed in a small subset of high-precision, highly inclined (nearly edge-on) binary pulsar systems. By combining data from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) 12.5-yr data set with recent orbital-phase-specific observations using the Green Bank Telescope, we have measured the mass of the MSP J0740+6620 to be $2.14^{+0.10}_{-0.09}$ M⊙ (68.3% credibility interval; the 95.4% credibility interval is $2.14^{+0.20}_{-0.18}$ M⊙). It is highly likely to be the most massive neutron star yet observed, and serves as a strong constraint on the neutron star interior EoS. Cromartie et al. have probably found the most massive neutron star discovered so far by combining NANOGrav 12.5-yr data with radio data from the Green Bank Telescope. Millisecond pulsar J0740+6620 has a mass of 2.14 M⊙, ~0.1 M⊙ more massive than the previous record holder, and very close to the upper limit on neutron star masses from Laser Interferometer Gravitational-Wave Observatory measurements.
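The relativistic Shapiro delay underlying this mass measurement has a simple closed form: pulses crossing the companion's gravitational well arrive late by Δt = −2(Gm_c/c³) ln(1 − sin i · sin φ), where m_c is the companion mass, i the orbital inclination, and φ the orbital phase from the ascending node. A sketch under illustrative parameters; the companion mass and inclination below are placeholders, not the measured J0740+6620 values:

```python
import math

# G * M_sun / c^3, in seconds (conventional "solar mass in time units")
T_SUN = 4.925490947e-6

def shapiro_delay(m_c, sin_i, phi):
    """Shapiro delay in seconds for companion mass m_c (solar masses),
    orbital inclination sin(i), and orbital phase phi (radians,
    measured from the ascending node)."""
    r = T_SUN * m_c   # "range" parameter: timing amplitude of the delay
    s = sin_i         # "shape" parameter: sets the sharpness of the peak
    return -2.0 * r * math.log(1.0 - s * math.sin(phi))

# Illustrative values only: a ~0.26 M_sun companion seen nearly edge-on.
# The delay peaks at superior conjunction (phi = pi/2).
dt = shapiro_delay(0.26, 0.999, math.pi / 2)
print(f"{dt * 1e6:.1f} microseconds")
```

Because the delay is only a few tens of microseconds even in this near-edge-on case, and far smaller at lower inclinations, the effect is detectable only in the high-precision, nearly edge-on systems the abstract describes.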

770 citations


Authors

Showing all 422053 results

Name                 H-index   Papers   Citations
Frank B. Hu          250       1,675    253,464
Zhong Lin Wang       245       2,529    259,003
Yi Chen              217       4,342    293,080
Jing Wang            184       4,046    202,769
Peidong Yang         183       562      144,351
Xiaohui Fan          183       878      168,522
H. S. Chen           179       2,401    178,529
Douglas Scott        178       1,111    185,229
Jie Zhang            178       4,857    221,720
Pulickel M. Ajayan   176       1,223    136,241
Feng Zhang           172       1,278    181,865
Andrea Bocci         172       2,402    176,461
Yang Yang            171       2,644    153,049
Lei Jiang            170       2,244    135,205
Yang Gao             168       2,047    146,301
Network Information
Related Institutions (5)
Tsinghua University
200.5K papers, 4.5M citations

94% related

Zhejiang University
183.2K papers, 3.4M citations

93% related

Peking University
181K papers, 4.1M citations

93% related

Centre national de la recherche scientifique
382.4K papers, 13.6M citations

93% related

Spanish National Research Council
220.4K papers, 7.6M citations

93% related

Performance
Metrics
No. of papers from the Institution in previous years
Year   Papers
2023   170
2022   2,918
2021   59,109
2020   55,057
2019   52,186
2018   46,329