Institution

KAIST

Education
Daejeon, South Korea

About: KAIST is an education organization based in Daejeon, South Korea. It is known for its research contributions in the topics of Thin film & Catalysis. The organization has 35,562 authors who have published 77,661 publications receiving 1,852,854 citations. The organization is also known as: Korea Advanced Institute of Science and Technology & KAIST University.


Papers
Journal ArticleDOI
30 Aug 2013-Science
TL;DR: Metal-organic frameworks are porous materials that have potential for applications such as gas storage and separation, as well as catalysis, and methods are being developed for making nanocrystals and supercrystals of MOFs for their incorporation into devices.
Abstract: Crystalline metal-organic frameworks (MOFs) are formed by reticular synthesis, which creates strong bonds between inorganic and organic units. Careful selection of MOF constituents can yield crystals of ultrahigh porosity and high thermal and chemical stability. These characteristics allow the interior of MOFs to be chemically altered for use in gas separation, gas storage, and catalysis, among other applications. The precision commonly exercised in their chemical modification and the ability to expand their metrics without changing the underlying topology have not been achieved with other solids. MOFs whose chemical composition and shape of building units can be multiply varied within a particular structure already exist and may lead to materials that offer a synergistic combination of properties.

10,934 citations

Proceedings ArticleDOI
26 Apr 2010
TL;DR: In this paper, the authors have crawled the entire Twittersphere and found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks.
Abstract: Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found the two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap between influence inferred from the number of followers and that inferred from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets, and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that a retweeted tweet reaches an average of 1,000 users regardless of the number of followers of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on subsequent hops, signifying fast diffusion of information after the first retweet. To the best of our knowledge this work is the first quantitative study of the entire Twittersphere and information diffusion on it.

6,108 citations
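The follower-count versus PageRank comparison described in the abstract above is easy to illustrate on a toy graph. The sketch below is not the authors' pipeline; the follower graph, the usernames, and the use of networkx's pagerank are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's code): rank users of a toy follower
# graph by follower count and by PageRank, mirroring the comparison made in
# the abstract above. Requires networkx; all usernames are made up.
import networkx as nx

# An edge u -> v means "u follows v", so a node's follower count is its in-degree.
follows = [
    ("alice", "celeb"), ("bob", "celeb"), ("carol", "celeb"),
    ("celeb", "news"), ("alice", "news"), ("dave", "alice"),
    ("erin", "alice"), ("news", "celeb"),
]
G = nx.DiGraph(follows)

# Ranking 1: by number of followers (in-degree).
by_followers = sorted(G.nodes, key=lambda u: G.in_degree(u), reverse=True)

# Ranking 2: by PageRank over the follow edges.
pr = nx.pagerank(G)
by_pagerank = sorted(pr, key=pr.get, reverse=True)

print("by followers:", by_followers)
print("by PageRank :", by_pagerank)
```

On the real dataset the paper reports that these two rankings are similar, while ranking by retweets diverges from both.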

Posted Content
TL;DR: The proposed Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, can be integrated into any CNN architecture seamlessly with negligible overhead and is end-to-end trainable along with base CNNs.
Abstract: We propose the Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial; the attention maps are then multiplied with the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architecture seamlessly with negligible overhead and is end-to-end trainable along with base CNNs. We validate CBAM through extensive experiments on the ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performance with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available.

5,757 citations

Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013), as discussed by the authors, provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.

5,668 citations

Book ChapterDOI
08 Sep 2018
TL;DR: The Convolutional Block Attention Module (CBAM), as discussed by the authors, is a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, the module sequentially infers attention maps along two separate dimensions, channel and spatial; the attention maps are then multiplied with the input feature map for adaptive feature refinement.
Abstract: We propose the Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial; the attention maps are then multiplied with the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architecture seamlessly with negligible overhead and is end-to-end trainable along with base CNNs. We validate CBAM through extensive experiments on the ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performance with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available.

5,335 citations
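As a reading aid for the two CBAM entries above, here is a minimal PyTorch-style sketch of the sequential channel-then-spatial attention the abstract describes. It is not the authors' released code; the shared-MLP channel attention over average- and max-pooled descriptors, the 7x7 convolution for spatial attention, and the reduction ratio of 16 are assumptions based on the standard CBAM formulation.

```python
# Minimal CBAM sketch (assumed details, not the official implementation).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention from average- and max-pooled descriptors through a shared MLP."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # (b, c) descriptor from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # (b, c) descriptor from max pooling
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """Spatial attention from channel-wise average and max maps via a single conv."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # (b, 1, h, w)
        mx = x.amax(dim=1, keepdim=True)     # (b, 1, h, w)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Sequentially apply channel attention, then spatial attention, to a feature map."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.channel_att(x)   # refine along the channel dimension
        x = x * self.spatial_att(x)   # then refine along the spatial dimension
        return x

# Example: refine a 256-channel feature map from some backbone block.
feat = torch.randn(2, 256, 32, 32)
refined = CBAM(256)(feat)             # same shape as the input: (2, 256, 32, 32)
```

Because the module only multiplies attention maps into the incoming feature map, it leaves the feature map's shape unchanged, which is what lets it be dropped after any convolutional block of an existing CNN.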


Authors

Showing all 35,844 results

Name | H-index | Papers | Citations
Robert Langer | 281 | 2,324 | 326,306
Yi Cui | 220 | 1,015 | 199,725
Hyun-Chul Kim | 176 | 4,076 | 183,227
Kari Alitalo | 174 | 817 | 114,231
Yury Gogotsi | 171 | 956 | 144,520
Omar M. Yaghi | 165 | 459 | 163,918
Hannes Jung | 159 | 2,069 | 125,069
Yongsun Kim | 156 | 2,588 | 145,619
Xiang Zhang | 154 | 1,733 | 117,576
William A. Goddard | 151 | 1,653 | 123,322
Jongmin Lee | 150 | 2,257 | 134,772
J. Fraser Stoddart | 147 | 1,239 | 96,083
Bernhard O. Palsson | 147 | 831 | 85,051
A. Paul Alivisatos | 146 | 470 | 101,741
Taeghwan Hyeon | 139 | 563 | 75,814
Network Information
Related Institutions (5)

Institution | Papers | Citations | Relatedness
Nanyang Technological University | 112.8K | 3.2M | 96%
Georgia Institute of Technology | 119K | 4.6M | 95%
École Polytechnique Fédérale de Lausanne | 98.2K | 4.3M | 94%
Tsinghua University | 200.5K | 4.5M | 94%
Delft University of Technology | 94.4K | 2.7M | 93%

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 108
2022 | 480
2021 | 4,169
2020 | 4,412
2019 | 4,204
2018 | 3,988