Institution
King Abdullah University of Science and Technology
Education • Jeddah, Saudi Arabia
About: King Abdullah University of Science and Technology is an education organization based in Jeddah, Saudi Arabia. It is known for its research contributions in the topics of Membrane & Catalysis. The organization has 6221 authors who have published 22019 publications receiving 625706 citations. The organization is also known as KAUST.
Topics: Membrane, Catalysis, Fading, Population, Combustion
Papers
TL;DR: The draft genome for Thellungiella parvula is presented and it is shown that short reads can be assembled to a near-complete chromosome level for a eukaryotic species lacking prior genetic information.
Abstract: Dong-Ha Oh and colleagues report the draft genome of the extremophile crucifer plant Thellungiella parvula. This species is endemic to highly saline environments subject to extreme temperatures. The genome was primarily assembled using next-generation sequencing data.
331 citations
14 Jun 2020
TL;DR: G-TAD proposes a graph convolutional network (GCN) model to adaptively incorporate multi-level semantic context into video features and to cast temporal action detection as a sub-graph localization problem.
Abstract: Temporal action detection is a fundamental yet challenging task in video understanding. Video context is a critical cue to effectively detect actions, but current works mainly focus on temporal context, while neglecting semantic context as well as other important context properties. In this work, we propose a graph convolutional network (GCN) model to adaptively incorporate multi-level semantic context into video features and cast temporal action detection as a sub-graph localization problem. Specifically, we formulate video snippets as graph nodes, snippet-snippet correlations as edges, and actions associated with context as target sub-graphs. With graph convolution as the basic operation, we design a GCN block called GCNeXt, which learns the features of each node by aggregating its context and dynamically updates the edges in the graph. To localize each sub-graph, we also design an SGAlign layer to embed each sub-graph into the Euclidean space. Extensive experiments show that G-TAD is capable of finding effective video context without extra supervision and achieves state-of-the-art performance on two detection benchmarks. On ActivityNet-1.3 it obtains an average mAP of 34.09%; on THUMOS14 it reaches 51.6% at IoU@0.5 when combined with a proposal processing method. The code has been made available at https://github.com/frostinassiky/gtad.
330 citations
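The abstract above formulates video snippets as graph nodes, snippet-snippet correlations as edges, and updates node features by aggregating context. A minimal sketch of that graph-convolution idea, with illustrative names and random weights (not the authors' released code at the linked repository):

```python
import numpy as np

def build_edges(features, k=2):
    """Connect each snippet to its k most-similar snippets (cosine similarity),
    mirroring the dynamically updated edges described in the abstract."""
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-edges
    return np.argsort(-sim, axis=1)[:, :k]  # (N, k) neighbor indices

def graph_conv(features, neighbors, W_self, W_agg):
    """One aggregation step: combine each node's own feature with the
    mean of its neighbors' features, then apply a ReLU."""
    agg = features[neighbors].mean(axis=1)  # (N, D) aggregated context
    return np.maximum(features @ W_self + agg @ W_agg, 0.0)

rng = np.random.default_rng(0)
snippets = rng.normal(size=(8, 16))   # 8 video snippets, 16-dim features
W_self = rng.normal(size=(16, 16)) * 0.1
W_agg = rng.normal(size=(16, 16)) * 0.1

edges = build_edges(snippets, k=2)    # edges derived from feature similarity
out = graph_conv(snippets, edges, W_self, W_agg)
print(out.shape)                      # (8, 16): one updated feature per snippet
```

The actual GCNeXt block stacks such operations and splits aggregation across temporal and semantic edge sets; this sketch only shows the single-step aggregation pattern.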
TL;DR: In this paper, a simple and scalable direct laser machining process to fabricate MXene-on-paper coplanar microsupercapacitors is reported, where commercially available printing paper is employed as a platform to coat either hydrofluoric acid-etched or clay-like 2D Ti3C2 MXene sheets.
Abstract: A simple and scalable direct laser machining process to fabricate MXene-on-paper coplanar microsupercapacitors is reported. Commercially available printing paper is employed as a platform in order to coat either hydrofluoric acid-etched or clay-like 2D Ti3C2 MXene sheets, followed by laser machining to fabricate thick-film MXene coplanar electrodes over a large area. The size, morphology, and conductivity of the 2D MXene sheets are found to strongly affect the electrochemical performance due to the efficiency of the ion-electron kinetics within the layered MXene sheets. The areal performance metrics of Ti3C2 MXene-on-paper microsupercapacitors show very competitive power-energy densities, comparable to the reported state-of-the-art paper-based microsupercapacitors. Various device architectures are fabricated using the MXene-on-paper electrodes and successfully demonstrated as a micropower source for light emitting diodes. The MXene-on-paper electrodes show promise for flexible on-paper energy storage devices.
329 citations
TL;DR: The retrograde solubility of various hybrid perovskites is demonstrated through the correct choice of solvent(s), and their solubility curves are reported.
327 citations
TL;DR: A convolutional neural network computes a high-resolution depth map from a single RGB image with the help of transfer learning; with fewer parameters it outperforms the state of the art on two datasets and produces qualitatively better results that capture object boundaries more faithfully.
Abstract: Accurate depth estimation from images is a fundamental task in many applications including scene understanding and reconstruction. Existing solutions for depth estimation often produce blurry approximations of low resolution. This paper presents a convolutional neural network for computing a high-resolution depth map given a single RGB image with the help of transfer learning. Following a standard encoder-decoder architecture, we leverage features extracted using high performing pre-trained networks when initializing our encoder along with augmentation and training strategies that lead to more accurate results. We show how, even for a very simple decoder, our method is able to achieve detailed high-resolution depth maps. Our network, with fewer parameters and training iterations, outperforms state-of-the-art on two datasets and also produces qualitatively better results that capture object boundaries more faithfully. Code and corresponding pre-trained weights are made publicly available.
327 citations
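The depth-estimation abstract above describes a standard encoder-decoder: a pre-trained encoder downsamples the image into features, and a simple decoder upsamples back to a high-resolution map. A toy sketch of that pattern, with average pooling standing in for the pre-trained encoder and nearest-neighbor upsampling for the decoder (illustrative only, not the paper's released code):

```python
import numpy as np

def encode(img, levels=2):
    """Toy 'encoder': repeated 2x2 average pooling stands in for the
    pre-trained feature extractor; returns features at every scale."""
    feats = [img]
    for _ in range(levels):
        h, w = feats[-1].shape
        feats.append(feats[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return feats

def decode(feats):
    """Toy 'decoder': nearest-neighbor 2x upsampling, fusing each stage
    with the matching encoder feature (a skip connection)."""
    x = feats[-1]
    for skip in reversed(feats[:-1]):
        x = x.repeat(2, axis=0).repeat(2, axis=1)  # 2x spatial upsample
        x = 0.5 * (x + skip)                       # fuse with encoder feature
    return x

img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in single-channel image
depth = decode(encode(img))
print(depth.shape)  # (8, 8): same resolution as the input
```

The point of the sketch is the shape bookkeeping: the bottleneck is coarse (here 2x2), and the decoder restores full resolution, which is why even a simple decoder can yield detailed maps when the encoder features are strong.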
Authors
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Jian-Kang Zhu | 161 | 550 | 105551 |
| Jean M. J. Fréchet | 154 | 726 | 90295 |
| Kevin Murphy | 146 | 728 | 120475 |
| Jean-Luc Brédas | 134 | 1026 | 85803 |
| Carlos M. Duarte | 132 | 1173 | 86672 |
| Kazunari Domen | 130 | 908 | 77964 |
| Jian Zhou | 128 | 3007 | 91402 |
| Tai-Shung Chung | 119 | 879 | 54067 |
| Donal D. C. Bradley | 115 | 652 | 65837 |
| Lain-Jong Li | 113 | 627 | 58035 |
| Hong Wang | 110 | 1633 | 51811 |
| Peng Wang | 108 | 1672 | 54529 |
| Juan Bisquert | 107 | 450 | 46267 |
| Jian Zhang | 107 | 3064 | 69715 |
| Karl Leo | 104 | 832 | 42575 |