Author

Jin-Soo Kim

Other affiliations: Hewlett-Packard, KAIST, Seoul National University
Bio: Jin-Soo Kim is an academic researcher from Sungkyunkwan University. The author has contributed to research in topics including Quantum dot and Flash file system, has an h-index of 38, and has co-authored 219 publications receiving 6,010 citations. Previous affiliations of Jin-Soo Kim include Hewlett-Packard and KAIST.


Papers
Proceedings ArticleDOI
22 Oct 2006
TL;DR: A novel superblock-based FTL scheme that combines a set of adjacent logical blocks into a superblock, decreasing garbage collection overhead by up to 40% compared to previous FTL schemes.
Abstract: In NAND flash-based storage systems, an intermediate software layer called a flash translation layer (FTL) is usually employed to hide the erase-before-write characteristics of NAND flash memory. This paper proposes a novel superblock-based FTL scheme, which combines a set of adjacent logical blocks into a superblock. In the proposed FTL scheme, superblocks are mapped at coarse granularity, while pages inside the superblock are mapped freely at fine granularity to any location in several physical blocks. To reduce extra storage and flash memory operations, the fine-grain mapping information is stored in the spare area of NAND flash memory. This hybrid mapping technique has the flexibility provided by fine-grain address translation, while reducing the memory overhead to the level of coarse-grain address translation. Our experimental results show that the proposed FTL scheme decreases the garbage collection overhead by up to 40% compared to previous FTL schemes.
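To make the hybrid mapping concrete, here is a minimal Python sketch of the idea, not the paper's implementation: a coarse map ties each logical superblock to its physical blocks, while a fine map records where each page actually landed. All names and constants (SuperblockFTL, PAGES_PER_BLOCK, BLOCKS_PER_SUPERBLOCK) are illustrative assumptions, and a dictionary stands in for the fine-grain map that the paper keeps in the NAND spare area.

```python
# Illustrative sketch of superblock-style hybrid mapping (not the paper's code).
PAGES_PER_BLOCK = 64           # assumed pages per physical block
BLOCKS_PER_SUPERBLOCK = 4      # assumed adjacent logical blocks per superblock
PAGES_PER_SUPERBLOCK = PAGES_PER_BLOCK * BLOCKS_PER_SUPERBLOCK

class SuperblockFTL:
    def __init__(self):
        self.superblock_map = {}   # coarse: superblock id -> set of physical blocks
        self.page_map = {}         # fine: (superblock id, offset) -> (block, page);
                                   # the paper keeps this in the NAND spare area

    def write(self, lpn, phys_block, phys_page):
        sb, offset = divmod(lpn, PAGES_PER_SUPERBLOCK)
        # A logical page may land on any free page of any block in the superblock.
        self.superblock_map.setdefault(sb, set()).add(phys_block)
        self.page_map[(sb, offset)] = (phys_block, phys_page)

    def translate(self, lpn):
        sb, offset = divmod(lpn, PAGES_PER_SUPERBLOCK)
        return self.page_map.get((sb, offset))   # None if never written
```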

452 citations

Proceedings ArticleDOI
Seon-Yeong Park, Dawoon Jung, Jeong-Uk Kang, Jin-Soo Kim, Joonwon Lee
22 Oct 2006
TL;DR: The Clean-First LRU (CFLRU) replacement algorithm is proposed, which exploits the characteristics of flash memory and reduces the average replacement cost by 28.4% in the swap system and by 26.2% in the buffer cache, compared with the LRU algorithm.
Abstract: In most operating systems customized for disk-based storage systems, the replacement algorithm considers only the number of memory hits. However, flash memory has different read and write costs in terms of both time and energy, so a replacement algorithm for flash memory should consider not only the hit count but also the replacement cost incurred by selecting dirty victims. The replacement cost of a dirty page is higher than that of a clean page with regard to both access time and energy consumption. In this paper, we propose the Clean-First LRU (CFLRU) replacement algorithm, which exploits these characteristics of flash memory. CFLRU splits the LRU list into a working region and a clean-first region, and adopts a policy that preferentially evicts clean pages in the clean-first region as long as the number of page hits in the working region is preserved at a suitable level. In trace-driven simulations, the proposed algorithm reduces the average replacement cost by 28.4% in the swap system and by 26.2% in the buffer cache, compared with the LRU algorithm. We also implement the CFLRU algorithm in the Linux kernel and present some optimization issues.
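As a rough illustration of the eviction rule, the sketch below implements a clean-first scan over a fixed-size window at the LRU end of the list. It is a simplification of the paper's scheme (which tunes the clean-first region adaptively), and the class and parameter names are assumptions of this sketch.

```python
from collections import OrderedDict

# Simplified CFLRU-style cache: the `window` least-recently-used entries form
# the clean-first region; names and the fixed window size are illustrative.
class CFLRUCache:
    def __init__(self, capacity, window):
        self.capacity = capacity
        self.window = window
        self.pages = OrderedDict()            # page -> dirty flag; LRU first

    def access(self, page, dirty=False):
        if page in self.pages:
            dirty = dirty or self.pages.pop(page)   # keep dirty bit on re-access
        elif len(self.pages) >= self.capacity:
            self.evict()
        self.pages[page] = dirty              # (re)insert at the MRU end

    def evict(self):
        # Prefer the least-recently-used *clean* page inside the window:
        # dropping it requires no flash write-back.
        for page in list(self.pages)[:self.window]:
            if not self.pages[page]:
                del self.pages[page]
                return
        # No clean page in the window: fall back to plain LRU and pay the
        # cost of writing the dirty victim back to flash.
        self.pages.popitem(last=False)
```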

434 citations

Journal ArticleDOI
TL;DR: In this article, the authors compared the abilities of frequency ratio (FR), analytic hierarchy process (AHP), logistic regression (LR), and artificial neural network (ANN) models to produce landslide susceptibility index (LSI) maps for use in predicting possible landslide occurrence and limiting damage.
Abstract: Every year, the Republic of Korea experiences numerous landslides, resulting in property damage and casualties. This study compared the abilities of frequency ratio (FR), analytic hierarchy process (AHP), logistic regression (LR), and artificial neural network (ANN) models to produce landslide susceptibility index (LSI) maps for use in predicting possible landslide occurrence and limiting damage. The areas under the relative operating characteristic (ROC) curves for the FR, AHP, LR, and ANN LSI maps were 0.794, 0.789, 0.794, and 0.806, respectively. Thus, the LSI maps developed by all the models had similar accuracy. A cross-tabulation analysis of landslide occurrence against non-occurrence areas showed generally similar overall accuracies of 65.27%, 64.35%, 65.51%, and 68.47% for the FR, AHP, LR, and ANN models, respectively. A correlation analysis between the models demonstrated that the LR and ANN models had the highest correlation (0.829), whereas the FR and AHP models had the lowest correlation (0.619).
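The model comparison above boils down to scoring each susceptibility index against observed occurrence. The following sketch, using synthetic data and scikit-learn (an assumption; the paper does not specify tooling), shows the shape of such an ROC AUC comparison.

```python
# Synthetic illustration of comparing susceptibility indices by ROC AUC;
# the data and model scores here are random stand-ins, not the paper's.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
occurred = rng.integers(0, 2, size=1000)      # 1 = landslide observed in a cell

# Stand-ins for per-cell FR / AHP / LR / ANN landslide susceptibility indices.
models = {name: 0.5 * occurred + rng.random(1000)
          for name in ("FR", "AHP", "LR", "ANN")}

for name, lsi in models.items():
    print(f"{name}: AUC = {roc_auc_score(occurred, lsi):.3f}")
```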

321 citations

Journal ArticleDOI
Heeseung Jo, Jeong-Uk Kang, Seon-Yeong Park, Jin-Soo Kim, Joonwon Lee
TL;DR: A flash-aware buffer management scheme is suggested that reduces the number of erase operations by selecting a victim based on its page utilization rather than on the traditional LRU policy.
Abstract: This paper presents a novel buffer management scheme for portable media players equipped with flash memory. Though flash memory has various advantages over magnetic disks, such as a small and lightweight form factor, solid-state reliability, low power consumption, and shock resistance, its physical characteristics impose several limitations. Most notably, it takes a relatively long time to write data in flash memory, and data cannot be overwritten without first being erased. Since an erase operation is performed on a larger block unit, the strategy employed for mapping logical blocks onto physical pages affects the real performance of flash memory. This article suggests a flash-aware buffer management scheme that reduces the number of erase operations by selecting a victim based on its page utilization rather than on the traditional LRU policy. Our scheme effectively minimizes the number of write and erase operations in flash memory, reducing the total execution time by 17% compared to the LRU policy.
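A minimal sketch of the utilization-based victim choice, with names of my own invention (FlashAwareBuffer, put): when the buffer is full, the block with the most cached pages is flushed in one go, so the erase cost is amortized over many page writes. This illustrates the stated policy, not the paper's implementation.

```python
# Illustrative page-utilization victim selection (not the paper's code).
class FlashAwareBuffer:
    def __init__(self, capacity, pages_per_block):
        self.capacity = capacity
        self.pages_per_block = pages_per_block
        self.blocks = {}                       # block id -> cached page offsets

    def put(self, lpn):
        block, offset = divmod(lpn, self.pages_per_block)
        if offset in self.blocks.get(block, ()):
            return                             # page already buffered
        if sum(map(len, self.blocks.values())) >= self.capacity:
            self.evict_block()
        self.blocks.setdefault(block, set()).add(offset)

    def evict_block(self):
        # Victim = block with the highest page utilization in the buffer;
        # flushing all its pages together costs at most one erase.
        victim = max(self.blocks, key=lambda b: len(self.blocks[b]))
        self.blocks.pop(victim)                # write the whole block back
```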

297 citations

Journal ArticleDOI
TL;DR: This paper suggests an architecture for the automatic execution of large-scale workflow-based applications on dynamically and elastically provisioned computing resources, built around a core algorithm named PBTS (Partitioned Balanced Time Scheduling), which estimates the minimum number of computing hosts required to execute a workflow within a user-specified finish time.
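PBTS itself partitions the schedule into balanced time slots and handles task dependencies; the hedged sketch below only illustrates the core question it answers, using independent tasks and a crude list-scheduling makespan model. Every name here (estimate_makespan, min_hosts) is an assumption for illustration.

```python
# Toy illustration of "fewest hosts that meet the deadline" (not PBTS itself).
def estimate_makespan(task_times, hosts):
    # Longest-processing-time list scheduling as a crude makespan model;
    # real workflow scheduling must also respect task dependencies.
    loads = [0.0] * hosts
    for t in sorted(task_times, reverse=True):
        loads[loads.index(min(loads))] += t
    return max(loads)

def min_hosts(task_times, deadline):
    for hosts in range(1, len(task_times) + 1):
        if estimate_makespan(task_times, hosts) <= deadline:
            return hosts
    return None            # infeasible: some single task exceeds the deadline

print(min_hosts([3.0, 2.0, 2.0, 1.0, 1.0], deadline=4.0))   # prints 3
```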

250 citations


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological and petrographic characterisation of core samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have mostly been identified as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits better fit submarine fan systems. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. This study presents a sedimentological and petrographic characterisation of core samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England), revealing a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) Calciturbidites, consisting mostly of high- to low-density, wavy-laminated, bioclast-rich facies; 2) Low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) Calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.

2,933 citations

Proceedings ArticleDOI
16 Oct 2006
TL;DR: This paper recommends benchmark selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open-source, client-side Java benchmarks that improve over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory-system requirements.
Abstract: Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex runtime tradeoffs due to dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, the dominant purveyor of benchmarks, compounded this problem by institutionalizing these methodologies for their Java benchmark suite. This paper recommends benchmark selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open-source, client-side Java benchmarks. We demonstrate that the complex interactions of (1) architecture, (2) compiler, (3) virtual machine, (4) memory management, and (5) application require more extensive evaluation than C, C++, and Fortran, which stress (4) much less and do not require (3). We use and introduce new value, time-series, and statistical metrics for static and dynamic properties such as code complexity, code size, heap composition, and pointer mutations. No benchmark suite is definitive, but these metrics show that DaCapo improves over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory-system requirements. This paper takes a step towards improving methodologies for choosing and evaluating benchmarks to foster innovation in system design and implementation for Java and other managed languages.

1,561 citations

OtherDOI
TL;DR: In this article, the amplitude amplification algorithm is presented, which finds a good solution after an expected number of applications of the algorithm and its inverse that is proportional to $1/\sqrt{a}$.
Abstract: Consider a Boolean function $\chi: X \to \{0,1\}$ that partitions set $X$ between its good and bad elements, where $x$ is good if $\chi(x)=1$ and bad otherwise. Consider also a quantum algorithm $\mathcal{A}$ such that $\mathcal{A} |0\rangle = \sum_{x\in X} \alpha_x |x\rangle$ is a quantum superposition of the elements of $X$, and let $a$ denote the probability that a good element is produced if $\mathcal{A} |0\rangle$ is measured. If we repeat the process of running $\mathcal{A}$, measuring the output, and using $\chi$ to check the validity of the result, we should expect to repeat $1/a$ times on average before a solution is found. *Amplitude amplification* is a process that allows one to find a good $x$ after an expected number of applications of $\mathcal{A}$ and its inverse which is proportional to $1/\sqrt{a}$, assuming algorithm $\mathcal{A}$ makes no measurements. This is a generalization of Grover's searching algorithm, in which $\mathcal{A}$ was restricted to producing an equal superposition of all members of $X$ and we had a promise that a single $x$ existed such that $\chi(x)=1$. Our algorithm works whether or not the value of $a$ is known ahead of time. In case the value of $a$ is known, we can find a good $x$ after a number of applications of $\mathcal{A}$ and its inverse which is proportional to $1/\sqrt{a}$ even in the worst case. We show that this quadratic speedup can also be obtained for a large family of search problems for which good classical heuristics exist. Finally, as our main result, we combine ideas from Grover's and Shor's quantum algorithms to perform amplitude estimation, a process that allows one to estimate the value of $a$. We apply amplitude estimation to the problem of *approximate counting*, in which we wish to estimate the number of $x\in X$ such that $\chi(x)=1$. We obtain optimal quantum algorithms in a variety of settings.
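The quadratic speedup can be checked numerically. Writing $a = \sin^2\theta$, $k$ applications of the amplification operator boost the success probability to $\sin^2((2k+1)\theta)$, so roughly $\pi/(4\sqrt{a})$ iterations suffice. The short script below is an illustration of that schedule, not the paper's algorithm.

```python
import math

# Numerical check of the amplitude amplification schedule: with
# sin(theta)^2 = a, k iterations give success probability sin((2k+1)*theta)^2.
a = 0.001                                   # initial success probability
theta = math.asin(math.sqrt(a))
k = math.floor(math.pi / (4 * theta))       # near-optimal iteration count

boosted = math.sin((2 * k + 1) * theta) ** 2
print(f"iterations k = {k}  (1/sqrt(a) = {1/math.sqrt(a):.0f})")
print(f"success probability after k iterations = {boosted:.4f}")
print(f"expected classical repetitions = {1/a:.0f}")
```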

1,276 citations

Journal ArticleDOI
TL;DR: The essential Raman scattering processes of the entire first- and second-order modes in intrinsic graphene are described, along with the extensive capabilities of Raman spectroscopy for investigating the fundamental properties of graphene under external perturbations.
Abstract: Graphene-based materials exhibit remarkable electronic, optical, and mechanical properties, which has resulted in both high scientific interest and huge potential for a variety of applications. Furthermore, the family of graphene-based materials is growing because of developments in preparation methods. Raman spectroscopy is a versatile tool to identify and characterize the chemical and physical properties of these materials, both at the laboratory and mass-production scale. This technique is so important that most of the papers published concerning these materials contain at least one Raman spectrum. Thus, here, we systematically review the developments in Raman spectroscopy of graphene-based materials from both fundamental research and practical (i.e., device applications) perspectives. We describe the essential Raman scattering processes of the entire first- and second-order modes in intrinsic graphene. Furthermore, the shear, layer-breathing, G and 2D modes of multilayer graphene with different stacking orders are discussed. Techniques to determine the number of graphene layers, to probe resonance Raman spectra of monolayer and multilayer graphenes and to obtain Raman images of graphene-based materials are also presented. The extensive capabilities of Raman spectroscopy for the investigation of the fundamental properties of graphene under external perturbations are described, which have also been extended to other graphene-based materials, such as graphene quantum dots, carbon dots, graphene oxide, nanoribbons, chemical vapor deposition-grown and SiC epitaxially grown graphene flakes, composites, and graphene-based van der Waals heterostructures. These fundamental properties have been used to probe the states, effects, and mechanisms of graphene materials present in the related heterostructures and devices. We hope that this review will be beneficial in all the aspects of graphene investigations, from basic research to material synthesis and device applications.

1,184 citations