Topic

String (computer science)

About: String (computer science) is a research topic. Over the lifetime, 19,430 publications have been published within this topic, receiving 333,247 citations. The topic is also known as: str & s.


Papers
Patent
Jae-Hun Jeong, Han-soo Kim, Wonseok Cho, Jae-Hoon Jang, Sunil Shim
02 Feb 2010
TL;DR: In this patent, a first selection line is commonly connected to the at least one pair of first selection transistors of the NAND string, and a plurality of word lines are coupled to the plurality of memory cells.
Abstract: A non-volatile memory device having a vertical structure includes a NAND string having a vertical structure. The NAND string includes a plurality of memory cells, and at least one pair of first selection transistors arranged to be adjacent to a first end of the plurality of memory cells. A plurality of word lines are coupled to the plurality of memory cells of the NAND string. A first selection line is commonly connected to the at least one pair of first selection transistors of the NAND string.

182 citations

Patent
21 Nov 2006
TL;DR: This patent describes a system for finding and presenting content items in response to keystrokes entered by a user on an input device having a known layout of overloaded keys selected from a set of key layouts.
Abstract: A system for finding and presenting content items in response to keystrokes entered by a user on an input device having a known layout of overloaded keys selected from a set of key layouts. The system includes (1) a database containing content items and terms characterizing the content items; (2) input logic for receiving keystrokes from the user and building a string corresponding to incremental entries by the user, each item in the string having the set of alphanumeric symbols associated with a corresponding keystroke; (3) mapping logic to map the string to the database to find the most likely content items corresponding to the incremental entries, the mapping logic operating in accordance with a defined error model corresponding to the known layout of overloaded keys; and (4) presentation logic for ordering the most likely content items identified by the mapping logic and for presenting the most likely content items.
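As a rough, hypothetical illustration of the overloaded-key idea (not the patented mapping logic or error model), the Python sketch below maps phone-keypad digits to letter sets and incrementally filters a toy term list; the keypad layout, the TERMS list, and the exact-prefix matching are assumptions made for illustration.

```python
# Hypothetical sketch of overloaded-key (T9-style) incremental matching.
# The keypad layout, term list, and exact-prefix "error model" are invented
# for illustration; the patent's mapping logic and error model are richer.

KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

TERMS = ["string", "strong", "search", "pattern", "suffix"]  # toy "database"

def matches(keystrokes: str, term: str) -> bool:
    """True if the term's prefix is reachable from the keystroke sequence."""
    if len(keystrokes) > len(term):
        return False
    return all(ch in KEYPAD.get(key, "") for key, ch in zip(keystrokes, term))

def incremental_search(keystrokes: str) -> list[str]:
    """Return candidate terms for the keystrokes entered so far."""
    return [t for t in TERMS if matches(keystrokes, t)]

if __name__ == "__main__":
    print(incremental_search("787"))   # ['string', 'strong']
    print(incremental_search("7874"))  # ['string']
```

A fuller error model, as the abstract describes, would also tolerate mistyped or omitted keystrokes when ranking the most likely content items.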

181 citations

Proceedings ArticleDOI
07 Nov 2005
TL;DR: The idea that exploit predicates limit polymorphism is subjected to quantitative analysis with a symbolic execution tool called DACODA, and it is concluded that single contiguous byte string signatures are not effective for content filtering, and that token-based byte string signatures composed of smaller substrings are only semantically rich enough to be effective for content filtering if the vulnerability lies in a part of a protocol that is not commonly used.
Abstract: Vulnerabilities that allow worms to hijack the control flow of each host that they spread to are typically discovered months before the worm outbreak, but are also typically discovered by third party researchers. A determined attacker could discover vulnerabilities as easily and create zero-day worms for vulnerabilities unknown to network defenses. It is important for an analysis tool to be able to generalize from a new exploit observed and derive protection for the vulnerability. Many researchers have observed that certain predicates of the exploit vector must be present for the exploit to work and that therefore these predicates place a limit on the amount of polymorphism and metamorphism available to the attacker. We formalize this idea and subject it to quantitative analysis with a symbolic execution tool called DACODA. Using DACODA we provide an empirical analysis of 14 exploits (seven of them actual worms or attacks from the Internet, caught by Minos with no prior knowledge of the vulnerabilities and no false positives observed over a period of six months) for four operating systems. Evaluation of our results in the light of these two models leads us to conclude that 1) single contiguous byte string signatures are not effective for content filtering, and token-based byte string signatures composed of smaller substrings are only semantically rich enough to be effective for content filtering if the vulnerability lies in a part of a protocol that is not commonly used, and that 2) practical exploit analysis must account for multiple processes, multithreading, and kernel processing of network data necessitating a focus on primitives instead of vulnerabilities.
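To make the paper's contrast concrete, here is a minimal, hypothetical sketch of the two signature styles it discusses: a single contiguous byte-string signature versus a token-based signature whose smaller substrings must all appear in order. The tokens and payload below are invented for illustration and are not taken from the paper's exploits.

```python
# Hypothetical sketch contrasting a contiguous byte-string signature with a
# token-based signature (ordered smaller substrings). Tokens and payloads are
# invented for illustration only.

def contiguous_match(signature: bytes, payload: bytes) -> bool:
    """Single contiguous byte-string signature: one exact substring."""
    return signature in payload

def token_match(tokens: list[bytes], payload: bytes) -> bool:
    """Token-based signature: every token must appear in the payload, in order."""
    pos = 0
    for token in tokens:
        idx = payload.find(token, pos)
        if idx < 0:
            return False
        pos = idx + len(token)
    return True

if __name__ == "__main__":
    payload = b"GET /vuln.cgi?name=AAAA\x90\x90\xcc HTTP/1.0"
    print(contiguous_match(b"/vuln.cgi?name=AAAA", payload))            # True
    print(token_match([b"/vuln.cgi", b"name=", b"\x90\x90"], payload))  # True
```

A polymorphic exploit can more easily avoid one long contiguous signature than a set of ordered tokens, which is why the paper evaluates both forms.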

180 citations

PatentDOI
TL;DR: A speech synthesizing method includes determining the accent type of the input character string, selecting the prosodic model data from a prosody dictionary that stores typical ones of the prosodic models representing the prosodic information for the character strings in a word dictionary, and connecting the selected waveform data with each other.
Abstract: A speech synthesizing method includes determining the accent type of the input character string, selecting the prosodic model data from a prosody dictionary that stores typical ones of the prosodic models representing the prosodic information for the character strings in a word dictionary, based on the input character string and the accent type, transforming the prosodic information of the prosodic model when the character string of the selected prosodic model is not coincident with the input character string, selecting the waveform data corresponding to each character of the input character string from a waveform dictionary, based on the prosodic model data after transformation, and connecting the selected waveform data with each other. Therefore, a difference between an input character string and a character string stored in a dictionary is absorbed, making it possible to synthesize a natural voice.
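As a loose sketch of the pipeline the abstract outlines (accent-type lookup, prosodic-model selection, adjustment on mismatch, waveform selection, and concatenation), the toy Python below uses invented dictionaries; every name and data structure is an assumption, not the patent's actual representation.

```python
# Loose, hypothetical sketch of the synthesis pipeline described above:
# look up the accent type, pick a prosodic model, adjust it when the model's
# character string differs from the input, select per-character waveforms,
# and concatenate them. All dictionaries and field names are invented.

WORD_DICT = {"hello": {"accent": 1}}                       # word dictionary
PROSODY_DICT = {("hello", 1): {"string": "hallo",          # typical prosodic model
                               "durations": [90, 80, 70, 80, 120]}}
WAVEFORM_DICT = {ch: f"<wave:{ch}>" for ch in "abcdefghijklmnopqrstuvwxyz"}

def synthesize(text: str) -> str:
    accent = WORD_DICT[text]["accent"]                      # 1) accent type
    model = PROSODY_DICT[(text, accent)]                    # 2) prosodic model
    if model["string"] != text:                             # 3) transform on mismatch
        model = {**model, "string": text}                   #    (toy: just re-label)
    units = [WAVEFORM_DICT[ch] for ch in text]              # 4) waveform selection
    return "".join(units)                                   # 5) concatenation

if __name__ == "__main__":
    print(synthesize("hello"))  # <wave:h><wave:e><wave:l><wave:l><wave:o>
```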

179 citations

Journal ArticleDOI
TL;DR: This paper gives the first nontrivial compressed matching algorithm for the classic adaptive compression scheme, the LZ77 algorithm, which is known to compress more than other dictionary compression schemes, such as LZ78 and LZW, though for strings with constant per-bit entropy, all these schemes compress optimally in the limit.
Abstract: String matching and compression are two widely studied areas of computer science. The theory of string matching has a long association with compression algorithms. Data structures from string matching can be used to derive fast implementations of many important compression schemes, most notably the Lempel-Ziv (LZ77) algorithm. Intuitively, once a string has been compressed, and therefore its repetitive nature has been elucidated, one might be tempted to exploit this knowledge to speed up string matching. The Compressed Matching Problem is that of performing string matching in a compressed text, without uncompressing it. More formally, let T be a text, let Z be the compressed string representing T, and let P be a pattern. The Compressed Matching Problem is that of deciding if P occurs in T, given only P and Z. Compressed matching algorithms have been given for several compression schemes such as LZW. In this paper we give the first nontrivial compressed matching algorithm for the classic adaptive compression scheme, the LZ77 algorithm. In practice, the LZ77 algorithm is known to compress more than other dictionary compression schemes, such as LZ78 and LZW, though for strings with constant per-bit entropy, all these schemes compress optimally in the limit. However, for strings with o(1) per-bit entropy, while it was recently shown that LZ77 gives compression to within a constant factor of optimal, schemes such as LZ78 and LZW may deviate from optimality by an exponential factor. Asymptotically, compressed matching is only relevant if |Z| = o(|T|), i.e., if the compression ratio |T|/|Z| is more than a constant. These results show that LZ77 is the appropriate compression method in such settings. We present an LZ77 compressed matching algorithm which runs in time O(n log²(u/n) + p), where n = |Z|, u = |T|, and p = |P|. Compare this with the naive "decompression" algorithm, which takes time Θ(u + p) to decide if P occurs in T. Writing u + p as n(u/n) + p, we see that we have improved the complexity, replacing the compression factor u/n by a factor log²(u/n). Our algorithm is competitive in the sense that O(n log²(u/n) + p) = O(u + p), and opportunistic in the sense that O(n log²(u/n) + p) = o(u + p) if n = o(u) and p = o(u).
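For contrast with the paper's O(n log²(u/n) + p) algorithm, the Θ(u + p) baseline it improves on is simply to decompress and then search. The sketch below assumes a toy LZ77 encoding as (offset, length, next-char) triples; that triple format and the helper names are illustrative assumptions, not the paper's construction.

```python
# Naive Theta(u + p) baseline: fully decompress the LZ77 text, then search.
# The (offset, length, next_char) triple format is an assumption for this toy
# example; real LZ77 variants differ in detail.

def lz77_decompress(triples: list[tuple[int, int, str]]) -> str:
    out = []
    for offset, length, next_char in triples:
        if length:
            start = len(out) - offset
            for i in range(length):          # a copy may overlap its own output
                out.append(out[start + i])
        out.append(next_char)
    return "".join(out)

def naive_compressed_match(pattern: str, triples) -> bool:
    # Decompression costs Theta(u); the substring search adds the p term.
    return pattern in lz77_decompress(triples)

if __name__ == "__main__":
    # Encodes "abababc": literal 'a', literal 'b', copy 4 chars from offset 2, then 'c'.
    z = [(0, 0, "a"), (0, 0, "b"), (2, 4, "c")]
    print(lz77_decompress(z))                 # abababc
    print(naive_compressed_match("babc", z))  # True
```

The paper's contribution is to decide whether P occurs in T while touching only the n compressed triples, replacing the u/n blow-up of decompression by a log²(u/n) factor.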

179 citations


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations, 88% related
Tree (data structure): 44.9K papers, 749.6K citations, 86% related
Graph (abstract data type): 69.9K papers, 1.2M citations, 85% related
Computational complexity theory: 30.8K papers, 711.2K citations, 82% related
Supervised learning: 20.8K papers, 710.5K citations, 80% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    491
2020    704
2019    759
2018    816
2017    806