scispace - formally typeset
Topic

Cache

About: Cache is a research topic. Over the lifetime, 59,167 publications have been published within this topic, receiving 976,633 citations.


Papers
Journal ArticleDOI
Hiroshi Inoue, Toshio Nakatani
25 Oct 2009
TL;DR: A sampling-based profiler that exploits a processor's HPM (Hardware Performance Monitor) to collect information on running Java applications for use by the Java VM, with new techniques that leverage the HPM's sampling facility to generate object creation profiles and lock activity profiles.
Abstract: This paper describes our sampling-based profiler that exploits a processor's HPM (Hardware Performance Monitor) to collect information on running Java applications for use by the Java VM. Our profiler provides two novel features: Java-level event profiling and lightweight context-sensitive event profiling. For Java events, we propose new techniques to leverage the sampling facility of the HPM to generate object creation profiles and lock activity profiles. HPM sampling is the key to achieving lower overhead than profilers that do not rely on hardware support. To sample object creations with the HPM, which can only sample hardware events such as executed instructions or cache misses, we correlate object creations with the store instructions for Java object headers. For the lock activity profile, we introduce an instrumentation-based technique, called ProbeNOP, which uses a special NOP instruction whose executions are counted by the HPM. For context-sensitive event profiling, we propose a new technique called CallerChaining, which detects the calling context of HPM events based on the call stack depth (the value of the stack frame pointer). We show that it can detect the calling contexts in many programs, including a large commercial application. Our proposed techniques enable both programmers and runtime systems to extract more valuable information from the HPM to understand and optimize programs without adding significant runtime overhead.

23 citations
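The store-instruction correlation above is hardware-specific, but the underlying sampling idea can be illustrated in software. Below is a minimal Python sketch, not the paper's implementation: a SIGPROF interval timer periodically interrupts execution and records the function at the top of the stack, the software analogue of attributing HPM events to code. The `busy_work` function and the 1 ms period are invented for the example.

```python
import collections
import signal

samples = collections.Counter()

def on_sample(signum, frame):
    # Attribute this sample to the function executing when the timer fired,
    # the software analogue of attributing a sampled HPM event to code.
    samples[frame.f_code.co_name] += 1

def busy_work(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

signal.signal(signal.SIGPROF, on_sample)
signal.setitimer(signal.ITIMER_PROF, 0.001, 0.001)  # sample every ~1 ms of CPU time
busy_work(2_000_000)
signal.setitimer(signal.ITIMER_PROF, 0, 0)          # stop sampling

print(samples.most_common(1)[0][0])  # the function where most samples landed
```

On a POSIX system this reports the hottest function; real HPM sampling differs in that it counts hardware events such as cache misses or retired instructions rather than elapsed CPU time.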

Patent
27 May 1999
TL;DR: In this paper, the authors present a method for trusted verification of instructions in a module of a computer program that first determines whether a suspect module to be loaded is from an untrusted source, such as the internet.
Abstract: A method, computer program, signal transmission and apparatus for trusted verification of instructions in a module of a computer program first determine whether a suspect module to be loaded is from an untrusted source, such as the internet. If it is from an untrusted source, the suspect module is loaded and one-module-at-a-time pre-verification is performed on it before linking. If the suspect module passes such pre-verification, the module is stored in a cache.

23 citations
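The load-verify-cache flow described above can be sketched roughly in Python. This is not the patent's mechanism: the origin scheme, the marker-based "verification", and all names below are invented for illustration.

```python
verified_cache = {}  # module name -> verified code, analogous to the patent's cache

TRUSTED_PREFIXES = ("local://",)  # hypothetical notion of a trusted origin

def pre_verify(code: bytes) -> bool:
    # Stand-in for one-module-at-a-time pre-verification before linking:
    # here we merely reject an obviously bad marker.
    return b"FORBIDDEN" not in code

def load_module(name: str, origin: str, code: bytes) -> bytes:
    if origin.startswith(TRUSTED_PREFIXES):
        return code                      # trusted source: no pre-verification
    if name in verified_cache:
        return verified_cache[name]      # already verified and cached earlier
    if not pre_verify(code):
        raise ValueError(f"module {name} failed pre-verification")
    verified_cache[name] = code          # passed: store the module in the cache
    return code

load_module("applet", "http://example.net/applet", b"safe bytecode")
print("applet" in verified_cache)  # the untrusted-but-verified module is cached
```

A second load of "applet" would be served from `verified_cache` without re-verification, which is the point of caching verified modules.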

Patent
05 Jun 1997
TL;DR: In this paper, an address queue connected to the address cache stores missed addresses in the order that the address cache is probed, a memory controller receives the missed addresses, and a data queue receives the data stored at those addresses from the memory controller.
Abstract: A cache includes an address cache for storing memory addresses. An address queue is connected to the address cache for storing missed addresses in the order that the address cache is probed. A memory controller receives the missed addresses from the address queue. A data queue receives data stored at the missed addresses from the memory controller. A probe result queue is connected to the address cache for storing data cache line addresses and hit/miss information. A multiplexer connected to the data cache, the data queue, and the probe result queue selects output data from the data cache or the data queue depending on the hit/miss information.

23 citations
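The queue structure in this abstract can be modeled as a toy Python class, purely as a sketch of the data flow; the class, method names, and example addresses are invented, and a real implementation would be hardware with fixed-width queues.

```python
from collections import deque

class MissQueueCache:
    """Toy model of the patented structure: an address cache of data, an
    address queue of misses (in probe order), a data queue filled by the
    memory controller, and a probe result queue of hit/miss outcomes."""

    def __init__(self, memory):
        self.memory = memory          # backing store (the "memory controller")
        self.data_cache = {}          # address -> cached data
        self.address_queue = deque()  # missed addresses, in probe order
        self.probe_results = deque()  # (address, hit?) per probe

    def probe(self, addr):
        hit = addr in self.data_cache
        self.probe_results.append((addr, hit))
        if not hit:
            self.address_queue.append(addr)

    def service_misses(self):
        # The memory controller consumes the address queue, in order,
        # and fills a data queue with the data at the missed addresses.
        data_queue = deque(self.memory[a] for a in self.address_queue)
        self.address_queue.clear()
        return data_queue

    def mux_output(self, data_queue):
        # The multiplexer selects cache data on a hit, queue data on a miss.
        out = []
        for addr, hit in self.probe_results:
            if hit:
                out.append(self.data_cache[addr])
            else:
                value = data_queue.popleft()
                self.data_cache[addr] = value  # fill the cache on the way out
                out.append(value)
        self.probe_results.clear()
        return out

memory = {0x10: "A", 0x20: "B"}
cache = MissQueueCache(memory)
cache.probe(0x10)                                # first access: a miss
print(cache.mux_output(cache.service_misses()))  # data arrives via the data queue
cache.probe(0x10)                                # second access: a hit
print(cache.mux_output(cache.service_misses()))  # data comes from the cache
```

Keeping misses in a queue in probe order is what lets the multiplexer later interleave cache hits and returning memory data back into the original access order.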

Patent
30 Jan 2013
TL;DR: A method and a device for reading data based on a data cache: a key word is extracted from the read access request, the cache databank is checked for a data key value corresponding to that key word, and a mark value for the data to be obtained is initialized when no cached entry has been generated.
Abstract: The invention discloses a method and a device for reading data based on a data cache. The method comprises the steps of: receiving a read access request (from a user) for obtaining data, and extracting the generated key word; if the cache databank holds no data key value for the key word, initializing a mark value for the data to be obtained; or, if the cache databank holds a data key value for the key word but the cached data it contains is invalid, setting the mark value of the data to be obtained to the mark value in the cached data key value; reading the corresponding sub-bank of the databank to obtain the data for the generated key word; obtaining timestamp information for the sub-bank, setting the key-value pair's data key value information from the mark value, the timestamp information, and the obtained data, and updating the cache databank; and outputting the obtained data. With the method and the device, data caching efficiency is improved and the overall performance of the data cache is optimized.

23 citations
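The read path described above resembles a cache-aside lookup with validity marks and timestamps. The following Python sketch shows that general pattern under invented names (`mark`, `ts`, the example key and value); it is a simplification, not the patent's method.

```python
import time

database = {"user:1": "Alice"}   # hypothetical backing "databank" sub-bank
cache = {}                        # key -> {"mark": ..., "ts": ..., "data": ...}

def read(key):
    entry = cache.get(key)
    if entry is not None and entry["mark"] == "valid":
        return entry["data"]            # valid cached entry: serve directly
    # Key not cached, or cached but its data is marked invalid:
    # fall back to the database and rebuild the cached key-value pair.
    data = database[key]                # read the databank sub-bank
    cache[key] = {
        "mark": "valid",                # mark value of the data obtained
        "ts": time.time(),              # timestamp of the databank read
        "data": data,
    }
    return data

print(read("user:1"))   # miss path: reads the database and updates the cache
print(read("user:1"))   # hit path: served from the cache
```

Storing the timestamp alongside the data is what would let a fuller implementation decide later whether a cached entry is still valid or must be marked invalid.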

Patent
29 Jan 2001
TL;DR: An image processing system driven by a process tree data structure, in which each frame is rendered in a time determined by the amount of processing the tree defines, and the user can selectively cache intermediately rendered frames at nodes whose contributing branches are relatively stable in their configuration.
Abstract: An image processing system processes image data in response to a sequence of image processing steps defined by a process tree data structure. The process tree comprises a plurality of interconnected nodes, including input nodes and at least one output node. Output rendering is performed a frame at a time, and each frame is rendered in a time determined by the amount of processing defined by the process tree. The process tree may comprise many branches of interconnected nodes, and the user can selectively cache intermediately rendered frames at nodes where the contributing process tree branches are relatively stable in their configuration. The user may then make modifications to processes in other parts of the process tree, without having to wait for image data to be rendered from unchanged parts of the process tree.

23 citations
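Selective caching at process-tree nodes amounts to memoizing a node's output so its whole subtree is skipped on re-render. A minimal Python sketch of that idea follows; the `Node` class, the blur/sum operations, and the flag names are invented for illustration and are not the patented system.

```python
class Node:
    """One processing step in a process tree; caching a node's output
    skips re-rendering its entire contributing subtree."""

    def __init__(self, name, op, inputs=(), cache=False):
        self.name, self.op, self.inputs = name, op, list(inputs)
        self.cache_enabled = cache   # user chose to cache this node's frames
        self._cached = None
        self.renders = 0             # how many times op actually ran

    def invalidate(self):
        # Call when this branch's configuration changes, so it re-renders.
        self._cached = None

    def render(self):
        if self.cache_enabled and self._cached is not None:
            return self._cached      # stable branch: reuse the cached frame
        self.renders += 1
        frame = self.op(*(n.render() for n in self.inputs))
        if self.cache_enabled:
            self._cached = frame
        return frame

src = Node("input", lambda: [1, 2, 3])
blur = Node("blur", lambda f: [x * 2 for x in f], [src], cache=True)
out = Node("output", lambda f: sum(f), [blur])

print(out.render())               # full render: input and blur both run
print(out.render())               # re-render: blur's cached frame is reused
print(src.renders, blur.renders)  # neither node ran a second time
```

This mirrors the workflow in the abstract: cache the stable branch, then edit elsewhere in the tree without waiting for the unchanged parts to re-render.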


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations (90% related)
Network packet: 159.7K papers, 2.2M citations (87% related)
Mobile computing: 51.3K papers, 1M citations (87% related)
Wireless ad hoc network: 49K papers, 1.1M citations (86% related)
Scheduling (computing): 78.6K papers, 1.3M citations (85% related)
Performance
Metrics
No. of papers in the topic in previous years
Year   Papers
2023   665
2022   1,574
2021   1,395
2020   2,689
2019   3,544
2018   3,574