scispace - formally typeset
Search or ask a question

Answers from top 10 papers

More filters
Papers (10)Insight
Proceedings ArticleDOI
O. Thomas, Amara Amara 
25 May 2003
76 Citations
In this paper we propose a new four transistors self-refresh memory cell operating in the subthreshold region.
Periodic refresh operation is a viable approach to maintain data integrity but aggravates the performance and energy consumption with the scaling of eDRAM cells into deep sub-micron technology nodes.
This paper presents a novel, energy-efficient DRAM refresh technique called massed refresh that simultaneously leverages bank-level and sub array-level concurrency to reduce the overhead of distributed refresh operations in the Hybrid Memory Cube (HMC).
Proceedings ArticleDOI
23 Feb 2013
93 Citations
Leveraging this property, we propose Refresh Pausing, a solution that is highly effective at alleviating the contention from refresh operations.
We propose a new method to reduce the refresh power consumption by effectively extending the memory cell retention time.
Open accessProceedings ArticleDOI
10 Jun 2014
29 Citations
Completely tracking multiple types of refresh information (e. g., row retention time and data validity) maximizes refresh reduction and lets us choose the most effective refresh schemes.
Therefore, a refresh operation has well-defined points at which it can potentially be Paused to service a pending read request.
In this paper, we show that even when skipping a high percentage of refresh operations, existing row-granurality refresh techniques are mostly ineffective due to the inherent efficiency disparity between ACT/PRE and the JEDEC auto-refresh mechanism.
In this way, the refresh operations can be minimized.
Slow refresh may cause a loss of data stored in a DRAM cell, which affects the correctness of the computation using the lost data.

See what other people are reading

How does residual stress affects dram device performance?
5 answers
Residual stress significantly impacts DRAM device performance. It can lead to failures in DRAM chipsand induce deformation, altering the frequency response and sensitivity of MEMS accelerometers. In 3D-stacked DRAMs, thermomechanical stress from differential material contraction affects parameters like latency and power consumption. Additionally, residual stress from through-silicon vias (TSVs) can alter the mobility of MOSFETs, affecting their performance. Controlling residual stress is crucial for preventing dynamic refresh failures, improving reliability, and mitigating performance variations in DRAM devices. Strategies like using alternative package substrates and incorporating strain engineering techniques can help reduce stress-induced variations and enhance DRAM performance.
What is the meaning of the word utilization?
5 answers
Utilization refers to the effective use or application of something, such as information systems, computer programs, or materials, to achieve a specific purpose or goal. It involves making the most out of available resources or features. In the context of information systems, utilization is crucial for enhancing performance. For computer programs, increasing utilization involves analyzing user behavior to identify unused or underutilized features and then guiding users towards utilizing them effectively. Designers also focus on utilization by creating products that encourage users to explore and make the most of the available features and materials. In the realm of technology, utilization can involve optimizing memory systems to efficiently access and utilize data for computational tasks.
What is Edge Percolated Component in cytohubba?
5 answers
Edge Percolated Component in CytoHubba, a Cytoscape plugin, is a topological analysis method used for ranking nodes in biological networks based on their network features. This method focuses on identifying important elements within the network structure by analyzing the connectivity and relationships between nodes. CytoHubba offers a user-friendly interface that integrates eleven different topological analysis methods, including Edge Percolated Component, to provide a comprehensive understanding of network nodes' significance. By utilizing this method, researchers can gain insights into essential regulatory networks and potential protein drug targets within biological systems. The Edge Percolated Component analysis, along with other centralities and metrics provided by CytoHubba, aids in uncovering key nodes crucial for understanding network dynamics and functions.
What is impulsive adoption?
5 answers
Impulsive adoption refers to the spontaneous and immediate acceptance and utilization of a technology or application without much premeditation. In the context of mobile shopping apps in India, the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model was extended to include impulsiveness as a construct in understanding the adoption of such apps. This study found that impulsiveness, along with other factors like performance expectancy, effort expectancy, hedonic motivation, habit, and behavioral intention, significantly influences the use behavior of mobile shopping apps. The addition of impulsiveness to the model highlights the impulsive nature of some users in quickly embracing and engaging with mobile shopping applications, showcasing the diverse factors influencing technology adoption in different contexts.
How many citations watson?
5 answers
Watson and Crick's 1953 paper on the double helix structure of DNA has been cited extensively. Despite being cited over 2,000 times since 1961, Watson and Crick themselves have only referenced their own paper twice. The reasons for citing their work vary, with a significant proportion of citations being historical rather than actively using the findings. Additionally, a parallel version of Galton-Watson processes has been studied for generating tree-shaped structures, with over 49,000 citations attesting to the usefulness of random trees. Furthermore, the WATSON COST Action, initiated in 2020, aims to integrate scientific knowledge on stable isotopes in water to understand water dynamics in the Earth's Critical Zone, involving around 230 members from 37 European countries.
Which is better BCA or B VOC in software development?
5 answers
In software development, the B method has been widely utilized for safety-critical systems. On the other hand, the BC-BSP system has been proposed for large-scale data processing, showing higher efficiency compared to MapReduce-based applications when data fits in memory. The BC-VoD model, utilizing BC graphs and independent spanning trees, has demonstrated advantageous performance in VoD systems. While B method has proven successful in industrial projects like the METEOR automatic train control system, showing cost-effectiveness and reliability, the BC-BSP system offers scalability and fault tolerance for large-scale graph processing. Therefore, the choice between B method and BC-BSP depends on the specific requirements of the software project, with B method being suitable for safety-critical systems and BC-BSP for large-scale data processing.
Ieee paper reviiew on ATM Controller Using Verilog?
5 answers
The IEEE papers provide insights into the design and implementation of controllers for advanced technologies. One paper discusses the VHDL implementation of a neuro-fuzzy based adaptive bandwidth controller for ATM networks, focusing on managing adaptive bandwidth prediction, congestion control, and call admission control. Another paper presents a VHDL simulation model for the F4/F5 block of the ATM protocol, emphasizing the block functionality and the use of local tables for recording the global state of the block. These papers showcase the application of hardware description languages like VHDL in developing complex controllers for efficient network operations, highlighting the importance of simulation models and verification environments in ensuring the functionality and performance of such controllers.
What is the impact of LLM on customer support ticketing systems?
5 answers
Large Language Models (LLMs) like ChatGPT are increasingly integrated into various applications, including customer support ticketing systems. LLMs can be modulated via natural language prompts, making them susceptible to adversarial attacks like Prompt Injection (PI). This integration blurs the line between data and instructions, opening up new attack vectors such as Indirect Prompt Injection, where prompts can be strategically injected into data likely to be retrieved, enabling remote exploitation of LLM-integrated applications. While LLMs offer powerful capabilities, the lack of effective mitigations against emerging threats poses risks such as data theft, information ecosystem contamination, and arbitrary code execution. Awareness of these vulnerabilities is crucial for the safe deployment of LLMs in customer support ticketing systems.
What is RAMP™ (Reconfigurable Analog Module Processor) technology in analog computers for cyber security?
10 answers
RAMP™ (Reconfigurable Analog Module Processor) technology represents a significant advancement in the field of analog computing, particularly in enhancing cybersecurity measures. This technology is built upon the foundation of Field-Programmable Analog Arrays (FPAAs) and incorporates various innovative features to secure intellectual property (IP) and protect against security vulnerabilities in analog and mixed-signal circuits. The core of RAMP technology involves the use of obfuscation techniques to mask the topology of analog circuits, thereby creating a large search space for potential attackers and making brute force or topology attacks highly impractical. This is achieved by efficiently hiding the configuration of differential mode and single-ended mode analog circuits, which significantly enhances the security robustness of the system against unauthorized access or tampering. Moreover, RAMP technology integrates dynamic protection mechanisms against security vulnerabilities through the use of authentication blocks within the signal chain. This ensures that only validated components can communicate, further fortifying the system's defense against potential breaches. The programmability aspect of RAMP, facilitated by configurable analog blocks (CABs) and digital blocks (CDBs), allows for a wide range of analog functions and mixed-signal processing capabilities. This flexibility is crucial for adapting to various cybersecurity requirements without the need for dedicated hardware for each specific application. Additionally, RAMP technology supports the development of general-purpose analog signal processing systems through configurable integrator blocks (CIBs), enabling the implementation of a broad range of signal processing operations essential for cybersecurity applications. The RAMP project also emphasizes the importance of community collaboration and open-source development for tackling the challenges of parallel processing, which is vital for advancing cybersecurity measures in analog computing. Lastly, the inclusion of memory modules with logic support for various access modes within the RAMP system underscores the technology's versatility and capability to handle complex data processing tasks, further enhancing its applicability in cybersecurity domains. In summary, RAMP™ technology leverages the strengths of reconfigurable analog processing, advanced security features, and community-driven innovation to address the pressing cybersecurity challenges in analog computing environments.
How has Felix Uchenna Samuel's contributions influenced the development of modern computing systems?
5 answers
Felix Uchenna Samuel's contributions have significantly impacted the development of modern computing systems. By proposing an in-memory implementation of fast and energy-efficient logic (FELIX) that combines Processing-in-Memory (PIM) with memories, Samuel introduced a novel approach to reduce data movement and latency in computing systems. This innovation enabled single-cycle operations directly in crossbar memory, enhancing efficiency and speed in processing tasks. Additionally, Samuel's work extended single-cycle operations to implement complex functions like XOR and addition in memory, achieving lower latency compared to existing techniques. These advancements in in-memory computing have paved the way for more energy-efficient and high-performance computing systems, aligning with the growing demand for ultra-low power ICs in the era of the Internet of Things.
What does connectivity mean in the mass balance chain of custody system?
5 answers
Connectivity in the mass balance chain of custody system refers to the interlinked relationships and processes involved in tracking ownership or transactions of assets. In the context of market infrastructure, custody chains have evolved independently from investors and issuers, eroding investor rights due to the complex network of custodians connected through bilateral links. Furthermore, in the digital forensic investigations realm, blockchain technology is utilized to create a secure and transparent process for storing evidence (chain of custody) in a private permissioned encrypted blockchain ledger, ensuring robust information integrity and immutability. Similarly, a method and system for tracking ownership of devices like power tools involves storing a chain-of-custody in a memory, updating ownership information as the device changes hands, emphasizing secure transmission to prevent unauthorized access.