SciSpace (formerly Typeset)

Who is Linda Ahall? 


Best insight from top research papers

Linda Ahall is not mentioned in any of the provided abstracts; the papers retrieved for this query concern Linda, the coordination language and programming model, rather than a person.

Answers from top 5 papers

Papers (5) and insights:

1. The provided paper does not mention anyone named Linda Ahall. It concerns the use of Linda as a framework for distributed database systems.
2. The paper does not mention anyone named Linda Ahall.
3. The provided paper does not mention anyone named Linda Ahall.
4. The provided paper does not mention anyone named Linda Ahall. It concerns the efficiency of the Linda programming language for general-purpose scientific programming.
5. Ahuja, Carriero, and Gelernter, IEEE Computer, 01 Aug 1986 (journal article, 572 citations): The provided paper does not mention anyone named Linda Ahall.
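
For context on what the retrieved papers do cover: Linda is a coordination model in which processes communicate by placing tuples into, and withdrawing them from, a shared tuple space via out, in, and rd operations. The following is a minimal toy sketch in Python of such a tuple space (an illustration of the general model, not code from any of the papers above):

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space: out() adds a tuple, in_() removes a
    matching tuple (blocking), rd() reads one without removing it."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # None in a pattern is a wildcard matching any field.
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                return t
        return None

    def in_(self, pattern):
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

    def rd(self, pattern):
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            return t

ts = TupleSpace()
ts.out(("count", 42))
print(ts.rd(("count", None)))   # ("count", 42); the tuple stays in the space
print(ts.in_(("count", None)))  # ("count", 42); the tuple is removed
```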

See what other people are reading

How does locking affect database index performance?
5 answers
What are the current status and trends of Open data research?
10 answers
How does the Jaccard index measure similarity between sets?
4 answers
What are the innovations of log management systems?
4 answers
The innovations of log management systems include large-capacity work-record generation methods for efficient data processing and log management methods that are independent of product production processes, ensuring complete log records even during abnormal operations. Additionally, the integration of blockchain technology in log management systems ensures secure storage of log data through decentralized and tamper-proof mechanisms, addressing issues of data confidentiality and integrity. Furthermore, system log management methods involve screening, categorizing, and combining log information to facilitate concise and clear management, enhancing storage and visualization processes for log servers. These innovations collectively improve data-processing efficiency, data security, and log management practices across a variety of systems.
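
To make the tamper-proof mechanism concrete, here is a minimal sketch (a generic illustration, not a method from the cited papers; names and fields are assumptions) of a hash-chained log in Python: each entry commits to the hash of its predecessor, so any retroactive edit or reordering breaks verification.

```python
import hashlib
import json
import time

def entry_hash(ts, msg, prev):
    """Hash the entry's content together with the previous entry's hash."""
    payload = json.dumps({"ts": ts, "msg": msg, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log, message):
    """Append an entry whose hash covers its predecessor, forming a chain."""
    prev = log[-1]["hash"] if log else "0" * 64
    ts = time.time()
    log.append({"ts": ts, "msg": message, "prev": prev,
                "hash": entry_hash(ts, message, prev)})

def verify(log):
    """Recompute the chain; any edited or reordered entry fails the check."""
    prev = "0" * 64
    for e in log:
        if e["prev"] != prev or e["hash"] != entry_hash(e["ts"], e["msg"], prev):
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "service started")
append_entry(log, "user login: alice")
print(verify(log))          # True
log[0]["msg"] = "tampered"  # a retroactive edit...
print(verify(log))          # ...is detected: False
```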
What are papers that talk about standards in transforming data into transactional data for association rule mining?
4 answers
Several papers discuss standards in transforming data into transactional data for association rule mining. Amrit Pal et al. propose a distributed frequent itemset generation and association rule mining algorithm using the MapReduce programming model, focusing on mining distributed data effectively. Telikani and Shahbahrami analyze various sanitization algorithms for association rule hiding, emphasizing the importance of transaction and item selection methods in optimizing the hiding process. Additionally, Luo et al. present a method for mining inter-transaction association rules based on dynamic support thresholds, addressing the challenges of setting minimum support thresholds in stream data mining. These papers collectively contribute to the development of standards and techniques for transforming data into transactional form for association rule mining.
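
As a small concrete illustration of the transformation these papers presuppose (a generic sketch, not any single paper's standard; the records and threshold are made up), the code below turns attribute-value records into transactions and counts itemset support, the quantity that minimum-support thresholds are applied to:

```python
from itertools import combinations

# Hypothetical relational records; each attribute=value pair becomes an item.
records = [
    {"age": "young", "income": "high"},
    {"age": "young", "income": "low"},
    {"age": "old", "income": "high"},
]

# Transformation step: each record becomes a transaction (a set of items).
transactions = [{f"{k}={v}" for k, v in r.items()} for r in records]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

# All 2-itemsets meeting an assumed minimum support threshold of 0.3.
items = sorted(set().union(*transactions))
frequent = [set(c) for c in combinations(items, 2)
            if support(set(c), transactions) >= 0.3]
print(frequent)
```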
What are definition and scope for software development?
5 answers
Software development scope refers to the boundaries and details of a project, and defining it well is crucial for success. Inadequate scope definition often leads to delays, rework, and overruns, emphasizing the need for a well-defined scope early on. Various tools and methods exist to verify and control scope, but a complete definition remains a challenge. The complexity of distributed systems in software development necessitates a systematic approach, highlighting the importance of collaborative environments and tools for successful development. Defining software engineering processes and activities is essential for the industry's growth, requiring consensus on the scope and boundaries of software engineering endeavors. Overall, a well-managed scope is vital for high-quality, cost-effective, and timely project completion in software development.
What is fairness definition in fair multi-resource allocation in cloud computing?
4 answers
Fairness in multi-resource allocation in cloud computing refers to the equitable distribution of resources among users so that no individual is disadvantaged. Several fairness properties are crucial in this context: envy-freeness, Pareto optimality, strategy-proofness, sharing incentive, and bottleneck fairness. Envy-freeness ensures no user prefers another user's allocation to its own, while Pareto optimality guarantees no user can be made better off without making another worse off. Strategy-proofness ensures users cannot gain by misreporting their demands, and sharing incentive guarantees each user is at least as well off as under an equal static split of the resources. Bottleneck fairness aims to prevent resource wastage and performance degradation. Together, these properties ensure a balanced and just allocation of resources in cloud computing environments, enhancing overall efficiency and user satisfaction.
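
A minimal sketch of two of these properties, assuming the dominant-share formulation popularized by Dominant Resource Fairness (the capacity, demand, and allocation numbers are a hypothetical two-user example):

```python
# Cluster capacity and per-task demands over two resource types (assumed).
capacity = {"cpu": 9, "ram": 18}
demands = {"A": {"cpu": 1, "ram": 4}, "B": {"cpu": 3, "ram": 1}}
allocs = {"A": {"cpu": 3, "ram": 12}, "B": {"cpu": 6, "ram": 2}}

def dominant_share(alloc):
    """Largest fraction of any single resource that a user holds."""
    return max(alloc[r] / capacity[r] for r in capacity)

def tasks_runnable(alloc, demand):
    """How many of a user's tasks fit inside a given resource bundle."""
    return min(alloc[r] // demand[r] for r in demand)

def envy_free(allocs, demands):
    """No user could run more of its own tasks with another user's bundle."""
    return all(tasks_runnable(allocs[i], demands[i])
               >= tasks_runnable(allocs[j], demands[i])
               for i in allocs for j in allocs if i != j)

# Equalized dominant shares (2/3 each) and no envy between the two users.
print({u: round(dominant_share(allocs[u]), 2) for u in allocs})
print(envy_free(allocs, demands))
```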
Do agents need to share learned polices and select the most suitable ones with each other in distributed W-learning?
4 answers
In distributed W-learning, agents do not need to share learned policies but rather select the most suitable ones independently. The approach involves each agent updating its policy based on its own and neighbors' information, leading to a distributed learning process. Similarly, in a consensus-driven Federated Learning (FL) system, devices independently select fragments of the deep neural network (DNN) to share with neighbors, optimizing communication efficiency. This decentralized approach allows agents to learn from each other without sharing data, weights, or weight updates, enhancing training efficiency while preserving privacy. Therefore, in distributed W-learning, the emphasis lies on independent policy updates and selective sharing of information rather than sharing learned policies among agents.
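
A minimal sketch of the neighbor-driven update described above, using a generic consensus-averaging step (an illustrative rule, not the exact update of W-learning or of any cited paper): each agent pulls its parameters toward the average of its neighbors', so agents converge without exchanging raw training data.

```python
import numpy as np

def consensus_update(own, neighbor_params, step=0.5):
    """Pull one agent's parameter vector toward its neighbors' average."""
    return own + step * (np.mean(neighbor_params, axis=0) - own)

# Three agents with different local estimates of the same 2-parameter model.
params = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]

# Fully connected gossip: each round, every agent averages with the others;
# only current parameters are shared, never the underlying data.
for _ in range(20):
    params = [consensus_update(p, [q for q in params if q is not p])
              for p in params]

print(params)  # all three vectors converge toward [0.5, 0.5]
```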
How does the architecture of different data lakes differ in terms of scalability and performance?
5 answers
The architecture of different data lakes varies significantly in terms of scalability and performance, reflecting the diverse approaches to handling big data's volume, velocity, variety, and veracity challenges. Traditional data lakes, as described in several studies, focus on storing vast amounts of raw data in any format, aiming to provide a flexible and scalable environment for data analysis and management. This flexibility is crucial for adapting to the exponential increase in data production, especially from digital technologies and the Internet of Things.

The scalability and performance of a data lake, however, depend on its underlying architecture. The introduction of hierarchical and scalable designs aims to reduce complexity and improve code reuse, scalability, and development efficiency, addressing the need for high reliability, availability, fault tolerance, throughput, and concurrent processing in big data systems. Applying queuing-network modeling techniques to data lake architectures helps identify bottlenecks and performance degradation, suggesting that not all designs equally support scalability and high performance under different workload scenarios.

Moreover, the integration of blockchain technology into data lake architectures introduces a novel approach to enhancing data security and access while supporting complex business scenarios suitable for big data analysis; this Blockchain Data Lake (BDL) system architecture represents a significant shift toward ensuring data integrity and security in scalable data management systems. The concept of a metadata lake further extends the architecture of data lakes by focusing on gathering, linking, curating, and enriching metadata to support advanced capabilities and improve performance, underlining the importance of metadata management for scalability and high performance in federated data management architectures.

In academic settings, data lakes are emerging as flexible, secure environments for computing with licensed data, highlighting the differences from traditional data warehousing solutions and the specific expertise required to create successful data lakes. This indicates a broader application of data lakes beyond corporate settings, emphasizing their adaptability and performance in various contexts.

In summary, the architecture of data lakes differs in terms of scalability and performance based on design principles such as hierarchical structures, integration of blockchain, and emphasis on metadata management. These differences reflect the evolving nature of data lake architectures to meet the demands of big data analysis and management.
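
As a toy illustration of the queuing-network idea mentioned above, the sketch below models each stage of a hypothetical ingestion pipeline as an M/M/1 queue (the stage names and the arrival and service rates are assumptions; real data lake studies would use full queuing networks); the stage with the highest utilization is the bottleneck.

```python
# Each pipeline stage is an M/M/1 queue: arrival rate lam and service rate mu
# in requests/s (assumed values). Utilization rho = lam / mu, and the mean
# time in system is W = 1 / (mu - lam), valid only while rho < 1.
stages = {
    "ingest": {"lam": 80.0, "mu": 100.0},
    "transform": {"lam": 80.0, "mu": 90.0},
    "index": {"lam": 80.0, "mu": 120.0},
}

for name, s in stages.items():
    rho = s["lam"] / s["mu"]
    w_ms = 1000.0 / (s["mu"] - s["lam"]) if rho < 1 else float("inf")
    print(f"{name}: utilization {rho:.2f}, mean time in system {w_ms:.1f} ms")

# The stage with the highest utilization saturates first as load grows.
print("bottleneck:", max(stages, key=lambda n: stages[n]["lam"] / stages[n]["mu"]))
```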
What are the local rrs of this topic?
5 answers
Local Regional Religious Systems (RRS) refer to the spatial distribution of religious sites within specific regions, influenced by socio-cultural and economic factors. In the context of distributed computing, the concept of locality is crucial, focusing on tasks solvable within a limited communication radius in large distributed systems. This notion extends to local topic detection using spatio-temporal social media, emphasizing the importance of considering the continuity of time and location for improved detection accuracy. Moreover, the study of resonant Raman scattering in tooth samples showcases the local structure analysis of calcium in different tooth layers, emphasizing the significance of spatially correlated data analysis techniques for understanding material properties at a local level. Overall, local RRS encompass the localized patterns of religious sites, communication constraints in distributed systems, spatial-temporal correlations in social media data, and local structural analysis in material science.