
Is there software that lets you run local compute, messaging, data caching, sync, and machine learning inference capabilities for connected devices in a secure way?

Answers from top 8 papers

The use of in-network caching in ICN enhances data availability in the network, overcomes the issue of single-point failure, and improves the power efficiency of IoT devices.
In such applications, caching data at some edge devices can greatly improve data availability, retrieval robustness, and delivery latency.
Proceedings ArticleDOI · Narottam Chand, R.C. Joshi, Manoj Misra · 07 Aug 2006 · 48 citations
Caching of frequently accessed data in a multi-hop ad hoc environment is a promising technique that can improve data access performance and availability.
In order to address this issue, we propose a Federated learning based Proactive Content Caching (FPCC) scheme, which does not require gathering users' data centrally for training.
This article proposes a novel deep learning-based proactive caching framework in cellular networks, called DeepCachNet, in which a vast amount of data is collected from the mobile devices of users connected to SBSs.
However, caching management is a challenging and complex task, especially in scenarios where shared storage resources on edge devices must support multiple concurrent applications (e.g., industrial, mobile-user, and connected-car applications).
The optimal data caching strategy in the edge computing environment will minimize the data caching cost while maximizing the reduction in service latency.
Proceedings ArticleDOI · Bin Liu, Wang-Chien Lee, Dik Lun Lee · 09 May 2005 · 20 citations
In this paper, we propose a novel approach for caching multidimensional data in a cluster of mobile devices.
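The papers above converge on one mechanism: keeping hot data in a small cache at the edge so devices avoid round trips to the cloud. As a concrete illustration, here is a minimal sketch (in Python, with illustrative capacity and TTL values) of the kind of LRU-with-expiry cache an edge gateway might keep; the cited papers' actual schemes are cooperative and more sophisticated.

```python
import time
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache with per-entry TTL, as an edge gateway might use."""

    def __init__(self, capacity=128, ttl_seconds=60.0):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._store = OrderedDict()      # key -> (value, expiry time)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None                  # miss: caller fetches from the cloud
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]         # stale: evict and treat as a miss
            return None
        self._store.move_to_end(key)     # mark as most recently used
        return value

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        elif len(self._store) >= self.capacity:
            self._store.popitem(last=False)   # evict least recently used
        self._store[key] = (value, time.monotonic() + self.ttl)
```

A gateway would consult `get()` first and only reach the origin on a miss, which is where the insights' gains in availability, latency, and power come from.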

Related Questions

How can machine learning and deep learning algorithms be used to improve the security of computer systems and networks?
4 answers
Machine learning and deep learning algorithms can be used to improve the security of computer systems and networks in several ways. These algorithms can help in real-time active attack prevention, protecting against security breaches, malware, and viruses, and providing more accurate network protection. They can also be used to monitor and prevent network attacks, identify system irregularities that may signal an ongoing attack, and make systems safer. Various machine learning and deep learning algorithms, such as regression, clustering, classification, Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and LSTM plus CNN, have been employed to enhance cybersecurity. These algorithms have shown promising results in detecting multiple attacks and defending against external or internal threats to the network.
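As a small illustration of the "identify system irregularities" point, here is a hedged sketch of anomaly-based detection using scikit-learn's IsolationForest; the flow features are synthetic stand-ins (a real system would use traffic features such as packet rates, ports, and byte counts).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=0.0, scale=1.0, size=(500, 4))  # benign traffic
attack_flows = rng.normal(loc=5.0, scale=1.0, size=(10, 4))   # outlying traffic

# Fit on (mostly) benign traffic, then flag statistical outliers.
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_flows)

print(detector.predict(attack_flows))  # -1 marks a suspected attack, +1 normal
```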
Is there AI that can be locally hosted and trained with one's own data?
5 answers
There are AI systems that can be locally hosted and trained on a user's own data. These systems allow users to train machine learning models on their local datasets while preserving data privacy. One approach to distributed machine learning is federated learning, where local models are trained on individual devices and then aggregated into a single, stronger model. Another approach involves co-designing AI systems with local communities, empowering them to address regional concerns and to collect and explain community data using AI. Additionally, radiologist-trained AI models have been developed to differentiate between suboptimal and optimal chest radiographs, enabling radiographers to repeat suboptimal images when necessary. In the context of a permissionless AI marketplace, there is a platform that allows data providers, virtual servers, and AI designers to collaborate while protecting data access and security during AI model training.
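The federated learning approach mentioned above can be made concrete with a toy federated-averaging (FedAvg) sketch: each device fits a linear model on its private data, and only the weights, never the raw data, are averaged. All names and data here are illustrative.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training pass (least-squares gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * (X.T @ (X @ w - y)) / len(y)
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):                      # three devices with private datasets
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                     # each round: local training, then averaging
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)

print(global_w)                         # approaches [2, -1]; no raw data was shared
```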
What research papers exist that focus on mobile security using machine learning?
5 answers
Mobile network payment gateways have become more accessible to consumers, leading to the need for security measures such as malware detection, authentication, and fraud detection. One recommended solution is the Secure Mobile Electronic Payment Framework Enhanced by Machine Learning. The rapid growth of Android smartphones and applications has also raised security concerns, particularly around detecting malware applications. A proposed solution is the DeepAutoD framework, which uses deep deception call chains to detect malicious code in a short time. Advancements in wireless communication technologies have resulted in increased security breaches in mobile networks. To address this, an artificial intelligence-based security model is suggested to ensure the secrecy, integrity, and authenticity of the system. Mobile authentication using behavioral biometrics is another area of research, with a focus on improving discriminative features and addressing challenges such as data privacy and side-channel leaks. In the context of IoT systems, machine learning techniques are being used to manage security aspects and detect attacks, with the Random Forest algorithm achieving high accuracy.
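For the Random Forest IoT attack detector the last sentence cites, a hedged scikit-learn sketch looks like the following; the dataset is a synthetic stand-in for extracted traffic features (packet size, rate, flags, and so on).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced "benign vs. attack" traffic features.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"attack-detection accuracy: {clf.score(X_te, y_te):.3f}")
```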
How does quantum computing help overcome the limitations of machine learning in the field of ransomware detection?
5 answers
Quantum computing helps overcome the limitations of machine learning in ransomware detection by providing faster and more accurate detection methods. The use of quantum machine learning (QML) algorithms, such as the Quantum Support Vector Machine (QSVM) and the Quantum Convolutional Neural Network (QCNN), allows for high-performance intrusion detection on big data inputs. QML-based intrusion detection has been shown to process big data inputs with high accuracy (98%) at twice the speed of conventional machine learning algorithms. Additionally, the combination of quantum information and machine learning techniques can enhance the security and privacy of machine learning models. Quantum technologies can be used to make quantum classifiers more secure and private, as well as to provide advantages in bagging and boosting methods.
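Conceptually, a QSVM is an ordinary kernel SVM whose kernel is evaluated on a quantum device. The classical sketch below marks exactly where that swap happens: the `gram` function here uses an RBF kernel as a stand-in for a quantum fidelity kernel k(x, x') = |⟨φ(x)|φ(x')⟩|², which a quantum framework would supply instead.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

def gram(A, B, gamma=0.5):
    """Stand-in kernel; a QSVM would compute this matrix on a quantum backend."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

clf = SVC(kernel="precomputed")        # the SVM machinery is unchanged
clf.fit(gram(X, X), y)
print(clf.score(gram(X, X), y))        # training accuracy
```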
How can we use machine learning to improve the security of cloud computing?
4 answers
Machine learning (ML) techniques are being used to improve the security of cloud computing. ML is used to prevent or detect attacks and security gaps in the cloud. ML models play an essential role in addressing security issues such as real-time attack detection, data leakage vulnerability assessments, and adversarial attacks. ML algorithms are used to make real-time decisions, process big data, reduce cycle time for learning, and ensure cost-efficiency and error-free processing. ML is also used in Machine Learning as a Service (MLaaS) cloud platforms, which have become popular for offering efficient and cost-effective services. Researchers are focusing on both attacks against and defenses of MLaaS platforms. However, there are still open research issues that require further investigation in this area.
What advantages do you get by running inference on a server instead of distributing the model to all your clients?
14 answers

See what other people are reading

What are the negative impacts of Virtual Reality in terms of technical issues for researchers and scientists?
5 answers
Virtual Reality (VR) poses technical challenges for researchers and scientists. Issues include unexpected participant-induced artifacts, such as changes in skin conductance and heart rate caused by headset mounting. Moreover, VR may have negative impacts such as triggering upsetting memories or consequences of overuse. Concerns also extend to children's development, with contradictory findings on the psychosocial impacts of VR, emphasizing the need for adherence to health and safety guidelines. Additionally, VR's potential in cultural heritage conservation may face accessibility barriers, necessitating ICT tools like VR to enhance understanding and research capabilities. Researchers and scientists must address these technical challenges to harness VR's benefits effectively.
How effective are Transformer models in generating machine learning models compared to traditional architectures?
5 answers
Transformer models have revolutionized machine learning by achieving state-of-the-art results in various domains, but their effectiveness compared to traditional architectures is a subject of ongoing interest. While Transformers excel in accuracy across AI tasks, they pose challenges for on-device execution due to high computational demands. Deploying Transformers on resource-constrained hardware platforms has been a hurdle because of high memory requirements and low parallelizability, necessitating accelerators like the Coral Edge TPU for efficient deployment. Recent advancements propose dynamic training frameworks like DynaProp and co-design techniques like TransCODE to optimize training processes, reduce memory consumption, and enhance model performance on accelerators, achieving higher accuracy with lower latency and energy consumption than existing transformer-accelerator pairs.
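One widely used remedy for the memory problem, separate from the DynaProp/TransCODE work described above, is post-training dynamic quantization of the linear layers that dominate a Transformer's weight budget. A minimal PyTorch sketch, shown on a stand-in feed-forward block rather than a full model from the cited papers:

```python
import torch
import torch.nn as nn

# Stand-in for a Transformer's feed-forward block (d_model=128, d_ff=512),
# which holds most of the parameters in each encoder layer.
ffn = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 128)).eval()

# Post-training dynamic quantization: weights stored as int8,
# activations quantized on the fly at inference time.
q_ffn = torch.quantization.quantize_dynamic(ffn, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(16, 128)               # a batch of token embeddings
with torch.no_grad():
    out = q_ffn(x)
print(out.shape)                        # same interface, roughly 4x smaller weights
```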
What can I use to learn about the ROCKY program, which deals with DEM?
5 answers
To learn about the ROCKY program, which aims to improve the robustness of the STT-MRAM cache memory hierarchy against write failures, you can refer to the research by Talebi et al. The ROCKY architecture proposes efficient replacement policies that enhance the reliability of STT-MRAM memories by reducing susceptible transitions during write operations. The study demonstrates that ROCKY can decrease the Write Error Rate (WER) of STT-MRAM cache memories by up to 35.4% with minimal performance overhead. This research provides valuable insight into the critical reliability challenges faced by STT-MRAM technology, particularly mitigating write failures in cache memory hierarchies. By studying ROCKY, you can gain a deeper understanding of strategies for making STT-MRAM-based systems more robust to reliability issues.
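The core idea, reducing susceptible transitions, can be illustrated with a short sketch. Assuming (as in much of the STT-MRAM literature) that one flip direction dominates the write error rate, a ROCKY-style replacement policy would prefer the victim line needing the fewest error-prone flips; the exact policy in the Talebi et al. paper may differ.

```python
def susceptible_transitions(old_word: int, new_word: int) -> int:
    """Count bits that must flip 0 -> 1 when writing new_word over old_word."""
    return bin(~old_word & new_word & 0xFF).count("1")   # 8-bit words here

def pick_victim(lines, incoming):
    """Choose the candidate line that is cheapest to overwrite."""
    return min(lines, key=lambda line: susceptible_transitions(line, incoming))

lines = [0b0000_1111, 0b1110_0000]
print(bin(pick_victim(lines, 0b1111_0000)))   # 0b1110_0000 needs just one flip
```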
What are the appropriate crops for an agrivoltaic system?
5 answers
The selection of crops for an agrivoltaic system is crucial to ensure both agricultural productivity and energy generation. Soybean crops have been studied under different shade depths in agrivoltaic systems, showing varied effects on morphology, physiology, and yield. Additionally, research emphasizes the importance of crops that can maintain productivity under shading caused by photovoltaic panels, with studies focusing on vegetables like komatsuna, mizuna, kabu, and spinach. Furthermore, a digital replica and genomic optimization framework have been developed to simulate crop performance under agrivoltaic conditions, with a case study on SunnySD tomatoes demonstrating significant optimization of combined crop and energy production. German research on agrivoltaics has also explored crops like potato, celeriac, clover grass, and winter wheat, showcasing increased land productivity under such systems.
What are the current challenges and future research directions for implementing edge computing to reduce latency in 6G networks?
5 answers
Current challenges in implementing edge computing to reduce latency in 6G networks include security and privacy concerns, offloading and task-division issues, bandwidth constraints, service management complexities, advanced communication requirements, general-purpose computing on edge nodes, quantum computing integration, and managing heterogeneous IoT platforms. Future research directions involve exploring academic-industry collaborations, developing lightweight algorithms and libraries, and implementing virtualization and micro-operating systems for edge devices. Additionally, the sustainability of future computing infrastructures at the edge requires addressing energy-efficiency challenges, fault-tolerant automated operations, and collaborative resource management for improved resource efficiency. The evolution towards 6G aims to enhance the quality of smart services by maximizing throughput and minimizing latency, necessitating innovative approaches like edge computing to overcome traditional cloud limitations.
How does blockchain technology differ from traditional database systems?
5 answers
Blockchain technology differs from traditional database systems by offering features like immutability, traceability, transparency, availability, and decentralization. Unlike traditional systems, blockchain ensures secure data sharing through attribute encryption, on-chain/off-chain collaboration, and smart contract-based log tracking. It enhances data transparency, information security, and knowledge sharing, surpassing traditional methods. Blockchain's decentralized nature, immutability, and anonymity address privacy concerns in identity management systems, unlike centralized systems with privacy threats and single points of failure. Additionally, blockchain tackles the scalability issue of full replica storage in traditional systems by introducing an Information-Centric Networking-based storage architecture, improving data availability and access time significantly.
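The immutability claim can be made concrete with a tiny hash chain, the data structure underneath a blockchain: each block commits to its predecessor's hash, so editing any past record is detectable, unlike a silent UPDATE in a traditional database. A minimal sketch with illustrative field names:

```python
import hashlib
import json
import time

def block_hash(block):
    body = {k: block[k] for k in ("time", "data", "prev")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                 # contents were edited in place
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                 # link to the predecessor is broken
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("tx: A pays B", chain[-1]["hash"]))
chain[0]["data"] = "tampered"            # a row update a database would accept...
print(verify(chain))                     # ...is caught here: prints False
```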
What are the current challenges and limitations associated with edge computing for wireless communication?
5 answers
Current challenges and limitations associated with edge computing for wireless communication include the need for energy-efficient and economically viable deployment, fault-tolerant automated operation, and collaborative resource management to improve resource efficiency. Additionally, the rapid expansion of the Internet of Everything (IoE) has led to challenges in traditional cloud computing models, such as bandwidth load, slow response speed, security, and privacy concerns, prompting the development of edge computing solutions. Furthermore, while edge learning (EL) enables local model training on geographically dispersed nodes, the separation of EL deployment and communication optimization can result in excessive signaling overhead, long processing delays, and unstable learning convergence in wireless communication systems. These challenges highlight the importance of addressing energy efficiency, security, privacy, and optimization in edge computing for wireless communication.
How does the use of digital twin technology impact strategic decision-making in various industries?
5 answers
The implementation of digital twin technology significantly impacts strategic decision-making across various industries. Digital twins provide real-time data exchange between physical and digital models, enabling better decision-making, reduced operational costs, increased productivity, and improved predictive maintenance. They offer accurate predictions under different interventions, aiding in optimizing processes and resource allocation. By simulating past, present, and future scenarios, digital twins enhance understanding, support decision-making, and drive effective interventions. In healthcare, digital human twins revolutionize patient care through personalized data-driven medicine, precise treatment planning, and medical education. The technology's potential spans industries like manufacturing, agriculture, education, construction, and retail, showcasing its versatility and transformative impact on strategic decision-making processes.
How does statistical physics contribute to the development of predictive models in various fields?
5 answers
Statistical physics plays a crucial role in developing predictive models across various fields by providing analytical tools and algorithms to extract relevant information from large datasets. It complements other approaches in understanding machine learning processes and stochastic modeling, enabling the study of dynamical properties and equilibrium states in training processes. By utilizing methods like the Ising model and mean-field solutions, statistical physics aids in predicting single-site magnetization, learning variable couplings, and reconstructing interaction networks from data. This interdisciplinary approach not only identifies fundamental limitations in solutions but also proposes efficient algorithms for individual instances, making statistical physics a valuable tool in tackling complex inference problems encountered in machine learning and other scientific disciplines.
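The Ising/mean-field toolkit the answer mentions reduces, in its simplest form, to solving a single self-consistency equation: the magnetization obeys m = tanh(βJzm) for coupling J and coordination number z. A short fixed-point iteration (parameter values are illustrative) shows the phase transition at βJz = 1:

```python
import math

def mean_field_magnetization(beta, J=1.0, z=4, m0=0.5, iters=200):
    """Iterate the self-consistency condition m = tanh(beta * J * z * m)."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta * J * z * m)
    return m

print(mean_field_magnetization(beta=0.5))   # beta*J*z = 2 > 1: ordered, m near 0.96
print(mean_field_magnetization(beta=0.2))   # beta*J*z = 0.8 < 1: disordered, m near 0
```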
What are the benefits of Edge Computing, especially in integrating AI into almost all daily applications?
5 answers
Edge AI, the fusion of edge computing and Artificial Intelligence (AI), offers numerous benefits in enhancing daily applications. By processing AI algorithms on users' devices, Edge AI reduces latency, improves energy consumption, and enables various applications like facial recognition and real-time traffic updates on smartphones. The performance of Edge AI tasks relies on integrated sensing-communication-computation (ISCC) for resource optimization and achieving customized goals. With the accumulation of IoT edge devices, computing shifts from cloud servers to networked embedded systems, revolutionizing real-time and intelligent enterprises. Additionally, the improvement in computation capabilities of edge devices reduces cloud computation overload, while processing-in-memory using memristor-based neural network hardware addresses data processing challenges for edge applications. These advancements highlight the significant role of Edge Computing in integrating AI into daily applications for enhanced efficiency and functionality.
Why is Computer Vision driving the Intelligent Edge?
5 answers
Computer Vision drives Intelligent Edge due to the need for real-time processing and analysis of vast amounts of data in autonomous driving systems. By leveraging Edge Intelligence (EI) and 6G wireless networking, an efficient and reliable autonomous driving framework is established, allowing autonomous vehicles to process tasks locally while ensuring inference accuracy. This approach decentralizes processing from the cloud to the edge of the network, enhancing performance and reducing latency in autonomous driving services. Additionally, the integration of deep learning technology and network slicing in Internet of Vehicles (IoV) services enhances wireless transmission and cooperative communication, improving safety and efficiency in vehicle driving. Furthermore, deploying AI on the Edge using tools like the OpenVINO toolkit enables high-performance computer vision applications at the edge devices, overcoming limitations faced by cloud-based systems.
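For the OpenVINO deployment the answer closes with, a hedged sketch of the usual pattern follows. It assumes the 2022+ `openvino.runtime` Python API and a model already converted to IR format; "model.xml" is a placeholder path, and exact call signatures can vary across OpenVINO versions.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # placeholder IR model
compiled = core.compile_model(model, "CPU")  # target an edge CPU; other device names exist

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in camera frame
result = compiled([frame])                   # inference runs locally at the edge
print(result[compiled.output(0)].shape)
```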