scispace - formally typeset
Author

S. Monikandan

Bio: S. Monikandan is an academic researcher. The author has contributed to research in topics: Cache pollution & Cache. The author has an h-index of 1 and has co-authored 1 publication, which has received 13 citations.

Papers
Proceedings ArticleDOI
01 Dec 2009
TL;DR: This work implements SEMALRU, a Semantic and Least Recently Used Web cache replacement policy that tunes the performance of existing replacement algorithms by analyzing both the semantic content and the recency of cached documents.
Abstract: The increasing demand for World Wide Web (WWW) services has made document caching a necessity to decrease download times and reduce Internet traffic. This work implements SEMALRU, a Semantic and Least Recently Used Web cache replacement policy. The basic LRU replacement policy is augmented with the semantic content of web pages to improve the efficiency of replacement in terms of hit rate and byte hit rate, and to minimize the number of replacements made in the cache. There are many well-known cache replacement policies based on size, recency, and frequency. This improved policy performs cache replacement based on two parameters: the semantics of the contents of web pages and the time of last access of each document. To make room for an incoming document, SEMALRU evicts the cached documents that are least related to it, or least recently used. This ensures that only related documents are stored in the cache; the contents of the cache thus represent the documents of interest to the user, ranked by recency. A detailed algorithm to identify unrelated and least recently used documents has been devised. The policy was tested in a simulated environment with related and unrelated sets of user access patterns. The parameters pertinent to cache replacement algorithms were computed, and the results show an improvement in the efficiency of the algorithm.
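The abstract does not specify how SEMALRU measures semantic relatedness, but the two-parameter eviction rule it describes can be sketched roughly as follows. Word-set overlap (Jaccard similarity) stands in for the paper's semantic analysis, and the class and method names are illustrative assumptions, not the authors' implementation:

```python
from collections import OrderedDict

def jaccard(a, b):
    """Crude stand-in for semantic relatedness: word-set overlap."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class SemaLRUCache:
    """Evicts the document least related to the incoming one; ties are
    broken by recency (least recently used goes first)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> document text; order = recency

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # mark as most recently used
            return self.store[key]
        return None

    def put(self, key, doc):
        if key in self.store:
            self.store.move_to_end(key)
            self.store[key] = doc
            return
        if len(self.store) >= self.capacity:
            # Least related to the incoming document is evicted first;
            # among equally unrelated entries, the oldest (lowest index).
            victim = min(
                self.store,
                key=lambda k: (jaccard(self.store[k], doc),
                               list(self.store).index(k)),
            )
            del self.store[victim]
        self.store[key] = doc
```

With a capacity of two, caching two documents and then inserting a third that is related to the first causes the unrelated second document to be evicted, even though it is more recent.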

13 citations


Cited by
Book ChapterDOI
11 May 2012
TL;DR: This paper presents a novel cloud cache replacement policy that optimizes the cloud data-out charge, the overall responsiveness of data loading, and the scalability of the cloud infrastructure.
Abstract: To run an off-premise private cloud, a consumer must budget for the public cloud's data-out charge. This expenditure can be considerable for a data-intensive organization. Deploying a web cache can, to some extent, prevent consumers from repeatedly loading duplicated data out of their private cloud. At present, however, there is no cache replacement strategy designed specifically for cloud computing. Devising a cache replacement strategy that truly suits the cloud computing paradigm requires a ground-breaking design perspective. This paper presents a novel cloud cache replacement policy that optimizes the cloud data-out charge, the overall responsiveness of data loading, and the scalability of the cloud infrastructure. The measurements demonstrate that the proposed policy achieves superior cost-saving, delay-saving, and byte-hit ratios compared with other well-known web cache replacement policies.
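The abstract does not disclose the policy's scoring formula, but a replacement decision that weighs data-out cost, fetch delay, popularity, and byte size against each other might look like this minimal sketch. The linear weighting and the field names are assumptions for illustration, not the paper's actual model:

```python
def eviction_score(obj, w_cost=1.0, w_delay=1.0):
    """Retention value per byte: an object that is expensive to re-download
    (cloud data-out charge), slow to fetch, and frequently hit is worth
    keeping. The linear form and weights are illustrative assumptions."""
    value = w_cost * obj["dataout_cost"] + w_delay * obj["fetch_delay"]
    return value * obj["hits"] / obj["size"]

def choose_victim(cache):
    """Evict the cached object with the lowest retention value per byte."""
    return min(cache, key=lambda key: eviction_score(cache[key]))
```

Under such a score, a rarely hit object that is cheap and fast to re-fetch is evicted before a popular object whose re-download would incur a large data-out charge, which is the cost-saving behavior the paper measures.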

13 citations

Proceedings ArticleDOI
01 Aug 2013
TL;DR: This paper proposes and implements a Web cache based on the LRU replacement algorithm that outperforms typical LRU on selected performance metrics; applied to the Multi-Media Conference System, it improves performance significantly.
Abstract: Web caching is considered an efficient mechanism to reduce network traffic and origin server load. It aims to keep the most popular Web objects close to users, and the performance of a Web cache always depends on the replacement strategy it chooses. In view of this, we propose and implement a Web cache based on the LRU replacement algorithm in this paper. To improve the performance of the LRU algorithm, we extend typical LRU with a new data structure for storing cached data and additional factors for evicting in-cache objects. We evaluate the performance of this algorithm experimentally; the results show that it outperforms typical LRU on selected performance metrics. The Web cache has been applied to our Multi-Media Conference System, where it improves performance significantly.
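The abstract leaves both the new data structure and the extra eviction factors unspecified. One plausible sketch of the general idea is an LRU list tempered by access frequency, so that a frequently used object is not evicted merely for being momentarily old; the frequency factor here is an illustrative guess, not the authors' design:

```python
from collections import OrderedDict

class ExtendedLRU:
    """LRU augmented with an access-frequency factor: among the two least
    recently used entries, the one accessed less often is evicted. The
    choice of frequency as the extra factor is an assumption."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion/access order = recency
        self.freq = {}

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)
        self.freq[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            # Consider the two oldest entries; evict the less popular one,
            # tempering pure recency with frequency.
            oldest = list(self.data)[:2]
            victim = min(oldest, key=lambda k: self.freq[k])
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1
```

In the usage below, plain LRU would evict the oldest entry, but the frequency factor instead removes the entry that was touched only once.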

13 citations

Journal ArticleDOI
TL;DR: The evaluation showed that it is possible to reach shorter startup times using the framework than using the strategies of other players, and it was demonstrated that prefetching elements does not lead to an increased download volume in every case.
Abstract: Since the invention of digital video, significant progress has been made in reducing the amount of data that must be transferred over the World Wide Web while improving the viewing experience. However, the paradigm of linear playback has not changed at all. While the feature set of traditional digital video may be sufficient for some applications, there are several use cases where a significantly improved way of interacting with the content is highly desirable. A video can be organized in an interactive and non-linear way, and additional information (for example, high-resolution images) can be added to any scene. The non-linearity of the video flow and the inclusion of content not found in traditional videos may lead to an increased download volume and/or a playback with many breaks for downloading missing elements. This paper describes a player framework for interactive non-linear videos. We developed the framework and its associated algorithms to simulate the playback of non-linear video content. It minimizes interruptions when the sequence of scenes is directly influenced by interaction, while the traditional viewing experience is not altered. The evaluation showed that it is possible to reach shorter startup times using our strategies than using the strategies of other players. Furthermore, we demonstrated that prefetching elements does not lead to an increased download volume in every case; it can even decrease the download volume if the right delete strategy is selected. Knowledge of the structure of interactive non-linear videos can thus be used to minimize startup times at the beginning of scenes without increasing the download volume.
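The paper's prefetching and delete strategies are not detailed in the abstract, but the core idea it argues for — prefetch only scenes still reachable from the current position, and delete buffered scenes that interaction has made unreachable — can be sketched as follows. The function name, the scene-graph representation, and the scene labels are all hypothetical:

```python
def plan_buffer(scene_graph, current, buffered, limit):
    """Sketch under assumed names: scene_graph maps a scene to the scenes
    that can follow it. Scenes no longer reachable from `current` are
    dropped from the buffer (the delete strategy), and up to `limit`
    immediate successors not already buffered are queued for prefetch."""
    reachable = set()
    frontier = [current]
    while frontier:
        scene = frontier.pop()
        for nxt in scene_graph.get(scene, []):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    keep = {s for s in buffered if s in reachable}  # delete strategy
    fetch = [s for s in scene_graph.get(current, []) if s not in keep][:limit]
    return keep, fetch
```

Because deleted scenes are exactly those that can never play again, prefetching successors need not inflate the total download volume, which matches the abstract's claim about choosing the right delete strategy.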

13 citations

Journal ArticleDOI
TL;DR: This research improves web caching by combining the results of web usage mining with traditional web caching techniques; a system that records users' browsing behavior at the resource level is used to refine the cache replacement policy, resulting in a faster web browsing experience and lower data bandwidth.
Abstract: Web caching is one of the fundamental techniques for reducing bandwidth usage and download time while browsing the World Wide Web. In this research, we improve web caching by combining the results of web usage mining with traditional web caching techniques. A web cache replacement policy selects which object should be removed from the cache when it is full and a new object must be put in. Several attributes are used for selecting the object to remove, such as the size of the object, the number of times the object was used, and the time when the object was added to the cache. However, the flaw in these previous approaches is that each object is treated separately, without considering the relations among objects. We have developed a system that can record users' browsing behavior at the resource level. Using the information gathered by this system, we can improve the web cache replacement policy so that the number of cache hits increases, resulting in a faster web browsing experience and lower data bandwidth, especially in environments with small cache storage such as smartphones.
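A toy version of this idea tracks which objects are requested together (a simple stand-in for the paper's usage-mining step) and shields objects related to the newest request from eviction, so that relations among objects, not just per-object attributes, drive replacement. All names here are illustrative:

```python
from collections import defaultdict

class MiningAssistedCache:
    """LRU-style cache that records co-access relations between consecutive
    requests and protects objects related to the incoming request from
    eviction. The co-access heuristic is an illustrative simplification
    of the paper's usage-mining approach."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}
        self.last_used = {}
        self.related = defaultdict(set)  # key -> co-accessed keys
        self.clock = 0
        self.prev = None

    def access(self, key, value):
        self.clock += 1
        if self.prev is not None:
            # Record that `prev` and `key` were requested back to back.
            self.related[self.prev].add(key)
            self.related[key].add(self.prev)
        if key not in self.cache and len(self.cache) >= self.capacity:
            protected = self.related[key]
            candidates = [k for k in self.cache if k not in protected]
            candidates = candidates or list(self.cache)  # fallback: plain LRU
            victim = min(candidates, key=lambda k: self.last_used[k])
            del self.cache[victim]
            del self.last_used[victim]
        self.cache[key] = value
        self.last_used[key] = self.clock
        self.prev = key
```

Objects habitually fetched alongside the current request stay resident, which is how considering inter-object relations can raise the hit count on small caches.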

11 citations

Journal ArticleDOI
TL;DR: i-Cloud is capable of reducing public cloud data-out expenses, improving cloud network scalability, and lowering cloud service access latencies, specifically in multi-provider cloud environments. It not only achieves optimal performance in all metrics simultaneously but also delivers relatively stable results across all performance metrics.

7 citations