LRU based small latency first replacement (SLFR) algorithm for the proxy cache
Proceedings IEEE/WIC International Conference on Web Intelligence (WI 2003)
Many replacement algorithms have been proposed to improve the performance of Web caching. Most of them select documents for replacement by computing a network cost from several parameters; such algorithms require many parameters and take a long time to choose a victim document. In this paper, we introduce a new algorithm, called LRU-based Small Latency First Replacement (LRU-SLFR), which combines the LRU policy with real latency to achieve the best overall performance.
... The algorithm is an extension of the LRU policy with real network latency and an access count. We maintain the linked list as in the LRU policy and form groups according to our algorithm. When the proxy must evict a document, a proxy using our algorithm replaces the document that takes the smallest time to load within the Same Conditional Group Window.

Simulated Proxy Cache

We simulate a proxy cache managed by two approaches: one is the general LRU algorithm, and the other is the LRU-SLFR algorithm. In our simulation, the proxy cache uses a linked-list structure for replacement. We collected several proxy traces for modeling: KAIST and DEC.

Network Connections

We model the network status. Because a proxy is normally located on the organization's own network, the client-proxy communication accounts for only a small portion of the overall latency, while the proxy-server communication accounts for the significant majority of the total latency. We therefore divide the network model into an internal network and an external network. In this environment, the external latency is much larger than the internal latency, and the internal latency is almost constant. We assume that the internal latency is fixed and that only the external latency varies from request to request. To apply realistic values to our model, we measured the external latency with the PING program against several Web servers, including those of KAIST, SNU, Berkeley, and Yahoo. The tests showed that the variance of the latency is very small for the KAIST, Berkeley, Metalab, and UIUC servers; only the Yahoo server varies significantly.
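The eviction step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name `LRUSLFRCache`, the fixed window size, and the choice to take the candidate group as the least-recently-used documents are assumptions, since the text does not specify exactly how the Same Conditional Group Window is constructed.

```python
from collections import OrderedDict

class LRUSLFRCache:
    """Sketch of an LRU-SLFR-style proxy cache.

    Documents are kept in LRU order in a linked structure (here an
    OrderedDict). On eviction, a window of the least-recently-used
    documents (standing in for the paper's Same Conditional Group
    Window) is examined, and the document with the smallest measured
    load latency is replaced, so cheap-to-refetch documents leave
    first while costly ones stay cached.
    """

    def __init__(self, capacity, window=4):
        self.capacity = capacity
        self.window = window          # size of the candidate group (assumed fixed)
        self.entries = OrderedDict()  # url -> (load_latency, access_count)

    def access(self, url, latency):
        """Record one request; returns True on a cache hit."""
        if url in self.entries:
            _, count = self.entries.pop(url)
            self.entries[url] = (latency, count + 1)  # move to MRU end
            return True
        if len(self.entries) >= self.capacity:
            self._evict()
        self.entries[url] = (latency, 1)
        return False

    def _evict(self):
        # Candidate group: the `window` least-recently-used documents.
        candidates = list(self.entries.items())[: self.window]
        # Replace the candidate whose load latency is smallest.
        victim = min(candidates, key=lambda kv: kv[1][0])[0]
        del self.entries[victim]
```

For example, with `capacity=2` and `window=2`, after caching a slow document "a" and a fast document "b", inserting "c" evicts "b", whereas plain LRU would have evicted "a".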