I have the following code, in which I am trying to write an LRU cache. I have a runner class that I run against random capacities of the cache. However, the cache size exceeds its capacity. When I make the FixLRU method synchronized, it becomes more accurate once the cache size exceeds 100, but it also gets slower. When I remove the synchronized keyword, the cache becomes less accurate.
Any ideas how to make this work properly? More accurately?
import java.util.concurrent.ConcurrentHashMap;

public abstract class Cache<TKey, TValue> implements ICache<TKey, TValue> {

    private final ConcurrentHashMap<TKey, TValue> _cache;

    protected Cache() {
        _cache = new ConcurrentHashMap<TKey, TValue>();
    }

    protected Cache(int capacity) {
        _cache = new ConcurrentHashMap<TKey, TValue>(capacity);
    }

    @Override
    public void Put(TKey key, TValue value) {
        _cache.put(key, value);
    }

    @Override
    public TValue Get(TKey key) {
        TValue value = _cache.get(key);
        return value;
    }

    @Override
    public void Delete(TKey key) {
        _cache.remove(key);
    }

    @Override
    public void Purge() {
        for (TKey key : _cache.keySet()) {
            _cache.remove(key);
        }
    }

    public void IterateCache() {
        for (TKey key : _cache.keySet()) {
            System.out.println("key:" + key + " , value:" + _cache.get(key));
        }
    }

    public int Count() {
        return _cache.size();
    }
}
import java.util.concurrent.ConcurrentLinkedQueue;

public class LRUCache<TKey, TValue> extends Cache<TKey, TValue> implements ICache<TKey, TValue> {

    private ConcurrentLinkedQueue<TKey> _queue;
    private int capacity;

    public LRUCache() {
        _queue = new ConcurrentLinkedQueue<TKey>();
    }

    public LRUCache(int capacity) {
        this();
        this.capacity = capacity;
    }

    public void Put(TKey key, TValue value) {
        FixLRU(key);
        super.Put(key, value);
    }

    private void FixLRU(TKey key) {
        if (_queue.contains(key)) {
            _queue.remove(key);
            super.Delete(key);
        }
        _queue.offer(key);
        while (_queue.size() > capacity) {
            TKey keytoRemove = _queue.poll();
            super.Delete(keytoRemove);
        }
    }

    public TValue Get(TKey key) {
        TValue _value = super.Get(key);
        if (_value == null) {
            return null;
        }
        FixLRU(key);
        return _value;
    }

    public void Delete(TKey key) {
        super.Delete(key);
    }
}
public class RunningLRU extends Thread {

    static LRUCache<String, String> cache = new LRUCache<String, String>(50);

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new RunningLRU();
        t1.start();
        Thread t2 = new RunningLRU();
        t2.start();
        Thread t3 = new RunningLRU();
        t3.start();
        Thread t4 = new RunningLRU();
        t4.start();
        try {
            t1.join();
            t2.join();
            t3.join();
            t4.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(cache.toString());
        cache.IterateCache();
        System.out.println(cache.Count());
    }

    @Override
    public void run() {
        for (int i = 0; i < 100000; i++)
            cache.Put("test" + i, "test" + i);
    }
}
Best answer
I would clean up the extra entries after you have added your entry. This minimizes the amount of time the cache is larger than you want it to be. You could also trigger the cleanup from size().
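The evict-after-put idea can be sketched as below. This is a hypothetical, simplified single class (`EvictAfterPutCache`), not the original `LRUCache`: the entry is inserted first, then the oldest keys are polled until the map is back at capacity. It is still not fully atomic under contention, but the window during which the cache exceeds its capacity is much smaller than with a check-before-put.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: evict *after* the put, so the cache only
// briefly exceeds capacity instead of drifting past it.
public class EvictAfterPutCache<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final ConcurrentLinkedQueue<K> order = new ConcurrentLinkedQueue<>();
    private final int capacity;

    public EvictAfterPutCache(int capacity) {
        this.capacity = capacity;
    }

    public void put(K key, V value) {
        order.remove(key);      // drop any stale position for this key
        map.put(key, value);
        order.offer(key);       // newest entry goes to the tail
        // now evict oldest entries until we are back at capacity
        while (map.size() > capacity) {
            K oldest = order.poll();
            if (oldest == null) {
                break;
            }
            map.remove(oldest);
        }
    }

    public V get(K key) {
        return map.get(key);
    }

    public int size() {
        return map.size();
    }
}
```

Note that the bound is checked against `map.size()` rather than `queue.size()`; `ConcurrentLinkedQueue.size()` is an O(n) traversal, which is one reason the original `FixLRU` loop is slow.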
Any ideas how to make this work properly?
Does your test reflect how your application behaves? It may be that the cache performs fine (or closer to it) when you are not hammering it. ;)
If this test really does reflect your application's behaviour, then perhaps LRUCache is not the best choice.
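If a strict size bound matters more than lock-free reads, one commonly used alternative (a sketch under my own naming, not code from the answer) is a `LinkedHashMap` in access order behind a single lock; `removeEldestEntry` enforces the capacity on every put, so the cache can never exceed it:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: LinkedHashMap with accessOrder=true keeps entries in LRU order;
// removeEldestEntry evicts the least recently used entry on overflow.
public class SynchronizedLRU<K, V> {
    private final Map<K, V> map;

    public SynchronizedLRU(final int capacity) {
        map = new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, value);
    }

    public synchronized V get(K key) {
        return map.get(key);
    }

    public synchronized int size() {
        return map.size();
    }
}
```

The trade-off is coarse-grained locking on every operation, but eviction and insertion happen atomically, so the size bound is exact rather than approximate.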
Regarding "java - synchronization not quite working", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/9718929/