Lower contention LruCache #6875
base: master
Conversation
Not sure about the design. Wouldn't a concurrent dictionary be better in terms of `Set`s?
I read through the PR and provided some comments.
As a general one, this PR makes the cleaning separate, which can occasionally result in bigger dictionaries. This separation makes me wonder whether we could drop the linkage of `LinkedListNode` altogether and:

- create a `Node<T>` that would have a `T Value` and a `long Epoch`
- on each operation, you `var epoch = Interlocked.Increment(ref _counter)` and set the `Epoch` in the item
- you enqueue `(node, epoch)`
- during cleanup, you pull the enqueued items and check `tuple.node.Epoch == tuple.epoch`; if it matches and the map is too big, you remove the entry. Otherwise you discard it, because something else touched the node since.

No allocations for `LinkedListNode` at all, as `_accesses` would now store value tuples, in a similar way to `_cacheMap`. Yes, I'm aware that this does not come for free, as the `_accesses` segments will have bigger items. Still, no object header for the nodes, I think. This would make it a bit similar to `ConcurrentQueue` 😅 using the epoch thing.
WDYT @benaadams ?
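A minimal sketch of the epoch idea as I understand it (all names here are illustrative, not the PR's actual types; eviction policy details are assumptions):

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Illustrative only: a queue of (node, epoch) value tuples replaces the
// LinkedListNode chain; cleanup discards records whose node was touched again.
sealed class EpochLru<TKey, TValue> where TKey : notnull
{
    sealed class Node
    {
        public readonly TKey Key;
        public TValue Value;
        public long Epoch;                     // last-touch stamp
        public Node(TKey key, TValue value) { Key = key; Value = value; }
    }

    readonly ConcurrentDictionary<TKey, Node> _cacheMap = new();
    readonly ConcurrentQueue<(Node node, long epoch)> _accesses = new();
    readonly int _maxCapacity;
    long _counter;

    public EpochLru(int maxCapacity) => _maxCapacity = maxCapacity;

    void Touch(Node node)
    {
        long epoch = Interlocked.Increment(ref _counter);
        node.Epoch = epoch;                    // mark the node as freshly used
        _accesses.Enqueue((node, epoch));      // value tuple, no node allocation
    }

    public TValue GetOrAdd(TKey key, TValue value)
    {
        Node node = _cacheMap.GetOrAdd(key, k => new Node(k, value));
        Touch(node);
        return node.Value;
    }

    public void Cleanup()
    {
        while (_accesses.TryDequeue(out var t))
        {
            // A newer epoch means a later access superseded this record:
            // discard it instead of evicting a recently used entry.
            if (t.node.Epoch == t.epoch && _cacheMap.Count > _maxCapacity)
                _cacheMap.TryRemove(t.node.Key, out _);
        }
    }
}
```

The epoch comparison is what makes the discard safe: any concurrent access bumps `Node.Epoch` past the value captured in the queued tuple, so stale records fall through harmlessly.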
```csharp
{
    bool didGet;
    LinkedListNode<LruCacheItem>? node;
    using (var handle = _lock.AcquireWrite())
```
`AcquireWrite`? Is it intentional?
```csharp
ref LinkedListNode<LruCacheItem>? node = ref Unsafe.NullRef<LinkedListNode<LruCacheItem>?>();
using (var handle = _lock.AcquireWrite())
{
    node = ref CollectionsMarshal.GetValueRefOrAddDefault(_cacheMap, key, out _);
```
This is a ref into the dictionary's backing array. If another thread resizes the dictionary, this ref will point into an array that is no longer in use. Am I missing something, or is it a bug?
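A small single-threaded illustration of the hazard (the resize here simulates what a concurrent writer could trigger at any point; types and counts are arbitrary):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

var map = new Dictionary<int, string?>(capacity: 2);

ref string? slot = ref CollectionsMarshal.GetValueRefOrAddDefault(map, 1, out _);
slot = "one";

// Adding more entries forces the dictionary to resize, i.e. allocate a new
// backing entries array and copy the old one over. Any outstanding ref from
// GetValueRefOrAddDefault still points into the OLD, abandoned array.
for (int i = 2; i < 100; i++) map[i] = i.ToString();

// This write silently mutates dead memory instead of the live entry.
slot = "stale write";
Console.WriteLine(map[1]); // likely still "one", not "stale write"
```

The docs state the ref is invalidated by any mutation of the dictionary; under a lock that covers the whole ref lifetime this is fine, but a reader thread resizing concurrently would not be.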
```csharp
{
    while (!ct.IsCancellationRequested && _accesses.TryDequeue(out LinkedListNode<LruCacheItem>? node))
    {
        bool exists = _cacheMap.ContainsKey(node.Value.Key);
```
Can this be broken, as `_cacheMap` might be written to concurrently?
```csharp
bool exists = false; // Captured by lambda
LinkedListNode<LruCacheItem> node = _cacheMap.AddOrUpdate(
```
Do we always allocate an item here when the key does not exist, and never reuse the recent one, since this PR separates these two? Just asking, it can be by design.
```csharp
        break;
    }
```
```csharp
_cacheMap.Remove(leastRecentlyUsed.Value.Key, out leastRecentlyUsed);
```
I see that `.Remove` is called in two places only: one in `Delete` and the second one here.
```csharp
{
using NonBlocking;

public sealed class NonBlockingLruCache<TKey, TValue> : IThreadPoolWorkItem, ICache<TKey, TValue> where TKey : notnull
```
I see how you split the implementations now. Why not hide them behind a generic `LruCache` class and return one depending on `Unsafe.SizeOf<TKey>`?
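Roughly this shape, as a sketch (`NonBlockingLruCache` and `ICache` appear in the PR; the `LockingLruCache` fallback, the `Create` factory, and the 16-byte threshold are my assumptions):

```csharp
using System.Runtime.CompilerServices;

public static class LruCache<TKey, TValue> where TKey : notnull
{
    // Choose the implementation once, based on the key's unmanaged size:
    // small keys benefit from the non-blocking dictionary, larger ones
    // fall back to the lock-based variant. Callers only ever see ICache.
    public static ICache<TKey, TValue> Create(int maxCapacity) =>
        Unsafe.SizeOf<TKey>() <= 16
            ? new NonBlockingLruCache<TKey, TValue>(maxCapacity)
            : new LockingLruCache<TKey, TValue>(maxCapacity);
}
```

Since `Unsafe.SizeOf<TKey>()` is a JIT-time constant per closed generic type, the branch should fold away and call sites keep a single construction path.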
Changes
Types of changes
What types of changes does your code introduce?
Testing
Requires testing
If yes, did you write tests?