Re: What's (currently) your favourite snippet of code?
[Re: Kartoffel]
#444849
08/19/14 20:31
Joined: Aug 2003
Posts: 7,440
Red Dwarf
Michael_Schwarz
Senior Expert
Damn we forgot the Ackchievements in Ackmania!
"Sometimes JCL reminds me of Notch, but more competent" ~ Kiyaku
Re: What's (currently) your favourite snippet of code?
[Re: FBL]
#446193
10/07/14 22:54
Joined: Apr 2007
Posts: 3,751
Canada
WretchedSid
Expert
This lock-free ring buffer implementation. It supports exactly one reader and one writer (single-producer/single-consumer).
template<class T, size_t Size>
class lock_free_ring_buffer
{
public:
    lock_free_ring_buffer() :
        _head(0),
        _tail(0)
    {}

    // Returns false when the buffer is full. Safe for exactly one writer thread.
    bool push(const T &val)
    {
        size_t tail = _tail.load(std::memory_order_relaxed);
        size_t next = (tail + 1) % capacity;

        if(next != _head.load(std::memory_order_acquire))
        {
            _buffer[tail] = val;
            _tail.store(next, std::memory_order_release);

            return true;
        }

        return false;
    }

    // Returns false when the buffer is empty. Safe for exactly one reader thread.
    bool pop(T &val)
    {
        size_t head = _head.load(std::memory_order_relaxed);

        if(head != _tail.load(std::memory_order_acquire))
        {
            val = std::move(_buffer[head]);
            _head.store((head + 1) % capacity, std::memory_order_release);

            return true;
        }

        return false;
    }

    // Only a snapshot: the answer can already be stale when the caller acts on it.
    bool was_empty() const
    {
        return (_head.load() == _tail.load());
    }

private:
    enum { capacity = Size + 1 }; // One slot stays unused to distinguish "full" from "empty"

    std::atomic<size_t> _head;
    std::atomic<size_t> _tail;
    std::array<T, capacity> _buffer;
};
Shitlord by trade and passion. Graphics programmer at Laminar Research. I write blog posts at feresignum.com
Re: What's (currently) your favourite snippet of code?
[Re: Redeemer]
#446236
10/08/14 16:57
Joined: Sep 2003
Posts: 9,859
FBL
Senior Expert
Re: What's (currently) your favourite snippet of code?
[Re: FBL]
#451002
04/27/15 00:02
Joined: Apr 2007
Posts: 3,751
Canada
WretchedSid
Expert
The following reference counting implementation:
Object *Object::Retain()
{
    _refCount.fetch_add(1, std::memory_order_relaxed);
    return this;
}

void Object::Release()
{
    if(_refCount.fetch_sub(1, std::memory_order_release) == 1)
    {
        std::atomic_thread_fence(std::memory_order_acquire); // Synchronize all accesses to this object before deleting it
        CleanUp();
        delete this;
    }
}
For the longest time I did reference counting with strong memory fences (std::memory_order_acq_rel), until I learned that relaxed ordering works just fine for read-modify-write operations: retaining only needs atomicity, not ordering.

The release side is a bit more subtle. Because this might be the last reference the thread holds to the object, dropping it needs release semantics, so that all of the thread's prior writes to the object become visible. However, only the final thread that relinquishes the reference needs an acquire operation, which makes sure all changes to the object are observed before the destructor runs. Clever as fuck.

The basic idea comes from Herb Sutter's Atomic<> Weapons talk, although he used std::memory_order_acq_rel for dropping a reference. Of course none of this changes anything on strongly ordered architectures like x86, but it produces much better code on weakly ordered CPUs like ARM.
Shitlord by trade and passion. Graphics programmer at Laminar Research. I write blog posts at feresignum.com
Re: What's (currently) your favourite snippet of code?
[Re: WretchedSid]
#451019
04/27/15 07:26
Joined: Nov 2008
Posts: 946
the_clown
OP
User
Funnily enough, the last thing I implemented in C++ was a reference counted object class, just to see how that stuff is put together - without any synchronization though; thread safe code is something I still have to explore.

I have one question though (very offtopic, but hey, these forums are more or less anarchistic by now anyways), Sid: You have this nice Object class in Rayne, which gives you all these nifty features like reflection, reference counting, etc., but the fact that objects are reference counted forces them onto the heap - so how do you make that work nicely with cache-friendly processing of objects? I suppose you're using a custom allocator (or several) to ensure objects are allocated close to each other, but I still can't imagine that this is as optimal as tightly packed object buffers would be.

Or, to rephrase my actual question so you could answer it with a yes or a no: did the comfortable features of having this base object class outweigh the possible performance gains of a less abstracted, more data-oriented object model for you? Or do you perform some kind of programming magic in the background, so that the nice API of your object model just hides an optimized implementation?

I hope my question is somewhat clear. I'm doing a lot of research on different object models at the moment, and I'm basically trying to squeeze every sponge I can get my hands on, if you will, for first hand experiences. (Especially because most opinions you find on public blogs are those of very C-affine data-orientation fanatics; that may be a fascinating and important topic, but it's not really helpful to simply be told to scrap all OOP, make everything a system stacked onto another system and feed them data, without any explanation of how to architect such a structure in a codebase that's still somewhat flexible.)