
Leaderboard

Popular Content

Showing content with the highest reputation on 10/13/2009 in all areas

  1. Vociferously Incoherent? Vehemently Intractable? Veritably Incredulous? Vaguely Interesting? 6?
    1 point
  2. I am one of those people who do use globals occasionally, so I think that classes only make the use cases where they're legitimate even more legitimate. Before I continue, let me say that I am perfectly aware of the issues with globals. If you need to write in multiple places, you have no way of preventing race conditions short of using some kind of locking mechanism like a semaphore, which is still problematic from a dataflow perspective and can cause deadlocks. So no globals in that case. I'm not really worried about the copy issue, since I use globals for small things. The most legitimate use case is one where you have a single writer and multiple readers. You could use a notifier with previewing for this (and I would if I needed to generate N copies), but if you only need one copy, a global is the easiest method. I wouldn't use a global for something like ref-counting (an LV2 glob is better suited for that because of the race-condition issue), but I would be happy to use read-only globals as constants. I even posted an idea to that effect here.
     The solution to that is to make the queue unnamed and put its reference into the class data cluster (a rough sketch of the single-writer global and the private-queue pattern follows this list). An equivalent in 2009 is the references-to-data feature, which doesn't even allow you to name the reference, partly to avoid this issue.
    1 point
  3. Have a read of this: http://books.google....itching&f=false In particular section 9.2.3 (replacing the word "Process" with "Execution System"), and then ask me again.
    1 point
  4. It's not so much waiting for it to become available; it's more to do with the CPU having to save state information when switching between accessing the global in one context or another. Have a Google for "context switch"; it's a big subject. But suffice it to say, the fewer, the better.
     With what I have described above, you will never lose data. But the downside of that is that if VI B is producing faster than you are consuming, your queues will grow until you run out of memory. If this is a possibility (and undesirable), then all you need to do is "pause" VI B's populating of one or both of the queues when the queues are full (fixed-length queues) and resume when A or B has caught up. Or (as you rightly say) use a lossy queue. The choice is really whether you require sequential losses or random losses (a sketch of the blocking and lossy options follows this list). But the above will enable you to easily change how you manage your processes with minimum effort and run most efficiently.
    1 point
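
LabVIEW globals, notifiers and queue references are graphical, so there is no literal code to quote from item 2; the sketch below is only a rough Python analogue (all names are hypothetical) of the two ideas described there: a single-writer/multiple-reader global, and a queue that is never given a name but is held privately in an object's own data, so no other code can look it up.

```python
import queue
import threading
import time

# Pattern 1: single writer, multiple readers.
# A rough analogue of a read-mostly LabVIEW global: one task updates the
# value, any number of tasks read it. With exactly one writer there is no
# read-modify-write race to protect against, which is the point made above.
current_setpoint = 0.0   # hypothetical "global"; names are illustrative only

def writer() -> None:
    global current_setpoint
    for i in range(5):
        current_setpoint = float(i)   # single writer: plain assignment suffices
        time.sleep(0.01)

def reader(name: str) -> None:
    for _ in range(5):
        # Readers only ever observe a whole value, never a half-written one.
        print(name, "sees", current_setpoint)
        time.sleep(0.01)

# Pattern 2: an "unnamed" queue kept in class data.
# Instead of a named queue that any caller could obtain by name, the queue is
# created privately and stored on the object, so only code holding the object
# can reach it (the analogue of putting the reference in the class data cluster).
class PrivateChannel:
    def __init__(self) -> None:
        self._q: queue.Queue = queue.Queue()   # no public name to collide on

    def send(self, item) -> None:
        self._q.put(item)

    def receive(self, timeout: float = 1.0):
        return self._q.get(timeout=timeout)

if __name__ == "__main__":
    threads = [threading.Thread(target=writer)] + [
        threading.Thread(target=reader, args=(f"reader-{i}",)) for i in range(2)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    ch = PrivateChannel()
    ch.send("hello")
    print(ch.receive())   # -> hello
```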
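Likewise, the fixed-length versus lossy queue trade-off described in item 4 can be illustrated with a small Python analogue (again, names and sizes are invented for illustration): a bounded queue.Queue blocks the producer when it is full, which corresponds to the "pause and resume" behaviour, while a fixed-length collections.deque silently discards the oldest elements, i.e. sequential loss.

```python
import collections
import queue
import threading
import time

BOUND = 5   # illustrative fixed queue length

# Lossless, fixed-length: put() blocks, effectively pausing the producer
# until the consumer catches up, so no data is ever lost.
lossless_q = queue.Queue(maxsize=BOUND)

# Lossy, fixed-length: appending beyond maxlen silently drops the oldest
# element (sequential loss); the producer never waits.
lossy_q = collections.deque(maxlen=BOUND)

def producer(n: int) -> None:
    for i in range(n):
        lossless_q.put(i)    # blocks here while the lossless queue is full
        lossy_q.append(i)    # never blocks; may push out old data instead

def consumer(n: int) -> None:
    for _ in range(n):
        lossless_q.get()     # receives every value, in order
        time.sleep(0.01)     # deliberately slower than the producer

if __name__ == "__main__":
    t_prod = threading.Thread(target=producer, args=(20,))
    t_cons = threading.Thread(target=consumer, args=(20,))
    t_prod.start(); t_cons.start()
    t_prod.join(); t_cons.join()
    # The lossless queue delivered all 20 items; the lossy deque only holds
    # whatever the last BOUND appends left behind.
    print("left in lossy deque:", list(lossy_q))
```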