Everything posted by ShaunR
-
One thing that I haven't gotten round to yet: if you are operating on a large JSON stream, you cannot process any other JSON streams, since it seems to be blocking. I think it just needs some of the subVIs set to re-entrant but, like I said, I haven't gotten round to looking yet.
-
As you know, any mutexes and we will be back to square one. I'm still not quite getting it. Why do we need an "op count"? All pointers are considered valid until we unregister all readers and writers. Unregistering a single reader amongst multiple readers doesn't mean we need to free any pointers, as there are only three (the buffer, the reader index array and the cursor). The only time we deallocate anything is when there are no longer any readers or writers, at which point we deallocate all pointers.

Now. We already have a list of readers (the booleans set to true in the index array) and we know how many (reader count). The adding and removing of readers, i.e. the manipulation of the booleans and the count, is "locked" by the registration manager (non-reentrant VI boundary). So I see the "issue" as: how do we know that any asynchronous readers on the block diagram have read whatever flags and exited, so that the writer can now exit and pointers can be deallocated? (The writer only exits when there are no readers. Race condition on startup? Will have to play......) The writer won't read any reader booleans, since by this time the registration manager has set the reader count to zero (this order will change with my proposal below, since it will exit before the count is zero). So it goes into a NOP state and exits. It can set a boolean that says "I have no readers and have exited". The registration manager already knows there are no readers, so it just needs to know the writer has exited before deallocating pointers. The readers only need to watch their flag to see if the registration manager has changed it, then go into a NOP state and exit.

At this point I can see there might be a scenario whereby the reader has read its flag (which is still OK) and, by the time it gets to reading buffers and writing indexes, the writer has said "I have no readers and have exited". Well. Let's put another flag in the reader index array that says "I'm active", which is just a copy of the registration manager's boolean but is only written once all memory reads have completed, and which causes an exit. Now we have a situation where deallocation can only occur if the booleans controlled by the registration manager are all false (or true, depending on which sense is safer) AND the "I'm active" booleans controlled by the individual readers are all false (ditto sense) AND the writer has said "I have no readers and have exited". This decomposes into just the writer saying "I have no readers and have exited", since the writer can read both fields whilst it is doing its thing (it has to read the entire block anyway), AND them together, and exit when they are all false (sense again), then set "I have no readers and have exited".

So in the end-game: the registration manager unsets all the registered booleans, waits for the "I have no readers and have exited" boolean, then deallocates the pointers. Does this seem reasonable?

Being colour-blind (colour-confused is a better term), I really have no opinion on this. Well. Get rid of the "Check For Errors" page then (I've never had an error out via this route since about LV 7.1).
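For what it's worth, the shutdown handshake described above can be sketched in C (purely illustrative: the names are made up, and atomics stand in for the non-reentrant VI boundary and LabVIEW memory reads):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_READERS 8

typedef struct {
    atomic_bool registered[MAX_READERS]; /* set/cleared only by the registration manager */
    atomic_bool active[MAX_READERS];     /* "I'm active": written only by each reader    */
    atomic_bool writer_exited;           /* "I have no readers and have exited"          */
} handshake_t;

/* Writer: ANDs both boolean fields together while scanning the block;
   when every flag is false it sets its own flag and exits. */
bool writer_may_exit(handshake_t *h) {
    for (int i = 0; i < MAX_READERS; i++) {
        if (atomic_load(&h->registered[i]) || atomic_load(&h->active[i]))
            return false;
    }
    atomic_store(&h->writer_exited, true);
    return true;
}

/* Registration manager end-game: unset all registered booleans, wait for
   the writer's flag, and only then deallocate the three pointers. */
void manager_shutdown(handshake_t *h) {
    for (int i = 0; i < MAX_READERS; i++)
        atomic_store(&h->registered[i], false);
    while (!atomic_load(&h->writer_exited))
        ; /* spin; a real implementation would yield or sleep here */
    /* safe to deallocate buffer, reader index array and cursor now */
}
```

The point being that only one flag (the writer's) ever gates deallocation; everything else folds into the writer's existing scan of the block.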
-
Ah. Yes. IC. Here's what I think we could do....... The registration manager knows how many slots are allocated and which ones. It passes back an available ID index to the reader as part of the registration process (the ID is just an offset from the base pointer in # blocks). When there is a registration request, it scans the booleans in the reader index array (no locks required) and passes back the first F it comes across. If all are in use, it increases the size of the array (setting all new values to F and all new indexes to the current cursor position), then gives the next ID to the reader. The reader then uses that ID and starts its counter at the set cursor position. At this point the manager sets the boolean field for the newly registered reader to T and increments the reader count. The next scan by the writer will then iterate through the newly increased size and pick up that the new reader's bool is T.

Why can't the pointer cluster actually be the private data of a class? I don't see any reason for refnums if it's going in a class. Instantiating the class creates all the pointers (well, there is no constructor in LV, so until we get my atomic reads and writes, they will probably call an init). If the block of pointers you are talking about is to do with the ID, then we don't need any, since the ID number is sufficient and the user will have no idea which IDs are being used, as that's internal (as described previously).

Well. I'm not part of the "crashing is an acceptable behaviour" club. They are idiots (although I think you have a couple over there at NI when it comes to CLFNs ). At worst, we could unregister all the readers (set all the booleans to false) then wait 2 weeks to make sure everything has had a chance to read the booleans and exit before we finally crowbar the memory. But I'm sure we can come up with something better than that.

You're thinking a bit ahead of me at the moment. I tend to think in chunks with a bit of prototyping for feasibility (iterative development). I'm only just starting to formulate details about the registration, let alone the API itself. What do you suggest?
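The registration dance, as a rough C sketch (field names and the grow-by-5 block size are my own invention, matching the "create, say, 5 at a time" idea from a later post; in LabVIEW the scan would work on the raw index block):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    bool    in_use;  /* the boolean the writer's scan checks */
    int64_t index;   /* reader's position, seeded from the cursor */
} slot_t;

typedef struct {
    slot_t *slots;
    size_t  count;   /* allocated slots */
    size_t  readers; /* registered reader count */
} registry_t;

#define GROW_BY 5

/* Returns the slot ID handed to the newly registered reader: scan for the
   first F flag; if all are in use, grow the array in blocks. */
size_t register_reader(registry_t *r, int64_t cursor) {
    size_t id = 0;
    while (id < r->count && r->slots[id].in_use)
        id++;                                   /* first F flag found wins */
    if (id == r->count) {                       /* all in use: grow */
        r->slots = realloc(r->slots, (r->count + GROW_BY) * sizeof(slot_t));
        for (size_t i = r->count; i < r->count + GROW_BY; i++) {
            r->slots[i].in_use = false;
            r->slots[i].index  = cursor;        /* new slots start at the cursor */
        }
        r->count += GROW_BY;
    }
    r->slots[id].index  = cursor;  /* reader starts counting at the cursor */
    r->slots[id].in_use = true;    /* set T last, so the writer's next scan picks it up */
    r->readers++;
    return id;
}
```

Setting `in_use` last matters: the writer's iterator must not consider a slot before its index has been seeded.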
-
I couldn't look at your snippet because LAVA seems to be stripping the metadata again and I can't import it as code. But based on your quoted comment: the varpointers.vi I attached in the first post uses MoveBlock and it works once, but dies when deallocating the pointer the second time around, because LabVIEW kills the variant at some point or, more probably, I've killed it when I shouldn't have.
-
Hmmm. OK. The Elapsed Time.vi is set to subroutine. Does setting it to "Normal" help? (I don't have access to a dual-core ATM.)
-
I don't understand what you are getting at here. Refnums? The Disruptor pattern can cope with multiple writers (ours currently can't). They use a two-stage process where a writer "claims" a slot before writing to it. What's the problem with having multiple readers? (Isn't that the point?) We effectively have a reference counter (the number of readers) and we have a method of skipping in the writer (the bool field of the pointer indexes, which the readers could also use so they don't go ahead and read data; the manager only manipulates that field), so we would only deallocate the pointer when the last reader is unregistered. The deinit would effectively iterate through, unregistering all the readers (it becomes an "unregister all readers" then kill pointer, rather than just a pointer killer). I think we just need a NOP for the readers (they need to not read until the registration manager has given them an index slot and they know the current cursor position). Don't know. Just thinking on my feet at the moment, but it "seems" doable, and I think it might give us the same sort of feature as the queues when the handle is destroyed (error out).

Not really. I have visions of fixing the i64 cursor wrap-around problem mathematically, using the fact that it goes negative but still increases (gets less negative; minus + minus = plus, sort of thing). As I haven't fixed it yet, you could use a negative number, as you will hit the same problem at the same point and everything starts at zero.

Well. I think we need to look at the Disruptor code again to see how they handle it (they probably have the same problem). If we can't think of a way to reorganise, we can at least dynamically increase, as that doesn't overwrite existing indexes; it just adds new ones and disables old ones (haven't seen a realloc, but have seen functions to resize handles......somewhere). So we could create, say, 5 at a time and increase in blocks.

We are back to the MJE kind of approach which I first outlined with the flags, then. (I did say IF we were smart.) I think this is where we need to revisit the Disruptor pattern details. They talk about consumer barriers, claiming slots and getting the "highest" index rather than just seeing if it is greater (as we currently do). I think we have the circular buffer nailed. It's the management and interfaces we are beginning to brainstorm now, and they must have solved these already.
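On the wrap-around fix: the usual trick (and, I believe, what the Disruptor-style sequence comparison relies on) is to subtract first and test the sign of the difference, so that two's-complement overflow cancels out. A minimal C sketch of the idea:

```c
#include <stdbool.h>
#include <stdint.h>

/* "Is sequence a ahead of sequence b?" done wrap-around-safely.
   The subtraction is performed in unsigned arithmetic (well-defined
   overflow) and the result reinterpreted as signed. This works as long
   as the two counters are within half the i64 range of each other,
   which for a monotonically increasing cursor is effectively always. */
static inline bool seq_after(int64_t a, int64_t b) {
    return (int64_t)((uint64_t)a - (uint64_t)b) > 0;
}
```

So the readers' "is the cursor greater than my index" check keeps working even after the cursor has gone negative, with no special-casing.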
-
In the writer there is a disable structure. If you disable the currently enabled (and enable the other) it will use the previous index reading method. Does this affect the result you see?
-
VxWorks paths are case-sensitive (Windows paths aren't, by default).
-
TCP Host to Multiple cRIO Architecture
ShaunR replied to CraigC's topic in Application Design & Architecture
Yes. That's the tricky bit, though, since you cannot serialise LabVIEW objects, so it's not just a case of "(un)Flatten To String" to get them across a network. This thread on "Lapdog over the network" will highlight most of the issues (and solutions).
-
You can't use a negative number with this, because the indexes can be negative (strange, I know, but it is to do with wrap-around). I'm thinking that we could have a boolean alongside the R and Cnt (a block is converted to a cluster for processing; see the CB GetIndexes3.vi) which tells the index iterator (in the write) whether that index should be considered in the "lowest" check. This would have very little impact on performance (read a few more bytes in the block at a time, plus 2 AND operators).

Then you would need a "registration manager". That would be responsible for finding out how many readers are currently registered, updating the registered count, and choosing a slot which has an F flag to give to the reader that wants to register (not dissimilar to what you are currently doing). The additional difference would be that it is also capable of resizing the array if there are not enough index slots. It could even shrink the array and reorganise, if we wanted to be really smart, so that the index iterator doesn't process any inactive slots, doing away with the choosing of an inactive slot altogether. This only ever needs to happen when something registers/unregisters, so even if this is locking, it will not have much of an impact during general running (now we are starting to get to the Disruptor pattern ).
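The "lowest check with an activity flag" amounts to something like this C sketch (struct layout and names are illustrative stand-ins for the R/Cnt cluster, not actual code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    bool    active; /* the extra boolean proposed above          */
    int64_t index;  /* R: this reader's last-read position       */
} reader_slot_t;

/* The writer's index iterator: find the minimum index over ACTIVE slots
   only, so an unregistered or not-yet-registered reader can never stall
   the writer. Returns `fallback` when no slot is active. */
int64_t lowest_reader_index(const reader_slot_t *slots, size_t n,
                            int64_t fallback) {
    int64_t lowest = fallback;
    bool found = false;
    for (size_t i = 0; i < n; i++) {
        if (!slots[i].active)
            continue;            /* skip inactive slots: the cheap AND */
        if (!found || slots[i].index < lowest) {
            lowest = slots[i].index;
            found = true;
        }
    }
    return lowest;
}
```

The per-slot cost really is just one extra compare, which is why the performance impact should be negligible.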
-
Yeah. Don't want to go to a DLL. Been there, done that, torn holes in all my T-shirts. Is there no way we can utilise the callbacks? (I thought that was what the InstanceDataPointer was for.) That wouldn't work well for my use cases, since I don't know how many I need up front (could be 10, could be 30, and it may change during the program lifetime as modules are loaded and unloaded -> plugins).

The rest (about first call etc.) is valid. It really needs to be in the next layer up but, as that doesn't exist yet, it's in the current reads and writes. What we (or, more specifically, I) don't want is for them to be reliant on shift registers in the user's program to maintain the state info. How that is handled will depend on whether the next layer is class-based or classic-LabVIEW-based, which is why I haven't broached it yet. At that layer, I don't envisage a user-selectable "Init". Rather, the init is invoked on a read or write, depending on which gets there first (self-initialising). I ultimately want to get to a point where you just slap a write VI down and slap read VIs wherever you need them (even in different diagrams) without having to connect wires (see my queue wrapper VI for how this works, although I'm not sure how it will in this case, ATM, since the queue wrapper uses names for multiple instances).
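"Self-initialising" here just means once-only init performed by whichever endpoint arrives first. In C terms (illustrative only; in LabVIEW the non-reentrant VI boundary would play the role the atomic flag plays here):

```c
#include <stdatomic.h>

typedef struct {
    atomic_flag initialised; /* has anyone run the init yet?            */
    int         init_count;  /* instrumentation: times the init ran     */
} endpoint_t;

static void do_init(endpoint_t *e) {
    /* allocate buffer, index array, cursor, etc. would go here */
    e->init_count++;
}

/* Called at the top of both read and write: exactly one caller, the
   first to arrive, performs the initialisation. */
void ensure_init(endpoint_t *e) {
    if (!atomic_flag_test_and_set(&e->initialised))
        do_init(e);
}
```

So a read VI dropped before any write VI still works: it simply becomes the initialiser.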
-
Indeed. I know the issues and have some ideas (the easiest of which is to have a boolean in the index array cluster alongside the R and Cnt). Do you have a suggestion/method/example?
-
Done. MJE mentioned this previously and, until now, I have resisted, since I was more concerned with absolute speed performance. But subroutine priority has nasty side effects on thread-constrained systems (dual cores etc.) unless the VIs can yield. You lose the performance of subroutine priority on constrained CPUs, and the spread obviously increases as we give LabVIEW the chance to interfere, but nowhere near as much as with the queue or event primitives. So you see an increase in mean and STD, but the median and quartiles (which I think are better indicators, as they are robust to spurii) remain pretty much unchanged.

I've also now modified the index reader in the write to cope with an arbitrary number of readers. We just need to think about how we store and manage registration (a reader jumps on board when the writer is halfway through the buffer, for example). Also added the event version of the test harnesses. Version 3 is here. After this version I will be switching to a normal version-numbering system (x.x.x style), as I think we are near the end of prototyping.
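A quick illustration of why the median shrugs off the occasional scheduler hiccup while the mean doesn't (synthetic numbers, nothing to do with the actual benchmark data):

```c
#include <stdlib.h>

static int cmp_long(const void *a, const void *b) {
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

/* Median of n values (sorts in place; fine for benchmark post-processing). */
long median_of(long *v, size_t n) {
    qsort(v, n, sizeof *v, cmp_long);
    return (n % 2) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2;
}
```

One wild outlier in five samples drags the mean up by orders of magnitude but leaves the median untouched, which is exactly why median/quartiles are the robust indicators for this kind of timing data.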
-
TCP Host to Multiple cRIO Architecture
ShaunR replied to CraigC's topic in Application Design & Architecture
Take a look at Dispatcher in the CR. It may provide most of your TCP/IP architecture, as it is publisher/subscriber and can cope with many clients/servers.
-
The First Call (in the write) is there because, when everything starts up, all indexes are zero. The readers only continue when the write index is greater, so they sit there until the cursor index is 1. The writer, for its part, has to check that the lowest reader index isn't the same as its current write index. At start-up, when everything is zero, it is the same; therefore, if you don't have the First Call, it will hang on start. Once the writer and reader indexes get out of sync, everything is fine (until the i64 wraps around, of course; that needs to be addressed). If you have a situation where you reset all the indexes and the cursor back to zero AND it is not the first call, it will hang, as neither the readers nor the writer can proceed.
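The deadlock at start-up boils down to these two conditions (a bare-bones C restatement; the real checks live in the read/write VIs, and the names here are made up):

```c
#include <stdbool.h>
#include <stdint.h>

/* Reader: may only proceed once the writer has moved past its index. */
bool reader_may_proceed(int64_t cursor, int64_t reader_index) {
    return cursor > reader_index; /* waits until the cursor reaches 1 */
}

/* Writer: may only proceed if it isn't sitting on the lowest reader
   index, EXCEPT on the very first call, when everything is still zero
   and the test would otherwise deadlock both sides. */
bool writer_may_proceed(int64_t lowest_reader, int64_t write_index,
                        bool first_call) {
    if (first_call)
        return true; /* break the tie at start-up */
    return lowest_reader != write_index;
}
```

With all values zero and `first_call` false (i.e. a reset without a First Call), both functions return false and neither side can ever move: the hang described above.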
-
Indeed. Events are a more natural topological fit (as I think I mentioned many posts ago). Initially, I was afraid that the OS would interfere more with events than queues (not knowing exactly how they work under the hood, but I did know they each had their own queue). For completeness, I'll add an event test harness so we can explore all the different options.
-
I disagree. The processes are most certainly not parallel, even though your computations are (as is seen from the time). In your second example you are attempting to "pipeline" and are now using 2 queues (I say attempting because it doesn't quite work in this instance). You are:
a) only processing 100 values instead of all 200 (the true case in the bottom loop never gets executed);
b) lucky there is nothing in the other cases, because pipelining is "hybrid" serial (they have something to say about that in the video);
c) lucky that the shortest is last (try swapping the 5 with the -1 so -1 is on top, with (a) fixed -> back to 600 ms);
d) no different to my test harness (effectively) if you place the second enqueue straight after the bottom loop's dequeue instead of after the case statement (which fixes (c)).
-
Need help from a LAVA moderator
ShaunR replied to Antoine Chalons's topic in Site Feedback & Support
Will Smith beat up some aliens, apparently
-
Well. You have managed to put concisely into a paragraph or two what I have been unsuccessfully trying to get across in about 3 pages of posts. (+1)
-
I have faith in you !
-
Indeed. However a variant is constructed (a constant on the diagram, say), from what AQ was saying about descriptors, a simple copy is not adequate, since the "void" variant has the wrong ones. I'm now wondering about LVVariantGetContents and LVVariantSetContents, which may be the correct way to modify a variant's data and cause the descriptor to be updated appropriately.
-
Hmmmm. Not sure what this "top swap" actually is. Is it just swapping handles/pointers, so that the pointer in the buffer then points to the empty variant for a read? That would be fine for a write, but for a read the variant needs to stay in the buffer. Can you demonstrate with my original example?
-
Not quite. The execution time is 500+100 (they aren't parallel processes), unlike the buffer, which is 500 (the greater of the two). Try again please
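The arithmetic behind this (and behind the earlier pipelining post) is just a timing model for two stages of t1 and t2 per batch; a hedged sketch, with the pipeline formula being the standard fill-latency-plus-steady-state one rather than anything from the actual test harness:

```c
/* Serial hand-off through a single queue: the stages add. */
long serial_time(long t1, long t2) { return t1 + t2; }

/* Truly parallel processes (the buffer): the slower stage dominates. */
long parallel_time(long t1, long t2) { return t1 > t2 ? t1 : t2; }

/* Two-stage pipeline over n batches: pay t1+t2 to fill, then the
   slowest stage per batch after that ("hybrid" serial). */
long pipeline_time(long t1, long t2, long n) {
    long step = t1 > t2 ? t1 : t2;
    return t1 + t2 + (n - 1) * step;
}
```

With t1 = 500 and t2 = 100 that gives 600 for the serial case and 500 for the buffer, which is exactly the difference being argued about.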
-
OK. Words aren't working. What is your "single queue that dequeues an element and hands it to N processes running in parallel" implementation of this?: AQ1.vi