Posts posted by Mellroth

  1. bsvingen,

    I think the LV2-global approach is nice, but it does not allow simultaneous access; e.g. if one action involves waiting for a response from an instrument, all other actions have to wait for that response to arrive.

    If we could preserve the LVOOP benefits, but with the addition of synchronization, we could probably increase the overall performance and still have "almost" simultaneous access.

    I have included a small example of how synchronization could be achieved, but I haven't tested this with LVOOP classes.

    It basically adds a copy of the data in a synchronization queue, while keeping local copies in each wire branch.

    Please let me know what you think.

    Edit: I know, you will probably say that the LV2 global will outperform this, and you are right.

    I made an LV2-style core that did the same thing much faster; synchronization was down to ~250 ns in the case of no update.

    /J

    Download File:post-5958-1160553720.zip

  2. So if you have only one non-reentrant VI in your call chain to W4MN or W4N, then you have a good chance of failing to catch a notifier.

    Exactly my point.

    If a W4N node does not keep a per-reference memory, we will see this in many places.

    I experienced this in LV7.1.1, but did not have time to track down the bug (I actually implemented a LV2 global that acted as a notifier to solve it).

    With Aristos' explanation it makes sense, but it surely must be a bug if one notifier can prevent another from waking up, at completely different locations on the block diagram.

    /J

  3. Aristos,

    what if we have a reentrant VI that acts on received notifiers, loaded N times using VI Server (a separate notifier for each instance)?

    In this process the W4N node is encapsulated in a non-reentrant VI.

    Could this scenario also hang because notifiers need to arrive in the correct order?

    The question is really if the W4N node is sharing the sequence-number-memory between the processes?

    Do you see other scenarios where we could have this behaviour?

    /J

    The sequence number behavior is key to how the node knows not to return values that it has already seen. A given node does not record "sequence number for each refnum". Storing such information would be an expensive penalty to the performance and thus would be of negative value in most cases (the vast majority of the time the notifier nodes are used with either the same refnum every time or with cascading refnums that are accessed in a particular order or where lossy transmission is acceptable). In the cases where you need to hear every message from a set of notifiers, that's what the W4MN node is for -- it records the last message it heard, but when it listens again, it returns all the new messages.
  4. Why don't you all stop complaining about what LV should or could offer and do a by ref Graphical Language of your own? You will have to realize someday that you are not asking things in line with the nature of LV and therefore not working for its advancement, but asking for the replacement of its founding concept. And if it ever succeeds, LV will be dead. You should/could/would then call it RefVIEW!

    The by-ref design already exists in LabVIEW: we have VI Server, queues, notifiers, file I/O, XControls etc. For me it is therefore clear that by-ref can co-exist with dataflow. The existing by-ref GOOP implementations also show that OOP is possible with a by-ref approach in LabVIEW.

    Whether our by-ref wish is in line with the nature of LV, I leave to others, but since we have a multithreaded system we need synchronization.

    A by-ref system that solves synchronization is, in my opinion, easier to debug than a system using globals etc., since I can follow my references to find all access nodes.

    My wish, and many others' I believe, was that LVOOP could step in and replace all the GOOP implementations with a native one that could be optimized in terms of memory and performance. Now I hope that NI will present a synchronization scheme that can efficiently share data between wires and threads. This way we can have simultaneous access to data while still using the powers of LVOOP.

  5. I was about to make a clean test-VI of the bug, but then it suddenly disappeared?

    Timeout values were still hard coded to -1, and the only change I made was the handling in case of a timeout and the type of data used in the queues.

    I tried to go back to the original code, but the bug was gone.

    If it returns I

  6. Your code does indeed work.

    However, I'm reading the Details section of the Match Pattern help page, and I don't understand what the regex <[^>]*> means. I can't decipher it from the Match Pattern Details page.

    On the other hand, I've used Perl a lot in the past, so <.*?> is more natural for me.

    mlewis

    The match pattern reads

    < match of the '<' character

    [^>]* any number of characters other than '>'

    > match of the '>' character

    /J
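
    For anyone who wants to try the two patterns outside LabVIEW, they behave the same on this kind of input; a quick check in Python (purely illustrative):

```python
import re

text = "see <b>bold</b> and <i>italic</i>"

# [^>]* : any number of characters other than '>', so the match
# stops at the first '>' -- no lazy quantifier needed
native_style = re.findall(r"<[^>]*>", text)

# .*? : the Perl-style lazy quantifier, same result on this input
perl_style = re.findall(r"<.*?>", text)

assert native_style == perl_style == ["<b>", "</b>", "<i>", "</i>"]
```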

  7. As an addition to this, if I remember correctly, XControls do allow you to create user-accessible properties. Has anyone considered using XControls to do GOOP?

    I actually looked into this in LV8, since it means that new methods become public the instant they are created. You also get good protection for your data members.

    The reason I stopped working with this is that an XControl also inherits all the general control attributes, so the access methods are cluttered with unnecessary information.

    This also meant that I had to create wrappers to get a clean user interface.

    The usage of property nodes and invoke nodes is more in line with my ideas of a native OO implementation in LabVIEW, so maybe it is time to give it another shot in LV8.20.

    But then we might lose all the goodies in LVOOP (inheritance/dispatch etc.)?

    Yes, that's what I meant as well. Immediate actions on the referenced object. Why retrieve and store it in the first place? That was only necessary for GOOP, but not for NI! The GOOP-ers could not access the stuff under the bonnet of LabVIEW, but NI can.

    Good point!

    /J

  8. Hi,

    I'm using a single element queue as a buffer, where each buffer update involves

    1. Dequeue element (ms timeout set to -1)

    2. Enqueue new/updated element

    The update is done in separate loops or in separate processes.

    The strange thing is that the dequeue function sometimes returns with timeout = TRUE (but no error).

    Since I'm using ms timeout = -1, this should not be possible?

    If I set the ms timeout to a positive value, e.g. 100ms, everything works just fine, no timeouts detected, and buffer handling works as expected.

    Has anyone else experienced this?

    I'm at home at the moment, but will try to upload a simple test VI on Monday.

    /J
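
    The update pattern I'm describing can be sketched in a textual language (Python here, just to illustrate the dequeue/modify/enqueue cycle; this is of course not the LabVIEW code):

```python
import queue
import threading

# A single-element queue used as a protected buffer: each update is
# dequeue (blocks until the element is available), modify, enqueue.
# Holding the element acts like holding a lock, so concurrent updates
# from several loops/threads cannot race.
buf = queue.Queue(maxsize=1)
buf.put({"count": 0})          # initialize the buffer

def update(n):
    for _ in range(n):
        data = buf.get()       # "dequeue element", infinite timeout
        data["count"] += 1     # modify while holding the element
        buf.put(data)          # "enqueue new/updated element"

threads = [threading.Thread(target=update, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

final = buf.get()
assert final["count"] == 4000  # no updates lost
```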

  9. Hi,

    do you have the Simulation Interface Toolkit (SIT) for LabVIEW?

    This toolkit allows you to interface with a Simulink model running either in Matlab/Simulink or as a standalone DLL.

    Check out: http://sine.ni.com/nips/cds/view/p/lang/en/nid/11763

    /J

    Hello!

    I'm new to this forum, and first of all I want to say hello to everybody and congratulations on this excellent forum.

    I'm working on a project with Simulink (Matlab), but at the moment I'm stuck. Although I have made the model in Simulink, I want to simulate it in real time with LabVIEW, so I have to convert the model I have made into a LabVIEW model.

    I have LabVIEW 20th anniversary edition (LabVIEW 8.20), and I have read on www.ni.com that with this version you can go from Matlab to LabVIEW without problems. However, I don't know how to do it. :headbang: I have read some web sites, and all of them say it's possible, but none explains how to do it.

    In short, if you could help me convert my Simulink model to a LabVIEW model, I would be very grateful.

    Thank you very much. :rolleyes:

    Regards,

    Ziman.

  10. No, a native implementation would not take care of this. Let me lay that myth to rest right now.

    Thanks for the clarification.

    Anyway, I asked in another thread that it would be nice to know how many users will actually use LVOOP out of the box, compared to the number who are going to wrap LVOOP in some kind of reference system. Have you made such a poll at NI?

    Regarding by-value vs. by-reference:

    If the by-value need is 99.9% (within the LabVIEW community), why hasn't anyone already created this? It should not be more difficult than creating the by-ref systems that exist today.

    Personally, I think that the current implementations show that the by-ref systems are used more, but this is just my opinion.

    I haven't really played around with LVOOP, because it doesn't support all platforms (RT) and I need that portability, but I do like what I've seen of inheritance etc.

    /J

  11. The problem is that you have to set the selection in order to set color.

    After the loop is done, set the text selection to the next write position.

    If you are going to edit a lot of text, you will have to come up with a clever scheme to reduce updates, timing maybe?

    Download File:post-5958-1159980405.vi

    /J

    Fantastic :thumbup: yes that made my code easier :)

    Your colour-the-comments VI worked great when run once, but I'm aiming at updating the colouring while typing.

    Maybe an event structure is the way to go.

    But as you see in my attachment, it constantly marks all the text, and then anything the user types replaces what he has already typed in.

  12. What kind of speed can you get using call by ref?

    Have you considered encapsulating in a non-reentrant VI for each global you need (given that you know how many you need in your application).

    /J

    Yes, LV2 globals à la LCOD. The way I use them in a very large application I have is to make them reentrant, then I call them with a Call By Reference node. Then I can have as many as I want, all by ref.
  13. Hi,

    I just took a quick look at your solution, and I think you can make this even easier.

    1. Enable the "Replace All" option and skip the loop.

    2. Remove the \ used to cancel special interpretation of / (at the beginning and at the end of the search string).

    I made a quick implementation to colorize your comments, to give you an idea how it can be done.

    Hope it works :blink:

    Download File:post-5958-1159977170.vi

    /J

    Edit: I updated the attachment so that it is saved in LV8 instead of LV8.20, sorry...

    Thanks guys I worked it out with the "Search and Replace String" function.

    The regular expression to remove CSS-style comments is : /\*[^*]*\*/ :ninja:
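
    The same expression can be tried outside LabVIEW as well; a quick Python check (illustrative only; note that this form of the pattern assumes no '*' inside the comment body):

```python
import re

css = "body { color: red; } /* old style */ p { margin: 0; }"

# /\*[^*]*\*/  : '/*', then any run of characters that are not '*',
# then '*/'. A comment containing a '*' in its body would need the
# lazy form /\*.*?\*/ instead.
cleaned = re.sub(r"/\*[^*]*\*/", "", css)

assert cleaned == "body { color: red; }  p { margin: 0; }"
```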

  14. I don't understand this. In a functional global all the members are internal to the global; there are no get and set, only unbundle and bundle at most (but that isn't necessary either). If the global is non-reentrant, then you can have a lot of different actions. However, you can also use reentrant globals and call them by a reference node; there will still be no get and set. I use this all the time, no problems yet.

    Just to confirm, you are speaking about an LV2 global that not only stores data, but also contains the methods to act on it?

    If this is the case then I agree that in the simple cases, i.e. numeric operations etc. this is the way to go.

    On the other hand, if you only use the LV2 global as storage, then you must have get/set methods to act on the data, and then race conditions are likely to occur if no locking mechanism is used.

    Using the "LV2 global with actions" approach when building instrument drivers etc. becomes way too cluttered. And you can only perform one action at a time, since actions are performed within the global itself (and the global cannot be reentrant, or must at least be encapsulated in a non-reentrant VI). Using GOOP with locking, methods can be called almost simultaneously.

    Don't get me wrong, I like and use LV2 globals all the time: to pass data, as intermediate buffers, for data filtering etc. I'm just not ready to throw locking out.

    I also don't think we would have had this discussion, if NI had gone the by_reference path instead, since the native implementation would take care of this.

    /J

  15. I don't think locking is about keeping your program executing in the right order; for me it is a way to avoid race conditions: in order to modify data you must first get the data from memory.

    With an LV2 global you can do the same thing, but then you must protect your global by having only one GMS VI, and this VI must be non-reentrant.

    LabVIEW will then schedule the GMS access automatically. If you allow more than one GMS method, the protection against race conditions is gone.

    With a get/lock method you can lock data, perform actions on the data in subsequent VIs; and then update the object when the action is done.

    This allows us to create methods that update only a single attribute, leaving the others intact, resulting in an easier API for the user compared to a single GMS VI in the LV2-global case, at least when we have many attributes to handle.

    But as I said, I'm not an expert.

    /J
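
    The get/lock + update idea can be sketched in a textual language; Python here, and the names (lock_get/unlock_set) are my own for illustration, not a LabVIEW or GOOP API:

```python
import threading

# A minimal by-reference object with a get/lock + update pair.
# lock_get hands out the data while holding the lock; unlock_set
# writes back only the changed attributes and releases the lock.
class RefObject:
    def __init__(self, **attrs):
        self._lock = threading.Lock()
        self._attrs = attrs

    def lock_get(self):
        self._lock.acquire()          # "get/lock": other callers block here
        return dict(self._attrs)

    def unlock_set(self, **updates):
        self._attrs.update(updates)   # update only the given attributes
        self._lock.release()          # now other callers may proceed

obj = RefObject(gain=1.0, offset=0.0)
data = obj.lock_get()                  # writers elsewhere block...
obj.unlock_set(gain=data["gain"] * 2)  # ...until the update completes

assert obj._attrs == {"gain": 2.0, "offset": 0.0}  # offset left intact
```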

  16. I'm by no means an expert in this, but here goes...

    If we have a counter, c, in the attributes, together with two methods

    1. Init

    Resets the counter value to 0

    2. Increase counter

    Increases the counter value by 1.

    With locking we always know that the counter will start at zero and continue counting until we call init again, e.g.

    0, 1, 2, 3, 4, 0, 1, 2, 3...

    Without locking, Increase Counter wouldn't wait for Init to finish, and the above sequence could be

    0, 1, 2, 3, 4, 0, 5, 6, 7...

    This can happen if Increase Counter does not detect the second Init.

    The program flow determines how many times Increase Counter will be called between the first and second Init, but the main thing is that we know the counter will really be a counter, i.e. contain numbers in ascending order.

    /J
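
    The locked behaviour can be sketched in a textual language (Python here, purely illustrative; LabVIEW itself is graphical):

```python
import threading

# Init and Increase Counter share one lock, so a sequence observed
# after Init always restarts at 0 -- an increase using a stale value
# cannot slip in between Init acquiring the lock and resetting.
class Counter:
    def __init__(self):
        self._lock = threading.Lock()
        self._c = 0

    def init(self):
        with self._lock:
            self._c = 0

    def increase(self):
        with self._lock:
            value = self._c        # return the current count, then step
            self._c += 1
            return value

c = Counter()
seq = [c.increase() for _ in range(5)]   # 0, 1, 2, 3, 4
c.init()
seq += [c.increase() for _ in range(4)]  # restarts: 0, 1, 2, 3
assert seq == [0, 1, 2, 3, 4, 0, 1, 2, 3]
```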

  17. Jimi,

    I tried to add an acknowledge notifier that is sent back to the creator VI. This helps a bit; the program does not hang as often as before.

    I then added a small delay in the creation loop (10~20ms), and together with the ACK-notifier the program then finished all 100 iterations.

    Maybe I can accept that you need to get an ACK before proceeding to the next step, but a delay? If the delay is too small, the program still hangs.

    Does this mean that a notifier cannot be created too soon after the previous notifier?

    Or is it the "prepare for reentrant run" that cannot be called at this rate?

    /J
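
    The ACK handshake I'd prefer over a fixed delay looks roughly like this outside LabVIEW (Python sketch; queues stand in for notifiers, and all names are illustrative):

```python
import queue
import threading

# Start-up handshake: the launcher waits for an ACK from each worker
# before firing the "start" notification, instead of inserting a delay.
def worker(ack, start, results, i):
    ack.put(i)            # "I am running and listening"
    msg = start.get()     # wait for the start notification
    results.put((i, msg))

ack, results = queue.Queue(), queue.Queue()
starts = [queue.Queue() for _ in range(5)]
for i, s in enumerate(starts):
    threading.Thread(target=worker, args=(ack, s, results, i)).start()
    ack.get()             # block until this instance is ready
for s in starts:
    s.put("go")           # no start message can be missed now

got = sorted(results.get() for _ in range(5))
assert got == [(i, "go") for i in range(5)]
```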

  18. I meant that the loop halts when it comes to a missing notification, sorry for that.

    Maybe you are right that it is a bug, but I have experienced this behaviour before, with queues/notifiers/occurrences. So it seems like good practice to always get a confirmation that a process is started and ready for operation.

    /J

    About the step 3 loop: it doesn't require the notifications to arrive in a certain order, as long as all of the return notifications arrive. It goes through them in a certain order, but if all notifications eventually arrive, no matter in which order, the loop should pass.
  19. Hi Jimi,

    I think the problem is that you fire the start notifier too soon after loading the reentrant instance.

    This causes some of the launched processes to hang, and in the end the loop (in step 3) will hang, since it expects all notifiers in a certain order.

    I tried to put a delay before the start notifier, and then it worked.

    You could just add a response notifier to be sent back after a process has started, and wait for this in your "Open Instance.vi".

    /J

    I have some problems with notifiers. The more communication I have through the notifiers, the more notifications seem to be missed, although I see no reason they really should be missed. The misses disappear if the program execution is slowed down.

    I attach the following LabVIEW 8.0 project that reproduces the problem. Open the project and run the file "Test Scalability.vi". When it's run on my computer, it runs some iterations and then freezes, because it fails to catch a notification or a notification is not properly sent. The reason it freezes is not completely clear to me. The problem may be in the notification system, or even in the scheduler, or there could be a bug in my code that I fail to find.

    EDIT: I forgot, I tested and the behaviour is still present in LabVIEW 8.20.

    Download File:post-4014-1159808271.zip

  20. If you want to get even closer to the 1000us limit, check how much idle time you have got in the TC loop.

    Then offset your non-TC loops by this value (if they are sharing the same timing source). This way collisions are less likely to occur and, in my experience, better performance is gained.

    If you really need to log data to disk, I think it would be possible with some tweaking. But I do not know how much idle time you've got left in the TC loop.

    One thing that might be important when streaming to disk on RT is that you write data in the correct block size. I think I have even seen some discussions on NI.com about that.

    If you need any more help, let us know.

    /J

    Just finished adding the code to stream the data over TCP...and after several minutes of continuous logging the largest loop rate I observed was ~1100 uSec. (as opposed to 7000-8000 using file write) and most were much closer to 1000usec. Is file writing just not able to be done deterministically? And yes, I have one pink cluster, but it's only read from once when I start a certain mode, other than that it just sits and is not written to or read from...oh, and the error cluster...everything else is pre built arrays, or single elements, few subvi's, no globals, few locals, and lots of shift registers. I tried to grab some screen shots, but like I said, the code is too large to see anything meaningful in one screen capture...

    As of now I am happy with the performance, just disappointed the 30gig hard drive on the PXI is just about useless for my application...oh well...

    thanks for the help!

  21. bsvingen's original idea is just a way to have several LV2 globals accessed by reference, but still it is just globals.

    If you need locking/protection etc., then GOOP is the way to go. In fact, if you add a locking mechanism to this LV2 global, then you are getting very close to the GOOP versions based on uninitialized shift registers.

    I'm not saying that it is bad to add protection to this implementation; I'm just saying that there is room for both.

    If you are planning to use this system as a general LVOOP core you might want to make the core VI reentrant, and wrap that in a non-reentrant VI (specific to each class).

    Otherwise you will have a dependency between classes, since method calls from different classes might collide.

    /J

    To increase the performance of the get-modify-pass sequence, I wrote a semaphore based on the ideas of bsvingen's pointer system. I submitted it to the repository; it'll appear if the admins accept it.
  22. Hi Konroh,

    You could try to stream data to the host over TCP/IP, instead of streaming data to disk. Then let the host write data to disk.

    I made a system that was running engine simulations at 2 kHz, logging all data over TCP/IP, and never had a loop running late.

    Are you using timed loops, and if you are, are they using the same timing source? By using the same timing source you have the possibility to set the loop offsets so that collisions between loop activity are less likely to occur.

    /J

  23. I don't think that the create/dispose methods are that critical, since you only call them once, but the get/set methods are called more often.

    To make the create method more or less independent of the number of elements in the array, one can implement a linked list that holds the indices of the free elements, each pointing to the next.

    To create a new pointer, just remove the first element from that list, i.e. no search for the next free element. Dispose then adds the freed pointer to the end.

    I'll see if I can find some old linked list implementation on my disks.

    /J
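
    The free-list idea can be sketched in a textual language (Python, illustrative only; a deque stands in for the linked list of free indices):

```python
from collections import deque

# Create/dispose in O(1) by keeping the indices of free slots in a
# list of their own, instead of scanning the data array for the next
# free element.
class Pool:
    def __init__(self, size):
        self.data = [None] * size
        self.free = deque(range(size))  # the "linked list" of free indices

    def create(self, value):
        i = self.free.popleft()         # take the first free index, no search
        self.data[i] = value
        return i                        # this index is the "pointer"

    def dispose(self, i):
        self.data[i] = None
        self.free.append(i)             # freed index goes to the end

pool = Pool(4)
a = pool.create("obj A")   # index 0
b = pool.create("obj B")   # index 1
pool.dispose(a)
c = pool.create("obj C")   # index 2: next free in list order
assert (a, b, c) == (0, 1, 2)
assert pool.data[0] is None
```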
