Everything posted by Mellroth

  1. I guess it is a VISA error; just open Help->Explain Error... and paste the error code into the error code field. Error -1073807343 occurred at an unidentified location Possible reason(s): VISA: (Hex 0xBFFF0011) Insufficient location information or the device or resource is not present in the system. Seems like you are using a bad address? /J
  2. bsvingen, I think the LV2 global approach is nice, but it does not allow simultaneous access; e.g. if one action involves waiting for a response from an instrument, all other actions have to wait for that response to arrive. If we could preserve the LVOOP benefits, but add synchronization, we could probably increase the overall performance and still have "almost" simultaneous access. I have included a small example of how synchronization could be achieved, but I haven't tested this with LVOOP classes. It basically adds a copy of the data to a synchronization queue, while keeping local copies in each wire branch. Please let me know what you think. Edit: I know, you will probably say that the LV2 global will outperform this in terms of performance, and you are right. I made an LV2-style core that did the same thing much faster; synchronization was down to ~250ns in the case of no update. /J Download File:post-5958-1160553720.zip
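The synchronization-queue idea above can be sketched in Python as an analogy (LabVIEW itself is graphical, so this is only an illustration; all names here are made up for the sketch). A single-element queue holds the "master" copy of the data, and each branch that modifies it keeps a local snapshot:

```python
import copy
import queue

# A single-element queue holds the "master" copy of the data; taking the
# element locks out all other branches until the element is put back.
sync_q = queue.Queue(maxsize=1)
sync_q.put({"value": 0})

def modify(update):
    master = sync_q.get()           # take the master copy (others now block)
    master.update(update)
    local = copy.deepcopy(master)   # keep a local copy for this "wire branch"
    sync_q.put(master)              # put the master back, releasing the "lock"
    return local

snapshot = modify({"value": 42})
```

The queue plays the same role as the synchronization queue in the attached example: whoever holds the dequeued element has exclusive access until the updated copy is enqueued again.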
  3. Exactly my point. If a W4N node does not keep a per-reference memory, we will see this in many places. I experienced this in LV7.1.1, but did not have time to track down the bug (I actually implemented an LV2 global that acted as a notifier to solve it). With Aristos' explanation it makes sense, but it must surely be a bug if one notifier can prevent another from waking up, in completely different locations on the block diagram. /J
  4. Aristos, what if we have a reentrant VI that acts on received notifiers, loaded N times using VI Server (a separate notifier for each instance). In this process the WFN node is encapsulated in a non-reentrant VI. Could this scenario also hang because notifiers need to arrive in the correct order? The question is really whether the W4N node shares the sequence-number memory between the processes. Do you see other scenarios where we could have this behaviour? /J
  5. It should be possible to update SW modules when running LabVIEW RT, as long as you have loaded them dynamically through VI Server: 1. update the module on the Windows host 2. download it (e.g. via FTP) to the RT target 3. unload the old version and load the new version from disk on the RT target. The system should now use the new version. /J
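The update/download/reload cycle above has a close analogy in Python's dynamic module reloading (a sketch only; the module name and file contents are invented for the example):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True      # avoid stale bytecode caches on reload
workdir = tempfile.mkdtemp()
sys.path.insert(0, workdir)

# "Download" version 1 of the module to disk, then load it dynamically.
pathlib.Path(workdir, "engine_sim.py").write_text("VERSION = 1\n")
import engine_sim
v1 = engine_sim.VERSION

# Replace the file on disk and reload, as you would unload/load on the RT target.
pathlib.Path(workdir, "engine_sim.py").write_text("VERSION = 2\n")
importlib.reload(engine_sim)
v2 = engine_sim.VERSION
```

The key point is the same as in the RT case: the running system only picks up the new version after the old one is explicitly unloaded/reloaded.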
  6. The by-ref design already exists in LabVIEW: we have VI Server, queues, notifiers, file I/O, XControls etc. For me it is therefore clear that by-ref can co-exist with dataflow. The existing by-ref GOOP implementations also show that OOP is possible with a by-ref approach in LabVIEW. Whether our by-ref wish is in line with the nature of LV, I leave to others, but since we have a multithreaded system we need synchronization. A by-ref system that solves synchronization is, in my opinion, easier to debug than a system using globals etc., since I can follow my references to find all access nodes. My wish, and many others' I believe, was that LVOOP could step in and replace all the GOOP implementations with a native one that could be optimized in terms of memory and performance. Now I hope that NI will present a synchronization scheme that can efficiently share data between wires and threads. This way we can have simultaneous access to data while still using the powers of LVOOP.
  7. I was about to make a clean test VI of the bug, but then it suddenly disappeared? Timeout values were still hard-coded to -1, and the only changes I made were the handling in case of a timeout and the type of data used in the queues. I tried to go back to the original code, but the bug was gone. If it returns I
  8. The match pattern reads: < matches the '<' character, [^>]* matches any number of characters other than '>', and > matches the '>' character. /J
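The same pattern works in any regex flavor; a minimal Python check of the three-part reading above (the sample text is invented):

```python
import re

text = "a <b>bold</b> word"
# '<' literal, then any run of characters that are not '>', then '>' literal
tags = re.findall(r"<[^>]*>", text)
```

Each match is a complete tag, because `[^>]*` stops the match from running past the first closing '>'.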
  9. I actually looked into this in LV8, since it means that new methods become public the very instant they are created. And you also get good protection for your data members. The reason I stopped working with this is that an XControl also inherits all general control attributes, so the access methods are cluttered with unnecessary information. Which also meant that I had to create wrappers to get a clean user interface. The usage of "property node" and "invoke node" is more in line with my ideas of a native OO implementation in LabVIEW, so maybe it is time to give it another shot in LV8.20. But then we might lose all the goodies in LVOOP (inheritance/dispatch etc.)? Good point! /J
  10. Hi, I'm using a single-element queue as a buffer, where each buffer update involves: 1. Dequeue element (ms timeout set to -1) 2. Enqueue the new/updated element. The updates are done in separate loops or in separate processes. The strange thing is that the dequeue function sometimes returns with timeout = TRUE (but no error). Since I'm using ms timeout = -1, this should not be possible? If I set the ms timeout to a positive value, e.g. 100 ms, everything works just fine: no timeouts detected, and buffer handling works as expected. Has anyone else experienced this? I'm at home at the moment, but will try to upload a simple test VI on Monday. /J
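For readers unfamiliar with the single-element-queue pattern described above, here is a Python sketch of the same dequeue/modify/enqueue cycle from several concurrent updaters (an analogy only; `queue.Queue.get` blocks indefinitely by default, which corresponds to the ms timeout of -1):

```python
import queue
import threading

buf = queue.Queue(maxsize=1)      # single-element queue used as a protected buffer
buf.put({"count": 0})

def update(n):
    for _ in range(n):
        state = buf.get()         # 1. dequeue: blocks until the element is free
        state["count"] += 1       # modify while no one else can hold the element
        buf.put(state)            # 2. enqueue the new/updated element

threads = [threading.Thread(target=update, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

final = buf.get()["count"]        # 4 updaters x 1000 updates each
```

Because only one thread can hold the dequeued element at a time, the read-modify-write is effectively atomic and no updates are lost.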
  11. Hi, do you have the Simulation Interface Toolkit (SIT) for LabVIEW? This toolkit allows you to interface a simulink model running either in Matlab/Simulink, or as a standalone DLL. Check out: http://sine.ni.com/nips/cds/view/p/lang/en/nid/11763 /J
  12. Thanks for the clarification. Anyway, I mentioned in another thread that it would be nice to know how many users will actually use LVOOP out of the box, compared to the number of users who are going to wrap LVOOP in some kind of reference system. Have you made such a poll at NI? Regarding by_value vs. by_reference: if the by_value need is 99.9% (within the LabVIEW community), why hasn't anyone already created this? It should not be more difficult than creating the by_ref systems that exist today. Personally, I think that the current implementations show that the by_ref systems are used more, but this is just my opinion. I haven't really played around with LVOOP, because it doesn't support all platforms (RT), and I need that portability, but I do like what I've seen of inheritance etc. /J
  13. The problem is that you have to set the selection in order to set color. After the loop is done, set the text selection to the next write position. If you are going to edit a lot of text, you will have to come up with a clever scheme to reduce updates, timing maybe? Download File:post-5958-1159980405.vi /J
  14. What kind of speed can you get using call by ref? Have you considered encapsulating in a non-reentrant VI for each global you need (given that you know how many you need in your application). /J
  15. Hi, I just took a quick look at your solution, and I think you can make this even easier. 1. Enable "Replace All" option and skip the loop. 2. remove the \ to cancel special interpretation of / (at the beginning and at the end of the search string) I made a quick implementation to colorize your comments, to give you an idea how it can be done. Hope it works Download File:post-5958-1159977170.vi /J Edit: I updated the attachment so that it is saved in LV8 instead of LV8.20, sorry...
  16. Just to confirm, you are speaking about an LV2 global that not only stores data, but contains the methods to act on it? If that is the case then I agree that in simple cases, i.e. numeric operations etc., this is the way to go. On the other hand, if you only use the LV2 global as storage, then you must have get/set methods to act on the data, and race conditions are likely to occur if no locking mechanism is used. Using the "LV2 global with actions" approach when building instrument drivers etc. becomes way too cluttered. And you can only perform one action at a time, since actions are performed within the global itself (and the global cannot be reentrant, or at least must be encapsulated in a non-reentrant VI). Using GOOP with locking, methods can be called almost simultaneously. Don't get me wrong, I like and use LV2 globals all the time: to pass data, as intermediate buffers, for data filtering etc. I'm just not ready to throw locking out. I also don't think we would have had this discussion if NI had gone the by_reference path instead, since the native implementation would take care of this. /J
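The "LV2 global with actions" idea (data and the methods that act on it living together, with only one action executing at a time) can be sketched in Python; the class and action names are invented for the illustration, and the lock stands in for LabVIEW's automatic serialization of a non-reentrant VI:

```python
import threading

class ActionEngine:
    """LV2-style 'global with actions': state and its operations in one place."""

    def __init__(self):
        self._lock = threading.Lock()   # plays the role of the non-reentrant VI
        self._data = 0

    def do(self, action, value=None):
        with self._lock:                # only one action executes at a time
            if action == "set":
                self._data = value
            elif action == "add":
                self._data += value
            elif action == "get":
                return self._data

eng = ActionEngine()
eng.do("set", 10)
eng.do("add", 5)
result = eng.do("get")
```

This also shows the limitation raised in the post: because every action runs inside the same serialized engine, a slow action (e.g. waiting on an instrument) blocks all the others.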
  17. konroh, I did a quick search on NI.com, and found the information I was talking about earlier. Take a look at: http://zone.ni.com/devzone/cda/tut/p/id/3746 for information regarding block size in LabVIEW RT. /J
  18. I don't think locking is about keeping your program executing in the right order; for me it is a way to avoid race conditions: in order to modify data you must first get the data from memory. With an LV2 global you can do the same thing, but then you must protect your global by having only one GMS VI, and this VI must be non-reentrant. LabVIEW will then schedule the GMS access automatically. If you allow more than one GMS method, the protection against race conditions is gone. With a get/lock method you can lock the data, perform actions on it in subsequent VIs, and then update the object when the action is done. This allows us to create methods that update only a single attribute, leaving the others intact, resulting in an easier API for the user compared to one single GMS VI in the LV2 global case, at least when we have many attributes to handle. But as I said, I'm not an expert. /J
  19. I'm by no means an expert in this, but here goes... Say we have a counter, c, in the attributes, together with two methods: 1. Init resets the counter value to 0. 2. Increase Counter increases the counter value by 1. With locking we always know that the counter will start at zero and continue counting until we call Init again, e.g. 0, 1, 2, 3, 4, 0, 1, 2, 3... Without locking, Increase Counter wouldn't wait for Init to finish, and the above sequence could become 0, 1, 2, 3, 4, 0, 5, 6, 7... This can happen if Increase Counter does not detect the second Init. The program flow determines how many times Increase Counter will be called between the first and second Init, but the main thing is that we know the counter will really be a counter, i.e. contain numbers in ascending order. /J
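The role of the lock in the counter example can be shown with a Python sketch (an analogy only; the original discussion is about GOOP locking in LabVIEW). The lock makes each read-modify-write atomic, so concurrent increments can never act on a stale value:

```python
import threading

counter = 0
lock = threading.Lock()

def increase(n):
    """Increase Counter: read-modify-write, protected by the lock."""
    global counter
    for _ in range(n):
        with lock:            # no other thread can read or write in between
            counter += 1

threads = [threading.Thread(target=increase, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, two threads could both read the same value and write back the same result, losing an increment, which is exactly the kind of interleaving the post's 0, 1, 2, 3, 4, 0, 5... sequence illustrates.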
  20. Jimi, I tried to add an acknowledge notifier that is sent back to the creator VI. This helps a bit; the program does not hang as often as before. I then added a small delay in the creation loop (10-20 ms), and together with the ACK notifier the program then finished all 100 iterations. Maybe I can accept that you need to get an ACK before proceeding to the next step, but a delay? If the delay is too small, the program still hangs. Does this mean that a notifier cannot be created too close to the previous notifier? Or is it "prepare for reentrant run" that cannot be called at this rate? /J
  21. I meant that the loop halts when it comes to a missing notification, sorry for that. Maybe you are right that it is a bug, but I have experienced this behaviour before with queues/notifiers/occurrences. So it seems like good practice to always get a confirmation that a process is started and ready for operation. /J
  22. Hi Jimi, I think the problem is that you fire the start notifier too soon after loading the reentrant instance. This forces some of the launched processes to hang, and in the end the loop (in step 3) will hang since it expects all notifiers in a certain order. I tried to put a delay before the start notifier, and then it worked. You could just add a response notifier to be sent back after a process has started, and wait for this in your "Open Instance.vi". /J
  23. If you want to get even closer to the 1000us limit, check how much idle time you have got in the TC loop. Then offset your non-TC loops by this value (if they are sharing the same timing source). This way collisions are less likely to occur and, in my experience, better performance is gained. If you really need to log data to disk, I think it would be possible with some tweaking. But I do not know how much idle time you've got left in the TC loop. One thing that might be important when streaming to disk in RT, is that you write data in the correct block size. I even think I have seen some discussions at NI.com about that. If you need any more help, let us know. /J
  24. bsvingen's original idea is just a way to have several LV2 globals accessed by reference, but it is still just globals. If you need locking/protection etc., then GOOP is the way to go. In fact, if you add a locking mechanism to this LV2 global, you are getting very close to the GOOP versions based on uninitialized shift registers. I'm not saying that it is bad to add protection to this implementation; I'm just saying that there is room for both. If you are planning to use this system as a general LVOOP core, you might want to make the core VI reentrant, and wrap it in a non-reentrant VI (specific to each class). Otherwise you will have a dependency between classes, since method calls from different classes might collide. /J
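The "reentrant core wrapped in a per-class non-reentrant VI" structure can be sketched in Python (purely illustrative names; the per-instance lock stands in for the non-reentrant wrapper VI, and the shared pure function stands in for the reentrant core):

```python
import threading

def core(data, op, value=None):
    """Reentrant core: a pure function, safe to call from any class at once."""
    if op == "set":
        return value
    if op == "add":
        return data + value
    return data  # "get"

class ClassGlobal:
    """Per-class wrapper: its own lock, so different classes never block each other."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = 0

    def call(self, op, value=None):
        with self._lock:        # serializes calls within this class only
            self._data = core(self._data, op, value)
            return self._data

a, b = ClassGlobal(), ClassGlobal()
a.call("set", 5)
b.call("set", 100)
r = a.call("add", 1)
```

Because each wrapper has its own lock, method calls on `a` and `b` cannot collide, which is the dependency-between-classes problem the post warns about when a single non-reentrant core is shared.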
  25. Hi Konroh, you could try to stream data to the host over TCP/IP, instead of streaming data to disk, and then let the host write the data to disk. I made a system that was running engine simulations at 2 kHz, logging all data over TCP/IP, and never had a loop running late. Are you using timed loops, and if you are, are they using the same timing source? By using the same timing source you have the possibility to set the loop offsets so that collisions between loop activity are less likely to occur. /J