
Everything posted by OlivierL

  1. Hi Logman, As you suggest, the VC can (should?...) be merged into one VI in my opinion. We design most of our code with the VC merged into one VI (actually, often multiple VIs for different interfaces) while the Model is really just doing its core task. Also, we try not to use the GUI of the Model/Actor directly so it can run in a higher-priority thread instead of in the UI thread. It makes a difference for us in some cases on Windows, and it helps us when using our libraries on Real-Time targets: since all the GUI is already clearly separated, we are able to easily debug from any PC (assuming your actors support some kind of TCP communication) even when the target is running independently.
  2. We ran into a situation many months ago when a new device was added to a system and communication was done through a cheap FTDI interface chip. We ended up having BSODs at random intervals ranging from minutes to hours. It was easy to troubleshoot and we replaced the adapter with an industrial-grade USB-serial device (Moxa) before switching to a PCIe card (because it was cheaper). That simple change fixed everything. We'll never know what the driver was doing, but there are many things that might go wrong with Prolific/FTDI, including the counterfeit parts that others have mentioned. I'm definitely bookmarking this thread for future discussions with clients who don't like to invest in quality hardware. It seems like most of us have had our share of stories with cheap USB-serial interfaces. Tim, all this being said, if you have an error on your VISA Close, there may be something that needs to be addressed in your code. If you can post it, someone will likely look at it and provide better feedback. Finally, if you can mention the error codes that you are seeing, it makes things easier to troubleshoot and helps others find this thread if they face the same problem.
  3. We have been having the same question that Jim brought up (almost a year ago) at our firm for a few months, and I finally got around to reading on the topic this morning. After reading a few other interesting threads, I found this one and thought I would bring it up again to find out how 2014 (SP1) and 2015 (so far) have fared for other developers, whether some of those issues have been resolved, or whether any new issues have been discovered. Thanks to all of those who contributed and shared their experiences. Based on all this information and the known problems with RT (the posts seem to only point to cRIO, not PXI RT failures...), I think we will move forward and benefit from the easier SCC management and smaller commits. For those who have had the issues with the cRIO, was it with the newer Linux-based RT or with the older VxWorks (and Pharlap)? Does the OS (and the compile chain) make any difference in the number of issues that you faced?
  4. I agree with smithd in that we always use more than one communication method. We often have multiple network streams and shared variables at the very least. Beyond that, we sometimes implement other forms of messaging, but unless your system is slow enough to rely only on SVs, I would expect you'll need at least two methods and multiple "instances" of each. Hoovah is correct about the need for a third application, but unlike the PC and RT applications, unless you need custom logic, the FPGA program is usually fairly simple, with basic I/O access and DMA FIFO access. You still have to write it, but it is mostly copy-and-paste from existing examples, with little more required. Do not assume that the FPGA requires a large effort if you will only stream AI/AO with the RT portion.
  5. If the call to PostLVUserEvent() becomes blocking as you mentioned, instead of just "broadcasting" the information to any VI that registered for it, how can this handle multiple VIs registering for the same callback?
  6. We also use many classes where we create a DVR for each copy of an object and allow different sections of code to access the same object and directly call public "ByRef" methods. We find that this works well for certain applications. In the future, we will likely change this to the method that you describe, where the object itself is passed "By Value" and its private data is a DVR (a rough sketch of the idea follows below). This will help us with inheritance and dynamic dispatch for certain drivers. The lack of a "Timeout" in the DVR IPE structure is the most frustrating part of that feature to me. During development, such hangs sometimes happen, and I would much rather get a specific error code than have my code hang forever! In that respect, the SEQ is superior. The other issue that we have been facing is the lifetime of the DVR. Keep in mind that it is associated with the top-level VI in the call chain when the "New Data Value Reference" was created, and NOT with the VI that actually created it (such as an FGV used to pass the DVR to different sections of code). Therefore, if that top-level VI goes idle, you will lose your DVR, because LabVIEW will automatically clean it up even if many other VIs still access the object and even if the SubVI that called the "New Data Value Reference" is still in memory. This is also valid for TCP/IP connections and any other type of reference. You have to be especially careful with this if you tend to start and stop VIs while your application is running, and at shutdown you have to stop your VIs in the right order. About messaging, we also use that method, but I find that being able to call the object's methods directly makes it a lot easier to handle sequential commands and any errors that may happen during the method's execution.
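A rough Python analogy of the "object passed around, private data behind a reference" idea, with the lock timeout that the DVR IPE lacks; LabVIEW diagrams cannot be shown in text, and the class and field names here are invented for illustration only:

```python
import threading

class SharedCounter:
    """Private data behind one lock; handing this object to several parts of
    the code gives them "ByRef" access to the same data."""
    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0                      # the shared "private data"

    def increment(self, timeout=1.0):
        # Unlike the LabVIEW DVR IPE, acquire() accepts a timeout, so a
        # circular dependency raises an error instead of hanging forever.
        if not self._lock.acquire(timeout=timeout):
            raise TimeoutError("could not lock the shared data")
        try:
            self._count += 1
            return self._count
        finally:
            self._lock.release()

counter = SharedCounter()

def module_a(c): return c.increment()
def module_b(c): return c.increment()

print(module_a(counter), module_b(counter))  # -> 1 2, both touched the same data
```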
  7. It works for me if you first run the SubVI. Look at modified files. Does this solve what you are trying to achieve? subvi.vi main.vi
  8. Hi Thomas, considering where you are at right now, I would recommend implementing automated tests starting at the system level, because creating unit tests for a significant subset of your 5000 VIs will take a lot of time. After creating the system-level tests, I would then look into getting more test cases for each individual part of your application. For the implementation, we have used UTF in the past, but we usually create our own VIs for system-level tests, as the setup can be quite complex and we do not feel that UTF offers us much of a benefit at that level. What we do is add a script, run by the OS every night (in your case you would have a slightly different script running on three PCs, or more likely three VMs, since VMs let you share HW resources more easily), that: 1 - updates the SVN folders that are part of the test; 2 - calls a VI built into an executable (which you could compile independently on each platform) that performs the system-level test while monitoring RAM and CPU usage in the background; 3 - calls another script/application to parse all the result files and create a separate report, including graphs of the logged data overlaid on top of known good results. (A rough sketch of such a nightly script is shown below.) We do this for values that are harder to analyze automatically but which a human eye can assess at a glance the following morning. Since our error handling includes an FGV holding all the errors/warnings generated, we can easily include those in the reports along with all other results and make them part of our pass/fail criteria. One thing to keep in mind is that the task often looks daunting, but you have to start somewhere. I find it a lot easier to start with a limited scope that grows over time. With this incremental process, creating your first set of tests is more manageable, and you can add more test cases as problems are discovered and you want to make sure they are covered in future releases. When implementing automated testing after the fact, we usually begin creating unit tests as we find the most critical sections that can break, and as we make modifications to existing code. All new code changes should include implementing proper unit tests for the given module. Hope this helps.
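Purely as an illustration of the nightly job described above, here is a minimal Python sketch; the paths, the SystemTest.exe name, and the CSV result format are hypothetical placeholders, not the actual setup:

```python
import csv
import datetime
import glob
import pathlib
import subprocess

WORKDIR = pathlib.Path(r"C:\tests\system")          # hypothetical test folder
RESULTS = WORKDIR / "results"

# 1 - Update the working copy holding the test code and configuration.
subprocess.run(["svn", "update", str(WORKDIR)], check=True)

# 2 - Run the built test executable; it is assumed to write CSV result files
#     into RESULTS while monitoring RAM/CPU in the background.
subprocess.run([str(WORKDIR / "SystemTest.exe")], check=True, timeout=4 * 3600)

# 3 - Parse the result files and produce a simple pass/fail summary report.
summary = []
for path in glob.glob(str(RESULTS / "*.csv")):
    with open(path, newline="") as f:
        failures = [row for row in csv.DictReader(f) if row.get("status") != "PASS"]
    summary.append(f"{pathlib.Path(path).name}: {len(failures)} failure(s)")

report = RESULTS / f"report_{datetime.date.today()}.txt"
report.write_text("\n".join(summary))
```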
  9. We are quite fond of TDMS here as well. Read speeds can definitely be an issue but, as pointed out by Manu, SSDs help a lot. Also, we have not tested it yet, but in the 2015 version the API now includes a "TDMS In Memory" palette which should offer very fast access if you need it in your application, without having to install external tools such as a RAM disk. As an aside, another tool we really like for viewing TDMS files is DIAdem. We use it mostly as an engineering tool, as we've had issues with the reporting feature in the past. It is a LOT faster and easier to use than Excel when it comes time to crunch a lot of data and look at many graphs quickly. Unfortunately, at the moment it doesn't support display of 4D graphs, but I posted a question on the NI Forum about a possible way to implement such a feature through scripts. We don't have the skills or time to do it internally at the moment, but I would really like to know if anyone has created such a function and wants to share it. There is also a KB that you can look at here, but I do not think that it will meet your requirement for 4D display.
  10. The files located in "C:\Program Files (x86)\National Instruments\LabVIEW 2013\vi.lib\utf" belong to the Unit Test Framework. It is one of the components that you can choose to install with LabVIEW. Open the installer again and try including it as part of the LabVIEW install (you shouldn't need to reinstall everything, just this new component). Afterward, the mentioned file should be found. I am not sure why this is a dependency of your project, though, but hopefully it solves your problem. Good luck.
  11. Never done that but if I had to try, I would likely start by enabling the ActiveX server in the Build properties of the EXE you want to be able to load inside the other. This assumes that you are in the Windows environment though. Good luck and let us know if you succeed. That could definitely be useful one day.
  12. So I assume that this would be similar to a (buffered) network shared variable that automatically adapts to the data type (vs. having to configure it in a separate window) and possibly the buffer depth. From a representation on the BD, however, how would that translate into any gains? I think your point is valid if we take it to a system-level design, where relationships can be drawn at a higher level and where the Host and cRIO are both "sub VIs" of the system-level representation. But then again, the caveat I see is that I will likely have multiple such communication methods (so many wires between the two target representations), and again, why can I not acquire the queue/stream/event on the system-level diagram and pass a reference using dataflow to both top-level VIs (Host and cRIO)? I think that this could help the design process: if the same queue reference could simply be passed to both targets and LabVIEW handled all the complex TCP/connection/encryption/authentication for me, that would be awesome. (A rough analogy of that "create once, pass to both loops" idea is sketched below.) However, does it really need to be an "asynchronous" wire that does not follow the dataflow paradigm on the block diagram and crosses structure boundaries differently from any other wire? Maybe hovering over a queue, or a right-click menu option, could let you see what other VIs have access to the queue, including on every target, and quickly identify any Enqueue/Dequeue operation done on the given reference. This could be offered at design time as well, since the code is already compiled.
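As a loose, text-only analogy of the point above (a LabVIEW diagram cannot be pasted here), this Python sketch creates the queue once at the top level and passes the same reference to both loops, instead of relying on an implicit channel:

```python
import queue
import threading

def producer(q):
    for i in range(5):
        q.put(i)
    q.put(None)                  # sentinel: tell the consumer to stop

def consumer(q):
    while True:
        item = q.get()
        if item is None:
            break
        print("received", item)

q = queue.Queue()                # acquired once, at the "system level"
loops = [threading.Thread(target=producer, args=(q,)),
         threading.Thread(target=consumer, args=(q,))]
for t in loops:
    t.start()
for t in loops:
    t.join()
```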
  13. I agree with you, from our design perspective, that the Timeout should not be used as a regular lock mechanism and that a SEQ or semaphore is a better way to do it. However, it is probably up to the developers to decide how they would like to use that new option. We would use it as a protection and a simple way to prevent complete system freezes, but it could offer more benefits that we do not foresee at the moment. Yes, I guess that the fact that "ByRef" objects cannot use dynamic dispatch is another example of R&D's reluctance toward references! But even if the DVR is inside the private data of the object, you could absolutely run into the same circular-reference bug that I describe. That does not protect you in any way, does it? So you don't have a reference to an object, you have the object itself and the private data is a queue, right? This is very similar to including the DVR inside the private data. It is definitely another way to implement a similar solution, and it is protected against circular-reference hangs given a proper timeout. One benefit of the DVR access method is that, for objects which own large amounts of data (large arrays for graphing, for example), there is no need to make memory copies in the way that the SEQ does. That was also one of our reasons to choose the DVR and ByRef methods for certain objects.
  14. Yes, and those are great because there is a timeout feature that allows the application to continue, recover, and provide useful information for fixing the problem. Do you also mean storing the object itself in a SEQ instead of sharing a DVR? Does that mean that all of your class methods must be surrounded by the Dequeue/Enqueue? (A rough sketch of that SEQ pattern is shown below.) I don't think that the idea is to use a DVR to lock a resource. For many of our objects, we call methods on them using DVRs (ByRef). Every now and then, we realize that two objects end up with a circular dependency that was not expected and we experience a deadlock that just cannot be recovered from. I appreciate that limiting the use of DVRs to basic accessors reduces the risk, but it also restricts us from using more useful methods that are properly protected inside the object. Drawing the line between reducing the risk and protecting class methods can be challenging. Are there any VI Analyzer tests that automatically check for an IPE inside an IPE where both have a DVR? It may not prevent the problem, but it could help locate it.
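For readers who have not seen the single-element-queue (SEQ) pattern, here is a rough Python sketch of the idea with made-up data; the one-slot queue plays the role of the SEQ, and the get() timeout is what makes a lock-up recoverable:

```python
import queue

seq = queue.Queue(maxsize=1)                 # single-element queue
seq.put({"count": 0})                        # the shared object lives in the queue

def increment(seq, timeout=1.0):
    try:
        obj = seq.get(timeout=timeout)       # "Dequeue": lock, but with a timeout
    except queue.Empty:
        raise TimeoutError("object is locked somewhere else")
    try:
        obj["count"] += 1                    # the protected "method" body
        return obj["count"]
    finally:
        seq.put(obj)                         # "Enqueue": put the object back, unlock

print(increment(seq), increment(seq))        # -> 1 2
```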
  15. I heard about this problem before and personally ran into it quite a few times: DVR access doesn't have a timeout option, which means that if the code in DVR A uses DVR B and, "by mistake", DVR B attempts to use DVR A, your code just hangs forever (a sketch of that scenario follows below). I am quite surprised to see that I cannot find a post about this issue on LAVA. I could find one idea on the Idea Exchange, but it is not very popular: http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Timeout-for-In-Place-Element-Structure-with-DVR-nodes/idc-p/3181029#M31895 Are we a very small minority to use DVRs and IPEs with objects and to sometimes run into circular references during development? Are others actually implementing their own semaphores manually before the IPE structure? Is there a clean and simple solution that I am not aware of to circumvent this problem? I am curious to know how people are faring with this issue and, if more people are interested, to make the suggestion from Chris more popular.
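To make the circular-reference scenario concrete, here is a small Python sketch where "DVR A" and "DVR B" are simply two locks taken in opposite orders; the acquire() timeout is exactly the kind of escape hatch the Idea Exchange suggestion asks for:

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)                       # make the overlap (deadlock) likely
        if second.acquire(timeout=1.0):       # the timeout the DVR IPE is missing
            second.release()
            print(name, "finished normally")
        else:
            print(name, "timed out instead of hanging forever")

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "A then B"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "B then A"))
t1.start(); t2.start()
t1.join(); t2.join()
```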
  16. I was very skeptical when I first heard about those channel wires last March. After checking them out and reading this thread, I am still of the opinion that those wires would not help me or my team program better applications, nor program faster. As others pointed out, they do not offer any benefit in applications where VIs are dynamically launched, which is the main reason I need to acquire the same reference multiple times in the first place (or use an FGV, DVR, ...). Otherwise, I can acquire the Queue Ref or the User Event outside of parallel loops and respect the dataflow paradigm by passing the reference. Anyhow, I am very curious to see what others might do with them. This being said about development, I could see a benefit to this tool for debugging, like CraigC suggested. If I could, at run time, select a Queue or User Event on a wire and then visually see who is enqueuing/generating events, I think that this could save me time. Imagine a "Highlight Execution" that is only activated when data is produced/consumed to/from the selected Queue/Event, lasts for a few seconds until the data gets onto this "Channel Wire" that appears on the BD, and then continues the highlight execution for a few seconds on the consumer side, again with the "Channel Wire" appearing momentarily to visually show the "real" dataflow. Alternatively, if those wires could be displayed automatically during debugging when the execution is paused, to show at run time every VI that currently holds a given Queue or Event reference and is consuming/producing in real time, that could be a good ally to troubleshoot communication problems.
  17. Thanks Mark. I missed many of those sessions so I really appreciate you sharing with the community! By the way, is there a reason why you did not upload any videos from the CLA summit?
  18. I understand why you would have wanted a progress bar in the past. We created something similar with a warning before the operator launched the process. I'll play with the new functions later to see about the performance, but it would be nice if someone from NI could give us some details about the implementation. I should probably post on ni.com for that...! The basic help files do not offer any extra information.
  19. I'm quite happy about the TDMS in-memory functions, which spare the need to use RAM disk programs, but even more about the "Delete from TDMS" file feature. Those files can become very large and, until now, you had to recreate the file with only the channels that you wanted to keep, which could take a very long time. I remember having a discussion with a couple of NI's engineers at NI Week 2014 about that feature. Even though they did not seem convinced after our discussions, I'm glad they found other reasons to implement it. I wonder how the feature was implemented: either as a "real" delete, or by recreating a new file under the hood (which can take a long time). The likely trade-off of the faster approach is that the freed space is hard to recover unless you "defrag" the file, but that capability does not seem to have been added. I look forward to trying this on a future project.
  20. Yes, your assumptions are correct. Our Sub VIs have standard Error handling (case structure) so not much more happens until the current case completes and the error handling case executes on the next iteration of the While loop. From there, we can either attempt a recovery, ignore it or request the main application to restart or to shut down safely. One thing I mentioned and strongly recommend to others is to include the case where the error was generated when reporting the error. It made our life a lot easier when we added that information instead of simply offering the name of the VI where the error occurred. Another nice thing when debugging is that you can still put a break point in the error case and quickly backtrack to the faulty case (assuming Retain Wire Value is enabled...) and understand what happened. It's nice that we came to similar conclusions but I would be interested in hearing from others who do things differently.
  21. I find this thread quite interesting, as this is a good problem that we have all been facing for a long time. Our team did some research last year to try to create a unified error handler, similar to the SEH, that would meet all of our requirements but, in the end, after testing a few different flavors, we chose to integrate our error handling as a separate state in our templates. We do this in a way very similar to the JKI State Machine template, but we include a few more parameters, such as which case the error was generated in (to facilitate debugging), as well as automatic logging and display in a non-blocking manner. Based on the error code, we can either let a default behavior programmed in a SubVI run, or override the handling of any specific code or range of error codes on a per-code-module basis. (A rough sketch of that pattern is shown below.) We could not find any other way that would offer us the flexibility and reliability that we need in our projects. We find that by including it in this way, as part of our template, our developers were much more inclined to add the proper error handling code, as it was easy to do so. Moreover, that specific code is visible on the BD (not hidden in an Express VI). With this approach, it is easier to handle specific errors in each module/driver, and it is almost "synchronous" (in between cases). This seems better than generating an event when an error occurs, which would still only be handled when your code gets back to the event structure. As for the format, we find that the current cluster offers enough information, even though the default "tags" in the Source element are not obvious at first. The idea of objects and a more complex error handler built into a smarter SubVI is nice, but we have given up on finding the silver bullet and went for a practical approach that works well in our code. We'll definitely keep our eyes open in case something better comes around...
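This is not our actual LabVIEW template, but a rough Python sketch of the same idea: a state machine with a dedicated error state that records which case raised the error and lets the caller override handling per error code; all names are invented for the example:

```python
def default_error_handler(code, source_state, msg):
    print(f"error {code} in state '{source_state}': {msg}")   # log / display
    return "done"                                             # default: stop safely

def run_state_machine(handlers, overrides=None):
    state, error = "init", None
    while state != "done":
        if error is not None:
            code, source_state, msg = error
            handler = (overrides or {}).get(code, default_error_handler)
            state = handler(code, source_state, msg)           # e.g. retry or shut down
            error = None
            continue
        try:
            state = handlers[state]()                          # run the current case
        except RuntimeError as exc:
            # remember which case the error came from, as suggested above
            error = (getattr(exc, "code", -1), state, str(exc))

def init_state():
    return "acquire"

def acquire_state():
    raise RuntimeError("sensor timed out")                     # simulated failure

run_state_machine({"init": init_state, "acquire": acquire_state})
```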
  22. That happened to me previously as well, when a VI became corrupted and LabVIEW didn't want to open it. I don't recall the details of the problem, but I remember that I was fortunately able to open the VI in a more recent version of LabVIEW (I think the corrupted VI was in LV2011 and I opened it with LV2014), and then saving it back to 2011 did the trick and allowed me to recover all of my changes. It might be worth a try if it happens again and if you're not already in the most recent version of LV! And for the BSODs, yes, I still see them every now and then in Win 7. I think that they are mostly linked to faulty third-party instrument drivers, however.
  23. I miss those podcasts as well. I would be very happy if Michael wanted to produce new episodes and would definitely listen to them. With NI Week approaching, it seems like a good time to make new ones!
  24. Thanks to all the organizers of this event! Looking forward to attending again this year.
  25. I know this is an old thread, but I recently faced the exact same problem, where killing the LabVIEW task simply wouldn't work. It was caused by a driver issue (USB-RS232 Prolific 2303). In the development environment, the application would hang forever in a VISA Read and, in the executable format, the application would just appear to freeze. The only thing that would free the execution was to disconnect the USB device, after which VISA Read would return an error. Everything went away once we selected a different RS232 adapter (both a MOXA UPort and a StarTech PCIe adapter solved the issue). We saw a few Blue Screens of Death over that development period with the Prolific IC. Since your application seems to be calling a USB instrument, it is possible that its driver is also the root cause of your issue. If the instrument is working properly until you close your application, make sure that you call the proper functions to close the driver properly so it can stop executing (see the sketch below for the general idea). If you also see some strange behaviors during execution, consider getting a better device/driver.
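As an analogy only (the original issue is in LabVIEW's VISA nodes, not Python), this PyVISA sketch shows the same discipline: a finite read timeout so a dead adapter returns an error instead of blocking, and an explicit close of the session; the resource name is a placeholder:

```python
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")      # hypothetical USB-serial port
inst.timeout = 2000                          # ms: reads error out instead of hanging

try:
    print(inst.query("*IDN?"))
except pyvisa.errors.VisaIOError as err:
    print("VISA error:", err)
finally:
    inst.close()                             # release the port so the driver can unwind
    rm.close()
```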