Everything posted by Manudelavega

  1. Definitely. The worst thing about the LabVIEW 1- or 2-button dialogs is that they use the root loop, so while one is displayed, many functions such as Open VI Reference have to wait their turn to access the root loop, causing very undesirable behavior! A few months ago I tracked down and replaced all those dialogs with my own standard dialog box.
  2. The factory pattern is great and I like to use it all over the place. I think your approach is good, and the variant method definitely works; I've seen it many times before. However, it requires you to be extra diligent, since a slight mismatch between what was passed inside the variant and the way it is cast back into usable data would generate an error that could be tough to troubleshoot down the road. Unless you use a typedef, modifying the data type on one side won't automatically modify the other side. One alternative would be to make the initialization data itself an object. You could have an abstract class "initialization data", and then child classes such as "InitDataClassA" and "InitDataClassB". Look at the decorator pattern, it might give you some ideas. Of course you're adding another layer of LVOOP, so there is a complexity trade-off.
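Since LabVIEW code can't be quoted as text, here is a minimal Python sketch of the typed init-data idea from the post above. All class, field, and factory names (`InitData`, `InitDataClassA`, `channel`, etc.) are illustrative assumptions, not from any actual codebase; the point is that a type mismatch fails loudly instead of silently corrupting a variant cast:

```python
from dataclasses import dataclass

# Abstract parent, playing the role of the "initialization data" LVOOP class.
@dataclass
class InitData:
    pass

# One child per concrete class, so each class declares exactly what it needs.
@dataclass
class InitDataClassA(InitData):
    channel: int = 0

@dataclass
class InitDataClassB(InitData):
    resource_name: str = ""

class Instrument:
    def init(self, data: InitData):
        raise NotImplementedError

class ClassA(Instrument):
    def init(self, data: InitData):
        # Explicit type check replaces the fragile "variant to data" cast:
        # a mismatch raises immediately instead of yielding garbage later.
        if not isinstance(data, InitDataClassA):
            raise TypeError("ClassA expects InitDataClassA")
        self.channel = data.channel
        return self

def factory(kind: str, data: InitData) -> Instrument:
    # Minimal factory: map a key to a concrete class, then initialize it.
    registry = {"A": ClassA}
    return registry[kind]().init(data)

obj = factory("A", InitDataClassA(channel=3))
```

The trade-off from the post shows up here too: every new concrete class needs a matching init-data class, which is more scaffolding than a single variant but far safer.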
  3. This means this then that? True, I'm surprised your link still doesn't mention Windows 10!
  4. According to that link and this link, LabVIEW 2015 is officially supported by Windows 10.
  5. This thread is not a question, I just wanted to share the experience I gained today by troubleshooting our application. Symptom: Engine A encounters an error (expected, so no problem so far) and displays it to the user. Engine B, which is totally unrelated to engine A, freezes, and only comes back to life after the user acknowledges the error message from engine A. Consequence: The software engineer (aka me) is pulling his hair out and yelling "what the h*** is going on in here?" Then he does some diligent troubleshooting and finds the culprit. Explanation: Engine A calls the "Simple Error Handler" VI, which itself calls the "General Error Handler" VI. This VI analyzes the error and opens a pop-up when there is an error to display. Engine B calls a subVI, which calls a subVI... which calls a subVI which calls "General Error Handler". This subVI doesn't have any error, but still calls "General Error Handler" because it knows that if there is no error, "General Error Handler" will simply return without doing anything. Problem: "General Error Handler" is not reentrant, meaning that while it's busy waiting for the pop-up it opened to be closed, it can't be used by the sub-sub...subVI of engine B. Therefore engine B is in a frozen state. Conclusion: Those error handlers are a great quick tool for creating super basic applications, but not appropriate at all for large, professional applications. I'm pretty sure some of you are thinking "Well duh, we've known that since LabVIEW 1.0!".
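The mechanism behind that freeze can be sketched in Python: a non-reentrant subVI behaves like a function guarded by a single shared lock, so a caller with no error still queues up behind a caller whose pop-up is open. This is an analogy, not LabVIEW's actual implementation; all names and timings below are illustrative:

```python
import threading
import time

# One shared lock models the single, non-reentrant instance of the handler VI.
_handler_lock = threading.Lock()

def general_error_handler(error, on_error=lambda e: None):
    # Non-reentrant: only one caller at a time, even when there is no error.
    with _handler_lock:
        if error is not None:
            on_error(error)  # stands in for the blocking pop-up dialog

events = []

def engine_a():
    def show_popup(err):
        time.sleep(0.2)  # pop-up stays open until "acknowledged"
        events.append("A popup closed")
    general_error_handler("boom", on_error=show_popup)

def engine_b():
    time.sleep(0.05)             # B calls in while A's pop-up is still open
    general_error_handler(None)  # no error, yet B still blocks on the lock
    events.append("B done")

ta = threading.Thread(target=engine_a)
tb = threading.Thread(target=engine_b)
ta.start(); tb.start(); ta.join(); tb.join()
# events shows B finished only after A's pop-up closed, despite having no error
```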
  6. Old thread, but it describes exactly what I've been working on for the last 3 weeks! I understand the concept of loading VIs into memory in the dev environment. But what about building an executable? Let's say I have some VIs in an lvlib that are not used anywhere in a certain application I'm going to build. But since other VIs in that lvlib are being called, I see the lvlib and all its members in the dependencies. The unused VIs won't be loaded into memory, but are they going to be included in the executable?
  7. I started a new thread because I didn't want to hijack the great article about dependencies between classes: https://lavag.org/topic/19421-visualizing-dependencies-between-labview-classes/ Historically our code has contained numerous circular dependencies, where a member of library A would call a member of library B, and another member of library B would call another member of library A. As you know, this situation isn't great, for at least 2 reasons (but I'd love to hear even more reasons from you): 1) There is no way to load just a basic library in a small project without loading almost all the source code of the application 2) There is no hope of being able to switch to Packed Project Libraries one day After many days, I managed to refactor the code and get rid of almost all of the circular dependencies. I'd like a visual way to show the difference between the "before" and "after". The VI Hierarchy tool does show that, but there is just way too much going on to really make sense of what we're seeing. I'd like a similar tool that would only show the dependencies between libraries, without the details of the libraries' content. Do you know of such a tool? Thanks!
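Short of a visualization tool, the library-level check itself is straightforward once each library is reduced to the set of libraries it calls into. A Python sketch using depth-first search; the adjacency map is hypothetical input that you would have to extract from your .lvproj/.lvlib files yourself (extraction not shown):

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of library names, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {lib: WHITE for lib in deps}
    path = []

    def dfs(lib):
        color[lib] = GRAY
        path.append(lib)
        for dep in deps.get(lib, []):
            if color.get(dep, WHITE) == GRAY:
                # Back edge: dep is already on the current path -> cycle found.
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = dfs(dep)
                if found:
                    return found
        path.pop()
        color[lib] = BLACK
        return None

    for lib in deps:
        if color[lib] == WHITE:
            cycle = dfs(lib)
            if cycle:
                return cycle
    return None

# Illustrative map: LibA and LibB depend on each other, LibC only on LibA.
deps = {"LibA": ["LibB"], "LibB": ["LibA"], "LibC": ["LibA"]}
print(find_cycle(deps))  # ['LibA', 'LibB', 'LibA']
```

Running this before and after the refactoring would give a concrete "no cycles remain" check to accompany any before/after picture.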
  8. Thank you Rolfk! It seems our best move is to use 2 PCs (whether through virtual machines or not). We do have to guarantee quality, since those builds go to customers.
  9. Thanks, I think you understood my question, but I thought I would post a picture to be a bit more explicit about what I intend to do:
  10. Dear all, My team and I are in the process of migrating our application from LV2011 to LV2015. During this process, we need to be able to make 2015 builds for testing purposes while continuing to build 2011 releases for our current customers. We have a dedicated build PC and I initially wanted to use it to build both versions. The issue is: version 15.0 of many drivers such as NI-CAN, NI-XNET, and NI-VISA doesn't support LV2011. When I installed version 15.0, the libraries were actually removed from the vi.lib folder of LV2011. The thing is, for the LV2015 version I do want to use the latest libraries (15.0). Before trying to reach a solution, I would like to understand one thing about those libraries: when I install them, they do 2 things: (A) Install the libraries in the vi.lib folder to be used by the development environment (development PC) (B) Install some resource files to be used by the run-time engine (deployment PC, aka customer PC) Here is the question: Can I use version 14.0 for (A) and 15.0 for (B)? I'd guess not, but I dare to ask naive questions, you never know... If I want to keep my ideal scheme (14.0 for LV2011 and 15.0 for LV2015), the 2 solutions I see would be: 1) I copy the folder from the vi.lib of another computer which has 15.0 installed to the LV2015 folder of the build PC which has 14.0 installed (so I don't install 15.0 per se) 2) OBVIOUSLY I get a second build PC Have you ever had to deal with those issues? Thanks!
  11. They probably want you to compare total cost and the total input. Use a "greater or equal to" function which will generate a Boolean and wire this Boolean to the Select input: If Input >= Cost then Change Due = Input - Cost and Additional Money Needed = 0 If Input < Cost then Change Due = 0 and Additional Money Needed = Cost - Input
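The Select-based logic above, written out in Python for clarity (the function and parameter names are just illustrative):

```python
def vending_change(total_cost, total_input):
    """Return (change_due, additional_money_needed).

    Mirrors the LabVIEW diagram: a >= comparison drives a Select node,
    so exactly one of the two outputs is nonzero.
    """
    if total_input >= total_cost:
        return total_input - total_cost, 0.0
    return 0.0, total_cost - total_input

# Example: item costs 1.50, user inserts 2.00 -> 0.50 change, nothing owed.
print(vending_change(1.50, 2.00))
```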
  12. Our plan has been delayed until further notice, so we haven't made the move yet. But thanks for your valuable input.
  13. Hi, I am planning to buy a few books for my company, to be shared within the software team. On my list are already the classic and must-have LabVIEW Style Book by Peter Blume and LabVIEW For Everyone by Jeffrey Travis and Jim Kring. If I were to buy a 3rd (probably not right now), I would consider Effective LabVIEW Programming by Thomas Bress, but since it's more recent (2013), I haven't seen much feedback about it. Has anyone read it and can comment on it? Any other suggestions welcome!
  14. Fabiola De La Cueva made a great presentation about this at NIWeek: https://decibel.ni.com/content/docs/DOC-43414
  15. CLA practice exams are good examples of how the MVC diagram can be applied.
  16. There is rarely feedback on practice exams on LAVA. You'll have better chances here: http://forums.ni.com/t5/Certification/Sample-Exam-Solutions-for-Review/td-p/1824703/page/27
  17. I would even suggest removing this post. The list of potential actual exams shouldn't be posted anywhere.
  18. Glad to announce that I successfully passed my CLA! Thanks crossrulz for your guidance!
  19. I see, thanks for the tips. So far I already have an XNET solution, but on the DAQmx side, on a tethered cDAQ. So I will have to migrate this code to the LVRT side.
  20. Thanks everybody. Well, in our case we'll also need Ethernet, CAN, and RS232 communication in order to send commands to the different hardware devices in a fast and deterministic manner. I don't know yet if the Scan Engine supports those or if we'll have to write FPGA code for them. Now that I think about it, Ethernet and RS232 ports might be available on the RT controller and be directly accessed through LVRT? Only the CAN might be a module in the chassis and therefore require the FPGA layer?
  21. Thanks a lot smithd. My application is fairly complex and there are probably like 30 subpanels if I count subpanels of VIs inserted inside subpanels of other VIs and so on... So stripping down the subpanels is just not an option. From your answer, it seems I'll need to split my application into 2: one HMI for the PC, one RT for the cRIO or PXI. For the cRIO I understand that the HMI and the RT can communicate through shared variables or network streams. But what about the PXI? Is it a common practice to have an RT application in the PXI controller and an HMI on a PC? And how would those 2 applications communicate? Cheers
  22. Hi, My company has been developing and maintaining its own SCADA software in LabVIEW for a few years. It is fairly comprehensive: datalogging, graphing, alarm monitoring, automation, loops for equations and PIDs, and so on. It is a PC-based solution and communicates with many different kinds of hardware through the COM (RS232), USB, and Ethernet ports of the PC. This solution works well and keeps costs low for most of our customers. Most of the loops run around 10 Hz (100 ms). However, more and more we are running into customer specifications that require high control rates (a few milliseconds), high determinism (to the millisecond), and high reliability. Not surprisingly, the PC solution becomes unacceptable. We feel it is time to look into a real-time, embedded solution for those customers. That's why I'm currently investigating the different NI embedded RT solutions (namely PXI and cRIO). I can find plenty of resources on each of them, but close to nothing when it comes to comparing the 2 solutions and choosing which one to go with. Would you mind giving me some guidance? I guess you'll need more information, which I'll be happy to provide. A few elements already: - There is no request for MHz loops, so the FPGA side of the cRIO is not required, I believe - Our application contains many VIs that are both the engine and the HMI, so there will be some decoupling effort if we need to split it into an HMI application on the PC and an RT application on the cRIO. Would a PXI solution avoid this issue by plugging a monitor directly into the display port of the controller? But then if I have all the code in the PXI controller, is it likely that I will lose my control rate and determinism? Thanks! Emmanuel
  23. Also, I would abandon the regular API and rely only on the Advanced API. The regular API lacks a lot of flexibility; it's only intended for very basic usage.
  24. I also use DVRs inside my objects when I know I'm going to fork them. If you want to avoid DVRs, I guess you can use a SEQ (single-element queue). Dequeue the object to "check out" and re-enqueue it to "check in". While the queue is empty, the next place that needs to read/write the object will wait inside its dequeue (as long as you leave the timeout at -1 or make it long enough).
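For readers more familiar with text languages, the SEQ check-out/check-in pattern maps directly onto a size-1 blocking queue. A Python sketch (class and method names are illustrative; `timeout=None` plays the role of LabVIEW's -1, i.e. wait forever):

```python
import queue

class SEQ:
    """Single-element queue: the object lives in a queue of capacity 1.
    Dequeue = check out (exclusive access); enqueue = check in (release)."""

    def __init__(self, obj):
        self._q = queue.Queue(maxsize=1)
        self._q.put(obj)

    def check_out(self, timeout=None):
        # Blocks while the queue is empty, i.e. while someone else
        # has the object checked out.
        return self._q.get(timeout=timeout)

    def check_in(self, obj):
        self._q.put(obj)

seq = SEQ({"count": 0})
obj = seq.check_out()   # any other caller would block in check_out() here
obj["count"] += 1       # safe read-modify-write, no DVR needed
seq.check_in(obj)       # release: the next waiting caller wakes up
```

The same caveat as with DVRs applies: forgetting to check the object back in deadlocks every later accessor, which is exactly why a timeout longer than -1 can be a useful safety net.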
  25. Thanks. So if I build my application while this error still exists, could it explain why my DLL call is not working?