Posts posted by Tim_S

  1. What it sounds like you're asking for is a market NI has gotten out of (<100 Hz), because there are tons of remote I/O options from Rockwell, Siemens, GE, etc., already out there. All of these use some form of bus (RS-232, RS-485, PROFIBUS/PROFINET, EtherNet/IP, CAN...). Depending on what you pick, you can spend just as much as on a cDAQ in time and materials.

    NI acquired Measurement Computing some years ago. I've never used their hardware, but they do list LabVIEW drivers. Some devices look to be able to handle industrial logic levels (24 V).

  2. I've run into TestStand twice... The first was a project I got pulled into at the eleventh-and-then-some hour to make some changes. I got a quick intro to implementing LabVIEW code in TestStand and made the changes. I didn't get to do much with it; however, the impression I got from people who had been with the project was very negative: the system was difficult to use for what they wanted, and they intended not to use it again. This was with TestStand version 1 (and I believe LabVIEW 5), from people who today would be CLD-certified, so take that for what you will.

    The second time I ran into it was as part of a major revision and rewrite of an existing system, including a sequencer written in LabVIEW. We (two from my company and a local alliance partner) seriously considered using TestStand, but concluded that the amount of work in learning TestStand, porting the existing code to work with TestStand, building a system around it that did all the other things that were needed, and making it look like a homogeneous system would be more costly than just writing it in LabVIEW (version 8.6 at the time).

  3. You can easily check the work by calculating the value by hand and then comparing it against the output of the VIs. If you had done this, you would have answered your own question.

    You do have errors in your code. Things to watch for are that data types matter and how the logical shift primitive works; the sketch below shows the kind of by-hand check I mean. As for style, making code neat and organized is important in any language. LabVIEW is a 'visual' language rather than a text language, so what constitutes neat and organized means something a little different; there is a code cleanup feature up in the button bar area that can help with this.
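
    As a hedged illustration of that by-hand check (Python rather than LabVIEW, with an assumed U8 input; adjust the mask for other widths), note how a logical shift on a fixed-width type drops bits:

        # Emulate an 8-bit (U8) logical shift by hand to check the VI output.
        x = 0b10110001                   # 177 as a U8

        shift_left = (x << 2) & 0xFF     # high bits fall off the 8-bit word -> 0b11000100 (196)
        shift_right = x >> 2             # zeros shift in from the left      -> 0b00101100 (44)

        print(f"{shift_left:08b}  {shift_right:08b}")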

  4. It appears you need help with the math expression. The LabVIEW help topics "Formula Node" and "Precedence of Operators in Formula Nodes and Expression Nodes" should get you going. Have you read these?

  5. If you are using a Formula Node, then it's just a matter of setting up the inputs and outputs of the node and typing in the formula. LabVIEW has very good help that installs with it, but there is a little tutorial here. The first formula has one gotcha, but the error message tells you what the issue is. The second formula is a little wonky in that it looks like x-squared is being squared. Make sure you read the help on the Formula Node.

  6. There are plenty of examples of serial communication that ship with LabVIEW, so that's just a matter of looking at the examples and the ADV manuals/documentation on your part.

    The forums are a great place to ask questions and share information; however, you seem to be asking for someone to spend a great deal of time working with you on this code. If that is the case, people here make a living writing LabVIEW code, and I'm sure someone would be glad to provide a quote.

  7. INI files were the way to go years ago, but they have challenges handling complex data types and arrays. I used XML (DOM) in the last revision of my core application (designed for medium to large systems). It worked well at small scale, but there was a significant impact (30+ seconds) when my configuration editor and application tried reading in the file for a full system. I switched to JSON using this package, which has greatly improved performance. Each plugin can have its own section for any configuration, so additions are easy (see the sketch below). The files aren't intended to be edited directly, but I do install Notepad++ on the PCs to make it easy to go in and take a quick look (useful for repairs should anything bad happen).
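
    To make the per-plugin layout concrete, here is a minimal sketch (plain Python with made-up plugin names, not the LabVIEW package linked above):

        import json

        # Hypothetical layout: one top-level section per plugin, so plugins can
        # be added or removed without touching anyone else's settings.
        config = {
            "TemperaturePlugin": {"channels": ["TC0", "TC1"], "sample_rate_hz": 10},
            "MotionPlugin": {"axis_count": 3, "home_on_start": True},
        }

        with open("system_config.json", "w") as f:
            json.dump(config, f, indent=4)    # indented so a quick look in Notepad++ stays readable

        with open("system_config.json") as f:
            section = json.load(f)["MotionPlugin"]    # each plugin reads only its own section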

  8. I used the NI CVT as a reference. There was nothing wrong with the CVT itself, but I needed something that worked with objects in packed libraries (plugins), which don't seem to share the same memory space. I didn't use the CCC for my inter-application communication.

    I've not used DCAF before... just getting back to looking at it now because of your question. Looking at a demo video of DCAF, it's not an equivalent to the CVT. The CVT only stores values for read/write, whereas DCAF can interface with hardware, run PID loops, etc.; storing values is a subset of what it does. This video goes through how the DCAF engine works. The CVT is meant for asynchronous access to a central repository; DCAF appears to iterate through each object that is configured/loaded in the system, using strict by-value data transfer.

  9. 20 hours ago, ensegre said:

    But does this indicate a leak? I note the free of 4 bytes, which maybe you squelched with the threshold

    Drat, you're right... Today I had an IT-pushed Windows update and reboot that has changed the behavior to be closer to your screenshot. I was able to eliminate the memory leak by completely removing all of the Read Variable nodes from my application, so I know something's related.

  10. 5 hours ago, ensegre said:

    I don't know if I'm looking at the same as you, and I haven't investigated either, but I don't see leaks. LV17 32bit, Win 10 (where else do you have SV?). I suppose allocation and deallocation sizes might depend on the variable content too, don't they?

    My trace has many more events, are you somehow filtering them? I don't fully understand your throttling of the second loop based on timeout of an occurrence, but there you know better, maybe it has to do with your architecture at large.

    Untitled Project 1.lvproj.det

    Sorry, I should have mentioned that I set the capture settings to a memory threshold of 500 bytes. There are a lot of typically uninteresting allocations/deallocations that would otherwise spam the capture.

    I would expect the memory allocated for a shared variable handle to be the same for a particular data type, independent of the contents, though I've not dived this deep into the bowels of shared variables before.

    The use of the occurrence is an (old) way to throttle loop rate and control parallel loop termination; you used to see it quite a bit before events were added to LabVIEW. The bottom loop keeps running until the top loop ends, at which point the occurrence is set. The timeout of the occurrence acts the same as a Wait (ms) of the same duration. The sketch below shows the same pattern in text form.
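
    For anyone who hasn't run across the pattern, here is a rough text-language analogue (Python, with threading.Event standing in for the occurrence; an illustration only, not the actual LabVIEW code):

        import threading, time

        stop = threading.Event()    # stands in for the occurrence

        def bottom_loop():
            # wait(timeout) doubles as the loop's Wait (ms): it sleeps up to
            # 100 ms per iteration, but returns True as soon as the occurrence
            # (event) is set, terminating the loop.
            while not stop.wait(timeout=0.100):
                pass    # the loop's real work goes here

        worker = threading.Thread(target=bottom_loop)
        worker.start()

        time.sleep(2)    # the top loop runs for a while...
        stop.set()       # ...and sets the occurrence when it ends
        worker.join()    # the bottom loop exits on its next wait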

  11. 11 hours ago, JKSH said:

    I don't have a fix for the leak (and I haven't investigated it in detail), but I have an alternative architecture for auto-connecting comms.

    Instead of opening/closing in one loop and passing the variable reference to another loop, is it feasible to keep everything in one loop using a state machine? The states could be:

    ...

    If I have to go back and refactor, then I'll have something closer to that. The code is a communication library that gets used by anything trying to talk to an application. The Initialize launches a VI that opens and maintains the connection, and then individual VIs get used to read the shared variables (thus code using the library only reads what it needs). There are certainly other ways to do this, but it's worked well except for the memory leak.

  12. Thought I'd pass this along and see if anyone can reproduce it with different versions of LabVIEW. I'd appreciate it if anyone who has seen this has a fix.

    I'm using shared variables to communicate between applications (1:N). I'd been seeing some memory creep that was inconsistent and somewhat bizarre. I eventually managed to track it down to the fact that I'm programmatically opening a connection to a shared variable in one loop, then reading the value in a different loop (the separate loops have to do with reconnecting on connection loss and with startup). A functional global is used to pass the variable reference to the second loop. On each iteration of the while loop, the Read Variable primitive deallocates all but 4 bytes of the memory for the previous handle and then allocates memory for a new handle, hence creating a leak. This behavior does not occur if there is only one loop containing an open, a while loop with a read, and a close.
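
    In text form the leaking pattern is roughly this (a hypothetical Python analogue; the stand-in functions are made up, since the real code is a LabVIEW diagram):

        import threading, time

        # Stand-ins for LabVIEW's open/read shared variable primitives
        # (hypothetical; the real ones are diagram nodes, not Python functions).
        def open_variable_connection(url): return {"url": url}
        def read_variable(handle): return 0.0

        fgv = {"handle": None}      # plays the role of the functional global
        stop = threading.Event()

        def open_loop():
            # Opens the connection (and would re-open it on connection loss).
            fgv["handle"] = open_variable_connection("\\\\localhost\\lib\\var")

        def read_loop():
            while not stop.wait(0.1):
                if fgv["handle"] is not None:
                    read_variable(fgv["handle"])    # in LabVIEW, this read leaked
                                                    # memory on every iteration

        threading.Thread(target=open_loop).start()
        reader = threading.Thread(target=read_loop)
        reader.start()
        time.sleep(1); stop.set(); reader.join()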

    Main.vi demonstrates the issue. Main 2.vi is more like the NI example.

    I've got service request #7728859 going with NI, but I think I got the guy on his first day.

    LabVIEW 2015 SP1 32-bit on Win7 64-bit. Shared Variables memory leak.zip

  13. I tried loading the code in LV2012 and LV2015; in both there was an error attempting to load the .NET control from PDFBox-0.7.3.dll. I expect some other DLL is needed, which is a rabbit hole to start going down.

    With no documentation and no knowledge of the contents of the .NET control, it is very difficult to provide suggestions. With LV2012, the .NET control attempts to use an object which is NULL, so it throws an unhandled exception that is reported back up to LabVIEW. There is no information as to which object is the issue. Without being able to see the properties or methods, it's impossible to try to relate what is missing.

    Error 1386, which I had to look up ("The specified .NET class is not available in LabVIEW."), implies that something is missing or broken in the .NET control. I expect it's missing a file.
