Posts posted by Mark Smith

  1. Is there a list somewhere of the operations that block a thread? In other words, when do I have to start worrying about threads or execution systems?

    The only time I've had to worry about thread exhaustion is when making interop calls to .NET/DLL/COM. LabVIEW will grab a thread for the interop call and not release it until the call returns. If you make simultaneous calls to DLLs (assuming they are thread-safe and not called in the UI thread), LabVIEW will allocate a thread and not return that thread until the DLL call is complete. This can lead to thread starvation and blocking if the DLL call takes a non-trivial amount of time to complete and you reach the thread limit for a given execution system. DAQmx is susceptible because it's all calls to the DAQmx DLL under the hood.

     

    Mark

  2. As Rolf said in the referenced post, the GAC is the only sure bet for a .NET DLL (assembly). I presume the MySQL assemblies are signed and can be installed in the GAC. If MySQL won't install to the GAC automatically (I suspect it will), you'll need to include an installer that will install the .NET DLL to the GAC, because the LabVIEW installer won't. The installer can be built from Visual Studio - most any version.
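
    For what it's worth, the usual command-line way to put a strong-named assembly into the GAC is the gacutil tool from the Windows SDK (the file name below is just the typical Connector/NET assembly name - check yours):

        gacutil /i MySql.Data.dll

    A Visual Studio setup project can do the same thing by adding the assembly to the Global Assembly Cache folder in the installer's File System editor.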

     

    Mark

  3. The Desktop Execution Trace Toolkit (http://sine.ni.com/nips/cds/view/p/lang/en/nid/209044) will help you find memory leaks in LabVIEW. It can show all the memory allocation and deallocation that LabVIEW does.

     

    If it's not LabVIEW leaking memory (i.e., if it is the .NET DLL), then maybe Process Explorer (http://technet.microsoft.com/en-US/sysinternals) can help - or maybe one of the other Sysinternals tools; it's been a while since I needed them and my memory may not be accurate. At any rate, that toolset can be very valuable to a Windows platform developer.

     

    Mark

  4. Rolf,

     

    On the attached picture, the ref wire from the _SmaoMain property node used to get the ref to the SmaoMain.Info property goes around the While loop where the Enabled Devices get queried and into a Close Ref. I don't see any data dependency that would keep the Close from operating on the Initialized _SmaoMain object before the property of that object gets queried. If the object that contains the property data gets closed, wouldn't the data associated with that property likely get disposed as well? Am I missing something obvious?

     

    Mark

  5. Looks to me like you close the ref to SMAOMain in parallel with the loop you use to try to collect the Enabled Drivers info. Once you close that ref, your SMAOMain object gets cleaned up, and you're likely looping on a closed ref to the SMAOMain.info property when you try to get the enabled drivers.

     

    Mark

  6. I followed this topic with interest but didn't have anything to add until today. I was looking for some C#/.NET code to support a project and found a nice library on CodeProject. Since this thread had gotten me worried about using third-party code and licensing, I took a look at the license under which it was released. After reading it, I think it's a pretty good model and easy to understand. I can use the library as is or derive from it, with the restrictions that I don't remove the original author's attribution, I don't try to pass the work off as my own, and I include a link (somewhere) to the license agreement.

    http://www.codeproject.com/info/cpol10.aspx

    Mark

  7. For the project I'm working on, we created a LabVIEW build spec to create an interop dll that exports interfaces for several of our VIs. When we give the interop dll to 3rd parties that need to call it, they claim that they need to create a "wrapper dll" to call our interop dll. They indicate that the exported interfaces are "static" and that they can't call the interop dll directly. They were also asking if there is a way to make the interfaces of an interop dll created in LabVIEW "COM Visible".

    Are we building the interop dll improperly in LabVIEW? Do we need to embed a manifest file for them to call the interop dll directly?

    Is it possible that they cannot call the dll directly because they forgot to include the reference to the NationalInstruments.LabVIEW.Interop.dll?

    Sorry for the barrage of questions. This is the first time I've worked with interop dlls in LabVIEW.

    I can't imagine why one couldn't call a static method from a .NET assembly from any .NET language. From the questions you ask, I'm wondering whether the wrapper dll your customers write is a COM wrapper, since no, you can't just register a .NET assembly as a COM object and have it show up through an ActiveX-type interface. So maybe they want to call the .NET DLL from an unmanaged context (C++ maybe?). You can, it appears, make your .NET DLL COM Visible (see "Exposing .NET Framework Objects to COM", http://msdn.microsoft.com/en-us/library/zsfww439.aspx), but not directly from LabVIEW as far as I can tell. You would have to do the work explained in the link above using Visual Studio or such.
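
    To give a rough idea of what that work looks like, here's a minimal C# sketch of the attributes the MSDN article describes - the namespace, class name, and GUID are just placeholders, not anything LabVIEW generates:

        using System;
        using System.Runtime.InteropServices;

        namespace MyComVisible
        {
            // ComVisible plus a fixed Guid exposes this class to COM clients once the
            // assembly has been registered with regasm. AutoDual also generates a
            // dual (early-bindable) class interface for scripting/ActiveX callers.
            [ComVisible(true)]
            [Guid("6E1F6C62-3A2B-4C19-9D2A-0B6E5D3F1A77")]
            [ClassInterface(ClassInterfaceType.AutoDual)]
            public class ExampleServer
            {
                public double Add(double a, double b)
                {
                    return a + b;
                }
            }
        }

    After building, the assembly gets registered for COM with regasm (e.g. "regasm /codebase /tlb MyComVisible.dll"), which is what makes it visible to COM/ActiveX clients.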

    Alternately, you can build a LabVIEW exe as an ActiveX server and that will show up as a registered COM component, but that's a whole 'nuther can of worms. It works, but it ain't pretty.

  8. By convention, the Event Registration Refnum is wired exclusively to one and only one Event Structure, and this wire is never branched -- the exact same convention as the "Register for Events" node. The difference? The User Event ref remains private to the containing Messenger class; no naked refs are exposed to be manipulated with User Event prims by callers.

    When I tried the event registration refnums, they were private data in one of my classes, so they needed accessor VIs to be of use to any VI that wasn't a class member. That means I had no choice except to share the event registration refnum, which turned out to be a bad idea for my use case since it doesn't really allow multiple subscribers. If I just expose the event refs, then it's easy for any VI that wants to consume the event to subscribe. I never find it necessary to preload event queues since I don't use the events for any kind of input control - they are only used to publish data to subscribers. More importantly, how do you accommodate multiple subscribers?

  9. There's a valid reason to pass an event registration into a SubVI to be SubPanelled, and that's when you want to "pre-stuff" the mailbox with some events prior to the SubVI running. This is because the "Register for Events" node is unaware of User Events that have been sent along a User Event Refnum -- which incidentally demonstrates the beauty and power of separating the messenger (User Event Ref) from the mailbox (Event Registration Refnum).

    If you stuff the event registration with events and then wire that event registration to more than one event handler (as one might in a publish-subscribe type design), how do you ever know which event structure will handle which event? It has been my empirical observation that you can end up with a scenario where 1) you load events into your event registration ref (mailbox), 2) you wire that node to more than one dynamic event terminal, and 3) one event structure registers first and goes and empties the mailbox (event registration) before the other event loop gets a chance to check. As far as I can tell, this happens because the event queue has only one registered listener when the events are consumed, so it discards the events as soon as the first event structure consumes them. I have even observed behavior where it appears the second event handler gets there somewhere in the middle of emptying the mailbox and gets some but not all of the events that were registered.

    I got nervous enough about this sort of thing that I stopped exposing event registration refs and just expose event refs in my designs. That approach can't handle the pre-stuffing scenario you described, but I don't trust using events that way in a publish-subscribe type architecture where there are likely to be multiple listeners for any event.

  10. Thanks for the info guys, just to get it really explicit, does anyone know what the overhead is like with using an event to send data? As I mentioned in the OP, say I want to display a 512kb image (1024x512px, greyscale) in a sub-vi, at a rate of 40Hz, how would this compare to sending that information by queue, or just viewing the data in a graph in the main message handler when it arrives (rather than sending to a sub-vi).

    I don't think your limiting factor will be passing the data, whether you use queues or events - the most resource-intensive activity will be the graph display front panel updates, no matter whether they occur in the main message handler or a sub-vi.

    Edit - one issue you may run into passing the data through a queue or event to a sub-vi is making inadvertent copies of the data that would not happen if you displayed it in the main VI. Just something to watch out for.

  11. The event structure. Events are handled in the owning vi.

    Not sure what you mean here about events not working in subpanels - I've got more than one project that uses subpanels, and the subpanel will handle most any front panel event (save for the keydown event - that one always gets handled by the calling VI) and any user-defined event I've tried.

    As for AlexA's question, I've found it easier to just expose the events and let the subVIs handle registration rather than using the event registration refnum.

  12. I think mje pretty well summed it up, but the explanation I like the most is from AQ on this NI blog (https://decibel.ni.com/content/groups/large-labview-application-development/blog/2012/06/02/when-should-the-to-more-specific-or-preserve-tun-time-class-primitives-be-used-with-oop-in-labview) describing the difference. The referenced post is worth a read as well.

    "...The "To More Specific" tests the incoming object vs the type of the wire at the middle terminal. The "Preserve Run-Time Class" tests the incoming object vs. the type of the object on the wire at the middle terminal. "

  13. I think you'll need at least one thread for every .NET DLL call you want to run asynchronously to avoid blocking by the .NET DLL calls. LabVIEW can schedule multiple activities on one thread for internal calls, so if a VI goes dormant while waiting, that thread can be reassigned to another VI as long as all of the code is native LabVIEW. This isn't true for DLL calls: LabVIEW can't reuse the thread assigned to the .NET call and must wait for it to return. So as soon as there are more .NET calls than threads assigned to the execution system they run in, someone has to wait. You can set the number of threads per execution system using the vi.lib\Utility\sysinfo.llb\threadconfig.vi referenced in the first link Ned pointed to. If you need more threads than can be assigned to one execution system, then you may have to split the .NET calls across multiple execution systems.

    Mark

  14. Can you explain this a bit mo(o)re?

    Is it a LV-bug that the cast does not accept a cluster containing an array?

    I'm pretty sure it's because you can't type cast any cluster that has a variable-length element - if you add a string to a cluster, the type cast won't accept that as a valid input, either.

    From the LabVIEW help on flattened data

    "LabVIEW converts data from the format in memory to a form more suitable for writing to or reading from a file. This more suitable format is called flattened data.

    Because LabVIEW stores strings, arrays, and paths in handles (pointers to pointers in separate regions of memory), clusters that contain these strings and arrays are noncontiguous."

    A basic type cast operation just tries to take a chunk of memory and change the type associated with that chunk. I don't know the internal implementation of the LV type cast, but I'm sure it's more involved and does checks on the validity of the cast. But still, since the memory chunk containing the elements referenced above is not contiguous, you can't just call that memory chunk a new type (like a string) since that chunk doesn't contain all of the data referenced by that type.
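
    The same idea shows up in .NET, where only "blittable" types (fixed-size fields, no strings or arrays) can be treated as one flat block of memory. Here's a rough C# illustration - the struct and names are made up for the example, not anything from LabVIEW:

        using System;
        using System.Runtime.InteropServices;

        // A struct made only of fixed-size numerics occupies one contiguous block,
        // so the runtime will let you pin it and treat it as raw bytes.
        struct FixedSizeCluster { public int Count; public double Value; }

        // A struct holding a string only stores a reference; the characters live
        // in a separate block of memory, so it can't be treated as one flat chunk.
        struct VariableSizeCluster { public int Count; public string Name; }

        class PinDemo
        {
            static void Main()
            {
                GCHandle ok = GCHandle.Alloc(new FixedSizeCluster(), GCHandleType.Pinned);
                ok.Free(); // pinning the fixed-size struct succeeds

                try
                {
                    GCHandle.Alloc(new VariableSizeCluster(), GCHandleType.Pinned);
                }
                catch (ArgumentException ex)
                {
                    // the runtime refuses: the struct contains non-blittable data
                    Console.WriteLine(ex.Message);
                }
            }
        }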

  15. This is working exactly the way one should expect. If you have a fixed-size array that you need to pass (it's always the same size), you can use a cluster instead of an array - LabVIEW will flatten a cluster of simple types into a stream without adding any length element. If your data array may be a different size on each call, then you're doing it the only way I know of.

    Mark

  16. Won't this land you in the Program Files directory? That's where my .exe is and I thought you cannot write to that in Windows 7 without Admin privileges?

    I used to store the config.ini file there but was thinking of moving it to the root directory and allowing edits through the application builder. I don't want to bury it in AppData or UserFiles/x/y/z/1000FoldersDeep. This application normally runs on an embedded PC with not much else on it so I want to store test files, test setup files, and config files in a big folder that's easy to get to in a couple of touchscreen taps. It's hard enough getting in/out of windows folders with a touchscreen and desktop shortcuts seem to get lost often by users with big greasy fingers.

    I don't use the complete path to the class - I strip the class name, so a class like TempSensor.lvclass would have a config file named <Public Application Data>\MyProject\TempSensor.ini or something like that. The reason I use <Public Application Data> is that it works on all the Windows targets I use, for all users (no admin rights needed to modify). I don't have to deploy to Windows Embedded, so I can't speak to that - your needs may be different.
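
    For what it's worth, here's the same naming scheme sketched in C# rather than LabVIEW (the "MyProject" folder name is just a placeholder); <Public Application Data> corresponds to Environment.SpecialFolder.CommonApplicationData, typically C:\ProgramData:

        using System;
        using System.IO;

        class ConfigPath
        {
            // Strip the ".lvclass" extension and build the ini path under the
            // shared (all-users) application-data folder.
            static string ForClass(string lvclassName)            // e.g. "TempSensor.lvclass"
            {
                string baseName = Path.GetFileNameWithoutExtension(lvclassName); // "TempSensor"
                string publicAppData =
                    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
                return Path.Combine(publicAppData, "MyProject", baseName + ".ini");
            }

            static void Main()
            {
                // Typically C:\ProgramData\MyProject\TempSensor.ini on Windows 7 and later
                Console.WriteLine(ForClass("TempSensor.lvclass"));
            }
        }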

    Mark
