
Nested DLLs



Jack, can you please reply to the post #23? After rereading your suggestion, I see it is the same as mine and I just didn't understand the text after first glance.

 

No prob.

 

First, your statement "nice simple example from NI engineer" threw my BS detector into red-alert mode  :lol:

 

"space heater" means CPU, polling without any sort of throttle, or more appropriately interrupt-driven procedure, turns into this: https://www.google.com/search?q=%22space+heater%22&tbm=isch

 

Time bug -- the publisher waits only long enough for "the thing" on the other side of `PostLVUserEvent()` to make a memory copy, not long enough to reasonably ACK that the event has been processed and the handler is effectively ready for the next event. In other words, the publisher is desynchronized from the subscriber -- or, an even bigger problem, the subscribers (plural).

 

Space bug -- the subscriber is unable to put a cap on its recv queue size in LabVIEW, unlike, say, a BSD socket with a cap on its recv buffer that has `connect()`-ed to a `bind()`-ed publisher. In that scenario, TCP backpressure from subscriber to publisher can effectively enforce timeouts and message drops with EAGAIN or ETIMEDOUT on the publisher. The inability to specify and limit space on the subscriber side in LabVIEW (in the form of limiting the queue size of a `Register For Events` node) severely limits the publisher's ability to act sanely, since there is no backpressure on `PostLVUserEvent()`. You have to jump through totally bogus, undocumented, unsupported, uncharacterized, and unintuitive hoops to hack together a viable solution using `PostLVUserEvent`, a `Register Callback VI` node, LabVIEW queues, and an additional User Event to grossly approximate a decent `recv` buffer limit. I do not say this proudly -- I say this defeated and deflated -- but I may have accidentally characterized the only solution in existence to such a fundamental, gaping hole in `extcode.h`: the ability for C to synchronously call into *LabVIEW* and synchronously wait for a response on the call stack (rather than the other way around). Summarized: it's easy to create memory leaks when the `send` endpoint cannot detect that the `recv` endpoint is not processing quickly enough.

 

Basically, `PostLVUserEvent()` makes wildly dumb assumptions about both space (memory) and time (synchronization between `send` and `recv` endpoints), and perhaps nobody on earth is using it properly, except for serendipity of an application domain where it happens to work, and/or hearty `sleep()` sprinklings.

 

I'm not bitter, just jaded and irritated with `extcode.h`


As if by magic, the shopkeeper appeared. :P

 

Thanks for actually running this and confirming my suspicion; I was just poking through source code with a 13" MacBook Air and Xcode, and of course couldn't even load the VI or build the C without compiler errors  :lol:

 

I seem to recall @Darin.K commenting before on the quality of the PRNG in LabVIEW in the past; @ShaunR, perhaps you have demonstrated a better one here!  :P


As if by magic, the shopkeeper appeared. :P

 

[attachment: Untitled.png]

 

(LabVIEW 64 bit)

 

You getting scared yet, Mr Pate? :D

 

Well this error is usually because of setting the compiler default alignment to 1!  :D

 

The 64-bit platforms, and non-x86 platforms in general, don't use a special compiler alignment at all, so you have to be careful about adding this compiler option to your project.

 

Basically, never use unconditional pragma packs in your code if you ever intend to port the code to any other LabVIEW platform -- and yes, x64 is technically an entirely different platform from x86, even though NI doesn't treat it differently from a licensing point of view.

 

The proper way to do this is actually to replace the two pragmas with the includes for lv_prolog.h and lv_epilog.h. They contain the proper magic to make the compiler behave. Now of course you should apply those includes ONLY around structures that will somehow be interfacing to the LabVIEW datatype-system. For other structures and elements you'll have to decide for yourself what alignment is the right one, and if you want to take that into the source code or if you rather use a globally set alignment through a compiler option. Personally I think that if you need a specific alignment for some code it belongs into that code and only there and anything else should remain with default alignment. Changing the default alignment only makes sense if you control more or less the entire platform and the target hardware has a significant performance penalty with the default alignment.

 

But most often you have to deal with constraints put on you from the software components you interface with. There the alignment should be locally changed where necessary and otherwise left alone to the default alignment.

 

Why does LabVIEW use 1-byte alignment on x86? Well, when LabVIEW was ported from the Mac to Windows 3.1, computers with 4 MB of physical memory were considered state-of-the-art machines. 8 MB was seen as high end. 16 MB wasn't even possible on most because of BIOS, chipset, and board design limitations. There, a default alignment of 8 bytes could waste a lot of memory on a platform that used predominantly 32-bit entities, with the exception of the double-precision floating point, which wasn't that special in LabVIEW as it was an engineering tool often used for floating-point calculations. Yes Jack, it is all soooooooo 1900 but that is the century when LabVIEW was developed. And something like byte alignment can't be changed later on a whim without rendering almost every DLL interface developed until then incompatible. The problem will, however, soon solve itself with the obsolescence of the x86 platform in general and in LabVIEW especially.  :D

 

Your other remarks do sound more angry than jaded, Jack!  :cool:

 

Yes, I also feel the pain from extcode.h, which is in some ways a bit dated and hasn't really seen much development in the last 20 years. PostLVUserEvent() was one of the few additions in that timeframe, and it wasn't the greatest design for sure. Incidentally, NI doesn't really use it themselves; they rather use the undocumented OM (Object Manager) API, which also supports event posting (and custom refnums like DAQmx, IMAQdx, etc.) but uses an API that is basically impossible to use without further detailed insight into the LabVIEW source code, despite a documentation leak in the 8.0 cintools headers for some of them.

 

And the fact that you can't tell PostLVUserEvent() to take ownership of the data is certainly a flaw; however, if you use it for posting large amounts of data to the user event loop, you certainly have a much bigger problem in your software architecture. It's easy to do that, I know, but it is absolutely not a clean design to send lots of data through such channels. The event handling should be able to proceed quickly and cleanly and should not do large data handling at all. It's much better to limit the event to just enough data to allow the receiver to identify the data in some way and retrieve it from your DLL directly when your final data handler sees fit, rather than forcing the whole data handling into the event itself. That is not only because of the limits of PostLVUserEvent() but generally a better design than coupling event handling and data processing tightly together, even if PostLVUserEvent() had an explicitly synchronous sibling (which could only work with callback VIs and a new Filter User Event). Personally, I think the fact that user events work with LabVIEW callback VIs is not so much an intended design feature as a somewhat unintentional side effect of adding ActiveX event support to the user event infrastructure. Or was that even before the User Event structure??  :P

 

Also, your observation that you can't throttle the sender through the receiver is valid, but should again be solved outside of the event handling. Once you let the event delegate the data handling to some specific actor or action engine, or whatever, the retrieval of the data through this entity gives you the possibility to implement whatever data throttling you want on the sender side. Yes, I know, queues in LabVIEW are a lot easier to use than in C code, but I have to admit it would be a pretty involved exercise to come up with a LabVIEW C API that addresses all the caveats you mentioned about PostLVUserEvent() and would still be usable without a major in computer science. And with such a degree, doing that yourself in your DLL is not that hard an exercise anymore, and it allows the generic LabVIEW interface to stay simple.

 

I seem to recall @Darin.K commenting before on the quality of the PRNG in LabVIEW in the past; @ShaunR, perhaps you have demonstrated a better one here!   :P

 

That would be a really bad PRNG. Relying on random data in memory is anything but truly random.  :lol:


It's discussions like this that make me happy that I do not have to deal with these kinds of situations very often. Luckily for me, most of the issues raised probably will not affect my project if I am careful (which I definitely will be!).

 

Really good insight here, I am quite envious of the technical grokking that you fellows have on these quite arcane issues.


OK, having downloaded that example: the data posted is allocated on the stack, not on the heap. Major difference. That's why that particular example you cite has no memory leak.

 

Jack, this may be obvious to you, but you say the data is posted on the stack (and not the heap). Is that because 

interruptStruct_t interruptStruct;

is declared as a local variable in the function and so automatically uses the stack?

 

I thought the stack was physically a different "type" of memory, more like an L2 cache inside the CPU. A bit of googling tells me that this is not the case.

 

Computers are complicated...

Edited by Neil Pate

Jack, this may be obvious to you, but you say the data is posted on the stack (and not the heap). Is that because 

interruptStruct_t interruptStruct;

is declared as a local variable in the function and so automatically uses the stack?

 

I thought the stack was physically a different "type" of memory, more like an L2 cache inside the CPU. A bit of googling tells me that this is not the case.

 

Computers are complicated...

 

Yes, local variables are generally placed on the stack by the C compiler. I say generally, since there exist CPU architectures that handle this differently, but they do not really have any significance outside of very specialized embedded architectures.

 

However, they are not *posted* on the stack (and in my opinion even *allocated* feels wrong, as I associate that with an explicit malloc or similar call), but in a broader sense I suppose allocated is a sensible term here. The PostLVUserEvent() function then "posts" the data to the LabVIEW event queue associated with the event structure that registered for the user event.

 

And yes, the stack is typically not explicitly put in the cache, although it certainly could be and probably does end up there; but that is not of concern to you -- it is very much the concern of the CPU designer, who has to devise all sorts of tricks and protections to make sure everything stays coherent anyway.

The stack usually lives in a reserved area of the process's address space that on most processor architectures starts at a high address and grows downwards until it meets a stack limit or the managed heap region, which is when you get a stack overflow error.


Yes Jack, it is all soooooooo 1900 but that is the century when LabVIEW was developed. And something like byte alignment can't be changed later on on a whim without rendering almost every DLL interface that has been developed until then incompatible. The problem will however soon solve itself with the obsoletion of the x86 platform in general and in LabVIEW especially.  :D

 

This part actually still fascinates me, how labview provides such a clean abstraction over the underlying memory ...

 

 

Your other remarks do sound more angry than jaded, Jack!  :cool:

 

Yes I also feel the pain from extcode.h which is in some ways a bit dated and hasn't really seen much of development in the last 20 years. PostLVUserEvent() was one of the few additions in that timeframe and it wasn't the greatest design for sure. Incidentially NI doesn't really use it themselves but they rather use the undocument OM (Object Manager) API which supports also event posting (and custom refnums like DAQmx, IMAQdx, etc) but uses an API that is basically impossible to use without further detailed insight into the LabVIEW source code, despite a documentation leak in the 8.0 cintools headers for some of them.

 

... and you pretty much were able to hit every facet from which my frustrations stem. 1) underpowered interface that 2) never changes that 3) was not dogfooded so 4) it totally blows and therefore so does my developer experience while trying to do something meaningful with the API while 5) NI has their own far more powerful interface, and are therefore unconcerned day-to-day with 1)


Jack, this may be obvious to you, but you say the data is posted on the stack (and not the heap). Is that because 

interruptStruct_t interruptStruct;

is declated as a local variable in the function and so automatically uses the stack?

 

I thought the stack was physically a different "type" of memory, more like a L2 cache inside the CPU. A bit of googling tells me not that this is not the case.

 

Computers are complicated...

 

Indeed they are.

In fact, it is this that means the function calls must be run in the root loop (orange node). That is really, really bad for performance.

 

If you passed the ref into the callback function as a parameter, then you could turn those orange nodes into yellow ones. This means you could just call the EventThread directly without all the C++ threading (which is problematic anyway) and integrate into the LabVIEW threading and scheduling. The problem then becomes how you stop it, since you can't use the global flag, b_ThreadState, for the same reasons. I'll leave you to figure that one out, since you will have to solve it for your use case if you want "Any Thread" operation :P:book:

 

When you get into this stuff, you realise just how protected from all this crap you are by LabVIEW, and why engineers don't need to be speccy nerds in basements to program in it (present company excepted, of course  :worshippy:  ). Unfortunately, the speccy nerds are trying their damnedest to get their favourite programming languages' crappy features and caveats into LabVIEW. Luckily for us non-speccy nerds, NI stopped progressing the language around version 8.x. :angry::lol:

Edited by ShaunR

Really good insight here, I am quite envious of the technical grokking that you fellows have on these quite arcane issues.

 

Envious is the wrong word here. Such "arcane issues" should have been made more developer-friendly long, long ago -- these topics definitely feel "foreign" coming from LabVIEW, but "arcane"? We're on the far other end of the spectrum from "arcane" as far as C is concerned here -- this knowledge should be straightforward day-to-day interop. But it's not, unfortunately.  :(

 

That said ... you'll find relatively few questions unanswered, and will rarely be disappointed, appending "RolfK" to your searches on these topics :-)

 

Jack, this may be obvious to you, but you say the data is posted on the stack (and not the heap).

 

I had started to type a response here, but it got too long -- this is actually one topic with many excellent explanations findable in a search, and to help guide your search, it might be helpful to also look for "data segment" (since this is sometimes where other types of static memory are compiled into the object file).

It is this that means the function calls must be run in the root loop (orange node). That is really, really bad for performance.

 

If you passed the ref into the callback function as a parameter then you could turn those orange nodes into yellow ones. This means you can just call the EventThread directly without all the C++ threading, (which is problematic anyway) and integrate into the LabVIEW threading and scheduling. The problem then becomes how do you stop it since you can't use the global flag, b_ThreadState, for the same reasons. I'll leave you to figure that one out since you will have to solve it for your use case if you want any thread operation :P:book:

 

It took me a minute to understand why you had to bring the orange banner into the mix -- but by all means, declaring `dw_LabViewEventRef` as a global variable *might* be justified as a gross simplification for an example program, but this is by no means production-quality.

 

I think this advice holds pretty much absolutely in this domain -- if you find yourself using the orange banner to "solve" problems, it's going to be a long, cold day.

 

But that said -- and to refine what @ShaunR is saying here -- you can't pass the ref into the callback function, since it's the main library that invokes and passes parameters into your callback function. When telling the main library which callback function to use (e.g., I would expect the library has a function with a name along the lines of "register callback function"), does it also give you an opaque data pointer you can define yourself that it will pass to your callback? This is where to initialize the main library with the UserEventRef from LabVIEW.

 

That said ... the next topic would be how and when to allocate and deallocate that memory on the heap  :book:


I am pretty happy myself that I decided to "help" in this topic; otherwise I would never have found out about PostLVUse.... making a deep copy of the supplied data  :wacko:

 

What on earth are those orange and yellow nodes? Are you referring to "run in any thread" and "run in UI thread" for the Call Library nodes?

 

If so, how does it relate to the "NI engineer" example? The only issue I see, is that you cannot use the global DLL variable for some session based app design, but it still doesn't explain the orange and yellow nodes.

Edited by bublina

If so, how does it relate to the "NI engineer" example? The only issue I see, is that you cannot use the global DLL variable for some session based app design, but it still doesn't explain the orange and yellow nodes.

 

Well, generally, if your DLL uses global variables, one of the easier ways to guarantee that it is safe to call the DLL from LabVIEW more than once is to set all the Call Library Nodes calling any function that reads or writes those globals to run in the UI thread. However, in this case, since the callback is also called from the internal thread, that is not enough to make it strictly safe. The callback, after all, only makes sense when it is called from another context than your LabVIEW diagram. Even in this trivial example -- which isn't really meant to show a real use case, just how to use the PostLVUserEvent() function -- the callback is called from a new thread inside the DLL, and therefore can access the global variable at the same time as your LabVIEW diagram.

 

Now these are all general rules, and the reality is a bit more complicated. In this case, without some alignment pragmas that would put the global variables on an unaligned address, each read of the two global variables inside the callback is really atomic on any modern system. Even if your LabVIEW code calls the initialize function at exactly the same time, the read in the callback will either see the old value or the new one, but never a mix of them. So with careful safeguarding of the order of execution, and copying the global into a local variable inside the callback first before checking it to be valid (non-null) and using it, it is maybe not truly thread safe but safe enough in real-world use. Same with the b_ThreadState variable, which is actually used here as protection and, being a single byte, is even fully thread safe for a single read. Still, calling ResetLabVIEWInterrupt and SetLabVIEWInterrupt in a non-sequential way (no strict data dependency) without setting the Call Library Nodes to UI thread could cause nasty race conditions. So you could either document that these functions can never be called in parallel, to avoid undefined behaviour, or simply protect them by setting them to run in the UI thread. The second is definitely safer, as some potential LabVIEW users may not even understand what parallel execution means.

 

Years ago I did some embedded programming with pretty basic micros like the 8051, and they had a hardware stack that you would push to and pop from. Maybe this is where I am getting confused, I do not have any formal CS training so have just cobbled together knowledge over time.

 

The original 8051 was special in that it had only 128 bytes of internal RAM, and the lowest bank of it was reserved for the stack. The stack there also grows upwards, while most CPU architectures have a stack that grows downwards. Modern 8051 designs allow 64 KB of RAM or more, and the stack simply sits in the lowest area of that RAM, not really in a different sort of memory than the rest of the heap.

 

As for PUSH and POP, those are still the low-level assembly instructions used on most CPUs nowadays. Compiled C code still contains them to push the parameters onto the stack and pull (pop) them from it inside the function.


What on earth are those orange and yellow nodes? Are you refering to "run in any thread" and "run in UI thread" for the call library nodes ?

Yes

Orange="run in UI thread"

Yellow="run in any thread"

If so, how does it relate to the "NI engineer" example? The only issue I see, is that you cannot use the global DLL variable for some session based app design, but it still doesn't explain the orange and yellow nodes.

Orange:

Requires the LabVIEW root loop. All kinds of heartache here, but you are guaranteed all nodes will be called from a single LabVIEW thread context. This is used for non-thread-safe library calls, when you use a 3rd-party library that isn't thread-safe or you don't know whether it is. If you are writing a library for LabVIEW, you shouldn't be using this, as it has obnoxious and unintuitive side effects and is orders of magnitude slower. This is the choice of last resort, but the safest for most non-C programmers who have been dragged kicking and screaming into doing it :)

Yellow:

Runs in any thread that LabVIEW decides to use. LabVIEW uses a pre-emptively scheduled thread pool (see the execution subsystems), therefore libraries must be thread-safe, as each node can execute in an arbitrary thread context. Some light reading that you may like -- section 9 :)

If you are writing your own DLL then you should be here -- writing thread-safe ones. Most people used to LabVIEW don't know how to. Hell, most C programmers don't know how to. Most of the time, I don't know how to and have to relearn it  :yes:  If you have a pet C programmer, keep yelling "thread-safe" until his ears bleed. If he says "what's that?", trade him in for a newer model :lol: It has nothing to do with your application architecture, but it will bring your application crashing down for seemingly random reasons.

I think I see a JackDunaway NI Days presentation coming along in the not too distant future :lightbulb::D

Edited by ShaunR
