
Is LabVIEW a "pure" dataflow language?


Recommended Posts

The outputs from subvis are not available until all of the outputs are available, even when there are no data dependencies between them. LabVIEW would be more of a pure dataflow language if vi outputs became available as soon as each one is ready.

This is not something for the idea exchange. Changing this behavior would break nearly all code since the error cluster is probably used more as a sequence structure wire than it is for error handling.

I don't know if I have a point. Maybe it will spark an interesting philosophical debate about using the error cluster as a sequence structure wire. It is necessary in a lot of situations. But that causes LabVIEW code to execute in a programmer-controlled sequence, making it less "dataflowey".

A picture is worth a thousand words. And yes, if there are two independent functions they don't belong in the same vi - this is just for illustration.

[attached image: the dual adder example]
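
If it helps to see the same contrast in text form, here is a minimal sketch (Python rather than LabVIEW, and names like dual_adder and publish are just made up for illustration) of the two policies being compared - releasing each output the moment it exists versus holding everything until the node completes:

```python
import time

def dual_adder(a, b, c, d, publish):
    # Two independent additions inside one 'subvi'; the sleep stands in for a
    # Wait (ms) on the slower, unrelated branch.
    publish('sum1', a + b)     # hypothetical "pure" rule: hand sum1 out immediately
    time.sleep(2.5)            # slow, independent branch
    publish('sum2', c + d)

def labview_style_call(a, b, c, d):
    # What LabVIEW actually does: the caller sees nothing until the whole node
    # finishes, and then both outputs appear at the same time.
    outputs = {}
    dual_adder(a, b, c, d, outputs.__setitem__)
    return outputs             # sum1 and sum2 become available together

print(labview_style_call(1, 2, 3, 4))   # waits ~2.5 s, then prints both sums
```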

Link to comment

This is the way simulation subsystems work - they are "macroed" inside the top simulation diagram, say this one:

[attached image: a simulation subsystem diagram]

would be executed the same way as a top-level simulation diagram and as a subsystem. So, adding this feature to LabVIEW would not be a big problem (but it would be totally pointless).

And yes, I would agree with you that a warning may be justified when the subVI is created out of two totally independent things - this might be a good topic for the Idea Exchange.

Edited by mzu
Link to comment

The outputs of a node (subVI, primitive, structure etc...) are not available until all inputs have been received and the node has finished executing.

From a design point of view, if the two tasks are truly parallel, and therefore independent of each other, why couple them inside the same subVI? It's not a feature I have played with a lot, but would inlining the subVI solve this issue? So, I don't think the two examples are exactly the same code, because in (2) you have always enforced sequence order.

And yes, I would not want this to change :)

Link to comment

Yes, two parallel tasks belong in separate vis. The thing I was getting at is that subvis draw an implicit sequence structure around the code, which is why you can run an error wire straight through one to control execution order. If outputs became available as soon as they were ready then all hell would break loose with existing code! The little subvi with error in, error out, and a Wait (ms) that comes in so handy would not work anymore, since error out would be immediately available.

I also would not want this to change. Maybe there could be a setting in properties under execution. I don't even think I have ever had a need for something like this.

But I think that a pure dataflow language would make vi outputs available as they become available.

I just did an experiment.

I inlined the dual adder subvi, changed the wait to 2500 ms, and then ran the calling vi. Both sum1 and sum2 became available at the same time.

So inlining doesn't really inline the code. Or rather, it does, but it effectively places a single-frame sequence structure around it. This is a good thing - try to imagine the confusion if it didn't! :rolleyes:

And no I still don't have a point or any kind of programming issue that I am trying to resolve. This is just some stuff that I have been thinking about lately.

Link to comment

But I think that a pure dataflow language would make vi outputs available as they become available.

Can you provide the reasoning behind this? It seems like a perfectly reasonable behavior in theory, but so does the existing behavior where the outputs are available only after the entire block of code finishes executing, so what would make this "pure" data flow?

Link to comment

I think as implemented you do have pure dataflow. As stated earlier, a node's outputs are available when the node completes. You must have some mechanism for controlling the sequence of operations. The proposed change would make debugging extremely difficult and make code harder to understand, since the reader would have absolutely no way of understanding the flow of the program. You would never know when you would get partial outputs and when code would start firing. From a programming perspective I believe you need some way to allow the programmer to understand the flow of execution. Sequence structures are already abused; I think if this change were made they would be abused even more.

Link to comment

This strikes me as the kind of debate/discussion that probably raged inside NI a couple decades ago :). I'd love to hear how the decision was made.

I don't have much to add (and I like the current behavior, thank-you-very-much), but as other people have pointed out, the notion of all of a node's outputs blocking until every output's data is ready is a fundamental part of LabVIEW's semantics. I would note, though, that this is exactly the kind of question I hear from students and LabVIEW novices (like on my FRC team). We all take the current behavior for granted and it's useful once you start to understand dataflow, but I'm not sure it's super-intuitive right off the bat.

As a thought experiment, what would it mean to put a breakpoint on a subVI (Steve's dual adder.vi, for instance) if the output data arrived asynchronously?

Link to comment

Michael, I was probably a little unclear. This is not a suggestion or even a programming issue that I am having. It is just one of those random thoughts swirling around in my head :rolleyes:

Yair, I have no reasoning other than what I have already said. LabVIEW is indeed a pure dataflow language up to the boundary of a node. This is perfectly reasonable and I would not want it to change. It could be an interesting execution option. More likely it would be an option available only for inlined subvis.

Mark, I agree that it is pure dataflow up to the node. But it would be more pure to let the outputs become available naturally, as if the subvi were in the top level diagram. And yes, it would be very difficult to understand and debug code. The node boundary is a very logical place to control dataflow.

We do need a way to control the flow of execution which is a very good use for the error wire. Controlling execution order is what strikes me as less pure dataflow. I was reading a post on the idea exchange that got me thinking about this. Somewhere in an alternate universe NI implemented the subvi such that outputs become available when they are ready rather than waiting for them all to be ready. I am sure glad that I do not program in that universe!

Link to comment

Interesting thoughts Steve--I see where you're coming from.

It would be fun if there were an option to "Inline Everything" on the top level block diagram, just to see the differences. Hmm... thinking about it a little more, it occurs to me you can fairly easily create that kind of behavior simply by making sure every node (prim and sub vi) you use has only a single output. Off the top of my head I suspect most prims with multiple outputs could be broken down into multiple operations with a single output. I'm thinking a "pure" dataflow language would be similar to programming in a lower level language, in that we'd have to manually construct artifacts to control parallel execution sequences (lots more sequence structures) and do a lot more parameter validation, since we couldn't output an error cluster from a node.

I don't see how this suggestion would make it more like dataflow. If anything, won't it make it more asynchronous?

Seems to me pure data flow is completely asynchronous.

Can you provide the reasoning behind this? It seems like a perfectly reasonable behavior in theory, but so does the existing behavior where the outputs are available only after the entire block of code finishes executing, so what would make this "pure" data flow?

When a node has more than one output, holding one output until the other finishes strikes me as an inherent violation of "pure" data flow.

Link to comment

I agree you could make LabVIEW "more dataflow" by changing the semantics of the language; that is, by changing the rule that a node with multiple outputs releases them all at the same time. However, I don't see it as being useful.

So many of the advanced programming techniques like the queued message handler, notifiers, the event structure, and even the lowly global variable are useful in LabVIEW because they violate dataflow. If you could do it all with dataflow, you wouldn't have to fiddle with those advanced techniques.

The dataflow paradigm is really great for LabVIEW, but in its pure state it doesn't really get the job done. I don't think any more purity would help.

Link to comment

So many of the advanced programming techniques like the queued message handler, notifiers, the event structure, and even the lowly global variable are useful in LabVIEW because they violate dataflow.

(This is me thinking aloud, not me preaching from a soapbox...)

I'm curious, how do you define "dataflow" and what does it mean to violate it? Personally I've struggled to come up with a definition I felt adequately explained it. Usually we (me included) associate "dataflow" with visible wires, so we call queues, notifiers, dvrs, etc, dataflow violations. I'm not convinced that's entirely accurate. Take this diagram for example:

[attached image: a diagram passing data through a queue between two flat sequences]

Why should this be considered contrary to dataflow principles? Neither flat sequence executes before all its inputs have been filled and a refnum is a valid piece of data for a refnum input to a node. Looks to me like data flow is maintained. To be honest, I'm not convinced it's possible to violate dataflow on the inputs. There's no node/structure I know of in Labview that begins executing before all its inputs have been satisfied. On the other hand, almost every node/structure violates dataflow on the outputs by waiting until all outputs are available before releasing any of them to the following nodes.

Refnums and queues can definitely be used to circumvent Labview's predominantly by-value behavior, but in my head dataflow != strict by-value behavior.

Link to comment

I'm curious, how do you define "dataflow" and what does it mean to violate it? Personally I've struggled to come up with a definition I felt adequately explained it. Usually we (me included) associate "dataflow" with visible wires, so we call queues, notifiers, dvrs, etc, dataflow violations. I'm not convinced that's entirely accurate. Take this diagram for example:

Well I'm not going to google around for formal definitions of dataflow, but to me dataflow means that the order of operations is solely determined by the availability of input data. Of course most mainstream languages set operation order based on the position of the instruction in the source code text file, with the ability to jump into subroutines and then return, so that's the cool thing, dataflow is totally different than normal programming languages. (I know you knew that already)

The basic syntax of LabVIEW is that data flows down a wire between nodes. I'm calling that "pure dataflow". Any time data goes from one node to another without a wire (via queue, notifier, global, local, shared variable, writing to disk then reading back, RT FIFO, DVR, etc) then you are not using pure dataflow. All those things are possible and acceptable in LabVIEW because it turns out that it's just too hard to create useful software with just pure dataflow LabVIEW syntax.
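
If it helps, here is a rough text-only sketch (Python, since I can't type a diagram; the Node class is entirely made up) of the firing rule I mean - a node runs only when every one of its inputs has arrived, and nothing else determines the order:

```python
class Node:
    # Fires only when every input slot has received data; source-code position
    # plays no part in deciding when it runs.
    def __init__(self, n_inputs, fn, wires):
        self.slots = [None] * n_inputs
        self.filled = 0
        self.fn = fn
        self.wires = wires                 # list of (downstream node, input index)

    def receive(self, index, value):
        self.slots[index] = value
        self.filled += 1
        if self.filled == len(self.slots): # all inputs satisfied -> fire
            result = self.fn(*self.slots)
            for node, i in self.wires:     # the "wire" carries the result onward
                node.receive(i, result)

# show = Node(1, print, []); add = Node(2, lambda x, y: x + y, [(show, 0)])
# add.receive(0, 1); add.receive(1, 2)    # prints 3 only after both inputs arrive
```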

One thing I do take as a LabVIEW axiom is "Always prefer a pure dataflow construction". In other words, don't use queues, globals, and all that unless there is no reasonable way to do it with just wires.

Well anyway, that's what I meant. If you call your diagram "pure dataflow" then you don't have to agree with anything else I said. Of course your diagram is perfectly valid LabVIEW, because LabVIEW is not a pure dataflow language. It's a dataflow language with a bunch of non-dataflow features added to make it useful. You could say the data is "flowing through" the queue but for me the dataflow concept starts to lose its meaning if you bandy the term around like that.

So my definition of pure dataflow is different from Steve's in the original post, but it's a definition that is more useful for me in my daily work of creating LV diagrams. Sorry for the confusion.

Jason

Edited by jdunham
Link to comment

Why should this be considered contrary to dataflow principles? Neither flat sequence executes before all its inputs have been filled and a refnum is a valid piece of data for a refnum input to a node. Looks to me like data flow is maintained.

Well I just posted, and I feel like I didn't adequately answer your real question, so I'll try again.

I would say that the Dequeue Element node is a special node that fires not when its input data is valid, but rather when an event occurs, like data is enqueued somewhere, or the queue is destroyed. So sure, technically it fires right away, and its function is to wait for the event, so it "starts executing" on the valid inputs but then it sits around doing nothing until some non-dataflow-linked part of your code drops an element into the queue.

So that node (Dequeue Element) is executed under dataflow rules, like all LabVIEW nodes are, but what goes on inside that node is a non-dataflow operation, at least the way I see it. It's "event-driven" rather than "dataflow-driven" inside the node.

Similarly a refnum is a piece of data that can and should be handled with dataflow, but the fact that a refnum points to some other object is a non-dataflow component of the LabVIEW language (we're not still calling it 'G', are we?).

Edited by jdunham
Link to comment

There's no node/structure I know of in Labview that begins executing before all its inputs have been satisfied.

There is one exception (sort of) -

[attached image]

This actually also allows you to do this, so it also applies to the timing of outputs -

[attached image]

Link to comment

Thanks for the detailed response Jason. Just to be clear, I'm not claiming I'm right and you're wrong. I'm just talking through this and presenting different ideas with the hope that the discussion will help me develop a better understanding of Labview and dataflow. This is more than just an academic discussion. AQ has mentioned in the past that the compiler is able to do more optimization if we don't break dataflow, implying it has more to do with determinism and complexity than whether or not we are using any kind of references. So I'm often left wondering exactly what dataflow means and how I can know if I have broken it. Let me try to explain through some examples...

Looking at the same example I posted earlier,

[attached image]

(ex 1)

even though it uses a queue it is still deterministic. The compiler can (in theory, though perhaps not in practice) reduce it to,

[attached image]

(ex 2)

The fact that it is possible for the compiler to remove the queue without affecting the output indicates to me that dataflow hasn't really been broken, even though according to the common interpretation it has. Let's add a little more complexity to the diagram.

[attached image]

(ex 3)

At first glance it looks like the queue can't be factored away, so dataflow must be broken. But a little thought leads us to,

[attached image]

(ex 4) - Note: 'Greater Than' should be 'Less Than'

which clearly doesn't break dataflow. What about this?

[attached image]

(ex 5)

On the surface this may appear to be functionally identical to example 3, but it's not. Here it's possible for either of the top two threads to create the queue, enqueue the value, and release the queue before either of the other threads obtains its own reference to the queue. If that happens the vi will hang. This block diagram is non-deterministic, so I'm inclined to think that this does break dataflow. (Incidentally this is one reason why I almost never use named queues.)

So let's remove the Release Queue prims from the upper two threads.

[attached image]

(ex 6)

Just like that, it can once again be reduced to example 4 and dataflow is restored. But if I wrap the enqueue operations in sub vis,

[attached image]

(ex 7)

I've lost the reduction again since the sub vis will automatically release their handles to the queue when the sub vi exits. Hopefully these examples help explain why I find the common interpretation of dataflow insufficient.

[Edit - I just realized examples 3 and 6 can't be directly reduced to example 4. It is possible for the second frame of the sequence with the longer delay time to execute before the second frame of the sequence with the shorter delay time. Probably won't happen, but it is possible. Regardless, my gut reaction is that determinism is broken but not dataflow...]

---------------

So that node (Dequeue Element) is executed under dataflow rules, like all LabVIEW nodes are, but what goes on inside that node is a non-dataflow operation, at least the way I see it. It's "event-driven" rather than "dataflow-driven" inside the node.

It appears event driven from our perspective because we don't have to write queue polling code to see if there's an element waiting. Somewhere down in the compiled code there is a cpu thread doing exactly that. So what...? What about the "TimeDelay" or "Wait Until Next ms Multiple" nodes? They're essentially "event-driven" as well, waking up periodically and checking a resource to see if it should exit. But we don't typically consider these to break dataflow. This also raises questions of what happens if we change the timeout value from -1 to 0. Many of the "good" examples above are no longer deterministic without an infinite timeout. Murky waters indeed.

Any time data goes from one node to another without a wire (via queue, notifier, global, local, shared variable, writing to disk then reading back, RT FIFO, DVR, etc) then you are not using pure dataflow... One thing I do take as a LabVIEW axiom is "Always prefer a pure dataflow construction".

I agree using references (queues, dvrs, etc) adds complexity and a certain amount of opacity to the code; however, what you are calling "pure dataflow construction" I think is better described as "by-value construction." In that, I agree with you. By-value construction is easier to follow, has less chance for race conditions, and (ignoring the cost of memory copies) allows the compiler to do more optimizations.

----------

It seems to me "dataflow" is a word that has been used at various times to describe several distinct ideas. I propose we use the following:

Dataflow - This is the idea that a node begins execution only after all its inputs have been satisfied. More precisely, it refers to the execution environment for Labview code, not the development environment. It is what enables Labview's powerful parallelism. We, as developers, do not have the ability to "break" dataflow and only have indirect control over sequencing in the execution environment. Given multiple nodes with all inputs satisfied, it's up to the RTE to decide which one will execute first.

Simplicity - Simplicity is reducing the code to the bare elements needed to perform the required action. Examples 1, 3, and 6 can all be simplified as shown. (5 and 7 are just bad code.) That simplification can be done either in the source code or by the compiler. In theory either option will result in the same compiled bits. Simplification benefits the developer, not the compiler or executable.

Determinism - If a block of code always gives the same output for a given input, it is said to be deterministic. Mathematical functions are deterministic. Dequeue prims are not. Deterministic code tends to be easier to follow and lets the compiler do more optimizations, so we should code deterministically when it makes sense to do so. However, even though a dequeue prim is not deterministic in and of itself, it can be used in a chunk of code that is deterministic, such as all the examples above except for 5 and 7. (Perhaps AQ was referring more to determinism than dataflow with his comments regarding optimizations?)

-----------

There is one exception (sort of) -

I can always count on Yair to know the arcane exception to the rule... :worshippy::lol:

Link to comment

I'm just talking through this and presenting different ideas with the hope that the discussion will help me develop a better understanding of Labview and dataflow. This is more than just an academic discussion. AQ has mentioned in the past that the compiler is able to do more optimization if we don't break dataflow, implying it has more to do with determinism and complexity than whether or not we are using any kind of references. So I'm often left wondering exactly what dataflow means and how I can know if I have broken it.

Well it still seems sort of academic. The examples you show don't seem suitable for compiler optimization into "simple dataflow" constructions. In most non-trivial implementations you might end up with one of the queue functions in a subvi. Whether or not your queue is named, once you let that refnum touch a control or indicator, you are not deterministic. It's not that hard to write a dynamic VI which could use VI Server calls to scrape that refnum and start calling more queue functions against it.

I know you're just trying to find an example, but that one doesn't go too far.

Just like that, it can once again be reduced to example 4 and dataflow is restored. But if I wrap the enqueue operations in sub vis,

...[ex. 7]...

I've lost the reduction again since the sub vis will automatically release their handles to the queue when the sub vi exits. Hopefully these examples help explain why I find the common interpretation of dataflow insufficient.

I didn't quite get this. The subvis will release their references to the queue, but they will still enqueue their elements first, and it will be as deterministic as any of your other examples. The queue refnums may be different, but they all point to the same queue, which will continue to exist until all references to it are destroyed or are automatically released when their callers go out of scope.

It appears event driven from our perspective because we don't have to write queue polling code to see if there's an element waiting. Somewhere down in the compiled code there is a cpu thread doing exactly that. So what...? What about the "TimeDelay" or "Wait Until Next ms Multiple" nodes? They're essentially "event-driven" as well, waking up periodically and checking a resource to see if it should exit. But we don't typically consider these to break dataflow. This also raises questions of what happens if we change the timeout value from -1 to 0. Many of the "good" examples above are no longer deterministic without an infinite timeout. Murky waters indeed.

Well, yes it's kind of murky, but the time functions are basically I/O ('I' in this case). They return the current state of an external quantity, injecting it into the dataflow. All I/O calls represent a local boundary of the dataflow world. In contrast, the queue functions (and globals, locals, etc.) transfer data from one labview dataflow wire to a totally unconnected labview dataflow wire. So I think that supports my saying that these break dataflow, while the timing functions don't.

BTW, I don't believe the queues poll. I'm pretty sure AQ has said those threads "go to sleep" and it only took me a minute or two to figure out how to implement the wakeup without any polling. The NI guys are way smarter than me, so they probably figured it out too.

I agree using references (queues, dvrs, etc) adds complexity and a certain amount of opacity to the code; however, what you are calling "pure dataflow construction" I think is better described as "by-value construction." In that, I agree with you. By-value construction is easier to follow, has less chance for race conditions, and (ignoring the cost of memory copies) allows the compiler to do more optimizations.

Except when I call the strict by-value/deterministic stuff "pure dataflow" to my co-workers, they immediately understand what I am talking about, whereas I would constantly get quizzical looks if I switched over to saying "by-value construction" (even though they understand those words).

Anyway, I'm fine with using your definitions within the scope of this thread, assuming I bore you all yet again with any more replies.

I can always count on Yair to know the arcane exception to the rule... :worshippy::lol:

[quoted image from Yair's earlier post]

Oh gosh, I found that disturbing. I think it was a mistake on NI's part to allow this in the language syntax.

Link to comment

Whether or not your queue is named, once you let that refnum touch a control or indicator, you are not deterministic. It's not that hard to write a dynamic VI which could use VI Server calls to scrape that refnum and start calling more queue functions against it.

I'll take your word for that since it's not something I've ever tried. However, if the dynamic vi is part of the project being built then the compiler will know about it and won't be able to perform that reduction. I'm not trying to say the reduction can always be done, only that it can sometimes be done (which I think is clearly illustrated by examples 1 and 2). If the reduction can be done sometimes, then using a queue cannot be sufficient reason to claim dataflow has been broken.

I didn't quite get this. The subvis will release their references to the queue, but they will still enqueue their elements first, and it will be as deterministic as any of your other examples.

Yep, you're right. I was thinking the queues would be released when the sub vis exited. That's what I get for mixing programming languages... :wacko:

In contrast, the queue functions (and globals, locals, etc.) transfer data from one labview dataflow wire to a totally unconnected labview dataflow wire. So I think that supports my saying that these break dataflow, while the timing functions don't.

Let me try to explain it a little differently...

"Dataflow" is something that occurs in Labview's execution environment. Defining "dataflow" in terms of visible wires on the block diagram doesn't make any sense because there is no "wire" construct in the execution environment. What we see as wires on the block diagram are simply graphical representations of assigning memory pointers. When we hook up a wire we're declaring, I am assigning the memory address for the data in output terminal x to the memory pointer defined by input terminal y. How can Labview be a dataflow language if we define dataflow to be something that doesn't exist?

I agree with you that we should favor visible wires over "invisible" data propagation when possible--it helps tremendously with code clarity. I'm just saying that interpretation doesn't seem to accurately describe what dataflow is.

BTW, I don't believe the queues poll. I'm pretty sure AQ has said those threads "go to sleep" and it only took me a minute or two to figure out how to implement the wakeup without any polling. The NI guys are way smarter than me, so they probably figured it out too.

I was thinking about cpu operations when I made the original comment. ("Somewhere down in compiled code...") Application events don't map to cpu interrupts. Somewhere down the line a form of polling has to occur. Labview abstracts away all that low level polling and thread management, but it still happens. The point being, from the perspective of low level code there is little difference between a vi waiting for something to be put on a queue and a vi waiting for some amount of time to expire.

Except when I call the strict by-value/deterministic stuff "pure dataflow" to my co-workers, they immediately understand what I am talking about, whereas I would constantly get quizzical looks if I switched over to saying "by-value construction" (even though they understand those words).

I can't argue with you there. I just can't help but think we can come up with better terminology that doesn't overload the meaning of "dataflow." ("By-value construction" doesn't really accurately describe what you're talking about either. Maybe "directly connected?" "Referenceless?" "Visible construction?" )

assuming I bore you all yet again with any more replies.

Aww... don't go away. :( I learn a lot from these discussions.

Link to comment
I was thinking about cpu operations when I made the original comment. ("Somewhere down in compiled code...") Application events don't map to cpu interrupts. Somewhere down the line a form of polling has to occur. Labview abstracts away all that low level polling and thread management, but it still happens. The point being, from the perspective of low level code there is little difference between a vi waiting for something to be put on a queue and a vi waiting for some amount of time to expire.

I don't know anything factual about the internals of labview, which is lucky for you, because if I did, I wouldn't be allowed to post this. So anyway, it's highly likely that the LV compiler generates executable clumps of code, and that each execution thread in labview is some array or FIFO queue of these clumps. The execution engine probably does a round-robin sequencing of each thread queue and executes the next clump in line. So when a clump contains a Dequeue Element function, and the data queue is empty, this clump's thread is flagged for a wait or more likely removed from some master list of active threads.

Then some identifier for that thread or that clump is put in the data queue's control data structure (whatever private data LV uses to manage a queue). That part is surely true since Get Queue Status will tell you the number of "pending remove" instances which are waiting. In the meantime, that thread is removed from the list of threads allowed to execute and the engine goes off and keeps working on the other threads which are unblocked. There's no need to poll that blocked thread, because it's easier to just keep a list of unblocked threads and work on those.

When data is finally enqueued, the queue manager takes the list of blocked threads and clears their flag or adds them back to the list of threads allowed to execute. No interrupts, no polling. Of course if all threads are blocked, the processor has to waste electricity somehow, so it might do some NOOP instructions (forgetting about all the other OS shenanigans going on), but you can't really call that polling the queue's thread. It's really cool that the implementation of a dataflow language can be done without a lot of polling.
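
For what it's worth, a bare-bones sketch of that kind of wakeup (Python here, since I obviously can't show LabVIEW's real internals) looks something like this - the waiter blocks on a condition variable and only an enqueue wakes it, with no polling loop burning CPU:

```python
import threading
from collections import deque

class BlockingQueue:
    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()

    def enqueue(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify()        # wake one "pending remove" waiter, if any

    def dequeue(self):
        with self._cond:
            while not self._items:     # the waiting thread sleeps right here;
                self._cond.wait()      # it burns no CPU checking the queue
            return self._items.popleft()
```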

For the system clock/timer, that's surely a hardware interrupt, so that's code executed whenever some motherboard trace sees a rising edge, and the OS passes that to LV and then something very similar to the above happens. So that's not really polled either.

Let me try to explain it a little differently...

"Dataflow" is something that occurs in Labview's execution environment. Defining "dataflow" in terms of visible wires on the block diagram doesn't make any sense because there is no "wire" construct in the execution environment. What we see as wires on the block diagram are simply graphical representations of assigning memory pointers. When we hook up a wire we're declaring, I am assigning the memory address for the data in output terminal x to the memory pointer defined by input terminal y. How can Labview be a dataflow language if we define dataflow to be something that doesn't exist?

OK, I had to answer this out of order, since it follows from the previous fiction I wrote above. Between each clump of code in a thread, there should be a data clump/cluster/list that contains the output wire-data from one clump to be used as the input wire-data of the next one. That's the low-level embodiment of the wire, and whether any C++ pointers were harmed in the making of it is not relevant.

Now if the code clump starts off with a Dequeue function, it gets that data not from the dataflow data clump, but rather from the queue's control data structure off in the heap somewhere. It's from a global memory store, and anyone with that queue refnum can see a copy of it, rather than from the dataflow, which is private to the adjacent code clumps in the thread. Well anyway, they do undoubtedly use some pointers here so that memory doesn't have to be copied from data clump to data clump. But those pointers are still private to that thread and point to memory that is not visible to any clump that doesn't have a simple dataflow connection.
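
A crude way to put that distinction into code (Python, and purely illustrative - the clump functions and the shared_queues dict are invented) is that the wire is a private handoff between adjacent clumps, while the queue lives in shared storage that anyone holding the refnum can reach:

```python
shared_queues = {}                  # stand-in for queue structures off in the heap

def clump_a():
    wire_data = 40 + 2              # this clump's output wire-data
    return wire_data                # handed privately to the next clump

def clump_b(wire_data):             # input wire-data: a plain dataflow connection
    shared_queues.setdefault("q1", []).append(wire_data)   # the non-dataflow hop

def clump_c():                      # no wire from clump_b, just the "q1" refnum
    return shared_queues["q1"].pop(0)

clump_b(clump_a())                  # dataflow from a to b, then into the queue
print(clump_c())                    # c gets 42 without any wire from a or b
```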

I think your mental model of how the internals might work is actually getting in the way here. Yes Virginia, there *is* a wire construct in the execution environment. I grant that my mental model could be wrong too (AQ is probably ROTFLHAO at this point), but hopefully you can see why I think real dataflow is as simple as it looks on a diagram.

I can't argue with you there. I just can't help but think we can come up with better terminology that doesn't overload the meaning of "dataflow." ("By-value construction" doesn't really accurately describe what you're talking about either. Maybe "directly connected?" "Referenceless?" "Visible construction?" )

Well we're not getting any closer on this. I still think that other stuff is not pure dataflow, and 'dataflow' is a very useful word. If you say that every iota of LabVIEW is dataflow, then the only place that word is useful is in the marketing literature, and I'm not willing to cede it to them. Maybe the key is adding a modifier, like 'pure', 'simple', or 'explicit' dataflow.

Aww... don't go away. :( I learn a lot from these discussions.

Hey I'm learning too, and by the way I really want to try out the Lapdog stuff. I may come back to you for help on that.

Edited by jdunham
Link to comment

First, apologies to Steve for totally hijacking his thread. But notice it doesn't stop me from continuing...

Maybe the key is adding a modifier, like 'pure', 'simple', or 'explicit' dataflow.

Ooooo... "explicit dataflow." I like that. It emphasizes visible dataflow for the programmer without undermining what dataflow really is. And instead of queues, etc. "breaking dataflow," they simply use "implicit dataflow."

So anyway, it's highly likely that the LV compiler generates executable clumps of code, and that each execution thread in labview is some array or FIFO queue of these clumps. The execution engine probably does a round-robin sequencing of each thread queue and executes the next clump in line. So when a clump contains a Dequeue Element function, and the data queue is empty, this clump's thread is flagged for a wait or more likely removed from some master list of active threads.

I don't claim any special knowledge either--just what I've picked up from various sources. I think you are mostly right, but let me try to describe my understanding...

Labview has 26 different "execution systems." There are 6 named systems: UI, standard, instrument i/o, data acquisition, other 1, and other 2. UI is a single execution system. The other five named systems are actually groups of 5 different execution systems, one for each priority level. So there are independent execution systems named standard-high, standard-normal, standard-background, etc. Unless users explicitly assign a vi to a different execution system, the code runs in standard-normal, so from here on out I'm referring to that execution system. The execution system is assigned some number of operating system threads--some literature says two threads per cpu core, other sources say four.
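
As a very loose mental model (Python pseudocode, and the thread count is illustrative rather than a claim about LabVIEW's actual internals), an execution system behaves roughly like a shared run queue of ready clumps served by a small pool of OS threads:

```python
import queue, threading

run_queue = queue.Queue()            # clumps whose inputs are all satisfied

def worker():
    while True:
        clump = run_queue.get()      # whichever thread is free takes the next clump
        if clump is None:            # shutdown sentinel
            break
        clump()                      # run the clump to completion

pool = [threading.Thread(target=worker) for _ in range(2)]   # e.g. two per core
for t in pool:
    t.start()

run_queue.put(lambda: print("clump A done"))
run_queue.put(lambda: print("clump B done"))
for _ in pool:
    run_queue.put(None)              # let the workers exit
```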

There are a couple places where I think you're slightly off the mark. First, there is a single clump queue for each execution system, not for each thread. Clumps are added to the queue (by Labview's scheduler?) when all inputs are satisfied. The execution system assigns the clump at the front of the queue to the next thread that becomes available. Second, when the dequeue prim is waiting for data, the thread is not put to sleep. If that happened then two (or four) dequeues waiting for data would block the entire execution system and nothing else would execute. There are a few things I can think of that might happen:

<wild speculation>

1. The clump containing the dequeue prim signals the execution system that it is waiting for data. The execution system pulls it off the thread and puts it at the back of the queue. When it gets back to the front of the queue the execution system gives it to a thread and the dequeue prim checks the queue again to see if there is any data ready. So while the dequeue does "sleep," it also is woken up periodically to check (poll) the queue. One way to look at this is to imagine the queue refnum is a clump input and the data returned is a clump output.

2. Labview's clumping algorithm essentially divides the dequeue prim's functionality into separate pieces--the waiting and the fetching. When the compiler reaches a dequeue prim that has to wait for a future action it says, "I can't clump effectively beyond this point, so I'll stop here." The wait clump has the queue refnum as an input but no data output. (I think it would have to send some sort of sequencing signal to the fetch clump to maintain sequencing though.) During execution the wait clump finishes executing even if the dequeue is waiting, so there is no need to put it back on an active thread and check to see if there's data available. The fetching clump includes the operations downstream from the dequeue prim and has two inputs, the sequence signal from the wait clump and the data input from the enqueue prim's clump. Since it hasn't received data from the enqueue clump, the fetching clump sits in the waiting room, that area reserved for clumps that are not ready to execute.

Option 2 is potentially more efficient since it doesn't require repeatedly putting the same clump back on an active thread to check the queue for data. (There is still "polling" though, it has just been pushed off to some function--using a different thread altogether--checking all the clumps in the waiting room.) Option 2 is also way more complicated and I haven't been able to figure out how that kind of implementation would exhibit the behavior we see when two or more dequeues are working on the same queue.

</wild speculation>

For the system clock/timer, that's surely a hardware interrupt, so that's code executed whenever some motherboard trace sees a rising edge, and the OS passes that to LV and then something very similar to the above happens. So that's not really polled either.

I'd be really surprised if an OS passes access to the system timer interrupt through to Labview. First, no operating system in the world is going to allow user level code to be injected into the system timer's interrupt service routine. (Queue Yair linking to one... :) ) ISRs need to be as short and fast as possible so the cpu can get back to doing what it's supposed to be doing. Second, since the OS uses the system timer for things like scheduling threads it would be a huge security hole if application code were allowed in the ISR.

The OS kernel abstracts away the interrupt and provides timing services that may (or may not) be based on the interrupt. Suppose my system timer interrupts every 10 ms and I want to wait for 100 ms. Somewhere, either in the kernel code or in the application code, the number of interrupts that have occurred since the waiting started needs to be counted and compared to an exit condition. That's a kind of polling.

Here's more evidence the Get Tick Count and Wait (ms) prims don't map to the system timer interrupt. Execute this vi a couple times. Try changing the wait time. It works about as expected, right? The system timer interrupt on Windows typically fires every 10-15 ms. If these prims used the system timer interrupt we wouldn't be able to get resolution less than 10-15 ms. There are ways to get higher resolution times, but they are tied to other sources and don't use hardware interrupts.

[attached image: the timing test vi]
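
If you don't have LabVIEW handy, roughly the same experiment in Python (just a stand-in for the prims, not a claim about how they are implemented) is:

```python
import time

wait_ms = 3                                        # try values well below 10-15 ms
start = time.perf_counter()
time.sleep(wait_ms / 1000.0)
elapsed = (time.perf_counter() - start) * 1000.0
print(f"asked for {wait_ms} ms, measured {elapsed:.2f} ms")
```

Whether the measured time snaps to 10-15 ms steps or resolves finer tells you something about the timing source being used; the exact numbers will vary by platform.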

by the way I really want to try out the Lapdog stuff. I may come back to you for help on that.

I'll be around. Post a message if you have questions. (I might see it quicker if you send me a pm too.)

Link to comment

[quoted image from Yair's earlier post]

Oh gosh, I found that disturbing. I think it was a mistake on NI's part to allow this in the language syntax.

Yeah, well, flat sequences are evil. Long live the stacked sequence! :D

First, no operating system in the world is going to allow user level code to be injected into the system timer's interrupt service routine. (Queue Yair linking to one... :) )

I think you meant "cue Rolf". That's a bit outside my comfort zone (by about that much - :o).

Link to comment

Something occurred to me this weekend while puzzling over these questions. Earlier I suggested the compiler needed deterministic code to be able to reduce it to explicit dataflow. I don't think deterministic is the right word to use. As this diagram illustrates, non-deterministic code can be reduced just as easily.

[attached image]

What allows the reduction is that the code is robust against variations in timing. Regardless of which branch (the enqueue branch or the dequeue branch) starts executing first, or the amount of time that passes between when they start, there is only one possible data source the add function's upper terminal maps to--the constant 1 or the random function. Perhaps that is what AQ is referring to as "dataflow."

That gives us three different kinds of dataflow. I think all are valid, they just view it from slightly different perspectives.

[Note: I'm defining "source node" as something that generates a value. Examples of source nodes are constants and the random function. Examples that are not source nodes are the dequeue and switch function outputs, although these outputs may map back to a single source node depending on how the source code is constructed.]

Explicit Dataflow - (Jason's interpretation.) Data always travels on visible wires from the source node to the destination. Maintaining explicit dataflow promotes code transparency and clarity, but explicit dataflow is not required by Labview. Breaking explicit dataflow may or may not change how the program executes at runtime.

Constant-Source Dataflow - (My interpretation of AQ's interpretation.) Every input terminal maps to a single source node. Maintaining constant-source dataflow allows the compiler to do more optimization, resulting in more efficient code execution. Breaking constant-source dataflow requires run-time checks, which limit the compiler's ability to optimize compiled code.

Execution Dataflow - (My interpretation.) At runtime nothing executes until it has all the data it needs. This refers to how the execution environment determines which clumps are ready to run and is what makes Labview a dataflow language. It is impossible to "break" execution dataflow in Labview.

I'm waffling a bit on the naming of Constant-Source Dataflow. I considered "Temporally Robust Dataflow" as a way to communicate how variations in timing don't affect the outcome, but I didn't really like the wording and it puts a slightly different emphasis on what to focus on. It may turn out that constant source dataflow and temporally robust dataflow mean the same thing... I'll have to think about that for a while.

I think you meant "cue Rolf".

Oh gawd. I think my brain has been taken over by NI... I am unable to differentiate between "queue" and "cue." (I kept looking at that thinking something doesn't seem right...)

Link to comment

There are a couple places where I think you're slightly off the mark. First, there is a single clump queue for each execution system, not for each thread. Clumps are added to the queue (by Labview's scheduler?) when all inputs are satisfied. The execution system assigns the clump at the front of the queue to the next thread that becomes available. Second, when the dequeue prim is waiting for data, the thread is not put to sleep. If that happened then two (or four) dequeues waiting for data would block the entire execution system and nothing else would execute. ...So while the dequeue does "sleep," it also is woken up periodically to check (poll) the queue....

Well I was hoping that AQ would have backed me up by now, but then I realized that he backed me up on this last time I tried to tell people that queues don't poll. So if you won't take my word for it, you can post a rebuttal on that other thread.

Another related post in the same thread has some interesting info on thread management and parallelism.

I'd be really surprised if an OS passes access to the system timer interrupt through to Labview. First, no operating system in the world is going to allow user level code to be injected into the system timer's interrupt service routine. (Queue Yair linking to one... ) ISRs need to be as short and fast as possible so the cpu can get back to doing what it's supposed to be doing. Second, since the OS uses the system timer for things like scheduling threads it would be a huge security hole if application code were allowed in the ISR.

The OS kernel abstracts away the interrupt and provides timing services that may (or may not) be based on the interrupt. Suppose my system timer interrupts every 10 ms and I want to wait for 100 ms. Somewhere, either in the kernel code or in the application code, the number of interrupts that have occurred since the waiting started needs to be counted and compared to an exit condition. That's a kind of polling.

Here's more evidence the Get Tick Count and Wait (ms) prims don't map to the system timer interrupt. Execute this vi a couple times. Try changing the wait time. It works about as expected, right? The system timer interrupt on Windows typically fires every 10-15 ms. If these prims used the system timer interrupt we wouldn't be able to get resolution less than 10-15 ms. There are ways to get higher resolution times, but they are tied to other sources and don't use hardware interrupts.

Well I didn't mean that the OS allows LabVIEW to own a hardware interrupt and service it directly. But an OS provides things like a way to register callbacks so that they are invoked on system events. The interrupt can invoke a callback (asynchronous procedure call). Do you think LabVIEW polls the keyboard too?

Back in the old days, the DOS-based system timer resolution was only 55 ms for the PC, and LV made a big deal of using the newly-available multimedia system timer for true 1 ms resolution. I think that's still the way it is, and 1 ms event-driven timing is available to LabVIEW.

That gives us three different kinds of dataflow. I think all are valid, they just view it from slightly different perspectives.

[Note: I'm defining "source node" as something that generates a value. Examples of source nodes are constants and the random function. Examples that are not source nodes are the dequeue and switch function outputs, although these outputs may map back to a single source node depending on how the source code is constructed.]

Explicit Dataflow - ...

Constant-Source Dataflow - (My interpretation of AQ's interpretation.) Every input terminal maps to a single source node. Maintaining constant-source dataflow allows the compiler to do more optimization, resulting in more efficient code execution. Breaking constant-source dataflow requires run-time checks, which limit the compiler's ability to optimize compiled code.

Execution Dataflow - ...

I'm waffling a bit on the naming of Constant-Source Dataflow. I considered "Temporally Robust Dataflow" as a way to communicate how variations in timing don't affect the outcome, but I didn't really like the wording and it puts a slightly different emphasis on what to focus on. It may turn out that constant source dataflow and temporally robust dataflow mean the same thing... I'll have to think about that for a while.

Based on re-reading this old AQ post, I'm trying to reconcile your concept of constant-source dataflow with the clumping rules and the apparent fact that a clump boundary is created whenever you use a node that can put the thread to sleep. It sounds like it is much harder for LV to optimize code which crosses a clump boundary. If someone cares about optimization (which in general, they shouldn't), then worrying about where the code is put to sleep might matter more than the data sources.

Overall I'm having trouble seeing the utility of your second category. Yes, your queue examples can be simplified, but it's basically a trivial case, and 99.99% of queue usage is not going to be optimizable like that so NI will probably never add that optimization. I'm not able to visualize other examples where you could have constant-source dataflow in a way that would matter.

For example, you could have a global variable that is only written in one place, so it might seem like the compiler could infer a constant-source dataflow connection to all its readers. But it never could, because you could dynamically load some other VI which writes the global and breaks that constant-source relationship. So I just don't see how to apply this to any real-world programming situations.

Link to comment

First, apologies to Steve for totally hijacking his thread.

Hey, no worries, this has turned into a much more interesting thread than I had hoped for!

About the only input I would have is on polling vs. not polling. I think you hit the nail on the head. At a low enough level every event driven programming environment is in fact polling.

I just read the link to AQ's post that jdunham provided, where he said there is no CPU activity from a sleeping queue prim. He would know and I would agree. But he goes on to say that the thread with the queue prim goes to sleep until another obviously running thread wakes it up. The distinction between event-driven and polling is really just a matter of perspective. It is event-driven if the polling is provided for you. The LabVIEW runtime never really sleeps, and it is doing the polling on behalf of the sleeping thread with the queue prim. Even if it does sleep it is awakened by the OS, which is doing the polling. CPUs have come a long way but they still need to continuously execute instructions which conditionally execute other instructions :rolleyes:

I wish I could write pages of interesting thoughts but I will go back to lurk mode for a while.

Link to comment
