
Map, implemented with classes, for 8.5


Recommended Posts

QUOTE(robijn @ Aug 29 2007, 10:21 AM)

Wrong lesson to take away from this. The actual lesson is, "Wow, AQ coded a Map class in just three hours that is clean, efficient AND dataflow-safe without introducing the semaphore/mutex locking mechanisms that reference classes would've required."

QUOTE(orko @ Aug 29 2007, 02:49 PM)

QUOTE(Gavin Burnell @ Aug 29 2007, 02:38 PM)

When you live on the cutting edge, you don't get documentation. ;-)

And now, let's see if I can get a few of Tomi's questions answered:

Question 1: Can I get more details about the "simple" Swap in the insert case?

Tomi asked for further details about the "simple" case where we swap data into the Map during an insert... He gave some examples of differences in behavior that he sees from the Show Buffer Allocations tool when using the Swap prim with different data types. I'm not sure what he saw exactly, but here's a more detailed explanation of my code:

Suppose you have a cluster containing a string. You can unbundle, append to the string and bundle back into the cluster. LV doesn't make a copy of the string because it knows that the result is going back to the same place as the original data. Thus the unbundle terminal is said to be "inplace" to the bundle terminal.

But, if you unbundled the string, appended to it and then sent that string somewhere else, without putting it back in the bundle, LV would make a copy of the string. The data isn't coming back to the cluster, so it cannot share memory with the original cluster because the original cluster needs to continue its execution with the unmodified string value.
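In rough C++ terms (purely an analogy -- this is not how LabVIEW is implemented, and the Cluster type here is made up), the two cases look like this:

#include <string>

struct Cluster { std::string s; };

// Case 1: unbundle, append, bundle back into the same cluster.
// The result goes back where it came from, so the string is
// modified in the cluster's own buffer -- no copy is made.
void append_inplace(Cluster &c) {
    c.s += " more text";
}

// Case 2: unbundle, append, and send the result somewhere else.
// The original cluster must keep its unmodified value, so the
// string has to be copied before it can be appended to.
std::string append_and_send_elsewhere(const Cluster &c) {
    std::string copy = c.s;   // forced copy
    copy += " more text";
    return copy;
}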

Now, consider those two concepts with respect to the Map class. The Map has a piece of data in location P. The new node being inserted has a piece of data at location Q. There are downstream terminals X and Y:

http://forums.lavag.org/index.php?act=attach&type=post&id=6806

Position P would like to be inplace to position X. But if I have new data coming into the cluster AND I need to have the original cluster available for writing, then LV has to make a copy of the data at P. Further, the data that is in Q is being moved into X, but those two memory locations cannot be made in place to each other because they are part of different allocated clusters in memory. So we also make a copy of Q. This is the case of the Unbundled data not going back to the original Bundle node after being modified.

Using the Swap primitive avoids both copies. Instead of copying data out of the cluster and sending it somewhere else, we're now modifying the value in the cluster. How are we modifying the value? We're Unbundling data, then exchanging the value in that memory location with the value at another memory location, and then the wire is going back into the Bundle node. Thus LV knows not to make a copy of the data at the Unbundle node.
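Again in hedged C++-analogy terms (P and Q are just the two buffers from the picture, and none of this is literal LabVIEW code), the difference between copying and swapping is:

#include <string>
#include <utility>

struct MapNode { std::string data; };

// Copy-based insert: the value at Q is duplicated into P's cluster,
// and P's old value would also have to be copied if anything else
// still needed it.
void insert_by_copy(MapNode &p, MapNode &q) {
    p.data = q.data;              // full copy of Q's buffer
}

// Swap-based insert: the buffers behind P and Q simply trade places.
// Nothing is duplicated, and both wires still carry a valid value.
void insert_by_swap(MapNode &p, MapNode &q) {
    std::swap(p.data, q.data);    // exchange, no new allocation
}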

QUOTE

1. So my first very simple question is: what exactly do the Swap Values node and In-Place Memory structure do? Why do they behave differently for different but apparently similar data types?

I have no idea what you're seeing. The class and the cluster should behave identically in the cases you're describing. Please post a VI.

QUOTE

2. How can I know if an operation really happens in-place or not, if it's such a crucial thing when manipulating hierarchical data structures?

The Show Buffer Allocations tool. And it isn't crucial, just more efficient. Yes, there is a difference. The Insert function will work without using the Swap, just with less efficiency. Nothing in LV was ever designed to do what I've done with Map. It is a coincidence arising from a thousand different design decisions that this is even viable.

QUOTE

3. In-place memory structure is a rather intuitive thing when you look at the examples of its usage in the LabVIEW help. However, when you make a subVI call inside an in-place memory structure, things get more complicated. How does the subVI know that it needs to operate in-place?

The subVI doesn't know. The subVI is compiled with its own inplaceness. If it doesn't preserve inplaceness across outputs, then the caller will copy the output back into the original buffer. There aren't any references.
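A loose illustration of that caller-side behavior (hedged C++ analogy only, not a description of LabVIEW internals; the names are invented):

#include <vector>

// A "subVI" that preserves inplaceness: it modifies the caller's buffer.
void sub_inplace(std::vector<double> &buf) {
    for (double &x : buf) x *= 2.0;          // caller's buffer reused
}

// A "subVI" that does not: it produces its own output buffer.
std::vector<double> sub_copying(const std::vector<double> &buf) {
    std::vector<double> out(buf);            // separate allocation
    for (double &x : out) x *= 2.0;
    return out;
}

void caller() {
    std::vector<double> data(1000, 1.0);
    sub_inplace(data);           // nothing more for the caller to do
    data = sub_copying(data);    // caller puts the output back into
                                 // its own buffer itself
}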

QUOTE

If we just don't use in-place memory structures and don't mind making copies, will we be able to use hierarchical data structures such as binary trees without any tricks and without risks of parallel wires interfering?

Honestly, I'm not sure. But I believe that the natural dataflow expression of LV will kick in and make creating the tree impossible.

QUOTE

QUOTE
The node that we're going to delete has the left and right subtrees as member data values. When the deleted node reaches the end of its wire, it will disappear from memory AND WILL TAKE ALL OF ITS MEMBER DATA WITH IT.

Haven't we used the Unbundle node to tell LabVIEW that we are going to use two of the data members, Left and Right in this case? Was the problem that if we didn't use Swap Values nodes to write dummy values l and r into the original object X(L,R) private data, LabVIEW would need to make data copies of the Left (L) and Right (R) in order to be able to remove the original object X(L,R) from memory? And it wouldn't be an in-place operation any more and we would lose the efficiency.
You got it. If we don't use the Swap primitive, then LV will kindly make a copy of the left and right subtrees for us so that we have independent copies separate from the node being deleted.
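A small C++ sketch of the same idea (hypothetical types; the point is that swapping dummies into the dying node is what lets the subtrees survive without being duplicated):

#include <memory>
#include <utility>

struct TreeNode {
    std::unique_ptr<TreeNode> left;
    std::unique_ptr<TreeNode> right;
    // key and value omitted
};

// Remove `doomed` from the tree but keep its subtrees: swap dummy
// (empty) values into the node before it dies, so it takes only the
// dummies with it. Without the swap, independent copies of both
// subtrees would be needed.
void delete_node_keep_subtrees(std::unique_ptr<TreeNode> &doomed,
                               std::unique_ptr<TreeNode> &left_out,
                               std::unique_ptr<TreeNode> &right_out) {
    std::swap(left_out,  doomed->left);
    std::swap(right_out, doomed->right);
    doomed.reset();   // the node disappears; its members are already safe
}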

QUOTE

Exactly when would X(L,R) disappear from memory?

Whenever the wire that the data is sitting in next executes and new data on the wire stomps on the old data.

QUOTE

How do you know when LabVIEW is going to optimize copies incorrectly, either in LV 8.5 or any of the future versions? I guess this is an undocumented issue... How would LabVIEW incorrectly optimize the code in this particular case?

Let me be very clear here: If I do not wire the output of the Unbundle node, LV is correct to optimize out the Bundle node. Every rule of dataflow says that the execution of that bundle node should not be necessary -- there's nothing downstream from the Bundle node to need the data. Remember, we're violating dataflow here -- we're saying that we need LV to change the value on a wire even though that wire isn't going anywhere because the memory is actually shared with items that are going on down the wire.

How will this keep working in future LV versions? Don't know. In fact, I'm pretty sure that LV's compiler will eventually be smart enough to optimize this out unless I deliberately put something in to keep this working. I'm not entirely sure that it should keep working. By many arguments, the code I've written for the Map is a bug that should be fixed, probably in 8.5.1. This bit of magick arises as the confluence of multiple features, and I've posted it here to see what everyone thinks. I see a certain elegance to it, but this may be the same romance a moth feels for a flame.

QUOTE

So what exactly has happened here? We have had four handles (pointers to pointers) for LabVIEW objects: LH, RH, lH and rH. The handles themselves remain at the same addresses, but the pointers the handles refer to get exchanged. So LH, which originally referred to a pointer Lp to L, now refers to another pointer lp to l. Am I right or wrong here?

Completely correct.
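In pointer terms, that exchange is just the following (type names made up for illustration):

#include <utility>

struct Object;                    // stand-in for the class's data
typedef Object*    ObjectPtr;     // pointer to the data
typedef ObjectPtr* ObjectHandle;  // handle = pointer to pointer

// LH and lH keep their own addresses; only the pointers they refer to
// are exchanged, so afterwards LH leads to l's data and lH leads to L's.
void swap_contents(ObjectHandle LH, ObjectHandle lH) {
    std::swap(*LH, *lH);
}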

QUOTE

How do we know that in a future version of LabVIEW this trick will not be optimized as a no-op?

See previous comments.

QUOTE

Actually, to completely understand these things, I'd need more examples of what I shouldn't do and which actions are not allowed, rather than which actions are the correct answers. I need to ask this one more time. If I don't try to do things in-place, will I still have risks of crashing LabVIEW or something similar when I modify hierarchical data structures?

I honestly am not sure. I tried to get people -- both inside NI and outside -- to work on this problem before release. And I put it in the white paper after release. Everyone got this blank look on their faces and responded with comments like, "I'm not really sure what you're even talking about. I think I'd need to see a working situation to be able to help." So, now everyone can see the scenario that I'm talking about.

If we are going to see a problem, it will definitely show up in the graph case. If the graph can be built at all -- which I'm not sure is possible -- then the bug, if it exists, will have to come to light.

Link to comment

QUOTE(Aristos Queue @ Aug 30 2007, 02:32 AM)

I was unable to replicate what I saw yesterday. I'll post a VI if I can replicate the weird behaviour I saw.

QUOTE(Aristos Queue @ Aug 30 2007, 02:32 AM)

The subVI doesn't know. The subVI is compiled with its own inplaceness. If it doesn't preserve inplaceness across outputs, then the caller will copy the output back into the original buffer. There aren't any references.

If we take a look at your example class method below, I don't actually see how this VI is an in-place VI. The input buffer and the output buffer are clearly different. Or in other words, the output buffer is not a modified version of the input buffer. So how does LabVIEW interpret this VI to preserve inplaceness across outputs?

index.php?act=attach&type=post&id=6789

QUOTE(Aristos Queue @ Aug 30 2007, 02:32 AM)

Dataflow and functional programming are very similar in concept. Actually, dataflow can be considered a subset of functional programming. And functional programming has always allowed this kind of data structure and its manipulation in a very efficient and easy way. So I don't see why LabVIEW in principle couldn't do this. As I already said, I've used recursive data structures in LabVIEW before, in this particular way you used them now, but without the new memory management nodes and structures. And they appear to work somehow.

To prove myself right here, I attach a version of AQ's map where all the in-place tricks are removed. And it appears to work. This is perhaps what has confused me the most in this thread. I've used this kind of memory model for LVOOP for ages (1 year) and it has appeared to work. Now AQ has given a message that actually they may not work or they are not intended to work. This has really confused me, especially when I don't understand why these shouldn't work. I still don't see what the catch is -- why shouldn't these work?

http://forums.lavag.org/index.php?act=attach&type=post&id=6810

QUOTE(Aristos Queue @ Aug 30 2007, 02:32 AM)

I actually meant: why did you have those Always Copy nodes there? You said that you had them there to avoid incorrect optimization. What kind of incorrect optimization could LV do?

QUOTE(Aristos Queue @ Aug 30 2007, 02:32 AM)

How will this keep working in future LV versions? Don't know. In fact, I'm pretty sure that LV's compiler will eventually be smart enough to optimize this out unless I deliberately put something in to keep this working. I'm not entirely sure that it should keep working. By many arguments, the code I've written for the Map is a bug that should be fixed, probably in 8.5.1. This bit of magick arises as the confluence of multiple features, and I've posted it here to see what everyone thinks. I see a certain elegance to it, but this may be the same romance a moth feels for a flame.

So to write future-proof code, avoid relying on the memory model, compiler optimizations, or in-placeness.

Link to comment

QUOTE(Aristos Queue @ Aug 30 2007, 01:32 AM)

3 hours, that's quick!

I like the consistency: you basically say "we have dataflow, now we just need it to flow efficiently". So just make sure that the data arrives at the correct place in one go, without accidentally generating copies. A copy can probably not be prevented everywhere. And it is tricky: can you guarantee the efficiency when it's embedded in a larger application? I.e. won't the tree become slow because the way the tree is used in the app requires LV to make copies of the tree when a tree manipulation method is called?

Oh BTW, I think a user should never need to think about locking anyway (unless he wants something unusual). Read back my short story on multi-level locking (which works, proven). But that's of no relevance now. But you also still have the locking requirement: if you call a tree manipulation function from two places at the same time, you would still need to lock the entire tree to prevent one mutation from getting lost.

QUOTE(Aristos Queue @ Aug 30 2007, 01:32 AM)

Let me be very clear here: If I do not wire the output of the Unbundle node, LV is correct to optimize out the Bundle node. Every rule of dataflow says that the execution of that bundle node should not be necessary -- there's nothing downstream from the Bundle node to need the data. Remember, we're violating dataflow here -- we're saying that we need LV to change the value on a wire even though that wire isn't going anywhere because the memory is actually shared with items that are going on down the wire.

[..]

thinks. I see a certain elegance to it, but this may be the same romance a moth feels for a flame.

:) I think I know some fatal attraction

Joris

Link to comment

QUOTE(robijn @ Aug 30 2007, 02:35 PM)

But you also still have the locking requirement: if you call a tree manipulation function from two places at the same time, you would still need to lock the entire tree to prevent one mutation from getting lost.

No, you wouldn't need locking. If you call tree manipulation from two places at the same time, you would manipulate two different trees.

Link to comment
  • 3 weeks later...

QUOTE(Aristos Queue @ Aug 29 2007, 09:50 AM)

So, you tell me... should we close this hole in the language syntax?

Yes.

I'm all for "openness" and "accessibility" and I've enjoyed my days of hacking deep in the bowels of a variety of language constructs but I think this is really not consistent with LV, at least as I understand LV and as far as I understand what you're presenting in your example.

Link to comment

QUOTE(Val Brown @ Sep 17 2007, 02:30 PM)

Yes.

I'm all for "openness" and "accessibility" and I've enjoyed my days of hacking deep in the bowels of a variety of language constructs but I think this is really not consistent with LV, at least as I understand LV and as far as I understand what you're presenting in your example.

I have to respectfully disagree ... if only because I've already used this trick in (soon to be) running code :D

Jaegen

Link to comment

QUOTE(jaegen @ Sep 17 2007, 03:22 PM)

I have to respectfully disagree ... if only because I've already used this trick in (soon to be) running code :D

Jaegen

I understand that but, from a slightly different perspective, you make my point (implicit as it might have been in that last post but obvious in others). It's possible (likely?) that this feature may be removed -- seen as a bug and eliminated. If that happens, what will happen to your code?

Part of my trust in LV is that I KNOW what will and will not be there -- IF I stick with the fully documented features. I can and will get support on them from NI -- and have! -- when "something's changed" or "not working" and, as a developer, that reliability is of paramount importance. My days of chasing the "latest build of" some Unix-variant are LONG OVER, and I really don't want to get into THAT kind of stuff with regard to LV.

I understand -- others have different perspectives, different styles, different uses and different tolerances for doing that kind of dance -- and that's fine. But the question WAS asked and I've just given my perspective, FWIW.

Link to comment
  • 2 weeks later...

Another thing that just came up, is in the CompareKey.vi.

Each of the comparison items needs to be Compare Aggregates rather than Compare Elements.

This needs to be changed if the data type of the key becomes anything other than a scalar.

Any reason not to?

<edit>

Another item that bubble sorted to the top was that the input/output array in 'In Order Fetch Keys' needs to be an array of the key data type rather than just an array of strings.

Link to comment

The delay in replying to all the points here is a) I've been out of the office and b) I got a new computer that can build LV really really fast so I no longer have huge pauses during my day to check and reply to LAVA. I will return to this project, but it'll probably be a bit. I do want to get answers posted to the various questions, but this is sort of an ongoing side project rather than mainline work, so it keeps getting deprioritized.

Link to comment

QUOTE(Aristos Queue @ Oct 1 2007, 01:11 PM)

The delay in replying to all the points here is a) I've been out of the office and b) I got a new computer that can build LV really really fast so I no longer have huge pauses during my day to check and reply to LAVA. I will return to this project, but it'll probably be a bit. I do want to get answers posted to the various questions, but this is sort of an ongoing side project rather than mainline work, so it keeps getting deprioritized.

Maybe we should all pool our money together to get you a slower computer :P

Link to comment
  • 2 weeks later...

Earlier, I wrote this:

QUOTE

There are points in this graph where we are modifying a value on one wire which results in a change in value on another parallel wire.

That statement is FALSE. I made this statement because I really believed that was what was happening in the diagram. When everyone started asking, "Where is it?" I went back to study the diagram in detail, and had to think about each and every wire branch, and I realized I was wrong. I am sorry for leading you on a wild goose chase.

I said that use of a parent class in the child class object leads to a hole in LV's dataflow safety. I now retract that statement. There's still something at the back of my head nagging at me that a hole exists, but obviously the Map class doesn't demonstrate it. For now, assume the hole doesn't exist and this was just the fevered worrying of a developer who has spent too many years staring at LV class inplaceness and has become paranoid that some data-copy-bug is actively stalking him. At the moment, we have no known issues with this. Everyone who said, "I don't see why this should be a problem" was correct.

I also need to correct one other point, about the Always Copy primitive:

This error results from my own poor understanding of a new LV feature. In the original posting of the Map class, I used some Always Copy primitives to force a copy of the constant's value before wiring it to a Swap primitive. This turns out to be unnecessary. Since the Swap prim is not something that modifies values, I thought that the swap would be inplace with the instance coming out of the constant, and thus would cause problems for the constant. Turns out that LabVIEW is smarter than that and the Swap prim will make a copy rather than stomp on the constant's value. So the Always Copy prim is not needed.

The rest of my comments -- the need for the Swap prims, etc -- appear to be correct.

Now, on to the rest of the questions...

QUOTE(Tomi Maila @ Aug 30 2007, 03:11 AM)

I have decided that I didn't explain some aspects of this very well. Specifically, Tomi posted a version of the Map class with all the inplaceness tricks removed, which works fine, and he wanted to know why the inplace tricks are needed. So I have created a MUCH simpler example: the Linked List.

Download File:post-5877-1192399595.zip

This is a linear list of data, where it is easy to append to the start of the list without ever reallocating all the existing entries. To access items in the list, you have to traverse down the list -- there is no direct access the way there is with an array. I've gone with a very flat hierarchy for implementing this example so that the bare minimum number of VIs exist.

Just as with the Map, the key bit of magic is a child class that uses its parent class in the private data cluster. In this case, the parent class is Root and the child class is Node.

There are only four operations currently defined on the list:

  1. "Dump to string" allows the list to be shown as a string for debugging (so you can evaluate whether it is working right or not)
  2. "Insert at front 1" is one implementation of inserting at the front of the list
  3. "Insert at front 2" is a second implementation of inserting at the front of the list; compare the two implementations
  4. "Insert at index" walks down the list to a given index and performs an insert at that point

If you open Demo.vi you will see two linked lists, one built up using insert version 1, the other using insert version 2.

Ok... so let me try to deal with a long list of questions that have been raised by the Map class. To begin with, the Swap block. Here are the block diagrams for LinkedList.lvlib:Node.lvclass:InsertAfter1.vi and LinkedList.lvlib:Node.lvclass:InsertAfter2.vi. (I used red text so you could see there was actually a difference in those two long names!)

post-5877-1192396031.png?width=400

post-5877-1192402015.png?width=400

In each picture, there's a point marked "A". In version 1, at point A, we make a full copy of the entire list. The information in the cluster is being moved out of one cluster and into another cluster. So LV decides there has to be a copy made so that the two clusters aren't sharing the same data. This defeats the purpose of the LinkedList which is supposed to do inserts without duplicating the list. In version 2, at point A, we use the Swap primitive, new in LV8.5. We take a default Root object and swap it with the contents of Next. Now the contents of the Unbundle are free to be put into a new cluster without duplicating all that data.
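For anyone who thinks more easily in text, here is a hedged C++ analogue of those two diagrams (names invented; LabVIEW's by-value classes are approximated with unique_ptr, so this is only a sketch of the copy-vs-swap difference):

#include <memory>
#include <string>
#include <utility>

struct ListNode {
    std::string data;
    std::unique_ptr<ListNode> next;
};

static std::unique_ptr<ListNode> deep_copy(const std::unique_ptr<ListNode> &n) {
    if (!n) return nullptr;
    auto c = std::make_unique<ListNode>();
    c->data = n->data;
    c->next = deep_copy(n->next);
    return c;
}

// Version 1 analogue: at point A the entire remaining list is duplicated,
// because the data moves out of one cluster and into another.
void insert_after_v1(ListNode &here, std::string value) {
    auto fresh = std::make_unique<ListNode>();
    fresh->data = std::move(value);
    fresh->next = deep_copy(here.next);   // point A: full copy of the tail
    here.next = std::move(fresh);
}

// Version 2 analogue: at point A the tail is swapped out of `here` and
// into the new node, so nothing in the existing list is duplicated.
void insert_after_v2(ListNode &here, std::string value) {
    auto fresh = std::make_unique<ListNode>();
    fresh->data = std::move(value);
    std::swap(fresh->next, here.next);    // point A: exchange, not copy
    here.next = std::move(fresh);
}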

Tomi: Does that make sense?

QUOTE(NormKirchner @ Sep 26 2007, 03:08 PM)

The Map Node class is the parent for both Branch Node and Leaf Node. As such, it has to define the interface that both of the children classes must match. Map Node defines the API, and then Branch and Leaf implement it. In the simpler Linked List example, I have done away with the base case. I only have Root and Node, rather than a common parent for each of them. This decision not to have the common parent is a bad architecture decision in many cases because now if I want to have behavior on Root that does not apply to Node, I have no obvious place to put such code. As hierarchies get deeper, such decisions can lead to all sorts of weird hacks.
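Roughly, as a hedged textual sketch (C++ needs pointers for the recursion, whereas in LabVIEW the parent sits by value inside the child's private data cluster):

#include <memory>

// Map: a common parent defines the API; the two children implement it.
struct MapNode    { virtual ~MapNode() = default; /* shared API */ };
struct BranchNode : MapNode { std::unique_ptr<MapNode> left, right; };
struct LeafNode   : MapNode { /* key and value live here */ };

// Linked List: flat hierarchy. Root doubles as the "empty list" base case;
// Node is the child that adds the payload and the recursive link.
struct Root { virtual ~Root() = default; };
struct Node : Root {
    /* payload */
    std::unique_ptr<Root> next;   // the parent class used inside the child
};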

QUOTE(NormKirchner @ Oct 1 2007, 10:46 AM)

Another thing that just came up, is in the CompareKey.vi.

Each of the comparison items needs to be Compare Aggregates rather than Compare Elements.

This needs to be changed if the data type of the key becomes anything other than a scalar.

Any reason not to?

If the data type changes from String to LabVIEW Object (as I intend to be able to do in the next version of LV), then Compare Elements will still work just fine -- a class is a scalar as far as any operations on the class are concerned. The fact that there is data bundled together underneath to implement that scalar is hidden away. Note: Do not take this comment as a reason to believe that we'll have class comparison working in the next version of LabVIEW. I may intend this to happen, but then, I intended to have classes in 8.0, and we see what happened there...

QUOTE

Another item that bubble sorted to the top was that the input/output array in 'In Order Fetch Keys' needs to be an array of the key data type rather than just an array of strings.

Yes. I missed that one.

------------------------

Ok. I think I'm caught up on the backlog of questions on this thread. If there are any I missed, please repost them.

Link to comment

QUOTE(Tomi Maila @ Aug 29 2007, 11:25 AM)

2. How can I know if an operation really happens in-place or not, if it's such a crucial thing when manipulating hierarchical data structures?
Use the "Show Buffer Allocations" tool to see where copies are being made. Read up on "playing hide the dots", a game known to advanced real-time programmers in LV. Or assert the inplaceness with the new tools in LV8.5. In the worst case, post your code and ask.

This really is a rarefied stratum of LV programming; users who need to know what it is are extreme statistical outliers. You'll notice that LV had been around for 20 years before we put in the inplaceness structure. That's because the vast majority do not care and do not need to care.

Link to comment

AQ, thanks for clearing up the misunderstandings.

I guess a linked list is indeed a better demo because it is basically a simple tree without the more complicated stuff like balancing and searching. The most important thing about the inplaceness is demonstrated very clearly.

Joris

Link to comment

QUOTE(Tomi Maila @ Oct 15 2007, 04:33 AM)

I'm with Tomi on this one. Showing the dots gives no added clarity to the situation at all. Also I'm a little mixed up on why the dots exist on the input terminals. I get it, since if we run it stand-alone, it must acquire memory for them, but how do we reconcile this with the fact that it may be a subVI and the memory is already acquired? Or should we just always ignore dots on terminals?

Link to comment

QUOTE(Norm Kirchner @ Oct 16 2007, 06:59 PM)

I'm with Tomi on this one. Showing the dots gives no added clarity to the situation at all. Also I'm a little mixed up on why the dots exist on the input terminals. I get it, since if we run it stand-alone, it must acquire memory for them, but how do we reconcile this with the fact that it may be a subVI and the memory is already acquired? Or should we just always ignore dots on terminals?

Actually, I think there is either a bug in the Show Buffer Allocations algorithm, or the Swap node doesn't work as it should, or AQ didn't use it properly. Can anyone confirm which of these three options is the valid one?

Tomi

Link to comment

A) There's a dot on all the block diagram constants because there's a buffer allocated there (the constant value).

B) The copy dot on the Swap Primitive is on the upper input terminal, indicating a buffer is allocated for the incoming data. That's because a copy has to be made of the constant value coming in so that the swap can be performed without destroying the original buffer coming down from the constant. So the copy is of the constant, which is cheap because the default default value of any class is trivial to copy (there's only a single shared copy of the default default value).

C) Looking at the version that doesn't have the Swap primitive, there's a dot on the bundle node on the output, indicating that a buffer has been allocated for the output of this operation. The center terminal input value is copied into that output and then the element data from the left-side input is copied over that. There's only one dot because there's only one buffer allocated -- that one buffer being the output buffer.

Got it?

QUOTE(Norm Kirchner @ Oct 16 2007, 10:59 AM)

I'm with Tomi on this one. Showing the dots gives no added clarity to the situation at all. Also I'm a little mixed up on why the dots exist on the input terminals. I get it, since if we run it stand-alone, it must acquire memory for them, but how do we reconcile this with the fact that it may be a subVI and the memory is already acquired? Or should we just always ignore dots on terminals?
There is always a buffer somewhere for the inputs. That buffer may be local (when the VI is run top level) or it may be back on the caller VI (when the VI is used as a subVI). But there's always a buffer, so there's a dot on the input FPTerminals, just as there is always a dot on block diagram constants. You guys are really testing the limits of my inplaceness expertise. I realize you're just trying to learn what's going on here, but yeesh, we're so deep into the underpinnings of LV at this point, these are details that I wouldn't expect a LV developer who'd been on R&D for five years to understand.
Link to comment

QUOTE(Aristos Queue @ Oct 16 2007, 12:31 PM)

these are details that I wouldn't expect a LV developer who'd been on R&D for five years to understand.

So if we get this, we can come work @ NI at (at least) a 5-year R&D salary.

BTW, there is a dot on the output of the bundler in the inplace version, which you are not pointing out.

Link to comment

QUOTE(Tomi Maila @ Oct 16 2007, 06:22 PM)

QUOTE(Aristos Queue @ Oct 16 2007, 07:31 PM)

C) Looking at the version that doesn't have the Swap primitive, there's a dot on the bundle node on the output, indicating that a buffer has been allocated for the output of this operation

I don't know if it is a bug in the "show buffer allocations" algorithm, but I must say it is not always consistent.

I would expect the algorithm to put a dot on all outputs that need to allocate a buffer, i.e. in all places where the input buffer cannot be reused.

In the picture below the dot appears only on the left shift register terminal, even though the code resizes the array twice in the loop; there is no dot to indicate that a buffer is allocated on the function outputs.

post-5958-1192609210.png?width=400

What AQ is saying is just what I would expect, i.e. that the dot should appear on the output where the buffer has been allocated, but in the picture above it appears on the shift register instead of the Build Array output.

My point is that the location of the dots is not always correct (IMO) with respect to what actually causes the buffer allocation, and this makes the "hide the dots" game so difficult (or fun?) sometimes.

/J

Link to comment

QUOTE(Aristos Queue @ Oct 16 2007, 08:31 PM)

A) There's a dot on all the block diagram constants because there's a buffer allocated there (the constant value).

B) The copy dot on the Swap Primitive is on the upper input terminal, indicating a buffer is allocated for the incoming data. That's because a copy has to be made of the constant value coming in so that the swap can be performed without destroying the original buffer coming down from the constant. So the copy is of the constant, which is cheap because the default default value of any class is trivial to copy (there's only a single shared copy of the default default value).

C) Looking at the version that doesn't have the Swap primitive, there's a dot on the bundle node on the output, indicating that a buffer has been allocated for the output of this operation. The center terminal input value is copied into that output and then the element data from the left-side input is copied over that. There's only one dot because there's only one buffer allocated -- that one buffer being the output buffer.

Got it?

No... For each dot in the 'classic' VI there exists a dot in the optimized VI in the exact same position. But there are a few additional dots in the optimized VI. Based on this information, the optimized VI should perform worse than the classic VI.

index.php?act=attach&type=post&id=7272

Let's go even deeper into the world of inplaceness. I expect the classic VI to correspond to something like the following C++ code:

MgErr f(Node** Node_in, LabVIEW_Object** Data, Root** Root_out)
{
    // Block 1
    // 'New Node' is a constant
    // Let's call the buffer originating from 'New Node' B
    Node_Handle B = Create_And_Copy_Buffer(New_Node);

    // Block 2
    // Let's set the content of Next field of 'Node in'
    // buffer to the Next field of newly created buffer B
    // Let's also assume that LabVIEW is not smart enough to
    // notice that Next field of 'Node in' would not be used
    // any more so it makes a buffer copy.
    (*B)->Next = Create_And_Copy_Buffer((*Node_in)->Next);

    // Block 3
    // Let's set the content of Data buffer
    // to the Data field of newly created buffer B
    // Let's also assume that LabVIEW is smart enough to
    // reuse the original Data buffer
    (*B)->Data = Data;

    // Block 4
    // Let's place the buffer B to the Next field of
    // 'Node in' buffer
    (*Node_in)->Next = B;

    // Block 5
    // Return value
    Root_out = Node_in;

    // Block 6
    // Exit
    return noErr;
}

I divided the code into six blocks. Let's compare the code with the actual LabVIEW code in the left-hand picture above. I expect the buffer allocation dots next to each constant and control to indicate buffer allocations that occur when the VI is loaded into memory. These buffer allocations are not present in the C++ code above.

So let's take a deeper look at block 1. In this block we copy the content of the New Node constant to a new buffer that we call B. I expect the LabVIEW buffer allocation tool to show this action as a dot on the bundle node which is directly connected to the New Node constant. Is this right? If I'm right here, I must say this way of presenting the buffer allocations is unintuitive to me. I'd rather have the buffer allocation dot above the bundle node, at the end of the wire originating from the New Node constant. That's where the buffer allocation happens from the LabVIEW programmer's point of view.

In block 2 we make an (unnecessary and now avoidable) memory copy by copying the content of the Next field of Node In to the Next field of B. The Show Buffer Allocations tool shows this allocation with a dot at exactly the same position as the dot for the buffer allocation in block 1. So I assume there is no way for me to distinguish these two dots.
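(For comparison, a hedged guess at what block 2 would look like for the swap-based diagram, continuing the same made-up handle types as above: the two Next pointers simply trade places instead of one being copied.)

// Block 2, swap-based version: no Create_And_Copy_Buffer call.
// B's default Next and Node_in's Next just exchange owners.
Node_Handle tmp  = (*B)->Next;
(*B)->Next       = (*Node_in)->Next;
(*Node_in)->Next = tmp;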

So let's see what happens in the optimized VI then. We have the buffer allocation of block 1 as before. We should not have the buffer allocation of block 2. Still we have all the same dots as before. There is now something I don't understand. To make the issue even more complicated I decided to make another inplace version of the VI. The VI is below. The buffer allocations are marked with red rectangles as before.

post-4014-1192655221.png?width=400

Now, according to LabVIEW, there are fewer buffer allocations than in the example AQ posted. I must say I'm really confused. If someone is not, I congratulate them.

P.S. A nice feature for the Show Buffer Allocations tool would be to allow showing only dynamic buffer allocations and not static load-time buffer allocations. These two kinds of allocation are very different in nature and require different optimizations. Second, it would be nice if the buffer allocations could be shown at the inputs of nodes and not only at the outputs of nodes. This is especially true for the bundle node, as it would be nice to know exactly which buffer is allocated.

Link to comment
