Map, implemented with classes, for 8.5


Recommended Posts

Here's my attempt at implementing Map in LV8.5. Very straightforward. I need someone to write the tree balancing algorithm on one of the subVIs (Map.lvlib:Branch Node.lvclass:Node Insert Pair.vi). I don't have the patience to hunt down the AVL-tree balancing algorithm and make it into LV code.
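For anyone tempted to take on the balancing work, here is a rough sketch of the standard AVL insert-and-rebalance logic, with Python standing in for G. All names (Node, rotate_left, rebalance) are illustrative and do not correspond to the VIs in Map.lvlib:

```python
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None
        self.height = 1

def height(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def rebalance(n):
    """The AVL step: restore the |balance| <= 1 invariant after an insert."""
    update(n)
    balance = height(n.left) - height(n.right)
    if balance > 1:                          # left-heavy
        if height(n.left.left) < height(n.left.right):
            n.left = rotate_left(n.left)     # left-right case
        return rotate_right(n)
    if balance < -1:                         # right-heavy
        if height(n.right.right) < height(n.right.left):
            n.right = rotate_right(n.right)  # right-left case
        return rotate_left(n)
    return n

def insert(n, key, value):
    if n is None:
        return Node(key, value)
    if key < n.key:
        n.left = insert(n.left, key, value)
    elif key > n.key:
        n.right = insert(n.right, key, value)
    else:
        n.value = value                      # key already present: replace
        return n
    return rebalance(n)
```

Translating this into Node Insert Pair.vi would mean doing the height bookkeeping and the two rotations with the class's Left/Right accessors.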

You'll find some *fascinating* code that definitely only works in LV8.5 (uses the new inplaceness nodes) on the block diagram of Map.lvlib:Branch Node.lvclass:Node Delete Key.vi. 20 points and a bowl of gruel to the first person who can explain why the *empty* Sequence Structure is labeled "Do not delete". ;-)

NOTE: The attachment got deleted when LAVA was wiped out in early 2009. Here is a link to it:

http://lavag.org/topic/5983-map-implemented-with-classes-for-85/page__view__findpost__p__70238


QUOTE(Aristos Queue @ Aug 23 2007, 01:56 PM)

That's a tricky one, I guess it has something to do with garbage collection.

Here is how it looks if you analyse it with a LabVIEW based UML Modeller ;-)

http://forums.lavag.org/index.php?act=attach&type=post&id=6725

BTW, how do you set the default data to a leaf Node in the Map class attribute named Root which is a Map Node Object?

Cheers, Mikael


QUOTE(Aristos Queue @ Aug 23 2007, 06:56 AM)

I modified your class to have a somewhat more informative probing system. Now each class in the project has a probe that populates a tree recursively. This way you can actually see the structure of the binary tree used in this implementation.

http://forums.lavag.org/index.php?act=attach&type=post&id=6727

http://forums.lavag.org/index.php?act=attach&type=post&id=6728

QUOTE(Aristos Queue @ Aug 23 2007, 06:56 AM)

You'll find some *fascinating* code that definitely only works in LV8.5 (uses the new inplaceness nodes) on the block diagram of Map.lvlib:Branch Node.lvclass:Node Delete Key.vi. 20 points and a bowl of gruel to the first person who can explain why the *empty* Sequence Structure is labeled "Do not delete". ;-)

This particular VI is actually quite weird. Would you like to shed light on what's happening in this method: why all those Swap nodes are needed, why you use Always Copy nodes, and why you have that 'do not delete' sequence?

EDIT: I spent some time trying to figure out what the heck is going on in this particular VI. I can understand from a dataflow point of view what's going on, but I've no freaking idea why AQ is using Swap Values and Always Copy nodes, or this empty structure. I tried to profile the VI with the LV memory profiler. The memory consumption was not always the same, which is weird. Removing the wire between the sequence frame and the Bundle node seems to decrease the memory usage according to the profiler. I tried to use the Show Buffer Allocations tool. It seems that this tool is not very informative with LVOOP, as there were dots everywhere. Champion or not, I still don't understand LabVIEW memory management and the functionality of the in-placeness algorithm at the depth I'd like to. If only there was some material to read, but I guess I've read all there is.

p.s. Could you point to a reference, such as Wikipedia, that describes the binary tree map algorithm you are using?

Tomi


QUOTE(Tomi Maila @ Aug 23 2007, 05:35 AM)

http://en.wikipedia.org/wiki/AVL_tree

a) the show buffers tool works fine for LV classes, and all those dots are places where buffers are allocated, but you need to know that allocating a new LVClass is dirt cheap -- 1 pointer in all cases since there's only a single shared copy of the default default value.

b) As for explanation of the various swap nodes and force copy blocks, read the LVOOP white paper and focus on the section titled "What is the in-memory layout of a class?" and the paragraph that starts with " For advanced LVOOP programmers there is a counterintuitive feature that we considered removing"

QUOTE(MikaelH @ Aug 23 2007, 01:06 AM)

BTW, how do you set the default data to a leaf Node in the Map class attribute named Root which is a Map Node Object?

In 8.5, drop a constant of class Child and wire it to an indicator of class Parent. Run the VI and then use Make Current Value Default. Change the indicator to a control and move it into the private data control.

Make Current Value Default does not work for LV classes in LV8.2.

PS: This may be a situation that your UML tool should be aware of ... Class A may have a control of class Parent in its private data, but it may use an even lower Child class as the default value of that control. I don't know if you care to highlight such a situation in the UML, but it is a relationship that LV has to track in order to know to load the child class into memory whenever Class A loads.


QUOTE(Aristos Queue @ Aug 23 2007, 04:38 PM)

b) As for explanation of the various swap nodes and force copy blocks, read the LVOOP white paper (http://zone.ni.com/devzone/cda/tut/p/id/3574) and focus on the section titled "What is the in-memory layout of a class?" and the paragraph that starts with "For advanced LVOOP programmers there is a counterintuitive feature that we considered removing"

Ok. I read it. I still don't understand what's going on. I now understand that using recursive data structures in LabVIEW may not be safe. However, I've created graphs and trees before and they've functioned, if you don't care about occasional crashes. I didn't have any of these in-place memory nodes available in LabVIEW 8.2 or 8.2.1. If I interpret you correctly, AQ, you are saying that one can use recursive memory structures but one needs to use some hocus-pocus non-dataflow programming techniques to do this safely. Did I interpret you completely incorrectly? If not, what are these hocus-pocus tricks one needs to use?


QUOTE(Aristos Queue @ Aug 23 2007, 06:56 AM)

... 20 points and a bowl of gruel to the first person who can explain why the *empty* Sequence Structure is labeled "Do not delete". ;-)

My guess: Node Delete Key.vi of Branch Node class calls itself recursively from within in-place memory structure. The in-place memory structure can optimize memory usage only if the buffer that is being edited in-place doesn't get garbage collected. The empty sequence frame is there to keep the edited buffer in memory while the VI is being executed. This somehow affects LabVIEW compilers ability to optimize the execution of the outer in-place memory structure of the calling clone of Node Delete Key.vi.

Was it even close? :)


QUOTE(Aristos Queue @ Aug 23 2007, 11:38 PM)

http://en.wikipedia.org/wiki/AVL_tree

PS: This may be a situation that your UML tool should be aware of ... Class A may have a control of class Parent in its private data, but it may use an even lower Child class as the default value of that control. I don't know if you care to highlight such a situation in the UML, but it is a relationship that LV has to track in order to know to load the child class into memory whenever Class A loads.

Default values are visualized like this:

MyAttribute:int=23

MyObject:BaseClass=LeafClass

But the UML modeller doesn't look at the default values right now.

When it comes to the default values for a reference-based class, these default values should be wired up in the Create/Construct method.

You could still add the equals sign in your design when generating code, but when you do reverse engineering those will be overwritten.

-Mikael


QUOTE(NormKirchner @ Aug 28 2007, 08:54 PM)

And since we all saw what happened the last time Norm got excited, I'm going to go ahead and post this now before it gets worse. ;-)

Let's start with something easy... the case when we are setting a key-data pair into the map and we discover that the value is already in the map. Take a look at the block diagram of Map.lvlib:Branch Node.lvclass:Node Insert Pair.vi

http://forums.lavag.org/index.php?act=attach&type=post&id=6788

Here we are taking the existing data and swapping it with the new value. What does the Swap primitive buy us? Why didn't I just cross those wires? In this case, I could have, but by using the Swap prim, I'm avoiding two copies of data -- one copy out of the Map structure and one copy into the map structure. No copying needed at all. We're playing with pointers in memory here, gentlemen and ladies. Directly touching memory in a way that LabVIEW has never done before. I point out this simple case since it'll help understand the far trickier case of deleting a node from the tree...
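The copy-out/copy-in versus swap distinction can be caricatured in a text language. Python stands in for G here, and replace_by_copy / replace_by_swap are made-up names, not the primitives themselves:

```python
import copy

def replace_by_copy(node, new_value):
    """Cross-the-wires version: copy the old value out, copy the new value in."""
    old = copy.deepcopy(node["value"])        # copy #1: out of the structure
    node["value"] = copy.deepcopy(new_value)  # copy #2: into the structure
    return old

def replace_by_swap(node, box):
    """Swap-primitive version: exchange the two references in place, zero copies."""
    node["value"], box[0] = box[0], node["value"]
```

With replace_by_swap, even a huge value changes hands in O(1): only the references move, which is the "playing with pointers in memory" being described.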

For those of you without LV8.5, here is the block diagram for Map.lvlib:Branch.lvclass:Node Delete Key.vi which caused such consternation in previous posts...

http://forums.lavag.org/index.php?act=attach&type=post&id=6789

What in Hades is going on here?

  1. We are implementing a binary tree. That means that when we delete a node, we now have a left node and a right node. One of those gets put into the tree to take the position of the deleted node. The other must be inserted into the tree as if it was a new node.
  2. We want to do #1 WITHOUT MAKING A COPY OF ANYTHING. Why? Because a single node really is an entire sub-tree. We want to simply connect that subtree into a new place in the graph without duplicating all the pointers. Otherwise all of our efficiency is lost.
  3. The node that we're going to delete has the left and right subtrees as member data values. When the deleted node reaches the end of its wire, it will disappear from memory AND WILL TAKE ALL OF ITS MEMBER DATA WITH IT. So the first thing to do is to give the node about to be deleted some new values for its left and right nodes. We use the Leaf Node constants. The Force Copy primitives (those little dots for those of you without LV8.5) keep LabVIEW from trying to be too smart about optimizing copies. It guarantees that we have an independent value of a Leaf Node, one for left and one for right. We Swap those two in place of the current subtrees of the node being deleted. So our left and right subtrees are now disconnected from the main tree.
  4. Why is the Sequence Structure there? Because if we don't wire the output of the Bundle node, LabVIEW will look at the diagram and say, "Hey! This output is unwired. That means the node is a no-op." And it will optimize out the bundle. But in this diagram, even though we're never going to use the deleted node ever again, it isn't a no-op -- it is what severs the deleted node's connection to the left and right subtrees. (Note: After further questions, I posted a clarification of this point here.)
  5. Now we have two subtrees. We test to see which one is the deeper tree and swap the wire values so that the deeper one is on the top wire. That one gets to be put in place of the deleted node because it keeps the maximum number of nodes at the highest possible level of the tree. The shallower subtree is then inserted into the map as if it were a new node.
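For readers without LV8.5, the steps above can be approximated in a text language. Python stands in for G; Leaf, Branch, and the function names are stand-ins, and plain reference rebinding stands in for the Swap/Always Copy machinery:

```python
class Leaf:
    """Stand-in for the Leaf Node class: an empty subtree."""
    depth = 0

class Branch:
    """Stand-in for a Branch Node: key/value plus left and right subtrees."""
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value = key, value
        self.left = left if left is not None else Leaf()
        self.right = right if right is not None else Leaf()

    @property
    def depth(self):
        return 1 + max(self.left.depth, self.right.depth)

def insert_subtree(tree, sub):
    """Graft an entire subtree into the tree without duplicating its nodes."""
    if isinstance(tree, Leaf):
        return sub
    if isinstance(sub, Leaf):
        return tree
    if sub.key < tree.key:
        tree.left = insert_subtree(tree.left, sub)
    else:
        tree.right = insert_subtree(tree.right, sub)
    return tree

def delete_node(node):
    # Step 3: sever the doomed node's links to its subtrees (in G this is
    # where the Leaf constants, Force Copy dots, and Swaps come in).
    left, right = node.left, node.right
    node.left, node.right = Leaf(), Leaf()
    # Step 5: the deeper subtree takes the deleted node's place...
    if left.depth < right.depth:
        left, right = right, left
    # ...and the shallower one is re-inserted as if it were a new node.
    return insert_subtree(left, right)
```

In Python the reference rebinding is trivially safe; the whole point of the thread is that in dataflow G the same severing step needs the Swap and Force Copy primitives plus the "Do not delete" sequence to survive the optimizer.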

Now, let's consider the implications of this entire VI hierarchy:

There are points in this graph where we are modifying a value on one wire which results in a change in value on another parallel wire. This is a massive no-no for dataflow programming. And yet it works in this situation -- reliably, predictably, and quickly -- without any semaphore or mutex locking. It is the sort of thing that a human being can prove is safe by code inspection but no amount of automated work from the compiler can confirm that (at least with the compiler tech that I have at my disposal -- the NI AI is still holding a grudge against me for introducing it to Goedel's Incompleteness Theorem).

If you coded it wrong, you could crash LabVIEW. Should I then close this door in the compiler? I don't think so. First of all, you can't do any of this without the memory management primitives. Anyone trying to make this work without the Swap will find themselves coding regular LabVIEW and completely unable to get the map to work the way they'd like it to work. Anyone who uses the Swap primitives is definitely on the cutting edge of LabVIEW, because it is so non-intuitive compared with simply crossing the wires. And if you're advanced enough to be playing with these, then I think the full power of options should be open to you.

This hierarchy gives you a Map that is perfect dataflow outside the class --- if you fork the wire, the entire Map duplicates and is thus dataflow safe for all operations, and it can be freely stored in LV2-style globals, global VIs or single-element queues if you decide you need a reference to one. It hides all the "pointer arithmetic" inside its private member VIs, thus guaranteeing the coherency and consistency of the Map and keeping the dangerous dataflow-unsafe functions from being called directly under conditions that might not be safe.
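The outside-dataflow/inside-pointers split can be sketched in a text language. All names here are hypothetical, and a deep copy stands in for what LabVIEW does when you fork the wire:

```python
import copy

class Map:
    def __init__(self):
        self._root = None        # private tree; mutated in place internally

    def inserted(self, key, value):
        """Public API: like forking the wire, the whole Map duplicates,
        so callers only ever see independent, dataflow-safe values."""
        new = copy.deepcopy(self)
        new._insert_in_place(key, value)    # private, mutating
        return new

    def _insert_in_place(self, key, value):
        """Private 'pointer arithmetic': rebinds links, never copies nodes."""
        node = {"key": key, "value": value, "left": None, "right": None}
        if self._root is None:
            self._root = node
            return
        cur = self._root
        while True:
            side = "left" if key < cur["key"] else "right"
            if cur[side] is None:
                cur[side] = node
                return
            cur = cur[side]
```

The mutation is confined to a private method, so no caller can observe the non-dataflow behavior; that is the encapsulation argument being made here.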

I'm thinking an essay covering the analysis of this diagram and the related VI hierarchy should be added to the Certified LabVIEW Architect exam.

Now, my next challenge to everyone...

Do any of you recall me posting this request for a hack? The suggestion to use MoveBlock is an excellent one. You'll need that in order to complete the next challenge...

Write a VI Hierarchy to implement an arbitrary cyclic graph in pure G. To VIs not in the Graph.lvclass hierarchy, the graph objects should be perfect dataflow -- if you fork the wire, the entire graph replicates. The exposed API should allow you to create a connection between arbitrary nodes in the graph, including the creation of cycles. Inside the class, you may do many dataflow unsafe activities, far worse than the moment to moment crimes of the Map.

I have no idea if this is even possible. This is what we call "research." It's what I spend my spare time on. :-) Good night, and good luck.


QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

Now, my next challenge to everyone...

Do any of you recall me posting this request for a hack? The suggestion to use MoveBlock is an excellent one. You'll need that in order to complete the next challenge...

Write a VI Hierarchy to implement an arbitrary cyclic graph in pure G. To VIs not in the Graph.lvclass hierarchy, the graph objects should be perfect dataflow -- if you fork the wire, the entire graph replicates. The exposed API should allow you to create a connection between arbitrary nodes in the graph, including the creation of cycles. Inside the class, you may do many dataflow unsafe activities, far worse than the moment to moment crimes of the Map.

I have no idea if this is even possible. This is what we call "research." It's what I spend my spare time on. :-) Good night, and good luck.

I opened a new thread for this cyclic graph challenge so that in this thread we can concentrate on the binary tree modification operations. I think there is enough challenge in these for most of us for now.


I'm still trying to get a hold on this issue. I feel I have a lot of questions, but I'll need some time to formulate them properly. The questions are maybe not so much related to what AQ says in his answer as to what he doesn't say. This example assumes a lot of knowledge of how LabVIEW memory management works, and this information is not that well documented. Even though it may be a very clear issue for someone working in LV R&D, it is definitely not clear for me, as I'm only dependent on the public LabVIEW documentation. For example, the documentation of the flat sequence doesn't have any information on how it affects memory management.

To make the discussion on all these subjects easier, I suggest the following. I've reorganized the layout of the block diagram we are talking about a little. I've given names to the buffers on the block diagram to help follow the program flow and to give the buffers common names. Furthermore, I've named the Swap Values nodes uniquely. Use these names to refer to the different parts of the block diagram we're talking about.

http://forums.lavag.org/index.php?act=attach&type=post&id=6791

Branch Node.lvclass:Delete Key.vi, case where the particular node 'X' is deleted.

http://forums.lavag.org/index.php?act=attach&type=post&id=6792

Branch Node.lvclass:Delete Key.vi, one of the two symmetric positions on the block diagram where the VI calls itself recursively.

QUOTE(Tomi Maila @ Aug 29 2007, 05:03 AM)

For example the documentation of flat sequence doesn't have any information on how the flat sequence affects memory management.

This has nothing to do with the flat sequence. We just need a wire coming out of the Bundle node going *somewhere*. The easiest "somewhere" to drop is an empty flat sequence.

QUOTE(Aristos Queue @ Aug 29 2007, 01:35 PM)

This has nothing to do with the flat sequence. We just need a wire coming out of the Bundle node going *somewhere*. The easiest "somewhere" to drop is an empty flat sequence.

Well, the garbage collection algorithm is not documented.


QUOTE(Aristos Queue @ Aug 29 2007, 06:35 AM)

This has nothing to do with the flat sequence. We just need a wire coming out of the Bundle node going *somewhere*. The easiest "somewhere" to drop is an empty flat sequence.

Performance for a class like this would want to be optimized. Are there any gains to be had by terminating both wires into a single sequence structure? Did you use two sequence structures on purpose?


QUOTE(LV Punk @ Aug 29 2007, 06:42 AM)

Performance for a class like this would be want to be optimized. Are there any gains to be had by terminating both wires into a single sequence structure? Did you use two sequence structures on purpose?

Good question! The answer is "no."

I hadn't even paid attention to the second flat sequence... that's just my normal way of ignoring an error that I don't want to chain any further nor do I want to report. I could've just left the output unwired, but then the automatic error handling would've kicked in (yes, I could turn off automatic error handling on this VI, but it is my personal preference to leave that feature on ... I try to either handle the error or consciously ignore it as in this case. If I do neither, then the auto error handling flags my mistake. )

Notice that the To More Specific node could fail if the second class is a Leaf Node class. If that returns an error, then there's no point in doing the insert into the tree (we'd be inserting an empty node), so I use the error out to skip the execution of that next node. But beyond that point, there's no need to report an error. The Insert function itself cannot return an error of its own in this case, so we're not ignoring anything other than the type casting error.
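The error-gating described here can be sketched as follows. These are Python stand-ins with hypothetical names; to_more_specific mimics the behavior of To More Specific Class returning an error for a Leaf:

```python
class MapNode: pass
class LeafNode(MapNode): pass
class BranchNode(MapNode): pass

def to_more_specific(obj, cls):
    """Returns (value, error), like the node's output plus its error out."""
    if isinstance(obj, cls):
        return obj, None
    return None, TypeError(f"{type(obj).__name__} is not a {cls.__name__}")

def reinsert_if_branch(tree, node):
    branch, err = to_more_specific(node, BranchNode)
    if err is not None:
        return tree        # a Leaf means "nothing to insert"; drop the error
    tree.append(branch)    # the real Node Insert Pair call would go here
    return tree
```

The cast error is consumed rather than propagated, matching the "consciously ignore it" policy described above.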

Let me repeat: The first sequence structure has nothing to do with memory allocation or deallocation. It has to do with whether or not the Bundle node executes. If the output of the Bundle node is unwired, then the node will become a no-op, which in this case would be a problem.

QUOTE(Tomi)

Well, the garbage collection algorithm is not documented.

There is no garbage collection, so there's no algorithm to document. The data is simply left alone in the wire, and the next value to come down that wire will overwrite it. For details, take a look at the online help for the Request Deallocation primitive.


In school we were always told that there is no such thing as a stupid question. So let me start with the stupid ones.

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

I don't even completely understand what happens here. When I have a cluster constant from which I make two copies using the Always Copy primitive, and then do an Unbundle-Swap Values-Bundle sequence on the clusters, LabVIEW makes two buffer copies at the Swap Values node and two buffer copies at the Bundle nodes, one for each Bundle. If I do exactly the same thing with LVOOP objects, LabVIEW doesn't make any memory copies. If I use the in-place memory structure instead of the Unbundle-Bundle sequence, LabVIEW makes memory copies for the cluster when the in-place structure is called, but doesn't make copies for classes.

1. So my first very simple question is: what exactly do the Swap Values node and the in-place memory structure do? Why do they behave differently for different but apparently similar data types?

2. How can I know whether an operation really happens in-place or not, if it's such a crucial thing when manipulating hierarchical data structures?

3. The in-place memory structure is a rather intuitive thing when you look at the examples of its usage in the LabVIEW help. However, when you make a subVI call inside an in-place memory structure, things get more complicated. How does the subVI know that it needs to operate in-place? Is it normally passed a reference to the input buffer and a reference to the output buffer, and when a VI is called within an in-place structure, will these two references be the same? What if there are two or more subVIs inside the in-place structure? How does the knowledge of in-placeness propagate through the VIs?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

We want to do #1 WITHOUT MAKING A COPY OF ANYTHING. Why? Because a single node really is an entire sub-tree. We want to simply connect that subtree into a new place in the graph without duplicating all the pointers. Otherwise all of our efficiency is lost.

If we just don't use in-place memory structures and don't mind making copies, will we be able to use hierarchical data structures such as binary trees without any tricks and without risks of interfering parallel wires?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

Haven't we used the Unbundle node to tell LabVIEW that we are going to use two of the data members, Left and Right in this case? Was the problem that if we didn't use Swap Values nodes to write the dummy values l and r into the original object X(L,R)'s private data, LabVIEW would need to make data copies of the Left (L) and Right (R) in order to be able to remove the original object X(L,R) from memory? And then it wouldn't be an in-place operation any more and we would lose the efficiency. Exactly when would X(L,R) disappear from memory?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

So the first thing to do is to give the node about to be deleted some new values for its left and right nodes. We use the Leaf Node constants. The Force Copy primitives (those little small dots for those of you without LV8.5) keeps LabVIEW from trying to be too smart about optimizing copies.

How do you know when LabVIEW is going to optimize copies incorrectly, either in LV 8.5 or in any future version? I guess this is an undocumented issue... How would LabVIEW incorrectly optimize the code in this particular case?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

So what exactly has happened here? We have had four handles (pointers to pointers) for the LabVIEW objects LH, RH, lH and rH. The handles themselves remain at the same addresses, but the pointers the handles refer to get exchanged. So LH, which originally referred to a pointer Lp to L, now refers to another pointer lp to l. Am I right or wrong here?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

Why is the Sequence Structure there? Because if we don't wire the output of the Bundle node, LabVIEW will look at the diagram and say, "Hey! This output is unwired. That means the node is a no-op." And it will optimize out the bundle.

How do we know that in a future version of LabVIEW this trick will not be optimized away as a no-op?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

How does LabVIEW know how to write the output of a class in-place on top of the original data? Well, this is actually the same as the question I asked above: how does LabVIEW pass the in-placeness to a subVI and back from it in a call chain?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

There are points in this graph where we are modifying a value on one wire which results in a change in value on another parallel wire.

Just in case I don't yet see all these occasions, can you explicitly state which changes to which wires modify the data on which other wires?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

I completely agree :)

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

And yet it works in this situation -- reliably, predictably, and quickly -- without any semaphore or mutex locking.

I can't claim I could yet predict this behaviour.

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

Actually, to completely understand these things, I'd need more examples of what I shouldn't do and what actions are not allowed, rather than just the correct answers. I need to ask this one more time: if I don't try to do things in-place, will I still risk crashing LabVIEW or something similar when I modify hierarchical data structures?

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

Anyone who uses the Swap primitives is definitely on the cutting edge of LabVIEW, because it is so non-intuitive compared with simply crossing the wires. And if you're advanced enough to be playing with these, then I think the full power of options should be open to you.

I disagree. LabVIEW has always been a programming language that one can learn without reading books, simply by checking the documentation of each node. Now these particular techniques are not documented anywhere; the LabVIEW help doesn't say much of anything. Users are tempted to give these new memory management nodes a try. They may soon use them thinking they understand, but they don't. At least if they are as stupid as I am. ;) There is simply no way a user can learn these techniques without a very comprehensive introduction to them. This is also true for the most advanced users, like the ones here at LAVA.

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

What exactly do you mean by inside-the-class and outside-the-class? Do you mean that when you use Unbundle and Bundle nodes, which are only available inside the class, the pointer-like characteristics appear? Or do you mean that they appear when you use these new memory management functions? As far as I understand, forking a wire inside a class is no different from forking a wire outside a class.

QUOTE(Aristos Queue @ Aug 29 2007, 07:24 AM)

I'm thinking an essay covering the analysis of this diagram and the related VI hierarchy should be added to the Certified LabVIEW Architect exam.

Lucky you who got the exam before this addition :P


Quick reply for now on a single point...

QUOTE

What exactly do you mean inside-the-class and outside-the-class.

I mean that VIs outside the class see a coherent "Map" object that behaves dataflow-safe in all cases, along with a set of API functions that act on that map in a dataflow-safe way. But VIs inside the class make calls to member functions that adjust parts of that map in ways that wouldn't be dataflow safe under all conditions; however, the conditions are limited such that it works in the cases that are actually capable of executing.

QUOTE(Tomi Maila @ Aug 29 2007, 11:25 AM)

LabVIEW has always been a programming language that one can learn without reading books, simply by checking the documentation of each node. Now these particular techniques are not documented anywhere; the LabVIEW help doesn't say much of anything. Users are tempted to give these new memory management nodes a try. They may soon use them thinking they understand, but they don't. At least if they are as stupid as I am. ;) There is simply no way a user can learn these techniques without a very comprehensive introduction to them. This is also true for the most advanced users, like the ones here at LAVA.

If enough people disagree, I'll fix the compiler so this doesn't work at all. It's easy enough to detect -- just prevent any ancestor class from being used as a data member. It is a door I left open, consciously and deliberately, because of this use case.

Part of posting it here is to evaluate whether this should remain part of the language syntax. It would be the work of a few moments to render classes that do this broken. As for the documentation, you're reading it. None of this has been done with LabVIEW until this Map class that I've posted. Its feasibility was only theoretical until I finished it recently.

It's like a manhole cover. Sometimes it's useful to go into the sewer drain, but you can get covered in crap. City work crews could lock down the manholes, or they could just leave the lid unlocked and brave users can descend into the depths. This hole is more dangerous than most of the ones on LabVIEW Street because of the crash issue. You want an example of things that are normally reserved for password-protected VIs that users aren't allowed to look at? This is a prime example.

So, you tell me... should we close this hole in the language syntax?

As for the users being tempted to try the memory management primitives -- yeah, I wasn't really in favor of adding those to the language. They encourage you to care about things that you really should leave to the compiler 99.9% of the time. I know they're useful, but I really think they should've been a separate toolkit -- the license key for them could've been handed out to Certified LabVIEW Developers as a prize when they passed their exams. But it wasn't my call to make. They are in the palettes. And yet, there's not going to be any horn blowing to announce them. There never will be any comprehensive training with these prims -- they're quietly sitting off in one palette, to be pointed out to users by AEs when they reach some otherwise unresolvable issue. And they can be explored by zealous LV users.

Side note... after today, I may not be posting much for the next week or so. Other stuff going on. I'll try to get a more complete answer to Tomi's questions later today, but no guarantees.


QUOTE(Aristos Queue @ Aug 29 2007, 09:50 AM)

So, you tell me... should we close this hole in the language syntax?

It seems to me (and I still don't understand everything that's going on here, fully) that the feature which should be removed (if we have to make a choice) is the Swap Values function rather than the ability for a child object to have a parent object in its private data. It seems that the "problem" is that the behavior of swapping values in memory is a function of how LabVIEW stores data in memory and this is something that we should generally not have to care about in the domain of dataflow.


QUOTE(Jim Kring @ Aug 29 2007, 07:43 PM)

It seems to me (and I still don't understand everything that's going on here, fully) that the feature which should be removed (if we have to make a choice) is the Swap Values function rather than the ability for a child object to have a parent object in its private data. It seems that the "problem" is that the behavior of swapping values in memory is a function of how LabVIEW stores data in memory and this is something that we should generally not have to care about in the domain of dataflow.

I have to say that this whole thread just makes my head hurt! The Swap Values primitive reminds me of the fun I had when I wrote my own CIN to manipulate the string handles in a string array -- all of a sudden I noticed backwards propagation of data :unsure:!

Now I just need the new LV 8.5 DVDs to make their way from the darker recesses of my university to me...
