Aristos Queue

Posts posted by Aristos Queue

  1. QUOTE(robijn @ Sep 7 2007, 06:14 AM)

    It would be nice to know more about when LabVIEW cleans up the data of objects. Maybe AQ can shed some light on this?

    LV doesn't clean up objects ever. Just as it does not clean up arrays or strings. The terminals are allocated when the VI loads. The terminal values may be stomped on when the VI runs. When a VI closes, the terminal space is deallocated. See the online documentation for the Request Deallocation primitive (in the palettes since LV7.1 I think) for details.

  2. QUOTE(Michael_Aivaliotis @ Sep 7 2007, 05:12 PM)

    I know this is not a bug and I'm not talking about child classes. I just want to create a VI and when I save it in the defined AP folder the class will include it. Is that so bad?

    "Is that so bad?" Yes. You're asking for the logically impossible.

    A: "Everything in this directory is going to be owned by this class."

    B: "Um, but you put the .lvclass file itself in that directory."

    A: "Oh, well, I mean that the .lvclass will own everything in the directory *except* the stuff that it can't possibly logically own."

    B: "Oh, that's better. I'm sure our documentation folks will love explaining that one, but with enough psycodelic drugs, I'm sure they can do it. ... But wait, what about the files that you copied into that directory by hand when LV wasn't even running?"

    A: "Well, the class should own those too."

    B: "You mean when you open the VI, LabVIEW should be smart enough to know about all possible libraries in the ENTIRE harddrive to know which ones are monitoring which directories so that when you say 'File>>Open' on one of those VIs, LabVIEW just mystically knows to open the library as well and edit it to own the VI?"

    A: "Well, yeah."

    B: "Fine. We'll just load ALL THE LIBRARIES ON YOUR DISK into memory every time you launch LV. That way we'll know which libraries are monitoring what."

    A: "Oh. Uh. Um... "

    I could go on. I fought hard against autopopulating folders. They are a false hope. But users demanded them. And so we have them. And I even got involved with the team that worked on them to try to make them work with libraries (and thus, by extension, classes). But there are so many logical contradictions and impossible relationship resolutions that I just can't believe it'll ever work out.

  3. QUOTE(Jim Kring @ Sep 10 2007, 05:19 PM)

    Maybe I don't understand the intended behavior of the "Save All (this Class)" feature (which is available as a right-click option on an LVClass in the Project Explorer). It appears to sometimes save VIs that are outside of the class (in my case, it saved some VIs in a child class). I assumed (perhaps incorrectly) that this feature would only save VIs that are members of the class (and no other VIs -- not even child classes). Can anyone shed some light on this feature? It doesn't seem to be described anywhere in the LabVIEW help.

    It saves all the subVIs of any VI that is in the class too, if I recall correctly. So if you've got a VI in one class that calls VIs in another class, those other VIs would get saved. I didn't put that function together, but that's my memory of how it works.

  4. QUOTE(Harish @ Sep 5 2007, 06:55 AM)

    Can someone help me with some examples of serial communication classes using GOOP?

    I am new to GOOP, and want to perform some basic read / write functions using classes and objects.

    GOOP or LVOOP?

    If you're wanting to transmit the object from one LV to another LV:

    If you're using LVOOP -- the .lvclass files that are natively built into LV in version 8.2 and later -- you can just send the objects to the other system using any of the LV data communications protocols. You could, for example, use a Call By Reference node to pass the data by terminal. Or flatten to a string and use TCP/IP prims to send the string and then unflatten it on the other side. You just have to make sure that the .lvclass file itself is already loaded into memory in the remote system. An example that might be helpful is

    <labview>\examples\lvoop\Read Write Class Data To File\Read Write Class Data To File.lvproj

    Although this shows file IO and not communication, it shows how the object data can be treated as any other LV data.

    If you're using one of the GOOP toolkits -- Endevo's GOOP Toolkit, OpenGOOP or dqGOOP -- you'll have to talk to the authors of those tools for information on how to send an object from one system to another. I know that the objects themselves exist only locally, and I don't know what communication systems they've built.

    If you're wanting to just create serial output of the object for sending to some device:

    In both LVOOP and GOOP, just create a VI that takes the object in, produces a string output, and generates whatever format string you need for the device.
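
    Since this forum can't show G, here is a rough C++ sketch of that idea -- every type and name below (Instrument, Multimeter, toCommandString, the RANGE format) is invented for illustration. The point is just a polymorphic "object in, formatted string out" method:

        #include <cstdio>
        #include <string>

        // Illustrative only -- these classes stand in for your LVOOP/GOOP objects.
        class Instrument {
        public:
            virtual ~Instrument() = default;
            // The analogue of an "object in, string out" member VI.
            virtual std::string toCommandString() const = 0;
        };

        class Multimeter : public Instrument {
            double range = 10.0;
        public:
            std::string toCommandString() const override {
                char buf[32];
                std::snprintf(buf, sizeof buf, "RANGE %.2f\n", range);
                return std::string(buf);  // whatever format the serial device expects
            }
        };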

    Helpful?

  5. I'd like to suggest a change to the factory function. The factory function "Image.lvclass:Create Image.vi" is the piece that takes the input image data and outputs the correct image class.

    Currently you have a case structure that splits out "24, 32" and calls Create RGB Image. And you have "1, 4, 8" which calls Create Indexed Image. In each of those VIs you have the same case structure test, splitting deeper.

    First of all, we're testing the same value twice; since we've already encoded the bit values into the top level Create Image.vi, let's just go all the way and skip the middle calls to Create RGB Image.vi and Create Indexed Image.vi.

    But we could also do this:

    On Image.lvclass, define a dynamic dispatch VI named "Get Bit Encoding.vi". This VI would take an instance of the class as input and output a constant integer that is the bit depth of that class. Then on each of the leaf-level child classes (not the middle-layer classes) define override VIs that return 1, 4, 8, 24 or 32 respectively. Now build an array containing one of each of the 5 types of leaf-level image classes and make that array a constant. Now you can write a VI that takes image data as input, loops over that array, calls Get Bit Encoding.vi and says "if the bit encoding of this class is the same as the bit encoding in the image data, then create a new instance of that class from the image data."

    Now, in this version the only real advantage is that you don't have a case structure testing the values -- you're instead asking each object "Could you encode this image data? If you can, then I'll make a new instance of you." That's much cleaner. Further, if the array, instead of being a constant, is a stored value (say, an LV2-style global), you could dynamically load new classes into the system and add them to the array at run time, and the code would still behave the same. Going to this length is definitely not worth your time on this project, but it is an idea that may be useful in the future.
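
    For those who think better in text than in G, here is a rough C++ sketch of that prototype-array factory. All class and function names are invented; the virtual calls stand in for LabVIEW's dynamic dispatch:

        #include <memory>
        #include <vector>

        struct ImageData { int bitDepth; /* pixel data elided */ };

        class Image {
        public:
            virtual ~Image() = default;
            virtual int bitEncoding() const = 0;              // "Get Bit Encoding.vi"
            virtual std::unique_ptr<Image> createFrom(const ImageData& data) const = 0;
        };

        class Rgb24Image : public Image {
        public:
            int bitEncoding() const override { return 24; }
            std::unique_ptr<Image> createFrom(const ImageData&) const override {
                return std::make_unique<Rgb24Image>();        // initialize from data here
            }
        };
        // Rgb32Image, Indexed1Image, Indexed4Image, Indexed8Image follow the same pattern.

        std::unique_ptr<Image> createImage(
                const ImageData& data,
                const std::vector<std::unique_ptr<Image>>& prototypes) {
            for (const auto& proto : prototypes)              // ask each leaf class...
                if (proto->bitEncoding() == data.bitDepth)    // ..."can you encode this?"
                    return proto->createFrom(data);
            return nullptr;                                   // no class handles this depth
        }

    Swapping the constant prototype array for a mutable registry is what gives the run-time extensibility described above.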

  6. A) Please file a bug report on ni.com/devforums if the documentation is unclear for this feature. The online help is at

    National%20Instruments/LabVIEW%208.5/help/html/lvhowto.chm/control_to_class.html

    B) Here are some details of what is going on...

    1) You popup on X.ctl and select Convert Contents of Control to Class

    2) A new class is created named X.lvclass which contains X.ctl, the private data control of this class. This is NOT the same X.ctl as the one on which you popped up.

    3) The private data control's cluster is populated with the data from the original X.ctl.

    4) The original X.ctl is edited to have an instance of X.lvclass on its front panel, replacing whatever else was there already.

    The original X.ctl is not deleted so that VIs that look for it can still find it. Whenever you open a VI that uses X.ctl, that VI updates to match X.ctl (assuming X.ctl is a typedef). Thus any VI that used the old X.ctl type now uses the new X.ctl type, which is X.lvclass. Any VI that just passed the X.ctl value around to subVIs is fine. Any VI that tried to actually operate on the value of X.ctl is broken. To fix those VIs you need to move them inside the new X.lvclass so that they have the privilege of being able to access the data OR you need to create accessor VIs on X.lvclass that these broken VIs can use.
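
    A rough C++ analogue of that last point, with hypothetical names throughout (class X, getValue/setValue, readScaled are all invented): the fields that used to be a public cluster are now private, so outside code must go through accessors.

        #include <string>

        class X {
            std::string name;      // formerly a bare cluster field, now private data
            double value = 0.0;
        public:
            // The accessor VIs in LabVIEW terms: Read/Write member VIs on X.lvclass.
            double getValue() const { return value; }
            void   setValue(double v) { value = v; }
        };

        // A caller that used to unbundle the cluster directly is "broken" until
        // it is rewritten to call the accessors instead:
        double readScaled(const X& x, double scale) {
            return x.getValue() * scale;   // was: unbundle value; multiply
        }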

  7. QUOTE(Michael_Aivaliotis @ Sep 1 2007, 01:31 AM)

    You use the alignment grid?

    I never use it so I can't tell ya.

    I don't even show the lines.

    Grid alignment, autowiring, autorouting, autotool, Express VI configuration... never let the environment do for you what you can spend hours doing yourself, right, Michael? :shifty:

    Seriously, though... it never ceases to amaze me how large the difference of opinion about these sorts of features is between long-time users and new users of LabVIEW. To relatively new users, these are Heaven-sent productivity improvements, and they find going back to older versions without them cumbersome. Older users tend to view these features with deep suspicion, even disgust that R&D spent time working on them, and generally see them as getting in the way of getting work done.

    I wonder if this is true of other programs to this extent? Certainly I know that autocorrection in MS Word -- which works fabulously well -- gets a similar reception, old vs. new.

  8. QUOTE(Hacti @ Aug 31 2007, 05:35 AM)

    My question is: Can I change the value of a shared variable programmatically (with property nodes, without hardcoding the variables) some other way than using DataSocket?

    If you have a finite set of shared variables, you could create a separate subVI for each variable that sets its value and then use VI Server to select which subVI to call.
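
    Shared variables and VI Server are LabVIEW constructs, but the dispatch idea itself can be sketched in C++ (hypothetical names throughout): one setter per variable, selected by name at run time, just as VI Server would select the subVI.

        #include <functional>
        #include <map>
        #include <string>

        // One setter per variable; the map plays the role VI Server plays in
        // the suggestion above -- picking the right "subVI" by name at run time.
        std::map<std::string, std::function<void(double)>> gSetters = {
            {"Pressure",    [](double) { /* write the Pressure shared variable */ }},
            {"Temperature", [](double) { /* write the Temperature shared variable */ }},
        };

        void setByName(const std::string& name, double value) {
            auto it = gSetters.find(name);
            if (it != gSetters.end())
                it->second(value);   // invoke the matching setter
        }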

  9. QUOTE(robijn @ Aug 29 2007, 10:21 AM)

    Wrong lesson to take away from this. The actual lesson is, "Wow, AQ coded a Map class in just three hours that is clean, efficient AND dataflow-safe without introducing the semaphore/mutex locking mechanisms that reference classes would've required."

    QUOTE(orko @ Aug 29 2007, 02:49 PM)

    QUOTE(Gavin Burnell @ Aug 29 2007, 02:38 PM)

    When you live on the cutting edge, you don't get documentation. ;-)

    And now, let's see if I can get a few of Tomi's questions answered:

    Question 1: Can I get more details about the "simple" Swap in the insert case?

    Tomi asked for further details about the "simple" case where we swap data into the Map during an insert... He gave some examples of differences in behavior that he sees from the Show Buffer Allocations tool when using the Swap prim with different data types. I'm not sure what he saw exactly, but here's a more detailed explanation of my code:

    Suppose you have a cluster containing a string. You can unbundle, append to the string and bundle back into the cluster. LV doesn't make a copy of the string because it knows that the result is going back to the same place as the original data. Thus the unbundle terminal is said to be "inplace" to the bundle terminal.

    But, if you unbundled the string, appended to it and then sent that string somewhere else, without putting it back in the bundle, LV would make a copy of the string. The data isn't coming back to the cluster, so it cannot share memory with the original cluster because the original cluster needs to continue its execution with the unmodified string value.

    Now, consider those two concepts with respect to the Map class. The Map has a piece of data in location P. The new node being inserted has a piece of data at location Q. There are downstream terminals X and Y:

    http://forums.lavag.org/index.php?act=attach&type=post&id=6806

    Position P would like to be inplace to position X. But if I have new data coming into the cluster AND I need to have the original cluster available for writing, then LV has to make a copy of the data at P. Further, the data that is in Q is being moved into X, but those two memory locations cannot be made in place to each other because they are part of different allocated clusters in memory. So we also make a copy of Q. This is the case of the Unbundled data not going back to the original Bundle node after being modified.

    Using the Swap primitive avoids both copies. Instead of copying data out of the cluster and sending it somewhere else, we're now modifying the value in the cluster. How are we modifying the value? We're Unbundling data, then exchanging the value in that memory location with the value at another memory location, and then the wire is going back into the Bundle node. Thus LV knows not to make a copy of the data at the Unbundle node.
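
    As a textual analogy -- C++ here, since the Swap prim has no text form, and the struct is merely a stand-in for the cluster -- compare copying a value out of a structure with swapping one in:

        #include <string>
        #include <utility>

        struct Cluster { std::string data; };

        // Sending the value somewhere else while the cluster lives on: the
        // string must be copied, exactly as in the unbundle-without-rebundle case.
        std::string copyOut(const Cluster& c) {
            return c.data;                 // full copy of the string's memory
        }

        // Exchanging the value with one from outside: both buffers are reused and
        // no character data is copied -- the analogue of Unbundle/Swap/Bundle.
        void swapIn(Cluster& c, std::string& incoming) {
            std::swap(c.data, incoming);   // internal pointers exchanged only
        }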

    QUOTE

    1. So my first very simple question is: what exactly do the Switch Values node and the In-Place Memory structure do? Why do they behave differently for different but apparently similar data types?

    I have no idea what you're seeing. The class and the cluster should behave identically in the cases you're describing. Please post a VI.

    QUOTE

    2. How can I know whether an operation really happens in-place or not, if it's such a crucial thing when manipulating hierarchical data structures?

    The Show Buffer Allocations tool. And it isn't crucial, just more efficient. Yes, there is a difference. The Insert function will work without using the Swap, just with less efficiency. Nothing in LV was ever designed to do what I've done with Map. It is a coincidence arising from a thousand different design decisions that this is even viable.

    QUOTE

    3. The In-Place Memory structure is a rather intuitive thing when you look at the examples of its usage in the LabVIEW help. However, when you make a subVI call inside an In-Place Memory structure, things get more complicated. How does the subVI know that it needs to operate in-place?

    The subVI doesn't know. The subVI is compiled with its own inplaceness. If it doesn't preserve inplaceness across outputs, then the caller will copy the output back into the original buffer. There aren't any references.

    QUOTE

    If we just don't use In-Place Memory structures and don't mind making copies, will we be able to use hierarchical data structures such as a binary tree without any tricks and without risks of interfering parallel wires?

    Honestly, I'm not sure. But I believe that the natural dataflow expression of LV will kick in and make creating the tree impossible.

    QUOTE

    QUOTE
    The node that we're going to delete has the left and right subtrees as member data values. When the deleted node reaches the end of its wire, it will disappear from memory AND WILL TAKE ALL OF ITS MEMBER DATA WITH IT.

    Haven't we used the Unbundle node to tell LabVIEW that we are going to use two of the data members, Left and Right in this case? Was the problem that if we didn't use Swap Values nodes to write dummy values l and r into the original object X(L,R) private data, LabVIEW would need to make copies of the Left (L) and Right (R) in order to be able to remove the original object X(L,R) from memory? And it wouldn't be an in-place operation any more and we would lose the efficiency.
    You got it. If we don't use the Swap primitive, then LV will kindly make a copy of the left and right subtrees for us so that we have independent copies separate from the node being deleted.

    QUOTE

    Exactly when would X(L,R) disappear from memory?

    Whenever the wire that the data is sitting in next executes and new data on the wire stomps on the old data.

    QUOTE

    How do you know when LabVIEW is going to optimize copies incorrectly, either in LV 8.5 or any future version? I guess this is an undocumented issue... How would LabVIEW incorrectly optimize the code in this particular case?

    Let me be very clear here: If I do not wire the output of the Unbundle node, LV is correct to optimize out the Bundle node. Every rule of dataflow says that the execution of that bundle node should not be necessary -- there's nothing downstream from the Bundle node to need the data. Remember, we're violating dataflow here -- we're saying that we need LV to change the value on a wire even though that wire isn't going anywhere because the memory is actually shared with items that are going on down the wire.

    How will this keep working in future LV versions? Don't know. In fact, I'm pretty sure that LV's compiler will eventually be smart enough to optimize this out unless I deliberately put something in to keep this working. I'm not entirely sure that it should keep working. By many arguments, the code I've written for the Map is a bug that should be fixed, probably in 8.5.1. This bit of magick arises as the confluence of multiple features, and I've posted it here to see what everyone thinks. I see a certain elegance to it, but this may be the same romance a moth feels for a flame.

    QUOTE

    So what exactly has happened here? We have had four handles (pointer to pointer) for the LabVIEW objects: LH, RH, lH and rH. The handles themselves remain at the same addresses, but the pointers the handles refer to get exchanged. So LH, which originally referred to a pointer Lp to L, now refers to another pointer lp to l. Am I right or wrong here?

    Completely correct.

    QUOTE

    How do we know that in a future version of LabVIEW this trick will not be optimized as a no-op?

    See previous comments.

    QUOTE

    Actually, to completely understand these things, I'd need more examples of what I shouldn't do and what actions are not allowed, rather than just the correct answers. I need to ask this one more time: if I don't try to do things in-place, will I still risk crashing LabVIEW or something similar when I modify hierarchical data structures?

    I honestly am not sure. I tried to get people -- both inside NI and outside -- to work on this problem before release. And I put it in the white paper after release. Everyone got this blank look on their faces and responded with comments like, "I'm not really sure what you're even talking about. I think I'd need to see a working situation to be able to help." So, now everyone can see the scenario that I'm talking about.

    If we are going to see a problem, it will definitely show up in the graph case. If the graph can be built -- which I'm not sure is possible -- the bug, if it exists, will have to come to light.

  10. Quick reply for now on a single point...

    QUOTE

    What exactly do you mean by inside-the-class and outside-the-class?

    I mean that VIs outside the class see a coherent "Map" object that behaves in a dataflow-safe way in all cases, along with a set of API functions that act on that map in a dataflow-safe way. But inside the class, there are calls to member functions that adjust parts of that map in ways that wouldn't be dataflow safe under all conditions -- but the conditions are limited such that it works in the cases that are actually capable of executing.

    QUOTE(Tomi Maila @ Aug 29 2007, 11:25 AM)

    LabVIEW has always been a programming language that one can learn without reading books, simply by checking the documentation of each node. Now these particular techniques are not documented anywhere. The LabVIEW help doesn't say much of anything. Users are tempted to give these new memory management nodes a try. They may soon use them thinking they understand, but they don't. At least if they are as stupid as I am.
    ;)
    There is simply no way a user can learn these techniques if there is not a very comprehensive introduction to them. This is also true for the most advanced users, like the ones here at LAVA.

    If enough people disagree, I'll fix the compiler so this doesn't work at all. It's easy enough to detect -- just prevent a class from using any ancestor class as a data member. It is a door I left open, consciously and deliberately, because of this use case.

    Part of posting it here is to evaluate whether this should remain part of the language syntax. It would be the work of a few moments to render classes that do this broken. As for the documentation, you're reading it. None of this has been done with LabVIEW until this Map class that I've posted. Its feasibility was only theoretical until I finished it recently.

    It's like a manhole cover. Sometimes it's useful to go into the sewer drain, but you can get covered in crap. City work crews could lock down the manholes, or they could just leave the lid unlocked and brave users can descend into the depths. This hole is more dangerous than most of the ones on LabVIEW Street because of the crash issue. You want an example of things that are normally reserved for password-protected VIs that users aren't allowed to look at? This is a prime example.

    So, you tell me... should we close this hole in the language syntax?

    As for the users being tempted to try the memory management primitives -- yeah, I wasn't really in favor of adding those to the language. They encourage you to care about things that you really should leave to the compiler 99.9% of the time. I know they're useful, but I really think they should've been a separate toolkit -- the license key for them could've been handed out to Certified LabVIEW Developers as a prize when they passed their exams. But it wasn't my call to make. They are in the palettes. And yet, there's not going to be any horn blowing to announce them. There never will be any comprehensive training with these prims -- they're quietly sitting off in one palette, to be pointed out to users by AEs when they reach some otherwise unresolvable issue. And they can be explored by zealous LV users.

    Side note... after today, I may not be posting much for the next week or so. Other stuff going on. I'll try to get a more complete answer to Tomi's questions later today, but no guarantees.

  11. QUOTE(chrisdavis @ Aug 29 2007, 08:46 AM)

    It really is worth it. As a side benefit, everything that you source code control can usually be backed up with calls to the source code control program. For SVN you can use the concept of a "hot copy" to back up all of the data in your source code control repository to another path.

    Query: Has NI Week ever had a presentation on using source code control with LabVIEW? It seems to come up as a topic of conversation regularly, and perhaps NI should have a presentation in the library for sales folks to be able to present on this topic. Would that be a good topic to suggest for next year?

  12. QUOTE(LV Punk @ Aug 29 2007, 06:42 AM)

    Performance for a class like this would want to be optimized. Are there any gains to be had by terminating both wires in a single sequence structure? Did you use two sequence structures on purpose?

    Good question! The answer is "no."

    I hadn't even paid attention to the second flat sequence... that's just my normal way of ignoring an error that I don't want to chain any further or report. I could've just left the output unwired, but then the automatic error handling would've kicked in. (Yes, I could turn off automatic error handling on this VI, but my personal preference is to leave that feature on... I try to either handle the error or consciously ignore it, as in this case. If I do neither, then the auto error handling flags my mistake.)

    Notice that the To More Specific node could fail if the second class is a Leaf Node class. If that returns an error, then there's no point in doing the insert into the tree (we'd be inserting an empty node), so I use the error out to skip the execution of that next node. But beyond that point, there's no need to report an error. The Insert function itself cannot return an error of its own in this case, so we're not ignoring anything other than the type casting error.

    Let me repeat: The first sequence structure has nothing to do with memory allocation or deallocation. It has to do with whether or not the Bundle node executes. If the output of the Bundle node is unwired, then the node will become a no-op, which in this case would be a problem.

    QUOTE(Tomi)

    Well, the garbage collection algorithm is not documented.

    There is no garbage collection, so there's no algorithm to document. The data is simply left alone in the wire, and the next value to come down that wire will overwrite it. For details, take a look at the online help for the Request Deallocation primitive.

  13. QUOTE(NormKirchner @ Aug 28 2007, 08:54 PM)

    And since we all saw what happened the last time Norm got excited, I'm going to go ahead and post this now before it gets worse. ;-)

    Let's start with something easy... the case when we are setting a key-data pair into the map and we discover that the value is already in the map. Take a look at the block diagram of Map.lvlib:Branch Node.lvclass:Node Insert Pair.vi

    http://forums.lavag.org/index.php?act=attach&type=post&id=6788

    Here we are taking the existing data and swapping it with the new value. What does the Swap primitive buy us? Why didn't I just cross those wires? In this case, I could have, but by using the Swap prim, I'm avoiding two copies of data -- one copy out of the Map structure and one copy into the map structure. No copying needed at all. We're playing with pointers in memory here, gentlemen and ladies. Directly touching memory in a way that LabVIEW has never done before. I point out this simple case since it'll help understand the far trickier case of deleting a node from the tree...

    For those of you without LV8.5, here is the block diagram for Map.lvlib:Branch.lvclass:Node Delete Key.vi which caused such consternation in previous posts...

    http://forums.lavag.org/index.php?act=attach&type=post&id=6789

    What in Hades is going on here?

    1. We are implementing a binary tree. That means that when we delete a node, we now have a left node and a right node. One of those gets put into the tree to take the position of the deleted node. The other must be inserted into the tree as if it were a new node.
    2. We want to do #1 WITHOUT MAKING A COPY OF ANYTHING. Why? Because a single node really is an entire sub-tree. We want to simply connect that subtree into a new place in the graph without duplicating all the pointers. Otherwise all of our efficiency is lost.
    3. The node that we're going to delete has the left and right subtrees as member data values. When the deleted node reaches the end of its wire, it will disappear from memory AND WILL TAKE ALL OF ITS MEMBER DATA WITH IT. So the first thing to do is to give the node about to be deleted some new values for its left and right nodes. We use the Leaf Node constants. The Force Copy primitives (those little dots, for those of you without LV8.5) keep LabVIEW from trying to be too smart about optimizing copies. They guarantee that we have an independent value of a Leaf Node, one for left and one for right. We Swap those two in place of the current subtrees of the node being deleted. So our left and right subtrees are now disconnected from the main tree.
    4. Why is the Sequence Structure there? Because if we don't wire the output of the Bundle node, LabVIEW will look at the diagram and say, "Hey! This output is unwired. That means the node is a no-op." And it will optimize out the bundle. But in this diagram, even though we're never going to use the deleted node ever again, it isn't a no-op -- it is what severs the deleted node's connection to the left and right subtrees. (Note: After further questions, I posted a clarification of this point here.)
    5. Now we have two subtrees. We test to see which one is the deeper tree and swap the wire values so that the deeper one is on the top wire. That one gets to be put in place of the deleted node because it keeps the maximum number of nodes at the highest possible level of the tree. The shallower subtree is then inserted into the map as if it were a new node.
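
    For those who want the five steps above in textual form, here is a rough C++ sketch. The types are hypothetical; unique_ptr stands in for the node's ownership of its subtree data, and std::exchange plays the role of Swapping a Leaf Node constant into the doomed node:

        #include <algorithm>
        #include <memory>
        #include <utility>

        struct Node {
            int key;
            std::unique_ptr<Node> left, right;  // owned subtrees; null plays the Leaf Node role
        };

        int depth(const Node* n) {
            return n ? 1 + std::max(depth(n->left.get()), depth(n->right.get())) : 0;
        }

        // Step 5, second half: reinsert a detached subtree as if it were a new node.
        void insertSubtree(std::unique_ptr<Node>& root, std::unique_ptr<Node> sub) {
            if (!sub) return;
            if (!root) { root = std::move(sub); return; }
            auto& branch = sub->key < root->key ? root->left : root->right;
            insertSubtree(branch, std::move(sub));
        }

        // Delete the node held in `slot` without copying either subtree.
        void deleteNode(std::unique_ptr<Node>& slot) {
            if (!slot) return;
            // Step 3: swap "leaf" values (nulls) in for the subtrees, so the
            // dying node cannot take its member data down with it.
            std::unique_ptr<Node> l = std::exchange(slot->left, nullptr);
            std::unique_ptr<Node> r = std::exchange(slot->right, nullptr);
            slot.reset();                               // the deleted node dies alone
            // Step 5: the deeper subtree takes the vacated position; the
            // shallower one is reinserted as if it were new.
            if (depth(l.get()) < depth(r.get())) std::swap(l, r);
            slot = std::move(l);
            insertSubtree(slot, std::move(r));
        }

    (The simple reinsert works here because all keys in the shallower subtree lie on one side of every node it passes on the way down -- the same sibling-subtree invariant the Map relies on.)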

    Now, let's consider the implications of this entire VI hierarchy:

    There are points in this graph where we are modifying a value on one wire which results in a change in value on another parallel wire. This is a massive no-no for dataflow programming. And yet it works in this situation -- reliably, predictably, and quickly -- without any semaphore or mutex locking. It is the sort of thing that a human being can prove safe by code inspection, but that no amount of automated work from the compiler can confirm (at least with the compiler tech that I have at my disposal -- the NI AI is still holding a grudge against me for introducing it to Goedel's Incompleteness Theorem).

    If you coded it wrong, you could crash LabVIEW. Should I then close this door in the compiler? I don't think so. First of all, you can't do any of this without the memory management primitives. Anyone trying to make this work without the Swap will find themselves coding regular LabVIEW and completely unable to get the map to work the way they'd like it to work. Anyone who uses the Swap primitives is definitely on the cutting edge of LabVIEW, because it is so non-intuitive compared with simply crossing the wires. And if you're advanced enough to be playing with these, then I think the full power of options should be open to you.

    This hierarchy gives you a Map that is perfect dataflow outside the class -- if you fork the wire, the entire Map duplicates and is thus dataflow safe for all operations, and it can be freely stored in LV2-style globals, global VIs or single-element queues if you decide you need a reference to one. It hides all the "pointer arithmetic" inside its private member VIs, thus guaranteeing the coherency and consistency of the Map and keeping the dangerous dataflow-unsafe functions from being called directly under conditions that might not be safe.

    I'm thinking an essay covering the analysis of this diagram and the related VI hierarchy should be added to the Certified LabVIEW Architect exam.

    Now, my next challenge to everyone...

    Do any of you recall me posting this request for a hack? The suggestion to use MoveBlock is an excellent one. You'll need that in order to complete the next challenge...

    Write a VI Hierarchy to implement an arbitrary cyclic graph in pure G. To VIs not in the Graph.lvclass hierarchy, the graph objects should be perfect dataflow -- if you fork the wire, the entire graph replicates. The exposed API should allow you to create a connection between arbitrary nodes in the graph, including the creation of cycles. Inside the class, you may do many dataflow unsafe activities, far worse than the moment to moment crimes of the Map.

    I have no idea if this is even possible. This is what we call "research." It's what I spend my spare time on. :-) Good night, and good luck.

  14. QUOTE(Gary Rubin @ Aug 28 2007, 09:19 AM)

    I hate ctrl-t. I especially hate that it's next to the ctrl-R, so it inadvertently gets hit sometimes and I have to resize my windows again.

    While I'm ranting about my fat-fingering, I wish ctrl-E and ctrl-W weren't next to each other...

    I know that the thread hijacking objection has been raised, but a good answer was given on the main topic, so I just want to note this:

    If you accidentally hit ctrl+t and tile your windows, you can use ctrl+z to undo that change. It won't undo the block diagram, but your carefully sized front panel window will be restored. Not everyone notices that sizing the panel is an undoable operation, so I thought I'd mention that.

    I'm going to come down pretty negative on this idea. I hope it is apparent that I'm objecting on technical grounds and not personal ones. Yes, LabVOOP is my baby, and it can be easy for me to get defensive about design decisions. I've taken the time to really look at jacedom's suggestion, and to make sure my mind is clear of any prideful prejudices. I think I can clearly reject this idea on technical merit.

    Why am I prefacing this? Because this discussion is over text, which doesn't convey conversation well at all. So I'm going out of my way to say that I appreciate users who contemplate and suggest alternate implementations. I really do. But I'm going to be very direct in my objections to this idea. I want everyone to understand that it is the idea that I'm knocking, not the person. Why do I say all this? Because I like being able to provide feedback on customer suggestions, to give the background details on why LV won't be implementing it, but I have found that not all customers take it well, and I've learned to just say "interesting idea, we'll think about it" whether the idea is good or bad so as to not give any feedback whatsoever. But jacedom's is a worthy suggestion that I'd like to respond to in full, because of the unique tack it takes for language design. I have no interest in starting a flame war, and I hope my comments are taken in that light.

    With all that in mind...

    QUOTE(Jacemdom @ Jun 15 2007, 09:34 AM)

    Even if you have such a central architect, you can't expect the central architect to do all the implementing. As various members of a team create various branches of the hierarchy, you have them integrating their changes into a single central VI. Even with the graphical merge of LV8.5, that's still creating a lot of contention for that VI. As time goes by, you may have a large enough hierarchy that no single person is even capable of describing all the possible classes -- I know of many hierarchies like this in C++ code, and one getting close to this already in G code. There would be many hierarchies that would easily expand beyond a single screen in their complexity.

    Also, what about when there is no "team" at all? How does a LV user extend a class developed by NI? How do multiple OpenG developers collaborate to extend each other's work? A design that requires such a central repository for the hierarchy necessarily limits hierarchy extension to those who can edit the hierarchy file. If Albert develops a class hierarchy and gives it to both Bob and Charlie, then under your scheme, if Bob and Charlie both develop a new class (by editing the central cluster), they cannot deploy on the same machine, since their clusters would be unable to be in memory at the same time.

    Further (and this one is critical), there would be no way for the parent class to be redesigned without impact on the child classes. The parent needs to be an independently replaceable unit. That allows for the complete overhaul of the private portions of that class without disturbing any of the child classes. Indeed, with the current LV scheme, the parent can be replaced even in a run-time engine without requiring the children to recompile at all.

    Although there are merits to your single cluster concept as a deployment step for optimization, as a development environment I just can't see it as viable at all.

    QUOTE

    All of the above is a significant organization of *people* required to make the software architecture viable. That creates significant barriers to usage.

    QUOTE

    Within a single organization, a Central Architect system can be very viable, but there are many other styles of programming team that are just as effective. You work in the Cathedral, but others work in the Bazaar. I do not see how the single cluster concept makes an all-powerful architect's job easier; I do see how it makes the communal developers' work nigh on impossible.

    QUOTE

    As the proposed approach clearly separates data from functions, and the data cluster typedef is the structural equivalent of a class, this means that yes, all classes would be in memory -- but what would that memory cost be if all the data contained mainly consisted of null/empty values?

    The memory cost is all the implementations of all the functions that are never invoked, possibly including some very large DLLs.

    Tying all the classes together would make many architectures I've already seen with LabVOOP not available at all. The core software can install a hierarchy. A new module can come along later and install an extension. This is the basis of the new Getting Started Window in LabVIEW 8.5. In fact, many modules can install extensions. Having all possible modules in memory is a major size bloat and is not worth it if you don't use those modules. With the Getting Started Window, the classes are small, so even if you have every possible module installed, the impact is small, but the same type of architecture can be applied to many cases, and in a lot of these cases the effect would be devastating. Take the error code cluster refactoring that I posted to the GOOP forum a couple months back. There may be specific apps that want very complex error handling routines that load all sorts of external DLLs for graph and chart displays or e-mail communication. These should not be loaded every time the user brings General Error Handler.vi into memory.

    There are many hierarchies where every single operation is defined for every single class in the hierarchy. I'm willing to bet that it is way more common to have a method at every level than for the majority to be empty.

    QUOTE

    It would basically only leave the cluster definitions in memory. Would that be significant in today's multi-gigabyte RAM systems, or even in multi-megabyte systems? You could still load the functions dynamically, significantly reducing the memory usage, as the majority of bytes reside in function definitions vs data container (TD) definitions.

    What gigabytes of RAM? The RT targets have 64k. Or 8k. An FPGA fabric is very limited. Hitting a PDA requires a thoroughly stripped-down run-time engine.

    QUOTE

    QUOTE
    3) The parent implementations would be open and visible to the child implementations. You'd lose the independence of separating implementation from interface.

    Is this valid, considering that in dataflow the data is naturally separated from the functions, in contrast to OO design where methods and properties are merged in one object?

    Ok. This is a completely bogus argument. And it is the root misunderstanding of all the by-reference vs by-value debate.

    Let's get something clear, everyone:

    In C++, if I declare an object of type XYZ like this:

    XYZ variable;

    The language does not suddenly clone all the functions in memory so that this object has its own copy of all the functions.

    The functions are still separate from the data insofar as the assembly code for them occupies the same region of memory whether I have one instance of the class or 1000 instances of the class.

    The ONLY merging of functions with data is in the header file. Which is equivalent to the .lvclass file. The binding between data and functions is EXACTLY THE SAME between Java, C++ and LabVIEW. And Smalltalk. And just about any other OO language you'd like to name (I would say "all other OO languages", but I leave room for someone having developed a language such as Befunge for OO).
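
    To make that point concrete in actual C++:

        #include <cstdio>

        class XYZ {
            int value = 0;                       // data: one copy per instance
        public:
            void increment() { ++value; }        // code: one copy total, shared by all objects
            int  get() const { return value; }
        };

        int main() {
            XYZ a, b;                            // two data allocations...
            a.increment();                       // ...but both objects run the same machine code
            std::printf("%d %d\n", a.get(), b.get());   // prints "1 0"
        }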

    Yes, my argument 3 is valid. Very much valid. Any time you have children being designed such that they depend upon a particular implementation of the parent, you have a violation of the most basic tenet of OO: encapsulation of data.

    QUOTE

    Therefore creating private data.

    See previous post on why ever having public or protected data is a very bad idea. I don't care that you *can* create private data under your scheme. I object to the idea that you *can* create public data. The default direction is really not under contention here. You can default it to public or default it to private -- but the fact that it can ever be set to public, whether as the default or by deliberate change, is bad.

    QUOTE

    QUOTE
    We do need to make the process of creating accessor VIs simpler.

    Is the added debugging complexity also being worked on, especially probing, which I believe to be a drawback of the chosen implementation? I believe that the ability to follow and look into the wire has been one of the main strengths of LabVIEW, and losing that decelerates my ability to write working, tested code.

    I think you changed topics here... give me a second...

    When you say "the chosen implementation", are you referring to the need to create accessors? When I first read this, that seemed to be what you're referring to here. That I would disagree with. The debugging challenge is dramatically simplified by requiring the accessor VIs because you have a bottle neck to catch all data value changes as they happen, rather than trying to set breakpoints and probes in places scattered throughout the code.

    But on re-reading, I think you're actually asking about the ability to display in the probe the full child data when the child is traveling on a parent wire. That is a feature I've asked my team to work on. It does pose quite a challenge, and I wouldn't expect to see it soon. But, having said that, I have yet to see much call for it. The majority of the time, if I'm debugging a parent wire, it is the parent's data cluster that I care about. The child's cluster doesn't get changed by parent operations and is rarely of interest at that point in the code. So, yes, it is an interesting problem worthy of attention, and there are cases where it would be useful. But I've spent the last year looking over the shoulders of LVClass developers, and I haven't seen this create an impediment to development. This isn't the same level of impediment as, for example, the Error Window feedback.

    QUOTE

    Does this mean that standardizing everything to this idea could simplify the architecture of LabVIEW itself? Could you have dynamic loading on those platforms, if dynamic loading only consisted of dynamically loading functions?

    No. You couldn't have dynamic loading at all. The whole point is to use this for targets such as FPGA where there is only a single deployment to the target and all possible classes are known at compile time.

    SUMMARY:

    In short, the central repository of data definition for an entire hierarchy is, in my opinion, unworkable for development. It is a useful concept for deployment only. Tying an entire hierarchy together limits extensibility and places restrictions on the types of software teams that can work on the software.

    I hope all the above makes sense.

  16. Offering help on this scale over discussion forum would be a tall order. I doubt that the LAVA folks are going to be able to help much, not because we're unwilling to help but because there's really no way to just teach LV over the discussion forums. Isn't there anyone who knows LV in your physical vicinity who might be able to help you? We're going to be more able to comment on a particular block diagram or a proposed VI hierarchy, but at this stage of your project, your question is sort of like landing on a new planet and asking us to brainstorm all the possible constellations from the stars in the sky. Too much space to cover and no clear place to begin. But if you get to the point where you have studied the stars in the heavens and are ready to propose a set of constellations, we can kibitz on that as to whether the constellations are recognizable and how well they reflect the LV mythology. (Have I stretched this metaphor too far? Perhaps, but you get the ... ahem ... picture, I hope.)

  17. QUOTE(Jim Kring @ Aug 24 2007, 01:09 PM)

    I just tried (in LabVIEW 8.5) to use Open VI Reference to open a reference to a VI that is contained inside an LLB and that doesn't work in built apps, either!!!

    Am I going crazy?

    Ok, Jim. Let's take a deep breath and start from the top. First, is the computer plugged in?

    Seriously... the Open VI Reference from LLB is used on every single test of the nightly autotest suite. It works. Not sure what's wrong with your code, but that definitely works.

  18. QUOTE(Val Brown @ Aug 24 2007, 10:00 AM)

    If I'd wanted to have been COMPELLED to use objects I would have simply used C++ from the beginning. I really do NOT want G to simply become a "pretty" IDE for C++. It doesn't NEED to happen; doesn't bring ANY real benefits; violates the central design and organizing principles of LabVIEW; and is an affront to those who were not only "early adopters" of LabVIEW, but have used it consistently for years.

    *chuckle* I'm sorry... reading this (and the other posts) I can't help thinking of Monty Python and the Holy Grail. "Help! Help! I'm being repressed!"

    You're in no danger whatsoever of classes being forced on you by the removal of clusters. Nor any other method of moving you toward classes by NI. The only thing that will compel you toward classes will be the awe-inspiring beauty of the coherent libraries of VIs that your peers produce in the next few years and the shame that you feel when you compare it to your own VI hierarchies. :worship: Why should we try to force you to use classes when your own base desires (for good code and sustainable designs) will draw you inevitably toward it?

    Tangent A: C++ does not compel the use of classes. You can backslide into C anytime you want. The C++ compiler accepts all C syntax.

    Tangent B: G as a pretty IDE for C++??? What a HORRIBLE vision!! Have you *seen* C++? It is more of a hack than a language. Like Darth Vader, C++ "is more machine than man now." My hat goes off to the hackers who designed it... there are amazing amazing aspects to it. But for arcane syntax, it wins over just about every language I've ever seen. G should not be a pretty IDE for any of the traditional programming languages. What it should be is a pretty IDE for expressing human concepts and needs to the CPU of the machine, in the most elegant, efficient and intelligible way possible... which is why you'll eventually _want_ classes.

  19. QUOTE(i2dx @ Aug 23 2007, 03:41 PM)

    LabVIEW 8.5. Read the Upgrade Notes on "recursion".

    QUOTE(Justin Goeres @ Aug 23 2007, 10:40 AM)

    This reminds me of AQ's old story about the "feature" where LabVIEW would crash if a wire greater than 65,536 pixels in length had a bend in it (or something along those lines).

    The full story is this... a user filed a bug report that "If I popup on a wire and select 'Clean Up Wire', if the wire is over 16k pixels long, LabVIEW just deletes the wire." I rejected the bug report on the grounds that LabVIEW had done the right thing to clean up the wire.
