Everything posted by Aristos Queue

  1. Here's one that works for any array data type. Call it the Monte Carlo method... I hope everyone understands that I put the False case of the case structure right above the True case in this image... this is not just a case structure hanging in space. :-) The value of the Boolean is always True.
  2. QUOTE(crelf @ Sep 12 2007, 12:34 PM) Shouldn't the beer stein have something on it like "Keep it flowing" ? Something I pulled together real quick... (Saved in 8.5) Download File:post-5877-1189692492.zip
  3. QUOTE(PJM_labview @ Sep 12 2007, 11:32 PM) Same here. I actually wrote this feature (David: I *think* it first appeared in LV8.0) out of frustration with the string constants. Unfortunately, free labels have a different code path, and I've never gotten back to the project to learn that section of the code. Perhaps someday I'll get around to it. Or someone else will. I filed a CAR at the time so that I'd keep thinking about it. You know, in my spare time. ;-)
  4. QUOTE(Ben @ Sep 12 2007, 09:40 AM) I don't think that would've helped for your enum example nor Justin's wire-the-N. Those have been in LabVIEW for a long long time.
  5. QUOTE(yen @ Sep 12 2007, 07:55 AM) Serious insofar as a lot of us are intrigued, but as a real project, no. It's a pie-in-the-sky thing for the time being, something for *years* down the road. A lot of these sorts of big dreams take many years to work through. LV classes were being dreamed about 2 years before a team formed to work on them, and then took 6 years. LV FPGA was demo'd at NI Week five years before it released. The LV project window was the result of many years of customer feedback and a couple years of development time. These bigger shifts take a while. And this one is definitely in the idyllic "wouldn't that be nice" stage where it has only positives and no real negatives. ;-)
  6. QUOTE(Justin Goeres @ Sep 12 2007, 08:17 AM) You may have just learned this, but it could be worse. I've actually received a product suggestion that said "LabVIEW should have a better way to calculate the size of an array". The method used in the attached VI would make your diagram look something like this: http://lavag.org/old_files/monthly_09_2007/post-5877-1189607671.png So you're doing fine... ;-)
  7. QUOTE(jaegen @ Sep 11 2007, 05:27 PM) Yes. It leaves room for future expansion of functionality that differs between leaf nodes and branch nodes.
  8. QUOTE(Justin Goeres @ Sep 11 2007, 04:19 PM) My personal vision? There cease to be files on disk representing VIs, libraries, projects, etc. Instead there is a database (MySQL, Oracle, whatever). That database contains all the VIs for a given EXE -- whether that is LabVIEW.exe or your built application/DLL. To move stuff from one machine to another, you would export VIs from the database to a portable database file, and then import them into the database on the other side. Source code control on the local machine is just transactions on the database. Source code control to a remote depot would involve LV brokering the transaction via temporary files to "check out, import, export, check in."
     No more save changes dialog asking you about Defer Decision
     No more cross-linking
     No more expensive disk tracking in the project window (new in 8.5)
     No more having to load all your VIs into memory for them to be aware of edits to subVIs
     No more confusion about disk hierarchy vs project hierarchy vs ownership hierarchy
     No more files changing when LV isn't looking
     No more confusion about which files are loading from where
     No more surprise missing files
     This might seem radical, but this is exactly the approach taken by some of the recent Java IDEs.
  9. QUOTE(Jim Kring @ Sep 11 2007, 04:42 PM) Or it means that the intermediate levels of your inheritance hierarchy should not have such a method; then the VIs can be named the same on all the leaf classes. This is actually good practice since it means that your intermediate classes are just API classes, not actual functionality, which is something I'm recommending more and more as time goes by. It solves an awful lot of programming bugs if the class levels that are not leaf level are not actually instantiated in your application. (There's a conversation somewhere in LAVA and/or DevZone about why LV doesn't have formal "abstract classes", but I can't find the link to it at the moment. If anyone knows where it is, please link it here.)
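LabVIEW has no formal abstract classes, but the pattern described above -- intermediate levels that are pure API, instantiated only at the leaves -- is the same one that text languages enforce directly. A minimal Python sketch of the idea (the class and method names here are hypothetical, chosen purely for illustration):

```python
from abc import ABC, abstractmethod

# The intermediate level is a pure API class: it declares the method but is
# never instantiated itself. Python's ABC machinery enforces that.
class Vehicle(ABC):
    @abstractmethod
    def describe(self) -> str:
        ...

# Only leaf classes implement the method and get instantiated.
class Car(Vehicle):
    def describe(self) -> str:
        return "car"

class Truck(Vehicle):
    def describe(self) -> str:
        return "truck"

# Vehicle() raises TypeError; Car() and Truck() work normally.
```

Because `Vehicle` can never be instantiated, every object in the running program is guaranteed to be a leaf, which is exactly the property the post recommends.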
  10. QUOTE(Aristos Queue @ Sep 10 2007, 09:39 PM) Going through my notes, I found another impossible situation. Recall that in my previous post I said there was a problem of figuring out which library owns which directory when we load a VI from that directory into memory. Suppose we did solve that problem. In that case, suppose a user creates Project A and in that creates Library Q to watch directory X. Save All and close all. Now the user creates Project B and in that creates Library R to watch the same directory X. Now, when VIs are placed in that directory, which library should they become a part of? And before you say, "Well, LV should just prevent two libraries from ever owning the same directory X," consider the case of saving a backup copy of your library. You do "File >> Save As >> Unopened Copy" and save your library off under some other name. Do you really want us to prevent that because that would create two libraries owning the same directory? Why do G programmers want autopopulating directories and libraries to work together? The issue comes down to wanting to make it easy to associate a given VI with a given library and, in the same step, keep the disk structure up to date. I wish that we'd found something that worked better to integrate the "ownership" relationship with the "disk hierarchy" relationship. Autopopulating folders do nothing except automate a small part of the disk maintenance problem. They're nice for a flat VI hierarchy if you're the kind of user who moves VIs around a lot, breaking apart hierarchies and extracting VIs out of the middle of projects, without using LV to move them. But they don't (and can't) help with library organization at all.
  11. QUOTE(robijn @ Sep 7 2007, 06:14 AM) LV doesn't clean up objects ever. Just as it does not clean up arrays or strings. The terminals are allocated when the VI loads. The terminal values may be stomped on when the VI runs. When a VI closes, the terminal space is deallocated. See the online documentation for the Request Deallocation primitive (in the palettes since LV7.1 I think) for details.
  12. QUOTE(Michael_Aivaliotis @ Sep 7 2007, 05:12 PM) "Is that so bad?" Yes. You're asking for the logically impossible. A: "Everything in this directory is going to be owned by this class." B: "Um, but you put the .lvclass file itself in that directory." A: "Oh, well, I mean that the .lvclass will own everything in the directory *except* the stuff that it can't possibly logically own." B: "Oh, that's better. I'm sure our documentation folks will love explaining that one, but with enough psychedelic drugs, I'm sure they can do it. ... But wait, what about the files that you copied into that directory by hand when LV wasn't even running?" A: "Well, the class should own those too." B: "You mean when you open the VI, LabVIEW should be smart enough to know about all possible libraries on the ENTIRE hard drive to know which ones are monitoring which directories so that when you say 'File>>Open' on one of those VIs, LabVIEW just mystically knows to open the library as well and edit it to own the VI?" A: "Well, yeah." B: "Fine. We'll just load ALL THE LIBRARIES ON YOUR DISK into memory every time you launch LV. That way we'll know which libraries are monitoring what." A: "Oh. Uh. Um... " I could go on. I fought hard against autopopulating folders. They are a false hope. But users demanded them. And so we have them. And I even got involved with the team that worked on them to try to make them work with libraries (and thus, by extension, classes). But there are so many logical contradictions and impossible relationship resolutions that I just can't believe it'll ever work out.
  13. QUOTE(Jim Kring @ Sep 10 2007, 05:19 PM) It saves all the subVIs of any VI that is in the class too, if I recall correctly. So if you've got a VI in one class that calls VIs in another class, those other VIs would get saved. I didn't put that function together, but that's my memory of how it works.
  14. QUOTE(Harish @ Sep 5 2007, 06:55 AM) GOOP or LVOOP? If you're wanting to transmit the object from one LV to another LV: If you're using LVOOP -- the .lvclass files that are natively built into LV in version 8.2 and later -- you can just send the objects to the other system using any of the LV data communications protocols. You could, for example, use a Call By Reference node to pass the data by terminal. Or flatten to a string and use TCP/IP prims to send the string and then unflatten it on the other side. You just have to make sure that the .lvclass file itself is already loaded into memory in the remote system. An example that might be helpful is <labview>\examples\lvoop\Read Write Class Data To File\Read Write Class Data To File.lvproj Although this shows file IO and not communication, it shows how the object data can be treated as any other LV data. If you're using one of the GOOP toolkits -- Endevo's GOOP Toolkit, OpenGOOP or dqGOOP -- you'll have to talk to the authors of those tools for information on how to send an object from one system to another. I know that the objects themselves exist only locally, and I don't know what communication systems they've built. If you're wanting to just create serial output of the object for sending to some device: In both LVOOP and GOOP, just create a VI that takes the object in and produces a string output and generate whatever format string you need for the device. Helpful?
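The "flatten to a string, send it, unflatten on the other side" roundtrip described above has a close analogue in most text languages. A rough Python sketch of the idea, using `pickle` as a stand-in for LabVIEW's Flatten/Unflatten (the `Measurement` class is hypothetical, invented just for this illustration):

```python
import pickle

# Hypothetical class standing in for a LabVIEW object.
class Measurement:
    def __init__(self, channel: str, values: list):
        self.channel = channel
        self.values = values

m = Measurement("ai0", [1.5, 2.5])
wire = pickle.dumps(m)    # "flatten to string": bytes you could push through a TCP socket
m2 = pickle.loads(wire)   # "unflatten" on the receiving side
# This only works if the Measurement class definition is loaded on both
# ends -- just as the .lvclass must already be in memory on the remote system.
```

The last comment is the key parallel: serialized data carries the values, not the class itself, so both sides must agree on the type definition before the transfer.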
  15. QUOTE(shashank @ Sep 5 2007, 04:31 AM) Two options have just appeared on the radar. If you're using LV 8.2... Go to ni.com/labs and click on the Generic Container download. If you're using LV 8.5... try this out: http://forums.lavag.org/Map-implemented-with-classes-for-85-t8914.html Otherwise, there are two options you have historically had in LV: 1) Create two parallel arrays and add and remove from both simultaneously. You can use Search 1D Array to find the value in the X array and then return the complementary value in the Y array. 2) You can use the attributes of a Variant; flatten your X to a string and use that as the key to store attributes whose value is y. Later you can retrieve the values by looking up attributes on the variant.
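Option 1 above, the parallel-arrays pattern, can be sketched in a few lines. This is a Python analogy (the function names are invented for illustration); `list.index` plays the role of Search 1D Array:

```python
# Two parallel lists kept in lockstep -- the classic "map before maps" pattern.
keys: list = []
vals: list = []

def insert(x, y):
    if x in keys:
        vals[keys.index(x)] = y   # key exists: overwrite its paired value
    else:                          # new key: append to BOTH lists together
        keys.append(x)
        vals.append(y)

def lookup(x):
    i = keys.index(x)             # LabVIEW equivalent: Search 1D Array on X
    return vals[i]                # return the complementary element of Y

insert("volts", 5.0)
insert("amps", 0.25)
insert("volts", 4.8)              # update keeps the two lists in sync
```

The whole trick is that every add/remove touches both arrays at the same index, which is why the post stresses doing the operations "simultaneously."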
  16. I'd like to suggest a change to the factory function. The factory function "Image.lvclass:Create Image.vi" is the piece that takes the input image data and outputs the correct image class. Currently you have a case structure that splits out "24, 32" and calls Create RGB Image. And you have "1, 4, 8" which calls Create Indexed Image. In each of those VIs you have the same case structure test, splitting deeper. First of all, we're testing the same value twice; since we've already encoded the bit values into the top level Create Image.vi, let's just go all the way and skip the middle calls to Create RGB Image.vi and Create Indexed Image.vi. But we could also do this: On Image.lvclass, define a dynamic dispatch VI named "Get Bit Encoding.vi". This VI would take an instance of the class as input and output a constant integer that is the bit depth of that class. Then on each of the leaf-level child classes (not the middle-layer classes) define override VIs that return 1, 4, 8, 24 or 32 respectively. Now build an array of each of the 5 types of leaf-level image classes and make that array a constant. Now you can write a VI that takes image data as input, loops over that array, calls Get Bit Encoding.vi and says "if the bit encoding of this class is the same as the bit encoding in the image data, then create a new instance of that class from the image data." Now, in this version the only real advantage is that you don't have a case structure that is doing testing of the values -- you're instead asking each object "Could you encode this image data? If you can, then I'll make a new instance of you." That's much cleaner. Further, if the array, instead of being a constant, is a stored value (say, an LV2-style global), you could dynamically load new classes into the system and add them to the array at run time, and the code would still behave the same. Going to this length is definitely not worth your time on this project, but it is an idea that may be useful in the future.
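The suggested factory can be sketched in Python as a loop over prototype instances. The class names here are hypothetical stand-ins for the LabVIEW leaf classes, and `bit_encoding` stands in for the dynamic-dispatch "Get Bit Encoding.vi":

```python
# Each leaf class answers "what bit depth can I encode?" for itself; the
# factory loops over prototypes instead of centralizing a case structure.
class Image:
    def bit_encoding(self) -> int:
        raise NotImplementedError   # dynamic-dispatch placeholder

class Indexed8Image(Image):
    def bit_encoding(self) -> int:
        return 8

class RGB24Image(Image):
    def bit_encoding(self) -> int:
        return 24

# Analogue of the constant array of leaf-class instances. Making this a
# mutable registry (the LV2-style global in the text) would let new classes
# be added at run time without touching the factory.
PROTOTYPES = [Indexed8Image(), RGB24Image()]

def create_image(bit_depth: int) -> Image:
    for proto in PROTOTYPES:
        # "Could you encode this image data? Then I'll make a new instance of you."
        if proto.bit_encoding() == bit_depth:
            return type(proto)()
    raise ValueError(f"no image class registered for {bit_depth}-bit data")
```

The payoff the post describes falls out directly: no case structure knows the full set of classes, so registering a new image type is just appending another prototype to the array.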
  17. A) Please file a bug report on ni.com/devforums if the documentation is unclear for this feature. The online help is at National%20Instruments/LabVIEW%208.5/help/html/lvhowto.chm/control_to_class.html B) Here are some details of what is going on... 1) You pop up on X.ctl and select Convert Contents of Control to Class 2) A new class is created named X.lvclass which contains X.ctl, the private data control of this class. This is NOT the same X.ctl as the one on which you popped up. 3) The private data control's cluster is populated with the data from the original X.ctl. 4) The original X.ctl is edited to have an instance of X.lvclass on its front panel, replacing whatever else was there already. The original X.ctl is not deleted so that VIs that look for it can still find it. Whenever you open a VI that uses X.ctl, that VI updates to match X.ctl (assuming X.ctl is a typedef). Thus any VI that used the old X.ctl type now uses the new X.ctl type, which is X.lvclass. Any VI that just passed the X.ctl value around to subVIs is fine. Any VI that tried to actually operate on the value of X.ctl is broken. To fix those VIs you need to move them inside the new X.lvclass so that they have the privilege of being able to access the data OR you need to create accessor VIs on X.lvclass that these broken VIs can use.
  18. QUOTE(Michael_Aivaliotis @ Sep 1 2007, 01:31 AM) Grid alignment, autowiring, autorouting, autotool, Express VI configuration... never let the environment do for you what you can spend hours doing yourself, right, Michael? Seriously, though... it never ceases to amaze me the massive difference of opinion about these sorts of features between long-time users and new users of LabVIEW. To relatively new users, these are Heaven-sent productivity improvements, and they find going back to older versions without these to be cumbersome. Older users tend to view these with deep suspicion, even disgust that R&D spent time working on them, and generally see them as getting in the way of getting work done. I wonder if it is true of other programs to this extent? Certainly I know that autocorrection in MSWord -- which works fabulously well -- gets similar reception, old vs new.
  19. QUOTE(Hacti @ Aug 31 2007, 05:35 AM) If you have a finite set of shared variables, you could create a separate subVI for each variable that sets its value and then use VI Server to select which subVI to call.
  20. QUOTE(PJM_labview @ Aug 29 2007, 06:32 PM) Oh, duh. I always turn on the "resize new items to grid".
  21. QUOTE(robijn @ Aug 29 2007, 10:21 AM) Wrong lesson to take away from this. The actual lesson is, "Wow, AQ coded a Map class in just three hours that is clean, efficient AND dataflow-safe without introducing the semaphore/mutex locking mechanisms that reference classes would've required." QUOTE(orko @ Aug 29 2007, 02:49 PM) QUOTE(Gavin Burnell @ Aug 29 2007, 02:38 PM) I have to say that this whole thread just makes my head hurt ! You're not the only one, Gavin. I'm trying to keep up...but there's a lot to digest! When you live on the cutting edge, you don't get documentation. ;-) And now, let's see if I can get a few of Tomi's questions answered: Question 1: Can I get more details about the "simple" Swap in the insert case? Tomi asked for further details about the "simple" case where we swap in data into the Map during an insert... He gave some examples of differences in behavior that he sees from the Show Buffer Allocations tool when using the Swap prim with different data types. I'm not sure what he saw exactly, but here's a more detailed explanation of my code: Suppose you have a cluster containing a string. You can unbundle, append to the string and bundle back into the cluster. LV doesn't make a copy of the string because it knows that the result is going back to the same place as the original data. Thus the unbundle terminal is said to be "inplace" to the bundle terminal. But, if you unbundled the string, appended to it and then sent that string somewhere else, without putting it back in the bundle, LV would make a copy of the string. The data isn't coming back to the cluster, so it cannot share memory with the original cluster because the original cluster needs to continue its execution with the unmodified string value. Now, consider those two concepts with respect to the Map class. The Map has a piece of data in location P. The new node being inserted has a piece of data at location Q. 
There are downstream terminals X and Y: http://forums.lavag.org/index.php?act=attach&type=post&id=6806 Position P would like to be inplace to position X. But if I have new data coming into the cluster AND I need to have the original cluster available for writing, then LV has to make a copy of the data at P. Further, the data that is in Q is being moved into X, but those two memory locations cannot be made in place to each other because they are part of different allocated clusters in memory. So we also make a copy of Q. This is the case of the Unbundled data not going back to the original Bundle node after being modified. Using the Swap primitive avoids both copies. Instead of copying data out of the cluster and sending it somewhere else, we're now modifying the value in the cluster. How are we modifying the value? We're Unbundling data, then exchanging the value in that memory location with the value at another memory location, and then the wire is going back into the Bundle node. Thus LV knows not to make a copy of the data at the Unbundle node. QUOTE 1. So my first very simple question is what exactly do the Switch Values node and the In-Place Memory structure do? Why do they behave differently for different but apparently similar data types? I have no idea what you're seeing. The class and the cluster should behave identically in the cases you're describing. Please post a VI. QUOTE 2. How can I know if an operation really happens in-place or not, if it's such a crucial thing when manipulating hierarchical data structures? The Show Buffer Allocations tool. And it isn't crucial, just more efficient. Yes, there is a difference. The Insert function will work without using the Swap, just with less efficiency. Nothing in LV was ever designed to do what I've done with Map.
It is a coincidence arising from a thousand different design decisions that this is even viable. QUOTE 3. The In-Place Memory structure is a rather intuitive thing when you look at the examples of its usage in the LabVIEW help. However, when you make a subVI call inside the In-Place Memory structure, things get more complicated. How does the subVI know that it needs to operate in-place? The subVI doesn't know. The subVI is compiled with its own inplaceness. If it doesn't preserve inplaceness across outputs, then the caller will copy the output back into the original buffer. There aren't any references. QUOTE If we just don't use In-Place Memory structures and don't mind making copies, will we be able to use hierarchical data structures such as a binary tree without any tricks and risks of interfering parallel wires? Honestly, I'm not sure. But I believe that the natural dataflow expression of LV will kick in and make creating the tree impossible. QUOTE QUOTEThe node that we're going to delete has the left and right subtrees as member data values. When the deleted node reaches the end of its wire, it will disappear from memory AND WILL TAKE ALL OF ITS MEMBER DATA WITH IT. Haven't we used the Unbundle node to tell LabVIEW that we are going to use two of the data members, Left and Right in this case? Was the problem that if we didn't use Swap Values nodes to write dummy values l and r to the original object X(L,R) private data, LabVIEW would need to make data copies of the Left (L) and Right (R) in order to be able to remove the original object X(L,R) from memory? And it wouldn't be an in-place operation any more and we would lose the efficiency. You got it. If we don't use the Swap primitive, then LV will kindly make a copy of the left and right subtrees for us so that we have independent copies separate from the node being deleted. QUOTE Exactly when would X(L,R) disappear from memory?
Whenever the wire that the data is sitting in next executes and new data on the wire stomps on the old data. QUOTE How do you know when is LabVIEW going to optimize copies incorrectly either in LV 8.5 or any of the future versions? I guess this is an undocumented issue... How would LabVIEW incorrectly optimize the code in this particular case? Let me be very clear here: If I do not wire the output of the Unbundle node, LV is correct to optimize out the Bundle node. Every rule of dataflow says that the execution of that bundle node should not be necessary -- there's nothing downstream from the Bundle node to need the data. Remember, we're violating dataflow here -- we're saying that we need LV to change the value on a wire even though that wire isn't going anywhere, because the memory is actually shared with items that are going on down the wire. How will this keep working in future LV versions? Don't know. In fact, I'm pretty sure that LV's compiler will eventually be smart enough to optimize this out unless I deliberately put something in to keep this working. I'm not entirely sure that it should keep working. By many arguments, the code I've written for the Map is a bug that should be fixed, probably in 8.5.1. This bit of magick arises as the confluence of multiple features, and I've posted it here to see what everyone thinks. I see a certain elegance to it, but this may be the same romance a moth feels for a flame. QUOTE So what exactly has happened here? We have had four handles (pointer to pointer) for LabVIEW objects LH, RH, lH and rH. The handles themselves remain at the same addresses but the pointers the handles refer to get exchanged. So LH, which originally referred to a pointer Lp to L, now refers to another pointer lp to l. Am I right or wrong here? Completely correct. QUOTE How do we know that in a future version of LabVIEW this trick will not be optimized as a no-op? See previous comments.
QUOTE Actually, to completely understand these things, I'd need more examples of what I shouldn't do and what actions are not allowed, rather than which actions are the correct answers. I need to ask this one more time. If I don't try to do things in-place, will I still have risks of crashing LabVIEW or something similar when I modify hierarchical data structures? I honestly am not sure. I tried to get people -- both inside NI and outside -- to work on this problem before release. And I put it in the white paper after release. Everyone got this blank look on their faces and responded with comments like, "I'm not really sure what you're even talking about. I think I'd need to see a working situation to be able to help." So, now everyone can see the scenario that I'm talking about. If we are going to see a problem, it will definitely show up in the graph case. If the graph can be built at all -- which I'm not sure is possible -- then the bug, if it exists, will have to come to light.
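The "swap dummy values into the node before discarding it" maneuver discussed above can be shown structurally in Python, with one big caveat stated up front: Python has reference semantics, so no data copies are at stake the way they are under LabVIEW's value semantics. The tuple swap below only mirrors the *shape* of the Swap primitive -- dummy values (None) go into the node's Left/Right slots while the subtrees come out, so dropping the node does not take its children with it. The `Node` class and `detach_children` helper are invented for this sketch:

```python
# Structural analogy only -- see caveat in the lead-in above.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def detach_children(node):
    # Swap None in, take each subtree out -- the Swap primitive's pattern.
    left, node.left = node.left, None
    right, node.right = node.right, None
    return left, right

root = Node("X", Node("L"), Node("R"))
l, r = detach_children(root)   # root can now be discarded without its subtrees
```

In LabVIEW the same exchange is what lets the deleted node reach the end of its wire carrying only dummy data, while the live subtrees travel on without ever being copied.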
  22. QUOTE(Eugen Graf @ Aug 29 2007, 08:35 AM) Curious... what makes 12 so bad? All the controls/indicators in the palettes are sized to 12 pixel increments.
  23. Quick reply for now on a single point... QUOTE I mean that for VIs outside the class, they see a coherent "Map" object that behaves dataflow safe in all cases and a set of API functions that act on that map in a dataflow-safe way. But on VIs inside the class, there are calls to member functions that adjust parts of that map in ways that wouldn't be dataflow safe under all conditions, but the conditions are limited such that it works in the cases that are actually capable of executing. QUOTE(Tomi Maila @ Aug 29 2007, 11:25 AM) LabVIEW has always been a programming language that one can learn without reading books, simply by checking the documentation of each node. Now these particular techniques are not documented anywhere. The LabVIEW help doesn't say much of anything. Users are tempted to give these new memory management nodes a try. They may soon use them thinking they understand, but they don't. At least if they are as stupid as I am. There is simply no way a user can learn these techniques if there is not a very comprehensive introduction to them. This is also true for the most advanced users, like the ones here at LAVA. If enough people disagree, I'll fix the compiler so this doesn't work at all. It's easy enough to detect -- just prevent any ancestor class from being used as a data member. It is a door I left open, consciously and deliberately, because of this use case. Part of posting it here is to evaluate whether this should remain part of the language syntax. It would be the work of a few moments to render classes that do this broken. As for the documentation, you're reading it. None of this has been done with LabVIEW until this Map class that I've posted. Its feasibility was only theoretical until I finished it recently. It's like a manhole cover. Sometimes it's useful to go into the sewer drain, but you can get covered in crap.
City work crews could lock down the manholes, or they could just leave the lid unlocked and brave users can descend into the depths. This hole is more dangerous than most of the ones on LabVIEW Street because of the crash issue. You want an example of things that are normally reserved for password-protected VIs that users aren't allowed to look at? This is a prime example. So, you tell me... should we close this hole in the language syntax? As for the users being tempted to try the memory management primitives -- yeah, I wasn't really in favor of adding those to the language. They encourage you to care about things that you really should leave to the compiler 99.9% of the time. I know they're useful, but I really think they should've been a separate toolkit -- the license key for them could've been handed out to Certified LabVIEW Developers as a prize when they passed their exams. But it wasn't my call to make. They are in the palettes. And yet, there's not going to be any horn blowing to announce them. There never will be any comprehensive training with these prims -- they're quietly sitting off in one palette, to be pointed out to users by AEs when they reach some otherwise unresolvable issue. And they can be explored by zealous LV users. Side note... after today, I may not be posting much for the next week or so. Other stuff going on. I'll try to get a more complete answer to Tomi's questions later today, but no guarantees.