Everything posted by Aristos Queue
-
No. You can guarantee the order of individual branches in some cases: where the output of node A forks to nodes B and C, the compiler will always add those to the execution queue in a particular order. But if the output of node A forks to C and D, and B also forks to C and D, the order of C and D may vary depending upon whether A finishes first or B finishes first, and which finishes first can vary greatly depending upon the operating system's thread scheduling.

A "dataspace entry" is a single piece of LV data capable of going down a wire as a coherent LV data type. There are several dataspace entries that are not represented directly on the diagram, but, in general, any of the items you can interact with on the diagram are going to have a terminal representation at some point. Some things -- like a queue -- which aren't associated with any particular VI, have storage that is not directly a terminal.
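Not LabVIEW itself, but a minimal Python sketch of the point above: when two parallel branches feed the same downstream nodes, which branch finishes first is up to the OS scheduler, so the completion order can differ from run to run.

```python
import concurrent.futures
import random
import time

def branch(name):
    # Stand-in for real work; the random sleep just makes the scheduling visible.
    time.sleep(random.uniform(0, 0.01))
    return name

with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(branch, n) for n in ("A", "B")]
    # as_completed yields whichever branch the OS scheduler let finish first,
    # so the order printed here can vary from run to run.
    print([f.result() for f in concurrent.futures.as_completed(futures)])
```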
-
Excellently done! I think I can add that the inplaceness algorithm -- where we decide a downstream wire can reuse an upstream wire's memory address and modify that value -- is logically equivalent to each wire having a reference count on that data: when the refcount is one, the nodes can modify the address because no one else is sharing it. If you created a DVR directly to a by-value wire's address, the refcount would not be one -- it would be two, one for the DVR and one for the parallel by-value wire. So until that other branch was done, you couldn't modify the data. A data copy is created so that the DVR holds the one and only reference to the data. There are various improvements we can sometimes make using top-swapping instead of a deep copy, but in general, the DVR needs exclusive rights to that memory address.
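A rough, hedged Python sketch of that mental model (not LabVIEW's actual inplaceness code; Buffer and get_exclusive are invented names for illustration):

```python
import copy

class Buffer:
    """Stand-in for a block of wire data plus a reference count."""
    def __init__(self, data, refcount=1):
        self.data = data
        self.refcount = refcount

def get_exclusive(buf):
    # refcount == 1: no parallel wire shares this address, so a DVR
    # (or a downstream node) may modify it in place.
    if buf.refcount == 1:
        return buf
    # Shared with a parallel by-value wire: hand back a copy so the new
    # owner holds the one and only reference to its data.
    buf.refcount -= 1
    return Buffer(copy.deepcopy(buf.data))
```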
-
Variant to class-data context dependent error
Aristos Queue replied to Jeffrey Habets's topic in VI Scripting
A bug? Only sort of. Yes, it's a bug, yes, it makes LV unstable, but, um... no, it is unlikely to be fixed. This is documented somewhere but I can't find it at the moment. If I load A.lvclass in AppInstance 1, that same piece of data cannot be used in AppInstance 2, because if we close AppInstance 1, the class leaves memory, which would leave the data in AppInstance 2 without a class definition, which would cause LV to crash the next time you copy, delete or access that data (in fact, this crash is what's bringing down your VI, but we'll get to that in a moment), and because the A.lvclass loaded in AppInstance 1 might not be the same as the A.lvclass loaded in AppInstance 2. One might be c:\A.lvclass and the other might be d:\A.lvclass, and they might not have anywhere close to the same definition. Applying the assembly instructions that copy an object of the first class might create completely corrupt objects if applied to objects of the other class.

The only correct way to shuffle object data from AppInstance 1 to AppInstance 2 is to flatten it to string and unflatten it from string. This happens automatically when the two AppInstances are on different machines -- the objects are flattened to go across the network. This happens automatically if the wire type is LVClass and not variant, even when the two AppInstances are on the same computer. And it happens automatically for variants when the two AppInstances are on the same computer if you wire a port number to the Open Application Instance primitive. However, it does not happen automatically when you're using variant type to transfer the class data, the contexts are on the same machine, and you didn't specify a port number for the Open App Reference.

Why doesn't it happen automatically? Because LV would have to flatten and then unflatten every variant as it crossed the context boundaries on the off chance that there's a class buried somewhere inside -- in the data, in an attribute, in a cluster of array of cluster of variant's attribute, etc. When I put that code into LV, the performance of many, many apps went way, way, way down. It seems that many scripting functions, editor extensions, etc., grab data across the boundaries. That's a real problem for LV classes, but I've got no way to detect it (even just testing for the data type at runtime was too much of a performance hit). As I said, this is documented in the shipping docs somewhere, and I know I've typed this up before.
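For illustration only, here is the by-value idea in Python, with pickle standing in for LabVIEW's Flatten To String / Unflatten From String: the receiving context rebuilds the object against its own class definition instead of sharing a pointer into the sender's memory.

```python
import pickle

class A:                      # pretend this is A.lvclass as loaded in AppInstance 1
    def __init__(self, x):
        self.x = x

flat = pickle.dumps(A(42))    # "flatten to string" in context 1
copy = pickle.loads(flat)     # "unflatten from string" in context 2: an independent
                              # object, with no live dependence on context 1 staying loaded
```
-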
[speaking purely as a developer myself, not speaking in any way for NI for this post.] Oh, yeah, I've made this mistake before. Not with LV... another tool. In my case it was choosing to use Oracle (licensed to my employer) instead of MySQL (freeware tool). I left that employer and my tools were dead -- I had copies of all my databases, but no way to use them. In a case where I made the right decision, my copy of CorelDraw is mine, paid for out of my pocket, even though I use it frequently at work. NI probably would've bought Corel for me once I showed a need for it in my work, but all of my personal graphics projects are in Corel and I don't want to lose access to them if I ever leave NI.

Essentially, bsvingen faces a rather classic problem. He used a license registered to entity A to produce software for entity B, and now, without access to entity A's license, he's stuck without the ability to continue working on the project for entity B. The solution is not, IMHO, to say, "Never use the licensed tool, always use the free tool." The actual answer is, "Never start a project using tools whose licenses belong to someone other than the owner of the project." In other words, if the project is for entity B, don't use entity A's licenses to develop it. So if this is my personal project, I need a personal license rather than relying upon my office license; otherwise I end up one day not having access to that license.

Is a personal copy of LV cheap? Heck no. But neither is a personal copy of a lot of cool software. I'd like to say, "use LV for everything it is appropriate for." But I'd also like to get a paycheck every couple weeks, which means LV costs money, which means "use LV if you can afford to use it in all the times where it is appropriate". It'd be nice if it were down around $250, like Visual Studio, but we don't have anywhere near the volume that MS does to be able to make back our expenses at that low a price.

I don't think the licensing takes away from the *language* being a general purpose programming language -- by which I mean, if you have access to LV, you can use LV for any programming project, effectively and efficiently, to the same degree as any other programming language. Every language has its strengths and weaknesses, and LV's particular set of trade-offs are no worse than a lot of other languages'. But I do agree with bsvingen that the licensing takes away from the development tool being a general purpose development tool -- by which I mean not all developers will have access to it. That's really what's at the heart of the open source/free software arguments, but while I definitely use open source stuff, and I've contributed back to a few open source projects, and I wrote a lot of shareware back in the day, I don't think anyone has figured out how to pay a staff of programmers while charging nothing for the software.
-
One option: Use an edge of green and a core of something else, similar to the queue wires.
-
What you are asking for is not possible, at all, in a dataflow language. A raw pointer to data on a parallel by-value wire cannot be constructed meaningfully. Ever. That would completely destabilize LabVIEW's parallel architecture. The whole point of a data flow wire is that *nothing* other than the upstream primitive that created the value can modify that value. Nothing. If you have raw pointer access to the data on a parallel branch, you will almost always create wholly unpredictable behavior or a crash, with the crash being highly likely. We fully support references. We do not support pointers at all. But you don't refer to a piece of the block diagram. You refer to an address in memory. That other, parallel wire, does not refer to that piece of memory. It refers to its own piece of memory.
-
It's definitely not something I would force ever. It's just a recommendation that a couple folks think should be followed. I've had multiple situations where the mixture was useful, and haven't seen any negatives yet, myself.
-
Jim Kring wrote: Strong recommendation we've developed over the last few years: If your object contains even a single refnum, make the wire color be refnum green.

I even updated the color picker in LabVIEW 2009 so that refnum green is one of the default colors displayed, so it is easy to grab it for your wire designs (it's the last non-gray square in the row of pre-selected colors). Also, I've had a few people suggest that if one member of your private data control is a refnum then all of them should be refnums, so that everything in the object behaves like a reference. I'm not sure what I think of that, but I'll throw it out there for consideration.

Unfortunately, the bookkeeping on this is significantly problematic. Essentially, you'd need the inside of the IPE to output an array of refnums that are "the refnums I currently have deref'd" and then pass that by dataflow to any nested (including inside subVIs) IPEs. But the subVIs don't have magic secret inputs/outputs that would allow this information to pass along, and trying to do it in a "global store that each IPE checks to see if the refnum was used upstream" requires the IPE to know aspects of dataflow that are simply unknowable, such as what nodes are upstream of it. We did kick around embedding the information in the refnum wire itself, but with a reference, there's no guarantee that the reference is used in only one place. In the end, we couldn't find a way for the IPE to tell the difference between a refnum that was dereferenced upstream (and should error) and one dereferenced in parallel (that will eventually be released), except in the most trivial of cases. If you're writing circular reference code, you'll need to carry along with the refnum a parallel array of refnums already opened, and at every IPE, manually check whether the chosen refnum is already referenced.
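A hedged Python sketch of that manual bookkeeping (already_open and with_refnum are invented names for illustration, not a LabVIEW API): carry the set of refnums that are already "inside" an in place element structure and check it before dereferencing again.

```python
already_open = set()

def with_refnum(refnum, body):
    # Refuse to dereference a refnum that an enclosing IPE already holds.
    if refnum in already_open:
        raise RuntimeError(f"refnum {refnum!r} is already dereferenced upstream")
    already_open.add(refnum)
    try:
        return body(refnum)          # stand-in for the code inside the IPE
    finally:
        already_open.discard(refnum)
```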
-
Using the branch, modify, merge paradigm
Aristos Queue replied to Daklu's topic in Source Code Control
I honestly believe there is no good solution. You have to choose the option that is least bad for your particular development team. For me, branches are the worst option, but I can easily understand someone believing single-trunk development is the worst. Perhaps all software teams should just have an uber secretary who keeps track of everything and let that person tell you what you can develop today.
-
The Equals primitive compares values and returns true if the values are equal. All of the comparison primitives (=, !=, <, >, >=, <=, and Sort 1D Array) compare based on the value of the private data cluster. The online help includes details on how comparison works when the underlying clusters aren't the same. If you're wanting to say "is this object of this particular type", then you use the To More Specific primitive. If you're wanting to say "are these two objects the same type", use the Preserve Run-Time Class primitive (new in LV 2009). "Get LV Class Path.vi" can also give you the exact class info. Very few of these should ever be needed in your code. "Using the full power of OO" means never* doing type testing -- that road leads straight back into the world of enums and case structures. * 'never' of course is relatively never, since we wouldn't have the prims if it was truly never.
-
Using the branch, modify, merge paradigm
Aristos Queue replied to Daklu's topic in Source Code Control
When you're working in the same branch as your fellow developers, VIs and .lvclass files should be locked when you check them out. Ideally, you should get a warning when you try to check out a file that says "this file is already checked out. Any changes you make may not be able to be checked in. Are you sure you want to check the file out?" When you're working on different branches, I don't have any answers for you other than coordinate with your fellow developers. It's a largely unsolvable problem. Multiple branches can quickly break down in pure text languages too. True, changes can *often* be integrated together, but often they can't. LabVIEW has every function as a separate file, which is nice -- it's surprisingly rare for multiple developers to need the same function simultaneously. The private data control of an LV class is a nasty bottleneck, though. As rare as it is to need to change the same function, two developers needing to add data to a class is more common. I don't really have a solution for this. The closest you can come is to check out the class, add a placeholder typedef and quickly check the class back in. Then in your own code, update the typedef to the data you actually need. This assumes you're not flattening the class to disk and trying to store data of this class type.
-
Can't downcast a dereferenced object?
Aristos Queue replied to Daklu's topic in Object-Oriented Programming
It is a philosophy question, and I doubt there is a right answer. "Favor composition over inheritance" is a battle cry I have heard a lot over the last few years. Composition means I can "plug in" functionality from lots of sources. The trouble, as I see it, is that you end up writing a lot of wrapper functions so that the outer object can expose the inner objects' APIs as its own. Now, the theory is that the inner object's API can change completely and, as long as the new API can still support the old functionality, the outer API doesn't have to change -- you change the implementation of those wrappers, not the outer API. But what I observe is that it is rare to want to change the inner API and not change the outer one. Perhaps you're adding a new option, or you're breaking one atomic function into several individual parts. There's usually a reason you're doing that, and it is rare -- in my experience -- that the reason is just so you can have a better internal API. Changes to deeper layer APIs are almost always driven by top-level demands for features. Or, to put it another way, it is rare that the core decides, "I'm going to change how I do things because it will be better." It is much more common for the outermost layer to say, "I need to serve the user differently... but I can't do that unless the core exposes some new functionality." Given that the desire for a new API is almost always driven by the layer closest to the user, when the core changes, the outer layer will want to change anyway, so if there's a tight binding (inheritance) between core and outer layer, so what? Not having that binding doesn't generally seem to save you. Sure you *could* avoid rewriting the outer layer when the core changes, but if the whole point of changing the core is so you *can* rewrite the outer, then you're going to rewrite the outer.

Yes, I spend a lot of time changing inheritance trees around -- introducing new middle layer classes, wiping them out again a couple revisions later. I've got one tree of classes I'm working on at the moment where each layer of inheritance adds one function, with one "concrete" class forking off of the trunk at each level. As I revise each of the concrete classes, they're sliding down the tree, gaining functionality, and I'll probably collapse a lot of the middle layers eventually. I find that the chain of inheritance is useful for navigating my code, and I don't see a major drawback to refactoring the tree. Very few classes are persisted to disk, so that's rarely an issue, and in LV, the mutation logic built into the language generally handles everything I need it to.
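A tiny Python sketch of the wrapper cost being described (the class names are purely illustrative): with composition, the outer class must re-expose each inner operation as a method of its own.

```python
class Engine:                        # the "inner" object
    def start(self):
        return "engine started"

class Car:                           # the "outer" object, composed of an Engine
    def __init__(self):
        self._engine = Engine()      # composition: plug in functionality

    def start(self):                 # wrapper: outer API forwards to inner API
        return self._engine.start()
```
-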
Can't downcast a dereferenced object?
Aristos Queue replied to Daklu's topic in Object-Oriented Programming
Really? My opinion has generally been that deep hierarchies mean that you've adequately decomposed objects and you can inherit in the future off of any of the intermediate levels instead of being stuck with one concrete class that has everything. I've always thought that the deep hierarchies are highly extensible.
-
The equals operation is a *value* comparison, not a reference comparison. All of the fields of the object must match exactly for equals to return true. We do not have any concept of a reference to the object.
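A small Python analogy (hypothetical Point class) of what a pure value comparison means: two objects are equal exactly when all of their fields are equal; there is no notion of comparing references.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

print(Point(1, 2) == Point(1, 2))   # True: compared field by field, by value
print(Point(1, 2) == Point(1, 3))   # False: a single differing field breaks equality
```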
-
Our code conventions say that when we insert something like that on your diagram, you should see a load warning when you load the VI explaining the change. Did you get such a warning? If not, please file a CAR. Generally this would be done for any node that used to silently fail some case and in the new version returns an error code for that case. Existing code may have been written to assume the silent failure. To preserve previous version functionality, we insert a function to conditionally clear the new error code. Inserting a full "Clear Errors.vi" that wipes out all error information would be strange... The only case I can imagine that applying to is if in the previous version the node utterly failed to propagate error in to error out and we have fixed that, but, again, to preserve previous functionality, we clear all errors downstream. But that's just a guess. [LATER] Looking closer at your posted picture, that is just a conditional clearing of the error code, so I'll bet #2 applies.
-
My work machine is a PC. My home machines & laptop are all Mac. My C++ development is always done under Windows where I have access to that wonderful tool MS Visual Studio. I haven't ever found anything for text programming that comes close to the usability of MSVS. On the other hand, most of my G development these days is done on my laptop where I can use my touchpad, which I find much nicer than a mouse for LV programming (easier to move back and forth from keyboard to mouse as needed).
-
LV 2009 configuration file VIs "private"
Aristos Queue replied to James N's topic in Development Environment (IDE)
I seriously doubt it. The percent of LV in LV has gone up substantially with every release since 8.0. The problem is not how much of LV is written in LV, nor even how many VIs you can look at to see how something is done. The question is how many of those VIs you can use in your own code AND expect support for those VIs in the next version of LV. If every VI that ships with LV is one that we have to maintain indefinitely, we'll rapidly stagnate. We're not talking about password protecting the diagrams or anything. We're talking about making it so that you have to create your own copy of the VIs in order to use them.
-
To the best of my knowledge, ShaunR is correct. The shared variables do not provide synchronization behaviors. On a local machine they are equivalent to global VIs. Over the network, they are a broadcast mechanism, which, by polling, you can use as a trigger, but I don't think you have any way to sleep until message received.
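A rough Python sketch of the difference, under the assumption stated above: with only a shared value you poll it as a trigger, whereas a queue lets a receiver actually sleep until a message arrives.

```python
import queue
import time

shared_value = {"trigger": False}        # stand-in for a network shared variable

def poll_for_trigger(period_s=0.05):
    while not shared_value["trigger"]:   # wake up, check, go back to sleep
        time.sleep(period_s)

msg_queue = queue.Queue()                # by contrast, a queue blocks until data arrives

def wait_for_message():
    return msg_queue.get()               # sleeps here; no polling loop needed
```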
-
It's a question of reallocation on every call vs. leaving the allocation in place to maximize performance. If the data can occasionally ramp up to "large number X", and you have enough RAM, then it's a good idea to just leave a large block always allocated for that large data event, no matter how rare it is. The larger the desired block, the longer it takes to allocate -- memory may have to be compacted to get a block big enough. Also, keep in mind that we're talking about the space for the top-level data, not the data itself. So if you enqueue three 1 megabyte arrays and then you flush the queue, the queue is just keeping allocated the 12 bytes needed to store the three array handles, not the arrays themselves.
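A loose Python analogy for that last point: the queue's own storage holds small handles (references) to the arrays, not the megabytes of array contents.

```python
big_arrays = [bytearray(1_000_000) for _ in range(3)]   # three ~1 MB buffers

q = []
q.extend(big_arrays)   # the list stores three small references, not 3 MB of data
q.clear()              # flushing drops the handles; the list's tiny buffer stays allocated
del big_arrays         # only now can the megabyte buffers themselves be reclaimed
```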
-
Can't downcast a dereferenced object?
Aristos Queue replied to Daklu's topic in Object-Oriented Programming
Glad to hear it worked out.
-
When you write a C++ program, you write it in some editor (MSVC++, XCode, emacs, Notepad, TextEdit, etc.). Then you compile it. If your tools are really disjoint, you use gcc to compile directly on the command line. If you have an Integrated Development Environment (XCode, MSVC++), you hit a key to explicitly compile. Now, in MSVC++, you can hit F7 to compile and then hit F5 to run. OR you can hit F5 to run, in which case MSVC++ will compile first and then, if the compile is successful, it will run. All of this is apparent to the programmer because the compilation takes a lot of time. There's a bit of hand-waving in the following, but I've tried to be accurate... The compilation process can be broken down as:
- Parsing: analyzing the text of each .cpp file to create a tree of commands from the flat string.
- Compiling: translation of each parse tree into assembly instructions and saving that as a .o file.
- Linking: taking the assembly instructions from several individual .o files and combining them into a single .exe file, with jump instructions patched with addresses for the various subroutine calls.
- Optimizing: looking over the entire .exe file and removing parts that were duplicated among the various .o files, among many many many more optimizations.

LabVIEW is a compiled language, but our programmers never sit and wait 30 minutes between fixing their last wire and seeing their code run. Why do you not see this time sink in LabVIEW?
- Parsing time = 0. LabVIEW has no text to parse. The tree of graphics is our initial command tree. We keep this tree up to date whenever you modify any aspect of the block diagram. We have to... otherwise you wouldn't have an error list window that was continuously updated... like C++, you'd only get error feedback when you actually tried to run. C# and MSVC# do much the same "always parsed" work that LV does, but they still pay a big parse penalty at load time.
- Compile time = same as C++, but this is a really fast step in any language, believe it or not. LabVIEW translates the initial command tree into a more optimized tree, iteratively, applying different transforms, until we arrive at assembly instructions.
- Linking time = not sure how ours compares to C++.
- Optimizing time = 0 in the development environment. We compile each VI to stand on its own, to be called by any caller VI. We don't optimize across the entire VI hierarchy in the dev environment. Big optimizations are only done when you build an EXE, because that's when we know the finite set of conditions under which your VIs will be called.
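A loose analogy using Python's own built-ins, just to make the stages visible (this is not what LabVIEW does internally): text is parsed into a tree, the tree is compiled into executable instructions, and only then does anything run. LabVIEW skips the first step because the diagram already is the "parsed" command tree and is kept up to date as you edit.

```python
import ast

source = "x = 1 + 2"
tree = ast.parse(source)                  # parsing: flat text -> command tree
code = compile(tree, "<demo>", "exec")    # compiling: tree -> executable instructions
namespace = {}
exec(code, namespace)                     # running the compiled result
print(namespace["x"])                     # 3
```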
-
How to rotate a control/indicator object
Aristos Queue replied to menghuihantang's topic in User Interface
Unfortunately, you can't time an animated gif well enough. The animations have too loose a time slice to be reliable, and anytime the UI thread gets tied up, they can hang. Further, you have no ability to reset it in LabVIEW -- whenever you launched the VI, that would be the time displayed.
-
Option 1: Create a queue, a notifier and a rendezvous. Sender Loop enqueues into the queue. All the receiver loops wait at the rendezvous. Receiver Loop Alpha is special. It dequeues from the queue and sends to the notifier. All the rest of the Receiver Loops wait on the notifier. Every receiver loop does its thing and then goes back around to waiting on the rendezvous.

Option 2: Create N + 1 queues, where N is the number of receivers you want. Sender enqueues into Queue Alpha. Receiver Loop Alpha dequeues from Queue Alpha and then enqueues into ALL of the other queues. The other receiver loops dequeue from their respective queues.

Option 1 gives you synchronous processing of the messages (all receivers finish with the first message before any receiver starts on the second message). Option 2 gives you asynchronous processing (every loop gets through its messages as fast as it can without regard to how far the other loops have gotten in their list of messages).
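A minimal Python sketch of Option 2 (the names are invented for illustration): Receiver Alpha fans each message out to the other receivers' private queues, so every loop works through its messages at its own pace.

```python
import queue
import threading
import time

N_RECEIVERS = 3
queue_alpha = queue.Queue()
other_queues = [queue.Queue() for _ in range(N_RECEIVERS - 1)]

def receiver_alpha():
    while True:
        msg = queue_alpha.get()
        for q in other_queues:            # fan the message out to everyone else
            q.put(msg)
        print("alpha handled", msg)

def receiver(i):
    while True:
        print(f"receiver {i} handled", other_queues[i].get())

threading.Thread(target=receiver_alpha, daemon=True).start()
for i in range(N_RECEIVERS - 1):
    threading.Thread(target=receiver, args=(i,), daemon=True).start()

queue_alpha.put("hello")                  # the Sender loop enqueues here
time.sleep(0.2)                           # give the daemon threads time to run
```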